Sample records for numerical method called

  1. A stable numerical solution method for in-plane loading of nonlinear viscoelastic laminated orthotropic materials

    NASA Technical Reports Server (NTRS)

    Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.

    1989-01-01

    In response to the tremendous growth in the development of advanced materials, such as fiber-reinforced plastic (FRP) composite materials, a new numerical method is developed to analyze and predict the time-dependent properties of these materials. Basic concepts in viscoelasticity, laminated composites, and previous viscoelastic numerical methods are presented. A stable numerical method, called the nonlinear differential equation method (NDEM), is developed to calculate the in-plane stresses and strains over any time period for a general laminate constructed from nonlinear viscoelastic orthotropic plies. The method is implemented in an in-plane stress analysis computer program, called VCAP, to demonstrate its usefulness and to verify its accuracy. A number of actual experimental test results performed on Kevlar/epoxy composite laminates are compared to predictions calculated from the numerical method.

  2. Pricing index-based catastrophe bonds: Part 1: Formulation and discretization issues using a numerical PDE approach

    NASA Astrophysics Data System (ADS)

    Unger, André J. A.

    2010-02-01

    This work is the first installment in a two-part series, and focuses on the development of a numerical PDE approach to price components of a Bermudan-style callable catastrophe (CAT) bond. The bond is based on two underlying stochastic variables; the PCS index which posts quarterly estimates of industry-wide hurricane losses as well as a single-factor CIR interest rate model for the three-month LIBOR. The aggregate PCS index is analogous to losses claimed under traditional reinsurance in that it is used to specify a reinsurance layer. The proposed CAT bond model contains a Bermudan-style call feature designed to allow the reinsurer to minimize their interest rate risk exposure on making substantial fixed coupon payments using capital from the reinsurance premium. Numerical PDE methods are the fundamental strategy for pricing early-exercise constraints, such as the Bermudan-style call feature, into contingent claim models. Therefore, the objective and unique contribution of this first installment in the two-part series is to develop a formulation and discretization strategy for the proposed CAT bond model utilizing a numerical PDE approach. Object-oriented code design is fundamental to the numerical methods used to aggregate the PCS index, and implement the call feature. Therefore, object-oriented design issues that relate specifically to the development of a numerical PDE approach for the component of the proposed CAT bond model that depends on the PCS index and LIBOR are described here. Formulation, numerical methods and code design issues that relate to aggregating the PCS index and introducing the call option are the subject of the companion paper.

  3. Projection methods for line radiative transfer in spherical media.

    NASA Astrophysics Data System (ADS)

    Anusha, L. S.; Nagendra, K. N.

    An efficient numerical method called the Preconditioned Bi-Conjugate Gradient (Pre-BiCG) method is presented for the solution of the radiative transfer equation in spherical geometry. A variant of this method called Stabilized Preconditioned Bi-Conjugate Gradient (Pre-BiCG-STAB) is also presented. These methods are based on projections onto subspaces of the n-dimensional Euclidean space ℝ^n called Krylov subspaces. The methods are shown to be faster in terms of convergence rate than contemporary iterative methods such as Jacobi, Gauss-Seidel and Successive Over-Relaxation (SOR).
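For readers unfamiliar with this family of solvers, Pre-BiCG-STAB is a preconditioned variant of the standard BiCGSTAB Krylov iteration. A minimal, unpreconditioned textbook BiCGSTAB sketch in Python (a generic illustration on a small nonsymmetric system, not the authors' radiative-transfer solver) might look like:

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, maxiter=200):
    """Textbook (unpreconditioned) BiCGSTAB for A x = b."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x
    r_hat = r.copy()                  # fixed shadow residual
    rho = alpha = omega = 1.0
    p = np.zeros(n)
    v = np.zeros(n)
    for _ in range(maxiter):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / (r_hat @ v)
        s = r - alpha * v             # intermediate residual
        if np.linalg.norm(s) < tol:
            return x + alpha * p
        t = A @ s
        omega = (t @ s) / (t @ t)     # stabilizing step length
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            return x
    return x

# small nonsymmetric, diagonally dominant test system
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 2.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
x = bicgstab(A, b)
print(np.linalg.norm(A @ x - b))      # residual near machine precision
```

Unlike Jacobi, Gauss-Seidel or SOR sweeps, each iteration enlarges the Krylov subspace in which the solution is sought, which is the source of the faster convergence reported in the abstract.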

  4. An Unconditionally Stable, Positivity-Preserving Splitting Scheme for Nonlinear Black-Scholes Equation with Transaction Costs

    PubMed Central

    Guo, Jianqiang; Wang, Wansheng

    2014-01-01

    This paper deals with the numerical analysis of nonlinear Black-Scholes equation with transaction costs. An unconditionally stable and monotone splitting method, ensuring positive numerical solution and avoiding unstable oscillations, is proposed. This numerical method is based on the LOD-Backward Euler method which allows us to solve the discrete equation explicitly. The numerical results for vanilla call option and for European butterfly spread are provided. It turns out that the proposed scheme is efficient and reliable. PMID:24895653

  5. An unconditionally stable, positivity-preserving splitting scheme for nonlinear Black-Scholes equation with transaction costs.

    PubMed

    Guo, Jianqiang; Wang, Wansheng

    2014-01-01

    This paper deals with the numerical analysis of nonlinear Black-Scholes equation with transaction costs. An unconditionally stable and monotone splitting method, ensuring positive numerical solution and avoiding unstable oscillations, is proposed. This numerical method is based on the LOD-Backward Euler method which allows us to solve the discrete equation explicitly. The numerical results for vanilla call option and for European butterfly spread are provided. It turns out that the proposed scheme is efficient and reliable.
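The two records above concern a splitting scheme for the nonlinear Black-Scholes equation. As background, a minimal explicit finite-difference sketch for the linear Black-Scholes equation (illustrative parameters, not the authors' LOD-Backward Euler splitting) prices a vanilla European call as follows:

```python
import numpy as np
from math import erf, exp, log, sqrt

# illustrative (hypothetical) contract and market parameters
K, r, sigma, T = 100.0, 0.05, 0.2, 1.0
Smax, M, N = 300.0, 300, 6000           # dS = 1.0; dt chosen for explicit stability
dS, dt = Smax / M, T / N

S = np.linspace(0.0, Smax, M + 1)
V = np.maximum(S - K, 0.0)              # payoff of a vanilla call at expiry

# march backward in time (forward in tau = T - t) with an explicit step
for n in range(1, N + 1):
    Vss = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / dS**2
    Vs = (V[2:] - V[:-2]) / (2.0 * dS)
    V[1:-1] += dt * (0.5 * sigma**2 * S[1:-1]**2 * Vss
                     + r * S[1:-1] * Vs - r * V[1:-1])
    V[0], V[-1] = 0.0, Smax - K * exp(-r * n * dt)   # boundary conditions

price = float(V[100])                   # grid value at S = 100 (at the money)

# closed-form Black-Scholes price for comparison
d1 = (log(100.0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
exact = 100.0 * Phi(d1) - K * exp(-r * T) * Phi(d2)
print(price, exact)
```

Note that this explicit march is only conditionally stable (dt is chosen here so that 0.5·σ²·Smax²·dt/dS² stays below 0.5), which is precisely the restriction that unconditionally stable schemes such as the one proposed in the paper remove.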

  6. Reinforcement learning for resource allocation in LEO satellite networks.

    PubMed

    Usaha, Wipawee; Barria, Javier A

    2007-06-01

    In this paper, we develop and assess online decision-making algorithms for call admission and routing for low Earth orbit (LEO) satellite networks. It has been shown in a recent paper that, in a LEO satellite system, a semi-Markov decision process formulation of the call admission and routing problem can achieve better performance in terms of an average revenue function than existing routing methods. However, the conventional dynamic programming (DP) numerical solution becomes prohibitive as the problem size increases. In this paper, two solution methods based on reinforcement learning (RL) are proposed in order to circumvent the computational burden of DP. The first method is based on an actor-critic method with temporal-difference (TD) learning. The second is based on a critic-only method called optimistic TD learning. The algorithms enhance performance in terms of storage requirements, computational complexity and computational time, and in terms of an overall long-term average revenue function that penalizes blocked calls. Numerical studies are carried out, and the results obtained show that the RL framework can achieve up to 56% higher average revenue than existing routing methods used in LEO satellite networks, with reasonable storage and computational requirements.
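The TD-learning component of such algorithms can be illustrated on a much smaller problem. The sketch below runs tabular TD(0) value estimation on the classic five-state random walk (a standard textbook example, entirely unrelated to the satellite-network model; the step size and episode count are arbitrary choices):

```python
import random

random.seed(0)
alpha, n_episodes = 0.05, 10_000
V = [0.5] * 5                                   # value estimates for states 0..4
true_V = [(s + 1) / 6 for s in range(5)]        # analytic values 1/6 .. 5/6

for _ in range(n_episodes):
    s = 2                                       # every episode starts in the middle
    while True:
        s_next = s + random.choice((-1, 1))
        if s_next < 0:                          # left terminal: reward 0
            V[s] += alpha * (0.0 - V[s]); break
        if s_next > 4:                          # right terminal: reward 1
            V[s] += alpha * (1.0 - V[s]); break
        V[s] += alpha * (V[s_next] - V[s])      # TD(0) bootstrap update
        s = s_next

print([round(v, 2) for v in V])
```

The estimates approach the analytic values (s+1)/6 without ever enumerating and solving the full state space, which is the property the paper exploits to sidestep dynamic programming.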

  7. Evaluating Blended and Flipped Instruction in Numerical Methods at Multiple Engineering Schools

    ERIC Educational Resources Information Center

    Clark, Renee; Kaw, Autar; Lou, Yingyan; Scott, Andrew; Besterfield-Sacre, Mary

    2018-01-01

    With the literature calling for comparisons among technology-enhanced or active-learning pedagogies, a blended versus flipped instructional comparison was made for numerical methods coursework using three engineering schools with diverse student demographics. This study contributes to needed comparisons of enhanced instructional approaches in STEM…

  8. An improved numerical method to compute neutron/gamma deexcitation cascades starting from a high spin state

    DOE PAGES

    Regnier, D.; Litaize, O.; Serot, O.

    2015-12-23

    Numerous nuclear processes involve the deexcitation of a compound nucleus through the emission of several neutrons, gamma-rays and/or conversion electrons. The characteristics of such a deexcitation are commonly derived from a total statistical framework often called the “Hauser–Feshbach” method. In this work, we highlight a numerical limitation of this kind of method in the case of the deexcitation of a high spin initial state. To circumvent this issue, an improved technique called the Fluctuating Structure Properties (FSP) method is presented. Two FSP algorithms are derived and benchmarked on the calculation of the total radiative width for a thermal neutron capture on 238U. We compare the standard method with these FSP algorithms for the prediction of particle multiplicities in the deexcitation of a high spin level of 143Ba. The gamma multiplicity turns out to be very sensitive to the numerical method. The bias between the two techniques can reach 1.5 γ/cascade. Lastly, the uncertainty of these calculations coming from the lack of knowledge on nuclear structure is estimated via the FSP method.

  9. Numerical analysis on the cutting and finishing efficiency of MRAFF process

    NASA Astrophysics Data System (ADS)

    Lih, F. L.

    2016-03-01

    The aim of the present research is to conduct a numerical study of the characteristics of a two-phase magnetorheological fluid under different operating conditions by the finite volume method called SIMPLE with an add-on MHD code.

  10. Numerical solution of stiff systems of ordinary differential equations with applications to electronic circuits

    NASA Technical Reports Server (NTRS)

    Rosenbaum, J. S.

    1971-01-01

    Systems of ordinary differential equations in which the magnitudes of the eigenvalues (or time constants) vary greatly are commonly called stiff. Such systems of equations arise in nuclear reactor kinetics, the flow of chemically reacting gas, dynamics, control theory, circuit analysis and other fields. The research reported develops an A-stable numerical integration technique for solving stiff systems of ordinary differential equations. The method, which is called the generalized trapezoidal rule, is a modification of the trapezoidal rule. However, the method is computationally more efficient than the trapezoidal rule when the solution of the almost-discontinuous segments is being calculated.
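The ordinary trapezoidal rule that the generalized method modifies is the textbook example of an A-stable scheme. A short sketch on the scalar stiff test equation y' = λy shows why A-stability matters; the step size is deliberately far too large for explicit Euler:

```python
lam, h, steps = -1000.0, 0.01, 100    # explicit Euler would need h < 2/|lam| = 0.002

y_trap, y_euler = 1.0, 1.0
for _ in range(steps):
    # trapezoidal rule: y_{n+1} = y_n + h/2 * (lam*y_n + lam*y_{n+1}),
    # solved in closed form for this linear test equation
    y_trap *= (1.0 + 0.5 * h * lam) / (1.0 - 0.5 * h * lam)
    y_euler *= (1.0 + h * lam)        # explicit Euler, unstable at this step size

print(abs(y_trap), abs(y_euler))      # trapezoidal solution decays, Euler blows up
```

An A-stable scheme damps every decaying mode of a stiff system regardless of step size, so the step can be chosen for accuracy of the slow components rather than for stability of the fast ones.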

  11. Implicitly restarted Arnoldi/Lanczos methods for large scale eigenvalue calculations

    NASA Technical Reports Server (NTRS)

    Sorensen, Danny C.

    1996-01-01

    Eigenvalues and eigenfunctions of linear operators are important to many areas of applied mathematics. The ability to approximate these quantities numerically is becoming increasingly important in a wide variety of applications. This increasing demand has fueled interest in the development of new methods and software for the numerical solution of large-scale algebraic eigenvalue problems. In turn, the existence of these new methods and software, along with the dramatically increased computational capabilities now available, has enabled the solution of problems that would not even have been posed five or ten years ago. Until very recently, software for large-scale nonsymmetric problems was virtually non-existent. Fortunately, the situation is improving rapidly. The purpose of this article is to provide an overview of the numerical solution of large-scale algebraic eigenvalue problems. The focus will be on a class of methods called Krylov subspace projection methods. The well-known Lanczos method is the premier member of this class. The Arnoldi method generalizes the Lanczos method to the nonsymmetric case. A recently developed variant of the Arnoldi/Lanczos scheme called the Implicitly Restarted Arnoldi Method is presented here in some depth. This method is highlighted because of its suitability as a basis for software development.

  12. Features in simulation of crystal growth using the hyperbolic PFC equation and the dependence of the numerical solution on the parameters of the computational grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Starodumov, Ilya; Kropotin, Nikolai

    2016-08-10

    We investigate the three-dimensional mathematical model of crystal growth called PFC (Phase Field Crystal) in a hyperbolic modification. This model is also called the modified PFC model (the original PFC model is formulated in parabolic form) and allows one to describe both slow and rapid crystallization processes on atomic length scales and on diffusive time scales. The modified PFC model is described by a partial differential equation of sixth order in space and second order in time. The solution of this equation is possible only by numerical methods. Previously, the authors created a software package for the solution of the Phase Field Crystal problem, based on the method of isogeometric analysis (IGA) and the PetIGA program library. During further investigation it was found that the quality of the solution can depend strongly on the discretization parameters of the numerical method. In this report, we show the features that should be taken into account when constructing the computational grid for the numerical simulation.

  13. On Everhart Method

    NASA Astrophysics Data System (ADS)

    Pârv, Bazil

    This paper deals with the Everhart numerical integration method, a well-known method in astronomical research. This method, a single-step one, is widely used for numerical integration of motion equation of celestial bodies. For an integration step, this method uses unequally-spaced substeps, defined by the roots of the so-called generating polynomial of Everhart's method. For this polynomial, this paper proposes and proves new recurrence formulae. The Maple computer algebra system was used to find and prove these formulae. Again, Maple seems to be well suited and easy to use in mathematical research.

  14. Finite-analytic numerical solution of heat transfer in two-dimensional cavity flow

    NASA Technical Reports Server (NTRS)

    Chen, C.-J.; Naseri-Neshat, H.; Ho, K.-S.

    1981-01-01

    Heat transfer in cavity flow is numerically analyzed by a new numerical method called the finite-analytic method. The basic idea of the finite-analytic method is the incorporation of local analytic solutions in the numerical solutions of linear or nonlinear partial differential equations. In the present investigation, the local analytic solutions for temperature, stream function, and vorticity distributions are derived. When the local analytic solution is evaluated at a given nodal point, it gives an algebraic relationship between a nodal value in a subregion and its neighboring nodal points. A system of algebraic equations is solved to provide the numerical solution of the problem. The finite-analytic method is used to solve heat transfer in the cavity flow at high Reynolds number (1000) for Prandtl numbers of 0.1, 1, and 10.

  15. Finite Volume Method for Pricing European Call Option with Regime-switching Volatility

    NASA Astrophysics Data System (ADS)

    Lista Tauryawati, Mey; Imron, Chairul; Putri, Endah RM

    2018-03-01

    In this paper, we present a finite volume method for pricing a European call option using the Black-Scholes equation with regime-switching volatility. In the first step, we formulate the Black-Scholes equations with regime-switching volatility. We then use a finite volume method based on a fitted finite volume spatial discretization and an implicit time-stepping technique. We show that the regime-switching scheme reverts to the non-switching Black-Scholes equation, both theoretically and in numerical simulations.

  16. Numerical simulations of the charged-particle flow dynamics for sources with a curved emission surface

    NASA Astrophysics Data System (ADS)

    Altsybeyev, V. V.

    2016-12-01

    The implementation of numerical methods for studying the dynamics of particle flows produced by pulsed sources is discussed. A particle tracking method with so-called gun iteration for simulations of beam dynamics is used. For the space charge limited emission problem, we suggest a Gauss law emission model for precise current-density calculation in the case of a curvilinear emitter. The results of numerical simulations of particle-flow formation for cylindrical bipolar diode and for diode with elliptical emitter are presented.

  17. Vectorization on the STAR computer of several numerical methods for a fluid flow problem

    NASA Technical Reports Server (NTRS)

    Lambiotte, J. J., Jr.; Howser, L. M.

    1974-01-01

    A reexamination of some numerical methods is considered in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.

  18. The Contact Dynamics method: A nonsmooth story

    NASA Astrophysics Data System (ADS)

    Dubois, Frédéric; Acary, Vincent; Jean, Michel

    2018-03-01

    When velocity jumps are occurring, the dynamics is said to be nonsmooth. For instance, in collections of contacting rigid bodies, jumps are caused by shocks and dry friction. Without compliance at the interface, contact laws are not only non-differentiable in the usual sense but also multi-valued. Modeling contacting bodies is of interest in order to understand the behavior of numerous mechanical systems such as flexible multi-body systems, granular materials or masonry. These granular materials behave puzzlingly either like a solid or a fluid, and a description in the framework of classical continuum mechanics would be welcome, though it remains far from satisfactory nowadays. Jean-Jacques Moreau greatly contributed to convex analysis, functions of bounded variation, differential measure theory and sweeping process theory, definitive mathematical tools to deal with nonsmooth dynamics. He converted all these underlying theoretical ideas into an original nonsmooth implicit numerical method called Contact Dynamics (CD): a robust and efficient method to simulate large collections of bodies with frictional contacts and impacts. The CD method offers a very interesting complementary alternative to the family of smoothed explicit numerical methods, often called the Distinct Element Method (DEM). In this paper, developments and improvements of the CD method are presented together with a critical comparative review of advantages and drawbacks of both approaches.

  19. An accurate boundary element method for the exterior elastic scattering problem in two dimensions

    NASA Astrophysics Data System (ADS)

    Bao, Gang; Xu, Liwei; Yin, Tao

    2017-11-01

    This paper is concerned with a Galerkin boundary element method solving the two dimensional exterior elastic wave scattering problem. The original problem is first reduced to the so-called Burton-Miller [1] boundary integral formulation, and essential mathematical features of its variational form are discussed. In numerical implementations, a newly-derived and analytically accurate regularization formula [2] is employed for the numerical evaluation of hyper-singular boundary integral operator. A new computational approach is employed based on the series expansions of Hankel functions for the computation of weakly-singular boundary integral operators during the reduction of corresponding Galerkin equations into a discrete linear system. The effectiveness of proposed numerical methods is demonstrated using several numerical examples.

  20. Parameter estimation in IMEX-trigonometrically fitted methods for the numerical solution of reaction-diffusion problems

    NASA Astrophysics Data System (ADS)

    D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice

    2018-05-01

    In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient as compared with traditional schemes already known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics and aimed to be numerically computed. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially in minimizing the leading term of the local truncation error whose expression is provided in a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.

  1. Generalized Birkhoffian representation of nonholonomic systems and its discrete variational algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Shixing; Liu, Chang; Hua, Wei; Guo, Yongxin

    2016-11-01

    By using the discrete variational method, we study the numerical method of the general nonholonomic system in the generalized Birkhoffian framework, and construct a numerical method of generalized Birkhoffian equations called a self-adjoint-preserving algorithm. Numerical results show that it is reasonable to study the nonholonomic system by the structure-preserving algorithm in the generalized Birkhoffian framework. Project supported by the National Natural Science Foundation of China (Grant Nos. 11472124, 11572145, 11202090, and 11301350), the Doctor Research Start-up Fund of Liaoning Province, China (Grant No. 20141050), the China Postdoctoral Science Foundation (Grant No. 2014M560203), and the General Science and Technology Research Plans of Liaoning Educational Bureau, China (Grant No. L2013005).

  2. Influence of Installation Effects on Pile Bearing Capacity in Cohesive Soils - Large Deformation Analysis Via Finite Element Method

    NASA Astrophysics Data System (ADS)

    Konkol, Jakub; Bałachowski, Lech

    2017-03-01

    In this paper, the whole process of pile construction and performance during loading is modelled via large deformation finite element methods such as the Coupled Eulerian Lagrangian (CEL) and Updated Lagrangian (UL) formulations. The numerical study consists of the installation process, a consolidation phase and the subsequent pile static load test (SLT). The Poznań site is chosen as the reference location for the numerical analysis, where a series of pile SLTs has been performed in highly overconsolidated clay (OCR ≈ 12). The results of the numerical analysis are compared with the corresponding field tests and with a so-called "wish-in-place" numerical model of the pile, where no installation effects are taken into account. The advantages of using large deformation numerical analysis are presented and its application to pile design is shown.

  3. TURNS - A free-wake Euler/Navier-Stokes numerical method for helicopter rotors

    NASA Technical Reports Server (NTRS)

    Srinivasan, G. R.; Baeder, J. D.

    1993-01-01

    Computational capabilities of a numerical procedure, called TURNS (transonic unsteady rotor Navier-Stokes), to calculate the aerodynamics and acoustics (high-speed impulsive noise) out to several rotor diameters are summarized. The procedure makes it possible to obtain the aerodynamics and acoustics information in one single calculation. The vortical wave and its influence, as well as the acoustics, are captured as part of the overall flowfield solution. The accuracy and suitability of the TURNS method is demonstrated through comparisons with experimental data.

  4. John Butcher and hybrid methods

    NASA Astrophysics Data System (ADS)

    Mehdiyeva, Galina; Imanova, Mehriban; Ibrahimov, Vagif

    2017-07-01

    As is known, there are mainly two classes of numerical methods for solving ODEs, commonly called one-step and multistep methods. Each of these classes has certain advantages and disadvantages, so it is natural to construct, at the junction of the two, a method with the better properties of both. In the middle of the XX century, Butcher and Gear constructed such methods at the junction of the Runge-Kutta and Adams methods, which are called hybrid methods. Here we consider the construction of certain generalizations of hybrid methods with high order of accuracy, and explore their application to solving ordinary differential, Volterra integral and integro-differential equations. We have also constructed some specific hybrid methods of degree p ≤ 10.

  5. Efficient numerical method for solving Cauchy problem for the Gamma equation

    NASA Astrophysics Data System (ADS)

    Koleva, Miglena N.

    2011-12-01

    In this work we consider the Cauchy problem for the so-called Gamma equation, derived by transforming the fully nonlinear Black-Scholes equation for the option price into a quasilinear parabolic equation for the second derivative (Greek) Γ = VSS of the option price V. We develop an efficient numerical method for solving the model problem for different volatility terms. Using a suitable change of variables, the problem is transformed onto a finite interval, keeping the original behavior of the solution at infinity. Then we construct a Picard-Newton algorithm with an adaptive mesh step in time, which can also be applied in the case of non-differentiable functions. Results of numerical simulations are given.

  6. Partial Variance of Increments Method in Solar Wind Observations and Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Greco, A.; Matthaeus, W. H.; Perri, S.; Osman, K. T.; Servidio, S.; Wan, M.; Dmitruk, P.

    2018-02-01

    The method called "PVI" (Partial Variance of Increments) has been increasingly used in the analysis of spacecraft and numerical simulation data since its inception in 2008. The purpose of the method is to study the kinematics and formation of coherent structures in space plasmas, a topic that has gained considerable attention, leading to the development of identification methods, observations, and associated theoretical research based on numerical simulations. This review paper summarizes key features of the method and provides a synopsis of the main results obtained by various groups using it. This will enable new users, or those considering methods of this type, to find details and background collected in one place.
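In the literature the PVI statistic is defined as PVI(t) = |Δb(t, τ)| / sqrt(⟨|Δb|²⟩), the increment magnitude normalized by its RMS value. A self-contained sketch on a synthetic one-dimensional signal with a single embedded jump (the signal, lag and jump size are invented for illustration) might look like:

```python
import math, random

random.seed(1)
n, jump_at = 2000, 1000
b = [0.0]
for i in range(1, n):
    step = random.gauss(0.0, 0.1)
    if i == jump_at:
        step += 3.0                    # sharp coherent jump buried in the noise
    b.append(b[-1] + step)

tau = 1                                # increment lag
dB = [abs(b[i + tau] - b[i]) for i in range(n - tau)]
rms = math.sqrt(sum(d * d for d in dB) / len(dB))
pvi = [d / rms for d in dB]            # normalized increment series

idx = max(range(len(pvi)), key=pvi.__getitem__)
print(idx, round(pvi[idx], 1))         # location and strength of the sharpest increment
```

Samples where PVI greatly exceeds unity flag candidate coherent structures, such as current sheets in solar-wind data.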

  7. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321

  8. Probabilistic numerics and uncertainty in computations.

    PubMed

    Hennig, Philipp; Osborne, Michael A; Girolami, Mark

    2015-07-08

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
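The simplest numerical routine that "returns uncertainties in its calculations" is plain Monte Carlo integration reporting a standard error alongside its estimate. The sketch below shows this elementary example only, not the Bayesian probabilistic-numerics machinery the paper develops:

```python
import math, random

def mc_integrate(f, a, b, n, seed=0):
    """Estimate the integral of f on [a, b], returning (estimate, standard_error)."""
    rng = random.Random(seed)
    vals = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    width = b - a
    return width * mean, width * math.sqrt(var / n)

est, err = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
print(f"{est:.4f} +/- {err:.4f}")      # true value of the integral is 1/3
```

A probabilistic numerical method generalizes this idea: the output is not a single number but an estimate together with a quantified uncertainty, which downstream computations can propagate.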

  9. Numerical method for the solution of large systems of differential equations of the boundary layer type

    NASA Technical Reports Server (NTRS)

    Green, M. J.; Nachtsheim, P. R.

    1972-01-01

    A numerical method for the solution of large systems of nonlinear differential equations of the boundary-layer type is described. The method is a modification of the technique for satisfying asymptotic boundary conditions. The present method employs inverse interpolation instead of the Newton method to adjust the initial conditions of the related initial-value problem. This eliminates the so-called perturbation equations. The elimination of the perturbation equations not only reduces the user's preliminary work in the application of the method, but also reduces the number of time-consuming initial-value problems to be numerically solved at each iteration. For further ease of application, the solution of the overdetermined system for the unknown initial conditions is obtained automatically by applying Golub's linear least-squares algorithm. The relative ease of application of the proposed numerical method increases directly as the order of the differential-equation system increases. Hence, the method is especially attractive for the solution of large-order systems. After the method is described, it is applied to a fifth-order problem from boundary-layer theory.
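The core idea, adjusting unknown initial conditions by inverse interpolation rather than Newton iteration, can be shown on a toy two-point boundary-value problem. The sketch below shoots for y'' = y with y(0) = 0, y(1) = 1 (a stand-in problem, not the boundary-layer system of the paper), correcting the guessed slope with secant steps, i.e. inverse linear interpolation:

```python
import math

def integrate(slope, h=1e-3):
    """RK4 on y'' = y rewritten as a first-order system, from x = 0 to x = 1."""
    y, yp = 0.0, slope
    f = lambda y, yp: (yp, y)          # returns (y', y'') for y'' = y
    for _ in range(round(1.0 / h)):
        k1 = f(y, yp)
        k2 = f(y + 0.5 * h * k1[0], yp + 0.5 * h * k1[1])
        k3 = f(y + 0.5 * h * k2[0], yp + 0.5 * h * k2[1])
        k4 = f(y + h * k3[0], yp + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        yp += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y

# adjust the unknown initial slope s so that y(1; s) = 1, using
# inverse linear interpolation (secant) instead of Newton's method
s0, s1 = 0.0, 1.0
f0, f1 = integrate(s0) - 1.0, integrate(s1) - 1.0
for _ in range(20):
    s2 = s1 - f1 * (s1 - s0) / (f1 - f0)
    s0, f0, s1 = s1, f1, s2
    f1 = integrate(s1) - 1.0
    if abs(f1) < 1e-12:
        break

print(s1, 1.0 / math.sinh(1.0))        # exact slope is 1/sinh(1)
```

Because no derivative of the terminal error with respect to the initial slope is ever needed, there is nothing corresponding to the perturbation equations that the paper's method eliminates.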

  10. A second-order accurate kinetic-theory-based method for inviscid compressible flows

    NASA Technical Reports Server (NTRS)

    Deshpande, Suresh M.

    1986-01-01

    An upwind method for the numerical solution of the Euler equations is presented. This method, called the kinetic numerical method (KNM), is based on the fact that the Euler equations are moments of the Boltzmann equation of the kinetic theory of gases when the distribution function is Maxwellian. The KNM consists of two phases, the convection phase and the collision phase. The method is unconditionally stable and explicit. It is highly vectorizable and can be easily made total variation diminishing for the distribution function by a suitable choice of the interpolation strategy. The method is applied to a one-dimensional shock-propagation problem and to a two-dimensional shock-reflection problem.

  11. A moving mesh finite difference method for equilibrium radiation diffusion equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaobo, E-mail: xwindyb@126.com; Huang, Weizhang, E-mail: whuang@ku.edu; Qiu, Jianxian, E-mail: jxqiu@xmu.edu.cn

    2015-10-01

An efficient moving mesh finite difference method is developed for the numerical solution of equilibrium radiation diffusion equations in two dimensions. The method is based on the moving mesh partial differential equation approach and moves the mesh continuously in time using a system of meshing partial differential equations. The mesh adaptation is controlled through a Hessian-based monitor function and the so-called equidistribution and alignment principles. Several challenging issues in the numerical solution are addressed. In particular, the radiation diffusion coefficient depends highly nonlinearly on the energy density. This nonlinearity is treated using a predictor–corrector and lagged diffusion strategy. Moreover, the nonnegativity of the energy density is maintained using a cutoff method, which is known in the literature to retain the accuracy and convergence order of finite difference approximations for parabolic equations. Numerical examples with multi-material, multiple spot concentration situations are presented. Numerical results show that the method works well for radiation diffusion equations and can produce numerical solutions of good accuracy. It is also shown that a two-level mesh movement strategy can significantly improve the efficiency of the computation.

  12. Numerical study of a multigrid method with four smoothing methods for the incompressible Navier-Stokes equations in general coordinates

    NASA Technical Reports Server (NTRS)

    Zeng, S.; Wesseling, P.

    1993-01-01

The performance of a linear multigrid method using four smoothing methods, called SCGS (Symmetrical Coupled Gauss-Seidel), CLGS (Collective Line Gauss-Seidel), SILU (Scalar ILU), and CILU (Collective ILU), is investigated for the incompressible Navier-Stokes equations in general coordinates, in association with Galerkin coarse grid approximation. Robustness and efficiency are measured and compared by application to test problems. The numerical results show that CILU is the most robust and SILU the least, with CLGS and SCGS in between. CLGS is the best in efficiency, SCGS and CILU follow, and SILU is the worst.
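A scalar analogue of the smoothers compared above (a hypothetical 1D Poisson setting; the paper's SCGS/CLGS/SILU/CILU variants couple the Navier-Stokes unknowns): one lexicographic Gauss-Seidel sweep, whose rapid damping of high-frequency error is the property multigrid relies on.

```python
import numpy as np

def gauss_seidel_sweep(u, f, h):
    """One lexicographic Gauss-Seidel sweep for -u'' = f on a 1D grid
    with homogeneous Dirichlet boundaries (u[0] = u[-1] = 0)."""
    for i in range(1, len(u) - 1):
        u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)       # exact solution u = sin(pi x)
u = np.random.default_rng(0).random(n)   # rough (oscillatory) initial guess
u[0] = u[-1] = 0.0

def residual_norm(u):
    r = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h ** 2
    return np.max(np.abs(r))

r0 = residual_norm(u)
for _ in range(10):
    gauss_seidel_sweep(u, f, h)
print(residual_norm(u) < r0 / 10)  # high-frequency error is damped quickly
```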

  13. Legendre-tau approximations for functional differential equations

    NASA Technical Reports Server (NTRS)

    Ito, K.; Teglas, R.

    1986-01-01

The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison with cubic spline approximation is made.
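The tau idea can be sketched on a toy scalar problem (y' = -y on [-1, 1], not the retarded equations treated in the paper): the solution is written as a truncated Legendre series, the differential equation is imposed on the leading coefficients, and the last row of the system is replaced by the initial condition.

```python
import numpy as np
from numpy.polynomial import legendre as L

N = 16  # number of retained Legendre modes

# Derivative operator in Legendre coefficient space: column j holds the
# Legendre expansion of P_j'(x), padded back to length N.
D = np.zeros((N, N))
for j in range(N):
    e = np.zeros(N)
    e[j] = 1.0
    d = L.legder(e)
    D[: len(d), j] = d

# Tau method for y' + y = 0 on [-1, 1], y(-1) = 1: keep the first N-1
# coefficient equations of (D + I) c = 0 and replace the last row by the
# initial condition sum_j c_j P_j(-1) = 1, using P_j(-1) = (-1)^j.
A = (D + np.eye(N))[: N - 1, :]
bc = np.array([(-1.0) ** j for j in range(N)])
M = np.vstack([A, bc])
rhs = np.zeros(N)
rhs[-1] = 1.0
c = np.linalg.solve(M, rhs)

x = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(L.legval(x, c) - np.exp(-(x + 1.0))))
print(err < 1e-10)  # spectral accuracy for a smooth solution
```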

  14. Legendre-Tau approximations for functional differential equations

    NASA Technical Reports Server (NTRS)

    Ito, K.; Teglas, R.

    1983-01-01

The numerical approximation of solutions to linear functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison with cubic spline approximations is made.

  15. MUSTA fluxes for systems of conservation laws

    NASA Astrophysics Data System (ADS)

    Toro, E. F.; Titarev, V. A.

    2006-08-01

    This paper is about numerical fluxes for hyperbolic systems and we first present a numerical flux, called GFORCE, that is a weighted average of the Lax-Friedrichs and Lax-Wendroff fluxes. For the linear advection equation with constant coefficient, the new flux reduces identically to that of the Godunov first-order upwind method. Then we incorporate GFORCE in the framework of the MUSTA approach [E.F. Toro, Multi-Stage Predictor-Corrector Fluxes for Hyperbolic Equations. Technical Report NI03037-NPA, Isaac Newton Institute for Mathematical Sciences, University of Cambridge, UK, 17th June, 2003], resulting in a version that we call GMUSTA. For non-linear systems this gives results that are comparable to those of the Godunov method in conjunction with the exact Riemann solver or complete approximate Riemann solvers, noting however that in our approach, the solution of the Riemann problem in the conventional sense is avoided. Both the GFORCE and GMUSTA fluxes are extended to multi-dimensional non-linear systems in a straightforward unsplit manner, resulting in linearly stable schemes that have the same stability regions as the straightforward multi-dimensional extension of Godunov's method. The methods are applicable to general meshes. The schemes of this paper share with the family of centred methods the common properties of being simple and applicable to a large class of hyperbolic systems, but the schemes of this paper are distinctly more accurate. Finally, we proceed to the practical implementation of our numerical fluxes in the framework of high-order finite volume WENO methods for multi-dimensional non-linear hyperbolic systems. Numerical results are presented for the Euler equations and for the equations of magnetohydrodynamics.
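The GFORCE construction can be sketched for scalar linear advection, where, as noted above, it collapses to the Godunov upwind flux. This is an illustrative reimplementation, assuming the weight omega = 1/(1 + c) with Courant number c as described in the paper, not the authors' code:

```python
import numpy as np

def gforce_flux(uL, uR, a, dt, dx):
    """GFORCE flux for linear advection f(u) = a*u: a weighted average of
    the Lax-Friedrichs and Lax-Wendroff fluxes with weight
    omega = 1/(1 + c), where c = |a|*dt/dx is the Courant number."""
    c = abs(a) * dt / dx
    f_lf = 0.5 * a * (uL + uR) - 0.5 * (dx / dt) * (uR - uL)
    u_lw = 0.5 * (uL + uR) - 0.5 * (dt / dx) * (a * uR - a * uL)
    f_lw = a * u_lw
    omega = 1.0 / (1.0 + c)
    return omega * f_lw + (1.0 - omega) * f_lf

# For the linear advection equation the GFORCE flux reduces identically
# to the Godunov first-order upwind flux, here a*uL for a > 0:
a, dt, dx = 1.0, 0.4, 1.0
uL, uR = 2.0, -1.0
print(np.isclose(gforce_flux(uL, uR, a, dt, dx), a * uL))
```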

  16. Investigating the Effects of a Math-Enhanced Agricultural Teaching Methods Course

    ERIC Educational Resources Information Center

    Stripling, Christopher T.; Roberts, T. Grady

    2013-01-01

    Numerous calls have been made for agricultural education to support core academic subject matter including mathematics. Previous research has shown that the incorporation of mathematics content into a teaching methods course had a positive effect on preservice teachers' mathematics content knowledge. The purpose of this study was to investigate…

  17. CONDIF - A modified central-difference scheme for convective flows

    NASA Technical Reports Server (NTRS)

    Runchal, Akshai K.

    1987-01-01

    The paper presents a method, called CONDIF, which modifies the CDS (central-difference scheme) by introducing a controlled amount of numerical diffusion based on the local gradients. The numerical diffusion can be adjusted to be negligibly low for most problems. CONDIF results are significantly more accurate than those obtained from the hybrid scheme when the Peclet number is very high and the flow is at large angles to the grid.

  18. Encouraging Teacher Change within the Realities of School-Based Agricultural Education: Lessons from Teachers' Initial Use of Socioscientific Issues-Based Instruction

    ERIC Educational Resources Information Center

    Wilcox, Amie K.; Shoulders, Catherine W.; Myers, Brian E.

    2014-01-01

    Calls for increased interdisciplinary education have led to the development of numerous teaching methods designed to help teachers provide meaningful experiences for their students. However, methods of guiding teachers in the successful adoption of innovative teaching methods are not firmly set. This qualitative study sought to better understand…

  19. Numerical algebraic geometry: a new perspective on gauge and string theories

    NASA Astrophysics Data System (ADS)

Mehta, Dhagash; He, Yang-Hui; Hauenstein, Jonathan D.

    2012-07-01

    There is a rich interplay between algebraic geometry and string and gauge theories which has been recently aided immensely by advances in computational algebra. However, symbolic (Gröbner) methods are severely limited by algorithmic issues such as exponential space complexity and being highly sequential. In this paper, we introduce a novel paradigm of numerical algebraic geometry which in a plethora of situations overcomes these shortcomings. The so-called `embarrassing parallelizability' allows us to solve many problems and extract physical information which elude symbolic methods. We describe the method and then use it to solve various problems arising from physics which could not be otherwise solved.

  20. Mixed-RKDG Finite Element Methods for the 2-D Hydrodynamic Model for Semiconductor Device Simulation

    DOE PAGES

    Chen, Zhangxin; Cockburn, Bernardo; Jerome, Joseph W.; ...

    1995-01-01

In this paper we introduce a new method for numerically solving the equations of the hydrodynamic model for semiconductor devices in two space dimensions. The method combines a standard mixed finite element method, used to obtain directly an approximation to the electric field, with the so-called Runge-Kutta Discontinuous Galerkin (RKDG) method, originally devised for numerically solving multi-dimensional hyperbolic systems of conservation laws, which is applied here to the convective part of the equations. Numerical simulations showing the performance of the new method are displayed, and the results compared with those obtained by using Essentially Nonoscillatory (ENO) finite difference schemes. From the perspective of device modeling, these methods are robust, since they are capable of encompassing broad parameter ranges, including those for which shock formation is possible. The simulations presented here are for Gallium Arsenide at room temperature, but we have tested them much more generally with considerable success.

  1. Nonequilibrium hypersonic flows simulations with asymptotic-preserving Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Ren, Wei; Liu, Hong; Jin, Shi

    2014-12-01

In rarefied gas dynamics, the DSMC method is one of the most popular numerical tools. It performs satisfactorily in simulating hypersonic flows surrounding re-entry vehicles and micro-/nano-scale flows. However, its computational cost is high, especially as Kn → 0. Even for flows in the near-continuum regime, pure DSMC simulations require considerable computational effort in most cases. Although several DSMC/NS hybrid methods have been proposed to address this, those methods still suffer from difficulties in the boundary treatment, which may cause nonphysical solutions. Filbet and Jin [1] proposed a framework of new numerical methods for the Boltzmann equation, called asymptotic-preserving (AP) schemes, whose computational costs remain affordable as Kn → 0. Recently, Ren et al. [2] realized the AP schemes with Monte Carlo methods (AP-DSMC), which perform better than their counterpart methods. In this paper, AP-DSMC is applied to simulating nonequilibrium hypersonic flows. Several numerical results are computed and analyzed to study the efficiency and the capability of capturing complicated flow characteristics.

  2. Double Diffusive Magnetohydrodynamic (MHD) Mixed Convective Slip Flow along a Radiating Moving Vertical Flat Plate with Convective Boundary Condition

    PubMed Central

    Rashidi, Mohammad M.; Kavyani, Neda; Abelman, Shirley; Uddin, Mohammed J.; Freidoonimehr, Navid

    2014-01-01

In this study, combined heat and mass transfer by mixed convective flow along a moving vertical flat plate with hydrodynamic slip and a thermal convective boundary condition is investigated. Using similarity variables, the governing nonlinear partial differential equations are converted into a system of coupled nonlinear ordinary differential equations. The transformed equations are then solved using a semi-numerical/analytical method called the differential transform method, and the results are compared with numerical results. Close agreement is found between the present method and the numerical method. Effects of the controlling parameters, including convective heat transfer, magnetic field, buoyancy ratio, hydrodynamic slip, mixed convection, Prandtl number and Schmidt number, are investigated on the dimensionless velocity, temperature and concentration profiles. In addition, effects of different parameters on the skin friction factor, local Nusselt number, and local Sherwood number are shown and explained through tables. PMID:25343360
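The differential transform method mentioned above can be illustrated on a toy linear initial-value problem (the paper applies it to coupled nonlinear similarity equations): the transform Y(k) = y^(k)(0)/k! turns the differential equation into an algebraic recurrence for the power-series coefficients.

```python
import math

def dtm_coefficients(n_terms):
    """Differential transform of the toy IVP y' = -y, y(0) = 1.
    Under Y(k) = y^(k)(0)/k!, the ODE becomes the recurrence
    (k + 1) Y(k+1) = -Y(k)."""
    Y = [1.0]  # Y(0) = y(0)
    for k in range(n_terms - 1):
        Y.append(-Y[k] / (k + 1))
    return Y

def dtm_eval(Y, t):
    """Sum the truncated power series y(t) ~ sum_k Y(k) t^k."""
    return sum(c * t ** k for k, c in enumerate(Y))

# The 20-term DTM series reproduces the exact solution exp(-t) closely:
Y = dtm_coefficients(20)
t = 1.0
print(abs(dtm_eval(Y, t) - math.exp(-t)) < 1e-12)
```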

  3. A Novel Polygonal Finite Element Method: Virtual Node Method

    NASA Astrophysics Data System (ADS)

    Tang, X. H.; Zheng, C.; Zhang, J. H.

    2010-05-01

Polygonal finite element method (PFEM), which can construct shape functions on polygonal elements, provides greater flexibility in mesh generation. However, the non-polynomial form of traditional PFEM shape functions, such as those of the Wachspress method and the Mean Value method, leads to inexact numerical integration, since integration techniques for non-polynomial functions are immature. To overcome this shortcoming, a great number of integration points have to be used to obtain sufficiently exact results, which increases computational cost. In this paper, a novel polygonal finite element method, called the virtual node method (VNM), is proposed. The features of the present method can be listed as follows: (1) it is a PFEM with polynomial form, so Hammer and Gauss integration can be naturally used to obtain exact numerical integration; (2) the shape functions of VNM satisfy all the requirements of the finite element method. To test the performance of VNM, intensive numerical tests are carried out. It is found that, in the standard patch test, VNM achieves significantly better results than the Wachspress method and the Mean Value method. Moreover, VNM is observed to achieve better accuracy than triangular 3-node elements.

  4. Minimizing Higgs potentials via numerical polynomial homotopy continuation

    NASA Astrophysics Data System (ADS)

    Maniatis, M.; Mehta, D.

    2012-08-01

The study of models with extended Higgs sectors requires minimizing the corresponding Higgs potentials, which is in general very difficult. Here, we apply a recently developed method, called numerical polynomial homotopy continuation (NPHC), which is guaranteed to find all the stationary points of Higgs potentials with polynomial-like non-linearity. The detection of all stationary points reveals the structure of the potential, with maxima, metastable minima, and saddle points besides the global minimum. We apply the NPHC method to the most general Higgs potential having two complex Higgs-boson doublets and up to five real Higgs-boson singlets; moreover, the method is applicable to even more involved potentials. Hence the NPHC method allows one to go far beyond the limits of the Gröbner basis approach.
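A minimal sketch of homotopy continuation for a single polynomial equation (NPHC tracks such paths for multivariate stationarity conditions; the start system, the random gamma, and the step counts here are illustrative choices, not the method's production settings):

```python
import numpy as np

def homotopy_roots(coeffs, steps=200, newton_iters=5):
    """Track the d roots of the start system g(x) = x^d - 1 to the target
    polynomial f(x) along H(x, t) = (1 - t)*gamma*g(x) + t*f(x), with a
    random complex gamma keeping the paths generic (the 'gamma trick')."""
    f = np.asarray(coeffs, dtype=complex)          # target, highest degree first
    d = len(f) - 1
    g = np.zeros(d + 1, dtype=complex)
    g[0], g[-1] = 1.0, -1.0                        # g(x) = x^d - 1
    gamma = 0.6 + 0.8j
    roots = np.exp(2j * np.pi * np.arange(d) / d)  # the d roots of g
    for t in np.linspace(0.0, 1.0, steps + 1)[1:]:
        h = (1 - t) * gamma * g + t * f
        dh = np.polyder(h)
        for _ in range(newton_iters):              # Newton corrector step
            roots = roots - np.polyval(h, roots) / np.polyval(dh, roots)
    return roots

# Target: f(x) = x^3 - 2x + 1, whose roots are 1 and (-1 ± sqrt(5))/2.
found = homotopy_roots([1.0, 0.0, -2.0, 1.0])
expected = np.array([1.0, (-1 + np.sqrt(5)) / 2, (-1 - np.sqrt(5)) / 2])
print(np.allclose(np.sort(found.real), np.sort(expected), atol=1e-8))
```

Every path is tracked independently, which is the "embarrassing parallelizability" the related record above refers to.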

  5. QMR: A Quasi-Minimal Residual method for non-Hermitian linear systems

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Nachtigal, Noel M.

    1990-01-01

The biconjugate gradient (BCG) method is the natural generalization of the classical conjugate gradient algorithm for Hermitian positive definite matrices to general non-Hermitian linear systems. Unfortunately, the original BCG algorithm is susceptible to possible breakdowns and numerical instabilities. A novel BCG-like approach, called the quasi-minimal residual (QMR) method, is presented, which overcomes the problems of BCG. An implementation of QMR based on a look-ahead version of the nonsymmetric Lanczos algorithm is proposed. It is shown how BCG iterates can be recovered stably from the QMR process. Some further properties of the QMR approach are given and an error bound is presented. Finally, numerical experiments are reported.
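SciPy ships a QMR implementation (`scipy.sparse.linalg.qmr`) that can be exercised on a small nonsymmetric system; this demonstrates the method's use via a library routine, not the authors' original look-ahead Lanczos implementation:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import qmr

# A nonsymmetric, diagonally dominant tridiagonal test system.
n = 50
A = diags([-1.0, 2.0, -0.5], offsets=[-1, 0, 1], shape=(n, n)).tocsc()
b = np.ones(n)

# qmr returns the solution and an info flag (0 on successful convergence).
x, info = qmr(A, b)
print(info == 0 and np.linalg.norm(A @ x - b) < 1e-3)
```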

  6. Numerical detection of the Gardner transition in a mean-field glass former.

    PubMed

    Charbonneau, Patrick; Jin, Yuliang; Parisi, Giorgio; Rainone, Corrado; Seoane, Beatriz; Zamponi, Francesco

    2015-07-01

    Recent theoretical advances predict the existence, deep into the glass phase, of a novel phase transition, the so-called Gardner transition. This transition is associated with the emergence of a complex free energy landscape composed of many marginally stable sub-basins within a glass metabasin. In this study, we explore several methods to detect numerically the Gardner transition in a simple structural glass former, the infinite-range Mari-Kurchan model. The transition point is robustly located from three independent approaches: (i) the divergence of the characteristic relaxation time, (ii) the divergence of the caging susceptibility, and (iii) the abnormal tail in the probability distribution function of cage order parameters. We show that the numerical results are fully consistent with the theoretical expectation. The methods we propose may also be generalized to more realistic numerical models as well as to experimental systems.

  7. Flexible scheme to truncate the hierarchy of pure states.

    PubMed

    Zhang, P-P; Bentley, C D B; Eisfeld, A

    2018-04-07

    The hierarchy of pure states (HOPS) is a wavefunction-based method that can be used for numerically modeling open quantum systems. Formally, HOPS recovers the exact system dynamics for an infinite depth of the hierarchy. However, truncation of the hierarchy is required to numerically implement HOPS. We want to choose a "good" truncation method, where by "good" we mean that it is numerically feasible to check convergence of the results. For the truncation approximation used in previous applications of HOPS, convergence checks are numerically challenging. In this work, we demonstrate the application of the "n-particle approximation" to HOPS. We also introduce a new approximation, which we call the "n-mode approximation." We then explore the convergence of these truncation approximations with respect to the number of equations required in the hierarchy in two exemplary problems: absorption and energy transfer of molecular aggregates.

  8. Flexible scheme to truncate the hierarchy of pure states

    NASA Astrophysics Data System (ADS)

    Zhang, P.-P.; Bentley, C. D. B.; Eisfeld, A.

    2018-04-01

    The hierarchy of pure states (HOPS) is a wavefunction-based method that can be used for numerically modeling open quantum systems. Formally, HOPS recovers the exact system dynamics for an infinite depth of the hierarchy. However, truncation of the hierarchy is required to numerically implement HOPS. We want to choose a "good" truncation method, where by "good" we mean that it is numerically feasible to check convergence of the results. For the truncation approximation used in previous applications of HOPS, convergence checks are numerically challenging. In this work, we demonstrate the application of the "n-particle approximation" to HOPS. We also introduce a new approximation, which we call the "n-mode approximation." We then explore the convergence of these truncation approximations with respect to the number of equations required in the hierarchy in two exemplary problems: absorption and energy transfer of molecular aggregates.

  9. Optimization methods and silicon solar cell numerical models

    NASA Technical Reports Server (NTRS)

    Girardini, K.

    1986-01-01

The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAPID). SCAPID uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAPID require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAPID so that it could be called iteratively by the optimization code provided another means of reducing the CPU time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAPID, only the efficiency (maximum-power voltage and current) is calculated, and the solution from previous calculations is used to initialize the next solution.
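The economy measure described above, keeping the number of expensive model evaluations to a minimum, can be sketched with a toy one-variable design problem (a hypothetical stand-in for the cell model, not SCAPID itself):

```python
# Toy stand-in for an expensive cell simulation: efficiency as a smooth
# function of one design variable (hypothetical model and peak location).
calls = 0

def efficiency(depth_um):
    global calls
    calls += 1
    return 0.18 - 0.02 * (depth_um - 0.35) ** 2  # peak at 0.35 um

def golden_max(f, lo, hi, tol=1e-5):
    """Golden-section search: each iteration reuses one of the two interior
    evaluations, so only one new model call is needed per step."""
    invphi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc > fd:
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

best = golden_max(efficiency, 0.0, 1.0)
print(abs(best - 0.35) < 1e-4, calls)  # optimum found in a few dozen calls
```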

  10. Numerical simulation of self-sustained oscillation of a voice-producing element based on Navier-Stokes equations and the finite element method.

    PubMed

    de Vries, Martinus P; Hamburg, Marc C; Schutte, Harm K; Verkerke, Gijsbertus J; Veldman, Arthur E P

    2003-04-01

Surgical removal of the larynx results in radically reduced production of voice and speech. To improve voice quality a voice-producing element (VPE) is developed, based on the lip principle, named after the lips of a musician playing a brass instrument. To optimize the VPE, a numerical model is developed. In this model, the finite element method is used to describe the mechanical behavior of the VPE. The flow is described by the two-dimensional incompressible Navier-Stokes equations. The interaction between the VPE and the airflow is modeled by placing the grid of the VPE model in the grid of the aerodynamic model and requiring continuity of forces and velocities. By applying and increasing pressure in the numerical model, pulses comparable to glottal volume velocity waveforms are obtained. By varying geometric parameters, their influence can be determined. To validate this numerical model, an in vitro test with a prototype of the VPE is performed. Experimental and numerical results show acceptable agreement.

  11. A new numerical approximation of the fractal ordinary differential equation

    NASA Astrophysics Data System (ADS)

    Atangana, Abdon; Jain, Sonal

    2018-02-01

The concept of a fractal medium is present in several real-world problems, for instance, in the geological formations that constitute the well-known subsurface water bodies called aquifers. However, little attention has been devoted to modeling, for instance, the flow of a fluid within these media. We deem it important to remind the reader that the concept of the fractal derivative is not meant to represent fractal shapes but to describe the movement of the fluid within these media. Since this class of ordinary differential equations is highly complex to solve analytically, we present a novel numerical scheme that allows one to solve fractal ordinary differential equations. An error analysis of the method is also presented. Application of the method and numerical approximations are presented for fractal-order differential equations. The stability and convergence of the numerical schemes are investigated in detail. Some exact solutions of fractal-order differential equations are also presented, and finally some numerical simulations are shown.

  12. Spatiotemporal Variability and Sound Characterization in Silver Croaker Plagioscion squamosissimus (Sciaenidae) in the Central Amazon

    PubMed Central

    Borie, Alfredo; Mok, Hin-Kiu; Chao, Ning L.; Fine, Michael L.

    2014-01-01

    Background The fish family Sciaenidae has numerous species that produce sounds with superfast muscles that vibrate the swimbladder. These muscles form postembryonically and undergo seasonal hypertrophy-atrophy cycles. The family has been the focus of numerous passive acoustic studies to localize spatial and temporal occurrence of spawning aggregations. Fishes produce disturbance calls when hand-held, and males form aggregations in late afternoon and produce advertisement calls to attract females for mating. Previous studies on five continents have been confined to temperate species. Here we examine the calls of the silver croaker Plagioscion squamosissimus, a freshwater equatorial species, which experiences constant photoperiod and minimal temperature variation but seasonal changes in water depth and color, pH and conductivity. Methods and Principal Findings Dissections indicate that sonic muscles are present exclusively in males and that muscles are thicker and redder during the mating season. Disturbance calls were recorded in hand-held fish during the low-water mating season and the high-water period outside of the mating season. Advertisement calls were recorded from wild fish that formed aggregations in both periods but only during the mating season from fish in large cages. Disturbance calls consist of a series of short individual pulses in mature males. Advertisement calls start with single and paired pulses followed by greater-amplitude multi-pulse bursts with higher peak frequencies than in disturbance calls. Advertisement-like calls also occur in aggregations during the off season, but bursts are shorter with fewer pulses. Conclusions and Significance Silver croaker produce complex advertisement calls that vary in amplitude, number of cycles per burst and burst duration. Unlike temperate sciaenids, which only call during the spawning season, silver croaker produce advertisement calls in both seasons. Sonic muscles are thinner, and bursts are shorter than at the spawning peak, but males still produce complex calls outside of the mating season. PMID:25098347

  13. Turbulent Bubbly Flow in a Vertical Pipe Computed By an Eddy-Resolving Reynolds Stress Model

    DTIC Science & Technology

    2014-09-19

the numerical code OpenFOAM®. 1 Introduction Turbulent bubbly flows are encountered in many industrially relevant applications, such as chemical in... performed using the OpenFOAM-2.2.2 computational code utilizing a cell-center-based finite volume method on an unstructured numerical grid. The... the mean Courant number is always below 0.4. The utilized turbulence models were implemented into the so-called twoPhaseEulerFoam solver in OpenFOAM, to

  14. Modified symplectic schemes with nearly-analytic discrete operators for acoustic wave simulations

    NASA Astrophysics Data System (ADS)

    Liu, Shaolin; Yang, Dinghui; Lang, Chao; Wang, Wenshuai; Pan, Zhide

    2017-04-01

Using a structure-preserving algorithm significantly increases the computational efficiency of solving wave equations. However, only a few explicit symplectic schemes are available in the literature, and the capabilities of these symplectic schemes have not been sufficiently exploited. Here, we propose a modified strategy for constructing explicit symplectic schemes for time advance. The acoustic wave equation is transformed into a Hamiltonian system. The classical symplectic partitioned Runge-Kutta (PRK) method is used for the temporal discretization. Additional spatial differential terms are added to the PRK schemes, yielding two modified time-advancing symplectic methods whose symplectic coefficients are all positive. The spatial differential operators are approximated by nearly-analytic discrete (NAD) operators, and we call the fully discretized scheme the modified symplectic nearly analytic discrete (MSNAD) method. Theoretical analyses show that the MSNAD methods exhibit less numerical dispersion and higher stability limits than conventional methods. Three numerical experiments are conducted to verify the advantages of the MSNAD methods, such as their numerical accuracy, computational cost, stability, and long-term calculation capability.
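The structure-preserving idea can be sketched with the simplest symplectic partitioned scheme, Störmer-Verlet, applied to a semi-discretized acoustic wave equation; plain centered differences stand in here for the paper's NAD spatial operators, so this is only an illustration of the long-term stability that symplectic integration provides.

```python
import numpy as np

# 1D acoustic wave u_tt = c^2 u_xx with periodic boundaries, written as a
# Hamiltonian system q' = p, p' = c^2 * Lap(q), and integrated with the
# Stormer-Verlet scheme (a symplectic partitioned Runge-Kutta method).
n, c = 128, 1.0
dx = 2 * np.pi / n
x = np.arange(n) * dx
dt = 0.5 * dx / c          # well inside the stability limit

def lap(q):
    return (np.roll(q, -1) - 2 * q + np.roll(q, 1)) / dx ** 2

def energy(q, p):
    dq = (np.roll(q, -1) - q) / dx
    return 0.5 * np.sum(p ** 2 + c ** 2 * dq ** 2) * dx

q, p = np.sin(x), np.zeros(n)
e0 = energy(q, p)
for _ in range(20000):     # long-time integration
    p = p + 0.5 * dt * c ** 2 * lap(q)
    q = q + dt * p
    p = p + 0.5 * dt * c ** 2 * lap(q)

drift = abs(energy(q, p) - e0) / e0
print(drift < 1e-3)        # energy error stays bounded, no secular growth
```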

  15. magnum.fe: A micromagnetic finite-element simulation code based on FEniCS

    NASA Astrophysics Data System (ADS)

    Abert, Claas; Exl, Lukas; Bruckner, Florian; Drews, André; Suess, Dieter

    2013-11-01

    We have developed a finite-element micromagnetic simulation code based on the FEniCS package called magnum.fe. Here we describe the numerical methods that are applied as well as their implementation with FEniCS. We apply a transformation method for the solution of the demagnetization-field problem. A semi-implicit weak formulation is used for the integration of the Landau-Lifshitz-Gilbert equation. Numerical experiments show the validity of simulation results. magnum.fe is open source and well documented. The broad feature range of the FEniCS package makes magnum.fe a good choice for the implementation of novel micromagnetic finite-element algorithms.

  16. Stress evaluation in displacement-based 2D nonlocal finite element method

    NASA Astrophysics Data System (ADS)

    Pisano, Aurora Angela; Fuschi, Paolo

    2018-06-01

The evaluation of the stress field within a nonlocal version of the displacement-based finite element method is addressed. With the aid of two numerical examples it is shown that spurious oscillations of the computed nonlocal stresses arise at sections (or zones) of macroscopic inhomogeneity of the examined structures. It is also shown how the above drawback, which renders the numerical stress solution unreliable, can be viewed as an instance of the so-called locking phenomenon in FEM, a subject debated in the early seventies. It is proved that a well-known remedy for locking, i.e. the reduced integration technique, can be successfully applied in the nonlocal elasticity context as well.

  17. Tunable properties of light propagation in photonic liquid crystal fibers

    NASA Astrophysics Data System (ADS)

    Szaniawska, K.; Nasilowski, T.; Woliński, T. R.; Thienpont, H.

    2006-12-01

Tunable properties of light propagation in photonic crystal fibers filled with liquid crystals, called photonic liquid crystal fibers (PLCFs), are presented. The propagation properties of PLCFs strongly depend on the contrast between the refractive indices of the solid core (pure silica glass) and the liquid crystals (LCs) filling the holes of the fiber. Due to the relatively strong thermo-optic effect, the refractive index of the LC can be changed by changing its temperature. A numerical analysis of light propagation in PLCFs, based on two simulation methods, the finite difference (FD) method and the multipole method (MM), is presented. The numerical results obtained are in good agreement with our earlier experimental results presented elsewhere [1].

  18. SDF technology in location and navigation procedures: a survey of applications

    NASA Astrophysics Data System (ADS)

    Kelner, Jan M.; Ziółkowski, Cezary

    2017-04-01

The basis for the development of the Doppler location method, also called the signal Doppler frequency (SDF) method or technology, is the analytical solution of the wave equation for a mobile source. This paper presents an overview of simulations, numerical analyses and empirical studies of the possibilities and range of applications of the SDF method. The various applications from numerous publications are collected and described; they mainly focus on the use of the SDF method in emitter positioning, electronic warfare, crisis management, search and rescue, and navigation. The developed method has a property that is innovative and unique among location methods: it allows the simultaneous location of many radio emitters. Moreover, this is the first Doppler-based method that allows positioning of transmitters using a single mobile platform. The results obtained with the SDF method by other teams are also presented.

  19. Double diffusive magnetohydrodynamic (MHD) mixed convective slip flow along a radiating moving vertical flat plate with convective boundary condition.

    PubMed

    Rashidi, Mohammad M; Kavyani, Neda; Abelman, Shirley; Uddin, Mohammed J; Freidoonimehr, Navid

    2014-01-01

    In this study, combined heat and mass transfer by mixed convective flow along a moving vertical flat plate with hydrodynamic slip and a thermal convective boundary condition is investigated. Using similarity variables, the governing nonlinear partial differential equations are converted into a system of coupled nonlinear ordinary differential equations. The transformed equations are then solved using a semi-numerical/analytical method called the differential transform method, and the results are compared with numerical results. Close agreement is found between the present method and the numerical method. Effects of the controlling parameters, including convective heat transfer, magnetic field, buoyancy ratio, hydrodynamic slip, mixed convection, Prandtl number and Schmidt number, on the dimensionless velocity, temperature and concentration profiles are investigated. In addition, effects of the different parameters on the skin friction factor, [Formula: see text], local Nusselt number, [Formula: see text], and local Sherwood number, [Formula: see text], are shown and explained through tables.
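
    The differential transform method can be illustrated on a much simpler problem than the coupled system above. Below is a minimal sketch, assuming the linear test equation y'(t) = -y(t) with y(0) = 1 (a hypothetical example, not the slip-flow equations of the paper): the differential transform Y(k) = y^(k)(0)/k! turns the ODE into the recurrence (k+1) Y(k+1) = -Y(k), and the solution is recovered as a truncated Taylor series.

```python
import math

def dtm_coefficients(n_terms, y0=1.0, lam=-1.0):
    """DTM recurrence for y' = lam*y: (k+1) Y(k+1) = lam * Y(k)."""
    Y = [y0]
    for k in range(n_terms - 1):
        Y.append(lam * Y[k] / (k + 1))
    return Y

def dtm_eval(Y, t):
    """Evaluate the truncated series y(t) = sum_k Y(k) * t**k."""
    return sum(c * t**k for k, c in enumerate(Y))

Y = dtm_coefficients(20)
approx = dtm_eval(Y, 1.0)             # DTM approximation of y(1)
error = abs(approx - math.exp(-1.0))  # compare with the exact solution e^{-1}
```

    With 20 transform terms the truncated series matches the exact solution essentially to machine precision, mirroring the "close agreement" with numerical results reported in the abstract.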

  20. Comparison of updated Lagrangian FEM with arbitrary Lagrangian Eulerian method for 3D thermo-mechanical extrusion of a tube profile

    NASA Astrophysics Data System (ADS)

    Kronsteiner, J.; Horwatitsch, D.; Zeman, K.

    2017-10-01

    Thermo-mechanical numerical modelling and simulation of extrusion processes faces several serious challenges. Large plastic deformations in combination with a strong coupling of thermal and mechanical effects lead to a high numerical demand for the solution as well as for the handling of mesh distortions. The two numerical methods presented in this paper also reflect two different ways to deal with mesh distortions. Lagrangian Finite Element Methods (FEM) tackle distorted elements by building a new mesh (called re-meshing), whereas Arbitrary Lagrangian Eulerian (ALE) methods use an "advection" step to remap the solution from the distorted to the undistorted mesh. Another difference between conventional Lagrangian and ALE methods is the separate treatment of material and mesh in ALE, allowing the definition of individual velocity fields. In theory, an ALE formulation contains both the Eulerian and the Lagrangian descriptions of the material as special cases. The investigations presented in this paper dealt with the direct extrusion of a tube profile using EN-AW 6082 aluminum alloy and a comparison of experimental with Lagrangian and ALE results. The numerical simulations cover the billet upsetting and continue until one third of the billet length is extruded. A good qualitative correlation of experimental and numerical results could be found; however, major differences between the Lagrangian and ALE methods concerning thermo-mechanical coupling lead to deviations in the thermal results.

  1. Predicting chaos in memristive oscillator via harmonic balance method.

    PubMed

    Wang, Xin; Li, Chuandong; Huang, Tingwen; Duan, Shukai

    2012-12-01

    This paper studies the possible chaotic behaviors in a memristive oscillator with cubic nonlinearities via the harmonic balance method, which is also called the describing function method. This method was originally proposed to detect chaos in the classical Chua's circuit. We first transform the considered memristive oscillator system into a Lur'e model and present a prediction of the existence of chaotic behaviors. To ensure the prediction is correct, the distortion index is also measured. Numerical simulations are presented to show the effectiveness of the theoretical results.
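
    A one-term harmonic balance (describing function) calculation can be sketched on a standard driven Duffing oscillator x'' + d x' + x + b x^3 = g cos(w t), a textbook stand-in for the memristive circuit studied in the paper (all parameter values below are illustrative). Substituting the ansatz x = A cos(w t + phi) and keeping only the first harmonic of x^3 (which contributes a factor 3/4) yields a scalar amplitude equation that can be solved numerically:

```python
# One-term harmonic balance for x'' + d x' + x + b x^3 = g cos(w t):
# the amplitude A of the assumed solution A cos(w t + phi) must satisfy
#   [(1 - w**2 + 0.75*b*A**2)**2 + (d*w)**2] * A**2 = g**2.
def hb_residual(A, w=1.2, d=0.2, b=0.5, g=0.3):
    return ((1 - w**2 + 0.75 * b * A**2) ** 2 + (d * w) ** 2) * A**2 - g**2

def bisect(f, lo, hi, tol=1e-12):
    """Simple bisection root finder for the scalar amplitude equation."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) * flo <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

A = bisect(hb_residual, 1e-6, 5.0)   # response amplitude predicted by HB
residual = abs(hb_residual(A))
```

    Loosely speaking, checking how well the one-harmonic ansatz satisfies the full equation plays the role of the distortion index mentioned in the abstract: a large residual would signal that the single-harmonic approximation is unreliable.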

  3. A new numerical method for calculating extrema of received power for polarimetric SAR

    USGS Publications Warehouse

    Zhang, Y.; Zhang, Jiahua; Lu, Z.; Gong, W.

    2009-01-01

    A numerical method called cross-step iteration is proposed to calculate the maximal/minimal received power for polarized imagery based on a target's Kennaugh matrix. This method is much more efficient than the systematic method, which searches for the extrema of received power by varying the polarization ellipse angles of receiving and transmitting polarizations. It is also more advantageous than the Schuler method, which has been adopted by the PolSARPro package, because the cross-step iteration method requires less computation time and can derive both the maximal and minimal received powers, whereas the Schuler method is designed to work out only the maximal received power. The analytical model of received-power optimization indicates that the first eigenvalue of the Kennaugh matrix is the supremum of the maximal received power. The difference between these two parameters reflects the depolarization effect of the target's backscattering, which might be useful for target discrimination. © 2009 IEEE.
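
    The cross-step idea can be sketched as an alternating maximization of the received power P = 0.5 * g_r · (K g_t) over transmit and receive Stokes vectors g = (1, s) with |s| = 1: for a fixed transmit state, the optimal receive sub-vector simply aligns with the last three components of K g_t, and the roles are then swapped. The 4x4 symmetric matrix below is an illustrative Kennaugh-like matrix, not data from the paper.

```python
import numpy as np

# Illustrative symmetric Kennaugh-like matrix (monostatic Kennaugh
# matrices are symmetric); not taken from the paper.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
K = 0.5 * (M + M.T)
K[0, 0] = abs(K[0, 0]) + 4.0   # make the total-power term dominant

def step(K, g_t):
    """Best receive Stokes vector (1, s) for a fixed transmit state:
    s aligns with the spatial part of K @ g_t."""
    v = K @ g_t
    s = v[1:] / np.linalg.norm(v[1:])
    return np.concatenate(([1.0], s))

g_t = np.array([1.0, 1.0, 0.0, 0.0])   # initial transmit state
powers = []
for _ in range(50):
    g_r = step(K, g_t)        # optimize receive for current transmit
    g_t = step(K.T, g_r)      # then optimize transmit for that receive
    powers.append(0.5 * g_r @ K @ g_t)

p_max = float(powers[-1])
```

    Each half-step maximizes the power exactly in one argument, so the iterates are non-decreasing; the converged power stays below the first eigenvalue of K, consistent with the supremum property stated in the abstract.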

  4. A Level-set based framework for viscous simulation of particle-laden supersonic flows

    NASA Astrophysics Data System (ADS)

    Das, Pratik; Sen, Oishik; Jacobs, Gustaaf; Udaykumar, H. S.

    2017-06-01

    Particle-laden supersonic flows are important in natural and industrial processes such as volcanic eruptions, explosions, and pneumatic conveyance of particles in material processing. Numerical study of such high-speed particle-laden flows at the mesoscale calls for a numerical framework which allows simulation of supersonic flow around multiple moving solid objects. Only a few efforts have been made toward the development of numerical frameworks for viscous simulation of particle-fluid interaction in the supersonic flow regime. The current work presents a Cartesian grid based sharp-interface method for viscous simulations of the interaction between supersonic flows and moving rigid particles. The no-slip boundary condition is imposed at the solid-fluid interfaces using a modified ghost fluid method (GFM). The current method is validated against the similarity solution of the compressible boundary layer over a flat plate and benchmark numerical solutions for steady supersonic flow over a cylinder. Further validation is carried out against benchmark numerical results for shock-induced lift-off of a cylinder in a shock tube. A 3D simulation of steady supersonic flow over a sphere is performed to compare the numerically obtained drag coefficient with experimental results. A particle-resolved viscous simulation of shock interaction with a cloud of particles is performed to demonstrate that the current method is suitable for large-scale particle-resolved simulations of particle-laden supersonic flows.

  5. Exploring the potential energy landscape over a large parameter-space

    NASA Astrophysics Data System (ADS)

    He, Yang-Hui; Mehta, Dhagash; Niemerg, Matthew; Rummel, Markus; Valeanu, Alexandru

    2013-07-01

    Solving large polynomial systems with coefficient parameters is ubiquitous and constitutes an important class of problems. We demonstrate the computational power of two methods, a symbolic one called the comprehensive Gröbner basis and a numerical one called coefficient-parameter polynomial continuation, applied to studying both potential energy landscapes and a variety of questions arising from geometry and phenomenology. Particular attention is paid to an example in flux compactification where important physical quantities such as the gravitino and moduli masses and the string coupling can be efficiently extracted.

  6. Education and learning: what's on the horizon?

    PubMed

    Pilcher, Jobeth

    2014-01-01

    Numerous organizations have called for significant changes in education for health care professionals. The call has included the need to incorporate evidence-based as well as innovative strategies. Previous articles in this column have focused primarily on evidence-based teaching strategies, including concept mapping, brain-based learning strategies, methods of competency assessment, and so forth. This article shifts the focus to new ways of thinking about knowledge and education. The article will also introduce evolving, innovative, less commonly used learning strategies and provide a peek into the future of learning.

  7. Integrated Reconfigurable Intelligent Systems (IRIS) for Complex Naval Systems

    DTIC Science & Technology

    2010-02-21

    ...RKF45] and Adams variable step-size predictor-corrector methods). While such algorithms naturally are usually used to numerically solve differential... verified by yet another function call. Due to their nature, such methods are referred to as predictor-corrector methods. While computationally expensive... Contract number: N00014-09-C-0394. Authors: Dr. Dimitri N. Mavris, Dr. Yongchang Li.

  8. A meshless method for solving two-dimensional variable-order time fractional advection-diffusion equation

    NASA Astrophysics Data System (ADS)

    Tayebi, A.; Shekari, Y.; Heydari, M. H.

    2017-07-01

    Several physical phenomena, such as transformation of pollutants, energy, particles and many others, can be described by the well-known convection-diffusion equation, which is a combination of the diffusion and advection equations. In this paper, this equation is generalized with the concept of variable-order fractional derivatives. The generalized equation is called the variable-order time fractional advection-diffusion equation (V-OTFA-DE). An accurate and robust meshless method based on the moving least squares (MLS) approximation and a finite difference scheme is proposed for its numerical solution on two-dimensional (2-D) arbitrary domains. In the time domain, the finite difference technique with a θ-weighted scheme, and in the space domain, the MLS approximation are employed to obtain appropriate semi-discrete solutions. Since the newly developed method is a meshless approach, it does not require any background mesh structure to obtain semi-discrete solutions of the problem under consideration, and the numerical solutions are constructed entirely on a set of scattered nodes. The proposed method is validated on three different examples, including two benchmark problems and an applied problem of pollutant distribution in the atmosphere. In all cases, the obtained results show that the proposed method is very accurate and robust. Moreover, a remarkable property of the proposed method, a so-called positive scheme, is observed in solving concentration transport phenomena.
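
    The time discretization used above can be sketched in isolation. Below, the θ-weighted finite-difference scheme is applied to the constant-order 1-D diffusion equation u_t = D u_xx on a uniform grid (the MLS spatial machinery and the variable-order fractional derivative are omitted, so this is only an illustration of the time-stepping idea); θ = 0.5 gives the Crank-Nicolson scheme, tested against the exact decaying sine mode.

```python
import numpy as np

D, L, T = 1.0, 1.0, 0.1
nx, nt, theta = 51, 200, 0.5           # theta = 0.5: Crank-Nicolson
dx, dt = L / (nx - 1), T / nt
x = np.linspace(0.0, L, nx)
u = np.sin(np.pi * x)                  # initial condition, u = 0 at both ends

# Standard second-difference matrix; boundary rows are zeroed so the
# Dirichlet values u(0) = u(L) = 0 are held fixed.
mu = D * dt / dx**2
A = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
     + np.diag(np.ones(nx - 1), -1))
A[0, :] = 0.0
A[-1, :] = 0.0

I = np.eye(nx)
lhs = I - theta * mu * A               # implicit (theta-weighted) part
rhs_mat = I + (1.0 - theta) * mu * A   # explicit part
for _ in range(nt):
    u = np.linalg.solve(lhs, rhs_mat @ u)

exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * D * T)
err = float(np.max(np.abs(u - exact)))
```

    For θ ≥ 0.5 the scheme is unconditionally stable, which is why the θ-weighted family is a common default for diffusion-dominated problems.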

  9. Optimization methods and silicon solar cell numerical models

    NASA Technical Reports Server (NTRS)

    Girardini, K.; Jacobsen, S. E.

    1986-01-01

    An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.

  10. A technique to remove the tensile instability in weakly compressible SPH

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoyang; Yu, Peng

    2018-01-01

    When smoothed particle hydrodynamics (SPH) is directly applied to the numerical simulation of transient viscoelastic free surface flows, a numerical problem called tensile instability arises. In this paper, we develop an optimized particle shifting technique to remove the tensile instability in SPH. The basic equations governing free surface flow of an Oldroyd-B fluid are considered and approximated by an improved SPH scheme. This includes the implementation of a kernel gradient correction and the introduction of a Rusanov flux into the continuity equation. To verify the effectiveness of the optimized particle shifting technique in removing the tensile instability, simulations of the impacting drop, the injection molding of a C-shaped cavity, and the extrudate swell are conducted. The numerical results obtained are compared with those simulated by other numerical methods. A comparison among different numerical techniques (e.g., the artificial stress) to remove the tensile instability is further performed. All numerical results agree well with the available data.

  11. A New Homotopy Perturbation Scheme for Solving Singular Boundary Value Problems Arising in Various Physical Models

    NASA Astrophysics Data System (ADS)

    Roul, Pradip; Warbhe, Ujwal

    2017-08-01

    The classical homotopy perturbation method proposed by J. H. He, Comput. Methods Appl. Mech. Eng. 178, 257 (1999) is useful for obtaining the approximate solutions for a wide class of nonlinear problems in terms of series with easily calculable components. However, in some cases, it has been found that this method results in slowly convergent series. To overcome the shortcoming, we present a new reliable algorithm called the domain decomposition homotopy perturbation method (DDHPM) to solve a class of singular two-point boundary value problems with Neumann and Robin-type boundary conditions arising in various physical models. Five numerical examples are presented to demonstrate the accuracy and applicability of our method, including thermal explosion, oxygen-diffusion in a spherical cell and heat conduction through a solid with heat generation. A comparison is made between the proposed technique and other existing seminumerical or numerical techniques. Numerical results reveal that only two or three iterations lead to high accuracy of the solution and this newly improved technique introduces a powerful improvement for solving nonlinear singular boundary value problems (SBVPs).

  12. Numerical and Experimental Studies on Impact Loaded Concrete Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saarenheimo, Arja; Hakola, Ilkka; Karna, Tuomo

    2006-07-01

    An experimental set-up has been constructed for medium-scale impact tests. The main objective of this effort is to provide data for the calibration and verification of numerical models of a loading scenario in which an aircraft impacts a nuclear power plant. One goal is to develop and put into use numerical methods for predicting the response of reinforced concrete structures to impacts of deformable projectiles that may contain combustible liquid ('fuel'). The loading and the structural behaviour, such as the collapse mechanism and the damage grade, will be predicted by simple analytical methods and using the non-linear FE method. In the so-called Riera method, the behavior of the missile material is assumed to be rigid plastic or rigid visco-plastic. Using elastic plastic and elastic visco-plastic material models, calculations are carried out with the ABAQUS/Explicit finite element code, assuming an axisymmetric deformation mode for the missile. With both methods, typically, the impact force time history, the velocity of the missile rear end and the missile shortening during the impact were recorded for comparisons. (authors)

  13. Nonlinear Schrödinger approach to European option pricing

    NASA Astrophysics Data System (ADS)

    Wróblewski, Marcin

    2017-05-01

    This paper deals with numerical option pricing methods based on a Schrödinger model rather than the Black-Scholes model. Nonlinear Schrödinger boundary value problems seem to be alternatives to linear models that better reflect the complexity and behavior of real markets. Therefore, based on the nonlinear Schrödinger option pricing model proposed in the literature, in this paper a model augmented by external atomic potentials is proposed and numerically tested. In terms of statistical physics, the developed model describes the option in analogy to a pair of two identical quantum particles occupying the same state. The proposed model is used to price European call options on a stock index. The model is calibrated using the Levenberg-Marquardt algorithm based on market data. A Runge-Kutta method is used to solve the discretized boundary value problem numerically. Numerical results are provided and discussed. It seems that our proposal models phenomena observed in the real market more accurately than linear models do.

  14. Methodology of Numerical Optimization for Orbital Parameters of Binary Systems

    NASA Astrophysics Data System (ADS)

    Araya, I.; Curé, M.

    2010-02-01

    The use of a numerical method of maximization (or minimization) in optimization processes allows us to obtain a great number of candidate solutions. We can therefore find the global maximum or minimum of the problem, but only if a suitable methodology is used. To obtain the global optimum values, we use the genetic algorithm called PIKAIA (P. Charbonneau) and four other algorithms implemented in Mathematica. We demonstrate that derived orbital parameters of binary systems published in some papers, based on radial velocity measurements, are local minima instead of global ones.
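
    The pitfall described above, a local search reporting a local rather than a global optimum, can be demonstrated with a toy multi-start experiment: a crude gradient descent on the double-well function f(x) = (x^2 - 1)^2 + 0.3x, restarted from random points. This is only a stand-in for the genetic-algorithm strategy in the paper; the function and all values below are illustrative, not an orbital-parameter fit.

```python
import random

def f(x):
    """Double well: global minimum near x = -1.03, local minimum near x = 0.96."""
    return (x * x - 1.0) ** 2 + 0.3 * x

def local_descent(x, step=0.01, iters=3000):
    """Plain gradient descent using the analytic derivative 4x(x^2-1) + 0.3."""
    for _ in range(iters):
        x -= step * (4.0 * x * (x * x - 1.0) + 0.3)
    return x

random.seed(1)
starts = [random.uniform(-2.0, 2.0) for _ in range(12)]
candidates = [local_descent(s) for s in starts]
best = min(candidates, key=f)   # keep the best of all restarts
```

    A single descent started to the right of the watershed stops at the local minimum near x ≈ 0.96; only by comparing several restarts (or using a global method such as a genetic algorithm) does the search recover the global minimum near x ≈ -1.03.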

  15. Some results on numerical methods for hyperbolic conservation laws

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang Huanan.

    1989-01-01

    This dissertation contains some results on the numerical solution of hyperbolic conservation laws. (1) The author introduced an artificial compression method as a correction to the basic ENO schemes. The method successfully prevents contact discontinuities from being smeared. This is achieved by increasing the slopes of the ENO reconstructions in such a way that the essentially non-oscillatory property of the schemes is kept. He analyzes the non-oscillatory property of the new artificial compression method by applying it to the UNO scheme, which is a second order accurate ENO scheme, and proves that the resulting scheme is indeed non-oscillatory. Extensive 1-D numerical results and some preliminary 2-D ones are provided to show the strong performance of the method. (2) He combines the ENO schemes and the centered difference schemes into self-adjusting hybrid schemes which will be called the localized ENO schemes. At or near the jumps, he uses the ENO schemes with the field by field decompositions; otherwise he simply uses the centered difference schemes without the field by field decompositions. The method involves a new interpolation analysis. In the numerical experiments on several standard test problems, the quality of the numerical results of this method is close to that of the pure ENO results. The localized ENO schemes can be equipped with the above artificial compression method. In this way, he dramatically improves the resolution of the contact discontinuities at very little additional cost. (3) He introduces a space-time mesh refinement method for time dependent problems.

  16. On the Numerical Formulation of Parametric Linear Fractional Transformation (LFT) Uncertainty Models for Multivariate Matrix Polynomial Problems

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.

    1998-01-01

    Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.

  17. A novel approach to calibrate the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements.

    PubMed

    Khoram, Nafiseh; Zayane, Chadia; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2016-03-15

    The calibration of the hemodynamic model that describes changes in blood flow and blood oxygenation during brain activation is a crucial step for successfully monitoring and possibly predicting brain activity. This in turn has the potential to aid diagnosis and treatment of brain diseases in early stages. We propose an efficient numerical procedure for calibrating the hemodynamic model using fMRI measurements. The proposed solution methodology is a regularized iterative method equipped with a Kalman filtering-type procedure. The Newton component of the proposed method addresses the nonlinear aspect of the problem. The regularization feature is used to ensure the stability of the algorithm. The Kalman filter procedure is incorporated here to address the noise in the data. Numerical results obtained with synthetic data as well as with real fMRI measurements are presented to illustrate the accuracy, robustness to noise, and cost-effectiveness of the proposed method. We present numerical results that clearly demonstrate that the proposed method outperforms the Cubature Kalman Filter (CKF), one of the most prominent existing numerical methods. We have designed an iterative numerical technique, called the TNM-CKF algorithm, for calibrating the mathematical model that describes the single-event related brain response when fMRI measurements are given. The method appears to be highly accurate and effective in reconstructing the BOLD signal even when the measurements are tainted with a high noise level (as high as 30%). Published by Elsevier B.V.

  18. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend these studies to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
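
    A minimal numeric sketch of the L2-calibration idea, with an illustrative setup not taken from the paper: the physical process is first estimated nonparametrically from noisy data (a crude moving average stands in for kernel regression), and the parameter of an imperfect linear computer model f(x, θ) = θx is then chosen to minimize the L2 distance to that estimate; ordinary least squares fits the noisy data directly.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 200)
zeta = np.sin(2.0 * x)                          # "true" physical process
y = zeta + 0.05 * rng.standard_normal(x.size)   # noisy physical observations

def moving_average(v, w=9):
    """Crude nonparametric smoother standing in for kernel regression."""
    kernel = np.ones(w) / w
    return np.convolve(v, kernel, mode="same")

zeta_hat = moving_average(y)

# L2 calibration: theta minimizing ||zeta_hat - theta*x||_2 (closed form
# projection onto the one-parameter model family).
theta_l2 = float(np.dot(zeta_hat, x) / np.dot(x, x))

# Ordinary least squares: fit the model directly to the noisy data.
theta_ols = float(np.dot(y, x) / np.dot(x, x))
```

    In this linear toy case the two estimates nearly coincide; the point of the paper is that their statistical properties (efficiency) differ once the model is imperfect and the data are stochastic.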

  19. CSM Testbed Development and Large-Scale Structural Applications

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Gillian, R. E.; Mccleary, Susan L.; Lotts, C. G.; Poole, E. L.; Overman, A. L.; Macy, S. C.

    1989-01-01

    A research activity called Computational Structural Mechanics (CSM) conducted at the NASA Langley Research Center is described. This activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM Testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM Testbed methods development environment is presented and some new numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized.

  20. Numerical solution methods for viscoelastic orthotropic materials

    NASA Technical Reports Server (NTRS)

    Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.

    1988-01-01

    Numerical solution methods for viscoelastic orthotropic materials, specifically fiber reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method called the Nonlinear Differential Equation Method (NDEM), which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time, and computer memory storage. The Volterra integral allowed the implementation of higher order solution techniques but had difficulties with singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.
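
    The Prony-series machinery common to the compared methods can be sketched for the linear case: a relaxation modulus E(t) = E_inf + Σ E_i exp(-t/τ_i) with one internal variable q_i per Prony term, updated recursively at each time step so the hereditary (convolution) stress integral never has to be re-evaluated over the whole strain history. The coefficients below are illustrative, not material data from the paper.

```python
import math

E_inf, E_i, tau_i = 1.0, [0.5, 0.3], [0.1, 1.0]   # illustrative Prony data

def stress_history(strain, dt):
    """Recursive internal-variable update for each Prony term:
    q_i <- exp(-dt/tau_i) * q_i + E_i * exp(-dt/(2 tau_i)) * d_eps,
    sigma = E_inf * eps + sum_i q_i."""
    q = [0.0 for _ in E_i]
    sigma, eps_prev = [], 0.0
    for eps in strain:
        d_eps = eps - eps_prev
        for i in range(len(q)):
            q[i] = (math.exp(-dt / tau_i[i]) * q[i]
                    + E_i[i] * math.exp(-dt / (2.0 * tau_i[i])) * d_eps)
        sigma.append(E_inf * eps + sum(q))
        eps_prev = eps
    return sigma

dt = 0.01
strain = [1.0] * 500              # step strain: a stress-relaxation test
sigma = stress_history(strain, dt)
```

    For a step strain the computed stress relaxes monotonically from near the instantaneous modulus E_inf + Σ E_i toward the long-term modulus E_inf, which is the qualitative behavior any stable viscoelastic time-stepping scheme must reproduce.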

  1. Thermodynamics of quantum information scrambling

    NASA Astrophysics Data System (ADS)

    Campisi, Michele; Goold, John

    2017-06-01

    Scrambling of quantum information can conveniently be quantified by so-called out-of-time-order correlators (OTOCs), i.e., correlators of the type ⟨[W(τ), V]† [W(τ), V]⟩, whose measurement presents a formidable experimental challenge. Here we report on a method for the measurement of OTOCs based on the so-called two-point measurement scheme developed in the field of nonequilibrium quantum thermodynamics. The scheme is of broader applicability than methods employed in current experiments and provides a clear-cut interpretation of quantum information scrambling in terms of nonequilibrium fluctuations of thermodynamic quantities, such as work and heat. Furthermore, we provide a numerical example on a spin chain which highlights the utility of our thermodynamic approach when understanding the differences between integrable and ergodic behaviors. We also discuss how the method can be used to extend the reach of current experiments.
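
    For intuition, the OTOC can be evaluated exactly on the smallest possible example: a single qubit with H = σz and W = V = σx in the maximally mixed state. This toy system (not the spin chain of the paper) has the closed form C(t) = 4 sin²(2t), so the commutator vanishes at t = 0 and grows as the Heisenberg-evolved W(t) rotates away from V.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def otoc(t):
    """C(t) = Tr(rho [W(t), V]^dag [W(t), V]) for H = sz, W = V = sx,
    rho the maximally mixed single-qubit state."""
    U = np.diag(np.exp(-1j * np.diag(sz) * t))   # e^{-iHt}, H diagonal
    Wt = U.conj().T @ sx @ U                     # Heisenberg-evolved W
    comm = Wt @ sx - sx @ Wt
    rho = np.eye(2) / 2.0
    return float(np.real(np.trace(rho @ comm.conj().T @ comm)))

c0 = otoc(0.0)        # operators commute at t = 0
ct = otoc(np.pi / 4)  # closed form gives 4 sin^2(pi/2) = 4
```

    The same trace can of course be estimated operationally via the two-point measurement statistics the paper describes; the direct matrix evaluation above is only a numerical check of what those statistics should reproduce.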

  2. Heat simulation via Scilab programming

    NASA Astrophysics Data System (ADS)

    Hasan, Mohammad Khatim; Sulaiman, Jumat; Karim, Samsul Arifin Abdul

    2014-07-01

    This paper discusses the use of an open source software package called Scilab to develop a heat simulator. The heat equation was used to simulate heat behavior in an object. The simulator was developed using the finite difference method. Numerical experiment outputs show that Scilab can produce a good simulation of heat behavior, with clear visual output, from only simple computer code.
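
    The abstract's simulator is written in Scilab; an equivalent explicit finite-difference sketch for the 1-D heat equation u_t = α u_xx with fixed end temperatures is shown below in Python (grid sizes and the check against the exact decaying sine mode are our own illustration, not the paper's setup).

```python
import numpy as np

alpha, L, T = 1.0, 1.0, 0.05
nx = 41
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha       # respects the stability limit dt <= dx^2 / (2 alpha)
nt = int(T / dt)
x = np.linspace(0.0, L, nx)
u = np.sin(np.pi * x)          # initial heat profile; ends held at zero

# Explicit (forward-Euler) update of the interior points.
for _ in range(nt):
    u[1:-1] = u[1:-1] + alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * alpha * nt * dt)
err = float(np.max(np.abs(u - exact)))
```

    The explicit scheme is the simplest to code, which fits the abstract's point that a short program suffices, at the price of the time-step stability restriction noted in the comment.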

  3. Hybrid matrix method for stable numerical analysis of the propagation of Dirac electrons in gapless bilayer graphene superlattices

    NASA Astrophysics Data System (ADS)

    Briones-Torres, J. A.; Pernas-Salomón, R.; Pérez-Álvarez, R.; Rodríguez-Vargas, I.

    2016-05-01

    Gapless bilayer graphene (GBG), like monolayer graphene, is a material system with unique properties, such as anti-Klein tunneling and intrinsic Fano resonances. These properties rely on the gapless parabolic dispersion relation and the chiral nature of bilayer graphene electrons. In addition, propagating and evanescent electron states coexist inherently in this material, giving rise to these exotic properties. In this sense, bilayer graphene is unique, since in most material systems in which Fano resonance phenomena are manifested, an external source that provides extended states is required. However, from a numerical standpoint, the presence of evanescent-divergent states in the linear superposition of eigenfunctions representing the Dirac spinors leads to numerical degradation (the so-called Ωd problem) in practical applications of the standard coefficient transfer matrix (K) method used to study charge transport properties in bilayer graphene based multi-barrier systems. We present here a straightforward procedure based on the hybrid compliance-stiffness matrix method (H) that can overcome this numerical degradation. Our results show that, in contrast to the standard matrix method, the proposed H method is suitable for studying the transmission and transport properties of electrons in GBG superlattices, since it remains numerically stable regardless of the size of the superlattice and the range of values taken by the input parameters: the energy and angle of the incident electrons, the barrier height, and the thickness and number of barriers. We show that the matrix determinant can be used as a test of the numerical accuracy in real calculations.

  4. An efficient and guaranteed stable numerical method for continuous modeling of infiltration and redistribution with a shallow dynamic water table

    NASA Astrophysics Data System (ADS)

    Lai, Wencong; Ogden, Fred L.; Steinke, Robert C.; Talbot, Cary A.

    2015-03-01

    We have developed a one-dimensional numerical method to simulate infiltration and redistribution in the presence of a shallow dynamic water table. This method builds upon the Green-Ampt infiltration with Redistribution (GAR) model and incorporates features from the Talbot-Ogden (T-O) infiltration and redistribution method in a discretized moisture content domain. The redistribution scheme is more physically meaningful than the capillary-weighted redistribution scheme in the T-O method. Groundwater dynamics are considered in this new method instead of assuming a hydrostatic groundwater front. It is also computationally more efficient than the T-O method. Motion of water in the vadose zone due to infiltration, redistribution, and interactions with capillary groundwater is described by ordinary differential equations. Numerical solutions to these equations are computationally less expensive than solutions of the highly nonlinear Richards' (1931) partial differential equation. We present results from numerical tests on 11 soil types using multiple rain pulses with different boundary conditions, with and without a shallow water table, and compare against the numerical solution of Richards' equation (RE). Results from the new method are in satisfactory agreement with RE solutions in terms of ponding time, deponding time, infiltration rate, and cumulative infiltrated depth. The new method, which we call "GARTO", can be used as an alternative to the RE for 1-D coupled surface and groundwater models in general situations with homogeneous soils and a dynamic water table. The GARTO method represents a significant advance in simulating groundwater-surface water interactions because it very closely matches the RE solution while being computationally efficient, with guaranteed mass conservation and no stability limitations that can affect RE solvers in the case of a near-surface water table.
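
    The Green-Ampt building block underlying the GAR/GARTO family reduces, in its classic single-front form, to an ordinary differential equation for the cumulative infiltrated depth F: dF/dt = Ks (1 + ψ Δθ / F). A minimal explicit-Euler sketch with illustrative loam-like parameter values (not values or the scheme from the paper) shows the characteristic decay of the infiltration rate toward Ks:

```python
Ks = 1.04        # saturated hydraulic conductivity [cm/h] (illustrative)
psi = 8.89       # wetting-front suction head [cm] (illustrative)
dtheta = 0.3     # soil moisture deficit [-] (illustrative)

def infiltrate(t_end, dt=1e-3, F0=0.1):
    """Explicit-Euler integration of dF/dt = Ks*(1 + psi*dtheta/F).
    F0 is a small initial depth to avoid the F -> 0 singularity."""
    F, t = F0, 0.0
    while t < t_end:
        rate = Ks * (1.0 + psi * dtheta / F)
        F += rate * dt
        t += dt
    return F, Ks * (1.0 + psi * dtheta / F)

F1, r1 = infiltrate(0.5)   # cumulative depth and rate after 0.5 h
F2, r2 = infiltrate(2.0)   # ... and after 2 h
```

    The cumulative depth grows while the infiltration rate decays monotonically toward Ks, the qualitative behavior that GAR extends with redistribution and that GARTO further couples to a dynamic water table.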

  5. A Conserving Discretization for the Free Boundary in a Two-Dimensional Stefan Problem

    NASA Astrophysics Data System (ADS)

    Segal, Guus; Vuik, Kees; Vermolen, Fred

    1998-03-01

The dissolution of a disk-like Al2Cu particle is considered. A characteristic property is that initially the particle has a nonsmooth boundary. The mathematical model of this dissolution process contains a description of the particle interface, the position of which varies in time. Such a model is called a Stefan problem. It is impossible to obtain an analytical solution for a general two-dimensional Stefan problem, so we use the finite element method to solve this problem numerically. First, we apply a classical moving mesh method. Computations show that after some time steps the predicted particle interface becomes very unrealistic. Therefore, we derive a new method for the displacement of the free boundary based on the balance of atoms. This method leads to good results, also for nonsmooth boundaries. Some numerical experiments are given for the dissolution of an Al2Cu particle in an Al-Cu alloy.

  6. Assessment of Hybrid High-Order methods on curved meshes and comparison with discontinuous Galerkin methods

    NASA Astrophysics Data System (ADS)

    Botti, Lorenzo; Di Pietro, Daniele A.

    2018-10-01

    We propose and validate a novel extension of Hybrid High-Order (HHO) methods to meshes featuring curved elements. HHO methods are based on discrete unknowns that are broken polynomials on the mesh and its skeleton. We propose here the use of physical frame polynomials over mesh elements and reference frame polynomials over mesh faces. With this choice, the degree of face unknowns must be suitably selected in order to recover on curved meshes the same convergence rates as on straight meshes. We provide an estimate of the optimal face polynomial degree depending on the element polynomial degree and on the so-called effective mapping order. The estimate is numerically validated through specifically crafted numerical tests. All test cases are conducted considering two- and three-dimensional pure diffusion problems, and include comparisons with discontinuous Galerkin discretizations. The extension to agglomerated meshes with curved boundaries is also considered.

  7. Fractions--Concepts before Symbols.

    ERIC Educational Resources Information Center

    Bennett, Albert B., Jr.

    The learning difficulties that students experience with fractions begin immediately when they are shown fraction symbols with one numeral written above the other and told that the "top number" is called the numerator and the "bottom number" is called the denominator. This introduction to fractions will usually include a few visual diagrams to help…

  8. On Numerical Heating

    NASA Astrophysics Data System (ADS)

    Liou, Meng-Sing

    2013-11-01

The development of computational fluid dynamics over the last few decades has yielded enormous successes and capabilities that are being routinely employed today; however, there remain some open problems to be properly resolved. One example is the so-called overheating problem, which can arise in two very different scenarios, from either colliding or receding streams. Common to both is a localized, numerically over-predicted temperature. Von Neumann reported the former, a compressive overheating, nearly 70 years ago and numerically smeared the temperature peak by introducing artificial diffusion. The latter, however, is unphysical in an expansive (rarefying) situation; it still dogs every method known to the author. We will present a study aiming at resolving this overheating problem, and we find that: (1) the entropy increase is linked one-to-one to the temperature rise, and (2) the overheating is, in practice, inevitable in the current computational fluid dynamics framework. Finally, we will show a simple hybrid method that fundamentally cures the overheating problem in a rarefying flow while also retaining the property of accurate shock capturing. Moreover, this remedy (an enhancement of current numerical methods) can be included easily in present Eulerian codes. This work is performed under NASA's Fundamental Aeronautics Program.

  9. Rank-k modification methods for recursive least squares problems

    NASA Astrophysics Data System (ADS)

    Olszanskyj, Serge; Lebak, James; Bojanczyk, Adam

    1994-09-01

In least squares problems, it is often desired to solve the same problem repeatedly but with several rows of the data either added, deleted, or both. Methods for quickly solving a problem after adding or deleting one row of data at a time are known. In this paper we introduce fundamental rank-k updating and downdating methods and show how extensions of rank-1 downdating methods based on LINPACK, Corrected Semi-Normal Equations (CSNE), and Gram-Schmidt factorizations, as well as new rank-k downdating methods, can all be derived from these fundamental results. We then analyze the cost of each new algorithm and make comparisons to k applications of the corresponding rank-1 algorithms. We provide experimental results comparing the numerical accuracy of the various algorithms, paying particular attention to the downdating methods, due to their potential numerical difficulties for ill-conditioned problems. We then discuss the computation involved for each downdating method, measured in terms of operation counts and BLAS calls. Finally, we provide serial execution timing results for these algorithms, noting preferable points for improvement and optimization. From our experiments we conclude that the Gram-Schmidt methods perform best in terms of numerical accuracy, but may be too costly for serial execution for large problems.
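The algebra behind rank-k updating is easiest to see on the normal equations. The paper's LINPACK, CSNE, and Gram-Schmidt variants operate on factorizations instead, for numerical stability, but the sketch below, which simply folds k new rows into A^T A and A^T b, conveys the idea:

```python
import numpy as np

def rank_k_update(AtA, Atb, U, y):
    """Fold k new rows U (k x n) with right-hand sides y (k,) into the
    normal-equations summary (A^T A, A^T b) of min ||Ax - b||."""
    return AtA + U.T @ U, Atb + U.T @ y
```

Solving with the updated summary matrices matches re-solving the stacked least squares problem from scratch; downdating (removing rows) would subtract the same terms, which is where the numerical difficulties the authors study arise.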

  10. Explicit Von Neumann Stability Conditions for the c-tau Scheme: A Basic Scheme in the Development of the CE-SE Courant Number Insensitive Schemes

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung

    2005-01-01

As part of the continuous development of the space-time conservation element and solution element (CE-SE) method, a set of so-called "Courant number insensitive schemes" has recently been proposed. The key advantage of these new schemes is that the numerical dissipation associated with them generally does not increase as the Courant number decreases. As such, they can be applied to problems with large Courant number disparities (such as commonly occur in Navier-Stokes problems) without incurring excessive numerical dissipation.

  11. Some variance reduction methods for numerical stochastic homogenization

    PubMed Central

    Blanc, X.; Le Bris, C.; Legoll, F.

    2016-01-01

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
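As a toy illustration of what variance reduction buys in a Monte Carlo average (not one of the corrector-problem techniques from the paper), antithetic variates pair each sample u with its mirror 1 - u; for a monotone integrand the pair members are negatively correlated, so the pair averages fluctuate far less:

```python
import numpy as np

def mc_plain(f, n, rng):
    # plain Monte Carlo samples of f(U), U uniform on [0, 1]
    u = rng.random(n)
    return f(u)

def mc_antithetic(f, n, rng):
    # antithetic pair averages: (f(u) + f(1 - u)) / 2
    u = rng.random(n // 2)
    return 0.5 * (f(u) + f(1.0 - u))
```

Both estimators target the same mean, but the sample variance of the antithetic pair averages is much smaller, so fewer realizations are needed for a given accuracy, which is precisely the effect sought in the homogenization setting above.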

  12. The contributions of Lewis Fry Richardson to drainage theory, soil physics, and the soil-plant-atmosphere continuum

    NASA Astrophysics Data System (ADS)

    Knight, John; Raats, Peter

    2016-04-01

    The EGU Division on Nonlinear Processes in Geophysics awards the Lewis Fry Richardson Medal. Richardson's significance is highlighted in http://www.egu.eu/awards-medals/portrait-lewis-fry-richardson/, but his contributions to soil physics and to numerical solutions of heat and diffusion equations are not mentioned. We would like to draw attention to those little known contributions. Lewis Fry Richardson (1881-1953) made important contributions to many fields including numerical weather prediction, finite difference solutions of partial differential equations, turbulent flow and diffusion, fractals, quantitative psychology and studies of conflict. He invented numerical weather prediction during World War I, although his methods were not successfully applied until 1950, after the invention of fast digital computers. In 1922 he published the book `Numerical weather prediction', of which few copies were sold and even fewer were read until the 1950s. To model heat and mass transfer in the atmosphere, he did much original work on turbulent flow and defined what is now known as the Richardson number. His technique for improving the convergence of a finite difference calculation is known as Richardson extrapolation, and was used by John Philip in his 1957 semi-analytical solution of the Richards equation for water movement in unsaturated soil. Richardson's first papers in 1908 concerned the numerical solution of the free surface problem of unconfined flow of water in saturated soil, arising in the design of drain spacing in peat. Later, for the lower boundary of his atmospheric model he needed to understand the movement of heat, liquid water and water vapor in what is now called the vadose zone and the soil plant atmosphere system, and to model coupled transfer of heat and flow of water in unsaturated soil. Finding little previous work, he formulated partial differential equations for transient, vertical flow of liquid water and for transfer of heat and water vapor. 
He paid considerable attention to the balances of water and energy at the soil-atmosphere and plant-atmosphere interfaces, making use of the concept of transfer resistance introduced by Brown and Escombe (1900) for leaf-atmosphere interfaces. He incorporated finite difference versions of all equations into his numerical weather forecasting model. From 1916, Richardson drove an ambulance in France in World War I, did weather computations in his spare time, and wrote a draft of his book. Later researchers such as L.A. Richards, D.A. de Vries and J.R. Philip from the 1930s to the 1950s were unaware that Richardson had anticipated many of their ideas on soil liquid water, heat, water vapor, and the soil-plant-atmosphere system. The Richards (1931) equation could rightly be called the Richardson (1922) equation! Richardson (1910) developed what we now call the Crank-Nicolson implicit method for the heat or diffusion equation. To save effort, he used an explicit three-level method after the first time step. Crank and Nicolson (1947) pointed out the instability in the explicit method, and used his implicit method for all time steps. Hanks and Bowers (1962) adapted the Crank-Nicolson method to solve the Richards equation. So we could say that Hanks and Bowers used the Richardson finite difference method to solve the Richardson equation for soil water flow!
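Richardson extrapolation itself fits in a few lines. The sketch below is illustrative (a central-difference derivative serves as the base estimate): results at step sizes h and h/2 are combined so that the leading O(h^p) error term cancels:

```python
from math import sin, cos

def central_diff(f, x, h):
    # second-order accurate derivative estimate: error ~ C * h**2
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson(F, h, p=2):
    # F(h) has error ~ C * h**p; this combination cancels the leading term
    return (2.0**p * F(h / 2.0) - F(h)) / (2.0**p - 1.0)
```

For the central difference (p = 2), the extrapolated value is fourth-order accurate, which is exactly the convergence-acceleration trick Philip used in his 1957 solution.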

  13. Co-state initialization for the minimum-time low-thrust trajectory optimization

    NASA Astrophysics Data System (ADS)

    Taheri, Ehsan; Li, Nan I.; Kolmanovsky, Ilya

    2017-05-01

This paper presents an approach for co-state initialization, which is a critical step in solving minimum-time low-thrust trajectory optimization problems using indirect optimal control numerical methods. Indirect methods used in determining optimal space trajectories typically result in two-point boundary-value problems and are solved by single- or multiple-shooting numerical methods. Accurate initialization of the co-state variables facilitates the numerical convergence of iterative boundary value problem solvers. In this paper, we propose a method which exploits the trajectory generated by the so-called pseudo-equinoctial and three-dimensional finite Fourier series shape-based methods to estimate the initial values of the co-states. The performance of the approach for two interplanetary rendezvous missions, from Earth to Mars and from Earth to asteroid Dionysus, is compared against three other approaches which, respectively, exploit random initialization of co-states, the adjoint-control transformation, and a standard genetic algorithm. The results indicate that with the proposed approach the percentage of converged cases is higher for trajectories with a higher number of revolutions, while the computation time is lower. These features are advantageous for broad trajectory searches in the preliminary phase of mission design.

  14. Final Technical Report [Scalable methods for electronic excitations and optical responses of nanostructures: mathematics to algorithms to observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saad, Yousef

    2014-03-19

The master project under which this work is funded had as its main objective to develop computational methods for modeling electronic excited-state and optical properties of various nanostructures. The specific goals of the computer science group were primarily to develop effective numerical algorithms in Density Functional Theory (DFT) and Time Dependent Density Functional Theory (TDDFT). There were essentially four distinct stated objectives. The first objective was to study and develop effective numerical algorithms for solving large eigenvalue problems such as those that arise in Density Functional Theory (DFT) methods. The second objective was to explore so-called linear scaling methods, or methods that avoid diagonalization. The third was to develop effective approaches for Time-Dependent DFT (TDDFT). Our fourth and final objective was to examine effective solution strategies for other problems in electronic excitations, such as the GW/Bethe-Salpeter method, and quantum transport problems.

  15. Numerical computations of the dynamics of fluidic membranes and vesicles

    NASA Astrophysics Data System (ADS)

    Barrett, John W.; Garcke, Harald; Nürnberg, Robert

    2015-11-01

Vesicles and many biological membranes are made of two monolayers of lipid molecules and form closed lipid bilayers. The dynamical behavior of vesicles is very complex and a variety of forms and shapes appear. Lipid bilayers can be considered as a surface fluid and hence the governing equations for the evolution include the surface (Navier-)Stokes equations, which in particular take the membrane viscosity into account. The evolution is driven by forces stemming from the curvature elasticity of the membrane. In addition, the surface fluid equations are coupled to bulk (Navier-)Stokes equations. We introduce a parametric finite-element method to solve this complex free boundary problem and present the first three-dimensional numerical computations based on the full (Navier-)Stokes system for several different scenarios. For example, the effects of the membrane viscosity, spontaneous curvature, and area difference elasticity (ADE) are studied. In particular, it turns out that even in the case of no viscosity contrast between the bulk fluids, the tank treading to tumbling transition can be obtained by increasing the membrane viscosity. Besides the classical tank treading and tumbling motions, another mode (called the transition mode in this paper, but originally called the vacillating-breathing mode and subsequently also called the trembling, transition, and swinging mode) separating these classical modes appears and is studied numerically. We also study how features of equilibrium shapes in the ADE and spontaneous curvature models, like budding behavior or starfish forms, behave in a shear flow.

  16. Simulating the electrohydrodynamics of a viscous droplet

    NASA Astrophysics Data System (ADS)

    Theillard, Maxime; Saintillan, David

    2016-11-01

We present a novel numerical approach for the simulation of a viscous drop placed in an electric field in two and three spatial dimensions. Our method is constructed as a stable projection method on Quad/Octree grids. Using a modified pressure correction, we were able to alleviate the standard time step restriction incurred by capillary forces. In weak electric fields, our results match the predictions of the Taylor-Melcher leaky dielectric model remarkably well. In strong electric fields, the so-called Quincke rotation is correctly reproduced.

  17. Base-Calling Algorithm with Vocabulary (BCV) Method for Analyzing Population Sequencing Chromatograms

    PubMed Central

    Fantin, Yuri S.; Neverov, Alexey D.; Favorov, Alexander V.; Alvarez-Figueroa, Maria V.; Braslavskaya, Svetlana I.; Gordukova, Maria A.; Karandashova, Inga V.; Kuleshov, Konstantin V.; Myznikova, Anna I.; Polishchuk, Maya S.; Reshetov, Denis A.; Voiciehovskaya, Yana A.; Mironov, Andrei A.; Chulanov, Vladimir P.

    2013-01-01

    Sanger sequencing is a common method of reading DNA sequences. It is less expensive than high-throughput methods, and it is appropriate for numerous applications including molecular diagnostics. However, sequencing mixtures of similar DNA of pathogens with this method is challenging. This is important because most clinical samples contain such mixtures, rather than pure single strains. The traditional solution is to sequence selected clones of PCR products, a complicated, time-consuming, and expensive procedure. Here, we propose the base-calling with vocabulary (BCV) method that computationally deciphers Sanger chromatograms obtained from mixed DNA samples. The inputs to the BCV algorithm are a chromatogram and a dictionary of sequences that are similar to those we expect to obtain. We apply the base-calling function on a test dataset of chromatograms without ambiguous positions, as well as one with 3–14% sequence degeneracy. Furthermore, we use BCV to assemble a consensus sequence for an HIV genome fragment in a sample containing a mixture of viral DNA variants and to determine the positions of the indels. Finally, we detect drug-resistant Mycobacterium tuberculosis strains carrying frameshift mutations mixed with wild-type bacteria in the pncA gene, and roughly characterize bacterial communities in clinical samples by direct 16S rRNA sequencing. PMID:23382983

  18. Sequential Designs Based on Bayesian Uncertainty Quantification in Sparse Representation Surrogate Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.

A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. Numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.

  19. A numerical formulation and algorithm for limit and shakedown analysis of large-scale elastoplastic structures

    NASA Astrophysics Data System (ADS)

    Peng, Heng; Liu, Yinghua; Chen, Haofeng

    2018-05-01

In this paper, a novel direct method called the stress compensation method (SCM) is proposed for limit and shakedown analysis of large-scale elastoplastic structures. Without needing to solve a specific mathematical programming problem, the SCM is a two-level iterative procedure based on a sequence of linear elastic finite element solutions in which the global stiffness matrix is decomposed only once. In the inner loop, the statically admissible residual stress field for shakedown analysis is constructed. In the outer loop, a series of decreasing load multipliers is updated to approach the shakedown limit multiplier by using an efficient and robust iteration control technique, where the static shakedown theorem is adopted. Three numerical examples with up to about 140,000 finite element nodes confirm the applicability and efficiency of this method for two-dimensional and three-dimensional elastoplastic structures, with detailed discussions on the convergence and accuracy of the proposed algorithm.

  20. Application of Gauss's law space-charge limited emission model in iterative particle tracking method

    NASA Astrophysics Data System (ADS)

    Altsybeyev, V. V.; Ponomarev, V. A.

    2016-11-01

The particle tracking method with a so-called gun iteration for modeling the space charge is discussed in this paper. We suggest applying an emission model based on Gauss's law for the calculation of the space-charge limited current density distribution within the considered method. Based on the presented emission model, we have developed a numerical algorithm for these calculations. This approach allows us to perform accurate and computationally inexpensive numerical simulations for different vacuum sources with curved emitting surfaces, also in the presence of additional physical effects such as bipolar flows and backscattered electrons. Results of simulations of a cylindrical diode and a diode with an elliptical emitter, using axisymmetric coordinates, are presented. The high efficiency and accuracy of the suggested approach are confirmed by the obtained results and by comparisons with analytical solutions.
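The classic 1-D benchmark for space-charge limited emission, against which such diode simulations are usually checked, is the planar Child-Langmuir law. A minimal sketch with standard physical constants (this is the textbook formula, not the paper's Gauss's-law algorithm):

```python
from math import sqrt

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
Q_E  = 1.602176634e-19    # elementary charge, C
M_E  = 9.1093837015e-31   # electron mass, kg

def child_langmuir_J(V, d):
    """Space-charge limited current density (A/m^2) of a planar vacuum
    diode with gap voltage V (volts) and electrode spacing d (metres):
    J = (4*eps0/9) * sqrt(2e/m) * V**(3/2) / d**2."""
    return (4.0 * EPS0 / 9.0) * sqrt(2.0 * Q_E / M_E) * V**1.5 / d**2
```

The V^(3/2) scaling and the perveance constant of about 2.33e-6 A/V^(3/2) for electrons are the standard analytical reference points for validating emission models like the one described above.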

  1. Sequential Designs Based on Bayesian Uncertainty Quantification in Sparse Representation Surrogate Modeling

    DOE PAGES

    Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.

    2017-04-12

A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. Numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
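A minimal sketch of the Bayesian ingredient described above: with a normal prior on the linear coefficients and a Gaussian likelihood, the posterior is itself Gaussian, so direct sampling stands in here for the paper's MCMC, and credible intervals follow from sample percentiles. All names and parameter values are illustrative:

```python
import numpy as np

def posterior_samples(X, y, sigma2, tau2=10.0, n_samples=2000, seed=0):
    """Sample coefficients from the posterior of y ~ N(X beta, sigma2 I)
    under the prior beta ~ N(0, tau2 I) (conjugate, so sampling is direct)."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    prec = X.T @ X / sigma2 + np.eye(p) / tau2   # posterior precision
    cov = np.linalg.inv(prec)
    mean = cov @ (X.T @ y) / sigma2
    return rng.multivariate_normal(mean, cov, size=n_samples)

def credible_interval(samples, level=0.95):
    """Componentwise equal-tailed credible interval from posterior samples."""
    lo, hi = 50.0 * (1.0 - level), 50.0 * (1.0 + level)
    return np.percentile(samples, [lo, hi], axis=0)
```

In the OBSM setting the coefficient space is much larger and sparsity-inducing choices matter, but the mechanics of turning posterior samples into prediction uncertainty are the same.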

  2. Numerical Polynomial Homotopy Continuation Method and String Vacua

    DOE PAGES

    Mehta, Dhagash

    2011-01-01

Finding vacua for the four-dimensional effective theories for supergravity which descend from flux compactifications, and analyzing them according to their stability, is one of the central problems in string phenomenology. Except for some simple toy models, it is, however, difficult to find all the vacua analytically. Recently developed algorithmic methods based on symbolic computer algebra can be of great help in the more realistic models. However, they suffer from serious algorithmic complexities and are limited to small system sizes. In this paper, we review a numerical method called the numerical polynomial homotopy continuation (NPHC) method, first used in the areas of lattice field theories, which by construction finds all of the vacua of a given potential that is known to have only isolated solutions. The NPHC method is known to suffer from no major algorithmic complexities and is embarrassingly parallelizable, and hence its applicability goes way beyond the existing symbolic methods. We first solve a simple toy model as a warm-up example to demonstrate the NPHC method at work. We then show that all the vacua of a more complicated compactified M-theory model, which has an SU(3) structure, can be obtained by using a desktop machine in just about an hour, a feat which was reported to be prohibitively difficult by the existing symbolic methods. Finally, we compare the various technicalities between the two methods.
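The core of homotopy continuation can be sketched for a single quadratic, a toy stand-in for the multivariate polynomial systems the paper treats: deform a start system g with known roots into the target f along H(x, t) = (1 - t) * gamma * g(x) + t * f(x), tracking each start root with an Euler predictor and Newton corrector. The random complex gamma (the standard "gamma trick") keeps the solution paths nonsingular; everything below is an illustrative sketch, not the NPHC implementation:

```python
def f(x):  return x * x - 3 * x + 2     # target polynomial, roots 1 and 2
def df(x): return 2 * x - 3
def g(x):  return x * x - 1             # start system, known roots +1 and -1
def dg(x): return 2 * x

GAMMA = 0.6 + 0.8j                      # "random" complex gamma

def track(x, steps=200):
    """Track one root of g along H(x, t) from t = 0 to t = 1."""
    for k in range(steps):
        t0, dt = k / steps, 1.0 / steps
        # Euler predictor: dx/dt = -dH/dt / dH/dx along the path H = 0
        dHdx = (1 - t0) * GAMMA * dg(x) + t0 * df(x)
        dHdt = f(x) - GAMMA * g(x)
        x = x - dt * dHdt / dHdx
        # Newton corrector at t1 pulls the prediction back onto the path
        t1 = t0 + dt
        for _ in range(5):
            H    = (1 - t1) * GAMMA * g(x) + t1 * f(x)
            dHdx = (1 - t1) * GAMMA * dg(x) + t1 * df(x)
            x = x - H / dHdx
    return x
```

Tracking both start roots recovers both roots of the target; for systems of polynomials the same predictor-corrector runs independently per path, which is why the method is embarrassingly parallelizable.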

  3. Algorithms for Performance, Dependability, and Performability Evaluation using Stochastic Activity Networks

    NASA Technical Reports Server (NTRS)

    Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.

    1997-01-01

Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling with Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that can tolerate such large state spaces and that do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on the fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally-efficient technique for Gauss-Seidel based solvers that avoids the need to generate rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as for other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
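The flavour of the on-the-fly idea can be shown with a plain Gauss-Seidel sweep (illustrative only, not the paper's modified adaptive variant): the solver asks a callback for one matrix row at a time, so the transition-rate matrix never needs to be stored:

```python
def gauss_seidel_on_the_fly(row_fn, b, x, sweeps=50):
    """Solve A x = b by Gauss-Seidel, regenerating row i of A via row_fn(i)
    on demand instead of storing the whole matrix."""
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            row = row_fn(i)  # row generated on the fly
            s = sum(row[j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / row[i]
    return x
```

In the stochastic-activity-network setting, row_fn would derive row i of the transition-rate matrix directly from the high-level model; the caching strategy in the abstract exists precisely because such regeneration is expensive.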

  4. Numerical simulation of water hammer in low pressurized pipe: comparison of SimHydraulics and Lax-Wendroff method with experiment

    NASA Astrophysics Data System (ADS)

    Himr, D.

    2013-04-01

This article describes the simulation of unsteady flow during water hammer with two programs that use different numerical approaches to solve the ordinary one-dimensional differential equations describing the dynamics of hydraulic elements and pipes. The first is Matlab-Simulink-SimHydraulics, commercial software developed to solve the dynamics of general hydraulic systems, which it defines with block elements. The other program is called HYDRA and is based on the Lax-Wendroff numerical method, which serves as a tool to solve the momentum and continuity equations. This program was developed in Matlab by Brno University of Technology. Experimental measurements were performed on a simple test rig, which consists of an elastic pipe with strong damping connecting two reservoirs. Water hammer is induced by fast closing of the valve. Physical properties of the liquid and pipe elasticity parameters were considered in both simulations, which are in very good agreement with each other, and differences in comparison with the experimental data are minimal.
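For reference, one Lax-Wendroff step for the linear advection equation u_t + a*u_x = 0 on a periodic grid looks as follows; this is a minimal sketch of the scheme's stencil, far simpler than the coupled momentum and continuity equations HYDRA solves:

```python
import numpy as np

def lax_wendroff_step(u, c):
    """One Lax-Wendroff update for u_t + a*u_x = 0 on a periodic grid,
    with Courant number c = a*dt/dx (stable for |c| <= 1)."""
    up = np.roll(u, -1)   # u_{i+1}
    um = np.roll(u, 1)    # u_{i-1}
    return u - 0.5 * c * (up - um) + 0.5 * c * c * (up - 2.0 * u + um)
```

The scheme is second-order accurate in space and time, which is why it is a natural choice for resolving the sharp pressure waves of a water hammer.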

  5. Analytical N beam position monitor method

    NASA Astrophysics Data System (ADS)

    Wegscheider, A.; Langner, A.; Tomás, R.; Franchi, A.

    2017-11-01

Measurement and correction of focusing errors is of great importance for the performance and machine protection of circular accelerators. Furthermore, the LHC needs to provide equal luminosities to the experiments ATLAS and CMS. High demands are also set on the speed of optics commissioning, as the foreseen operation with β*-leveling on luminosity will require many operational optics. A fast measurement of the β-function around a storage ring is usually done by using the measured phase advance between three consecutive beam position monitors (BPMs). A recent extension of this established technique, called the N-BPM method, was successfully applied for optics measurements at CERN, ALBA, and ESRF. We present here an improved algorithm that uses analytical calculations for both random and systematic errors and takes into account the presence of quadrupole, sextupole, and BPM misalignments, in addition to quadrupolar field errors. This new scheme, called the analytical N-BPM method, is much faster, further improves the measurement accuracy, and is applicable to very pushed beam optics where the existing numerical N-BPM method tends to fail.
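The three-BPM idea the abstract starts from can be written down compactly. A commonly quoted form scales the model β at the first BPM by a ratio of cotangent differences of measured versus model phase advances; this is a sketch of that classic relation, not of the analytical N-BPM error propagation itself:

```python
from math import tan

def beta_from_phases(beta1_model, phi12, phi13, phi12_mdl, phi13_mdl):
    """Measured beta at BPM 1 from measured phase advances to BPMs 2 and 3
    (radians), given the model beta and the model phase advances."""
    cot = lambda x: 1.0 / tan(x)
    return beta1_model * (cot(phi12) - cot(phi13)) / (cot(phi12_mdl) - cot(phi13_mdl))
```

When the measured phase advances coincide with the model ones, the model β is recovered exactly; the N-BPM extensions combine many such estimates with a proper error model.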

  6. LOCAL ORTHOGONAL CUTTING METHOD FOR COMPUTING MEDIAL CURVES AND ITS BIOMEDICAL APPLICATIONS

    PubMed Central

    Einstein, Daniel R.; Dyedov, Vladimir

    2010-01-01

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546

  7. Methods of Conceptual Clustering and their Relation to Numerical Taxonomy.

    DTIC Science & Technology

    1985-07-22

the conceptual clustering problem is to first solve the aggregation problem, and then the characterization problem. In machine learning, the...clusterings by first generating some number of possible clusterings. For each clustering generated, one calls a learning-from-examples subroutine, which...class 1 from class 2, and vice versa; only the first combination implies a partition over the set of theoretically possible objects. The first

  8. Improving the Navy’s Passive Underwater Acoustic Monitoring of Marine Mammal Populations

    DTIC Science & Technology

    2013-09-30

passive acoustic monitoring: Correcting humpback whale call detections for site-specific and time-dependent environmental characteristics,” JASA Exp...marine mammal species using passive acoustic monitoring, with application to obtaining density estimates of transiting humpback whale populations in...minimize the variance of the density estimates, 3) to apply the numerical modeling methods for humpback whale vocalizations to understand distortions

  9. Static Methods in the Design of Nonlinear Automatic Control Systems,

    DTIC Science & Technology

    1984-06-27

    227 Chapter VI. Ways of Decreasing the Number of Statistical Nodes During the Research of Nonlinear Systems...at present occupies the central place. This region of research was called the statistical dynamics of nonlinear automatic control systems...receives further development in the numerous research of Soviet and foreign scientists. Special role in the development of the statistical dynamics of

  10. Some variance reduction methods for numerical stochastic homogenization.

    PubMed

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
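
    As a generic illustration of the variance-reduction idea (not the authors' corrector-problem setting), the sketch below uses antithetic variates on a toy one-dimensional integrand; the function `f` and the sample sizes are arbitrary choices for illustration only.

```python
import random
import statistics

def antithetic_mc(f, n_pairs, seed=0):
    """Estimate E[f(U)], U ~ Uniform(0,1), averaging antithetic pairs (u, 1-u)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_pairs):
        u = rng.random()
        samples.append(0.5 * (f(u) + f(1.0 - u)))
    return statistics.mean(samples), statistics.stdev(samples)

def plain_mc(f, n, seed=0):
    """Plain Monte Carlo estimate of E[f(U)] for comparison."""
    rng = random.Random(seed)
    samples = [f(rng.random()) for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

f = lambda u: u * u  # toy integrand with known mean 1/3
m_anti, s_anti = antithetic_mc(f, 5000)
m_plain, s_plain = plain_mc(f, 10000)
# For a monotone integrand the antithetic per-sample variance is lower
```

    For a monotone integrand, f(u) and f(1-u) are negatively correlated, so each pair average has smaller variance than a single independent draw, which is the same mechanism the paper exploits at far greater scale for corrector problems.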

  11. A benchmark study of the sea-level equation in GIA modelling

    NASA Astrophysics Data System (ADS)

    Martinec, Zdenek; Klemann, Volker; van der Wal, Wouter; Riva, Riccardo; Spada, Giorgio; Simon, Karen; Blank, Bas; Sun, Yu; Melini, Daniele; James, Tom; Bradley, Sarah

    2017-04-01

    The sea-level load in glacial isostatic adjustment (GIA) is described by the so-called sea-level equation (SLE), which represents the mass redistribution between ice sheets and oceans on a deforming earth. Various levels of complexity of the SLE have been proposed in the past, ranging from a simple mean global sea level (the so-called eustatic sea level) to the load with a deforming ocean bottom, migrating coastlines and a changing shape of the geoid. Several approaches to solving the SLE have been derived, from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, there has been no systematic intercomparison amongst the solvers through which the methods may be validated. The goal of this paper is to present a series of benchmark experiments designed for testing and comparing numerical implementations of the SLE. Our approach starts with simple load cases, even though the benchmark will not result in GIA predictions for a realistic loading scenario. In the longer term we aim for a benchmark with a realistic loading scenario, and also for benchmark solutions with rotational feedback. The current benchmark uses an earth model for which Love numbers have been computed and benchmarked in Spada et al. (2011). In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found can often be attributed to the different approximations inherent to the various algorithms. Literature: G. Spada, V. R. Barletta, V. Klemann, R. E. M. Riva, Z. Martinec, P. Gasperini, B. Lund, D. Wolf, L. L. A. Vermeersen, and M. A. King, 2011. A benchmark study for glacial isostatic adjustment codes. Geophys. J. Int. 185: 106-132 doi:10.1111/j.1365-

  12. A new approach of watermarking technique by means multichannel wavelet functions

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Puccio, Luigia

    2012-12-01

    Digital piracy involving images, music, movies, books, and so on is a legal problem that has not yet found a solution. It therefore becomes crucial to create and develop methods and numerical algorithms to solve copyright problems. In this paper we focus on a new watermarking technique applied to digital color images. Our aim is to describe the realized watermarking algorithm, based on multichannel wavelet functions with multiplicity r = 3, called MCWM 1.0. We report a large experimentation and some important numerical results showing the robustness of the proposed algorithm to geometrical attacks.

  13. ULTRA-SHARP nonoscillatory convection schemes for high-speed steady multidimensional flow

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Mokhtari, Simin

    1990-01-01

    For convection-dominated flows, classical second-order methods are notoriously oscillatory and often unstable. For this reason, many computational fluid dynamicists have adopted various forms of (inherently stable) first-order upwinding over the past few decades. Although it is now well known that first-order convection schemes suffer from serious inaccuracies attributable to artificial viscosity or numerical diffusion under high convection conditions, these methods continue to enjoy widespread popularity for numerical heat transfer calculations, apparently due to a perceived lack of viable high accuracy alternatives. But alternatives are available. For example, nonoscillatory methods used in gasdynamics, including currently popular TVD schemes, can be easily adapted to multidimensional incompressible flow and convective transport. This, in itself, would be a major advance for numerical convective heat transfer, for example. But, as is shown, second-order TVD schemes form only a small, overly restrictive, subclass of a much more universal, and extremely simple, nonoscillatory flux-limiting strategy which can be applied to convection schemes of arbitrarily high order accuracy, while requiring only a simple tridiagonal ADI line-solver, as used in the majority of general purpose iterative codes for incompressible flow and numerical heat transfer. The new universal limiter and associated solution procedures form the so-called ULTRA-SHARP alternative for high resolution nonoscillatory multidimensional steady state high speed convective modelling.
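
    The flux-limiting idea can be illustrated with a far simpler limiter than the universal ULTRA-SHARP limiter described above: the sketch below applies a minmod-limited second-order upwind scheme to 1-D linear advection of a step profile. All names, grid sizes and parameters are illustrative, not from the paper.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: zero at extrema, smaller-magnitude slope otherwise."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_muscl(u, c, nsteps):
    """Advect u with speed c > 0 on a periodic grid; c is also the CFL number."""
    for _ in range(nsteps):
        du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited cell slopes
        u_face = u + 0.5 * (1.0 - c) * du                   # value at face i+1/2
        u = u - c * (u_face - np.roll(u_face, 1))           # conservative update
    return u

n = 100
u0 = np.where((np.arange(n) > 20) & (np.arange(n) < 40), 1.0, 0.0)  # step profile
u = advect_muscl(u0.copy(), c=0.5, nsteps=60)
# The limited scheme keeps the solution within the initial bounds (no new extrema)
```

    Unlimited second-order schemes would produce over- and undershoots at the step; the limiter clips the reconstructed slopes near extrema, giving the nonoscillatory behaviour the abstract describes, albeit with a much more restrictive (TVD) limiter than the universal one proposed there.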

  14. Parametric methods outperformed non-parametric methods in comparisons of discrete numerical variables.

    PubMed

    Fagerland, Morten W; Sandvik, Leiv; Mowinckel, Petter

    2011-04-13

    The number of events per individual is a widely reported variable in medical research papers. Such variables are the most common representation of the general variable type called discrete numerical. There is currently no consensus on how to compare and present such variables, and recommendations are lacking. The objective of this paper is to present recommendations for the analysis and presentation of results for discrete numerical variables. Two simulation studies were used to investigate the performance of hypothesis tests and confidence interval methods for variables with outcomes {0, 1, 2}, {0, 1, 2, 3}, {0, 1, 2, 3, 4}, and {0, 1, 2, 3, 4, 5}, using the difference between the means as an effect measure. The Welch U test (the T test with adjustment for unequal variances) and its associated confidence interval performed well in almost all situations considered. The Brunner-Munzel test also performed well, except for small sample sizes (10 in each group). The ordinary T test, the Wilcoxon-Mann-Whitney test, the percentile bootstrap interval, and the bootstrap-t interval did not perform satisfactorily. The difference between the means is an appropriate effect measure for comparing two independent discrete numerical variables that have both lower and upper bounds. To analyze this problem, we encourage more frequent use of parametric hypothesis tests and confidence intervals.
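
    A minimal sketch of the recommended analysis, assuming two hypothetical groups of event counts: the Welch statistic, the Welch-Satterthwaite degrees of freedom, and an approximate 95% confidence interval for the difference between the means (using the normal critical value 1.96, adequate for large degrees of freedom) computed with the Python standard library only.

```python
import math
import random
import statistics

def welch(a, b):
    """Welch T statistic, Welch-Satterthwaite df, and an approximate 95% CI
    for the difference between the means (normal critical value)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a) / len(a), statistics.variance(b) / len(b)
    se = math.sqrt(va + vb)                      # standard error of the difference
    t = (ma - mb) / se
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    ci = (ma - mb - 1.96 * se, ma - mb + 1.96 * se)
    return t, df, ci

rng = random.Random(1)
# Two simulated groups of event counts with outcomes in {0, 1, 2, 3}
group_a = rng.choices([0, 1, 2, 3], weights=[4, 3, 2, 1], k=80)
group_b = rng.choices([0, 1, 2, 3], weights=[2, 3, 3, 2], k=80)
t, df, ci = welch(group_a, group_b)
```

    In practice one would use an exact t critical value at the computed df rather than 1.96; the point of the sketch is that, unlike the ordinary T test, no equal-variance assumption is made.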

  15. Improved methods for simulating nearly extremal binary black holes

    NASA Astrophysics Data System (ADS)

    Scheel, Mark A.; Giesler, Matthew; Hemberger, Daniel A.; Lovelace, Geoffrey; Kuper, Kevin; Boyle, Michael; Szilágyi, Béla; Kidder, Lawrence E.

    2015-05-01

    Astrophysical black holes could be nearly extremal (that is, rotating nearly as fast as possible); therefore, nearly extremal black holes could be among the binaries that current and future gravitational-wave observatories will detect. Predicting the gravitational waves emitted by merging black holes requires numerical-relativity simulations, but these simulations are especially challenging when one or both holes have mass m and spin S exceeding the Bowen-York limit of S/m^2 = 0.93. We present improved methods that enable us to simulate merging, nearly extremal black holes (i.e., black holes with S/m^2 > 0.93) more robustly and more efficiently. We use these methods to simulate an unequal-mass, precessing binary black hole (BBH) coalescence, where the larger black hole has S/m^2 = 0.99. We also use these methods to simulate a non-precessing BBH coalescence, where both black holes have S/m^2 = 0.994, nearly reaching the Novikov-Thorne upper bound for holes spun up by thin accretion disks. We demonstrate numerical convergence and estimate the numerical errors of the waveforms; we compare numerical waveforms from our simulations with post-Newtonian and effective-one-body waveforms; we compare the evolution of the black hole masses and spins with analytic predictions; and we explore the effect of increasing spin magnitude on the orbital dynamics (the so-called ‘orbital hangup’ effect).

  16. Well-conditioning global-local analysis using stable generalized/extended finite element method for linear elastic fracture mechanics

    NASA Astrophysics Data System (ADS)

    Malekan, Mohammad; Barros, Felicio Bruzzi

    2016-11-01

    Using the locally-enriched strategy to enrich a small/local part of the problem by the generalized/extended finite element method (G/XFEM) leads to a non-optimal convergence rate and an ill-conditioned system of equations due to the presence of blending elements. The local enrichment can be chosen from polynomial, singular, branch or numerical types. The so-called stable version of the G/XFEM method provides a well-conditioned approach when only singular functions are used in the blending elements. This paper combines numeric enrichment functions obtained from the global-local G/XFEM method with polynomial enrichment and a well-conditioned approach, stable G/XFEM, in order to show the robustness and effectiveness of the approach. In global-local G/XFEM, the enrichment functions are constructed numerically from the solution of a local problem. Furthermore, several enrichment strategies are adopted along with the global-local enrichment. The results obtained with these enrichment strategies are discussed in detail, considering the convergence rate in strain energy, the growth rate of the condition number, and computational processing. Numerical experiments show that using geometrical enrichment along with stable G/XFEM for the global-local strategy improves the convergence rate and the conditioning of the problem. In addition, the results show that using polynomial enrichment for the global problem simultaneously with global-local enrichments leads to ill-conditioned system matrices and a poor convergence rate.

  17. A Numerical Method for Obtaining Monoenergetic Neutron Flux Distributions and Transmissions in Multiple-Region Slabs

    NASA Technical Reports Server (NTRS)

    Schneider, Harold

    1959-01-01

    This method is investigated for semi-infinite multiple-slab configurations of arbitrary width, composition, and source distribution. Isotropic scattering in the laboratory system is assumed. Isotropic scattering implies that the fraction of neutrons scattered in the i(sup th) volume element or subregion that will make their next collision in the j(sup th) volume element or subregion is the same for all collisions. These so-called "transfer probabilities" between subregions are calculated and used to obtain successive-collision densities from which the flux and transmission probabilities directly follow. For a thick slab with little or no absorption, a successive-collisions technique proves impractical because an unreasonably large number of collisions must be followed in order to obtain the flux. Here the appropriate integral equation is converted into a set of linear simultaneous algebraic equations that are solved for the average total flux in each subregion. When ordinary diffusion theory applies with satisfactory precision in a portion of the multiple-slab configuration, the problem is solved by ordinary diffusion theory, but the flux is plotted only in the region of validity. The angular distribution of neutrons entering the remaining portion is determined from the known diffusion flux and the remaining region is solved by higher order theory. Several procedures for applying the numerical method are presented and discussed. To illustrate the calculational procedure, a symmetrical slab in a vacuum is worked by the numerical, Monte Carlo, and P(sub 3) spherical harmonics methods. In addition, an unsymmetrical double-slab problem is solved by the numerical and Monte Carlo methods. The numerical approach proved faster and more accurate in these examples. Adaptation of the method to anisotropic scattering in slabs is indicated, although no example is included in this paper.

  18. Multichannel-Hadamard calibration of high-order adaptive optics systems.

    PubMed

    Guo, Youming; Rao, Changhui; Bao, Hua; Zhang, Ang; Zhang, Xuejun; Wei, Kai

    2014-06-02

    We present a novel technique for calibrating the interaction matrix of high-order adaptive optics systems, called the multichannel-Hadamard method. In this method, the deformable mirror actuators are first divided into a series of channels according to their coupling relationship, and then the voltage-oriented Hadamard method is applied to these channels. Taking the 595-element adaptive optics system as an example, the procedure is described in detail. The optimal channel division is discussed and tested by numerical simulation. The proposed method is also compared with the voltage-oriented Hadamard-only method and the multichannel-only method by experiments. Results show that the multichannel-Hadamard method can produce a significant improvement in interaction matrix measurement.
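
    The voltage-oriented Hadamard idea alone (without the multichannel division) can be sketched as follows, assuming a hypothetical linear response model: instead of poking one actuator at a time, whole Hadamard voltage patterns are applied, and the interaction matrix is recovered from the orthogonality of the Hadamard matrix. `M_true`, the noise level, and the dimensions are made up for illustration.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix; n a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(0)
n_act, n_meas = 8, 12
M_true = rng.normal(size=(n_meas, n_act))         # unknown interaction matrix

H = hadamard(n_act)                               # one voltage pattern per column
noise = 0.01 * rng.normal(size=(n_meas, n_act))
R = M_true @ H + noise                            # measured responses to each pattern
M_est = R @ H.T / n_act                           # H is orthogonal: H @ H.T = n I

poke_est = M_true + 0.01 * rng.normal(size=(n_meas, n_act))  # naive one-actuator pokes
# Hadamard patterns spread each measurement over all actuators, averaging noise down
```

    Each recovered entry averages noise over all n_act pattern measurements, so its error is roughly 1/sqrt(n_act) of the single-poke error; the paper's multichannel division addresses the separate problem of actuator coupling, which this sketch omits.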

  19. Numerical reconstruction of unknown Robin inclusions inside a heat conductor by a non-iterative method

    NASA Astrophysics Data System (ADS)

    Nakamura, Gen; Wang, Haibing

    2017-05-01

    Consider the problem of reconstructing unknown Robin inclusions inside a heat conductor from boundary measurements. This problem arises from active thermography and is formulated as an inverse boundary value problem for the heat equation. In our previous works, we proposed a sampling-type method for reconstructing the boundary of the Robin inclusion and gave its rigorous mathematical justification. This method is non-iterative and based on the characterization of the solution to the so-called Neumann-to-Dirichlet map gap equation. In this paper, we give a further investigation of the reconstruction method from both the theoretical and numerical points of view. First, we clarify the solvability of the Neumann-to-Dirichlet map gap equation and establish a relation of its solution to the Green function associated with an initial-boundary value problem for the heat equation inside the Robin inclusion. This naturally provides a way of computing this Green function from the Neumann-to-Dirichlet map and explains what the input for the linear sampling method is. Assuming that the Neumann-to-Dirichlet map gap equation has a unique solution, we also show the convergence of our method for noisy measurements. Second, we give the numerical implementation of the reconstruction method for two-dimensional spatial domains. The measurements for our inverse problem are simulated by solving the forward problem via the boundary integral equation method. Numerical results are presented to illustrate the efficiency and stability of the proposed method. Using a finite sequence of transient inputs over a time interval, we also propose a new sampling method over the time interval based on a single measurement, which is more likely to be practical.

  20. Novel optical scanning cryptography using Fresnel telescope imaging.

    PubMed

    Yan, Aimin; Sun, Jianfeng; Hu, Zhijuan; Zhang, Jingtao; Liu, Liren

    2015-07-13

    We propose a new method called modified optical scanning cryptography using Fresnel telescope imaging technique for encryption and decryption of remote objects. An image or object can be optically encrypted on the fly by Fresnel telescope scanning system together with an encryption key. For image decryption, the encrypted signals are received and processed with an optical coherent heterodyne detection system. The proposed method has strong performance through use of secure Fresnel telescope scanning with orthogonal polarized beams and efficient all-optical information processing. The validity of the proposed method is demonstrated by numerical simulations and experimental results.

  1. Analytical study of temperature distribution in a rectangular porous fin considering both insulated and convective tip

    NASA Astrophysics Data System (ADS)

    Deshamukhya, Tuhin; Bhanja, Dipankar; Nath, Sujit; Maji, Ambarish; Choubey, Gautam

    2017-07-01

    The following study is concerned with the determination of the temperature distribution in porous fins under convective and insulated tip conditions. The authors study the effect of the important parameters involved in heat transfer through porous fins, as well as the temperature distribution along the fin length, for both convective and insulated tips. The resulting non-linear equation is solved by the Adomian decomposition method and validated against a numerical scheme, the finite difference method, using a central difference discretization and the Gauss-Seidel iterative method.
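
    A minimal sketch of the finite-difference validation step, assuming the linearized fin equation θ'' = m²θ with a prescribed base temperature and an insulated tip. The paper's porous-fin equation is non-linear; this simplified form is chosen because it has the closed-form solution cosh(m(L−x))/cosh(mL) to check against. All parameter values are illustrative.

```python
import math

def fin_temperature(m=2.0, L=1.0, n=50, tol=1e-10):
    """Solve theta'' = m^2 * theta, theta(0) = 1, theta'(L) = 0 (insulated tip)
    by central differences and Gauss-Seidel iteration."""
    h = L / n
    theta = [1.0] + [0.0] * n            # nodes 0..n; base node fixed at 1
    for _ in range(100000):
        delta = 0.0
        for i in range(1, n):            # interior nodes
            new = (theta[i - 1] + theta[i + 1]) / (2.0 + (m * h) ** 2)
            delta = max(delta, abs(new - theta[i]))
            theta[i] = new
        # insulated tip via ghost node: theta[n+1] = theta[n-1]
        new = 2.0 * theta[n - 1] / (2.0 + (m * h) ** 2)
        delta = max(delta, abs(new - theta[n]))
        theta[n] = new
        if delta < tol:
            break
    return theta

theta = fin_temperature()
exact_tip = 1.0 / math.cosh(2.0)   # analytic tip temperature for m=2, L=1
```

    The central-difference stencil gives θ_i = (θ_{i-1} + θ_{i+1}) / (2 + m²h²), which Gauss-Seidel sweeps relax to convergence; for the non-linear porous-fin equation the same loop would simply re-evaluate the non-linear term at each sweep.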

  2. Atmospheric turbulence profiling with unknown power spectral density

    NASA Astrophysics Data System (ADS)

    Helin, Tapio; Kindermann, Stefan; Lehtonen, Jonatan; Ramlau, Ronny

    2018-04-01

    Adaptive optics (AO) is a technology used in modern ground-based optical telescopes to compensate for the wavefront distortions caused by atmospheric turbulence. One method that allows one to retrieve information about the atmosphere from telescope data is the so-called SLODAR, where the atmospheric turbulence profile is estimated based on correlation data of Shack-Hartmann wavefront measurements. This approach relies on a layered Kolmogorov turbulence model. In this article, we propose a novel extension of the SLODAR concept by including a general non-Kolmogorov turbulence layer close to the ground with an unknown power spectral density. We prove that the joint estimation problem of the turbulence profile above ground simultaneously with the unknown power spectral density at the ground is ill-posed and propose three numerical reconstruction methods. We demonstrate by numerical simulations that our methods lead to substantial improvements in the turbulence profile reconstruction compared to the standard SLODAR-type approach. Also, our methods can accurately locate local perturbations in non-Kolmogorov power spectral densities.

  3. A kinetic flux vector splitting scheme for shallow water equations incorporating variable bottom topography and horizontal temperature gradients.

    PubMed

    Saleem, M Rehan; Ashraf, Waqas; Zia, Saqib; Ali, Ishtiaq; Qamar, Shamsul

    2018-01-01

    This paper is concerned with the derivation of a well-balanced kinetic scheme to approximate a shallow flow model incorporating non-flat bottom topography and horizontal temperature gradients. The considered model equations, also called the Ripa system, are the non-homogeneous shallow water equations including temperature gradients and non-uniform bottom topography. Due to the presence of temperature gradient terms, the steady state at rest is of primary interest from the physical point of view. However, capturing this steady state is a challenging task for the applied numerical methods. The proposed well-balanced kinetic flux vector splitting (KFVS) scheme is non-oscillatory and second order accurate. The second order accuracy of the scheme is obtained by considering a MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. The scheme is applied to solve the model equations in one and two space dimensions. Several numerical case studies are carried out to validate the proposed numerical algorithm. The numerical results obtained are compared with those of the staggered central NT scheme. The results are also in good agreement with recently published results in the literature, verifying the potential, efficiency, accuracy and robustness of the suggested numerical scheme.

  4. A kinetic flux vector splitting scheme for shallow water equations incorporating variable bottom topography and horizontal temperature gradients

    PubMed Central

    2018-01-01

    This paper is concerned with the derivation of a well-balanced kinetic scheme to approximate a shallow flow model incorporating non-flat bottom topography and horizontal temperature gradients. The considered model equations, also called the Ripa system, are the non-homogeneous shallow water equations including temperature gradients and non-uniform bottom topography. Due to the presence of temperature gradient terms, the steady state at rest is of primary interest from the physical point of view. However, capturing this steady state is a challenging task for the applied numerical methods. The proposed well-balanced kinetic flux vector splitting (KFVS) scheme is non-oscillatory and second order accurate. The second order accuracy of the scheme is obtained by considering a MUSCL-type initial reconstruction and a Runge-Kutta time stepping method. The scheme is applied to solve the model equations in one and two space dimensions. Several numerical case studies are carried out to validate the proposed numerical algorithm. The numerical results obtained are compared with those of the staggered central NT scheme. The results are also in good agreement with recently published results in the literature, verifying the potential, efficiency, accuracy and robustness of the suggested numerical scheme. PMID:29851978

  5. Optimum tuned mass damper design using harmony search with comparison of classical methods

    NASA Astrophysics Data System (ADS)

    Nigdeli, Sinan Melih; Bekdaş, Gebrail; Sayin, Baris

    2017-07-01

    As is known, tuned mass dampers (TMDs) are added to mechanical systems in order to obtain good vibration damping. The main aim is to reduce the maximum amplitude at the resonance state. In this study, a metaheuristic algorithm called harmony search is employed for the optimum design of TMDs. As the optimization objective, the transfer function of the acceleration of the system with respect to ground acceleration is minimized. The numerical trials were conducted for four single-degree-of-freedom systems and the results were compared with classical methods. In conclusion, the proposed method is feasible and more effective than the other documented methods.
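
    The harmony search loop itself can be sketched independently of the TMD application. Below, a toy quadratic stands in for the transfer-function peak objective; the memory size, memory-consideration rate (hmcr), pitch-adjustment rate (par), and iteration count are illustrative defaults, not the paper's settings.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=0):
    """Minimal harmony search: a memory of hms candidate 'harmonies' is
    improvised from, pitch-adjusted, and updated when a better one is found."""
    rng = random.Random(seed)
    rand_x = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    memory = sorted((rand_x() for _ in range(hms)), key=f)
    for _ in range(iters):
        x = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                  # take value from memory
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:               # pitch adjustment
                    v += rng.uniform(-1, 1) * 0.01 * (hi - lo)
            else:                                    # random improvisation
                v = rng.uniform(lo, hi)
            x.append(min(max(v, lo), hi))
        if f(x) < f(memory[-1]):                     # replace the worst harmony
            memory[-1] = x
            memory.sort(key=f)
    return memory[0]

# Hypothetical objective standing in for the acceleration transfer-function peak
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
best = harmony_search(f, [(-5, 5), (-5, 5)])
```

    In the TMD setting, `f` would evaluate the peak magnitude of the acceleration transfer function over a frequency grid for candidate TMD stiffness and damping values.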

  6. A uniformly valid approximation algorithm for nonlinear ordinary singular perturbation problems with boundary layer solutions.

    PubMed

    Cengizci, Süleyman; Atay, Mehmet Tarık; Eryılmaz, Aytekin

    2016-01-01

    This paper is concerned with two-point boundary value problems for singularly perturbed nonlinear ordinary differential equations. The case when the solution has only one boundary layer is examined. An efficient method, the so-called Successive Complementary Expansion Method (SCEM), is used to obtain uniformly valid approximations to this kind of solution. Four test problems are considered to check the efficiency and accuracy of the proposed method. The numerical results are found to be in good agreement with exact and existing solutions in the literature. The results confirm that SCEM is superior to other existing methods in terms of easy applicability and effectiveness.

  7. Processing of Antenna-Array Signals on the Basis of the Interference Model Including a Rank-Deficient Correlation Matrix

    NASA Astrophysics Data System (ADS)

    Rodionov, A. A.; Turchin, V. I.

    2017-06-01

    We propose a new method of signal processing in antenna arrays, which is called Maximum-Likelihood Signal Classification. The proposed method is based on a model in which the interference includes a component with a rank-deficient correlation matrix. Using numerical simulation, we show that the proposed method yields a variance of the estimated arrival angle of the plane wave that is close to the Cramer-Rao lower bound, and that it is more efficient than the well-known MUSIC method. It is also shown that the proposed technique can be efficiently used for estimating the time dependence of the useful signal.

  8. Three-dimensional zonal grids about arbitrary shapes by Poisson's equation

    NASA Technical Reports Server (NTRS)

    Sorenson, Reese L.

    1988-01-01

    A method for generating 3-D finite difference grids about or within arbitrary shapes is presented. The 3-D Poisson equations are solved numerically, with values for the inhomogeneous terms found automatically by the algorithm. Those inhomogeneous terms have the effect, near boundaries, of reducing cell skewness and imposing arbitrary cell height. The method allows the region of interest to be divided into zones (blocks), making the method applicable to almost any physical domain. A FORTRAN program called 3DGRAPE has been written to implement the algorithm. Lastly, a method for redistributing grid points along lines normal to boundaries is described.
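
    The elliptic-grid idea can be sketched in 2-D with the inhomogeneous (control) terms set to zero, i.e. Laplace rather than Poisson smoothing of an algebraic starting grid. 3DGRAPE chooses those terms automatically to control skewness and cell height near boundaries; everything below (domain shape, grid sizes, iteration count) is an illustrative simplification.

```python
import math

def laplace_grid(ni=21, nj=11, iters=5000):
    """Elliptic grid sketch: boundary points fixed, interior (x, y) relaxed
    with the Laplace equation in computational space (zero control functions)."""
    x = [[0.0] * nj for _ in range(ni)]
    y = [[0.0] * nj for _ in range(ni)]
    for i in range(ni):
        s = i / (ni - 1)
        yb = 0.2 * math.exp(-30.0 * (s - 0.5) ** 2)   # bumpy lower wall
        for j in range(nj):
            t = j / (nj - 1)
            x[i][j] = s                                # straight vertical grid lines
            y[i][j] = yb + t * (1.0 - yb)              # initial algebraic grid
    for _ in range(iters):                             # Gauss-Seidel smoothing
        for i in range(1, ni - 1):
            for j in range(1, nj - 1):
                x[i][j] = 0.25 * (x[i-1][j] + x[i+1][j] + x[i][j-1] + x[i][j+1])
                y[i][j] = 0.25 * (y[i-1][j] + y[i+1][j] + y[i][j-1] + y[i][j+1])
    return x, y

x, y = laplace_grid()
```

    With zero control functions the interior points relax to smooth averages of their neighbours; adding the Poisson source terms, as 3DGRAPE does, would let the grid cluster points and enforce near-orthogonality at the walls.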

  9. Analysis of pressure-flow data in terms of computer-derived urethral resistance parameters.

    PubMed

    van Mastrigt, R; Kranse, M

    1995-01-01

    The simultaneous measurement of detrusor pressure and flow rate during voiding is at present the only way to measure or grade infravesical obstruction objectively. Numerous methods have been introduced to analyze the resulting data. These methods differ in aim (measurement of urethral resistance and/or diagnosis of obstruction), method (manual versus computerized data processing), theory or model used, and resolution (continuously variable parameters or a limited number of classes, the so-called nomogram). In this paper, some aspects of these fundamental differences are discussed and illustrated. Subsequently, the properties and clinical performance of two computer-based methods for deriving continuous urethral resistance parameters are treated.

  10. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    NASA Astrophysics Data System (ADS)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation are sometimes unstable when forward modeling of seismic waves uses large time steps over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling by applying the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling of strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method overcomes the residual qSV wave of seismic modeling in anisotropic media and maintains the stability of the wavefield propagation for large time steps.
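
    The benefit of symplectic time stepping can be shown on the simplest Hamiltonian system, a harmonic oscillator, by comparing the energy drift of explicit Euler against a Störmer-Verlet (leapfrog) step. This illustrates only the structure-preserving time-integration half of the symplectic FFD idea, not the Fourier finite-difference space discretization; all parameters are illustrative.

```python
def euler_step(q, p, dt, omega2):
    """Explicit Euler: not symplectic, energy grows every step."""
    return q + dt * p, p - dt * omega2 * q

def leapfrog_step(q, p, dt, omega2):
    """Symplectic Stormer-Verlet step: preserves the phase-space structure."""
    p = p - 0.5 * dt * omega2 * q
    q = q + dt * p
    p = p - 0.5 * dt * omega2 * q
    return q, p

def energy(q, p, omega2):
    return 0.5 * p * p + 0.5 * omega2 * q * q

omega2, dt, nsteps = 1.0, 0.1, 5000
qe, pe = 1.0, 0.0   # explicit Euler trajectory
ql, pl = 1.0, 0.0   # leapfrog trajectory
for _ in range(nsteps):
    qe, pe = euler_step(qe, pe, dt, omega2)
    ql, pl = leapfrog_step(ql, pl, dt, omega2)

drift_euler = abs(energy(qe, pe, omega2) - 0.5)
drift_leapfrog = abs(energy(ql, pl, omega2) - 0.5)
# The symplectic scheme keeps the energy error bounded; explicit Euler grows it
```

    Over 5000 steps the Euler energy grows by the factor (1 + dt²ω²) each step, while the leapfrog energy error stays bounded at O(dt²): the same bounded-error behaviour the abstract claims for large time steps over long simulation times.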

  11. Direct Numerical Simulation of Incompressible Pipe Flow Using a B-Spline Spectral Method

    NASA Technical Reports Server (NTRS)

    Loulou, Patrick; Moser, Robert D.; Mansour, Nagi N.; Cantwell, Brian J.

    1997-01-01

    A numerical method based on b-spline polynomials was developed to study incompressible flows in cylindrical geometries. A b-spline method has the advantages of possessing spectral accuracy and the flexibility of standard finite element methods. Using this method it was possible to ensure regularity of the solution near the origin, i.e. smoothness and boundedness. Because b-splines have compact support, it is also possible to remove b-splines near the center to alleviate the constraint placed on the time step by an overly fine grid. Using the natural periodicity in the azimuthal direction and approximating the streamwise direction as periodic, the so-called time-evolving flow, greatly reduced the cost and complexity of the computations. A direct numerical simulation of pipe flow was carried out using the method described above at a Reynolds number of 5600 based on diameter and bulk velocity. General knowledge of pipe flow and the availability of experimental measurements make pipe flow the ideal test case with which to validate the numerical method. Results indicated that high flatness levels of the radial component of velocity in the near wall region are physical; regions of high radial velocity were detected and appear to be related to high speed streaks in the boundary layer. Budgets of Reynolds stress transport equations showed close similarity with those of channel flow. However, contrary to channel flow, the log layer of pipe flow is not homogeneous for the present Reynolds number. A topological method based on a classification of the invariants of the velocity gradient tensor was used. Plotting iso-surfaces of the discriminant of the invariants proved to be a good method for identifying vortical eddies in the flow field.

  12. Applying the Network Simulation Method for testing chaos in a resistively and capacitively shunted Josephson junction model

    NASA Astrophysics Data System (ADS)

    Bellver, Fernando Gimeno; Garratón, Manuel Caravaca; Soto Meca, Antonio; López, Juan Antonio Vera; Guirao, Juan L. G.; Fernández-Martínez, Manuel

    In this paper, we explore the chaotic behavior of resistively and capacitively shunted Josephson junctions via the so-called Network Simulation Method. Such a numerical approach establishes a formal equivalence among physical transport processes and electrical networks, and hence it can be applied to efficiently deal with a wide range of differential systems. The generality underlying that electrical equivalence allows the circuit theory to be applied to several scientific and technological problems. In this work, the Fast Fourier Transform has been applied for chaos detection purposes, and the calculations have been carried out in PSpice, an electrical circuit software package. Overall, this numerical approach quickly solves Josephson differential models. An empirical application regarding the study of the Josephson model completes the paper.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altsybeyev, V.V., E-mail: v.altsybeev@spbu.ru; Ponomarev, V.A.

    The particle tracking method with a so-called gun iteration for modeling the space charge is discussed in this paper. We suggest applying an emission model based on Gauss's law for the calculation of the space-charge-limited current density distribution using the considered method. Based on the presented emission model, we have developed a numerical algorithm for these calculations. This approach allows us to perform accurate and computationally inexpensive numerical simulations for different vacuum sources with curved emitting surfaces, and also in the presence of additional physical effects such as bipolar flows and backscattered electrons. The results of the simulations of the cylindrical diode and of a diode with an elliptical emitter, using axisymmetric coordinates, are presented. The high efficiency and accuracy of the suggested approach are confirmed by the obtained results and by comparisons with analytical solutions.

  14. An improved version of NCOREL: A computer program for 3-D nonlinear supersonic potential flow computations

    NASA Technical Reports Server (NTRS)

    Siclari, Michael J.

    1988-01-01

    A computer code called NCOREL (for Nonconical Relaxation) has been developed to solve for supersonic full potential flows over complex geometries. The method first solves for the conical flow at the apex and then marches downstream in a spherical coordinate system. Implicit relaxation techniques are used to numerically solve the full potential equation at each subsequent crossflow plane. Many improvements have been made to the original code, including more reliable numerics for computing wing-body flows with multiple embedded shocks, inlet flow-through simulation, a wake model and entropy corrections. Line relaxation or approximate factorization schemes are optionally available. Other new features include improved internal grid generation using analytic conformal mappings and an internal geometry package, supported by the simple geometric Harris wave drag input originally developed for panel methods.

  15. A comparison of the lattice discrete particle method to the finite-element method and the K&C material model for simulating the static and dynamic response of concrete.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Jovanca J.; Bishop, Joseph E.

    2013-11-01

    This report summarizes the work performed by the graduate student Jovanca Smith during a summer internship in the summer of 2012 with the aid of mentor Joe Bishop. The projects were a two-part endeavor that focused on the use of the numerical model called the Lattice Discrete Particle Model (LDPM). The LDPM is a discrete meso-scale model currently used at Northwestern University and the ERDC to model the heterogeneous quasi-brittle material, concrete. In the first part of the project, LDPM was compared to the Karagozian and Case Concrete Model (K&C) used in Presto, an explicit dynamics finite-element code developed at Sandia National Laboratories. In order to make this comparison, a series of quasi-static numerical experiments were performed, namely unconfined uniaxial compression tests on four cube specimen sizes, three-point bending notched experiments on three proportional specimen sizes, and six triaxial compression tests on a cylindrical specimen. The second part of this project focused on the application of LDPM to simulate projectile perforation on an ultra-high-performance concrete called CORTUF. This application illustrates the strengths of LDPM over traditional continuum models.

  16. Quasi-symmetric designs and equiangular tight frames

    NASA Astrophysics Data System (ADS)

    Fickus, Matthew; Jasper, John; Mixon, Dustin; Peterson, Jesse

    2015-08-01

    An equiangular tight frame (ETF) is an M×N matrix whose rows are orthogonal with equal norms, whose columns have equal norms, and whose pairwise column inner products all have the same modulus. ETFs arise in numerous applications, including compressed sensing. They also seem to be rare: despite over a decade of active research by the community, only a few construction methods have been discovered. In this article we introduce a new construction of ETFs which uses a particular class of combinatorial designs called quasi-symmetric designs. For ETFs whose entries are contained in {+1,-1}, called real constant amplitude ETFs (RCAETFs), we see that this construction is reversible, giving new quasi-symmetric designs from the known constructions of RCAETFs.
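    The defining ETF properties are easy to check numerically on a small known example. The sketch below uses the classical three-vector "Mercedes-Benz" frame in R^2 (a generic real ETF, not one of the {+1,-1} RCAETFs constructed in the article) and verifies the orthogonal equal-norm rows, unit-norm columns, and the constant off-diagonal modulus, which attains the Welch bound:

```python
import numpy as np

# Three unit-norm columns in R^2 at 120-degree angles: a 2x3 real ETF.
theta = np.pi / 2 + 2.0 * np.pi * np.arange(3) / 3
F = np.vstack([np.cos(theta), np.sin(theta)])

M, N = F.shape
gram = F.T @ F                              # column inner products
row_gram = F @ F.T                          # rows: orthogonal, norm^2 = N/M
off = np.abs(gram[~np.eye(N, dtype=bool)])  # off-diagonal moduli
welch = np.sqrt((N - M) / (M * (N - 1)))    # Welch bound on coherence
```

    For this frame every off-diagonal modulus equals 1/2, which is exactly the Welch bound for M=2, N=3.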

  17. MIB Galerkin method for elliptic interface problems.

    PubMed

    Xia, Kelin; Zhan, Meng; Wei, Guo-Wei

    2014-12-15

    Material interfaces are omnipresent in real-world structures and devices. Mathematical modeling of material interfaces often leads to elliptic partial differential equations (PDEs) with discontinuous coefficients and singular sources, which are commonly called elliptic interface problems. The development of high-order numerical schemes for elliptic interface problems has become a well-defined field in applied and computational mathematics and has attracted much attention in the past decades. Despite significant advances, challenges remain in the construction of high-order schemes for nonsmooth interfaces, i.e., interfaces with geometric singularities such as tips, cusps and sharp edges. The challenge of geometric singularities is amplified when they are associated with low solution regularities, e.g., tip-geometry effects in many fields. The present work introduces a matched interface and boundary (MIB) Galerkin method for solving two-dimensional (2D) elliptic PDEs with complex interfaces, geometric singularities and low solution regularities. Cartesian-grid-based triangular elements are employed to avoid the time-consuming mesh generation procedure. Consequently, the interface cuts through elements. To ensure the continuity of classic basis functions across the interface, two sets of overlapping elements, called MIB elements, are defined near the interface. As a result, differentiation can be computed near the interface as if there were no interface. Interpolation functions are constructed on MIB element spaces to smoothly extend function values across the interface. A set of lowest-order interface jump conditions is enforced on the interface, which in turn determines the interpolation functions. The performance of the proposed MIB Galerkin finite element method is validated by numerical experiments with a wide range of interface geometries, geometric singularities, low-regularity solutions and grid resolutions. Extensive numerical studies confirm the designed second-order convergence of the MIB Galerkin method in the L∞ and L2 errors. Some of the best results are obtained in the present work when the interface is C1 or Lipschitz continuous and the solution is C2 continuous.
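    The second-order convergence claim can be checked with the standard observed-order computation: given errors on two grids whose spacings differ by a factor of two, the observed order is the log of the error ratio over log 2. A minimal sketch, with made-up error values behaving like C·h²:

```python
import math

def observed_order(e_coarse, e_fine, refinement=2.0):
    """Observed convergence order from errors on two grids whose
    spacings differ by the given refinement factor."""
    return math.log(e_coarse / e_fine) / math.log(refinement)

# Errors behaving like C*h^2 on grids h, h/2, h/4 should give order ~2.
errors = [4.0e-3, 1.0e-3, 2.5e-4]
orders = [observed_order(errors[i], errors[i + 1]) for i in range(2)]
```

    In a real convergence study the errors come from successive grid refinements of the numerical solution against an exact or reference solution.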

  18. Vibration analysis of angle-ply laminated composite plates with an embedded piezoceramic layer.

    PubMed

    Lin, Hsien-Yang; Huang, Jin-Hung; Ma, Chien-Ching

    2003-09-01

    An optical full-field technique, called amplitude-fluctuation electronic speckle pattern interferometry (AF-ESPI), is used in this study to investigate the force-induced transverse vibration of an angle-ply laminated composite embedded with a piezoceramic layer (piezolaminated plates). The piezolaminated plates are excited by applying time-harmonic voltages to the embedded piezoceramic layer. Because clear fringe patterns will appear only at resonant frequencies, both the resonant frequencies and mode shapes of the vibrating piezolaminated plates with five different fiber orientation angles are obtained by the proposed AF-ESPI method. A laser Doppler vibrometer (LDV) system that has the advantage of high resolution and broad dynamic range also is applied to measure the frequency response of piezolaminated plates. In addition to the two proposed optical techniques, numerical computations based on a commercial finite element package are presented for comparison with the experimental results. Three different numerical formulations are used to evaluate the vibration characteristics of piezolaminated plates. Good agreements of the measured data by the optical method and the numerical results predicted by the finite element method (FEM) demonstrate that the proposed methodology in this study is a powerful tool for the vibration analysis of piezolaminated plates.

  19. Directional ratio based on parabolic molecules and its application to the analysis of tubular structures

    NASA Astrophysics Data System (ADS)

    Labate, Demetrio; Negi, Pooran; Ozcan, Burcin; Papadakis, Manos

    2015-09-01

    As advances in imaging technologies make more and more data available for biomedical applications, there is an increasing need to develop efficient quantitative algorithms for the analysis and processing of imaging data. In this paper, we introduce an innovative multiscale approach called the Directional Ratio, which is especially effective at distinguishing isotropic from anisotropic structures. This task is especially useful in the analysis of images of neurons, the main units of the nervous system, which consist of a main cell body, called the soma, and many elongated processes, called neurites. We analyze the theoretical properties of our method on idealized models of neurons and develop a numerical implementation of this approach for the analysis of fluorescent images of cultured neurons. We show that this algorithm is very effective for the detection of somas and the extraction of neurites in images of small circuits of neurons.

  20. A Method for Analyzing Commonalities in Clinical Trial Target Populations

    PubMed Central

    He, Zhe; Carini, Simona; Hao, Tianyong; Sim, Ida; Weng, Chunhua

    2014-01-01

    ClinicalTrials.gov presents great opportunities for analyzing commonalities in clinical trial target populations to facilitate knowledge reuse when designing eligibility criteria of future trials or to reveal potential systematic biases in selecting population subgroups for clinical research. Towards this goal, this paper presents a novel data resource for enabling such analyses. Our method includes two parts: (1) parsing and indexing eligibility criteria text; and (2) mining common eligibility features and attributes of common numeric features (e.g., A1c). We designed and built a database called “Commonalities in Target Populations of Clinical Trials” (COMPACT), which stores structured eligibility criteria and trial metadata in a readily computable format. We illustrate its use in an example analytic module called CONECT using COMPACT as the backend. Type 2 diabetes is used as an example to analyze commonalities in the target populations of 4,493 clinical trials on this disease. PMID:25954450
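    The first step of the method, extracting numeric eligibility features from criteria text, can be illustrated with a toy regular expression (a hypothetical pattern for expressions such as "HbA1c <= 10.5 %"; COMPACT's actual parsing pipeline is necessarily more elaborate):

```python
import re

# Hypothetical pattern for numeric eligibility expressions:
# a feature name, a comparison operator, a number, and an optional unit.
NUMERIC_CRITERION = re.compile(
    r"(?P<feature>[A-Za-z][A-Za-z0-9]*)\s*"
    r"(?P<op><=|>=|<|>|=)\s*"
    r"(?P<value>\d+(?:\.\d+)?)\s*"
    r"(?P<unit>%|[A-Za-z/]+)?")

def parse_criteria(text):
    """Return one dict per numeric criterion found in free text."""
    return [m.groupdict() for m in NUMERIC_CRITERION.finditer(text)]

example = "HbA1c >= 7.0 % and HbA1c <= 10.5 %; age >= 18 years"
```

    Each match yields a structured (feature, operator, value, unit) record of the kind a database such as COMPACT could index.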

  1. Numerical studies of the Bethe-Salpeter equation for a two-fermion bound state

    NASA Astrophysics Data System (ADS)

    de Paula, W.; Frederico, T.; Salmè, G.; Viviani, M.

    2018-03-01

    Some recent advances on the solution of the Bethe-Salpeter equation (BSE) for a two-fermion bound system directly in Minkowski space are presented. The calculations are based on the expression of the Bethe-Salpeter amplitude in terms of the so-called Nakanishi integral representation and on the light-front projection (i.e. the integration over the light-front variable k^- = k^0 - k^3). The latter technique allows for the analytically exact treatment of the singularities plaguing the two-fermion BSE in Minkowski space. The good agreement observed between our results and those obtained using other existing numerical methods, based on both Minkowski and Euclidean space techniques, fully corroborates our analytical treatment.

  2. Numerical simulation of photocurrent generation in bilayer organic solar cells: Comparison of master equation and kinetic Monte Carlo approaches

    NASA Astrophysics Data System (ADS)

    Casalegno, Mosè; Bernardi, Andrea; Raos, Guido

    2013-07-01

    Numerical approaches can provide useful information about the microscopic processes underlying photocurrent generation in organic solar cells (OSCs). Among them, the kinetic Monte Carlo (KMC) method is conceptually the simplest, but computationally the most intensive. A less demanding alternative is potentially represented by so-called Master Equation (ME) approaches, where the equations describing particle dynamics rely on the mean-field approximation and their solution is attained numerically, rather than stochastically. The description of charge separation dynamics, the treatment of electrostatic interactions and numerical stability are some of the key issues which have prevented the application of these methods to OSC modelling, despite their successes in the study of charge transport in disordered systems. Here we describe a three-dimensional ME approach to photocurrent generation in OSCs which attempts to deal with these issues. The reliability of the proposed method is tested against reference KMC simulations on bilayer heterojunction solar cells. Comparison of the current-voltage curves shows that the model approximates the exact result well for most devices. The largest deviations in current densities are mainly due to the adoption of the mean-field approximation for electrostatic interactions. The presence of deep traps, in devices characterized by strong energy disorder, may also affect result quality. Comparison of the simulation times reveals that the ME algorithm runs, on average, one order of magnitude faster than KMC.
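    The contrast between the two approaches can be illustrated on a toy hopping problem: a single carrier on three sites, where the ME steady state comes from the null space of the rate matrix, and a Gillespie-type KMC run estimates the same occupancies stochastically (the rates below are illustrative; the paper's three-dimensional OSC model is far richer):

```python
import numpy as np

def steady_state(W):
    """Steady state of dP/dt = W P via the eigenvector of eigenvalue ~0."""
    vals, vecs = np.linalg.eig(W)
    v = np.real(vecs[:, np.argmin(np.abs(vals))])
    return v / v.sum()

def kmc_occupancy(rates, n_hops=50_000, seed=0):
    """Gillespie KMC for one hopping carrier; returns time-averaged
    occupation probabilities from the accumulated dwell times."""
    rng = np.random.default_rng(seed)
    n = rates.shape[0]
    site, t_tot = 0, np.zeros(n)
    for _ in range(n_hops):
        out = rates[site].copy()
        out[site] = 0.0
        r_tot = out.sum()
        t_tot[site] += rng.exponential(1.0 / r_tot)   # dwell time
        site = rng.choice(n, p=out / r_tot)           # next site
    return t_tot / t_tot.sum()

# rates[i, j]: hop rate from site i to site j (illustrative numbers)
rates = np.array([[0.0, 1.0, 0.5],
                  [0.2, 0.0, 1.5],
                  [0.8, 0.3, 0.0]])
W = rates.T - np.diag(rates.sum(axis=1))              # ME generator matrix
```

    The deterministic linear-algebra solve plays the role of the ME approach; the stochastic loop is the KMC reference it is compared against.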

  3. Numerical algorithms for finite element computations on concurrent processors

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.

    1986-01-01

    The work of several graduate students that relates to the NASA grant is briefly summarized. One student has worked on a detailed analysis of the so-called ijk forms of Gaussian elimination and Cholesky factorization on concurrent processors. Another student has worked on the vectorization of the incomplete Cholesky conjugate gradient method on the CYBER 205. Two more students implemented various versions of Gaussian elimination and Cholesky factorization on the FLEX/32.
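    One of the so-called ijk forms can be made concrete: the outer-product ("kij"-ordered) Cholesky factorization below scales column k and then immediately updates the trailing submatrix, a loop ordering whose inner loops are natural candidates for vector or concurrent execution (a textbook sketch, not the students' implementations):

```python
import numpy as np

def cholesky_kij(a):
    """Lower-triangular Cholesky factor via the outer-product ('kij')
    loop ordering: for each k, scale column k, then rank-1 update
    the trailing submatrix."""
    a = np.array(a, dtype=float)
    n = a.shape[0]
    for k in range(n):
        a[k, k] = np.sqrt(a[k, k])
        for i in range(k + 1, n):          # scale column k
            a[i, k] /= a[k, k]
        for j in range(k + 1, n):          # update trailing submatrix
            for i in range(j, n):
                a[i, j] -= a[i, k] * a[j, k]
    return np.tril(a)
```

    The six ijk orderings compute the same factor but access memory in different patterns, which is exactly what matters on vector and concurrent machines.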

  4. A reasoned overview on Boussinesq-type models: the interplay between physics, mathematics and numerics.

    PubMed

    Brocchini, Maurizio

    2013-12-08

    This paper, which is largely the fruit of an invited talk on the topic at the latest International Conference on Coastal Engineering, describes the state of the art of modelling by means of Boussinesq-type models (BTMs). Motivations for using BTMs as well as their fundamentals are illustrated, with special attention to the interplay between the physics to be described, the chosen model equations and the numerics in use. The perspective of the analysis is that of a physicist/engineer rather than of an applied mathematician. The chronological progress of the currently available BTMs from the pioneering models of the late 1960s is given. The main applications of BTMs are illustrated, with reference to specific models and methods. The evolution in time of the numerical methods used to solve BTMs (e.g. finite differences, finite elements, finite volumes) is described, with specific focus on finite volumes. Finally, an overview of the most important BTMs currently available is presented, as well as some indications on improvements required and fields of applications that call for attention.

  5. A reasoned overview on Boussinesq-type models: the interplay between physics, mathematics and numerics

    PubMed Central

    Brocchini, Maurizio

    2013-01-01

    This paper, which is largely the fruit of an invited talk on the topic at the latest International Conference on Coastal Engineering, describes the state of the art of modelling by means of Boussinesq-type models (BTMs). Motivations for using BTMs as well as their fundamentals are illustrated, with special attention to the interplay between the physics to be described, the chosen model equations and the numerics in use. The perspective of the analysis is that of a physicist/engineer rather than of an applied mathematician. The chronological progress of the currently available BTMs from the pioneering models of the late 1960s is given. The main applications of BTMs are illustrated, with reference to specific models and methods. The evolution in time of the numerical methods used to solve BTMs (e.g. finite differences, finite elements, finite volumes) is described, with specific focus on finite volumes. Finally, an overview of the most important BTMs currently available is presented, as well as some indications on improvements required and fields of applications that call for attention. PMID:24353475

  6. Entropy Splitting for High Order Numerical Simulation of Vortex Sound at Low Mach Numbers

    NASA Technical Reports Server (NTRS)

    Mueller, B.; Yee, H. C.; Mansour, Nagi (Technical Monitor)

    2001-01-01

    A method of minimizing numerical errors, and improving nonlinear stability and accuracy, associated with low Mach number computational aeroacoustics (CAA) is proposed. The method consists of two levels. At the governing equation level, we condition the Euler equations in two steps. The first step is to split the inviscid flux derivatives into a conservative and a non-conservative portion that satisfies a so-called generalized energy estimate. This involves the symmetrization of the Euler equations via a transformation of variables that are functions of the physical entropy. Owing to the large disparity of acoustic and stagnation quantities in low Mach number aeroacoustics, the second step is to reformulate the split Euler equations in perturbation form, with the new unknowns being the small changes of the conservative variables with respect to their large stagnation values. At the numerical scheme level, a stable sixth-order central interior scheme with third-order boundary schemes that satisfies the discrete analogue of the integration-by-parts procedure used in the continuous energy estimate (summation-by-parts property) is employed.

  7. Numerical and Experimental study of secondary flows in a rotating two-phase flow: the tea leaf paradox

    NASA Astrophysics Data System (ADS)

    Calderer, Antoni; Neal, Douglas; Prevost, Richard; Mayrhofer, Arno; Lawrenz, Alan; Foss, John; Sotiropoulos, Fotis

    2015-11-01

    Secondary flows in a rotating flow in a cylinder, resulting in the so-called ``tea leaf paradox'', are fundamental for understanding atmospheric pressure systems, developing techniques for separating red blood cells from the plasma, and even separating coagulated trub in the beer brewing process. We seek to gain deeper insights into this phenomenon by integrating numerical simulations and experiments. We employ the curvilinear immersed boundary method (CURVIB) of Calderer et al. (J. Comp. Physics 2014), which is a two-phase flow solver based on the level set method, to simulate rotating free-surface flow in a cylinder partially filled with water, as in the tea leaf paradox flow. We first demonstrate the validity of the numerical model by simulating a cylinder with a rotating base filled with a single fluid, obtaining results in excellent agreement with available experimental data. Then, we present results for the cylinder case with a free surface, investigate the complex formation of secondary flow patterns, and show comparisons with new experimental data for this flow obtained by LaVision. Computational resources were provided by the Minnesota Supercomputing Institute.

  8. An improved adaptive weighting function method for State Estimation in Power Systems with VSC-MTDC

    NASA Astrophysics Data System (ADS)

    Zhao, Kun; Yang, Xiaonan; Lang, Yansheng; Song, Xuri; Wang, Minkun; Luo, Yadi; Wu, Lingyun; Liu, Peng

    2017-04-01

    This paper presents an effective approach for state estimation in power systems that include multi-terminal voltage source converter based high voltage direct current (VSC-MTDC) links, called the improved adaptive weighting function method. The proposed approach is simple in that the VSC-MTDC system is solved first, followed by the AC system, because the new state estimation method only changes the weights and keeps the matrix dimensions unchanged. Accurate and fast convergence for the AC/DC system can be achieved by the adaptive weighting function method. The method also provides technical support for the simulation analysis and accurate regulation of AC/DC systems. Both theoretical analysis and numerical tests verify the practicability, validity and convergence of the new method.
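    The core idea — iterate a weighted-least-squares state estimate while changing only the measurement weights, so the matrix dimensions never change — can be sketched on a linear measurement model (the residual-based weight rule below is hypothetical, not the paper's weighting function):

```python
import numpy as np

def wls_estimate(H, z, w):
    """One weighted-least-squares solve: minimize (z - H x)' W (z - H x)."""
    W = np.diag(w)
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

def adaptive_wls(H, z, sigma, n_iter=5):
    """Re-solve with residual-dependent weights; only the weight vector
    changes between iterations, so the gain-matrix dimensions stay fixed
    (an illustrative weight rule that downweights large residuals)."""
    w = 1.0 / sigma**2
    x = wls_estimate(H, z, w)
    for _ in range(n_iter):
        r = (z - H @ x) / sigma
        w = (1.0 / sigma**2) / (1.0 + r**2)
        x = wls_estimate(H, z, w)
    return x
```

    On consistent (noise-free) measurements the iteration leaves the exact state unchanged; on corrupted measurements the adaptive weights suppress the bad data.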

  9. Determination of the transmission coefficients for quantum structures using FDTD method.

    PubMed

    Peng, Yangyang; Wang, Xiaoying; Sui, Wenquan

    2011-12-01

    The purpose of this work is to develop a simple method to incorporate quantum effects into traditional finite-difference time-domain (FDTD) simulators, which would make it possible to co-simulate systems that include both quantum structures and traditional components. In this paper, the tunneling transmission coefficient is calculated by solving the time-domain Schrödinger equation with a developed FDTD technique, called the FDTD-S method. To validate the feasibility of the method, a simple resonant tunneling diode (RTD) structure has been simulated using the proposed method. The good agreement between the numerical and analytical results proves its accuracy. The effectiveness and accuracy of this approach make it a potential method for the analysis and design of hybrid systems that include quantum structures and traditional components.
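    The general technique of marching the time-domain Schrödinger equation with an FDTD-style leapfrog over the real and imaginary parts of the wavefunction can be sketched in one dimension (a generic Visscher-type scheme with illustrative parameters, not the paper's FDTD-S formulation): send a wavepacket at a square barrier and sum the probability beyond it as a crude transmitted fraction:

```python
import numpy as np

def schrodinger_fdtd(nx=800, dx=0.1, dt=0.002, steps=10000,
                     k0=2.0, x0=-20.0, width=2.0, v0=1.0, a=1.0):
    """Leapfrog FDTD for the 1-D time-domain Schrodinger equation
    (hbar = m = 1), alternating updates of the real and imaginary parts."""
    x = (np.arange(nx) - nx // 2) * dx
    V = np.where(np.abs(x) < a / 2, v0, 0.0)          # square barrier
    psi0 = np.exp(-(x - x0)**2 / (2 * width**2)) * np.exp(1j * k0 * x)
    psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)     # unit norm
    R, I = psi0.real.copy(), psi0.imag.copy()

    def H(f):                                          # H f = -f''/2 + V f
        lap = np.zeros_like(f)
        lap[1:-1] = (f[2:] - 2 * f[1:-1] + f[:-2]) / dx**2
        return -0.5 * lap + V * f

    for _ in range(steps):                             # staggered leapfrog
        R += dt * H(I)
        I -= dt * H(R)
    prob = (R**2 + I**2) * dx
    return x, prob, prob[x > a / 2].sum()              # transmitted fraction
```

    Stability requires dt times the largest eigenvalue of H to stay below 2, which the parameters above satisfy with a wide margin.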

  10. SENS-5D trajectory and wind-sensitivity calculations for unguided rockets

    NASA Technical Reports Server (NTRS)

    Singh, R. P.; Huang, L. C. P.; Cook, R. A.

    1975-01-01

    A computational procedure is described which numerically integrates the equations of motion of an unguided rocket. Three translational and two angular (roll discarded) degrees of freedom are integrated through the final burnout; from burnout through impact, only the three translational motions are considered. Input to the routine is: initial time, altitude and velocity, vehicle characteristics, and other defined options. The input format has a wide range of flexibility for special calculations. Output is geared mainly to the wind-weighting procedure, and includes a summary of the trajectory at burnout, apogee and impact, a summary of spent-stage trajectories, detailed position and vehicle data, unit-wind effects for head, tail and cross winds, Coriolis deflections, the range derivative, and the sensitivity curves (the so-called F(Z) and DF(Z) curves). The numerical integration procedure is a fourth-order, modified Adams-Bashforth predictor-corrector method. This method is supplemented by a fourth-order Runge-Kutta method to start the integration at t=0 and whenever error criteria demand a change in step size.
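    The integrator described above is a classical combination: a fourth-order Adams-Bashforth predictor with an Adams-Moulton corrector, started by Runge-Kutta. A minimal single-equation sketch of that scheme (not the SENS-5D code itself):

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step (used for startup)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def abm4(f, t0, y0, h, n):
    """Fourth-order Adams-Bashforth predictor / Adams-Moulton corrector
    (PECE), with the first three steps supplied by RK4."""
    t = t0 + h * np.arange(n + 1)
    y = [y0]
    for i in range(3):                       # RK4 startup
        y.append(rk4_step(f, t[i], y[i], h))
    fs = [f(t[i], y[i]) for i in range(4)]
    for i in range(3, n):
        # Predict (AB4), evaluate, correct (AM4), evaluate again.
        yp = y[i] + h / 24 * (55 * fs[i] - 59 * fs[i-1]
                              + 37 * fs[i-2] - 9 * fs[i-3])
        fp = f(t[i + 1], yp)
        yc = y[i] + h / 24 * (9 * fp + 19 * fs[i] - 5 * fs[i-1] + fs[i-2])
        y.append(yc)
        fs.append(f(t[i + 1], yc))
    return t, np.array(y)
```

    On y' = -y the scheme reproduces exp(-t) to fourth-order accuracy, which is easy to confirm by halving h and watching the error drop by sixteen.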

  11. Numerical assessment of pulsating water jet in the conical diffusers

    NASA Astrophysics Data System (ADS)

    Tanasa, Constantin; Ciocan, Tiberiu; Muntean, Sebastian

    2017-11-01

    The hydraulic fluctuations associated with partial load operating conditions of Francis turbines are often periodic and characterized by the presence of a vortex rope. Two types of pressure fluctuations associated with the draft tube surge are identified in the literature. The first is an asynchronous (rotating) pressure fluctuation due to the precession of the helical vortex around the axis of the draft tube. The second type of fluctuation is a synchronous (plunging) fluctuation. The plunging fluctuations correspond to flow field oscillations in the whole hydraulic passage, and are generally propagated throughout the hydraulic system. The paper introduces a new control method, which consists in injecting a pulsating axial water jet along the draft tube axis. The main aim of this control method is to mitigate the vortex rope effects by targeting the vortex sheet and the corresponding plunging component. This paper presents our 3D numerical investigations with and without the pulsating axial water jet control method in order to evaluate the concept.

  12. Methods of sequential estimation for determining initial data in numerical weather prediction. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cohn, S. E.

    1982-01-01

    Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations, in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method, and the optimal combined data assimilation-initialization method is a modified version of the KB filter.
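    The sequential predict/update cycle of the (discrete-time) Kalman filter can be sketched in a few lines; the constant-velocity tracking numbers below are illustrative, not a meteorological model:

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of the discrete-time Kalman filter."""
    # Predict state and covariance forward through the dynamics.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with the new measurement z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Constant-velocity tracking demo: observe position only.
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-6 * np.eye(2)
R = np.array([[0.01]])
x, P = np.zeros(2), 10.0 * np.eye(2)
for k in range(1, 51):
    z = np.array([float(k)])                       # noise-free measurements
    x, P = kalman_step(x, P, z, A, H, Q, R)
```

    The filter infers the unobserved velocity from the sequence of position measurements, which is the same mechanism that lets assimilation recover poorly observed atmospheric modes.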

  13. Fast calculation of low altitude disturbing gravity for ballistics

    NASA Astrophysics Data System (ADS)

    Wang, Jianqiang; Wang, Fanghao; Tian, Shasha

    2018-03-01

    Fast calculation of disturbing gravity is a key technology in ballistics, and spherical cap harmonic (SCH) theory can be used to solve this problem. Using adjusted spherical cap harmonic (ASCH) methods, the spherical cap coordinates are projected into global coordinates, and the non-integer associated Legendre functions (ALF) of SCH are replaced by the integer ALF of spherical harmonics (SH). This new method is called virtual spherical harmonics (VSH), and numerical experiments were done to test its effectiveness. Values from an Earth gravity model were taken as the theoretical observations, and a model of the regional gravity field was constructed by the new method. Simulation results show that the approximation errors are less than 5 mGal in the low-altitude range of the central region. In addition, numerical experiments were conducted to compare the calculation speeds of the SH, SCH and VSH models, and the results show that the calculation speed of the VSH model is raised by one order of magnitude over a small region.

  14. Modeling Poroelastic Wave Propagation in a Real 2-D Complex Geological Structure Obtained via Self-Organizing Maps

    NASA Astrophysics Data System (ADS)

    Itzá Balam, Reymundo; Iturrarán-Viveros, Ursula; Parra, Jorge O.

    2018-03-01

    Two main stages of seismic modeling are geological model building and numerical computation of seismic response for the model. The quality of the computed seismic response is partly related to the type of model that is built. Therefore, the model building approaches become as important as seismic forward numerical methods. For this purpose, three petrophysical facies (sands, shales and limestones) are extracted from reflection seismic data and some seismic attributes via the clustering method called Self-Organizing Maps (SOM), which, in this context, serves as a geological model building tool. This model with all its properties is the input to the Optimal Implicit Staggered Finite Difference (OISFD) algorithm to create synthetic seismograms for poroelastic, poroacoustic and elastic media. The results show a good agreement between observed and 2-D synthetic seismograms. This demonstrates that the SOM classification method enables us to extract facies from seismic data and allows us to integrate the lithology at the borehole scale with the 2-D seismic data.
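    A minimal SOM of the kind used here for clustering can be written in a few lines (a generic rectangular-grid SOM with illustrative learning-rate and neighborhood schedules, not the authors' exact facies-classification configuration):

```python
import numpy as np

def train_som(data, grid=(5, 5), iters=1500, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal rectangular-grid self-organizing map: repeatedly pick a
    sample, find the best-matching unit (BMU), and pull the BMU and its
    grid neighbors toward the sample with decaying rate and radius."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        d = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        frac = t / iters
        lr = lr0 * (1.0 - frac)                       # decaying rate
        sigma = sigma0 * (1.0 - frac) + 0.5           # shrinking radius
        g = np.exp(-np.sum((coords - np.array(bmu))**2, axis=-1)
                   / (2.0 * sigma**2))
        weights += lr * g[..., None] * (x - weights)
    return weights
```

    After training, assigning each seismic-attribute vector to its best-matching unit partitions the data into clusters, which is how facies labels are obtained in the workflow above.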

  15. COHERENT NETWORK ANALYSIS FOR CONTINUOUS GRAVITATIONAL WAVE SIGNALS IN A PULSAR TIMING ARRAY: PULSAR PHASES AS EXTRINSIC PARAMETERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yan; Mohanty, Soumya D.; Jenet, Fredrick A., E-mail: ywang12@hust.edu.cn

    2015-12-20

    Supermassive black hole binaries are one of the primary targets of gravitational wave (GW) searches using pulsar timing arrays (PTAs). GW signals from such systems are well represented by parameterized models, allowing the standard Generalized Likelihood Ratio Test (GLRT) to be used for their detection and estimation. However, there is a dichotomy in how the GLRT can be implemented for PTAs: there are two possible ways in which one can split the set of signal parameters for semi-analytical and numerical extremization. The straightforward extension of the method used for continuous signals in ground-based GW searches, where the so-called pulsar phase parameters are maximized numerically, was addressed in an earlier paper. In this paper, we report the first study of the performance of the second approach where the pulsar phases are maximized semi-analytically. This approach is scalable since the number of parameters left over for numerical optimization does not depend on the size of the PTA. Our results show that for the same array size (9 pulsars), the new method performs somewhat worse in parameter estimation, but not in detection, than the previous method where the pulsar phases were maximized numerically. The origin of the performance discrepancy is likely to be in the ill-posedness that is intrinsic to any network analysis method. However, the scalability of the new method allows the ill-posedness to be mitigated by simply adding more pulsars to the array. This is shown explicitly by taking a larger array of pulsars.

  16. Numerical method for computing Maass cusp forms on triply punctured two-sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, K. T.; Kamari, H. M.; Zainuddin, H.

    2014-03-05

    A quantum mechanical system on a punctured surface modeled on hyperbolic space has always been an important subject of research in mathematics and physics. The corresponding quantum system is governed by the Schrödinger equation, whose solutions are the Maass waveforms. The spectra associated with these Maass waveforms are known to contain both continuous and discrete eigenvalues. The discrete eigenfunctions are usually called Maass Cusp Forms (MCF), and their discrete eigenvalues are not known analytically. We introduce a numerical method based on the Hejhal and Then algorithm, using GridMathematica, for computing MCF on a punctured surface with three cusps, namely the triply punctured two-sphere. We also report on a pullback algorithm for the punctured surface and a point-locater algorithm to facilitate the complete pullback, which are essential parts of the main algorithm.

  17. A FEniCS-based programming framework for modeling turbulent flow by the Reynolds-averaged Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Mortensen, Mikael; Langtangen, Hans Petter; Wells, Garth N.

    2011-09-01

    Finding an appropriate turbulence model for a given flow case usually calls for extensive experimentation with both models and numerical solution methods. This work presents the design and implementation of a flexible, programmable software framework for assisting with numerical experiments in computational turbulence. The framework targets Reynolds-averaged Navier-Stokes models, discretized by finite element methods. The novel implementation makes use of Python and the FEniCS package, the combination of which leads to compact and reusable code, where model- and solver-specific code resemble closely the mathematical formulation of equations and algorithms. The presented ideas and programming techniques are also applicable to other fields that involve systems of nonlinear partial differential equations. We demonstrate the framework in two applications and investigate the impact of various linearizations on the convergence properties of nonlinear solvers for a Reynolds-averaged Navier-Stokes model.

  18. Analysis and Assessment of Environmental Load of Vending Machines by a LCA Method, and Eco-Improvement Effect

    NASA Astrophysics Data System (ADS)

    Kimura, Yukio; Sadamichi, Yucho; Maruyama, Naoki; Kato, Seizo

    These days the environmental impact due to the diffusion of vending machines (VM) has been widely discussed. This paper describes the numerical evaluation of this environmental impact by using the LCA (Life Cycle Assessment) scheme and then proposes an eco-improvement strategy toward environmentally conscious products (ECP). A new objective and universal consolidated method for LCA evaluation, the so-called LCA-NETS (Numerical Eco-load Standardization) developed by the authors, is applied to the present issue. As a result, the environmental loads at the five-year operation and material procurement stages are found to dominate the others over the life cycle. Further eco-improvement is realized by following the order of the LCA-NETS magnitudes, namely energy saving, materials reduction, parts re-use, and replacement with low environmental load materials. Above all, parts re-use is especially recommended for a significant reduction of the environmental loads toward ECP.

  19. The CALL-SLA Interface: Insights from a Second-Order Synthesis

    ERIC Educational Resources Information Center

    Plonsky, Luke; Ziegler, Nicole

    2016-01-01

    The relationship between computer-assisted language learning (CALL) and second language acquisition (SLA) has been studied both extensively, covering numerous subdomains, and intensively, resulting in hundreds of primary studies. It is therefore no surprise that CALL researchers, as in other areas of applied linguistics, have turned in recent…

  20. Semiclassical evaluation of quantum fidelity

    NASA Astrophysics Data System (ADS)

    Vaníček, Jiří; Heller, Eric J.

    2003-11-01

    We present a numerically feasible semiclassical (SC) method to evaluate quantum fidelity decay (Loschmidt echo) in a classically chaotic system. It was thought that such evaluation would be intractable, but instead we show that a uniform SC expression not only is tractable but also gives remarkably accurate numerical results for the standard map in both the Fermi-golden-rule and Lyapunov regimes. Because it allows Monte Carlo evaluation, the uniform expression is accurate at times when there are 10^70 semiclassical contributions. Remarkably, it also explicitly contains the “building blocks” of analytical theories of recent literature, and thus permits a direct test of the approximations made by other authors in these regimes, rather than an a posteriori comparison with numerical results. We explain in more detail the extended validity of the classical perturbation approximation and show that within this approximation, the so-called “diagonal approximation” is automatic and does not require ensemble averaging.
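
    For readers who want to see what the echo itself looks like, the sketch below computes a numerically exact (not semiclassical) Loschmidt echo M(t) = |<psi| U'^(-t) U^t |psi>|^2 for the quantum standard map by direct unitary evolution; the grid size, kick strength, and perturbation are illustrative values chosen here, not parameters from the paper:

```python
import numpy as np

def loschmidt_echo(N=256, K=10.0, eps=0.01, steps=30):
    """Exact Loschmidt echo for the quantum standard (kicked-rotor) map:
    evolve the same initial state under kick strengths K and K + eps and
    record the squared overlap after each period."""
    hbar = 2.0 * np.pi / N                      # effective Planck constant
    theta = 2.0 * np.pi * np.arange(N) / N      # position grid
    m = np.fft.fftfreq(N, d=1.0 / N)            # integer momentum numbers
    free = np.exp(-1j * hbar * m**2 / 2.0)      # free propagator (momentum basis)
    def step(s, KK):
        s = np.exp(-1j * KK * np.cos(theta) / hbar) * s   # kick (position basis)
        return np.fft.ifft(free * np.fft.fft(s))          # free rotation
    psi = np.full(N, 1.0 / np.sqrt(N), dtype=complex)     # momentum eigenstate
    phi = psi.copy()
    M = [1.0]
    for _ in range(steps):
        psi = step(psi, K)
        phi = step(phi, K + eps)
        M.append(abs(np.vdot(phi, psi)) ** 2)
    return np.array(M)

M = loschmidt_echo()
```

    The SC method of the abstract approximates precisely this decaying overlap, but remains tractable in regimes where direct summation over semiclassical contributions would not be.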

  1. Investigation of heat transfer and material flow of P-FSSW: Experimental and numerical study

    NASA Astrophysics Data System (ADS)

    Rezazadeh, Niki; Mosavizadeh, Seyed Mostafa; Azizi, Hamed

    2018-02-01

    Friction stir spot welding (FSSW) is a joining process that utilizes a rotating tool consisting of a shoulder and/or a probe. In this study, a novel FSSW variant called protrusion friction stir spot welding (P-FSSW) is presented, and the effect of the shoulder diameter on weld quality, including the temperature field, velocity contour, material flow, bonding length, and depth of the stirred area, is studied numerically and experimentally. The results show that the numerical findings are in good agreement with the experimental measurements: the model predicts the temperature distribution, velocity contour, depth of the stirred area, and bonding length well. As the shoulder diameter increases, the temperature rises, which leads to increases in the stirred-area depth, bonding length, and material velocities, and therefore to a weld of higher quality.

  2. Using an integrated information system to reduce interruptions and the number of non-relevant contacts in the inpatient pharmacy at tertiary hospital.

    PubMed

    Binobaid, Saleh; Almeziny, Mohammed; Fan, Ip-Shing

    2017-07-01

    Patient care is provided by a multidisciplinary team of healthcare professionals with the aim of high-quality and safe care. Accordingly, the team must work synergistically and communicate efficiently. In many hospitals, nursing and pharmacy communication relies mainly on telephone calls. Numerous studies have reported telephone calls as a source of interruption for both pharmacy and nursing operations; the workload therefore increases and the chance of errors rises. This report describes the implementation of an integrated information system that can reduce telephone calls by providing real-time tracking capabilities and sorting prescriptions by urgency, thus significantly improving the traceability of all prescriptions inside the pharmacy. The research design is a quasi-experiment with pre-post testing based on the continuous improvement approach; the improvement project was performed using a six-step method. A survey was conducted in Prince Sultan Military Medical City (PSMMC) to measure the volume and types of telephone calls before and after implementation in order to evaluate the impact of the new system. Before the system implementation, during the two-week measurement period, all pharmacies received 4466 calls, the majority of which were follow-up calls. Following the rollout of the integrated system, there was a significant reduction (p < 0.001) in the volume of telephone calls, to 2630 calls; in addition, the nature of the calls shifted toward professional inquiries (p < 0.001). As a result, avoidable interruptions and workload were decreased.

  3. Development of Light-Activated CRISPR Using Guide RNAs with Photocleavable Protectors.

    PubMed

    Jain, Piyush K; Ramanan, Vyas; Schepers, Arnout G; Dalvie, Nisha S; Panda, Apekshya; Fleming, Heather E; Bhatia, Sangeeta N

    2016-09-26

    The ability to remotely trigger CRISPR/Cas9 activity would enable new strategies to study cellular events with greater precision and complexity. In this work, we have developed a method to photocage the activity of the guide RNA called "CRISPR-plus" (CRISPR-precise light-mediated unveiling of sgRNAs). The photoactivation capability of our CRISPR-plus method is compatible with the simultaneous targeting of multiple DNA sequences and supports numerous modifications that can enable guide RNA labeling for use in imaging and mechanistic investigations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Multi-domain boundary element method for axi-symmetric layered linear acoustic systems

    NASA Astrophysics Data System (ADS)

    Reiter, Paul; Ziegelwanger, Harald

    2017-12-01

    Homogeneous porous materials like rock wool or synthetic foam are the main tool for acoustic absorption. The conventional absorbing structure for sound-proofing consists of one or multiple absorbers placed in front of a rigid wall, with or without air-gaps in between. Various models exist to describe these so-called multi-layered acoustic systems mathematically for incoming plane waves. However, there is no efficient method to calculate the sound field in a half space above a multi-layered acoustic system for an incoming spherical wave. In this work, an axi-symmetric multi-domain boundary element method (BEM) for absorbing multi-layered acoustic systems and incoming spherical waves is introduced. In the proposed BEM formulation, a complex wave number is used to model absorbing materials as a fluid, and a coordinate transformation is introduced which simplifies the singular integrals of the conventional BEM to non-singular radial and angular integrals. The radial and angular parts are integrated analytically and numerically, respectively. The output of the method can be interpreted as a numerical half-space Green's function for grounds consisting of layered materials.

  5. Local Orthogonal Cutting Method for Computing Medial Curves and Its Biomedical Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiao, Xiangmin; Einstein, Daniel R.; Dyedov, Volodymyr

    2010-03-24

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques including eigenvalue analysis, weighted least squares approximations, and numerical minimization, resulting in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods.

  6. Application of Methods of Numerical Analysis to Physical and Engineering Data.

    DTIC Science & Technology

    1980-10-15

    directed algorithm would seem to be called for. However, 1(0) is itself a random process, making its gradient too unreliable for such a sensitive algorithm...radiation energy on the detector. Active laser systems, on the other hand, have now created the possibility for extremely narrow path band systems...emitted by the earth and its atmosphere. The broad spectral range was selected so that the field of view of the detector could be narrowed to obtain

  7. Algorithms and software for solving finite element equations on serial and parallel architectures

    NASA Technical Reports Server (NTRS)

    George, Alan

    1989-01-01

    Over the past 15 years numerous new techniques have been developed for solving systems of equations and eigenvalue problems arising in finite element computations. A package called SPARSPAK has been developed by the author and his co-workers which exploits these new methods. The broad objective of this research project is to incorporate some of this software in the Computational Structural Mechanics (CSM) testbed, and to extend the techniques for use on multiprocessor architectures.

  8. Comparison of vibrational conductivity and radiative energy transfer methods

    NASA Astrophysics Data System (ADS)

    Le Bot, A.

    2005-05-01

    This paper is concerned with the comparison of two methods well suited for the prediction of the wideband response of built-up structures subjected to high-frequency vibrational excitation. The first method is sometimes called the vibrational conductivity method and the second one is rather known as the radiosity method in the field of acoustics, or the radiative energy transfer method. Both are based on quite similar physical assumptions, i.e., uncorrelated sources, mean response, and high-frequency excitation, and both are based on analogies with equations encountered in the field of heat transfer. However, these models do not lead to similar results. This paper compares the two methods. Some numerical simulations on a pair of plates joined along one edge are provided to illustrate the discussion.

  9. Cavity radiation model for solar central receivers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipps, F.W.

    1981-01-01

    The Energy Laboratory of the University of Houston has developed a computer simulation program called CREAM (Cavity Radiation Exchange Analysis Model) for application to the solar central receiver system. The zone generating capability of CREAM has been used in several solar re-powering studies. CREAM contains a geometric configuration factor generator based on Nusselt's method. A formulation of Nusselt's method provides support for the FORTRAN subroutine NUSSELT. Numerical results from NUSSELT are compared to analytic values and to values from Sparrow's method. Sparrow's method is based on a double contour integral and its reduction to a single integral, which is approximated by Gaussian methods. Nusselt's method is adequate for the intended engineering applications, but Sparrow's method is found to be an order of magnitude more efficient in many situations.

  10. Acoustic imaging of a duct spinning mode by the use of an in-duct circular microphone array.

    PubMed

    Wei, Qingkai; Huang, Xun; Peers, Edward

    2013-06-01

    An imaging method for acoustic spinning modes propagating within a circular duct, using only surface pressure information, is introduced in this paper. The proposed method is developed theoretically and demonstrated in a numerical simulation case. At present, measurements within a duct must be conducted with an in-duct microphone array, which cannot provide the complete acoustic solution across the test section. The proposed method estimates this immeasurable information by forming a so-called observer. The fundamental idea behind the method was originally developed in control theory for ordinary differential equations; spinning mode propagation, however, is formulated in partial differential equations. A finite difference technique is used to reduce the associated partial differential equations to a classical form in control, after which the observer method can be applied straightforwardly. The algorithm is recursive and thus could be operated in real time. A numerical simulation for a straight circular duct is conducted. The acoustic solutions on the test section can be reconstructed with good agreement with analytical solutions. The results suggest the potential and applications of the proposed method.
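
    The observer idea borrowed from control theory can be shown in miniature. The sketch below uses a hypothetical three-state discrete-time system (not the duct model of the paper) and reconstructs the unmeasured state from a scalar output with a Luenberger observer; the deadbeat gain chosen here drives the estimation error exactly to zero in three steps:

```python
import numpy as np

# Toy discrete-time system in observer canonical form: x+ = A x, y = C x,
# with characteristic polynomial z^3 - c1 z^2 - c2 z - c3 (stable roots).
c1, c2, c3 = 1.2, -0.5, 0.06
A = np.array([[c1, 1.0, 0.0],
              [c2, 0.0, 1.0],
              [c3, 0.0, 0.0]])
C = np.array([[1.0, 0.0, 0.0]])
L = np.array([[c1], [c2], [c3]])       # deadbeat gain: A - L C is nilpotent

x = np.array([[1.0], [-2.0], [0.5]])   # true (unmeasured) state
xh = np.zeros((3, 1))                  # observer state, wrong initial guess
errs = []
for _ in range(5):
    y = C @ x                           # only the scalar output is measured
    xh = A @ xh + L @ (y - C @ xh)      # observer: plant copy + output injection
    x = A @ x
    errs.append(float(np.linalg.norm(x - xh)))
```

    The estimation error obeys e+ = (A - LC)e, so the gain L entirely controls how fast the unmeasured state is recovered; the paper applies the same construction to the finite-difference discretization of the spinning-mode equations.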

  11. Vibration band gaps for elastic metamaterial rods using wave finite element method

    NASA Astrophysics Data System (ADS)

    Nobrega, E. D.; Gautier, F.; Pelat, A.; Dos Santos, J. M. C.

    2016-10-01

    Band gaps in elastic metamaterial rods with spatially periodic distribution and periodically attached local resonators are investigated. New techniques for analyzing metamaterial systems combine analytical or numerical methods with wave-propagation theory. One of them, called here the wave spectral element method (WSEM), combines the spectral element method (SEM) with Floquet-Bloch's theorem. A more recent methodology called the wave finite element method (WFEM), developed to calculate the dynamic behavior of periodic acoustic and structural systems, takes a similar approach with SEM replaced by the conventional finite element method (FEM). In this paper, WFEM is used to calculate band gaps in elastic metamaterial rods with spatially periodic distribution and periodically attached multi-degree-of-freedom (M-DOF) local resonators. Simulated examples with band gaps generated by Bragg scattering and local resonators are calculated by WFEM and verified against WSEM, which is used as a reference method. Results are presented in the form of the attenuation constant, vibration transmittance, and frequency response function (FRF). For all cases, WFEM and WSEM results agree, provided that the number of elements used in WFEM is sufficient for convergence. An experimental test was conducted on a real elastic metamaterial rod, manufactured from plastic in a 3D printer, without a local-resonance effect. The experimental results for the metamaterial rod with band gaps generated by Bragg scattering are compared with the simulated ones. Both numerical methods (WSEM and WFEM) localize the band gap position and width very close to the experimental results. A hybrid approach combining WFEM with the commercial finite element software ANSYS is proposed to model complex metamaterial systems. Two examples illustrating its efficiency and accuracy in modeling an elastic metamaterial rod unit cell, using a 1D simple rod element and a 3D solid element, are demonstrated, and the results show good agreement with the experimental data.
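
    The attenuation-constant picture for a periodic rod can be reproduced with a classical 2x2 transfer-matrix calculation, the analytical counterpart of the WFEM computation described above. In the sketch below the material values are hypothetical, chosen only to give a strong impedance contrast; a Bragg gap exists wherever the half-trace of the unit-cell transfer matrix exceeds one in magnitude:

```python
import numpy as np

def half_trace(omega, segments):
    """Half-trace of the unit-cell transfer matrix for axial rod waves,
    acting on the state [displacement, axial force].
    |half_trace| > 1  <=>  evanescent Bloch wave (band gap)."""
    T = np.eye(2)
    for rho, E, A, Lseg in segments:
        k = omega / np.sqrt(E / rho)            # axial wavenumber in segment
        Ts = np.array([[np.cos(k * Lseg), np.sin(k * Lseg) / (E * A * k)],
                       [-E * A * k * np.sin(k * Lseg), np.cos(k * Lseg)]])
        T = Ts @ T
    return 0.5 * np.trace(T)

# Hypothetical steel/polymer bi-material cell (kg/m^3, Pa, m^2, m).
cell = [(7800.0, 210e9, 1e-4, 0.05), (1200.0, 4e9, 1e-4, 0.05)]
freqs = np.linspace(100.0, 3e5, 2000)           # angular frequencies, rad/s
ht = np.array([half_trace(w, cell) for w in freqs])
in_gap = np.abs(ht) > 1.0                       # frequencies inside Bragg gaps
```

    WFEM generalizes this idea: the FE dynamic stiffness of the unit cell replaces the closed-form segment matrices, so arbitrary cross-sections and attached resonators can be handled.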

  12. Vocal behavior and risk assessment in wild chimpanzees

    NASA Astrophysics Data System (ADS)

    Wilson, Michael L.; Hauser, Marc D.; Wrangham, Richard W.

    2005-09-01

    If, as theory predicts, animal communication is designed to manipulate the behavior of others to personal advantage, then there will be certain contexts in which vocal behavior is profitable and other cases where silence is favored. Studies conducted in Kibale National Park, Uganda, investigated whether chimpanzees modified their vocal behavior according to different levels of risk from intergroup aggression, including relative numerical strength and location in range. Playback experiments tested numerical assessment, and observations of chimpanzees throughout their range tested whether they called less frequently to avoid detection in border areas. Chimpanzees were more likely to call to playback of a stranger's call if they greatly outnumbered the stranger. Chimpanzees tended to reduce calling in border areas, but not in all locations. Chimpanzees most consistently remained silent when raiding crops: they almost never gave loud pant-hoot calls when raiding banana plantations outside the park, even though they normally give many pant-hoots on arrival at high-quality food resources. These findings indicate that chimpanzees have the capacity to reduce loud call production when appropriate, but that additional factors, such as advertising territory ownership, contribute to the costs and benefits of calling in border zones.

  13. Precise and Fast Computation of the Gravitational Field of a General Finite Body and Its Application to the Gravitational Study of Asteroid Eros

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fukushima, Toshio, E-mail: Toshio.Fukushima@nao.ac.jp

    In order to obtain the gravitational field of a general finite body inside its Brillouin sphere, we developed a new method to compute the field accurately. First, the body is assumed to consist of some layers in a certain spherical polar coordinate system and the volume mass density of each layer is expanded as a Maclaurin series of the radial coordinate. Second, the line integral with respect to the radial coordinate is analytically evaluated in a closed form. Third, the resulting surface integrals are numerically integrated by the split quadrature method using the double exponential rule. Finally, the associated gravitational acceleration vector is obtained by numerically differentiating the numerically integrated potential. Numerical experiments confirmed that the new method is capable of computing the gravitational field independently of the location of the evaluation point, namely whether inside, on the surface of, or outside the body. It can also provide sufficiently precise field values, say of 14–15 digits for the potential and of 9–10 digits for the acceleration. Furthermore, its computational efficiency is better than that of the polyhedron approximation. This is because the computational error of the new method decreases much faster than that of the polyhedron models when the number of required transcendental function calls increases. As an application, we obtained the gravitational field of 433 Eros from its shape model expressed as the 24 × 24 spherical harmonic expansion by assuming homogeneity of the object.
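
    The "double exponential rule" mentioned above is the tanh-sinh quadrature scheme. A minimal sketch for an integral over [-1, 1] (illustrating the rule itself, not the authors' split-quadrature implementation) is:

```python
import math

def tanh_sinh(f, h=0.1, t_max=4.0):
    """Double exponential (tanh-sinh) quadrature of f over [-1, 1].
    The substitution x = tanh((pi/2) sinh t) maps the real line onto
    (-1, 1) and makes the plain trapezoidal rule in t converge
    super-exponentially as h decreases."""
    total = 0.0
    n = int(t_max / h)
    for k in range(-n, n + 1):
        t = k * h
        g = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(g)
        w = h * 0.5 * math.pi * math.cosh(t) / math.cosh(g) ** 2
        total += w * f(x)
    return total

# Example: integral of exp(x) over [-1, 1] equals e - 1/e.
approx = tanh_sinh(math.exp)
```

    Because the weights decay doubly exponentially toward the endpoints, the same rule also handles integrable endpoint singularities gracefully, which is one reason it suits the surface integrals in the method above.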

  14. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Yidong Xia; Robert Nourgaliev

    2011-05-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, contain both classical finite volume and standard DG methods as two special cases, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed at augmenting the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, with the least-squares reconstructed DG method providing the best performance in terms of accuracy, efficiency, and robustness.
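
    The reconstruction idea at the heart of RDG methods is a local least-squares fit to neighboring cell data. The sketch below shows the simplest flavor, a gradient reconstruction that is exact for linear fields, on a non-uniform 1D mesh; the quadratic in-cell reconstruction of the paper extends the same least-squares principle to higher order:

```python
import numpy as np

def ls_gradient(xc, ubar, i):
    """Least-squares slope at cell i from the cell averages of its
    neighbors. Exact for linear fields on arbitrary (even non-uniform)
    1D meshes, because the average of a linear function over a cell
    equals its value at the cell centroid."""
    dx = np.array([xc[j] - xc[i] for j in (i - 1, i + 1)])
    du = np.array([ubar[j] - ubar[i] for j in (i - 1, i + 1)])
    return float(dx @ du / (dx @ dx))   # 1D normal-equation solution

# Non-uniform mesh: edges, centroids, and cell averages of u(x) = 3x + 2.
edges = np.array([0.0, 0.3, 0.45, 0.9, 1.0])
xc = 0.5 * (edges[:-1] + edges[1:])
ubar = 3.0 * xc + 2.0                   # averages of a linear field
g = ls_gradient(xc, ubar, 2)            # reconstruct in an interior cell
```

    In an RDG method the fitted polynomial augments the stored DG solution in each cell, raising the formal order of accuracy without enlarging the number of stored degrees of freedom.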

  15. Transient loads analysis for space flight applications

    NASA Technical Reports Server (NTRS)

    Thampi, S. K.; Vidyasagar, N. S.; Ganesan, N.

    1992-01-01

    A significant part of the flight readiness verification process involves transient analysis of the coupled Shuttle-payload system to determine the low frequency transient loads. This paper describes a methodology for transient loads analysis and its implementation for the Spacelab Life Sciences Mission. The analysis is carried out using two major software tools - NASTRAN and an external FORTRAN code called EZTRAN. This approach is adopted to overcome some of the limitations of NASTRAN's standard transient analysis capabilities. The method uses Data Recovery Matrices (DRM) to improve computational efficiency. The mode acceleration method is fully implemented in the DRM formulation to recover accurate displacements, stresses, and forces. The advantages of the method are demonstrated through a numerical example.

  16. Development of the mathematical model for design and verification of acoustic modal analysis methods

    NASA Astrophysics Data System (ADS)

    Siner, Alexander; Startseva, Maria

    2016-10-01

    To reduce turbofan noise it is necessary to develop methods, collectively called modal analysis, for analyzing the sound field generated by the blade machinery. Because modal analysis methods are very difficult, and testing them against full-scale measurements is expensive and tedious, it is necessary to construct mathematical models that allow modal analysis algorithms to be tested quickly and cheaply. In this work, a model is presented that allows single modes to be set in the channel and the generated sound field to be analyzed. A modal analysis of the sound generated by a ring array of point sound sources is performed, and experimental and numerical modal analysis results are compared.

  17. Study of grid independence of finite element method on MHD free convective casson fluid flow with slip effect

    NASA Astrophysics Data System (ADS)

    Raju, R. Srinivasa; Ramesh, K.

    2018-05-01

    The purpose of this work is to study the grid independence of the finite element method for MHD Casson fluid flow past a vertically inclined plate embedded in a porous medium in the presence of chemical reaction, heat absorption, an external magnetic field, and a slip effect. For this study, a mathematical model is developed and analyzed using an appropriate numerical technique, namely the finite element method. The grid study is discussed with the help of numerical values of the velocity, temperature, and concentration profiles, presented in tabular form. Favourable comparisons with previously published work on various special cases of the problem are obtained.

  18. Adaptive dynamic programming for discrete-time linear quadratic regulation based on multirate generalised policy iteration

    NASA Astrophysics Data System (ADS)

    Chun, Tae Yoon; Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho

    2018-06-01

    In this paper, we propose two multirate generalised policy iteration (GPI) algorithms applied to discrete-time linear quadratic regulation problems. The proposed algorithms are extensions of the existing GPI algorithm that consists of the approximate policy evaluation and policy improvement steps. The two proposed schemes, named heuristic dynamic programming (HDP) and dual HDP (DHP), based on multirate GPI, use multi-step estimation (M-step Bellman equation) at the approximate policy evaluation step for estimating the value function and its gradient, called the costate, respectively. Then, we show that these two methods with the same update horizon can be considered equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergences, so-called value iteration (VI)-mode and policy iteration (PI)-mode convergences, are proved to hold for the proposed multirate GPIs. Further, general convergence properties in terms of eigenvalues are also studied. The data-driven online implementation methods for the proposed HDP and DHP are demonstrated, and finally we present the results of numerical simulations performed to verify the effectiveness of the proposed methods.
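
    The skeleton of such a scheme, m approximate policy-evaluation sweeps per policy improvement, can be sketched for a discrete-time LQR problem as follows. The double-integrator system and the value of m below are illustrative choices made for this summary, not examples from the paper; m = 1 corresponds to value iteration and large m approaches policy iteration:

```python
import numpy as np

def multirate_gpi(A, B, Q, R, m=3, outer=300):
    """Generalised policy iteration for discrete-time LQR: each outer
    step improves the feedback gain K, then runs m Lyapunov-type
    evaluation sweeps of the value matrix P under the fixed policy."""
    P = np.zeros((A.shape[0], A.shape[0]))
    for _ in range(outer):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # policy improvement
        Ac = A - B @ K
        for _ in range(m):                                  # m-step evaluation
            P = Q + K.T @ R @ K + Ac.T @ P @ Ac
    return P, K

# Illustrative double-integrator plant.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
P, K = multirate_gpi(A, B, Q, R)

# Residual of the discrete algebraic Riccati equation at the converged P.
res = (Q + A.T @ P @ A
       - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A) - P)
```

    The inner-sweep count m is exactly the "multirate" knob of the abstract: it trades fewer, cheaper outer improvements (small m) against faster per-iteration value convergence (large m).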

  19. Algorithms for the computation of solutions of the Ornstein-Zernike equation.

    PubMed

    Peplow, A T; Beardmore, R E; Bresme, F

    2006-10-01

    We introduce a robust and efficient methodology to solve the Ornstein-Zernike integral equation using the pseudo-arc-length (PAL) continuation method, which reformulates the integral equation in an equivalent but nonstandard form. This enables the computation of solutions in regions where the compressibility experiences large changes or where the existence of multiple solutions and so-called branch points prevents Newton's method from converging. We illustrate the use of the algorithm with a difficult problem that arises in the numerical solution of integral equations, namely the evaluation of the so-called no-solution line of the Ornstein-Zernike hypernetted chain (HNC) integral equation for the Lennard-Jones potential. We are able to use the PAL algorithm to solve the integral equation along this line and to connect physical and nonphysical solution branches (both isotherms and isochores) where appropriate. We also show that PAL continuation can compute solutions within the no-solution region that cannot be computed when Newton and Picard methods are applied directly to the integral equation. While many solutions that we find are new, some correspond to states with negative compressibility and consequently are not physical.
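
    The PAL idea, augmenting the equations with an arclength constraint so that Newton's method stays well-posed at folds, can be demonstrated on a scalar toy problem f(x, lambda) = x^2 + lambda - 1 = 0 (an illustrative fold constructed for this summary, not the Ornstein-Zernike equation). Natural continuation in lambda fails at the fold (0, 1); arclength parameterisation walks straight through it:

```python
import numpy as np

def pal_trace(ds=0.05, steps=80):
    """Pseudo-arclength continuation of f(x, lam) = x^2 + lam - 1 = 0.
    Each step: tangent predictor, then Newton on the system augmented
    with the arclength constraint t . (u - u0) = ds."""
    f = lambda x, l: x * x + l - 1.0
    u = np.array([1.0, 0.0])                     # start on the branch
    t = np.array([-1.0, 2.0]) / np.sqrt(5.0)     # initial tangent (lam rising)
    branch = [u.copy()]
    for _ in range(steps):
        u0, u_new = u.copy(), u + ds * t         # predictor
        for _ in range(20):                      # Newton corrector
            F = np.array([f(*u_new), t @ (u_new - u0) - ds])
            J = np.array([[2.0 * u_new[0], 1.0], t])
            u_new = u_new - np.linalg.solve(J, F)
            if abs(F[0]) < 1e-12:
                break
        t = (u_new - u0) / np.linalg.norm(u_new - u0)  # secant tangent
        u = u_new
        branch.append(u.copy())
    return np.array(branch)

b = pal_trace()
```

    At the fold the augmented Jacobian remains invertible even though df/dx vanishes, which is precisely what lets the PAL method of the abstract trace solution branches around the no-solution line.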

  20. Communicating and Interacting: An Exploration of the Changing Roles of Media in CALL/CMC

    ERIC Educational Resources Information Center

    Hoven, Debra

    2006-01-01

    The sites of learning and teaching using CALL are shifting from CD-based, LAN-based, or stand-alone programs to the Internet. As this change occurs, pedagogical approaches to using CALL are also shifting to forms which better exploit the communication, collaboration, and negotiation aspects of the Internet. Numerous teachers and designers have…

  1. ICT-Mediated Science Inquiry: The Remote Access Microscopy Project (RAMP)

    ERIC Educational Resources Information Center

    Hunt, John

    2007-01-01

    The calls for the transformation of how science is taught (and what is taught) are numerous and show no sign of abating. Common amongst these calls is the need to shift from the traditional teaching and learning towards a model that represents the social constructivist epistemology. These calls have coincided with the Internet revolution. Through…

  2. The planar multijunction cell - A new solar cell for earth and space

    NASA Technical Reports Server (NTRS)

    Evans, J. C., Jr.; Chai, A.-T.; Goradia, C.

    1980-01-01

    A new family of high-voltage solar cells, called the planar multijunction (PMJ) cell, is being developed. The new cells combine the attractive features of planar cells having conventional or interdigitated back contacts with those of the vertical multijunction (VMJ) solar cell. The PMJ solar cell is internally divided into many voltage-generating regions, called unit cells, which are internally connected in series. The key to obtaining reasonable performance from this device was the separation of the top surface field regions over each active unit cell area. Using existing solar cell fabrication methods, output voltages in excess of 20 volts per linear centimeter are possible. Analysis of the new device is complex, and numerous geometries are being studied, which should provide substantial benefits in normal sunlight usage as well as with concentrators.

  3. A Localized Ensemble Kalman Smoother

    NASA Technical Reports Server (NTRS)

    Butala, Mark D.

    2012-01-01

    Numerous geophysical inverse problems prove difficult because the available measurements are indirectly related to the underlying unknown dynamic state and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems usually involves their high dimensionality and the standard statistical methods prove computationally intractable. This paper develops and addresses the theoretical convergence of a new high-dimensional Monte-Carlo approach called the localized ensemble Kalman smoother.

  4. An AMR capable finite element diffusion solver for ALE hydrocodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisher, A. C.; Bailey, D. S.; Kaiser, T. B.

    2015-02-01

    Here, we present a novel method for the solution of the diffusion equation on a composite AMR mesh. This approach is suitable for including diffusion based physics modules in hydrocodes that support ALE and AMR capabilities. To illustrate, we proffer our implementations of diffusion based radiation transport and heat conduction in a hydrocode called ALE-AMR. Numerical experiments conducted with the diffusion solver and associated physics packages yield second-order convergence in the L2 norm.

  5. Vecuum: identification and filtration of false somatic variants caused by recombinant vector contamination.

    PubMed

    Kim, Junho; Maeng, Ju Heon; Lim, Jae Seok; Son, Hyeonju; Lee, Junehawk; Lee, Jeong Ho; Kim, Sangwoo

    2016-10-15

    Advances in sequencing technologies have remarkably lowered the detection limit of somatic variants to a low frequency. However, calling mutations in this range is still confounded by many factors, including environmental contamination. Vector contamination is a continuously occurring issue and is especially problematic since vector inserts are hardly distinguishable from the sample sequences. Such inserts, which may harbor polymorphisms and engineered functional mutations, can result in calling false variants at the corresponding sites. Numerous vector-screening methods have been developed, but none could handle contamination from inserts because they focus on vector backbone sequences alone. We developed a novel method, Vecuum, that identifies vector-originated reads and the resultant false variants. Since vector inserts are generally constructed from intron-less cDNAs, Vecuum identifies vector-originated reads by inspecting the clipping patterns at exon junctions. False variant calls are further detected based on the biased distribution of mutant alleles to vector-originated reads. Tests on simulated and spike-in experimental data validated that Vecuum could detect 93% of vector contaminants and could remove up to 87% of variant-like false calls with 100% precision. Application to public sequence datasets demonstrated the utility of Vecuum in detecting false variants resulting from various types of external contamination. A Java-based implementation of the method is available at http://vecuum.sourceforge.net/. Contact: swkim@yuhs.ac. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. Efficient C1-continuous phase-potential upwind (C1-PPU) schemes for coupled multiphase flow and transport with gravity

    NASA Astrophysics Data System (ADS)

    Jiang, Jiamin; Younis, Rami M.

    2017-10-01

In the presence of counter-current flow, nonlinear convergence problems may arise in implicit time-stepping when the popular phase-potential upwinding (PPU) scheme is used. The PPU numerical flux is non-differentiable across the co-current/counter-current flow regimes, which may lead to cycles or divergence in the Newton iterations. Recently proposed methods improve the smoothness of the numerical flux. The objective of this work is to devise and analyze an alternative numerical flux scheme, called C1-PPU, that, in addition to improving smoothness with respect to saturations and phase potentials, also reduces the degree of nonlinearity and improves accuracy. C1-PPU involves a novel use of the flux-limiter concept from the context of high-resolution methods, and allows a smooth variation between the co-current and counter-current flow regimes. The scheme is general and applies to fully coupled flow and transport formulations with an arbitrary number of phases. We analyze the consistency property of the C1-PPU scheme and derive saturation and pressure estimates, which are used to prove solution existence. Several numerical examples for two- and three-phase flow in heterogeneous and multi-dimensional reservoirs are presented. The proposed scheme is compared to the conventional PPU and the recently proposed Hybrid Upwinding schemes. We investigate three properties of these numerical fluxes: smoothness, nonlinearity, and accuracy. The results indicate that in addition to smoothness, nonlinearity may also be critical for convergence behavior and thus needs to be considered in the design of an efficient numerical flux scheme. Moreover, the numerical examples show that the C1-PPU scheme exhibits superior convergence properties for large time steps compared to the other alternatives.
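The smoothness issue can be seen in miniature: the PPU flux switches upwind direction through max/min terms with a kink at zero potential difference. The sketch below is not the paper's flux-limiter-based C1-PPU construction; it only contrasts the non-differentiable PPU flux with a generic C1 positive-part blend to show where the kink lives and how a smoothed blend removes it.

```python
# Hypothetical one-interface illustration: dphi is the phase-potential
# difference, mob_up/mob_down the mobilities evaluated on either side.

def ppu_flux(dphi, mob_up, mob_down):
    # Classic phase-potential upwinding: non-differentiable at dphi = 0.
    return mob_up * max(dphi, 0.0) + mob_down * min(dphi, 0.0)

def smooth_pos(x, eps=1e-2):
    # C1 approximation of max(x, 0): quadratic blend inside |x| < eps,
    # matching value and slope at x = +/- eps.
    if x >= eps:
        return x
    if x <= -eps:
        return 0.0
    return (x + eps) ** 2 / (4 * eps)

def c1_flux(dphi, mob_up, mob_down):
    # Coincides with PPU outside the smoothing band |dphi| < eps.
    return mob_up * smooth_pos(dphi) - mob_down * smooth_pos(-dphi)
```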

  7. A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol.

    PubMed

    Zeng, Ping; Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun

    2017-01-01

In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on, all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and has great promise for use in real-world applications.
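The core idea, turning symbol-space prefix matching into numerical comparison via hashing plus binary search, can be sketched as follows. This is an illustrative simplification, not the published MH implementation; the `PrefixMatcher` class and its table layout are invented for this example.

```python
import bisect

# Sketch of hash-plus-binary-search multi-pattern matching anchored at a
# fixed starting position: patterns are grouped by length, each group keeps
# a sorted table of (hash, pattern) pairs, and a lookup hashes the URL
# prefix of each candidate length and binary-searches the group.

class PrefixMatcher:
    def __init__(self, patterns):
        self.groups = {}  # pattern length -> sorted list of (hash, pattern)
        for p in patterns:
            self.groups.setdefault(len(p), []).append((hash(p), p))
        for table in self.groups.values():
            table.sort()

    def match(self, url):
        hits = []
        for length, table in self.groups.items():
            if len(url) < length:
                continue
            prefix = url[:length]
            key = (hash(prefix), prefix)   # hash first, exact string breaks ties
            i = bisect.bisect_left(table, key)
            if i < len(table) and table[i] == key:
                hits.append(prefix)
        return hits

matcher = PrefixMatcher(["/api/", "/img/", "/api/v2/"])
found = matcher.match("/api/v2/users")
```

Storing the pattern alongside its hash keeps the comparison numerical in the common case while ruling out hash collisions on the final equality check.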

  8. A multi-pattern hash-binary hybrid algorithm for URL matching in the HTTP protocol

    PubMed Central

    Tan, Qingping; Meng, Xiankai; Shao, Zeming; Xie, Qinzheng; Yan, Ying; Cao, Wei; Xu, Jianjun

    2017-01-01

    In this paper, based on our previous multi-pattern uniform resource locator (URL) binary-matching algorithm called HEM, we propose an improved multi-pattern matching algorithm called MH that is based on hash tables and binary tables. The MH algorithm can be applied to the fields of network security, data analysis, load balancing, cloud robotic communications, and so on—all of which require string matching from a fixed starting position. Our approach effectively solves the performance problems of the classical multi-pattern matching algorithms. This paper explores ways to improve string matching performance under the HTTP protocol by using a hash method combined with a binary method that transforms the symbol-space matching problem into a digital-space numerical-size comparison and hashing problem. The MH approach has a fast matching speed, requires little memory, performs better than both the classical algorithms and HEM for matching fields in an HTTP stream, and it has great promise for use in real-world applications. PMID:28399157

  9. Contact stresses in gear teeth: A new method of analysis

    NASA Technical Reports Server (NTRS)

    Somprakit, Paisan; Huston, Ronald L.; Oswald, Fred B.

    1991-01-01

A new, innovative procedure called point load superposition is presented for determining the contact stresses in mating gear teeth. It is believed that this procedure will greatly extend both the range of applicability and the accuracy of gear contact stress analysis. Point load superposition is based upon fundamental solutions from the theory of elasticity. It is an iterative numerical procedure which has distinct advantages over the classical Hertz method, the finite element method, and existing applications of the boundary element method. Specifically, friction and sliding effects, which are either excluded from or difficult to study with the classical methods, are routinely handled with the new procedure. Presented here are the basic theory and the algorithms. Several examples are given. Results are consistent with those of the classical theories. Applications to spur gears are discussed.
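The iterative flavor of such a procedure can be sketched on a one-dimensional discretized contact: nodal pressures are relaxed until the superposed deflections from all point loads match the indenter gap, with tensile (negative) pressures clamped to zero. The influence kernel below is a hypothetical stand-in for the elastic point-load solution, so this illustrates only the superposition-and-iterate pattern, not the paper's algorithm.

```python
# Schematic iterative contact solve by point-load superposition.

def solve_contact(gap, influence, iters=500, relax=0.2):
    n = len(gap)
    p = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Deflection at node i from superposing all current point loads.
            w = sum(influence(i, j) * p[j] for j in range(n))
            # Relax pressure toward matching the gap; no adhesion allowed.
            p[i] = max(0.0, p[i] + relax * (gap[i] - w))
    return p

# Hypothetical smoothed influence kernel (a stand-in for the half-plane
# point-load solution, which is logarithmic in 2D elasticity).
infl = lambda i, j: 1.0 / (1.0 + abs(i - j))

# Parabolic indenter profile: contact only where the gap is positive.
gap = [max(0.0, 1.0 - 0.1 * (i - 5) ** 2) for i in range(11)]
pressures = solve_contact(gap, infl)
```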

  10. PFEM-based modeling of industrial granular flows

    NASA Astrophysics Data System (ADS)

    Cante, J.; Dávalos, C.; Hernández, J. A.; Oliver, J.; Jonsén, P.; Gustafsson, G.; Häggblad, H.-Å.

    2014-05-01

The potential of numerical methods for the solution and optimization of industrial granular flow problems is widely accepted by the industries of this field, the challenge being to promote their industrial practice effectively. In this paper, we attempt to make an exploratory step in this regard by using a numerical model based on continuum mechanics and on the so-called Particle Finite Element Method (PFEM). This goal is achieved by focusing on two specific industrial applications in the mining and pellet-manufacturing industries: silo discharge and calculation of power draw in tumbling mills. Both examples are representative of the range of mechanical responses of granular material, from a stagnant configuration to a flowing condition. The silo discharge simulation is validated using experimental data collected on a full-scale, flat-bottomed cylindrical silo, and is conducted with the aim of characterizing and understanding the correlation between flow patterns and pressures for concentric discharges. In the second example, the potential of PFEM as a numerical tool to track the positions of the particles inside the drum is analyzed, and the distribution of pressures on the walls is also studied. The power draw is computed and validated against experiments in which the power is plotted in terms of the rotational speed of the drum.

  11. Free Vibration Analysis of DWCNTs Using CDM and Rayleigh-Schmidt Based on Nonlocal Euler-Bernoulli Beam Theory

    PubMed Central

    2014-01-01

The free vibration response of double-walled carbon nanotubes (DWCNTs) is investigated. The DWCNTs are modelled as two beams interacting with each other through van der Waals forces, and the nonlocal Euler-Bernoulli beam theory is used. The governing equations of motion are derived using a variational approach, and the free vibration frequencies are obtained employing two different approaches. In the first method, the double-walled carbon nanotubes are discretized by means of the so-called “cell discretization method” (CDM), in which each nanotube is reduced to a set of rigid bars linked together by elastic cells. The resulting discrete system takes into account nonlocal effects, constraint elasticities, and the van der Waals forces. The second proposed approach, belonging to the semianalytical methods, is an optimized version of the classical Rayleigh quotient, as proposed originally by Schmidt. The resulting conditions are solved numerically. Numerical examples end the paper, in which the two approaches give lower and upper bounds to the true values, and some comparisons with existing results are offered. Comparisons of the present numerical results with those from the open literature show an excellent agreement. PMID:24715807

  12. Identification of source velocities on 3D structures in non-anechoic environments: Theoretical background and experimental validation of the inverse patch transfer functions method

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; Totaro, N.; Guyader, J.-L.

    2010-08-01

In noise control, identification of the source velocity field remains a major problem open to investigation. Consequently, methods such as nearfield acoustical holography (NAH), principal source projection, the inverse frequency response function and hybrid NAH have been developed. However, these methods require free-field conditions that are often difficult to achieve in practice. This article presents an alternative method, known as inverse patch transfer functions (iPTF), designed to identify source velocities and developed in the framework of the European SILENCE project. The method is based on the definition of a virtual cavity; the double measurement of the pressure and particle velocity fields on the aperture surfaces of this volume, divided into elementary areas called patches; and the inversion of impedance matrices, numerically computed from a modal basis obtained by FEM. Theoretically, the method is applicable to sources with complex 3D geometries, and measurements can be carried out in a non-anechoic environment, even in the presence of other stationary sources outside the virtual cavity. In the present paper, the theoretical background of the iPTF method is described and the results (numerical and experimental) for a source with simple geometry (two baffled pistons driven in antiphase) are presented and discussed.

  13. A new uniformly valid asymptotic integration algorithm for elasto-plastic creep and unified viscoplastic theories including continuum damage

    NASA Technical Reports Server (NTRS)

    Chulya, Abhisak; Walker, Kevin P.

    1991-01-01

    A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small and large scale analysis. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.

  14. A new uniformly valid asymptotic integration algorithm for elasto-plastic-creep and unified viscoplastic theories including continuum damage

    NASA Technical Reports Server (NTRS)

    Chulya, A.; Walker, K. P.

    1989-01-01

    A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small and large scale analysis. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.

  15. A new exact and more powerful unconditional test of no treatment effect from binary matched pairs.

    PubMed

    Lloyd, Chris J

    2008-09-01

    We consider the problem of testing for a difference in the probability of success from matched binary pairs. Starting with three standard inexact tests, the nuisance parameter is first estimated and then the residual dependence is eliminated by maximization, producing what I call an E+M P-value. The E+M P-value based on McNemar's statistic is shown numerically to dominate previous suggestions, including partially maximized P-values as described in Berger and Sidik (2003, Statistical Methods in Medical Research 12, 91-108). The latter method, however, may have computational advantages for large samples.
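The construction can be sketched for McNemar-type data: compute the exact tail probability of the test statistic at a given value of the nuisance parameter (here, the discordance probability), then maximize. For simplicity the sketch below maximizes over a full grid, which is closer to Berger-style unconditional testing than to the paper's estimate-then-maximize (E+M) refinement; the model and grid size are illustrative choices.

```python
from math import comb

def mcnemar_stat(b, c):
    # McNemar's statistic on the two discordant counts.
    return 0.0 if b + c == 0 else (b - c) ** 2 / (b + c)

def pvalue_at_psi(n, b_obs, c_obs, psi):
    """Exact tail probability of the McNemar statistic under H0 (equal
    discordance directions) for a given discordance probability psi."""
    t_obs = mcnemar_stat(b_obs, c_obs)
    total = 0.0
    for d in range(n + 1):                      # number of discordant pairs
        p_d = comb(n, d) * psi**d * (1 - psi)**(n - d)
        for b in range(d + 1):                  # split of discordant pairs
            if mcnemar_stat(b, d - b) >= t_obs:
                total += p_d * comb(d, b) * 0.5**d
    return total

def maximized_pvalue(n, b_obs, c_obs, grid=200):
    """Unconditional exact p-value: maximize over a grid of psi values.
    (The E+M method instead estimates psi first, then maximizes the
    residual dependence; full-grid maximization is shown for brevity.)"""
    return max(pvalue_at_psi(n, b_obs, c_obs, k / grid)
               for k in range(1, grid))
```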

  16. Fast and simple acquisition of solid-state 14N NMR spectra with signal enhancement via population transfer.

    PubMed

    O'Dell, Luke A; Schurko, Robert W

    2009-05-20

    A new approach for the acquisition of static, wideline (14)N NMR powder patterns is outlined. The method involves the use of frequency-swept pulses which serve two simultaneous functions: (1) broad-band excitation of magnetization and (2) signal enhancement via population transfer. The signal enhancement mechanism is described using numerical simulations and confirmed experimentally. This approach, which we call DEISM (Direct Enhancement of Integer Spin Magnetization), allows high-quality (14)N spectra to be acquired at intermediate field strengths in an uncomplicated way and in a fraction of the time required for previously reported methods.

  17. Nonlinear Transient Problems Using Structure Compatible Heat Transfer Code

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    2000-01-01

The report documents the recent effort to enhance a transient linear heat transfer code so as to solve nonlinear problems. The linear heat transfer code, called the Structure-Compatible Heat Transfer (SCHT) code, was originally developed by Dr. Kim Bey of NASA Langley. The report includes four parts. The first part outlines the formulation of the heat transfer problem of concern. The second and third parts give detailed procedures for constructing the nonlinear finite element equations and the Jacobian matrices required by the nonlinear iterative solution method, the Newton-Raphson method. The final part summarizes the results of the numerical experiments on the newly enhanced SCHT code.
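For reference, the Newton-Raphson iteration that those Jacobian matrices feed looks like this for a scalar residual (a generic sketch, not the SCHT implementation; the example equation is arbitrary):

```python
# Generic Newton-Raphson iteration: repeatedly correct x by the residual
# divided by its derivative until the residual is small.

def newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        r = residual(x)
        if abs(r) < tol:
            return x
        x -= r / jacobian(x)
    return x

# Example: solve x**3 - 8 = 0 starting from x = 3.
root = newton(lambda x: x**3 - 8.0, lambda x: 3 * x**2, x0=3.0)
```

In the finite element setting the scalar derivative becomes the assembled Jacobian matrix and the division becomes a linear solve, but the loop is the same.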

  18. Space-Time Conservation Element and Solution Element Method Being Developed

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Himansu, Ananda; Jorgenson, Philip C. E.; Loh, Ching-Yuen; Wang, Xiao-Yen; Yu, Sheng-Tao

    1999-01-01

The engineering research and design requirements of today pose great computer-simulation challenges to engineers and scientists who are called on to analyze phenomena in continuum mechanics. The future will bring even more daunting challenges, when increasingly complex phenomena must be analyzed with increased accuracy. Traditionally used numerical simulation methods have evolved to their present state by repeated incremental extensions to broaden their scope. They are reaching the limits of their applicability and will need to be radically revised, at the very least, to meet future simulation challenges. At the NASA Lewis Research Center, researchers have been developing a new numerical framework for solving conservation laws in continuum mechanics, namely, the Space-Time Conservation Element and Solution Element Method, or the CE/SE method. This method has been built from fundamentals and is not a modification of any previously existing method. It has been designed with generality, simplicity, robustness, and accuracy as cornerstones. The CE/SE method has thus far been applied in the fields of computational fluid dynamics, computational aeroacoustics, and computational electromagnetics. Computer programs based on the CE/SE method have been developed for calculating flows in one, two, and three spatial dimensions. Results have been obtained for numerous problems and phenomena, including various shock-tube problems, ZND detonation waves, an implosion and explosion problem, shocks over a forward-facing step, a blast wave discharging from a nozzle, various acoustic waves, and shock/acoustic-wave interactions. The method can clearly resolve shock/acoustic-wave interactions in which the magnitudes of the acoustic wave and the shock differ by up to six orders of magnitude. In two-dimensional flows, the reflected shock is as crisp as the leading shock.
CE/SE schemes are currently being used for advanced applications to jet and fan noise prediction and to chemically reacting flows.

  19. Real Time Optima Tracking Using Harvesting Models of the Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Baskaran, Subbiah; Noever, D.

    1999-01-01

Tracking optima in real-time propulsion control, particularly for non-stationary optimization problems, is a challenging task. Several approaches have been put forward for such a study, including the numerical method called the genetic algorithm. In brief, this approach is built upon Darwinian-style competition between numerical alternatives displayed in the form of binary strings, or by analogy to 'pseudogenes'. Breeding of improved solutions is an often-cited parallel to natural selection in evolutionary or soft computing. In this report we present our results of applying a novel model of a genetic algorithm for tracking optima in propulsion engineering and in real-time control. We specialize the algorithm to mission profiling and planning optimizations, both to select reduced propulsion needs through trajectory planning and to explore time or fuel conservation strategies.
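The mechanics the abstract alludes to, selection, crossover, and mutation over binary "pseudogene" strings, look like this in miniature. This is a generic GA sketch, not the authors' harvesting model; the one-max fitness function and all parameter values are illustrative.

```python
import random

# Minimal binary-string genetic algorithm: truncation selection, one-point
# crossover, and per-bit mutation, iterated for a fixed number of generations.

def evolve(fitness, n_bits=16, pop_size=30, gens=60, p_mut=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]          # truncation selection
        pop = parents[:]                          # elitism: parents survive
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)        # one-point crossover
            child = a[:cut] + b[cut:]
            pop.append([bit ^ (rng.random() < p_mut) for bit in child])
    return max(pop, key=fitness)

# Example: maximize the number of ones ("one-max").
best = evolve(fitness=sum)
```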

  20. Multiple crack detection in 3D using a stable XFEM and global optimization

    NASA Astrophysics Data System (ADS)

    Agathos, Konstantinos; Chatzi, Eleni; Bordas, Stéphane P. A.

    2018-02-01

    A numerical scheme is proposed for the detection of multiple cracks in three dimensional (3D) structures. The scheme is based on a variant of the extended finite element method (XFEM) and a hybrid optimizer solution. The proposed XFEM variant is particularly well-suited for the simulation of 3D fracture problems, and as such serves as an efficient solution to the so-called forward problem. A set of heuristic optimization algorithms are recombined into a multiscale optimization scheme. The introduced approach proves effective in tackling the complex inverse problem involved, where identification of multiple flaws is sought on the basis of sparse measurements collected near the structural boundary. The potential of the scheme is demonstrated through a set of numerical case studies of varying complexity.

  1. Hidden attractors in dynamical systems

    NASA Astrophysics Data System (ADS)

    Dudkowski, Dawid; Jafari, Sajad; Kapitaniak, Tomasz; Kuznetsov, Nikolay V.; Leonov, Gennady A.; Prasad, Awadhesh

    2016-06-01

Complex dynamical systems, ranging from the climate and ecosystems to financial markets and engineering applications, typically have many coexisting attractors. This property of the system is called multistability. The final state, i.e., the attractor on which the multistable system evolves, strongly depends on the initial conditions. Additionally, such systems are very sensitive to noise and to system parameters, so a sudden shift to a contrasting regime may occur. To understand the dynamics of these systems one has to identify all possible attractors and their basins of attraction. Recently, it has been shown that multistability is connected with the occurrence of unpredictable attractors, which have been called hidden attractors. The basins of attraction of hidden attractors do not touch unstable fixed points (if any exist) and are located far away from such points. Numerical localization of hidden attractors is not straightforward, since there are no transient processes leading to them from the neighborhoods of unstable fixed points, and special analytical-numerical procedures have to be used. From the viewpoint of applications, the identification of hidden attractors is the major issue. Knowledge about the emergence and properties of hidden attractors can increase the likelihood that the system will remain on the most desirable attractor and reduce the risk of a sudden jump to undesired behavior. We review the most representative examples of hidden attractors and discuss their theoretical properties and experimental observations. We also describe numerical methods which allow identification of the hidden attractors.
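The standard first step for self-excited attractors, scanning a grid of initial conditions and classifying where trajectories settle, can be sketched on a toy bistable system. The system below is an invented one-dimensional example; genuinely hidden attractors need the special analytical-numerical procedures the review describes, precisely because grids seeded near unstable fixed points fail to reach them.

```python
# Basin-of-attraction scan for a simple multistable system.
# Toy dynamics: dx/dt = x - x**3, bistable with attractors at x = -1 and +1
# and an unstable fixed point at x = 0.

def final_state(x0, dt=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)       # explicit Euler integration
    return round(x)                # settles near -1, 0, or +1

def basin_scan(lo=-2.0, hi=2.0, n=81):
    basins = {}                    # attractor label -> initial conditions
    for i in range(n):
        x0 = lo + (hi - lo) * i / (n - 1)
        basins.setdefault(final_state(x0), []).append(x0)
    return basins
```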

  2. Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location

    NASA Astrophysics Data System (ADS)

    Zhao, A. H.

    2014-12-01

Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One method for calculating them numerically is based on a minimum-traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely, the initial point) to low-residual points (referred to as reference points of the focal locus). The method places no restrictions on the complexity of the velocity model but still lacks the ability to deal correctly with multi-segment loci. Additionally, it is rather laborious to set calculation parameters that yield loci of satisfying completeness and fineness. In this study, we improve the ray-tracing based numerical method to overcome these shortcomings. (1) Reference points of a hypocentral locus are selected from nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; all the locus segments are then calculated in turn with the minimum-traveltime tree ray-tracing algorithm by repeatedly assigning as an initial point the minimum-residual reference point among those not yet traced. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method is capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.

  3. Simplified and refined structural modeling for economical flutter analysis and design

    NASA Technical Reports Server (NTRS)

    Ricketts, R. H.; Sobieszczanski, J.

    1977-01-01

A coordinated use of two finite-element models of different levels of refinement is presented to reduce the computer cost of the repetitive flutter analysis commonly encountered in structural resizing to meet flutter requirements. One model, termed a refined model (RM), represents a high degree of detail needed for strength-sizing and flutter analysis of an airframe. The other model, called a simplified model (SM), has a much smaller number of elements and degrees of freedom. A systematic method of deriving an SM from a given RM is described. The method consists of judgmental and numerical operations to make the stiffness and mass of the SM elements equivalent to the corresponding substructures of the RM. The structural data are automatically transferred between the two models. The bulk of the analysis is performed on the SM, with periodic verification carried out by analysis of the RM. In a numerical example of a supersonic cruise aircraft with an arrow wing, this approach permitted substantial savings in computer costs and acceleration of the job turn-around.

  4. A discrete geometric approach for simulating the dynamics of thin viscous threads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audoly, B., E-mail: audoly@lmm.jussieu.fr; Clauvelin, N.; Brun, P.-T.

We present a numerical model for the dynamics of thin viscous threads based on a discrete, Lagrangian formulation of the smooth equations. The model makes use of a condensed set of coordinates, called the centerline/spin representation: the kinematic constraint linking the centerline's tangent to the orientation of the material frame is used to eliminate two out of three degrees of freedom associated with rotations. Based on a description of twist inspired by discrete differential geometry and by variational principles, we build a full-fledged discrete viscous thread model, which includes in particular a discrete representation of the internal viscous stress. Consistency of the discrete model with the classical, smooth equations for thin threads is established formally. Our numerical method is validated against reference solutions for steady coiling. The method makes it possible to simulate the unsteady behavior of thin viscous threads in a robust and efficient way, including the combined effects of inertia, stretching, bending, twisting, large rotations and surface tension.

  5. Direct single-shot phase retrieval from the diffraction pattern of separated objects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leshem, Ben; Xu, Rui; Dallal, Yehonatan

The non-crystallographic phase problem arises in numerous scientific and technological fields. An important application is coherent diffractive imaging. Recent advances in X-ray free-electron lasers allow capturing of the diffraction pattern from a single nanoparticle before it disintegrates, in so-called ‘diffraction before destruction’ experiments. Presently, the phase is reconstructed by iterative algorithms, imposing a non-convex computational challenge, or by Fourier holography, requiring a well-characterized reference field. Here we present a convex scheme for single-shot phase retrieval for two (or more) sufficiently separated objects, demonstrated in two dimensions. In our approach, the objects serve as unknown references to one another, reducing the phase problem to a solvable set of linear equations. We establish our method numerically and experimentally in the optical domain and demonstrate a proof-of-principle single-shot coherent diffractive imaging using X-ray free-electron laser pulses. Lastly, our scheme alleviates several limitations of current methods, offering a new pathway towards direct reconstruction of complex objects.

  6. Direct single-shot phase retrieval from the diffraction pattern of separated objects

    DOE PAGES

    Leshem, Ben; Xu, Rui; Dallal, Yehonatan; ...

    2016-02-22

The non-crystallographic phase problem arises in numerous scientific and technological fields. An important application is coherent diffractive imaging. Recent advances in X-ray free-electron lasers allow capturing of the diffraction pattern from a single nanoparticle before it disintegrates, in so-called ‘diffraction before destruction’ experiments. Presently, the phase is reconstructed by iterative algorithms, imposing a non-convex computational challenge, or by Fourier holography, requiring a well-characterized reference field. Here we present a convex scheme for single-shot phase retrieval for two (or more) sufficiently separated objects, demonstrated in two dimensions. In our approach, the objects serve as unknown references to one another, reducing the phase problem to a solvable set of linear equations. We establish our method numerically and experimentally in the optical domain and demonstrate a proof-of-principle single-shot coherent diffractive imaging using X-ray free-electron laser pulses. Lastly, our scheme alleviates several limitations of current methods, offering a new pathway towards direct reconstruction of complex objects.

  7. A stabilized element-based finite volume method for poroelastic problems

    NASA Astrophysics Data System (ADS)

    Honório, Hermínio T.; Maliska, Clovis R.; Ferronato, Massimiliano; Janna, Carlo

    2018-07-01

The coupled equations of Biot's poroelasticity, consisting of stress equilibrium and fluid mass balance in deforming porous media, are solved numerically. The governing partial differential equations are discretized by an Element-based Finite Volume Method (EbFVM), which can be used in three-dimensional unstructured grids composed of elements of different types. One of the difficulties in solving these equations is the numerical pressure instability that can arise when undrained conditions take place. In this paper, a stabilization technique is developed to overcome this problem by employing an interpolation function for displacements that also considers the pressure gradient effect. The interpolation function is obtained by the so-called Physical Influence Scheme (PIS), typically employed for solving incompressible fluid flows governed by the Navier-Stokes equations. Classical problems with analytical solutions, as well as realistic three-dimensional cases, are addressed. The results reveal that the proposed stabilization technique is able to eliminate the spurious pressure instabilities arising under undrained conditions at a low computational cost.

  8. An Object-Oriented Finite Element Framework for Multiphysics Phase Field Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael R Tonks; Derek R Gaston; Paul C Millett

    2012-01-01

The phase field approach is a powerful and popular method for modeling microstructure evolution. In this work, advanced numerical tools are used to create a phase field framework that facilitates rapid model development. This framework, called MARMOT, is based on Idaho National Laboratory's finite element Multiphysics Object-Oriented Simulation Environment. In MARMOT, the system of phase field partial differential equations (PDEs) is solved simultaneously with PDEs describing additional physics, such as solid mechanics and heat conduction, using the Jacobian-Free Newton Krylov method. An object-oriented architecture is created by taking advantage of commonalities in phase field models to facilitate development of new models with very little written code. In addition, MARMOT provides access to mesh and time step adaptivity, reducing the cost of performing simulations with large disparities in both spatial and temporal scales. In this work, phase separation simulations are used to show the numerical performance of MARMOT. Deformation-induced grain growth and void growth simulations are included to demonstrate the multiphysics capability.
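The Jacobian-free Newton-Krylov method mentioned above hinges on one trick: Krylov solvers need only Jacobian-vector products, and a directional finite difference of the residual supplies those without ever assembling the Jacobian. A minimal sketch of that product (generic, not MARMOT/MOOSE code; the example residual is invented):

```python
# Jacobian-free matrix-vector product: J(u) v ~= (F(u + eps*v) - F(u)) / eps,
# so a Krylov solver can run on F alone, with no assembled Jacobian.

def jacobian_vector_product(F, u, v, eps=1e-7):
    n = len(u)
    Fu = F(u)
    u_pert = [u[i] + eps * v[i] for i in range(n)]
    Fp = F(u_pert)
    return [(Fp[i] - Fu[i]) / eps for i in range(n)]

# Example residual F(u) = [u0**2 - 1, u0*u1 - 2]; its exact Jacobian is
# [[2*u0, 0], [u1, u0]], so at u = (2, 3), J @ (1, 0) = (4, 3).
F = lambda u: [u[0] ** 2 - 1.0, u[0] * u[1] - 2.0]
Jv = jacobian_vector_product(F, [2.0, 3.0], [1.0, 0.0])
```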

  9. Generalized Predictive Control of Dynamic Systems with Rigid-Body Modes

    NASA Technical Reports Server (NTRS)

    Kvaternik, Raymond G.

    2013-01-01

    Numerical simulations to assess the effectiveness of Generalized Predictive Control (GPC) for active control of dynamic systems having rigid-body modes are presented. GPC is a linear, time-invariant, multi-input/multi-output predictive control method that uses an ARX model to characterize the system and to design the controller. Although the method can accommodate both embedded (implicit) and explicit feedforward paths for incorporation of disturbance effects, only the case of embedded feedforward in which the disturbances are assumed to be unknown is considered here. Results from numerical simulations using mathematical models of both a free-free three-degree-of-freedom mass-spring-dashpot system and the XV-15 tiltrotor research aircraft are presented. In regulation mode operation, which calls for zero system response in the presence of disturbances, the simulations showed reductions of nearly 100%. In tracking mode operations, where the system is commanded to follow a specified path, the GPC controllers produced the desired responses, even in the presence of disturbances.

  10. A comparison of two closely-related approaches to aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Shubin, G. R.; Frank, P. D.

    1991-01-01

    Two related methods for aerodynamic design optimization are compared. The methods, called the implicit gradient approach and the variational (or optimal control) approach, both attempt to obtain gradients necessary for numerical optimization at a cost significantly less than that of the usual black-box approach that employs finite difference gradients. While the two methods are seemingly quite different, they are shown to differ (essentially) in that the order of discretizing the continuous problem, and of applying calculus, is interchanged. Under certain circumstances, the two methods turn out to be identical. We explore the relationship between these methods by applying them to a model problem for duct flow that has many features in common with transonic flow over an airfoil. We find that the gradients computed by the variational method can sometimes be sufficiently inaccurate to cause the optimization to fail.
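
    The cost difference between the two gradient routes can be seen on a toy problem: the black-box route re-solves the state equation for each finite-difference perturbation of the design variable, while the implicit-gradient route differentiates the discrete state equation once. A hypothetical scalar example, not the paper's duct-flow problem:

```python
import numpy as np

# Toy design problem illustrating the two gradient routes.
# "State equation": u(a) solves u**3 + u = a; cost J(a) = (u(a) - 1)**2.
def solve_state(a, iters=50):
    u = 0.0
    for _ in range(iters):              # Newton on u**3 + u - a = 0
        u -= (u**3 + u - a) / (3*u**2 + 1)
    return u

def cost(a):
    return (solve_state(a) - 1.0)**2

a = 3.0
# Black-box route: finite-difference gradient, one extra state solve
# per perturbation (expensive when each solve is a CFD run).
h = 1e-6
g_fd = (cost(a + h) - cost(a - h)) / (2*h)

# Implicit-gradient route: differentiate the discrete state equation.
# From u**3 + u = a:  du/da = 1/(3u**2 + 1), so dJ/da = 2(u-1)*du/da.
u = solve_state(a)
g_exact = 2*(u - 1.0) / (3*u**2 + 1)

print(g_fd, g_exact)                    # the two agree to several digits
```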

  11. Flux control coefficients determined by inhibitor titration: the design and analysis of experiments to minimize errors.

    PubMed Central

    Small, J R

    1993-01-01

    This paper studies the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and that, under all conditions studied, the fitting method outperformed the graph method, even when the assumptions underlying the fitted function did not hold. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434

  12. Space-time adaptive ADER-DG schemes for dissipative flows: Compressible Navier-Stokes and resistive MHD equations

    NASA Astrophysics Data System (ADS)

    Fambri, Francesco; Dumbser, Michael; Zanotti, Olindo

    2017-11-01

    This paper presents an arbitrary high-order accurate ADER Discontinuous Galerkin (DG) method on space-time adaptive meshes (AMR) for the solution of two important families of non-linear, time-dependent partial differential equations for compressible dissipative flows: the compressible Navier-Stokes equations and the equations of viscous and resistive magnetohydrodynamics in two and three space dimensions. The work continues a recent series of papers concerning the development and application of a proper a posteriori subcell finite volume limiting procedure suitable for discontinuous Galerkin methods (Dumbser et al., 2014, Zanotti et al., 2015 [40,41]). It is a well-known fact that a major weakness of high-order DG methods lies in the difficulty of limiting discontinuous solutions, which generate spurious oscillations, namely the so-called 'Gibbs phenomenon'. In the present work, a nonlinear stabilization of the scheme is sequentially and locally introduced only for troubled cells on the basis of a novel a posteriori detection criterion, i.e. the MOOD approach. The main benefits of the MOOD paradigm, i.e. computational robustness even in the presence of strong shocks, are preserved, and the numerical diffusion is considerably reduced for the limited cells as well by resorting to a proper sub-grid. In practice, the method first produces a so-called candidate solution by using a high-order accurate unlimited DG scheme. Then, a set of numerical and physical detection criteria is applied to the candidate solution, namely: positivity of pressure and density, absence of floating point errors, and satisfaction of a discrete maximum principle in the sense of polynomials. In those cells where at least one of these criteria is violated, the computed candidate solution is flagged as troubled and is locally rejected. 
Subsequently, a more reliable numerical solution is recomputed a posteriori by employing a more robust but still very accurate ADER-WENO finite volume scheme on the subgrid averages within the troubled cell. Finally, a high-order DG polynomial is reconstructed back from the evolved subcell averages. We apply the whole approach for the first time to the equations of compressible gas dynamics and magnetohydrodynamics in the presence of viscosity, thermal conductivity and magnetic resistivity, thereby extending our family of adaptive ADER-DG schemes to cases in which the numerical fluxes also depend on the gradient of the state vector. The high-resolution properties of the presented numerical scheme stand out in a large number of non-trivial test cases for both the compressible Navier-Stokes and the viscous and resistive magnetohydrodynamics equations. The present results show clearly that the shock-capturing capability of the new schemes is significantly enhanced within a cell-by-cell Adaptive Mesh Refinement (AMR) implementation together with time-accurate local time stepping (LTS).

  13. The effects of spatially separated call components on phonotaxis in túngara frogs: evidence for auditory grouping.

    PubMed

    Farris, Hamilton E; Rand, A Stanley; Ryan, Michael J

    2002-01-01

    Numerous animals across disparate taxa must identify and locate complex acoustic signals embedded in multiple overlapping signals and ambient noise. A requirement of this task is the ability to group sounds into auditory streams in which sounds are perceived as emanating from the same source. Although numerous studies over the past 50 years have examined aspects of auditory grouping in humans, surprisingly few assays have demonstrated auditory stream formation or the assignment of multicomponent signals to a single source in non-human animals. In our study, we present evidence for auditory grouping in female túngara frogs. In contrast to humans, in which auditory grouping may be facilitated by the cues produced when sounds arrive from the same location, we show that spatial cues play a limited role in grouping, as females group discrete components of the species' complex call over wide angular separations. Furthermore, we show that, once grouped, the separate call components are weighted differently in recognizing and locating the call, the so-called 'what' and 'where' decisions, respectively. Copyright 2002 S. Karger AG, Basel

  14. Three-Dimensional High-Order Spectral Finite Volume Method for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel; Wang, Z. J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    Many areas require a very high-order accurate numerical solution of conservation laws for complex shapes. This paper deals with the extension to three dimensions of the Spectral Finite Volume (SV) method for unstructured grids, which was developed to solve such problems. We first summarize the limitations of traditional methods, such as finite-difference and finite-volume schemes, for both structured and unstructured grids. We then describe the basic formulation of the spectral finite volume method. What distinguishes the SV method from conventional high-order finite-volume methods for unstructured triangular or tetrahedral grids is the data reconstruction. Instead of using a large stencil of neighboring cells to perform a high-order reconstruction, the stencil is constructed by partitioning each grid cell, called a spectral volume (SV), into 'structured' sub-cells, called control volumes (CVs). One can show that if all the SV cells are partitioned into polygonal or polyhedral CV sub-cells in a geometrically similar manner, the reconstructions for all the SVs become universal, irrespective of their shapes, sizes, orientations, or locations. It follows that the reconstruction reduces to a weighted sum of unknowns involving just a few simple adds and multiplies, and those weights are universal and can be pre-determined once and for all. The method is thus very efficient, accurate, and yet geometrically flexible. The most critical part of the SV method is the partitioning of the SV into CVs. In this paper we present the partitioning of a tetrahedral SV into polyhedral CVs with one free parameter for polynomial reconstructions up to degree of precision five. (Note that the order of accuracy of the method is one order higher than the reconstruction degree of precision.) The free parameter will be determined by minimizing the Lebesgue constant of the reconstruction matrix or similar criteria to obtain optimized partitions. 
The details of an efficient, parallelizable code to solve three-dimensional problems for any order of accuracy are then presented. Important aspects of the data structure are discussed. Comparisons with the Discontinuous Galerkin (DG) method are made. Numerical examples for wave propagation problems are presented.
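
    The 'universal weighted sum' reconstruction has a simple 1D analog: the sub-cell averages on a reference cell determine the polynomial through a small matrix that is computed and inverted once, then reused for every cell. The following sketch uses that 1D simplification; the paper's actual partitions are tetrahedral.

```python
import numpy as np

# 1D analog of the SV reconstruction (illustrative, not the 3D code):
# partition the reference cell [0, 1] into k equal sub-cells (CVs) and
# recover the degree k-1 polynomial from its k sub-cell averages.
k = 3
edges = np.linspace(0.0, 1.0, k + 1)

# A[i, j] = average of x**j over sub-cell i, so that  avg = A @ coeffs
A = np.array([[(edges[i+1]**(j+1) - edges[i]**(j+1))
               / ((j+1) * (edges[i+1] - edges[i]))
               for j in range(k)] for i in range(k)])
W = np.linalg.inv(A)       # universal weights: coeffs = W @ avg, built once

# Check: reconstruct p(x) = 1 + 2x + 3x**2 from its sub-cell averages
coeffs_true = np.array([1.0, 2.0, 3.0])
avg = A @ coeffs_true      # exact sub-cell averages of p
coeffs_rec = W @ avg
print(coeffs_rec)          # recovers [1, 2, 3]
```

Because the partition is geometrically similar on every cell, the same W serves all cells after an affine map, which is what makes the reconstruction a cheap precomputed weighted sum.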

  15. A quantitative comparison of precipitation forecasts between the storm-scale numerical weather prediction model and auto-nowcast system in Jiangsu, China

    NASA Astrophysics Data System (ADS)

    Wang, Gaili; Yang, Ji; Wang, Dan; Liu, Liping

    2016-11-01

    Extrapolation techniques and storm-scale Numerical Weather Prediction (NWP) models are two primary approaches for short-term precipitation forecasts. The primary objective of this study is to verify precipitation forecasts and compare the performances of two nowcasting schemes: the Beijing Auto-Nowcast system (BJ-ANC), based on extrapolation techniques, and a storm-scale NWP model called the Advanced Regional Prediction System (ARPS). The verification and comparison take into account six heavy precipitation events that occurred in the summers of 2014 and 2015 in Jiangsu, China. The forecast performances of the two schemes were evaluated for the next 6 h at 1-h intervals using gridpoint-based measures (critical success index, bias, index of agreement, and root mean square error) and an object-based verification method called the Structure-Amplitude-Location (SAL) score. Regarding gridpoint-based measures, BJ-ANC outperforms ARPS at first, but its forecast accuracy decreases rapidly with lead time and falls below that of ARPS after 4-5 h. Regarding the object-based verification method, most forecasts produced by BJ-ANC focus on the center of the diagram at the 1-h lead time and indicate high-quality forecasts. As the lead time increases, BJ-ANC overestimates precipitation amount and produces widespread precipitation, especially at the 6-h lead time. The ARPS model overestimates precipitation at all lead times, particularly at first.
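
    The gridpoint-based scores mentioned above come from a simple contingency table of threshold exceedances. A sketch with synthetic fields; the threshold and data are illustrative, not the study's:

```python
import numpy as np

# Gridpoint verification sketch: critical success index (CSI) and bias
# from forecast/observation masks exceeding a precipitation threshold.
def csi_and_bias(forecast, observed, threshold):
    f = forecast >= threshold
    o = observed >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    csi = hits / (hits + misses + false_alarms)      # 1 is perfect
    bias = (hits + false_alarms) / (hits + misses)   # 1 is unbiased
    return csi, bias

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 2.0, size=(100, 100))           # synthetic rain field
fcst = obs + rng.normal(0.0, 1.0, size=obs.shape)    # noisy "forecast"
csi, bias = csi_and_bias(fcst, obs, threshold=5.0)
print(csi, bias)
```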

  16. Numerical modeling of the tensile strength of a biological granular aggregate: Effect of the particle size distribution

    NASA Astrophysics Data System (ADS)

    Heinze, Karsta; Frank, Xavier; Lullien-Pellerin, Valérie; George, Matthieu; Radjai, Farhang; Delenne, Jean-Yves

    2017-06-01

    Wheat grains can be considered a natural cemented granular material. They are milled under high forces to produce food products such as flour. The major part of the grain is the so-called starchy endosperm. It contains stiff starch granules, which show a multi-modal size distribution, and a softer protein matrix that surrounds the granules. Experimental milling studies and numerical simulations go hand in hand to better understand the fragmentation behavior of this biological material and to improve milling performance. We present a numerical study of the effect of granule size distribution on the strength of such a cemented granular material. Samples with a bi-modal starch granule size distribution were created and subjected to uniaxial tension using a peridynamics method. We show that, compared to the effects of starch-protein interface adhesion and voids, the granule size distribution has a limited effect on the samples' yield stress.

  17. Solving ODE Initial Value Problems With Implicit Taylor Series Methods

    NASA Technical Reports Server (NTRS)

    Scott, James R.

    2000-01-01

    In this paper we introduce a new class of numerical methods for integrating ODE initial value problems. Specifically, we propose an extension of the Taylor series method which significantly improves its accuracy and stability while also increasing its range of applicability. To advance the solution from t_n to t_(n+1), we expand a series about the intermediate point t_(n+mu) := t_n + mu*h, where h is the stepsize and mu is an arbitrary parameter called an expansion coefficient. We show that, in general, a Taylor series of degree k has exactly k expansion coefficients which raise its order of accuracy. The accuracy is raised by one order if k is odd, and by two orders if k is even. In addition, if k is three or greater, local extrapolation can be used to raise the accuracy two additional orders. We also examine stability for the problem y' = lambda*y, Re(lambda) < 0, and identify several A-stable schemes. Numerical results are presented for both fixed and variable stepsizes. It is shown that implicit Taylor series methods provide an effective integration tool for most problems, including stiff systems and ODEs with a singular point.
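
    For the test equation y' = lambda*y used in the stability analysis, a degree-k Taylor step has a closed form, since every derivative is a power of lambda times y. The sketch below shows the plain explicit case (expansion about t_n, i.e. mu = 0); the paper's implicit variant expands about t_n + mu*h instead.

```python
import numpy as np
from math import factorial

# Taylor series integration sketch for y' = lam*y.
# Since y^(j) = lam**j * y, a degree-k step from t_n is
#   y_{n+1} = y_n * sum_{j=0..k} (lam*h)**j / j!
def taylor_step(y, lam, h, k):
    return y * sum((lam*h)**j / factorial(j) for j in range(k + 1))

lam, h, k = -2.0, 0.1, 4
y = 1.0
for _ in range(10):               # integrate from t = 0 to t = 1
    y = taylor_step(y, lam, h, k)
print(y, np.exp(lam * 1.0))       # numerical vs exact solution exp(-2)
```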

  18. Modal element method for potential flow in non-uniform ducts: Combining closed form analysis with CFD

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.; Baumeister, Joseph F.

    1994-01-01

    An analytical procedure, called the modal element method, is presented that combines numerical grid-based algorithms with eigenfunction expansions developed by separation of variables. A modal element method is presented for solving potential flow in a channel with two-dimensional cylinder-like obstacles. The infinite computational region is divided into three subdomains: the bounded finite element domain containing the cylindrical obstacle, and the surrounding unbounded uniform channel entrance and exit domains. The velocity potential is represented approximately in the grid-based domain by a finite element solution and analytically by an eigenfunction expansion in the uniform semi-infinite entrance and exit domains. The calculated flow fields are in excellent agreement with exact analytical solutions. By eliminating the grid surrounding the obstacle, the modal element method reduces the numerical grid size, employs a more precise far-field boundary condition, and gives theoretical insight into the interaction of the obstacle with the mean flow. Although the analysis focuses on a specific geometry, the formulation is general and can be applied to a variety of problems, as seen by comparison with companion theories in aeroacoustics and electromagnetics.

  19. Characterization of pore structure in cement-based materials using pressurization-depressurization cycling mercury intrusion porosimetry (PDC-MIP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Jian, E-mail: Jian.Zhou@tudelft.n; Ye Guang, E-mail: g.ye@tudelft.n; Magnel Laboratory for Concrete Research, Department of Structural Engineering, Ghent University, Technologiepark-Zwijnaarde 904 B-9052, Ghent

    2010-07-15

    Numerous mercury intrusion porosimetry (MIP) studies have been carried out to investigate the pore structure in cement-based materials. However, standard MIP often results in an underestimation of large pores and an overestimation of small pores because of its intrinsic limitations. In this paper, an innovative MIP method is developed in order to provide a more accurate estimation of pore size distribution. The new MIP measurements are conducted following a unique mercury intrusion procedure, in which the applied pressure is increased from the minimum to the maximum by repeating pressurization-depressurization cycles instead of a continuous pressurization followed by a continuous depressurization. Accordingly, this method is called pressurization-depressurization cycling MIP (PDC-MIP). By following the PDC-MIP testing sequence, the volumes of the throat pores and the corresponding ink-bottle pores can be determined at every pore size. These values are used to calculate the pore size distribution by means of a newly developed analysis method. This paper presents an application of PDC-MIP to the investigation of the pore size distribution in cement-based materials. The experimental results of PDC-MIP are compared with those measured by standard MIP. PDC-MIP is further validated against other experimental methods and a numerical tool, including nitrogen sorption, backscattered electron (BSE) image analysis, Wood's metal intrusion porosimetry (WMIP) and numerical simulation by the cement hydration model HYMOSTRUC3D.

  20. Cooperative quantum-behaved particle swarm optimization with dynamic varying search areas and Lévy flight disturbance.

    PubMed

    Li, Desheng

    2014-01-01

    This paper proposes a novel variant of the cooperative quantum-behaved particle swarm optimization (CQPSO) algorithm, called CQPSO-DVSA-LFD, with two mechanisms to reduce the search space and avoid stagnation. The first mechanism, called Dynamic Varying Search Area (DVSA), restricts the particles' range of activity to a reduced area. The second, intended to help particles escape local optima, uses Lévy flights to generate stochastic disturbances in the movement of particles. To test the performance of CQPSO-DVSA-LFD, numerical experiments are conducted to compare the proposed algorithm with different variants of PSO. According to the experimental results, the proposed method performs better than other variants of PSO on both benchmark test functions and a combinatorial optimization problem, namely the job-shop scheduling problem.
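
    Lévy-flight disturbances of the kind described are commonly generated with Mantegna's algorithm, which produces heavy-tailed steps from two Gaussian draws. A sketch follows; the stability parameter beta and sample sizes are illustrative, and this is not the paper's code.

```python
import numpy as np
from math import gamma, sin, pi

# Lévy-flight step generator (Mantegna's algorithm), of the kind used to
# perturb particle positions so they can jump out of local optima.
def levy_steps(n, beta=1.5, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2**((beta - 1) / 2)))**(1 / beta)
    u = rng.normal(0.0, sigma, n)
    v = rng.normal(0.0, 1.0, n)
    return u / np.abs(v)**(1 / beta)    # heavy-tailed steps

steps = levy_steps(10000, rng=np.random.default_rng(7))
# Heavy tail: occasional steps far larger than the typical step scale
print(np.median(np.abs(steps)), np.max(np.abs(steps)))
```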

  1. Semi-analytical Karhunen-Loeve representation of irregular waves based on the prolate spheroidal wave functions

    NASA Astrophysics Data System (ADS)

    Lee, Gibbeum; Cho, Yeunwoo

    2018-01-01

    A new semi-analytical approach is presented for solving the matrix eigenvalue problem or the integral equation in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of a direct numerical approach to this matrix eigenvalue problem, which may suffer from computational inaccuracy for big data, a pair of integral and differential equations is considered, related to the so-called prolate spheroidal wave functions (PSWF). First, the PSWF is expressed as a summation of a small number of analytical Legendre functions. Substituting this expression into the PSWF differential equation yields a much smaller matrix eigenvalue problem than the direct numerical K-L matrix eigenvalue problem. By solving this with minimal numerical effort, the PSWF and the associated eigenvalue of the PSWF differential equation are obtained. Then, the eigenvalue of the PSWF integral equation is analytically expressed in terms of the functional values of the PSWF and the eigenvalues obtained from the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues of the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data such as ordinary irregular waves. It is found that, at the same accuracy, the required memory size of the present method is smaller than that of the direct numerical K-L representation, and the computation time of the present method is shorter than that of the semi-analytical method based on sinusoidal functions.
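
    The direct numerical K-L representation that the paper improves upon amounts to an eigendecomposition of the sample covariance of the records. A minimal numpy sketch with synthetic wave-like data (the signal model and mode count are illustrative):

```python
import numpy as np

# Direct numerical Karhunen-Loeve sketch (the baseline the paper improves
# upon): eigendecomposition of the sample covariance of an ensemble.
rng = np.random.default_rng(3)
n_t, n_samples = 200, 500
t = np.linspace(0.0, 10.0, n_t)
# Synthetic "irregular wave" records: random-amplitude sum of 3 harmonics
X = sum(rng.standard_normal((n_samples, 1)) * np.sin(w * t)
        + rng.standard_normal((n_samples, 1)) * np.cos(w * t)
        for w in (1.0, 2.3, 3.7))

C = np.cov(X, rowvar=False)            # n_t x n_t sample covariance
eigvals, eigvecs = np.linalg.eigh(C)   # ascending eigenvalue order
order = np.argsort(eigvals)[::-1]      # sort modes by decreasing energy
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

m = 6                                  # 3 harmonics -> 6 significant modes
energy = eigvals[:m].sum() / eigvals.sum()
print(energy)                          # near 1: six modes capture the data
```

For long records this covariance matrix becomes large, which is exactly the accuracy and memory bottleneck the PSWF-based route is designed to avoid.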

  2. Numerical study of effect of the gas-coolant free surface on the droplet fragmentation behavior of coolants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, H.X.; Anh, B.V.; Dinh, T.N.

    1999-07-01

    This paper presents results of a numerical investigation on the behavior of melt drops falling in a gas (vapor) space and then penetrating into a liquid volume through the gas-liquid interface. The phenomenon studied here is, usually, observed when a liquid drop falls through air into a water pool and is, specially, of interest when a hypothetical severe reactor core meltdown accident is considered. The objective of this work is to study the effect of the gas-liquid interface on the dynamic evolution of the interaction area between the fragmenting melt drop and water. In the present study, the Navier-Stokes equationsmore » are solved for three phases (gas, liquid and melt-drop) using a higher-order, explicit, numerical method, called Cubic-Interpolated Pseudo-Particle (CIP) method, which is employed in combination with an advanced front-capturing scheme, named the Level Set Algorithm (LSA). By using this method, reasonable physical pictures of droplet deformation and fragmentation during movement in a stationary uniform water pool, and in a gas-liquid two-layer volume, is simulated. Effect of the gas-liquid interface on the drop deformation and fragmentation is analyzed by comparing the simulation results obtained for the two cases. Effects of the drop geometry, and of the flow conditions, on the behavior of the melt drop are also analyzed.« less

  3. A fast algorithm for forward-modeling of gravitational fields in spherical coordinates with 3D Gauss-Legendre quadrature

    NASA Astrophysics Data System (ADS)

    Zhao, G.; Liu, J.; Chen, B.; Guo, R.; Chen, L.

    2017-12-01

    Forward modeling of gravitational fields at large scale requires considering the curvature of the Earth and evaluating Newton's volume integral in spherical coordinates. To obtain fast and accurate gravitational effects for subsurface structures, the subsurface mass distribution is usually discretized into small spherical prisms (called tesseroids). The gravity fields of tesseroids are generally calculated numerically. One of the commonly used numerical methods is 3D Gauss-Legendre quadrature (GLQ). However, traditional GLQ integration suffers from low computational efficiency and relatively poor accuracy when the observation surface is close to the source region. We developed a fast and high-accuracy 3D GLQ integration based on the equivalence of kernel matrices, adaptive discretization, and parallelization using OpenMP. The kernel-matrix-equivalence strategy increases efficiency and reduces memory consumption by calculating and storing identical matrix elements in each kernel matrix only once. The adaptive discretization strategy is used to improve accuracy. Numerical investigations show that the execution time of the proposed method is reduced by two orders of magnitude compared with the traditional method without these optimizations. High-accuracy results can also be guaranteed no matter how close the computation points are to the source region. In addition, the algorithm reduces the memory requirement by a factor of N compared with the traditional method, where N is the number of discretizations of the source region in the longitudinal direction. This makes large-scale gravity forward modeling and inversion with a fine discretization possible.
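
    The underlying 3D GLQ rule is a tensor product of 1D Gauss-Legendre rules mapped to the integration volume. A minimal sketch on a Cartesian box; the paper works in spherical coordinates with tesseroids, so this only illustrates the quadrature itself.

```python
import numpy as np

# 3D Gauss-Legendre quadrature sketch: integrate f over a box by a
# tensor product of 1D Gauss-Legendre rules.
def glq_3d(f, bounds, n=8):
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    (ax, bx), (ay, by), (az, bz) = bounds
    total = 0.0
    for xi, wx in zip(x, w):
        for yj, wy in zip(x, w):
            for zk, wz in zip(x, w):
                # map reference nodes into the box and accumulate
                px = 0.5*(bx - ax)*xi + 0.5*(bx + ax)
                py = 0.5*(by - ay)*yj + 0.5*(by + ay)
                pz = 0.5*(bz - az)*zk + 0.5*(bz + az)
                total += wx*wy*wz * f(px, py, pz)
    return total * 0.125*(bx - ax)*(by - ay)*(bz - az)   # Jacobian

# Check on a polynomial with a known integral over [-1,1]^3: (2/3)^3 = 8/27
val = glq_3d(lambda x, y, z: x**2 * y**2 * z**2, [(-1, 1)]*3, n=4)
print(val, 8/27)
```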

  4. Clustangles: An Open Library for Clustering Angular Data.

    PubMed

    Sargsyan, Karen; Hua, Yun Hao; Lim, Carmay

    2015-08-24

    Dihedral angles are good descriptors of the numerous conformations visited by large, flexible systems, but their analysis requires directional statistics. No single package exists that collects the various multivariate statistical methods for angular data while accounting for the distinct topology of such data. Here, we present a lightweight, standalone, operating-system-independent package called Clustangles to fill this gap. Clustangles will be useful in analyzing the ever-increasing number of structures in the Protein Data Bank and in clustering the copious conformations from increasingly long molecular dynamics simulations.
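
    The 'distinct topology' of angular data is easy to see: the arithmetic mean of two nearby angles straddling 0° is wildly wrong, while the circular mean taken via unit vectors is not. A small illustration of the principle (not the Clustangles API):

```python
import math

# Circular mean: average the unit vectors (cos a, sin a), then take the
# angle of the resultant, instead of averaging the raw angle values.
def circular_mean_deg(angles_deg):
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(s, c)) % 360.0

angles = [350.0, 10.0]
print(sum(angles) / len(angles))     # 180.0: misleading linear mean
print(circular_mean_deg(angles))     # near 0 (mod 360): correct circular mean
```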

  5. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation.

    PubMed

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

    Recent remarkable advances in computer performance have enabled parameter values to be estimated by the sheer power of numerical computation, the so-called 'Brute force', resulting in high-speed simultaneous estimation of a large number of parameter values. However, these advances have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using the power of symbolic computation, 'Bruno force', named after Bruno Buchberger, who found the Gröbner basis. In the method, objective functions are formulated by combining symbolic computation techniques. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since the equivalent system is frequently composed of large equations, it is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.

  6. Numerical Analyses of Subsoil-structure Interaction in Original Non-commercial Software based on FEM

    NASA Astrophysics Data System (ADS)

    Cajka, R.; Vaskova, J.; Vasek, J.

    2018-04-01

    For decades, attention has been paid to the interaction of foundation structures and subsoil and to the development of interaction models. Given that analytical solutions of subsoil-structure interaction can be deduced only for some simple load shapes, they are increasingly being replaced by numerical solutions (e.g., FEM, the finite element method). Numerical analysis offers greater possibilities for taking into account the real factors involved in subsoil-structure interaction and was also used in this article. This makes it possible to design foundation structures more efficiently while keeping them reliable and secure. Currently there are several software packages that can deal with the interaction of foundations and subsoil. It has been demonstrated that a non-commercial software package called MKPINTER (created by Cajka) provides results appropriately close to actual measured values. In the MKPINTER software, stress-strain analysis of the elastic half-space is carried out by means of Gauss numerical integration and the Jacobian of the transformation. Input data for the numerical analysis were obtained from an experimental loading test of a concrete slab. The loading was performed using unique experimental equipment constructed at the Faculty of Civil Engineering, VŠB-TU Ostrava. The purpose of this paper is to compare the resulting deformation of the slab with values observed during the experimental loading test.
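
    The Gauss-integration step for stresses in an elastic half-space can be illustrated by integrating the classical Boussinesq point-load solution over a uniformly loaded square. This is a standard textbook computation, not the MKPINTER code, and all values are illustrative.

```python
import numpy as np

# Vertical stress at depth z under the center of a uniformly loaded square
# footing, by Gauss-Legendre integration of the Boussinesq point-load
# solution  sigma_z = 3*Q*z**3 / (2*pi*R**5).
def stress_under_center(q, side, z, n=16):
    x, w = np.polynomial.legendre.leggauss(n)
    half = side / 2.0
    xs = half * x                     # map nodes from [-1,1] to [-half,half]
    total = 0.0
    for xi, wx in zip(xs, w):
        for yj, wy in zip(xs, w):
            R = np.sqrt(xi**2 + yj**2 + z**2)
            total += wx * wy * 3.0 * z**3 / (2.0 * np.pi * R**5)
    return q * total * half * half    # Jacobian of the two 1D maps

q, side = 100.0, 2.0                  # kPa, m (illustrative values)
near = stress_under_center(q, side, z=1.0)
far = stress_under_center(q, side, z=40.0)
# Far below, the footing acts like a point load Q = q*side**2:
point = 3.0 * (q * side**2) / (2.0 * np.pi * 40.0**2)
print(near, far, point)
```

The far-field value agreeing with the point-load formula is a quick sanity check that the quadrature and the mapping Jacobian are consistent.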

  7. Spatiotemporal variability and sound characterization in silver croaker Plagioscion squamosissimus (Sciaenidae) in the Central Amazon.

    PubMed

    Borie, Alfredo; Mok, Hin-Kiu; Chao, Ning L; Fine, Michael L

    2014-01-01

    The fish family Sciaenidae has numerous species that produce sounds with superfast muscles that vibrate the swimbladder. These muscles form postembryonically and undergo seasonal hypertrophy-atrophy cycles. The family has been the focus of numerous passive acoustic studies to localize the spatial and temporal occurrence of spawning aggregations. Fishes produce disturbance calls when hand-held, and males form aggregations in late afternoon and produce advertisement calls to attract females for mating. Previous studies on five continents have been confined to temperate species. Here we examine the calls of the silver croaker Plagioscion squamosissimus, a freshwater equatorial species, which experiences constant photoperiod and minimal temperature variation but seasonal changes in water depth and color, pH and conductivity. Dissections indicate that sonic muscles are present exclusively in males and that the muscles are thicker and redder during the mating season. Disturbance calls were recorded from hand-held fish during the low-water mating season and the high-water period outside of the mating season. Advertisement calls were recorded from wild fish that formed aggregations in both periods, but only during the mating season from fish in large cages. Disturbance calls consist of a series of short individual pulses in mature males. Advertisement calls start with single and paired pulses followed by greater-amplitude multi-pulse bursts with higher peak frequencies than in disturbance calls. Advertisement-like calls also occur in aggregations during the off season, but bursts are shorter with fewer pulses. Silver croaker produce complex advertisement calls that vary in amplitude, number of cycles per burst and burst duration. Unlike temperate sciaenids, which only call during the spawning season, silver croaker produce advertisement calls in both seasons. 
Sonic muscles are thinner, and bursts are shorter than at the spawning peak, but males still produce complex calls outside of the mating season.

  8. Symmetry-plane model of 3D Euler flows: Mapping to regular systems and numerical solutions of blowup

    NASA Astrophysics Data System (ADS)

    Mulungye, Rachel M.; Lucas, Dan; Bustamante, Miguel D.

    2014-11-01

    We introduce a family of 2D models describing the dynamics on the so-called symmetry plane of the full 3D Euler fluid equations. These models depend on a free real parameter and can be solved analytically. For selected representative values of the free parameter, we apply the method introduced in [M.D. Bustamante, Physica D: Nonlinear Phenom. 240, 1092 (2011)] to map the fluid equations bijectively to globally regular systems. By comparing the analytical solutions with the results of numerical simulations, we establish that the numerical simulations of the mapped regular systems are far more accurate than the numerical simulations of the original systems, at the same spatial resolution and CPU time. In particular, the numerical integrations of the mapped regular systems produce robust estimates for the growth exponent and singularity time of the main blowup quantity (vorticity stretching rate), converging well to the analytically-predicted values even beyond the time at which the flow becomes under-resolved (i.e. the reliability time). In contrast, direct numerical integrations of the original systems develop unstable oscillations near the reliability time. We discuss the reasons for this improvement in accuracy, and explain how to extend the analysis to the full 3D case. Supported under the programme for Research in Third Level Institutions (PRTLI) Cycle 5 and co-funded by the European Regional Development Fund.

  9. Generalized Differential Calculus and Applications to Optimization

    NASA Astrophysics Data System (ADS)

    Rector, Robert Blake Hayden

    This thesis contains contributions in three areas: the theory of generalized calculus, numerical algorithms for operations research, and applications of optimization to problems in modern electric power systems. A geometric approach is used to advance the theory and tools used for studying generalized notions of derivatives for nonsmooth functions. These advances specifically pertain to methods for calculating subdifferentials and to expanding our understanding of a certain notion of derivative of set-valued maps, called the coderivative, in infinite dimensions. A strong understanding of the subdifferential is essential for numerical optimization algorithms, which are developed and applied to nonsmooth problems in operations research, including non-convex problems. Finally, an optimization framework is applied to solve a problem in electric power systems involving a smart solar inverter and battery storage system providing energy and ancillary services to the grid.

  10. Inferring physical properties of galaxies from their emission-line spectra

    NASA Astrophysics Data System (ADS)

    Ucci, G.; Ferrara, A.; Gallerani, S.; Pallottini, A.

    2017-02-01

We present a new approach based on Supervised Machine Learning algorithms to infer key physical properties of galaxies (density, metallicity, column density and ionization parameter) from their emission-line spectra. We introduce a numerical code (called GAME, GAlaxy Machine learning for Emission lines) implementing this method and test it extensively. GAME delivers excellent predictive performance, especially for estimates of metallicity and column densities. We compare GAME with the most widely used diagnostics (e.g. R23, [N II] λ6584/Hα indicators) showing that it provides much better accuracy and a wider applicability range. GAME is particularly suitable for use in combination with Integral Field Unit spectroscopy, both for rest-frame optical/UV nebular lines and far-infrared/sub-millimeter lines arising from photodissociation regions. Finally, GAME can also be applied to the analysis of synthetic galaxy maps built from numerical simulations.

  11. Ancient Cosmology, superfine structure of the Universe and Anthropological Principle

    NASA Astrophysics Data System (ADS)

    Arakelyan, Hrant; Vardanyan, Susan

    2015-07-01

In its spirit, modern cosmology with its conception of the Big Bang is closer to ancient cosmology than to the cosmological paradigm of the nineteenth century. Repeating the speculations of the ancients, but using subtle mathematical methods and relying on steadily accumulating empirical material, modern theory tends toward a quantitative description of nature in which the numerical ratios between the physical constants play an increasing role. Detailed analysis of the influence of the numerical values of physical quantities on the physical state of the universe has revealed amazing relations called fine and hyperfine tuning. To explain why the observable universe comes with a certain set of interrelated fundamental parameters, a speculative anthropic principle was proposed, which focuses on the fact of the existence of sentient beings.

  12. The MINERVA Software Development Process

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony; Munoz, Cesar A.; Dutle, Aaron M.

    2017-01-01

    This paper presents a software development process for safety-critical software components of cyber-physical systems. The process is called MINERVA, which stands for Mirrored Implementation Numerically Evaluated against Rigorously Verified Algorithms. The process relies on formal methods for rigorously validating code against its requirements. The software development process uses: (1) a formal specification language for describing the algorithms and their functional requirements, (2) an interactive theorem prover for formally verifying the correctness of the algorithms, (3) test cases that stress the code, and (4) numerical evaluation on these test cases of both the algorithm specifications and their implementations in code. The MINERVA process is illustrated in this paper with an application to geo-containment algorithms for unmanned aircraft systems. These algorithms ensure that the position of an aircraft never leaves a predetermined polygon region and provide recovery maneuvers when the region is inadvertently exited.

  13. Generation of dark hollow beams by using a fractional radial Hilbert transform system

    NASA Astrophysics Data System (ADS)

    Xie, Qiansen; Zhao, Daomu

    2007-07-01

The radial Hilbert transform has been extended to the fractional field; the result may be called the fractional radial Hilbert transform (FRHT). Using the edge-enhancement characteristics of this transform, we convert a Gaussian light beam into a variety of dark hollow beams (DHBs). Based on the fact that a hard-edged aperture can be expanded approximately as a finite sum of complex Gaussian functions, the analytical expression for a Gaussian beam passing through an FRHT system has been derived. As a numerical example, the properties of the DHBs with different fractional orders are illustrated graphically. The calculation results obtained by the analytical method and by the integral method are also compared.

  14. Radiative Effects on a Free Convective MHD Flow past a Vertically Inclined Plate with Heat Source and Sink

    NASA Astrophysics Data System (ADS)

    Sambath, P.; Pullepu, Bapuji; Kannan, R. M.

    2018-04-01

The impact of thermal radiation on the unsteady laminar free convective MHD flow of an incompressible viscous fluid past a vertically inclined plate in the presence of a heat source and sink is presented here. The plate surface is considered to have variable wall temperature. The fluid is regarded as a gray, absorbing/emitting but non-scattering medium. The dimensionless boundary-layer equations that govern the flow are solved by an implicit finite difference method called the Crank-Nicolson method. Numerical solutions for the velocity, temperature, local shear stress, and heat transfer rate are presented for various values of the parameters (Pr, λ, Δ, M, Rd).
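As a concrete illustration of the Crank-Nicolson scheme named above, here is a minimal sketch applied to the 1D heat equation with fixed boundaries. This is illustrative only: the paper's coupled MHD boundary-layer system is far richer, and the grid and parameters below are hypothetical.

```python
import math

# Crank-Nicolson for the 1D heat equation u_t = alpha * u_xx on [0, 1]
# with u(0) = u(1) = 0. Illustrative sketch, not the paper's MHD system.

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_step(u, r):
    """One CN step for interior nodes; r = alpha*dt/dx**2, boundaries fixed at 0."""
    n = len(u) - 2                          # number of interior unknowns
    # Explicit half of the scheme forms the right-hand side.
    d = [r / 2 * u[i - 1] + (1 - r) * u[i] + r / 2 * u[i + 1]
         for i in range(1, n + 1)]
    interior = thomas([-r / 2] * n, [1 + r] * n, [-r / 2] * n, d)
    return [0.0] + interior + [0.0]

nx, alpha, dt = 21, 1.0, 1e-3
dx = 1.0 / (nx - 1)
u = [math.sin(math.pi * i * dx) for i in range(nx)]   # initial condition
for _ in range(100):
    u = crank_nicolson_step(u, alpha * dt / dx ** 2)
# The lowest mode decays as exp(-pi**2 * alpha * t), which CN tracks closely.
```

The scheme averages the explicit and implicit updates, which is what makes it second-order in time and unconditionally stable for this equation.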

  15. On the mathematical analysis of Ebola hemorrhagic fever: deathly infection disease in West African countries.

    PubMed

    Atangana, Abdon; Goufo, Emile Franc Doungmo

    2014-01-01

For a given West African country, we constructed a model describing the spread of the deadly disease called Ebola hemorrhagic fever. The model was first constructed using the classical derivative and then converted to a generalized version using the beta-derivative. We studied the endemic equilibrium points in detail and provided the associated eigenvalues using the Jacobian method. We furthered our investigation by solving the model numerically using an iteration method. The simulations were done in terms of time and beta. The study showed that, for a small proportion of infected individuals, the whole country could die out in a very short period of time if there is no good prevention.
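The iterative numerical solution of a compartmental outbreak model can be sketched as follows. This is a generic SIR-type model with purely illustrative parameters, not the paper's beta-derivative Ebola model.

```python
# Forward-Euler iteration of a minimal SIR-type outbreak model (fractions of
# the population). Parameter values are illustrative, not fitted to Ebola data.

def simulate(beta=0.3, gamma=0.1, days=160, dt=1.0, i0=1e-4):
    s, i, r = 1.0 - i0, i0, 0.0
    history = []
    for _ in range(int(days / dt)):
        ds = -beta * s * i               # new infections leave S
        di = beta * s * i - gamma * i    # infections enter I, recoveries leave
        dr = gamma * i                   # recoveries enter R
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        history.append((s, i, r))
    return history

traj = simulate()
peak_infected = max(i for _, i, _ in traj)
final_susceptible = traj[-1][0]
```

With these parameters the basic reproduction number is beta/gamma = 3, so the epidemic peaks sharply and leaves only a small susceptible fraction, mirroring the abstract's warning about unchecked spread.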

  16. Equalizing resolution in smoothed-particle hydrodynamics calculations using self-adaptive sinc kernels

    NASA Astrophysics Data System (ADS)

    García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin

    2014-10-01

Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows increasing the interpolation quality by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes, to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefied zones of fluids while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with a low computational overhead. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.
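A minimal sketch of the sinc kernel family the abstract describes. The dimension-dependent normalisation constant is omitted here, so only the shape controlled by the single exponent n is shown; the exact convention (support radius, argument scaling) is an assumption.

```python
import math

# Un-normalised "sinc" SPH kernel sketch: S_n(q) = sinc(pi*q/2)**n on the
# compact support q in [0, 2]. Raising the single parameter n sharpens the
# kernel, which is how the method tunes interpolation quality.

def sinc_kernel(q, n):
    if q <= 0.0:
        return 1.0            # sinc(0) = 1 by continuity
    if q >= 2.0:
        return 0.0            # compact support
    x = math.pi * q / 2.0
    return (math.sin(x) / x) ** n
```

Because only the exponent changes, a simulation can adapt the kernel particle-by-particle (e.g. sharper in dilute regions) without switching kernel families.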

  17. Unified aeroacoustics analysis for high speed turboprop aerodynamics and noise. Volume 4: Computer user's manual for UAAP turboprop aeroacoustic code

    NASA Astrophysics Data System (ADS)

    Menthe, R. W.; McColgan, C. J.; Ladden, R. M.

    1991-05-01

The Unified AeroAcoustic Program (UAAP) code calculates the airloads on a single rotation prop-fan, or propeller, and couples these airloads with an acoustic radiation theory, to provide estimates of near-field or far-field noise levels. The steady airloads can also be used to calculate the nonuniform velocity components in the propeller wake. The airloads are calculated using a three-dimensional compressible panel method which considers the effects of thin, cambered, multiple blades which may be highly swept. These airloads may be either steady or unsteady. The acoustic model uses the blade thickness distribution and the steady or unsteady aerodynamic loads to calculate the acoustic radiation. The user's manual for the UAAP code is divided into five sections: general code description; input description; output description; system description; and error codes. The user must have access to IMSL10 libraries (MATH and SFUN) for numerous calls made for Bessel functions and matrix inversion. For plotted output, users must modify the dummy calls to plotting routines included in the code to system-specific calls appropriate to the user's installation.

  18. Unified aeroacoustics analysis for high speed turboprop aerodynamics and noise. Volume 4: Computer user's manual for UAAP turboprop aeroacoustic code

    NASA Technical Reports Server (NTRS)

    Menthe, R. W.; Mccolgan, C. J.; Ladden, R. M.

    1991-01-01

The Unified AeroAcoustic Program (UAAP) code calculates the airloads on a single rotation prop-fan, or propeller, and couples these airloads with an acoustic radiation theory, to provide estimates of near-field or far-field noise levels. The steady airloads can also be used to calculate the nonuniform velocity components in the propeller wake. The airloads are calculated using a three-dimensional compressible panel method which considers the effects of thin, cambered, multiple blades which may be highly swept. These airloads may be either steady or unsteady. The acoustic model uses the blade thickness distribution and the steady or unsteady aerodynamic loads to calculate the acoustic radiation. The user's manual for the UAAP code is divided into five sections: general code description; input description; output description; system description; and error codes. The user must have access to IMSL10 libraries (MATH and SFUN) for numerous calls made for Bessel functions and matrix inversion. For plotted output, users must modify the dummy calls to plotting routines included in the code to system-specific calls appropriate to the user's installation.

  19. Stochastic gradient ascent outperforms gamers in the Quantum Moves game

    NASA Astrophysics Data System (ADS)

    Sels, Dries

    2018-04-01

    In a recent work on quantum state preparation, Sørensen and co-workers [Nature (London) 532, 210 (2016), 10.1038/nature17620] explore the possibility of using video games to help design quantum control protocols. The authors present a game called "Quantum Moves" (https://www.scienceathome.org/games/quantum-moves/) in which gamers have to move an atom from A to B by means of optical tweezers. They report that, "players succeed where purely numerical optimization fails." Moreover, by harnessing the player strategies, they can "outperform the most prominent established numerical methods." The aim of this Rapid Communication is to analyze the problem in detail and show that those claims are untenable. In fact, without any prior knowledge and starting from a random initial seed, a simple stochastic local optimization method finds near-optimal solutions which outperform all players. Counterdiabatic driving can even be used to generate protocols without resorting to numeric optimization. The analysis results in an accurate analytic estimate of the quantum speed limit which, apart from zero-point motion, is shown to be entirely classical in nature. The latter might explain why gamers are reasonably good at the game. A simple modification of the BringHomeWater challenge is proposed to test this hypothesis.
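The "simple stochastic local optimization method" the abstract invokes can be sketched as follows: perturb one segment of a piecewise-constant control protocol at random and keep the change only if the objective improves. The toy objective below is a hypothetical stand-in; the paper's actual figure of merit is the quantum state-transfer fidelity.

```python
import random

# Stochastic local ascent over a piecewise-constant control protocol,
# starting from a random initial seed (sketch; toy objective is hypothetical).

def toy_objective(p):
    # Penalise roughness and pin the endpoints near 0 and 1 (illustrative).
    rough = sum((a - b) ** 2 for a, b in zip(p, p[1:]))
    return -(rough + p[0] ** 2 + (p[-1] - 1.0) ** 2)

def stochastic_ascent(n_seg=20, iters=2000, sigma=0.1, seed=0):
    rng = random.Random(seed)
    p = [rng.uniform(-1.0, 1.0) for _ in range(n_seg)]   # random initial seed
    best = init = toy_objective(p)
    for _ in range(iters):
        k = rng.randrange(n_seg)           # perturb one control segment
        old = p[k]
        p[k] += rng.gauss(0.0, sigma)
        cand = toy_objective(p)
        if cand >= best:
            best = cand                    # keep improving moves
        else:
            p[k] = old                     # revert worsening ones
    return p, best, init

protocol, best, init = stochastic_ascent()
```

The point made in the abstract is exactly this: such a loop, with no prior knowledge beyond the objective itself, already finds near-optimal protocols.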

  20. An arbitrary high-order Discontinuous Galerkin method for elastic waves on unstructured meshes - III. Viscoelastic attenuation

    NASA Astrophysics Data System (ADS)

    Käser, Martin; Dumbser, Michael; de la Puente, Josep; Igel, Heiner

    2007-01-01

    We present a new numerical method to solve the heterogeneous anelastic, seismic wave equations with arbitrary high order accuracy in space and time on 3-D unstructured tetrahedral meshes. Using the velocity-stress formulation provides a linear hyperbolic system of equations with source terms that is completed by additional equations for the anelastic functions including the strain history of the material. These additional equations result from the rheological model of the generalized Maxwell body and permit the incorporation of realistic attenuation properties of viscoelastic material accounting for the behaviour of elastic solids and viscous fluids. The proposed method combines the Discontinuous Galerkin (DG) finite element (FE) method with the ADER approach using Arbitrary high order DERivatives for flux calculations. The DG approach, in contrast to classical FE methods, uses a piecewise polynomial approximation of the numerical solution which allows for discontinuities at element interfaces. Therefore, the well-established theory of numerical fluxes across element interfaces obtained by the solution of Riemann problems can be applied as in the finite volume framework. The main idea of the ADER time integration approach is a Taylor expansion in time in which all time derivatives are replaced by space derivatives using the so-called Cauchy-Kovalewski procedure which makes extensive use of the governing PDE. Due to the ADER time integration technique the same approximation order in space and time is achieved automatically and the method is a one-step scheme advancing the solution for one time step without intermediate stages. To this end, we introduce a new unrolled recursive algorithm for efficiently computing the Cauchy-Kovalewski procedure by making use of the sparsity of the system matrices. 
The numerical convergence analysis demonstrates that the new schemes provide very high order accuracy even on unstructured tetrahedral meshes while computational cost and storage space for a desired accuracy can be reduced when applying higher degree approximation polynomials. In addition, we investigate the increase in computing time when the number of relaxation mechanisms of the generalized Maxwell body is increased. An application to a well-acknowledged test case and comparisons with analytic and reference solutions, obtained by different well-established numerical methods, confirm the performance of the proposed method. Therefore, the development of the highly accurate ADER-DG approach for tetrahedral meshes including viscoelastic material provides a novel, flexible and efficient numerical technique to approach 3-D wave propagation problems including realistic attenuation and complex geometry.
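The Cauchy-Kovalewski idea of trading time derivatives for space derivatives can be sketched on the simplest hyperbolic model, the linear advection equation u_t + a u_x = 0, where d^k u/dt^k = (-a)^k d^k u/dx^k. This sketch uses centred differences on a periodic grid; the paper applies the procedure to the full viscoelastic system with DG polynomials, so everything below is illustrative.

```python
import math

# One-step Cauchy-Kovalewski (ADER-style) time integration for u_t + a*u_x = 0:
# a Taylor series in time whose time derivatives are replaced by space
# derivatives via the governing PDE.

def ck_step(u, a, dt, dx, order=3):
    n = len(u)
    term = u[:]                             # k = 0 term of the Taylor series
    unew = u[:]
    for k in range(1, order + 1):
        # next spatial derivative by centred differences (periodic grid)
        term = [(term[(i + 1) % n] - term[(i - 1) % n]) / (2 * dx)
                for i in range(n)]
        coeff = (-a) ** k * dt ** k / math.factorial(k)
        unew = [unew[i] + coeff * term[i] for i in range(n)]
    return unew

n, a = 64, 1.0
dx = 1.0 / n
dt = 0.4 * dx / a                           # CFL-limited step
u = [math.sin(2 * math.pi * i * dx) for i in range(n)]
for _ in range(80):                         # advect to t = 0.5
    u = ck_step(u, a, dt, dx)
# Exact solution at t = 0.5 is sin(2*pi*(x - 0.5)) = -sin(2*pi*x).
```

Because the whole update is a single Taylor evaluation, the same order is achieved in space and time with no intermediate Runge-Kutta stages, which is the property the abstract highlights.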

  1. 48 CFR 204.7004 - Supplementary PII numbers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... as follows: Normal modification Provisioned items order (reserved for exclusive use by the Air Force... supplementary number will be ARZ998, and on down as needed. (6) Each office authorized to issue modifications...) Modifications to calls or orders. Use a two position alpha-numeric suffix, known as a call or order modification...

  2. 48 CFR 204.7004 - Supplementary PII numbers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... as follows: Normal modification Provisioned items order (reserved for exclusive use by the Air Force... supplementary number will be ARZ998, and on down as needed. (6) Each office authorized to issue modifications...) Modifications to calls or orders. Use a two position alpha-numeric suffix, known as a call or order modification...

  3. Enhancing the Design and Analysis of Flipped Learning Strategies

    ERIC Educational Resources Information Center

    Jenkins, Martin; Bokosmaty, Rena; Brown, Melanie; Browne, Chris; Gao, Qi; Hanson, Julie; Kupatadze, Ketevan

    2017-01-01

    There are numerous calls in the literature for research into the flipped learning approach to match the flood of popular media articles praising its impact on student learning and educational outcomes. This paper addresses those calls by proposing pedagogical strategies that promote active learning in "flipped" approaches and improved…

  4. Effect Sizes in Gifted Education Research

    ERIC Educational Resources Information Center

    Gentry, Marcia; Peters, Scott J.

    2009-01-01

    Recent calls for reporting and interpreting effect sizes have been numerous, with the 5th edition of the "Publication Manual of the American Psychological Association" (2001) calling for the inclusion of effect sizes to interpret quantitative findings. Many top journals have required that effect sizes accompany claims of statistical significance.…

  5. On modelling three-dimensional piezoelectric smart structures with boundary spectral element method

    NASA Astrophysics Data System (ADS)

    Zou, Fangxin; Aliabadi, M. H.

    2017-05-01

The computational efficiency of the boundary element method in elastodynamic analysis can be significantly improved by employing high-order spectral elements for boundary discretisation. In this work, for the first time, the so-called boundary spectral element method is utilised to model the piezoelectric smart structures that are widely used in structural health monitoring (SHM) applications. The resultant boundary spectral element formulation has been validated against the finite element method (FEM) and physical experiments. The new formulation has demonstrated a lower demand on computational resources and a higher numerical stability than commercial FEM packages. Compared with the conventional boundary element formulation, a significant reduction in computational expense has been achieved. In summary, the boundary spectral element formulation presented in this paper provides a highly efficient and stable mathematical tool for the development of SHM applications.

  6. Joint multifractal analysis based on wavelet leaders

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Yang, Yan-Hong; Wang, Gang-Jin; Zhou, Wei-Xing

    2017-12-01

    Mutually interacting components form complex systems and these components usually have long-range cross-correlated outputs. Using wavelet leaders, we propose a method for characterizing the joint multifractal nature of these long-range cross correlations; we call this method joint multifractal analysis based on wavelet leaders (MF-X-WL). We test the validity of the MF-X-WL method by performing extensive numerical experiments on dual binomial measures with multifractal cross correlations and bivariate fractional Brownian motions (bFBMs) with monofractal cross correlations. Both experiments indicate that MF-X-WL is capable of detecting cross correlations in synthetic data with acceptable estimating errors. We also apply the MF-X-WL method to pairs of series from financial markets (returns and volatilities) and online worlds (online numbers of different genders and different societies) and determine intriguing joint multifractal behavior.

  7. Numerical Homogenization of Jointed Rock Masses Using Wave Propagation Simulation

    NASA Astrophysics Data System (ADS)

    Gasmi, Hatem; Hamdi, Essaïeb; Bouden Romdhane, Nejla

    2014-07-01

    Homogenization in fractured rock analyses is essentially based on the calculation of equivalent elastic parameters. In this paper, a new numerical homogenization method that was programmed by means of a MATLAB code, called HLA-Dissim, is presented. The developed approach simulates a discontinuity network of real rock masses based on the International Society of Rock Mechanics (ISRM) scanline field mapping methodology. Then, it evaluates a series of classic joint parameters to characterize density (RQD, specific length of discontinuities). A pulse wave, characterized by its amplitude, central frequency, and duration, is propagated from a source point to a receiver point of the simulated jointed rock mass using a complex recursive method for evaluating the transmission and reflection coefficient for each simulated discontinuity. The seismic parameters, such as delay, velocity, and attenuation, are then calculated. Finally, the equivalent medium model parameters of the rock mass are computed numerically while taking into account the natural discontinuity distribution. This methodology was applied to 17 bench fronts from six aggregate quarries located in Tunisia, Spain, Austria, and Sweden. It allowed characterizing the rock mass discontinuity network, the resulting seismic performance, and the equivalent medium stiffness. The relationship between the equivalent Young's modulus and rock discontinuity parameters was also analyzed. For these different bench fronts, the proposed numerical approach was also compared to several empirical formulas, based on RQD and fracture density values, published in previous research studies, showing its usefulness and efficiency in estimating rapidly the Young's modulus of equivalent medium for wave propagation analysis.

  8. Development and Application of a Numerical Framework for Improving Building Foundation Heat Transfer Calculations

    NASA Astrophysics Data System (ADS)

    Kruis, Nathanael J. F.

    Heat transfer from building foundations varies significantly in all three spatial dimensions and has important dynamic effects at all timescales, from one hour to several years. With the additional consideration of moisture transport, ground freezing, evapotranspiration, and other physical phenomena, the estimation of foundation heat transfer becomes increasingly sophisticated and computationally intensive to the point where accuracy must be compromised for reasonable computation time. The tools currently available to calculate foundation heat transfer are often either too limited in their capabilities to draw meaningful conclusions or too sophisticated to use in common practices. This work presents Kiva, a new foundation heat transfer computational framework. Kiva provides a flexible environment for testing different numerical schemes, initialization methods, spatial and temporal discretizations, and geometric approximations. Comparisons within this framework provide insight into the balance of computation speed and accuracy relative to highly detailed reference solutions. The accuracy and computational performance of six finite difference numerical schemes are verified against established IEA BESTEST test cases for slab-on-grade heat conduction. Of the schemes tested, the Alternating Direction Implicit (ADI) scheme demonstrates the best balance between accuracy, performance, and numerical stability. Kiva features four approaches of initializing soil temperatures for an annual simulation. A new accelerated initialization approach is shown to significantly reduce the required years of presimulation. Methods of approximating three-dimensional heat transfer within a representative two-dimensional context further improve computational performance. A new approximation called the boundary layer adjustment method is shown to improve accuracy over other established methods with a negligible increase in computation time. 
This method accounts for the reduced heat transfer from concave foundation shapes, which has not been adequately addressed to date. Within the Kiva framework, three-dimensional heat transfer that can require several days to simulate is approximated in two-dimensions in a matter of seconds while maintaining a mean absolute deviation within 3%.
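The Alternating Direction Implicit (ADI) scheme singled out above can be sketched on the plain 2D heat equation: each step is split into two half-steps, implicit in one coordinate direction at a time, so only tridiagonal systems are solved. This is the textbook Peaceman-Rachford form on a unit square with zero boundaries, not Kiva's actual soil-domain formulation; grid and parameters are illustrative.

```python
import math

# Peaceman-Rachford ADI sketch for u_t = alpha*(u_xx + u_yy) on the unit
# square with zero Dirichlet boundaries. r = alpha*dt/h**2.

def thomas(a, b, c, d):
    """Tridiagonal solve: a = sub-, b = main, c = super-diagonal, d = rhs."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    n = len(u)
    m = n - 2                                # interior unknowns per line

    def half_sweep(v, implicit_first_index):
        """Implicit along one index, explicit along the other (factor r/2)."""
        out = [[0.0] * n for _ in range(n)]  # boundaries stay at zero
        for j in range(1, n - 1):
            if implicit_first_index:
                rhs = [v[i][j] + r / 2 * (v[i][j - 1] - 2 * v[i][j] + v[i][j + 1])
                       for i in range(1, n - 1)]
            else:
                rhs = [v[j][i] + r / 2 * (v[j - 1][i] - 2 * v[j][i] + v[j + 1][i])
                       for i in range(1, n - 1)]
            sol = thomas([-r / 2] * m, [1 + r] * m, [-r / 2] * m, rhs)
            for i in range(1, n - 1):
                if implicit_first_index:
                    out[i][j] = sol[i - 1]
                else:
                    out[j][i] = sol[i - 1]
        return out

    return half_sweep(half_sweep(u, True), False)

n, alpha, dt = 21, 1.0, 1e-3
h = 1.0 / (n - 1)
u = [[math.sin(math.pi * i * h) * math.sin(math.pi * j * h) for j in range(n)]
     for i in range(n)]
for _ in range(50):
    u = adi_step(u, alpha * dt / h ** 2)
# The lowest mode decays as exp(-2*pi**2*alpha*t).
```

The appeal of ADI, and plausibly why it won the comparison in this work, is that it keeps the unconditional stability of an implicit method while the per-step cost stays linear in the number of nodes.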

  9. Multi-beamlet investigation of the deflection compensation methods of SPIDER beamlets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baltador, C., E-mail: carlo.baltador@igi.cnr.it; Veltri, P.; Agostinetti, P.

    2016-02-15

SPIDER (Source for Production of Ions of Deuterium Extracted from a Rf plasma) is an ion source test bed designed to extract and accelerate a negative ion current up to 40 A and 100 kV whose first beam is expected by the end of 2016. Two main effects perturb beamlet optics during the acceleration stage: space charge repulsion and the deflection induced by the permanent magnets (called co-extracted electron suppression magnets) embedded in the EG. The purpose of this work is to evaluate and compare benefits, collateral effects, and limitations of electrical and magnetic compensation methods for beamlet deflection. The study of these methods has been carried out by means of numerical modeling tools: multi-beamlet simulations have been performed for the first time.

  10. Multi-beamlet investigation of the deflection compensation methods of SPIDER beamlets

    NASA Astrophysics Data System (ADS)

    Baltador, C.; Veltri, P.; Agostinetti, P.; Chitarin, G.; Serianni, G.

    2016-02-01

    SPIDER (Source for Production of Ions of Deuterium Extracted from a Rf plasma) is an ion source test bed designed to extract and accelerate a negative ion current up to 40 A and 100 kV whose first beam is expected by the end of 2016. Two main effects perturb beamlet optics during the acceleration stage: space charge repulsion and the deflection induced by the permanent magnets (called co-extracted electron suppression magnets) embedded in the EG. The purpose of this work is to evaluate and compare benefits, collateral effects, and limitations of electrical and magnetic compensation methods for beamlet deflection. The study of these methods has been carried out by means of numerical modeling tools: multi-beamlet simulations have been performed for the first time.

  11. Cooperative Quantum-Behaved Particle Swarm Optimization with Dynamic Varying Search Areas and Lévy Flight Disturbance

    PubMed Central

    Li, Desheng

    2014-01-01

This paper proposes a novel variant of the cooperative quantum-behaved particle swarm optimization (CQPSO) algorithm, called CQPSO-DVSA-LFD, with two mechanisms to reduce the search space and avoid stagnation. The first mechanism, called Dynamic Varying Search Area (DVSA), limits the range of the particles' activity to a reduced area. The second uses Lévy flights to generate stochastic disturbances in the movement of the particles, in order to escape local optima. To test the performance of CQPSO-DVSA-LFD, numerical experiments are conducted comparing the proposed algorithm with different variants of PSO. According to the experimental results, the proposed method performs better than other PSO variants both on benchmark test functions and on a combinatorial optimization problem, namely the job-shop scheduling problem. PMID:24851085
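Lévy flight disturbances of the kind mentioned above are commonly generated with Mantegna's algorithm, sketched below. The paper's exact update rule may differ; this shows only the heavy-tailed step-length generator.

```python
import math
import random

# Mantegna's algorithm for Levy-stable step lengths with tail index beta:
# step = u / |v|**(1/beta), with u ~ N(0, sigma_u**2) and v ~ N(0, 1).

def levy_step(beta=1.5, rng=random):
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

rng = random.Random(42)
steps = [levy_step(rng=rng) for _ in range(10000)]
# Most steps are small, but the power-law tail occasionally produces very
# long jumps -- exactly the property that lets a swarm escape local optima.
```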

  12. Psychometrics Matter in Health Behavior: A Long-term Reliability Generalization Study.

    PubMed

    Pickett, Andrew C; Valdez, Danny; Barry, Adam E

    2017-09-01

    Despite numerous calls for increased understanding and reporting of reliability estimates, social science research, including the field of health behavior, has been slow to respond and adopt such practices. Therefore, we offer a brief overview of reliability and common reporting errors; we then perform analyses to examine and demonstrate the variability of reliability estimates by sample and over time. Using meta-analytic reliability generalization, we examined the variability of coefficient alpha scores for a well-designed, consistent, nationwide health study, covering a span of nearly 40 years. For each year and sample, reliability varied. Furthermore, reliability was predicted by a sample characteristic that differed among age groups within each administration. We demonstrated that reliability is influenced by the methods and individuals from which a given sample is drawn. Our work echoes previous calls that psychometric properties, particularly reliability of scores, are important and must be considered and reported before drawing statistical conclusions.
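Coefficient alpha, the reliability estimate whose sample-to-sample variability the study examines, is straightforward to compute from raw item scores. A minimal sketch (population variances; conventions such as sample vs. population variance vary across software):

```python
# Cronbach's (coefficient) alpha from raw scores.
# items: one list of respondent scores per item (all the same length).

def cronbach_alpha(items):
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents

    def var(xs):                        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - item_var_sum / var(totals))
```

Because alpha depends on the covariances among items *within the sample*, it is a property of the scores, not of the instrument, which is precisely the study's point about re-estimating it for every sample.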

  13. Tempered fractional calculus

    NASA Astrophysics Data System (ADS)

    Sabzikar, Farzad; Meerschaert, Mark M.; Chen, Jinghua

    2015-07-01

    Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered fractional difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.
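The tempered fractional difference mentioned at the end of the abstract is built from Grünwald-Letnikov weights damped by an exponential factor. A sketch under the usual convention w_j = (-1)^j * binom(alpha, j), each weight tempered by exp(-lambda*j*h); sign and scaling conventions vary across the literature.

```python
import math

# Tempered Grunwald-Letnikov weights: g_j = (-1)**j * binom(alpha, j) * exp(-lam*j*h).
# Their generating function is sum_j g_j z**j = (1 - z)**alpha at z = exp(-lam*h),
# so the weights of a pure (untempered) difference sum toward zero.

def tempered_gl_weights(alpha, lam, h, n):
    w = [1.0]                                       # w_0 = 1
    for j in range(1, n):
        w.append(w[-1] * (j - 1 - alpha) / j)       # recurrence for (-1)^j binom(alpha, j)
    return [w[j] * math.exp(-lam * j * h) for j in range(n)]
```

For alpha = 1 and no tempering this reduces to the ordinary first difference [1, -1, 0, ...]; with tempering the weight sum converges geometrically to (1 - exp(-lam*h))**alpha, which is what makes tempered schemes numerically well behaved.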

  14. TEMPERED FRACTIONAL CALCULUS.

    PubMed

    Meerschaert, Mark M; Sabzikar, Farzad; Chen, Jinghua

    2015-07-15

    Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.

  15. TEMPERED FRACTIONAL CALCULUS

    PubMed Central

    MEERSCHAERT, MARK M.; SABZIKAR, FARZAD; CHEN, JINGHUA

    2014-01-01

    Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series. PMID:26085690

  16. Tempered fractional calculus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabzikar, Farzad, E-mail: sabzika2@stt.msu.edu; Meerschaert, Mark M., E-mail: mcubed@stt.msu.edu; Chen, Jinghua, E-mail: cjhdzdz@163.com

    2015-07-15

    Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered fractional difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.
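As an illustration of the tempered difference that both records describe, the sketch below builds exponentially tempered Grünwald-Letnikov weights and applies them at a single grid point. This is a minimal Python illustration under the standard definitions, not code from the paper:

```python
import math

def gl_tempered_weights(alpha, lam, h, n):
    """Tempered Grunwald-Letnikov weights: (-1)^k C(alpha, k) * exp(-lam*k*h)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)  # recursion for (-1)^k C(alpha, k)
    return [wk * math.exp(-lam * k * h) for k, wk in enumerate(w)]

def tempered_difference(f_vals, alpha, lam, h):
    """Tempered GL difference quotient evaluated at the last grid point of f_vals."""
    w = gl_tempered_weights(alpha, lam, h, len(f_vals) - 1)
    return sum(wk * f_vals[-1 - k] for k, wk in enumerate(w)) / h ** alpha
```

A convenient sanity check: for alpha = 1 and tempering lam = 0 the weights collapse to (1, -1, 0, ...), recovering the ordinary first-difference quotient.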

  17. A three-dimensional numerical simulation of cell behavior in a flow chamber based on fluid-solid interaction.

    PubMed

    Bai, Long; Cui, Yuhong; Zhang, Yixia; Zhao, Na

    2014-01-01

    The mechanical behavior of blood cells in the vessels has a close relationship with the physical characteristics of the blood and the cells. In this paper, a numerical simulation method based on fluid-solid interaction is proposed to understand a single blood cell's behavior in the vessels; the simulations were run with both an adaptive time step and a fixed time step. The main program was written in C++, which called the FLUENT and ANSYS software, while UDF and APDL scripts acted as messengers connecting FLUENT and ANSYS for data exchange. The computed results show that: (1) the blood cell initially moved towards the bottom of the flow chamber under the influence of gravity, then began to jump up once it reached a certain height rather than touching the bottom; after jumping up it could move downwards again, and the blood cell could continue moving in this dancing-like manner in the vessels; (2) the blood cell was rolling and deforming all the time; its rotation oscillated and its deformation became conspicuous while the cell was dancing. This new simulation method and its results can be widely used in research on cytology, blood, cells, etc.

  18. Exchange inlet optimization by genetic algorithm for improved RBCC performance

    NASA Astrophysics Data System (ADS)

    Chorkawy, G.; Etele, J.

    2017-09-01

    A genetic algorithm based on real parameter representation using a variable selection pressure and variable probability of mutation is used to optimize an annular air breathing rocket inlet called the Exchange Inlet. A rapid and accurate design method which provides estimates for air breathing, mixing, and isentropic flow performance is used as the engine of the optimization routine. Comparison to detailed numerical simulations shows that the design method yields desired exit Mach numbers to within approximately 1% over 75% of the annular exit area and predicts entrained air massflows to within 1% to 9% of numerically simulated values depending on the flight condition. Optimum designs are shown to be obtained within approximately 8000 fitness function evaluations in a search space on the order of 10^6. The method is also shown to be able to identify beneficial values for particular alleles when they exist, while handling cases where physical and aphysical designs co-exist at particular values of a subset of alleles within a gene. For an air breathing engine based on a hydrogen fuelled rocket, an exchange inlet is designed which yields a predicted air entrainment ratio within 95% of the theoretical maximum.
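The GA machinery described (real-parameter representation, selection pressure, variable mutation probability) can be sketched generically. The stand-in sphere objective, decaying mutation schedule, and truncation selection below are illustrative assumptions, not the Exchange Inlet optimizer:

```python
import random

def genetic_minimize(f, bounds, pop=30, gens=60, seed=0):
    """Real-parameter GA with a generation-dependent mutation probability."""
    rng = random.Random(seed)
    population = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for g in range(gens):
        p_mut = 0.4 * (1 - g / gens) + 0.05          # mutation rate decays over generations
        elite = sorted(population, key=f)[: pop // 2]  # truncation selection (keeps the best)
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < p_mut:               # clamped Gaussian mutation
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        population = elite + children
    return min(population, key=f)

best = genetic_minimize(lambda v: sum(x * x for x in v), [(-5.0, 5.0)] * 3)
```

Because the elite half survives each generation, the best fitness is non-increasing, a common safeguard in real-parameter GAs.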

  19. Quantitative method of medication system interface evaluation.

    PubMed

    Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F

    2007-01-01

    The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of estimated failure rates provided quantitative data for fault analysis. Authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.

  20. Sheet metals characterization using the virtual fields method

    NASA Astrophysics Data System (ADS)

    Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice

    2018-05-01

    In this work, a characterisation method involving a deep-notched specimen subjected to tensile loading is introduced. This specimen leads to heterogeneous states of stress and strain, the latter being measured using a stereo DIC system (MatchID). This heterogeneity enables the identification of multiple material parameters in a single test. In order to identify material parameters from the DIC data, an inverse method called the Virtual Fields Method is employed. The method, combined with recently developed sensitivity-based virtual fields, makes it possible to optimally locate the areas in the test where information about each material parameter is encoded, improving the accuracy of the identification over traditional user-defined virtual fields. It is shown that a single test performed at 45° to the rolling direction is sufficient to obtain all anisotropic plastic parameters, thus reducing the experimental effort involved in characterisation. The paper presents the methodology and some numerical validation.

  1. 17 CFR 17.00 - Information to be furnished by futures commission merchants, clearing members and foreign brokers.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 2 AN Exchange Code. 30 1 AN Put or Call. 31 5 AN Commodity Code (1). 36 8 AN Expiration Date (1). 44... Commodity Code (2). 71 8 AN Expiration Date (2). 79 2 Reserved. 80 1 AN Record Type. 1 AN—Alpha—numeric, N—Numeric, S—Signed numeric. (2) Field definitions are as follows: (i) Report type. This report format will...

  2. High mobility of large mass movements: a study by means of FEM/DEM simulations

    NASA Astrophysics Data System (ADS)

    Manzella, I.; Lisjak, A.; Grasselli, G.

    2013-12-01

    Large mass movements, such as rock avalanches and large volcanic debris avalanches, are characterized by extremely long propagation, which cannot be modelled using a normal sliding friction law. For this reason several studies and theories, derived from field observations, physical theories and laboratory experiments, exist that try to explain their high mobility. In order to investigate in more depth some of the processes invoked by these theories, simulations have been run with a new numerical tool called Y-GUI, based on the combined Finite Element-Discrete Element Method (FEM/DEM). The FEM/DEM method is a numerical technique developed by Munjiza et al. (1995) in which Discrete Element Method (DEM) algorithms are used to model the interaction between different solids, while Finite Element Method (FEM) principles are used to analyze their deformability, being also able to explicitly simulate a material's sudden loss of cohesion (i.e. brittle failure). In particular, numerical tests have been run inspired by the small-scale experiments of Manzella and Labiouse (2013). They consist of rectangular blocks released on a slope; each block is a rectangular discrete element made of a mesh of finite elements that are allowed to fragment. These simulations have highlighted the influence on the propagation of block packing, i.e. whether the elements are piled into an ordered geometric structure before failure or arranged chaotically as a loose material, and of the topography, i.e. whether the slope break is smooth and regular or not. In addition, the effect of fracturing, i.e. fragmentation, on the total runout has been studied and highlighted.

  3. Application of a quick-freezing and deep-etching method to pathological diagnosis: a case of elastofibroma.

    PubMed

    Hemmi, Akihiro; Tabata, Masahiko; Homma, Taku; Ohno, Nobuhiko; Terada, Nobuo; Fujii, Yasuhisa; Ohno, Shinichi; Nemoto, Norimichi

    2006-04-01

    A case of elastofibroma in a middle-aged Japanese woman was examined by the quick-freezing and deep-etching (QF-DE) method, as well as by immunohistochemistry and conventional electron microscopy. The slowly growing tumor developed at the right scapular region and was composed of fibrous connective tissue with unique elastic materials called elastofibroma fibers. A normal elastic fiber consists of a central core and peripheral zone, in which the latter has small aggregates of 10 nm microfibrils. By the QF-DE method, globular structures consisting of numerous fibrils (5-20 nm in width) were observed between the collagen bundles. We could confirm that they were microfibril-rich peripheral zones of elastofibroma fibers by comparing the replica membrane and conventional electron microscopy. One of the characteristics of elastofibroma fibers is that they are assumed to contain numerous microfibrils. Immunohistochemically, spindle tumor cells showed positive immunoreaction for vimentin, whereas alpha-smooth muscle actin, desmin, S-100 protein and CD34 showed negative immunoreaction. By conventional electron microscopy, the tumor cell had thin cytoplasmic processes, pinocytotic vesicles and prominent rough endoplasmic reticulum. Abundant intracytoplasmic filaments were observed in some tumor cells. Thick lamina-like structures along with their inner nuclear membrane were often observed in the tumor cell nuclei. The whole image of the tumor cell was considered to be a periosteal-derived cell, which would produce numerous microfibrils in the peripheral zone of elastofibroma fibers. This study indicated that the QF-DE method could be applied to the pathological diagnosis and analysis of pathomechanism, even for surgical specimens obtained from a patient.

  4. A Case of Reform: The Undergraduate Research Collaboratives

    ERIC Educational Resources Information Center

    Horsch, Elizabeth; St. John, Mark; Christensen, Ronald L.

    2012-01-01

    Despite numerous calls for reform, the early chemistry experience for most college students has remained unchanged for decades. In 2004 the National Science Foundation (NSF) issued a call for proposals to create new models of chemical education that would infuse authentic research into the early stages of a student's college experience. Under this…

  5. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.

  6. Biclustering Learning of Trading Rules.

    PubMed

    Huang, Qinghua; Wang, Ting; Tao, Dacheng; Li, Xuelong

    2015-10-01

    Technical analysis with numerous indicators and patterns has been regarded as important evidence for making trading decisions in financial markets. However, it is extremely difficult for investors to find useful trading rules based on numerous technical indicators. This paper innovatively proposes the use of biclustering mining to discover effective technical trading patterns that contain a combination of indicators from historical financial data series. This is the first attempt to use a biclustering algorithm on trading data. The mined patterns are regarded as trading rules and can be classified as three trading actions (i.e., the buy, the sell, and no-action signals) with respect to the maximum support. A modified K-nearest-neighbor (K-NN) method is applied to the classification of trading days in the testing period. The proposed method [called biclustering algorithm and K-nearest neighbor (BIC-K-NN)] was implemented on four historical datasets and the average performance was compared with the conventional buy-and-hold strategy and three previously reported intelligent trading systems. Experimental results demonstrate that the proposed trading system outperforms its counterparts and will be useful for investment in various financial markets.
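The K-NN vote at the heart of such a classifier can be sketched in a few lines. The toy feature vectors and buy/sell labels below are made up for illustration and are not the paper's BIC-K-NN pipeline:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs; classify query by majority vote
    among the k nearest neighbours in Euclidean distance."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

In a trading setting the feature vector would hold indicator values for a trading day and the label the mined action (buy, sell, no-action).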

  7. Numerical modelling of the flow in the resin infusion process on the REV scale: A feasibility study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jabbari, M.; Spangenberg, J.; Hattel, J. H.

    2016-06-08

    The resin infusion process (RIP) has developed as a low-cost method for manufacturing large fibre reinforced plastic parts. However, the process still presents some challenges to industry with regards to reliability and repeatability, resulting in expensive and inefficient trial and error development. In this paper, we show the implementation of 2D numerical models for the RIP using the open source simulator DuMux. The idea of this study is to present a model which accounts for the interfacial forces coming from the capillary pressure on the so-called representative elementary volume (REV) scale. The model is described in detail and three different test cases — a constant and a tensorial permeability as well as a preform/Balsa domain — are investigated. The results show that the developed model is well suited to the RIP for the manufacturing of composite parts. The idea behind this study is to test the developed model for later use in a real application, in which the preform medium has numerous layers with different material properties.

  8. Modelling groundwater fractal flow with fractional differentiation via Mittag-Leffler law

    NASA Astrophysics Data System (ADS)

    Ahokposi, D. P.; Atangana, Abdon; Vermeulen, D. P.

    2017-04-01

    Modelling the flow of groundwater within a network of fractures is perhaps one of the most difficult exercises within the field of geohydrology. This physical problem has attracted the attention of several scientists across the globe. Two different types of differentiation have already been used to attempt to model this problem, namely classical and fractional differentiation. In this paper, we employed the most recent concept of differentiation, based on the non-local and non-singular kernel called the generalized Mittag-Leffler function, to reshape the model of groundwater fractal flow. We present the existence of a positive solution of the new model. Using a fixed-point approach, we establish the uniqueness of the positive solution. We solve the new model with three different numerical schemes: implicit, explicit and Crank-Nicolson. Experimental data collected from four constant-discharge tests conducted in a typical fractured crystalline rock aquifer of the Northern Limb (Bushveld Complex) in the Limpopo Province (South Africa) are compared with the numerical solutions. It is worth noting that the four boreholes (BPAC1, BPAC2, BPAC3, and BPAC4) are located on faults.
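Of the three schemes mentioned, Crank-Nicolson is the workhorse. The sketch below applies it to the ordinary 1-D diffusion equation with a Thomas tridiagonal solve, as a minimal stand-in for the Mittag-Leffler-kernel model actually solved in the paper:

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson(u, D, dx, dt, steps):
    """u_t = D u_xx with zero Dirichlet boundaries; u holds nodal values."""
    r = D * dt / (2 * dx * dx)
    n = len(u)
    for _ in range(steps):
        # explicit half on the right-hand side, implicit half in the tridiagonal solve
        d = [u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1]) for i in range(1, n - 1)]
        interior = thomas([-r] * (n - 2), [1 + 2 * r] * (n - 2), [-r] * (n - 2), d)
        u = [0.0] + interior + [0.0]
    return u

# demo: decay of the fundamental sine mode on [0, 1] (exact rate exp(-pi^2 t))
nx = 50
dx = 1.0 / nx
u0 = [math.sin(math.pi * i * dx) for i in range(nx + 1)]
u_final = crank_nicolson(u0, 1.0, dx, 0.001, 100)
```

The scheme is unconditionally stable and second-order in both space and time, which is why it is a common baseline next to the explicit and implicit schemes the abstract lists.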

  9. A fictitious domain method for fluid/solid interaction applied to the plate folding over the 660 km depth boundary.

    NASA Astrophysics Data System (ADS)

    Cerpa, Nestor; Hassani, Riad; Gerbault, Muriel

    2014-05-01

    A large variety of geodynamical problems involve a mechanical system in which a competent body is embedded in a more deformable medium, and hence they can be viewed as belonging to the field of solid/fluid interaction. The lithosphere/asthenosphere interaction in subduction zones is among those kinds of problems, which are generally difficult to tackle numerically since the immersed (solid) body can be geometrically complex and the surrounding (fluid) medium can thus undergo large deformation. Our work presents a new numerical approach for the study of subduction zones. The lithosphere is modeled as a Maxwell viscoelastic body sinking in the viscous asthenosphere. Both domains are discretized by the Finite Element Method (FEM) and we use a staggered coupling method. The interaction is provided by a non-matching interface method called the Fictitious Domain Method (FDM). We have validated this method with some 2-D benchmarks and examples. Through this numerical coupling method we aim at studying the effect of mantle viscosity on the cyclicity of slab folding over the 660 km depth discontinuity, approximated as an impenetrable barrier. Depending on the kinematic conditions imposed on the overriding and subducting plates, analog and numerical models have previously shown that cyclicity occurs. The viscosity of the asthenosphere (taken as an isoviscous or a double viscosity-layer fluid) affects the folding cyclicity and consequently the slab's dip as well as the stress regime of the overriding plate. In particular, applying far-field plate velocities corresponding to those of the present-day South-American and Nazca plates (4.3 cm/yr and 2.9 cm/yr respectively), we obtain periodic slab folding which is consistent with magmatic and sedimentological records. These data report cycles in orogenic growth of the order of 30-40 Myr, a period that we reproduce when the mantle viscosity ranges between 3 and 5 × 10^20 Pa·s. Moreover, we reproduce episodic development of horizontal subduction induced by cyclic folding and hence propose a new explanation for episodes of flat subduction under the South-American plate. We also show preliminary results of 3-D subduction.

  10. A parallel time integrator for noisy nonlinear oscillatory systems

    NASA Astrophysics Data System (ADS)

    Subber, Waad; Sarkar, Abhijit

    2018-06-01

    In this paper, we adapt a parallel time integration scheme to track the trajectories of noisy nonlinear dynamical systems. Specifically, we formulate a parallel algorithm to generate the sample path of a nonlinear oscillator defined by stochastic differential equations (SDEs) using the so-called parareal method for ordinary differential equations (ODEs). The presence of the Wiener process in SDEs causes difficulties in the direct application of any numerical integration technique for ODEs, including the parareal algorithm. The parallel implementation of the algorithm involves two SDE solvers, namely a fine-level scheme to integrate the system in parallel and a coarse-level scheme to generate and correct the required initial conditions to start the fine-level integrators. For the numerical illustration, a randomly excited Duffing oscillator is investigated in order to study the performance of the stochastic parallel algorithm with respect to a range of system parameters. The distributed implementation of the algorithm exploits the Message Passing Interface (MPI).
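The parareal update can be illustrated on a deterministic test ODE dy/dt = -y, with the stochastic forcing and MPI distribution stripped out. Coarse and fine propagators are both explicit Euler, and the corrector is the standard G(new) + F(old) - G(old) sweep; the fine loop marked "parallelizable" is what would be distributed across ranks:

```python
def coarse(y, t0, t1):
    return y + (t1 - t0) * (-y)          # one explicit Euler step per interval

def fine(y, t0, t1, m=20):
    h = (t1 - t0) / m                    # many small Euler steps per interval
    for _ in range(m):
        y = y + h * (-y)
    return y

def parareal(y0, t_grid, iters):
    n = len(t_grid) - 1
    U = [y0]
    for i in range(n):                   # initial serial coarse sweep
        U.append(coarse(U[-1], t_grid[i], t_grid[i + 1]))
    for _ in range(iters):
        # fine propagations from the current iterate: independent, hence parallelizable
        F = [fine(U[i], t_grid[i], t_grid[i + 1]) for i in range(n)]
        G_old = [coarse(U[i], t_grid[i], t_grid[i + 1]) for i in range(n)]
        new = [y0]
        for i in range(n):               # serial predictor-corrector sweep
            new.append(coarse(new[i], t_grid[i], t_grid[i + 1]) + F[i] - G_old[i])
        U = new
    return U
```

After k iterations the first k+1 grid values coincide with the serial fine solution, so with as many iterations as intervals the algorithm reproduces the fine trajectory.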

  11. Trojans in habitable zones.

    PubMed

    Schwarz, Richard; Pilat-Lohinger, Elke; Dvorak, Rudolf; Erdi, Balint; Sándor, Zsolt

    2005-10-01

    With the aid of numerical experiments we examined the dynamical stability of fictitious terrestrial planets in 1:1 mean motion resonance with Jovian-like planets of extrasolar planetary systems. In our stability study of the so-called "Trojan" planets in the habitable zone, we used the restricted three-body problem with different mass ratios of the primary bodies. The application of the three-body problem showed that even massive Trojan planets can be stable in the 1:1 mean motion resonance. From the 117 extrasolar planetary systems only 11 systems were found with one giant planet in the habitable zone. Out of this sample set we chose four planetary systems--HD17051, HD27442, HD28185, and HD108874--for further investigation. To study the orbital behavior of the stable zone in the different systems, we used direct numerical computations (Lie Integration Method) that allowed us to determine the escape times and the maximum eccentricity of the fictitious "Trojan planets."

  12. On the Huygens absorbing boundary conditions for electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berenger, Jean-Pierre

    A new absorbing boundary condition (ABC) is presented for the solution of the Maxwell equations in unbounded spaces. Called the Huygens ABC, this condition is a generalization of two previously published ABCs, namely the multiple absorbing surfaces (MAS) and the re-radiating boundary condition (rRBC). The properties of the Huygens ABC are derived theoretically in continuous spaces and in the finite-difference (FDTD) discretized space. A solution is proposed to render the Huygens ABC effective for the absorption of evanescent waves. Numerical experiments with the FDTD method show that the effectiveness of the Huygens ABC is close to that of the PML ABC in some realistic problems of numerical electromagnetics. It is also shown in the paper that a combination of the Huygens ABC with the PML ABC is very well suited to the solution of some particular problems.

  13. Numerical Analysis of the Cavity Flow subjected to Passive Controls Techniques

    NASA Astrophysics Data System (ADS)

    Melih Guleren, Kursad; Turk, Seyfettin; Mirza Demircan, Osman; Demir, Oguzhan

    2018-03-01

    Open-source flow solvers are becoming more and more popular for the analysis of challenging flow problems in aeronautical and mechanical engineering applications. They are offered under the GNU General Public License and can be run, examined, shared and modified according to users' requirements. SU2 and OpenFOAM are the two most popular open-source solvers in the Computational Fluid Dynamics (CFD) community. In the present study, some passive control methods for high-speed cavity flows are numerically simulated using these open-source flow solvers along with one commercial flow solver called ANSYS/Fluent. The results are compared with the available experimental data. The solver SU2 is seen to predict the mean streamline velocity satisfactorily, but not the turbulent kinetic energy or the overall averaged sound pressure level (OASPL), whereas OpenFOAM predicts all these parameters at nearly the same levels as ANSYS/Fluent.

  14. Some analytical and numerical approaches to understanding trap counts resulting from pest insect immigration.

    PubMed

    Bearup, Daniel; Petrovskaya, Natalia; Petrovskii, Sergei

    2015-05-01

    Monitoring of pest insects is an important part of integrated pest management. It aims to provide information about pest insect abundance at a given location. This includes data collection, usually using traps, and their subsequent analysis and/or interpretation. However, interpretation of trap counts (the number of insects caught over a fixed time) remains a challenging problem. First, an increase in either the population density or insect activity can result in a similar increase in the number of insects trapped (the so-called "activity-density" problem). Second, a genuine increase of the local population density can be attributed to qualitatively different ecological mechanisms such as multiplication or immigration. Identification of the true factor causing an increase in trap count is important as different mechanisms require different control strategies. In this paper, we consider a mean-field mathematical model of insect trapping based on the diffusion equation. Although the diffusion equation is a well-studied model, its analytical solution in closed form is actually available only for a few special cases, whilst in the more general case the problem has to be solved numerically. We choose finite differences as the baseline numerical method and show that numerical solution of the problem, especially in the realistic 2D case, is not at all straightforward as it requires a sufficiently accurate approximation of the diffusion fluxes. Once the numerical method is justified and tested, we apply it to the corresponding boundary problem, where different types of boundary forcing describe different scenarios of pest insect immigration, and reveal the corresponding patterns in the trap count growth. Copyright © 2015 Elsevier Inc. All rights reserved.
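A bare-bones 1-D version of such a trap-count computation can be sketched with an explicit finite-difference scheme and an absorbing boundary playing the role of the trap. The grid sizes and the one-sided flux approximation below are illustrative choices, not the paper's 2-D formulation:

```python
def simulate_trap(n=100, D=1.0, L=10.0, steps=500):
    """Explicit FD for u_t = D u_xx on [0, L]: absorbing trap at x=0, zero-flux at x=L.
    Returns the final density profile and the accumulated trap count."""
    dx = L / n
    dt = 0.4 * dx * dx / D               # satisfies the explicit stability limit r <= 1/2
    u = [1.0] * (n + 1)                  # uniform initial insect density
    u[0] = 0.0                           # absorbing boundary: the trap
    trapped = 0.0
    for _ in range(steps):
        trapped += D * (u[1] - u[0]) / dx * dt   # one-sided flux into the trap
        new = u[:]
        for i in range(1, n):
            new[i] = u[i] + D * dt / dx ** 2 * (u[i - 1] - 2 * u[i] + u[i + 1])
        new[n] = new[n - 1]              # zero-flux outer boundary
        new[0] = 0.0
        u = new
    return u, trapped
```

With r = D dt/dx^2 = 0.4 each update is a convex combination of neighbouring values, so the density stays within [0, 1] and the trap count grows monotonically, mimicking the depletion zone that forms around a trap.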

  15. Topology Optimization - Engineering Contribution to Architectural Design

    NASA Astrophysics Data System (ADS)

    Tajs-Zielińska, Katarzyna; Bochenek, Bogdan

    2017-10-01

    The idea of topology optimization is to find, within a considered design domain, the distribution of material that is optimal in some sense. During the optimization process, material is redistributed and parts that are not necessary from the objective point of view are removed. The result is a solid/void structure for which an objective function is minimized. This paper presents an application of topology optimization to multi-material structures. The design domain defined by the shape of a structure is divided into sub-regions, to which different materials are assigned. During the design process material is relocated, but only within the selected region. The proposed idea has been inspired by architectural designs such as multi-material facades of buildings. The effectiveness of topology optimization is determined by the proper choice of numerical optimization algorithm. This paper utilises a very efficient heuristic method called Cellular Automata. Cellular Automata are mathematical, discrete idealizations of physical systems. An engineering implementation of Cellular Automata requires decomposition of the design domain into a uniform lattice of cells. It is assumed that interaction between cells takes place only within the neighbouring cells. The interaction is governed by simple, local update rules, which are based on heuristics or physical laws. The numerical studies show that this method can be an attractive alternative to traditional gradient-based algorithms. The proposed approach is evaluated on selected numerical examples of multi-material bridge structures, for which various material configurations are examined. The numerical studies demonstrate a significant influence of the material sub-region locations on the final topologies. The influence of the assumed volume fraction on the final topologies of multi-material structures is also observed and discussed. The results of the numerical calculations show that this approach produces results different from those of classical one-material problems.

  16. Optical properties of electrohydrodynamic convection patterns: rigorous and approximate methods.

    PubMed

    Bohley, Christian; Heuer, Jana; Stannarius, Ralf

    2005-12-01

    We analyze the optical behavior of two-dimensionally periodic structures that occur in electrohydrodynamic convection (EHC) patterns in nematic sandwich cells. These structures are anisotropic, locally uniaxial, and periodic on the scale of micrometers. For the first time, the optics of these structures is investigated with a rigorous method. The method used for the description of the electromagnetic waves interacting with EHC director patterns is a numerical approach that discretizes the Maxwell equations directly. It works as a space-grid time-domain method and computes electric and magnetic fields in time steps. This so-called finite-difference time-domain (FDTD) method is able to generate the fields with arbitrary accuracy. We compare this rigorous method with earlier attempts based on ray tracing and analytical approximations. Results of optical studies of EHC structures made earlier on the basis of ray-tracing methods are confirmed for thin cells, when the spatial periods of the pattern are sufficiently large. For the treatment of small-scale convection structures, the FDTD method is without alternatives.
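The FDTD idea can be shown in its simplest normalized 1-D form (Yee staggering, vacuum, "magic" time step c dt = dx). The grid size and Gaussian source below are arbitrary illustrative choices, far simpler than the anisotropic EHC cells treated in the paper:

```python
import math

def fdtd_1d(steps=100, n=400, src=50):
    """Normalized 1-D FDTD in vacuum at Courant number 1 (magic time step)."""
    ez = [0.0] * n   # electric field nodes
    hy = [0.0] * n   # magnetic field nodes (staggered half a cell, half a step)
    for t in range(steps):
        for i in range(1, n):
            ez[i] += hy[i] - hy[i - 1]                   # update E from the curl of H
        ez[src] += math.exp(-((t - 30) ** 2) / 100.0)    # additive Gaussian source
        for i in range(n - 1):
            hy[i] += ez[i + 1] - ez[i]                   # update H from the curl of E
    return ez
```

At Courant number 1 the pulse travels exactly one cell per step with no numerical dispersion, so after 100 steps the right-going peak sits near cell src + (100 - 30); real FDTD codes add material coefficients and absorbing boundaries on top of this skeleton.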

  17. Numerical Analysis of the Dynamics of Nonlinear Solids and Structures

    DTIC Science & Technology

    2008-08-01

    to arrive at a new numerical scheme that exhibits rigorously the dissipative character of the so-called canonical free energy characteristic of...UCLA), February 14 2006. 5. "Numerical Integration of the Nonlinear Dynamics of Elastoplastic Solids," keynote lecture, 3rd European Conference on...Computational Mechanics (ECCM 3), Lisbon, Portugal, June 5-9 2006. 6. "Energy-Momentum Schemes for Finite Strain Plasticity," keynote lecture, 7th

  18. Comparison of theory and direct numerical simulations of drag reduction by rodlike polymers in turbulent channel flows.

    PubMed

    Benzi, Roberto; Ching, Emily S C; De Angelis, Elisabetta; Procaccia, Itamar

    2008-04-01

    Numerical simulations of turbulent channel flows, with or without additives, are limited in the extent of the Reynolds number (Re) and Deborah number (De). The comparison of such simulations to theories of drag reduction, which are usually derived for asymptotically high Re and De, calls for some care. In this paper we present a study of drag reduction by rodlike polymers in a turbulent channel flow using direct numerical simulation and illustrate how these numerical results should be related to the recently developed theory.

  19. Minimal entropy approximation for cellular automata

    NASA Astrophysics Data System (ADS)

    Fukś, Henryk

    2014-02-01

    We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim.
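An elementary cellular automaton step and the density observable used in such studies take only a few lines. This generic sketch (periodic boundary, Wolfram rule numbering) is independent of the paper's minimal entropy machinery:

```python
def eca_step(cells, rule):
    """One synchronous update of an elementary CA (Wolfram numbering, periodic)."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def density(cells):
    """Fraction of cells in state 1, the observable behind a density response curve."""
    return sum(cells) / len(cells)
```

For rule 26, for example, a single seed [0, 0, 1, 0, 0] maps to [0, 1, 0, 1, 0] in one step; iterating eca_step and recording density over many steps yields the kind of orbit the approximation methods try to predict.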

  20. Clustering high dimensional data using RIA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, Nazrina

    2015-05-15

    Clustering may simply represent a convenient method for organizing a large data set so that it can easily be understood and information can efficiently be retrieved. However, identifying clusters in high-dimensional data sets is a difficult task because of the curse of dimensionality. Another challenge in clustering is that some traditional dissimilarity functions cannot capture the pattern dissimilarity among objects. In this article, we use an alternative dissimilarity measurement called the Robust Influence Angle (RIA) in the partitioning method. RIA is developed using the eigenstructure of the covariance matrix and robust principal component scores. We observe that it obtains clusters easily and hence avoids the curse of dimensionality. It also manages to cluster large data sets with mixed numeric and categorical values.

  1. A Differential Evolution Algorithm Based on Nikaido-Isoda Function for Solving Nash Equilibrium in Nonlinear Continuous Games

    PubMed Central

    He, Feng; Zhang, Wei; Zhang, Guoqiang

    2016-01-01

    A differential evolution algorithm for solving Nash equilibrium in nonlinear continuous games, called NIDE (Nikaido-Isoda differential evolution), is presented in this paper. At each generation, parent and child strategy profiles are compared pairwise, one by one, adopting the Nikaido-Isoda function as the fitness function. In practice, the NE of a nonlinear game model with a cubic cost function and a quadratic demand function is solved, and the method can also be applied to non-concave payoff functions. Moreover, NIDE is compared with the existing Nash Domination Evolutionary Multiplayer Optimization (NDEMO); the results show that NIDE is significantly better than NDEMO, requiring fewer iterations and shorter running times. These numerical examples suggest that the NIDE method is potentially useful. PMID:27589229
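    As a rough illustration of the idea, the sketch below runs a textbook DE/rand/1/bin loop with a Nikaido-Isoda-based fitness on a toy two-player quadratic game whose Nash equilibrium is (0, 0). The game, its coefficients, and the solver parameters are illustrative assumptions, not the paper's NIDE implementation.

```python
import numpy as np

# Toy two-player game with payoffs u_i(x) = -(x_i - a_i * x_{-i})**2.
# Best responses are x_1 = a1*x_2 and x_2 = a2*x_1; with a1*a2 < 1 the
# unique Nash equilibrium is (0, 0).
A = (0.5, -0.3)

def ni_gap(x):
    """Max of the Nikaido-Isoda function over unilateral deviations:
    the total best-response improvement. Zero exactly at a Nash equilibrium."""
    x1, x2 = x
    return (x1 - A[0] * x2) ** 2 + (x2 - A[1] * x1) ** 2

def de_solve(f, dim=2, pop=30, gens=200, F=0.7, CR=0.9, seed=1):
    """Minimal DE/rand/1/bin minimizing f (illustrative, not the paper's NIDE)."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(-5, 5, (pop, dim))
    fit = np.array([f(p) for p in P])
    for _ in range(gens):
        for i in range(pop):
            r1, r2, r3 = rng.choice(pop, 3, replace=False)
            mutant = P[r1] + F * (P[r2] - P[r3])
            trial = np.where(rng.random(dim) < CR, mutant, P[i])
            ft = f(trial)
            if ft <= fit[i]:  # pairwise parent/child comparison, as in the abstract
                P[i], fit[i] = trial, ft
    return P[np.argmin(fit)]

x_star = de_solve(ni_gap)
```

Because the Nikaido-Isoda gap is nonnegative and vanishes only at equilibrium, minimizing it turns the equilibrium problem into an ordinary optimization problem that DE can attack directly.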

  2. Fast and Adaptive Sparse Precision Matrix Estimation in High Dimensions

    PubMed Central

    Liu, Weidong; Luo, Xi

    2014-01-01

    This paper proposes a new method for estimating sparse precision matrices in the high dimensional setting. It has been popular to study fast computation and adaptive procedures for this problem. We propose a novel approach, called Sparse Column-wise Inverse Operator, to address these two issues. We analyze an adaptive procedure based on cross validation, and establish its convergence rate under the Frobenius norm. The convergence rates under other matrix norms are also established. This method also enjoys the advantage of fast computation for large-scale problems, via a coordinate descent algorithm. Numerical merits are illustrated using both simulated and real datasets. In particular, it performs favorably on an HIV brain tissue dataset and an ADHD resting-state fMRI dataset. PMID:25750463
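    A generic sketch of this kind of column-wise estimation via coordinate descent follows (an l1-penalized quadratic program per column; the objective and update rule are a common textbook form, not necessarily the authors' exact SCIO procedure).

```python
import numpy as np

def soft(z, lam):
    """Soft-thresholding operator for the l1 penalty."""
    return np.sign(z) * max(abs(z) - lam, 0.0)

def sparse_inverse_column(S, i, lam, n_iter=200):
    """Coordinate descent for min_b 0.5*b'Sb - b[i] + lam*||b||_1,
    an l1-penalized estimate of column i of S^{-1} given a sample
    covariance S (illustrative sketch)."""
    p = S.shape[0]
    b = np.zeros(p)
    e = np.zeros(p)
    e[i] = 1.0
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed, then a 1-D
            # soft-thresholded exact minimization in b[j].
            r = e[j] - S[j] @ b + S[j, j] * b[j]
            b[j] = soft(r, lam) / S[j, j]
    return b
```

Estimating each column independently is what makes the approach embarrassingly parallel for large-scale problems.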

  3. PyMICE: A Python library for analysis of IntelliCage data.

    PubMed

    Dzik, Jakub M; Puścian, Alicja; Mijakowska, Zofia; Radwanska, Kasia; Łęski, Szymon

    2018-04-01

    IntelliCage is an automated system for recording the behavior of a group of mice housed together. It produces rich, detailed behavioral data, calling for new methods and software for their analysis. Here we present PyMICE, a free and open-source library for analysis of IntelliCage data in the Python programming language. We describe the design and demonstrate the use of the library through a series of examples. PyMICE provides easy and intuitive access to IntelliCage data, making it possible to use numerous other Python scientific libraries to form a complete data-analysis workflow.

  4. Responding to the recruiting call. How Children's Hospital of Philadelphia seeks healthcare pros.

    PubMed

    Botvin, J D

    2001-01-01

    The global shortage of healthcare professionals has touched every healthcare institution, and despite its renown, Children's Hospital of Philadelphia (CHOP) is no exception. In fact, the hospital's strategic addition of suburban facilities has exacerbated the situation. The recruitment plan builds on a strong new image campaign, and sharpens the focus on nurses, student nurses and allied health professionals. Direct mail is used--a first for recruitment at CHOP. The hospital's Web site is used as well as postings on numerous Internet job boards. Finally, the campaign also uses more traditional methods, such as print advertising.

  5. Piezothermal effect in a spinning gas

    NASA Astrophysics Data System (ADS)

    Geyko, V. I.; Fisch, N. J.

    2016-10-01

    A spinning gas, heated adiabatically through axial compression, is known to exhibit a rotation-dependent heat capacity. However, as equilibrium is approached, an effect is identified here wherein the temperature does not grow homogeneously in the radial direction, but develops a temperature differential with the hottest region on axis, at the maximum of the centrifugal potential energy. This phenomenon, which we call a piezothermal effect, is shown to grow bilinearly with the compression rate and the amplitude of the potential. Numerical simulations confirm a simple model of this effect, which can be generalized to other forms of potential energy and methods of heating.

  6. Segregated nodal domains of two-dimensional multispecies Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Chang, Shu-Ming; Lin, Chang-Shou; Lin, Tai-Chia; Lin, Wen-Wei

    2004-09-01

    In this paper, we study the distribution of m segregated nodal domains of the m-mixture of Bose-Einstein condensates under positive and large repulsive scattering lengths. It is shown that components of positive bound states may repel each other and form segregated nodal domains as the repulsive scattering lengths go to infinity. Efficient numerical schemes are created to confirm our theoretical results and discover a new phenomenon called verticillate multiplying, i.e., the generation of multiple verticillate structures. In addition, our proposed Gauss-Seidel-type iteration method is very effective in that it converges linearly in 10-20 steps.
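    The paper's iteration is specific to the coupled condensate equations, but the underlying Gauss-Seidel sweep is standard; a minimal generic sketch for a linear system is shown below (a textbook form, not the authors' scheme).

```python
import numpy as np

def gauss_seidel(A, b, iters=50):
    """Plain Gauss-Seidel iteration: sweep through the unknowns, always
    using the freshest values. Converges linearly for diagonally
    dominant (or SPD) matrices A."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        for i in range(len(b)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

The "converges linearly in 10-20 steps" behavior reported in the abstract is characteristic of such fixed-point sweeps when the coupling between components is well conditioned.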

  7. Automated procedures for sizing aerospace vehicle structures /SAVES/

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Blackburn, C. L.; Dixon, S. C.

    1972-01-01

    Results from a continuing effort to develop automated methods for structural design are described. A system of computer programs presently under development, called SAVES, is intended to automate the preliminary structural design of a complete aerospace vehicle. Each step in the automated design process of the SAVES system of programs is discussed, with emphasis placed on the use of automated routines for generation of finite-element models. The versatility of these routines is demonstrated by structural models generated for a space shuttle orbiter, an advanced technology transport, and a hydrogen-fueled Mach 3 transport. Illustrative numerical results are presented for the Mach 3 transport wing.

  8. 48 CFR 204.7004 - Supplementary PII numbers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... agreements using a six position alpha-numeric added to the basic PII number. (2) Position 1. Identify the...) Positions 2 through 3. These are the first two digits in a serial number. They may be either alpha or... orders issued by the office issuing the contract or agreement. Use a four position alpha-numeric call or...

  9. 48 CFR 204.7004 - Supplementary PII numbers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... agreements using a six position alpha-numeric added to the basic PII number. (2) Position 1. Identify the...) Positions 2 through 3. These are the first two digits in a serial number. They may be either alpha or... orders issued by the office issuing the contract or agreement. Use a four position alpha-numeric call or...

  10. 48 CFR 204.7004 - Supplementary PII numbers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... agreements using a six position alpha-numeric added to the basic PII number. (2) Position 1. Identify the...) Positions 2 through 3. These are the first two digits in a serial number. They may be either alpha or... orders issued by the office issuing the contract or agreement. Use a four position alpha-numeric call or...

  11. Physically consistent data assimilation method based on feedback control for patient-specific blood flow analysis.

    PubMed

    Ii, Satoshi; Adib, Mohd Azrul Hisham Mohd; Watanabe, Yoshiyuki; Wada, Shigeo

    2018-01-01

    This paper presents a novel data assimilation method for patient-specific blood flow analysis based on feedback control theory, called the physically consistent feedback control-based data assimilation (PFC-DA) method. In the PFC-DA method, the signal, which is the residual error term of the velocity when comparing the numerical and reference measurement data, is cast as a source term in a Poisson equation for the scalar potential field that induces flow in a closed system. The pressure values at the inlet and outlet boundaries are recursively calculated from this scalar potential field. Hence, the flow field is physically consistent because it is driven by the calculated inlet and outlet pressures, without any artificial body forces. Although the PFC-DA method, unlike existing variational approaches, does not guarantee the optimal solution, only one additional Poisson equation for the scalar potential field is required at every iteration, a remarkable improvement for such a small additional computational cost. Through numerical examples for 2D and 3D exact flow fields, with both noise-free and noisy reference data, as well as a blood flow analysis on a cerebral aneurysm using actual patient data, the robustness and accuracy of this approach are shown. Moreover, the feasibility of a patient-specific practical blood flow analysis is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.

  12. Structural Noise and Acoustic Characteristics Improvement of Transport Power Plants

    NASA Astrophysics Data System (ADS)

    Chaynov, N. D.; Markov, V. A.; Savastenko, A. A.

    2018-03-01

    Reducing the noise generated during the operation of various machines and mechanisms is an urgent task with regard to power plants and, in particular, internal combustion engines. Sound emission from the vibrating surfaces of body parts is one of the main noise manifestations of a running engine; it is called structural noise. The vibration of the outer surfaces of complex body parts and their acoustic characteristics are determined with numerical methods. A combination of the finite and boundary element methods has turned out to be very effective: the finite element method is used to calculate the vibrations of the structural elements, and the boundary element method is used to calculate the structural noise. The main points of the methodology and the results of structural noise analysis applied to a number of automobile engines are shown.

  13. Solid-perforated panel layout optimization by topology optimization based on unified transfer matrix.

    PubMed

    Kim, Yoon Jae; Kim, Yoon Young

    2010-10-01

    This paper presents a numerical method for optimizing the sequencing of solid panels, perforated panels and air gaps, and their respective thicknesses, for maximizing sound transmission loss and/or absorption. For the optimization, a method based on the topology optimization formulation is proposed. It is difficult to employ only the commonly-used material interpolation technique because the involved layers exhibit fundamentally different acoustic behavior. Thus, a new optimization formulation using a so-called unified transfer matrix is proposed. The key idea is to form the elements of the transfer matrix such that elements interpolated by the layer design variables can be those of air, perforated-panel and solid-panel layers. The problem related to the interpolation is addressed, and benchmark-type problems, such as sound transmission or absorption maximization problems, are solved to check the efficiency of the developed method.

  14. Parallel/Vector Integration Methods for Dynamical Astronomy

    NASA Astrophysics Data System (ADS)

    Fukushima, T.

    Progress in parallel/vector computers has driven us to develop suitable numerical integrators that utilize their computational power to the full extent while remaining independent of the size of the system to be integrated. Unfortunately, parallel versions of Runge-Kutta-type integrators are known to be not so efficient. Recently we developed a parallel version of the extrapolation method (Ito and Fukushima 1997), which allows variable timesteps and still gives an acceleration factor of 3-4 for general problems, while the vector-mode usage of the Picard-Chebyshev method (Fukushima 1997a, 1997b) leads to an acceleration factor on the order of 1000 for smooth problems such as planetary/satellite orbit integration. The success of the multiple-correction PECE mode of the time-symmetric implicit Hermitian integrator (Kokubo 1998) seems to highlight Milankar's so-called "pipelined predictor corrector method", which is expected to give an acceleration factor of 3-4. We will review these directions and discuss future prospects.
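    The vector-mode character of Picard-type iteration can be seen in a plain Picard sweep on a fixed grid (trapezoid quadrature; a deliberate simplification of, not the actual, Picard-Chebyshev method). Each sweep updates the whole trajectory at once, which is what suits vector/parallel hardware.

```python
import numpy as np

def picard(f, y0, t, sweeps=30):
    """Plain Picard iteration y_{k+1}(t) = y0 + integral_0^t f(y_k) ds,
    discretized on the fixed grid t with the trapezoid rule. The whole
    trajectory is refined in every sweep (vectorizable over the grid)."""
    y = np.full_like(t, y0, dtype=float)
    for _ in range(sweeps):
        g = f(y)
        # Cumulative trapezoid rule over the grid.
        integral = np.concatenate(
            ([0.0], np.cumsum((g[1:] + g[:-1]) * np.diff(t) / 2.0))
        )
        y = y0 + integral
    return y
```

For y' = y with y(0) = 1 on [0, 1], the iterates build up the exponential series term by term, converging to exp(t) up to quadrature error.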

  15. Harmonic component detection: Optimized Spectral Kurtosis for operational modal analysis

    NASA Astrophysics Data System (ADS)

    Dion, J.-L.; Tawfiq, I.; Chevallier, G.

    2012-01-01

    This work is a contribution in the field of Operational Modal Analysis to identify the modal parameters of mechanical structures using only measured responses. The study deals with structural responses coupled with harmonic components amplitude and frequency modulated in a short range, a common combination for mechanical systems with engines and other rotating machines in operation. These harmonic components generate misleading data interpreted erroneously by the classical methods used in OMA. The present work attempts to differentiate maxima in spectra stemming from harmonic components and structural modes. The detection method proposed is based on the so-called Optimized Spectral Kurtosis and compared with others definitions of Spectral Kurtosis described in the literature. After a parametric study of the method, a critical study is performed on numerical simulations and then on an experimental structure in operation in order to assess the method's performance.
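    A minimal frame-based spectral kurtosis estimator (the plain definition, not the paper's Optimized Spectral Kurtosis) illustrates how a harmonic line is flagged: SK tends toward -1 at a constant-amplitude harmonic and toward 0 for stationary Gaussian noise. The signal, sampling rate, and frame length below are illustrative assumptions.

```python
import numpy as np

def spectral_kurtosis(x, nperseg=256):
    """Frame-based spectral kurtosis: SK(f) = E|X|^4 / (E|X|^2)^2 - 2.
    ~0 for stationary Gaussian noise; tends to -1 at a pure harmonic line."""
    frames = x[: len(x) // nperseg * nperseg].reshape(-1, nperseg)
    X = np.fft.rfft(frames * np.hanning(nperseg), axis=1)
    m2 = np.mean(np.abs(X) ** 2, axis=0)
    m4 = np.mean(np.abs(X) ** 4, axis=0)
    return m4 / m2 ** 2 - 2.0

rng = np.random.default_rng(0)
n, fs = 1 << 16, 1024.0
t = np.arange(n) / fs
# A 64 Hz harmonic component buried in Gaussian noise.
signal = np.sin(2 * np.pi * 64.0 * t) + 0.5 * rng.standard_normal(n)
sk = spectral_kurtosis(signal)
bin64 = int(64.0 / fs * 256)  # frequency bin holding the harmonic line
```

A spectrum maximum with strongly negative SK is thus a harmonic-component candidate rather than a structural mode, which is the discrimination the OMA method above builds on.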

  16. Finite cover method with mortar elements for elastoplasticity problems

    NASA Astrophysics Data System (ADS)

    Kurumatani, M.; Terada, K.

    2005-06-01

    The finite cover method (FCM) is extended to elastoplasticity problems. The FCM, which was originally developed under the name of manifold method, has recently been recognized as one of the generalized versions of finite element methods (FEM). Since the mesh for the FCM can be regular and squared regardless of the geometry of the structures to be analyzed, structural analysts are released from the burdensome task of generating meshes that conform to physical boundaries. Numerical experiments are carried out to assess the performance of the FCM with such discretization in elastoplasticity problems. In particular, to achieve this accurately, the so-called mortar elements are introduced to impose displacement boundary conditions on the essential boundaries, and displacement compatibility conditions on material interfaces of two-phase materials or on joint surfaces between mutually incompatible meshes. The validity of the mortar approximation is also demonstrated in the elastic-plastic FCM.

  17. Wavelets and distributed approximating functionals

    NASA Astrophysics Data System (ADS)

    Wei, G. W.; Kouri, D. J.; Hoffman, D. K.

    1998-07-01

    A general procedure is proposed for constructing father and mother wavelets that have excellent time-frequency localization and can be used to generate entire wavelet families for use as wavelet transforms. One interesting feature of our father wavelets (scaling functions) is that they belong to a class of generalized delta sequences, which we refer to as distributed approximating functionals (DAFs). We indicate this by the notation wavelet-DAFs. Correspondingly, the mother wavelets generated from these wavelet-DAFs are appropriately called DAF-wavelets. Wavelet-DAFs can be regarded as providing a pointwise (localized) spectral method, which furnishes a bridge between the traditional global methods and local methods for solving partial differential equations. They are shown to provide extremely accurate numerical solutions for a number of nonlinear partial differential equations, including the Korteweg-de Vries (KdV) equation, for which a previous method has encountered difficulties (J. Comput. Phys. 132 (1997) 233).

  18. Asynchronous multilevel adaptive methods for solving partial differential equations on multiprocessors - Performance results

    NASA Technical Reports Server (NTRS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.

  19. Spectrum analysis on quality requirements consideration in software design documents.

    PubMed

    Kaiya, Haruhiko; Umemura, Masahiro; Ogata, Shinpei; Kaijiri, Kenji

    2013-12-01

    Software quality requirements defined in the requirements analysis stage should be implemented in the final products, such as source codes and system deployment. To guarantee this meta-requirement, quality requirements should be considered in the intermediate stages, such as the design stage or the architectural definition stage. We propose a novel method for checking whether quality requirements are considered in the design stage. In this method, a technique called "spectrum analysis for quality requirements" is applied not only to requirements specifications but also to design documents. The technique enables us to derive the spectrum of a document, and quality requirements considerations in the document are numerically represented in the spectrum. We can thus objectively identify whether the considerations of quality requirements in a requirements document are adapted to its design document. To validate the method, we applied it to commercial software systems with the help of a supporting tool, and we confirmed that the method worked well.

  20. A Perturbation Analysis of Harmonics Generation from Saturated Elements in Power Systems

    NASA Astrophysics Data System (ADS)

    Kumano, Teruhisa

    Nonlinear phenomena such as saturation in magnetic flux have considerable effects in power system analysis. It is reported that a failure in a real 500 kV system triggered islanding operation, where the resultant even harmonics caused malfunctions in protective relays. It is also reported that the major origin of this wave distortion is nothing but unidirectional magnetization of the transformer iron core. Time simulation is widely used today to analyze this type of phenomenon, but it has basically two shortcomings. One is that the time simulation takes too much computing time in the vicinity of inflection points in the saturation characteristic curve, because an iterative procedure such as Newton-Raphson (N-R) must be used, and such methods tend to be caught in ill-conditioned numerical hunting. The other is that such simulation methods sometimes do not help intuitive understanding of the studied phenomenon, because the whole set of nonlinear equations is treated in matrix form and not properly divided into understandable parts, as is done in linear systems. This paper proposes a new computation scheme based on the so-called perturbation method. Magnetic saturation in the iron cores of a generator and a transformer is taken into account. The proposed method addresses the first shortcoming of the N-R-based time simulation stated above: no iterative process is used to reduce the equation residual; a perturbation series is used instead, which means freedom from the ill-conditioning problem. Users need only calculate the perturbation terms one by one until they reach the necessary accuracy. In the numerical example treated in the present paper, the first-order perturbation achieves reasonably high accuracy, which means very fast computing. In the numerical study three nonlinear elements are considered. The calculated results are almost identical to those of the conventional Newton-Raphson-based time simulation, which shows the validity of the method. The proposed method would be effectively used in screening studies where many cases must be examined.
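    The order-by-order idea can be illustrated on a toy saturation-type nonlinearity x + eps*x^3 = f (an assumed stand-in for a magnetization curve, not the paper's power-system model): each perturbation term is obtained explicitly, with no iteration, and the truncated series is checked against a Newton solve.

```python
def perturbation_solution(f, eps, order=2):
    """Series solution of x + eps*x**3 = f, expanded as
    x = x0 + eps*x1 + eps**2*x2 + ..., matched order by order in eps.
    No iteration is needed: each term is an explicit formula."""
    x0 = f                     # order 0:  x0 = f
    x1 = -x0 ** 3              # order 1:  x1 + x0**3 = 0
    x2 = -3 * x0 ** 2 * x1     # order 2:  x2 + 3*x0**2*x1 = 0
    return x0 + eps * x1 + eps ** 2 * x2

def newton_solution(f, eps, iters=50):
    """Reference Newton-Raphson solve of the same equation."""
    x = f
    for _ in range(iters):
        x -= (x + eps * x ** 3 - f) / (1.0 + 3.0 * eps * x ** 2)
    return x
```

For small eps the truncation error of the second-order series scales like eps**3, so a low-order expansion already matches the iterative solution closely, which mirrors the fast-screening use case described above.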

  1. Re-Gendering the Social Work Curriculum: New Realities and Complexities

    ERIC Educational Resources Information Center

    McPhail, Beverly A.

    2008-01-01

    With the advent of the 2nd wave of the women's movement, numerous voices within social work academia called for the inclusion of gendered content in the curriculum. The subsequent addition of content on women was a pivotal achievement for the social work profession. However, gender is an increasingly slippery concept. A current call for…

  2. The Urban School Reform Opera: The Obstructions to Transforming School Counseling Practices

    ERIC Educational Resources Information Center

    Militello, Matthew; Janson, Christopher

    2014-01-01

    Over the past 20 years, there have been numerous calls to reform the practices of school counselors. Some have situated these calls for school counseling reform within the context of urban schooling. This study examined the practices of school counselors in one urban school district, and how those practices aligned with the school district's…

  3. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
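    The sketching step can be illustrated on a toy linear least-squares inverse problem (a generic demonstration of Gaussian sketching; RGA itself couples this with the PCGA machinery, and the problem sizes below are arbitrary assumptions): a k x n Gaussian sketch compresses the observations before solving.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_par, k = 20000, 8, 200           # many observations, few parameters

A = rng.standard_normal((n_obs, n_par))   # forward/sensitivity matrix
x_true = rng.standard_normal(n_par)
b = A @ x_true                            # noise-free synthetic observations

# Gaussian sketching matrix: compress n_obs observation rows down to k rows.
S = rng.standard_normal((k, n_obs)) / np.sqrt(k)

# Solve the sketched least-squares problem instead of the full one.
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
```

The sketched system has only k rows, so downstream linear algebra scales with the information content (here, n_par unknowns) rather than with the raw number of observations.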

  4. A method for selective excitation of Ince-Gaussian modes in an end-pumped solid-state laser

    NASA Astrophysics Data System (ADS)

    Lei, J.; Hu, A.; Wang, Y.; Chen, P.

    2014-12-01

    A method for selective excitation of Ince-Gaussian modes is presented. The method is based on the spatial distributions of Ince-Gaussian modes as well as transverse mode selection theory. Significant diffraction loss is introduced in a resonator by using opaque lines at zero-intensity positions, and this loss allows a specific mode to be excited; we call this method "loss control." We study the method by means of numerical simulation of a half-symmetric laser resonator. The simulated field is represented by its angular spectrum of plane waves, and its changes are calculated by the two-dimensional fast Fourier transform algorithm as it passes through the optical elements and propagates back and forth in the resonator. The output lasing modes of our method have an overlap of over 90% with the target Ince-Gaussian modes. The method will be beneficial to the further study of properties and potential applications of Ince-Gaussian modes.

  5. The aggregated unfitted finite element method for elliptic problems

    NASA Astrophysics Data System (ADS)

    Badia, Santiago; Verdugo, Francesc; Martín, Alberto F.

    2018-07-01

    Unfitted finite element techniques are valuable tools in different applications where the generation of body-fitted meshes is difficult. However, these techniques are prone to severe ill-conditioning problems that obstruct the efficient use of iterative Krylov methods and, in consequence, hinder the practical usage of unfitted methods for realistic large-scale applications. In this work, we present a technique that addresses such conditioning problems by constructing enhanced finite element spaces based on a cell aggregation technique. The presented method, called the aggregated unfitted finite element method, is easy to implement, and can be used, in contrast to previous works, in Galerkin approximations of coercive problems with conforming Lagrangian finite element spaces. The mathematical analysis of the new method states that the condition number of the resulting linear system matrix scales as in standard finite elements for body-fitted meshes, without being affected by small cut cells, and that the method leads to the optimal finite element convergence order. These theoretical results are confirmed with 2D and 3D numerical experiments.

  6. Hybridizable discontinuous Galerkin method for the 2-D frequency-domain elastic wave equations

    NASA Astrophysics Data System (ADS)

    Bonnasse-Gahot, Marie; Calandra, Henri; Diaz, Julien; Lanteri, Stéphane

    2018-04-01

    Discontinuous Galerkin (DG) methods are nowadays actively studied and increasingly exploited for the simulation of large-scale time-domain (i.e. unsteady) seismic wave propagation problems. Although theoretically applicable to frequency-domain problems as well, their use in this context has been hampered by the potentially large number of coupled unknowns they incur, especially in the 3-D case, as compared to classical continuous finite element methods. In this paper, we address this issue in the framework of the so-called hybridizable discontinuous Galerkin (HDG) formulations. As a first step, we study an HDG method for the resolution of the frequency-domain elastic wave equations in the 2-D case. We describe the weak formulation of the method and provide some implementation details. The proposed HDG method is assessed numerically including a comparison with a classical upwind flux-based DG method, showing better overall computational efficiency as a result of the drastic reduction of the number of globally coupled unknowns in the resulting discrete HDG system.

  7. Artificial Neural Identification and LMI Transformation for Model Reduction-Based Control of the Buck Switch-Mode Regulator

    NASA Astrophysics Data System (ADS)

    Al-Rabadi, Anas N.

    2009-10-01

    This research introduces a new method of intelligent control for the Buck converter, using a newly developed small-signal model of the pulse width modulation (PWM) switch. The new method uses a supervised neural network to estimate certain parameters of the transformed system matrix [Ã]. Then, a numerical algorithm used in robust control, called the linear matrix inequality (LMI) optimization technique, is used to determine the permutation matrix [P] so that a complete system transformation {[B˜], [C˜], [Ẽ]} is possible. The transformed model is then reduced using the method of singular perturbation, and state feedback control is applied to enhance system performance. The experimental results show that the new control methodology simplifies the model of the Buck converter and thus uses a simpler controller that produces the desired system response for performance enhancement.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Jeff Wu, C. F.

    Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are unavailable in physical experiments. Here, an approach is given to estimate them by using data from physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called the L2 calibration, is proposed and proven to be L2-consistent and to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.

  9. Charge Transfer Inefficiency in Pinned Photodiode CMOS image sensors: Simple Montecarlo modeling and experimental measurement based on a pulsed storage-gate method

    NASA Astrophysics Data System (ADS)

    Pelamatti, Alice; Goiffon, Vincent; Chabane, Aziouz; Magnan, Pierre; Virmontois, Cédric; Saint-Pé, Olivier; de Boisanger, Michel Breart

    2016-11-01

    The charge transfer time represents the bottleneck in terms of temporal resolution in Pinned Photodiode (PPD) CMOS image sensors. This work focuses on the modeling and estimation of this key parameter. A simple numerical model of charge transfer in PPDs is presented. The model is based on a Montecarlo simulation and takes into account both charge diffusion in the PPD and the effect of potential obstacles along the charge transfer path. This work also presents a new experimental approach for the estimation of the charge transfer time, called the pulsed Storage Gate (SG) method. This method, which allows reproduction of a "worst-case" transfer condition, is based on dedicated SG pixel structures and is particularly suitable for comparing transfer efficiency performances for different pixel geometries.

  10. The MeqTrees software system and its use for third-generation calibration of radio interferometers

    NASA Astrophysics Data System (ADS)

    Noordam, J. E.; Smirnov, O. M.

    2010-12-01

    Context. The formulation of the radio interferometer measurement equation (RIME) for a generic radio telescope by Hamaker et al. has provided us with an elegant mathematical apparatus for better understanding, simulation and calibration of existing and future instruments. The calibration of the new radio telescopes (LOFAR, SKA) would be unthinkable without the RIME formalism, and new software to exploit it. Aims: The MeqTrees software system is designed to implement numerical models, and to solve for arbitrary subsets of their parameters. It may be applied to many problems, but was originally geared towards implementing Measurement Equations in radio astronomy for the purposes of simulation and calibration. The technical goal of MeqTrees is to provide a tool for rapid implementation of such models, while offering performance comparable to hand-written code. We are also pursuing the wider goal of increasing the rate of evolution of radio astronomical software, by offering a tool that facilitates rapid experimentation, and exchange of ideas (and scripts). Methods: MeqTrees is implemented as a Python-based front-end called the meqbrowser, and an efficient (C++-based) computational back-end called the meqserver. Numerical models are defined on the front-end via a Python-based Tree Definition Language (TDL), then rapidly executed on the back-end. The use of TDL facilitates an extremely short turn-around time (hours rather than weeks or months) for experimentation with new ideas. This is also helped by unprecedented visualization capabilities for all final and intermediate results. A flexible data model and a number of important optimizations in the back-end ensure that the numerical performance is comparable to that of hand-written code. Results: MeqTrees is already widely used as the simulation tool for new instruments (LOFAR, SKA) and technologies (focal plane arrays). 
It has demonstrated that it can achieve a noise-limited dynamic range in excess of a million, on WSRT data. It is the only package that is specifically designed to handle what we propose to call third-generation calibration (3GC), which is needed for the new generation of giant radio telescopes, but can also improve the calibration of existing instruments.
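    The core MeqTrees idea, a numerical model expressed as a tree of nodes, evaluated bottom-up, with selected leaf parameters solvable against data, can be sketched in miniature; the node classes and the grid-search "solver" below are illustrative stand-ins, not the TDL or meqserver API:

```python
import math

# A node holds an operation and child nodes; evaluating the root pulls
# values up through the tree.
class Node:
    def __init__(self, op, *children):
        self.op, self.children = op, children

    def evaluate(self, params, t):
        vals = [c.evaluate(params, t) for c in self.children]
        return self.op(params, t, *vals)

# model(t) = gain * cos(phase * t): two leaves feeding a product node.
gain = Node(lambda p, t: p["gain"])
phase = Node(lambda p, t: math.cos(p["phase"] * t))
model = Node(lambda p, t, g, c: g * c, gain, phase)

# "Observed" data from true parameters, then solve for gain by grid search.
truth = {"gain": 2.0, "phase": 0.5}
ts = [0.1 * i for i in range(10)]
data = [model.evaluate(truth, t) for t in ts]

def misfit(g):
    p = {"gain": g, "phase": 0.5}
    return sum((model.evaluate(p, t) - d) ** 2 for t, d in zip(ts, data))

g_hat = min((i / 10 for i in range(0, 41)), key=misfit)
print(g_hat)
```

    The real system solves for parameter subsets with proper least-squares machinery and executes trees in a compiled back-end; the tree-of-nodes structure is what this sketch shares with it.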

  11. Dynamic optimization of distributed biological systems using robust and efficient numerical techniques.

    PubMed

    Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A

    2012-07-02

    Systems biology allows the analysis of biological system behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern-formation systems. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usual nonlinear and large-scale nature of the mathematical models related to this class of systems and the presence of constraints on the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced-order model methodology is proposed. The capabilities of this strategy are illustrated by solving two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. In the chemotaxis problem, the objective was to efficiently compute the time-varying optimal concentration of chemoattractant at one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved, and it illustrates how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. 
The presented methodology can be used for the efficient dynamic optimization of generic distributed biological systems.
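    Control vector parameterization (CVP), the discretization strategy named above, replaces the unknown control function by a finite set of piecewise-constant values and optimizes those. A minimal sketch on a toy scalar ODE follows; the plant, objective, and the crude coordinate search standing in for the hybrid global optimizer are all invented for illustration:

```python
def simulate(u_params, dt=0.01, T=1.0):
    # Forward-Euler integration of the toy plant dx/dt = -x + u(t), x(0) = 0,
    # with u(t) piecewise constant on len(u_params) equal intervals.
    n_steps = int(T / dt)
    N = len(u_params)
    x = 0.0
    for k in range(n_steps):
        u = u_params[min(int(k * dt / T * N), N - 1)]
        x += dt * (-x + u)
    return x

def objective(u_params, target=0.5):
    # Reach a terminal target while lightly penalizing control effort.
    effort = sum(u * u for u in u_params) / len(u_params)
    return (simulate(u_params) - target) ** 2 + 1e-3 * effort

# Crude coordinate search over the CVP parameters.
u = [0.0] * 4
step = 1.0
while step > 1e-4:
    improved = False
    for i in range(len(u)):
        for delta in (step, -step):
            trial = list(u)
            trial[i] += delta
            if objective(trial) < objective(u):
                u, improved = trial, True
    if not improved:
        step /= 2

print(round(simulate(u), 3))
```

    In the paper's setting the "simulate" step is a PDE solve (possibly via a reduced-order model), which is why efficient and robust optimizers matter.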

  12. Magnetospheric Whistler Mode Raytracing with the Inclusion of Finite Electron and ion Temperature

    NASA Astrophysics Data System (ADS)

    Maxworth, Ashanthi S.

    Whistler mode waves are low-frequency (100 Hz - 30 kHz) waves that exist only in a magnetized plasma. These waves play a major role in Earth's magnetosphere. Because whistler mode waves affect many fields, such as space weather, satellite communications, and the lifetime of space electronics, it is important to accurately predict their propagation path. The method used to determine the propagation path of whistler waves is called numerical raytracing. Numerical raytracing determines the power flow path of whistler mode waves by solving a set of equations known as Haselgrove's equations. In the majority of previous work, raytracing was implemented assuming a cold background plasma (0 K), but the actual magnetosphere is at a temperature of about 1 eV (11600 K). In this work we have modified the numerical raytracing algorithm to work at finite electron and ion temperatures. The finite temperature effects have also been introduced into the formulations for linear cyclotron resonance wave growth and Landau damping, which are the primary mechanisms for whistler mode growth and attenuation in the magnetosphere. Including temperature increases the complexity of numerical raytracing, but the overall effects are mostly limited to increasing the group velocity of the waves at highly oblique wave normal angles.
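    Haselgrove's equations are Hamiltonian ray equations: positions advance along the gradient of the dispersion function with respect to wavevector, and the wavevector advances along the negative spatial gradient. A generic sketch with a toy isotropic refractive-index profile (not the magnetospheric whistler dispersion relation) shows the structure:

```python
import math

# Ray equations in Hamiltonian form (the essence of Haselgrove's equations):
#   dr/dt = +dH/dk,   dk/dt = -dH/dr,   with H(r, k) = c * |k| / n(r).
C = 1.0

def n_index(x, y):
    # Hypothetical smooth refractive-index gradient; rays bend toward higher n.
    return 1.0 + 0.5 * x

def hamiltonian(x, y, kx, ky):
    return C * math.hypot(kx, ky) / n_index(x, y)

def grad(f, args, i, h=1e-6):
    # Central finite difference in the i-th argument.
    a = list(args); a[i] += h; fp = f(*a)
    a[i] -= 2 * h; fm = f(*a)
    return (fp - fm) / (2 * h)

# Integrate one ray with forward Euler steps.
state = [0.0, 0.0, 1.0, 0.2]            # x, y, kx, ky
dt = 0.01
for _ in range(200):
    x, y, kx, ky = state
    dx = grad(hamiltonian, state, 2)     # +dH/dkx
    dy = grad(hamiltonian, state, 3)     # +dH/dky
    dkx = -grad(hamiltonian, state, 0)   # -dH/dx
    dky = -grad(hamiltonian, state, 1)   # -dH/dy
    state = [x + dt * dx, y + dt * dy, kx + dt * dkx, ky + dt * dky]

print([round(s, 3) for s in state])
```

    A whistler raytracer replaces H with the plasma dispersion relation (cold or, as in this work, finite-temperature) and uses a higher-order integrator, but the equation structure is the same.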

  13. The iFlow modelling framework v2.4: a modular idealized process-based model for flow and transport in estuaries

    NASA Astrophysics Data System (ADS)

    Dijkstra, Yoeri M.; Brouwer, Ronald L.; Schuttelaars, Henk M.; Schramkowski, George P.

    2017-07-01

    The iFlow modelling framework is a width-averaged model for the systematic analysis of the water motion and sediment transport processes in estuaries and tidal rivers. The distinctive solution method, a mathematical perturbation method, used in the model allows for identification of the effect of individual physical processes on the water motion and sediment transport and study of the sensitivity of these processes to model parameters. This distinction between processes provides a unique tool for interpreting and explaining hydrodynamic interactions and sediment trapping. iFlow also includes a large number of options to configure the model geometry and multiple choices of turbulence and salinity models. Additionally, the model contains auxiliary components, including one that facilitates easy and fast sensitivity studies. iFlow has a modular structure, which makes it easy to include, exclude or change individual model components, called modules. Depending on the required functionality for the application at hand, modules can be selected to construct anything from very simple quasi-linear models to rather complex models involving multiple non-linear interactions. This way, the model complexity can be adjusted to the application. Once the modules containing the required functionality are selected, the underlying model structure automatically ensures modules are called in the correct order. The model inserts iteration loops over groups of modules that are mutually dependent. iFlow also ensures a smooth coupling of modules using analytical and numerical solution methods. This way the model combines the speed and accuracy of analytical solutions with the versatility of numerical solution methods. In this paper we present the modular structure, solution method and two examples of the use of iFlow. 
In the examples we present two case studies, of the Yangtze and Scheldt rivers, demonstrating how iFlow facilitates the analysis of model results, the understanding of the underlying physics and the testing of parameter sensitivity. A comparison of the model results to measurements shows a good qualitative agreement. iFlow is written in Python and is available as open source code under the LGPL license.
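    The automatic ordering of modules by their mutual dependencies, described above, is essentially a topological sort; where modules are mutually dependent, iFlow inserts an iteration loop. A minimal sketch using Kahn-style resolution follows; the module names and dependencies are hypothetical, not iFlow's actual ones, and this sketch only detects a cycle rather than iterating over it:

```python
# Hypothetical modules mapped to the modules whose output they require.
modules = {
    "geometry":   [],
    "turbulence": ["geometry"],
    "hydro":      ["geometry", "turbulence"],
    "sediment":   ["hydro"],
}

def call_order(deps):
    # Repeatedly run every module whose inputs are already resolved.
    order, resolved = [], set()
    pending = dict(deps)
    while pending:
        ready = [m for m, d in pending.items() if all(x in resolved for x in d)]
        if not ready:
            # Mutually dependent group: the real framework would wrap these
            # modules in an iteration loop until they converge.
            raise ValueError("mutually dependent modules: "
                             + ", ".join(sorted(pending)))
        for m in sorted(ready):
            order.append(m)
            resolved.add(m)
            del pending[m]
    return order

print(call_order(modules))
```

    Adding, say, a salinity module that both feeds and depends on "hydro" would trigger the mutual-dependence branch, which is where the framework's iteration loops come in.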

  14. Steel Fibre Reinforced Concrete Simulation with the SPH Method

    NASA Astrophysics Data System (ADS)

    Hušek, Martin; Kala, Jiří; Král, Petr; Hokeš, Filip

    2017-10-01

    Steel fibre reinforced concrete (SFRC) is very popular in many branches of civil engineering. Thanks to its increased ductility, it is able to resist various types of loading. When designing a structure, the mechanical behaviour of SFRC can be described by currently available material models (with an equivalent material, for example), and therefore no problems arise with numerical simulations. But in many scenarios, e.g. high-speed loading, it would be a mistake to use such an equivalent material. Physical modelling of the steel fibres used in concrete is usually problematic, though. Mesh-based methods are very unsuitable for high-speed simulations because of the issues caused by excessive mesh deformation. So-called meshfree methods are much more suitable for this purpose, and the Smoothed Particle Hydrodynamics (SPH) method is currently among the best choices. However, a numerical defect known as tensile instability may appear when the SPH method is used. It causes the development of numerical (false) cracks, making simulations of ductile types of failure significantly more difficult to perform. This contribution therefore describes a procedure for avoiding this defect and successfully simulating the behaviour of SFRC with the SPH method. The essence of the problem lies in the choice of coordinates and the description of the integration domain derived from them: spatial coordinates (Eulerian kernel) or material coordinates (Lagrangian kernel). The contribution describes the behaviour of both formulations. Conclusions are drawn from the fundamental tasks, and the contribution additionally demonstrates the functionality of SFRC simulations. The random generation of steel fibres and their inclusion in simulations are also discussed. 
The functionality of the method is supported by the results of pressure test simulations which compare various levels of fibre reinforcement of SFRC specimens.
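    The building block both kernel formulations share is the kernel-weighted particle sum; the Eulerian/Lagrangian distinction is whether kernel distances are measured in current (spatial) or reference (material) particle positions. A minimal 1-D density summation with the standard cubic-spline kernel, on an invented uniform particle arrangement, illustrates it:

```python
def cubic_spline(r, h):
    # Standard 1-D cubic-spline SPH kernel with support radius 2h.
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)  # 1-D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

dx = 0.1
h = 1.3 * dx                  # typical smoothing length, 1.3 particle spacings
mass = 1.0 * dx               # particle mass for a target density of 1.0
positions = [i * dx for i in range(-40, 41)]

# Density at the centre particle: kernel-weighted sum of neighbour masses.
rho = sum(mass * cubic_spline(0.0 - xj, h) for xj in positions)
print(round(rho, 3))
```

    With a Lagrangian kernel the distances `0.0 - xj` would be evaluated in the reference configuration and kept fixed as particles move, which is what suppresses the tensile instability.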

  15. Being Numerate: What Counts? A Fresh Look at the Basics.

    ERIC Educational Resources Information Center

    Willis, Sue, Ed.

    To be numerate is to be able to function mathematically in one's daily life. The kinds of mathematics skills and understandings necessary to function effectively in daily life are changing. Despite an awareness in Australia of new skills necessary for the information age and calls that the schools should be instrumental in preparing students with…

  16. DIATOM (Data Initialization and Modification) Library Version 7.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, David A.; Schmitt, Robert G.; Hensinger, David M.

    DIATOM is a library that provides numerical simulation software with a computational geometry front end that can be used to build up complex problem geometries from collections of simpler shapes. The library provides a parser which allows for application-independent geometry descriptions to be embedded in simulation software input decks. Descriptions take the form of collections of primitive shapes and/or CAD input files and material properties that can be used to describe complex spatial and temporal distributions of numerical quantities (often called “database variables” or “fields”) to help define starting conditions for numerical simulations. The capability is designed to be general purpose, robust and computationally efficient. By using a combination of computational geometry and recursive divide-and-conquer approximation techniques, a wide range of primitive shapes are supported to arbitrary degrees of fidelity, controllable through user input and limited only by machine resources. Through the use of call-back functions, numerical simulation software can request the value of a field at any time or location in the problem domain. Typically, this is used only for defining initial conditions, but the capability is not limited to just that use. The most recent version of DIATOM provides the ability to import the solution field from one numerical solution as input for another.
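    The recursive divide-and-conquer approximation mentioned above can be sketched for the simplest case: estimating the volume (area) fraction of a cell occupied by a shape by subdividing cells the boundary cuts. The circle test and depth limit below are invented illustrations, not DIATOM's implementation:

```python
def inside_circle(x, y):
    # Primitive shape: the unit circle centred at the origin.
    return x * x + y * y <= 1.0

def volume_fraction(x0, y0, x1, y1, depth=7):
    corners = [inside_circle(x, y) for x in (x0, x1) for y in (y0, y1)]
    if all(corners):
        return 1.0  # circle is convex: all corners inside => cell fully inside
    if depth == 0:
        # Boundary cell at maximum refinement: crude half-covered estimate.
        return 0.5 if any(corners) else 0.0
    # Subdivide into four quadrants and average their fractions.
    xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
    quads = [(x0, y0, xm, ym), (xm, y0, x1, ym),
             (x0, ym, xm, y1), (xm, ym, x1, y1)]
    return 0.25 * sum(volume_fraction(*q, depth - 1) for q in quads)

# Fraction of the unit square [0,1]x[0,1] covered by the unit circle: pi/4.
frac = volume_fraction(0.0, 0.0, 1.0, 1.0)
print(round(frac, 3))
```

    The `depth` parameter plays the role of the user-controllable fidelity: each extra level quarters the size of the boundary cells and tightens the estimate at the cost of more work.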

  17. Injection molding lens metrology using software configurable optical test system

    NASA Astrophysics Data System (ADS)

    Zhan, Cheng; Cheng, Dewen; Wang, Shanshan; Wang, Yongtian

    2016-10-01

    Optical plastic lenses produced by injection molding machines possess numerous advantages: light weight, impact resistance, low cost, etc. The measurement methods used in optical shops are mainly interferometry and profilometry. However, these instruments are not only expensive but also difficult to align. The software configurable optical test system (SCOTS) is based on the geometry of fringe reflection and the phase measuring deflectometry (PMD) method, and can be used to measure large-diameter mirrors, aspheric surfaces, and freeform surfaces rapidly, robustly, and accurately. In addition to the conventional phase shifting method, we propose another data collection method, called dot-matrix projection. We also use Zernike polynomials to correct the camera distortion. This polynomial-fitting method for mapping distortion is not only simple to operate but also offers high conversion precision. We simulate this test system measuring a concave surface using CODE V and MATLAB. The simulation results show that the dot-matrix projection method has high accuracy and that SCOTS holds significant promise for on-line inspection in the optical shop.
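    The idea of fitting a polynomial distortion correction to dot-matrix data can be sketched with the simplest possible basis, a single radial term standing in for the Zernike polynomials; the grid, distortion coefficient, and noiseless measurements below are all synthetic:

```python
# Ideal dot-matrix grid and synthetic distorted measurements generated with
# a radial distortion model:  x_meas = x * (1 + k * r^2),  r^2 = x^2 + y^2.
k_true = 0.05
grid = [(x / 4, y / 4) for x in range(-4, 5) for y in range(-4, 5)]
measured = [(x * (1 + k_true * (x * x + y * y)),
             y * (1 + k_true * (x * x + y * y))) for x, y in grid]

# Least-squares estimate of k from the linear model  meas - ideal = k * ideal * r^2.
num = den = 0.0
for (x, y), (xm, ym) in zip(grid, measured):
    r2 = x * x + y * y
    for ideal, meas in ((x, xm), (y, ym)):
        basis = ideal * r2
        num += basis * (meas - ideal)
        den += basis * basis
k_hat = num / den
print(round(k_hat, 6))
```

    Replacing the single radial basis function with a set of Zernike terms turns the scalar least-squares fit into a small linear system but changes nothing structural.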

  18. Development of Quadratic Programming Algorithm Based on Interior Point Method with Estimation Mechanism of Active Constraints

    NASA Astrophysics Data System (ADS)

    Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka

    Instability of the calculation process and the growth of calculation time with increasing problem size remain the major obstacles to applying continuous optimization to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes variables that approach their upper/lower limits and later releases the fixed ones as needed during the optimization process. It can be seen as an algorithm-level integration of the active-set solution strategy into the interior point method framework. We describe numerical results on the commonly used “CUTEr” benchmark problems to show the effectiveness of the proposed method. Furthermore, test results on a large-scale ELD problem (Economic Load Dispatch in electric power supply scheduling) are also described as a practical industrial application.
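    The "variables pinned at their bounds become active constraints" idea can be seen in miniature with projected gradient descent on a bound-constrained quadratic; this illustrates the active-set flavor only, not the paper's interior point algorithm:

```python
def solve(lo, hi, steps=500, lr=0.1):
    # Minimize (x0 - 2)^2 + (x1 + 1)^2 subject to lo <= x <= hi.
    # Clipping after each gradient step pins variables at their bounds,
    # the analogue of "fixing" an active constraint; a variable is released
    # automatically whenever the gradient pulls it back into the interior.
    x = [0.0, 0.0]
    for _ in range(steps):
        g = [2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)]
        x = [min(max(xi - lr * gi, lo), hi) for xi, gi in zip(x, g)]
    return x

x = solve(-0.5, 0.5)
print([round(v, 3) for v in x])
```

    Both unconstrained minimizers (2 and -1) lie outside the box, so both variables end up fixed at a bound, which is exactly the situation the dynamic active-constraint estimation is designed to detect cheaply.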

  19. The life of a meander bend: Connecting shape and dynamics via analysis of a numerical model

    NASA Astrophysics Data System (ADS)

    Schwenk, Jon; Lanzoni, Stefano; Foufoula-Georgiou, Efi

    2015-04-01

    Analysis of bend-scale meandering river dynamics is a problem of theoretical and practical interest. This work introduces a method for extracting and analyzing the history of individual meander bends from inception until cutoff (called "atoms") by tracking backward through time the set of two cutoff nodes in numerical meander migration models. Application of this method to a simplified yet physically based model provides access to previously unavailable bend-scale meander dynamics over long times and at high temporal resolutions. We find that before cutoffs, the intrinsic model dynamics invariably simulate a prototypical cutoff atom shape we dub simple. Once perturbations from cutoffs occur, two other archetypal cutoff planform shapes emerge called long and round that are distinguished by a stretching along their long and perpendicular axes, respectively. Three measures of meander migration—growth rate, average migration rate, and centroid migration rate—are introduced to capture the dynamic lives of individual bends and reveal that similar cutoff atom geometries share similar dynamic histories. Specifically, through the lens of the three shape types, simples are seen to have the highest growth and average migration rates, followed by rounds, and finally longs. Using the maximum average migration rate as a metric describing an atom's dynamic past, we show a strong connection between it and two metrics of cutoff geometry. This result suggests both that early formative dynamics may be inferred from static cutoff planforms and that there exists a critical period early in a meander bend's life when its dynamic trajectory is most sensitive to cutoff perturbations. An example of how these results could be applied to Mississippi River oxbow lakes with unknown historic dynamics is shown. The results characterize the underlying model and provide a framework for comparisons against more complex models and observed dynamics.

  20. Investigation of hydroelastic ship responses of an ULOC in head seas

    NASA Astrophysics Data System (ADS)

    Wang, Xue-liang; Temarel, Pandeli; Hu, Jia-jun; Gu, Xue-kang

    2016-10-01

    Investigation of hydroelastic ship responses has held the attention of the scientific and engineering communities for several decades. Two kinds of high-frequency vibrations arise in the responses of a large ocean-going ship on its shipping line, the so-called springing and whipping, which are important for the determination of design wave loads as well as fatigue damage. Because of the huge scale of an ultra large ore carrier (ULOC), it will seldom suffer slamming events in the ocean; the high-frequency resonant vibration is springing, which is caused by continuous wave excitation. In this paper, the wave-induced vibrations of a ULOC are addressed by experimental and numerical methods according to 2D and 3D hydroelasticity theories and an elastic model under full-load and ballast conditions. The influence of loading conditions on high-frequency vibration is studied using both numerical and experimental results. Wave-induced vibrations are higher under the ballast condition, including the wave-frequency part, the multiple-frequency part, and the 2-node and 3-node vertical bending parts of the hydroelastic responses. The predicted results from the 2D method are less accurate than those from the 3D method, especially under the ballast condition, because of the slender-body assumption in the former. The applicability of the 2D method and the further development of nonlinear effects in the 3D method for the prediction of hydroelastic responses of ULOCs are discussed.

  1. Using SpF to Achieve Petascale for Legacy Pseudospectral Applications

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.; Jiang, Weiyuan

    2014-01-01

    Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical kernels that can be performed entirely in-processor. The granularity of domain decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. 
In this presentation, we will describe our experience in porting legacy pseudospectral models, MoSST and DYNAMO, to use SpF as well as present preliminary performance results provided by the improved scalability.

  2. NWP model forecast skill optimization via closure parameter variations

    NASA Astrophysics Data System (ADS)

    Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.

    2012-04-01

    We present results of a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.
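    The two-step loop described in (i)-(ii), sample parameters from a proposal, score each ensemble member against observations, and feed the scores back into the proposal, can be sketched on a trivial stand-in model (a one-parameter linear "forecast", not an NWP model; all numbers are invented):

```python
import math
import random

random.seed(1)

def forecast(theta, x):
    # Toy "model" with a single closure parameter.
    return theta * x

true_theta = 2.5
obs = [(x, forecast(true_theta, x)) for x in (1.0, 2.0, 3.0)]

mean, std = 0.0, 2.0  # initial proposal distribution for the parameter
for _ in range(20):
    # (i) ensemble of members, each with its own sampled parameter value
    ensemble = [random.gauss(mean, std) for _ in range(50)]
    # (ii) likelihood weights against verifying observations
    weights = []
    for theta in ensemble:
        sse = sum((forecast(theta, x) - y) ** 2 for x, y in obs)
        weights.append(math.exp(-0.5 * sse))
    total = sum(weights) or 1.0
    mean = sum(w * t for w, t in zip(weights, ensemble)) / total
    var = sum(w * (t - mean) ** 2 for w, t in zip(weights, ensemble)) / total
    std = max(math.sqrt(var), 0.05)  # floor keeps the proposal exploring

print(round(mean, 2))
```

    The key cost property carries over: the only model runs are the ensemble members that an ensemble prediction system would produce anyway.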

  3. Active listening room compensation for massive multichannel sound reproduction systems using wave-domain adaptive filtering.

    PubMed

    Spors, Sascha; Buchner, Herbert; Rabenstein, Rudolf; Herbordt, Wolfgang

    2007-07-01

    The acoustic theory for multichannel sound reproduction systems usually assumes free-field conditions for the listening environment. However, their performance in real-world listening environments may be impaired by reflections at the walls. This impairment can be reduced by suitable compensation measures. For systems with many channels, active compensation is an option, since the compensating waves can be created by the reproduction loudspeakers. Due to the time-varying nature of room acoustics, the compensation signals have to be determined by an adaptive system. The problems associated with the successful operation of multichannel adaptive systems are addressed in this contribution. First, a method for decoupling the adaptation problem is introduced. It is based on a generalized singular value decomposition and is called eigenspace adaptive filtering. Unfortunately, it cannot be implemented in its pure form, since the continuous adaptation of the generalized singular value decomposition matrices to the variable room acoustics is numerically very demanding. However, a combination of this mathematical technique with the physical description of wave propagation yields a realizable multichannel adaptation method with good decoupling properties. It is called wave domain adaptive filtering and is discussed here in the context of wave field synthesis.
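    After the decoupling transformation, each component can be adapted independently with a simple scalar update. A single-channel LMS filter identifying an unknown 3-tap "room response" gives the flavor of one such decoupled adaptation loop; the tap values and step size are invented, and this is not the wave-domain transform itself:

```python
import random

random.seed(0)

room = [0.5, 0.3, -0.2]   # unknown 3-tap room response (hypothetical values)
w = [0.0, 0.0, 0.0]       # adaptive filter taps
mu = 0.05                 # LMS step size
x_hist = [0.0, 0.0, 0.0]  # most recent input samples, newest first

for _ in range(5000):
    x = random.uniform(-1.0, 1.0)
    x_hist = [x] + x_hist[:2]
    d = sum(h * xi for h, xi in zip(room, x_hist))  # "room" output
    y = sum(wi * xi for wi, xi in zip(w, x_hist))   # filter output
    e = d - y                                       # residual to cancel
    w = [wi + mu * e * xi for wi, xi in zip(w, x_hist)]

print([round(wi, 2) for wi in w])
```

    Wave-domain adaptive filtering runs many such loosely coupled adaptations in a transformed domain where the massive multichannel problem approximately separates, instead of one huge coupled multichannel filter.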

  4. Scientific Teaching: Defining a Taxonomy of Observable Practices

    PubMed Central

    Couch, Brian A.; Brown, Tanya L.; Schelpat, Tyler J.; Graham, Mark J.; Knight, Jennifer K.

    2015-01-01

    Over the past several decades, numerous reports have been published advocating for changes to undergraduate science education. These national calls inspired the formation of the National Academies Summer Institutes on Undergraduate Education in Biology (SI), a group of regional workshops to help faculty members learn and implement interactive teaching methods. The SI curriculum promotes a pedagogical framework called Scientific Teaching (ST), which aims to bring the vitality of modern research into the classroom by engaging students in the scientific discovery process and using student data to inform the ongoing development of teaching methods. With the spread of ST, the need emerges to systematically define its components in order to establish a common description for education researchers and practitioners. We describe the development of a taxonomy detailing ST’s core elements and provide data from classroom observations and faculty surveys in support of its applicability within undergraduate science courses. The final taxonomy consists of 15 pedagogical goals and 37 supporting practices, specifying observable behaviors, artifacts, and features associated with ST. This taxonomy will support future educational efforts by providing a framework for researchers studying the processes and outcomes of ST-based course transformations as well as a concise guide for faculty members developing classes. PMID:25713097

  5. A novel finite volume discretization method for advection-diffusion systems on stretched meshes

    NASA Astrophysics Data System (ADS)

    Merrick, D. G.; Malan, A. G.; van Rooyen, J. A.

    2018-06-01

    This work is concerned with spatial advection and diffusion discretization technology within the field of Computational Fluid Dynamics (CFD). In this context, a novel method is proposed, which is dubbed the Enhanced Taylor Advection-Diffusion (ETAD) scheme. The model equation employed for design of the scheme is the scalar advection-diffusion equation, the industrial application being incompressible laminar and turbulent flow. Developed to be implementable into finite volume codes, ETAD places specific emphasis on improving accuracy on stretched structured and unstructured meshes while considering both advection and diffusion aspects in a holistic manner. A vertex-centered structured and unstructured finite volume scheme is used, and only data available on either side of the volume face is employed. This includes the addition of a so-called mesh stretching metric. Additionally, non-linear blending with the existing NVSF scheme was performed in the interest of robustness and stability, particularly on equispaced meshes. The developed scheme is assessed in terms of accuracy; this is done analytically and numerically, via comparison to upwind methods including the popular QUICK and CUI techniques. Numerical tests involved the 1D scalar advection-diffusion equation, a 2D lid-driven cavity, and a turbulent flow case. Significant improvements in accuracy were achieved, with L2 error reductions of up to 75%.
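    The baseline that higher-order face schemes such as QUICK, CUI, or the proposed ETAD improve upon is first-order upwinding of the advective face value. A minimal 1-D finite-volume solve of the steady advection-diffusion equation with upwind advection, compared against the exact solution, makes the accuracy gap concrete (the coefficients and solver below are a textbook sketch, not the paper's scheme):

```python
import math

def solve_upwind(n, u=2.0, D=0.1, sweeps=20000):
    # Steady  u * dphi/dx = D * d2phi/dx2  on [0, 1], phi(0)=0, phi(1)=1,
    # first-order upwind advection, solved by Gauss-Seidel sweeps.
    dx = 1.0 / n
    aw = D / dx + max(u, 0.0)   # upwind-biased west face coefficient
    ae = D / dx + max(-u, 0.0)  # east face coefficient
    phi = [i * dx for i in range(n + 1)]  # linear initial guess, fixed ends
    for _ in range(sweeps):
        for i in range(1, n):
            phi[i] = (aw * phi[i - 1] + ae * phi[i + 1]) / (aw + ae)
    return phi

def exact(x, u=2.0, D=0.1):
    return math.expm1(u * x / D) / math.expm1(u / D)

n = 40
phi = solve_upwind(n)
err = max(abs(p - exact(i / n)) for i, p in enumerate(phi))
print(round(err, 3))
```

    The sizeable maximum error comes from the numerical diffusion of first-order upwinding, which smears the exponential boundary layer; higher-order face reconstructions exist precisely to shrink this error without sacrificing stability.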

  6. Numerical simulation of liquid-layer breakup on a moving wall due to an impinging jet

    NASA Astrophysics Data System (ADS)

    Yu, Taejong; Moon, Hojoon; You, Donghyun; Kim, Dokyun; Ovsyannikov, Andrey

    2014-11-01

    Jet wiping, which is a hydrodynamic method for controlling the liquid film thickness in coating processes, is constrained by a rather violent film instability called splashing. The instability is characterized by the ejection of droplets from the runback flow and results in an explosion of the film. The splashing phenomenon degrades the final coating quality. In the present research, a volume-of-fluid (VOF)-based method, which was developed at Cascade Technologies, is employed to simulate the air-liquid multiphase flow dynamics. The present numerical method is based on an unstructured-grid unsplit geometric VOF scheme and guarantees strict conservation of mass in two-phase flow. The simulation results are compared with experimental measurements such as the liquid-film thickness before and after the jet wiping, wall pressure, and shear stress distributions. The trajectories of liquid droplets due to the fluid motion entrained by the gas-jet operation are also qualitatively compared with experimental visualization. Physical phenomena observed during the liquid-layer breakup due to an impinging jet are characterized in order to develop ideas for controlling the liquid-layer instability and the resulting splash generation and propagation. Supported by the Grant NRF-2012R1A1A2003699, the Brain Korea 21+ program, POSCO, and the 2014 CTR Summer Program.

  7. Identification of species by multiplex analysis of variable-length sequences

    PubMed Central

    Pereira, Filipe; Carneiro, João; Matthiesen, Rune; van Asch, Barbara; Pinto, Nádia; Gusmão, Leonor; Amorim, António

    2010-01-01

    The quest for a universal and efficient method of identifying species has been a longstanding challenge in biology. Here, we show that accurate identification of species in all domains of life can be accomplished by multiplex analysis of variable-length sequences containing multiple insertion/deletion variants. The new method, called SPInDel, is able to discriminate 93.3% of eukaryotic species from 18 taxonomic groups. We also demonstrate that the identification of prokaryotic and viral species with numeric profiles of fragment lengths is generally straightforward. A computational platform is presented to facilitate the planning of projects and includes a large data set with nearly 1800 numeric profiles for species in all domains of life (1556 for eukaryotes, 105 for prokaryotes and 130 for viruses). Finally, a SPInDel profiling kit for discrimination of 10 mammalian species was successfully validated on highly processed food products with species mixtures and proved to be easily adaptable to multiple screening procedures routinely used in molecular biology laboratories. These results suggest that SPInDel is a reliable and cost-effective method for broad-spectrum species identification that is appropriate for use in suboptimal samples and is amenable to different high-throughput genotyping platforms without the need for DNA sequencing. PMID:20923781
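    Classification with numeric profiles of fragment lengths, as described above, amounts to nearest-profile matching. A minimal sketch follows; the species profiles and fragment lengths below are invented for illustration, not SPInDel's actual data:

```python
# Each species is summarized by the lengths of a few hypervariable regions
# (invented numbers); an unknown sample is assigned to the closest profile.
profiles = {
    "Homo sapiens":  (82, 119, 74, 101, 63),
    "Bos taurus":    (85, 112, 74, 108, 60),
    "Sus scrofa":    (80, 125, 70, 101, 66),
    "Gallus gallus": (90, 110, 77, 95, 58),
}

def identify(sample):
    # Nearest profile under the L1 distance on fragment lengths.
    def dist(profile):
        return sum(abs(a - b) for a, b in zip(profile, sample))
    return min(profiles, key=lambda sp: dist(profiles[sp]))

print(identify((84, 113, 74, 107, 60)))  # a few lengths off one profile
```

    Because only fragment lengths are compared, the assignment works directly on sizing data from standard genotyping platforms, with no sequencing step.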

  8. High-Order Space-Time Methods for Conservation Laws

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2013-01-01

    Current high-order methods such as discontinuous Galerkin and/or flux reconstruction can provide effective discretization for the spatial derivatives. Together with a time discretization, such methods result in either too small a time step size in the case of an explicit scheme or a very large system in the case of an implicit one. To tackle these problems, two new high-order space-time schemes for conservation laws are introduced: the first is explicit and the second, implicit. The explicit method here, also called the moment scheme, achieves a Courant-Friedrichs-Lewy (CFL) condition of 1 for the case of one spatial dimension regardless of the degree of the polynomial approximation. (For standard explicit methods, if the spatial approximation is of degree p, then the time step sizes are typically proportional to 1/p^2.) Fourier analyses for the one- and two-dimensional cases are carried out. The property of super accuracy (or super convergence) is discussed. The implicit method is a simplified but optimal version of the discontinuous Galerkin scheme applied to time. It reduces to a collocation implicit Runge-Kutta (RK) method for ordinary differential equations (ODE) called Radau IIA. The explicit and implicit schemes are closely related since they employ the same intermediate time levels, and the former can serve as a key building block in an iterative procedure for the latter. A limiting technique for the piecewise linear scheme is also discussed. The technique can suppress oscillations near a discontinuity while preserving accuracy near extrema. Preliminary numerical results are shown.
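
    A hedged illustration of the time-step claim above (a toy calculation, not from the report; the model problem u_t + a u_x = 0 and the 1/p^2 proportionality quoted in the abstract are assumptions used for the sketch):

```python
# Toy comparison of explicit time-step limits for a 1-D advection model
# problem u_t + a u_x = 0 (assumed; not the paper's test cases).

def dt_standard(dx, a, p):
    """Standard explicit scheme: step size typically proportional to 1/p^2
    for a degree-p spatial approximation, per the abstract."""
    return dx / (a * p**2)

def dt_moment_scheme(dx, a):
    """Moment scheme: CFL condition of 1 regardless of the degree p."""
    return dx / a

dx, a = 0.01, 2.0
for p in (1, 2, 4):
    print(p, dt_standard(dx, a, p), dt_moment_scheme(dx, a))
```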

  9. Etude analytique et numérique de la réponse en vibration à hautes fréquences d'éprouvettes de fatigue vibratoire des métaux. Application aux aciers

    NASA Astrophysics Data System (ADS)

    Ben Aich, A.; El Kihel, B.; Kifani, A.; Sahban, F.

    1994-07-01

    In the present paper, the so-called "ultrasonic fatigue," or fatigue at very high frequency, has been studied in the case of elastic material behaviour while neglecting the thermal effects that influence the mechanical fields. The determination of the mechanical fields and the specimen resonance length has been done both analytically and numerically. The numerical method used for this calculation is the finite element method (FEM). Martensitic steel "Soleil A2" and austenitic 18-10 steel "ICL 472 BC" have been considered in order to compare the two methods (analytical and numerical). It is shown that a perfect convergence is obtained between the two solutions.

  10. The Numerical Simulation of Coupling Behavior of Soil with Chemical Pollutant Effects

    NASA Astrophysics Data System (ADS)

    Liu, Z. J.; Li, X. K.; Tang, L. Q.

    2010-05-01

    The coupling behavior of clay plays a role in the integrity of clay barriers used in landfills. The clay barriers are subjected to mechanical and thermal effects coupled with hydraulic behavior; moreover, if the leachates come into contact with the clay liner, chemical effects may lead to drastic changes in the properties of the clay. A numerical method to simulate the coupling behavior of soil with chemical pollutant effects is presented. Within the framework of the Gens-Alonso model describing the constitutive behavior of unsaturated clay presented in reference [1], and based on the work of Wu [2] and Hueckel [3], a constitutive model describing the chemo-thermo-hydro-mechanical (CTHM) coupling behavior of clays in contact with a single organic contaminant is presented. Thermal softening and chemical softening are considered in the presented model. The strain arising in the material due to chemical and thermal effects can be decomposed into two parts: elastic expansion and plastic compaction. The chemical effects are described in terms of the mass concentration of the contaminant. Increases in temperature and contaminant concentration cause decreases in the pre-consolidation pressure and the cohesion; these mechanisms are called thermal softening and chemical softening. The presented coupled CTHM constitutive model has been integrated into a coupled thermo-hydro-mechanical mathematical model including contaminant transport in porous media. To solve the equilibrium equations, a finite element program with a staggered algorithm is developed. The mechanisms taking place due to the coupling behavior of the clay with a single contaminant solute are analysed with the presented numerical method.

  11. An EMAT-based shear horizontal (SH) wave technique for adhesive bond inspection

    NASA Astrophysics Data System (ADS)

    Arun, K.; Dhayalan, R.; Balasubramaniam, Krishnan; Maxfield, Bruce; Peres, Patrick; Barnoncel, David

    2012-05-01

    The evaluation of adhesively bonded structures has been a challenge over the several decades that these structures have been used. Applications within the aerospace industry often call for particularly high performance adhesive bonds. Several techniques have been proposed for the detection of disbonds and cohesive weakness but a reliable NDE method for detecting interfacial weakness (also sometimes called a kissing bond) has been elusive. Different techniques, including ultrasonic, thermal imaging and shearographic methods, have been proposed; all have had some degree of success. In particular, ultrasonic methods, including those based upon shear and guided waves, have been explored for the assessment of interfacial bond quality. Since 3-D guided shear horizontal (SH) waves in plates have predominantly shear displacement at the plate surfaces, we conjectured that SH guided waves should be influenced by interfacial conditions when they propagate between adhesively bonded plates of comparable thickness. This paper describes a new technique based on SH guided waves that propagate within and through a lap joint. Through mechanisms we have yet to fully understand, the propagation of an SH wave through a lap joint gives rise to a reverberation signal that is due to one or more reflections of an SH guided wave mode within that lap joint. Based upon a combination of numerical simulations and measurements, this method shows promise for detecting and classifying interfacial bonds. It is also apparent from our measurements that the SH wave modes can discriminate between adhesive and cohesive bond weakness in both Aluminum-Epoxy-Aluminum and Composite-Epoxy-Composite lap joints. All measurements reported here used periodic permanent magnet (PPM) Electro-Magnetic Acoustic Transducers (EMATs) to generate either or both of the two lowest order SH modes in the plates that comprise the lap joint. 
This exact configuration has been simulated using finite element (FE) models to describe the SH mode generation, propagation and reception. Of particular interest is that one SH guided wave mode (probably SH0) reverberates within the lap joint. Moreover, in both simulations and measurements, features of this so-called reverberation signal appear to be related to interfacial weakness between the plate (substrate) and the epoxy bond. The results of a hybrid numerical (FE) approach based on using COMSOL to calculate the driving forces within an elastic solid and ABAQUS to propagate the resulting elastic disturbances (waves) within the plates and lap joint are compared with measurements of SH wave generation and reception in lap joint specimens having different interfacial and cohesive bonding conditions.

  12. Selecting an appropriate method to remove cyanide from the wastewater of Moteh gold mine using a mathematical approach.

    PubMed

    Seyyed Alizadeh Ganji, Seyyed Mohammad; Hayati, Mohammad

    2018-06-05

    The presence of cyanide ions in wastewater is dangerous to the health and life of living creatures, especially humans. Cyanide concentration should not exceed the acceptable limit in wastewaters to avoid adverse effects on the environment. In this paper, in order to select the most appropriate method to remove cyanide from the wastewater of the Moteh gold mine, based on the experts' opinions, the use of calcium hypochlorite, sodium hypochlorite, and hydrogen peroxide was chosen as the set of candidate alternatives in a multi-stage model. Then, seven criteria for assessing the considered methods were determined, including the amount of material consumed, ease of implementation, safety, ability to remove cyanide, pH, time, and cost of the process. Afterwards, seven experts conducted numerous experiments to examine the conditions of each of these criteria. Then, by employing a mathematical method called "numerical taxonomy," the use of sodium hypochlorite was suggested as the best method to remove cyanide from the wastewater of the Moteh gold mine. Finally, the TOPSIS model was used to validate the proposed model and led to the same result as the suggested method: the results of both the taxonomic analysis and the TOPSIS method indicated sodium hypochlorite as the best method for cyanide removal from wastewater. In addition, according to the analysis of various experiments, the conditions for complete removal of cyanide using sodium hypochlorite were a concentration of 8.64 g/L, a pH of 12.3, and a temperature of 12 °C.
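
    The TOPSIS validation step mentioned above can be sketched as follows (the decision matrix, weights, and criteria below are hypothetical placeholders, not the paper's experimental data):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution (TOPSIS).

    matrix: alternatives x criteria; weights sum to 1;
    benefit[j] is True if higher is better for criterion j.
    """
    M = matrix / np.sqrt((matrix**2).sum(axis=0))   # vector-normalize columns
    V = M * weights                                  # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal)**2).sum(axis=1))
    d_neg = np.sqrt(((V - worst)**2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                   # closeness in [0, 1]

# Hypothetical scores for the three reagents against three benefit criteria
scores = np.array([[7.0, 6.0, 5.0],    # calcium hypochlorite
                   [8.0, 7.0, 6.0],    # sodium hypochlorite
                   [6.0, 8.0, 4.0]])   # hydrogen peroxide
w = np.array([0.5, 0.3, 0.2])
closeness = topsis(scores, w, np.array([True, True, True]))
print(closeness.argmax())  # index of the best-ranked alternative
```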

  13. Simultaneous Genotype Calling and Haplotype Phasing Improves Genotype Accuracy and Reduces False-Positive Associations for Genome-wide Association Studies

    PubMed Central

    Browning, Brian L.; Yu, Zhaoxia

    2009-01-01

    We present a novel method for simultaneous genotype calling and haplotype-phase inference. Our method employs the computationally efficient BEAGLE haplotype-frequency model, which can be applied to large-scale studies with millions of markers and thousands of samples. We compare genotype calls made with our method to genotype calls made with the BIRDSEED, CHIAMO, GenCall, and ILLUMINUS genotype-calling methods, using genotype data from the Illumina 550K and Affymetrix 500K arrays. We show that our method has higher genotype-call accuracy and yields fewer uncalled genotypes than competing methods. We perform single-marker analysis of data from the Wellcome Trust Case Control Consortium bipolar disorder and type 2 diabetes studies. For bipolar disorder, the genotype calls in the original study yield 25 markers with apparent false-positive association with bipolar disorder at a p < 10−7 significance level, whereas genotype calls made with our method yield no associated markers at this significance threshold. Conversely, for markers with replicated association with type 2 diabetes, there is good concordance between genotype calls used in the original study and calls made by our method. Results from single-marker and haplotypic analysis of our method's genotype calls for the bipolar disorder study indicate that our method is highly effective at eliminating genotyping artifacts that cause false-positive associations in genome-wide association studies. Our new genotype-calling methods are implemented in the BEAGLE and BEAGLECALL software packages. PMID:19931040

  14. Numerically solving the relativistic Grad-Shafranov equation in Kerr spacetimes: numerical techniques

    NASA Astrophysics Data System (ADS)

    Mahlmann, J. F.; Cerdá-Durán, P.; Aloy, M. A.

    2018-07-01

    The study of the electrodynamics of static, axisymmetric, and force-free Kerr magnetospheres relies vastly on solutions of the so-called relativistic Grad-Shafranov equation (GSE). Different numerical approaches to the solution of the GSE have been introduced in the literature, but none of them has been fully assessed from the numerical point of view in terms of efficiency and quality of the solutions found. We present a generalization of these algorithms and give a detailed background on the algorithmic implementation. We assess the numerical stability of the implemented algorithms and quantify the convergence of the presented methodology for the most established set-ups (split-monopole, paraboloidal, BH disc, uniform).

  15. Numerically solving the relativistic Grad-Shafranov equation in Kerr spacetimes: Numerical techniques

    NASA Astrophysics Data System (ADS)

    Mahlmann, J. F.; Cerdá-Durán, P.; Aloy, M. A.

    2018-04-01

    The study of the electrodynamics of static, axisymmetric and force-free Kerr magnetospheres relies vastly on solutions of the so-called relativistic Grad-Shafranov equation (GSE). Different numerical approaches to the solution of the GSE have been introduced in the literature, but none of them has been fully assessed from the numerical point of view in terms of efficiency and quality of the solutions found. We present a generalization of these algorithms and give a detailed background on the algorithmic implementation. We assess the numerical stability of the implemented algorithms and quantify the convergence of the presented methodology for the most established setups (split-monopole, paraboloidal, BH-disk, uniform).

  16. Gutzwiller renormalization group

    DOE PAGES

    Lanatà, Nicola; Yao, Yong -Xin; Deng, Xiaoyu; ...

    2016-01-06

    We develop a variational scheme called the “Gutzwiller renormalization group” (GRG), which enables us to calculate the ground state of Anderson impurity models (AIM) with arbitrary numerical precision. Our method exploits the low-entanglement property of the ground state of local Hamiltonians in combination with the framework of the Gutzwiller wave function and indicates that the ground state of the AIM has a very simple structure, which can be represented very accurately in terms of a surprisingly small number of variational parameters. Furthermore, we perform benchmark calculations of the single-band AIM that validate our theory and suggest that the GRG might enable us to study complex systems beyond the reach of the other methods presently available and pave the way to interesting generalizations, e.g., to nonequilibrium transport in nanostructures.

  17. Obliquity dependence of the tangential YORP

    NASA Astrophysics Data System (ADS)

    Ševeček, P.; Golubov, O.; Scheeres, D. J.; Krugly, Yu. N.

    2016-08-01

    Context. The tangential Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect is a thermophysical effect that can alter the rotation rate of asteroids and is distinct from the so-called normal YORP effect, but to date has only been studied for asteroids with zero obliquity. Aims: We aim to study the tangential YORP force produced by spherical boulders on the surface of an asteroid with an arbitrary obliquity. Methods: A finite element method is used to simulate heat conductivity inside a boulder, to find the recoil force experienced by it. Then an ellipsoidal asteroid uniformly covered by these types of boulders is considered and the torque is numerically integrated over its surface. Results: Tangential YORP is found to operate at non-zero obliquities and to decrease by a factor of two with increasing obliquity.

  18. Map-invariant spectral analysis for the identification of DNA periodicities

    PubMed Central

    2012-01-01

    Many signal processing based methods for finding hidden periodicities in DNA sequences have primarily focused on assigning numerical values to the symbolic DNA sequence and then applying spectral analysis tools such as the short-time discrete Fourier transform (ST-DFT) to locate these repeats. The key results pertaining to this approach are however obtained using a very specific symbolic-to-numerical map, namely the so-called Voss representation. An important research problem is to therefore quantify the sensitivity of these results to the choice of the symbolic-to-numerical map. In this article, a novel algebraic approach to the periodicity detection problem is presented and provides a natural framework for studying the role of the symbolic-to-numerical map in finding these repeats. More specifically, we derive a new matrix-based expression of the DNA spectrum that comprises most of the widely used mappings in the literature as special cases, shows that the DNA spectrum is in fact invariant under all these mappings, and generates a necessary and sufficient condition for the invariance of the DNA spectrum to the symbolic-to-numerical map. Furthermore, the new algebraic framework decomposes the periodicity detection problem into several fundamental building blocks that are totally independent of each other. Sophisticated digital filters and/or alternate fast data transforms such as the discrete cosine and sine transforms can therefore always be incorporated in the periodicity detection scheme regardless of the choice of the symbolic-to-numerical map. Although the newly proposed framework is matrix based, identification of these periodicities can be achieved at a low computational cost. PMID:23067324
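
    The role of the Voss representation can be made concrete with a minimal period-3 example (toy sequence, not data from the article): each base maps to a binary indicator sequence, and the four power spectra are summed.

```python
import numpy as np

def voss_spectrum(seq):
    """DNA power spectrum under the Voss representation: one binary
    indicator sequence per base, with the four power spectra summed."""
    x = np.array(list(seq))
    return sum(np.abs(np.fft.fft((x == b).astype(float))) ** 2 for b in "ACGT")

# Toy sequence with an exact period-3 repeat (illustrative, not real data)
seq = "ATG" * 60
S = voss_spectrum(seq)
N = len(seq)                     # 180; a period-3 peak is expected at k = N/3

print(S[N // 3], S[N // 3 - 1])  # strong peak vs. an empty neighbouring bin
```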

  19. Onset of Turbulence in a Pipe

    NASA Astrophysics Data System (ADS)

    Böberg, L.; Brösa, U.

    1988-09-01

    Turbulence in a pipe is derived directly from the Navier-Stokes equation. Analysis of numerical simulations revealed that small disturbances called 'mothers' induce other much stronger disturbances called 'daughters'. Daughters determine the look of turbulence, while mothers control the transfer of energy from the basic flow to the turbulent motion. From a practical point of view, ruling mothers means ruling turbulence. For theory, the mother-daughter process represents a mechanism permitting chaotic motion in a linearly stable system. The mechanism relies on a property of the linearized problem according to which the eigenfunctions become more and more collinear as the Reynolds number increases. The mathematical methods are described, comparisons with experiments are made, mothers and daughters are analyzed, also graphically, with full particulars, and the systematic construction of small systems of differential equations to mimic the non-linear process by means as simple as possible is explained. We suggest that more than 20 but fewer than 180 essential degrees of freedom take part in the onset of turbulence.
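
    The claim that near-collinear eigenfunctions of a linearly stable operator permit strong disturbance growth rests on transient (non-normal) amplification; a minimal 2x2 sketch of that mechanism (an assumed toy operator, not the pipe-flow equations):

```python
import numpy as np
from scipy.linalg import expm

# Toy non-normal, linearly stable operator (assumed for illustration; not
# the linearized Navier-Stokes operator). Both eigenvalues are negative,
# and the eigenvectors become nearly collinear as R grows.
R = 100.0                      # plays the role of the Reynolds number
A = np.array([[-1.0 / R, 1.0],
              [0.0, -2.0 / R]])

x0 = np.array([0.0, 1.0])      # weak "mother" disturbance
times = np.linspace(0.0, 200.0, 201)
energies = [float(np.linalg.norm(expm(A * t) @ x0) ** 2) for t in times]

# Energy grows transiently (the "daughter") although the system is stable.
print(max(energies))
```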

  20. Data Format Classification for Autonomous Software Defined Radios

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Divsalar, Dariush

    2005-01-01

    We present maximum-likelihood (ML) coherent and noncoherent classifiers for discriminating between NRZ and Manchester coded (biphase-L) data formats for binary phase-shift-keying (BPSK) modulation. Such classification of the data format is an essential element of so-called autonomous software defined radio (SDR) receivers (similar to so-called cognitive SDR receivers in the military application), where it is desired that the receiver perform each of its functions by extracting the appropriate knowledge from the received signal, with as little information about the other signal parameters as possible. Small and large SNR approximations to the ML classifiers are also proposed that lead to simpler implementation with comparable performance in their respective SNR regions. Numerical performance results obtained by a combination of computer simulation and, wherever possible, theoretical analyses, are presented, and comparisons are made among the various configurations based on the probability of misclassification as a performance criterion. Extensions to other modulations such as QPSK are readily accomplished using the same methods described in the paper.

  1. Courant number and unsteady flow computation

    USGS Publications Warehouse

    Lai, Chintu; ,

    1993-01-01

    The Courant number C, the key to unsteady flow computation, is a ratio of physical wave velocity, λ, to computational signal-transmission velocity, τ, i.e., C = λ/τ. In this way, it uniquely relates a physical quantity to a mathematical quantity. Because most unsteady open-channel flows are describable by a set of n characteristic equations along n characteristic paths, each represented by velocity λi, i = 1, 2, ..., n, there exist as many as n components for the numerator of C. To develop a numerical model, a numerical integration must be made on each characteristic curve from an earlier point to a later point on the curve. Different numerical methods are available in unsteady flow computation due to the different paths along which the numerical integration is actually performed. For the denominator of C, the τ defined as τ = τ0 = Δx/Δt has been customarily used; thus, the Courant number has the familiar form of Cτ = λ/τ0. This form will be referred to as "common Courant number" in this paper. The commonly used numerical criteria Cτ for stability, neutral stability and instability are imprecise or not universal in the sense that τ0 does not always reflect the true maximum computational data-transmission speed of the scheme at hand, i.e., Cτ is no indication of the Courant constraint. In view of this, a new Courant number, called the "natural Courant number", Cn, that truly reflects the Courant constraint, has been defined. However, considering the numerous advantages inherent in the traditional Cτ, a useful and meaningful composite Courant number, denoted by Cτ*, has been formulated from Cτ. It is hoped that the new aspects of the Courant number discussed herein afford the hydraulician a broader perspective, consistent criteria, and unified guidelines, with which to model various unsteady flows.
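
    In its common form, the Courant number is simply the physical wave speed times the time step over the grid spacing; a minimal sketch (the numbers below are illustrative, not from the report):

```python
# Common Courant number: C = (wave speed) * dt / dx, i.e. the ratio of the
# physical wave velocity to the grid signal-transmission velocity dx/dt.
def courant(wave_speed, dx, dt):
    return wave_speed * dt / dx

# Illustrative values (assumed): a 2 m/s flood wave, 100 m cells, 40 s steps.
C = courant(2.0, 100.0, 40.0)
print(C)  # 0.8, below the classical explicit stability limit of 1
```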

  2. Use of pellet guns for crowd control in Kashmir: How lethal is "non-lethal"?

    PubMed

    David, Siddarth

    2017-01-01

    The use of pellet guns during the recent unrest in Kashmir as a method of crowd control has been questioned because of several deaths and numerous injuries. Across the world, these rubber pellets have been shown to inflict serious injuries, permanent disability, and death. The volatility of mob violence, inaccuracies in the aim of the pellets, over-use of the pellet guns, and the perception of their harmlessness enhance the destructive potential of these so-called non-lethal weapons. There is also the larger ethical question of whether any form of pain, however minimal, could be inflicted to control violent crowds.

  3. Piezothermal effect in a spinning gas

    DOE PAGES

    Geyko, V. I.; Fisch, N. J.

    2016-10-13

    A spinning gas, heated adiabatically through axial compression, is known to exhibit a rotation-dependent heat capacity. However, as equilibrium is approached, an effect is identified here wherein the temperature does not grow homogeneously in the radial direction, but develops a temperature differential with the hottest region on axis, at the maximum of the centrifugal potential energy. This phenomenon, which we call a piezothermal effect, is shown to grow bilinearly with the compression rate and the amplitude of the potential. As a result, numerical simulations confirm a simple model of this effect, which can be generalized to other forms of potential energy and methods of heating.

  4. Cyclical parthenogenesis algorithm for layout optimization of truss structures with frequency constraints

    NASA Astrophysics Data System (ADS)

    Kaveh, A.; Zolghadr, A.

    2017-08-01

    Structural optimization with frequency constraints is seen as a challenging problem because it is associated with highly nonlinear, discontinuous and non-convex search spaces consisting of several local optima. Therefore, competent optimization algorithms are essential for addressing these problems. In this article, a newly developed metaheuristic method called the cyclical parthenogenesis algorithm (CPA) is used for layout optimization of truss structures subjected to frequency constraints. CPA is a nature-inspired, population-based metaheuristic algorithm, which imitates the reproductive and social behaviour of some animal species such as aphids, which alternate between sexual and asexual reproduction. The efficiency of the CPA is validated using four numerical examples.

  5. Unconventional Hamilton-type variational principle in phase space and symplectic algorithm

    NASA Astrophysics Data System (ADS)

    Luo, En; Huang, Weijiang; Zhang, Hexin

    2003-06-01

    By a novel approach proposed by Luo, the unconventional Hamilton-type variational principle in phase space for the elastodynamics of multi-degree-of-freedom systems is established in this paper. It not only fully characterizes the initial-value problem of the dynamics, but also has a natural symplectic structure. Based on this variational principle, a symplectic algorithm called the symplectic time-subdomain method is proposed. A non-difference scheme is constructed by applying a Lagrange interpolation polynomial to the time subdomain. Furthermore, it is proved that the presented symplectic algorithm is unconditionally stable. The results of two numerical examples of different types show that the accuracy and computational efficiency of the new method clearly exceed those of the widely used Wilson-θ and Newmark-β methods. Therefore, the new algorithm is a highly efficient one with better computational performance.
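
    The symplectic time-subdomain method itself is not reproduced here, but the benefit of symplecticity can be illustrated generically (an assumed harmonic-oscillator test, comparing symplectic Stormer-Verlet with non-symplectic explicit Euler):

```python
# Generic illustration (assumed test problem): harmonic oscillator
# q' = p, p' = -w2*q, integrated with non-symplectic explicit Euler
# versus the symplectic Stormer-Verlet (leapfrog) scheme.

def euler_step(q, p, dt, w2):
    return q + dt * p, p - dt * w2 * q

def leapfrog_step(q, p, dt, w2):
    p_half = p - 0.5 * dt * w2 * q
    q_new = q + dt * p_half
    return q_new, p_half - 0.5 * dt * w2 * q_new

def energy(q, p, w2):
    return 0.5 * p * p + 0.5 * w2 * q * q

w2, dt, steps = 1.0, 0.1, 10_000
qe = ql = 1.0
pe = pl = 0.0
for _ in range(steps):
    qe, pe = euler_step(qe, pe, dt, w2)
    ql, pl = leapfrog_step(ql, pl, dt, w2)

# Euler's energy drifts far from the exact value 0.5; leapfrog stays close.
print(energy(qe, pe, w2), energy(ql, pl, w2))
```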

  6. A Theoretical Framework for Calibration in Computer Models: Parametrization, Estimation and Convergence Properties

    DOE PAGES

    Tuo, Rui; Jeff Wu, C. F.

    2016-07-19

    Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are not available in physical experiments. Here, an approach is proposed to estimate them using data from physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called the L2 calibration, is proposed, proven to be L2-consistent, and shown to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.
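
    The idea of L2 calibration can be sketched on a toy problem (the physical response, the computer model, and the true parameter below are invented for illustration): choose the parameter minimizing the squared discrepancy between simulator and physical response.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy setting (invented): the physical response is sin(x), and the computer
# model is eta(x, theta) = sin(x) + (theta - 1.5) * x, so the L2-optimal
# calibration parameter is exactly theta = 1.5.
x = np.linspace(0.0, 1.0, 200)
y_phys = np.sin(x)

def eta(x, theta):
    return np.sin(x) + (theta - 1.5) * x

def l2_discrepancy(theta):
    # discrete stand-in for the integrated squared model-physics mismatch
    return float(np.mean((y_phys - eta(x, theta)) ** 2))

theta_star = minimize_scalar(l2_discrepancy, bounds=(0.0, 3.0), method="bounded").x
print(theta_star)  # close to 1.5
```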

  7. Using a derivative-free optimization method for multiple solutions of inverse transport problems

    DOE PAGES

    Armstrong, Jerawan C.; Favorite, Jeffrey A.

    2016-01-14

    Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
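
    The MLSL/MADS combination itself is not sketched here; the sketch below shows only the underlying multistart idea for recovering multiple solutions of an inverse problem (a toy forward model and plain Nelder-Mead as the local solver, both assumptions for illustration):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy inverse problem (assumed): a measured signature of 4.0 is produced by
# any parameter x with forward(x) = x**2, so both x = -2 and x = +2 fit.
def forward(x):
    return x[0] ** 2

def misfit(x):
    return (forward(x) - 4.0) ** 2

# Multistart (the core idea behind MLSL): local searches from random
# starting points, keeping every distinct minimizer found.
solutions = []
for x0 in rng.uniform(-5.0, 5.0, size=20):
    res = minimize(misfit, [x0], method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-14})
    if res.fun < 1e-8 and not any(abs(res.x[0] - s) < 1e-3 for s in solutions):
        solutions.append(res.x[0])

print(sorted(solutions))  # two distinct solutions, near -2 and +2
```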

  8. Fast computation of radiation pressure force exerted by multiple laser beams on red blood cell-like particles

    NASA Astrophysics Data System (ADS)

    Gou, Ming-Jiang; Yang, Ming-Lin; Sheng, Xin-Qing

    2016-10-01

    Mature red blood cells (RBCs) do not contain huge complex nuclei or organelles, which allows them to be approximately regarded as homogeneous medium particles. To compute the radiation pressure force (RPF) exerted by multiple laser beams on this kind of arbitrarily shaped homogeneous nano-particle, a fast electromagnetic optics method is demonstrated. In general, based on Maxwell's equations, the matrix equation formed by the method of moments (MOM) has many right-hand sides (RHS's) corresponding to the different laser beams. To accelerate solving the matrix equation, the algorithm performs a low-rank decomposition of the excitation matrix consisting of all RHS's to identify the so-called skeleton laser beams by interpolative decomposition (ID). After the solutions corresponding to the skeletons are obtained, the desired responses can be reconstructed efficiently. Numerical results are presented to validate the developed method.
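
    The skeleton idea can be sketched with a plain SVD standing in for the interpolative decomposition (the matrices below are toy stand-ins; the actual MOM matrix and ID routine are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 200, 50, 4     # unknowns, number of laser beams, true rank

# Stand-ins for the MOM system matrix and the excitation matrix whose
# columns (the RHS's) are strongly correlated across beams (assumed data).
A = rng.standard_normal((n, n)) + n * np.eye(n)          # well conditioned
B = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))

# Low-rank factorization of the excitations; a plain SVD stands in here
# for the interpolative decomposition (ID) used in the paper.
U, s, Vt = np.linalg.svd(B, full_matrices=False)
k = int((s > 1e-10 * s[0]).sum())        # numerical rank = skeleton count

X_skel = np.linalg.solve(A, U[:, :k] * s[:k])   # solve only k systems
X = X_skel @ Vt[:k]                             # reconstruct all m responses

print(k, float(np.max(np.abs(A @ X - B))))      # few skeletons, tiny residual
```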

  9. Restoration of out-of-focus images based on circle of confusion estimate

    NASA Astrophysics Data System (ADS)

    Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto

    2002-11-01

    In this paper a new method for fast out-of-focus blur estimation and restoration is proposed. It is suitable for CFA (Color Filter Array) images acquired by typical CCD/CMOS sensors. The method is based on the analysis of a single image and consists of two steps: 1) out-of-focus blur estimation via Bayer pattern analysis; 2) image restoration. Blur estimation is based on a block-wise edge detection technique. This edge detection is carried out on the green pixels of the CFA sensor image, also called the Bayer pattern. Once the blur level has been estimated, the image is restored through the application of a new inverse filtering technique. This algorithm yields sharp images while reducing ringing and crisping artifacts over a wider frequency range. Experimental results show the effectiveness of the method, both subjectively and numerically, by comparison with other techniques found in the literature.

  10. A fast and objective multidimensional kernel density estimation method: fastKDE

    DOE PAGES

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.; ...

    2016-03-07

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
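
    fastKDE itself is published separately by the authors; the sketch below only illustrates the baseline KDE task it accelerates, using scipy's rule-of-thumb Gaussian KDE on toy data (all values assumed for illustration):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
samples = rng.standard_normal(2000)   # toy data: standard normal draws

kde = gaussian_kde(samples)           # rule-of-thumb (Scott) bandwidth
grid = np.linspace(-4.0, 4.0, 81)
pdf = kde(grid)

print(pdf[40])  # density near x = 0; roughly 1/sqrt(2*pi) ~ 0.399
```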

  11. Answering the missed call: Initial exploration of cognitive and electrophysiological changes associated with smartphone use and abuse

    PubMed Central

    Hadas, Itay; Lazarovits, Avi; Alyagon, Uri; Eliraz, Daniel; Zangen, Abraham

    2017-01-01

    Background: Smartphone usage is now integral to human behavior. Recent studies associate extensive usage with a range of debilitating effects. We sought to determine whether excessive usage is accompanied by measurable neural, cognitive and behavioral changes. Method: Subjects lacking previous experience with smartphones (n = 35) were compared to a matched group of heavy smartphone users (n = 16) on numerous behavioral and electrophysiological measures recorded using electroencephalogram (EEG) combined with transcranial magnetic stimulation (TMS) over the right prefrontal cortex (rPFC). In a second, longitudinal intervention, a randomly selected sample of the original non-users received smartphones for 3 months while the others served as controls. All measurements were repeated following this intervention. Results: Heavy users showed increased impulsivity, hyperactivity and negative social concern. We also found reduced early TMS-evoked potentials in the rPFC of this group, which correlated with the severity of self-reported inattention problems. Heavy users also obtained lower accuracy rates than non-users in a numerical processing task. Critically, the second part of the experiment revealed that both the numerical processing and social cognition domains are causally linked to smartphone usage. Conclusion: Heavy usage was found to be associated with impaired attention, reduced numerical processing capacity, changes in social cognition, and reduced right prefrontal cortex (rPFC) excitability. Memory impairments were not detected. Novel usage over a short period induced a significant reduction in numerical processing capacity and changes in social cognition. PMID:28678870

  12. Numerical emulation of Thru-Reflection-Line calibration for the de-embedding of Surface Acoustic Wave devices.

    PubMed

    Mencarelli, D; Djafari-Rouhani, B; Pennec, Y; Pitanti, A; Zanotto, S; Stocchi, M; Pierantoni, L

    2018-06-18

    In this contribution, a rigorous numerical calibration is proposed to characterize the excitation of propagating mechanical waves by interdigitated transducers (IDTs). The transition from IDT terminals to phonon waveguides is modeled by means of a general circuit representation that makes use of Scattering Matrix (SM) formalism. In particular, the three-step calibration approach called Thru-Reflection-Line (TRL), which is a well-established technique in microwave engineering, has been successfully applied to emulate typical experimental conditions. The proposed procedure is suitable for the synthesis/optimization of surface-acoustic-wave (SAW) based devices: the TRL calibration allows the acoustic component, namely a resonator or filter, to be extracted/de-embedded from the outer IDT structure, regardless of the complexity and size of the latter. We report, as a result, the hybrid scattering parameters of the IDT transition to a mechanical waveguide formed by a phononic crystal patterned on a piezoelectric AlN membrane, where the effect of a discontinuity from a periodic to a uniform mechanical waveguide is also characterized. In addition, to ensure the correctness of our numerical calculations, the proposed method has been validated by independent calculations.

  13. Multiple-source multiple-harmonic active vibration control of variable section cylindrical structures: A numerical study

    NASA Astrophysics Data System (ADS)

    Liu, Jinxin; Chen, Xuefeng; Gao, Jiawei; Zhang, Xingwu

    2016-12-01

    Air vehicles, space vehicles and underwater vehicles, the cabins of which can be viewed as variable section cylindrical structures, have multiple rotational vibration sources (e.g., engines, propellers, compressors and motors), making the spectrum of noise multiple-harmonic. The suppression of such noise has been a focus of interest in the field of active vibration control (AVC). In this paper, a multiple-source multiple-harmonic (MSMH) active vibration suppression algorithm with feed-forward structure is proposed based on reference amplitude rectification and the conjugate gradient method (CGM). An AVC simulation scheme called finite element model in-loop simulation (FEMILS) is also proposed for rapid algorithm verification. Numerical studies of AVC are conducted on a variable section cylindrical structure based on the proposed MSMH algorithm and FEMILS scheme. It can be seen from the numerical studies that: (1) the proposed MSMH algorithm can individually suppress each component of the multiple-harmonic noise with a unified and improved convergence rate; (2) the FEMILS scheme is convenient and straightforward for multiple-source simulations with an acceptable loop time. Moreover, the simulations follow a procedure similar to real-life control and can be easily extended to a physical model platform.

  14. A numerical test of the topographic bias

    NASA Astrophysics Data System (ADS)

    Sjöberg, L. E.; Joud, M. S. S.

    2018-02-01

    In 1962 A. Bjerhammar introduced the method of analytical continuation in physical geodesy, implying that surface gravity anomalies are downward continued into the topographic masses down to an internal sphere (the Bjerhammar sphere). The method also includes analytical upward continuation of the potential to the surface of the Earth to obtain the quasigeoid. One can show that the common remove-compute-restore technique for geoid determination also includes an analytical continuation as long as the complete density distribution of the topography is not known. The analytical continuation implies that the downward continued gravity anomaly and/or potential is in error by the so-called topographic bias, for which L. E. Sjöberg postulated a simple formula in 2007. Here we numerically test the postulated formula by comparing it with the bias obtained by analytical downward continuation of the external potential of a homogeneous ellipsoid to an inner sphere. The result shows that the postulated formula holds: at the equator of the ellipsoid, where the external potential is downward continued 21 km, the computed and postulated topographic biases agree to within less than a millimetre (when the potential is scaled to the unit of metre).

  15. Efficient computation of the joint sample frequency spectra for multiple populations.

    PubMed

    Kamm, John A; Terhorst, Jonathan; Song, Yun S

    2017-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
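As a toy illustration of the summary statistic itself (not of momi's algorithms), the observed SFS of a single population simply counts, at each polymorphic site, how many of the n sampled haplotypes carry the derived allele:

```python
import numpy as np

def sample_frequency_spectrum(genotypes):
    """Observed SFS from a 0/1 genotype matrix (sites x haplotypes).

    Entry k-1 of the result counts polymorphic sites where exactly k of
    the n sampled haplotypes carry the derived allele (k = 1 .. n-1).
    """
    g = np.asarray(genotypes)
    n = g.shape[1]
    derived_counts = g.sum(axis=1)
    sfs = np.zeros(n - 1, dtype=int)
    for k in range(1, n):
        sfs[k - 1] = np.count_nonzero(derived_counts == k)
    return sfs

# 4 haplotypes, 5 sites; each row marks which haplotypes carry the derived allele.
geno = np.array([
    [1, 0, 0, 0],   # singleton
    [1, 1, 0, 0],   # doubleton
    [0, 1, 0, 0],   # singleton
    [1, 1, 1, 0],   # tripleton
    [0, 0, 0, 0],   # monomorphic, not counted
])
print(sample_frequency_spectrum(geno))  # [2 1 1]
```

The hard problem the paper addresses is the converse: computing the *expected* joint SFS under a multi-population demographic model, which this sketch does not attempt.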

  16. Zeldovich pancakes in observational data are cold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brinckmann, Thejs; Lindholmer, Mikkel; Hansen, Steen

    The present-day universe consists of galaxies, galaxy clusters, one-dimensional filaments and two-dimensional sheets or pancakes, all of which combine to form the cosmic web. The so-called "Zeldovich pancakes" are very difficult to observe, because their overdensity is only slightly greater than the average density of the universe. Falco et al. [1] presented a method to identify Zeldovich pancakes in observational data, and these were used as a tool for estimating the mass of galaxy clusters. Here we expand and refine that observational detection method. We study two pancakes on scales of 10 Mpc, identified from spectroscopically observed galaxies near the Coma cluster, and compare with twenty numerical pancakes. We find that the observed structures have velocity dispersions of about 100 km/s, which is relatively low compared to typical groups and filaments. These velocity dispersions are consistent with those found for the numerical pancakes. We also confirm that the identified structures are in fact two-dimensional. Finally, we estimate the stellar-to-total mass ratio of the observational pancakes to be 2·10^−4, within one order of magnitude, which is smaller than that of clusters of galaxies.

  17. Increasing sensitivity in the measurement of heart rate variability: the method of non-stationary RR time-frequency analysis.

    PubMed

    Melkonian, D; Korner, A; Meares, R; Bahramali, H

    2012-10-01

    A novel method for the time-frequency analysis of non-stationary heart rate variability (HRV) is developed which introduces the fragmentary spectrum as a measure that brings together the frequency content, timing and duration of HRV segments. The fragmentary spectrum is calculated by the similar basis function algorithm. This numerical tool for both time-to-frequency and frequency-to-time Fourier transformations accepts both uniform and non-uniform sampling intervals, and is applicable to signal segments of arbitrary length. Once the fragmentary spectrum is calculated, the inverse transform recovers the original signal and reveals the accuracy of the spectral estimates. Numerical experiments show that discontinuities at the boundaries of the succession of inter-beat intervals can cause unacceptable distortions of the spectral estimates. We have developed a measure that we call the "RR deltagram" as a form of the HRV data that minimises spectral errors. The analysis of the experimental HRV data from real-life and controlled breathing conditions suggests transient oscillatory components as functionally meaningful elements of highly complex and irregular patterns of HRV. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
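A key practical point of the abstract is that RR data are non-uniformly sampled (one sample per heartbeat). The sketch below shows the naive O(N·M) route to a spectrum from irregular sample times by direct evaluation of the Fourier sum; it is not the similar-basis-function algorithm, and the 0.1 Hz test signal is synthetic.

```python
import numpy as np

def nonuniform_spectrum(times, values, freqs):
    """Direct Fourier-sum magnitude spectrum on non-uniform sample times.

    Naive O(N*M) evaluation (N samples, M frequencies); it only
    illustrates that spectra can be computed from irregularly spaced
    RR data without resampling to a uniform grid.
    """
    t = np.asarray(times, dtype=float)
    x = np.asarray(values, dtype=float) - np.mean(values)  # remove DC
    return np.array([np.abs(np.sum(x * np.exp(-2j * np.pi * f * t)))
                     for f in freqs])

# Synthetic RR-like series: a 0.1 Hz oscillation sampled at irregular beat times.
rng = np.random.default_rng(1)
t = np.cumsum(0.8 + 0.1 * rng.random(300))      # irregular beat times (s)
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * t)   # RR intervals with 0.1 Hz rhythm
freqs = np.linspace(0.02, 0.4, 96)
spec = nonuniform_spectrum(t, rr, freqs)
print(freqs[np.argmax(spec)])  # peak should fall near 0.1 Hz
```

The boundary-discontinuity distortions discussed in the abstract are exactly what such a direct transform suffers from on short segments, motivating the RR deltagram preprocessing.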

  18. Efficient computation of the joint sample frequency spectra for multiple populations

    PubMed Central

    Kamm, John A.; Terhorst, Jonathan; Song, Yun S.

    2016-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity. PMID:28239248

  19. Divertor target shape optimization in realistic edge plasma geometry

    NASA Astrophysics Data System (ADS)

    Dekeyser, W.; Reiter, D.; Baelmans, M.

    2014-07-01

    Tokamak divertor design for next-step fusion reactors heavily relies on numerical simulations of the plasma edge. Currently, the design process is mainly done in a forward approach, where the designer is strongly guided by his experience and physical intuition in proposing divertor shapes, which are then thoroughly assessed by numerical computations. On the other hand, automated design methods based on optimization have proven very successful in the related field of aerodynamic design. By recasting design objectives and constraints into the framework of a mathematical optimization problem, efficient forward-adjoint based algorithms can be used to automatically compute the divertor shape which performs the best with respect to the selected edge plasma model and design criteria. In past years, we have extended these methods to automated divertor target shape design, using somewhat simplified edge plasma models and geometries. In this paper, we build on and extend previous work to apply these shape optimization methods for the first time in a more realistic, single null edge plasma and divertor geometry, as commonly used in current divertor design studies. In a case study with JET-like parameters, we show that the so-called one-shot method is very effective in solving divertor target design problems. Furthermore, detailed shape sensitivity analysis demonstrates that the method, even at its present state of development, provides physically plausible trends, allowing us to achieve a divertor design with an almost perfectly uniform power load for our particular choice of edge plasma model and design criteria.

  20. Enabling the extended compact genetic algorithm for real-parameter optimization by using adaptive discretization.

    PubMed

    Chen, Ying-ping; Chen, Chao-Hong

    2010-01-01

    An adaptive discretization method, called split-on-demand (SoD), enables estimation of distribution algorithms (EDAs) for discrete variables to solve continuous optimization problems. SoD randomly splits a continuous interval if the number of search points within the interval exceeds a threshold, which is decreased at every iteration. After the split operation, the nonempty intervals are assigned integer codes, and the search points are discretized accordingly. As an example of using SoD with EDAs, the integration of SoD and the extended compact genetic algorithm (ECGA) is presented and numerically examined. In this integration, we adopt a local search mechanism as an optional component of our back end optimization engine. As a result, the proposed framework can be considered a memetic algorithm, and SoD can potentially be applied to other memetic algorithms. The numerical experiments consist of two parts: (1) a set of benchmark functions, on which ECGA with SoD is compared against ECGA with two well-known discretization methods, the fixed-height histogram (FHH) and the fixed-width histogram (FWH); (2) a real-world application, the economic dispatch problem, on which ECGA with SoD is compared to other methods. The experimental results indicate that SoD is a better discretization method to work with ECGA. Moreover, ECGA with SoD works quite well on the economic dispatch problem and delivers solutions better than the best known results obtained by existing methods.
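The core SoD idea described above (split an interval at a random point whenever it holds more than a threshold number of search points, then code the nonempty intervals) can be sketched as follows. This is a simplified illustration, not the published algorithm: the per-iteration threshold decay and the EDA integration are omitted.

```python
import random

def split_on_demand(points, low, high, threshold, rng=random.Random(0)):
    """Simplified sketch of split-on-demand (SoD) discretization.

    An interval [low, high) is split at a random interior point whenever
    it holds more than `threshold` search points; empty intervals are
    discarded. Returns the list of nonempty intervals.
    """
    inside = [p for p in points if low <= p < high]
    if len(inside) <= threshold:
        return [(low, high)] if inside else []   # drop empty intervals
    cut = rng.uniform(low, high)                 # random split point
    return (split_on_demand(inside, low, cut, threshold, rng)
            + split_on_demand(inside, cut, high, threshold, rng))

def discretize(points, intervals):
    """Assign each point the integer code of the interval containing it."""
    codes = []
    for p in points:
        for code, (lo, hi) in enumerate(intervals):
            if lo <= p < hi:
                codes.append(code)
                break
    return codes

pts = [0.05, 0.1, 0.12, 0.5, 0.52, 0.9]
ivals = split_on_demand(pts, 0.0, 1.0, threshold=2)
print(len(ivals), discretize(pts, ivals))
```

Each resulting interval then behaves as one discrete value for the EDA's model-building step.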

  1. Adjoint-Based Sensitivity Kernels for Glacial Isostatic Adjustment in a Laterally Varying Earth

    NASA Astrophysics Data System (ADS)

    Crawford, O.; Al-Attar, D.; Tromp, J.; Mitrovica, J. X.; Austermann, J.; Lau, H. C. P.

    2017-12-01

    We consider a new approach to both the forward and inverse problems in glacial isostatic adjustment. We present a method for forward modelling GIA in compressible and laterally heterogeneous earth models with a variety of linear and non-linear rheologies. Instead of using the so-called sea level equation, which must be solved iteratively, the forward theory we present consists of a number of coupled evolution equations that can be straightforwardly numerically integrated. We also apply the adjoint method to the inverse problem in order to calculate the derivatives of measurements of GIA with respect to the viscosity structure of the Earth. Such derivatives quantify the sensitivity of the measurements to the model. The adjoint method enables efficient calculation of continuous and laterally varying derivatives, allowing us to calculate the sensitivity of measurements of glacial isostatic adjustment to the Earth's three-dimensional viscosity structure. The derivatives have a number of applications within the inverse method. Firstly, they can be used within a gradient-based optimisation method to find a model which minimises some data misfit function. The derivatives can also be used to quantify the uncertainty in such a model and hence to provide understanding of which parts of the model are well constrained. Finally, they enable construction of measurements which provide sensitivity to a particular part of the model space. We illustrate both the forward and inverse aspects with numerical examples in a spherically symmetric earth model.
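The efficiency argument for the adjoint method can be shown on a generic toy problem (this is not the GIA evolution equations): for a forward problem A(m)u = f and misfit J, one forward solve plus one adjoint solve yields the derivative of J with respect to every model parameter at once, instead of one perturbed solve per parameter.

```python
import numpy as np

# Toy forward problem: A(m) u = f with A = diag(m), misfit J = 0.5*||u - d||^2.
m = np.array([2.0, 4.0, 5.0])   # model parameters (illustrative values)
f = np.array([1.0, 1.0, 1.0])   # forcing
d = np.array([0.4, 0.3, 0.2])   # "data"

u = f / m                        # forward solve (A is diagonal)
lam = -(u - d) / m               # adjoint solve: A^T lam = -(u - d)
grad = lam * u                   # dJ/dm_i = lam_i * (dA/dm_i u)_i = lam_i * u_i

# Verify one component against a brute-force finite difference.
J = lambda mv: 0.5 * np.sum((f / mv - d) ** 2)
eps = 1e-6
m_pert = m.copy()
m_pert[0] += eps
fd = (J(m_pert) - J(m)) / eps
print(grad[0], fd)               # the two values should agree closely
```

The same structure (one adjoint solve giving a full, continuous sensitivity kernel) is what makes derivatives with respect to laterally varying 3-D viscosity tractable.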

  2. Evaluating the evaluation of cancer driver genes

    PubMed Central

    Tokheim, Collin J.; Papadopoulos, Nickolas; Kinzler, Kenneth W.; Vogelstein, Bert; Karchin, Rachel

    2016-01-01

    Sequencing has identified millions of somatic mutations in human cancers, but distinguishing cancer driver genes remains a major challenge. Numerous methods have been developed to identify driver genes, but evaluation of the performance of these methods is hindered by the lack of a gold standard, that is, bona fide driver gene mutations. Here, we establish an evaluation framework that can be applied to driver gene prediction methods. We used this framework to compare the performance of eight such methods. One of these methods, described here, incorporated a machine-learning–based ratiometric approach. We show that the driver genes predicted by each of the eight methods vary widely. Moreover, the P values reported by several of the methods were inconsistent with the uniform values expected, thus calling into question the assumptions that were used to generate them. Finally, we evaluated the potential effects of unexplained variability in mutation rates on false-positive driver gene predictions. Our analysis points to the strengths and weaknesses of each of the currently available methods and offers guidance for improving them in the future. PMID:27911828

  3. An approach toward the numerical evaluation of multi-loop Feynman diagrams

    NASA Astrophysics Data System (ADS)

    Passarino, Giampiero

    2001-12-01

    A scheme for systematically achieving accurate numerical evaluation of multi-loop Feynman diagrams is developed. This shows the feasibility of a project aimed at producing a complete calculation for two-loop predictions in the Standard Model. As a first step, an algorithm proposed by F.V. Tkachov and based on the so-called generalized Bernstein functional relation is applied to one-loop multi-leg diagrams, with particular emphasis on the presence of infrared singularities, on the problem of tensorial reduction and on the classification of all singularities of a given diagram. Subsequently, the extension of the algorithm to two-loop diagrams is examined. The proposed solution consists of applying the functional relation to the one-loop sub-diagram which has the largest number of internal lines. In this way the integrand can be made smooth, apart from a factor which is a polynomial in x_S, the vector of Feynman parameters needed for the complementary sub-diagram with the smallest number of internal lines. Since the procedure does not introduce new singularities, one can distort the x_S-integration hyper-contour into the complex hyper-plane, thus achieving numerical stability. The algorithm is then modified to deal with numerical evaluation around normal thresholds. Concise and practical formulas are assembled and presented; numerical results and comparisons with the available literature are shown and discussed for the so-called sunset topology.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.

  5. Numerical investigation of supercritical LNG convective heat transfer in a horizontal serpentine tube

    NASA Astrophysics Data System (ADS)

    Han, Chang-Liang; Ren, Jing-Jie; Dong, Wen-Ping; Bi, Ming-Shu

    2016-09-01

    The submerged combustion vaporizer (SCV) is indispensable equipment for liquefied natural gas (LNG) receiving terminals. In this paper, numerical simulation was conducted to gain insight into the flow and heat transfer characteristics of supercritical LNG on the tube-side of the SCV. The SST model with an enhanced wall treatment was utilized to handle the coupled wall-to-LNG heat transfer. The thermal-physical properties of LNG under supercritical pressure were used for this study. After validation of the model and method, the effects of mass flux, outer wall temperature and inlet pressure on the heat transfer behaviors were discussed in detail. Then the non-uniform heat transfer mechanism of supercritical LNG and the effect of natural convection due to buoyancy change in the tube were discussed based on the numerical results. Moreover, different flow and heat transfer characteristics inside the bend tube sections were also analyzed. The obtained numerical results showed that the local surface heat transfer coefficient attained its peak value when the bulk LNG temperature approached the so-called pseudo-critical temperature. Higher mass flux could eliminate the heat transfer deterioration due to the increase of turbulent diffusion. Increasing the outer wall temperature significantly diminished the heat transfer ability of the LNG. The maximum surface heat transfer coefficient strongly depended on inlet pressure. Bend tube sections could enhance the heat transfer due to the secondary flow phenomenon. Furthermore, based on the current simulation results, a new dimensionless, semi-theoretical empirical correlation was developed for supercritical LNG convective heat transfer in a horizontal serpentine tube. The paper provides insight into the heat transfer mechanism for the design of high-efficiency SCVs.

  6. Quasi-static finite element modeling of seismic attenuation and dispersion due to wave-induced fluid flow in poroelastic media

    NASA Astrophysics Data System (ADS)

    Quintal, Beatriz; Steeb, Holger; Frehner, Marcel; Schmalholz, Stefan M.

    2011-01-01

    The finite element method is used to solve Biot's equations of consolidation in the displacement-pressure (u - p) formulation. We compute one-dimensional (1-D) and two-dimensional (2-D) numerical quasi-static creep tests with poroelastic media exhibiting mesoscopic-scale heterogeneities to calculate the complex and frequency-dependent P wave moduli from the modeled stress-strain relations. The P wave modulus is used to calculate the frequency-dependent attenuation (i.e., inverse of quality factor) and phase velocity of the medium. Attenuation and velocity dispersion are due to fluid flow induced by pressure differences between regions of different compressibilities, e.g., regions (or patches) saturated with different fluids (i.e., so-called patchy saturation). Comparison of our numerical results with analytical solutions demonstrates the accuracy and stability of the algorithm for a wide range of frequencies (six orders of magnitude). The algorithm employs variable time stepping and an unstructured mesh which make it efficient and accurate for 2-D simulations in media with heterogeneities of arbitrary geometries (e.g., curved shapes). We further numerically calculate the quality factor and phase velocity for 1-D layered patchy saturated porous media exhibiting random distributions of patch sizes. We show that the numerical results for the random distributions can be approximated using a volume average of White's analytical solution and the proposed averaging method is, therefore, suitable for a fast and transparent prediction of both quality factor and phase velocity. Application of our results to frequency-dependent reflection coefficients of hydrocarbon reservoirs indicates that attenuation due to wave-induced flow can increase the reflection coefficient at low frequencies, as is observed at some reservoirs.
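The last computational step described above, going from a complex, frequency-dependent P wave modulus to attenuation and phase velocity, uses standard relations that can be sketched in a few lines. The modulus and density values below are illustrative, not taken from the paper.

```python
import numpy as np

# Given a complex P-wave modulus M(f) (Pa) obtained from a modeled
# stress-strain relation at one frequency, the standard relations are:
#   1/Q      = Im(M) / Re(M)            (attenuation)
#   v_phase  = 1 / Re(1 / sqrt(M/rho))  (phase velocity)
rho = 2200.0                 # bulk density, kg/m^3 (assumed value)
M = 9.0e9 + 0.3e9j           # complex P-wave modulus (assumed value)

inv_Q = M.imag / M.real                   # inverse quality factor
v_complex = np.sqrt(M / rho)              # complex velocity
v_phase = 1.0 / (1.0 / v_complex).real    # phase velocity, m/s

print(inv_Q, v_phase)
```

Sweeping M over the six decades of frequency mentioned in the abstract and repeating this computation yields the attenuation and dispersion curves that are compared against the analytical solutions.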

  7. Flexible Environmental Modeling with Python and Open - GIS

    NASA Astrophysics Data System (ADS)

    Pryet, Alexandre; Atteia, Olivier; Delottier, Hugo; Cousquer, Yohann

    2015-04-01

    Numerical modeling now represents a prominent task of environmental studies. During the last decades, numerous commercial programs have been made available to environmental modelers. These software applications offer user-friendly graphical user interfaces that allow efficient management of many case studies. However, they suffer from a lack of flexibility, and closed-source policies impede source code review and enhancement for original studies. Advanced modeling studies require flexible tools capable of managing thousands of model runs for parameter optimization, uncertainty and sensitivity analysis. In addition, there is a growing need for the coupling of various numerical models associating, for instance, groundwater flow modeling with multi-species geochemical reactions. Researchers have produced hundreds of powerful open-source command line programs. However, there is a need for a flexible graphical user interface allowing efficient processing of the geospatial data that comes along with any environmental study. Here, we present the advantages of using the free and open-source QGIS platform and the Python scripting language for conducting environmental modeling studies. The interactive graphical user interface is first used for the visualization and pre-processing of input geospatial datasets. The Python scripting language is then employed for further input data processing, calls to one or several models, and post-processing of model outputs. Model results are eventually sent back to the GIS program, processed and visualized. This approach combines the advantages of interactive graphical interfaces and the flexibility of the Python scripting language for data processing and model calls. The numerous Python modules available facilitate geospatial data processing and numerical analysis of model outputs. Once input data has been prepared with the graphical user interface, models may be run thousands of times from the command line with sequential or parallel calls. We illustrate this approach with several case studies in groundwater hydrology and geochemistry and provide links to several Python libraries that facilitate pre- and post-processing operations.
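The scripted many-run workflow described above can be sketched as a simple parameter sweep. All names and values here are hypothetical: `run_model` is an analytic stand-in for what would, in practice, write input files, invoke an external simulator (e.g. via `subprocess.run`), and parse its outputs.

```python
import itertools

def run_model(params):
    """Placeholder for one external model run.

    A real workflow would call a command-line simulator here; this cheap
    analytic stand-in (head = recharge / k) keeps the sketch runnable.
    """
    head = params["recharge"] / params["k"]   # toy "model output"
    return {**params, "head": head}

# Build the run matrix once, then execute every combination from one script.
# (concurrent.futures could parallelize the same loop.)
k_values = [1e-5, 1e-4, 1e-3]     # hypothetical hydraulic conductivities
recharge_values = [0.1, 0.2]      # hypothetical recharge rates
runs = [{"k": k, "recharge": r}
        for k, r in itertools.product(k_values, recharge_values)]
results = [run_model(p) for p in runs]
print(len(results))  # 6 model runs
```

The collected `results` list is then what gets handed back to the GIS layer for post-processing and visualization.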

  8. Benchmarking and testing the "Sea Level Equation"

    NASA Astrophysics Data System (ADS)

    Spada, G.; Barletta, V. R.; Klemann, V.; van der Wal, W.; James, T. S.; Simon, K.; Riva, R. E. M.; Martinec, Z.; Gasperini, P.; Lund, B.; Wolf, D.; Vermeersen, L. L. A.; King, M. A.

    2012-04-01

    The study of the process of Glacial Isostatic Adjustment (GIA) and of the consequent sea level variations is gaining an increasingly important role within the geophysical community. Understanding the response of the Earth to the waxing and waning ice sheets is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements to the projections of future sea level trends in response to climate change. All the processes accompanying GIA can be described by solving the so-called Sea Level Equation (SLE), an integral equation that accounts for the interactions between the ice sheets, the solid Earth, and the oceans. Modern approaches to the SLE are based on various techniques that range from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, we do not have a suitably large set of agreed numerical results through which the methods may be validated. Following the example of the mantle convection community and our recent successful Benchmark for Post Glacial Rebound codes (Spada et al., 2011, doi: 10.1111/j.1365-246X.2011.04952.x), here we present the results of a benchmark study of independently developed codes designed to solve the SLE. This study has taken place within a collaboration facilitated through the European Cooperation in Science and Technology (COST) Action ES0701. The tests involve predictions of past and current sea level variations, and 3D deformations of the Earth's surface. In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found, which can often be attributed to the different numerical algorithms employed within the community, help to constrain the intrinsic errors in model predictions. These are of fundamental importance for a correct interpretation of the geodetic variations observed today, and particularly for the evaluation of climate-driven sea level variations.

  9. Non-idealities in the 3ω method for thermal characterization in the low- and high-frequency regimes

    NASA Astrophysics Data System (ADS)

    Jaber, Wassim; Chapuis, Pierre-Olivier

    2018-04-01

    This work is devoted to analytical and numerical studies of diffusive heat conduction in configurations considered in 3ω experiments, which aim at measuring the thermal conductivity of materials. The widespread 2D analytical model assumes infinite media and translational invariance, a situation which cannot be met in practice in numerous cases due to the constraints of low-dimensional materials and systems. We investigate how the thermal boundary resistance between the heating wire and the sample, the native oxide, and the shape of the heating wire affect the temperature fields. 3D finite element modelling is also performed to account for the effect of the bonding pads and the 3D heat spreading down to a typical package. Emphasis is given to the low-frequency regime, which is less well known than the so-called slope regime. These results will serve as guides for the design of ideal experiments where the 2D model can be applied and for the analyses of non-ideal ones.

  10. Scattering of sound by atmospheric turbulence predictions in a refractive shadow zone

    NASA Technical Reports Server (NTRS)

    Mcbride, Walton E.; Bass, Henry E.; Raspet, Richard; Gilbert, Kenneth E.

    1990-01-01

    According to ray theory, regions exist in an upward refracting atmosphere where no sound should be present. Experiments show, however, that appreciable sound levels penetrate these so-called shadow zones. Two mechanisms contribute to sound in the shadow zone: diffraction and turbulent scattering of sound. Diffractive effects can be pronounced at lower frequencies but are small at high frequencies. In the short wavelength limit, then, scattering due to turbulence should be the predominant mechanism involved in producing the sound levels measured in shadow zones. No existing analytical method includes turbulence effects in the prediction of sound pressure levels in upward refractive shadow zones. In order to obtain quantitative average sound pressure level predictions, a numerical simulation of the effect of atmospheric turbulence on sound propagation is performed. The simulation is based on scattering from randomly distributed scattering centers ('turbules'). Sound pressure levels are computed for many realizations of a turbulent atmosphere. Predictions from the numerical simulation are compared with existing theories and experimental data.

  11. Simulating wave-turbulence on thin elastic plates with arbitrary boundary conditions

    NASA Astrophysics Data System (ADS)

    van Rees, Wim M.; Mahadevan, L.

    2016-11-01

    The statistical characteristics of interacting waves are described by the theory of wave turbulence, with the study of deep water gravity wave turbulence serving as a paradigmatic physical example. Here we consider the elastic analog of this problem in the context of flexural waves arising from vibrations of a thin elastic plate. Such flexural waves generate the unique sounds of so-called thunder machines used in orchestras - thin metal plates that make a thunder-like sound when forcefully shaken. Wave turbulence in elastic plates is typically investigated numerically using spectral simulations with periodic boundary conditions, which are not very realistic. We will present the results of numerical simulations of the dynamics of thin elastic plates in physical space, with arbitrary shapes, boundary conditions, anisotropy and inhomogeneity, and show first results on wave turbulence beyond the conventionally studied rectangular plates. Finally, motivated by a possible method to measure ice-sheet thicknesses in the open ocean, we will further discuss the behavior of a vibrating plate when floating on an inviscid fluid.

  12. Effect of strong disorder on three-dimensional chiral topological insulators: Phase diagrams, maps of the bulk invariant, and existence of topological extended bulk states

    NASA Astrophysics Data System (ADS)

    Song, Juntao; Fine, Carolyn; Prodan, Emil

    2014-11-01

    The effect of strong disorder on chiral-symmetric three-dimensional lattice models is investigated via analytical and numerical methods. The phase diagrams of the models are computed using the noncommutative winding number, as functions of disorder strength and model's parameters. The localized/delocalized characteristic of the quantum states is probed with level statistics analysis. Our study reconfirms the accurate quantization of the noncommutative winding number in the presence of strong disorder, and its effectiveness as a numerical tool. Extended bulk states are detected above and below the Fermi level, which are observed to undergo the so-called "levitation and pair annihilation" process when the system is driven through a topological transition. This suggests that the bulk invariant is carried by these extended states, in stark contrast with the one-dimensional case where the extended states are completely absent and the bulk invariant is carried by the localized states.

  13. Temperature control of the ultra-short laser pulse compression in a one-dimensional photonic band gap structure with nematic liquid crystal as a defect layer

    NASA Astrophysics Data System (ADS)

    Shiri, Ramin; Safari, Ebrahim; Bananej, Alireza

    2018-04-01

    We investigate numerically the controllable chirped pulse compression in a one-dimensional photonic structure containing a nematic liquid crystal defect layer, using the temperature-dependent refractive index of the liquid crystal. We consider the structure under irradiation by near-infrared ultra-short laser pulses polarized parallel to the liquid crystal director at normal incidence. It is found that the dispersion behaviour, and consequently the compression ability of the system, can be changed in a controlled manner by varying the defect temperature. When the temperature is increased from 290 to 305 K, the transmitted pulse duration in the middle of the structure decreases correspondingly from 75 to 42 fs. As a result, a novel low-loss tunable pulse compressor with a very compact size and high compression factor is achieved. The so-called transfer matrix method is utilized for numerical simulations of the band structure and reflection/transmission spectra of the structure under investigation.
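    The transfer matrix method mentioned above propagates the optical field through each layer with a 2×2 characteristic matrix and reads the transmission off the product matrix. A minimal normal-incidence sketch in Python (our illustration with generic dielectric layers, not the liquid-crystal structure of the paper):

```python
import math
import cmath

def layer_matrix(n, d, wl):
    """Characteristic 2x2 matrix of one homogeneous layer of refractive
    index n and thickness d, at normal incidence and vacuum wavelength wl."""
    phi = 2 * math.pi * n * d / wl          # phase accumulated in the layer
    c, s = cmath.cos(phi), cmath.sin(phi)
    return ((c, 1j * s / n), (1j * n * s, c))

def matmul(a, b):
    # 2x2 complex matrix product.
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def transmittance(layers, wl, n_in=1.0, n_out=1.0):
    """Power transmission through a stack of (index, thickness) layers
    between semi-infinite media of indices n_in and n_out."""
    m = ((1, 0), (0, 1))
    for n, d in layers:
        m = matmul(m, layer_matrix(n, d, wl))
    (m11, m12), (m21, m22) = m
    t = 2 * n_in / (n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22)
    return (n_out / n_in) * abs(t) ** 2
```

    For example, a single quarter-wave layer of index n = 2 in vacuum gives the textbook result T = 4/(n + 1/n)² = 0.64. Scanning `wl` over the band reproduces reflection/transmission spectra of a multilayer stack.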

  14. A splitting algorithm for a novel regularization of Perona-Malik and application to image restoration

    NASA Astrophysics Data System (ADS)

    Karami, Fahd; Ziad, Lamia; Sadik, Khadija

    2017-12-01

    In this paper, we focus on a numerical method for a problem called the Perona-Malik inequality, which we use for image denoising. This model is obtained as the limit of the Perona-Malik model and the p-Laplacian operator with p → ∞. In Atlas et al. (Nonlinear Anal. Real World Appl 18:57-68, 2014), the authors proved the existence and uniqueness of the solution of the proposed model. However, they used an explicit numerical scheme for the approximated problem which is strongly dependent on the parameter p. To overcome this, we use an efficient algorithm which combines the classical additive operator splitting with a nonlinear relaxation algorithm. Finally, we present experimental results in image filtering which demonstrate the efficiency and effectiveness of our algorithm, and we compare it with the previous scheme presented in Atlas et al. (Nonlinear Anal. Real World Appl 18:57-68, 2014).

  15. Numerical Error Estimation with UQ

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Korn, Peter; Marotzke, Jochem

    2014-05-01

    Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists in extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. 
In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. We will show that we can choose a sensible parameter by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process. This is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References [1] F. RAUSER: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010 [2] F. RAUSER, J. MAROTZKE, P. KORN: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted

  16. Numerical relativity waveform surrogate model for generically precessing binary black hole mergers

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott E.; Scheel, Mark A.; Galley, Chad R.; Ott, Christian D.; Boyle, Michael; Kidder, Lawrence E.; Pfeiffer, Harald P.; Szilágyi, Béla

    2017-07-01

    A generic, noneccentric binary black hole (BBH) system emits gravitational waves (GWs) that are completely described by seven intrinsic parameters: the black hole spin vectors and the ratio of their masses. Simulating a BBH coalescence by solving Einstein's equations numerically is computationally expensive, requiring days to months of computing resources for a single set of parameter values. Since theoretical predictions of the GWs are often needed for many different source parameters, a fast and accurate model is essential. We present the first surrogate model for GWs from the coalescence of BBHs including all seven dimensions of the intrinsic noneccentric parameter space. The surrogate model, which we call NRSur7dq2, is built from the results of 744 numerical relativity simulations. NRSur7dq2 covers spin magnitudes up to 0.8 and mass ratios up to 2, includes all ℓ≤4 modes, begins about 20 orbits before merger, and can be evaluated in ˜50 ms . We find the largest NRSur7dq2 errors to be comparable to the largest errors in the numerical relativity simulations, and more than an order of magnitude smaller than the errors of other waveform models. Our model, and more broadly the methods developed here, will enable studies that were not previously possible when using highly accurate waveforms, such as parameter inference and tests of general relativity with GW observations.

  17. Solar Activity Forecasting for use in Orbit Prediction

    NASA Technical Reports Server (NTRS)

    Schatten, Kenneth

    2001-01-01

    Orbital prediction for satellites in low Earth orbit (LEO) or low planetary orbit depends strongly on exospheric densities. Solar activity forecasting is important in orbital prediction, as the solar UV and EUV inflate the upper atmospheric layers of the Earth and planets, forming the exosphere in which satellites orbit. Geomagnetic effects also relate to solar activity. Because of the complex and ephemeral nature of solar activity, with different cycles varying in strength by more than 100%, many different forecasting techniques have been utilized. The methods range from purely numerical techniques (essentially curve fitting) to numerous oddball schemes, as well as a small subset, called 'Precursor techniques.' The situation can be puzzling, owing to the numerous methodologies involved, somewhat akin to the numerous ether theories near the turn of the last century. Nevertheless, the Precursor techniques alone have a physical basis, namely dynamo theory, which provides a physical explanation for why this subset seems to work. I discuss this solar cycle's predictions, as well as the Sun's observed activity. I also discuss the SODA (Solar Dynamo Amplitude) index, which provides the user with the ability to track the Sun's hidden, interior dynamo magnetic fields. As a result, one may then update solar activity predictions continuously, by monitoring the solar magnetic fields as they change throughout the solar cycle. This paper ends by providing a glimpse into what the next solar cycle (#24) portends.

  18. DEM simulation of dendritic grain random packing: application to metal alloy solidification

    NASA Astrophysics Data System (ADS)

    Olmedilla, Antonio; Založnik, Miha; Combeau, Hervé

    2017-06-01

    The random packing of equiaxed dendritic grains in metal-alloy solidification is numerically simulated and validated against an experimental model. This phenomenon is characterized by a driving force induced by the solid-liquid density difference. Thereby, the solid dendritic grains, nucleated in the melt, sediment and pack with a relatively low inertia-to-dissipation ratio, the so-called Stokes number. The characteristics of the packed particle porous structure, such as the solid packing fraction, affect the final solidified product. A multi-sphere clumping Discrete Element Method (DEM) approach is employed to predict the solid packing fraction as a function of the grain geometry under solidification conditions. Five different monodisperse noncohesive frictionless particle collections are numerically packed by means of a vertical acceleration: a) three dendritic morphologies; b) spheres; and c) one ellipsoidal geometry. In order to validate our numerical results under solidification conditions, the sedimentation and packing of two monodisperse collections (spherical and dendritic) are carried out experimentally in a viscous quiescent medium. Hydrodynamic similarity between the actual phenomenon and the experimental model is respected, that is, a low Stokes number, O(10^-3). In this way, the experimental average solid packing fraction is employed to validate the numerical model. Eventually, the average packing fraction is found to depend strongly on the sphericity of the equiaxed dendritic grains, with looser packings for lower sphericity.

  19. The Root Cause of the Overheating Problem

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    2017-01-01

    Previously we identified the receding flow, where two fluid streams recede from each other, as an open numerical problem, because all well-known numerical fluxes give an anomalous temperature rise, hence called the overheating problem. This phenomenon, although presented in several textbooks and many previous publications, has scarcely been satisfactorily addressed, and the root cause of the overheating problem is not well understood. We found that this temperature rise was solely connected to entropy rise and proposed to use the method of characteristics to eradicate the problem. However, the root cause of the entropy production was still unclear. In the present study, we identify the cause of this problem: the entropy rise is rooted in the pressure flux in a finite volume formulation and is implanted at the first time step. It is found to be theoretically inevitable for all existing numerical flux schemes used in the finite volume setting, as confirmed by numerical tests. This difficulty cannot be eliminated by manipulating the time step, grid size, spatial accuracy, etc., although the rate of overheating depends on the flux scheme used. Finally, we incorporate the entropy transport equation, in place of the energy equation, to ensure preservation of entropy, thus correcting this temperature anomaly. Its applicability is demonstrated for some relevant 1D and 2D problems. Thus, the present study validates that the entropy generated ab initio is the genesis of the overheating problem.

  20. Improved locality of the phase-field lattice-Boltzmann model for immiscible fluids at high density ratios

    NASA Astrophysics Data System (ADS)

    Fakhari, Abbas; Mitchell, Travis; Leonardi, Christopher; Bolster, Diogo

    2017-11-01

    Based on phase-field theory, we introduce a robust lattice-Boltzmann equation for modeling immiscible multiphase flows at large density and viscosity contrasts. Our approach is built by modifying the method proposed by Zu and He [Phys. Rev. E 87, 043301 (2013), 10.1103/PhysRevE.87.043301] in such a way as to improve efficiency and numerical stability. In particular, we employ a different interface-tracking equation based on the so-called conservative phase-field model, a simplified equilibrium distribution that decouples pressure and velocity calculations, and a local scheme based on the hydrodynamic distribution functions for calculation of the stress tensor. In addition to two distribution functions for interface tracking and recovery of hydrodynamic properties, the only nonlocal variable in the proposed model is the phase field. Moreover, within our framework there is no need to use biased or mixed difference stencils for numerical stability and accuracy at high density ratios. This not only simplifies the implementation and efficiency of the model, but also leads to a model that is better suited to parallel implementation on distributed-memory machines. Several benchmark cases are considered to assess the efficacy of the proposed model, including the layered Poiseuille flow in a rectangular channel, Rayleigh-Taylor instability, and the rise of a Taylor bubble in a duct. The numerical results are in good agreement with available numerical and experimental data.

  1. Analysis of groundwater flow and stream depletion in L-shaped fluvial aquifers

    NASA Astrophysics Data System (ADS)

    Lin, Chao-Chih; Chang, Ya-Chi; Yeh, Hund-Der

    2018-04-01

    Understanding the head distribution in aquifers is crucial for the evaluation of groundwater resources. This article develops a model for describing flow induced by pumping in an L-shaped fluvial aquifer bounded by impermeable bedrocks and two nearly fully penetrating streams. A similar scenario for numerical studies was reported in Kihm et al. (2007). The water level of the streams is assumed to be linearly varying with distance. The aquifer is divided into two subregions and the continuity conditions of the hydraulic head and flux are imposed at the interface of the subregions. The steady-state solution describing the head distribution for the model without pumping is first developed by the method of separation of variables. The transient solution for the head distribution induced by pumping is then derived based on the steady-state solution as initial condition and the methods of finite Fourier transform and Laplace transform. Moreover, the solution for stream depletion rate (SDR) from each of the two streams is also developed based on the head solution and Darcy's law. Both head and SDR solutions in the real time domain are obtained by a numerical inversion scheme called the Stehfest algorithm. The software MODFLOW is chosen to compare with the proposed head solution for the L-shaped aquifer. The steady-state and transient head distributions within the L-shaped aquifer predicted by the present solution are compared with the numerical simulations and measurement data presented in Kihm et al. (2007).
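    The Stehfest algorithm used above inverts a Laplace-domain solution numerically using only real-valued evaluations of the transform. A minimal Python sketch of the classic scheme (our illustration, not the authors' code; the transform `F` and term count `n` are placeholders):

```python
from math import factorial, log

def stehfest_weights(n):
    """Stehfest coefficients V_k; n must be even (n = 10-14 is typical
    in double precision before round-off cancellation dominates)."""
    v = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, n // 2) + 1):
            s += (j ** (n // 2) * factorial(2 * j)
                  / (factorial(n // 2 - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        v.append((-1) ** (k + n // 2) * s)
    return v

def stehfest_invert(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(s):
    f(t) ~ (ln 2 / t) * sum_k V_k * F(k ln 2 / t)."""
    a = log(2.0) / t
    v = stehfest_weights(n)
    return a * sum(v[k - 1] * F(k * a) for k in range(1, n + 1))
```

    For instance, inverting F(s) = 1/(s + 1) at t = 1 recovers e^(-1) to several digits. The same call structure applies when `F` is a drawdown or SDR solution in the Laplace domain.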

  2. Efficient computer algebra algorithms for polynomial matrices in control design

    NASA Technical Reports Server (NTRS)

    Baras, J. S.; Macenany, D. C.; Munach, R.

    1989-01-01

    The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. Matrices with entries from a field, and Gaussian elimination, play a fundamental role in understanding the triangularization process. Polynomial matrices, in contrast, have entries from a ring, for which Gaussian elimination is not defined; their triangularization is accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent entirely such numerical issues through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data--the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.
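    The exact, symbolic arithmetic advocated above can be illustrated with rational-coefficient polynomial division, the basic step of Euclidean elimination. A hedged sketch (illustrative only, not the authors' algorithms), using Python's exact `Fraction` type so no floating-point round-off enters:

```python
from fractions import Fraction

def poly_divmod(a, b):
    """Exact quotient and remainder of polynomials over the rationals.
    Polynomials are coefficient lists, highest degree first."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    q = []
    while len(a) >= len(b):
        coef = a[0] / b[0]
        q.append(coef)
        # cancel the leading term of a against a multiple of b, drop it
        a = [ai - coef * (b[i] if i < len(b) else 0)
             for i, ai in enumerate(a)][1:]
    return q, a

def poly_gcd(a, b):
    """Euclidean algorithm on polynomials: the scalar analogue of the
    Euclidean elimination used to triangularize polynomial matrices."""
    while any(c != 0 for c in b):
        _, r = poly_divmod(a, b)
        while r and r[0] == 0:      # strip zero leading coefficients
            r = r[1:]
        a, b = b, r
    return [c / a[0] for c in a]    # normalize to a monic polynomial
```

    For example, the gcd of (x-1)(x+2) and (x-1)(x+3) comes out exactly as x - 1; every intermediate coefficient is an exact rational, which is precisely what makes the results accurate to the precision of the model data.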

  3. Neural computing for numeric-to-symbolic conversion in control systems

    NASA Technical Reports Server (NTRS)

    Passino, Kevin M.; Sartori, Michael A.; Antsaklis, Panos J.

    1989-01-01

    A type of neural network, the multilayer perceptron, is used to classify numeric data and assign appropriate symbols to various classes. This numeric-to-symbolic conversion results in a type of information extraction, which is similar to what is called data reduction in pattern recognition. The use of the neural network as a numeric-to-symbolic converter is introduced, its application in autonomous control is discussed, and several applications are studied. The perceptron is used as a numeric-to-symbolic converter for a discrete-event system controller supervising a continuous variable dynamic system. It is also shown how the perceptron can implement fault trees, which provide useful information (alarms) in a biological system and information for failure diagnosis and control purposes in an aircraft example.

  4. Seamless contiguity method for parallel segmentation of remote sensing image

    NASA Astrophysics Data System (ADS)

    Wang, Geng; Wang, Guanghui; Yu, Mei; Cui, Chengling

    2015-12-01

    Seamless contiguity is the key technology for parallel segmentation of remote sensing data of large volume. It can effectively integrate the fragments produced by parallel processing into reasonable results for subsequent processes. Numerous methods for seamless contiguity are reported in the literature, such as establishing buffers, area boundary merging, and data sewing. We propose a new method which is also based on building buffers. The seamless contiguity process we adopt is based on two principles: ensuring the accuracy of the boundary and ensuring the correctness of the topology. Firstly, the block number is computed based on the data processing ability; unlike establishing buffers on both sides of the block line, a buffer is established only on the right side and underside of the line. Each block of data is segmented respectively to obtain the segmentation objects and their label values. Secondly, one block (called the master block) is chosen and stitched to the adjacent blocks (called slave blocks), and the remaining blocks are processed in sequence. Through the above processing, the topological relationships and boundaries of the master block are guaranteed. Thirdly, if the master block polygon boundaries intersect the buffer boundary and the slave block polygon boundaries intersect the block line, certain rules are adopted to merge them or make trade-offs. Fourthly, the topology and boundary in the buffer area are checked. Finally, a set of experiments was conducted that proves the feasibility of this method. This novel seamless contiguity algorithm provides an applicable and practical solution for efficient segmentation of massive remote sensing images.

  5. Nada: A new code for studying self-gravitating tori around black holes

    NASA Astrophysics Data System (ADS)

    Montero, Pedro J.; Font, José A.; Shibata, Masaru

    2008-09-01

    We present a new two-dimensional numerical code called Nada designed to solve the full Einstein equations coupled to the general relativistic hydrodynamics equations. The code is mainly intended for studies of self-gravitating accretion disks (or tori) around black holes, although it is also suitable for regular spacetimes. Concerning technical aspects, the Einstein equations are formulated and solved in the code using a reformulation of the standard 3+1 Arnowitt-Deser-Misner canonical formalism, the so-called Baumgarte-Shapiro-Shibata-Nakamura approach. A key feature of the code is that derivative terms in the spacetime evolution equations are computed using a fourth-order centered finite difference approximation in conjunction with the Cartoon method to impose the axisymmetry condition under Cartesian coordinates (the choice in Nada), and the puncture/moving puncture approach to carry out black hole evolutions. Correspondingly, the general relativistic hydrodynamics equations are written in flux-conservative form and solved with high-resolution, shock-capturing schemes. We perform and discuss a number of tests to assess the accuracy and expected convergence of the code, namely, (single) black hole evolutions, shock tubes, evolutions of both spherical and rotating relativistic stars in equilibrium, and the gravitational collapse of a spherical relativistic star leading to the formation of a black hole. In addition, paving the way for specific applications of the code, we also present results from fully general relativistic numerical simulations of a system formed by a black hole surrounded by a self-gravitating torus in equilibrium.

  6. Proportional Topology Optimization: A New Non-Sensitivity Method for Solving Stress Constrained and Minimum Compliance Problems and Its Implementation in MATLAB

    PubMed Central

    Biyikli, Emre; To, Albert C.

    2015-01-01

    A new topology optimization method called the Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and is also efficient and accurate at the same time. It is implemented into two MATLAB programs to solve the stress constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress constrained algorithm and minimum compliance algorithm are compared by feeding output from one algorithm to the other in an alternative manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future works are discussed. The computer programs are self-contained and publicly shared in the website www.ptomethod.org. PMID:26678849
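    The core proportional idea can be sketched in a few lines: each element receives material in proportion to its stress (or compliance) measure, blended with the previous densities and clipped to box constraints. This toy sketch is our own illustration of a PTO-style update, not the authors' MATLAB programs (which, among other things, iterate an inner loop so that the material target is met exactly):

```python
def pto_update(density, stress, target_frac, alpha=0.5, x_min=0.01, x_max=1.0):
    """One proportional redistribution step of a PTO-style iteration.
    density: current element densities; stress: per-element stress measure;
    target_frac: prescribed volume fraction of material; alpha: history blend."""
    total = target_frac * len(density)          # total material to distribute
    s_sum = sum(stress)
    proportional = [total * s / s_sum for s in stress]
    # history averaging plus box constraints keep the update stable
    return [min(x_max, max(x_min, alpha * d + (1 - alpha) * p))
            for d, p in zip(density, proportional)]
```

    Because the update needs only the stress values themselves, no sensitivity derivatives are computed, which is what makes this family of methods simple to understand and implement.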

  7. An analysis of hypercritical states in elastic and inelastic systems

    NASA Astrophysics Data System (ADS)

    Kowalczk, Maciej

    The author raises a wide range of problems whose common characteristic is an analysis of hypercritical states in elastic and inelastic systems. The article consists of two basic parts. The first part primarily discusses problems of modelling hypercritical states, while the second analyzes numerical methods (so-called continuation methods) used to solve non-linear problems. The original approaches for modelling hypercritical states found in this article include the combination of plasticity theory and an energy condition for cracking, accounting for the variability and cyclical nature of the forms of fracture of a brittle material under a die, and the combination of plasticity theory and a simplified description of the phenomenon of localization along a discontinuity line. The author presents analytical solutions of three non-linear problems for systems made of elastic/brittle/plastic and elastic/ideally plastic materials. The author proceeds to discuss the analytical basics of continuation methods and analyzes the significance of the parameterization of non-linear problems, provides a method for selecting control parameters based on an analysis of the rank of a rectangular matrix of a uniform system of increment equations, and also provides a new method for selecting an equilibrium path originating from a bifurcation point. The author provides a general outline of continuation methods based on an analysis of the rank of a matrix of a corrective system of equations. The author supplements his theoretical solutions with numerical solutions of non-linear problems for rod systems and problems of the plastic disintegration of a notched rectangular plastic plate.

  8. Multifractal Cross Wavelet Analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Gao, Xing-Lu; Zhou, Wei-Xing; Stanley, H. Eugene

    Complex systems are composed of mutually interacting components and the output values of these components usually exhibit long-range cross-correlations. Using wavelet analysis, we propose a method of characterizing the joint multifractal nature of these long-range cross correlations, a method we call multifractal cross wavelet analysis (MFXWT). We assess the performance of the MFXWT method by performing extensive numerical experiments on the dual binomial measures with multifractal cross correlations and the bivariate fractional Brownian motions (bFBMs) with monofractal cross correlations. For binomial multifractal measures, we find the empirical joint multifractality of MFXWT to be in approximate agreement with the theoretical formula. For bFBMs, MFXWT may provide spurious multifractality because of the wide spanning range of the multifractal spectrum. We also apply the MFXWT method to stock market indices, and in pairs of index returns and volatilities we find an intriguing joint multifractal behavior. The tests on surrogate series also reveal that the cross correlation behavior, particularly the cross correlation with zero lag, is the main origin of cross multifractality.

  9. Least squares polynomial chaos expansion: A review of sampling strategies

    NASA Astrophysics Data System (ADS)

    Hadigol, Mohammad; Doostan, Alireza

    2018-04-01

    As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison between the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms the other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.
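    The least-squares PCE construction surveyed above amounts to regressing model evaluations onto an orthogonal polynomial basis. A minimal 1D sketch with a Legendre basis and the normal equations (illustrative only; practical implementations use stabilized solvers and the sampling strategies the review compares):

```python
def legendre(k, x):
    # First three Legendre polynomials P_0, P_1, P_2 on [-1, 1].
    return [1.0, x, 1.5 * x * x - 0.5][k]

def pce_least_squares(f, samples, order=2):
    """Fit PCE coefficients c_k minimizing sum_i (f(x_i) - sum_k c_k P_k(x_i))^2
    by forming and solving the normal equations A^T A c = A^T y."""
    a = [[legendre(k, x) for k in range(order + 1)] for x in samples]
    y = [f(x) for x in samples]
    m = order + 1
    ata = [[sum(row[r] * row[c] for row in a) for c in range(m)]
           for r in range(m)]
    aty = [sum(a[i][r] * y[i] for i in range(len(samples))) for r in range(m)]
    # Gaussian elimination with partial pivoting on the m x m normal system
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, m):
            fac = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= fac * ata[col][c]
            aty[r] -= fac * aty[col]
    c = [0.0] * m
    for r in range(m - 1, -1, -1):
        c[r] = (aty[r] - sum(ata[r][k] * c[k] for k in range(r + 1, m))) / ata[r][r]
    return c
```

    With f(x) = x² on five equispaced points, the recovered coefficients are c = [1/3, 0, 2/3], since x² = (1/3)P₀ + (2/3)P₂; swapping the sample set for Monte Carlo or coherence-optimal draws changes only the `samples` argument.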

  10. The nondeterministic divide

    NASA Technical Reports Server (NTRS)

    Charlesworth, Arthur

    1990-01-01

    The nondeterministic divide partitions a vector into two non-empty slices by allowing the point of division to be chosen nondeterministically. Support for high-level divide-and-conquer programming provided by the nondeterministic divide is investigated. A diva algorithm is a recursive divide-and-conquer sequential algorithm on one or more vectors of the same range, whose division point for a new pair of recursive calls is chosen nondeterministically before any computation is performed and whose recursive calls are made immediately after the choice of division point; also, access to vector components is only permitted during activations in which the vector parameters have unit length. The notion of diva algorithm is formulated precisely as a diva call, a restricted call on a sequential procedure. Diva calls are proven to be intimately related to associativity. Numerous applications of diva calls are given and strategies are described for translating a diva call into code for a variety of parallel computers. Thus diva algorithms separate logical correctness concerns from implementation concerns.
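    The link between the nondeterministic divide and associativity can be illustrated with a short sketch (an illustration, not the paper's formalism): a recursive divide-and-conquer reduction whose division point is chosen arbitrarily returns the same result for any associative operation:

```python
import random

def diva_reduce(op, vec, lo, hi, rng):
    # Divide-and-conquer reduction over vec[lo:hi]; the division point is
    # chosen nondeterministically, mimicking the nondeterministic divide.
    if hi - lo == 1:
        return vec[lo]                      # unit-length slice: access permitted
    mid = rng.randint(lo + 1, hi - 1)       # arbitrary, non-empty split
    return op(diva_reduce(op, vec, lo, mid, rng),
              diva_reduce(op, vec, mid, hi, rng))

data = [3, 1, 4, 1, 5, 9, 2, 6]
# Twenty different random split strategies all yield the same sum,
# because addition is associative
results = {diva_reduce(lambda a, b: a + b, data, 0, len(data),
                       random.Random(seed)) for seed in range(20)}
```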

  11. Influence of a New “Call-Out Algorithm” for Management of Postoperative Pain and Its Side Effects on Length of Stay in Hospital: A Two-Centre Prospective Randomized Trial

    PubMed Central

    Dybvik, Lisa; Skraastad, Erlend; Yeltayeva, Aigerim; Konkayev, Aidos; Musaeva, Tatiana; Zabolotskikh, Igor; Dahl, Vegard; Raeder, Johan

    2017-01-01

    Background We recently introduced the efficacy safety score (ESS) as a new “call-out algorithm” for management of postoperative pain and side effects. In this study, we report the influence of ESS recorded hourly during the first 8 hours after surgery on the mobility degree, postoperative nonsurgical complications, and length of hospital stay (LOS). Methods We randomized 1152 surgical patients into three groups for postoperative observation: (1) ESS group (n = 409), (2) Verbal Numeric Rate Scale (VNRS) for pain group (n = 417), and (3) an ordinary qualitative observation (Control) group (n = 326). An ESS > 10 or VNRS > 4 at rest or a nurse's observation of pain or adverse reaction to analgesic treatment in the Control group served as a “call-out alarm” for an anaesthesiologist. Results We found no significant differences in the mobility degree and number of postoperative nonsurgical complications between the groups. LOS was significantly shorter at 12.7 ± 6.3 days (mean ± SD) in the ESS group versus 14.2 ± 6.2 days in the Control group (P < 0.001). Conclusion Postoperative ESS recording in combination with the possibility to call upon an anaesthesiologist when exceeding the threshold score might have contributed to the reduction of LOS in this two-centre study. This trial is registered with NCT02143128. PMID:28855800

  12. Impacts of variable thermal conductivity on stagnation point boundary layer flow past a Riga plate with variable thickness using generalized Fourier's law

    NASA Astrophysics Data System (ADS)

    Shah, S.; Hussain, S.; Sagheer, M.

    2018-06-01

    This article explores the problem of two-dimensional, laminar, steady boundary layer stagnation point slip flow over a Riga plate. The incompressible upper-convected Maxwell fluid has been considered as a rheological fluid model. The heat transfer characteristics are investigated with the generalized Fourier's law. The fluid thermal conductivity is assumed to be temperature dependent in this study. A system of partial differential equations governing the flow of an upper-convected Maxwell fluid, heat and mass transfer using the generalized Fourier's law is developed. The main objective of the article is to inspect the impacts of pertinent physical parameters such as the stretching ratio parameter (0 ⩽ A ⩽ 0.3), Deborah number (0 ⩽ β ⩽ 0.6), thermal relaxation parameter (0 ⩽ γ ⩽ 0.5), wall thickness parameter (0.1 ⩽ α ⩽ 3.5), slip parameter (0 ⩽ R ⩽ 1.5), thermal conductivity parameter (0.1 ⩽ δ ⩽ 1.0) and modified Hartmann number (0 ⩽ Q ⩽ 3) on the velocity and temperature profiles. Suitable local similarity transformations have been used to obtain a system of non-linear ODEs from the governing PDEs. The numerical solutions for the dimensionless velocity and temperature distributions have been obtained by employing an effective numerical method called the shooting method. The velocity profile shows a reduction in velocity for higher values of the viscoelastic parameter and the thermal relaxation parameter. In addition, to verify the reliability of the numerical results obtained by the shooting method, the MATLAB built-in solver bvp4c has also been utilized.
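    As background on the solution technique, here is a generic sketch of the shooting method (not the authors' MATLAB code) for a two-point boundary value problem: the unknown initial slope is adjusted by bisection until the far boundary condition is met, for the toy problem y'' = 6x, y(0) = 0, y(1) = 1, whose exact solution is y = x³:

```python
def rk4_step(f, t, y, h):
    # One classical Runge-Kutta step for a first-order system y' = f(t, y)
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def shoot(slope, n=100):
    # Integrate y'' = 6x as the system (y, y')' = (y', 6x) over [0, 1]
    f = lambda x, y: [y[1], 6.0 * x]
    y, h = [0.0, slope], 1.0 / n
    for i in range(n):
        y = rk4_step(f, i * h, y, h)
    return y[0]                      # value at the far boundary x = 1

def shooting_method(target, lo=-10.0, hi=10.0, tol=1e-10):
    # Bisect on the unknown initial slope y'(0) until y(1) hits the target
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if shoot(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

slope = shooting_method(1.0)   # exact solution y = x**3 has y'(0) = 0
```

    Bisection works here because y(1) grows monotonically with the initial slope; practical codes often use a secant or Newton update instead.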

  13. ANIE: A mathematical algorithm for automated indexing of planar deformation features in quartz grains

    NASA Astrophysics Data System (ADS)

    Huber, Matthew S.; Ferrière, Ludovic; Losiak, Anna; Koeberl, Christian

    2011-09-01

    Planar deformation features (PDFs) in quartz, one of the most commonly used diagnostic indicators of shock metamorphism, are planes of amorphous material that follow crystallographic orientations, and can thus be distinguished from non-shock-induced fractures in quartz. The process of indexing data for PDFs from universal-stage measurements has traditionally been performed using a manual graphical method, a time-consuming process in which errors can easily be introduced. A mathematical method and computer algorithm, which we call the Automated Numerical Index Executor (ANIE) program for indexing PDFs, was produced, and is presented here. The ANIE program is more accurate and faster than the manual graphical determination of Miller-Bravais indices, as it allows control of the exact error used in the calculation and removal of human error from the process.

  14. Extraction of Children's Friendship Relation from Activity Level

    NASA Astrophysics Data System (ADS)

    Kono, Aki; Shintani, Kimio; Katsuki, Takuya; Kihara, Shin'ya; Ueda, Mari; Kaneda, Shigeo; Haga, Hirohide

    Children learn to fit into society through living in a group, and this is greatly influenced by their friend relations. Although preschool teachers need to observe children to assist the growth of their social skills and support the development of each child's personality, only experienced teachers can watch over children while providing high-quality guidance. To address this problem, this paper proposes a mathematical and objective method that assists teachers with observation. It uses numerical activity-level data recorded by pedometers, and we construct a tree diagram, called a dendrogram, based on hierarchical clustering of the recorded activity levels. We also calculate the ``breadth'' and ``depth'' of children's friend relations by using more than one dendrogram. We recorded children's activity levels in a kindergarten for two months and evaluated the proposed method; the results generally coincide with the teachers' remarks about the children.
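    A minimal sketch of the kind of hierarchical clustering behind a dendrogram (illustrative only; the paper's actual distance measure over pedometer data is not specified here): single-linkage agglomerative clustering over hypothetical per-child activity vectors, whose merge history is what a dendrogram visualizes:

```python
def euclid(u, v):
    return sum((a - b)**2 for a, b in zip(u, v)) ** 0.5

def single_linkage(dist, labels):
    # Agglomerative clustering with single linkage; the returned merge
    # history (cluster, cluster, distance) is what a dendrogram draws.
    clusters = [[l] for l in labels]
    merges = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist[a][b] for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((tuple(clusters[i]), tuple(clusters[j]), d))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] \
                 + [clusters[i] + clusters[j]]
    return merges

# Hypothetical step counts (two observation windows) per child
activity = {"A": [120, 80], "B": [118, 82], "C": [40, 200], "D": [42, 198]}
dist = {p: {q: euclid(activity[p], activity[q]) for q in activity}
        for p in activity}
merges = single_linkage(dist, list(activity))
# Children with the most similar activity patterns merge first
```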

  15. Modeling of heterogeneous elastic materials by the multiscale hp-adaptive finite element method

    NASA Astrophysics Data System (ADS)

    Klimczak, Marek; Cecot, Witold

    2018-01-01

    We present an enhancement of the multiscale finite element method (MsFEM) by combining it with the hp-adaptive FEM. Such a discretization-based homogenization technique is a versatile tool for modeling heterogeneous materials with fast oscillating elasticity coefficients. No assumption on periodicity of the domain is required. In order to avoid direct, so-called overkill mesh computations, a coarse mesh with effective stiffness matrices is used and special shape functions are constructed to account for the local heterogeneities at the micro resolution. The automatic adaptivity (hp-type at the macro resolution and h-type at the micro resolution) increases efficiency of computation. In this paper details of the modified MsFEM are presented and a numerical test performed on a Fichera corner domain is presented in order to validate the proposed approach.

  16. Network Simulation solution of free convective flow from a vertical cone with combined effect of non-uniform surface heat flux and heat generation or absorption

    NASA Astrophysics Data System (ADS)

    Immanuel, Y.; Pullepu, Bapuji; Sambath, P.

    2018-04-01

    A two-dimensional mathematical model is formulated for the transient laminar free convective flow of an incompressible viscous fluid over a vertical cone with variable surface heat flux, combined with the effects of heat generation and absorption. Using a powerful computational method based on a thermoelectric analogy, called the Network Simulation Method (NSM), the solutions of the governing nondimensional, coupled, unsteady and nonlinear partial differential conservation equations of the flow are obtained. The numerical technique is always stable and convergent, and achieves high efficiency and accuracy by employing the network simulator computer code Pspice. The velocity and temperature profiles have been analyzed for various factors, namely the Prandtl number Pr, the heat flux power-law exponent n and the heat generation/absorption parameter Δ, and are presented graphically.

  17. Documentation for the MODFLOW 6 framework

    USGS Publications Warehouse

    Hughes, Joseph D.; Langevin, Christian D.; Banta, Edward R.

    2017-08-10

    MODFLOW is a popular open-source groundwater flow model distributed by the U.S. Geological Survey. Growing interest in surface and groundwater interactions, local refinement with nested and unstructured grids, karst groundwater flow, solute transport, and saltwater intrusion has led to the development of numerous MODFLOW versions. Oftentimes there are incompatibilities between these different MODFLOW versions. This report describes a new MODFLOW framework called MODFLOW 6 that is designed to support multiple models and multiple types of models. The framework is written in Fortran using a modular object-oriented design. The primary framework components include the simulation (or main program), Timing Module, Solutions, Models, Exchanges, and Utilities. The first version of the framework focuses on numerical solutions, numerical models, and numerical exchanges. This focus on numerical models allows multiple numerical models to be tightly coupled at the matrix level.

  18. F2-Ray: A new algorithm for efficient transport of ionizing radiation

    NASA Astrophysics Data System (ADS)

    Mao, Yi; Zhang, J.; Wandelt, B. D.; Shapiro, P. R.; Iliev, I. T.

    2014-04-01

    We present a new algorithm for the 3D transport of ionizing radiation, called F2-Ray (Fast Fourier Ray-tracing method). The transfer of ionizing radiation with long mean free path in diffuse intergalactic gas poses a special challenge to standard numerical methods which transport the radiation in position space. Standard methods usually trace each individual ray until it is fully absorbed by the intervening gas. If the mean free path is long, the computational cost and memory load are likely to be prohibitive. We have developed an algorithm that overcomes these limitations and is, therefore, significantly more efficient. The method calculates the transfer of radiation collectively, using the Fast Fourier Transform to convert radiation between position and Fourier spaces, so the computational cost will not increase with the number of ionizing sources. The method also automatically combines parallel rays with the same frequency at the same grid cell, thereby minimizing the memory requirement. The method is explicitly photon-conserving, i.e. the depletion of ionizing photons is guaranteed to equal the photoionizations they caused, and explicitly obeys the periodic boundary condition, i.e. the escape of ionizing photons from one side of a simulation volume is guaranteed to be compensated by emitting the same amount of photons into the volume through the opposite side. Together, these features make it possible to numerically simulate the transfer of ionizing photons more efficiently than previous methods. Since ionizing radiation such as the X-ray is responsible for heating the intergalactic gas when first stars and quasars form at high redshifts, our method can be applied to simulate thermal distribution, in addition to cosmic reionization, in three-dimensional inhomogeneous cosmological density field.

  19. Modal identification of structures by a novel approach based on FDD-wavelet method

    NASA Astrophysics Data System (ADS)

    Tarinejad, Reza; Damadipour, Majid

    2014-02-01

    An important application of system identification in structural dynamics is the determination of natural frequencies, mode shapes and damping ratios during operation, which can then be used for calibrating numerical models. In this paper, the combination of two advanced methods of Operational Modal Analysis (OMA), called Frequency Domain Decomposition (FDD) and Continuous Wavelet Transform (CWT), based on a novel cyclic averaging of correlation functions (CACF) technique, is used for identification of dynamic properties. With this technique, the autocorrelation of averaged correlation functions is used instead of the original signals. Integration of the FDD and CWT methods is used to overcome their individual deficiencies and take advantage of the unique capabilities of each method. The FDD method is able to accurately estimate the natural frequencies and mode shapes of structures in the frequency domain. On the other hand, the CWT method works in the time-frequency domain, decomposing a signal at different frequencies and determining the damping coefficients. In this paper, a new formulation applied to the wavelet transform of the averaged correlation function of an ambient response is proposed. This application enables accurate estimation of damping ratios from weak (noise) or strong (earthquake) vibrations and from long- or short-duration records. For this purpose, the modified Morlet wavelet having two free parameters is used. The optimum values of these two parameters are obtained by employing a technique which minimizes the entropy of the wavelet coefficients matrix. The capabilities of the novel FDD-Wavelet method in the system identification of various dynamic systems with regular or irregular distribution of mass and stiffness are illustrated. This combined approach is superior to classic methods and yields results that agree well with the exact solutions of the numerical models.

  20. Advanced Methods for Aircraft Engine Thrust and Noise Benefits: Nozzle-Inlet Flow Analysis

    NASA Technical Reports Server (NTRS)

    Morgan, Morris H.; Gilinsky, Mikhail M.

    2001-01-01

    Three connected sub-projects were conducted under the reported project. In part, these sub-projects are directed at solving problems addressed by the HU/FM&AL under two other NASA grants. The fundamental idea uniting these projects is to use untraditional 3D corrugated nozzle designs and additional methods for exhaust jet noise reduction without essential thrust loss and even with thrust augmentation. Such additional approaches are: (1) to add some solid, fluid, or gas mass at discrete locations to the main supersonic gas stream to minimize the negative influence of strong shock waves forming in propulsion systems; this mass addition may be accompanied by heat addition to the main stream as a result of fuel combustion or by cooling of this stream as a result of liquid mass evaporation and boiling; (2) to use porous or permeable nozzles and additional shells at the nozzle exit for preliminary cooling of the exhaust hot jet and pressure compensation for non-design conditions (a so-called continuous ejector with small mass flow rate); and (3) to propose and analyze new effective methods of fuel injection into the flow stream in air-breathing engines. Note that all these problems were formulated based on detailed descriptions of the main experimental facts observed at NASA Glenn Research Center. Basically, the HU/FM&AL Team has been involved in joint research with the purpose of finding theoretical explanations for experimental facts and creating an accurate numerical simulation technique and prediction theory for current problems in propulsion systems addressed by NASA and Navy agencies. The research is focused on a wide range of problems in the propulsion field as well as on experimental testing and theoretical and numerical simulation analysis for advanced aircraft and rocket engines. The FM&AL Team uses analytical methods, numerical simulations, and possible experimental tests at the Hampton University campus.
We will present some management activity and theoretical numerical simulation results obtained by the FM&AL Team in the reporting period in accordance with the schedule of the work.

  1. Riemann solvers and Alfven waves in black hole magnetospheres

    NASA Astrophysics Data System (ADS)

    Punsly, Brian; Balsara, Dinshaw; Kim, Jinho; Garain, Sudip

    2016-09-01

    In the magnetosphere of a rotating black hole, an inner Alfven critical surface (IACS) must be crossed by inflowing plasma. Inside the IACS, Alfven waves are inward directed toward the black hole. The majority of the proper volume of the active region of spacetime (the ergosphere) is inside of the IACS. The charge and the totally transverse momentum flux (the momentum flux transverse to both the wave normal and the unperturbed magnetic field) are both determined exclusively by the Alfven polarization. Thus, it is important for numerical simulations of black hole magnetospheres to minimize the dissipation of Alfven waves. Elements of the dissipated wave emerge in adjacent cells regardless of the IACS; there is no mechanism to prevent Alfvenic information from crossing outward. Thus, numerical dissipation can affect how simulated magnetospheres attain the substantial Goldreich-Julian charge density associated with the rotating magnetic field. In order to help minimize dissipation of Alfven waves in relativistic numerical simulations we have formulated a one-dimensional Riemann solver, called HLLI, which incorporates the Alfven discontinuity and the contact discontinuity. We have also formulated a multidimensional Riemann solver, called MuSIC, that enables low dissipation propagation of Alfven waves in multiple dimensions. The importance of higher order schemes in lowering the numerical dissipation of Alfven waves is also catalogued.

  2. Panel methods: An introduction

    NASA Technical Reports Server (NTRS)

    Erickson, Larry L.

    1990-01-01

    Panel methods are numerical schemes for solving (the Prandtl-Glauert equation) for linear, inviscid, irrotational flow about aircraft flying at subsonic or supersonic speeds. The tools at the panel-method user's disposal are (1) surface panels of source-doublet-vorticity distributions that can represent nearly arbitrary geometry, and (2) extremely versatile boundary condition capabilities that can frequently be used for creative modeling. Panel-method capabilities and limitations, basic concepts common to all panel-method codes, different choices that were made in the implementation of these concepts into working computer programs, and various modeling techniques involving boundary conditions, jump properties, and trailing wakes are discussed. An approach for extending the method to nonlinear transonic flow is also presented. Three appendices supplement the main text. In appendix 1, additional detail is provided on how the basic concepts are implemented into a specific computer program (PANAIR). In appendix 2, it is shown how to evaluate analytically the fundamental surface integral that arises in the expressions for influence-coefficients, and evaluate its jump property. In appendix 3, a simple example is used to illustrate the so-called finite part of the improper integrals.

  3. Dislocation-induced stress in polycrystalline materials: mesoscopic simulations in the dislocation density formalism

    NASA Astrophysics Data System (ADS)

    Berkov, D. V.; Gorn, N. L.

    2018-06-01

    In this paper we present a simple and effective numerical method which allows a fast Fourier transformation-based evaluation of stress generated by dislocations with arbitrary directions and Burgers vectors if the (site-dependent) dislocation density is known. Our method allows the evaluation of the dislocation stress using a rectangular grid with shape-anisotropic discretization cells without employing higher multipole moments of the dislocation interaction coefficients. Using the proposed method, we first simulate the stress created by relatively simple non-homogeneous distributions of vertical edge and so-called ‘mixed’ dislocations in a disk-shaped sample, which is necessary to understand the dislocation behavior in more complicated systems. The main part of our research is devoted to the stress distribution in polycrystalline layers with the dislocation density rapidly varying with the distance to the layer bottom. Considering GaN as a typical example of such systems, we investigate dislocation-induced stress for edge and mixed dislocations, having random orientations of Burgers vectors among crystal grains. We show that the rapid decay of the dislocation density leads to many highly non-trivial features of the stress distributions in such layers and study in detail the dependence of these features on the average grain size. Finally we develop an analytical approach which allows us to predict the evolution of the stress variance with the grain size and compare analytical predictions with numerical results.

  4. A new approach to characterize very-low-level radioactive waste produced at hadron accelerators.

    PubMed

    Zaffora, Biagio; Magistris, Matteo; Chevalier, Jean-Pierre; Luccioni, Catherine; Saporta, Gilbert; Ulrici, Luisa

    2017-04-01

    Radioactive waste is produced as a consequence of preventive and corrective maintenance during the operation of high-energy particle accelerators, or during associated dismantling campaigns. Its radiological characterization must be performed to ensure appropriate disposal in the disposal facilities. The radiological characterization of waste includes establishing the list of produced radionuclides, called the "radionuclide inventory", and estimating their activity. The present paper describes the process adopted at CERN to characterize very-low-level radioactive waste, with a focus on activated metals. The characterization method consists of measuring and estimating the activity of produced radionuclides either by experimental methods or by statistical and numerical approaches. We adapted the so-called Scaling Factor (SF) and Correlation Factor (CF) techniques to the needs of hadron accelerators, and applied them to very-low-level metallic waste produced at CERN. For each type of metal we calculated the radionuclide inventory and identified the radionuclides that contribute most to hazard factors. The methodology proposed is of general validity, can be extended to other activated materials, and can be used for the characterization of waste produced in particle accelerators and research centres where the activation mechanisms are comparable to the ones occurring at CERN. Copyright © 2017 Elsevier Ltd. All rights reserved.
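    The scaling-factor idea can be sketched as follows (a simplified illustration, not CERN's actual procedure; the ratios and the choice of Co-60 as key nuclide are hypothetical): the activity of a difficult-to-measure (DTM) nuclide is estimated as a scaling factor, commonly the geometric mean of measured activity ratios, times the activity of an easy-to-measure key nuclide:

```python
import math

def scaling_factor(ratios):
    # Geometric mean of measured DTM/key-nuclide activity ratios,
    # a common choice of scaling factor in this technique
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

def estimate_activity(key_activity_bq, sf):
    # DTM-nuclide activity inferred from the key-nuclide measurement
    return key_activity_bq * sf

# Hypothetical sampled DTM/Co-60 activity ratios from destructive assays
measured_ratios = [0.12, 0.09, 0.15, 0.11]
sf = scaling_factor(measured_ratios)
dtm_activity = estimate_activity(1000.0, sf)   # key nuclide measured at 1000 Bq
```

    The geometric mean is preferred over the arithmetic mean because activity ratios are typically log-normally distributed across waste items.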

  5. Calculation of the critical overdensity in the spherical-collapse approximation

    NASA Astrophysics Data System (ADS)

    Herrera, D.; Waga, I.; Jorás, S. E.

    2017-03-01

    Critical overdensity δc is a key concept in estimating the number count of halos for different redshift and halo-mass bins, and therefore it is a powerful tool to compare cosmological models to observations. There are currently two different prescriptions in the literature for its calculation, namely, the differential-radius and the constant-infinity methods. In this work we show that the latter yields precise results only if we are careful in the definition of the so-called numerical infinities. Although the subtleties we point out are crucial ingredients for an accurate determination of δc both in general relativity and in any other gravity theory, we focus on f(R)-modified gravity models in the metric approach; in particular, we use the so-called large-field (F = 1/3) and small-field (F = 0) limits. For both of them, we calculate the relative errors (between our method and the others) in the critical density δc, in the comoving number density of halos per logarithmic mass interval n_lnM, and in the number of clusters at a given redshift in a given mass bin N_bin, as functions of the redshift. We have also derived an analytical expression for the density contrast in the linear regime as a function of the collapse redshift z_c and Ω_m0 for any F.

  6. Compressive Sampling Based Interior Reconstruction for Dynamic Carbon Nanotube Micro-CT

    PubMed Central

    Yu, Hengyong; Cao, Guohua; Burk, Laurel; Lee, Yueh; Lu, Jianping; Santago, Pete; Zhou, Otto; Wang, Ge

    2010-01-01

    In the computed tomography (CT) field, one recent invention is the so-called carbon nanotube (CNT) based field emission x-ray technology. On the other hand, compressive sampling (CS) based interior tomography is a new innovation. Combining the strengths of these two novel subjects, we apply the interior tomography technique to local mouse cardiac imaging using respiration and cardiac gating with a CNT based micro-CT scanner. The major features of our method are: (1) it does not need exact prior knowledge inside an ROI; and (2) two orthogonal scout projections are employed to regularize the reconstruction. Both numerical simulations and in vivo mouse studies are performed to demonstrate the feasibility of our methodology. PMID:19923686

  7. A new version of Stochastic-parallel-gradient-descent algorithm (SPGD) for phase correction of a distorted orbital angular momentum (OAM) beam

    NASA Astrophysics Data System (ADS)

    Jiao Ling, Lin; Xiaoli, Yin; Huan, Chang; Xiaozhou, Cui; Yi-Lin, Guo; Huan-Yu, Liao; Chun-Yu, Gao; Guohua, Wu; Guang-Yao, Liu; Jin-Kun, Jiang; Qing-Hua, Tian

    2018-02-01

    Atmospheric turbulence limits the performance of orbital angular momentum-based free-space optical communication (FSO-OAM) systems. In order to compensate for the phase distortion induced by atmospheric turbulence, wavefront sensorless adaptive optics (WSAO) has been proposed and studied in recent years. In this paper a new version of SPGD, called MZ-SPGD, is proposed, which combines the Z-SPGD based on the deformable mirror influence function and the M-SPGD based on the Zernike polynomials. Numerical simulations show that the hybrid method decreases convergence times markedly while achieving the same compensation effect as Z-SPGD and M-SPGD.
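    For reference, a generic SPGD iteration (not the paper's MZ-SPGD variant) can be sketched as below: the control vector is perturbed by a random parallel dither, the two-sided change in a quality metric is measured, and the controls are stepped along the dither. The toy quadratic metric here stands in for an optical sharpness metric, and all parameter values are illustrative:

```python
import random

def spgd(metric, u, gain=0.5, sigma=0.1, iters=2000, seed=0):
    # Stochastic parallel gradient descent: dither all controls at once,
    # measure the two-sided metric change, and step along the dither.
    rng = random.Random(seed)
    for _ in range(iters):
        delta = [sigma * rng.choice((-1.0, 1.0)) for _ in u]
        j_plus = metric([ui + di for ui, di in zip(u, delta)])
        j_minus = metric([ui - di for ui, di in zip(u, delta)])
        dj = j_plus - j_minus
        u = [ui + gain * dj * di for ui, di in zip(u, delta)]
    return u

# Toy stand-in for a wavefront-quality metric: distance to ideal phases
target = [0.3, -1.2, 0.7, 0.0, 2.1]
metric = lambda u: -sum((ui - ti)**2 for ui, ti in zip(u, target))
u = spgd(metric, [0.0] * 5)
```

    Only two metric evaluations are needed per iteration regardless of the number of controls, which is the practical appeal of SPGD in adaptive optics.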

  8. Smooth particle hydrodynamics: theory and application to the origin of the moon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benz, W.

    1986-01-01

    The origin of the moon is modeled by the so-called smooth particle hydrodynamics (SPH) method (Lucy 1977, Monaghan 1985), which substitutes for the fluid a finite set of extended particles, so that the hydrodynamics equations reduce to the equations of motion of individual particles. These equations of motion differ from the standard gravitational N-body problem only insofar as pressure gradients and viscosity terms have to be added to the gradient of the potential to derive the forces between the particles. The numerical tools developed for ''classical'' N-body problems can therefore be readily applied to solve 3-dimensional hydrodynamical problems. 12 refs., 1 fig.
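    The particle-based density estimate at the heart of SPH can be sketched as follows (a 1D illustration with the standard cubic-spline kernel, not Benz's code; the particle setup is a hypothetical uniform lattice):

```python
def w_cubic(r, h):
    # Standard 1D cubic-spline (M4) SPH kernel, normalized so that
    # its integral over the line equals 1
    q = r / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5*q*q + 0.75*q*q*q)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(positions, masses, h):
    # Density at each particle: a kernel-weighted sum over all particles
    return [sum(m_j * w_cubic(abs(x_i - x_j), h)
                for x_j, m_j in zip(positions, masses))
            for x_i in positions]

# 21 equal-mass particles evenly spaced at the smoothing length:
# interior density recovers the nominal value 1.0; edges fall off
positions = [0.1 * i for i in range(21)]
masses = [0.1] * 21
rho = sph_density(positions, masses, h=0.1)
```

    Pressure-gradient and viscosity forces in full SPH are built from the gradient of the same kernel, which is why the N-body machinery carries over.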

  9. Characteristic extraction and matching algorithms of ballistic missile in near-space by hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Lu, Li; Sheng, Wen; Liu, Shihua; Zhang, Xianzhi

    2014-10-01

    The ballistic missile hyperspectral data of an imaging spectrometer on a near-space platform are generated by a numerical method. The characteristics of the ballistic missile hyperspectral data are extracted and matched based on two different algorithms, called transverse counting and quantization coding, respectively. The simulation results show that both algorithms extract the characteristics of the ballistic missile adequately and accurately. The algorithm based on transverse counting has low complexity and can be implemented easily compared to the algorithm based on quantization coding. The transverse counting algorithm also shows good immunity to disturbance signals and speeds up the matching and recognition of subsequent targets.

  10. Stochastic optimal control of ultradiffusion processes with application to dynamic portfolio management

    NASA Astrophysics Data System (ADS)

    Marcozzi, Michael D.

    2008-12-01

    We consider theoretical and approximation aspects of the stochastic optimal control of ultradiffusion processes in the context of a prototype model for the selling price of a European call option. Within a continuous-time framework, the dynamic management of a portfolio of assets is effected through continuous or point control, activation costs, and phase delay. The performance index is derived from the unique weak variational solution to the ultraparabolic Hamilton-Jacobi equation; the value function is the optimal realization of the performance index relative to all feasible portfolios. An approximation procedure based upon a temporal box scheme/finite element method is analyzed; numerical examples are presented in order to demonstrate the viability of the approach.

  11. Deterministic quantum annealing expectation-maximization algorithm

    NASA Astrophysics Data System (ADS)

    Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki

    2017-11-01

    Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM heavily depends on initial configurations and fails to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
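    For context, the classical EM algorithm that DQAEM extends can be sketched for a two-component 1D Gaussian mixture (a standard textbook version, not the paper's implementation; the synthetic data and initialization are illustrative):

```python
import math, random

def em_gmm(data, iters=200):
    # EM for a two-component 1D Gaussian mixture
    mu = [min(data), max(data)]          # simple deterministic init
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2*math.pi*var[k])
                 * math.exp(-(x - mu[k])**2 / (2*var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0]/s, p[1]/s])
        # M-step: responsibility-weighted maximum-likelihood updates
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k]*x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k]*(x - mu[k])**2 for r, x in zip(resp, data)) / nk
            pi[k] = nk / len(data)
    return mu, var, pi

rng = random.Random(0)
data = [rng.gauss(0.0, 0.5) for _ in range(300)] + \
       [rng.gauss(5.0, 0.5) for _ in range(300)]
mu, var, pi = em_gmm(data)
```

    With poor initializations EM can instead settle on a local optimum of the likelihood, which is exactly the failure mode the annealing extension targets.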

  12. Terahertz frequency superconductor-nanocomposite photonic band gap

    NASA Astrophysics Data System (ADS)

    Elsayed, Hussein A.; Aly, Arafa H.

    2018-02-01

    In the present work, we discuss the transmittance properties of one-dimensional (1D) superconductor nanocomposite photonic crystals (PCs) in THz frequency regions. Our modeling is essentially based on the two-fluid model, the Maxwell-Garnett model and the characteristic matrix method. The numerical results demonstrate the appearance of the so-called cutoff frequency. We have obtained the significant effects of parameters such as the volume fraction, the permittivity of the host material, the size of the nanoparticles and the permittivity of the superconductor material on the properties of the cutoff frequency. The present results may be useful in optical communications and photonic applications, to act as tunable THz antennas, reflectors and high-pass filters.
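    The Maxwell-Garnett model mentioned above has a simple closed form for spherical inclusions; a small sketch with illustrative parameter values (not those of the paper):

```python
def maxwell_garnett(eps_host, eps_incl, f):
    # Maxwell-Garnett effective permittivity for spherical inclusions
    # of volume fraction f embedded in a host medium
    num = eps_incl + 2*eps_host + 2*f*(eps_incl - eps_host)
    den = eps_incl + 2*eps_host - f*(eps_incl - eps_host)
    return eps_host * num / den

# Illustrative values: a lossy (complex) inclusion in a dielectric host
eps_eff = maxwell_garnett(eps_host=2.25, eps_incl=-10.0 + 1.0j, f=0.1)
```

    The formula interpolates sensibly between its limits: at f = 0 it returns the host permittivity and at f = 1 the inclusion permittivity.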

  13. Performance Analysis and Design Synthesis (PADS) computer program. Volume 2: Program description, part 2

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The QL module of the Performance Analysis and Design Synthesis (PADS) computer program is described. Execution of this module is initiated when and if subroutine PADSI calls subroutine GROPE. Subroutine GROPE controls the high level logical flow of the QL module. The purpose of the module is to determine a trajectory that satisfies the necessary variational conditions for optimal performance. The module achieves this by solving a nonlinear multi-point boundary value problem. The numerical method employed is described. It is an iterative technique that converges quadratically when it does converge. The three basic steps of the module are: (1) initialization, (2) iteration, and (3) culmination. For Volume 1 see N73-13199.

  14. Computational reacting gas dynamics

    NASA Technical Reports Server (NTRS)

    Lam, S. H.

    1993-01-01

    In the study of high speed flows at high altitudes, such as those encountered by re-entry spacecraft, the interaction of chemical reactions and other non-equilibrium processes in the flow field with the gas dynamics is crucial. Generally speaking, problems of this level of complexity must resort to numerical methods for solutions, using sophisticated computational fluid dynamics (CFD) codes. The difficulties introduced by reacting gas dynamics can be classified under three distinct headings: (1) the usually inadequate knowledge of the reaction rate coefficients in the non-equilibrium reaction system; (2) the vastly larger number of unknowns involved in the computation and the expected stiffness of the equations; and (3) the interpretation of the detailed reacting CFD numerical results. The research performed accepts the premise that reacting flows of practical interest in the future will in general be too complex or 'intractable' for traditional analytical developments. The power of modern computers must be exploited. However, instead of focusing solely on the construction of numerical solutions of full-model equations, attention is also directed to the 'derivation' of the simplified model from the given full-model. In other words, the present research aims to utilize computations to do tasks which have traditionally been done by skilled theoreticians: to reduce an originally complex full-model system into an approximate but otherwise equivalent simplified model system. The tacit assumption is that once the appropriate simplified model is derived, the interpretation of the detailed reacting CFD numerical results will become much easier. The approach of the research is called computational singular perturbation (CSP).
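    CSP automates, for large reaction systems, what singular-perturbation tools do by hand. As a hand-worked analogue of reducing a stiff full model to a simplified one, the quasi-steady-state approximation below eliminates a fast intermediate species; the rate constants and the A -> B -> C mechanism are illustrative only:

    ```python
    import numpy as np

    k1, k2 = 1.0, 1000.0                  # slow and fast rate constants (k2 >> k1)

    def full_rhs(t, y):                   # full model: A -> B -> C, B is fast
        a, b = y
        return np.array([-k1 * a, k1 * a - k2 * b])

    # integrate the stiff full model with tiny explicit Euler steps to t = 1
    y, h = np.array([1.0, 0.0]), 1e-5
    for _ in range(int(1.0 / h)):
        y = y + h * full_rhs(0.0, y)
    a_full, b_full = y

    # simplified model derived by quasi-steady state: set db/dt ~ 0
    a_slow = np.exp(-k1 * 1.0)            # reduced equation: da/dt = -k1 a
    b_qssa = k1 * a_slow / k2             # algebraic slaving: b = k1 a / k2

    print(a_full, a_slow)                 # slow species: full vs reduced model
    print(b_full, b_qssa)                 # fast species tracks its slaved value
    ```

    The reduced model replaces a stiff ODE with an algebraic relation, which is precisely the kind of simplification CSP seeks to derive computationally.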

  15. Systems configured to distribute a telephone call, communication systems, communication methods and methods of routing a telephone call to a service representative

    DOEpatents

    Harris, Scott H.; Johnson, Joel A.; Neiswanger, Jeffery R.; Twitchell, Kevin E.

    2004-03-09

    The present invention includes systems configured to distribute a telephone call, communication systems, communication methods and methods of routing a telephone call to a customer service representative. In one embodiment of the invention, a system configured to distribute a telephone call within a network includes a distributor adapted to connect with a telephone system, the distributor being configured to connect a telephone call using the telephone system and output the telephone call and associated data of the telephone call; and a plurality of customer service representative terminals connected with the distributor, a selected customer service representative terminal being configured to receive the telephone call and the associated data, the distributor and the selected customer service representative terminal being configured to synchronize application of the telephone call and associated data from the distributor to the selected customer service representative terminal.

  16. CFD simulation of pulsation noise in a small centrifugal compressor with volute and resonance tube

    NASA Astrophysics Data System (ADS)

    Wakaki, Daich; Sakuka, Yuta; Inokuchi, Yuzo; Ueda, Kosuke; Yamasaki, Nobuhiko; Yamagata, Akihiro

    2015-02-01

    The rotational frequency tone noise emitted from an automobile turbocharger is called the pulsation noise. The cause of the pulsation noise is not fully understood, but it is considered to arise from manufacturing errors known as mistuning. The effects of mistuning of the impeller blades on the noise field inside the flow passage of the compressor are numerically investigated. Here, the flow passage includes the volute and duct located downstream of the compressor impeller. Our numerical approach is found to successfully capture the wavelength of the pulsation noise at given rotational speeds by comparison with experiments. One significant finding is that the pulsation noise field in the duct is highly one-dimensional even though the flow fields are highly three-dimensional.

  17. Numerical simulations for tumor and cellular immune system interactions in lung cancer treatment

    NASA Astrophysics Data System (ADS)

    Kolev, M.; Nawrocki, S.; Zubik-Kowal, B.

    2013-06-01

    We investigate a new mathematical model that describes lung cancer regression in patients treated by chemotherapy and radiotherapy. The model is composed of nonlinear integro-differential equations derived from the so-called kinetic theory for active particles, and a new sink function is investigated according to clinical data from carcinoma planoepitheliale. The model equations are solved numerically, and the clinical data are used to estimate their unknown parameters. The results of the numerical experiments show a good correlation between the predicted and clinical data and illustrate that the mathematical model has the potential to describe lung cancer regression.

  18. A fictitious domain method for lithosphere-asthenosphere interaction: Application to periodic slab folding in the upper mantle

    NASA Astrophysics Data System (ADS)

    Cerpa, Nestor G.; Hassani, Riad; Gerbault, Muriel; Prévost, Jean-Herve

    2014-05-01

    We present a new approach for the lithosphere-asthenosphere interaction in subduction zones. The lithosphere is modeled as a Maxwell viscoelastic body sinking in the viscous asthenosphere. Both domains are discretized by the finite element method, and we use a staggered coupling method. The interaction is provided by a nonmatching interface method called the fictitious domain method. We describe a simplified formulation of this numerical technique and present 2-D examples and benchmarks. We aim to study the effect of mantle viscosity on the cyclicity of slab folding at the 660 km depth transition zone. Such cyclicity has previously been shown to occur depending on the kinematics of both the overriding and subducting plates, in analog and numerical models that approximate the 660 km depth transition zone as an impenetrable barrier. Here we applied far-field plate velocities corresponding to those of the South-American and Nazca plates at present. Our models show that the viscosity of the asthenosphere affects folding cyclicity and consequently the slab's dip as well as the stress regime of the overriding plate. Values of the mantle viscosity between 3 and 5 × 1020 Pa s are found to produce cycles similar to those reported for the Andes, which are of the order of 30-40 Myr (based on magmatism and sedimentological records). Moreover, we discuss the episodic development of horizontal subduction induced by cyclic folding and, hence, propose a new explanation for episodes of flat subduction under the South-American plate.

  19. A robust component mode synthesis method for stochastic damped vibroacoustics

    NASA Astrophysics Data System (ADS)

    Tran, Quang Hung; Ouisse, Morvan; Bouhaddi, Noureddine

    2010-01-01

    In order to reduce vibration or sound levels in industrial vibroacoustic problems, a low-cost and efficient approach is to introduce visco-elastic and poro-elastic materials either on the structure or on the cavity walls. Depending on the frequency range of interest, several numerical approaches can be used to estimate the behavior of the coupled problem. In the context of low frequency applications related to acoustic cavities with surrounding vibrating structures, the finite element method (FEM) is one of the most efficient techniques. Nevertheless, industrial problems lead to large FE models which are time-consuming in updating or optimization processes. A classical way to reduce calculation time is the component mode synthesis (CMS) method, whose classical formulation is not always efficient at predicting the dynamical behavior of structures including visco-elastic and/or poro-elastic patches. To ensure an efficient prediction, the fluid and structural bases used for the model reduction then need to be updated as a result of changes in a parametric optimization procedure. For complex models, this leads to prohibitive numerical costs in the optimization phase or for the management and propagation of uncertainties in the stochastic vibroacoustic problem. In this paper, the formulation of an alternative CMS method is proposed and compared to the classical (u, p) CMS method: the Ritz basis is completed with static residuals associated with visco-elastic and poro-elastic behaviors. This basis is also enriched by the static response to residual forces due to structural modifications, resulting in a so-called robust basis, also adapted to Monte Carlo simulations for uncertainty propagation using reduced models.
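    The idea of enriching a truncated Ritz basis with static residual vectors can be sketched on a toy structure. Here the enrichment vector is simply the static response to the applied load, a deliberate simplification of the paper's visco-/poro-elastic residuals, and all values are illustrative:

    ```python
    import numpy as np

    n = 20                                          # dofs of a fixed-free chain
    K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    K[-1, -1] = 1.0                                 # free end (mass matrix M = I)

    w2, phi = np.linalg.eigh(K)                     # modes, eigenvalues ascending
    f = np.zeros(n); f[-1] = 1.0                    # static tip load
    u_exact = np.linalg.solve(K, f)

    def reduced_static(T):
        """Galerkin static solution restricted to the subspace spanned by T."""
        q = np.linalg.solve(T.T @ K @ T, T.T @ f)
        return T @ q

    T_modes = phi[:, :3]                            # truncated modal (Ritz) basis
    T_enr = np.hstack([T_modes, u_exact[:, None]])  # basis + static residual vector

    err = lambda u: np.linalg.norm(u - u_exact) / np.linalg.norm(u_exact)
    print(err(reduced_static(T_modes)))             # modal truncation error remains
    print(err(reduced_static(T_enr)))               # enriched basis is exact here
    ```

    Because the enrichment vector spans the static response exactly, the enriched reduced model reproduces it to machine precision, which is the mechanism behind the "robust basis" of the paper.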

  20. Modeling and Evaluation of Geophysical Methods for Monitoring and Tracking CO2 Migration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniels, Jeff

    2012-11-30

    Geological sequestration has been proposed as a viable option for mitigating the vast amount of CO2 being released into the atmosphere daily. Test sites for CO2 injection have been appearing across the world to ascertain the feasibility of capturing and sequestering carbon dioxide. A major concern with full-scale implementation is monitoring and verifying the permanence of injected CO2. Geophysical methods, an exploration industry standard, are non-invasive imaging techniques that can be implemented to address that concern. Geophysical methods, seismic and electromagnetic, play a crucial role in monitoring the subsurface pre- and post-injection. Seismic techniques have been the most popular, but electromagnetic methods are gaining interest. The primary goal of this project was to develop a new geophysical tool, a software program called GphyzCO2, to investigate the implementation of geophysical monitoring for detecting injected CO2 at test sites. The GphyzCO2 software consists of interconnected programs that encompass well logging, seismic, and electromagnetic methods. The software enables users to design and execute 3D surface-to-surface (conventional surface seismic) and borehole-to-borehole (cross-hole seismic and electromagnetic methods) numerical modeling surveys. The generalized flow of the program begins with building a complex 3D subsurface geological model, assigning properties to the model that mimic a potential CO2 injection site, numerically forward modeling a geophysical survey, and analyzing the results. A site located in Warren County, Ohio was selected as the test site for the full implementation of GphyzCO2. Specific interest was placed on a potential reservoir target, the Mount Simon Sandstone, and cap rock, the Eau Claire Formation.
Analysis of the test site included well log data, physical property measurements (porosity), core sample resistivity measurements, calculated electrical permittivity values, seismic data collection, and seismic interpretation. The data were input into GphyzCO2 to demonstrate a full implementation of the software capabilities. Part of the implementation investigated the limits of using geophysical methods to monitor CO2 injection sites. The results show that cross-hole EM numerical surveys are limited to borehole separations under 100 meters. Those results were utilized in executing numerical EM surveys containing hypothetical CO2 injections. The outcome of the forward modeling shows that EM methods can detect the presence of CO2.

  1. A memory like a female Fur Seal: long-lasting recognition of pup's voice by mothers.

    PubMed

    Mathevon, Nicolas; Charrier, Isabelle; Aubin, Thierry

    2004-06-01

    In colonial mammals like fur seals, mutual vocal recognition between mothers and their pup is of primary importance for breeding success. Females alternate feeding sea-trips with suckling periods on land, and when coming back from the ocean, they have to vocally find their offspring among numerous similar-looking pups. Young fur seals emit a 'mother-attraction call' with individual characteristics. In this paper, we review the perceptual process of pup's call recognition by Subantarctic Fur Seal Arctocephalus tropicalis mothers. To identify their progeny, females rely on the frequency modulation pattern and spectral features of this call. As the acoustic characteristics of a pup's call change throughout the lactation period due to growth, mothers must thus continually update their memory of their pup's voice. Field experiments show that female Fur Seals are able to retain all the successive versions of their pup's call.

  2. Numerical modeling of a nonmonotonic separation hydrocyclone curve

    NASA Astrophysics Data System (ADS)

    Min'kov, L. L.; Dueck, J. H.

    2012-11-01

    In the context of the mechanics of interpenetrating continua, numerical modeling of separation of a polydisperse suspension in a hydrocyclone is carried out. The so-called "mixture model" valid for a low volume fraction of particles and low Stokes numbers is used for description of the suspension and particle motion. It is shown that account taken of the interaction between large and small particles can explain the nonmonotonic behavior of the separation curve.

  3. A quantum algorithm for obtaining the lowest eigenstate of a Hamiltonian assisted with an ancillary qubit system

    NASA Astrophysics Data System (ADS)

    Bang, Jeongho; Lee, Seung-Woo; Lee, Chang-Woo; Jeong, Hyunseok

    2015-01-01

    We propose a quantum algorithm to obtain the lowest eigenstate of any Hamiltonian simulated by a quantum computer. The proposed algorithm begins with an arbitrary initial state of the simulated system. A finite series of transforms is iteratively applied to the initial state, assisted with an ancillary qubit. The fraction of the lowest eigenstate in the initial state is then amplified up to 1. We prove in the theoretical analysis that our algorithm can faithfully work for any arbitrary Hamiltonian. Numerical analyses are also carried out. We first provide a numerical proof-of-principle demonstration with a simple Hamiltonian in order to compare our scheme with the so-called "Demon-like algorithmic cooling (DLAC)", recently proposed in Xu (Nat Photonics 8:113, 2014). The result shows good agreement with our theoretical analysis, exhibiting behavior comparable to the best 'cooling' with the DLAC method. We then consider a random Hamiltonian model for further analysis of our algorithm. By numerical simulations, we show that the total number of iterations is proportional to a function of Δ and ε, where Δ is the difference between the two lowest eigenvalues and ε is an error defined as the probability that the finally obtained system state is in an unexpected (i.e., not the lowest) eigenstate.

  4. Numerical Simulation of the Generation of Axisymmetric Mode Jet Screech Tones

    NASA Technical Reports Server (NTRS)

    Shen, Hao; Tam, Christopher K. W.

    1998-01-01

    An imperfectly expanded supersonic jet invariably radiates both broadband noise and discrete frequency sound called screech tones. Screech tones are known to be generated by a feedback loop driven by the large scale instability waves of the jet flow. Inside the jet plume is a quasi-periodic shock cell structure. The interaction of the instability waves and the shock cell structure, as the former propagates through the latter, is responsible for the generation of the tones. Presently, there are formulas that can predict the tone frequency fairly accurately. However, there is no known way to predict the screech tone intensity. In this work, the screech phenomenon of an axisymmetric jet at low supersonic Mach number is reproduced by numerical simulation. The computed mean velocity profiles and the shock cell pressure distribution of the jet are found to be in good agreement with experimental measurements. The same is true for the simulated screech frequency. Calculated screech tone intensity and directivity at selected jet Mach numbers are reported in this paper. The present results demonstrate that numerical simulation using computational aeroacoustics methods offers not only a reliable way to determine the screech tone intensity and directivity but also an opportunity to study the physics and detailed mechanisms of the phenomenon by an entirely new approach.

  5. Numerical simulation of the nonlinear response of composite plates under combined thermal and acoustic loading

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Moorthy, Jayashree

    1995-01-01

    A time-domain study of the random response of a laminated plate subjected to combined acoustic and thermal loads is carried out. The features of this problem also include given uniform static inplane forces. The formulation takes into consideration a possible initial imperfection in the flatness of the plate. High decibel sound pressure levels along with high thermal gradients across the thickness drive the plate response into nonlinear regimes. This calls for the analysis to use von Karman large deflection strain-displacement relationships. A finite element model that combines the von Karman strains with the first-order shear deformation plate theory is developed. The analytical model can accommodate an anisotropic composite laminate built up of uniformly thick layers of orthotropic, linearly elastic laminae. The global system of finite element equations is then reduced to a modal system of equations. Numerical simulation using a single-step algorithm in the time-domain is then carried out to solve for the modal coordinates. Nonlinear algebraic equations within each time-step are solved by the Newton-Raphson method. The random Gaussian filtered white noise load is generated using Monte Carlo simulation. The acoustic pressure distribution over the plate is capable of accounting for a grazing incidence wavefront. Numerical results are presented to study a variety of cases.
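    A Gaussian filtered white noise load of the kind described above can be generated, for example, by band-limiting white noise in the frequency domain; the band edges, sample count, and time step below are illustrative, not the paper's values:

    ```python
    import numpy as np

    def band_limited_noise(n, dt, f_lo, f_hi, seed=0):
        """Gaussian white noise band-passed to [f_lo, f_hi] Hz via the FFT."""
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(n)                 # white Gaussian samples
        X = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(n, dt)
        X[(freqs < f_lo) | (freqs > f_hi)] = 0.0   # ideal band-pass filter
        return np.fft.irfft(X, n)

    # one Monte Carlo realization of the random pressure time history
    p = band_limited_noise(n=8192, dt=1e-4, f_lo=50.0, f_hi=500.0)
    ```

    Each call with a different seed gives an independent realization, which is the basic ingredient of the Monte Carlo simulation mentioned in the abstract.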

  6. The challenges of numerically simulating analogue brittle thrust wedges

    NASA Astrophysics Data System (ADS)

    Buiter, Susanne; Ellis, Susan

    2017-04-01

    Fold-and-thrust belts and accretionary wedges form when sedimentary and crustal rocks are compressed into thrusts and folds in the foreland of an orogen or at a subduction trench. For over a century, analogue models have been used to investigate the deformation characteristics of such brittle wedges. These models predict wedge shapes that agree with analytical critical taper theory and internal deformation structures that closely resemble natural observations. In a series of comparison experiments for thrust wedges, called the GeoMod2004 (1,2) and GeoMod2008 (3,4) experiments, it was shown that different numerical solution methods successfully reproduce sandbox thrust wedges. However, the GeoMod2008 benchmark also pointed to the difficulties of representing frictional boundary conditions and sharp velocity discontinuities with continuum numerical methods, in addition to the well-known challenges of numerical plasticity. Here we show how details in the numerical implementation of boundary conditions can substantially impact numerical wedge deformation. We consider experiment 1 of the GeoMod2008 brittle thrust wedge benchmarks. This experiment examines a triangular thrust wedge in the stable field of critical taper theory that should remain stable, that is, without internal deformation, when sliding over a basal frictional surface. The thrust wedge is translated by lateral displacement of a rigid mobile wall. The corner between the mobile wall and the subsurface is a velocity discontinuity. Using our finite-element code SULEC, we show how different approaches to implementing boundary friction (boundary layer or contact elements) and the velocity discontinuity (various smoothing schemes) can cause the wedge to indeed translate in a stable manner or to undergo internal deformation (which constitutes a benchmark failure). 
We recommend that numerical studies of sandbox setups not only report the details of their implementation of boundary conditions, but also document the modelling attempts that failed. References 1. Buiter and the GeoMod2004 Team, 2006. The numerical sandbox: comparison of model results for a shortening and an extension experiment. Geol. Soc. Lond. Spec. Publ. 253, 29-64 2. Schreurs and the GeoMod2004 Team, 2006. Analogue benchmarks of shortening and extension experiments. Geol. Soc. Lond. Spec. Publ. 253, 1-27 3. Buiter, Schreurs and the GeoMod2008 Team, 2016. Benchmarking numerical models of brittle thrust wedges, J. Struct. Geol. 92, 140-177 4. Schreurs, Buiter and the GeoMod2008 Team, 2016. Benchmarking analogue models of brittle thrust wedges, J. Struct. Geol. 92, 116-13

  7. Getting the most out of RNA-seq data analysis.

    PubMed

    Khang, Tsung Fei; Lau, Ching Yee

    2015-01-01

    Background. A common research goal in transcriptome projects is to find genes that are differentially expressed in different phenotype classes. Biologists might wish to validate such gene candidates experimentally, or use them for downstream systems biology analysis. Producing a coherent differential gene expression analysis from RNA-seq count data requires an understanding of how numerous sources of variation, such as the replicate size, the hypothesized biological effect size, and the specific method for making differential expression calls, interact. We believe an explicit demonstration of such interactions in real RNA-seq data sets is of practical interest to biologists. Results. Using two large public RNA-seq data sets, one representing a strong and the other a mild biological effect size, we simulated different replicate size scenarios, and tested the performance of several commonly-used methods for calling differentially expressed genes in each of them. We found that, when biological effect size was mild, RNA-seq experiments should focus on experimental validation of differentially expressed gene candidates. Importantly, at least triplicates must be used, and the differentially expressed genes should be called using methods with high positive predictive value (PPV), such as NOISeq or GFOLD. In contrast, when biological effect size was strong, differentially expressed genes mined from unreplicated experiments using NOISeq, ASC and GFOLD had between 30 and 50% mean PPV, an increase of more than 30-fold compared to the cases of mild biological effect size. Among methods with good PPV performance, having triplicates or more substantially improved mean PPV to over 90% for GFOLD, 60% for DESeq2, 50% for NOISeq, and 30% for edgeR. At a replicate size of six, we found DESeq2 and edgeR to be reasonable methods for calling differentially expressed genes at systems level analysis, as their PPV and sensitivity trade-off were superior to the other methods'. Conclusion. 
When biological effect size is weak, systems level investigation is not possible using RNA-seq data, and no meaningful result can be obtained in unreplicated experiments. Nonetheless, NOISeq or GFOLD may yield limited numbers of gene candidates with good validation potential, when triplicates or more are available. When biological effect size is strong, NOISeq and GFOLD are effective tools for detecting differentially expressed genes in unreplicated RNA-seq experiments for qPCR validation. When triplicates or more are available, GFOLD is a sharp tool for identifying high confidence differentially expressed genes for targeted qPCR validation; for downstream systems level analysis, combined results from DESeq2 and edgeR are useful.
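    PPV and sensitivity, the two metrics traded off throughout this study, reduce to simple set arithmetic on the called and truly differentially expressed gene sets; the gene names below are placeholders:

    ```python
    def confusion_metrics(called, truly_de):
        """PPV (precision) and sensitivity (recall) of a set of DE calls
        against a ground-truth set of differentially expressed genes."""
        called, truly_de = set(called), set(truly_de)
        tp = len(called & truly_de)                        # true positives
        ppv = tp / len(called) if called else 0.0
        sens = tp / len(truly_de) if truly_de else 0.0
        return ppv, sens

    ppv, sens = confusion_metrics(called=["g1", "g2", "g3"],
                                  truly_de=["g1", "g3", "g4"])
    print(ppv, sens)   # 2 of 3 calls correct, 2 of 3 true DE genes found
    ```

    A high-PPV caller keeps the validation workload low because most of its calls survive qPCR, which is why the study recommends such methods when replicates are scarce.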

  8. Global Optimal Trajectory in Chaos and NP-Hardness

    NASA Astrophysics Data System (ADS)

    Latorre, Vittorio; Gao, David Yang

    This paper presents an unconventional theory and method for solving general nonlinear dynamical systems. Instead of the direct iterative methods, the discretized nonlinear system is first formulated as a global optimization problem via the least squares method. A newly developed canonical duality theory shows that this nonconvex minimization problem can be solved deterministically in polynomial time if a global optimality condition is satisfied. The so-called pseudo-chaos produced by linear iterative methods is mainly due to intrinsic numerical error accumulation. Otherwise, the global optimization problem could be NP-hard and the nonlinear system can be truly chaotic. A conjecture is proposed, which reveals the connection between chaos in nonlinear dynamics and NP-hardness in computer science. The methodology and the conjecture are verified by applications to the well-known logistic equation, a forced memristive circuit and the Lorenz system. Computational results show that the canonical duality theory can be used to identify chaotic systems and to obtain realistic global optimal solutions in nonlinear dynamical systems. The method and results presented in this paper should bring some new insights into nonlinear dynamical systems and NP-hardness in computational complexity theory.
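    The least-squares reformulation can be illustrated on the logistic equation cited above: the discretized dynamics become residuals, and the exact trajectory is the global minimizer, with objective value zero. This is only the formulation step, not the canonical duality machinery itself:

    ```python
    import numpy as np

    r = 3.8                                   # chaotic regime of the logistic map

    def residual(x, x0):
        """Residuals of the discretized dynamics x_{k+1} = r x_k (1 - x_k)."""
        x = np.concatenate([[x0], x])
        return x[1:] - r * x[:-1] * (1 - x[:-1])

    # direct iteration produces the exact trajectory; its least-squares
    # objective ||residual||^2 is therefore the global minimum, zero
    x0, n = 0.3, 50
    traj = [x0]
    for _ in range(n):
        traj.append(r * traj[-1] * (1 - traj[-1]))
    x = np.array(traj[1:])
    print(np.sum(residual(x, x0)**2))   # objective value at the global optimum
    ```

    Minimizing this nonconvex objective from an arbitrary starting vector, rather than iterating forward, is the problem the canonical duality theory addresses.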

  9. SSAW: A new sequence similarity analysis method based on the stationary discrete wavelet transform.

    PubMed

    Lin, Jie; Wei, Jing; Adjeroh, Donald; Jiang, Bing-Hua; Jiang, Yue

    2018-05-02

    Alignment-free sequence similarity analysis methods often lead to significant savings in computational time over alignment-based counterparts. A new alignment-free sequence similarity analysis method, called SSAW, is proposed. SSAW stands for Sequence Similarity Analysis using the Stationary Discrete Wavelet Transform (SDWT). It extracts k-mers from a sequence, then maps each k-mer to a complex number field. The resulting series of complex numbers is then transformed into feature vectors using the stationary discrete wavelet transform. After these steps, the original sequence is turned into a feature vector with numeric values, which can then be used for clustering and/or classification. Using two different types of applications, namely, clustering and classification, we compared SSAW against state-of-the-art alignment-free sequence analysis methods. SSAW demonstrates competitive or superior performance in terms of standard indicators, such as accuracy, F-score, precision, and recall. The running time was significantly better in most cases. These properties make SSAW a suitable method for sequence analysis, especially given the rapidly increasing volumes of sequence data required by most modern applications.
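    A rough sketch of the SSAW pipeline follows, with an invented nucleotide-to-complex mapping and a hand-rolled one-level stationary (undecimated) Haar transform standing in for the SDWT; none of these choices are taken from the paper:

    ```python
    import numpy as np

    CMAP = {"A": 1+1j, "C": -1+1j, "G": -1-1j, "T": 1-1j}   # illustrative mapping

    def ssaw_features(seq, k=3):
        """SSAW-style feature vector: k-mer signal -> one-level stationary
        Haar transform -> concatenated approximation/detail magnitudes."""
        sig = np.array([sum(CMAP[c] for c in seq[i:i+k])
                        for i in range(len(seq) - k + 1)])
        shifted = np.roll(sig, -1)             # circular shift (undecimated)
        approx = (sig + shifted) / 2.0         # Haar averages
        detail = (sig - shifted) / 2.0         # Haar differences
        return np.concatenate([np.abs(approx), np.abs(detail)])

    def cosine_sim(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    a = ssaw_features("ACGTACGTACGT")
    b = ssaw_features("ACGTACGTACGA")          # one mutation away from a
    c = ssaw_features("GGGGGGGGGGGG")          # very different composition
    print(cosine_sim(a, b), cosine_sim(a, c))  # near-identical pair scores higher
    ```

    The feature vectors can then be fed to any standard clustering or classification routine, which is the point of converting sequences to fixed-length numeric vectors.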

  10. Analysis of local warm forming of high strength steel using near infrared ray energy

    NASA Astrophysics Data System (ADS)

    Yang, W. H.; Lee, K.; Lee, E. H.; Yang, D. Y.

    2013-12-01

    The automotive industry has been pressed to satisfy more rigorous fuel efficiency requirements to promote energy conservation, safety features and cost containment. To satisfy this need, high strength steel has been developed and used for many different vehicle parts. The use of high strength steels, however, requires careful analysis and creativity in order to accommodate its relatively high springback behavior. An innovative method, called local warm forming with near infrared ray, has been developed to help promote the use of high strength steels in sheet metal forming. For this method, local regions of the work piece are heated using infrared ray energy, thereby promoting the reduction of springback behavior. In this research, a V-bend test is conducted with DP980. After springback, the bend angles for specimens without local heating are compared to those with local heating. Numerical analysis has been performed using the commercial program, DEFORM-2D. This analysis is carried out with the purpose of understanding how changes to the local stress distribution will affect the springback during the unloading process. The results between experimental and computational approaches are evaluated to assure the accuracy of the simulation. Subsequent numerical simulation studies are performed to explore best practices with respect to thermal boundary conditions, timing, and applicability to the production environment.

  11. MuSCoWERT: multi-scale consistence of weighted edge Radon transform for horizon detection in maritime images.

    PubMed

    Prasad, Dilip K; Rajan, Deepu; Rachmawati, Lily; Rajabally, Eshan; Quek, Chai

    2016-12-01

    This paper addresses the problem of horizon detection, a fundamental step in numerous object detection algorithms, in a maritime environment. The maritime environment is characterized by the absence of fixed features, the presence of numerous linear features in dynamically changing objects and background, and constantly varying illumination, rendering the typically simple problem of detecting the horizon a challenging one. We present a novel method called multi-scale consistence of weighted edge Radon transform, abbreviated as MuSCoWERT. It detects the long linear features consistent over multiple scales using multi-scale median filtering of the image, followed by a Radon transform on a weighted edge map and computation of the histogram of the detected linear features. We show that MuSCoWERT has excellent performance, better than seven other contemporary methods, for 84 challenging maritime videos, containing over 33,000 frames, captured using visible range and near-infrared range sensors mounted onboard, onshore, or on floating buoys. It has a median error of about 2 pixels (less than 0.2%) from the center of the actual horizon and a median angular error of less than 0.4 deg. We also share a new challenging horizon detection dataset of 65 visible-range and infrared videos for onshore and onboard ship camera placements.
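    Scoring near-horizontal lines by the edge weight they accumulate, in effect a brute-force weighted Radon transform restricted to candidate horizons, can be sketched as follows. The multi-scale median filtering stage of MuSCoWERT is omitted and all parameters are illustrative:

    ```python
    import numpy as np

    def detect_horizon(edge_map, n_angles=90, n_offsets=60):
        """Score candidate horizon lines by summing edge weights along them
        (a brute-force discrete weighted Radon transform)."""
        h, w = edge_map.shape
        xs = np.arange(w)
        best, best_line = -1.0, None
        for theta in np.linspace(-0.2, 0.2, n_angles):   # near-horizontal angles
            for y0 in np.linspace(0, h - 1, n_offsets):  # vertical offsets
                ys = np.round(y0 + np.tan(theta) * xs).astype(int)
                ok = (ys >= 0) & (ys < h)
                score = edge_map[ys[ok], xs[ok]].sum()
                if score > best:
                    best, best_line = score, (y0, theta)
        return best_line

    # synthetic sea/sky image: a strong horizontal edge at row 40
    img = np.zeros((80, 120))
    img[40, :] = 1.0
    y0, theta = detect_horizon(img)
    print(y0, theta)   # recovers a line near row 40 with near-zero tilt
    ```

    On real imagery the edge map would be weighted (e.g. by gradient magnitude) and the scores would be compared across scales, which is where the "multi-scale consistence" of the method comes in.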

  12. An Impulse Based Substructuring approach for impact analysis and load case simulations

    NASA Astrophysics Data System (ADS)

    Rixen, Daniel J.; van der Valk, Paul L. C.

    2013-12-01

    In the present paper we outline the basic theory of assembling substructures for which the dynamics are described by Impulse Response Functions. The assembly procedure computes the time response of a system by evaluating, per substructure, the convolution product between the Impulse Response Functions and the applied forces, including the interface forces that are computed to satisfy interface compatibility. We call this approach the Impulse Based Substructuring method, since it transposes the Frequency Based Substructuring approach to the time domain. In the Impulse Based Substructuring technique the Impulse Response Functions of the substructures can be gathered either from experimental hammer-impact tests or from time-integration of numerical submodels. In this paper the implementation of the method is outlined for the case when the impulse responses of the substructures are computed numerically. A simple bar example is shown in order to illustrate the concept. Impulse Based Substructuring allows fast evaluation of the impact response of a structure when the impulse response of its components is known. It can thus be used to efficiently optimize designs of consumer products by including impact behavior at an early stage of the design, but also for performing substructured simulations of complex structures such as offshore wind turbines.
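    The core convolution step can be sketched for a single damped substructure: the time response is the convolution product of the substructure's impulse response with the applied force history. The parameters below are illustrative and interface coupling is omitted:

    ```python
    import numpy as np

    # single-dof substructure: m x'' + c x' + k x = f(t)
    m, c, k = 1.0, 4.0, 100.0
    wn = np.sqrt(k / m)                      # undamped natural frequency
    zeta = c / (2.0 * m * wn)                # damping ratio
    wd = wn * np.sqrt(1.0 - zeta**2)         # damped natural frequency

    dt, n = 1e-3, 4000
    t = np.arange(n) * dt
    h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)  # unit-impulse response

    f = np.where(t < 0.05, 50.0, 0.0)        # short rectangular impact pulse
    u = dt * np.convolve(h, f)[:n]           # convolution product -> displacement

    print(u.max())                           # peak displacement due to the impact
    ```

    In the full method, interface forces enter the same convolution so that compatibility between substructures is enforced at each time step; here only the free response of one component is shown.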

  13. Numerical study of a permanent magnet linear generator for ship motion energy conversion

    NASA Astrophysics Data System (ADS)

    Mahmuddin, Faisal; Gunadin, Indar Chaerah; Akhir, Anshar Yaumil

    2017-02-01

    In order to harvest the kinetic energy of a ship moving in waves, a permanent magnet linear generator is designed and simulated in the present study. For the sake of simplicity, only heave motion is considered in this preliminary study. The dimensions of the generator are based on the dimensions of the ship. Moreover, in order to obtain an optimal design of the rotor and stator, the average vertical displacement in heave is needed. For this purpose, a numerical method called the New Strip Method (NSM) is employed to compute the motions of the ship. With NSM, the ship hull is divided into several strips and the hydrodynamic forces are computed on each strip. Because the ship is assumed to be slender, the total forces are obtained by integrating the force over the strips. Once the motions are determined, the generator is designed and simulated. Its performance in terms of force, magnetic flux, losses, current, and induced voltage, the primary parameters of linear generator performance, is evaluated using the finite element analysis software Maxwell. From the study, a linear generator for converting heave motion is designed and the power it produces is determined.
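    In strip-theory fashion, a sectional force is evaluated on each strip and integrated along the hull. A minimal sketch for the hydrostatic restoring force in heave follows; the hull shape (an elliptical waterline) and all numbers are invented for illustration and are not from NSM or this ship:

    ```python
    import numpy as np

    L = 50.0                                  # ship length, m (illustrative)
    n = 100                                   # number of strips
    x = np.linspace(-L/2, L/2, n)             # strip positions along the hull
    dx = L / (n - 1)

    rho, g = 1025.0, 9.81                     # seawater density, gravity
    # waterline beam of each strip: elliptical distribution, zero at the ends
    beam = 8.0 * np.sqrt(np.clip(1 - (2*x/L)**2, 0.0, None))
    z = 0.5                                   # uniform heave displacement, m

    # sectional restoring force per unit length: f(x) = -rho g B(x) z
    f_sec = -rho * g * beam * z
    F_heave = f_sec.sum() * dx                # integrate over the strips, N
    print(F_heave)                            # total hydrostatic restoring force
    ```

    The full strip method adds sectional added-mass, damping, and wave-excitation terms per strip before integrating, but the integrate-over-strips structure is the same.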

  14. Evaluation of Neutron-induced Cross Sections and their Related Covariances with Physical Constraints

    NASA Astrophysics Data System (ADS)

    De Saint Jean, C.; Archier, P.; Privas, E.; Noguère, G.; Habert, B.; Tamagno, P.

    2018-02-01

    Nuclear data, along with numerical methods and the associated calculation schemes, continue to play a key role in reactor design, reactor core operating parameters calculations, fuel cycle management and criticality safety calculations. Due to the intensive use of Monte-Carlo calculations reducing numerical biases, the final accuracy of neutronic calculations increasingly depends on the quality of nuclear data used. This paper gives a broad picture of all ingredients treated by nuclear data evaluators during their analyses. After giving an introduction to nuclear data evaluation, we present implications of using the Bayesian inference to obtain evaluated cross sections and related uncertainties. In particular, a focus is made on systematic uncertainties appearing in the analysis of differential measurements as well as advantages and drawbacks one may encounter by analyzing integral experiments. The evaluation work is in general done independently in the resonance and in the continuum energy ranges giving rise to inconsistencies in evaluated files. For future evaluations on the whole energy range, we call attention to two innovative methods used to analyze several nuclear reaction models and impose constraints. Finally, we discuss suggestions for possible improvements in the evaluation process to master the quantification of uncertainties. These are associated with experiments (microscopic and integral), nuclear reaction theories and the Bayesian inference.
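The Bayesian inference step discussed above is, in its linear form, often carried out as a generalized-least-squares update of the model parameters and their covariance matrix. The sketch below is a generic GLS update, not the evaluators' code; the two-parameter system and its covariances are invented for illustration.

```python
import numpy as np

def gls_update(x, M, G, y, V):
    """One generalized-least-squares (linear Bayesian) update of parameters x
    with prior covariance M, given measurements y with covariance V and
    sensitivity matrix G (the linearized model y ~ G x)."""
    S = G @ M @ G.T + V                   # covariance of the predicted data
    K = M @ G.T @ np.linalg.inv(S)        # gain matrix
    x_post = x + K @ (y - G @ x)          # updated parameters
    M_post = M - K @ G @ M                # updated (reduced) covariance
    return x_post, M_post

# toy example: two parameters, one measurement of their sum
x = np.array([1.0, 2.0])
M = np.diag([0.1, 0.1])
G = np.array([[1.0, 1.0]])
y = np.array([3.4])
V = np.array([[0.05]])
x_post, M_post = gls_update(x, M, G, y, V)
```

Note how the posterior covariance acquires off-diagonal terms: the measurement correlates the two parameters, which is exactly the mechanism by which evaluated cross sections acquire covariances.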

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, W. H., E-mail: whyang21@hyundai.com; Lee, K., E-mail: klee@deform.co.kr; Lee, E. H., E-mail: mtgs2@kaist.ac.kr, E-mail: dyyang@kaist.ac.kr

The automotive industry has been pressed to satisfy more rigorous fuel efficiency requirements to promote energy conservation, safety features and cost containment. To satisfy this need, high strength steel has been developed and used for many different vehicle parts. The use of high strength steels, however, requires careful analysis and creativity in order to accommodate its relatively high springback behavior. An innovative method, called local warm forming with near infrared ray, has been developed to help promote the use of high strength steels in sheet metal forming. For this method, local regions of the work piece are heated using infrared ray energy, thereby promoting the reduction of springback behavior. In this research, a V-bend test is conducted with DP980. After springback, the bend angles for specimens without local heating are compared to those with local heating. Numerical analysis has been performed using the commercial program, DEFORM-2D. This analysis is carried out with the purpose of understanding how changes to the local stress distribution will affect the springback during the unloading process. The results between experimental and computational approaches are evaluated to assure the accuracy of the simulation. Subsequent numerical simulation studies are performed to explore best practices with respect to thermal boundary conditions, timing, and applicability to the production environment.

  16. SpF: Enabling Petascale Performance for Pseudospectral Dynamo Models

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Clune, T.; Vriesema, J.; Gutmann, G.

    2013-12-01

    Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical 'kernels' that can be performed entirely in-processor. The granularity of domain-decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. 
In this presentation, we will describe the basic architecture of SpF as well as preliminary performance data and experience with adapting legacy dynamo codes. We will conclude with a discussion of planned extensions to SpF that will provide pseudospectral applications with additional flexibility with regard to time integration, linear solvers, and discretization in the radial direction.

  17. Slowing techniques for loading a magneto-optical trap of CaF molecules

    NASA Astrophysics Data System (ADS)

    Truppe, Stefan; Fitch, Noah; Williams, Hannah; Hambach, Moritz; Sauer, Ben; Hinds, Ed; Tarbutt, Mike

    2016-05-01

Ultracold molecules in a magneto-optical trap (MOT) are useful for testing fundamental physics and studying strongly-interacting quantum systems. With experiments starting from a relatively fast (50-200 m/s) buffer-gas beam, a primary concern is decelerating molecules to below the MOT capture velocity, typically 10 m/s. Direct laser cooling, where the molecules are slowed via momentum transfer from a chirped counter-propagating narrowband laser, is a natural choice. However, chirping the cooling and repump lasers requires precise control of multiple laser frequencies simultaneously. Another approach, called ``white-light slowing'', uses a broadband laser such that all fast molecules in the beam are decelerated. Because numerous velocities are addressed at once, no chirping is needed. Unfortunately, both techniques have significant losses as molecules are transversely heated during the optical cycling. Ideally, the slowing method would provide simultaneous deceleration and transverse guiding. A newly developed technique, called Zeeman-Sisyphus deceleration, is potentially capable of both. Using permanent magnets and optical pumping, the number of scattered photons is reduced, lessening transverse heating and relaxing the repump requirements. Here we compare all three options for CaF.

  18. The Reverse Time Migration technique coupled with Interior Penalty Discontinuous Galerkin method.

    NASA Astrophysics Data System (ADS)

    Baldassari, C.; Barucq, H.; Calandra, H.; Denel, B.; Diaz, J.

    2009-04-01

Seismic imaging is based on the seismic reflection method, which produces an image of the subsurface from reflected-wave recordings by using a tomography process, and seismic migration is the industrial standard for improving the quality of the images. The migration process consists in replacing the recorded wavefields at their actual place by using various mathematical and numerical methods, but each of them follows the same schedule, according to the pioneering idea of Claerbout: numerical propagation of the source function (propagation) and of the recorded wavefields (retropropagation) and, next, construction of the image by applying an imaging condition. The retropropagation step can be realized by accounting for the time reversibility of the wave equation, and the resulting algorithm is commonly called Reverse Time Migration (RTM). To be efficient, especially in three-dimensional domains, RTM requires the solution of the full wave equation by fast numerical methods. Finite element methods are considered the best discretization methods for solving the wave equation, even if they lead to the solution of huge systems with several million degrees of freedom, since they use meshes adapted to the domain topography and the boundary conditions are naturally taken into account in the variational formulation. Among the different finite element families, the spectral element method (SEM) is very interesting because it leads to a diagonal mass matrix, which dramatically reduces the cost of the numerical computation. Moreover, this method is very accurate since it allows the use of high-order finite elements. However, SEM uses meshes of the domain made of quadrangles in 2D or hexahedra in 3D, which are difficult to compute and not always suitable for complex topographies. Recently, Grote et al. applied the IPDG (Interior Penalty Discontinuous Galerkin) method to the wave equation.
This approach is very interesting since it relies on meshes with triangles in 2D or tetrahedra in 3D, which makes it possible to handle the topography of the domain very accurately. Moreover, the fact that the resulting mass matrix is block-diagonal and that IPDG is compatible with the use of high-order finite elements suggests that its performance is similar to that of the SEM. In this presentation, we study the performance of IPDG through numerical comparisons with the SEM in 1D and 2D. We compare in particular the accuracy of the solutions obtained by the two methods with various orders of approximation and the computational burden of the algorithms. The conclusion is that IPDG and SEM perform similarly when considering low-order finite elements, while IPDG outperforms SEM in the case of high-order finite elements. Next we illustrate the impact of IPDG on the RTM, first through a simple configuration test (two-layered medium), then through realistic industrial applications in 2D.
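The imaging condition mentioned above is, in its simplest form, a zero-lag cross-correlation of the forward-propagated source wavefield with the retropropagated receiver wavefield. A minimal sketch with toy wavefields (no actual wave-equation solver is included; the arrival times and grid sizes are invented):

```python
import numpy as np

def rtm_image(src_wavefield, rec_wavefield, dt):
    """Zero-lag cross-correlation imaging condition:
    I(x) = sum_t S(x, t) * R(x, t) * dt,
    where S is the forward-propagated source wavefield and R the
    time-reversed (retropropagated) receiver wavefield."""
    return np.sum(src_wavefield * rec_wavefield, axis=0) * dt

# toy (nt, nx) wavefields with a coincident arrival at one grid point
nt, nx, dt = 200, 50, 1e-3
t = np.arange(nt)[:, None] * dt
S = np.exp(-((t - 0.1) / 0.01) ** 2) * np.ones((1, nx))   # source field everywhere
R = np.zeros((nt, nx))
R[:, 25] = np.exp(-((t[:, 0] - 0.1) / 0.01) ** 2)         # "reflector" at index 25
img = rtm_image(S, R, dt)   # peaks where the two fields coincide in time
```

The image is nonzero only where source and receiver wavefields overlap in time, which is the essence of Claerbout's imaging principle.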

  19. A longitudinal multilevel CFA-MTMM model for interchangeable and structurally different methods

    PubMed Central

    Koch, Tobias; Schultze, Martin; Eid, Michael; Geiser, Christian

    2014-01-01

One of the key interests in the social sciences is the investigation of change and stability of a given attribute. Although numerous models have been proposed in the past for analyzing longitudinal data, including multilevel and/or latent variable modeling approaches, only a few modeling approaches have been developed for studying construct validity in longitudinal multitrait-multimethod (MTMM) measurement designs. The aim of the present study was to extend the spectrum of current longitudinal modeling approaches for MTMM analysis. Specifically, a new longitudinal multilevel CFA-MTMM model for measurement designs with structurally different and interchangeable methods (called the Latent-State-Combination-Of-Methods model, LS-COM) is presented. Interchangeable methods are methods that are randomly sampled from a set of equivalent methods (e.g., multiple student ratings of teaching quality), whereas structurally different methods are methods that cannot be easily replaced by one another (e.g., teacher ratings, self-ratings, principal ratings). Results of a simulation study indicate that the parameters and standard errors in the LS-COM model are well recovered even in conditions with only five observations per estimated model parameter. The advantages and limitations of the LS-COM model relative to other longitudinal MTMM modeling approaches are discussed. PMID:24860515

  20. Inspiral, merger, and ringdown of unequal mass black hole binaries: A multipolar analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berti, Emanuele; Cardoso, Vitor; Gonzalez, Jose A.

We study the inspiral, merger, and ringdown of unequal mass black hole binaries by analyzing a catalogue of numerical simulations for seven different values of the mass ratio (from q = M2/M1 = 1 to q = 4). We compare numerical and post-Newtonian results by projecting the waveforms onto spin-weighted spherical harmonics, characterized by angular indices (l,m). We find that the post-Newtonian equations predict remarkably well the relation between the wave amplitude and the orbital frequency for each (l,m), and that the convergence of the post-Newtonian series to the numerical results is nonmonotonic. To leading order, the total energy emitted in the merger phase scales like eta^2 and the spin of the final black hole scales like eta, where eta = q/(1+q)^2 is the symmetric mass ratio. We study the multipolar distribution of the radiation, finding that odd-l multipoles are suppressed in the equal mass limit. Higher multipoles carry a larger fraction of the total energy as q increases. We introduce and compare three different definitions for the ringdown starting time. Applying linear-estimation methods (the so-called Prony methods) to the ringdown phase, we find resolution-dependent time variations in the fitted parameters of the final black hole. By cross correlating information from different multipoles, we show that ringdown fits can be used to obtain precise estimates of the mass and spin of the final black hole, which are in remarkable agreement with energy and angular momentum balance calculations.
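The symmetric mass ratio used in the scalings above is a one-line function; a quick sketch makes the quoted limits concrete:

```python
def symmetric_mass_ratio(q):
    """eta = q / (1 + q)^2 for mass ratio q = M2/M1; the abstract's
    leading-order scalings are E_merger ~ eta^2 and final spin ~ eta."""
    return q / (1.0 + q) ** 2

eta_equal = symmetric_mass_ratio(1.0)   # equal-mass maximum, eta = 0.25
eta_q4 = symmetric_mass_ratio(4.0)      # eta = 0.16 for the q = 4 case
```

Since eta falls from 0.25 at q = 1 to 0.16 at q = 4, the eta^2 scaling implies the merger of the q = 4 binary radiates roughly (0.16/0.25)^2, i.e. about 40%, of the equal-mass energy at leading order.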

  1. On concentrated solute sources in faulted aquifers

    NASA Astrophysics Data System (ADS)

    Robinson, N. I.; Werner, A. D.

    2017-06-01

Finite aperture faults and fractures within aquifers (collectively called 'faults' hereafter) theoretically enable flowing water to move through them but with refractive displacement, both on entry and exit. When a 2D or 3D point source of solute concentration is located upstream of the fault, the plume emanating from the source, relative to one in a fault-free aquifer, is affected by the fault both before it and after it. Previous attempts to analyze this situation using numerical methods faced challenges in overcoming the computational constraints that accompany the requisite fine mesh resolutions. To address these, an analytical solution of this problem is developed and interrogated using statistical evaluation of solute distributions. The method of solution is based on novel spatial integral representations of the source with axes rotated from the direction of uniform water flow and aligning with fault faces and normals. Numerical exemplification is given for the case of a 2D steady state source, using various parameter combinations. Statistical attributes of solute plumes show the relative impact of parameters, the most important being fault rotation, aperture and conductivity ratio. New general observations of fault-affected solute plumes are offered, including: (a) the plume's mode (i.e. peak concentration) on the downstream face of the fault is less displaced than the refracted groundwater flowline, but at some distance downstream of the fault, these realign; (b) porosities have no influence in steady state calculations; (c) previous numerical modeling results of barrier faults show significant boundary effects. The current solution adds to available benchmark problems involving fractures, faults and layered aquifers, in which grid resolution effects are often barriers to accurate simulation.
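The refractive displacement of flowlines at a fault face follows the classical tangent law for streamline refraction at a hydraulic-conductivity contrast. A minimal sketch (the angles and conductivity ratio below are illustrative values, not parameters from the study):

```python
import numpy as np

def refracted_angle(theta1_deg, K1, K2):
    """Tangent law for streamline refraction at a conductivity contrast:
    tan(theta1) / tan(theta2) = K1 / K2,
    with angles measured from the normal to the interface (fault face)."""
    theta1 = np.radians(theta1_deg)
    return np.degrees(np.arctan(np.tan(theta1) * K2 / K1))

# a flowline hitting a high-conductivity fault (K2/K1 = 10) at 45 degrees
theta2 = refracted_angle(45.0, 1.0, 10.0)  # bends strongly toward the fault plane
```

For a conductive fault the flowline swings toward the fault plane on entry (here to about 84 degrees from the normal) and refracts back on exit, which is the displacement mechanism the abstract analyzes.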

  2. Dispersion analysis of leaky guided waves in fluid-loaded waveguides of generic shape.

    PubMed

    Mazzotti, M; Marzani, A; Bartoli, I

    2014-01-01

A fully coupled 2.5D formulation is proposed to compute the dispersive parameters of waveguides with arbitrary cross-section immersed in infinite inviscid fluids. The discretization of the waveguide is performed by means of a Semi-Analytical Finite Element (SAFE) approach, whereas a 2.5D BEM formulation is used to model the impedance of the surrounding infinite fluid. The kernels of the boundary integrals contain the fundamental solutions of the space Fourier-transformed Helmholtz equation, which governs the wave propagation process in the fluid domain. Numerical difficulties related to the evaluation of singular integrals are avoided by using a regularization procedure. To improve the numerical stability of the discretized boundary integral equations for the external Helmholtz problem, the so-called CHIEF method is used. The discrete wave equation results in a nonlinear eigenvalue problem in the complex axial wavenumbers that is solved at the frequencies of interest by means of a contour integral algorithm. In order to separate physical from non-physical solutions and to fulfill the requirement of holomorphicity of the dynamic stiffness matrix inside the complex wavenumber contour, the phase of the radial bulk wavenumber is uniquely defined by enforcing the Snell-Descartes law at the fluid-waveguide interface. Three numerical applications are presented. The computed dispersion curves for a circular bar immersed in oil are in agreement with those extracted using the Global Matrix Method. Novel results are presented for viscoelastic steel bars of square and L-shaped cross-section immersed in water. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Profile fitting in crowded astronomical images

    NASA Astrophysics Data System (ADS)

    Manish, Raja

Around 18,000 known objects currently populate the near-Earth space. These constitute active space assets as well as space debris objects. The tracking and cataloging of such objects relies on observations, most of which are ground based. Also, because of the great distance to the objects, only non-resolved object images can be obtained from the observations. Optical systems consist of telescope optics and a detector; nowadays, usually CCD detectors are used. The information that is sought to be extracted from the frames is the individual object's astrometric position. In order to do so, the center of the object's image on the CCD frame has to be found. However, the observation frames that are read out of the detector are subject to noise. There are three different sources of noise: celestial background sources, the object signal itself and the sensor noise. The noise statistics are usually modeled as Gaussian or Poisson distributed, or as their combined distribution. In order to achieve near real time processing, computationally fast and reliable methods for the so-called centroiding are desired; analytical methods are preferred over numerical ones of comparable accuracy. In this work, an analytic method for centroiding is investigated and compared to numerical methods. Though the work focuses mainly on astronomical images, the same principle could be applied to non-celestial images containing similar data. The method is based on minimizing the weighted least squares (LS) error between the observed data and the theoretical model of point sources in a novel yet simple way. Synthetic image frames have been simulated. The newly developed method is tested in both crowded and non-crowded fields, where the former needs additional image handling procedures to separate closely packed objects. Subsequent analysis on real celestial images corroborates the effectiveness of the approach.
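As a baseline against which least-squares PSF fitting like the above is usually compared, the simplest analytic centroid estimate is the first moment of the intensity distribution. The sketch below applies it to a synthetic Gaussian point-source frame; it is not the author's method, only the standard moment-based reference (all parameter values are invented).

```python
import numpy as np

def weighted_centroid(frame):
    """First-moment (intensity-weighted) centroid of an image frame;
    returns (x, y) in pixel coordinates."""
    ys, xs = np.indices(frame.shape)
    total = frame.sum()
    return (xs * frame).sum() / total, (ys * frame).sum() / total

# synthetic noise-free Gaussian point-source image (32 x 32 pixels)
ys, xs = np.indices((32, 32))
x0, y0, sigma = 14.3, 17.8, 2.0
frame = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
cx, cy = weighted_centroid(frame)   # recovers (x0, y0) to sub-pixel accuracy
```

On noisy frames this estimator is biased by the background, which is one reason weighted least-squares model fitting, as in the work above, is preferred when accuracy matters.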

  4. Computing the Sensitivity Kernels for 2.5-D Seismic Waveform Inversion in Heterogeneous, Anisotropic Media

    NASA Astrophysics Data System (ADS)

    Zhou, Bing; Greenhalgh, S. A.

    2011-10-01

    2.5-D modeling and inversion techniques are much closer to reality than the simple and traditional 2-D seismic wave modeling and inversion. The sensitivity kernels required in full waveform seismic tomographic inversion are the Fréchet derivatives of the displacement vector with respect to the independent anisotropic model parameters of the subsurface. They give the sensitivity of the seismograms to changes in the model parameters. This paper applies two methods, called `the perturbation method' and `the matrix method', to derive the sensitivity kernels for 2.5-D seismic waveform inversion. We show that the two methods yield the same explicit expressions for the Fréchet derivatives using a constant-block model parameterization, and are available for both the line-source (2-D) and the point-source (2.5-D) cases. The method involves two Green's function vectors and their gradients, as well as the derivatives of the elastic modulus tensor with respect to the independent model parameters. The two Green's function vectors are the responses of the displacement vector to the two directed unit vectors located at the source and geophone positions, respectively; they can be generally obtained by numerical methods. The gradients of the Green's function vectors may be approximated in the same manner as the differential computations in the forward modeling. The derivatives of the elastic modulus tensor with respect to the independent model parameters can be obtained analytically, dependent on the class of medium anisotropy. Explicit expressions are given for two special cases—isotropic and tilted transversely isotropic (TTI) media. Numerical examples are given for the latter case, which involves five independent elastic moduli (or Thomsen parameters) plus one angle defining the symmetry axis.

  5. Theoretical model for Sub-Doppler Cooling with EIT System

    NASA Astrophysics Data System (ADS)

    He, Peiru; Tengdin, Phoebe; Anderson, Dana; Rey, Ana Maria; Holland, Murray

    2016-05-01

We propose a sub-Doppler cooling mechanism that takes advantage of the unique spectral features and extreme dispersion generated by the so-called Electromagnetically Induced Transparency (EIT) effect, a destructive quantum interference phenomenon experienced by atoms with Lambda-shaped energy levels when illuminated by two light fields with appropriate frequencies. By detuning the probe lasers slightly from the ``dark resonance'', we observe that atoms can be significantly cooled by the strong viscous force within the transparency window, while being only slightly heated by the diffusion caused by the small absorption near resonance. In contrast to polarization gradient cooling or EIT sideband cooling, no external magnetic field or external confining potential is required. Using a semi-classical method, analytical expressions, and numerical simulations, we demonstrate that the proposed EIT cooling method can lead to temperatures well below the Doppler limit. This work is supported by NSF and NIST.

  6. Application of AWE for RCS Frequency Response Calculations Using Method of Moments

    NASA Technical Reports Server (NTRS)

    Reddy, C. J.; Deshpande, M. D.

    1996-01-01

    An implementation of the Asymptotic Waveform Evaluation (AWE) technique is presented for obtaining the frequency response of the Radar Cross Section (RCS) of arbitrarily shaped, three-dimensional perfect electric conductor (PEC) bodies. An Electric Field Integral Equation (EFIE) is solved using the Method of Moments (MoM) to compute the RCS. The electric current, thus obtained, is expanded in a Taylor series around the frequency of interest. The coefficients of the Taylor series (called 'moments') are obtained using the frequency derivatives of the EFIE. Using the moments, the electric current on the PEC body is obtained over a frequency band. Using the electric current at different frequencies, RCS of the PEC body is obtained over a wide frequency band. Numerical results for a square plate, a cube, and a sphere are presented over a bandwidth. A good agreement between AWE and the exact solution over the bandwidth is observed.
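The heart of AWE, evaluating a moment (Taylor) expansion of the response over a frequency band, can be sketched generically. The scalar test response 1/(1 + jf) below stands in for the MoM current vector and is chosen only because its moments at f0 = 0 are known in closed form ((-j)^n); it is not the EFIE system of the paper.

```python
import numpy as np

def awe_response(moments, f0, f):
    """Evaluate a moment (Taylor) expansion of a frequency response about f0:
    J(f) ~ sum_n m_n * (f - f0)**n,
    where m_n is the n-th derivative of J at f0 divided by n!."""
    df = f - f0
    return sum(m * df ** n for n, m in enumerate(moments))

# scalar stand-in for the MoM current: J(f) = 1 / (1 + j f)
moments = [(-1j) ** n for n in range(5)]   # closed-form moments at f0 = 0
f = 0.2
approx = awe_response(moments, 0.0, f)
exact = 1.0 / (1.0 + 1j * f)               # analytic value for comparison
```

In the actual method the moments are vectors obtained from frequency derivatives of the EFIE, and a single factorization of the MoM matrix serves the whole band; rational (Padé) resummation of the same moments is often used to widen the region of convergence.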

  7. Prospects of application of additive technologies for increasing the efficiency of impeller machines

    NASA Astrophysics Data System (ADS)

    Belova, O. V.; Borisov, Yu. A.

    2017-08-01

An impeller machine is a device in which the flow path supplies (or extracts) mechanical energy to the flow of a working fluid passing through the machine. To increase the efficiency of impeller machines, it is necessary to use modern design technologies, namely numerical methods for conducting research in the field of gas dynamics, as well as additive manufacturing (AM) for producing both prototypes and production models. AM technologies are rightly called revolutionary because they offer a unique possibility for manufacturing products of near-perfect form, both light and durable. The designers face the challenge of developing a new design methodology, since AM allows the use of the concept of "Complexity For Free". The "Complexity For Free" concept is based on: complexity of form; hierarchical complexity; complexity of material; functional complexity. A new method of designing technical items according to a functional principle is also investigated.

  8. Elements of an algorithm for optimizing a parameter-structural neural network

    NASA Astrophysics Data System (ADS)

    Mrówczyńska, Maria

    2016-06-01

The field of processing information provided by measurement results is one of the most important components of geodetic technologies. The dynamic development of this field improves classic numerical algorithms in areas where analytical solutions are difficult to achieve. Algorithms based on artificial intelligence, in the form of artificial neural networks including the topology of connections between neurons, have become an important instrument for processing and modelling measurement data. This concept results from the integration of neural networks and parameter optimization methods and makes it possible to avoid the necessity of arbitrarily defining the structure of a network. This kind of extension of the training process is exemplified by the algorithm called the Group Method of Data Handling (GMDH), which belongs to the class of evolutionary algorithms. The article presents a GMDH-type network used for modelling deformations of the geometrical axis of a steel chimney during its operation.

  9. Application of program generation technology in solving heat and flow problems

    NASA Astrophysics Data System (ADS)

    Wan, Shui; Wu, Bangxian; Chen, Ningning

    2007-05-01

Based on a new DIY concept for software development, an automatic program-generating technology built on a software system called Finite Element Program Generator (FEPG) provides a platform for developing programs, through which a scientific researcher can submit a specific physico-mathematical problem to the system in a more direct and convenient way for solution. For solving flow and heat problems using the finite element method, stabilization technologies and fractional-step methods are adopted to overcome the numerical difficulties caused mainly by the dominant convection. A couple of benchmark problems are given in this paper as examples to illustrate the usage and the superiority of the automatic program generation technique, including the flow in a lid-driven cavity, the starting flow in a circular pipe, the natural convection in a square cavity, and the flow past a circular cylinder. These examples also serve to verify the algorithms.

  10. Variance based joint sparsity reconstruction of synthetic aperture radar data for speckle reduction

    NASA Astrophysics Data System (ADS)

    Scarnati, Theresa; Gelb, Anne

    2018-04-01

    In observing multiple synthetic aperture radar (SAR) images of the same scene, it is apparent that the brightness distributions of the images are not smooth, but rather composed of complicated granular patterns of bright and dark spots. Further, these brightness distributions vary from image to image. This salt and pepper like feature of SAR images, called speckle, reduces the contrast in the images and negatively affects texture based image analysis. This investigation uses the variance based joint sparsity reconstruction method for forming SAR images from the multiple SAR images. In addition to reducing speckle, the method has the advantage of being non-parametric, and can therefore be used in a variety of autonomous applications. Numerical examples include reconstructions of both simulated phase history data that result in speckled images as well as the images from the MSTAR T-72 database.

  11. Estimation of coupling efficiency of optical fiber by far-field method

    NASA Astrophysics Data System (ADS)

    Kataoka, Keiji

    2010-09-01

Coupling efficiency to a single-mode optical fiber can be estimated from the field amplitudes at the far field of an incident beam and of the optical fiber mode. We call this the calculation by the far-field method (FFM) in this paper. The coupling efficiency by FFM is formulated including the effects of optical aberrations, vignetting of the incident beam, and misalignments of the optical fiber such as defocus, lateral displacements, and angle deviation in the arrangement of the fiber. As a result, it is shown that the coupling efficiency is proportional to the central intensity of the focused spot, i.e., the Strehl intensity of a virtual beam determined by the incident beam and the mode of the optical fiber. Using the FFM, a typical optical system in which a laser beam is coupled to an optical fiber with a lens of finite numerical aperture (NA) is analyzed for several cases of amplitude distributions of the incident light.

  12. An algorithmic approach to solving polynomial equations associated with quantum circuits

    NASA Astrophysics Data System (ADS)

    Gerdt, V. P.; Zinin, M. V.

    2009-12-01

In this paper we present two algorithms for reducing systems of multivariate polynomial equations over the finite field F2 to the canonical triangular form called a lexicographical Gröbner basis. This triangular form is the most appropriate for finding solutions of the system. On the other hand, a system of polynomials over F2 whose variables also take values in F2 (Boolean polynomials) completely describes the unitary matrix generated by a quantum circuit. In particular, the matrix itself can be computed by counting the number of solutions (roots) of the associated polynomial system. Thereby, efficient construction of the lexicographical Gröbner bases over F2 associated with quantum circuits gives a method for computing their circuit matrices that is alternative to the direct numerical method based on linear algebra. We compare our implementation of both algorithms with some other software packages available for computing Gröbner bases over F2.
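For very small systems, the root-counting step can be illustrated by brute-force enumeration over F2; this is exponential in the number of variables, and the Gröbner-basis algorithms of the paper are precisely what make larger circuits tractable. The toy system below is invented for the demonstration.

```python
import itertools

def count_roots(polys, nvars):
    """Count the common roots over F2 of a list of Boolean polynomials.
    Each polynomial is a callable taking a tuple of 0/1 values; all
    arithmetic is reduced mod 2."""
    return sum(
        all(p(x) % 2 == 0 for p in polys)
        for x in itertools.product((0, 1), repeat=nvars)
    )

# toy system over F2: x + y = 0 and x*y + x = 0
polys = [lambda v: v[0] + v[1],
         lambda v: v[0] * v[1] + v[0]]
n_solutions = count_roots(polys, 2)   # roots: (0,0) and (1,1)
```

Counting such roots is the operation that, per the abstract, yields the entries of the circuit's unitary matrix.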

  13. Phase-and-amplitude recovery from a single phase-contrast image using partially spatially coherent x-ray radiation

    NASA Astrophysics Data System (ADS)

    Beltran, Mario A.; Paganin, David M.; Pelliccia, Daniele

    2018-05-01

A simple method of phase-and-amplitude extraction is derived that corrects for image blurring induced by partially spatially coherent incident illumination using only a single intensity image as input. The method is based on Fresnel diffraction theory for the case of high Fresnel number, merged with the space-frequency description formalism used to quantify partially coherent fields, and assumes the object under study is composed of a single material. A priori knowledge of the object's complex refractive index and information obtained by characterizing the spatial coherence of the source are required. The algorithm was applied to propagation-based phase-contrast data measured with a laboratory-based micro-focus x-ray source. The blurring due to the finite spatial extent of the source is embedded within the algorithm as a simple correction term to the so-called Paganin algorithm and is also numerically stable in the presence of noise.
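For context, the standard (fully coherent) Paganin single-material retrieval that the paper's correction term modifies can be sketched as below. The parameter names and the flat-field normalization are conventional assumptions, and the partial-coherence blurring correction of the paper is not included.

```python
import numpy as np

def paganin_thickness(I, I0, R, delta, mu, pixel):
    """So-called Paganin single-material thickness retrieval:
    T = -(1/mu) * ln( IFFT[ FFT(I / I0) / (1 + (R * delta / mu) * k^2) ] ),
    with I the propagated intensity image, I0 the flat field, R the
    propagation distance, delta the refractive index decrement, mu the
    linear attenuation coefficient, and pixel the detector pixel size."""
    ny, nx = I.shape
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=pixel)
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=pixel)
    k2 = ky[:, None] ** 2 + kx[None, :] ** 2
    filtered = np.fft.fft2(I / I0) / (1.0 + (R * delta / mu) * k2)
    return -np.log(np.real(np.fft.ifft2(filtered))) / mu
```

The low-pass filter in Fourier space undoes the propagation-induced edge enhancement; for a uniform (featureless) image it reduces to the Beer-Lambert inversion T = -ln(I/I0)/mu.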

  14. TRAPR: R Package for Statistical Analysis and Visualization of RNA-Seq Data.

    PubMed

    Lim, Jae Hyun; Lee, Soo Youn; Kim, Ju Han

    2017-03-01

    High-throughput transcriptome sequencing, also known as RNA sequencing (RNA-Seq), is a standard technology for measuring gene expression with unprecedented accuracy. Numerous Bioconductor packages have been developed for the statistical analysis of RNA-Seq data. However, these tools focus on specific aspects of the data analysis pipeline, and are difficult to integrate with one another due to their disparate data structures and processing methods. They also lack visualization methods to confirm the integrity of the data and the process. In this paper, we propose an R-based RNA-Seq analysis pipeline called TRAPR, an integrated tool that facilitates the statistical analysis and visualization of RNA-Seq expression data. TRAPR provides various functions for data management, the filtering of low-quality data, normalization, transformation, statistical analysis, data visualization, and result visualization that allow researchers to build customized analysis pipelines.

  15. Measurements of solar transition zone velocities and line broadening using the ultraviolet spectrometer and polarimeter on the Solar Maximum Mission

    NASA Technical Reports Server (NTRS)

    Simon, G.; Mein, P.; Vial, J. C.; Shine, R. A.; Woodgate, B. E.

    1982-01-01

    The UVSP instrument on SMM is able to observe solar regions at two wavelengths in the same line with a band-pass of 0.3 A. Intensity and Doppler velocity maps are derived. It is shown that the numerical values are sensitive to the adopted Doppler width and the range of velocities is limited to within 30 km/sec. A method called Double Dopplergram Determination (DDD) is described for deriving both the Doppler width and the velocity (up to 80 km/sec), and the main sources of uncertainties are discussed. To illustrate the method, a set of C IV 1548 A observations is analyzed according to this procedure. The mean C IV Doppler width measured (0.15 A) is comparable to previous determinations. A relation is found between bright regions and down-flows. Large Doppler widths correspond to strong velocity gradients.

  16. Fast generations of tree-type three-dimensional entanglement via Lewis-Riesenfeld invariants and transitionless quantum driving

    PubMed Central

    Wu, Jin-Lei; Ji, Xin; Zhang, Shou

    2016-01-01

    Recently, a novel three-dimensional entangled state called tree-type entanglement, which is likely to have applications for improving quantum communication security, was prepared via adiabatic passage by Song et al. Here we propose two schemes for the fast generation of tree-type three-dimensional entanglement among three spatially separated atoms via shortcuts to adiabatic passage. With the help of quantum Zeno dynamics, two different but equivalent methods, Lewis-Riesenfeld invariants and transitionless quantum driving, are applied to construct the shortcuts to adiabatic passage. Comparisons between the two methods are discussed. Strict numerical simulations show that the tree-type three-dimensional entangled states can be prepared rapidly with quite high fidelities, and that both schemes are robust against variations in the parameters, atomic spontaneous emission, and cavity-fiber photon leakage. PMID:27667583

  17. A bi-objective model for robust yard allocation scheduling for outbound containers

    NASA Astrophysics Data System (ADS)

    Liu, Changchun; Zhang, Canrong; Zheng, Li

    2017-01-01

    This article examines the yard allocation problem for outbound containers, taking into account uncertainty factors, mainly the arrival and operation times of calling vessels. Based on a time-buffer insertion method, a bi-objective model is constructed to minimize the total operational cost and to maximize robustness against the uncertainty. Due to the NP-hardness of the model, a two-stage heuristic is developed to solve the problem. In the first stage, initial solutions are obtained by a greedy algorithm that looks n steps ahead with the uncertainty factors set to their respective expected values; in the second stage, starting from the first-stage solutions and taking the uncertainty factors into account, a neighbourhood search heuristic is employed to generate robust solutions that better withstand fluctuations in the uncertainty factors. Finally, extensive numerical experiments are conducted to test the performance of the proposed method.
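    The two-stage pattern (greedy construction, then neighbourhood search) can be sketched on a toy assignment problem. The cost matrix and the pairwise-swap neighbourhood below are illustrative stand-ins for the yard allocation model, not the authors' formulation.

```python
import itertools

def greedy_assign(cost):
    """Stage 1: assign each job (row) to the cheapest still-free slot."""
    n = len(cost)
    free = set(range(n))
    assign = [None] * n
    for job in range(n):
        slot = min(free, key=lambda s: cost[job][s])
        assign[job] = slot
        free.remove(slot)
    return assign

def local_search(cost, assign):
    """Stage 2: improve the greedy solution by swapping pairs of slots
    until no swap reduces the total cost."""
    n = len(cost)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(n), 2):
            delta = (cost[i][assign[j]] + cost[j][assign[i]]
                     - cost[i][assign[i]] - cost[j][assign[j]])
            if delta < 0:
                assign[i], assign[j] = assign[j], assign[i]
                improved = True
    return assign

cost = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]
a = greedy_assign(cost)
a = local_search(cost, a)
print(a, sum(cost[i][a[i]] for i in range(3)))
```

    The real heuristic additionally evaluates robustness (buffer consumption) alongside cost when accepting a move.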

  18. New imaging algorithm in diffusion tomography

    NASA Astrophysics Data System (ADS)

    Klibanov, Michael V.; Lucas, Thomas R.; Frank, Robert M.

    1997-08-01

    A novel imaging algorithm for diffusion/optical tomography is presented for the case of the time-dependent diffusion equation. Numerical tests are conducted for ranges of parameters realistic for applications to early breast cancer diagnosis using ultrafast laser pulses. This is a perturbation-like method which works for both homogeneous and heterogeneous background media. Its main innovation lies in a new approach to a novel linearized problem (LP). Such an LP is derived and reduced to a boundary value problem for a coupled system of elliptic partial differential equations. As is well known, the solution of such a system amounts to the factorization of well-conditioned, sparse matrices with few non-zero entries clustered along the diagonal, which can be done very rapidly. Thus, the main advantages of this technique are that it is fast and accurate. The authors call this approach the elliptic systems method (ESM). The ESM can be extended to other data collection schemes.

  19. Calculation of susceptibility through multiple orientation sampling (COSMOS): a method for conditioning the inverse problem from measured magnetic field map to susceptibility source image in MRI.

    PubMed

    Liu, Tian; Spincemaille, Pascal; de Rochefort, Ludovic; Kressler, Bryan; Wang, Yi

    2009-01-01

    Magnetic susceptibility differs among tissues based on their contents of iron, calcium, contrast agent, and other molecular compositions. Susceptibility modifies the magnetic field detected in the MR signal phase. The determination of an arbitrary susceptibility distribution from the induced field shifts is a challenging, ill-posed inverse problem. A method called "calculation of susceptibility through multiple orientation sampling" (COSMOS) is proposed to stabilize this inverse problem. The field created by the susceptibility distribution is sampled at multiple orientations with respect to the polarization field, B(0), and the susceptibility map is reconstructed by weighted linear least squares to account for field noise and the signal void region. Numerical simulations and phantom and in vitro imaging validations demonstrated that COSMOS is a stable and precise approach to quantify a susceptibility distribution using MRI.
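    The core numerical step, weighted linear least squares over measurements taken at several orientations, can be sketched on a toy two-parameter model. The design matrix, data, and weights below are invented for illustration and are far smaller than a real field-to-susceptibility inversion.

```python
def weighted_lstsq(A, b, w):
    """Solve min_x sum_i w_i * (A_i . x - b_i)^2 via the normal equations.
    With two unknowns the 2x2 system is solved in closed form."""
    n00 = sum(wi * ai[0] * ai[0] for ai, wi in zip(A, w))
    n01 = sum(wi * ai[0] * ai[1] for ai, wi in zip(A, w))
    n11 = sum(wi * ai[1] * ai[1] for ai, wi in zip(A, w))
    r0 = sum(wi * ai[0] * bi for ai, bi, wi in zip(A, b, w))
    r1 = sum(wi * ai[1] * bi for ai, bi, wi in zip(A, b, w))
    det = n00 * n11 - n01 * n01
    return ((n11 * r0 - n01 * r1) / det, (n00 * r1 - n01 * r0) / det)

# Three "orientations" probing a two-parameter susceptibility model;
# the weight down-weights the noisier third measurement.
A = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
b = [2.0, 3.0, 5.0]
w = [1.0, 1.0, 0.5]
x = weighted_lstsq(A, b, w)
print(x)  # → (2.0, 3.0)
```

    In COSMOS the rows of A come from the dipole kernel at each sampled orientation, and the weights encode field noise and signal-void regions.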

  20. Why Is the Overheating Problem Difficult: the Role of Entropy

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    2013-01-01

    The development of computational fluid dynamics over the last few decades has yielded enormous successes and capabilities that are routinely employed today; however, there remain some open problems to be properly resolved - some fundamental in nature and some resolvable by operational changes. These two categories were distinguished and broadly explored previously. One problem belonging to the former category is the so-called overheating problem, especially in rarefying flow. To date, this problem still dogs every method known to the author; a solution to it remains elusive. The study in this paper concludes that: (1) the entropy increase is quantitatively linked to the temperature increase, (2) the overheating is inevitable in the current shock-capturing or traditional finite difference framework, and (3) a simple hybrid method is proposed that removes the overheating in rarefying problems while retaining accurate shock capturing. This remedy (an enhancement of current numerical methods) can be included easily in present Eulerian codes.

  1. Load sharing in distributed real-time systems with state-change broadcasts

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Chang, Yi-Chieh

    1989-01-01

    A decentralized dynamic load-sharing (LS) method based on state-change broadcasts is proposed for a distributed real-time system. Whenever the state of a node changes from underloaded to fully loaded, and vice versa, the node broadcasts this change to a set of nodes, called a buddy set, in the system. The performance of the method is evaluated with both analytic modeling and simulation. It is modeled first by an embedded Markov chain for which numerical solutions are derived. The model solutions are then used to calculate the distribution of queue lengths at the nodes and the probability of meeting task deadlines. The analytical results show that buddy sets of 10 nodes outperform those with fewer than 10 nodes, and that the incremental benefit gained from increasing the buddy set size beyond 15 nodes is insignificant. These and other analytical results are verified by simulation. The proposed LS method is shown to meet task deadlines with a very high probability.
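    The buddy-set idea can be sketched as a toy simulation: a node that becomes fully loaded notifies its buddies, and a task arriving at a full node is transferred to a buddy still believed to be underloaded. The threshold value and the three-node topology are assumptions for illustration, not the paper's model.

```python
class Node:
    """Node that broadcasts under/over-loaded state changes to its buddy set."""
    THRESHOLD = 2  # queue length at which a node counts as fully loaded (assumed)

    def __init__(self, name):
        self.name = name
        self.queue = 0
        self.buddies = []               # the buddy set
        self.underloaded_view = set()   # buddies believed to be underloaded

    def set_buddies(self, buddies):
        self.buddies = buddies
        for b in buddies:               # everyone starts underloaded
            b.underloaded_view.add(self.name)

    def _broadcast(self, underloaded):
        for b in self.buddies:
            (b.underloaded_view.add if underloaded
             else b.underloaded_view.discard)(self.name)

    def arrive(self):
        """Accept a task, or transfer it if full and a buddy looks free.
        Returns the name of the node that finally queued the task."""
        if self.queue + 1 > self.THRESHOLD and self.underloaded_view:
            target = next(b for b in self.buddies
                          if b.name in self.underloaded_view)
            return target.arrive()
        self.queue += 1
        if self.queue >= self.THRESHOLD:
            self._broadcast(False)      # state change: now fully loaded
        return self.name

a, b, c = Node("a"), Node("b"), Node("c")
a.set_buddies([b, c]); b.set_buddies([a, c]); c.set_buddies([a, b])
placements = [a.arrive() for _ in range(5)]
print(placements)  # → ['a', 'a', 'b', 'b', 'c']
```

    Because state changes are broadcast eagerly, a full node can pick a transfer target without any probing at task-arrival time, which is the source of the scheme's low overhead.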

  2. Modal Decomposition of TTV: Inferring Planet Masses and Eccentricities

    NASA Astrophysics Data System (ADS)

    Linial, Itai; Gilbaum, Shmuel; Sari, Re’em

    2018-06-01

    Transit timing variations (TTVs) are a powerful tool for characterizing the properties of transiting exoplanets. However, inferring planet properties from the observed timing variations is a challenging task, which is usually addressed by extensive numerical searches. We propose a new, computationally inexpensive method for inverting TTV signals in a planetary system of two transiting planets. To the lowest order in planetary masses and eccentricities, TTVs can be expressed as a linear combination of three functions, which we call the TTV modes. These functions depend only on the planets’ linear ephemerides, and can be either constructed analytically, or by performing three orbital integrations of the three-body system. Given a TTV signal, the underlying physical parameters are found by decomposing the data as a sum of the TTV modes. We demonstrate the use of this method by inferring the mass and eccentricity of six Kepler planets that were previously characterized in other studies. Finally we discuss the implications and future prospects of our new method.
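    The inversion step, decomposing an observed TTV series as a least-squares combination of three mode functions, can be sketched with synthetic basis functions. The sinusoids below merely stand in for the true TTV modes, which would come from the linear ephemerides or from three orbital integrations.

```python
import math

def solve3(M, r):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    A = [row[:] + [ri] for row, ri in zip(M, r)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda k: abs(A[k][col]))
        A[col], A[piv] = A[piv], A[col]
        for k in range(col + 1, 3):
            f = A[k][col] / A[col][col]
            for j in range(col, 4):
                A[k][j] -= f * A[col][j]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (A[i][3] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

def decompose(ttv, modes):
    """Least-squares amplitudes a_k with ttv ~ sum_k a_k * modes[k]."""
    M = [[sum(mi * mj for mi, mj in zip(modes[i], modes[j])) for j in range(3)]
         for i in range(3)]
    r = [sum(mi * t for mi, t in zip(modes[i], ttv)) for i in range(3)]
    return solve3(M, r)

n = 20
modes = [[math.sin(0.3 * t) for t in range(n)],
         [math.cos(0.3 * t) for t in range(n)],
         [1.0] * n]
true = [2.0, -1.0, 0.5]
ttv = [sum(a * m[t] for a, m in zip(true, modes)) for t in range(n)]
amp = decompose(ttv, modes)
print([round(a, 6) for a in amp])  # → [2.0, -1.0, 0.5]
```

    The recovered amplitudes, not the raw timings, are what map back to planet masses and eccentricities in the paper's formulation.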

  3. CSM research: Methods and application studies

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.

    1989-01-01

    Computational mechanics is that discipline of applied science and engineering devoted to the study of physical phenomena by means of computational methods based on mathematical modeling and simulation, utilizing digital computers. The discipline combines theoretical and applied mechanics, approximation theory, numerical analysis, and computer science. Computational mechanics has had a major impact on engineering analysis and design. When applied to structural mechanics, the discipline is referred to herein as computational structural mechanics. Complex structures being considered by NASA for the 1990's include composite primary aircraft structures and the space station. These structures will be much more difficult to analyze than today's structures and necessitate a major upgrade in computerized structural analysis technology. NASA has initiated a research activity in structural analysis called Computational Structural Mechanics (CSM). The broad objective of the CSM activity is to develop advanced structural analysis technology that will exploit modern and emerging computers, such as those with vector and/or parallel processing capabilities. Here, the current research directions for the Methods and Application Studies Team of the Langley CSM activity are described.

  4. BetaTPred: prediction of beta-TURNS in a protein using statistical algorithms.

    PubMed

    Kaur, Harpreet; Raghava, G P S

    2002-03-01

    beta-turns play an important role from a structural and functional point of view. beta-turns are the most common type of non-repetitive structure in proteins and comprise, on average, 25% of the residues. In the past, numerous methods have been developed to predict beta-turns in a protein. Most of these prediction methods are based on statistical approaches. In order to utilize the full potential of these methods, there is a need to develop a web server. This paper describes a web server called BetaTPred, developed for predicting beta-turns in a protein from its amino acid sequence. BetaTPred allows the user to predict turns in a protein using existing statistical algorithms. It also allows the user to predict different types of beta-turns, e.g. type I, I', II, II', VI, VIII and non-specific. This server assists users in predicting the consensus beta-turns in a protein. The server is accessible at http://imtech.res.in/raghava/betatpred/

  5. Identification of potential recovery facilities for designing a reverse supply chain network using physical programming

    NASA Astrophysics Data System (ADS)

    Pochampally, Kishore K.; Gupta, Surendra M.; Kamarthi, Sagar V.

    2004-02-01

    Although there are many quantitative models in the literature for designing a reverse supply chain, every model assumes that all the recovery facilities engaged in the supply chain have enough potential to efficiently re-process the incoming used products. Motivated by the risk of re-processing used products in facilities with insufficient potential, this paper proposes a method to identify potential facilities in a set of candidate recovery facilities operating in a region where a reverse supply chain is to be established. The problem is solved using a newly developed method called physical programming. The most significant advantage of physical programming is that it allows a decision maker to express preferences for the values of the criteria (for comparing the alternatives) not in the traditional form of weights but in terms of ranges of different degrees of desirability, such as the ideal, desirable, tolerable, undesirable, highly undesirable, and unacceptable ranges. A numerical example is considered to illustrate the proposed method.
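    The range-based preference idea can be sketched as a simple classifier that maps a criterion value to a desirability class. The class boundaries below are invented; real physical programming additionally builds smooth class functions from these ranges rather than returning a label.

```python
def desirability(value, ranges):
    """Classify a criterion value into a physical-programming-style class.

    `ranges` is a list of (upper_bound, label) pairs sorted by bound;
    smaller criterion values are better (a cost-type criterion).
    """
    for bound, label in ranges:
        if value <= bound:
            return label
    return "unacceptable"

# Illustrative boundaries for one criterion (e.g. re-processing cost).
ranges = [(10, "ideal"), (20, "desirable"), (30, "tolerable"),
          (40, "undesirable"), (50, "highly undesirable")]
print(desirability(8, ranges), desirability(35, ranges), desirability(60, ranges))
# → ideal undesirable unacceptable
```

    A facility scoring "unacceptable" on any criterion would be screened out before the supply chain design step.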

  6. Coarse-grained computation for particle coagulation and sintering processes by linking Quadrature Method of Moments with Monte-Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou Yu, E-mail: yzou@Princeton.ED; Kavousanakis, Michail E., E-mail: mkavousa@Princeton.ED; Kevrekidis, Ioannis G., E-mail: yannis@Princeton.ED

    2010-07-20

    The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme.
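    The equation-free idea, evolving a coarse moment by short bursts of a fine-scale simulator followed by projective extrapolation, can be sketched with a toy decay process standing in for the Monte-Carlo scheme. The fine model, step sizes, and lifting-by-rescaling are all simplifications of ours.

```python
def fine_step(particles, dt):
    """Fine-scale simulator: each particle decays, dx/dt = -x
    (a deterministic toy stand-in for a Monte-Carlo coagulation step)."""
    return [x * (1.0 - dt) for x in particles]

def mean(particles):
    return sum(particles) / len(particles)

def projective_step(particles, dt, n_fine, leap):
    """Equation-free coarse step: run a short fine burst, estimate
    d(mean)/dt from it, leap the moment forward, then 'lift' back to
    the particle level by rescaling."""
    m0 = mean(particles)
    for _ in range(n_fine):
        particles = fine_step(particles, dt)
    m1 = mean(particles)
    slope = (m1 - m0) / (n_fine * dt)
    m_new = m1 + leap * slope            # projective jump of the coarse moment
    scale = m_new / m1
    return [x * scale for x in particles], m_new

particles = [0.5, 1.0, 1.5, 2.0]
particles, m = projective_step(particles, dt=0.01, n_fine=10, leap=0.4)
print(round(m, 4))
```

    The leap interval is where the speedup comes from: the expensive fine-scale simulator runs only for the short burst, not for the projected stretch.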

  7. Constrained optimization by radial basis function interpolation for high-dimensional expensive black-box problems with infeasible initial points

    NASA Astrophysics Data System (ADS)

    Regis, Rommel G.

    2014-02-01

    This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms are called COBRA and Extended ConstrLMSRBF and, unlike previous surrogate-based approaches, they can be used for high-dimensional problems where all initial points are infeasible. They both follow a two-phase approach where the first phase finds a feasible point while the second phase improves this feasible point. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that COBRA algorithms are competitive with Extended ConstrLMSRBF and they generally outperform the alternatives on the MOPTA08 problem and most of the test problems.

  8. Crystalline phases by an improved gradient expansion technique

    NASA Astrophysics Data System (ADS)

    Carignano, S.; Mannarelli, M.; Anzuini, F.; Benhar, O.

    2018-02-01

    We develop an innovative technique for studying inhomogeneous phases with a spontaneous broken symmetry. The method relies on the knowledge of the exact form of the free energy in the homogeneous phase and on a specific gradient expansion of the order parameter. We apply this method to quark matter at vanishing temperature and large chemical potential, which is expected to be relevant for astrophysical considerations. The method is remarkably reliable and fast as compared to performing the full numerical diagonalization of the quark Hamiltonian in momentum space and is designed to improve the standard Ginzburg-Landau expansion close to the phase transition points. For definiteness, we focus on inhomogeneous chiral symmetry breaking, accurately reproducing known results for one-dimensional and two-dimensional modulations and examining novel crystalline structures, as well. Consistently with previous results, we find that the energetically favored modulation is the so-called one-dimensional real-kink crystal. We propose a qualitative description of the pairing mechanism to motivate this result.

  9. Thermal measurement of brake pad lining surfaces during the braking process

    NASA Astrophysics Data System (ADS)

    Piątkowski, Tadeusz; Polakowski, Henryk; Kastek, Mariusz; Baranowski, Pawel; Damaziak, Krzysztof; Małachowski, Jerzy; Mazurkiewicz, Łukasz

    2012-06-01

    This paper presents the test campaign concept and definition and the analysis of the recorded measurements. Brakes are among the most important systems in cars and trucks. During braking, the temperature on a lining surface can rise above 500°C, which shows how strict, and continually rising, the requirements on linings are. Besides experimental tests, numerical analyses are a very supportive method for investigating the processes that occur on brake pad linings. Experimental tests were conducted on a test machine called the IL-68. The main component of the IL-68 is the so-called frictional unit, which consists of a rotational head, which conveys the shaft torque and holds the counter-samples, and a translational head, where the coating samples are placed and pressed against the counter-samples. Due to the high rotational speeds, and thus the rapid changes in the temperature field, an infrared camera was used for the measurements. The paper presents an analysis of the thermograms recorded during tests under different conditions. Furthermore, a numerical model of this testing machine was developed. To avoid resource-demanding analyses, only the frictional unit (described above) was taken into consideration. First, a geometrical model was built using CAD techniques, which then served as the basis for developing the finite element model. Material properties and boundary conditions correspond exactly to the experimental tests. Computations were performed using the dynamic LS-Dyna code, with heat generation estimated assuming full (100%) conversion of the mechanical work done by friction forces. The paper also presents the results of the dynamic thermomechanical analysis, which were compared with the laboratory tests.
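    The full-conversion heat assumption lends itself to a quick order-of-magnitude sketch: friction work mu*N*v*t deposited into the pad's heat capacity. All numbers below are illustrative, not taken from the tests.

```python
def pad_temperature_rise(mu, normal_force, speed, time, mass, c_p, eta=1.0):
    """Bulk temperature rise of a brake pad, assuming a fraction eta of
    the friction work mu*N*v*t becomes heat in the pad (eta=1 mirrors
    the full-conversion assumption used in such simulations)."""
    heat = eta * mu * normal_force * speed * time   # J
    return heat / (mass * c_p)                      # K

# Illustrative numbers: friction coefficient 0.4, 2 kN clamp force,
# 15 m/s sliding speed, 4 s braking, 1 kg pad, c_p = 1200 J/(kg K).
dT = pad_temperature_rise(0.4, 2000.0, 15.0, 4.0, 1.0, 1200.0)
print(dT)  # → 40.0
```

    A lumped estimate like this ignores conduction into the disc and convection, which is why finite element models are needed for the actual surface temperature field.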

  10. Clumps of Cold Stuff Across the Sky

    NASA Image and Video Library

    2011-01-11

    This map illustrates the numerous star-forming clouds, called cold cores, that the European Space Agency's Planck mission observed throughout our Milky Way galaxy. Planck detected around 10,000 of these cores, thousands of which had never been seen before.

  11. Pterocarpus officinalis Jacq. Bloodwood Legumeminosae, Legume Family, lotoideae, Pea Subfamily

    Treesearch

    Peter L. Weaver

    1997-01-01

    Pterocarpus officinalis Jacq., called palo de pollo in Puerto Rico, bloodwood in Guyana and Panama, and by numerous other names throughout its extensive range, is an evergreen tree that reaches 40 m in height.

  12. NAPL: SIMULATOR DOCUMENTATION

    EPA Science Inventory

    A mathematical and numerical model is developed to simulate the transport and fate of NAPLs (Non-Aqueous Phase Liquids) in near-surface granular soils. The resulting three-dimensional, three phase simulator is called NAPL. The simulator accommodates three mobile phases: water, NA...

  13. Numerical Procedures for Analyzing Dynamical Processes.

    DTIC Science & Technology

    1992-02-29

    ...different in nature and can be called dynamic in that information about the dy[namics]... of the third coordinate of the numerically calculated solution. Such... recover the matrix A by changing coordinates back to the original basis. The points x_i are points on the attractor which are not... For example, if we... rotate the coordinate axes by 45 deg. The dynamics... the attractor contained within a small distance of x_ref. In this notation, x_i and y_i are consecutive...

  14. Normal forms of dispersive scalar Poisson brackets with two independent variables

    NASA Astrophysics Data System (ADS)

    Carlet, Guido; Casati, Matteo; Shadrin, Sergey

    2018-03-01

    We classify the dispersive Poisson brackets with one dependent variable and two independent variables, with leading order of hydrodynamic type, up to Miura transformations. We show that, in contrast to the case of a single independent variable for which a well-known triviality result exists, the Miura equivalence classes are parametrised by an infinite number of constants, which we call numerical invariants of the brackets. We obtain explicit formulas for the first few numerical invariants.

  15. Beluga whale, Delphinapterus leucas, vocalizations from the Churchill River, Manitoba, Canada.

    PubMed

    Chmelnitsky, Elly G; Ferguson, Steven H

    2012-06-01

    Classification of animal vocalizations is often done by a human observer using aural and visual analysis, but more efficient, automated methods have also been utilized to reduce bias and increase reproducibility. Beluga whale, Delphinapterus leucas, calls were described from recordings collected in the summers of 2006-2008 in the Churchill River, Manitoba. Calls (n=706) were classified based on aural and visual analysis, and call characteristics were measured; calls were separated into 453 whistles (64.2%; 22 types), 183 pulsed/noisy calls (25.9%; 15 types), and 70 combined calls (9.9%; seven types). Measured parameters varied within each call type, but less variation existed in pulsed and noisy call types and some combined call types than in whistles. A more efficient and repeatable hierarchical clustering method was applied to 200 randomly chosen whistles using six call characteristics as variables; twelve groups were identified. Call characteristics varied less in cluster analysis groups than in whistle types described by visual and aural analysis, and the results were similar to the whistle contours described. This study provided the first description of beluga calls in Hudson Bay, and using two methods provides more robust interpretations and an assessment of appropriate methods for future studies.
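    Hierarchical clustering of call features can be sketched with a plain average-linkage agglomeration. The two-feature "calls" below are invented; a real analysis would use the six measured call characteristics, and the specific linkage used in the study is not stated here.

```python
def dist(p, q):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def agglomerate(points, n_clusters):
    """Agglomerative clustering with average linkage: repeatedly merge
    the two closest clusters until n_clusters remain."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = sum(dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return [sorted(c) for c in clusters]

# Toy call features: (peak frequency in kHz, duration in s) -- assumed values.
calls = [(2.1, 0.30), (2.0, 0.28), (7.9, 0.10), (8.2, 0.12), (2.2, 0.31)]
print(agglomerate(calls, 2))  # → [[0, 1, 4], [2, 3]]
```

    In practice the features would be standardized first so that frequency (kHz) does not dominate duration (s) in the distance.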

  16. TranAir: A full-potential, solution-adaptive, rectangular grid code for predicting subsonic, transonic, and supersonic flows about arbitrary configurations. Theory document

    NASA Technical Reports Server (NTRS)

    Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.

    1992-01-01

    A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.
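    The outer nonlinear iteration can be sketched with a plain Newton loop on a tiny system, using a finite-difference Jacobian and a direct 2x2 solve standing in for TranAir's preconditioned Krylov inner solver. The test system and tolerances are our own, not the program's.

```python
def newton(F, x, tol=1e-12, max_iter=50):
    """Newton iteration for a 2-variable system. The Jacobian is
    approximated by finite differences; a direct 2x2 solve (Cramer's
    rule) stands in for the Krylov inner solver of an inexact Newton
    method."""
    h = 1e-7
    for _ in range(max_iter):
        f0, f1 = F(x)
        if abs(f0) + abs(f1) < tol:
            break
        J = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):
            xp = list(x)
            xp[j] += h
            g0, g1 = F(xp)
            J[0][j] = (g0 - f0) / h
            J[1][j] = (g1 - f1) / h
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        dx0 = (-f0 * J[1][1] + f1 * J[0][1]) / det   # solve J dx = -f
        dx1 = (-f1 * J[0][0] + f0 * J[1][0]) / det
        x = [x[0] + dx0, x[1] + dx1]
    return x

# Toy nonlinear system: x^2 + y^2 = 4 and x*y = 1.
F = lambda x: (x[0] ** 2 + x[1] ** 2 - 4.0, x[0] * x[1] - 1.0)
root = newton(F, [2.0, 0.3])
print([round(v, 6) for v in root])
```

    In the inexact-Newton setting the linear solve is deliberately approximate, with its tolerance tightened as the outer residual shrinks.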

  17. High Order Discontinuous Galerkin Methods for Convection Dominated Problems with Application to Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    2000-01-01

    This project investigates the development of discontinuous Galerkin finite element methods, for general geometries and triangulations, for solving convection dominated problems, with applications to aeroacoustics. On the analysis side, we have studied an efficient and stable discontinuous Galerkin framework for small second derivative terms, for example in the Navier-Stokes equations, and also for related equations such as the Hamilton-Jacobi equations. This is a truly local discontinuous formulation where derivatives are considered as new variables. On the applied side, we have implemented and tested the efficiency of different approaches numerically. Related issues in high order ENO and WENO finite difference methods and spectral methods have also been investigated. Jointly with Hu, we have presented a discontinuous Galerkin finite element method for solving the nonlinear Hamilton-Jacobi equations. This method is based on the Runge-Kutta discontinuous Galerkin finite element method for solving conservation laws. The method has the flexibility of treating complicated geometry by using arbitrary triangulations, can achieve high order accuracy with a local, compact stencil, and is suited for efficient parallel implementation. One- and two-dimensional numerical examples are given to illustrate the capability of the method. Jointly with Hu, we have constructed third and fourth order WENO schemes on two-dimensional unstructured meshes (triangles) in the finite volume formulation. The third order schemes are based on a combination of linear polynomials with nonlinear weights, and the fourth order schemes are based on a combination of quadratic polynomials with nonlinear weights. We have addressed several difficult issues associated with high order WENO schemes on unstructured meshes, including the choice of linear and nonlinear weights, what to do with negative weights, etc.
Numerical examples are shown to demonstrate the accuracy and robustness of the methods for shock calculations. Jointly with P. Montarnal, we have used a recently developed energy relaxation theory by Coquel and Perthame and high order weighted essentially non-oscillatory (WENO) schemes to simulate the Euler equations of real gases. The main idea is an energy decomposition of the form epsilon = epsilon(sub 1) + epsilon(sub 2), where epsilon(sub 1) is associated with a simpler pressure law (a gamma-law in this paper) and the nonlinear deviation epsilon(sub 2) is convected with the flow. A relaxation process is performed at each time step to ensure that the original pressure law is satisfied. The necessary characteristic decomposition for the high order WENO schemes is performed on the characteristic fields based on the epsilon(sub 1) gamma-law. The algorithm only calls for the original pressure law once per grid point per time step, without the need to compute its derivatives or any Riemann solvers. Both one- and two-dimensional numerical examples are shown to illustrate the effectiveness of this approach.

  18. Application of advanced grid generation techniques for flow field computations about complex configurations

    NASA Technical Reports Server (NTRS)

    Kathong, Monchai; Tiwari, Surendra N.

    1988-01-01

    In the computation of flowfields about complex configurations, it is very difficult to construct a single boundary-fitted coordinate system. An alternative approach is to use several grids at once, each generated independently. This procedure is called the multiple grids or zonal grids approach; its applications are investigated here. The method is conservative, providing conservation of fluxes at grid interfaces. The Euler equations are solved numerically on such grids for various configurations. The numerical scheme used is the finite-volume technique with a three-stage Runge-Kutta time integration. The code is vectorized and programmed to run on the CDC VPS-32 computer. Steady state solutions of the Euler equations are presented and discussed. The solutions include: low speed flow over a sphere, high speed flow over a slender body, supersonic flow through a duct, and supersonic internal/external flow interaction for an aircraft configuration at various angles of attack. The results demonstrate that the multiple grids approach, along with conservative interfacing, is capable of computing the flows about complex configurations where the use of a single grid system is not possible.

  19. CaveMan Enterprise version 1.0 Software Validation and Verification.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, David

    The U.S. Department of Energy Strategic Petroleum Reserve stores crude oil in caverns solution-mined in salt domes along the Gulf Coast of Louisiana and Texas. The CaveMan software program has been used since the late 1990s as one tool to analyze pressure measurements monitored at each cavern. The purpose of this monitoring is to catch potential cavern integrity issues as soon as possible. The CaveMan software was written in Microsoft Visual Basic and embedded in a Microsoft Excel workbook; this method of running the CaveMan software is no longer sustainable. As such, a new version called CaveMan Enterprise has been developed. CaveMan Enterprise version 1.0 does not have any changes to the CaveMan numerical models. CaveMan Enterprise represents, instead, a change from desktop-managed workbooks to an enterprise framework, moving data management into coordinated databases and porting the numerical modeling codes into the Python programming language. This document provides a report of the code validation and verification testing.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solis, S. E.; Centro de Investigacion e Instrumentacion e Imagenologia Medica, Universidad Autonoma Metropolitana Iztapalapa, Mexico, DF 09340; Hernandez, J. A.

    Arrays of antennas have been widely accepted for magnetic resonance imaging applications due to their high signal-to-noise ratio (SNR) over large volumes of interest. A new surface coil based on the magnetron tube, called the slotted surface coil, has recently been introduced by our group. This coil design experimentally demonstrated a significant improvement over the circular-shaped coil when used in the receive-only mode. The slotted coils formed a two-sheet structure with a 90 deg. separation, and each coil had 6 circular slots. Numerical simulations were performed using the finite element method for this coil design to study the behaviour of the array magnetic field. Then, we developed a two-coil array for brain magnetic resonance imaging to be operated at the resonant frequency of 170 MHz in the transceiver mode. Phantom images were acquired with our coil array and standard pulse sequences on a research-dedicated 4 Tesla scanner. Numerical simulations demonstrated that electromagnetic interaction between the coil elements is negligible, and that the magnetic field showed a good uniformity. In vitro images showed the feasibility of this coil array for standard pulses for high field magnetic resonance imaging.

  1. Numerical Evaluation of Dynamic Response for Flexible Composite Structures under Slamming Impact for Naval Applications

    NASA Astrophysics Data System (ADS)

    Hassoon, O. H.; Tarfaoui, M.; El Moumen, A.; Benyahia, H.; Nachtane, M.

    2018-06-01

    Deformable composite structures subjected to water-entry impact can experience a phenomenon called the hydroelastic effect, which can modify the fluid flow and the estimated hydrodynamic loads compared with a rigid body. This is very important for ship design engineers in predicting the global and local hydrodynamic loads. This paper presents a numerical model to simulate the slamming water impact of flexible composite panels using an explicit finite element method. In order to better describe the hydroelastic influence and mechanical properties, composite panels with different stiffnesses, under different impact velocities and with a deadrise angle of 10°, have been studied. In addition, an inertia effect related to the loading rate was observed in the early stage of the impact. Simulation results indicated that the lower-stiffness panel has a higher hydroelastic effect, which becomes more important when decreasing the deadrise angle and increasing the impact velocity. Finally, the simulation results were compared with the experimental data and with analytical approaches for the rigid body to describe the behavior of the hydroelastic influence.

  2. Viscoelasticity of multiphase fluids: future directions

    NASA Astrophysics Data System (ADS)

    Tisato, Nicola; Spikes, Kyle; Javadpour, Farzam

    2016-04-01

    Recently, it has been demonstrated that rocks saturated with bubbly fluids attenuate seismic waves as the propagating elastic wave causes a thermodynamic disequilibrium between the liquid and the gas phases. This new attenuation mechanism, which is called wave-induced-gas-exsolution-dissolution (WIGED) and which was previously only postulated, opens up new perspectives for exploration geophysics as it could potentially improve the imaging of the subsurface. In particular, accounting for WIGED during seismic inversion could allow seismic waves to be better deciphered to disclose information about saturating phases. This will improve, for instance, the mapping of subsurface gas plumes that might form during anthropogenic activities or natural phenomena, such as those prior to volcanic eruptions. In the present contribution we will report the theory and the numerical method utilized to calculate the seismic-wave attenuation related to WIGED, and we will underline the assumptions and the limitations of the theory. Then, we will present the experimental and numerical strategy that we will employ to improve the WIGED theory in order to incorporate additional effects, such as the role of interfacial tensions, or to extend it to fluid-fluid interaction.

  3. Thermofluid Analysis of Magnetocaloric Refrigeration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdelaziz, Omar; Gluesenkamp, Kyle R; Vineyard, Edward Allan

    While there have been extensive studies on thermofluid characteristics of different magnetocaloric refrigeration systems, a conclusive optimization study using non-dimensional parameters which can be applied to a generic system has not been reported yet. In this study, a numerical model has been developed for optimization of an active magnetic refrigerator (AMR). This model is computationally efficient and robust, making it appropriate for running the thousands of simulations required for parametric study and optimization. The governing equations have been non-dimensionalized and numerically solved using the finite difference method. A parametric study on a wide range of non-dimensional numbers has been performed. While the goal of AMR systems is to improve the performance of competing parameters including COP, cooling capacity and temperature span, a new parameter called the AMR performance index-1 has been introduced in order to perform multi-objective optimization and simultaneously exploit all these parameters. The multi-objective optimization is carried out for a wide range of the non-dimensional parameters. The results of this study will provide general guidelines for designing high performance AMR systems.
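
    As a hedged illustration of the non-dimensionalized finite-difference approach mentioned above (not the AMR regenerator model itself), the sketch below time-steps the non-dimensional 1D diffusion equation with an explicit scheme; the grid Fourier number plays the role of the non-dimensional step-size group.

```python
import numpy as np

def step_ftcs(theta, fo):
    # One explicit (FTCS) finite-difference step of the non-dimensional
    # 1D diffusion equation d(theta)/d(tau) = d^2(theta)/d(xi)^2,
    # where fo = d(tau) / d(xi)^2 is the grid Fourier number
    # (the scheme is stable for fo <= 0.5).
    new = theta.copy()
    new[1:-1] = theta[1:-1] + fo * (theta[2:] - 2.0 * theta[1:-1] + theta[:-2])
    return new  # fixed end values act as Dirichlet boundaries

# usage: relax a temperature profile between a hot and a cold reservoir
n = 51
theta = np.zeros(n)
theta[0], theta[-1] = 1.0, 0.0     # non-dimensional end temperatures
for _ in range(5000):
    theta = step_ftcs(theta, fo=0.4)
# the profile approaches the linear steady state theta(xi) = 1 - xi
```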

  4. Numerical Evaluation of Dynamic Response for Flexible Composite Structures under Slamming Impact for Naval Applications

    NASA Astrophysics Data System (ADS)

    Hassoon, O. H.; Tarfaoui, M.; El Moumen, A.; Benyahia, H.; Nachtane, M.

    2017-10-01

    The deformable composite structures subjected to water-entry impact can be caused a phenomenon called hydroelastic effect, which can modified the fluid flow and estimated hydrodynamic loads comparing with rigid body. This is considered very important for ship design engineers to predict the global and the local hydrodynamic loads. This paper presents a numerical model to simulate the slamming water impact of flexible composite panels using an explicit finite element method. In order to better describe the hydroelastic influence and mechanical properties, composite materials panels with different stiffness and under different impact velocities with deadrise angle of 100 have been studied. In the other hand, the inertia effect was observed in the early stage of the impact that relative to the loading rate. Simulation results have been indicated that the lower stiffness panel has a higher hydroelastic effect and becomes more important when decreasing of the deadrise angle and increasing the impact velocity. Finally, the simulation results were compared with the experimental data and the analytical approaches of the rigid body to describe the behavior of the hydroelastic influence.

  5. The Ground Flash Fraction Retrieval Algorithm Employing Differential Evolution: Simulations and Applications

    NASA Technical Reports Server (NTRS)

    Koshak, William; Solakiewicz, Richard

    2012-01-01

    The ability to estimate the fraction of ground flashes in a set of flashes observed by a satellite lightning imager, such as the future GOES-R Geostationary Lightning Mapper (GLM), would likely improve operational and scientific applications (e.g., severe weather warnings, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method, called the Ground Flash Fraction Retrieval Algorithm (GoFFRA), was recently developed for estimating the ground flash fraction. The method uses a constrained mixed exponential distribution model to describe a particular lightning optical measurement called the Maximum Group Area (MGA). To obtain the optimum model parameters (one of which is the desired ground flash fraction), a scalar function must be minimized. This minimization is difficult because of two problems: (1) Label Switching (LS), and (2) Parameter Identity Theft (PIT). The LS problem is well known in the literature on mixed exponential distributions, and the PIT problem was discovered in this study. Each problem occurs when one allows the numerical minimizer to freely roam through the parameter search space; this allows certain solution parameters to interchange roles, which leads to fundamental ambiguities and solution error. A major accomplishment of this study is that we have employed a state-of-the-art genetic-based global optimization algorithm called Differential Evolution (DE) that constrains the parameter search in such a way as to remove both the LS and PIT problems. To test the performance of the GoFFRA when DE is employed, we applied it to analyze simulated MGA datasets that we generated from known mixed exponential distributions. Moreover, we evaluated the GoFFRA/DE method by applying it to analyze actual MGAs derived from low-Earth orbiting lightning imaging sensor data; the actual MGA data were classified as either ground or cloud flash MGAs using National Lightning Detection Network[TM] (NLDN) data. Solution error plots are provided for both the simulations and actual data analyses.
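
    A minimal sketch of the common DE/rand/1/bin variant of Differential Evolution is shown below on a toy objective. The bounds-clipped mutation is a simple stand-in for the constrained parameter search described in the abstract; the GoFFRA's actual constraints for avoiding label switching and parameter identity theft are not reproduced here.

```python
import numpy as np

def differential_evolution(f, bounds, pop=30, gens=200, F=0.8, CR=0.9, seed=0):
    # Minimal DE/rand/1/bin: mutate with a scaled difference of two random
    # population members, apply binomial crossover, keep the trial point
    # only if it is at least as good (greedy selection).
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    x = lo + rng.random((pop, dim)) * (hi - lo)
    fx = np.array([f(v) for v in x])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = x[rng.choice([j for j in range(pop) if j != i],
                                   3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # box-constrained search
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True             # at least one gene crosses
            trial = np.where(cross, mutant, x[i])
            ft = f(trial)
            if ft <= fx[i]:
                x[i], fx[i] = trial, ft
    best = fx.argmin()
    return x[best], fx[best]

# usage: recover known parameters by minimizing a quadratic bowl
target = np.array([1.5, -0.5])
sphere = lambda v: float(np.sum((v - target) ** 2))
xbest, fbest = differential_evolution(sphere, bounds=[(-5, 5), (-5, 5)])
```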

  6. High-resolution nondestructive testing of multilayer dielectric materials using wideband microwave synthetic aperture radar imaging

    NASA Astrophysics Data System (ADS)

    Kim, Tae Hee; James, Robin; Narayanan, Ram M.

    2017-04-01

    Fiber Reinforced Polymer or Plastic (FRP) composites have seen rapidly increasing use in the aerospace, automotive, and marine industries, as well as in civil engineering, because these composites show superior characteristics such as outstanding strength and stiffness, low weight, corrosion resistance, and ease of production. Generally, the advancement of materials calls for correspondingly advanced methods and technologies for inspection and failure detection during production or maintenance, especially in the area of nondestructive testing (NDT). Among numerous inspection techniques, microwave sensing methods can be effectively used for NDT of FRP composites. FRP composite materials can be produced using various structures and materials, and various defects or flaws occur due to environmental conditions encountered during operation. However, reliable, low-cost, and easy-to-operate NDT methods have not been developed and tested. FRP composites are usually produced as multilayered structures consisting of fiber plate, matrix, and core. Typical defects appearing in FRP composites are therefore disbondings, delaminations, object inclusions, and certain kinds of barely visible impact damage. In this paper, we propose a microwave NDT method, based on synthetic aperture radar (SAR) imaging algorithms, for stand-off imaging of internal delaminations. When a microwave signal is incident on a multilayer dielectric material, the reflected signal provides a good response to interfaces and transverse cracks. An electromagnetic wave model is introduced to delineate interface widths or defect depths from the reflected waves. For the purpose of numerical analysis and simulation, multilayered composite samples with various artificial defects are assumed, and their SAR images are obtained and analyzed using a variety of high-resolution wideband waveforms.

  7. Numerical Study of Interaction of a Vortical Density Inhomogeneity with Shock and Expansion Waves

    NASA Technical Reports Server (NTRS)

    Povitsky, A.; Ofengeim, D.

    1998-01-01

    We studied the interaction of a vortical density inhomogeneity (VDI) with shock and expansion waves. We call the VDI the region of concentrated vorticity (vortex) with a density different from that of the ambient fluid. Non-parallel directions of the density gradient normal to the VDI surface and the pressure gradient across a shock wave result in an additional vorticity. The roll-up of the initial round VDI towards a non-symmetrical shape is studied numerically. Numerical modeling of this interaction is performed by a 2-D Euler code. The use of an adaptive unstructured numerical grid makes it possible to obtain high accuracy and capture regions of induced vorticity with a moderate overall number of mesh points. For the validation of the code, the computational results are compared with available experimental results and good agreement is obtained. The interaction of the VDI with a propagating shock wave is studied for a range of initial and induced circulations, and the obtained flow patterns are presented. The splitting of the VDI develops into the formation of a non-symmetrical vortex pair and not into a set of vortices. A method for the analytical computation of an overall induced circulation Gamma(sub 1) as a result of the interaction of a moving VDI with a number of waves is proposed. Simplified, approximated expressions for Gamma(sub 1) are derived and their accuracy is discussed. The splitting of the VDI passing through the Prandtl-Meyer expansion wave is studied numerically. The obtained VDI patterns are compared to those for the interaction of the VDI with a propagating shock wave for the same values of initial and induced circulations. These patterns have similar shapes for corresponding time moments.

  8. Experimental and numerical analysis of interlocking rib formation at sheet metal blanking

    NASA Astrophysics Data System (ADS)

    Bolka, Špela; Bratuš, Vitoslav; Starman, Bojan; Mole, Nikolaj

    2018-05-01

    Cores for electrical motors are typically produced by blanking of laminations and then stacking them together with, for instance, interlocking ribs or welding. Strict geometrical tolerances, both on the lamination and on the stack, combined with complex part geometry and harder steel strip material, call for the use of predictive methods to optimize the process before actual blanking, to reduce costs and speed up the process. One of the major influences on the final stack geometry is the quality of the interlocking ribs. A rib is formed in one step and joined with the rib of the preceding lamination in the next. The quality of the joint determines the firmness of the stack and also influences its geometry. Geometrical and positional accuracy is thus crucial in the rib formation process. In this study, a complex experimental and numerical analysis of interlocking rib formation has been performed. The aim of the analysis is to numerically predict the shape of the rib in order to perform a numerical simulation of the stack formation in the next step of the process. A detailed experimental study has been performed in order to characterize the influential parameters of rib formation and the geometry of the ribs themselves, using classical and 3D laser microscopy. The formation of the interlocking rib is then simulated using Abaqus Explicit. The Hill 48 constitutive material model is based on an extensive and novel material characterization process, combining data from in-plane and out-of-plane material tests to perform a 3D analysis of both rib formation and rib joining. The study shows good correlation between the experimental and numerical results.

  9. GridLAB-D: An Agent-Based Simulation Framework for Smart Grids

    DOE PAGES

    Chassin, David P.; Fuller, Jason C.; Djilali, Ned

    2014-01-01

    Simulation of smart grid technologies requires a fundamentally new approach to integrated modeling of power systems, energy markets, building technologies, and the plethora of other resources and assets that are becoming part of modern electricity production, delivery, and consumption systems. As a result, the US Department of Energy’s Office of Electricity commissioned the development of a new type of power system simulation tool called GridLAB-D that uses an agent-based approach to simulating smart grids. This paper presents the numerical methods and approach to time-series simulation used by GridLAB-D and reviews applications in power system studies, market design, building control system design, and integration of wind power in a smart grid.

  10. Revealing the physical insight of a length-scale parameter in metamaterials by exploiting the variational formulation

    NASA Astrophysics Data System (ADS)

    Abali, B. Emek

    2018-04-01

    For micro-architectured materials with a substructure, called metamaterials, we can realize a direct numerical simulation at the microscale by using classical mechanics. This method is accurate but computationally costly. Instead, a solution of the same problem at the macroscale is possible by means of generalized mechanics. In this case, no detailed modeling of the substructure is necessary; however, new parameters emerge. A physical interpretation of these metamaterial parameters is challenging, leading to a lack of experimental strategies for their determination. In this work, we exploit the variational formulation based on action principles and obtain a direct relation between a parameter used in the kinetic energy and a metamaterial parameter in the case of a viscoelastic model.

  11. Magnus-induced ratchet effects for skyrmions interacting with asymmetric substrates

    NASA Astrophysics Data System (ADS)

    Reichhardt, C.; Ray, D.; Olson Reichhardt, C. J.

    2015-07-01

    We show using numerical simulations that pronounced ratchet effects can occur for ac driven skyrmions moving over asymmetric quasi-one-dimensional substrates. We find a new type of ratchet effect called a Magnus-induced transverse ratchet that arises when the ac driving force is applied perpendicular rather than parallel to the asymmetry direction of the substrate. This transverse ratchet effect only occurs when the Magnus term is finite, and the threshold ac amplitude needed to induce it decreases as the Magnus term becomes more prominent. Ratcheting skyrmions follow ordered orbits in which the net displacement parallel to the substrate asymmetry direction is quantized. Skyrmion ratchets represent a new ac current-based method for controlling skyrmion positions and motion for spintronic applications.
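
    The origin of the transverse response can be seen directly in the overdamped skyrmion equation of motion, alpha_d * v + alpha_m * (z × v) = F: the Magnus term mixes the x and y components of the mobility, so a drive along y produces motion along x. A minimal sketch with illustrative coefficients (not values from the paper):

```python
import numpy as np

def skyrmion_velocity(force, alpha_d=1.0, alpha_m=3.0):
    # Solve  alpha_d * v + alpha_m * (z x v) = F  for the 2D velocity v.
    # With z x v = (-v_y, v_x), this is the linear system
    # [[alpha_d, -alpha_m], [alpha_m, alpha_d]] @ v = F.
    M = np.array([[alpha_d, -alpha_m],
                  [alpha_m,  alpha_d]])
    return np.linalg.solve(M, np.asarray(force, dtype=float))

# a drive applied along y moves the skyrmion partly along x:
v = skyrmion_velocity([0.0, 1.0])                      # Magnus term finite
v_overdamped = skyrmion_velocity([0.0, 1.0], alpha_m=0.0)
```

    With the Magnus coefficient set to zero, the y drive produces no x motion at all, which is consistent with the abstract's statement that the transverse ratchet only occurs when the Magnus term is finite.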

  12. GridLAB-D: An Agent-Based Simulation Framework for Smart Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Fuller, Jason C.; Djilali, Ned

    2014-06-23

    Simulation of smart grid technologies requires a fundamentally new approach to integrated modeling of power systems, energy markets, building technologies, and the plethora of other resources and assets that are becoming part of modern electricity production, delivery, and consumption systems. As a result, the US Department of Energy’s Office of Electricity commissioned the development of a new type of power system simulation tool called GridLAB-D that uses an agent-based approach to simulating smart grids. This paper presents the numerical methods and approach to time-series simulation used by GridLAB-D and reviews applications in power system studies, market design, building control system design, and integration of wind power in a smart grid.

  13. One in a Million Given the Accident: Assuring Nuclear Weapon Safety

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weaver, Jason

    2015-08-25

    Since the introduction of nuclear weapons, there has not been a single instance of accidental or unauthorized nuclear detonation, but there have been numerous accidents and “close calls.” As the understanding of these environments has increased, the need for a robust nuclear weapon safety philosophy has grown. This paper describes some of the methods used by the Nuclear Weapon Complex today to assure nuclear weapon safety, including testing, modeling, analysis, and design features. Lastly, it also reviews safety’s continued role in the future and examines how nuclear safety’s present maturity can play a role in strengthening security and other areas, and how increased coordination can improve safety and reduce long-term cost.

  14. Reshaping of large aeronautical structural parts: A simplified simulation approach

    NASA Astrophysics Data System (ADS)

    Mena, Ramiro; Aguado, José V.; Guinard, Stéphane; Huerta, Antonio

    2018-05-01

    Large aeronautical structural parts present important distortions after machining. This problem is caused by the presence of residual stresses, which are developed during previous manufacturing steps (quenching). Before being put into service, the nominal geometry is restored by means of mechanical methods. This operation is called reshaping and depends exclusively on the skills of a well-trained and experienced operator. Moreover, this procedure is time consuming and is currently based only on a trial-and-error approach. Therefore, there is a need at the industrial level to solve this problem with the support of numerical simulation tools. Using a simplifying hypothesis, it was found that the springback phenomenon behaves linearly, which allows a strategy to be developed for implementing reshaping at an industrial level.

  15. Extending compile-time reverse mode and exploiting partial separability in ADIFOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, C.H.; El-Khadiri, M.

    1992-10-01

    The numerical methods employed in the solution of many scientific computing problems require the computation of the gradient of a function f: R[sup n] [yields] R. ADIFOR is a source translator that, given a collection of subroutines to compute f, generates Fortran 77 code for computing the derivative of this function. Using the so-called torsion problem from the MINPACK-2 test collection as an example, this paper explores two issues in automatic differentiation: the efficient computation of derivatives for partially separable functions and the use of the compile-time reverse mode for the generation of derivatives. We show that orders of magnitude of improvement are possible when exploiting partial separability and maximizing use of the reverse mode.
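
    Reverse-mode differentiation of the kind ADIFOR generates can be sketched with a tiny operator-overloading tape. This is an illustrative Python analogue, not ADIFOR's source-to-source Fortran output: each operation records its local partial derivatives, and one backward sweep accumulates the full gradient.

```python
class Var:
    # Minimal reverse-mode automatic differentiation node: each result
    # remembers its parents and the local partial derivative w.r.t. each.
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Propagate d(output)/d(node) backwards along every path;
        # gradients of shared inputs accumulate.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x, y = Var(3.0), Var(2.0)
f = x * y + x * x        # f = x*y + x^2
f.backward()
# x.grad == y + 2x == 8.0,  y.grad == x == 3.0
```

    The key property motivating the reverse mode is visible here: one backward sweep yields the derivative with respect to every input at once, at a small constant multiple of the cost of evaluating f itself.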

  16. Edgeworth expansions of stochastic trading time

    NASA Astrophysics Data System (ADS)

    Decamps, Marc; De Schepper, Ann

    2010-08-01

    Under most local and stochastic volatility models, the underlying forward is assumed to be a positive function of a time-changed Brownian motion. This relates the implied volatility smile nicely to the so-called activity rate in the market. Following Young and DeWitt-Morette (1986) [8], we propose to apply the Duru-Kleinert process-cum-time transformation in a path integral to formulate the transition density of the forward. The method leads to asymptotic expansions of the transition density around a Gaussian kernel corresponding to the average activity in the market conditional on the forward value. The approximation is numerically illustrated for pricing vanilla options under the CEV model and the popular normal SABR model. The asymptotics can also be used for Monte Carlo simulations or backward integration schemes.
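
    As a brute-force counterpart to such transition-density asymptotics (useful for checking them numerically), a plain Euler-Maruyama Monte Carlo pricer for the CEV forward dF = sigma * F^beta dW might look like the sketch below; the parameter values and the absorbing boundary treatment are illustrative assumptions, not details from the paper.

```python
import numpy as np

def cev_call_mc(f0, strike, sigma, beta, t, n_paths=400_000, n_steps=100, seed=1):
    # Euler-Maruyama Monte Carlo for the CEV forward dF = sigma * F^beta dW,
    # pricing an undiscounted vanilla call.
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    f = np.full(n_paths, f0)
    for _ in range(n_steps):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        # absorb paths at zero, the standard CEV boundary treatment
        f = np.maximum(f + sigma * f**beta * dw, 0.0)
    return np.maximum(f - strike, 0.0).mean()

price = cev_call_mc(f0=100.0, strike=100.0, sigma=0.2, beta=1.0, t=1.0)
```

    For beta = 1 the model reduces to the lognormal case, so the Monte Carlo estimate can be checked against the closed-form Black value (about 7.97 for these inputs).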

  17. Determination of optimal tool parameters for hot mandrel bending of pipe elbows

    NASA Astrophysics Data System (ADS)

    Tabakajew, Dmitri; Homberg, Werner

    2018-05-01

    Seamless pipe elbows are important components in mechanical, plant and apparatus engineering. Typically, they are produced by the so-called `Hamburg process'. In this hot forming process, the initial pipes are subsequently pushed over an ox-horn-shaped bending mandrel. The geometric shape of the mandrel influences the diameter, bending radius and wall thickness distribution of the pipe elbow. This paper presents the numerical simulation model of the hot mandrel bending process created to ensure that the optimum mandrel geometry can be determined at an early stage. A fundamental analysis was conducted to determine the influence of significant parameters on the pipe elbow quality. The chosen methods and approach as well as the corresponding results are described in this paper.

  18. On cat's eyes and multiple disjoint cells natural convection flow in tall tilted cavities

    NASA Astrophysics Data System (ADS)

    Báez, Elsa; Nicolás, Alfredo

    2014-10-01

    Natural convection fluid flow in air-filled tall tilted cavities is studied numerically with a direct projection method applied to the unsteady Boussinesq approximation in primitive variables. The study is focused on the so-called cat's eyes and multiple disjoint cells as the aspect ratio A and the angle of inclination ϕ of the cavity vary. Results have already been reported with primitive and stream function-vorticity variables; the former are validated against the latter, which in turn were validated through mesh-size and time-step independence studies. The new results, complemented with the previous ones, reveal the invariant fluid-motion and heat-transfer properties of this thermal phenomenon, which is the novelty here.

  19. On the convergence of a fully discrete scheme of LES type to physically relevant solutions of the incompressible Navier-Stokes

    NASA Astrophysics Data System (ADS)

    Berselli, Luigi C.; Spirito, Stefano

    2018-06-01

    Obtaining reliable numerical simulations of turbulent fluids is a challenging problem in computational fluid mechanics. The large eddy simulation (LES) models are efficient tools to approximate turbulent fluids, and an important step in the validation of these models is the ability to reproduce relevant properties of the flow. In this paper, we consider a fully discrete approximation of the Navier-Stokes-Voigt model by an implicit Euler algorithm (with respect to the time variable) and a Fourier-Galerkin method (in the space variables). We prove the convergence to weak solutions of the incompressible Navier-Stokes equations satisfying the natural local entropy condition, hence selecting the so-called physically relevant solutions.
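
    The discretization pattern described (implicit Euler in time, Fourier-Galerkin in space) can be sketched on the periodic heat equation, where each Fourier mode decouples and the implicit step reduces to a scalar division per mode. The Navier-Stokes-Voigt nonlinearity of the paper is not included; this is only the linear skeleton of the scheme.

```python
import numpy as np

def implicit_euler_fourier(u0, nu, dt, n_steps):
    # Fourier-Galerkin in space, implicit Euler in time, for the periodic
    # heat equation u_t = nu * u_xx on [0, 2*pi). In Fourier space each
    # mode obeys u_hat' = -nu * k^2 * u_hat, and the implicit Euler step
    # (I + dt * nu * k^2) u_hat^{n+1} = u_hat^n is solved mode by mode.
    n = len(u0)
    k = np.fft.fftfreq(n, d=1.0 / n)      # integer wavenumbers
    u_hat = np.fft.fft(u0)
    for _ in range(n_steps):
        u_hat = u_hat / (1.0 + dt * nu * k**2)
    return np.fft.ifft(u_hat).real

# usage: a single sine mode decays like exp(-nu * t)
n = 128
x = 2.0 * np.pi * np.arange(n) / n
u0 = np.sin(x)
u = implicit_euler_fourier(u0, nu=1.0, dt=1e-3, n_steps=1000)
# after t = 1 the amplitude is close to exp(-1) ~ 0.368
```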

  20. Microfabricated ommatidia using a laser induced self-writing process for high resolution artificial compound eye optical systems.

    PubMed

    Jung, Hyukjin; Jeong, Ki-Hun

    2009-08-17

    A microfabricated compound eye, comparable to a natural compound eye, shows a spherical arrangement of integrated optical units called artificial ommatidia, each consisting of a self-aligned microlens and waveguide. An increase of waveguide length is imperative to obtain high resolution images through an artificial compound eye for wide field-of-view imaging as well as fast motion detection. This work presents an effective method for increasing the waveguide length of artificial ommatidia using a laser induced self-writing process in a photosensitive polymer resin. The numerical and experimental results show the uniform formation of waveguides and the increase of waveguide length to over 850 μm. (c) 2009 Optical Society of America
