Sample records for efficient pseudospectral method

  1. Application of the Fourier pseudospectral time-domain method in orthogonal curvilinear coordinates for near-rigid moderately curved surfaces.

    PubMed

    Hornikx, Maarten; Dragna, Didier

    2015-07-01

    The Fourier pseudospectral time-domain method is an efficient wave-based method to model sound propagation in inhomogeneous media. One of the limitations of the method for atmospheric sound propagation purposes is its restriction to a Cartesian grid, confining it to staircase-like geometries. A transform from the physical coordinate system to the curvilinear coordinate system has been applied to handle more arbitrary geometries. For applicability of this method near the boundaries, the acoustic velocity variables are solved for their curvilinear components. The performance of the curvilinear Fourier pseudospectral method is investigated in free field and for outdoor sound propagation over an impedance strip for various types of shapes. Accuracy is shown to be related to the maximum grid stretching ratio and deformation of the boundary shape, and computational efficiency is reduced relative to the smallest grid cell in the physical domain. The applicability of the curvilinear Fourier pseudospectral time-domain method is demonstrated by investigating the effect of sound propagation over a hill in a nocturnal boundary layer. With the proposed method, accurate and efficient results for sound propagation over smoothly varying ground surfaces with high impedances can be obtained.
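
    The core operation in Fourier pseudospectral time-domain solvers of this kind is the evaluation of spatial derivatives by multiplication with ik in wavenumber space. Below is a minimal, generic sketch of that building block for a periodic 1-D grid in Python/NumPy; it illustrates the principle only and is not the authors' code.

```python
import numpy as np

def fourier_derivative(f, dx):
    """Spectral first derivative of a real, periodic, uniformly sampled signal."""
    k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)   # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

# Check against the exact derivative of a smooth periodic function.
n = 64
L = 2.0 * np.pi
x = np.arange(n) * (L / n)
f = np.sin(3.0 * x)
df = fourier_derivative(f, L / n)
print(np.max(np.abs(df - 3.0 * np.cos(3.0 * x))))    # ~1e-13, i.e. spectral accuracy
```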

  2. Time-splitting combined with exponential wave integrator Fourier pseudospectral method for Schrödinger-Boussinesq system

    NASA Astrophysics Data System (ADS)

    Liao, Feng; Zhang, Luming; Wang, Shanshan

    2018-02-01

    In this article, we formulate an efficient and accurate numerical method for approximating the coupled Schrödinger-Boussinesq (SBq) system. The main features of our method are: (i) the application of a time-splitting Fourier spectral method to the Schrödinger-like equation in the SBq system, and (ii) the use of an exponential wave integrator Fourier pseudospectral method for the spatial derivatives in the Boussinesq-like equation. The scheme is fully explicit and efficient due to the fast Fourier transform. Numerical examples are presented to show the efficiency and accuracy of our method.
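
    To illustrate the time-splitting Fourier spectral idea on the Schrödinger-like part, the sketch below applies Strang splitting to the 1-D linear Schrödinger equation i ψ_t = -½ ψ_xx + V(x) ψ: the potential step is applied exactly in physical space and the kinetic step exactly in Fourier space. This is a generic illustration of the splitting technique, not the authors' SBq scheme.

```python
import numpy as np

def strang_step(psi, V, k, dt):
    """One Strang splitting step for i*psi_t = -0.5*psi_xx + V(x)*psi (periodic)."""
    psi = np.exp(-0.5j * dt * V) * psi                               # half potential step
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))   # full kinetic step
    return np.exp(-0.5j * dt * V) * psi                              # half potential step

n, L, dt = 256, 2.0 * np.pi, 1.0e-3
x = np.arange(n) * (L / n)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
V = 1.0 + np.cos(x)                # a sample smooth periodic potential
psi = np.exp(1j * np.sin(x))       # a smooth initial state

norm0 = np.sum(np.abs(psi)**2) * (L / n)
for _ in range(1000):
    psi = strang_step(psi, V, k, dt)
print(np.sum(np.abs(psi)**2) * (L / n) - norm0)   # ~1e-15: the split steps are unitary
```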

  3. Iterative spectral methods and spectral solutions to compressible flows

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Zang, T. A.

    1982-01-01

    A spectral multigrid scheme is described which can solve pseudospectral discretizations of self-adjoint elliptic problems in O(N log N) operations. An iterative technique for efficiently implementing semi-implicit time-stepping for pseudospectral discretizations of Navier-Stokes equations is discussed. This approach can handle variable coefficient terms in an effective manner. Pseudospectral solutions of compressible flow problems are presented. These include one-dimensional problems and two-dimensional Euler solutions. Results are given both for shock-capturing approaches and for shock-fitting ones.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vay, Jean-Luc, E-mail: jlvay@lbl.gov; Haber, Irving; Godfrey, Brendan B.

    Pseudo-spectral electromagnetic solvers (i.e. representing the fields in Fourier space) have extraordinary precision. In particular, Haber et al. presented in 1973 a pseudo-spectral solver that integrates analytically the solution over a finite time step, under the usual assumption that the source is constant over that time step. Yet, pseudo-spectral solvers have not been widely used, due in part to the difficulty of efficient parallelization owing to global communications associated with global FFTs on the entire computational domain. A method for the parallelization of electromagnetic pseudo-spectral solvers is proposed and tested on single electromagnetic pulses, and on Particle-In-Cell simulations of the wakefield formation in a laser plasma accelerator. The method takes advantage of the properties of the Discrete Fourier Transform, the linearity of Maxwell's equations and the finite speed of light for limiting the communications of data within guard regions between neighboring computational domains. Although this requires a small approximation, test results show that no significant error is made on the test cases that have been presented. The proposed method opens the way to solvers combining the favorable parallel scaling of standard finite-difference methods with the accuracy advantages of pseudo-spectral methods.

  5. Accurate modeling of plasma acceleration with arbitrary order pseudo-spectral particle-in-cell methods

    DOE PAGES

    Jalas, S.; Dornmair, I.; Lehe, R.; ...

    2017-03-20

    Particle in Cell (PIC) simulations are a widely used tool for the investigation of both laser- and beam-driven plasma acceleration. It is a known issue that the beam quality can be artificially degraded by numerical Cherenkov radiation (NCR) resulting primarily from an incorrectly modeled dispersion relation. Pseudo-spectral solvers featuring infinite order stencils can strongly reduce NCR - or even suppress it - and are therefore well suited to correctly model the beam properties. For efficient parallelization of the PIC algorithm, however, localized solvers are inevitable. Arbitrary order pseudo-spectral methods provide this needed locality. Yet, these methods can again be prone to NCR. In this paper, we show that acceptably low solver orders are sufficient to correctly model the physics of interest, while allowing for parallel computation by domain decomposition.

  6. Fuel-Optimal Altitude Maintenance of Low-Earth-Orbit Spacecrafts by Combined Direct/Indirect Optimization

    NASA Astrophysics Data System (ADS)

    Kim, Kyung-Ha; Park, Chandeok; Park, Sang-Young

    2015-12-01

    This work presents fuel-optimal altitude maintenance of Low-Earth-Orbit (LEO) spacecraft experiencing non-negligible air drag and J2 perturbation. A pseudospectral (direct) method is first applied to roughly estimate the optimal fuel-consumption strategy, which is then employed as an initial guess for its precise determination. Based on the physical specifications of KOrea Multi-Purpose SATellite-2 (KOMPSAT-2), a Korean artificial satellite, numerical simulations show that the satellite ascends with full thrust at the early stage of the maneuver period and then descends with null thrust. While the thrust profile is presumably bang-off, it is difficult to precisely determine the switching time by using a pseudospectral method only. This is expected, since the optimal switching epoch does not, in general, coincide with one of the collocation points prescribed by the pseudospectral method. As an attempt to precisely determine the switching time and the associated optimal thrust history, a shooting (indirect) method is then employed with the initial guess obtained through the pseudospectral method. This hybrid process allows the optimal fuel consumption and thrust profiles of LEO spacecraft to be determined efficiently and precisely.

  7. Using polarized Raman spectroscopy and the pseudospectral method to characterize molecular structure and function

    NASA Astrophysics Data System (ADS)

    Weisman, Andrew L.

    Electronic structure calculation is an essential approach for determining the structure and function of molecules and is therefore of critical interest to physics, chemistry, and materials science. Of the various algorithms for calculating electronic structure, the pseudospectral method is among the fastest. However, the trade-off for its speed is more up-front programming and testing, and as a result, applications using the pseudospectral method currently lag behind those using other methods.

    In Part I of this dissertation, we first advance the pseudospectral method by optimizing it for an important application, polarized Raman spectroscopy, which is a well-established tool used to characterize molecular properties. This is an application of particular importance because often the easiest and most economical way to obtain the polarized Raman spectrum of a material is to simulate it; thus, utilization of the pseudospectral method for this purpose will accelerate progress in the determination of molecular properties. We demonstrate that our implementation of Raman spectroscopy using the pseudospectral method results in spectra that are just as accurate as those calculated using the traditional analytic method, and in the process, we derive the most comprehensive formulation to date of polarized Raman intensity formulas, applicable to both crystalline and isotropic systems. Next, we apply our implementation to determine the orientations of crystalline oligothiophenes -- a class of materials important in the field of organic electronics -- achieving excellent agreement with experiment and demonstrating the general utility of polarized Raman spectroscopy for the determination of crystal orientation. In addition, we derive from first-principles a method for using polarized Raman spectra to establish unambiguously whether a uniform region of a material is crystalline or isotropic. Finally, we introduce free, open-source software that allows a user to determine any of a number of polarized Raman properties of a sample given common output from electronic structure calculations.

    In Part II, we apply the pseudospectral method to other areas of scientific importance requiring a deeper understanding of molecular structure and function. First, we use it to accurately determine the frequencies of vibrational tags on biomolecules that can be detected in real-time using stimulated Raman spectroscopy. Next, we evaluate the performance of the pseudospectral method for calculating excited-state energies and energy gradients of large molecules -- another new application of the pseudospectral method -- showing that the calculations run much more quickly than those using the analytic method. Finally, we use the pseudospectral method to simulate the bottleneck process of a solar cell used for water splitting, a promising technology for converting the sun's energy into hydrogen fuel. We apply the speed of the pseudospectral method by modeling the relevant part of the system as a large, explicitly passivated titanium dioxide nanoparticle and simulating it realistically using hybrid density functional theory with an implicit solvent model, yielding insight into the physical nature of the rate-limiting step of water splitting. These results further validate the particularly fast and accurate simulation methodologies used, opening the door to efficient and realistic cluster-based, fully quantum-mechanical simulations of the bottleneck process of a promising technology for clean solar energy conversion.

    Taken together, we show how both polarized Raman spectroscopy and the pseudospectral method are effective tools for analyzing the structure and function of important molecular systems.

  8. A mixed pseudospectral/finite difference method for the axisymmetric flow in a heated, rotating spherical shell. [for experimental atmospheric simulation]

    NASA Technical Reports Server (NTRS)

    Macaraeg, M. G.

    1986-01-01

    For a Spacelab flight, a model experiment of the earth's atmospheric circulation has been proposed. This experiment is known as the Atmospheric General Circulation Experiment (AGCE). In the experiment, concentric spheres will rotate as a solid body, while a dielectric fluid is confined in a portion of the gap between the spheres. A zero-gravity environment will be required in the context of the simulation of the gravitational body force on the atmosphere. The present study is concerned with the development of a pseudospectral/finite difference (PS/FD) model and its subsequent application to physical cases relevant to the AGCE. The model is based on a hybrid scheme involving a pseudospectral latitudinal formulation, and finite difference radial and time discretization. The advantages of the hybrid PS/FD method compared to a pure second-order accurate finite difference (FD) method are discussed, taking into account the higher accuracy and efficiency of the PS/FD method.

  9. Pseudospectral reverse time migration based on wavefield decomposition

    NASA Astrophysics Data System (ADS)

    Du, Zengli; Liu, Jianjun; Xu, Feng; Li, Yongzhang

    2017-05-01

    The accuracy of seismic numerical simulations and the effectiveness of imaging conditions are important in reverse time migration studies. Using the pseudospectral method, the precision of the calculated spatial derivative of the seismic wavefield can be improved, increasing the vertical resolution of images. Low-frequency background noise, generated by the zero-lag cross-correlation of mismatched forward-propagated and backward-propagated wavefields at the impedance interfaces, can be eliminated effectively by using the imaging condition based on the wavefield decomposition technique. The computation complexity can be reduced when imaging is performed in the frequency domain. Since the Fourier transformation along the z-axis may be derived directly as one of the intermediate results of the spatial derivative calculation, the computation load of the wavefield decomposition can be reduced, improving the computation efficiency of imaging. Comparison of the results for a pulse response in a constant-velocity medium indicates that, compared with the finite difference method, the peak frequency of the Ricker wavelet can be increased by 10-15 Hz while still avoiding spatial numerical dispersion when the second-order spatial derivative of the seismic wavefield is obtained using the pseudospectral method. The results for the SEG/EAGE and Sigsbee2b models show that the signal-to-noise ratio of the profile and the imaging quality of the boundaries of the salt dome migrated using the pseudospectral method are better than those obtained using the finite difference method.
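
    The dispersion advantage described above comes from evaluating the second spatial derivative in wavenumber space (exact up to the Nyquist limit) instead of with a low-order stencil. The short comparison below, written in Python/NumPy, is a generic illustration of that point, not the authors' migration code.

```python
import numpy as np

n, L = 128, 2.0 * np.pi
dx = L / n
x = np.arange(n) * dx
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
m = 20                                     # a fairly high wavenumber on this grid
f = np.sin(m * x)
exact = -(m**2) * f

# Pseudospectral second derivative: multiply by -k^2 in Fourier space.
ps = np.real(np.fft.ifft(-(k**2) * np.fft.fft(f)))
# Second-order central finite difference, for comparison.
fd = (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2

print(np.max(np.abs(ps - exact)))   # ~1e-10
print(np.max(np.abs(fd - exact)))   # ~33 (about 8% error): grid dispersion at high wavenumbers
```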

  10. A pseudospectral Legendre method for hyperbolic equations with an improved stability condition

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, Hillel

    1986-01-01

    A new pseudospectral method is introduced for solving hyperbolic partial differential equations. This method uses different grid points than previously used pseudospectral methods: in fact the grid points are related to the zeroes of the Legendre polynomials. The main advantage of this method is that the allowable time step is proportional to the inverse of the number of grid points, 1/N, rather than to 1/N² (as in the case of other pseudospectral methods applied to mixed initial boundary value problems). A highly accurate time discretization suitable for these spectral methods is discussed.
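
    The 1/N² restriction mentioned above reflects how standard Gauss-Lobatto grids cluster near the boundaries. The sketch below computes Legendre-Gauss-Lobatto nodes (the endpoints plus the zeros of P_N') and shows that their minimum spacing shrinks like 1/N²; it is a generic illustration of the motivation, not the report's alternative grid.

```python
import numpy as np

def legendre_gauss_lobatto(n):
    """Legendre-Gauss-Lobatto nodes on [-1, 1]: the endpoints plus the zeros of P_n'."""
    interior = np.polynomial.legendre.Legendre.basis(n).deriv().roots()
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

for n in (16, 32, 64, 128):
    x = legendre_gauss_lobatto(n)
    # The scaled minimum spacing is roughly constant, i.e. min spacing ~ 1/n^2,
    # which is what forces explicit time steps ~ 1/n^2 on such grids.
    print(n, np.min(np.diff(x)) * n**2)
```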

  11. A pseudospectral Legendre method for hyperbolic equations with an improved stability condition

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, H.

    1984-01-01

    A new pseudospectral method is introduced for solving hyperbolic partial differential equations. This method uses different grid points than previously used pseudospectral methods: in fact the grid points are related to the zeroes of the Legendre polynomials. The main advantage of this method is that the allowable time step is proportional to the inverse of the number of grid points, 1/N, rather than to 1/N² (as in the case of other pseudospectral methods applied to mixed initial boundary value problems). A highly accurate time discretization suitable for these spectral methods is discussed.

  12. Reentry trajectory optimization based on a multistage pseudospectral method.

    PubMed

    Zhao, Jiang; Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve the reentry trajectory optimization for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of the trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization.

  13. Reentry Trajectory Optimization Based on a Multistage Pseudospectral Method

    PubMed Central

    Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve the reentry trajectory optimization for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of the trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization. PMID:24574929

  14. An effective pseudospectral method for constraint dynamic optimisation problems with characteristic times

    NASA Astrophysics Data System (ADS)

    Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin

    2018-03-01

    Dynamic optimisation problems with characteristic times, widely existing in many areas, are one of the frontiers and hotspots of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving these problems. The formula for the state at the terminal time of each subdomain is derived, which results in a linear combination of the state at the LG points in the subdomains so as to avoid the complex nonlinear integral. The sensitivities of the state at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic time dynamic optimisation problems are solved and compared in detail with the methods reported in the literature. The research results show the effectiveness of the proposed method.

  15. Modeling nonlinear ultrasound propagation in heterogeneous media with power law absorption using a k-space pseudospectral method.

    PubMed

    Treeby, Bradley E; Jaros, Jiri; Rendell, Alistair P; Cox, B T

    2012-06-01

    The simulation of nonlinear ultrasound propagation through tissue-realistic media has a wide range of practical applications. However, this is a computationally difficult problem due to the large size of the computational domain compared to the acoustic wavelength. Here, the k-space pseudospectral method is used to reduce the number of grid points required per wavelength for accurate simulations. The model is based on coupled first-order acoustic equations valid for nonlinear wave propagation in heterogeneous media with power law absorption. These are derived from the equations of fluid mechanics and include a pressure-density relation that incorporates the effects of nonlinearity, power law absorption, and medium heterogeneities. The additional terms accounting for convective nonlinearity and power law absorption are expressed as spatial gradients, making them efficient to numerically encode. The governing equations are then discretized using a k-space pseudospectral technique in which the spatial gradients are computed using the Fourier-collocation method. This increases the accuracy of the gradient calculation and thus relaxes the requirement for dense computational grids compared to conventional finite difference methods. The accuracy and utility of the developed model is demonstrated via several numerical experiments, including the 3D simulation of the beam pattern from a clinical ultrasound probe.

  16. Accurate modeling of plasma acceleration with arbitrary order pseudo-spectral particle-in-cell methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jalas, S.; Dornmair, I.; Lehe, R.

    Particle in Cell (PIC) simulations are a widely used tool for the investigation of both laser- and beam-driven plasma acceleration. It is a known issue that the beam quality can be artificially degraded by numerical Cherenkov radiation (NCR) resulting primarily from an incorrectly modeled dispersion relation. Pseudo-spectral solvers featuring infinite order stencils can strongly reduce NCR - or even suppress it - and are therefore well suited to correctly model the beam properties. For efficient parallelization of the PIC algorithm, however, localized solvers are inevitable. Arbitrary order pseudo-spectral methods provide this needed locality. Yet, these methods can again be prone to NCR. In this paper, we show that acceptably low solver orders are sufficient to correctly model the physics of interest, while allowing for parallel computation by domain decomposition.

  17. Using SpF to Achieve Petascale for Legacy Pseudospectral Applications

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.; Jiang, Weiyuan

    2014-01-01

    Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical kernels that can be performed entirely in-processor. The granularity of domain decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe our experience in porting legacy pseudospectral models, MoSST and DYNAMO, to use SpF as well as present preliminary performance results provided by the improved scalability.

  18. Pseudospectral method for gravitational wave collapse

    NASA Astrophysics Data System (ADS)

    Hilditch, David; Weyhausen, Andreas; Brügmann, Bernd

    2016-03-01

    We present a new pseudospectral code, bamps, for numerical relativity written with the evolution of collapsing gravitational waves in mind. We employ the first-order generalized harmonic gauge formulation. The relevant theory is reviewed, and the numerical method is critically examined and specialized for the task at hand. In particular, we investigate formulation parameters, gauge- and constraint-preserving boundary conditions well suited to nonvanishing gauge source functions. Different types of axisymmetric twist-free moment-of-time-symmetry gravitational wave initial data are discussed. A treatment of the axisymmetric apparent horizon condition is presented with careful attention to regularity on axis. Our apparent horizon finder is then evaluated in a number of test cases. Moving on to evolutions, we investigate modifications to the generalized harmonic gauge constraint damping scheme to improve conservation in the strong-field regime. We demonstrate strong scaling of our pseudospectral penalty code. We employ the Cartoon method to efficiently evolve axisymmetric data in our 3+1-dimensional code. We perform test evolutions of the Schwarzschild spacetime perturbed by gravitational waves and by gauge pulses, both to demonstrate the use of our black-hole excision scheme and for comparison with earlier results. Finally, numerical evolutions of supercritical Brill waves are presented to demonstrate durability of the excision scheme for the dynamical formation of a black hole.

  19. An hp symplectic pseudospectral method for nonlinear optimal control

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong

    2017-01-01

    An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method exhibits, on one hand, exponential convergence rates when the number of collocation points is increased with a fixed number of sub-intervals; on the other hand, it exhibits linear convergence rates when the number of sub-intervals is increased with a fixed number of collocation points. Furthermore, combined with the hp method based on the residual error of the dynamic constraints, the proposed method can achieve given precisions in a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.

  20. New Operational Matrices for Solving Fractional Differential Equations on the Half-Line

    PubMed Central

    2015-01-01

    In this paper, the fractional-order generalized Laguerre operational matrices (FGLOM) of fractional derivatives and fractional integration are derived. These operational matrices are used together with spectral tau method for solving linear fractional differential equations (FDEs) of order ν (0 < ν < 1) on the half line. An upper bound of the absolute errors is obtained for the approximate and exact solutions. Fractional-order generalized Laguerre pseudo-spectral approximation is investigated for solving nonlinear initial value problem of fractional order ν. The extension of the fractional-order generalized Laguerre pseudo-spectral method is given to solve systems of FDEs. We present the advantages of using the spectral schemes based on fractional-order generalized Laguerre functions and compare them with other methods. Several numerical examples are implemented for FDEs and systems of FDEs including linear and nonlinear terms. We demonstrate the high accuracy and the efficiency of the proposed techniques. PMID:25996369

  21. New operational matrices for solving fractional differential equations on the half-line.

    PubMed

    Bhrawy, Ali H; Taha, Taha M; Alzahrani, Ebraheem O; Alzahrani, Ebrahim O; Baleanu, Dumitru; Alzahrani, Abdulrahim A

    2015-01-01

    In this paper, the fractional-order generalized Laguerre operational matrices (FGLOM) of fractional derivatives and fractional integration are derived. These operational matrices are used together with spectral tau method for solving linear fractional differential equations (FDEs) of order ν (0 < ν < 1) on the half line. An upper bound of the absolute errors is obtained for the approximate and exact solutions. Fractional-order generalized Laguerre pseudo-spectral approximation is investigated for solving nonlinear initial value problem of fractional order ν. The extension of the fractional-order generalized Laguerre pseudo-spectral method is given to solve systems of FDEs. We present the advantages of using the spectral schemes based on fractional-order generalized Laguerre functions and compare them with other methods. Several numerical examples are implemented for FDEs and systems of FDEs including linear and nonlinear terms. We demonstrate the high accuracy and the efficiency of the proposed techniques.
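
    Spectral approximations on the half line of the kind used in these two records are built on generalized Laguerre nodes and weights. The sketch below shows plain generalized Gauss-Laguerre quadrature with SciPy as a generic illustration; the papers' fractional-order generalized Laguerre functions are a further modification of this machinery.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import roots_genlaguerre

# Generalized Gauss-Laguerre quadrature:
#   integral_0^inf f(x) * x**alpha * exp(-x) dx  ~=  sum_i w_i * f(x_i)
alpha = 0.5
x, w = roots_genlaguerre(20, alpha)

f = np.sin
approx = np.sum(w * f(x))
exact, _ = quad(lambda t: np.sin(t) * t**alpha * np.exp(-t), 0.0, np.inf)
print(approx, exact)   # the two values agree to several digits with only 20 nodes
```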

  22. Convergence results for pseudospectral approximations of hyperbolic systems by a penalty type boundary treatment

    NASA Technical Reports Server (NTRS)

    Funaro, Daniele; Gottlieb, David

    1989-01-01

    A new method of imposing boundary conditions in the pseudospectral approximation of hyperbolic systems of equations is proposed. It is suggested to collocate the equations not only at the inner grid points but also at the boundary points, and to use the boundary conditions as penalty terms. In the pseudo-spectral Legendre method with the new boundary treatment, a stability analysis for the case of a constant coefficient hyperbolic system is presented and error estimates are derived.

  23. A k-space method for large-scale models of wave propagation in tissue.

    PubMed

    Mast, T D; Souriau, L P; Liu, D L; Tabei, M; Nachman, A I; Waag, R C

    2001-03-01

    Large-scale simulation of ultrasonic pulse propagation in inhomogeneous tissue is important for the study of ultrasound-tissue interaction as well as for development of new imaging methods. Typical scales of interest span hundreds of wavelengths; most current two-dimensional methods, such as finite-difference and finite-element methods, are unable to compute propagation on this scale with the efficiency needed for imaging studies. Furthermore, for most available methods of simulating ultrasonic propagation, large-scale, three-dimensional computations of ultrasonic scattering are infeasible. Some of these difficulties have been overcome by previous pseudospectral and k-space methods, which allow substantial portions of the necessary computations to be executed using fast Fourier transforms. This paper presents a simplified derivation of the k-space method for a medium of variable sound speed and density; the derivation clearly shows the relationship of this k-space method to both past k-space methods and pseudospectral methods. In the present method, the spatial differential equations are solved by a simple Fourier transform method, and temporal iteration is performed using a k-t space propagator. The temporal iteration procedure is shown to be exact for homogeneous media, unconditionally stable for "slow" (c(x) ≤ c0) media, and highly accurate for general weakly scattering media. The applicability of the k-space method to large-scale soft tissue modeling is shown by simulating two-dimensional propagation of an incident plane wave through several tissue-mimicking cylinders as well as a model chest wall cross section. A three-dimensional implementation of the k-space method is also employed for the example problem of propagation through a tissue-mimicking sphere. Numerical results indicate that the k-space method is accurate for large-scale soft tissue computations with much greater efficiency than that of an analogous leapfrog pseudospectral method or a 2-4 finite difference time-domain method. However, numerical results also indicate that the k-space method is less accurate than the finite-difference method for a high contrast scatterer with bone-like properties, although qualitative results can still be obtained by the k-space method with high efficiency. Possible extensions to the method, including representation of absorption effects, absorbing boundary conditions, elastic-wave propagation, and acoustic nonlinearity, are discussed.
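
    For a homogeneous medium the k-t space propagator reduces to an update that is exact for any time-step size; the 1-D sketch below shows that special case in Python/NumPy. It is an illustration of the idea only; the paper's method extends it to variable sound speed and density.

```python
import numpy as np

# Exact k-space time stepping for p_tt = c0^2 * p_xx on a periodic 1-D grid.
n, L, c0, dt = 256, 1.0, 1500.0, 1.0e-7
dx = L / n
x = np.arange(n) * dx
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
kappa = -4.0 * np.sin(0.5 * c0 * k * dt) ** 2     # k-t space propagator factor

p_old = np.exp(-((x - 0.5) / 0.05) ** 2)          # Gaussian pressure pulse, initially at rest
p = p_old.copy()
for _ in range(400):
    p, p_old = 2.0 * p - p_old + np.real(np.fft.ifft(kappa * np.fft.fft(p))), p

# The pulse splits and propagates without accumulating numerical dispersion error.
print(p.max())
```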

  24. Petascale turbulence simulation using a highly parallel fast multipole method on GPUs

    NASA Astrophysics Data System (ADS)

    Yokota, Rio; Barba, L. A.; Narumi, Tetsu; Yasuoka, Kenji

    2013-03-01

    This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop/s on GPU hardware using single precision. The simulations use a vortex particle method to solve the Navier-Stokes equations, with a highly parallel fast multipole method (FMM) as numerical engine, and match the current record in mesh size for this application, a cube of 4096³ computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the FFT algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the FMM-based vortex method achieving 74% parallel efficiency on 4096 processes (one GPU per MPI process, 3 GPUs per node of the TSUBAME-2.0 system). The FFT-based spectral method is able to achieve just 14% parallel efficiency on the same number of MPI processes (using only CPU cores), due to the all-to-all communication pattern of the FFT algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions. Computing with 69 billion particles, this work exceeds by an order of magnitude the largest vortex-method calculations to date.

  25. SpF: Enabling Petascale Performance for Pseudospectral Dynamo Models

    NASA Astrophysics Data System (ADS)

    Jiang, W.; Clune, T.; Vriesema, J.; Gutmann, G.

    2013-12-01

    Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical 'kernels' that can be performed entirely in-processor. The granularity of domain-decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe the basic architecture of SpF as well as preliminary performance data and experience with adapting legacy dynamo codes. We will conclude with a discussion of planned extensions to SpF that will provide pseudospectral applications with additional flexibility with regard to time integration, linear solvers, and discretization in the radial direction.

  26. Highly efficient implementation of pseudospectral time-dependent density-functional theory for the calculation of excitation energies of large molecules.

    PubMed

    Cao, Yixiang; Hughes, Thomas; Giesen, Dave; Halls, Mathew D; Goldberg, Alexander; Vadicherla, Tati Reddy; Sastry, Madhavi; Patel, Bhargav; Sherman, Woody; Weisman, Andrew L; Friesner, Richard A

    2016-06-15

    We have developed and implemented pseudospectral time-dependent density-functional theory (TDDFT) in the quantum mechanics package Jaguar to calculate restricted singlet and restricted triplet, as well as unrestricted excitation energies with either full linear response (FLR) or the Tamm-Dancoff approximation (TDA), with the pseudospectral length scales, pseudospectral atomic corrections, and pseudospectral multigrid strategy included in the implementations to improve the chemical accuracy and to speed up the pseudospectral calculations. Calculations based on pseudospectral time-dependent density-functional theory with full linear response (PS-FLR-TDDFT) and within the Tamm-Dancoff approximation (PS-TDA-TDDFT) for G2 set molecules using B3LYP/6-31G** show mean and maximum absolute deviations of 0.0015 eV and 0.0081 eV, 0.0007 eV and 0.0064 eV, and 0.0004 eV and 0.0022 eV for restricted singlet excitation energies, restricted triplet excitation energies, and unrestricted excitation energies, respectively, compared with results calculated from the conventional spectral method. Applications of PS-FLR-TDDFT to OLED molecules and organic dyes, together with comparisons between PS-FLR-TDDFT results and best estimates, demonstrate the accuracy of both PS-FLR-TDDFT and PS-TDA-TDDFT. Calculations for a set of medium-sized molecules, including Cn fullerenes and nanotubes, using the B3LYP functional and 6-31G** basis set show PS-TDA-TDDFT provides 19- to 34-fold speedups for Cn fullerenes with 450-1470 basis functions, 11- to 32-fold speedups for nanotubes with 660-3180 basis functions, and 9- to 16-fold speedups for organic molecules with 540-1340 basis functions compared to fully analytic calculations, without sacrificing chemical accuracy. Calculations on a set of larger molecules, including the antibiotic drug Ramoplanin, the 46-residue crambin protein, fullerenes up to C540 and nanotubes up to 14×(6,6), using the B3LYP functional and 6-31G** basis set with up to 8100 basis functions show that the PS-FLR-TDDFT CPU time scales as N^2.05 with the number of basis functions. © 2016 Wiley Periodicals, Inc.

  27. Fractional spectral and pseudo-spectral methods in unbounded domains: Theory and applications

    NASA Astrophysics Data System (ADS)

    Khosravian-Arab, Hassan; Dehghan, Mehdi; Eslahchi, M. R.

    2017-06-01

    This paper is intended to provide exponentially accurate Galerkin, Petrov-Galerkin and pseudo-spectral methods for fractional differential equations on a semi-infinite interval. We start our discussion by introducing two new non-classical Lagrange basis functions, NLBFs-1 and NLBFs-2, which are based on the two new families of associated Laguerre polynomials, GALFs-1 and GALFs-2, obtained recently by the authors in [28]. With respect to the NLBFs-1 and NLBFs-2, two new non-classical interpolants based on the associated Laguerre-Gauss and Laguerre-Gauss-Radau points are introduced and then fractional (pseudo-spectral) differentiation (and integration) matrices are derived. Convergence and stability of the new interpolants are proved in detail. Several numerical examples are considered to demonstrate the validity and applicability of the basis functions to approximate fractional derivatives (and integrals) of some functions. Moreover, the pseudo-spectral, Galerkin and Petrov-Galerkin methods are successfully applied to solve some physical ordinary differential equations of either fractional or integer orders. Some useful comments from the numerical point of view on the Galerkin and Petrov-Galerkin methods are listed at the end.

  28. Pseudospectral collocation methods for fourth order differential equations

    NASA Technical Reports Server (NTRS)

    Malek, Alaeddin; Phillips, Timothy N.

    1994-01-01

    Collocation schemes are presented for solving linear fourth order differential equations in one and two dimensions. The variational formulation of the model fourth order problem is discretized by approximating the integrals by a Gaussian quadrature rule generalized to include the values of the derivative of the integrand at the boundary points. Collocation schemes are derived which are equivalent to this discrete variational problem. An efficient preconditioner based on a low-order finite difference approximation to the same differential operator is presented. The corresponding multidomain problem is also considered and interface conditions are derived. Pseudospectral approximations which are C1 continuous at the interfaces are used in each subdomain to approximate the solution. The approximations are also shown to be C3 continuous at the interfaces asymptotically. A complete analysis of the collocation scheme for the multidomain problem is provided. The extension of the method to the biharmonic equation in two dimensions is discussed and results are presented for a problem defined in a nonrectangular domain.

  29. On pseudo-spectral time discretizations in summation-by-parts form

    NASA Astrophysics Data System (ADS)

    Ruggiu, Andrea A.; Nordström, Jan

    2018-05-01

    Fully-implicit discrete formulations in summation-by-parts form for initial-boundary value problems must be invertible in order to provide well functioning procedures. We prove that, under mild assumptions, pseudo-spectral collocation methods for the time derivative lead to invertible discrete systems when energy-stable spatial discretizations are used.

  30. Hybrid Fourier pseudospectral/discontinuous Galerkin time-domain method for wave propagation

    NASA Astrophysics Data System (ADS)

    Pagán Muñoz, Raúl; Hornikx, Maarten

    2017-11-01

    The Fourier Pseudospectral time-domain (Fourier PSTD) method was shown to be an efficient way of modelling acoustic propagation problems as described by the linearized Euler equations (LEE), but is limited to real-valued frequency independent boundary conditions and predominantly staircase-like boundary shapes. This paper presents a hybrid approach to solve the LEE, coupling Fourier PSTD with a nodal Discontinuous Galerkin (DG) method. DG exhibits almost no restrictions with respect to geometrical complexity or boundary conditions. The aim of this novel method is to allow the computation of complex geometries and to be a step towards the implementation of frequency dependent boundary conditions by using the benefits of DG at the boundaries, while keeping the efficient Fourier PSTD in the bulk of the domain. The hybridization approach is based on conformal meshes to avoid spatial interpolation of the DG solutions when transferring values from DG to Fourier PSTD, while the data transfer from Fourier PSTD to DG is done utilizing spectral interpolation of the Fourier PSTD solutions. The accuracy of the hybrid approach is presented for one- and two-dimensional acoustic problems and the main sources of error are investigated. It is concluded that the hybrid methodology does not introduce significant errors compared to the Fourier PSTD stand-alone solver. An example of a cylinder scattering problem is presented and accurate results have been obtained when using the proposed approach. Finally, no instabilities were found during long-time calculation using the current hybrid methodology on a two-dimensional domain.

  31. Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models

    NASA Astrophysics Data System (ADS)

    Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo

    2014-04-01

    We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as, for example, Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.

  32. Application of shifted Jacobi pseudospectral method for solving (in)finite-horizon min-max optimal control problems with uncertainty

    NASA Astrophysics Data System (ADS)

    Nikooeinejad, Z.; Delavarkhalafi, A.; Heydari, M.

    2018-03-01

    The difficulty of solving min-max optimal control problems (M-MOCPs) with uncertainty using generalised Euler-Lagrange equations is caused by the combination of split boundary conditions, nonlinear differential equations and the manner in which the final time is treated. In this investigation, the shifted Jacobi pseudospectral method (SJPM) is proposed as a numerical technique for solving two-point boundary value problems (TPBVPs) in M-MOCPs for several boundary states. At first, a novel framework of approximate solutions which satisfy the split boundary conditions automatically for various boundary states is presented. Then, by applying the generalised Euler-Lagrange equations and expanding the required approximate solutions as elements of shifted Jacobi polynomials, finding a solution of TPBVPs in nonlinear M-MOCPs with uncertainty is reduced to the solution of a system of algebraic equations. Moreover, the Jacobi polynomials are particularly useful for boundary value problems in unbounded domains, which allows us to solve infinite- as well as finite- and free-final-time problems by the domain truncation method. Some numerical examples are given to demonstrate the accuracy and efficiency of the proposed method. A comparative study between the proposed method and other existing methods shows that the SJPM is simple and accurate.

  33. Fuzzy physical programming for Space Manoeuvre Vehicles trajectory optimization based on hp-adaptive pseudospectral method

    NASA Astrophysics Data System (ADS)

    Chai, Runqi; Savvaris, Al; Tsourdos, Antonios

    2016-06-01

    In this paper, a fuzzy physical programming (FPP) method has been introduced for solving the multi-objective Space Manoeuvre Vehicles (SMV) skip trajectory optimization problem based on hp-adaptive pseudospectral methods. The dynamic model of the SMV is elaborated and then, by employing hp-adaptive pseudospectral methods, the problem is transformed into a nonlinear programming (NLP) problem. According to the mission requirements, the solutions were calculated for each single-objective scenario. To obtain a compromise solution for each target, the fuzzy physical programming (FPP) model is proposed. The preference function is established considering the fuzzy factor of the system such that a proper compromise trajectory can be acquired. In addition, the NSGA-II is tested to obtain the Pareto-optimal solution set and verify the Pareto optimality of the FPP solution. Simulation results indicate that the proposed method is effective and feasible in terms of dealing with the multi-objective skip trajectory optimization for the SMV.

  34. Recent advances in the modeling of plasmas with the Particle-In-Cell methods

    NASA Astrophysics Data System (ADS)

    Vay, Jean-Luc; Lehe, Remi; Vincenti, Henri; Godfrey, Brendan; Lee, Patrick; Haber, Irv

    2015-11-01

    The Particle-In-Cell (PIC) approach is the method of choice for self-consistent simulations of plasmas from first principles. The fundamentals of the PIC method were established decades ago but improvements or variations are continuously being proposed. We report on several recent advances in PIC related algorithms, including: (a) detailed analysis of the numerical Cherenkov instability and its remediation, (b) analytic pseudo-spectral electromagnetic solvers in Cartesian and cylindrical (with azimuthal modes decomposition) geometries, (c) arbitrary-order finite-difference and generalized pseudo-spectral Maxwell solvers, (d) novel analysis of Maxwell's solvers' stencil variation and truncation, in application to domain decomposition strategies and implementation of Perfectly Matched Layers in high-order and pseudo-spectral solvers. Work supported by US-DOE Contracts DE-AC02-05CH11231 and the US-DOE SciDAC program ComPASS. Used resources of NERSC, supported by US-DOE Contract DE-AC02-05CH11231.

  35. Applications of the Lattice Boltzmann Method to Complex and Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Luo, Li-Shi; Qi, Dewei; Wang, Lian-Ping; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    We briefly review the method of the lattice Boltzmann equation (LBE). We show the three-dimensional LBE simulation results for a non-spherical particle in Couette flow and 16 particles in sedimentation in fluid. We compare the LBE simulation of the three-dimensional homogeneous isotropic turbulence flow in a periodic cubic box of the size 128³ with the pseudo-spectral simulation, and find that the two results agree well with each other but the LBE method is more dissipative than the pseudo-spectral method in small scales, as expected.

  36. Topics in spectral methods

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Turkel, E.

    1985-01-01

    After detailing the construction of spectral approximations to time-dependent mixed initial boundary value problems, a study is conducted of differential equations of the form ∂u/∂t = Lu + f, where for each t, u(t) belongs to a Hilbert space such that u satisfies homogeneous boundary conditions. For the sake of simplicity, it is assumed that L is an unbounded, time-independent linear operator. Attention is given to Fourier methods of both Galerkin and pseudospectral types, the Galerkin method, the pseudospectral Chebyshev and Legendre methods, the error equation, hyperbolic partial differential equations, and time discretization and iterative methods.
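
    The practical difference between the Fourier Galerkin and pseudospectral treatments surveyed here shows up in how products are handled: a collocation (pseudospectral) product aliases wavenumbers above the Nyquist limit back onto resolved modes, while a Galerkin truncation discards them. The snippet below is a tiny generic illustration of that effect, not material from the report.

```python
import numpy as np

n, L = 32, 2.0 * np.pi
x = np.arange(n) * (L / n)
u, v = np.sin(10.0 * x), np.sin(12.0 * x)   # u*v = 0.5*cos(2x) - 0.5*cos(22x)

uv_hat = np.fft.fft(u * v) / n              # pseudospectral (collocation) product
# Mode 22 lies above the Nyquist mode (16) and aliases onto mode 10 on this grid,
# so the collocation product shows spurious energy there; a Galerkin truncation
# would simply drop the cos(22x) term.
print(abs(uv_hat[2]))    # ~0.25, the physical cos(2x) content
print(abs(uv_hat[10]))   # ~0.25, purely an aliasing artifact
```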

  37. Matrix Pseudospectral Method for (Visco)Elastic Tides Modeling of Planetary Bodies

    NASA Astrophysics Data System (ADS)

    Zabranova, Eliska; Hanyk, Ladislav; Matyska, Ctirad

    2010-05-01

    We deal with the equations and boundary conditions describing deformation and gravitational potential of prestressed spherically symmetric elastic bodies by decomposing governing equations into a series of boundary value problems (BVP) for ordinary differential equations (ODE) of the second order. In contrast to traditional Runge-Kutta integration techniques, highly accurate pseudospectral schemes are employed to directly discretize the BVP on Chebyshev grids, and a set of linear algebraic equations with an almost block diagonal matrix is derived. As a consequence of keeping the governing ODEs of the second order instead of the usual first-order equations, the resulting algebraic system is half-sized, but derivatives of the model parameters are required. Moreover, they can be easily evaluated for models where structural parameters are piecewise polynomially dependent. Both accuracy and efficiency of the method are tested by evaluating the tidal Love numbers for the Earth's model PREM. Finally, we also derive complex Love numbers for models with the Maxwell viscoelastic rheology, where viscosity is a depth-dependent function. The method is applied to evaluation of the tidal Love numbers for models of Mars and Venus. The Love numbers of the two Martian models - the former optimized to cosmochemical data and the latter to the moment of inertia (Sohl and Spohn, 1997) - are h2=0.172 (0.212) and k2=0.093 (0.113). For Venus, the value of k2=0.295 (Konopliv and Yoder, 1996), obtained from the gravity-field analysis, is consistent with the results for our model with the liquid-core radius of 3110 km (Zábranová et al., 2009). Together with rapid evaluation of free oscillation periods by an analogous method, this combined matrix approach could be employed as an efficient numerical tool in structural studies of planetary bodies.

    REFERENCES:
    Konopliv, A. S. and Yoder, C. F., 1996. Venusian k2 tidal Love number from Magellan and PVO tracking data, Geophys. Res. Lett., 23, 1857-1860.
    Sohl, F. and Spohn, T., 1997. The interior structure of Mars: Implications from SNC meteorites, J. Geophys. Res., 102, 1613-1635.
    Zabranova, E., Hanyk, L. and Matyska, C., 2009. Matrix Pseudospectral Method for Elastic Tides Modeling. In: Holota, P. (Ed.): Mission and Passion: Science. A volume dedicated to Milan Bursa on the occasion of his 80th birthday. Czech National Committee of Geodesy and Geophysics, Prague, pp. 243-260.
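
    The building block of this kind of matrix pseudospectral approach is a Chebyshev differentiation matrix used to turn a second-order boundary value problem directly into a linear algebraic system. The sketch below solves a simple model BVP this way, following the standard construction from Trefethen's Spectral Methods in MATLAB; it is a generic illustration, not the tidal-deformation system itself.

```python
import numpy as np

def cheb(n):
    """Chebyshev-Gauss-Lobatto nodes and the first-order differentiation matrix."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Model BVP: u'' = -pi^2 * sin(pi*x), u(-1) = u(1) = 0, exact solution u = sin(pi*x).
n = 24
D, x = cheb(n)
D2 = D @ D
f = -np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(D2[1:-1, 1:-1], f[1:-1])   # boundary rows/columns removed
print(np.max(np.abs(u - np.sin(np.pi * x))))          # ~1e-10: spectral accuracy
```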

  38. Mapped Chebyshev Pseudo-Spectral Method for Dynamic Aero-Elastic Problem of Limit Cycle Oscillation

    NASA Astrophysics Data System (ADS)

    Im, Dong Kyun; Kim, Hyun Soon; Choi, Seongim

    2018-05-01

    A mapped Chebyshev pseudo-spectral method is developed as one of the Fourier-spectral approaches and solves nonlinear PDE systems for unsteady flows and dynamic aero-elastic problems in a given time interval, where the flows or elastic motions can be periodic, nonperiodic, or periodic with an unknown frequency. The method uses the Chebyshev polynomials of the first kind as the basis functions and redistributes the standard Chebyshev-Gauss-Lobatto collocation points more evenly by a conformal mapping function for improved numerical stability. The contributions of the method are several. It can be an order of magnitude more efficient than the conventional finite difference-based, time-accurate computation, depending on the complexity of solutions and the number of collocation points. The method reformulates the dynamic aero-elastic problem in spectral form for coupled analysis of aerodynamics and structures, which can be effective for design optimization of unsteady and dynamic problems. A limit cycle oscillation (LCO) is chosen for the validation, and a new method to determine the LCO frequency is introduced based on the minimization of a second derivative of the aero-elastic formulation. Two examples of the limit cycle oscillation are tested: a nonlinear, one degree-of-freedom mass-spring-damper system and a two degrees-of-freedom oscillating airfoil under pitch and plunge motions. Results show good agreement with those of the conventional time-accurate simulations and wind tunnel experiments.
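
    A common choice of conformal map for evening out the Chebyshev-Gauss-Lobatto clustering is the Kosloff-Tal-Ezer arcsine map; whether that particular map is the one used in this paper is an assumption made here for illustration. The sketch shows how the mapping enlarges the smallest grid spacing, which is what relaxes the explicit time-step limit.

```python
import numpy as np

def ktm_map(xi, alpha=0.99):
    """Kosloff-Tal-Ezer mapping of Chebyshev-Gauss-Lobatto points toward a more even grid."""
    return np.arcsin(alpha * xi) / np.arcsin(alpha)

n = 32
xi = np.cos(np.pi * np.arange(n + 1) / n)   # standard Chebyshev-Gauss-Lobatto points
x = ktm_map(xi)

# The minimum spacing near the boundaries grows by roughly a factor of four here.
print(np.min(np.abs(np.diff(xi))), np.min(np.abs(np.diff(x))))
```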

  39. Multistage Spectral Relaxation Method for Solving the Hyperchaotic Complex Systems

    PubMed Central

    Saberi Nik, Hassan; Rebelo, Paulo

    2014-01-01

    We present an application of a pseudospectral method for solving hyperchaotic complex systems. The proposed method, called the multistage spectral relaxation method (MSRM), is based on a technique of extending Gauss-Seidel type relaxation ideas to systems of nonlinear differential equations and using Chebyshev pseudospectral methods to solve the resulting systems on a sequence of multiple intervals. In this new application, the MSRM is used to solve famous hyperchaotic complex systems such as the hyperchaotic complex Lorenz system and the complex permanent magnet synchronous motor. We compare this approach to the Runge-Kutta based ode45 solver to show that the MSRM gives accurate results. PMID:25386624

  20. openPSTD: The open source pseudospectral time-domain method for acoustic propagation

    NASA Astrophysics Data System (ADS)

    Hornikx, Maarten; Krijnen, Thomas; van Harten, Louis

    2016-06-01

    An open source implementation of the Fourier pseudospectral time-domain (PSTD) method for computing the propagation of sound is presented, which is geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in processing time and memory usage as it allows spatial sampling close to the Nyquist criterion, thus keeping both the required spatial and temporal resolution coarse. In the implementation, the physical geometry is modeled as a composition of rectangular two-dimensional subdomains, hence initially restricting the implementation to orthogonal and two-dimensional situations. The strategy of using subdomains divides the problem domain into local subsets, which enables the simulation software to be built according to object-oriented programming best practices and leaves room for further computational parallelization. The software is built using the open source components Blender, NumPy, and Python, and has itself been published under an open source license. An option has also been included to accelerate the calculations through a partial implementation of the code on the graphics processing unit (GPU), which increases the throughput by up to fifteen times. The details of the implementation are reported, as well as the accuracy of the code.
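
    openPSTD's own source is the reference implementation; the fragment below is only a minimal sketch of the FFT-based spatial derivative that any Fourier PSTD scheme relies on, illustrating why roughly two points per wavelength suffice for band-limited fields.

    ```python
    import numpy as np

    def fourier_derivative(f, dx):
        """Spectral spatial derivative on a periodic grid via FFT (the core PSTD operation)."""
        k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)   # wavenumber vector
        return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

    # Accuracy check against an analytic derivative on a coarse grid
    x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    err = np.max(np.abs(fourier_derivative(np.sin(3.0 * x), x[1] - x[0]) - 3.0 * np.cos(3.0 * x)))
    print(err)   # round-off-level error for a band-limited field
    ```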

  1. Fuel-optimal low-thrust formation reconfiguration via Radau pseudospectral method

    NASA Astrophysics Data System (ADS)

    Li, Jing

    2016-07-01

    This paper investigates fuel-optimal low-thrust formation reconfiguration near circular orbit. Based on the Clohessy-Wiltshire equations, first-order necessary optimality conditions are derived from Pontryagin's maximum principle. The fuel-optimal impulsive solution is utilized to divide the low-thrust trajectory into thrust and coast arcs. By introducing the switching times as optimization variables, the fuel-optimal low-thrust formation reconfiguration is posed as a nonlinear programming problem (NLP) via direct transcription using the multiple-phase Radau pseudospectral method (RPM), which is then solved by the sparse nonlinear optimization software SNOPT. To facilitate optimality verification and, if necessary, further refinement of the optimized solution of the NLP, formulas for mass costate estimation and initial costate scaling are presented. Numerical examples are given to show the application of the proposed optimization method. To simplify the problem, the generic fuel-optimal low-thrust formation reconfiguration can be treated as a reconfiguration without initial or terminal coast arcs, whose optimal solutions can be obtained efficiently from the multiple-phase RPM at the cost of a slight fuel increment. Finally, the influence of the specific impulse and the maximum thrust magnitude on the fuel-optimal low-thrust formation reconfiguration is analyzed. Numerical results show the links and the differences between the fuel-optimal impulsive and low-thrust solutions.
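
    For readers unfamiliar with the RPM, the snippet below shows one small, generic ingredient, generation of Legendre-Gauss-Radau collocation points; the full transcription (differentiation matrix, quadrature weights, phase linkage constraints, SNOPT interface) is not reproduced and the function name is hypothetical.

    ```python
    import numpy as np
    from numpy.polynomial import legendre as leg

    def lgr_nodes(N):
        """Legendre-Gauss-Radau points on [-1, 1): the N roots of P_{N-1}(x) + P_N(x).
        (The flipped set on (-1, 1] is also common in Radau pseudospectral transcriptions.)"""
        c = np.zeros(N + 1)
        c[N - 1] = 1.0
        c[N] = 1.0
        return np.sort(leg.legroots(c))

    nodes = lgr_nodes(8)
    print(nodes)   # includes the endpoint -1; the opposite endpoint +1 is not a collocation point
    ```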

  2. Pseudo-spectral methods applied to gravitational collapse.

    NASA Astrophysics Data System (ADS)

    Bonazzola, S.; Marck, J.-A.

    The authors present codes for solving Newtonian gravitational collapse in spherical coordinates for the spherical, axial, and true 3D cases. Pseudo-spectral techniques are used: all quantities are expanded in Chebyshev or Legendre polynomials, or in Fourier series for the periodic parts. The codes are able to handle in a rigorous way the pseudo-singularities r = 0 and θ = 0, π. Illustrative results for each of the three cases are given.

  3. Challenges at Petascale for Pseudo-Spectral Methods on Spheres (A Last Hurrah?)

    NASA Technical Reports Server (NTRS)

    Clune, Thomas

    2011-01-01

    Conclusions: a) Proper software abstractions should enable rapid exploration of platform-specific optimizations and tradeoffs. b) Pseudo-spectral methods are marginally viable for at least some classes of petascale problems; i.e., a GPU-based machine with good bisection would be best. c) Scalability at exascale is possible, but the necessary resolution will make the algorithm prohibitively expensive. Efficient implementations of realistic global transposes are intricate and tedious in MPI. PS at petascale requires exploration of a variety of strategies for spreading local and remote communications. PGAS allows far simpler implementation and thus rapid exploration of variants.

  4. Simulation of confined magnetohydrodynamic flows with Dirichlet boundary conditions using a pseudo-spectral method with volume penalization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morales, Jorge A.; Leroy, Matthieu; Bos, Wouter J.T.

    A volume penalization approach to simulate magnetohydrodynamic (MHD) flows in confined domains is presented. Here the incompressible visco-resistive MHD equations are solved using parallel pseudo-spectral solvers in Cartesian geometries. The volume penalization technique is an immersed boundary method which is characterized by a high flexibility for the geometry of the considered flow. In the present case, it allows the use of boundary conditions other than periodic ones in a Fourier pseudo-spectral approach. The numerical method is validated and its convergence is assessed for two- and three-dimensional hydrodynamic (HD) and MHD flows by comparing the numerical results with results from the literature and analytical solutions. The test cases considered are two-dimensional Taylor–Couette flow, the z-pinch configuration, three-dimensional Orszag–Tang flow, Ohmic decay in a periodic cylinder, three-dimensional Taylor–Couette flow with and without axial magnetic field, and three-dimensional Hartmann instabilities in a cylinder with an imposed helical magnetic field. Finally, we present a magnetohydrodynamic flow simulation in toroidal geometry with a non-symmetric cross section and an imposed helical magnetic field to illustrate the potential of the method.

  5. Multiphoton ionization of many-electron atoms and highly-charged ions in intense laser fields: a relativistic time-dependent density functional theory approach

    NASA Astrophysics Data System (ADS)

    Tumakov, Dmitry A.; Telnov, Dmitry A.; Maltsev, Ilia A.; Plunien, Günter; Shabaev, Vladimir M.

    2017-10-01

    We develop an efficient numerical implementation of the relativistic time-dependent density functional theory (RTDDFT) to study multielectron highly-charged ions subject to intense linearly-polarized laser fields. The interaction with the electromagnetic field is described within the electric dipole approximation. The resulting time-dependent relativistic Kohn-Sham (RKS) equations possess an axial symmetry and are solved accurately and efficiently with the help of the time-dependent generalized pseudospectral method. As a case study, we calculate multiphoton ionization probabilities of the neutral argon atom and argon-like xenon ion. Relativistic effects are assessed by comparison of our present results with existing non-relativistic data.

  6. Optimised collision avoidance for an ultra-close rendezvous with a failed satellite based on the Gauss pseudospectral method

    NASA Astrophysics Data System (ADS)

    Chu, Xiaoyu; Zhang, Jingrui; Lu, Shan; Zhang, Yao; Sun, Yue

    2016-11-01

    This paper presents a trajectory planning algorithm to optimise the collision avoidance of a chasing spacecraft operating in ultra-close proximity to a failed satellite. The complex configuration and the tumbling motion of the failed satellite are considered. The two-spacecraft rendezvous dynamics are formulated based on the target body frame, and the collision avoidance constraints are detailed, particularly concerning the uncertainties. An optimisation solution of the approaching problem is generated using the Gauss pseudospectral method. A closed-loop control is used to track the optimised trajectory. Numerical results are provided to demonstrate the effectiveness of the proposed algorithms.

  7. Fitted Fourier-pseudospectral methods for solving a delayed reaction-diffusion partial differential equation in biology

    NASA Astrophysics Data System (ADS)

    Adam, A. M. A.; Bashier, E. B. M.; Hashim, M. H. A.; Patidar, K. C.

    2017-07-01

    In this work, we design and analyze a fitted numerical method to solve a reaction-diffusion model with time delay, namely, a delayed version of a population model which is an extension of the logistic growth (LG) equation for a food-limited population proposed by Smith [F.E. Smith, Population dynamics in Daphnia magna and a new model for population growth, Ecology 44 (1963) 651-663]. Since a closed-form analytical solution is hard to obtain, we seek a robust numerical method. The method consists of a Fourier-pseudospectral semi-discretization in space and a fitted operator implicit-explicit scheme in the temporal direction. The proposed method is analyzed for convergence and found to be unconditionally stable. Illustrative numerical results will be presented at the conference.

  8. An efficient hybrid pseudospectral/finite-difference scheme for solving the TTI pure P-wave equation

    NASA Astrophysics Data System (ADS)

    Zhan, Ge; Pestana, Reynam C.; Stoffa, Paul L.

    2013-04-01

    The pure P-wave equation for modelling and migration in tilted transversely isotropic (TTI) media has attracted more and more attention for imaging seismic data with anisotropy. Its desirable features are that it is completely free of shear-wave artefacts and that it alleviates the numerical instabilities generally suffered by some systems of coupled equations. However, due to several forward-backward Fourier transforms in wavefield updating at each time step, the computational cost is significant, which hampers its widespread use. We propose to use a hybrid pseudospectral (PS) and finite-difference (FD) scheme to solve the pure P-wave equation. In the hybrid solution, most of the cost-consuming wavenumber terms in the equation are replaced by inexpensive FD operators, which in turn accelerates the computation and reduces the computational cost. To demonstrate the cost saving of the new scheme, 2D and 3D reverse-time migration (RTM) examples using the hybrid solution to the pure P-wave equation are carried out, and the respective runtimes are listed and compared. Numerical results show that the hybrid strategy demands less computation time and is faster than using the PS method alone. Furthermore, this new TTI RTM algorithm with the hybrid method is computationally less expensive than that with the FD solution to the conventional TTI coupled equations.
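
    A toy comparison of the two derivative operators the hybrid scheme trades between is sketched below (a generic 1-D periodic example, not the authors' TTI operators): a cheap second-order FD stencil versus an FFT-based spectral second derivative.

    ```python
    import numpy as np

    def d2_fd(u, dx):
        """Second-order finite-difference d2u/dx2 on a periodic grid: cheap, but dispersive at high wavenumbers."""
        return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2

    def d2_ps(u, dx):
        """Pseudospectral d2u/dx2 via FFT: exact for band-limited fields, at O(N log N) cost per call."""
        k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
        return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))

    x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    u = np.sin(20.0 * x)                                   # high-wavenumber test field
    exact = -400.0 * u
    print(np.max(np.abs(d2_fd(u, x[1] - x[0]) - exact)))   # large FD error near the grid Nyquist limit
    print(np.max(np.abs(d2_ps(u, x[1] - x[0]) - exact)))   # spectral error at round-off level
    ```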

  9. A mixed pseudospectral/finite difference method for a thermally driven fluid in a nonuniform gravitational field

    NASA Technical Reports Server (NTRS)

    Macaraeg, M. G.

    1985-01-01

    A numerical study of the steady, axisymmetric flow in a heated, rotating spherical shell is conducted to model the Atmospheric General Circulation Experiment (AGCE) proposed to run aboard a later shuttle mission. The AGCE will consist of concentric rotating spheres confining a dielectric fluid. By imposing a dielectric field across the fluid a radial body force will be created. The numerical solution technique is based on the incompressible Navier-Stokes equations. In the method a pseudospectral technique is used in the latitudinal direction, and a second-order accurate finite difference scheme discretizes time and radial derivatives. This paper discusses the development and performance of this numerical scheme for the AGCE which has been modelled in the past only by pure FD formulations. In addition, previous models have not investigated the effect of using a dielectric force to simulate terrestrial gravity. The effect of this dielectric force on the flow field is investigated as well as a parameter study of varying rotation rates and boundary temperatures. Among the effects noted are the production of larger velocities and enhanced reversals of radial temperature gradients for a body force generated by the electric field.

  10. Spectral simulations of an axisymmetric force-free pulsar magnetosphere

    NASA Astrophysics Data System (ADS)

    Cao, Gang; Zhang, Li; Sun, Sineng

    2016-02-01

    A pseudo-spectral method with an absorbing outer boundary is used to solve a set of time-dependent force-free equations. In this method, both electric and magnetic fields are expanded in terms of the vector spherical harmonic (VSH) functions in spherical geometry and the divergence-free state of the magnetic field is enforced analytically by a projection method. Our simulations show that the Deutsch vacuum solution and the Michel monopole solution can be reproduced well by our pseudo-spectral code. Further, the method is used to present a time-dependent simulation of the force-free pulsar magnetosphere for an aligned rotator. The simulations show that the current sheet in the equatorial plane can be resolved well and the spin-down luminosity obtained in the steady state is in good agreement with the value given by Spitkovsky.
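
    The paper enforces div B = 0 analytically through the VSH expansion in spherical geometry; as a loose analogue only, the sketch below shows the standard Fourier-space projection that removes the compressive part of a periodic 2-D field on a square Cartesian grid (all names and the grid setup are assumptions for illustration).

    ```python
    import numpy as np

    def project_divergence_free(Bx, By, dx):
        """Spectral Helmholtz projection on a periodic square grid: B_clean = B - k (k . B) / |k|^2."""
        kx = 2.0 * np.pi * np.fft.fftfreq(Bx.shape[0], d=dx)[:, None]
        ky = 2.0 * np.pi * np.fft.fftfreq(Bx.shape[1], d=dx)[None, :]
        k2 = kx ** 2 + ky ** 2
        k2[0, 0] = 1.0                                    # avoid division by zero for the mean mode
        Bxh, Byh = np.fft.fft2(Bx), np.fft.fft2(By)
        div = kx * Bxh + ky * Byh
        Bxh -= kx * div / k2
        Byh -= ky * div / k2
        return np.real(np.fft.ifft2(Bxh)), np.real(np.fft.ifft2(Byh))
    ```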

  11. Discrete conservation laws and the convergence of long time simulations of the mkdv equation

    NASA Astrophysics Data System (ADS)

    Gorria, C.; Alejo, M. A.; Vega, L.

    2013-02-01

    Pseudospectral collocation methods and finite difference methods have been used for approximating an important family of soliton-like solutions of the mKdV equation. These solutions present a structural instability which makes it difficult to approximate their evolution over long time intervals with enough accuracy. Standard numerical methods do not guarantee convergence to the proper solution of the initial value problem and often fail by approaching solutions associated with different initial conditions. In this context, numerical schemes that preserve the discrete invariants related to some conservation laws of this equation produce better results than methods which only take care of a high consistency order. Pseudospectral spatial discretization appears as the most robust of the numerical methods, but finite difference schemes are useful in order to analyze the role played by the conservation of the invariants in the convergence.

  12. Time domain simulation of harmonic ultrasound images and beam patterns in 3D using the k-space pseudospectral method.

    PubMed

    Treeby, Bradley E; Tumen, Mustafa; Cox, B T

    2011-01-01

    A k-space pseudospectral model is developed for the fast full-wave simulation of nonlinear ultrasound propagation through heterogeneous media. The model uses a novel equation of state to account for nonlinearity in addition to power law absorption. The spectral calculation of the spatial gradients enables a significant reduction in the number of required grid nodes compared to finite difference methods. The model is parallelized using a graphical processing unit (GPU) which allows the simulation of individual ultrasound scan lines using a 256 x 256 x 128 voxel grid in less than five minutes. Several numerical examples are given, including the simulation of harmonic ultrasound images and beam patterns using a linear phased array transducer.
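
    The full model (nonlinearity, power-law absorption, perfectly matched layers, GPU parallelization) is far beyond a snippet; the fragment below only sketches the k-space idea itself, replacing the plain spectral derivative factor ik by ik times sinc(c0*k*dt/2), which makes the discrete update exact for a homogeneous lossless medium. This is a generic 1-D illustration, not the authors' code.

    ```python
    import numpy as np

    def kspace_gradient(p, dx, dt, c0):
        """1-D spectral gradient with the k-space correction sinc(c0*k*dt/2)."""
        k = 2.0 * np.pi * np.fft.fftfreq(p.size, d=dx)
        kappa = np.sinc(c0 * k * dt / (2.0 * np.pi))   # np.sinc(x) = sin(pi*x)/(pi*x)
        return np.real(np.fft.ifft(1j * k * kappa * np.fft.fft(p)))
    ```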

  13. Pseudospectral Model for Hybrid PIC Hall-effect Thruster Simulation

    DTIC Science & Technology

    2015-07-01

    ... and Fernandez (hybrid-PIC). This work follows the example of Lam and Fernandez but substitutes a spectral description in the azimuthal direction ... of a pseudospectral azimuthal-axial hybrid-PIC HET code which is designed to explicitly resolve and filter azimuthal fluctuations in the ...

  14. Numerical investigation of field enhancement by metal nano-particles using a hybrid FDTD-PSTD algorithm.

    PubMed

    Pernice, W H; Payne, F P; Gallagher, D F

    2007-09-03

    We present a novel numerical scheme for the simulation of the field enhancement by metal nano-particles in the time domain. The algorithm is based on a combination of the finite-difference time-domain method and the pseudo-spectral time-domain method for dispersive materials. The hybrid solver leads to an efficient subgridding algorithm that does not suffer from spurious field spikes as do FDTD schemes. Simulation of the field enhancement by gold particles shows the expected exponential field profile. The enhancement factors are computed for single particles and particle arrays. Due to the geometry conforming mesh the algorithm is stable for long integration times and thus suitable for the simulation of resonance phenomena in coupled nano-particle structures.

  15. Spectral analysis of structure functions and their scaling exponents in forced isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Linkmann, Moritz; McComb, W. David; Yoffe, Samuel; Berera, Arjun

    2014-11-01

    The pseudospectral method, in conjunction with a new technique for obtaining scaling exponents ζ_n from the structure functions S_n(r), is presented as an alternative to the extended self-similarity (ESS) method and the use of generalized structure functions. We propose plotting the ratio |S_n(r)/S_3(r)| against the separation r in accordance with a standard technique for analysing experimental data. This method differs from the ESS technique, which plots the generalized structure functions G_n(r) against G_3(r), where G_3(r) ~ r. Using our method for the particular case of S_2(r) we obtain the new result that the exponent ζ_2 decreases as the Taylor-Reynolds number increases, with ζ_2 → 0.679 ± 0.013 as R_λ → ∞. This supports the idea of finite-viscosity corrections to the K41 prediction for S_2, and is the opposite of the result obtained by ESS. The pseudospectral method permits the forcing to be taken into account exactly through the calculation of the energy input in real space from the work spectrum of the stirring forces. The combination of the viscous and the forcing corrections as calculated by the pseudospectral method is shown to account for the deviation of S_3 from Kolmogorov's "four-fifths" law at all scales. This work has made use of the resources provided by the UK supercomputing service HECToR, made available through the Edinburgh Compute and Data Facility (ECDF). A. B. is supported by STFC, S. R. Y. and M. F. L. are funded by EPSRC.

  16. Computational aspects of pseudospectral Laguerre approximations

    NASA Technical Reports Server (NTRS)

    Funaro, Daniele

    1989-01-01

    Pseudospectral approximations in unbounded domains by Laguerre polynomials lead to ill-conditioned algorithms. A scaling function and appropriate numerical procedures are introduced in order to limit these unpleasant phenomena.
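
    A quick numerical illustration of the underlying difficulty (the scaling function proposed in the report is not reproduced here): Gauss-Laguerre nodes spread over an interval of width roughly 4N while the associated quadrature weights decay toward machine zero, so unscaled Laguerre pseudospectral operators mix enormously different magnitudes.

    ```python
    import numpy as np
    from scipy.special import roots_laguerre

    x, w = roots_laguerre(60)
    print(x.min(), x.max())   # nodes range from near 0 to roughly 4*N
    print(w.max(), w.min())   # outermost weights are vanishingly small, a huge dynamic range
    ```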

  17. Balancing the Power-to-Load Ratio for a Novel Variable Geometry Wave Energy Converter with Nonideal Power Take-Off in Regular Waves: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom, Nathan M; Yu, Yi-Hsiang; Wright, Alan D

    This work attempts to balance power absorption against structural loading for a novel variable geometry wave energy converter. The variable geometry consists of four identical flaps that will be opened in ascending order starting with the flap closest to the seafloor and moving to the free surface. The influence of a pitch motion constraint on power absorption when utilizing a nonideal power take-off (PTO) is examined and found to reduce the losses associated with bidirectional energy flow. The power-to-load ratio is evaluated using pseudo-spectral control to determine the optimum PTO torque based on a multiterm objective function. The pseudo-spectral optimal control problem is extended to include load metrics in the objective function, which may now consist of competing terms. Separate penalty weights are attached to the surge-foundation force and PTO control torque to tune the optimizer performance to emphasize either power absorption or load shedding. PTO efficiency is not included in the objective function, but the penalty weights are utilized to limit the force and torque amplitudes, thereby reducing losses associated with bidirectional energy flow. Results from pseudo-spectral control demonstrate that shedding a portion of the available wave energy can provide greater reductions in structural loads and reactive power.

  18. THE PSTD ALGORITHM: A TIME-DOMAIN METHOD REQUIRING ONLY TWO CELLS PER WAVELENGTH. (R825225)

    EPA Science Inventory

    A pseudospectral time-domain (PSTD) method is developed for solutions of Maxwell's equations. It uses the fast Fourier transform (FFT), instead of finite differences on conventional finite-difference-time-domain (FDTD) methods, to represent spatial derivatives. Because the Fourie...

  19. Solution of the one-dimensional consolidation theory equation with a pseudospectral method

    USGS Publications Warehouse

    Sepulveda, N.; ,

    1991-01-01

    The one-dimensional consolidation theory equation is solved for an aquifer system using a pseudospectral method. The spatial derivatives are computed using Fast Fourier Transforms and the time derivative is solved using a fourth-order Runge-Kutta scheme. The computer model calculates compaction based on the void ratio changes accumulated during the simulated periods of time. Compactions and expansions resulting from groundwater withdrawals and recharges are simulated for two observation wells in Santa Clara Valley and two in San Joaquin Valley, California. Field data previously published are used to obtain mean values for the soil grain density and the compression index and to generate depth-dependent profiles for hydraulic conductivity and initial void ratio. The water-level plots for the wells studied were digitized and used to obtain the time dependent profiles of effective stress.
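
    The USGS model is non-periodic and has depth-dependent coefficients, so the sketch below is only a generic toy version of the same pairing, FFT-based spatial derivatives with classical fourth-order Runge-Kutta time stepping, applied to a 1-D periodic diffusion problem; all parameter values are hypothetical.

    ```python
    import numpy as np

    def spectral_d2(u, k):
        """Second spatial derivative on a periodic grid via FFT."""
        return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))

    def rk4_step(u, dt, k, cv):
        """Classical fourth-order Runge-Kutta step for du/dt = cv * d2u/dz2."""
        f = lambda v: cv * spectral_d2(v, k)
        s1 = f(u)
        s2 = f(u + 0.5 * dt * s1)
        s3 = f(u + 0.5 * dt * s2)
        s4 = f(u + dt * s3)
        return u + dt * (s1 + 2.0 * s2 + 2.0 * s3 + s4) / 6.0

    # Toy setup: a diffusing excess-pressure pulse on a periodic column (cv plays the role of a consolidation coefficient)
    N, L, cv, dt = 128, 1.0, 1e-3, 1e-3
    z = np.linspace(0.0, L, N, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    u = np.exp(-100.0 * (z - 0.5 * L) ** 2)
    for _ in range(100):
        u = rk4_step(u, dt, k, cv)
    ```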

  20. Pseudospectral calculation of helium wave functions, expectation values, and oscillator strength

    NASA Astrophysics Data System (ADS)

    Grabowski, Paul E.; Chernoff, David F.

    2011-10-01

    We show that the pseudospectral method is a powerful tool for finding precise solutions of Schrödinger's equation for two-electron atoms with general angular momentum. Realizing the method's full promise for atomic calculations requires special handling of singularities due to two-particle Coulomb interactions. We give a prescription for choosing coordinates and subdomains whose efficacy we illustrate by solving several challenging problems. One test centers on the determination of the nonrelativistic electric dipole oscillator strength for the helium 1¹S→2¹P transition. The result achieved, 0.27616499(27), is comparable to the best in the literature. The formally equivalent length, velocity, and acceleration expressions for the oscillator strength all yield roughly the same accuracy. We also calculate a diverse set of helium ground-state expectation values, reaching near state-of-the-art accuracy without the necessity of implementing any special-purpose numerics. These successes imply that general matrix elements are directly and reliably calculable with pseudospectral methods. A striking result is that all the relevant quantities tested in this paper (energy eigenvalues, S-state expectation values, and a bound-bound dipole transition between the lowest energy S and P states) converge exponentially with increasing resolution and at roughly the same rate. Each individual calculation samples and weights the configuration space wave function uniquely but all behave in a qualitatively similar manner. These results suggest that the method has great promise for similarly accurate treatment of few-particle systems.

  1. A linear shock cell model for non-circular jets using conformal mapping with a pseudo-spectral hybrid scheme

    NASA Technical Reports Server (NTRS)

    Bhat, Thonse R. S.; Baty, Roy S.; Morris, Philip J.

    1990-01-01

    The shock structure in non-circular supersonic jets is predicted using a linear model. This model includes the effects of the finite thickness of the mixing layer and the turbulence in the jet shear layer. A numerical solution is obtained using a conformal mapping grid generation scheme with a hybrid pseudo-spectral discretization method. The uniform pressure perturbation at the jet exit is approximated by a Fourier-Mathieu series. The pressure at downstream locations is obtained from an eigenfunction expansion that is matched to the pressure perturbation at the jet exit. Results are presented for a circular jet and for an elliptic jet of aspect ratio 2.0. Comparisons are made with experimental data.

  2. Ab initio quantum chemical calculation of electron transfer matrix elements for large molecules

    NASA Astrophysics Data System (ADS)

    Zhang, Linda Yu; Friesner, Richard A.; Murphy, Robert B.

    1997-07-01

    Using a diabatic state formalism and pseudospectral numerical methods, we have developed an efficient ab initio quantum chemical approach to the calculation of electron transfer matrix elements for large molecules. The theory is developed at the Hartree-Fock level and validated by comparison with results in the literature for small systems. As an example of the power of the method, we calculate the electronic coupling between two bacteriochlorophyll molecules in various intermolecular geometries. Only a single self-consistent field (SCF) calculation on each of the monomers is needed to generate coupling matrix elements for all of the molecular pairs. The largest calculations performed, utilizing 1778 basis functions, required ˜14 h on an IBM 390 workstation. This is considerably less cpu time than would be necessitated with a supermolecule adiabatic state calculation and a conventional electronic structure code.

  3. A shifted Jacobi collocation algorithm for wave type equations with non-local conservation conditions

    NASA Astrophysics Data System (ADS)

    Doha, Eid H.; Bhrawy, Ali H.; Abdelkawy, Mohammed A.

    2014-09-01

    In this paper, we propose an efficient spectral collocation algorithm to solve numerically wave-type equations subject to initial, boundary and non-local conservation conditions. The shifted Jacobi pseudospectral approximation is investigated for the discretization of the spatial variable of such equations. It possesses spectral accuracy in the spatial variable. The shifted Jacobi-Gauss-Lobatto (SJ-GL) quadrature rule is established for treating the non-local conservation conditions, and then the problem with its initial and non-local boundary conditions is reduced to a system of second-order ordinary differential equations in the temporal variable. This system is solved by a two-stage fourth-order A-stable implicit RK scheme. Five numerical examples with comparisons are given. The computational results demonstrate that the proposed algorithm is more accurate than the finite difference method, the method of lines and the spline collocation approach.

  4. Model-free simulations of turbulent reactive flows

    NASA Technical Reports Server (NTRS)

    Givi, Peyman

    1989-01-01

    The current computational methods for solving transport equations of turbulent reacting single-phase flows are critically reviewed, with primary attention given to those methods that lead to model-free simulations. In particular, consideration is given to direct numerical simulations using spectral (Galerkin) and pseudospectral (collocation) methods, spectral element methods, and Lagrangian methods. The discussion also covers large eddy simulations and turbulence modeling.

  5. Fast Optimization for Aircraft Descent and Approach Trajectory

    NASA Technical Reports Server (NTRS)

    Luchinsky, Dmitry G.; Schuet, Stefan; Brenton, J.; Timucin, Dogan; Smith, David; Kaneshige, John

    2017-01-01

    We address the problem of on-line scheduling of the aircraft descent and approach trajectory. We formulate a general multiphase optimal control problem for optimization of the descent trajectory and review available methods for its solution. We develop a fast algorithm for the solution of this problem using two key components: (i) fast inference of the dynamical and control variables of the descending trajectory from low-dimensional flight profile data and (ii) efficient local search for the resulting reduced-dimensionality nonlinear optimization problem. We compare the performance of the proposed algorithm with the numerical solution obtained using the optimal control toolbox General Pseudospectral Optimal Control Software. We present results of the solution of the scheduling problem for aircraft descent using the novel fast algorithm and discuss its future applications.

  6. The extended Fourier pseudospectral time-domain method for atmospheric sound propagation.

    PubMed

    Hornikx, Maarten; Waxler, Roger; Forssén, Jens

    2010-10-01

    An extended Fourier pseudospectral time-domain (PSTD) method is presented to model atmospheric sound propagation by solving the linearized Euler equations. In this method, evaluation of spatial derivatives is based on an eigenfunction expansion. Evaluation on a spatial grid requires only two spatial points per wavelength. Time iteration is done using a low-storage optimized six-stage Runge-Kutta method. This method is applied to two-dimensional non-moving media models, one with screens and one for an urban canyon, with generally high accuracy in both amplitude and phase. For a moving atmosphere, accurate results have been obtained in models with both a uniform and a logarithmic wind velocity profile over a rigid ground surface and in the presence of a screen. The method has also been validated for three-dimensional sound propagation over a screen. For that application, the developed method is on the order of 100 times faster than the second-order-accurate FDTD solution to the linearized Euler equations. The method is found to be well suited for atmospheric sound propagation simulations where effects of complex meteorology and straight rigid boundary surfaces are to be investigated.

  7. Trajectory optimization for lunar soft landing with complex constraints

    NASA Astrophysics Data System (ADS)

    Chu, Huiping; Ma, Lin; Wang, Kexin; Shao, Zhijiang; Song, Zhengyu

    2017-11-01

    A unified trajectory optimization framework with initialization strategies is proposed in this paper for lunar soft landing for various missions with specific requirements. Two main missions of interest are Apollo-like Landing from low lunar orbit and Vertical Takeoff Vertical Landing (a promising mobility method) on the lunar surface. The trajectory optimization is characterized by difficulties arising from discontinuous thrust, multi-phase connections, jump of attitude angle, and obstacles avoidance. Here R-function is applied to deal with the discontinuities of thrust, checkpoint constraints are introduced to connect multiple landing phases, attitude angular rate is designed to get rid of radical changes, and safeguards are imposed to avoid collision with obstacles. The resulting dynamic problems are generally with complex constraints. The unified framework based on Gauss Pseudospectral Method (GPM) and Nonlinear Programming (NLP) solver are designed to solve the problems efficiently. Advanced initialization strategies are developed to enhance both the convergence and computation efficiency. Numerical results demonstrate the adaptability of the framework for various landing missions, and the performance of successful solution of difficult dynamic problems.

  8. A new method of imposing boundary conditions for hyperbolic equations

    NASA Technical Reports Server (NTRS)

    Funaro, D.

    1987-01-01

    A new method to impose boundary conditions for pseudospectral approximations to hyperbolic equations is suggested. This method involves the collocation of the equation at the boundary nodes as well as satisfying boundary conditions. Stability and convergence results are proven for the Chebyshev approximation of linear scalar hyperbolic equations. The eigenvalues of this method applied to parabolic equations are shown to be real and negative.

  9. An adjoint-based framework for maximizing mixing in binary fluids

    NASA Astrophysics Data System (ADS)

    Eggl, Maximilian; Schmid, Peter

    2017-11-01

    Mixing in the inertial, but laminar parameter regime is a common application in a wide range of industries. Enhancing the efficiency of mixing processes thus has a fundamental effect on product quality, material homogeneity and, last but not least, production costs. In this project, we address mixing efficiency in the above mentioned regime (Reynolds number Re = 1000 , Peclet number Pe = 1000) by developing and demonstrating an algorithm based on nonlinear adjoint looping that minimizes the variance of a passive scalar field which models our binary Newtonian fluids. The numerical method is based on the FLUSI code (Engels et al. 2016), a Fourier pseudo-spectral code, which we modified and augmented by scalar transport and adjoint equations. Mixing is accomplished by moving stirrers which are numerically modeled using a penalization approach. In our two-dimensional simulations we consider rotating circular and elliptic stirrers and extract optimal mixing strategies from the iterative scheme. The case of optimizing shape and rotational speed of the stirrers will be demonstrated.

  10. INFFTM: Fast evaluation of 3d Fourier series in MATLAB with an application to quantum vortex reconnections

    NASA Astrophysics Data System (ADS)

    Caliari, Marco; Zuccher, Simone

    2017-04-01

    Although Fourier series approximation is ubiquitous in computational physics owing to the Fast Fourier Transform (FFT) algorithm, efficient techniques for the fast evaluation of a three-dimensional truncated Fourier series at a set of arbitrary points are quite rare, especially in MATLAB language. Here we employ the Nonequispaced Fast Fourier Transform (NFFT, by J. Keiner, S. Kunis, and D. Potts), a C library designed for this purpose, and provide a Matlab® and GNU Octave interface that makes NFFT easily available to the Numerical Analysis community. We test the effectiveness of our package in the framework of quantum vortex reconnections, where pseudospectral Fourier methods are commonly used and local high resolution is required in the post-processing stage. We show that the efficient evaluation of a truncated Fourier series at arbitrary points provides excellent results at a computational cost much smaller than carrying out a numerical simulation of the problem on a sufficiently fine regular grid that can reproduce comparable details of the reconnecting vortices.
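
    A naive direct evaluation of a truncated 1-D Fourier series at arbitrary points is easy to write (shown below as a plain NumPy sketch with an assumed coefficient ordering) but costs O(N*M); the NFFT library wrapped by the package performs the same task in roughly O(N log N + M), which is what makes local high-resolution post-processing affordable in 3-D.

    ```python
    import numpy as np

    def eval_fourier_series(coeffs, pts):
        """Directly evaluate sum_k c_k exp(i*k*x) at arbitrary points; k runs from -N//2 to N - N//2 - 1."""
        ks = np.arange(-(coeffs.size // 2), coeffs.size - coeffs.size // 2)
        return np.exp(1j * np.outer(pts, ks)) @ coeffs

    # cos(3x) = 0.5*exp(3ix) + 0.5*exp(-3ix), evaluated at random (non-grid) points
    N = 9
    ks = np.arange(-(N // 2), N - N // 2)
    coeffs = np.zeros(N, dtype=complex)
    coeffs[ks == 3] = 0.5
    coeffs[ks == -3] = 0.5
    pts = np.random.uniform(0.0, 2.0 * np.pi, 5)
    print(np.max(np.abs(eval_fourier_series(coeffs, pts) - np.cos(3.0 * pts))))   # round-off-level error
    ```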

  11. Point-particle method to compute diffusion-limited cellular uptake.

    PubMed

    Sozza, A; Piazza, F; Cencini, M; De Lillo, F; Boffetta, G

    2018-02-01

    We present an efficient point-particle approach to simulate reaction-diffusion processes of spherical absorbing particles in the diffusion-limited regime, as simple models of cellular uptake. The exact solution for a single absorber is used to calibrate the method, linking the numerical parameters to the physical particle radius and uptake rate. We study the configurations of multiple absorbers of increasing complexity to examine the performance of the method by comparing our simulations with available exact analytical or numerical results. We demonstrate the potential of the method to resolve the complex diffusive interactions, here quantified by the Sherwood number, measuring the uptake rate in terms of that of isolated absorbers. We implement the method in a pseudospectral solver that can be generalized to include fluid motion and fluid-particle interactions. As a test case of the presence of a flow, we consider the uptake rate by a particle in a linear shear flow. Overall, our method represents a powerful and flexible computational tool that can be employed to investigate many complex situations in biology, chemistry, and related sciences.

  12. Error analysis for spectral approximation of the Korteweg-De Vries equation

    NASA Technical Reports Server (NTRS)

    Maday, Y.

    1987-01-01

    The conservation and convergence properties of spectral Fourier methods for the numerical approximation of the Korteweg-de Vries equation are analyzed. It is proved that the (aliased) collocation pseudospectral method enjoys the same convergence properties as the spectral Galerkin method, which is less effective from the computational point of view. This result provides a precise mathematical answer to a question raised by several authors in recent years.

  13. A New Finite Difference Q-compensated RTM Algorithm in Tilted Transverse Isotropic (TTI) Media

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Hu, W.; Ning, J.

    2017-12-01

    An attenuating anisotropic geological body is difficult to image with conventional migration methods. In such scenarios, the recorded seismic data suffer greatly from both amplitude decay and phase distortion, resulting in degraded resolution, poor illumination and incorrect migration depth in the imaging results. To efficiently obtain high quality images, we propose a novel TTI QRTM algorithm based on the Generalized Standard Linear Solid model combined with a unique multi-stage optimization technique to simultaneously correct the decayed amplitude and the distorted phase velocity. Numerical tests (shown in the figure) demonstrate that our TTI QRTM algorithm effectively corrects migration depth, significantly improves illumination, and enhances resolution within and below low-Q regions. The result of our new method is very close to the reference RTM image, while QRTM without TTI cannot produce a correct image. Compared to the conventional QRTM method based on a pseudo-spectral operator for fractional Laplacian evaluation, our method is more computationally efficient for large-scale applications and more suitable for GPU acceleration. With the current multi-stage dispersion optimization scheme, this TTI QRTM method performs best in the frequency range 10-70 Hz, and could be used in a wider frequency range. Furthermore, as this method can also handle frequency-dependent Q, it has potential to be applied in imaging deep structures where low Q exists, such as subduction zones, volcanic zones or fault zones with passive source observations.

  14. Pseudospectral modeling and dispersion analysis of Rayleigh waves in viscoelastic media

    USGS Publications Warehouse

    Zhang, K.; Luo, Y.; Xia, J.; Chen, C.

    2011-01-01

    Multichannel Analysis of Surface Waves (MASW) is one of the most widely used techniques in environmental and engineering geophysics to determine shear-wave velocities and dynamic properties, and it is based on the elastic layered system theory. Wave propagation in the Earth, however, has been recognized as viscoelastic, and the propagation of Rayleigh waves presents substantial differences in viscoelastic media as compared with elastic media. Therefore, it is necessary to carry out numerical simulation and dispersion analysis of Rayleigh waves in viscoelastic media to better understand Rayleigh-wave behaviors in the real world. We apply a pseudospectral method to the calculation of the spatial derivatives using a Chebyshev difference operator in the vertical direction and a Fourier difference operator in the horizontal direction, based on the velocity-stress elastodynamic equations and the relations of linear viscoelastic solids. This approach stretches the spatial discrete grid to have a minimum grid size near the free surface so that high accuracy and resolution are achieved at the free surface, which allows an effective incorporation of the free surface boundary conditions since the Chebyshev method is nonperiodic. We first use an elastic homogeneous half-space model to demonstrate the accuracy of the pseudospectral method by comparison with the analytical solution, and verify the correctness of the numerical modeling results for a viscoelastic half-space by comparing the phase velocities of Rayleigh waves between the theoretical values and the dispersive image generated by high-resolution linear Radon transform. We then simulate three types of two-layer models to analyze dispersive-energy characteristics for near-surface applications. Results demonstrate that the phase velocity of Rayleigh waves in viscoelastic media is relatively higher than in elastic media and that the fundamental mode increases by 10-16% when the frequency is above 10 Hz, due to the velocity dispersion of P and S waves. © 2011 Elsevier Ltd.

  15. Jupyter Notebooks for Earth Sciences: An Interactive Training Platform for Seismology

    NASA Astrophysics Data System (ADS)

    Igel, H.; Chow, B.; Donner, S.; Krischer, L.; van Driel, M.; Tape, C.

    2017-12-01

    We have initiated a community platform (http://www.seismo-live.org) where Python-based Jupyter notebooks (https://jupyter.org) can be accessed and run without necessary downloads or local software installations. The increasingly popular Jupyter notebooks allow the combination of markup language, graphics, and equations with interactive, executable Python code examples. Jupyter notebooks are a powerful and easy-to-grasp tool for students to develop entire projects, scientists to collaborate and efficiently interchange evolving workflows, and trainers to develop efficient practical material. Utilizing the tmpnb project (https://github.com/jupyter/tmpnb), we link the power of Jupyter notebooks with an underlying server, such that notebooks can be run from anywhere, even on smart phones. We demonstrate the potential with notebooks for 1) learning the programming language Python, 2) basic signal processing, 3) an introduction to the ObsPy library (https://obspy.org) for seismology, 4) seismic noise analysis, 5) an entire suite of notebooks for computational seismology (the finite-difference method, pseudospectral methods, finite/spectral element methods, the finite-volume and the discontinuous Galerkin methods, Instaseis), 6) rotational seismology, 7) making results in papers fully reproducible, 8) a rate-and-state friction toolkit, 9) glacial seismology. The platform is run as a community project using Github. Submission of complementary Jupyter notebooks is encouraged. Extension in the near future include linear(-ized) and nonlinear inverse problems.

  16. On the boundary treatment in spectral methods for hyperbolic systems

    NASA Technical Reports Server (NTRS)

    Canuto, C.; Quarteroni, A.

    1986-01-01

    Spectral methods were successfully applied to the simulation of slow transients in gas transportation networks. Implicit time advancing techniques are naturally suggested by the nature of the problem. The correct treatment of the boundary conditions is clarified in order to avoid any stability restriction originated by the boundaries. The Beam and Warming and the Lerat schemes are unconditionally linearly stable when used with a Chebyshev pseudospectral method. Engineering accuracy for a gas transportation problem is achieved at Courant numbers up to 100.

  17. On the boundary treatment in spectral methods for hyperbolic systems

    NASA Technical Reports Server (NTRS)

    Canuto, Claudio; Quarteroni, Alfio

    1987-01-01

    Spectral methods were successfully applied to the simulation of slow transients in gas transportation networks. Implicit time advancing techniques are naturally suggested by the nature of the problem. The correct treatment of the boundary conditions is clarified in order to avoid any stability restriction originated by the boundaries. The Beam and Warming and the Lerat schemes are unconditionally linearly stable when used with a Chebyshev pseudospectral method. Engineering accuracy for a gas transportation problem is achieved at Courant numbers up to 100.

  18. Hybrid Solution of Stochastic Optimal Control Problems Using Gauss Pseudospectral Method and Generalized Polynomial Chaos Algorithms

    DTIC Science & Technology

    2012-03-01

    ... stability and performance criteria. In the 1960's, Kalman introduced the Linear Quadratic Regulator (LQR) method using an integral performance index ... feedback of the state variables and was able to apply this method to time-varying and Multi-Input Multi-Output (MIMO) systems. Kalman further showed ...

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnack, D.D.; Lottati, I.; Mikic, Z.

    The authors describe TRIM, an MHD code which uses a finite volume discretization of the MHD equations on an unstructured adaptive grid of triangles in the poloidal plane. They apply it to problems related to modeling tokamak toroidal plasmas. The toroidal direction is treated by a pseudospectral method. Care was taken to center variables appropriately on the mesh and to construct a self-adjoint diffusion operator for cell-centered variables.

  20. Translation and integration of numerical atomic orbitals in linear molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinäsmäki, Sami, E-mail: sami.heinasmaki@gmail.com

    2014-02-14

    We present algorithms for translation and integration of atomic orbitals for LCAO calculations in linear molecules. The method applies to arbitrary radial functions given on a numerical mesh. The algorithms are based on pseudospectral differentiation matrices in two dimensions and the corresponding two-dimensional Gaussian quadratures. As a result, multicenter overlap and Coulomb integrals can be evaluated effectively.

  1. A mixed pseudospectral/finite difference method for a thermally driven fluid in a nonuniform gravitational field

    NASA Technical Reports Server (NTRS)

    Macaraeg, M. G.

    1985-01-01

    A numerical study of the steady, axisymmetric flow in a heated, rotating spherical shell is conducted to model the Atmospheric General Circulation Experiment (AGCE) proposed to run aboard a later Shuttle mission. The AGCE will consist of concentric rotating spheres confining a dielectric fluid. By imposing a dielectric field across the fluid a radial body force will be created. The numerical solution technique is based on the incompressible Navier-Stokes equations. In the method a pseudospectral technique is used in the latitudinal direction, and a second-order accurate finite difference scheme discretizes time and radial derivatives. This paper discusses the development and performance of this numerical scheme for the AGCE which has been modeled in the past only by pure FD formulations. In addition, previous models have not investigated the effect of using a dielectric force to simulate terrestrial gravity. The effect of this dielectric force on the flow field is investigated as well as a parameter study of varying rotation rates and boundary temperatures. Among the effects noted are the production of larger velocities and enhanced reversals of radial temperature gradients for a body force generated by the electric field.

  2. High precision computing with charge domain devices and a pseudo-spectral method therefor

    NASA Technical Reports Server (NTRS)

    Barhen, Jacob (Inventor); Toomarian, Nikzad (Inventor); Fijany, Amir (Inventor); Zak, Michail (Inventor)

    1997-01-01

    The present invention enhances the bit resolution of a CCD/CID MVM processor by storing each bit of each matrix element as a separate CCD charge packet. The bits of each input vector are separately multiplied by each bit of each matrix element in massive parallelism and the resulting products are combined appropriately to synthesize the correct product. In another aspect of the invention, such arrays are employed in a pseudo-spectral method of the invention, in which partial differential equations are solved by expressing each derivative analytically as matrices, and the state function is updated at each computation cycle by multiplying it by the matrices. The matrices are treated as synaptic arrays of a neural network and the state function vector elements are treated as neurons. In a further aspect of the invention, moving target detection is performed by driving the soliton equation with a vector of detector outputs. The neural architecture consists of two synaptic arrays corresponding to the two differential terms of the soliton-equation and an adder connected to the output thereof and to the output of the detector array to drive the soliton equation.

  3. A SEMI-LAGRANGIAN TWO-LEVEL PRECONDITIONED NEWTON-KRYLOV SOLVER FOR CONSTRAINED DIFFEOMORPHIC IMAGE REGISTRATION.

    PubMed

    Mang, Andreas; Biros, George

    2017-01-01

    We propose an efficient numerical algorithm for the solution of diffeomorphic image registration problems. We use a variational formulation constrained by a partial differential equation (PDE), where the constraints are a scalar transport equation. We use a pseudospectral discretization in space and second-order accurate semi-Lagrangian time stepping scheme for the transport equations. We solve for a stationary velocity field using a preconditioned, globalized, matrix-free Newton-Krylov scheme. We propose and test a two-level Hessian preconditioner. We consider two strategies for inverting the preconditioner on the coarse grid: a nested preconditioned conjugate gradient method (exact solve) and a nested Chebyshev iterative method (inexact solve) with a fixed number of iterations. We test the performance of our solver in different synthetic and real-world two-dimensional application scenarios. We study grid convergence and computational efficiency of our new scheme. We compare the performance of our solver against our initial implementation that uses the same spatial discretization but a standard, explicit, second-order Runge-Kutta scheme for the numerical time integration of the transport equations and a single-level preconditioner. Our improved scheme delivers significant speedups over our original implementation. As a highlight, we observe a 20 × speedup for a two dimensional, real world multi-subject medical image registration problem.

  4. Overview of the NASA Glenn Flux Reconstruction Based High-Order Unstructured Grid Code

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; DeBonis, James R.; Huynh, H. T.

    2016-01-01

    A computational fluid dynamics code based on the flux reconstruction (FR) method is currently being developed at NASA Glenn Research Center to ultimately provide a large-eddy simulation capability that is both accurate and efficient for complex aeropropulsion flows. The FR approach offers a simple and efficient method that is easy to implement and accurate to an arbitrary order on common grid cell geometries. The governing compressible Navier-Stokes equations are discretized in time using various explicit Runge-Kutta schemes, with the default being the 3-stage/3rd-order strong stability preserving scheme. The code is written in modern Fortran (i.e., Fortran 2008) and parallelization is attained through MPI for execution on distributed-memory high-performance computing systems. An h-refinement study of the isentropic Euler vortex problem is able to empirically demonstrate the capability of the FR method to achieve super-accuracy for inviscid flows. Additionally, the code is applied to the Taylor-Green vortex problem, performing numerous implicit large-eddy simulations across a range of grid resolutions and solution orders. The solution found by a pseudo-spectral code is commonly used as a reference solution to this problem, and the FR code is able to reproduce this solution using approximately the same grid resolution. Finally, an examination of the code's performance demonstrates good parallel scaling, as well as an implementation of the FR method with a computational cost/degree-of-freedom/time-step that is essentially independent of the solution order of accuracy for structured geometries.

  5. Balancing Power Absorption Against Structural Loads With Viscous Drag and Power-Takeoff Efficiency Considerations

    DOE PAGES

    Tom, Nathan; Yu, Yi-Hsiang; Wright, Alan; ...

    2017-11-17

    The focus of this paper is to balance power absorption against structural loading for a novel fixed-bottom oscillating surge wave energy converter in both regular and irregular wave environments. The power-to-load ratio will be evaluated using pseudospectral control (PSC) to determine the optimum power-takeoff (PTO) torque based on a multiterm objective function. This paper extends the pseudospectral optimal control problem to not just maximize the time-averaged absorbed power but also include measures for the surge-foundation force and PTO torque in the optimization. The objective function may now potentially include three competing terms that the optimizer must balance. Separate weighting factors are attached to the surge-foundation force and PTO control torque that can be used to tune the optimizer performance to emphasize either power absorption or load shedding. To correct the pitch equation of motion, derived from linear hydrodynamic theory, a quadratic-viscous-drag torque has been included in the system dynamics; however, to continue the use of quadratic programming solvers, an iteratively obtained linearized drag coefficient was utilized that provided good accuracy in the predicted pitch motion. Furthermore, the analysis considers the use of a nonideal PTO unit to more accurately evaluate controller performance. The PTO efficiency is not directly included in the objective function but rather the weighting factors are utilized to limit the PTO torque amplitudes, thereby reducing the losses resulting from the bidirectional energy flow through a nonideal PTO. Results from PSC show that shedding a portion of the available wave energy can lead to greater reductions in structural loads, peak-to-average power ratio, and reactive power requirement.

  6. Intercomparison of general circulation models for hot extrasolar planets

    NASA Astrophysics Data System (ADS)

    Polichtchouk, I.; Cho, J. Y.-K.; Watkins, C.; Thrastarson, H. Th.; Umurhan, O. M.; de la Torre Juárez, M.

    2014-02-01

    We compare five general circulation models (GCMs) which have been recently used to study hot extrasolar planet atmospheres (BOB, CAM, IGCM, MITgcm, and PEQMOD), under three test cases useful for assessing model convergence and accuracy. Such a broad, detailed intercomparison has not been performed thus far for the study of extrasolar planets. The models considered all solve the traditional primitive equations, but employ different numerical algorithms or grids (e.g., pseudospectral and finite volume, with the latter separately in longitude-latitude and ‘cubed-sphere’ grids). The test cases are chosen to cleanly address specific aspects of the behaviors typically reported in hot extrasolar planet simulations: (1) steady-state, (2) nonlinearly evolving baroclinic wave, and (3) response to fast timescale thermal relaxation. When initialized with a steady jet, all models maintain the steadiness, as they should, except MITgcm in the cubed-sphere grid. A very good agreement is obtained for a baroclinic wave evolving from an initial instability in the pseudospectral models (only). However, exact numerical convergence is still not achieved across the pseudospectral models: amplitudes and phases are observably different. When subject to a typical ‘hot-Jupiter’-like forcing, all five models show quantitatively different behavior, although qualitatively similar, time-variable, quadrupole-dominated flows are produced. Hence, as has been advocated in several past studies, specific quantitative predictions (such as the location of large vortices and hot regions) by GCMs should be viewed with caution. Overall, in the tests considered here, the pseudospectral models in pressure coordinates (PEBOB and PEQMOD) perform the best and MITgcm in the cubed-sphere grid performs the worst.

  7. A first-order k-space model for elastic wave propagation in heterogeneous media.

    PubMed

    Firouzi, K; Cox, B T; Treeby, B E; Saffari, N

    2012-09-01

    A pseudospectral model of linear elastic wave propagation is described based on the first-order stress-velocity equations of elastodynamics. k-space adjustments to the spectral gradient calculations are derived from the dyadic Green's function solution to the second-order elastic wave equation and are used to (a) ensure that the solution is exact for homogeneous wave propagation for time steps of arbitrarily large size, and (b) allow larger time steps without loss of accuracy in heterogeneous media. The formulation in k-space allows the wavefield to be split easily into compressional and shear parts. A perfectly matched layer (PML) absorbing boundary condition was developed to effectively impose a radiation condition on the wavefield. The staggered grid, which is essential for accurate simulations, is described, along with other practical details of the implementation. The model is verified through comparison with exact solutions for canonical examples, and further examples are given to show the efficiency of the method for practical problems. The efficiency of the model is by virtue of the reduced points-per-wavelength requirement, the use of the fast Fourier transform (FFT) to calculate the gradients in k-space, and the larger time steps made possible by the k-space adjustments.
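    The k-space idea is easy to illustrate in one dimension: the spectral derivative is scaled by a sinc function of the time step so that the homogeneous-medium update is exact for any step size. The sketch below is illustrative only (it is not the authors' code; the function name, sound speed c0, time step dt, and test field are placeholder choices).

        import numpy as np

        def kspace_gradient(f, dx, c0, dt):
            """d/dx of f with the k-space (sinc) correction described above."""
            n = f.size
            k = 2 * np.pi * np.fft.fftfreq(n, d=dx)        # angular wavenumbers
            kappa = np.sinc(c0 * k * dt / (2 * np.pi))     # np.sinc(x) = sin(pi*x)/(pi*x)
            return np.real(np.fft.ifft(1j * k * kappa * np.fft.fft(f)))

        x = np.linspace(0.0, 1.0, 256, endpoint=False)
        f = np.exp(-200.0 * (x - 0.5) ** 2)                # smooth test pulse
        dfdx = kspace_gradient(f, dx=x[1] - x[0], c0=1500.0, dt=1e-4)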

  8. Seismic wavefield propagation in 2D anisotropic media: Ray theory versus wave-equation simulation

    NASA Astrophysics Data System (ADS)

    Bai, Chao-ying; Hu, Guang-yi; Zhang, Yan-teng; Li, Zhong-sheng

    2014-05-01

    Although ray theory is based on the high-frequency assumption of the elastic wave equation, ray theory and wave-equation simulation methods should serve as mutual checks on one another and hence be developed jointly; in practice, however, they have progressed largely in parallel and independently. For this reason, in this paper we take an alternative route and mutually verify and test the computational accuracy and solution correctness of both ray theory (the multistage irregular shortest-path method) and wave-equation simulation (both the staggered-grid finite difference method and the pseudo-spectral method) in anisotropic VTI and TTI media. Through the analysis and comparison of wavefield snapshots, common-source gather profiles, and synthetic seismograms, it is possible not only to verify the accuracy and correctness of each method, at least for kinematic features, but also to understand thoroughly the kinematic and dynamic features of wave propagation in anisotropic media. The results show that the staggered-grid finite difference method and the pseudo-spectral method yield the same results even for complex anisotropic media (such as a fault model), and that the multistage irregular shortest-path method predicts kinematic features similar to those of the wave-equation simulation methods, so the two approaches can be used to test each other for methodological accuracy and solution correctness. In addition, with the aid of the ray tracing results, it is easy to identify the multiple phases (or multiples) in the wavefield snapshots, common-source-point gather sections, and synthetic seismograms predicted by the wave-equation simulation method, which is a key issue for later seismic applications.

  9. Two-Stage Path Planning Approach for Designing Multiple Spacecraft Reconfiguration Maneuvers

    NASA Technical Reports Server (NTRS)

    Aoude, Georges S.; How, Jonathan P.; Garcia, Ian M.

    2007-01-01

    The paper presents a two-stage approach for designing optimal reconfiguration maneuvers for multiple spacecraft. These maneuvers involve well-coordinated and highly-coupled motions of the entire fleet of spacecraft while satisfying an arbitrary number of constraints. This problem is particularly difficult because of the nonlinearity of the attitude dynamics, the non-convexity of some of the constraints, and the coupling between the positions and attitudes of all spacecraft. As a result, the trajectory design must be solved as a single 6N DOF problem instead of N separate 6 DOF problems. The first stage of the solution approach quickly provides a feasible initial solution by solving a simplified version without differential constraints using a bi-directional Rapidly-exploring Random Tree (RRT) planner. A transition algorithm then augments this guess with feasible dynamics that are propagated from the beginning to the end of the trajectory. The resulting output is a feasible initial guess to the complete optimal control problem that is discretized in the second stage using a Gauss pseudospectral method (GPM) and solved using an off-the-shelf nonlinear solver. This paper also places emphasis on the importance of the initialization step in pseudospectral methods in order to decrease their computation times and enable the solution of a more complex class of problems. Several examples are presented and discussed.

  10. A New Computational Technique for the Generation of Optimised Aircraft Trajectories

    NASA Astrophysics Data System (ADS)

    Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto

    2017-12-01

    A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented, allowing an efficient solution of problems in which two or more performance indices are to be minimised simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two simultaneous objectives. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.

  11. Efficient solution of the Wigner-Liouville equation using a spectral decomposition of the force field

    NASA Astrophysics Data System (ADS)

    Van de Put, Maarten L.; Sorée, Bart; Magnus, Wim

    2017-12-01

    The Wigner-Liouville equation is reformulated using a spectral decomposition of the classical force field instead of the potential energy. This reformulation is shown to simplify the Wigner-Liouville kernel both conceptually and numerically, as the spectral-force Wigner-Liouville equation avoids the numerical evaluation of the highly oscillatory Wigner kernel, which is nonlocal in both position and momentum. The quantum mechanical evolution is instead governed by a term local in space and non-local in momentum, where the non-locality in momentum has only a limited range. An interpretation of the time evolution in terms of two processes is presented: a classical evolution under the influence of the averaged driving field, and a probability-preserving quantum-mechanical generation and annihilation term. Exploiting the inherent stability and reduced complexity, a direct deterministic numerical implementation using Chebyshev and Fourier pseudo-spectral methods is detailed. For the purpose of illustration, we present results for the time evolution of a one-dimensional resonant tunneling diode driven out of equilibrium.

  12. Rapid design and optimization of low-thrust rendezvous/interception trajectory for asteroid deflection missions

    NASA Astrophysics Data System (ADS)

    Li, Shuang; Zhu, Yongsheng; Wang, Yukai

    2014-02-01

    Asteroid deflection techniques are essential in order to protect the Earth from catastrophic impacts by hazardous asteroids. Rapid design and optimization of low-thrust rendezvous/interception trajectories is considered one of the key technologies for successfully deflecting potentially hazardous asteroids. In this paper, we address a general framework for the rapid design and optimization of low-thrust rendezvous/interception trajectories for future asteroid deflection missions. The design and optimization process includes three closely associated steps. Firstly, shape-based approaches and a genetic algorithm (GA) are adopted to perform the preliminary design, which provides a reasonable initial guess for the subsequent accurate optimization. Secondly, the Radau pseudospectral method is utilized to transcribe the low-thrust trajectory optimization problem into a discrete nonlinear programming (NLP) problem. Finally, sequential quadratic programming (SQP) is used to efficiently solve the nonlinear programming problem and obtain the optimal low-thrust rendezvous/interception trajectories. The rapid design and optimization algorithms developed in this paper are validated by three simulation cases with different performance indexes and boundary constraints.
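    As a rough illustration of the transcription step described above (not the mission code: Chebyshev collocation stands in for the Radau scheme, and a toy double integrator stands in for the low-thrust dynamics), a pseudospectral differentiation matrix turns the dynamics into algebraic equality constraints that an SQP solver such as SciPy's SLSQP can handle.

        import numpy as np
        from scipy.optimize import minimize

        N = 12                                               # polynomial degree
        tau = np.cos(np.pi * np.arange(N + 1) / N)[::-1]     # Chebyshev nodes, increasing on [-1, 1]

        # Differentiation matrix built from barycentric weights (valid for any distinct nodes).
        w = np.array([1.0 / np.prod(tau[i] - np.delete(tau, i)) for i in range(N + 1)])
        D = np.zeros((N + 1, N + 1))
        for i in range(N + 1):
            for j in range(N + 1):
                if i != j:
                    D[i, j] = (w[j] / w[i]) / (tau[i] - tau[j])
            D[i, i] = -np.sum(D[i])

        def unpack(z):
            n = N + 1
            return z[:n], z[n:2 * n], z[2 * n:]              # position, velocity, control

        def defects(z):
            p, v, u = unpack(z)
            return np.concatenate([D @ p - v,                # p' = v collocated at the nodes
                                   D @ v - u,                # v' = u collocated at the nodes
                                   [p[0], v[0],              # start at rest at the origin
                                    p[-1] - 1.0, v[-1]]])    # end at rest at p = 1

        res = minimize(lambda z: np.sum(unpack(z)[2] ** 2),  # crude control-effort objective
                       np.zeros(3 * (N + 1)),
                       constraints={"type": "eq", "fun": defects},
                       method="SLSQP")                       # SciPy's SQP solver
        p_opt, v_opt, u_opt = unpack(res.x)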

  13. Analysis of the spectral vanishing viscosity method for periodic conservation laws

    NASA Technical Reports Server (NTRS)

    Maday, Yvon; Tadmor, Eitan

    1988-01-01

    The convergence of the spectral vanishing viscosity method for both spectral and pseudospectral discretizations of the inviscid Burgers' equation is analyzed. It is proven that this kind of vanishing viscosity is responsible for a spectral decay of those Fourier coefficients located toward the end of the computed spectrum; consequently, the discretization error is shown to be spectrally small regardless of whether the underlying solution is smooth or not. This in turn implies that the numerical solution remains uniformly bounded and convergence follows by compensated compactness arguments.
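    To make the construction concrete, the following minimal sketch (not taken from the paper; grid size, activation mode, and viscosity amplitude are illustrative choices) applies a vanishing viscosity acting only on the upper part of the spectrum in a Fourier pseudospectral discretization of the inviscid Burgers' equation.

        import numpy as np

        N = 256
        x = 2 * np.pi * np.arange(N) / N
        k = np.fft.fftfreq(N, d=1.0 / N)              # integer wavenumbers on [0, 2*pi)
        m, nu = int(np.sqrt(N)), 1.0 / N              # activation mode and viscosity amplitude
        Q = nu * k**2 * (np.abs(k) > m)               # spectral viscosity: zero on the low modes

        def rhs(u):                                   # u_t = -(u^2/2)_x - Q*u (Q applied in Fourier space)
            return np.real(np.fft.ifft(-1j * k * np.fft.fft(0.5 * u**2) - Q * np.fft.fft(u)))

        u, dt = np.sin(x), 1e-3
        for _ in range(1000):                         # classical RK4; no dealiasing, for brevity
            k1 = rhs(u)
            k2 = rhs(u + 0.5 * dt * k1)
            k3 = rhs(u + 0.5 * dt * k2)
            k4 = rhs(u + dt * k3)
            u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)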

  14. Robust iterative method for nonlinear Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Yuan, Lijun; Lu, Ya Yan

    2017-08-01

    A new iterative method is developed for solving the two-dimensional nonlinear Helmholtz equation which governs polarized light in media with the optical Kerr nonlinearity. In the strongly nonlinear regime, the nonlinear Helmholtz equation could have multiple solutions related to phenomena such as optical bistability and symmetry breaking. The new method exhibits a much more robust convergence behavior than existing iterative methods, such as frozen-nonlinearity iteration, Newton's method and damped Newton's method, and it can be used to find solutions when good initial guesses are unavailable. Numerical results are presented for the scattering of light by a nonlinear circular cylinder based on the exact nonlocal boundary condition and a pseudospectral method in the polar coordinate system.

  15. New algorithms for field-theoretic block copolymer simulations: Progress on using adaptive-mesh refinement and sparse matrix solvers in SCFT calculations

    NASA Astrophysics Data System (ADS)

    Sides, Scott; Jamroz, Ben; Crockett, Robert; Pletzer, Alexander

    2012-02-01

    Self-consistent field theory (SCFT) for dense polymer melts has been highly successful in describing complex morphologies in block copolymers. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. The modified diffusion equations that arise as a consequence of the coarse-graining procedure in the SCF theory can be efficiently solved with a pseudo-spectral (PS) method that uses fast Fourier transforms on uniform Cartesian grids. However, PS methods can be difficult to apply in many block copolymer SCFT simulations (e.g., confinement, interface adsorption) in which small spatial regions might require finer resolution than most of the simulation grid. Progress on using new solver algorithms to address these problems will be presented. The Tech-X Chompst project aims at marrying the best of adaptive mesh refinement with linear matrix solver algorithms. The Tech-X code PolySwift++ is an SCFT simulation platform that leverages ongoing development in coupling Chombo, a package for solving PDEs via block-structured AMR calculations and embedded boundaries, with PETSc, a toolkit that includes a large assortment of sparse linear solvers.
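    For context, the pseudo-spectral step commonly used for the SCFT modified diffusion equation dq/ds = ∇²q − w·q is a Strang splitting between the Laplacian (handled by FFTs) and the field term. The sketch below shows that generic step only; it is not the PolySwift++ implementation, and the field w, box size, and contour step are placeholder values.

        import numpy as np

        N, L, ds = 64, 10.0, 0.01
        x = L * np.arange(N) / N
        k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
        w = 0.5 * np.cos(2 * np.pi * x / L)            # toy chemical potential field

        def propagate(q, w, ds):
            """One Strang-split contour step for dq/ds = lap(q) - w*q."""
            q = np.exp(-0.5 * ds * w) * q              # half step of the field term
            q = np.real(np.fft.ifft(np.exp(-ds * k**2) * np.fft.fft(q)))   # full diffusion step
            return np.exp(-0.5 * ds * w) * q           # half step of the field term

        q = np.ones(N)                                 # chain propagator q(x, s=0) = 1
        for _ in range(100):                           # integrate along the contour to s = 1
            q = propagate(q, w, ds)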

  16. Space-dependent perfusion coefficient estimation in a 2D bioheat transfer problem

    NASA Astrophysics Data System (ADS)

    Bazán, Fermín S. V.; Bedin, Luciano; Borges, Leonardo S.

    2017-05-01

    In this work, a method for estimating the space-dependent perfusion coefficient parameter in a 2D bioheat transfer model is presented. In the method, the bioheat transfer model is transformed into a time-dependent semidiscrete system of ordinary differential equations involving perfusion coefficient values as parameters, and the estimation problem is solved through a nonlinear least squares technique. In particular, the bioheat problem is solved by the method of lines based on a highly accurate pseudospectral approach, and perfusion coefficient values are estimated by the regularized Gauss-Newton method coupled with a proper regularization parameter. The performance of the method on several test problems is illustrated numerically.

  17. Balancing Power Absorption and Structural Loading for a Novel Fixed-Bottom Wave Energy Converter with Nonideal Power Take-Off in Regular Waves: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom, Nathan M; Yu, Yi-Hsiang; Wright, Alan D

    In this work, the net power delivered to the grid from a nonideal power take-off (PTO) is introduced, followed by a review of the pseudo-spectral control theory. A power-to-load ratio, used to evaluate the pseudo-spectral controller performance, is discussed, and the results obtained from optimizing a multiterm objective function are compared against results obtained from maximizing the net output power to the grid. Simulation results are then presented for four different oscillating wave energy converter geometries to highlight the potential of combining both geometry and PTO control to maximize power while minimizing loads.

  18. Numerical study of the small scale structures in Boussinesq convection

    NASA Technical Reports Server (NTRS)

    Weinan, E.; Shu, Chi-Wang

    1992-01-01

    Two-dimensional Boussinesq convection is studied numerically using two different methods: a filtered pseudospectral method and a high order accurate Essentially Nonoscillatory (ENO) scheme. The issue of whether a finite-time singularity occurs for initially smooth flows is investigated. The numerical results suggest that the collapse of the bubble cap is unlikely to occur in resolved calculations. The strain rate corresponding to the intensification of the density gradient across the front saturates at the bubble cap. We also found that the cascade of energy to small scales is dominated by the formation of thin and sharp fronts across which the density jumps.
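    The "filtered" in the filtered pseudospectral method refers to damping the highest Fourier modes at each step; a minimal sketch of such an exponential filter (a generic, assumed form, not the authors' code; the order and strength parameters are illustrative) is:

        import numpy as np

        def spectral_filter(f, order=16, alpha=36.0):
            """Apply a smooth exponential cutoff to the Fourier coefficients of f."""
            n = f.size
            k = np.fft.fftfreq(n, d=1.0 / n)
            sigma = np.exp(-alpha * (np.abs(k) / (n / 2)) ** order)   # ~1 at low k, ~exp(-alpha) at k_max
            return np.real(np.fft.ifft(sigma * np.fft.fft(f)))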

  19. Direct numerical simulations of a reacting turbulent mixing layer by a pseudospectral-spectral element method

    NASA Technical Reports Server (NTRS)

    Mcmurtry, Patrick A.; Givi, Peyman

    1992-01-01

    An account is given of the implementation of the spectral-element technique for simulating a chemically reacting, spatially developing turbulent mixing layer. Attention is given to experimental and numerical studies that have investigated the development, evolution, and mixing characteristics of shear flows. A mathematical formulation is presented of the physical configuration of the spatially developing reacting mixing layer, in conjunction with a detailed representation of the spectral-element method's application to the numerical simulation of mixing layers. Results from 2D and 3D calculations of chemically reacting mixing layers are given.

  20. Simulation of charge transport in micro and nanoscale FETs with elements having different dielectric properties

    NASA Astrophysics Data System (ADS)

    Blokhin, A. M.; Kruglova, E. A.; Semisalov, B. V.

    2018-03-01

    A hydrodynamical model is used to describe the process of charge transport in semiconductors with a high degree of reliability. It consists of a set of nonlinear partial differential equations with small parameters and specific conditions at the boundaries of field effect transistors (FETs), which essentially complicates the process of finding its stationary solutions. To overcome these difficulties in the case of FETs with elements having different dielectric properties, a fast pseudospectral method has been developed. This method was used for advanced numerical simulation of charge transport in a DG-MOSFET.

  1. Numerical simulation using vorticity-vector potential formulation

    NASA Technical Reports Server (NTRS)

    Tokunaga, Hiroshi

    1993-01-01

    An accurate and efficient computational method is needed for three-dimensional incompressible viscous flows in engineering applications. When solving turbulent shear flows directly or with a subgrid-scale model, it is indispensable to resolve the small-scale fluid motions as well as the large-scale motions; from this point of view, the pseudo-spectral method has so far been used as the computational method. The finite difference and finite element methods, however, are widely applied to flows of practical importance, since these methods are easily adapted to flows with complex geometric configurations. Several problems nevertheless arise in applying the finite difference method to direct and large eddy simulations. Accuracy is one of the most important; this point was already addressed by the present author in direct simulations of the instability of plane Poiseuille flow and of the transition to turbulence. In order to obtain high efficiency, a multigrid Poisson solver is combined with the higher-order accurate finite difference method. The formulation is also one of the most important issues in applying the finite difference method to incompressible turbulent flows. The three-dimensional Navier-Stokes equations have so far been solved in the primitive-variables formulation, and one of the major difficulties of this approach is the rigorous satisfaction of the equation of continuity. In general, a staggered grid is used to satisfy the solenoidal condition for the velocity field at the wall boundary. In the vorticity-vector potential formulation, however, the velocity field satisfies the equation of continuity automatically. From this point of view, the vorticity-vector potential method was extended to a generalized coordinate system. In the present article, we adopt the vorticity-vector potential formulation, a generalized coordinate system, and a fourth-order accurate difference method as the computational method. We present the computational method and apply it to flows in a square cavity at large Reynolds numbers in order to investigate its effectiveness.

  2. Advances in Highly Constrained Multi-Phase Trajectory Generation using the General Pseudospectral Optimization Software (GPOPS)

    DTIC Science & Technology

    2013-08-01

    Approved for public release; distribution unlimited. PA Number 412-TW-PA-13395. Only fragments of this record's abstract are available: a nomenclature list (f, generic function; g, acceleration due to gravity; h, altitude; L, aerodynamic lift force; L, Lagrange cost; m, vehicle mass; M, Mach number; n, number of coefficients in polynomial regression; p, highest order of polynomial regression; Q, dynamic pressure; R, …) and a note that GPOPS uses the Radau Pseudospectral Method (RPM), with collocation points defined by the roots of Legendre-Gauss-Radau (LGR) functions, and automatically refines the mesh.

  3. Statistical properties and correlation functions for drift waves

    NASA Technical Reports Server (NTRS)

    Horton, W.

    1986-01-01

    The dissipative one-field drift wave equation is solved using the pseudospectral method to generate steady-state fluctuations. The fluctuations are analyzed in terms of space-time correlation functions and modal probability distributions. Nearly Gaussian statistics and exponential decay of the two-time correlation functions occur in the presence of electron dissipation, while in the absence of electron dissipation long-lived vortical structures occur. Formulas from renormalized, Markovianized statistical turbulence theory are given in a local approximation to interpret the dissipative turbulence.

  4. Modelization of highly nonlinear waves in coastal regions

    NASA Astrophysics Data System (ADS)

    Gouin, Maïté; Ducrozet, Guillaume; Ferrant, Pierre

    2015-04-01

    The proposed work deals with the development of a highly non-linear model for water wave propagation in coastal regions. The accurate modelling of surface gravity waves is of major interest in ocean engineering, especially in the field of marine renewable energy. These marine structures are intended to be installed in coastal regions where the effect of variable bathymetry may be significant on local wave conditions. This study presents a numerical model for wave propagation over complex bathymetry. It is based on the High-Order Spectral (HOS) method, initially limited to the propagation of non-linear wave fields over a flat bottom. Such a model has been developed and validated at the LHEEA Lab. (Ecole Centrale Nantes) over the past few years, and the current developments will enlarge its application range. This new numerical model retains the attractive numerical properties of the original pseudo-spectral approach (convergence, efficiency with the use of FFTs, …) and enables the propagation of highly non-linear wave fields over long times and large distances. Different validations will be provided in addition to the presentation of the method. First, Bragg reflection will be studied with the proposed approach. If the Bragg condition is satisfied, the reflected wave generated by a sinusoidal bottom patch should be amplified as a result of resonant quadratic interactions between the incident wave and the bottom. Comparisons will be provided with experiments and reference solutions. Then, the method will be used to consider the transformation of a non-linear monochromatic wave as it propagates up and over a submerged bar. As the wave travels up the front slope of the bar, it steepens and higher harmonics are generated due to non-linear interactions. Comparisons with experimental data will be provided. These test cases will assess the accuracy and efficiency of the proposed method.

  5. Optimal Lorentz-augmented spacecraft formation flying in elliptic orbits

    NASA Astrophysics Data System (ADS)

    Huang, Xu; Yan, Ye; Zhou, Yang

    2015-06-01

    An electrostatically charged spacecraft accelerates as it moves through the Earth's magnetic field due to the induced Lorentz force, providing a new means of propellantless electromagnetic propulsion for orbital maneuvers. The feasibility of Lorentz-augmented spacecraft formation flying in elliptic orbits is investigated in this paper. Assuming the Earth's magnetic field to be a tilted dipole corotating with the Earth, a nonlinear dynamical model that characterizes the orbital motion of a Lorentz spacecraft in the vicinity of arbitrary elliptic orbits is developed. To establish a predetermined formation configuration at a given terminal time, a pseudospectral method is used to solve for the optimal open-loop trajectories of hybrid control inputs consisting of the Lorentz acceleration and thruster-generated control acceleration. A nontilted dipole model is also introduced to analyze the effect of the dipole tilt angle via comparisons with the tilted one. Meanwhile, to guarantee finite-time convergence and system robustness against external perturbations, a continuous fast nonsingular terminal sliding mode controller is designed, and the closed-loop system stability is proved by Lyapunov theory. Numerical simulations substantiate the validity of the proposed open-loop and closed-loop control schemes, and the results indicate that an almost propellantless formation establishment can be achieved by choosing an appropriate objective function in the pseudospectral method. Furthermore, compared to the nonsingular terminal sliding mode controller, the closed-loop controller presents a superior convergence rate with only slightly more control effort, and it can be applied to other Lorentz-augmented relative orbital control problems.

  6. A numerical study of adaptive space and time discretisations for Gross–Pitaevskii equations

    PubMed Central

    Thalhammer, Mechthild; Abhau, Jochen

    2012-01-01

    As a basic principle, the benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as enhanced reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross–Pitaevskii equation arising in the description of Bose–Einstein condensates in dilute gases. The performance of the Fourier pseudo-spectral method constrained to uniform meshes versus the locally adaptive finite element method, and of higher-order exponential operator splitting methods with variable time stepsizes, is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations, with an enhancement in either efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross–Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space restricts the feasibility of numerical computations for both the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values, locally adaptive time discretisations make it possible to choose time stepsizes sufficiently small that the numerical approximation correctly captures the behaviour of the analytical solution. Further illustrations for Gross–Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study. PMID:25550676

  7. A numerical study of adaptive space and time discretisations for Gross-Pitaevskii equations.

    PubMed

    Thalhammer, Mechthild; Abhau, Jochen

    2012-08-15

    As a basic principle, the benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as enhanced reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross-Pitaevskii equation arising in the description of Bose-Einstein condensates in dilute gases. The performance of the Fourier pseudo-spectral method constrained to uniform meshes versus the locally adaptive finite element method, and of higher-order exponential operator splitting methods with variable time stepsizes, is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations, with an enhancement in either efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross-Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space restricts the feasibility of numerical computations for both the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values, locally adaptive time discretisations make it possible to choose time stepsizes sufficiently small that the numerical approximation correctly captures the behaviour of the analytical solution. Further illustrations for Gross-Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study.
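    As a point of reference, the time-splitting Fourier pseudo-spectral step discussed in these two records has the following generic form for the one-dimensional Gross-Pitaevskii equation i·ψ_t = −½ψ_xx + V·ψ + g|ψ|²ψ (this is the standard Strang split-step scheme, not the adaptive code studied in the paper; the grid, trap, and coupling constant are illustrative values).

        import numpy as np

        N, L, g, dt = 256, 16.0, 1.0, 1e-3
        x = L * (np.arange(N) / N - 0.5)
        k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
        V = 0.5 * x**2                                     # harmonic trap
        psi = (2.0 / np.pi) ** 0.25 * np.exp(-x**2)        # normalized Gaussian initial state

        def strang_step(psi):
            psi = np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2)) * psi        # half potential/nonlinear step
            psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))     # full kinetic step (exact in k-space)
            return np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2)) * psi       # half potential/nonlinear step

        for _ in range(1000):
            psi = strang_step(psi)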

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom, Nathan; Yu, Yi-Hsiang; Wright, Alan

    The focus of this paper is to balance power absorption against structural loading for a novel fixed-bottom oscillating surge wave energy converter in both regular and irregular wave environments. The power-to-load ratio will be evaluated using pseudospectral control (PSC) to determine the optimum power-takeoff (PTO) torque based on a multiterm objective function. This paper extends the pseudospectral optimal control problem not only to maximize the time-averaged absorbed power but also to include measures of the surge-foundation force and PTO torque in the optimization. The objective function may now include three competing terms that the optimizer must balance. Separate weighting factors are attached to the surge-foundation force and PTO control torque that can be used to tune the optimizer performance to emphasize either power absorption or load shedding. To correct the pitch equation of motion, derived from linear hydrodynamic theory, a quadratic viscous-drag torque has been included in the system dynamics; however, to continue the use of quadratic programming solvers, an iteratively obtained linearized drag coefficient was utilized that provided good accuracy in the predicted pitch motion. Furthermore, the analysis considers the use of a nonideal PTO unit to more accurately evaluate controller performance. The PTO efficiency is not directly included in the objective function; rather, the weighting factors are utilized to limit the PTO torque amplitudes, thereby reducing the losses resulting from the bidirectional energy flow through a nonideal PTO. Results from PSC show that shedding a portion of the available wave energy can lead to greater reductions in structural loads, the peak-to-average power ratio, and the reactive power requirement.

  9. Black hole evolution by spectral methods

    NASA Astrophysics Data System (ADS)

    Kidder, Lawrence E.; Scheel, Mark A.; Teukolsky, Saul A.; Carlson, Eric D.; Cook, Gregory B.

    2000-10-01

    Current methods of evolving a spacetime containing one or more black holes are plagued by instabilities that prohibit long-term evolution. Some of these instabilities may be due to the numerical method used, traditionally finite differencing. In this paper, we explore the use of a pseudospectral collocation (PSC) method for the evolution of a spherically symmetric black hole spacetime in one dimension using a hyperbolic formulation of Einstein's equations. We demonstrate that our PSC method is able to evolve a spherically symmetric black hole spacetime forever without enforcing constraints, even if we add dynamics via a Klein-Gordon scalar field. We find that, in contrast with finite-differencing methods, black hole excision is a trivial operation using PSC applied to a hyperbolic formulation of Einstein's equations. We discuss the extension of this method to three spatial dimensions.

  10. AxiSEM3D: a new fast method for global wave propagation in 3-D Earth models with undulating discontinuities

    NASA Astrophysics Data System (ADS)

    Leng, K.; Nissen-Meyer, T.; van Driel, M.; Al-Attar, D.

    2016-12-01

    We present a new, computationally efficient numerical method to simulate global seismic wave propagation in realistic 3-D Earth models with laterally heterogeneous media and finite boundary perturbations. Our method is a hybrid of pseudo-spectral and spectral element methods (SEM). We characterize the azimuthal dependence of 3-D wavefields in terms of Fourier series, such that the 3-D equations of motion reduce to an algebraic system of coupled 2-D meridional equations, which can be solved by a 2-D spectral element method (based on www.axisem.info). The computational efficiency of our method stems from the lateral smoothness of global Earth models (with respect to wavelength) as well as the axial singularity of seismic point sources, which jointly confine the Fourier modes of the wavefields to a few lower orders. All boundary perturbations that violate geometric spherical symmetry, including Earth's ellipticity, topography and bathymetry, and undulations of internal discontinuities such as the Moho and CMB, are treated uniformly by means of a Particle Relabeling Transformation. The MPI-based high-performance C++ code AxiSEM3D is now available for forward simulations upon 3-D Earth models with a fluid outer core, ellipticity, and both mantle and crustal structures. We show novel benchmarks for global wave solutions in 3-D mantle structures between our method and an independent, fully discretized 3-D SEM, with remarkable agreement. Performance comparisons are carried out on three state-of-the-art tomography models, with seismic periods going down to 5 s. It is shown that our method runs up to two orders of magnitude faster than the 3-D SEM for such settings, and that this computational advantage scales favourably with seismic frequency. By examining wavefields passing through hypothetical Gaussian plumes of varying sharpness, we identify in model-wavelength space the limits where our method may lose its advantage.

  11. Pseudo-spectral control of a novel oscillating surge wave energy converter in regular waves for power optimization including load reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom, Nathan M.; Yu, Yi -Hsiang; Wright, Alan D.

    The aim of this study is to describe a procedure to maximize the power-to-load ratio of a novel wave energy converter (WEC) that combines an oscillating surge wave energy converter with variable structural components. The control of the power-take-off torque will be on a wave-to-wave timescale, whereas the structure will be controlled statically such that the geometry remains the same throughout the wave period. Linear hydrodynamic theory is used to calculate the upper and lower bounds for the time-averaged absorbed power and surge foundation loads while assuming that the WEC motion remains sinusoidal. Previous work using pseudo-spectral techniques to solve the optimal control problem focused solely on maximizing absorbed energy. This work extends the optimal control problem to include a measure of the surge foundation force in the optimization. The objective function includes two competing terms that force the optimizer to maximize power capture while minimizing structural loads. A penalty weight was included with the surge foundation force that allows control of the optimizer performance based on whether emphasis should be placed on power absorption or load shedding. Results from pseudo-spectral optimal control indicate that a unit reduction in time-averaged power can be accompanied by a greater reduction in surge-foundation force.

  12. Pseudo-spectral control of a novel oscillating surge wave energy converter in regular waves for power optimization including load reduction

    DOE PAGES

    Tom, Nathan M.; Yu, Yi -Hsiang; Wright, Alan D.; ...

    2017-04-18

    The aim of this study is to describe a procedure to maximize the power-to-load ratio of a novel wave energy converter (WEC) that combines an oscillating surge wave energy converter with variable structural components. The control of the power-take-off torque will be on a wave-to-wave timescale, whereas the structure will be controlled statically such that the geometry remains the same throughout the wave period. Linear hydrodynamic theory is used to calculate the upper and lower bounds for the time-averaged absorbed power and surge foundation loads while assuming that the WEC motion remains sinusoidal. Previous work using pseudo-spectral techniques to solve the optimal control problem focused solely on maximizing absorbed energy. This work extends the optimal control problem to include a measure of the surge foundation force in the optimization. The objective function includes two competing terms that force the optimizer to maximize power capture while minimizing structural loads. A penalty weight was included with the surge foundation force that allows control of the optimizer performance based on whether emphasis should be placed on power absorption or load shedding. Results from pseudo-spectral optimal control indicate that a unit reduction in time-averaged power can be accompanied by a greater reduction in surge-foundation force.

  13. On the computation of steady Hopper flows. II: von Mises materials in various geometries

    NASA Astrophysics Data System (ADS)

    Gremaud, Pierre A.; Matthews, John V.; O'Malley, Meghan

    2004-11-01

    Similarity solutions are constructed for the flow of granular materials through hoppers. Unlike previous work, the present approach applies to nonaxisymmetric containers. The model involves ten unknowns (stresses, velocity, and plasticity function) determined by nine nonlinear first order partial differential equations together with a quadratic algebraic constraint (yield condition). A pseudospectral discretization is applied; the resulting problem is solved with a trust region method. The important role of the hopper geometry on the flow is illustrated by several numerical experiments of industrial relevance.

  14. Pseudo-spectral methodology for a quantitative assessment of the cover of in-stream vegetation in small streams

    NASA Astrophysics Data System (ADS)

    Hershkovitz, Yaron; Anker, Yaakov; Ben-Dor, Eyal; Schwartz, Guy; Gasith, Avital

    2010-05-01

    In-stream vegetation is a key ecosystem component in many fluvial ecosystems, having cascading effects on stream conditions and biotic structure. Ground-level surveys (e.g. grid and transect analyses) are traditionally used for estimating the cover of aquatic macrophytes. Nonetheless, this methodological approach is highly time-consuming and usually yields information which is practically limited to the habitat and sub-reach scales. In contrast, remote-sensing techniques (e.g. satellite imagery and airborne photography) enable collection of large datasets over section, stream and basin scales, in a relatively short time and at reasonable cost. However, the commonly used spatial resolution (1 m) is often inadequate for examining aquatic vegetation at habitat or sub-reach scales. We examined the utility of a pseudo-spectral methodology, using RGB digital photography, for estimating the cover of in-stream vegetation in a small Mediterranean-climate stream. We compared this methodology with the traditional ground-level grid methodology and with an airborne hyper-spectral remote sensing survey (AISA-ES). The study was conducted along a 2 km section of an intermittent stream (Taninim stream, Israel). When studied, the stream was dominated by patches of watercress (Nasturtium officinale) and mats of filamentous algae (Cladophora glomerata). The extent of vegetation cover at the habitat and section scales (10⁰ and 10⁴ m, respectively) was estimated by the pseudo-spectral methodology, using an airborne Roli camera with a Phase-One P 45 (39 MP) CCD image acquisition unit. The swaths were taken at an elevation of about 460 m, giving a spatial resolution of about 4 cm (NADIR). For measuring vegetation cover at the section scale (10⁴ m) we also used a 'push-broom' AISA-ES hyper-spectral swath with a sensor configuration of 182 bands (350-2500 nm) at an elevation of ca. 1,200 m (i.e. a spatial resolution of ca. 1 m). Simultaneously with every swath, we used an Analytical Spectral Device (ASD) to measure hyper-spectral signatures (2150-band configuration; 350-2500 nm) of selected ground-level targets (located by GPS) of soil, water, vegetation (common reed, watercress, filamentous algae) and standard EVA foam colored sheets (red, green, blue, black and white). Processing and analysis of the data were performed on an ITT ENVI platform. The hyper-spectral image underwent radiometric calibration according to the flight and sensor calibration parameters on the CALIGEO platform, and the raw DN scale was converted into a radiance scale. A ground-level visual survey of vegetation cover and height was applied at the habitat scale (10⁰ m) by placing 1 m² netted grids (10×10 cm cells) along 'bank-to-bank' transects (in triplicate). Estimates of plant cover obtained by the pseudo-spectral methodology at the habitat scale were 35-61% for the watercress, 0.4-25% for the filamentous algae and 27-51% for plant-free patches. The respective estimates by the ground-level visual survey were 26-50%, 14-43% and 36-50%. The pseudo-spectral methodology also yielded estimates for the section scale (10⁴ m) of ca. 39% for the watercress, ca. 32% for the filamentous algae and 6% for plant-free patches. The respective estimates obtained by the hyper-spectral swath were 38, 26 and 8%. Validation against ground-level measurements showed that the pseudo-spectral methodology gives reasonably good estimates of in-stream plant cover.
Therefore, this methodology can serve as a substitute for ground level estimates at small stream scales and for the low resolution hyper-spectral methodology at larger scales.

  15. Safe-trajectory optimization and tracking control in ultra-close proximity to a failed satellite

    NASA Astrophysics Data System (ADS)

    Zhang, Jingrui; Chu, Xiaoyu; Zhang, Yao; Hu, Quan; Zhai, Guang; Li, Yanyan

    2018-03-01

    This paper presents a trajectory-optimization method for a chaser spacecraft operating in ultra-close proximity to a failed satellite. Based on the combination of active and passive trajectory protection, the constraints in the optimization framework are formulated for collision avoidance and successful docking in the presence of any thruster failure. The constraints are then handled by an adaptive Gauss pseudospectral method, in which the dynamic residuals are used as the metric to determine the distribution of collocation points. A finite-time feedback control is further employed in tracking the optimized trajectory. In particular, the stability and convergence of the controller are proved. Numerical results are given to demonstrate the effectiveness of the proposed methods.

  16. Enhanced Representation of Turbulent Flow Phenomena in Large-Eddy Simulations of the Atmospheric Boundary Layer using Grid Refinement with Pseudo-Spectral Numerics

    NASA Astrophysics Data System (ADS)

    Torkelson, G. Q.; Stoll, R., II

    2017-12-01

    Large Eddy Simulation (LES) is a tool commonly used to study the turbulent transport of momentum, heat, and moisture in the Atmospheric Boundary Layer (ABL). For a wide range of ABL LES applications, representing the full range of turbulent length scales in the flow field is a challenge. This is an acute problem in regions of the ABL with strong velocity or scalar gradients, which are typically poorly resolved by standard computational grids (e.g., near the ground surface, in the entrainment zone). Most efforts to address this problem have focused on advanced sub-grid scale (SGS) turbulence model development, or on the use of massive computational resources. While some work exists using embedded meshes, very little has been done on the use of grid refinement. Here, we explore the benefits of grid refinement in a pseudo-spectral LES numerical code. The code utilizes both uniform refinement of the grid in horizontal directions, and stretching of the grid in the vertical direction. Combining the two techniques allows us to refine areas of the flow while maintaining an acceptable grid aspect ratio. In tests that used only refinement of the vertical grid spacing, large grid aspect ratios were found to cause a significant unphysical spike in the stream-wise velocity variance near the ground surface. This was especially problematic in simulations of stably-stratified ABL flows. The use of advanced SGS models was not sufficient to alleviate this issue. The new refinement technique is evaluated using a series of idealized simulation test cases of neutrally and stably stratified ABLs. These test cases illustrate the ability of grid refinement to increase computational efficiency without loss in the representation of statistical features of the flow field.

  17. Numerical Simulation of Strong Ground Motion at Mexico City: A Hybrid Approach for Efficient Evaluation of Site Amplification and Path Effects for Different Types of Earthquakes

    NASA Astrophysics Data System (ADS)

    Cruz, H.; Furumura, T.; Chavez-Garcia, F. J.

    2002-12-01

    The estimation of scenarios of strong ground motion caused by future great earthquakes is an important problem in strong motion seismology. This was highlighted by the great 1985 Michoacan earthquake, which caused severe damage in Mexico City, 300 km away from the epicenter. Since the seismic wavefield is shaped by source, path and site effects, the pattern of strong motion damage from different types of earthquakes should differ significantly. In this study, the scenarios for intermediate-depth normal-faulting, shallow interplate thrust-faulting, and crustal earthquakes have been estimated using a hybrid simulation technique. The character of the seismic wavefield propagating from the source to Mexico City for each earthquake was first calculated using the pseudospectral method for 2D SH waves. The site amplifications in the shallow structure of Mexico City were then calculated using multiple SH-wave reverberation theory. The scenarios of maximum ground motion for both inslab and interplate earthquakes obtained by the simulation show good agreement with the observations. This indicates the effectiveness of the hybrid simulation approach for investigating strong motion damage in future earthquakes.

  18. Fully coupled six-dimensional calculations of the water dimer vibration-rotation-tunneling states with split Wigner pseudospectral approach. II. Improvements and tests of additional potentials

    NASA Astrophysics Data System (ADS)

    Fellers, R. S.; Braly, L. B.; Saykally, R. J.; Leforestier, C.

    1999-04-01

    The SWPS method is improved by the addition of H.E.G. contractions for generating a more compact basis. An error in the definition of the internal fragment axis system used in our previous calculation is described and corrected. Fully coupled 6D (rigid monomers) VRT states are computed for several new water dimer potential surfaces and compared with experiment and our earlier SWPS results. This work sets the stage for refinement of such potential surfaces via regression analysis of VRT spectroscopic data.

  19. Nonlinear model predictive control of a wave energy converter based on differential flatness parameterisation

    NASA Astrophysics Data System (ADS)

    Li, Guang

    2017-01-01

    This paper presents a fast constrained optimization approach, which is tailored for nonlinear model predictive control of wave energy converters (WEC). The advantage of this approach lies in its exploitation of the differential flatness of the WEC model, which reduces the dimension of the resulting nonlinear programming problem (NLP) derived from the continuous constrained optimal control of the WEC using a pseudospectral method. The alleviation of the computational burden using this approach helps to promote an economic implementation of the nonlinear model predictive control strategy for WEC control problems. The method is applicable to nonlinear WEC models, nonconvex objective functions, and nonlinear constraints, which are commonly encountered in WEC control problems. Numerical simulations demonstrate the efficacy of this approach.

  20. The development of efficient numerical time-domain modeling methods for geophysical wave propagation

    NASA Astrophysics Data System (ADS)

    Zhu, Lieyuan

    This Ph.D. dissertation focuses on the numerical simulation of geophysical wave propagation in the time domain, including elastic waves in solid media, acoustic waves in fluid media, and electromagnetic waves in dielectric media. The thesis shows that a linear system model can accurately describe the physical processes of these geophysical waves' propagation and can be used as a sound basis for modeling geophysical wave propagation phenomena. The generalized stability condition for numerical modeling of wave propagation is therefore discussed in the context of linear system theory. The efficiency of a series of different time-domain numerical algorithms for modeling geophysical wave propagation is discussed and compared. These algorithms include the finite-difference time-domain (FDTD) method, the pseudospectral time-domain (PSTD) method, and the alternating-direction implicit (ADI) finite-difference time-domain method. The advantages and disadvantages of these numerical methods are discussed, and the specific stability condition for each modeling scheme is carefully derived in the context of linear system theory. Based on the review and discussion of these existing approaches, the split-step ADI pseudospectral time-domain (SS-ADI-PSTD) method is developed and tested for several cases. Moreover, the state-of-the-art stretched-coordinate perfectly matched layer (SCPML) has also been implemented in the SS-ADI-PSTD algorithm as the absorbing boundary condition for truncating the computational domain and absorbing artificial reflections from the domain boundaries. After the algorithmic development, a few case studies serve as real-world examples to verify the capacities of the numerical algorithms and to understand the capabilities and limitations of geophysical methods for the detection of subsurface contamination. The first case is a study using ground penetrating radar (GPR) amplitude variation with offset (AVO) for subsurface non-aqueous-phase liquid (NAPL) contamination. The numerical AVO study reveals that the normalized residual polarization (NRP) variation with offset does not respond to subsurface NAPL when the offset is close to or larger than its critical value (which corresponds to the critical incident angle), because the air and head waves dominate the recorded wave field and severely interfere with reflected waves in the TEz wave field. Thus it can be concluded that the NRP AVO/GPR method is invalid when the source-receiver offset is close to or greater than its critical value, due to incomplete and severely distorted reflection information. In other words, AVO is not as promising a technique for detection of subsurface NAPL as claimed by some researchers. In addition, the robustness of the newly developed numerical algorithms is also verified by the AVO study for randomly arranged layered media. Meanwhile, this case study also demonstrates again that full-wave numerical modeling algorithms are superior to the ray tracing method. The second case study focuses on the effect of a near-surface fault on vertically incident P- and S-plane waves. The modeling results show that both the P-wave and the S-wave vertical-incidence cases are qualified fault indicators.
For the plane S-wave vertical incidence case, the horizontal location of the upper tip of the fault (the footwall side) can be identified without much effort, because all the recorded parameters on the surface including the maximum velocities and the maximum accelerations, and even their ratios H/V, have shown dramatic changes when crossing the upper tip of the fault. The centers of the transition zone of the all the curves of parameters are almost directly above the fault tip (roughly the horizontal center of the model). Compared with the case of the vertically incident P-wave source, it has been found that the S-wave vertical source is a better indicator for fault location, because the horizontal location of the tip of that fault cannot be clearly identified with the ratio of the horizontal to vertical velocity for the P-wave incident case.

  1. Mass transfer from a sphere in an oscillating flow with zero mean velocity

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.; Lyman, Frederic A.

    1990-01-01

    A pseudospectral numerical method is used for the solution of the Navier-Stokes and mass transport equations for a sphere in a sinusoidally oscillating flow with zero mean velocity. The flow is assumed laminar and axisymmetric about the sphere's polar axis. Oscillating flow results were obtained for Reynolds numbers (based on the free-stream oscillatory flow amplitude) between 1 and 150, and Strouhal numbers between 1 and 1000. Sherwood numbers were computed and their dependency on the flow frequency and amplitude discussed. An assessment of the validity of the quasi-steady assumption for mass transfer is based on these results.

  2. Fully coupled six-dimensional calculations of the water dimer vibration-rotation-tunneling states with split Wigner pseudospectral approach. II. Improvements and tests of additional potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fellers, R.S.; Braly, L.B.; Saykally, R.J.

    The SWPS method is improved by the addition of H.E.G. contractions for generating a more compact basis. An error in the definition of the internal fragment axis system used in our previous calculation is described and corrected. Fully coupled 6D (rigid monomers) VRT states are computed for several new water dimer potential surfaces and compared with experiment and our earlier SWPS results. This work sets the stage for refinement of such potential surfaces via regression analysis of VRT spectroscopic data. © 1999 American Institute of Physics.

  3. Turbulence statistics with quantified uncertainty in cold-wall supersonic channel flow

    NASA Astrophysics Data System (ADS)

    Ulerich, Rhys; Moser, Robert D.

    2012-11-01

    To investigate compressibility effects in wall-bounded turbulence, a series of direct numerical simulations of compressible channel flow with isothermal (cold) walls have been conducted. All combinations of Re = {3000, 5000} and Ma = {0.1, 0.5, 1.5, 3.0} have been simulated, where the Reynolds and Mach numbers are based on the bulk velocity and the sound speed at the wall temperature. Turbulence statistics with precisely quantified uncertainties computed from these simulations will be presented and are being made available in a public database at http://turbulence.ices.utexas.edu/. The simulations were performed using a new pseudo-spectral code called Suzerain, which was designed to efficiently produce high quality data on compressible, wall-bounded turbulent flows using a semi-implicit Fourier/B-spline numerical formulation. This work is supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].

  4. A two-dimensional numerical simulation of a supersonic, chemically reacting mixing layer

    NASA Technical Reports Server (NTRS)

    Drummond, J. Philip

    1988-01-01

    Research has been undertaken to achieve an improved understanding of physical phenomena present when a supersonic flow undergoes chemical reaction. A detailed understanding of supersonic reacting flows is necessary to successfully develop advanced propulsion systems now planned for use late in this century and beyond. In order to explore such flows, a study was begun to create appropriate physical models for describing supersonic combustion, and to develop accurate and efficient numerical techniques for solving the governing equations that result from these models. From this work, two computer programs were written to study reacting flows. Both programs were constructed to consider the multicomponent diffusion and convection of important chemical species, the finite rate reaction of these species, and the resulting interaction of the fluid mechanics and the chemistry. The first program employed a finite difference scheme for integrating the governing equations, whereas the second used a hybrid Chebyshev pseudospectral technique for improved accuracy.

  5. Flow Cytometry Data Preparation Guidelines for Improved Automated Phenotypic Analysis.

    PubMed

    Jimenez-Carretero, Daniel; Ligos, José M; Martínez-López, María; Sancho, David; Montoya, María C

    2018-05-15

    Advances in flow cytometry (FCM) increasingly demand adoption of computational analysis tools to tackle the ever-growing data dimensionality. In this study, we tested different data input modes to evaluate how cytometry acquisition configuration and data compensation procedures affect the performance of unsupervised phenotyping tools. An analysis workflow was set up and tested for the detection of changes in reference bead subsets and in a rare subpopulation of murine lymph node CD103 + dendritic cells acquired by conventional or spectral cytometry. Raw spectral data or pseudospectral data acquired with the full set of available detectors by conventional cytometry consistently outperformed datasets acquired and compensated according to FCM standards. Our results thus challenge the paradigm of one-fluorochrome/one-parameter acquisition in FCM for unsupervised cluster-based analysis. Instead, we propose to configure instrument acquisition to use all available fluorescence detectors and to avoid integration and compensation procedures, thereby using raw spectral or pseudospectral data for improved automated phenotypic analysis. Copyright © 2018 by The American Association of Immunologists, Inc.

  6. Navier-Stokes solution on the CYBER-203 by a pseudospectral technique

    NASA Technical Reports Server (NTRS)

    Lambiotte, J. J.; Hussaini, M. Y.; Bokhari, S.; Orszag, S. A.

    1983-01-01

    A three-level, time-split, mixed spectral/finite difference method for the numerical solution of the three-dimensional, compressible Navier-Stokes equations has been developed and implemented on the Control Data Corporation (CDC) CYBER-203. This method uses a spectral representation for the flow variables in the streamwise and spanwise coordinates, and central differences in the normal direction. The five dependent variables are interleaved one horizontal plane at a time and the array of their values at the grid points of each horizontal plane is a typical vector in the computation. The code is organized so as to require, per time step, a single forward-backward pass through the entire data base. The one- and two-dimensional fast Fourier transforms are performed using software especially developed for the CYBER-203.

  7. 2.5-D poroelastic wave modelling in double porosity media

    NASA Astrophysics Data System (ADS)

    Liu, Xu; Greenhalgh, Stewart; Wang, Yanghua

    2011-09-01

    To approximate seismic wave propagation in double porosity media, the 2.5-D governing equations of poroelastic waves are developed and numerically solved. The equations are obtained by taking a Fourier transform in the strike or medium-invariant direction over all of the field quantities in the 3-D governing equations. The new memory variables from the Zener model are suggested as a way to represent the sum of the convolution integrals for both the solid particle velocity and the macroscopic fluid flux in the governing equations. By application of the memory equations, the field quantities at every time step need not be stored. However, this approximation allows just two Zener relaxation times to represent the very complex double porosity and dual permeability attenuation mechanism, thus reducing the difficulty. The 2.5-D governing equations are numerically solved by a time-splitting method for the non-stiff parts, an explicit fourth-order Runge-Kutta method for the time integration, and a Fourier pseudospectral staggered grid for handling the spatial derivative terms. The 2.5-D solution has the advantage of producing a 3-D wavefield (point source) for a 2-D model but is much more computationally efficient than the full 3-D solution. As an illustrative example, we first show the computed 2.5-D wavefields in a homogeneous single porosity model for which we reformulated an analytic solution. Results for a two-layer, water-saturated double porosity model and a laterally heterogeneous double porosity structure are also presented.
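
    The pairing of Fourier pseudospectral spatial derivatives with explicit fourth-order Runge-Kutta time stepping mentioned above can be illustrated on a much simpler model problem. The following sketch (plain NumPy, 1-D periodic advection, hypothetical parameters) is illustrative only and is not the 2.5-D poroelastic solver:

```python
import numpy as np

# 1-D periodic advection u_t + c u_x = 0 with a Fourier pseudospectral
# derivative in space and classical RK4 in time (illustrative toy problem,
# not the 2.5-D poroelastic solver of the paper).
N, L, c = 256, 2.0 * np.pi, 1.0
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)          # wavenumbers

def rhs(u):
    u_x = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))  # spectral derivative
    return -c * u_x

u = np.exp(-40.0 * (x - np.pi) ** 2)                  # smooth initial pulse
dt, nsteps = 1e-3, 2000
for _ in range(nsteps):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u += dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Compare with the exact solution, shifted by c*t on the periodic domain.
t = nsteps * dt
s = (x - np.pi - c * t + 0.5 * L) % L - 0.5 * L
print("max error:", np.max(np.abs(u - np.exp(-40.0 * s ** 2))))
```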

  8. Bowen-York trumpet data and black-hole simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hannam, Mark; Murchadha, Niall O; Husa, Sascha

    2009-12-15

    The most popular method to construct initial data for black-hole-binary simulations is the puncture method, in which compactified wormholes are given linear and angular momentum via the Bowen-York extrinsic curvature. When these data are evolved, they quickly approach a trumpet topology, suggesting that it would be preferable to use data that are in trumpet form from the outset. To achieve this, we extend the puncture method to allow the construction of Bowen-York trumpets, including an outline of an existence and uniqueness proof of the solutions. We construct boosted, spinning and binary Bowen-York puncture trumpets using a single-domain pseudospectral elliptic solver, and evolve the binary data and compare with standard wormhole-data results. We also show that for boosted trumpets the black-hole mass can be prescribed a priori, without recourse to the iterative procedure that is necessary for wormhole data.

  9. Pseudospectra in non-Hermitian quantum mechanics

    NASA Astrophysics Data System (ADS)

    Krejčiřík, D.; Siegl, P.; Tater, M.; Viola, J.

    2015-10-01

    We propose giving the mathematical concept of the pseudospectrum a central role in quantum mechanics with non-Hermitian operators. We relate pseudospectral properties to quasi-Hermiticity, similarity to self-adjoint operators, and basis properties of eigenfunctions. The abstract results are illustrated by unexpected wild properties of operators familiar from PT -symmetric quantum mechanics.

  10. Statistics for laminar flamelet modeling

    NASA Technical Reports Server (NTRS)

    Cant, R. S.; Rutland, C. J.; Trouve, A.

    1990-01-01

    Statistical information required to support modeling of turbulent premixed combustion by laminar flamelet methods is extracted from a database of the results of Direct Numerical Simulation of turbulent flames. The simulations were carried out previously by Rutland (1989) using a pseudo-spectral code on a three dimensional mesh of 128 points in each direction. One-step Arrhenius chemistry was employed together with small heat release. A framework for the interpretation of the data is provided by the Bray-Moss-Libby model for the mean turbulent reaction rate. Probability density functions are obtained over surfaces of the constant reaction progress variable for the tangential strain rate and the principal curvature. New insights are gained which will greatly aid the development of modeling approaches.

  11. A Novel Approach with Time-Splitting Spectral Technique for the Coupled Schrödinger-Boussinesq Equations Involving Riesz Fractional Derivative

    NASA Astrophysics Data System (ADS)

    Saha Ray, S.

    2017-09-01

    In the present paper the Riesz fractional coupled Schrödinger-Boussinesq (S-B) equations have been solved by the time-splitting Fourier spectral (TSFS) method. This proposed technique is utilized for discretizing the Schrödinger-like equation and further, a pseudospectral discretization has been employed for the Boussinesq-like equation. Apart from that, an implicit finite-difference approach has also been proposed to compare the results with the solutions obtained from the time-splitting technique. Furthermore, the time-splitting method is proved to be unconditionally stable. The error norms along with the graphical solutions have also been presented here. Supported by NBHM, Mumbai, under Department of Atomic Energy, Government of India vide Grant No. 2/48(7)/2015/NBHM (R.P.)/R&D II/11403

  12. Low-Thrust Transfers from Distant Retrograde Orbits to L2 Halo Orbits in the Earth-Moon System

    NASA Technical Reports Server (NTRS)

    Parrish, Nathan L.; Parker, Jeffrey S.; Hughes, Steven P.; Heiligers, Jeannette

    2016-01-01

    This paper presents a study of transfers between distant retrograde orbits (DROs) and L2 halo orbits in the Earth-Moon system that could be flown by a spacecraft with solar electric propulsion (SEP). Two collocation-based optimal control methods are used to optimize these highly-nonlinear transfers: Legendre pseudospectral and Hermite-Simpson. Transfers between DROs and halo orbits using low-thrust propulsion have not been studied previously. This paper offers a study of several families of trajectories, parameterized by the number of orbital revolutions in a synodic frame. Even with a poor initial guess, a method is described to reliably generate families of solutions. The circular restricted 3-body problem (CRTBP) is used throughout the paper so that the results are autonomous and simpler to understand.
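
    As a rough illustration of the Hermite-Simpson collocation mentioned above, the following sketch evaluates the standard Hermite-Simpson defect on one mesh interval of a generic ODE x' = f(x); all names are hypothetical and this is not code from the study:

```python
import numpy as np

def hermite_simpson_defect(f, xk, xk1, h):
    """Hermite-Simpson collocation defect for one interval of x' = f(x).

    The defect vanishes (to the order of the scheme) when the pair
    (xk, xk1) is consistent with the dynamics over a step of size h.
    """
    fk, fk1 = f(xk), f(xk1)
    xc = 0.5 * (xk + xk1) + h / 8.0 * (fk - fk1)   # Hermite interpolant midpoint
    fc = f(xc)
    return xk1 - xk - h / 6.0 * (fk + 4.0 * fc + fk1)

# Toy check on x' = -x with exact solution x(t) = exp(-t):
f = lambda x: -x
h = 0.1
defect = hermite_simpson_defect(f, np.array([1.0]), np.array([np.exp(-h)]), h)
print(defect)   # should be O(h^5), i.e. very small
```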

  13. LARGE-SCALE SIMULATIONS OF ELECTROMAGNETIC AND ACOUSTIC MEASUREMENTS USING THE PSEUDOSPECTRAL TIME-DOMAIN (PSTD) ALGORITHM.(37) 2:917-926. (R825225)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  14. Optimal reorientation of asymmetric underactuated spacecraft using differential flatness and receding horizon control

    NASA Astrophysics Data System (ADS)

    Cai, Wei-wei; Yang, Le-ping; Zhu, Yan-wei

    2015-01-01

    This paper presents a novel method integrating nominal trajectory optimization and tracking for the reorientation control of an underactuated spacecraft with only two available control torque inputs. By employing a pseudo input along the uncontrolled axis, the flatness property of a general underactuated spacecraft is extended explicitly, by which the reorientation trajectory optimization problem is formulated in the flat output space with all the differential constraints eliminated. Ultimately, the flat output optimization problem is transformed into a nonlinear programming problem via the Chebyshev pseudospectral method, which is improved by the conformal map and barycentric rational interpolation techniques to overcome the side effects of the ill-conditioning of the differentiation matrix on numerical accuracy. Treating the trajectory tracking control as a state regulation problem, we develop a robust closed-loop tracking control law using the receding-horizon control method, and compute the feedback control at each control cycle rapidly via the differential transformation method. Numerical simulation results show that the proposed control scheme is feasible and effective for the reorientation maneuver.

  15. Finite-temperature effects in helical quantum turbulence

    NASA Astrophysics Data System (ADS)

    Clark Di Leoni, Patricio; Mininni, Pablo D.; Brachet, Marc E.

    2018-04-01

    We perform a study of the evolution of helical quantum turbulence at different temperatures by solving numerically the Gross-Pitaevskii and the stochastic Ginzburg-Landau equations, using up to 4096^3 grid points with a pseudospectral method. We show that for temperatures close to the critical one, the fluid described by these equations can act as a classical viscous flow, with the decay of the incompressible kinetic energy and the helicity becoming exponential. The transition from this behavior to the one observed at zero temperature is smooth as a function of temperature. Moreover, the presence of strong thermal effects can inhibit the development of a proper turbulent cascade. We provide Ansätze for the effective viscosity and friction as a function of the temperature.

  16. A computer-assisted study of pulse dynamics in anisotropic media

    NASA Astrophysics Data System (ADS)

    Krishnan, J.; Engelborghs, K.; Bär, M.; Lust, K.; Roose, D.; Kevrekidis, I. G.

    2001-06-01

    This study focuses on the computer-assisted stability analysis of travelling pulse-like structures in spatially periodic heterogeneous reaction-diffusion media. The physical motivation comes from pulse propagation in thin annular domains on a diffusionally anisotropic catalytic surface. The study was performed by computing the travelling pulse-like structures as limit cycles of the spatially discretized PDE, which in turn is performed in two ways: a Newton method based on a pseudospectral discretization of the PDE, and a Newton-Picard method based on a finite difference discretization. Details about the spectra of these modulated pulse-like structures are discussed, including how they may be compared with the spectra of pulses in homogeneous media. The effects of anisotropy on the dynamics of pulses and pulse pairs are studied. Beyond shifting the location of bifurcations present in homogeneous media, anisotropy can also introduce certain new instabilities.

  17. Large eddy simulation of incompressible turbulent channel flow

    NASA Technical Reports Server (NTRS)

    Moin, P.; Reynolds, W. C.; Ferziger, J. H.

    1978-01-01

    The three-dimensional, time-dependent primitive equations of motion were numerically integrated for the case of turbulent channel flow. A partially implicit numerical method was developed. An important feature of this scheme is that the equation of continuity is solved directly. The residual field motions were simulated through an eddy viscosity model, while the large-scale field was obtained directly from the solution of the governing equations. An important portion of the initial velocity field was obtained from the solution of the linearized Navier-Stokes equations. The pseudospectral method was used for numerical differentiation in the horizontal directions, and second-order finite-difference schemes were used in the direction normal to the walls. The large eddy simulation technique is capable of reproducing some of the important features of wall-bounded turbulent flows. The resolvable portions of the root-mean square wall pressure fluctuations, pressure velocity-gradient correlations, and velocity pressure-gradient correlations are documented.

  18. Direct Numerical Simulation of Turbulent Flow Over Complex Bathymetry

    NASA Astrophysics Data System (ADS)

    Yue, L.; Hsu, T. J.

    2017-12-01

    Direct numerical simulation (DNS) is regarded as a powerful tool in the investigation of turbulent flow featured with a wide range of time and spatial scales. With the application of coordinate transformation in a pseudo-spectral scheme, a parallelized numerical modeling system was created aiming at simulating flow over complex bathymetry with high numerical accuracy and efficiency. The transformed governing equations were integrated in time using a third-order low-storage Runge-Kutta method. For spatial discretization, the discrete Fourier expansion was adopted in the streamwise and spanwise direction, enforcing the periodic boundary condition in both directions. The Chebyshev expansion on Chebyshev-Gauss-Lobatto points was used in the wall-normal direction, assuming there is no-slip on top and bottom walls. The diffusion terms were discretized with a Crank-Nicolson scheme, while the advection terms dealiased with the 2/3 rule were discretized with an Adams-Bashforth scheme. In the prediction step, the velocity was calculated in physical domain by solving the resulting linear equation directly. However, the extra terms introduced by coordinate transformation impose a strict limitation to time step and an iteration method was applied to overcome this restriction in the correction step for pressure by solving the Helmholtz equation. The numerical solver is written in object-oriented C++ programming language utilizing Armadillo linear algebra library for matrix computation. Several benchmarking cases in laminar and turbulent flow were carried out to verify/validate the numerical model and very good agreement is achieved. Ongoing work focuses on implementing sediment transport capability for multiple sediment classes and parameterizations for flocculation processes.
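
    The 2/3-rule dealiasing of the advection terms can be written compactly in a Fourier pseudospectral setting. The fragment below is a minimal 1-D sketch with hypothetical names, not code from the solver described above: the product is formed in physical space after zeroing the upper third of each spectrum.

```python
import numpy as np

def dealiased_product(u_hat, v_hat):
    """Fourier coefficients of u*v with 2/3-rule dealiasing (1-D sketch)."""
    N = u_hat.size
    k = np.fft.fftfreq(N) * N               # integer wavenumbers
    mask = np.abs(k) < N // 3               # keep only the lowest 2/3 of modes
    u = np.fft.ifft(u_hat * mask)
    v = np.fft.ifft(v_hat * mask)
    w_hat = np.fft.fft(u * v)
    return w_hat * mask                     # truncate the product as well

# Example: two random fields, transformed and multiplied without aliasing errors
N = 64
u_hat = np.fft.fft(np.random.rand(N))
v_hat = np.fft.fft(np.random.rand(N))
w_hat = dealiased_product(u_hat, v_hat)
```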

  19. Quadrature imposition of compatibility conditions in Chebyshev methods

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Streett, C. L.

    1990-01-01

    Often, in solving an elliptic equation with Neumann boundary conditions, a compatibility condition has to be imposed for well-posedness. This condition involves integrals of the forcing function. When pseudospectral Chebyshev methods are used to discretize the partial differential equation, these integrals have to be approximated by an appropriate quadrature formula. The Gauss-Chebyshev (or any variant of it, like the Gauss-Lobatto) formula can not be used here since the integrals under consideration do not include the weight function. A natural candidate to be used in approximating the integrals is the Clenshaw-Curtis formula, however it is shown that this is the wrong choice and it may lead to divergence if time dependent methods are used to march the solution to steady state. The correct quadrature formula is developed for these problems. This formula takes into account the degree of the polynomials involved. It is shown that this formula leads to a well conditioned Chebyshev approximation to the differential equations and that the compatibility condition is automatically satisfied.
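
    For reference, the Clenshaw-Curtis quadrature discussed above (the formula shown in the paper to be the wrong choice for imposing the compatibility condition) can be generated on the Chebyshev-Gauss-Lobatto points as in the sketch below, which follows the classical construction popularized by Trefethen's clencurt routine; it is illustrative only and is not the corrected quadrature formula derived in the paper.

```python
import numpy as np

def clencurt(N):
    """Chebyshev-Gauss-Lobatto points and Clenshaw-Curtis weights on [-1, 1]."""
    theta = np.pi * np.arange(N + 1) / N
    x = np.cos(theta)
    w = np.zeros(N + 1)
    v = np.ones(N - 1)
    if N % 2 == 0:
        w[0] = w[N] = 1.0 / (N ** 2 - 1)
        for k in range(1, N // 2):
            v -= 2.0 * np.cos(2 * k * theta[1:N]) / (4 * k ** 2 - 1)
        v -= np.cos(N * theta[1:N]) / (N ** 2 - 1)
    else:
        w[0] = w[N] = 1.0 / N ** 2
        for k in range(1, (N - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[1:N]) / (4 * k ** 2 - 1)
    w[1:N] = 2.0 * v / N
    return x, w

x, w = clencurt(16)
print(np.dot(w, np.exp(x)))   # approximates the integral of exp on [-1, 1] (~2.3504)
```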

  20. Matrix eigenvalue method for free-oscillations modelling of spherical elastic bodies

    NASA Astrophysics Data System (ADS)

    Zábranová, E.; Hanyk, L.; Matyska, C.

    2017-11-01

    Deformations and changes of the gravitational potential of pre-stressed self-gravitating elastic bodies caused by free oscillations are described by means of the momentum and Poisson equations and the constitutive relation. For spherically symmetric bodies, the equations and boundary conditions are transformed into ordinary differential equations of the second order by the spherical harmonic decomposition and further discretized by highly accurate pseudospectral difference schemes on Chebyshev grids; we pay special attention to the conditions at the centre of the models. We thus obtain a series of matrix eigenvalue problems for eigenfrequencies and eigenfunctions of the free oscillations. Accuracy of the presented numerical approach is tested by means of the Rayleigh quotients calculated for the eigenfrequencies up to 500 mHz. Both the modal frequencies and eigenfunctions are benchmarked against the output from the Mineos software package based on shooting methods. The presented technique is a promising alternative to widely used methods because it is stable and with a good capability up to high frequencies.
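
    To give a flavor of the matrix eigenvalue approach on a Chebyshev grid, the sketch below builds a standard Chebyshev pseudospectral differentiation matrix (Trefethen's cheb construction) and approximates the eigenvalues of the toy problem -u'' = λu on [-1, 1] with u(±1) = 0, whose exact eigenvalues are (kπ/2)^2. This is a hypothetical analogue, not the free-oscillation equations of the paper.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and grid x (Trefethen, Spectral Methods)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))               # diagonal from row-sum identity
    return D, x

N = 32
D, x = cheb(N)
D2 = (D @ D)[1:N, 1:N]                        # impose u(±1) = 0 by dropping boundary rows/cols
lam = np.sort(-np.linalg.eigvals(D2).real)    # eigenvalues of -d^2/dx^2
exact = (np.arange(1, 6) * np.pi / 2.0) ** 2
print(lam[:5])                                # lowest modes agree with exact to many digits
print(exact)
```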

  1. Evolution of inviscid Kelvin-Helmholtz instability from a piecewise linear shear layer

    NASA Astrophysics Data System (ADS)

    Guha, Anirban; Rahmani, Mona; Lawrence, Gregory

    2012-11-01

    Here we study the evolution of 2D, inviscid Kelvin-Helmholtz instability (KH) ensuing from a piecewise linear shear layer. Although KH pertaining to smooth shear layers (e.g. the hyperbolic tangent profile) has been thoroughly investigated in the past, very little is known about KH resulting from sharp shear layers. Pozrikidis and Higdon (1985) have shown that a piecewise linear shear layer evolves into elliptical vortex patches. This non-linear state is dramatically different from the well-known spiral-billow structure of KH. In fact, there is little acknowledgement that elliptical vortex patches can represent non-linear KH. In this work, we show how such patches evolve through the interaction of vorticity waves. Our work is based on two types of computational methods: (i) Contour Dynamics, a boundary-element method which tracks the evolution of the contour of a vortex patch using Lagrangian marker points, and (ii) Direct Numerical Simulation (DNS), an Eulerian pseudo-spectral method heavily used in studying hydrodynamic instability and turbulence.

  2. The fifth-order partial differential equation for the description of the α + β Fermi-Pasta-Ulam model

    NASA Astrophysics Data System (ADS)

    Kudryashov, Nikolay A.; Volkov, Alexandr K.

    2017-01-01

    We study a new nonlinear partial differential equation of the fifth order for the description of perturbations in the Fermi-Pasta-Ulam mass chain. This fifth-order equation is an expansion of the Gardner equation for the description of the Fermi-Pasta-Ulam model. We use the potential of interaction between neighbouring masses with both quadratic and cubic terms. The equation is derived using the continuous limit. Unlike the previous works, we take into account higher order terms in the Taylor series expansions. We investigate the equation using the Painlevé approach. We show that the equation does not pass the Painlevé test and can not be integrated by the inverse scattering transform. We use the logistic function method and the Laurent expansion method to find travelling wave solutions of the fifth-order equation. We use the pseudospectral method for the numerical simulation of wave processes, described by the equation.

  3. Fast neural solution of a nonlinear wave equation

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1992-01-01

    A neural algorithm for rapidly simulating a certain class of nonlinear wave phenomena using analog VLSI neural hardware is presented and applied to the Korteweg-de Vries partial differential equation. The corresponding neural architecture is obtained from a pseudospectral representation of the spatial dependence, along with a leap-frog scheme for the temporal evolution. Numerical simulations demonstrated the robustness of the proposed approach.
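
    The pseudospectral representation of the spatial dependence can be illustrated by verifying spectrally that a travelling soliton satisfies the Korteweg-de Vries equation u_t + 6uu_x + u_xxx = 0. The short sketch below (plain NumPy, hypothetical setup, not the neural architecture of the paper) evaluates the residual with Fourier differentiation:

```python
import numpy as np

# Fourier pseudospectral check that the KdV soliton
# u = (c/2) sech^2( sqrt(c)/2 * (x - c t) ) satisfies u_t + 6 u u_x + u_xxx = 0.
N, L, c = 512, 80.0, 1.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)

u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * x) ** 2

def ddx(f, order=1):
    """Spectral derivative of a periodic field."""
    return np.real(np.fft.ifft((1j * k) ** order * np.fft.fft(f)))

u_x = ddx(u)
u_xxx = ddx(u, 3)
u_t = -c * u_x                      # travelling wave: u(x, t) = u(x - c t)
residual = u_t + 6.0 * u * u_x + u_xxx
print(np.max(np.abs(residual)))     # should be near machine precision
```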

  4. Eliminating time dispersion from seismic wave modeling

    NASA Astrophysics Data System (ADS)

    Koene, Erik F. M.; Robertsson, Johan O. A.; Broggini, Filippo; Andersson, Fredrik

    2018-04-01

    We derive an expression for the error introduced by the second-order accurate temporal finite-difference (FD) operator, as present in the FD, pseudospectral and spectral element methods for seismic wave modeling applied to time-invariant media. The `time-dispersion' error speeds up the signal as a function of frequency and time step only. Time dispersion is thus independent of the propagation path, medium or spatial modeling error. We derive two transforms to either add or remove time dispersion from synthetic seismograms after a simulation. The transforms are compared to previous related work and demonstrated on wave modeling in acoustic as well as elastic media. In addition, an application to imaging is shown. The transforms enable accurate computation of synthetic seismograms at reduced cost, benefitting modeling applications in both exploration and global seismology.
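
    A hedged sketch of the underlying idea: for the plain second-order central difference in time, a mode at true angular frequency ω propagates at a shifted numerical frequency that depends only on ω and the time step Δt, consistent with the path- and medium-independence stated above. The snippet evaluates this standard mapping; the exact add/remove transforms derived by the authors should be taken from the paper itself.

```python
import numpy as np

# Standard mapping between the true angular frequency w and the numerical
# frequency w_num implied by the second-order central time difference:
#   (2/dt) * sin(w_num * dt / 2) = w   =>   w_num = (2/dt) * arcsin(w * dt / 2)
dt = 2e-3                                            # hypothetical time step [s]
w = 2.0 * np.pi * np.linspace(1.0, 100.0, 5)         # 1-100 Hz
w_num = 2.0 / dt * np.arcsin(w * dt / 2.0)
print((w_num - w) / w)   # relative frequency shift grows with frequency
```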

  5. Reynolds Stress Balance in Plane Wakes Subjected to Irrotational Strains

    NASA Technical Reports Server (NTRS)

    Rogers, Michael M.; Merriam, Marshal (Technical Monitor)

    1997-01-01

    Direct numerical simulations of time-evolving turbulent plane wakes developing in the presence of various irrotational plane strains have been generated. A pseudospectral numerical method with up to 25 million modes is used to solve the equations in a reference frame moving with the irrotational strain. The initial condition for each simulation is taken from a previous turbulent self-similar plane wake direct numerical simulation at a velocity deficit Reynolds number, Re, of about 2,000. All the terms in the equations governing the evolution of the Reynolds stresses have been calculated. The relative importance of the various terms is examined for the different strain geometries and the behavior of the individual terms is used to better assess whether the strained wakes are evolving self-similarly.

  6. Effect of rotation rate on the forces of a rotating cylinder: Simulation and control

    NASA Technical Reports Server (NTRS)

    Burns, John A.; Ou, Yuh-Roung

    1993-01-01

    In this paper we present numerical solutions to several optimal control problems for an unsteady viscous flow. The main thrust of this work is devoted to simulation and control of an unsteady flow generated by a circular cylinder undergoing rotary motion. By treating the rotation rate as a control variable, we can formulate two optimal control problems and use a central difference/pseudospectral transform method to numerically compute the optimal control rates. Several types of rotations are considered as potential controls, and we show that a proper synchronization of forcing frequency with the natural vortex shedding frequency can greatly influence the flow. The results here indicate that using moving boundary controls for such systems may provide a feasible mechanism for flow control.

  7. Unsteady three-dimensional marginal separation, including breakdown

    NASA Technical Reports Server (NTRS)

    Duck, Peter W.

    1990-01-01

    A situation involving a three-dimensional marginal separation is considered, where a (steady) boundary layer flow is on the verge of separating at a point (located along a line of symmetry/centerline). At this point, a triple-deck is included, thereby permitting a small amount of interaction to occur. Unsteadiness is included within this interaction region through some external means. It is shown that the problem reduces to the solution of a nonlinear, unsteady partial integro-differential system, which is solved numerically by means of time-marching together with a pseudo-spectral method spatially. A number of solutions to this system are presented which strongly suggest a breakdown of this system may occur, at a finite spatial position, at a finite time. The structure and details of this breakdown are then described.

  8. AxiSEM3D: broadband seismic wavefields in 3-D aspherical Earth models

    NASA Astrophysics Data System (ADS)

    Leng, K.; Nissen-Meyer, T.; Zad, K. H.; van Driel, M.; Al-Attar, D.

    2017-12-01

    Seismology is the primary tool for data-informed inference of Earth structure and dynamics. Simulating seismic wave propagation at a global scale is fundamental to seismology, but remains as one of most challenging problems in scientific computing, because of both the multiscale nature of Earth's interior and the observable frequency band of seismic data. We present a novel numerical method to simulate global seismic wave propagation in realistic 3-D Earth models. Our method, named AxiSEM3D, is a hybrid of spectral element method and pseudospectral method. It reduces the azimuthal dimension of wavefields by means of a global Fourier series parameterization, of which the number of terms can be locally adapted to the inherent azimuthal smoothness of the wavefields. AxiSEM3D allows not only for material heterogeneities, such as velocity, density, anisotropy and attenuation, but also for finite undulations on radial discontinuities, both solid-solid and solid-fluid, and thereby a variety of aspherical Earth features such as ellipticity, topography, variable crustal thickness, and core-mantle boundary topography. Such interface undulations are equivalently interpreted as material perturbations of the contiguous media, based on the "particle relabelling transformation". Efficiency comparisons show that AxiSEM3D can be 1 to 3 orders of magnitude faster than conventional 3-D methods, with the speedup increasing with simulation frequency and decreasing with model complexity, but for all realistic structures the speedup remains at least one order of magnitude. The observable frequency range of global seismic data (up to 1 Hz) has been covered for wavefield modelling upon a 3-D Earth model with reasonable computing resources. We show an application of surface wave modelling within a state-of-the-art global crustal model (Crust1.0), with the synthetics compared to real data. The high-performance C++ code is released at github.com/AxiSEM3D/AxiSEM3D.

  9. Two-craft Coulomb formation study about circular orbits and libration points

    NASA Astrophysics Data System (ADS)

    Inampudi, Ravi Kishore

    This dissertation investigates the dynamics and control of a two-craft Coulomb formation in circular orbits and at libration points; it addresses relative equilibria, stability and optimal reconfigurations of such formations. The relative equilibria of a two-craft tether formation connected by line-of-sight elastic forces moving in circular orbits and at libration points are investigated. In circular Earth orbits and Earth-Moon libration points, the radial, along-track, and orbit normal great circle equilibria conditions are found. An example of modeling the tether force using Coulomb force is discussed. Furthermore, the non-great-circle equilibria conditions for a two-spacecraft tether structure in circular Earth orbit and at collinear libration points are developed. Then the linearized dynamics and stability analysis of a 2-craft Coulomb formation at Earth-Moon libration points are studied. For orbit-radial equilibrium, Coulomb forces control the relative distance between the two satellites. The gravity gradient torques on the formation due to the two planets help stabilize the formation. Similar analysis is performed for along-track and orbit-normal relative equilibrium configurations. Where necessary, the craft use a hybrid thrusting-electrostatic actuation system. The two-craft dynamics at the libration points provide a general framework with circular Earth orbit dynamics forming a special case. In the presence of differential solar drag perturbations, a Lyapunov feedback controller is designed to stabilize a radial equilibrium, two-craft Coulomb formation at collinear libration points. The second part of the thesis investigates optimal reconfigurations of two-craft Coulomb formations in circular Earth orbits by applying nonlinear optimal control techniques. The objective of these reconfigurations is to maneuver the two-craft formation between two charged equilibria configurations. The reconfiguration of spacecraft is posed as an optimization problem using the calculus of variations approach. The optimality criteria are minimum time, minimum acceleration of the separation distance, minimum Coulomb and electric propulsion fuel usage, and minimum electrical power consumption. The continuous time problem is discretized using a pseudospectral method, and the resulting finite dimensional problem is solved using a sequential quadratic programming algorithm. The software package, DIDO, implements this approach. This second part illustrates how pseudospectral methods significantly simplify the solution-finding process.
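
    Pseudospectral optimal control methods of this kind typically collocate at Legendre-Gauss-Lobatto (LGL) points. The sketch below computes LGL nodes and quadrature weights by the usual Newton iteration on the Legendre recursion; it is a generic illustration with hypothetical names, not code taken from DIDO:

```python
import numpy as np

def lgl_nodes(N, tol=1e-14):
    """Legendre-Gauss-Lobatto nodes and quadrature weights on [-1, 1] (N+1 points)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)       # Chebyshev initial guess
    P = np.zeros((N + 1, N + 1))                   # Legendre Vandermonde
    x_old = 2.0 * np.ones_like(x)
    while np.max(np.abs(x - x_old)) > tol:
        x_old = x.copy()
        P[:, 0] = 1.0
        P[:, 1] = x
        for n in range(2, N + 1):                  # three-term Legendre recursion
            P[:, n] = ((2 * n - 1) * x * P[:, n - 1] - (n - 1) * P[:, n - 2]) / n
        x = x_old - (x * P[:, N] - P[:, N - 1]) / ((N + 1) * P[:, N])
    w = 2.0 / (N * (N + 1) * P[:, N] ** 2)
    return x, w

x, w = lgl_nodes(8)
print(np.dot(w, x ** 6))   # LGL quadrature is exact here: integral of x^6 on [-1,1] = 2/7
```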

  10. Investigation of the viscous reconnection phenomenon of two vortex tubes through spectral simulations

    NASA Astrophysics Data System (ADS)

    Beardsell, Guillaume; Dufresne, Louis; Dumas, Guy

    2016-09-01

    This paper aims to shed further light on the viscous reconnection phenomenon. To this end, we propose a robust and efficient method in order to quantify the degree of reconnection of two vortex tubes. This method is used to compare the evolutions of two simple initial vortex configurations: orthogonal and antiparallel. For the antiparallel configuration, the proposed method is compared with alternative estimators and it is found to improve accuracy since it can account properly for the formation of looping structures inside the domain. This observation being new, the physical mechanism for the formation of those looping structures is discussed. For the orthogonal configuration, we report results from simulations that were performed at a much higher vortex Reynolds number (Re_Γ ≡ circulation/viscosity = 10^4) and finer resolution (N^3 = 1024^3) than previously presented in the literature. The incompressible Navier-Stokes equations are solved directly (Direct Numerical Simulation or DNS) using a Fourier pseudospectral algorithm with triply periodic boundary conditions. The associated zero-circulation constraint is circumvented by solving the governing equations in a proper rotating frame of reference. Using ideas similar to those behind our method to compute the degree of reconnection, we split the vorticity field into its reconnected and non-reconnected parts, which allows us to create insightful visualizations of the evolving vortex topology. It also allows us to detect regions in the vorticity field that are neither reconnected nor non-reconnected and thus must be associated with internal looping structures. Finally, the Reynolds number dependence of the reconnection time scale T_rec is investigated in the range 500 ≤ Re_Γ ≤ 10 000. For both initial configurations, the scaling is generally found to vary continuously as Re_Γ is increased, from T_rec ~ Re_Γ^(-1) to T_rec ~ Re_Γ^(-1/2), thus providing quantitative support for previous claims that the reconnection physics of two vortices should be similar regardless of their spatial arrangement.

  11. An Interpolation Approach to Optimal Trajectory Planning for Helicopter Unmanned Aerial Vehicles

    DTIC Science & Technology

    2012-06-01

    Armament Data Line; DOF, Degree of Freedom; PS, Pseudospectral; LGL, Legendre-Gauss-Lobatto quadrature nodes; ODE, Ordinary Differential Equation ... low-order polynomials patched together in such a way that the resulting trajectory has several continuous derivatives at all points. In [7], Murray ... claims that splines are ideal for optimal control problems because each segment of the spline's piecewise polynomials approximates the trajectory

  12. Inverse design of bulk morphologies in block copolymers using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Khadilkar, Mihir; Delaney, Kris; Fredrickson, Glenn

    Multiblock polymers are a versatile platform for creating a large range of nanostructured materials with novel morphologies and properties. However, achieving desired structures or property combinations is difficult due to a vast design space comprised of parameters including monomer species, block sequence, block molecular weights and dispersity, copolymer architecture, and binary interaction parameters. Navigating through such vast design spaces to achieve an optimal formulation for a target structure or property set requires an efficient global optimization tool wrapped around a forward simulation technique such as self-consistent field theory (SCFT). We report on such an inverse design strategy utilizing particle swarm optimization (PSO) as the global optimizer and SCFT as the forward prediction engine. To avoid metastable states in forward prediction, we utilize pseudo-spectral variable cell SCFT initiated from a library of defect free seeds of known block copolymer morphologies. We demonstrate that our approach allows for robust identification of block copolymers and copolymer alloys that self-assemble into a targeted structure, optimizing parameters such as block fractions, blend fractions, and Flory chi parameters.
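
    A minimal particle swarm optimization loop of the kind used here as the outer optimizer is sketched below on a toy objective; the forward SCFT prediction is replaced by a simple analytic stand-in and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(params):
    # Stand-in for the forward SCFT evaluation: distance to a "target" design.
    return np.sum((params - np.array([0.3, 0.7])) ** 2, axis=-1)

n_particles, dim, iters = 20, 2, 100
w_in, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights
x = rng.uniform(0.0, 1.0, (n_particles, dim))     # positions (e.g. block fractions)
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), objective(x)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w_in * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 1.0)                  # keep parameters in bounds
    val = objective(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print(gbest)   # should approach the target [0.3, 0.7]
```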

  13. Designing with non-linear viscoelastic fluids

    NASA Astrophysics Data System (ADS)

    Schuh, Jonathon; Lee, Yong Hoon; Allison, James; Ewoldt, Randy

    2017-11-01

    Material design is typically limited to hard materials or simple fluids; however, design with more complex materials can provide ways to enhance performance. Using the Criminale-Ericksen-Filbey (CEF) constitutive model in the thin film lubrication limit, we derive a modified Reynolds Equation (based on asymptotic analysis) that includes shear thinning, first normal stress, and terminal regime viscoelastic effects. This allows for designing non-linear viscoelastic fluids in thin-film creeping flow scenarios, i.e. optimizing the shape of rheological material properties to achieve different design objectives. We solve the modified Reynolds equation using the pseudo-spectral method, and describe a case study in full-film lubricated sliding where optimal fluid properties are identified. These material-agnostic property targets can then guide formulation of complex fluids which may use polymeric, colloidal, or other creative approaches to achieve the desired non-Newtonian properties.

  14. Force-free electrodynamics in dynamical curved spacetimes

    NASA Astrophysics Data System (ADS)

    McWilliams, Sean

    2015-04-01

    We present results on our study of force-free electrodynamics in curved spacetimes. Specifically, we present several improvements to what has become the established set of evolution equations, and we apply these to study the nonlinear stability of analytically known force-free solutions for the first time. We implement our method in a new pseudo-spectral code built on top of the SpEC code for evolving dynamic spacetimes. We then revisit these known solutions and attempt to clarify some interesting properties that render them analytically tractable. Finally, we preview some new work that similarly revisits the established approach to solving another problem in numerical relativity: the post-merger recoil from asymmetric gravitational-wave emission. These new results may have significant implications for the parameter dependence of recoils, and consequently on the statistical expectations for recoil velocities of merged systems.

  15. Analysis of the low gravity tolerance of Bridgman-Stockbarger crystal growth. I - Steady and impulse accelerations

    NASA Technical Reports Server (NTRS)

    Alexander, J. Iwan D.; Ouazzani, Jalil; Rosenberger, Franz

    1989-01-01

    The effects of steady and impulse-type residual accelerations on dopant distributions during directional solidification in 2D and 3D 'generic' models of the Bridgman-Stockbarger technique are investigated using numerical methods. The calculations are based on the thermophysical properties of molten germanium doped with a low concentration of gallium. A Chebyshev collocation pseudospectral method is used for the solution of the governing momentum-, mass-, species-, and heat-transfer equations. Only convection caused by temperature gradients is considered. It is found that lateral nonuniformity in composition is very sensitive to the orientation of the steady component of the residual gravity vector and to the particular operating conditions under consideration. It is also found that laterally or radially averaged composition profiles are alone insufficient to describe the extent of residual convection in a spacecraft environment. The effects of impulse-type disturbances can be severe and can extend for times on the order of 1000 sec after the termination of the impulse.

  16. A fast platform for simulating semi-flexible fiber suspensions applied to cell mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazockdast, Ehssan, E-mail: ehssan@cims.nyu.edu; Center for Computational Biology, Simons Foundation, New York, NY 10010; Rahimian, Abtin, E-mail: arahimian@acm.org

    We present a novel platform for the large-scale simulation of three-dimensional fibrous structures immersed in a Stokesian fluid and evolving under confinement or in free-space in three dimensions. One of the main motivations for this work is to study the dynamics of fiber assemblies within biological cells. For this, we also incorporate the key biophysical elements that determine the dynamics of these assemblies, which include the polymerization and depolymerization kinetics of fibers, their interactions with molecular motors and other objects, their flexibility, and hydrodynamic coupling. This work, to our knowledge, is the first technique to include many-body hydrodynamic interactions (HIs), and the resulting fluid flows, in cellular assemblies of flexible fibers. We use non-local slender body theory to compute the fluid–structure interactions of the fibers and a second-kind boundary integral formulation for other rigid bodies and the confining boundary. A kernel-independent implementation of the fast multipole method is utilized for efficient evaluation of HIs. The deformation of the fibers is described by nonlinear Euler–Bernoulli beam theory and their polymerization is modeled by the reparametrization of the dynamic equations in the appropriate non-Lagrangian frame. We use a pseudo-spectral representation of fiber positions and implicit time-stepping to resolve large fiber deformations, and to allow time-steps not excessively constrained by temporal stiffness or fiber–fiber interactions. The entire computational scheme is parallelized, which enables simulating assemblies of thousands of fibers. We use our method to investigate two important questions in the mechanics of cell division: (i) the effect of confinement on the hydrodynamic mobility of microtubule asters; and (ii) the dynamics of the positioning of mitotic spindle in complex cell geometries. Finally to demonstrate the general applicability of the method, we simulate the sedimentation of a cloud of semi-flexible fibers.

  17. A fast platform for simulating semi-flexible fiber suspensions applied to cell mechanics

    NASA Astrophysics Data System (ADS)

    Nazockdast, Ehssan; Rahimian, Abtin; Zorin, Denis; Shelley, Michael

    2017-01-01

    We present a novel platform for the large-scale simulation of three-dimensional fibrous structures immersed in a Stokesian fluid and evolving under confinement or in free-space in three dimensions. One of the main motivations for this work is to study the dynamics of fiber assemblies within biological cells. For this, we also incorporate the key biophysical elements that determine the dynamics of these assemblies, which include the polymerization and depolymerization kinetics of fibers, their interactions with molecular motors and other objects, their flexibility, and hydrodynamic coupling. This work, to our knowledge, is the first technique to include many-body hydrodynamic interactions (HIs), and the resulting fluid flows, in cellular assemblies of flexible fibers. We use non-local slender body theory to compute the fluid-structure interactions of the fibers and a second-kind boundary integral formulation for other rigid bodies and the confining boundary. A kernel-independent implementation of the fast multipole method is utilized for efficient evaluation of HIs. The deformation of the fibers is described by nonlinear Euler-Bernoulli beam theory and their polymerization is modeled by the reparametrization of the dynamic equations in the appropriate non-Lagrangian frame. We use a pseudo-spectral representation of fiber positions and implicit time-stepping to resolve large fiber deformations, and to allow time-steps not excessively constrained by temporal stiffness or fiber-fiber interactions. The entire computational scheme is parallelized, which enables simulating assemblies of thousands of fibers. We use our method to investigate two important questions in the mechanics of cell division: (i) the effect of confinement on the hydrodynamic mobility of microtubule asters; and (ii) the dynamics of the positioning of mitotic spindle in complex cell geometries. Finally to demonstrate the general applicability of the method, we simulate the sedimentation of a cloud of semi-flexible fibers.

  18. Investigation of electric charge on inertial particle dynamics in turbulence

    NASA Astrophysics Data System (ADS)

    Lu, Jiang; Shaw, Raymond

    2014-11-01

    The behavior of electrically charged, inertial particles in homogeneous, isotropic turbulence is investigated. Both like-charged and oppositely-charged particle interactions are considered. Direct numerical simulations (DNS) of turbulence in a periodic box using the pseudospectral numerical method are performed, with Lagrangian tracking of the particles. We study effects of mutual electrostatic repulsion and attraction on the particle dynamics, as quantified by the radial distribution function (RDF) and the radial relative velocity. For the like-charged particle case, the Coulomb force leads to a short range repulsion behavior and an RDF reminiscent of that for a dilute gas. For the oppositely-charged particle case, the Coulomb force increases the RDF beyond that already occurring for neutral inertial particles. For both cases, the relative velocities are calculated as a function of particle separation distance and show distinct deviations from the expected scaling within the dissipation range. This research was supported by NASA Grant NNX113AF90G.
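
    The radial distribution function used above to quantify clustering can be estimated from particle positions in a periodic box as in the following sketch (a generic, hypothetical post-processing routine, not the DNS code of the study):

```python
import numpy as np

def radial_distribution(pos, box, nbins=50, rmax=None):
    """g(r) for N particles in a cubic periodic box of side `box`."""
    n = len(pos)
    rmax = rmax or box / 2.0
    # All pairwise separations with the minimum-image convention
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=nbins, range=(0.0, rmax))
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = shell_vol * n * (n - 1) / 2.0 / box ** 3     # expected pair counts
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal

# Uniformly random (uncorrelated) particles should give g(r) close to 1:
rng = np.random.default_rng(1)
r, g = radial_distribution(rng.uniform(0.0, 1.0, (500, 3)), box=1.0)
print(g[10:15])
```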

  19. A validated computational model for the design of surface textures in full-film lubricated sliding

    NASA Astrophysics Data System (ADS)

    Schuh, Jonathon; Lee, Yong Hoon; Allison, James; Ewoldt, Randy

    2016-11-01

    Our recent experimental work showed that asymmetry is needed for surface textures to decrease friction in full-film lubricated sliding (thrust bearings) with Newtonian fluids; textures reduce the shear load and produce a separating normal force. The sign of the separating normal force is not predicted by previous 1-D theories. Here we model the flow with the Reynolds equation in cylindrical coordinates, numerically implemented with a pseudo-spectral method. The model predictions match experiments, rationalize the sign of the normal force, and allow for design of surface texture geometry. To minimize sliding friction with angled cylindrical textures, an optimal angle of asymmetry β exists. The optimal angle depends on the film thickness but not the sliding velocity within the applicable range of the model. The model has also been used to optimize generalized surface texture topography while satisfying manufacturability constraints.

  20. Experimental and numerical investigation of development of disturbances in the boundary layer on sharp and blunted cone

    NASA Astrophysics Data System (ADS)

    Borisov, S. P.; Bountin, D. A.; Gromyko, Yu. V.; Khotyanovsky, D. V.; Kudryavtsev, A. N.

    2016-10-01

    Development of disturbances in the supersonic boundary layer on sharp and blunted cones is studied both experimentally and theoretically. The experiments were conducted at the Transit-M hypersonic wind tunnel of the Institute of Theoretical and Applied Mechanics. Linear stability calculations use the basic flow profiles provided by the numerical simulations performed by solving the Navier-Stokes equations with the ANSYS Fluent and the in-house CFS3D code. Both the global pseudospectral Chebyshev method and the local iteration procedure are employed to solve the eigenvalue problem and determine linear stability characteristics. The calculated amplification factors for disturbances of various frequencies are compared with the experimentally measured pressure fluctuation spectra at different streamwise positions. It is shown that the linear stability calculations predict quite accurately the frequency of the most amplified disturbances and enable us to estimate reasonably well their relative amplitudes.

  1. Impacts of Ocean Waves on the Atmospheric Surface Layer: Simulations and Observations

    DTIC Science & Technology

    2008-06-06

    energy and pressure described in § 4 are solved using a mixed finite-difference pseudospectral scheme with a third-order Runge-Kutta time stepping with a...to that in our DNS code (Sullivan and McWilliams 2002; Sullivan et al. 2000). For our mixed finite-difference pseudospectral differencing scheme a...Poisson equation. The spatial discretization is pseudospectral along lines of constant ... and second-order finite difference in the vertical

  2. Stochastic Real-Time Optimal Control: A Pseudospectral Approach for Bearing-Only Trajectory Optimization

    DTIC Science & Technology

    2011-09-01

    artificially creating enough baseline to enable triangulation. This motion comes at the expense of the primary mission, unless the entire purpose...control of a sUAS for surveillance and other missions. Completely autonomous UAS control for surveillance missions is still an on-the-horizon...work, xapp, was correspondingly set to 2-m. Since the test platform for the algorithm was a helicopter vice a fixed-wing UAS, an aggressive flare segment

  3. 3-D numerical simulations of earthquake ground motion in sedimentary basins: testing accuracy through stringent models

    NASA Astrophysics Data System (ADS)

    Chaljub, Emmanuel; Maufroy, Emeline; Moczo, Peter; Kristek, Jozef; Hollender, Fabrice; Bard, Pierre-Yves; Priolo, Enrico; Klin, Peter; de Martin, Florent; Zhang, Zhenguo; Zhang, Wei; Chen, Xiaofei

    2015-04-01

    Differences between 3-D numerical predictions of earthquake ground motion in the Mygdonian basin near Thessaloniki, Greece, led us to define four canonical stringent models derived from the complex realistic 3-D model of the Mygdonian basin. Sediments atop an elastic bedrock are modelled in the 1D-sharp and 1D-smooth models using three homogeneous layers and smooth velocity distribution, respectively. The 2D-sharp and 2D-smooth models are extensions of the 1-D models to an asymmetric sedimentary valley. In all cases, 3-D wavefields include strongly dispersive surface waves in the sediments. We compared simulations by the Fourier pseudo-spectral method (FPSM), the Legendre spectral-element method (SEM) and two formulations of the finite-difference method (FDM-S and FDM-C) up to 4 Hz. The accuracy of individual solutions and level of agreement between solutions vary with type of seismic waves and depend on the smoothness of the velocity model. The level of accuracy is high for the body waves in all solutions. However, it strongly depends on the discrete representation of the material interfaces (at which material parameters change discontinuously) for the surface waves in the sharp models. An improper discrete representation of the interfaces can cause inaccurate numerical modelling of surface waves. For all the numerical methods considered, except SEM with mesh of elements following the interfaces, a proper implementation of interfaces requires definition of an effective medium consistent with the interface boundary conditions. An orthorhombic effective medium is shown to significantly improve accuracy and preserve the computational efficiency of modelling. The conclusions drawn from the analysis of the results of the canonical cases greatly help to explain differences between numerical predictions of ground motion in realistic models of the Mygdonian basin. We recommend that any numerical method and code that is intended for numerical prediction of earthquake ground motion should be verified through stringent models that would make it possible to test the most important aspects of accuracy.

  4. Simulation analysis of the transparency of cornea and sclera

    NASA Astrophysics Data System (ADS)

    Yang, Chih-Yao; Tseng, Snow H.

    2017-02-01

    Although both consist of collagen fibrils, the sclera is opaque whereas the cornea is transparent at optical wavelengths. By employing the pseudospectral time-domain (PSTD) simulation technique, we model light impinging upon cornea and sclera, respectively. To analyze the scattering characteristics of light, the cornea and sclera are modeled by different sizes and arrangements of the non-absorbing collagen fibrils. Various factors are analyzed, including the wavelength of the incident light, the thickness of the scattering media, the position of the collagen fibrils, and the size distribution of the fibrils.

  5. Three-dimensional marginal separation

    NASA Technical Reports Server (NTRS)

    Duck, Peter W.

    1988-01-01

    The three dimensional marginal separation of a boundary layer along a line of symmetry is considered. The key equation governing the displacement function is derived, and found to be a nonlinear integral equation in two space variables. This is solved iteratively using a pseudo-spectral approach, based partly in double Fourier space, and partly in physical space. Qualitatively, the results are similar to previously reported two dimensional results (which are also computed to test the accuracy of the numerical scheme); however quantitatively the three dimensional results are much different.

  6. A Comparison of Spectral Element and Finite Difference Methods Using Statically Refined Nonconforming Grids for the MHD Island Coalescence Instability Problem

    NASA Astrophysics Data System (ADS)

    Ng, C. S.; Rosenberg, D.; Pouquet, A.; Germaschewski, K.; Bhattacharjee, A.

    2009-04-01

    A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys. 215, 59-80 (2006)] is applied to simulate the problem of MHD island coalescence instability in two dimensions. Island coalescence is a fundamental MHD process that can produce sharp current layers and subsequent reconnection and heating in a high-Lundquist-number plasma such as the solar corona [Ng and Bhattacharjee, Phys. Plasmas, 5, 4028 (1998)]. Due to the formation of thin current layers, it is highly desirable to use adaptively or statically refined grids to resolve them, and to maintain accuracy at the same time. The output of the spectral-element static adaptive refinement simulations is compared with simulations using a finite difference method on the same refinement grids, and both methods are compared to pseudo-spectral simulations with uniform grids as baselines. It is shown that with the statically refined grids roughly scaling linearly with effective resolution, spectral element runs can maintain accuracy significantly higher than that of the finite difference runs, in some cases achieving close to full spectral accuracy.

  7. Implicit LES using adaptive filtering

    NASA Astrophysics Data System (ADS)

    Sun, Guangrui; Domaradzki, Julian A.

    2018-04-01

    In implicit large eddy simulations (ILES) numerical dissipation prevents buildup of small scale energy in a manner similar to the explicit subgrid scale (SGS) models. If spectral methods are used the numerical dissipation is negligible but it can be introduced by applying a low-pass filter in the physical space, resulting in an effective ILES. In the present work we provide a comprehensive analysis of the numerical dissipation produced by different filtering operations in a turbulent channel flow simulated using a non-dissipative, pseudo-spectral Navier-Stokes solver. The amount of numerical dissipation imparted by filtering can be easily adjusted by changing how often a filter is applied. We show that when the additional numerical dissipation is close to the subgrid-scale (SGS) dissipation of an explicit LES the overall accuracy of ILES is also comparable, indicating that periodic filtering can replace explicit SGS models. A new method is proposed, which does not require any prior knowledge of a flow, to determine the filtering period adaptively. Once an optimal filtering period is found, the accuracy of ILES is significantly improved at low implementation complexity and computational cost. The method is general, performing well for different Reynolds numbers, grid resolutions, and filter shapes.
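
    The filtering operation at the heart of this ILES approach can be sketched in one dimension: a smooth low-pass filter acts on the Fourier coefficients every n_filt time steps, removing energy only near the grid cutoff. The fragment below is a hypothetical illustration (exponential filter, plain NumPy), not the adaptive procedure or the channel-flow solver of the paper:

```python
import numpy as np

def spectral_filter(u, p=36, alpha=36.0):
    """Apply an exponential low-pass filter to a 1-D periodic field."""
    N = u.size
    k = np.abs(np.fft.fftfreq(N) * N)      # integer wavenumbers
    eta = k / (N / 2.0)                    # 0 at the mean mode, 1 at the grid cutoff
    sigma = np.exp(-alpha * eta ** p)      # ~1 for low modes, ~0 near the cutoff
    return np.real(np.fft.ifft(sigma * np.fft.fft(u)))

# Inside a time loop, the filter would be applied only every n_filt steps,
# which is how the amount of numerical dissipation is adjusted:
n_filt = 10
u = np.random.rand(128)
for step in range(100):
    # ... advance u by one time step with the non-dissipative solver ...
    if (step + 1) % n_filt == 0:
        u = spectral_filter(u)
```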

  8. Assessing the capability of numerical methods to predict earthquake ground motion: the Euroseistest verification and validation project

    NASA Astrophysics Data System (ADS)

    Chaljub, E. O.; Bard, P.; Tsuno, S.; Kristek, J.; Moczo, P.; Franek, P.; Hollender, F.; Manakou, M.; Raptakis, D.; Pitilakis, K.

    2009-12-01

    During the last decades, an important effort has been dedicated to develop accurate and computationally efficient numerical methods to predict earthquake ground motion in heterogeneous 3D media. The progress in methods and increasing capability of computers have made it technically feasible to calculate realistic seismograms for frequencies of interest in seismic design applications. In order to foster the use of numerical simulation in practical prediction, it is important to (1) evaluate the accuracy of current numerical methods when applied to realistic 3D applications where no reference solution exists (verification) and (2) quantify the agreement between recorded and numerically simulated earthquake ground motion (validation). Here we report the results of the Euroseistest verification and validation project - an ongoing international collaborative work organized jointly by the Aristotle University of Thessaloniki, Greece, the Cashima research project (supported by the French nuclear agency, CEA, and the Laue-Langevin institute, ILL, Grenoble), and the Joseph Fourier University, Grenoble, France. The project involves more than 10 international teams from Europe, Japan and USA. The teams employ the Finite Difference Method (FDM), the Finite Element Method (FEM), the Global Pseudospectral Method (GPSM), the Spectral Element Method (SEM) and the Discrete Element Method (DEM). The project makes use of a new detailed 3D model of the Mygdonian basin (about 5 km wide, 15 km long, sediments reach about 400 m depth, surface S-wave velocity is 200 m/s). The prime target is to simulate 8 local earthquakes with magnitude from 3 to 5. In the verification, numerical predictions for frequencies up to 4 Hz for a series of models with increasing structural and rheological complexity are analyzed and compared using quantitative time-frequency goodness-of-fit criteria. Predictions obtained by one FDM team and the SEM team are close and different from other predictions (consistent with the ESG2006 exercise which targeted the Grenoble Valley). Diffractions off the basin edges and induced surface-wave propagation mainly contribute to differences between predictions. The differences are particularly large in the elastic models but remain important also in models with attenuation. In the validation, predictions are compared with the recordings by a local array of 19 surface and borehole accelerometers. The level of agreement is found event-dependent. For the largest-magnitude event the agreement is surprisingly good even at high frequencies.

  9. Discrete variable representation in electronic structure theory: quadrature grids for least-squares tensor hypercontraction.

    PubMed

    Parrish, Robert M; Hohenstein, Edward G; Martínez, Todd J; Sherrill, C David

    2013-05-21

    We investigate the application of molecular quadratures obtained from either standard Becke-type grids or discrete variable representation (DVR) techniques to the recently developed least-squares tensor hypercontraction (LS-THC) representation of the electron repulsion integral (ERI) tensor. LS-THC uses least-squares fitting to renormalize a two-sided pseudospectral decomposition of the ERI, over a physical-space quadrature grid. While this procedure is technically applicable with any choice of grid, the best efficiency is obtained when the quadrature is tuned to accurately reproduce the overlap metric for quadratic products of the primary orbital basis. Properly selected Becke DFT grids can roughly attain this property. Additionally, we provide algorithms for adopting the DVR techniques of the dynamics community to produce two different classes of grids which approximately attain this property. The simplest algorithm is radial discrete variable representation (R-DVR), which diagonalizes the finite auxiliary-basis representation of the radial coordinate for each atom, and then combines Lebedev-Laikov spherical quadratures and Becke atomic partitioning to produce the full molecular quadrature grid. The other algorithm is full discrete variable representation (F-DVR), which uses approximate simultaneous diagonalization of the finite auxiliary-basis representation of the full position operator to produce non-direct-product quadrature grids. The qualitative features of all three grid classes are discussed, and then the relative efficiencies of these grids are compared in the context of LS-THC-DF-MP2. Coarse Becke grids are found to give essentially the same accuracy and efficiency as R-DVR grids; however, the latter are built from explicit knowledge of the basis set and may guide future development of atom-centered grids. F-DVR is found to provide reasonable accuracy with markedly fewer points than either Becke or R-DVR schemes.
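
    A minimal sketch of the DVR idea underlying the R-DVR and F-DVR grids described above, under the assumption of a 1D harmonic-oscillator basis standing in for the auxiliary basis: diagonalizing the finite-basis representation of the position operator yields quadrature nodes. For this particular basis the nodes and Golub-Welsch weights coincide with the Gauss-Hermite rule, which makes the result easy to check.

        import numpy as np

        n_basis = 10
        # <n|x|m> in the (orthonormal) 1D harmonic-oscillator basis: tridiagonal,
        # with off-diagonal entries sqrt((n+1)/2)
        X = np.zeros((n_basis, n_basis))
        for n in range(n_basis - 1):
            X[n, n + 1] = X[n + 1, n] = np.sqrt((n + 1) / 2.0)

        nodes, U = np.linalg.eigh(X)                  # DVR grid points = eigenvalues of X
        weights = np.sqrt(np.pi) * U[0, :] ** 2       # Golub-Welsch weights for weight exp(-x^2)

        # cross-check against the standard Gauss-Hermite quadrature rule
        x_gh, w_gh = np.polynomial.hermite.hermgauss(n_basis)
        print(np.max(np.abs(nodes - x_gh)), np.max(np.abs(weights - w_gh)))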

  10. The Multigrid-Mask Numerical Method for Solution of Incompressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Ku, Hwar-Ching; Popel, Aleksander S.

    1996-01-01

    A multigrid-mask method for solution of incompressible Navier-Stokes equations in primitive variable form has been developed. The main objective is to apply this method in conjunction with the pseudospectral element method to solve flow past multiple objects. There are two key steps involved in calculating flow past multiple objects. The first step utilizes only Cartesian grid points. This homogeneous or mask method step permits flow into the interior rectangular elements contained in objects, but with the restriction that the velocity for those Cartesian elements within and on the surface of an object should be small or zero. This step easily produces an approximate flow field on Cartesian grid points covering the entire flow field. The second or heterogeneous step corrects the approximate flow field to account for the actual shape of the objects by solving the flow field based on the local coordinates surrounding each object and adapted to it. The noise occurring in data communication between the global (low frequency) coordinates and the local (high frequency) coordinates is eliminated by the multigrid method when the Schwarz Alternating Procedure (SAP) is implemented. Two-dimensional flow past circular and elliptic cylinders is presented to demonstrate the versatility of the proposed method. An interesting phenomenon is observed: when a second elliptic cylinder is placed in the wake of the first, a traction force results in a negative drag coefficient.

  11. Fully pseudospectral solution of the conformally invariant wave equation near the cylinder at spacelike infinity. III: nonspherical Schwarzschild waves and singularities at null infinity

    NASA Astrophysics Data System (ADS)

    Frauendiener, Jörg; Hennig, Jörg

    2018-03-01

    We extend earlier numerical and analytical considerations of the conformally invariant wave equation on a Schwarzschild background from the case of spherically symmetric solutions, discussed in Frauendiener and Hennig (2017 Class. Quantum Grav. 34 045005), to the case of general, nonsymmetric solutions. A key element of our approach is the modern standard representation of spacelike infinity as a cylinder. With a decomposition into spherical harmonics, we reduce the four-dimensional wave equation to a family of two-dimensional equations. These equations can be used to study the behaviour at the cylinder, where the solutions turn out to have, in general, logarithmic singularities at infinitely many orders. We derive regularity conditions that may be imposed on the initial data, in order to avoid the first singular terms. We then demonstrate that the fully pseudospectral time evolution scheme can be applied to this problem leading to a highly accurate numerical reconstruction of the nonsymmetric solutions. We are particularly interested in the behaviour of the solutions at future null infinity, and we numerically show that the singularities spread to null infinity from the critical set, where the cylinder approaches null infinity. The observed numerical behaviour is consistent with similar logarithmic singularities found analytically on the critical set. Finally, we demonstrate that even solutions with singularities at low orders can be obtained with high accuracy by virtue of a coordinate transformation that converts solutions with logarithmic singularities into smooth solutions.

  12. seismo-live: Training in Computational Seismology using Jupyter Notebooks

    NASA Astrophysics Data System (ADS)

    Igel, H.; Krischer, L.; van Driel, M.; Tape, C.

    2016-12-01

    Practical training in computational methodologies is still underrepresented in Earth science curricula despite the increasing use of sometimes highly sophisticated simulation technologies in research projects. At the same time, well-engineered community codes make it easy to produce simulation-based results, yet with the danger that the inherent traps of numerical solutions are not well understood. It is our belief that training with highly simplified numerical solutions (here to the equations describing elastic wave propagation) with carefully chosen elementary ingredients of simulation technologies (e.g., finite-differencing, function interpolation, spectral derivatives, numerical integration) could substantially improve this situation. For this purpose we have initiated a community platform (www.seismo-live.org) where Python-based Jupyter notebooks can be accessed and run without any downloads or local software installations. The increasingly popular Jupyter notebooks allow combining markup language, graphics, and equations with interactive, executable Python code. We demonstrate the potential with training notebooks for the finite-difference method, pseudospectral methods, finite/spectral element methods, the finite-volume and the discontinuous Galerkin method. The platform already includes general Python training, an introduction to the ObsPy library for seismology, as well as seismic data processing and noise analysis. Submission of Jupyter notebooks for general seismology is encouraged. The platform can be used for complementary teaching in Earth Science courses on compute-intensive research areas.
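
    A minimal, notebook-style sketch of one of the elementary ingredients listed above, a Fourier pseudospectral (spectral) derivative, of the kind such a training notebook might contain; the test function and parameters are illustrative and not taken from the seismo-live materials.

        import numpy as np

        N, L = 64, 2 * np.pi
        x = np.arange(N) * L / N
        f = np.exp(np.sin(x))                                # smooth periodic test function
        k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi           # angular wavenumbers
        df_spectral = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
        df_exact = np.cos(x) * f
        print("max error:", np.max(np.abs(df_spectral - df_exact)))   # decays spectrally with N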

  13. Studies in nonlinear problems of energy. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matkowsky, B.J.

    1998-12-01

    The author completed a successful research program on Nonlinear Problems of Energy, with emphasis on combustion and flame propagation. A total of 183 papers associated with the grant have appeared in the literature, and the efforts have twice been recognized by DOE's Basic Science Division for Top Accomplishment. In the research program the author concentrated on modeling, analysis and computation of combustion phenomena, with particular emphasis on the transition from laminar to turbulent combustion. Thus he investigated the nonlinear dynamics and pattern formation in the successive stages of transition. He described the stability of combustion waves, and transitions to waves exhibiting progressively higher degrees of spatio-temporal complexity. Combustion waves are characterized by large activation energies, so that chemical reactions are significant only in thin layers, termed reaction zones. In the limit of infinite activation energy, the zones shrink to moving surfaces, termed fronts, which must be found during the course of the analysis, so that the problems are moving free boundary problems. The analytical studies were carried out for the limiting case with fronts, while the numerical studies were carried out for the case of finite, though large, activation energy. Accurate resolution of the solution in the reaction zone(s) is essential, otherwise false predictions of dynamical behavior are possible. Since the reaction zones move, and their location is not known a priori, the author has developed adaptive pseudo-spectral methods, which have proven to be very useful for the accurate, efficient computation of solutions of combustion, and other, problems. The approach is based on a combination of analytical and numerical methods. The numerical computations built on and extended the information obtained analytically. Furthermore, the solutions obtained analytically served as benchmarks for testing the accuracy of the solutions determined computationally. Finally, the computational results suggested new analysis to be considered. A cumulative list of publications citing the grant makes up the contents of this report.

  14. Exact exchange potential evaluated from occupied Kohn-Sham and Hartree-Fock solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinal, M.; Holas, A.

    2011-06-15

    The reported algorithm determines the exact exchange potential v_x in an iterative way using energy shifts (ESs) and orbital shifts (OSs) obtained with finite-difference formulas from the solutions (occupied orbitals and their energies) of the Hartree-Fock-like equation and the Kohn-Sham-like equation, the former used for the initial approximation to v_x and the latter for increments of ES and OS due to subsequent changes of v_x. Thus, the need for solution of the differential equations for OSs, used by Kuemmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)], is bypassed. The iterated exchange potential, expressed in terms of ESs and OSs, is improved by modifying ESs at odd iteration steps and OSs at even steps. The modification formulas are related to the optimized-effective-potential equation (satisfied at convergence) written as the condition of vanishing density shift (DS). They are obtained, respectively, by enforcing its satisfaction through corrections to approximate OSs and by determining the optimal ESs that minimize the DS norm. The proposed method, successfully tested for several closed-(sub)shell atoms, from Be to Kr, within the density functional theory exchange-only approximation, proves highly efficient. The calculations using the pseudospectral method for representing orbitals give iterative sequences of approximate exchange potentials (starting with the Krieger-Li-Iafrate approximation) that rapidly approach the exact v_x so that, for Ne, Ar, and Zn, the corresponding DS norm becomes less than 10^-6 after 13, 13, and 9 iteration steps for a given electron density. In self-consistent density calculations, orbital energies of 10^-4 hartree accuracy are obtained for these atoms after, respectively, 9, 12, and 12 density iteration steps, each involving just two steps of v_x iteration, while the accuracy limit of 10^-6 to 10^-7 hartree is reached after 20 density iterations.

  15. Exact exchange potential evaluated from occupied Kohn-Sham and Hartree-Fock solutions

    NASA Astrophysics Data System (ADS)

    Cinal, M.; Holas, A.

    2011-06-01

    The reported algorithm determines the exact exchange potential v_x in an iterative way using energy shifts (ESs) and orbital shifts (OSs) obtained with finite-difference formulas from the solutions (occupied orbitals and their energies) of the Hartree-Fock-like equation and the Kohn-Sham-like equation, the former used for the initial approximation to v_x and the latter for increments of ES and OS due to subsequent changes of v_x. Thus, the need for solution of the differential equations for OSs, used by Kümmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)], is bypassed. The iterated exchange potential, expressed in terms of ESs and OSs, is improved by modifying ESs at odd iteration steps and OSs at even steps. The modification formulas are related to the optimized-effective-potential equation (satisfied at convergence) written as the condition of vanishing density shift (DS). They are obtained, respectively, by enforcing its satisfaction through corrections to approximate OSs and by determining the optimal ESs that minimize the DS norm. The proposed method, successfully tested for several closed-(sub)shell atoms, from Be to Kr, within the density functional theory exchange-only approximation, proves highly efficient. The calculations using the pseudospectral method for representing orbitals give iterative sequences of approximate exchange potentials (starting with the Krieger-Li-Iafrate approximation) that rapidly approach the exact v_x so that, for Ne, Ar, and Zn, the corresponding DS norm becomes less than 10^-6 after 13, 13, and 9 iteration steps for a given electron density. In self-consistent density calculations, orbital energies of 10^-4 hartree accuracy are obtained for these atoms after, respectively, 9, 12, and 12 density iteration steps, each involving just two steps of v_x iteration, while the accuracy limit of 10^-6 to 10^-7 hartree is reached after 20 density iterations.

  16. Hybrid parallelization of the XTOR-2F code for the simulation of two-fluid MHD instabilities in tokamaks

    NASA Astrophysics Data System (ADS)

    Marx, Alain; Lütjens, Hinrich

    2017-03-01

    A hybrid MPI/OpenMP parallel version of the XTOR-2F code [Lütjens and Luciani, J. Comput. Phys. 229 (2010) 8130] solving the two-fluid MHD equations in full tokamak geometry by means of an iterative Newton-Krylov matrix-free method has been developed. The present work shows that the code has been parallelized significantly despite the numerical profile of the problem solved by XTOR-2F, i.e. a discretization with pseudo-spectral representations in all angular directions, the stiffness of the two-fluid stability problem in tokamaks, and the use of a direct LU decomposition to invert the physical pre-conditioner at every Krylov iteration of the solver. The execution time of the parallelized version is an order of magnitude smaller than the sequential one for low-resolution cases, with an increasing speedup when the discretization mesh is refined. Moreover, it allows simulations to be performed at higher resolutions, which were previously out of reach because of memory limitations.

  17. A numerical and experimental study on the nonlinear evolution of long-crested irregular waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goullet, Arnaud; Choi, Wooyoung; Division of Ocean Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 305-701

    2011-01-15

    The spatial evolution of nonlinear long-crested irregular waves characterized by the JONSWAP spectrum is studied numerically using a nonlinear wave model based on a pseudospectral (PS) method and the modified nonlinear Schrödinger (MNLS) equation. In addition, new laboratory experiments with two different spectral bandwidths are carried out and a number of wave probe measurements are made to validate these two wave models. Strongly nonlinear wave groups are observed experimentally and their propagation and interaction are studied in detail. For the comparison with experimental measurements, the two models need to be initialized with care and the initialization procedures are described. The MNLS equation is found to approximate the wave fields with a relatively small Benjamin-Feir index reasonably well, but the phase error increases as the propagation distance increases. The PS model with different orders of nonlinear approximation is solved numerically, and it is shown that the fifth-order model agrees well with our measurements prior to wave breaking for both spectral bandwidths.

  18. Semi-discrete approximations to nonlinear systems of conservation laws; consistency and L(infinity)-stability imply convergence

    NASA Technical Reports Server (NTRS)

    Tadmor, Eitan

    1988-01-01

    A convergence theory for semi-discrete approximations to nonlinear systems of conservation laws is developed. It is shown, by a series of scalar counter-examples, that consistency with the conservation law alone does not guarantee convergence. Instead, a notion of consistency which takes into account both the conservation law and its augmenting entropy condition is introduced. In this context it is concluded that consistency and L(infinity)-stability guarantee, for a relevant class of admissible entropy functions, that their entropy production rate belongs to a compact subset of H^-1_loc(x,t). One can now use compensated compactness arguments in order to turn this conclusion into a convergence proof. The current state of the art for these arguments includes the scalar and a wide class of 2 x 2 systems of conservation laws. The general framework of the vanishing viscosity method is studied as an effective way to meet the consistency and L(infinity)-stability requirements. How this method is utilized to enforce consistency and stability for scalar conservation laws is shown. In this context we prove, under the appropriate assumptions, the convergence of finite difference approximations (e.g., the high resolution TVD and UNO methods), finite element approximations (e.g., the Streamline-Diffusion methods) and spectral and pseudospectral approximations (e.g., the Spectral Viscosity methods).

  19. Semi-discrete approximations to nonlinear systems of conservation laws; consistency and L(infinity)-stability imply convergence. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tadmor, E.

    1988-07-01

    A convergence theory for semi-discrete approximations to nonlinear systems of conservation laws is developed. It is shown, by a series of scalar counter-examples, that consistency with the conservation law alone does not guarantee convergence. Instead, a notion of consistency which takes into account both the conservation law and its augmenting entropy condition is introduced. In this context it is concluded that consistency and L(infinity)-stability guarantee, for a relevant class of admissible entropy functions, that their entropy production rate belongs to a compact subset of H^-1_loc(x,t). One can now use compensated compactness arguments in order to turn this conclusion into a convergence proof. The current state of the art for these arguments includes the scalar and a wide class of 2 x 2 systems of conservation laws. The general framework of the vanishing viscosity method is studied as an effective way to meet the consistency and L(infinity)-stability requirements. How this method is utilized to enforce consistency and stability for scalar conservation laws is shown. In this context we prove, under the appropriate assumptions, the convergence of finite difference approximations (e.g., the high resolution TVD and UNO methods), finite element approximations (e.g., the Streamline-Diffusion methods) and spectral and pseudospectral approximations (e.g., the Spectral Viscosity methods).

  20. Simulation of nonlinear propagation of biomedical ultrasound using pzflex and the Khokhlov-Zabolotskaya-Kuznetsov Texas code

    PubMed Central

    Qiao, Shan; Jackson, Edward; Coussios, Constantin C.; Cleveland, Robin O.

    2016-01-01

    Nonlinear acoustics plays an important role in both diagnostic and therapeutic applications of biomedical ultrasound and a number of research and commercial software packages are available. In this manuscript, predictions of two solvers available in a commercial software package, pzflex, one using the finite-element-method (FEM) and the other a pseudo-spectral method, spectralflex, are compared with measurements and the Khokhlov-Zabolotskaya-Kuznetsov (KZK) Texas code (a finite-difference time-domain algorithm). The pzflex methods solve the continuity equation, momentum equation and equation of state where they account for nonlinearity to second order whereas the KZK code solves a nonlinear wave equation with a paraxial approximation for diffraction. Measurements of the field from a single element 3.3 MHz focused transducer were compared with the simulations and there was good agreement for the fundamental frequency and the harmonics; however the FEM pzflex solver incurred a high computational cost to achieve equivalent accuracy. In addition, pzflex results exhibited non-physical oscillations in the spatial distribution of harmonics when the amplitudes were relatively low. It was found that spectralflex was able to accurately capture the nonlinear fields at reasonable computational cost. These results emphasize the need to benchmark nonlinear simulations before using codes as predictive tools. PMID:27914432

  1. A pseudospectra-based approach to non-normal stability of embedded boundary methods

    NASA Astrophysics Data System (ADS)

    Rapaka, Narsimha; Samtaney, Ravi

    2017-11-01

    We present a non-normal linear stability analysis of embedded boundary (EB) methods employing pseudospectra and resolvent norms. Stability of the discrete linear wave equation is characterized in terms of the normalized distance of the EB to the nearest ghost node (α) in one and two dimensions. An important objective is that the CFL condition based on the Cartesian grid spacing remains unaffected by the EB. We consider various discretization methods including both central and upwind-biased schemes. Stability is guaranteed when α ≤ α_max, where α_max ranges between 0.5 and 0.77 depending on the discretization scheme. Also, the stability characteristics remain the same in both one and two dimensions. Sharper limits on the sufficient conditions for stability are obtained based on the pseudospectral radius (the Kreiss constant) than the restrictive limits based on the usual singular value decomposition analysis. We present a simple and robust reclassification scheme for the ghost cells (``hybrid ghost cells'') to ensure Lax stability of the discrete systems. This has been tested successfully for both low- and high-order discretization schemes with transient growth of at most O(1). Moreover, we present a stable, fourth-order EB reconstruction scheme. Supported by the KAUST Office of Competitive Research Funds under Award No. URF/1/1394-01.
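
    The basic diagnostic referred to above can be sketched for a generic discrete operator, not the embedded boundary discretizations of the study: the ε-pseudospectra of a matrix A are the level sets of the smallest singular value of (zI - A), and the resolvent norm is its reciprocal. The first-order upwind advection matrix below is purely illustrative.

        import numpy as np

        N, dx, c = 50, 1.0 / 50, 1.0
        # first-order upwind discretization of u_t + c u_x = 0 (illustrative operator A)
        A = (c / dx) * (np.diag(np.ones(N - 1), -1) - np.eye(N))

        re = np.linspace(-2.5 * c / dx, 0.5 * c / dx, 100)
        im = np.linspace(-1.5 * c / dx, 1.5 * c / dx, 100)
        sigma_min = np.empty((len(im), len(re)))
        for i, y in enumerate(im):
            for j, xr in enumerate(re):
                z = xr + 1j * y
                # smallest singular value of (zI - A); its contour at level eps bounds the
                # eps-pseudospectrum, and 1/sigma_min is the resolvent norm ||(zI - A)^-1||
                sigma_min[i, j] = np.linalg.svd(z * np.eye(N) - A, compute_uv=False)[-1]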

  2. Simulation of nonlinear propagation of biomedical ultrasound using pzflex and the Khokhlov-Zabolotskaya-Kuznetsov Texas code.

    PubMed

    Qiao, Shan; Jackson, Edward; Coussios, Constantin C; Cleveland, Robin O

    2016-09-01

    Nonlinear acoustics plays an important role in both diagnostic and therapeutic applications of biomedical ultrasound and a number of research and commercial software packages are available. In this manuscript, predictions of two solvers available in a commercial software package, pzflex, one using the finite-element-method (FEM) and the other a pseudo-spectral method, spectralflex, are compared with measurements and the Khokhlov-Zabolotskaya-Kuznetsov (KZK) Texas code (a finite-difference time-domain algorithm). The pzflex methods solve the continuity equation, momentum equation and equation of state where they account for nonlinearity to second order whereas the KZK code solves a nonlinear wave equation with a paraxial approximation for diffraction. Measurements of the field from a single element 3.3 MHz focused transducer were compared with the simulations and there was good agreement for the fundamental frequency and the harmonics; however the FEM pzflex solver incurred a high computational cost to achieve equivalent accuracy. In addition, pzflex results exhibited non-physical oscillations in the spatial distribution of harmonics when the amplitudes were relatively low. It was found that spectralflex was able to accurately capture the nonlinear fields at reasonable computational cost. These results emphasize the need to benchmark nonlinear simulations before using codes as predictive tools.

  3. Spectral algorithms for multiple scale localized eigenfunctions in infinitely long, slightly bent quantum waveguides

    NASA Astrophysics Data System (ADS)

    Boyd, John P.; Amore, Paolo; Fernández, Francisco M.

    2018-03-01

    A "bent waveguide" in the sense used here is a small perturbation of a two-dimensional rectangular strip which is infinitely long in the down-channel direction and has a finite, constant width in the cross-channel coordinate. The goal is to calculate the smallest ("ground state") eigenvalue of the stationary Schrödinger equation which here is a two-dimensional Helmholtz equation, ψxx +ψyy + Eψ = 0 where E is the eigenvalue and homogeneous Dirichlet boundary conditions are imposed on the walls of the waveguide. Perturbation theory gives a good description when the "bending strength" parameter ɛ is small as described in our previous article (Amore et al., 2017) and other works cited therein. However, such series are asymptotic, and it is often impractical to calculate more than a handful of terms. It is therefore useful to develop numerical methods for the perturbed strip to cover intermediate ɛ where the perturbation series may be inaccurate and also to check the pertubation expansion when ɛ is small. The perturbation-induced change-in-eigenvalue, δ ≡ E(ɛ) - E(0) , is O(ɛ2) . We show that the computation becomes very challenging as ɛ → 0 because (i) the ground state eigenfunction varies on both O(1) and O(1 / ɛ) length scales and (ii) high accuracy is needed to compute several correct digits in δ, which is itself small compared to the eigenvalue E. The multiple length scales are not geographically separate, but rather are inextricably commingled in the neighborhood of the boundary deformation. We show that coordinate mapping and immersed boundary strategies both reduce the computational domain to the uniform strip, allowing application of pseudospectral methods on tensor product grids with tensor product basis functions. We compared different basis sets; Chebyshev polynomials are best in the cross-channel direction. However, sine functions generate rather accurate analytical approximations with just a single basis function. In the down-channel coordinate, X ∈ [ - ∞ , ∞ ] , Fourier domain truncation using the change of coordinate X = sinh(Lt) is considerably more efficient than rational Chebyshev functions TBn(X ; L) . All the spectral methods, however, yielded the required accuracy on a desktop computer.

  4. Viriato: a Fourier-Hermite spectral code for strongly magnetised fluid-kinetic plasma dynamics

    NASA Astrophysics Data System (ADS)

    Loureiro, Nuno; Dorland, William; Fazendeiro, Luis; Kanekar, Anjor; Mallet, Alfred; Zocco, Alessandro

    2015-11-01

    We report on the algorithms and numerical methods used in Viriato, a novel fluid-kinetic code that solves two distinct sets of equations: (i) the Kinetic Reduced Electron Heating Model equations [Zocco & Schekochihin, 2011] and (ii) the kinetic reduced MHD (KRMHD) equations [Schekochihin et al., 2009]. Two main applications of these equations are magnetised (Alfvénic) plasma turbulence and magnetic reconnection. Viriato uses operator splitting to separate the dynamics parallel and perpendicular to the ambient magnetic field (assumed strong). Along the magnetic field, Viriato allows for either a second-order accurate MacCormack method or, for higher accuracy, a spectral-like scheme. Perpendicular to the field Viriato is pseudo-spectral, and the time integration is performed by means of an iterative predictor-corrector scheme. In addition, a distinctive feature of Viriato is its spectral representation of the parallel velocity-space dependence, achieved by means of a Hermite representation of the perturbed distribution function. A series of linear and nonlinear benchmarks and tests are presented, with focus on 3D decaying kinetic turbulence. Work partially supported by Fundação para a Ciência e Tecnologia via Grants UID/FIS/50010/2013 and IF/00530/2013.

  5. High-performance modeling of plasma-based acceleration and laser-plasma interactions

    NASA Astrophysics Data System (ADS)

    Vay, Jean-Luc; Blaclard, Guillaume; Godfrey, Brendan; Kirchen, Manuel; Lee, Patrick; Lehe, Remi; Lobet, Mathieu; Vincenti, Henri

    2016-10-01

    Large-scale numerical simulations are essential to the design of plasma-based accelerators and laser-plasma interactions for ultra-high intensity (UHI) physics. The electromagnetic Particle-In-Cell (PIC) approach is the method of choice for self-consistent simulations, as it is based on first principles, captures all kinetic effects, and scales favorably to many cores on supercomputers. The standard PIC algorithm relies on second-order finite-difference discretization of the Maxwell and Newton-Lorentz equations. We present here novel formulations, based on very high-order pseudo-spectral Maxwell solvers, which enable near-total elimination of the numerical Cherenkov instability and increased accuracy over the standard PIC method for standard laboratory frame and Lorentz boosted frame simulations. We also present the latest implementations in the PIC modules Warp-PICSAR and FBPIC on the Intel Xeon Phi and GPU architectures. Examples of applications will be given on the simulation of laser-plasma accelerators and high-harmonic generation with plasma mirrors. Work supported by US-DOE Contracts DE-AC02-05CH11231 and by the European Commission through the Marie Skłodowska-Curie fellowship PICSSAR Grant Number 624543. This work used resources of NERSC.

  6. A phase-field method to analyze the dynamics of immiscible fluids in porous media

    NASA Astrophysics Data System (ADS)

    de Paoli, Marco; Roccon, Alessio; Zonta, Francesco; Soldati, Alfredo

    2017-11-01

    Liquid carbon dioxide (CO2) injected into geological formations (filled with brine) is not completely soluble in the surrounding fluid. For this reason, complex transport phenomena may occur across the interface that separates the two phases (CO2+brine and brine). Inspired by this geophysical instance, we used a Phase-Field Method (PFM) to describe the dynamics of two immiscible fluids in saturated porous media. The basic idea of the PFM is to introduce an order parameter (ϕ) that varies continuously across the interfacial layer between the phases and is uniform in the bulk. The equation that describes the distribution of ϕ is the Cahn-Hilliard (CH) equation, which is coupled with the Darcy equation (to evaluate fluid velocity) through the buoyancy and Korteweg stress terms. The governing equations are solved through a pseudo-spectral technique (Fourier-Chebyshev). Our results show that the value of the surface tension between the two phases strongly influences the initial and the long-term dynamics of the system. We believe that the proposed numerical approach, which grants an accurate evaluation of the interfacial fluxes of momentum/energy/species, is attractive to describe the transfer mechanism and the overall dynamics of immiscible and partially miscible phases.
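
    A 1D, purely illustrative sketch of the pseudo-spectral ingredient described above (Fourier in a periodic direction only; the Chebyshev direction and the coupling to the Darcy equation are omitted): a semi-implicit Fourier step for the Cahn-Hilliard equation φ_t = M ∇²(φ³ - φ - ε²∇²φ), with the stiff fourth-order term treated implicitly. All parameters are assumptions for the sketch, not values from the paper.

        import numpy as np

        N, L = 256, 2 * np.pi
        x = np.arange(N) * L / N
        k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
        k2, k4 = k ** 2, k ** 4
        M, eps, dt = 1.0, 0.05, 1e-4

        rng = np.random.default_rng(0)
        phi = 0.01 * rng.standard_normal(N)        # small random perturbation of a mixed state

        for _ in range(2000):
            nonlin_hat = np.fft.fft(phi ** 3 - phi)
            # implicit treatment of the -M*eps^2*k^4 term, explicit nonlinearity
            phi_hat = (np.fft.fft(phi) - dt * M * k2 * nonlin_hat) / (1.0 + dt * M * eps ** 2 * k4)
            phi = np.real(np.fft.ifft(phi_hat))
        # phi now shows spinodal-decomposition-like domains of the two phases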

  7. Numerical simulation of wave-induced fluid flow seismic attenuation based on the Cole-Cole model.

    PubMed

    Picotti, Stefano; Carcione, José M

    2017-07-01

    The acoustic behavior of porous media can be simulated more realistically using a stress-strain relation based on the Cole-Cole model. In particular, seismic velocity dispersion and attenuation in porous rocks are well described by mesoscopic-loss models. Using the Zener model to simulate wave propagation is a rough approximation, while the Cole-Cole model provides an optimal description of the physics. Here, a time-domain algorithm is proposed based on the Grünwald-Letnikov numerical approximation of the fractional derivative involved in the time-domain representation of the Cole-Cole model, while the spatial derivatives are computed with the Fourier pseudospectral method. The numerical solution is successfully tested against an analytical solution. The methodology is applied to a model of a saline aquifer, where carbon dioxide (CO2) is injected. To follow the migration of the gas and detect possible leakages, seismic monitoring surveys should be carried out periodically. To this aim, the sensitivity of the seismic method must be carefully assessed for the specific case. The simulated test considers a possible leakage in the overburden, above the caprock, where the sandstone is partially saturated with gas and brine. The numerical examples illustrate the implementation of the theory.
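
    The Grünwald-Letnikov approximation mentioned above can be sketched on a scalar test function, assuming order α = 0.5 and f(t) = t², whose Riemann-Liouville derivative is 2 t^(2-α)/Γ(3-α); the weights follow the standard recurrence w_0 = 1, w_j = w_{j-1}(1 - (α+1)/j). The step size and test function are illustrative choices.

        import numpy as np
        from math import gamma

        alpha, dt, T = 0.5, 1e-3, 1.0
        t = np.arange(0.0, T + dt, dt)
        f = t ** 2

        # Grünwald-Letnikov weights via the standard recurrence
        w = np.ones_like(t)
        for j in range(1, len(t)):
            w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)

        n = len(t) - 1
        D_gl = np.sum(w[: n + 1] * f[n::-1]) / dt ** alpha    # fractional derivative at t = T
        D_exact = 2.0 * T ** (2 - alpha) / gamma(3 - alpha)
        print(D_gl, D_exact)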

  8. Application of adaptive gridding to magnetohydrodynamic flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnack, D.D.; Lotatti, I.; Satyanarayana, P.

    1996-12-31

    The numerical simulation of the primitive, three-dimensional, time-dependent, resistive MHD equations on an unstructured, adaptive poloidal mesh using the TRIM code has been reported previously. The toroidal coordinate is approximated pseudo-spectrally with finite Fourier series and Fast-Fourier Transforms. The finite-volume algorithm preserves the magnetic field as solenoidal to round-off error, and also conserves mass, energy, and magnetic flux exactly. A semi-implicit method is used to allow for large time steps on the unstructured mesh. This is important for tokamak calculations where the relevant time scale is determined by the poloidal Alfven time. This also allows the viscosity to be treated implicitly. A conjugate-gradient method with pre-conditioning is used for matrix inversion. Applications to the growth and saturation of ideal instabilities in several toroidal fusion systems have been demonstrated. Recently we have concentrated on the details of the mesh adaption algorithm used in TRIM. We present several two-dimensional results relating to the use of grid adaptivity to track the evolution of hydrodynamic and MHD structures. Examples of plasma guns, opening switches, and supersonic flow over a magnetized sphere are presented. Issues relating to mesh adaption criteria are discussed.

  9. Estimating the Earthquake Source Time Function by Markov Chain Monte Carlo Sampling

    NASA Astrophysics Data System (ADS)

    Dȩbski, Wojciech

    2008-07-01

    Many aspects of earthquake source dynamics like dynamic stress drop, rupture velocity and directivity, etc. are currently inferred from the source time functions obtained by a deconvolution of the propagation and recording effects from seismograms. The question of the accuracy of obtained results remains open. In this paper we address this issue by considering two aspects of the source time function deconvolution. First, we propose a new pseudo-spectral parameterization of the sought function which explicitly takes into account the physical constraints imposed on the sought functions. Such a parameterization automatically excludes non-physical solutions and so improves the stability and uniqueness of the deconvolution. Secondly, we demonstrate that the Bayesian approach to the inverse problem at hand, combined with an efficient Markov Chain Monte Carlo sampling technique, is a method which allows efficient estimation of the source time function uncertainties. The key point of the approach is the description of the solution of the inverse problem by the a posteriori probability density function constructed according to the Bayesian (probabilistic) theory. Next, the Markov Chain Monte Carlo sampling technique is used to sample this function so the statistical estimator of a posteriori errors can be easily obtained with minimal additional computational effort with respect to modern inversion (optimization) algorithms. The methodological considerations are illustrated by a case study of the mining-induced seismic event of magnitude ML ≈ 3.1 that occurred at the Rudna copper mine (Poland). The seismic P-wave records were inverted for the source time functions, using the proposed algorithm and the empirical Green function technique to approximate Green functions. The obtained solutions seem to suggest some complexity of the rupture process with double pulses of energy release. However, the error analysis shows that the hypothesis of source complexity is not justified at the 95% confidence level. On the basis of the analyzed event we also show that the separation of the source inversion into two steps introduces limitations on the completeness of the a posteriori error analysis.
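
    A toy sketch of the Bayesian sampling strategy described above, not of the actual seismic inversion: a source time function is parameterized by a few spectral coefficients, kept non-negative by squaring the expansion, and a Metropolis sampler draws from the a posteriori density of a synthetic deconvolution problem, so that pointwise error estimates follow from the sample spread. The forward model, basis, and noise level are all invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        nt, dt = 200, 0.01
        t = np.arange(nt) * dt
        green = np.exp(-((t - 0.3) / 0.05) ** 2)             # assumed known propagation pulse

        def stf(a):
            # non-negative source time function from spectral coefficients (squared expansion)
            basis = np.array([np.cos(np.pi * m * t / t[-1]) for m in range(len(a))])
            return (a @ basis) ** 2

        a_true = np.array([0.5, 0.3, -0.2])
        d_obs = np.convolve(stf(a_true), green)[:nt] * dt + 0.01 * rng.standard_normal(nt)

        sigma = 0.01
        def log_post(a):
            r = d_obs - np.convolve(stf(a), green)[:nt] * dt
            return -0.5 * np.sum((r / sigma) ** 2)           # Gaussian likelihood, flat prior

        a, lp, samples = np.zeros(3), None, []
        lp = log_post(a)
        for it in range(20000):
            a_prop = a + 0.02 * rng.standard_normal(3)
            lp_prop = log_post(a_prop)
            if np.log(rng.random()) < lp_prop - lp:          # Metropolis accept/reject
                a, lp = a_prop, lp_prop
            if it > 5000 and it % 10 == 0:
                samples.append(stf(a))
        stf_mean = np.mean(samples, axis=0)
        stf_std = np.std(samples, axis=0)                    # pointwise a posteriori error estimate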

  10. Self-consistent field theory simulations of polymers on arbitrary domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouaknin, Gaddiel, E-mail: gaddielouaknin@umail.ucsb.edu; Laachi, Nabil; Delaney, Kris

    2016-12-15

    We introduce a framework for simulating the mesoscale self-assembly of block copolymers in arbitrary confined geometries subject to Neumann boundary conditions. We employ a hybrid finite difference/volume approach to discretize the mean-field equations on an irregular domain represented implicitly by a level-set function. The numerical treatment of the Neumann boundary conditions is sharp, i.e. it avoids an artificial smearing in the irregular domain boundary. This strategy enables the study of self-assembly in confined domains and enables the computation of physically meaningful quantities at the domain interface. In addition, we employ adaptive grids encoded with Quad-/Oc-trees in parallel to automatically refine the grid where the statistical fields vary rapidly as well as at the boundary of the confined domain. This approach results in a significant reduction in the number of degrees of freedom and makes the simulations in arbitrary domains using effective boundary conditions computationally efficient in terms of both speed and memory requirement. Finally, in the case of regular periodic domains, where pseudo-spectral approaches are superior to finite differences in terms of CPU time and accuracy, we use the adaptive strategy to store chain propagators, reducing the memory footprint without loss of accuracy in computed physical observables.

  11. GPELab, a Matlab toolbox to solve Gross-Pitaevskii equations II: Dynamics and stochastic simulations

    NASA Astrophysics Data System (ADS)

    Antoine, Xavier; Duboscq, Romain

    2015-08-01

    GPELab is a free Matlab toolbox for modeling and numerically solving large classes of systems of Gross-Pitaevskii equations that arise in the physics of Bose-Einstein condensates. The aim of this second paper, which follows (Antoine and Duboscq, 2014), is to first present the various pseudospectral schemes available in GPELab for computing the deterministic and stochastic nonlinear dynamics of Gross-Pitaevskii equations (Antoine, et al., 2013). Next, the corresponding GPELab functions are explained in detail. Finally, some numerical examples are provided to show how the code works for the complex dynamics of BEC problems.
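
    One representative pseudospectral scheme of the kind GPELab implements (though not its code) is the Strang split-step Fourier method for the 1D Gross-Pitaevskii equation i ψ_t = -½ ψ_xx + V(x) ψ + g |ψ|² ψ; the harmonic trap, interaction strength, and initial state below are illustrative assumptions, not GPELab defaults.

        import numpy as np

        N, L = 512, 20.0
        x = np.linspace(-L / 2, L / 2, N, endpoint=False)
        k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
        V = 0.5 * x ** 2                          # harmonic trap (illustrative)
        g, dt = 1.0, 1e-3

        psi = np.exp(-x ** 2)
        psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))   # normalize to unit norm

        for _ in range(1000):
            psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))            # half potential/nonlinear step
            psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))  # full kinetic step in Fourier space
            psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))            # half potential/nonlinear step
        # each sub-step is unitary, so the wave-function norm is preserved to round-off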

  12. Numerical investigation of the entrainment and mixing processes in neutral and stably-stratified mixing layers

    NASA Astrophysics Data System (ADS)

    Cortesi, A. B.; Smith, B. L.; Yadigaroglu, G.; Banerjee, S.

    1999-01-01

    The direct numerical simulation (DNS) of a temporally-growing mixing layer has been carried out, for a variety of initial conditions at various Richardson and Prandtl numbers, by means of a pseudo-spectral technique; the main objective being to elucidate how the entrainment and mixing processes in mixing-layer turbulence are altered under the combined influence of stable stratification and thermal conductivity. Stratification is seen to significantly modify the way by which entrainment and mixing occur by introducing highly-localized, convective instabilities, which in turn cause a substantially different three-dimensionalization of the flow compared to the unstratified situation. Fluid which was able to cross the braid region mainly undisturbed (unmixed) in the unstratified case, pumped by the action of rib pairs and giving rise to well-formed mushroom structures, is not available with stratified flow. This is because of the large number of ribs which efficiently mix the fluid crossing the braid region. More efficient entrainment and mixing has been noticed for high Prandtl number computations, where vorticity is significantly reinforced by the baroclinic torque. In liquid sodium, however, for which the Prandtl number is very low, the generation of vorticity is very effectively suppressed by the large thermal conduction, since only small temperature gradients, and thus negligible baroclinic vorticity reinforcement, are then available to counterbalance the effects of buoyancy. This is then reflected in less efficient entrainment and mixing. The influence of the stratification and the thermal conductivity can also be clearly identified from the calculated entrainment coefficients and turbulent Prandtl numbers, which were seen to accurately match experimental data. The turbulent Prandtl number increases rapidly with increasing stratification in liquid sodium, whereas for air and water the stratification effect is less significant. A general law for the entrainment coefficient as a function of the Richardson and Prandtl numbers is proposed, and critically assessed against experimental data.

  13. Local dynamic subgrid-scale models in channel flow

    NASA Technical Reports Server (NTRS)

    Cabot, William H.

    1994-01-01

    The dynamic subgrid-scale (SGS) model has given good results in the large-eddy simulation (LES) of homogeneous isotropic or shear flow, and in the LES of channel flow, using averaging in two or three homogeneous directions (the DA model). In order to simulate flows in general, complex geometries (with few or no homogeneous directions), the dynamic SGS model needs to be applied at a local level in a numerically stable way. Channel flow, which is inhomogeneous and wall-bounded flow in only one direction, provides a good initial test for local SGS models. Tests of the dynamic localization model were performed previously in channel flow using a pseudospectral code and good results were obtained. Numerical instability due to persistently negative eddy viscosity was avoided by either constraining the eddy viscosity to be positive or by limiting the time that eddy viscosities could remain negative by co-evolving the SGS kinetic energy (the DLk model). The DLk model, however, was too expensive to run in the pseudospectral code due to a large near-wall term in the auxiliary SGS kinetic energy (k) equation. One objective was then to implement the DLk model in a second-order central finite difference channel code, in which the auxiliary k equation could be integrated implicitly in time at great reduction in cost, and to assess its performance in comparison with the plane-averaged dynamic model or with no model at all, and with direct numerical simulation (DNS) and/or experimental data. Other local dynamic SGS models have been proposed recently, e.g., constrained dynamic models with random backscatter, and with eddy viscosity terms that are averaged in time over material path lines rather than in space. Another objective was to incorporate and test these models in channel flow.

  14. Enhancing Cloud Radiative Processes and Radiation Efficiency in the Advanced Research Weather Research and Forecasting (WRF) Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iacono, Michael J.

    The objective of this research has been to evaluate and implement enhancements to the computational performance of the RRTMG radiative transfer option in the Advanced Research version of the Weather Research and Forecasting (WRF) model. Efficiency is as essential as accuracy for effective numerical weather prediction, and radiative transfer is a relatively time-consuming component of dynamical models, taking up to 30-50 percent of the total model simulation time. To address this concern, this research has implemented and tested a version of RRTMG that utilizes graphics processing unit (GPU) technology (hereinafter RRTMGPU) to greatly improve its computational performance, thereby permitting either more frequent simulation of radiative effects or other model enhancements. During the early stages of this project the development of RRTMGPU was completed at AER under separate NASA funding to accelerate the code for use in the Goddard Space Flight Center (GSFC) Goddard Earth Observing System GEOS-5 global model. It should be noted that this final report describes results related to the funded portion of the originally proposed work concerning the acceleration of RRTMG with GPUs in WRF. As a k-distribution model, RRTMG is especially well suited to this modification due to its relatively large internal pseudo-spectral (g-point) dimension that, when combined with the horizontal grid vector in the dynamical model, can take great advantage of the GPU capability. Thorough testing under several model configurations has been performed to ensure that RRTMGPU improves WRF model run time while having no significant impact on calculated radiative fluxes and heating rates or on dynamical model fields relative to the RRTMG radiation. The RRTMGPU codes have been provided to NCAR for possible application to the next public release of the WRF forecast model.

  15. Simulations of inspiraling and merging double neutron stars using the Spectral Einstein Code

    NASA Astrophysics Data System (ADS)

    Haas, Roland; Ott, Christian D.; Szilagyi, Bela; Kaplan, Jeffrey D.; Lippuner, Jonas; Scheel, Mark A.; Barkett, Kevin; Muhlberger, Curran D.; Dietrich, Tim; Duez, Matthew D.; Foucart, Francois; Pfeiffer, Harald P.; Kidder, Lawrence E.; Teukolsky, Saul A.

    2016-06-01

    We present results on the inspiral, merger, and postmerger evolution of a neutron star-neutron star (NSNS) system. Our results are obtained using the hybrid pseudospectral-finite volume Spectral Einstein Code (SpEC). To test our numerical methods, we evolve an equal-mass system for ≈22 orbits before merger. This waveform is the longest waveform obtained from fully general-relativistic simulations for NSNSs to date. Such long (and accurate) numerical waveforms are required to further improve semianalytical models used in gravitational wave data analysis, for example, the effective one body models. We discuss in detail the improvements to SpEC's ability to simulate NSNS mergers, in particular mesh refined grids to better resolve the merger and postmerger phases. We provide a set of consistency checks and compare our results to NSNS merger simulations with the independent bam code. We find agreement between them, which increases confidence in results obtained with either code. This work paves the way for future studies using long waveforms and more complex microphysical descriptions of neutron star matter in SpEC.

  16. Quantification of mixing in vesicle suspensions using numerical simulations in two dimensions.

    PubMed

    Kabacaoğlu, G; Quaife, B; Biros, G

    2017-02-01

    We study mixing in Stokesian vesicle suspensions in two dimensions on a cylindrical Couette apparatus using numerical simulations. The vesicle flow simulation is done using a boundary integral method, and the advection-diffusion equation for the mixing of the solute is solved using a pseudo-spectral scheme. We study the effect of the area fraction, the viscosity contrast between the inside (the vesicles) and the outside (the bulk) fluid, the initial condition of the solute, and the mixing metric. We compare mixing in the suspension with mixing in the Couette apparatus without vesicles. On the one hand, the presence of vesicles in most cases slightly suppresses mixing. This is because the solute can be only diffused across the vesicle interface and not advected. On the other hand, there exist spatial distributions of the solute for which the unperturbed Couette flow completely fails to mix whereas the presence of vesicles enables mixing. We derive a simple condition that relates the velocity and solute and can be used to characterize the cases in which the presence of vesicles promotes mixing.

  17. Quantification of mixing in vesicle suspensions using numerical simulations in two dimensions

    PubMed Central

    Quaife, B.; Biros, G.

    2017-01-01

    We study mixing in Stokesian vesicle suspensions in two dimensions on a cylindrical Couette apparatus using numerical simulations. The vesicle flow simulation is done using a boundary integral method, and the advection-diffusion equation for the mixing of the solute is solved using a pseudo-spectral scheme. We study the effect of the area fraction, the viscosity contrast between the inside (the vesicles) and the outside (the bulk) fluid, the initial condition of the solute, and the mixing metric. We compare mixing in the suspension with mixing in the Couette apparatus without vesicles. On the one hand, the presence of vesicles in most cases slightly suppresses mixing. This is because the solute can be only diffused across the vesicle interface and not advected. On the other hand, there exist spatial distributions of the solute for which the unperturbed Couette flow completely fails to mix whereas the presence of vesicles enables mixing. We derive a simple condition that relates the velocity and solute and can be used to characterize the cases in which the presence of vesicles promotes mixing. PMID:28344432

  18. Final Technical Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knio, Omar M.

    QUEST is a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, University of Southern California, Massachusetts Institute of Technology, University of Texas at Austin, and Duke University. The mission of QUEST is to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The Duke effort focused on the development of algorithms and utility software for non-intrusive sparse UQ representations, and on participation in the organization of annual workshops and tutorials to disseminate UQ tools to the community, and to gather input in order to adapt approaches to the needs of SciDAC customers. In particular, fundamental developments were made in (a) multiscale stochastic preconditioners, (b) gradient-based approaches to inverse problems, (c) adaptive pseudo-spectral approximations, (d) stochastic limit cycles, and (e) sensitivity analysis tools for noisy systems. In addition, large-scale demonstrations were performed, namely in the context of ocean general circulation models.

  19. The CFL condition for spectral approximations to hyperbolic initial-boundary value problems

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Tadmor, Eitan

    1991-01-01

    The stability of spectral approximations to scalar hyperbolic initial-boundary value problems with variable coefficients is studied. Time is discretized by explicit multi-level or Runge-Kutta methods of order less than or equal to 3 (forward Euler time differencing is included), and spatial discretizations are studied by spectral and pseudospectral approximations associated with the general family of Jacobi polynomials. It is proved that these fully explicit spectral approximations are stable provided their time-step, Δt, is restricted by the CFL-like condition Δt < Const · N^-2, where N equals the spatial number of degrees of freedom. We give two independent proofs of this result, depending on two different choices of approximate L^2-weighted norms. In both approaches, the proofs hinge on a certain inverse inequality interesting for its own sake. The result confirms the commonly held belief that the above CFL stability restriction, which is extensively used in practical implementations, guarantees the stability (and hence the convergence) of fully-explicit spectral approximations in the nonperiodic case.

  20. The CFL condition for spectral approximations to hyperbolic initial-boundary value problems

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Tadmor, Eitan

    1990-01-01

    The stability of spectral approximations to scalar hyperbolic initial-boundary value problems with variable coefficients is studied. Time is discretized by explicit multi-level or Runge-Kutta methods of order less than or equal to 3 (forward Euler time differencing is included), and spatial discretizations are studied by spectral and pseudospectral approximations associated with the general family of Jacobi polynomials. It is proved that these fully explicit spectral approximations are stable provided their time-step, Δt, is restricted by the CFL-like condition Δt < Const · N^-2, where N equals the spatial number of degrees of freedom. We give two independent proofs of this result, depending on two different choices of approximate L^2-weighted norms. In both approaches, the proofs hinge on a certain inverse inequality interesting for its own sake. The result confirms the commonly held belief that the above CFL stability restriction, which is extensively used in practical implementations, guarantees the stability (and hence the convergence) of fully-explicit spectral approximations in the nonperiodic case.

  1. Multielectron effects in the photoelectron momentum distribution of noble-gas atoms driven by visible-to-infrared-frequency laser pulses: A time-dependent density-functional-theory approach

    NASA Astrophysics Data System (ADS)

    Murakami, Mitsuko; Zhang, G. P.; Chu, Shih-I.

    2017-05-01

    We present the photoelectron momentum distributions (PMDs) of helium, neon, and argon atoms driven by a linearly polarized, visible (527-nm) or near-infrared (800-nm) laser pulse (20 optical cycles in duration) based on the time-dependent density-functional theory (TDDFT) under the local-density approximation with a self-interaction correction. A set of time-dependent Kohn-Sham equations for all electrons in an atom is numerically solved using the generalized pseudospectral method. The effect of the electron-electron interaction driven by a visible laser field is not discernible in the helium and neon PMDs beyond a reduction of the overall photoelectron yield, but there is a clear difference between the PMDs of an argon atom calculated with the frozen-core approximation and with TDDFT, indicating an interference of its M-shell wave functions during the ionization. Furthermore, we find that the PMDs of degenerate p states are well separated in intensity when driven by a near-infrared laser field, so that the single-active-electron approximation can be adopted safely.

  2. Time-optimal trajectory planning for underactuated spacecraft using a hybrid particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhuang, Yufei; Huang, Haibin

    2014-02-01

    A hybrid algorithm combining the particle swarm optimization (PSO) algorithm with the Legendre pseudospectral method (LPM) is proposed for solving the time-optimal trajectory planning problem of underactuated spacecraft. In the initial phase of the search, an initialization generator is constructed by the PSO algorithm owing to its strong global searching ability and robustness to random initial values; however, the PSO algorithm has the disadvantage that its convergence rate near the global optimum is slow. Then, when the change in the fitness function is smaller than a predefined value, the search is switched to the LPM to accelerate the process. Thus, with the solutions obtained by the PSO algorithm as a set of proper initial guesses, the hybrid algorithm can find a global optimum more quickly and accurately. Results from 200 Monte Carlo simulations demonstrate that the proposed hybrid PSO-LPM algorithm has greater advantages in terms of global searching capability and convergence rate than either the PSO algorithm or the LPM alone. Moreover, the PSO-LPM algorithm is also robust to random initial values.
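
    A schematic sketch of the switching logic described above: run PSO until the improvement of the best fitness falls below a tolerance, then hand the best particle to a fast local stage. Here a generic gradient-based optimizer (scipy.optimize.minimize) stands in for the Legendre pseudospectral transcription, which is problem-specific and not reproduced; the Rastrigin-type test function and all parameters are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def fitness(z):                       # illustrative multimodal test function
            return np.sum(z ** 2 - 10.0 * np.cos(2 * np.pi * z) + 10.0)

        rng = np.random.default_rng(1)
        n_particles, dim, tol = 30, 5, 1e-6
        pos = rng.uniform(-5, 5, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()
        gbest_val = pbest_val.min()

        for it in range(500):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos += vel
            vals = np.array([fitness(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            prev_best = gbest_val
            if pbest_val.min() < gbest_val:
                gbest_val, gbest = pbest_val.min(), pbest[pbest_val.argmin()].copy()
            if prev_best - gbest_val < tol and it > 50:   # switch criterion: fitness change below tolerance
                break

        refined = minimize(fitness, gbest)     # local refinement stage (stand-in for the LPM step)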

  3. The Effect of Strain Rate on the Evolution of Plane Wakes Subjected to Irrotational Strains

    NASA Technical Reports Server (NTRS)

    Rogers, Michael M.; Merriam, Marshal (Technical Monitor)

    1996-01-01

    Direct numerical simulations of time-evolving turbulent plane wakes developing in the presence of irrotational plane strain applied at three different strain rates have been generated. The strain geometry is such that the flow is compressed in the streamwise direction and expanded in the cross-stream direction with the spanwise direction being unstrained. This geometry is the temporally evolving analogue of a spatially evolving wake in an adverse pressure gradient. A pseudospectral numerical method with up to 16 million modes is used to solve the equations in a reference frame moving with the irrotational strain. The initial condition for each simulation is taken from a previous turbulent self-similar plane wake direct numerical simulation at a velocity deficit Reynolds number, Re, of about 2,000. Although the evolutions of many statistics are nearly collapsed when plotted against total strain, there are some differences owing to the different strain rate histories. The impact of strain-rate on the wake spreading rate, the peak velocity deficit, the Reynolds stress profiles, and the flow structure is examined.

  4. Numerical modelling of GPR electromagnetic fields for locating burial sites

    NASA Astrophysics Data System (ADS)

    Carcione, José M.; Karczewski, Jerzy; Mazurkiewicz, Ewelina; Tadeusiewicz, Ryszard; Tomecka-Suchoń, Sylwia

    2017-11-01

    Ground-penetrating radar (GPR) is commonly used for locating burial sites. In this article, we acquired radargrams at a site where a domestic pig cadaver was buried. The measurements were conducted with the ProEx System GPR manufactured by the Swedish company Mala Geoscience, with a 500 MHz antenna. The event corresponding to the pig can be clearly seen in the measurements. In order to improve the interpretation, the electromagnetic field is compared to numerical simulations computed with the pseudo-spectral Fourier method. A geological model has been defined on the basis of assumed electromagnetic properties (permittivity, conductivity and magnetic permeability). The results, when compared with the GPR measurements, show a dissimilar amplitude behaviour, with a stronger reflection event from the bottom of the pit. We have therefore performed another simulation in which the electrical conductivity of the body is decreased to a value very close to that of air. The comparison improved, showing more reflections, which could be an indication that the body contains air or has degraded to such an extent that its electrical resistivity has greatly increased.
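
    For readers unfamiliar with the pseudo-spectral Fourier method used for the simulations, a minimal sketch of its core operation, a spatial derivative computed via the FFT on a periodic grid (a generic illustration, not the authors' code), is:

        import numpy as np

        def fourier_derivative(f, dx):
            # Spectral derivative of a periodic, uniformly sampled field f
            k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)   # wavenumbers
            return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

        # quick check against the exact derivative of sin(x): error ~ 1e-14
        x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
        print(np.max(np.abs(fourier_derivative(np.sin(x), x[1] - x[0]) - np.cos(x))))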

  5. Investigation of cellular detonation structure formation via linear stability theory and 2D and 3D numerical simulations

    NASA Astrophysics Data System (ADS)

    Borisov, S. P.; Kudryavtsev, A. N.

    2017-10-01

    Linear and nonlinear stages of the instability of a plane detonation wave (DW) and the subsequent formation of a cellular detonation structure are investigated. A simple model with a one-step irreversible chemical reaction is used. The linear analysis is employed to predict the DW front structure at the early stages of its formation. The resulting eigenvalue problem is solved with a global method using a Chebyshev pseudospectral method and the LAPACK software library; a local iterative shooting procedure is used for eigenvalue refinement. Numerical simulations of the propagation of a DW in plane and rectangular channels are performed with a fifth-order shock-capturing WENO scheme. A special computational-domain shifting method is implemented in order to keep the DW within the domain. It is shown that the linear analysis gives certain predictions about the DW structure that are in agreement with the numerical simulations of the early stages of DW propagation. However, at later stages, detonation cells merge so that their number is approximately halved. Computations of DW propagation in a square channel reveal two different types of spatial structure of the DW front, a "rectangular" and a "diagonal" type. A spontaneous transition from the rectangular to the diagonal structure is observed during propagation of the DW.

  6. Effects of Density Stratification in Compressible Polytropic Convection

    NASA Astrophysics Data System (ADS)

    Manduca, Cathryn M.; Anders, Evan H.; Bordwell, Baylee; Brown, Benjamin P.; Burns, Keaton J.; Lecoanet, Daniel; Oishi, Jeffrey S.; Vasil, Geoffrey M.

    2017-11-01

    We study compressible convection in polytropically-stratified atmospheres, exploring the effect of varying the total density stratification. Using the Dedalus pseudospectral framework, we perform 2D and 3D simulations. In these experiments we vary the number of density scale heights, studying atmospheres with little stratification (1 density scale height) and significant stratification (5 density scale heights). We vary the level of convective driving (quantified by the Rayleigh number), and study flows at similar Mach numbers by fixing the initial superadiabaticity. We explore the differences between 2D and 3D simulations, and in particular study the equilibration between different reservoirs of energy (kinetic, potential and internal) in the evolved states.

  7. Moist, Double-diffusive convection

    NASA Astrophysics Data System (ADS)

    Oishi, Jeffrey; Burns, Keaton; Brown, Ben; Lecoanet, Daniel; Vasil, Geoffrey

    2017-11-01

    Double-diffusive convection occurs when the competition between a stabilizing and a destabilizing buoyancy source is mediated by a difference in the diffusivities of the two sources. Such convection is important in a wide variety of astrophysical and geophysical flows. In giant planets, however, double-diffusive convection occurs in regions where important components of the atmosphere condense. Here, we present preliminary calculations of moist, double-diffusive convection using the Dedalus pseudospectral framework. Using a simple model for phase change, we verify growth rates for moist double-diffusive convection from linear calculations and report on preliminary relationships between the ability to form a liquid phase and the resulting Nusselt number in nonlinear simulations.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrés, Nahuel, E-mail: nandres@iafe.uba.ar; Gómez, Daniel; Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Pabellón I, 1428, Buenos Aires

    We present a study of collisionless magnetic reconnection within the framework of full two-fluid MHD for a completely ionized hydrogen plasma, retaining the effects of the Hall current, electron pressure and electron inertia. We performed 2.5D simulations using a pseudo-spectral code with no dissipative effects. We check that the ideal invariants of the problem are conserved down to round-off errors. Our numerical results confirm that the change in the topology of the magnetic field lines is exclusively due to the presence of electron inertia. The computed reconnection rates remain a fair fraction of the Alfvén velocity, which therefore qualifies as fast reconnection.

  9. Effects of turbulence on the collision rate of cloud droplets

    NASA Astrophysics Data System (ADS)

    Ayala, Orlando

    This dissertation concerns the effects of air turbulence on the collision rate of atmospheric cloud droplets. This research was motivated by the speculation that air turbulence could enhance the collision rate and thereby help transform cloud droplets into rain droplets in the short times observed in nature. The air turbulence within clouds is assumed to be homogeneous and isotropic, and its small-scale motion (1 mm to 10 cm scales) is computationally generated by direct numerical integration of the full Navier-Stokes equations. Typical droplet and turbulence parameters of convective warm clouds are used to determine the Stokes numbers (St) and the nondimensional terminal velocities (Sv), which characterize droplet relative inertia and gravitational settling, respectively. A novel and efficient methodology for conducting direct numerical simulations (DNS) of hydrodynamically-interacting droplets in the context of cloud microphysics has been developed. This numerical approach solves the turbulent flow by the pseudo-spectral method with a large-scale forcing, and utilizes an improved superposition method to embed analytically the local, small-scale (10 μm to 1 mm) disturbance flows induced by the droplets. This hybrid representation of background turbulent air motion and the induced disturbance flows is then used to study the combined effects of hydrodynamic interactions and airflow turbulence on the motion and collisions of cloud droplets. Hybrid DNS results show that turbulence can increase the geometric collision kernel relative to the gravitational geometric kernel by as much as 42% due to enhanced radial relative motion and preferential concentration of droplets. The exact level of enhancement depends on the Taylor-microscale Reynolds number, turbulent dissipation rate, and droplet pair size ratio. One important finding is that turbulence has a relatively dominant effect on the collision process between droplets close in size, as the gravitational collision mechanism diminishes. A theory was developed to predict the radial relative velocity between droplets at contact. The theory agrees with our DNS results to within 5% for cloud droplets with strong settling. In addition, an empirical model is developed to quantify the radial distribution function. (Abstract shortened by UMI.)

  10. Light scattering by hexagonal ice crystals with distributed inclusions

    NASA Astrophysics Data System (ADS)

    Panetta, R. Lee; Zhang, Jia-Ning; Bi, Lei; Yang, Ping; Tang, Guanlin

    2016-07-01

    Inclusions of air bubbles or soot particles have significant effects on the single-scattering properties of ice crystals, effects that in turn have significant impacts on the radiation budget of an atmosphere containing the crystals. This study investigates some of the single-scattering effects in the case of hexagonal ice crystals, including effects on the backscattering depolarization ratio, a quantity of practical importance in the interpretation of lidar observations. One distinguishing feature of the study is an investigation of scattering properties at a visible wavelength for a crystal with size parameter (x) above 100, a size regime where one expects some agreement between exact methods and geometrical optics methods. This expectation is generally borne out in a test comparison of how the sensitivity of scattering properties to the distribution of a given volume fraction of included air is represented using (i) an approximate Monte Carlo Ray Tracing (MCRT) method and (ii) a numerically exact pseudo-spectral time-domain (PSTD) method. Another distinguishing feature of the study is a close examination, using the numerically exact Invariant-Imbedding T-Matrix (II-TM) method, of how some optical properties of importance to satellite remote sensing vary as the volume fraction of inclusions and size of crystal are varied. Although such an investigation of properties in the x>100 regime faces serious computational burdens that force a large number of idealizations and simplifications in the study, the results nevertheless provide an intriguing glimpse of what is evidently a quite complex sensitivity of optical scattering properties to inclusions of air or soot as volume fraction and size parameter are varied.

  11. A Lattice-Boltzmann model to simulate diffractive nonlinear ultrasound beam propagation in a dissipative fluid medium

    NASA Astrophysics Data System (ADS)

    Abdi, Mohamad; Hajihasani, Mojtaba; Gharibzadeh, Shahriar; Tavakkoli, Jahan

    2012-12-01

    Ultrasound waves have been widely used in diagnostic and therapeutic medical applications. Accurate and effective simulation of ultrasound beam propagation and its interaction with tissue has proved to be important. The nonlinear nature of ultrasound beam propagation, especially in the therapeutic regime, plays an important role in the mechanisms of interaction with tissue. There are three main approaches in current computational fluid dynamics (CFD) methods to model and simulate nonlinear ultrasound beams: macroscopic, mesoscopic and microscopic approaches. In this work, a mesoscopic CFD method based on the Lattice-Boltzmann model (LBM) was investigated. In the developed method, the Boltzmann equation is evolved to simulate the flow of a Newtonian fluid with a collision model, instead of solving the Navier-Stokes, continuity and state equations used in conventional CFD methods. The LBM has some prominent advantages over conventional CFD methods, including: (1) its parallel computational nature; (2) taking microscopic boundaries into account; and (3) the capability of simulating in porous and inhomogeneous media. In our proposed method, the propagating medium is discretized with a two-dimensional square grid with 9 velocity vectors at each node. Using the developed model, the nonlinear distortion and shock front development of a finite-amplitude diffractive ultrasonic beam in a dissipative fluid medium was computed and validated against published data. The results confirm that the LBM is an accurate and effective approach to model and simulate nonlinearity in finite-amplitude ultrasound beams with Mach numbers of up to 0.01, which, among others, falls within the range of the therapeutic ultrasound regime such as high intensity focused ultrasound (HIFU) beams. A comparison between HIFU nonlinear beam simulations using the proposed model and pseudospectral methods in a 2D geometry is presented.
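
    As an illustration of the D2Q9 discretization described above (a square grid with nine velocity vectors per node), a textbook BGK collide-and-stream update might look as follows; this is a generic sketch with assumed parameters, not the authors' implementation:

        import numpy as np

        # D2Q9 lattice: nine discrete velocities and their weights
        c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                      [1, 1], [-1, 1], [-1, -1], [1, -1]])
        w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

        def equilibrium(rho, u):
            # Second-order equilibrium distribution for each of the nine directions
            cu = np.einsum('qd,xyd->qxy', c, u)
            usq = np.einsum('xyd,xyd->xy', u, u)
            return rho * w[:, None, None] * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

        def collide_and_stream(f, tau):
            # One BGK time step: relax toward equilibrium, then stream along each c_q
            rho = f.sum(axis=0)
            u = np.einsum('qd,qxy->xyd', c, f) / rho[..., None]
            f += -(f - equilibrium(rho, u)) / tau
            for q in range(9):                       # periodic streaming
                f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(0, 1))
            return f

        # tiny demo: a quiescent fluid stays quiescent
        f0 = equilibrium(np.ones((32, 32)), np.zeros((32, 32, 2)))
        print(np.allclose(f0, collide_and_stream(f0.copy(), tau=0.6)))   # True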

  12. Dynamic Modeling of the Madison Dynamo Experiment

    NASA Astrophysics Data System (ADS)

    Truitt, J. L.; Forest, C. B.; Wright, J. C.

    1999-11-01

    This work focuses on a computer simulation of the magnetohydrodynamic equations applied in the geometry of the Madison Dynamo Experiment. An integration code is used to evolve both the magnetic field and the velocity field numerically in spherical coordinates using a pseudo-spectral algorithm. The focus is to realistically model an experiment to be undertaken by the Madison Dynamo Experiment Group. The first flows studied are the well documented ones of Dudley and James. The main goals of the simulation are to observe the dynamo effect with the back-reaction allowed, to observe the equipartition of magnetic and kinetic energy due to theoretically proposed turbulent effects, and to isolate and study the α and β effects.

  13. Simulations of thermal Rayleigh-Marangoni convection in a three-layer liquid-metal-battery model

    NASA Astrophysics Data System (ADS)

    Köllner, Thomas; Boeck, Thomas; Schumacher, Jörg

    2017-11-01

    Operating a liquid-metal battery produces Ohmic losses in the electrolyte layer that separates the two metal electrodes. As a consequence, temperature gradients are established that can drive thermal convection, since density and interfacial tension depend on the local temperature. In our numerical investigations, we considered three plane, immiscible layers governed by the Navier-Stokes-Boussinesq equations, held at a constant temperature of 500°C at the bottom and top. A homogeneous current is applied that leads to preferential heating of the middle electrolyte layer. We chose a typical material combination of Li separated by LiCl-KCl (a molten salt) from Pb-Bi, for which we analyzed the linear stability of pure thermal conduction and performed three-dimensional direct numerical simulations with a pseudospectral method, probing different electrolyte layer heights, overall heights, and current densities. Four instability mechanisms are identified, which are partly coupled to each other: buoyant convection in the upper electrode, buoyant convection in the molten salt layer, and Marangoni convection at each of the two interfaces between molten salt and electrode. The global turbulent heat transfer follows scaling predictions for internally heated buoyant convection. Financial support by the Deutsche Forschungsgemeinschaft under Grant No. KO 5515/1-1 is gratefully acknowledged.

  14. Direct numerical simulation of incompressible acceleration-driven variable-density turbulence

    NASA Astrophysics Data System (ADS)

    Gat, Ilana; Matheou, Georgios; Chung, Daniel; Dimotakis, Paul

    2015-11-01

    Fully developed turbulence in variable-density flow driven by an externally imposed acceleration field, e.g., gravity, is fundamental in many applications, such as inertial confinement fusion, geophysics, and astrophysics. Aspects of this turbulence regime are poorly understood and are of interest to fluid modeling. We investigate incompressible acceleration-driven variable-density turbulence by a series of direct numerical simulations of high-density fluid in-between slabs of low-density fluid, in a triply-periodic domain. A pseudo-spectral numerical method with a Helmholtz-Hodge decomposition of the pressure field, which ensures mass conservation, is employed, as documented in Chung & Pullin (2010). A uniform dynamic viscosity and local Schmidt number of unity are assumed. This configuration encapsulates a combination of flow phenomena in a temporally evolving variable-density shear flow. Density ratios up to 10 and Reynolds numbers in the fully developed turbulent regime are investigated. The temporal evolution of the vertical velocity difference across the shear layer, shear-layer growth, mean density, and Reynolds number are discussed. Statistics of Lagrangian accelerations of fluid elements and of vorticity as a function of the density ratio are also presented. This material is based upon work supported by the AFOSR, the DOE, the NSF GRFP, and Caltech.

  15. Semi-Numerical Studies of the Three-Meter Spherical Couette Experiment Utilizing Data Assimilation

    NASA Astrophysics Data System (ADS)

    Burnett, S. C.; Rojas, R.; Perevalov, A.; Lathrop, D. P.

    2017-12-01

    The model of the Earth's magnetic field has been investigated in recent years through experiments and numerical models. At the University of Maryland, experimental studies are implemented in a three-meter spherical Couette device filled with liquid sodium. The inner and outer spheres of this apparatus mimic the planet's inner core and core-mantle boundary, respectively. These experiments incorporate high velocity flows with Reynolds numbers of order 10^8. In spherical Couette geometry, the numerical scheme applied to this work features finite difference methods in the radial direction and pseudospectral spherical harmonic transforms elsewhere [Schaeffer, N. G3 (2013)]. Adding to the numerical model, data assimilation integrates the experimental outer-layer magnetic field measurements. This semi-numerical model can then be compared to the experimental results as well as used to forecast magnetic field changes. Data assimilation makes it possible to obtain estimates of internal motions of the three-meter experiment that would otherwise be intrusive or impossible to obtain experimentally, or too computationally expensive with a purely numerical code. If we can provide accurate models of the three-meter device, it is possible to attempt to model the geomagnetic field. We gratefully acknowledge the support of NSF Grant No. EAR1417148 & DGE1322106.

  16. Semi-Numerical Studies of the Three-Meter Spherical Couette Experiment Utilizing Data Assimilation

    NASA Astrophysics Data System (ADS)

    Burnett, Sarah; Rojas, Ruben; Perevalov, Artur; Lathrop, Daniel; Ide, Kayo; Schaeffer, Nathanael

    2017-11-01

    The model of the Earth's magnetic field has been investigated in recent years through experiments and numerical models. At the University of Maryland, experimental studies are implemented in a three-meter spherical Couette device filled with liquid sodium. The inner and outer spheres of this apparatus mimic the planet's inner core and core-mantle boundary, respectively. These experiments incorporate high velocity flows with Reynolds numbers of order 10^8. In spherical Couette geometry, the numerical scheme applied to this work features finite difference methods in the radial direction and pseudospectral spherical harmonic transforms elsewhere. Adding to the numerical model, data assimilation integrates the experimental outer-layer magnetic field measurements. This semi-numerical model can then be compared to the experimental results as well as used to forecast magnetic field changes. Data assimilation makes it possible to obtain estimates of internal motions of the three-meter experiment that would otherwise be intrusive or impossible to obtain experimentally, or too computationally expensive with a purely numerical code. If we can provide accurate models of the three-meter device, it is possible to attempt to model the geomagnetic field. We gratefully acknowledge the support of NSF Grant No. EAR1417148 & DGE1322106.

  17. Onset of natural convection in a continuously perturbed system

    NASA Astrophysics Data System (ADS)

    Ghorbani, Zohreh; Riaz, Amir

    2017-11-01

    The convective mixing triggered by gravitational instability plays an important role in CO2 sequestration in saline aquifers. Linear stability analyses and numerical simulations of convective mixing in porous media require perturbations of small amplitude to be imposed on the concentration field in the form of an initial shape function. In aquifers, however, the instability is triggered by local variations in porosity and permeability. In this work, we consider a canonical 2D homogeneous system in which perturbations arise from spatial variation of the porosity. The advantage of this approach is that it not only eliminates the need for an initial shape function but is also more realistic. Using a reduced nonlinear method, we first explore the effect of harmonic variations of porosity in the transverse and streamwise directions on the onset time of convection and on the late-time behavior. We then obtain the optimal porosity structure that minimizes the convection onset time. We further examine the effect of a random porosity distribution, independent of the spatial mode of the porosity structure, on the convection onset. Using high-order pseudospectral DNS, we explore how the random distribution differs from the modal approach in predicting the onset time.

  18. Coupling mesodomain positional ordering to intra-domain orientational ordering in block copolymer assembly

    NASA Astrophysics Data System (ADS)

    Burke, Christopher; Reddy, Abhiram; Prasad, Ishan; Grason, Gregory

    Block copolymer (BCP) melts form a number of symmetric microphases, e.g. columnar or double gyroid phases. BCPs with a block composed of chiral monomers are observed to form bulk phases with broken chiral symmetry, e.g. a phase of hexagonally ordered helical mesodomains. Other new structures may be possible, e.g. a double gyroid with preferred chirality, which has potential photonic applications. One approach to understanding chirality transfer from the monomer to the bulk is to use self-consistent field theory (SCFT) and incorporate an orientational order parameter with a preference for handed twist in chiral block segments, much like the texture of a cholesteric liquid crystal. Polymer chains in achiral BCPs exhibit orientational ordering that couples to the microphase geometry; a spontaneous preference for ordering may therefore affect the geometry. The influence of a preference for chiral polar (vectorial) segment order has been studied to some extent, but the influence of coupling to chiral tensorial (nematic) order has not yet been developed. We present a computational approach using SCFT with vector and tensor order which employs well developed pseudo-spectral methods. Using this we explore how tensor order influences which structures form, and whether it can promote chiral phases.

  19. Geometrical effects on western intensification of wind-driven ocean currents: The rotated-channel Stommel model, coastal orientation, and curvature

    NASA Astrophysics Data System (ADS)

    Boyd, John P.; Sanjaya, Edwin

    2014-03-01

    We revisit early models of steady western boundary currents [Gulf Stream, Kuroshio, etc.] to explore the role of irregular coastlines on jets, both to advance the research frontier and to illuminate for education. In the framework of a steady-state, quasigeostrophic model with viscosity, bottom friction and nonlinearity, we prove that rotating a straight coastline, initially parallel to the meridians, significantly thickens the western boundary layer. We analyze an infinitely long, straight channel with arbitrary orientation and bottom friction using an exact solution and singular perturbation theory, and show that the model, though simpler than Stommel's, nevertheless captures both the western boundary jet (“Gulf Stream”) and the “orientation effect”. In the rest of the article, we restrict attention to the Stommel flow (that is, linear and inviscid except for bottom friction) and apply matched asymptotic expansions, radial basis function, Fourier-Chebyshev and Chebyshev-Chebyshev pseudospectral methods to explore the effects of coastal geometry in a variety of non-rectangular domains bounded by a circle, parabolas and squircles. Although our oceans are unabashedly idealized, the narrow spikes, broad jets and stationary points vividly illustrate the power and complexity of coastal control of western boundary layers.

  20. Extreme multiplicity in cylindrical Rayleigh-Bénard convection. II. Bifurcation diagram and symmetry classification

    NASA Astrophysics Data System (ADS)

    Borońska, Katarzyna; Tuckerman, Laurette S.

    2010-03-01

    A large number of flows with distinctive patterns have been observed in experiments and simulations of Rayleigh-Bénard convection in a water-filled cylinder whose radius is twice the height. We have adapted a time-dependent pseudospectral code, first, to carry out Newton’s method and branch continuation and, second, to carry out the exponential power method and Arnoldi iteration to calculate leading eigenpairs and determine the stability of the steady states. The resulting bifurcation diagram represents a compromise between the tendency in the bulk toward parallel rolls and the requirement imposed by the boundary conditions that primary bifurcations be toward states whose azimuthal dependence is trigonometric. The diagram contains 17 branches of stable and unstable steady states. These can be classified geometrically as roll states containing two, three, and four rolls; axisymmetric patterns with one or two tori; threefold-symmetric patterns called Mercedes, Mitsubishi, marigold, and cloverleaf; trigonometric patterns called dipole and pizza; and less symmetric patterns called CO and asymmetric three rolls. The convective branches are connected to the conductive state and to each other by 16 primary and secondary pitchfork bifurcations and turning points. In order to better understand this complicated bifurcation diagram, we have partitioned it according to azimuthal symmetry. We have been able to determine the bifurcation-theoretic origin from the conductive state of all the branches observed at high Rayleigh number.

  1. Safe landing area determination for a Moon lander by reachability analysis

    NASA Astrophysics Data System (ADS)

    Arslantaş, Yunus Emre; Oehlschlägel, Thimo; Sagliano, Marco

    2016-11-01

    In recent decades, developments in space technology have paved the way to more challenging missions such as asteroid mining, space tourism and human expansion into the Solar System. These missions involve difficult tasks such as guidance schemes for re-entry, landing on celestial bodies and the implementation of large-angle maneuvers for spacecraft, and there is a need for safety systems that increase their robustness and chance of success. Reachability analysis meets this requirement by obtaining the set of all achievable states for a dynamical system starting from an initial condition with the given admissible control inputs of the system. This paper proposes an algorithm for the approximation of nonconvex reachable sets (RS) by using optimal control. To this end, a subset of the state space is discretized by equidistant points, and for each grid point a distance function is defined. This distance function acts as the objective function of a related optimal control problem (OCP). Each infinite-dimensional OCP is transcribed into a finite-dimensional Nonlinear Programming Problem (NLP) by using Pseudospectral Methods (PSM). Finally, the NLPs are solved using available tools, resulting in approximated reachable sets with information about the states of the dynamical system at these grid points. The algorithm is applied to a generic Moon landing mission. The proposed method computes approximated reachable sets and the attainable safe landing region, with information about propellant consumption and time.
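
    The overall loop can be sketched as follows; solve_ocp is a hypothetical placeholder for the PSM-transcribed NLP solve at one grid point, and the tolerance is an assumed value, not taken from the paper:

        import numpy as np

        def approximate_reachable_set(grid_points, solve_ocp, tol=1e-2):
            # For each grid point x_g, minimize the distance between the achievable
            # final state and x_g subject to the dynamics and admissible controls.
            # Grid points whose optimal distance falls below tol are taken as reachable.
            reachable = []
            for x_g in grid_points:
                distance, final_state = solve_ocp(target=x_g)   # one transcribed OCP/NLP
                if distance < tol:
                    reachable.append(final_state)
            return np.array(reachable)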

  2. Noniterative accurate algorithm for the exact exchange potential of density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinal, M.; Holas, A.

    2007-10-15

    An algorithm for determination of the exchange potential is constructed and tested. It represents a one-step procedure based on the equations derived by Krieger, Li, and Iafrate (KLI) [Phys. Rev. A 46, 5453 (1992)], implemented already as an iterative procedure by Kuemmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)]. Due to a suitable transformation of the KLI equations, we can solve them avoiding iterations. Our algorithm is applied to the closed-shell atoms, from Be up to Kr, within the DFT exchange-only approximation. Using pseudospectral techniques for representing orbitals, we obtain extremely accurate values of total and orbital energies, with errors at least four orders of magnitude smaller than known in the literature.

  3. Route to chaos in porous-medium thermal convection

    NASA Astrophysics Data System (ADS)

    Kimura, S.; Schubert, G.; Straus, J. M.

    1986-05-01

    The transition to chaos in two-dimensional single-cell time-dependent convection in a square cross section of porous material saturated with fluid and heated from below is investigated theoretically by means of pseudospectral numerical simulations. The results are presented graphically and discussed in terms of the time-averaged Nusselt number, the oscillation mechanism, and similarities to Hele-Shaw convection. As the Rayleigh number (R) increases, the system is found to proceed from the steady state to a simply periodic state, a quasi-periodic state with two basic frequencies, a second simply periodic state, and finally to chaos. The transitions occur at R = 4π², 380-400, 500-520, 560-570, and 850-1000. The intermediate and chaotic regimes are characterized in detail.

  4. Trajectory Optimization: OTIS 4

    NASA Technical Reports Server (NTRS)

    Riehl, John P.; Sjauw, Waldy K.; Falck, Robert D.; Paris, Stephen W.

    2010-01-01

    The latest release of the Optimal Trajectories by Implicit Simulation (OTIS4) allows users to simulate and optimize aerospace vehicle trajectories. With OTIS4, one can seamlessly generate optimal trajectories and parametric vehicle designs simultaneously. New features also allow OTIS4 to solve non-aerospace continuous-time optimal control problems. The inputs and outputs of OTIS4 have been updated extensively from previous versions. Inputs now make use of object-oriented constructs, including one called a metastring. Metastrings use a greatly improved calculator and common nomenclature to reduce the user's workload. They allow for more flexibility in specifying vehicle physical models, boundary conditions, and path constraints. The OTIS4 calculator supports common mathematical functions, Boolean operations, and conditional statements. This allows users to define their own variables for use as outputs, constraints, or objective functions. The user-defined outputs can directly interface with other programs, such as spreadsheets, plotting packages, and visualization programs. Internally, OTIS4 has more explicit and implicit integration procedures, including high-order collocation methods, the pseudo-spectral method, and several variations of multiple shooting. Users may switch easily between the various methods. Several unique numerical techniques, such as automated variable scaling and implicit integration grid refinement, support the integration methods. OTIS4 is also significantly more user friendly than previous versions. The installation process is nearly identical on various platforms, including Microsoft Windows, Apple OS X, and Linux operating systems. Cross-platform scripts also help make the execution of OTIS and post-processing of data easier. OTIS4 is supplied free by NASA and is subject to ITAR (International Traffic in Arms Regulations) restrictions. Users must have a Fortran compiler, and a Python interpreter is highly recommended.

  5. Optimal spacecraft formation establishment and reconfiguration propelled by the geomagnetic Lorentz force

    NASA Astrophysics Data System (ADS)

    Huang, Xu; Yan, Ye; Zhou, Yang

    2014-12-01

    The Lorentz force acting on an electrostatically charged spacecraft as it moves through the planetary magnetic field could be utilized as propellantless electromagnetic propulsion for orbital maneuvering, such as spacecraft formation establishment and formation reconfiguration. By assuming that the Earth's magnetic field could be modeled as a tilted dipole located at the center of Earth that corotates with Earth, a dynamical model that describes the relative orbital motion of Lorentz spacecraft is developed. Based on the proposed dynamical model, the energy-optimal open-loop trajectories of control inputs, namely, the required specific charges of Lorentz spacecraft, for Lorentz-propelled spacecraft formation establishment or reconfiguration problems with both fixed and free final conditions constraints are derived via Gauss pseudospectral method. The effect of the magnetic dipole tilt angle on the optimal control inputs and the relative transfer trajectories for formation establishment or reconfiguration is also investigated by comparisons with the results derived from a nontilted dipole model. Furthermore, a closed-loop integral sliding mode controller is designed to guarantee the trajectory tracking in the presence of external disturbances and modeling errors. The stability of the closed-loop system is proved by a Lyapunov-based approach. Numerical simulations are presented to verify the validity of the proposed open-loop control methods and demonstrate the performance of the closed-loop controller. Also, the results indicate the dipole tilt angle should be considered when designing control strategies for Lorentz-propelled spacecraft formation establishment or reconfiguration.

  6. Stabilization of Taylor-Couette flow due to time-periodic outer cylinder oscillation

    NASA Technical Reports Server (NTRS)

    Murray, B. T.; Mcfadden, G. B.; Coriell, S. R.

    1990-01-01

    The linear stability of circular Couette flow between concentric infinite cylinders is considered for the case when the inner cylinder is rotated at a constant angular velocity and the outer cylinder is driven sinusoidally in time with zero mean rotation. This configuration was studied experimentally by Walsh and Donnelly. The critical Reynolds numbers calculated from linear stability theory agree well with the experimental values, except at large modulation amplitudes and small frequencies. The theoretical values are obtained using Floquet theory implemented in two distinct approaches: a truncated Fourier series representation in time, and a fundamental solution matrix based on a Chebyshev pseudospectral representation in space. For large amplitude, low frequency modulation, the linear eigenfunctions are temporally complex, consisting of a quiescent region followed by rapid change in the perturbed flow velocities.

  7. Theory and simulation of time-fractional fluid diffusion in porous media

    NASA Astrophysics Data System (ADS)

    Carcione, José M.; Sanchez-Sesma, Francisco J.; Luzón, Francisco; Perez Gavilán, Juan J.

    2013-08-01

    We simulate fluid flow in inhomogeneous anisotropic porous media using a time-fractional diffusion equation and the staggered Fourier pseudospectral method to compute the spatial derivatives. A fractional derivative of order ν, with 0 < ν < 2, replaces the first-order time derivative in the classical diffusion equation. It implies a permeability tensor with a power-law time dependence, which describes memory effects and accounts for anomalous diffusion. We provide a complete analysis of the physics based on plane waves. The concepts of phase, group and energy velocities are analyzed to describe the location of the diffusion front, and the attenuation and quality factors are obtained to quantify the amplitude decay. We also obtain the frequency-domain Green function. The time derivative is computed with the Grünwald-Letnikov summation, which generalizes the standard finite-difference operator to derivatives of fractional order. The results match the analytical solution obtained from the Green function. An example of the pressure field generated by fluid injection in a heterogeneous sandstone illustrates the performance of the algorithm for different values of ν. The calculation requires storing the whole pressure field in the computer memory, since anomalous diffusion ‘recalls the past’.
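
    The Grünwald-Letnikov summation mentioned above generalizes the backward difference to fractional order; a minimal sketch of the recursive weights and the resulting approximation of the order-ν time derivative (a generic illustration, not the authors' code) is:

        import numpy as np

        def grunwald_letnikov_weights(nu, n_steps):
            # Recursive weights: w_0 = 1, w_j = w_{j-1} * (1 - (nu + 1)/j)
            w = np.ones(n_steps + 1)
            for j in range(1, n_steps + 1):
                w[j] = w[j - 1] * (1.0 - (nu + 1.0) / j)
            return w

        def gl_fractional_derivative(history, nu, dt):
            # history = [p(t_n), p(t_n - dt), ..., p(t_0)]; the whole past must be
            # stored, which is why the scheme is said to 'recall the past'.
            w = grunwald_letnikov_weights(nu, len(history) - 1)
            return dt**(-nu) * sum(wj * pj for wj, pj in zip(w, history))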

  8. A flatfile of ground motion intensity measurements from induced earthquakes in Oklahoma and Kansas

    USGS Publications Warehouse

    Rennolet, Steven B.; Moschetti, Morgan P.; Thompson, Eric M.; Yeck, William

    2018-01-01

    We have produced a uniformly processed database of orientation-independent (RotD50, RotD100) ground motion intensity measurements containing peak horizontal ground motions (accelerations and velocities) and 5-percent-damped pseudospectral accelerations (0.1–10 s) from more than 3,800 M ≥ 3 earthquakes in Oklahoma and Kansas that occurred between January 2009 and December 2016. Ground motion time series were collected from regional, national, and temporary seismic arrays out to 500 km. We relocated the majority of the earthquake hypocenters using a multiple-event relocation algorithm to produce a set of near-uniformly processed hypocentral locations. Ground motion processing followed standard methods, with the primary objective of reducing the effects of noise on the measurements. Regional wave-propagation features and the high seismicity rate required careful selection of signal windows to ensure that we captured the entire ground motion record and that contaminating signals from extraneous earthquakes did not contribute to the database. Processing was carried out with an automated scheme and resulted in a database comprising more than 174,000 records (https://dx.doi.org/10.5066/F73B5X8N). We anticipate that these results will be useful for improved understanding of earthquake ground motions and for seismic hazard applications.
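
    A minimal sketch of how an orientation-independent RotD measure of this kind is typically obtained from two horizontal components (a generic illustration of the standard definition, not the USGS processing code; for pseudospectral accelerations the same rotation is applied to the oscillator response at each period):

        import numpy as np

        def rotd(acc_ns, acc_ew, percentile=50, n_angles=180):
            # Rotate the two horizontal time series through 0-179 degrees, take the
            # peak absolute amplitude at each angle, and return the requested
            # percentile over angles (50 -> RotD50, 100 -> RotD100).
            angles = np.deg2rad(np.arange(n_angles))
            peaks = [np.max(np.abs(acc_ns * np.cos(a) + acc_ew * np.sin(a)))
                     for a in angles]
            return np.percentile(peaks, percentile)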

  9. Light Scattering by Ice Crystals Containing Air Bubbles

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Panetta, R. L.; Yang, P.; Bi, L.

    2014-12-01

    The radiative effects of ice clouds are often difficult to estimate accurately, but they are very important for the interpretation of observations and for climate modeling. Our understanding of these effects is primarily based on scattering calculations, but due to the variability in ice habit it is computationally difficult to determine the required scattering and absorption properties, and the difficulties are only compounded by the need to consider air and carbon inclusions of the sort frequently observed in collected samples. Much of the previous work on the effects of inclusions in ice particles on scattering properties has been conducted with variants of geometric optics methods. We report on simulations of scattering by ice crystals with enclosed air bubbles using the pseudo-spectral time domain method (PSTD) and the improved geometric optics method (IGOM). A Bouncing Ball Model (BBM) is proposed as a parametrization of air bubbles, and the results are compared with Monte Carlo radiative transfer calculations. Consistent with earlier studies, we find that air inclusions lead to a smoothing of variations in the phase function, a weakening of halos, and a reduction of backscattering. We extend these studies by examining the effects of the particular arrangement of a fixed number of bubbles, as well as the effects of splitting a given number of bubbles into a greater number of smaller bubbles with the same total volume fraction. The results show that the phase function changes little for stochastically distributed air bubbles. They also show that local maxima of the phase function in backward directions are smoothed out when bubbles are broken into smaller ones; a single large bubble favors more forward scattering than multiple small internal scatterers.

  10. Ground motion simulation for the 23 August 2011, Mineral, Virginia earthquake using physics-based and stochastic broadband methods

    USGS Publications Warehouse

    Sun, Xiaodan; Hartzell, Stephen; Rezaeian, Sanaz

    2015-01-01

    Three broadband simulation methods are used to generate synthetic ground motions for the 2011 Mineral, Virginia, earthquake and compare with observed motions. The methods include a physics‐based model by Hartzell et al. (1999, 2005), a stochastic source‐based model by Boore (2009), and a stochastic site‐based model by Rezaeian and Der Kiureghian (2010, 2012). The ground‐motion dataset consists of 40 stations within 600 km of the epicenter. Several metrics are used to validate the simulations: (1) overall bias of response spectra and Fourier spectra (from 0.1 to 10 Hz); (2) spatial distribution of residuals for GMRotI50 peak ground acceleration (PGA), peak ground velocity, and pseudospectral acceleration (PSA) at various periods; (3) comparison with ground‐motion prediction equations (GMPEs) for the eastern United States. Our results show that (1) the physics‐based model provides satisfactory overall bias from 0.1 to 10 Hz and produces more realistic synthetic waveforms; (2) the stochastic site‐based model also yields more realistic synthetic waveforms and performs superiorly for frequencies greater than about 1 Hz; (3) the stochastic source‐based model has larger bias at lower frequencies (<0.5  Hz) and cannot reproduce the varying frequency content in the time domain. The spatial distribution of GMRotI50 residuals shows that there is no obvious pattern with distance in the simulation bias, but there is some azimuthal variability. The comparison between synthetics and GMPEs shows similar fall‐off with distance for all three models, comparable PGA and PSA amplitudes for the physics‐based and stochastic site‐based models, and systematic lower amplitudes for the stochastic source‐based model at lower frequencies (<0.5  Hz).

  11. Turbulent breakage of ductile aggregates.

    PubMed

    Marchioli, Cristian; Soldati, Alfredo

    2015-05-01

    In this paper we study breakage rate statistics of small colloidal aggregates in nonhomogeneous anisotropic turbulence. We use pseudospectral direct numerical simulation of turbulent channel flow and Lagrangian tracking to follow the motion of the aggregates, modeled as sub-Kolmogorov massless particles. We focus specifically on the effects produced by ductile rupture: this rupture is initially activated when fluctuating hydrodynamic stresses exceed a critical value, σ > σ_cr, and is brought to completion when the energy absorbed by the aggregate meets the critical breakage value. We show that ductile rupture breakage rates are significantly reduced with respect to the case of instantaneous brittle rupture (i.e., breakage occurs as soon as σ > σ_cr). These discrepancies are due to the different energy values at play as well as to the statistical features of energy distribution in the anisotropic turbulence case examined.

  12. Acceleration of stable TTI P-wave reverse-time migration with GPUs

    NASA Astrophysics Data System (ADS)

    Kim, Youngseo; Cho, Yongchae; Jang, Ugeun; Shin, Changsoo

    2013-03-01

    When a pseudo-acoustic TTI (tilted transversely isotropic) coupled wave equation is used to implement reverse-time migration (RTM), shear-wave energy is significantly included in the migration image. Because anisotropy has intrinsic elastic characteristics, coupling of the P-wave and S-wave modes in the pseudo-acoustic wave equation is inevitable. In RTM with only primary energy, or the P-wave mode, in the seismic data, the S-wave energy is regarded as noise in the migration image. To solve this problem, we derive a pure P-wave equation for TTI media that excludes the S-wave energy. Additionally, we apply the rapid expansion method (REM), based on a Chebyshev expansion, together with a pseudo-spectral method (PSM) to calculate the spatial derivatives in the wave equation. When REM is combined with the PSM for the spatial derivatives, wavefields with high numerical accuracy can be obtained without grid dispersion in numerical wave modeling. Another problem in the implementation of TTI RTM is that wavefields in areas with high gradients of dip or azimuth angles can blow up during the forward and backward steps of the RTM. We stabilize the wavefields by applying a spatial-frequency-domain high-cut filter when calculating the spatial derivatives with the PSM. In addition, to increase performance, the graphics processing unit (GPU) architecture is used instead of a traditional CPU architecture. To quantify the acceleration relative to the CPU version of our RTM, we analyze performance measurements as a function of the number of GPUs employed.
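
    A rough sketch of the kind of wavenumber-domain high-cut filtering applied when evaluating pseudo-spectral derivatives is given below; the cosine taper and cutoff fraction are assumed, generic choices, not the paper's exact filter:

        import numpy as np

        def filtered_spectral_derivative(f, dx, cutoff=0.85):
            # First derivative of a periodic field via FFT, with the highest
            # wavenumbers tapered to zero to suppress the blow-up described above.
            k = 2.0 * np.pi * np.fft.fftfreq(f.size, d=dx)
            k_abs, k_max = np.abs(k), np.max(np.abs(k))
            taper = np.ones(f.size)
            high = k_abs > cutoff * k_max
            taper[high] = 0.5 * (1 + np.cos(np.pi * (k_abs[high] - cutoff * k_max)
                                            / ((1 - cutoff) * k_max)))
            return np.real(np.fft.ifft(1j * k * taper * np.fft.fft(f)))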

  13. Numerical Simulation of Protoplanetary Vortices

    NASA Technical Reports Server (NTRS)

    Lin, H.; Barranco, J. A.; Marcus, P. S.

    2003-01-01

    The fluid dynamics within a protoplanetary disk has been attracting the attention of many researchers for a few decades. Previous works include, to list only a few among many others, the well-known prescription of Shakura & Sunyaev, the convective and instability study of Stone & Balbus and Hawley et al., the Rossby wave approach of Lovelace et al., as well as a recent work by Klahr & Bodenheimer, which attempted to identify turbulent flow within the disk. The disk is commonly understood to be a thin gas disk rotating around a central star with differential rotation (the Keplerian velocity), and the central quest remains as how the flow behavior deviates (albeit by a small amount) from a strong balance established between gravitational and centrifugal forces, transfers mass and momentum inward, and eventually forms planetesimals and planets. In earlier works we have briefly described the possible physical processes involved in the disk; we have proposed the existence of long-lasting, coherent vortices as an efficient agent for mass and momentum transport. In particular, Barranco et al. provided a general mathematical framework that is suitable for the asymptotic regime of the disk; Barranco & Marcus (2000) addressed a proposed vortex-dust interaction mechanism which might lead to planetesimal formation; and Lin et al. (2002), as inspired by general geophysical vortex dynamics, proposed basic mechanisms by which vortices can transport mass and angular momentum. The current work follows up on our previous effort. We shall focus on the detailed numerical implementation of our problem. We have developed a parallel, pseudo-spectral code to simulate the full three-dimensional vortex dynamics in a stably-stratified, differentially rotating frame, which represents the environment of the disk. Our simulation is validated with full diagnostics and comparisons, and we present our results on a family of three-dimensional, coherent equilibrium vortices.

  14. Tests of dynamic Lagrangian eddy viscosity models in Large Eddy Simulations of flow over three-dimensional bluff bodies

    NASA Astrophysics Data System (ADS)

    Tseng, Yu-Heng; Meneveau, Charles; Parlange, Marc B.

    2004-11-01

    Large Eddy Simulations (LES) of atmospheric boundary-layer air movement in urban environments are especially challenging due to complex ground topography. Typically in such applications, fairly coarse grids must be used, and the subgrid-scale (SGS) model is expected to play a crucial role. A LES code using pseudo-spectral discretization in horizontal planes and second-order differencing in the vertical is implemented in conjunction with the immersed boundary method to incorporate complex ground topography, with the classic equilibrium log-law boundary condition in the near-wall region, and with several versions of the eddy-viscosity model: (1) the constant-coefficient Smagorinsky model, (2) the dynamic, scale-invariant Lagrangian model, and (3) the dynamic, scale-dependent Lagrangian model. Other planar-averaged dynamic models are not suitable because spatial averaging is not possible without directions of statistical homogeneity. These SGS models are tested in LES of flow around a square cylinder and of flow over surface-mounted cubes. Effects on the mean flow are documented and found not to be major. Dynamic Lagrangian models give a physically more realistic SGS viscosity field, and in general the scale-dependent Lagrangian model produces a larger Smagorinsky coefficient than the scale-invariant one, leading to reduced distributions of resolved rms velocities, especially in the boundary layers near the bluff bodies.
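
    For reference, the constant-coefficient Smagorinsky model that the dynamic Lagrangian variants generalize can be sketched as follows (generic form; the coefficient value is a typical assumed choice, not taken from the study):

        import numpy as np

        def smagorinsky_viscosity(grad_u, delta, cs=0.16):
            # nu_t = (C_s * Delta)**2 * |S|, with |S| = sqrt(2 S_ij S_ij) and
            # S_ij the resolved strain-rate tensor; grad_u has shape (..., 3, 3).
            s = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))
            s_mag = np.sqrt(2.0 * np.einsum('...ij,...ij->...', s, s))
            return (cs * delta)**2 * s_mag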

  15. Reynolds Number Effect on Spatial Development of Viscous Flow Induced by Wave Propagation Over Bed Ripples

    NASA Astrophysics Data System (ADS)

    Dimas, Athanassios A.; Kolokythas, Gerasimos A.

    Numerical simulations of the free-surface flow developing from the propagation of nonlinear water waves over a rippled bottom are performed, assuming that the flow is two-dimensional, incompressible and viscous. The simulations are based on the numerical solution of the Navier-Stokes equations subject to the fully nonlinear free-surface boundary conditions and appropriate bottom, inflow and outflow boundary conditions. The equations are transformed so that the computational domain becomes time-independent. For the spatial discretization, a hybrid scheme is used in which central finite differences are applied in the horizontal direction and a pseudo-spectral approximation method with Chebyshev polynomials in the vertical direction. A fractional time-step scheme is used for the temporal discretization. Over the rippled bed, the wave boundary layer thickness increases significantly, in comparison to that over a flat bed, due to flow separation at the ripple crests, which generates alternating circulation regions. The amplitude of the wall shear stress over the ripples increases with increasing ripple height or decreasing Reynolds number, while the corresponding friction force is insensitive to changes in ripple height. The amplitudes of the form drag forces due to dynamic and hydrostatic pressures increase with increasing ripple height but are insensitive to changes in the Reynolds number; therefore, the contribution of friction to the total drag force decreases with increasing ripple height or increasing Reynolds number.

  16. The evolution of hyperboloidal data with the dual foliation formalism: mathematical analysis and wave equation tests

    NASA Astrophysics Data System (ADS)

    Hilditch, David; Harms, Enno; Bugner, Marcus; Rüter, Hannes; Brügmann, Bernd

    2018-03-01

    A long-standing problem in numerical relativity is the satisfactory treatment of future null-infinity. We propose an approach for the evolution of hyperboloidal initial data in which the outer boundary of the computational domain is placed at infinity. The main idea is to apply the ‘dual foliation’ formalism in combination with hyperboloidal coordinates and the generalized harmonic gauge formulation. The strength of the present approach is that, following the ideas of Zenginoğlu, a hyperboloidal layer can be naturally attached to a central region using standard coordinates of numerical relativity applications. Employing a generalization of the standard hyperboloidal slices, developed by Calabrese et al, we find that all formally singular terms take a trivial limit as we head to null-infinity. A byproduct is a numerical approach for hyperboloidal evolution of nonlinear wave equations violating the null-condition. The height-function method, used often for fixed background spacetimes, is generalized in such a way that the slices can be dynamically ‘waggled’ to maintain the desired outgoing coordinate lightspeed precisely. This is achieved by dynamically solving the eikonal equation. As a first numerical test of the new approach we solve the 3D flat space scalar wave equation. The simulations, performed with the pseudospectral bamps code, show that outgoing waves are cleanly absorbed at null-infinity and that errors converge away rapidly as resolution is increased.

  17. Direct and Large Eddy Simulation of non-equilibrium wall-bounded turbulent flows

    NASA Astrophysics Data System (ADS)

    Park, Hee-Jun

    2005-11-01

    The performance of several existing SGS models in non-equilibrium wall-bounded turbulent flows is investigated through comparisons of LES and DNS. The test problem is a shear-driven three-dimensional turbulent channel flow at a base Re_τ ≈ 210, established by impulsive motion of one of the channel walls in the spanwise direction with a spanwise velocity equal to 3/4 of the bulk mean velocity in the channel. The DNS and LES are performed using pseudo-spectral methods with resolutions of 128x128x129 and 32x64x65, respectively. The SGS models tested include the Nonlinear Interactions Approximation (NIA) model [Haliloglu and Akhavan (2004)], the Dynamic Smagorinsky Model (DSM) [Germano et al. (1991)], and the Dynamic Mixed Model (DMM) [Zang et al. (1993)]. The results show that NIA gives the best overall agreement with DNS. Both DMM and DSM over-predict the decay of the mean streamwise wall shear stress on the moving wall, while NIA gives results in close agreement with DNS. Similarly, NIA gives the best agreement with DNS in the prediction of the mean velocity, the higher-order turbulence statistics, and the lag angle between the mean shear and the turbulent shear stress. These results suggest that non-equilibrium wall-bounded turbulent flows can be accurately computed by LES with NIA as the SGS model.

  18. Coupling extended magnetohydrodynamic fluid codes with radiofrequency ray tracing codes for fusion modeling

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas G.; Held, Eric D.

    2015-09-01

    Neoclassical tearing modes are macroscopic (L ∼ 1 m) instabilities in magnetic fusion experiments; if unchecked, these modes degrade plasma performance and may catastrophically destroy plasma confinement by inducing a disruption. Fortunately, the use of properly tuned and directed radiofrequency waves (λ ∼ 1 mm) can eliminate these modes. Numerical modeling of this difficult multiscale problem requires the integration of separate mathematical models for each length and time scale (Jenkins and Kruger, 2012 [21]); the extended MHD model captures macroscopic plasma evolution while the RF model tracks the flow and deposition of injected RF power through the evolving plasma profiles. The scale separation enables use of the eikonal (ray-tracing) approximation to model the RF wave propagation. In this work we demonstrate a technique, based on methods of computational geometry, for mapping the ensuing RF data (associated with discrete ray trajectories) onto the finite-element/pseudospectral grid that is used to model the extended MHD physics. In the new representation, the RF data can then be used to construct source terms in the equations of the extended MHD model, enabling quantitative modeling of RF-induced tearing mode stabilization. Though our specific implementation uses the NIMROD extended MHD (Sovinec et al., 2004 [22]) and GENRAY RF (Smirnov et al., 1994 [23]) codes, the approach presented can be applied more generally to any code coupling requiring the mapping of ray tracing data onto Eulerian grids.

  19. Numerical Simulation and Quantitative Uncertainty Assessment of Microchannel Flow

    NASA Astrophysics Data System (ADS)

    Debusschere, Bert; Najm, Habib; Knio, Omar; Matta, Alain; Ghanem, Roger; Le Maitre, Olivier

    2002-11-01

    This study investigates the effect of uncertainty in physical model parameters on computed electrokinetic flow of proteins in a microchannel with a potassium phosphate buffer. The coupled momentum, species transport, and electrostatic field equations give a detailed representation of electroosmotic and pressure-driven flow, including sample dispersion mechanisms. The chemistry model accounts for pH-dependent protein labeling reactions as well as detailed buffer electrochemistry in a mixed finite-rate/equilibrium formulation. To quantify uncertainty, the governing equations are reformulated using a pseudo-spectral stochastic methodology, which uses polynomial chaos expansions to describe uncertain/stochastic model parameters, boundary conditions, and flow quantities. Integration of the resulting equations for the spectral mode strengths gives the evolution of all stochastic modes for all variables. Results show the spatiotemporal evolution of uncertainties in predicted quantities and highlight the dominant parameters contributing to these uncertainties during various flow phases. This work is supported by DARPA.
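
    The abstract uses an intrusive polynomial chaos formulation, in which the governing equations are rewritten for the spectral mode strengths themselves. As a much smaller illustration of the same expansion, the sketch below computes the Hermite polynomial chaos modes of a scalar model output with one Gaussian uncertain parameter by non-intrusive spectral projection; the toy model g and the parameter values are assumptions for illustration only.

      import numpy as np
      from math import factorial
      from numpy.polynomial.hermite_e import hermegauss, hermeval

      def pc_modes(g, mu, sigma, order=4, nquad=20):
          """Coefficients c_k of g(mu + sigma*Z) in probabilists' Hermite polynomials He_k."""
          x, w = hermegauss(nquad)                        # Gauss-Hermite rule, weight exp(-x^2/2)
          coeffs = []
          for k in range(order + 1):
              ek = np.zeros(k + 1); ek[k] = 1.0           # selects He_k
              num = np.sum(w * g(mu + sigma*x) * hermeval(x, ek))
              den = np.sum(w * hermeval(x, ek)**2)        # ~ k! * sqrt(2*pi)
              coeffs.append(num / den)
          return np.array(coeffs)

      modes = pc_modes(np.exp, mu=0.0, sigma=0.3)         # e.g. an exponential response
      mean = modes[0]
      variance = sum(modes[k]**2 * factorial(k) for k in range(1, len(modes)))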

  20. Large Eddy Simulation of wind turbine wakes: detailed comparisons of two codes focusing on effects of numerics and subgrid modeling

    NASA Astrophysics Data System (ADS)

    Martínez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-01

    In this work we report on results from a detailed comparative numerical study of two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low-dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  1. Linear and weakly nonlinear aspects of free shear layer instability, roll-up, subharmonic interaction and wall influence

    NASA Technical Reports Server (NTRS)

    Cain, A. B.; Thompson, M. W.

    1986-01-01

    The growth of the momentum thickness and the modal disturbance energies are examined to study the nature and onset of nonlinearity in a temporally growing free shear layer. A shooting technique is used to find solutions to the linearized eigenvalue problem, and pseudospectral weakly nonlinear simulations of this flow are obtained for comparison. The roll-up of a fundamental disturbance follows linear theory predictions even with a 20 percent disturbance amplitude. A weak nonlinear interaction of the disturbance creates a finite-amplitude mean shear stress which dominates the growth of the layer momentum thickness, and the disturbance growth rate changes until the fundamental disturbance dominates. The fundamental then becomes an energy source for the harmonic, resulting in an increase in the growth rate of the subharmonic over the linear prediction even when the fundamental has no energy to give. Also considered are phase relations and the wall influence.

  2. Exploring the feasibility of focusing CW light through a scattering medium into closely spaced twin peaks via numerical solutions of Maxwell’s equations

    NASA Astrophysics Data System (ADS)

    Tseng, Snow H.; Chang, Shih-Hui

    2018-04-01

    Here we present a numerical simulation to analyze the effect of scattering on focusing light into closely-spaced twin peaks. The pseudospectral time-domain (PSTD) method is implemented to model continuous-wave (CW) light propagation through a scattering medium. Simulations show that CW light can propagate through a scattering medium and focus into closely-spaced twin peaks. CW light of various wavelengths focusing into twin peaks with sub-diffraction spacing is simulated. In addition, light propagation through scattering media of various number densities is simulated to determine how the CW light focusing phenomenon depends on the scattering medium. The reported simulations demonstrate the feasibility of focusing CW light into twin peaks with sub-diffraction dimensions. More importantly, based upon numerical solutions of Maxwell’s equations, the findings show that the sub-diffraction focusing phenomenon can be achieved with sparse or densely-packed scattering media.
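
    The defining feature of PSTD is that spatial derivatives of the fields are evaluated in Fourier space rather than by finite differences. The sketch below shows that ingredient in its simplest setting, a 1D pulse on a periodic vacuum domain in normalized units; it omits the scatterers, CW source, and absorbing boundaries of the actual study, and the grid and time step are illustrative assumptions.

      import numpy as np

      n, L, c = 256, 1.0, 1.0                      # grid points, domain length, wave speed
      x = np.linspace(0.0, L, n, endpoint=False)
      k = 2*np.pi*np.fft.fftfreq(n, d=L/n)
      dt = 0.2 * (L/n) / c                         # conservative time step

      def ddx(f):                                  # spectral spatial derivative
          return np.real(np.fft.ifft(1j*k*np.fft.fft(f)))

      E = np.exp(-((x - 0.5*L)/0.05)**2)           # initial Gaussian pulse
      H = np.zeros(n)
      for _ in range(500):                         # leapfrog-style update, normalized Maxwell
          H -= dt * ddx(E)
          E -= dt * ddx(H)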

  3. Zero-Propellant Maneuver[TM] Flight Results for 180 deg ISS Rotation

    NASA Technical Reports Server (NTRS)

    Bedrossian, Nazareth; Bhatt, Sagar; Lammers, Mike; Nguyen, Louis

    2007-01-01

    This paper presents results for the Zero Propellant Maneuver™ (ZPM) attitude control concept flight demonstration. On March 3, 2007, a ZPM was used to reorient the International Space Station 180 degrees without using any propellant. The identical reorientation performed with thrusters would have burned 110 lb of propellant. The ZPM was a pre-planned trajectory used to command the CMG attitude hold controller to perform the maneuver between specified initial and final states while maintaining the CMGs within their operational limits. The trajectory was obtained from a pseudospectral solution to a new optimal attitude control problem. The flight test established the breakthrough capability to simultaneously perform a large-angle attitude maneuver and momentum desaturation without the need to use thrusters. The flight implementation did not require any modifications to flight software. This approach is applicable to any spacecraft controlled by momentum storage devices.

  4. Large Eddy Simulation of Wind Turbine Wakes. Detailed Comparisons of Two Codes Focusing on Effects of Numerics and Subgrid Modeling

    DOE PAGES

    Martinez-Tossas, Luis A.; Churchfield, Matthew J.; Meneveau, Charles

    2015-06-18

    In this work we report on results from a detailed comparative numerical study of two Large Eddy Simulation (LES) codes using the Actuator Line Model (ALM). The study focuses on prediction of wind turbine wakes and their breakdown when subject to uniform inflow. Previous studies have shown relative insensitivity to subgrid modeling in the context of a finite-volume code. The present study uses the low-dissipation pseudo-spectral LES code from Johns Hopkins University (LESGO) and the second-order, finite-volume OpenFOAM code (SOWFA) from the National Renewable Energy Laboratory. When subject to uniform inflow, the loads on the blades are found to be unaffected by subgrid models or numerics, as expected. The turbulence in the wake and the location of transition to a turbulent state are affected by the subgrid-scale model and the numerics.

  5. Pseudo spectral collocation with Maxwell polynomials for kinetic equations with energy diffusion

    NASA Astrophysics Data System (ADS)

    Sánchez-Vizuet, Tonatiuh; Cerfon, Antoine J.

    2018-02-01

    We study the approximation and stability properties of a recently popularized discretization strategy for the speed variable in kinetic equations, based on pseudo-spectral collocation on a grid defined by the zeros of a non-standard family of orthogonal polynomials called Maxwell polynomials. Taking a one-dimensional equation describing energy diffusion due to Fokker-Planck collisions with a Maxwell-Boltzmann background distribution as the test bench for the performance of the scheme, we find that Maxwell based discretizations outperform other commonly used schemes in most situations, often by orders of magnitude. This provides a strong motivation for their use in high-dimensional gyrokinetic simulations. However, we also show that Maxwell based schemes are subject to a non-modal time stepping instability in their most straightforward implementation, so that special care must be given to the discrete representation of the linear operators in order to benefit from the advantages provided by Maxwell polynomials.
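
    The scheme above collocates on the zeros of a non-standard orthogonal family, so the familiar closed-form differentiation matrices do not apply directly. As generic background (a standard construction, not the authors' implementation), the sketch below builds the first-order pseudo-spectral differentiation matrix for an arbitrary set of distinct nodes from barycentric weights; it says nothing about the non-modal time-stepping instability discussed in the abstract.

      import numpy as np

      def differentiation_matrix(x):
          """First-derivative collocation matrix for distinct nodes x (barycentric form)."""
          x = np.asarray(x, dtype=float)
          n = len(x)
          diff = x[:, None] - x[None, :]
          np.fill_diagonal(diff, 1.0)
          w = 1.0 / np.prod(diff, axis=1)              # barycentric weights
          D = (w[None, :] / w[:, None]) / (x[:, None] - x[None, :] + np.eye(n))
          np.fill_diagonal(D, 0.0)
          np.fill_diagonal(D, -D.sum(axis=1))          # diagonal is the negative row sum
          return D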

  6. Large-scale disruptions in a current-carrying magnetofluid

    NASA Technical Reports Server (NTRS)

    Dahlburg, J. P.; Montgomery, D.; Doolen, G. D.; Matthaeus, W. H.

    1986-01-01

    Internal disruptions in a strongly magnetized electrically conducting fluid contained within a rigid conducting cylinder of square cross section are investigated theoretically, both with and without an externally applied axial electric field, by means of computer simulations using the pseudospectral three-dimensional Strauss-equations code of Dahlburg et al. (1985). Results from undriven inviscid, driven inviscid, and driven viscid simulations are presented graphically, and the significant effects of low-order truncations on the modeling accuracy are considered. A helical current filament about the cylinder axis is observed. The ratio of turbulent kinetic energy to total poloidal magnetic energy is found to undergo cyclic bounces in the undriven inviscid case, to exhibit one large bounce followed by decay to a quasi-steady state with poloidal fluid velocity flow in the driven inviscid case, and to show one large bounce followed by further sawtoothlike bounces in the driven viscid case.

  7. Multiscale Analysis of Rapidly Rotating Dynamo Simulations

    NASA Astrophysics Data System (ADS)

    Orvedahl, R.; Calkins, M. A.; Featherstone, N. A.

    2017-12-01

    The magnetic fields of planets and stars are generated by dynamo action in their electrically conducting fluid interiors. Numerical models of this process solve the fundamental equations of magnetohydrodynamics driven by convection in a rotating spherical shell. Rotation plays an important role in modifying the resulting convective flows and the self-generated magnetic field. We present results of simulating rapidly rotating systems that are unstable to dynamo action. We use the pseudo-spectral code Rayleigh to generate a suite of direct numerical simulations. Each simulation uses the Boussinesq approximation and is characterized by an Ekman number (Ek = ν/(ΩL²)) of 10⁻⁵. We vary the degree of convective forcing to obtain a range of convective Rossby numbers. The resulting flows and magnetic structures are analyzed using a Reynolds decomposition. We determine the relative importance of each term in the scale-separated governing equations and estimate the relevant spatial scales responsible for generating the mean magnetic field.
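
    The Reynolds decomposition referred to above splits each field into an azimuthally averaged mean and a fluctuation about it. A minimal sketch of that step is given below; the array layout (radius, colatitude, longitude) is an illustrative assumption, not the layout used by Rayleigh.

      import numpy as np

      def reynolds_decompose(field):
          """Split field(r, theta, phi) into a zonal mean and the fluctuation about it."""
          mean = field.mean(axis=2, keepdims=True)     # <f>(r, theta), averaged over phi
          fluctuation = field - mean                   # f' = f - <f>
          return mean, fluctuation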

  8. Convective overshoot at the solar tachocline

    NASA Astrophysics Data System (ADS)

    Brown, Benjamin; Oishi, Jeffrey S.; Anders, Evan H.; Lecoanet, Daniel; Burns, Keaton; Vasil, Geoffrey M.

    2017-08-01

    At the base of the solar convection zone lies the solar tachocline. This internal interface is where motions from the unstable convection zone above overshoot and penetrate downward into the stiffly stable radiative zone below, driving gravity waves, mixing, and possibly pumping and storing magnetic fields. Here we study the dynamics of convective overshoot across very stiff interfaces with some properties similar to the internal boundary layer within the Sun. We use the Dedalus pseudospectral framework and study fully compressible dynamics at moderate to high Peclet number and low Mach number, probing a regime where turbulent transport is important, and where the compressible dynamics are similar to those of convective motions in the deep solar interior. We find that the depth of convective overshoot is well described by a simple buoyancy equilibration model, and we consider implications for dynamics at the solar tachocline and for the storage of magnetic fields there by overshooting convection.

  9. Well-conditioned fractional collocation methods using fractional Birkhoff interpolation basis

    NASA Astrophysics Data System (ADS)

    Jiao, Yujian; Wang, Li-Lian; Huang, Can

    2016-01-01

    The purpose of this paper is twofold. Firstly, we provide explicit and compact formulas for computing both Caputo and (modified) Riemann-Liouville (RL) fractional pseudospectral differentiation matrices (F-PSDMs) of any order at general Jacobi-Gauss-Lobatto (JGL) points. We show that in the Caputo case, it suffices to compute F-PSDM of order μ ∈ (0, 1) to compute that of any order k + μ with integer k ≥ 0, while in the modified RL case, it is only necessary to evaluate a fractional integral matrix of order μ ∈ (0, 1). Secondly, we introduce suitable fractional JGL Birkhoff interpolation problems leading to new interpolation polynomial basis functions with remarkable properties: (i) the matrix generated from the new basis yields the exact inverse of F-PSDM at "interior" JGL points; (ii) the matrix of the highest fractional derivative in a collocation scheme under the new basis is diagonal; and (iii) the resulting linear system is well-conditioned in the Caputo case, while in the modified RL case, the eigenvalues of the coefficient matrix are highly concentrated. In both cases, the linear systems of the collocation schemes using the new basis can be solved by an iterative solver within a few iterations. Notably, the inverse can be computed in a very stable manner, so this offers optimal preconditioners for usual fractional collocation methods for fractional differential equations (FDEs). It is also noteworthy that the choice of certain special JGL points with parameters related to the order of the equations can ease the implementation. We highlight that the use of Bateman's fractional integral formulas and of fast transforms between Jacobi polynomials with different parameters is essential for our algorithm development.

  10. Contribution of ionospheric monitoring to tsunami warning: results from a benchmark exercise

    NASA Astrophysics Data System (ADS)

    Rolland, L.; Makela, J. J.; Drob, D. P.; Occhipinti, G.; Lognonne, P. H.; Kherani, E. A.; Sladen, A.; Rakoto, V.; Grawe, M.; Meng, X.; Komjathy, A.; Liu, T. J. Y.; Astafyeva, E.; Coisson, P.; Budzien, S. A.

    2016-12-01

    Deep-ocean pressure sensors have proven very effective for quantifying tsunami waves in real time. Yet the cost of these sensors and of their maintenance strongly limits the extensive deployment of dense networks, so a complete observation of the tsunami wave-field has not been possible so far. In the last decade, imprints of moderate to large transpacific tsunami wave-fields have been registered in the ionosphere through the atmospheric internal gravity wave coupled with the tsunami during its propagation. Those ionospheric observations could provide an additional description of the phenomenon with high spatial coverage. Ionospheric observations have been supported by numerical modeling of the ocean-atmosphere-ionosphere coupling, developed by different groups. We present here the first results of a cross-validation exercise aimed at testing various forward simulation techniques. In particular, we compare different approaches for modeling tsunami-induced gravity waves, including a pseudo-spectral method, finite difference schemes, a fully coupled normal-modes modeling approach, a Fourier-Laplace compressible ray-tracing solution, and a self-consistent, three-dimensional physics-based wave perturbation (WP) model based on the augmented Global Thermosphere-Ionosphere Model (WP-GITM). These models and other existing models use either a realistic sea-surface motion input model or a simple analytic model. We discuss the advantages and drawbacks of the different methods and set up common inputs to the models so that meaningful comparisons of model outputs can be made to highlight physical conclusions and understanding. In particular, we examine how well the different models reproduce the ionospheric observations for two study cases: the 2012 Mw7.7 Haida Gwaii, Canada, and 2015 Mw8.3 Illapel, Chile, events. Ultimately, we explore the possibility of computing a transfer function in order to convert ionospheric perturbations directly into tsunami height estimates.

  11. Common aero vehicle autonomous reentry trajectory optimization satisfying waypoint and no-fly zone constraints

    NASA Astrophysics Data System (ADS)

    Jorris, Timothy R.

    2007-12-01

    To support the Air Force's Global Reach concept, a Common Aero Vehicle is being designed to support the Global Strike mission. "Waypoints" are specified for reconnaissance or multiple payload deployments and "no-fly zones" are specified for geopolitical restrictions or threat avoidance. Due to time-critical targets and multiple scenario analysis, an autonomous solution is preferred over a time-intensive, manually iterative one. Thus, a real-time or near real-time autonomous trajectory optimization technique is presented to minimize the flight time, satisfy terminal and intermediate constraints, and remain within the specified vehicle heating and control limitations. This research uses the Hypersonic Cruise Vehicle (HCV) as a simplified two-dimensional platform to compare multiple solution techniques. The solution techniques include a unique geometric approach developed herein, a derived analytical dynamic optimization technique, and a rapidly emerging collocation numerical approach. This up-and-coming numerical technique is a direct solution method involving discretization then dualization, with pseudospectral methods and nonlinear programming used to converge to the optimal solution. This numerical approach is applied to the Common Aero Vehicle (CAV) as the test platform for the full three-dimensional reentry trajectory optimization problem. The culmination of this research is the verification of the optimality of this proposed numerical technique, as shown for both the two-dimensional and three-dimensional models. Additionally, user implementation strategies are presented to improve accuracy and enhance solution convergence. Thus, the contributions of this research are the geometric approach, the user implementation strategies, and the determination and verification of a numerical solution technique for the optimal reentry trajectory problem that minimizes time to target while satisfying vehicle dynamics and control limitations, as well as heating, waypoint, and no-fly zone constraints.
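
    The "discretize then dualize" direct transcription mentioned above converts the continuous optimal control problem into a nonlinear program whose variables are the states and controls at the collocation nodes. The toy sketch below does this with trapezoidal collocation (not a pseudospectral rule) for a double integrator driven from rest at x = 0 to rest at x = 1 in fixed time with minimum control effort; all problem data are illustrative assumptions, and a real CAV model would add the vehicle dynamics, heating, waypoint, and no-fly-zone constraints.

      import numpy as np
      from scipy.optimize import minimize

      N, T = 30, 1.0                          # nodes and fixed final time
      h = T / (N - 1)

      def unpack(z):
          return z[:N], z[N:2*N], z[2*N:]     # position x, velocity v, control u

      def objective(z):
          _, _, u = unpack(z)
          return 0.5 * h * np.sum(0.5 * (u[:-1]**2 + u[1:]**2))   # trapezoidal integral of u^2/2

      def defects(z):
          x, v, u = unpack(z)
          dx = x[1:] - x[:-1] - 0.5*h*(v[1:] + v[:-1])            # x' = v
          dv = v[1:] - v[:-1] - 0.5*h*(u[1:] + u[:-1])            # v' = u
          bc = [x[0], v[0], x[-1] - 1.0, v[-1]]                   # boundary conditions
          return np.concatenate([dx, dv, bc])

      res = minimize(objective, np.zeros(3*N),
                     constraints={"type": "eq", "fun": defects},
                     method="SLSQP", options={"maxiter": 200})
      x_opt, v_opt, u_opt = unpack(res.x)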

  12. A Highly Accurate Technique for the Treatment of Flow Equations at the Polar Axis in Cylindrical Coordinates using Series Expansions. Appendix A

    NASA Technical Reports Server (NTRS)

    Constantinescu, George S.; Lele, S. K.

    2001-01-01

    Numerical methods for solving the flow equations in cylindrical or spherical coordinates should be able to capture the behavior of the exact solution near the regions where the particular form of the governing equations is singular. In this work we focus on the treatment of these numerical singularities for finite-difference methods by reinterpreting the regularity conditions developed in the context of pseudo-spectral methods. A generally applicable numerical method for treating the singularities present at the polar axis, when nonaxisymmetric flows are solved in cylindrical coordinates using highly accurate finite-difference schemes (e.g., Padé schemes) on non-staggered grids, is presented. Governing equations for the flow at the polar axis are derived using series expansions near r=0. The only information needed to calculate the coefficients in these equations is the values of the flow variables and their radial derivatives at the previous iteration (or time) level. These derivatives, which are multi-valued at the polar axis, are calculated without dropping the accuracy of the numerical method using a mapping of the flow domain from (0,R)×(0,2π) to (-R,R)×(0,π), where R is the radius of the computational domain. This allows the radial derivatives to be evaluated using high-order differencing schemes (e.g., compact schemes) at points located on the polar axis. The proposed technique is illustrated by results from simulations of laminar forced jets and turbulent compressible jets using large eddy simulation (LES) methods. In terms of the general robustness of the numerical method and smoothness of the solution close to the polar axis, the present results compare very favorably to similar calculations in which the equations are solved in Cartesian coordinates at the polar axis, or in which the singularity is removed by employing a staggered mesh in the radial direction without a mesh point at r=0, following the method proposed recently by Mohseni and Colonius (1). Extension of the method described here to incompressible flows or to any other set of equations that are solved on a non-staggered mesh in cylindrical or spherical coordinates with finite-difference schemes of various levels of accuracy is immediate.
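
    The mapping from (0,R)×(0,2π) to (-R,R)×(0,π) amounts to extending each radial line through the axis using the data at the opposite azimuth, after which a high-order centered stencil can be applied across r = 0. A minimal sketch of that extension is given below; the parity flag, which flips the sign for vector components that change sign across the axis, and the array layout are illustrative assumptions.

      import numpy as np

      def extend_through_axis(u, j, parity=+1):
          """u has shape (n_r, n_phi) on r in (0, R] with uniform phi; return the radial
          line through the axis at azimuthal index j, ordered from -R to R."""
          n_phi = u.shape[1]
          j_opp = (j + n_phi // 2) % n_phi              # the opposite azimuth, phi + pi
          return np.concatenate([parity * u[::-1, j_opp], u[:, j]])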

  13. BOOK REVIEW: Introduction to 3+1 Numerical Relativity

    NASA Astrophysics Data System (ADS)

    Gundlach, Carsten

    2008-11-01

    This is the first major textbook on the methods of numerical relativity. The selection of material is based on what is known to work reliably in astrophysical applications and would therefore be considered by many as the 'mainstream' of the field. This means spacelike slices, the BSSNOK or harmonic formulation of the Einstein equations, finite differencing for the spacetime variables, and high-resolution shock capturing methods for perfect fluid matter. (Arguably, pseudo-spectral methods also belong in this category, at least for elliptic equations, but are not covered in this book.) The account is self-contained, and comprehensive within its chosen scope. It could serve as a primer for the growing number of review papers on aspects of numerical relativity published in Living Reviews in Relativity (LRR). I will now discuss the contents by chapter. Chapter 1, an introduction to general relativity, is clearly written, but may be a little too concise to be used as a first text on this subject at postgraduate level, compared to the textbook by Schutz or the first half of Wald's book. Chapter 2 contains a good introduction to the 3+1 split of the field equations in the form mainly given by York. York's pedagogical presentation (in a 1979 conference volume) is still up to date, but Alcubierre makes a clearer distinction between the geometric split and its form in adapted coordinates, as well as filling in some derivations. Chapter 3 on initial data is close to Cook's 2001 LRR, but is beautifully unified by an emphasis on how different choices of conformal weights suit different purposes. Chapter 4 on gauge conditions covers a topic on which no review paper exists, and which is spread thinly over many papers. The presentation is both detailed and unified, making this an excellent resource also for experts. The chapter reflects the author's research interests while remaining canonical. Chapter 5 covers hyperbolic reductions of the field equations. Alcubierre's excellent presentation is less technical than Reula's 1998 LRR or the 1995 book by Gustafsson, Kreiss and Oliger, but covers the key ideas in application to the Einstein equations. The reviewer (admittedly riding a hobby-horse) would argue that the hyperbolicity of the ADM and BSSNOK equations should have been investigated without introducing a specific first-order reduction. Chapter 6 covers gauge problems in numerical black hole spacetimes, black hole excision, and apparent horizons. Like chapter 4 it is both exhaustive and pedagogical. Perhaps more space than necessary is given here to work the author was involved in, while the section on slice stretching could have been more detailed, given that there is no good overview in the literature. Chapter 7 on relativistic hydrodynamics is, quite simply, excellent. Among many other useful things it contains some elementary material on equations of state that is not written up at this level elsewhere, a good mini-introduction to weak solutions of conservation laws, and a brief review of imperfect fluids in GR (Israel--Stewart theory). This chapter complements Font's 2008 LRR. Chapter 8 on gravitational wave extraction provides a welcome pedagogical introduction to a topic in which the original research papers are less than inviting and where notation is not uniform. The mathematical techniques described here are in constant use in numerical relativity codes, but are never fully described in research papers. 
Chapter 9 on numerical methods covers finite difference and high-resolution shock capturing methods. It is similar in presentation to LeVeque's 1992 book and Kreiss and Busenhart's 2001 book, but gives a good selection of that material, concisely presented. It certainly impresses the importance of convergence testing on the reader. Chapter 10 covers methods for spherically symmetric and axisymmetric spacetimes. The former is excellent, reflecting the author's recent research work. The axisymmetry section would have been better if it had been based on a formal Geroch reduction, the method that has been the key to recent progress. This book is bound to become a standard text for beginning graduate students. In an overview for this audience, I would have liked to see a little more detail on null slicings and on the conformal field equations, and brief introductions to the theory of elliptic equations and to pseudo-spectral and finite element methods. One may also regret the many typographical errors. Nevertheless, this excellent book fills a real gap, and will be hard to follow.

  14. Multiscale Analysis of Rapidly Rotating Dynamo Simulations

    NASA Astrophysics Data System (ADS)

    Orvedahl, Ryan; Calkins, Michael; Featherstone, Nicholas

    2017-11-01

    The magnetic fields of planets and stars are generated by dynamo action in their electrically conducting fluid interiors. Numerical models of this process solve the fundamental equations of magnetohydrodynamics driven by convection in a rotating spherical shell. Rotation plays an important role in modifying the resulting convective flows and the self-generated magnetic field. We present results of simulating rapidly rotating systems that are unstable to dynamo action. We use the pseudo-spectral code Rayleigh to generate a suite of direct numerical simulations. Each simulation uses the Boussinesq approximation and is characterized by an Ekman number (Ek = ν/(ΩL²)) of 10⁻⁵. We vary the degree of convective forcing to obtain a range of convective Rossby numbers. The resulting flows and magnetic structures are analyzed using a Reynolds decomposition. We determine the relative importance of each term in the scale-separated governing equations and estimate the relevant spatial scales responsible for generating the mean magnetic field.

  15. Mixing and chemical reaction in sheared and nonsheared homogeneous turbulence

    NASA Technical Reports Server (NTRS)

    Leonard, Andy D.; Hill, James C.

    1992-01-01

    Direct numerical simulations were made to examine the local structure of the reaction zone for a moderately fast reaction between unmixed species in decaying, homogeneous turbulence and in a homogeneous turbulent shear flow. Pseudospectral techniques were used in domains of 64³ and higher wavenumbers. A finite-rate, single-step reaction between non-premixed reactants was considered, and in one case temperature-dependent Arrhenius kinetics was assumed. Locally intense reaction rates that tend to persist throughout the simulations occur in locations where the reactant concentration gradients are large and are amplified by the local rate of strain. The reaction zones are more organized in the case of a uniform mean shear than in isotropic turbulence, and regions of intense reaction rate appear to be associated with vortex structures such as horseshoe vortices and fingers seen in mixing layers. Concentration gradients tend to align with the direction of the most compressive principal strain rate, more so in the isotropic case.

  16. The role of zonal flows in disc gravito-turbulence

    NASA Astrophysics Data System (ADS)

    Vanon, R.

    2018-07-01

    The work presented here focuses on the role of zonal flows in the self-sustenance of gravito-turbulence in accretion discs. The numerical analysis is conducted using a bespoke pseudo-spectral code in fully compressible, non-linear conditions. The disc in question, which is modelled using the shearing sheet approximation, is assumed to be self-gravitating, viscous, and thermally diffusive; a constant cooling time-scale is also considered. Zonal flows are found to emerge at the onset of gravito-turbulence and they remain closely linked to the turbulent state. A cycle of zonal flow formation and destruction is established, mediated by a slow mode instability (which allows zonal flows to grow) and a non-axisymmetric instability (which disrupts the zonal flow), which is found to repeat numerous times. It is in fact the disruptive action of the non-axisymmetric instability, which forms new leading and trailing shearing waves, that allows energy to be extracted from the background flow and ensures the self-sustenance of the gravito-turbulent regime.

  17. The role of zonal flows in disc gravito-turbulence

    NASA Astrophysics Data System (ADS)

    Vanon, R.

    2018-04-01

    The work presented here focuses on the role of zonal flows in the self-sustenance of gravito-turbulence in accretion discs. The numerical analysis is conducted using a bespoke pseudo-spectral code in fully compressible, non-linear conditions. The disc in question, which is modelled using the shearing sheet approximation, is assumed to be self-gravitating, viscous, and thermally diffusive; a constant cooling timescale is also considered. Zonal flows are found to emerge at the onset of gravito-turbulence and they remain closely linked to the turbulent state. A cycle of zonal flow formation and destruction is established, mediated by a slow mode instability (which allows zonal flows to grow) and a non-axisymmetric instability (which disrupts the zonal flow), which is found to repeat numerous times. It is in fact the disruptive action of the non-axisymmetric instability, which forms new leading and trailing shearing waves, that allows energy to be extracted from the background flow and ensures the self-sustenance of the gravito-turbulent regime.

  18. 3D-MHD Simulations of the Madison Dynamo Experiment

    NASA Astrophysics Data System (ADS)

    Bayliss, R. A.; Forest, C. B.; Wright, J. C.; O'Connell, R.

    2003-10-01

    Growth, saturation and turbulent evolution of the Madison dynamo experiment are investigated numerically using a 3-D pseudo-spectral simulation of the MHD equations; results of the simulations are used to predict the behavior of the experiment. The code solves the self-consistent full evolution of the magnetic and velocity fields. The code uses a spectral representation of the vector fields in longitude and latitude via spherical harmonic basis functions, and fourth-order finite differences in the radial direction. The magnetic field evolution has been benchmarked against the laminar kinematic dynamo predicted by M.L. Dudley and R.W. James [Proc. R. Soc. Lond. A 425, 407-429 (1989)]. Initial results indicate that the magnetic field saturates when the backreaction of the induced field modifies the velocity field so that it is no longer linearly unstable, suggesting that non-linear terms are necessary to explain the resulting state. Saturation and self-excitation depend in detail upon the magnetic Prandtl number.

  19. Numerical modelling of the Madison Dynamo Experiment.

    NASA Astrophysics Data System (ADS)

    Bayliss, R. A.; Wright, J. C.; Forest, C. B.; O'Connell, R.; Truitt, J. L.

    2000-10-01

    Growth, saturation and turbulent evolution of the Madison dynamo experiment are investigated numerically using a newly developed 3-D pseudo-spectral simulation of the MHD equations; results of the simulations will be compared to those obtained from the experiment. The code, Dynamo, is written in Fortran90 and allows for full evolution of the magnetic and velocity fields. The induction equation governing B and the Navier-Stokes equation governing V are solved. The code uses a spectral representation of the vector fields in longitude and latitude via spherical harmonic basis functions, and finite differences in the radial direction. The magnetic field evolution has been benchmarked against the laminar kinematic dynamo predicted by M.L. Dudley and R.W. James (M.L. Dudley and R.W. James, Time-dependent kinematic dynamos with stationary flows, Proc. R. Soc. Lond. A 425, p. 407 (1989)). Initial results on magnetic field saturation, generated by the simultaneous evolution of the magnetic and velocity fields, will be presented using a variety of mechanical forcing terms.

  20. Simulation and optimal control of wind-farm boundary layers

    NASA Astrophysics Data System (ADS)

    Meyers, Johan; Goit, Jay

    2014-05-01

    In large wind farms, the effect of turbine wakes and their interaction leads to a reduction in farm efficiency, with power generated by turbines in a farm being lower than that of a lone-standing turbine by up to 50%. In very large wind farms or `deep arrays', this efficiency loss is related to interaction of the wind farms with the planetary boundary layer, leading to lower wind speeds at turbine level. Moreover, for these cases it has been demonstrated both in simulations and wind-tunnel experiments that the wind-farm energy extraction is dominated by the vertical turbulent transport of kinetic energy from higher regions in the boundary layer towards the turbine level. In the current study, we investigate the use of optimal control techniques combined with Large-Eddy Simulations (LES) of wind-farm boundary layer interaction to increase the total energy extraction in very large `infinite' wind farms. We consider the individual wind turbines as flow actuators, whose energy extraction can be dynamically regulated in time so as to optimally influence the turbulent flow field, maximizing the wind farm power. For the simulation of wind-farm boundary layers we use large-eddy simulations in combination with actuator-disk and actuator-line representations of wind turbines. Simulations are performed in our in-house pseudo-spectral code SP-Wind that combines Fourier-spectral discretization in horizontal directions with a fourth-order finite-volume approach in the vertical direction. For the optimal control study, we consider the dynamic control of turbine-thrust coefficients in an actuator-disk model. They represent the effect of turbine blades that can actively pitch in time, changing the lift and drag coefficients of the turbine blades. Optimal model-predictive control (or optimal receding horizon control) is used, where the model simply consists of the full LES equations, and the time horizon is approximately 280 seconds. The optimization is performed using a nonlinear conjugate gradient method, and the gradients are calculated by solving the adjoint LES equations. We find that the extracted farm power increases by approximately 20% when using optimal model-predictive control. However, the increased power output is also responsible for an increase in turbulent dissipation, and a deceleration of the boundary layer. Further investigating the energy balances in the boundary layer, it is observed that this deceleration is mainly occurring in the outer layer as a result of higher turbulent energy fluxes towards the turbines. In a second optimization case, we penalize boundary-layer deceleration, and find an increase of energy extraction of approximately 10%. In this case, increased energy extraction is balanced by a reduction of turbulent dissipation in the boundary layer. J.M. acknowledges support from the European Research Council (FP7-Ideas, grant no. 306471). Simulations were performed on the computing infrastructure of the VSC Flemish Supercomputer Center, funded by the Hercules Foundation and the Flemish Government.

  1. Numerical Simulation of Electromagnetic Field Variation in the Lithosphere-Atmosphere-Ionosphere Associated with Seismogenic Process in a Curvature Coordinate System

    NASA Astrophysics Data System (ADS)

    Liu, L.; Zhao, Z.; Wang, Y.; Huang, Q.

    2013-12-01

    The lithosphere-atmosphere-ionosphere (LAI) system forms an electromagnetic (EM) cavity that hosts the EM field excited by electric currents generated by lightning and other natural sources. There have also been numerous reports of variations of the EM field in the LAI system prior to some significant earthquakes. We simulated the EM field in the lithosphere-ionosphere waveguide with a whole-Earth model in a curvilinear coordinate system, using a hybrid pseudo-spectral and finite-difference time-domain method. Treating seismogenesis as a fully coupled seismoelectric process, we simulate the seismic wave and the EM wave in this 2D model. In the model we have observed the excitation of the Schumann Resonance (SR) as the background EM field generated by randomly placed electric-current impulses within the lowest 10 kilometers of the atmosphere. The diurnal variation and the latitude dependence of the ion concentration in the ionosphere are included in the model. After the SR reaches a steady state, an electric impulse is introduced in the shallow lithosphere to mimic the seismogenic process (pre-, co- and post-seismic) and to assess the possible precursory effects on SR strength and frequency. The modeling results can explain why the SR responds much more sensitively to continental earthquakes and much less to oceanic events. The fundamental reason is the shielding effect of the conductive ocean, which prevents effective radiation of the seismoelectric signals from oceanic earthquake events into the LAI waveguide.

  2. SGS Closure Methodology for Surface-layer Rough-wall Turbulence.

    NASA Astrophysics Data System (ADS)

    Brasseur, James G.; Juneja, Anurag

    1998-11-01

    As reported in another abstract, necessary under-resolution and anisotropy of integral scales near the surface in LES of rough-wall boundary layers cause errors in the statistical structure of the modeled subgrid-scale (SGS) acceleration using eddy viscosity and similarity closures. The essential difficulty is an overly strong coupling between the modeled SGS stress tensor and predicted resolved velocity u^r. Specific to this problem, we propose a class of SGS closures in which subgrid scale velocities u^s1 between an explicit filter scale Δ and the grid scale δ are estimated from the solution to a separate prognostic equation, and the SGS stress tensor is formed using u^s1 as a surrogate for subgrid velocity u^s. The method is currently under development for pseudo-spectral LES where a filter at scales δ < Δ is explicit. The exact evolution equation for u^s1 contains dynamical interactions between u^r and u^s1 which can be calculated directly, and a term which is modeled to capture energy flux from the s1 scales without altering u^s1 structure. Three levels of closure for SGS stress are possible at different levels of accuracy and computational expense. The cheapest model has been tested with DNS and LES of anisotropic buoyancy-driven turbulence. Preliminary results show major improvement in the structure of the predicted SGS acceleration with much of the spurious coupling between u^r and SGS stress removed. Performance, predictions and cost of the three levels of closure are under analysis.

  3. Modeling of the coupled magnetospheric and neutral wind dynamos

    NASA Technical Reports Server (NTRS)

    Thayer, Jeffrey P.

    1994-01-01

    This report summarizes the progress made in the first year of NASA Grant No. NAGW-3508 entitled 'Modeling of the Coupled Magnetospheric and Neutral Wind Dynamos.' The approach taken has been to impose magnetospheric boundary conditions with either pure voltage or current characteristics and solve the neutral wind dynamo equation under these conditions. The imposed boundary conditions determine whether the neutral wind dynamo will contribute to the high-latitude current system or the electric potential. The semi-annual technical report, dated December 15, 1993, provides further detail describing the scientific and numerical approach of the project. The numerical development has progressed and the dynamo solution for the case when the magnetosphere acts as a voltage source has been evaluated completely using spectral techniques. The simulation provides the field-aligned current distribution at high latitudes due to the neutral wind dynamo. A number of geophysical conditions can be simulated to evaluate the importance of the neutral wind dynamo contribution to the field-aligned current system. On average, field-aligned currents generated by the neutral wind dynamo contributed as much as 30 percent to the large-scale field-aligned current system driven by the magnetosphere. A term analysis of the high-latitude neutral wind dynamo equation describing the field aligned current distribution has also been developed to illustrate the important contributing factors involved in the process. The case describing the neutral dynamo response for a magnetosphere acting as a pure current generator requires the existing spectral code to be extended to a pseudo-spectral method and is currently under development.

  4. Multi-Objective Trajectory Optimization of a Hypersonic Reconnaissance Vehicle with Temperature Constraints

    NASA Astrophysics Data System (ADS)

    Masternak, Tadeusz J.

    This research determines temperature-constrained optimal trajectories for a scramjet-based hypersonic reconnaissance vehicle by developing an optimal control formulation and solving it using a variable order Gauss-Radau quadrature collocation method with a Non-Linear Programming (NLP) solver. The vehicle is assumed to be an air-breathing reconnaissance aircraft that has specified takeoff/landing locations, airborne refueling constraints, specified no-fly zones, and specified targets for sensor data collections. A three degree of freedom scramjet aircraft model is adapted from previous work and includes flight dynamics, aerodynamics, and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and coverage of high-value targets. To solve the optimal control formulation, a MATLAB-based package called General Pseudospectral Optimal Control Software (GPOPS-II) is used, which transcribes continuous time optimal control problems into an NLP problem. In addition, since a mission profile can have varying vehicle dynamics and en-route imposed constraints, the optimal control problem formulation can be broken up into several "phases" with differing dynamics and/or varying initial/final constraints. Optimal trajectories are developed using several different performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum range. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for larger-scale operational and campaign planning and execution.

  5. Rapid indirect trajectory optimization on highly parallel computing architectures

    NASA Astrophysics Data System (ADS)

    Antony, Thomas

    Trajectory optimization is a field which can benefit greatly from the advantages offered by parallel computing. The current state-of-the-art in trajectory optimization focuses on the use of direct optimization methods, such as the pseudo-spectral method. These methods are favored due to their ease of implementation and large convergence regions, while indirect methods have largely been ignored in the literature in the past decade except for specific applications in astrodynamics. It has been shown that the shortcomings conventionally associated with indirect methods can be overcome by the use of a continuation method in which complex trajectory solutions are obtained by solving a sequence of progressively more difficult optimization problems. High performance computing hardware is trending towards more parallel architectures as opposed to powerful single-core processors. Graphics Processing Units (GPUs), which were originally developed for 3D graphics rendering, have gained popularity in the past decade as high-performance, programmable parallel processors. The Compute Unified Device Architecture (CUDA) framework, a parallel computing architecture and programming model developed by NVIDIA, is one of the most widely used platforms in GPU computing. GPUs have been applied to a wide range of fields that require the solution of complex, computationally demanding problems. A GPU-accelerated indirect trajectory optimization methodology which uses the multiple shooting method and continuation is developed using the CUDA platform. The various algorithmic optimizations used to exploit the parallelism inherent in the indirect shooting method are described. The resulting rapid optimal control framework enables the construction of high-quality optimal trajectories that satisfy problem-specific constraints and fully satisfy the necessary conditions of optimality. The benefits of the framework are highlighted by construction of maximum terminal velocity trajectories for a hypothetical long-range weapon system. The techniques used to construct an initial guess from an analytic near-ballistic trajectory and the methods used to formulate the necessary conditions of optimality in a manner that is transparent to the designer are discussed. Various hypothetical mission scenarios that enforce different combinations of initial, terminal, interior point and path constraints demonstrate the rapid construction of complex trajectories without requiring any a priori insight into the structure of the solutions. Trajectory problems of this kind were previously considered impractical to solve using indirect methods. The GPU-accelerated solver is found to be 2x to 4x faster than MATLAB's bvp4c, even while running on GPU hardware that is five years behind the state-of-the-art.
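
    The continuation strategy described above can be illustrated in miniature: solve a nonlinear two-point boundary value problem for a sequence of parameter values, warm-starting each solve from the previous solution. In the sketch below the Bratu problem stands in for the boundary value problem arising from the necessary conditions of optimality; it is a toy illustration, not the GPU multiple-shooting solver of the dissertation.

      import numpy as np
      from scipy.integrate import solve_bvp

      x = np.linspace(0.0, 1.0, 41)
      y = np.zeros((2, x.size))                     # initial guess: y = y' = 0

      for lam in np.linspace(0.1, 3.0, 10):         # continuation in the parameter lam
          def rhs(x, y, lam=lam):                   # y'' + lam * exp(y) = 0 as a first-order system
              return np.vstack([y[1], -lam*np.exp(y[0])])
          def bc(ya, yb):                           # y(0) = y(1) = 0
              return np.array([ya[0], yb[0]])
          sol = solve_bvp(rhs, bc, x, y)
          x, y = sol.x, sol.y                       # warm start for the next parameter value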

  6. Effect of soil conditions on predicted ground motion: Case study from Western Anatolia, Turkey

    NASA Astrophysics Data System (ADS)

    Gok, Elcin; Chávez-García, Francisco J.; Polat, Orhan

    2014-04-01

    We present a site effect study for the city of Izmir, Western Anatolia, Turkey. Local amplification was evaluated using state-of-practice tools. Ten earthquakes recorded at 16 sites were analysed using spectral ratios relative to a reference site, horizontal-to-vertical spectral ratios, and an inversion scheme of the Fourier amplitude spectra of the recorded S-waves. Seismic noise records were also used to estimate site effects. The different estimates are in good agreement with one another, although a basic uncertainty of a factor of 2 seems difficult to reduce. We used our site effect estimates to predict ground motion in Izmir for a possible M6.5 earthquake close to the city using stochastic modelling. Site effects have a large impact on PSV (pseudospectral velocity), where local amplification increases amplitudes by almost a factor of 9 at 1 Hz relative to the firm ground condition. Our results allow us to identify the neighbourhoods of Izmir where hazard mitigation measures are a priority, and they will also be useful for planning urban development.

  7. Chemically reacting fluid flow in exoplanet and brown dwarf atmospheres

    NASA Astrophysics Data System (ADS)

    Bordwell, Baylee; Brown, Benjamin P.; Oishi, Jeffrey S.

    2016-11-01

    In the past few decades, spectral observations of planets and brown dwarfs have demonstrated significant deviations from predictions in certain chemical abundances. Starting with Jupiter, these deviations were successfully explained as the effect of fast dynamics on comparatively slow chemical reactions. These dynamical effects are treated using mixing length theory in what is known as the "quench" approximation. In these objects, however, both radiative and convective zones are present, and it is not clear that this approximation applies. To resolve this issue, we solve the fully compressible equations of fluid dynamics in a matched polytropic atmosphere using the state-of-the-art pseudospectral simulation framework Dedalus. Through the inclusion of passive tracers, we explore the transport properties of convective and radiative zones, and verify the classical eddy diffusion parameterization. With the addition of active tracers, we examine the interactions between dynamical and chemical processes using abstract chemical reactions. By locating the quench point (the point at which the dynamical and chemical timescales are the same) in different dynamical regimes, we test the quench approximation, and generate prescriptions for the exoplanet and brown dwarf communities.

  8. NGA-West2 equations for predicting vertical-component PGA, PGV, and 5%-damped PSA from shallow crustal earthquakes

    USGS Publications Warehouse

    Stewart, Jonathan P.; Boore, David M.; Seyhan, Emel; Atkinson, Gail M.

    2016-01-01

    We present ground motion prediction equations (GMPEs) for computing natural log means and standard deviations of vertical-component intensity measures (IMs) for shallow crustal earthquakes in active tectonic regions. The equations were derived from a global database with M 3.0–7.9 events. The functions are similar to those for our horizontal GMPEs. We derive equations for the primary M- and distance-dependence of peak acceleration, peak velocity, and 5%-damped pseudo-spectral accelerations at oscillator periods between 0.01–10 s. We observe pronounced M-dependent geometric spreading and region-dependent anelastic attenuation for high-frequency IMs. We do not observe significant region-dependence in site amplification. Aleatory uncertainty is found to decrease with increasing magnitude; within-event variability is independent of distance. Compared to our horizontal-component GMPEs, attenuation rates are broadly comparable (somewhat slower geometric spreading, faster apparent anelastic attenuation), VS30-scaling is reduced, nonlinear site response is much weaker, within-event variability is comparable, and between-event variability is greater.

  9. Influence of chain rigidity on the conformation of model lipid membranes in the presence of cylindrical nanoparticle inclusions

    NASA Astrophysics Data System (ADS)

    Diloreto, Chris; Wickham, Robert

    2012-02-01

    We employ real-space self-consistent field theory to study the conformation of model lipid membranes in the presence of solvent and cylindrical nanoparticle inclusions ("peptides"). Whereas it is common to employ a polymeric Gaussian chain model for the lipids, here we model the lipids as persistent, worm-like chains. Our motivation is to develop a more realistic field theory to describe the action of pore-forming anti-microbial peptides that disrupt the bacterial cell membrane. We employ operator-splitting and a pseudo-spectral algorithm, using SpharmonicKit for the chain tangent degrees of freedom, to solve for the worm-like chain propagator. The peptides, modelled using a mask function, have a surface patterned with hydrophobic and hydrophilic patches, but no charge. We examine the role chain rigidity plays in the hydrophobic mismatch, the membrane-mediated interaction between two peptides, the size and structure of pores formed by peptide aggregates, and the free-energy barrier for peptide insertion into the membrane. Our results suggest that chain rigidity influences both the pore structure and the mechanism of pore formation.
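
    The operator-splitting, pseudo-spectral propagator step mentioned above is most easily sketched in the Gaussian-chain limit, where the propagator obeys dq/ds = lap(q) - w*q; the worm-like chain case used in the study adds the tangent (orientation) degrees of freedom, handled there with spherical harmonics. The grid and the Strang splitting below are illustrative assumptions.

      import numpy as np

      def propagate(q, w, ds, L):
          """One contour step of q(r, s) by Strang splitting: field, diffusion, field."""
          n = q.shape[0]
          k = 2*np.pi*np.fft.fftfreq(n, d=L/n)
          KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
          k2 = KX**2 + KY**2 + KZ**2
          q = np.exp(-0.5*ds*w) * q                                      # half step of the field term
          q = np.real(np.fft.ifftn(np.exp(-ds*k2) * np.fft.fftn(q)))     # exact diffusion step in Fourier space
          return np.exp(-0.5*ds*w) * q                                   # remaining half step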

  10. Numerical modeling of the Madison Dynamo Experiment.

    NASA Astrophysics Data System (ADS)

    Bayliss, R. A.; Wright, J. C.; Forest, C. B.; O'Connell, R.

    2002-11-01

    Growth, saturation and turbulent evolution of the Madison dynamo experiment are investigated numerically using a 3-D pseudo-spectral simulation of the MHD equations; results of the simulations will be compared to results obtained from the experiment. The code, Dynamo (Fortran90), allows for full evolution of the magnetic and velocity fields. The induction equation governing B and the curl of the momentum equation governing V are separately or simultaneously solved. The code uses a spectral representation of the vector fields in longitude and latitude via spherical harmonic basis functions, and fourth-order finite differences in the radial direction. The magnetic field evolution has been benchmarked against the laminar kinematic dynamo predicted by M.L. Dudley and R.W. James (M.L. Dudley and R.W. James, Time-dependent kinematic dynamos with stationary flows, Proc. R. Soc. Lond. A 425, p. 407 (1989)). Power balance in the system has been verified in both mechanically driven and perturbed hydrodynamic, kinematic, and dynamic cases. Evolution of the vacuum magnetic field has been added to facilitate comparison with the experiment. Modeling of the Madison Dynamo eXperiment will be presented.

  11. Reconstructing the Cenozoic evolution of the mantle: Implications for mantle plume dynamics under the Pacific and Indian plates

    NASA Astrophysics Data System (ADS)

    Glišović, Petar; Forte, Alessandro M.

    2014-03-01

    The lack of knowledge of the initial thermal state of the mantle in the geological past is an outstanding problem in mantle convection. The resolution of this problem also requires the modelling of 3-D mantle evolution that yields maximum consistency with a wide suite of geophysical constraints. Quantifying the robustness of the reconstructed thermal evolution is another major concern. To solve and estimate the robustness of the time-reversed (inverse) problem of mantle convection, we analyse two different numerical techniques: the quasi-reversible (QRV) and the backward advection (BAD) methods. Our investigation extends over the 65 Myr interval encompassing the Cenozoic era using a pseudo-spectral solution for compressible-flow thermal convection in 3-D spherical geometry. We find that the two dominant issues for solving the inverse problem of mantle convection are the choice of horizontally-averaged temperature (i.e., geotherm) and mechanical surface boundary conditions. We find, in particular, that the inclusion of thermal boundary layers that yield Earth-like heat flux at the top and bottom of the mantle has a critical impact on the reconstruction of mantle evolution. We have developed a new regularisation scheme for the QRV method using a time-dependent regularisation function. This revised implementation of the QRV method delivers time-dependent reconstructions of mantle heterogeneity that reveal: (1) the stability of Pacific and African ‘large low shear velocity provinces’ (LLSVP) over the last 65 Myr; (2) strong upward deflections of the CMB topography at 65 Ma beneath: the North Atlantic, the south-central Pacific, the East Pacific Rise (EPR) and the eastern Antarctica; (3) an anchored deep-mantle plume ascending directly under the EPR (Easter and Pitcairn hotspots) throughout the Cenozoic era; and (4) the appearance of the transient Reunion plume head beneath the western edge of the Deccan Plateau at 65 Ma. Our reconstructions of Cenozoic mantle evolution thus suggest that mantle plumes play a key role in driving surface tectonic processes and large-scale volcanism.

  12. Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries

    NASA Astrophysics Data System (ADS)

    Perez, Hector Eduardo

    This dissertation focuses on developing and experimentally validating model based control techniques to enhance the operation of lithium ion batteries, safely. An overview of the contributions to address the challenges that arise are provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial differential equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. Theoretically, this chapter extends the notion of interval observers to PDE models using a sensitivity-based approach. Practically, this chapter quantifies the sensitivity of battery state estimates to parameter variations, enabling robust battery management schemes. The effectiveness of the proposed sensitivity-based interval observers is verified via a numerical study for the range of uncertain parameters. Chapter 4: This chapter seeks to derive insight on battery charging control using electrochemistry models. Directly using full order complex multi-partial differential equation (PDE) electrochemical battery models is difficult and sometimes impossible to implement. This chapter develops an approach for obtaining optimal charge control schemes, while ensuring safety through constraint satisfaction. An optimal charge control problem is mathematically formulated via a coupled reduced order electrochemical-thermal model which conserves key electrochemical and thermal state information. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting nonlinear multi-state optimal control problem. Minimum time charge protocols are analyzed in detail subject to solid and electrolyte phase concentration constraints, as well as temperature constraints. The optimization scheme is examined using different input current bounds, and an insight on battery design for fast charging is provided. 
Experimental results are provided to compare the tradeoffs between an electrochemical-thermal model based optimal charge protocol and a traditional charge protocol. Chapter 5: Fast and safe charging protocols are crucial for enhancing the practicality of batteries, especially for mobile applications such as smartphones and electric vehicles. This chapter proposes an innovative approach to devising optimally health-conscious fast-safe charge protocols. A multi-objective optimal control problem is mathematically formulated via a coupled electro-thermal-aging battery model, where electrical and aging sub-models depend upon the core temperature captured by a two-state thermal sub-model. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting highly nonlinear six-state optimal control problem. Charge time and health degradation are therefore optimally traded off, subject to both electrical and thermal constraints. Minimum-time, minimum-aging, and balanced charge scenarios are examined in detail. Sensitivities to the upper voltage bound, ambient temperature, and cooling convection resistance are investigated as well. Experimental results are provided to compare the tradeoffs between a balanced and traditional charge protocol. Chapter 6: This chapter provides concluding remarks on the findings of this dissertation and a discussion of future work.
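
    The LGR collocation machinery used in Chapters 4 and 5 can be sketched compactly. The Python/NumPy snippet below is a minimal illustration, assuming a single mesh interval, a toy one-state SOC dynamics dz/dt = I/Q (not the dissertation's electrochemical-thermal model), and an arbitrary node count; it builds the Legendre-Gauss-Radau nodes, the pseudospectral differentiation matrix, and the collocation defect residuals that a nonlinear programming solver would drive to zero. In an adaptive multi-interval scheme, the same blocks would be assembled per interval and linked by continuity constraints.

      import numpy as np
      from numpy.polynomial import legendre

      def lgr_nodes(n):
          # n Legendre-Gauss-Radau nodes on [-1, 1): roots of P_{n-1} + P_n (the point -1 is included).
          c = np.zeros(n + 1)
          c[n - 1] = 1.0
          c[n] = 1.0
          return np.sort(legendre.legroots(c))

      def diff_matrix(points):
          # Barycentric differentiation matrix of the Lagrange interpolant through `points`.
          x = np.asarray(points, float)
          m = x.size
          w = np.array([1.0 / np.prod(x[j] - np.delete(x, j)) for j in range(m)])
          D = np.zeros((m, m))
          for i in range(m):
              for j in range(m):
                  if i != j:
                      D[i, j] = (w[j] / w[i]) / (x[i] - x[j])
              D[i, i] = -D[i].sum()
          return D

      # One mesh interval with n collocation nodes; the state is also carried at the
      # non-collocated endpoint tau = +1, so X has n + 1 rows and U has n rows.
      n = 8
      tau = lgr_nodes(n)
      D = diff_matrix(np.append(tau, 1.0))[:n, :]

      def defects(X, U, t0, tf, f):
          # Collocation defects: D @ X - (tf - t0)/2 * f(X_i, U_i) = 0 at the n LGR nodes.
          dyn = np.array([f(X[i], U[i]) for i in range(n)])
          return D @ X - 0.5 * (tf - t0) * dyn

      # Toy dynamics (hypothetical, for illustration only): SOC rate dz/dt = I / Q_cell.
      Q_cell = 2.3 * 3600.0                          # assumed cell capacity in ampere-seconds
      soc_rate = lambda z, i_cur: np.array([i_cur / Q_cell])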

  13. Aerodynamic Ground Effect in Fruitfly Sized Insect Takeoff

    PubMed Central

    Kolomenskiy, Dmitry; Maeda, Masateru; Engels, Thomas; Liu, Hao; Schneider, Kai; Nave, Jean-Christophe

    2016-01-01

    Aerodynamic ground effect in flapping-wing insect flight is of importance to comparative morphologies and of interest to the micro-air-vehicle (MAV) community. Recent studies, however, show apparently contradictory results of either some significant extra lift or power savings, or zero ground effect. Here we present a numerical study of fruitfly sized insect takeoff with a specific focus on the significance of leg thrust and wing kinematics. Flapping-wing takeoff is studied using numerical modelling and high performance computing. The aerodynamic forces are calculated using a three-dimensional Navier–Stokes solver based on a pseudo-spectral method with volume penalization. It is coupled with a flight dynamics solver that accounts for the body weight, inertia and the leg thrust, while only having two degrees of freedom: the vertical and the longitudinal horizontal displacement. The natural voluntary takeoff of a fruitfly is considered as the reference. The parameters of the model are then varied to explore possible effects of interaction between the flapping-wing model and the ground plane. These modified takeoffs include cases with a decreased leg-thrust parameter and/or with periodic wing kinematics and a constant body pitch angle. The results show that the ground effect during natural voluntary takeoff is negligible. In the modified takeoffs, when the rate of climb is slow, the difference in the aerodynamic forces due to the interaction with the ground is up to 6%. Surprisingly, depending on the kinematics, the difference is either positive or negative, in contrast to the intuition based on helicopter theory, which suggests positive excess lift. This effect is attributed to unsteady wing-wake interactions. A similar effect is found during hovering. PMID:27019208
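
    The volume penalization idea used by this kind of flow solver can be shown in one dimension: spatial derivatives are evaluated spectrally, while the solid body is represented by a mask that relaxes the solution toward the body velocity on a fast time scale 1/eta. The sketch below (Python/NumPy) is a minimal illustration on a 1-D diffusion problem with placeholder parameters; it is not the paper's three-dimensional Navier–Stokes solver.

      import numpy as np

      # Fourier pseudospectral time stepping with volume penalization: the field is
      # driven toward u_solid inside the mask chi at rate 1/eta. Grid size, viscosity,
      # eta and the mask are illustrative choices, not the study's setup.
      n, L_dom, nu, eta, dt = 256, 2 * np.pi, 1e-2, 1e-3, 1e-4
      x = np.linspace(0.0, L_dom, n, endpoint=False)
      k = np.fft.fftfreq(n, d=L_dom / n) * 2 * np.pi       # wavenumbers
      chi = ((x > 2.0) & (x < 2.5)).astype(float)           # solid mask (the "body")
      u = np.sin(x)                                         # initial condition
      u_solid = 0.0                                         # velocity imposed inside the body

      for _ in range(2000):
          lap_u = np.fft.ifft(-(k ** 2) * np.fft.fft(u)).real   # spectral Laplacian
          rhs = nu * lap_u - chi / eta * (u - u_solid)           # diffusion + penalization term
          u = u + dt * rhs                                       # explicit Euler step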

  14. Thermal Rayleigh-Marangoni convection in a three-layer liquid-metal-battery model.

    PubMed

    Köllner, Thomas; Boeck, Thomas; Schumacher, Jörg

    2017-05-01

    The combined effects of buoyancy-driven Rayleigh-Bénard convection (RC) and surface tension-driven Marangoni convection (MC) are studied in a triple-layer configuration which serves as a simplified model for a liquid metal battery (LMB). The three-layer model consists of a liquid metal alloy cathode, a molten salt separation layer, and a liquid metal anode at the top. Convection is triggered by the temperature gradient between the hot electrolyte and the colder electrodes, which is a consequence of the release of resistive heat during operation. We present a linear stability analysis of the state of pure thermal conduction in combination with three-dimensional direct numerical simulations of the nonlinear turbulent evolution on the basis of a pseudospectral method. Five different modes of convection are identified in the configuration, which are partly coupled to each other: RC in the upper electrode, RC with internal heating in the molten salt layer, and MC at both interfaces between molten salt and electrode as well as anticonvection in the middle layer and lower electrode. The linear stability analysis confirms that the additional Marangoni effect in the present setup increases the growth rates of the linearly unstable modes, i.e., Marangoni and Rayleigh-Bénard instability act together in the molten salt layer. The critical Grashof and Marangoni numbers decrease with increasing middle layer thickness. The calculated thresholds for the onset of convection are found for realistic current densities of laboratory-sized LMBs. The global turbulent heat transfer follows scaling predictions for internally heated RC. The global turbulent momentum transfer is comparable with turbulent convection in the classical Rayleigh-Bénard case. In summary, our studies show that incorporating Marangoni effects generates smaller flow structures, alters the velocity magnitudes, and enhances the turbulent heat transfer across the triple-layer configuration.
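
    As rough orientation for when the two instability mechanisms named here become active, the governing nondimensional groups can be evaluated directly for a single layer. The sketch below (Python, with placeholder property values rather than the paper's three-layer model or material data) computes textbook Rayleigh and Marangoni numbers and compares them with the classical single-layer critical values; the coupled triple-layer thresholds reported in the study differ from these.

      import numpy as np

      def rayleigh_marangoni(dT, d, g, beta, nu, kappa, dsigma_dT, mu):
          # Textbook single-layer definitions:
          # Ra = g*beta*dT*d^3/(nu*kappa),  Ma = -(dsigma/dT)*dT*d/(mu*kappa).
          Ra = g * beta * dT * d ** 3 / (nu * kappa)
          Ma = -dsigma_dT * dT * d / (mu * kappa)
          return Ra, Ma

      # Placeholder molten-salt-like properties (illustrative only).
      Ra, Ma = rayleigh_marangoni(dT=5.0, d=0.005, g=9.81, beta=3e-4, nu=1e-6,
                                  kappa=2e-7, dsigma_dT=-1e-4, mu=1.8e-3)
      onset_expected = (Ra > 1708.0) or (Ma > 79.6)   # classical critical values (rigid-rigid RB; Pearson MC)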

  15. Thermal Rayleigh-Marangoni convection in a three-layer liquid-metal-battery model

    NASA Astrophysics Data System (ADS)

    Köllner, Thomas; Boeck, Thomas; Schumacher, Jörg

    2017-05-01

    The combined effects of buoyancy-driven Rayleigh-Bénard convection (RC) and surface tension-driven Marangoni convection (MC) are studied in a triple-layer configuration which serves as a simplified model for a liquid metal battery (LMB). The three-layer model consists of a liquid metal alloy cathode, a molten salt separation layer, and a liquid metal anode at the top. Convection is triggered by the temperature gradient between the hot electrolyte and the colder electrodes, which is a consequence of the release of resistive heat during operation. We present a linear stability analysis of the state of pure thermal conduction in combination with three-dimensional direct numerical simulations of the nonlinear turbulent evolution on the basis of a pseudospectral method. Five different modes of convection are identified in the configuration, which are partly coupled to each other: RC in the upper electrode, RC with internal heating in the molten salt layer, and MC at both interfaces between molten salt and electrode as well as anticonvection in the middle layer and lower electrode. The linear stability analysis confirms that the additional Marangoni effect in the present setup increases the growth rates of the linearly unstable modes, i.e., Marangoni and Rayleigh-Bénard instability act together in the molten salt layer. The critical Grashof and Marangoni numbers decrease with increasing middle layer thickness. The calculated thresholds for the onset of convection are found for realistic current densities of laboratory-sized LMBs. The global turbulent heat transfer follows scaling predictions for internally heated RC. The global turbulent momentum transfer is comparable with turbulent convection in the classical Rayleigh-Bénard case. In summary, our studies show that incorporating Marangoni effects generates smaller flow structures, alters the velocity magnitudes, and enhances the turbulent heat transfer across the triple-layer configuration.

  16. Acoustic Wave Propagation in Snow Based on a Biot-Type Porous Model

    NASA Astrophysics Data System (ADS)

    Sidler, R.

    2014-12-01

    Despite the fact that acoustic methods are inexpensive, robust and simple, the application of seismic waves to snow has been sparse. This might be due to the strong attenuation inherent to snow that prevents large scale seismic applications or due to the somewhat counterintuitive acoustic behavior of snow as a porous material. Such materials support a second kind of compressional wave that can be measured in fresh snow and which has a decreasing wave velocity with increasing density of snow. To investigate wave propagation in snow we construct a Biot-type porous model of snow as a function of porosity based on the assumptions that the solid frame is built of ice, the pore space is filled with a mix of air, or air and water, and empirical relationships for the tortuosity, the permeability, the bulk modulus, and the shear modulus. We use this reduced model to investigate compressional and shear wave velocities of snow as a function of porosity and to assess the consequences of liquid water in the snowpack on acoustic wave propagation by solving Biot's differential equations with plane-wave solutions. We find that the fast compressional wave velocity increases significantly with increasing density, but also that the fast compressional wave velocity might be even lower than the slow compressional wave velocity for very light snow. By using compressional and shear strength criteria and solving Biot's differential equations with a pseudo-spectral approach we evaluate snow failure due to acoustic waves in a heterogeneous snowpack, which we think is an important mechanism in triggering avalanches by explosives as well as by skiers. Finally, we developed a low-cost seismic acquisition device to assess the theoretically obtained wave velocities in the field and to explore the possibility of an inexpensive tool to remotely gather snow water equivalent.

  17. Low stress drops observed for aftershocks of the 2011 Mw 5.7 Prague, Oklahoma, earthquake

    NASA Astrophysics Data System (ADS)

    Sumy, Danielle F.; Neighbors, Corrie J.; Cochran, Elizabeth S.; Keranen, Katie M.

    2017-05-01

    In November 2011, three Mw ≥ 4.8 earthquakes and thousands of aftershocks occurred along the structurally complex Wilzetta fault system near Prague, Oklahoma. Previous studies suggest that wastewater injection induced a Mw 4.8 foreshock, which subsequently triggered a Mw 5.7 mainshock. We examine source properties of aftershocks with a standard Brune-type spectral model and jointly solve for seismic moment (M0), corner frequency (f0), and kappa (κ) with an iterative Gauss-Newton global downhill optimization method. We examine 934 earthquakes with initial moment magnitudes (Mw) between 0.33 and 4.99 based on the pseudospectral acceleration and recover reasonable M0, f0, and κ for 87 earthquakes with Mw 1.83-3.51 determined by spectral fit. We use M0 and f0 to estimate the Brune-type stress drop, assuming a circular fault and shear-wave velocity at the hypocentral depth of the event. Our observations suggest that stress drops range between 0.005 and 4.8 MPa with a median of 0.2 MPa (0.03-26.4 MPa with a median of 1.1 MPa for Madariaga-type), which is significantly lower than typical eastern United States intraplate events (>10 MPa). We find that stress drops correlate weakly with hypocentral depth and magnitude. Additionally, we find the stress drops increase with time after the mainshock, although temporal variation in stress drop is difficult to separate from spatial heterogeneity and changing event locations. The overall low median stress drop suggests that the fault segments may have been primed to fail as a result of high pore fluid pressures, likely related to nearby wastewater injection.
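
    The spectral-fitting and stress-drop steps described here follow a standard recipe that can be sketched briefly. The Python/SciPy snippet below fits a Brune omega-square spectrum with a kappa attenuation term using a generic least-squares routine (a stand-in for the iterative Gauss-Newton global downhill optimizer used in the study; converting the fitted plateau to M0 would additionally require propagation corrections) and then converts M0 and f0 into a Brune-type stress drop. The example numbers are illustrative, not values from the study.

      import numpy as np
      from scipy.optimize import curve_fit

      def brune_spectrum(f, omega0, f0, kappa):
          # Brune omega-square displacement spectrum with a site attenuation term exp(-pi*f*kappa).
          return omega0 * np.exp(-np.pi * f * kappa) / (1.0 + (f / f0) ** 2)

      def fit_brune(freqs, amps):
          # Fit (omega0, f0, kappa) to an observed displacement amplitude spectrum (log-domain fit).
          log_model = lambda f, lw0, f0, kappa: np.log(brune_spectrum(f, np.exp(lw0), f0, kappa))
          p0 = (np.log(amps[0]), 5.0, 0.03)
          popt, _ = curve_fit(log_model, freqs, np.log(amps), p0=p0, maxfev=10000)
          return np.exp(popt[0]), popt[1], popt[2]

      def brune_stress_drop(m0, f0, beta):
          # Brune-type stress drop (Pa): source radius r = 2.34*beta/(2*pi*f0), dsigma = 7*M0/(16*r^3).
          r = 2.34 * beta / (2.0 * np.pi * f0)
          return 7.0 * m0 / (16.0 * r ** 3)

      # Example: an Mw 2.5 event (M0 ~ 10**(1.5*2.5 + 9.1) N*m), f0 = 8 Hz, beta = 3500 m/s (assumed).
      m0 = 10 ** (1.5 * 2.5 + 9.1)
      print(brune_stress_drop(m0, 8.0, 3500.0) / 1e6, "MPa")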

  18. Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure

    NASA Astrophysics Data System (ADS)

    Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.

    2014-08-01

    Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on the multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques includes sophisticated object-oriented templates which parse the high-level code into distributed parallel tasks with a multi-thread task queue scheduler for each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed. Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver hfodd that is based on the harmonic-oscillator basis expansion. Several examples are considered, including the self-consistent HFB problem for spin-polarized trapped cold fermions and the Skyrme-Hartree-Fock (+BCS) problem for triaxial deformed nuclei. Conclusions: The new madness-hfb framework has many attractive features when applied to nuclear and atomic problems involving many-particle superfluid systems. Of particular interest are weakly bound nuclear configurations close to particle drip lines, strongly elongated and dinuclear configurations such as those present in fission and heavy-ion fusion, and exotic pasta phases that appear in neutron star crust.

  19. Spectral-Element Simulations of Wave Propagation in Porous Media: Finite-Frequency Sensitivity Kernels Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Morency, C.; Tromp, J.

    2008-12-01

    The mathematical formulation of wave propagation in porous media developed by Biot is based upon the principle of virtual work, ignoring processes at the microscopic level, and does not explicitly incorporate gradients in porosity. Based on recent studies focusing on averaging techniques, we derive the macroscopic porous medium equations from the microscale, with a particular emphasis on the effects of gradients in porosity. In doing so, we are able to naturally determine two key terms in the momentum equations and constitutive relationships, directly translating the coupling between the solid and fluid phases, namely a drag force and an interfacial strain tensor. In both terms, gradients in porosity arise. One remarkable result is that when we rewrite this set of equations in terms of the well-known Biot variables (us, w), terms involving gradients in porosity are naturally accommodated by gradients involving w, the fluid motion relative to the solid, and Biot's formulation is recovered, i.e., it remains valid in the presence of porosity gradients. We have developed a numerical implementation of the Biot equations for two-dimensional problems based upon the spectral-element method (SEM) in the time domain. The SEM is a high-order variational method, which has the advantage of accommodating complex geometries like a finite-element method, while keeping the exponential convergence rate of (pseudo)spectral methods. As in the elastic and acoustic cases, poroelastic wave propagation based upon the SEM involves a diagonal mass matrix, which leads to explicit time integration schemes that are well-suited to simulations on parallel computers. Effects associated with physical dispersion and attenuation and frequency-dependent viscous resistance are addressed by using a memory variable approach. Various benchmarks involving poroelastic wave propagation in the high- and low-frequency regimes, and acoustic-poroelastic and poroelastic-poroelastic discontinuities have been successfully performed. We present finite-frequency sensitivity kernels for wave propagation in porous media based upon adjoint methods. We first show that the adjoint equations in porous media are similar to the regular Biot equations upon defining an appropriate adjoint source. Then we present finite-frequency kernels for seismic phases in porous media (e.g., fast P, slow P, and S). These kernels illustrate the sensitivity of seismic observables to structural parameters and form the basis of tomographic inversions. Finally, we show an application of this imaging technique related to the detection of buried landmines and unexploded ordnance (UXO) in porous environments.

  20. Protein labeling reactions in electrochemical microchannel flow: Numerical simulation and uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Debusschere, Bert J.; Najm, Habib N.; Matta, Alain; Knio, Omar M.; Ghanem, Roger G.; Le Maître, Olivier P.

    2003-08-01

    This paper presents a model for two-dimensional electrochemical microchannel flow including the propagation of uncertainty from model parameters to the simulation results. For a detailed representation of electroosmotic and pressure-driven microchannel flow, the model considers the coupled momentum, species transport, and electrostatic field equations, including variable zeta potential. The chemistry model accounts for pH-dependent protein labeling reactions as well as detailed buffer electrochemistry in a mixed finite-rate/equilibrium formulation. Uncertainty from the model parameters and boundary conditions is propagated to the model predictions using a pseudo-spectral stochastic formulation with polynomial chaos (PC) representations for parameters and field quantities. Using a Galerkin approach, the governing equations are reformulated into equations for the coefficients in the PC expansion. The implementation of the physical model with the stochastic uncertainty propagation is applied to protein-labeling in a homogeneous buffer, as well as in two-dimensional electrochemical microchannel flow. The results for the two-dimensional channel show strong distortion of sample profiles due to ion movement and consequent buffer disturbances. The uncertainty in these results is dominated by the uncertainty in the applied voltage across the channel.
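
    The polynomial chaos machinery referred to here can be illustrated for a single Gaussian uncertain parameter. The sketch below (Python/NumPy) performs a non-intrusive pseudo-spectral projection onto probabilists' Hermite polynomials for a scalar toy model; the paper instead reformulates the governing equations intrusively with a Galerkin projection, so this is only the simplest instance of the same representation. The toy model g and its parameters are illustrative assumptions.

      import numpy as np
      from numpy.polynomial.hermite_e import hermegauss, hermeval
      from math import factorial

      def pc_coefficients(g, mu, sigma, order=4, nquad=12):
          # Project y = g(theta), theta = mu + sigma*xi with xi ~ N(0,1), onto probabilists'
          # Hermite polynomials: y_k = E[g(theta) He_k(xi)] / k!  (Gauss-HermiteE quadrature).
          xi, w = hermegauss(nquad)              # nodes/weights for weight exp(-x^2/2)
          w = w / np.sqrt(2 * np.pi)             # normalize to the standard normal density
          coeffs = []
          for k in range(order + 1):
              ek = np.zeros(k + 1)
              ek[k] = 1.0
              coeffs.append(np.sum(w * g(mu + sigma * xi) * hermeval(xi, ek)) / factorial(k))
          return np.array(coeffs)

      # Toy "model" (hypothetical): equilibrium concentration depending exponentially on an uncertain rate.
      g = lambda kr: np.exp(-2.0 * kr)
      c = pc_coefficients(g, mu=1.0, sigma=0.1)
      mean = c[0]
      var = np.sum(c[1:] ** 2 * np.array([factorial(k) for k in range(1, len(c))]))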

  1. Control of viscous fingering by nanoparticles

    NASA Astrophysics Data System (ADS)

    Sabet, Nasser; Hassanzadeh, Hassan; Abedi, Jalal

    2017-12-01

    A substantial viscosity increase by the addition of a low dose of nanoparticles to the base fluids can well influence the dynamics of viscous fingering. There is a lack of detailed theoretical studies that address the effect of the presence of nanoparticles on unstable miscible displacements. In this study, the impact of nonreactive nanoparticle presence on the stability and subsequent mixing of an originally unstable binary system is examined using linear stability analysis (LSA) and pseudospectral-based direct numerical simulations (DNS). We have parametrized the role of both nondepositing and depositing nanoparticles on the stability of miscible displacements using the developed static and dynamic parametric analyses. Our results show that nanoparticles have the potential to weaken the instabilities of an originally unstable system. Our LSA and DNS results also reveal that nondepositing nanoparticles can be used to fully stabilize an originally unstable front while depositing particles may act as temporary stabilizers whose influence diminishes in the course of time. In addition, we explain the existing inconsistencies concerning the effect of the nanoparticle diffusion coefficient on the dynamics of the system. This study provides a basis for further research on the application of nanoparticles for control of viscosity-driven instabilities.

  2. Prediction of spectral acceleration response ordinates based on PGA attenuation

    USGS Publications Warehouse

    Graizer, V.; Kalkan, E.

    2009-01-01

    Developed herein is a new peak ground acceleration (PGA)-based predictive model for 5% damped pseudospectral acceleration (SA) ordinates of the free-field horizontal component of ground motion from shallow-crustal earthquakes. The predictive model of ground motion spectral shape (i.e., the normalized spectrum) is generated as a continuous function of a few parameters. The proposed model eliminates the classical exhaustive matrix of estimator coefficients and is significantly easier to implement. It is structured on the Next Generation Attenuation (NGA) database with a number of additions from recent Californian events, including the 2003 San Simeon and 2004 Parkfield earthquakes. A unique feature of the model is its new functional form explicitly integrating PGA as a scaling factor. The spectral shape model is parameterized within an approximation function using moment magnitude, closest distance to the fault (fault distance), and VS30 (average shear-wave velocity in the upper 30 m) as independent variables. Mean values of its estimator coefficients were computed by fitting an approximation function to the spectral shape of each record using robust nonlinear optimization. The proposed spectral shape model is independent of the PGA attenuation, allowing utilization of various PGA attenuation relations to estimate the response spectrum of earthquake recordings.
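
    The per-record fitting step, in which an approximation function is fitted to each normalized spectrum by robust nonlinear optimization, can be sketched generically. The Python/SciPy snippet below fits an illustrative spectral-shape function with a soft-L1 robust loss; the shape function, starting values, and synthetic data are assumptions for illustration and are not the published functional form or its coefficients.

      import numpy as np
      from scipy.optimize import least_squares

      def shape(T, a, t0, s):
          # Illustrative normalized spectral-shape function SA(T)/PGA (not the published model):
          # unity at very short periods plus a lognormal-like bump centered at period t0.
          return 1.0 + a * np.exp(-0.5 * ((np.log(T) - np.log(t0)) / s) ** 2)

      def fit_shape(T, sa_over_pga):
          # Robust (soft-L1) nonlinear fit of the shape parameters to one record's normalized spectrum.
          resid = lambda p: shape(T, *p) - sa_over_pga
          return least_squares(resid, x0=[1.5, 0.3, 0.6], loss="soft_l1").x

      # Synthetic data standing in for a recorded normalized response spectrum.
      T = np.logspace(-2, 1, 50)
      rng = np.random.default_rng(0)
      sa_norm = shape(T, 2.0, 0.25, 0.5) * (1 + 0.05 * rng.standard_normal(T.size))
      params = fit_shape(T, sa_norm)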

  3. Frontiers in Numerical Relativity

    NASA Astrophysics Data System (ADS)

    Evans, Charles R.; Finn, Lee S.; Hobill, David W.

    2011-06-01

    Preface; Participants; Introduction; 1. Supercomputing and numerical relativity: a look at the past, present and future David W. Hobill and Larry L. Smarr; 2. Computational relativity in two and three dimensions Stuart L. Shapiro and Saul A. Teukolsky; 3. Slowly moving maximally charged black holes Robert C. Ferrell and Douglas M. Eardley; 4. Kepler's third law in general relativity Steven Detweiler; 5. Black hole spacetimes: testing numerical relativity David H. Bernstein, David W. Hobill and Larry L. Smarr; 6. Three dimensional initial data of numerical relativity Ken-ichi Oohara and Takashi Nakamura; 7. Initial data for collisions of black holes and other gravitational miscellany James W. York, Jr.; 8. Analytic-numerical matching for gravitational waveform extraction Andrew M. Abrahams; 9. Supernovae, gravitational radiation and the quadrupole formula L. S. Finn; 10. Gravitational radiation from perturbations of stellar core collapse models Edward Seidel and Thomas Moore; 11. General relativistic implicit radiation hydrodynamics in polar sliced space-time Paul J. Schinder; 12. General relativistic radiation hydrodynamics in spherically symmetric spacetimes A. Mezzacappa and R. A. Matzner; 13. Constraint preserving transport for magnetohydrodynamics John F. Hawley and Charles R. Evans; 14. Enforcing the momentum constraints during axisymmetric spacelike simulations Charles R. Evans; 15. Experiences with an adaptive mesh refinement algorithm in numerical relativity Matthew W. Choptuik; 16. The multigrid technique Gregory B. Cook; 17. Finite element methods in numerical relativity P. J. Mann; 18. Pseudo-spectral methods applied to gravitational collapse Silvano Bonazzola and Jean-Alain Marck; 19. Methods in 3D numerical relativity Takashi Nakamura and Ken-ichi Oohara; 20. Nonaxisymmetric rotating gravitational collapse and gravitational radiation Richard F. Stark; 21. Nonaxisymmetric neutron star collisions: initial results using smooth particle hydrodynamics Christopher S. Kochanek and Charles R. Evans; 22. Relativistic hydrodynamics James R. Wilson and Grant J. Mathews; 23. Computational dynamics of U(1) gauge strings: probability of reconnection of cosmic strings Richard A. Matzner; 24. Dynamically inhomogenous cosmic nucleosynthesis Hannu Kurki-Suonio; 25. Initial value solutions in planar cosmologies Peter Anninos, Joan Centrella and Richard Matzner; 26. An algorithmic overview of an Einstein solver Roger Ove; 27. A PDE compiler for full-metric numerical relativity Jonathan Thornburg; 28. Numerical evolution on null cones R. Gomez and J. Winicour; 29. Normal modes coupled to gravitational waves in a relativistic star Yasufumi Kojima; 30. Cosmic censorship and numerical relativity Dalia S. Goldwirth, Amos Ori and Tsvi Piran.

  4. 77 FR 31756 - Energy Conservation Program: Alternative Efficiency Determination Methods and Alternative Rating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-30

    ...-AC46 Energy Conservation Program: Alternative Efficiency Determination Methods and Alternative Rating Methods: Public Meeting AGENCY: Office of Energy Efficiency and Renewable Energy, Department of Energy... regulations authorizing the use of alternative methods of determining energy efficiency or energy consumption...

  5. Mesh Dependence on Shear Driven Boundary Layers in Stable Stratification Generated by Large Eddy-Simulation

    NASA Astrophysics Data System (ADS)

    Berg, Jacob; Patton, Edward G.; Sullivan, Peter S.

    2017-11-01

    The effect of mesh resolution and size on shear-driven atmospheric boundary layers in a stably stratified environment is investigated with the NCAR pseudo-spectral LES model (J. Atmos. Sci. v68, p2395, 2011 and J. Atmos. Sci. v73, p1815, 2016). The model applies FFTs in the two horizontal directions and finite differencing in the vertical direction. With vanishing heat flux at the surface and a capping inversion entraining potential temperature into the boundary layer, the situation is often called the conditionally neutral atmospheric boundary layer (ABL). Due to its relevance in high-wind applications such as wind power meteorology, we focus on second-order statistics important for wind turbines, including spectral information. The simulations range from mesh sizes of 64³ to 1024³ grid points. Due to the non-stationarity of the problem, different simulations are compared at equal eddy-turnover times. Whereas grid convergence is mostly achieved in the middle portion of the ABL, close to the surface, where the presence of the ground limits the growth of the energy-containing eddies, second-order statistics are not converged on the studied meshes. Higher-order structure functions also reveal non-Gaussian statistics highly dependent on the resolution.

  6. Dynamics and Chemistry in Jovian Atmospheres: 2D Hydrodynamical Simulations

    NASA Astrophysics Data System (ADS)

    Bordwell, B. R.; Brown, B. P.; Oishi, J.

    2016-12-01

    A key component of our understanding of the formation and evolution of planetary systems is chemical composition. Problematically, however, in the atmospheres of cooler gas giants, dynamics on the same timescale as chemical reactions pull molecular abundances out of thermochemical equilibrium. These disequilibrium abundances are treated using what is known as the "quench" approximation, based upon the mixing length theory of convection. The validity of this approximation is questionable, though, as the atmospheres of gas giants encompass two distinct dynamic regimes: convective and radiative. To resolve this issue, we conduct 2D hydrodynamical simulations using the state-of-the-art pseudospectral simulation framework Dedalus. In these simulations, we solve the fully compressible equations of fluid motion in a local slab geometry that mimics the structure of a planetary atmosphere (convective zone underlying a radiative zone). Through the inclusion of passive tracers, we explore the transport properties of both regimes, and assess the validity of the classical eddy diffusion parameterization. With the addition of active tracers, we examine the interactions between dynamical and chemical processes, and generate prescriptions for the observational community. By providing insight into mixing and feedback mechanisms in Jovian atmospheres, this research lays a solid foundation for future global simulations and the construction of physically-sound models for current and future observations.

  7. Ground motion observations of the 2014 South Napa earthquake

    USGS Publications Warehouse

    Baltay, Annemarie S.; Boatwright, John

    2015-01-01

    Using the ground‐motion data compiled and reported by ShakeMap (Wald et al., 2000), we examine the peak ground acceleration (PGA) and peak ground velocity (PGV), as well as the pseudospectral acceleration (PSA) at periods of 0.3, 1.0, and 3.0 s. At the higher frequencies, especially PGA, data recorded at close distances (within ∼20  km) are very consistent with the GMPEs, implying a stress drop for this event similar to the median for California, that is, 5 MPa (Baltay and Hanks, 2014). At all frequencies, the attenuation with distance is stronger than the GMPEs would predict, which suggests the attenuation in the Napa and San Francisco Bay delta region is stronger than the average attenuation in California. The spatial plot of the ground‐motion residuals is positive to the north, in both Napa and Sonoma Valleys, consistent with increases in amplitude expected from both the directivity and basin effects. More interestingly, perhaps, there is strong ground motion to the south in the along‐strike direction, particularly for PSA at 1.0 s. These strongly positive residuals align with an older, Quaternary fault structure associated with the Franklin or Southampton fault, potentially indicating a fault‐zone‐guided wave.

  8. Trajectory optimization study of a lifting body re-entry vehicle for medium to intermediate range applications

    NASA Astrophysics Data System (ADS)

    Rizvi, S. Tauqeer ul Islam; Linshu, He; ur Rehman, Tawfiq; Rafique, Amer Farhan

    2012-11-01

    A numerical optimization study of lifting-body re-entry vehicles is presented for nominal as well as shallow entry conditions for medium- and intermediate-range applications. Due to the stringent accuracy requirement for conventional vehicles, lifting re-entry can be used to attain impact at the desired terminal flight path angle and speed, and thus can potentially improve the accuracy of the re-entry vehicle. The re-entry of medium-range and intermediate-range vehicles is characterized by a very high negative flight path angle and low re-entry speed compared to a maneuverable re-entry vehicle or a common aero vehicle intended for intercontinental range. Highly negative flight path angles at re-entry impose high dynamic pressure as well as heat loads on the vehicle. The trajectory studies are carried out to maximize the cross range of the re-entry vehicle while imposing a maximum dynamic pressure constraint of 350 kPa with a 3 MW/m² heat-rate limit. The maximum normal acceleration and the total heat load experienced by the vehicle at the stagnation point during the maneuver have been computed for possible future conceptual design studies. It has been found that a cross-range capability of up to 35 km can be achieved with a lifting-body design within the heat-rate and dynamic-pressure boundary at normal entry conditions. For a shallow entry angle of -20 degrees and intermediate ranges, a cross-range capability of up to 250 km can be attained for a lifting-body design with less than 10 percent loss in overall range. The normal acceleration also remains within limits. The lifting-body results have also been compared with wing-body results at shallow entry conditions. An hp-adaptive pseudo-spectral method has been used for constrained trajectory optimization.
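
    The path constraints quoted here (a 350 kPa dynamic-pressure bound and a 3 MW/m² heat-rate limit) enter the optimization as inequality constraints evaluated along the trajectory. A minimal sketch of that evaluation is given below in Python; the exponential atmosphere, nose radius, and Sutton-Graves-type heat-rate constant are illustrative assumptions, not values from the study.

      import numpy as np

      # Illustrative constants: sea-level density, scale height, Sutton-Graves constant (SI), nose radius.
      RHO0, H_SCALE, K_SG, R_NOSE = 1.225, 7200.0, 1.7415e-4, 0.5

      def path_constraints(h, v):
          # Return (dynamic pressure [Pa], stagnation-point heat rate [W/m^2]) at altitude h [m], speed v [m/s].
          rho = RHO0 * np.exp(-h / H_SCALE)
          q_dyn = 0.5 * rho * v ** 2
          q_dot = K_SG * np.sqrt(rho / R_NOSE) * v ** 3
          return q_dyn, q_dot

      q_dyn, q_dot = path_constraints(h=30e3, v=3000.0)
      feasible = (q_dyn <= 350e3) and (q_dot <= 3e6)   # the 350 kPa / 3 MW/m^2 bounds cited above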

  9. Inner core boundary topography explored with reflected and diffracted P waves

    NASA Astrophysics Data System (ADS)

    deSilva, Susini; Cormier, Vernon F.; Zheng, Yingcai

    2018-03-01

    The existence of topography of the inner core boundary (ICB) can affect the amplitude, phase, and coda of body waves incident on the inner core. By applying pseudospectral and boundary element methods to synthesize compressional waves interacting with the ICB, these effects are predicted and compared with waveform observations in pre-critical, critical, post-critical, and diffraction ranges of the PKiKP wave reflected from the ICB. These data sample overlapping regions of the inner core beneath the circum-Pacific belt and the Eurasian, North American, and Australian continents, but exclude large areas beneath the Pacific and Indian Oceans and the poles. In the pre-critical range, PKiKP waveforms require an upper bound of 2 km at 1-20 km wavelength for any ICB topography. Higher topography sharply reduces PKiKP amplitude and produces time-extended coda not observed in PKiKP waveforms. The existence of topography of this scale smooths over minima and zeros in the pre-critical ICB reflection coefficient predicted from standard earth models. In the range surrounding critical incidence (108-130 °), this upper bound of topography does not strongly affect the amplitude and waveform behavior of PKIKP + PKiKP at 1.5 Hz, which is relatively insensitive to 10-20 km wavelength topography height approaching 5 km. These data, however, have a strong overlap in the regions of the ICB sampled by pre-critical PKiKP that require a 2 km upper bound to topography height. In the diffracted range (>152°), topography as high as 5 km attenuates the peak amplitudes of PKIKP and PKPCdiff by similar amounts, leaving the PKPCdiff/PKIKP amplitude ratio unchanged from that predicted by a smooth ICB. The observed decay of PKPCdiff into the inner core shadow and the PKIKP-PKPCdiff differential travel time are consistent with a flattening of the outer core P velocity gradient near the ICB and iron enrichment at the bottom of the outer core.

  10. Low stress drops observed for aftershocks of the 2011 Mw 5.7 Prague, Oklahoma, earthquake

    USGS Publications Warehouse

    Sumy, Danielle F.; Neighbors, Corrie J.; Cochran, Elizabeth S.; Keranen, Katie M.

    2017-01-01

    In November 2011, three Mw ≥ 4.8 earthquakes and thousands of aftershocks occurred along the structurally complex Wilzetta fault system near Prague, Oklahoma. Previous studies suggest that wastewater injection induced a Mw 4.8 foreshock, which subsequently triggered a Mw 5.7 mainshock. We examine source properties of aftershocks with a standard Brune-type spectral model and jointly solve for seismic moment (M0), corner frequency (f0), and kappa (κ) with an iterative Gauss-Newton global downhill optimization method. We examine 934 earthquakes with initial moment magnitudes (Mw) between 0.33 and 4.99 based on the pseudospectral acceleration and recover reasonable M0, f0, and κ for 87 earthquakes with Mw 1.83–3.51 determined by spectral fit. We use M0 and f0 to estimate the Brune-type stress drop, assuming a circular fault and shear-wave velocity at the hypocentral depth of the event. Our observations suggest that stress drops range between 0.005 and 4.8 MPa with a median of 0.2 MPa (0.03–26.4 MPa with a median of 1.1 MPa for Madariaga-type), which is significantly lower than typical eastern United States intraplate events (>10 MPa). We find that stress drops correlate weakly with hypocentral depth and magnitude. Additionally, we find the stress drops increase with time after the mainshock, although temporal variation in stress drop is difficult to separate from spatial heterogeneity and changing event locations. The overall low median stress drop suggests that the fault segments may have been primed to fail as a result of high pore fluid pressures, likely related to nearby wastewater injection.

  11. Cut-and-connect of two antiparallel vortex tubes

    NASA Technical Reports Server (NTRS)

    Melander, Mogens V.; Hussain, Fazle

    1988-01-01

    Motivated by an early conjecture that vortex cut-and-connect plays a key role in mixing and production of turbulence, helicity and aerodynamic noise, the cross-linking of two antiparallel viscous vortex tubes via direct numerical simulation is studied. The Navier-Stokes equations are solved by a dealiased pseudo-spectral method with 64 cubed grid points in a periodic domain for initial Reynolds numbers Re up to 1000. The vortex tubes are given an initial sinusoidal perturbation to induce a collision and keep the two tubes pressed against each other as annihilation continues. Cross-sectional and wire plots of various properties depict three stages of evolution: (1) Inviscid induction causing vortex cores to first approach and form a contact zone with a dipole cross-section, and then to flatten and stretch; (2) Vorticity annihilation in the contact zone accompanied by bridging between the two vortices at both ends of the contact zone due to a collection of cross-linked vortex lines, now orthogonal to the initial vortex tubes. The direction of dipole advection in the contact zone reverses; and (3) Threading of the remnants of the original vortices in between the bridges as they pull apart. The crucial stage 2 is shown to be a simple consequence of vorticity annihilation in the contact zone, link-up of the un-annihilated parts of vortex lines, and stretching and advection by the vortex tube swirl of the cross-linked lines, which accumulate at stagnation points in front of the annihilating vortex dipole. It is claimed that bridging is the essence of any vorticity cross-linking and that annihilation is sustained by stretching of the dipole by the bridges. Vortex reconnection details are found to be insensitive to asymmetry. Modeling of the reconnection process is briefly examined. The 3D spatial details of scalar transport (at unity Schmidt number), enstrophy production, dissipation and helicity are also examined.
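
    The dealiasing referred to here is commonly the 2/3 rule: before and after forming a quadratic product in physical space, the upper third of the Fourier modes is zeroed so that aliased contributions from the product cannot contaminate the retained modes. A minimal 1-D sketch in Python/NumPy follows; the grid size and test field are illustrative, not the simulation described above.

      import numpy as np

      def dealiased_product_hat(u_hat, v_hat):
          # Spectral coefficients of the pointwise product u*v with the 2/3 rule applied,
          # removing aliasing errors from quadratic nonlinearities in pseudospectral codes.
          n = u_hat.size
          k = np.fft.fftfreq(n, d=1.0 / n)                 # integer wavenumbers
          mask = np.abs(k) <= n // 3                       # keep only the lowest 2/3 of modes
          u = np.fft.ifft(np.where(mask, u_hat, 0.0))
          v = np.fft.ifft(np.where(mask, v_hat, 0.0))
          return np.where(mask, np.fft.fft(u * v), 0.0)

      # Example: dealiased nonlinear term u*du/dx on a 64-point periodic grid.
      n = 64
      x = 2 * np.pi * np.arange(n) / n
      u_hat = np.fft.fft(np.sin(3 * x) + 0.5 * np.cos(10 * x))
      k = np.fft.fftfreq(n, d=1.0 / n)
      dudx_hat = 1j * k * u_hat
      nl_hat = dealiased_product_hat(u_hat, dudx_hat)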

  12. 76 FR 21673 - Alternative Efficiency Determination Methods and Alternate Rating Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-18

    ... EERE-2011-BP-TP-00024] RIN 1904-AC46 Alternative Efficiency Determination Methods and Alternate Rating Methods AGENCY: Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of... and data related to the use of computer simulations, mathematical methods, and other alternative...

  13. An applicable method for efficiency estimation of operating tray distillation columns and its comparison with the methods utilized in HYSYS and Aspen Plus

    NASA Astrophysics Data System (ADS)

    Sadeghifar, Hamidreza

    2015-10-01

    Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data. Therefore, these methods may not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of distillation columns filled with trays. This method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. For example, the method does not need to estimate the tray interfacial area, which may be its most important advantage over all the available methods. The method can be used for the efficiency prediction of any tray in distillation columns. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfer occurring inside the operating column. It is emphasized that estimating the efficiency of an operating column has to be distinguished from that of a column being designed.

  14. Electrosprayed chitosan nanoparticles: facile and efficient approach for bacterial transformation

    NASA Astrophysics Data System (ADS)

    Abyadeh, Morteza; Sadroddiny, Esmaeil; Ebrahimi, Ammar; Esmaeili, Fariba; Landi, Farzaneh Saeedi; Amani, Amir

    2017-12-01

    A rapid and efficient procedure for DNA transformation is a key prerequisite for successful cloning and genomic studies. While there are efforts to develop a facile method, the efficiencies obtained so far for alternative methods have been unsatisfactory (i.e., 10⁵-10⁶ CFU/μg plasmid) compared with the conventional method (up to 10⁸ CFU/μg plasmid). In this work, for the first time, we prepared chitosan/pDNA nanoparticles by electrospraying to improve the transformation process. The electrospray method was used to produce chitosan/pDNA nanoparticles and to investigate non-competent bacterial transformation efficiency; in addition, the effects of chitosan molecular weight, N/P ratio, and nanoparticle size on non-competent bacterial transformation efficiency were evaluated. The results showed that transformation efficiency increased with decreasing molecular weight, N/P ratio, and nanoparticle size. In addition, a transformation efficiency of 1.7 × 10⁸ CFU/μg plasmid was obtained with chitosan molecular weight, N/P ratio, and nanoparticle size values of 30 kDa, 1, and 125 nm, respectively. Chitosan/pDNA electrosprayed nanoparticles were produced and the effects of molecular weight, N/P ratio, and nanoparticle size on transformation efficiency were evaluated. In total, we present a facile and rapid method for bacterial transformation whose efficiency is comparable to that of the conventional method.

  15. Measuring Efficiency of Secondary Healthcare Providers in Slovenia

    PubMed Central

    Blatnik, Patricia; Bojnec, Štefan; Tušak, Matej

    2017-01-01

    The chief aim of this study was to analyze secondary healthcare providers' efficiency, focusing on the efficiency analysis of Slovene general hospitals. We intended to present a complete picture of the technical, allocative, and cost or economic efficiency of general hospitals. Methods: We researched the aspects of efficiency with two econometric methods. First, we calculated the necessary quotients of efficiency with stochastic frontier analysis (SFA), which are realized by econometric evaluation of stochastic frontier functions; then, with data envelopment analysis (DEA), we calculated the necessary quotients that are based on the linear programming method. Results: The measures of efficiency showed that the two chosen methods produced different conclusions. The SFA method identified Celje General Hospital as the most efficient general hospital, whereas the DEA method identified Brežice General Hospital as the most efficient. Conclusion: Our results are a useful tool that can help managers, payers, and designers of healthcare policy better understand how general hospitals operate. These stakeholders can accordingly decide with less difficulty on any further business operations of general hospitals, having the best practices of general hospitals at their disposal. PMID:28730180
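
    The DEA side of such a comparison reduces to one linear program per hospital. The sketch below (Python/SciPy) solves an input-oriented, constant-returns (CCR) DEA formulation, which is assumed here for illustration; the study's exact DEA specification and the hospital data are not given in the abstract, so the inputs, outputs, and numbers are placeholders.

      import numpy as np
      from scipy.optimize import linprog

      def dea_ccr_input(X, Y, o):
          # Input-oriented CCR DEA efficiency of unit o.
          # X: (m inputs x n units), Y: (s outputs x n units). Returns theta in (0, 1].
          m, n = X.shape
          s = Y.shape[0]
          c = np.zeros(n + 1)
          c[0] = 1.0                                    # minimize theta
          A_in = np.hstack([-X[:, [o]], X])             # X @ lam - theta * X[:, o] <= 0
          A_out = np.hstack([np.zeros((s, 1)), -Y])     # -Y @ lam <= -Y[:, o]  (i.e. Y @ lam >= Y[:, o])
          A_ub = np.vstack([A_in, A_out])
          b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1), method="highs")
          return res.x[0]

      # Placeholder data: 5 hospitals, 2 inputs (staff, beds), 1 output (treated cases).
      X = np.array([[120., 80., 150., 95., 110.],
                    [300., 200., 420., 260., 310.]])
      Y = np.array([[900., 650., 980., 800., 860.]])
      scores = [dea_ccr_input(X, Y, o) for o in range(X.shape[1])]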

  16. Multiresolution molecular mechanics: Implementation and efficiency

    NASA Astrophysics Data System (ADS)

    Biyikli, Emre; To, Albert C.

    2017-01-01

    Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called the Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the software implemented along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. Speed-up of the MMM method is shown to be directly proportional to the reduction of the number of the atoms visited in force computation. Finally, an adaptive MMM analysis on a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3-8.5 times in efficiency, over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.

  17. Relative efficiency of anuran sampling methods in a restinga habitat (Jurubatiba, Rio de Janeiro, Brazil).

    PubMed

    Rocha, C F D; Van Sluys, M; Hatano, F H; Boquimpani-Freitas, L; Marra, R V; Marques, R V

    2004-11-01

    Studies on anurans in restinga habitats are few and, as a result, there is little information on which methods are more efficient for sampling them in this environment. Ten methods are usually used for sampling anuran communities in tropical and sub-tropical areas. In this study we evaluate which methods are more appropriate for this purpose in the restinga environment of Parque Nacional da Restinga de Jurubatiba. We analyzed six methods among those usually used for anuran sampling. For each method, we recorded the total amount of time spent (in min.), the number of researchers involved, and the number of species captured. We calculated a capture efficiency index (time necessary for a researcher to capture an individual frog) in order to make the data obtained comparable. Of the methods analyzed, the species inventory (9.7 min/searcher/ind. - MSI; richness = 6; abundance = 23) and the breeding site survey (9.5 MSI; richness = 4; abundance = 22) were the most efficient. The visual encounter inventory (45.0 MSI) and patch sampling (65.0 MSI) methods were of comparatively lower efficiency in the restinga, whereas the plot sampling and the pit-fall traps with drift-fence methods resulted in no frog capture. We conclude that there is a considerable difference in the efficiency of the methods used in the restinga environment and that the complete species inventory method is highly efficient for sampling frogs in the restinga studied and may be so in other restinga environments. Methods that are usually efficient in forested areas seem to be of little value in open restinga habitats.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Ruiz, Andrés

    Monte Carlo simulation of gamma spectroscopy systems is common practice these days. The most popular software packages for this purpose are the MCNP and Geant4 codes. The intrinsic spatial efficiency method is a general and absolute method to determine the absolute efficiency of a spectroscopy system for any extended source, but it has only been demonstrated experimentally for cylindrical sources. Due to the difficulty that the preparation of sources with arbitrary shape represents, the simplest way to do this is by simulation of the spectroscopy system and the source. In this work we present the validation of the intrinsic spatial efficiency method for sources with different geometries and for photons with an energy of 661.65 keV. In the simulation the matrix effects (the auto-attenuation effect) are not considered; therefore, these results are only preliminary. The MC simulation is carried out using the FLUKA code and the absolute efficiency of the detector is determined using two methods: the statistical count of the Full Energy Peak (FEP) area (traditional method) and the intrinsic spatial efficiency method. The obtained results show total agreement between the absolute efficiencies determined by the traditional method and the intrinsic spatial efficiency method. The relative bias is less than 1% in all cases.

  19. On the nature of regional seismic phases-III. The influence of crustal heterogeneity on the wavefield for subduction earthquakes: the 1985 Michoacan and 1995 Copala, Guerrero, Mexico earthquakes

    NASA Astrophysics Data System (ADS)

    Furumura, T.; Kennett, B. L. N.

    1998-12-01

    The most prominent feature of the regional seismic wavefield from about 150 to over 1000 km is usually the Lg phase. This arrival represents trapped S-wave propagation within the crust as a superposition of multiple reflections, and its amplitude is quite sensitive to the lateral variation in the crust along a propagation path. In an environment where the events occur in a subduction zone, such as the western coast of Mexico, quite complex influences on the character of the regional wavefield arise from the presence of the subduction zone. The great 1985 Michoacan earthquake (MW=8.1), which occurred in the Mexican subduction zone, was one of the most destructive earthquakes in modern history and its notable character was that at Mexico City, located over 350 km from the epicentre, there was strong ground shaking almost comparable to that in the epicentral region that lasted for several minutes. Considerable effort has been expended to explain the origin of the unusual observed waves that caused the severe damage in the capital city during the destructive earthquake. The nature of the propagation process in this region can be understood in part by using the detailed strong-motion records from the 1995 Copala, Guerrero (MW=7.4) earthquake near the coast to the south of Mexico City, which also had an enhanced amplitude in the Valley of Mexico. Numerical modelling of both P and S seismic waves in 2-D and 3-D heterogeneous crustal models for western Mexico using the pseudospectral method provides direct insight into the nature of the propagation processes through the use of sequences of snapshots of the wavefield and synthetic seismograms at the surface. A comparison of different models allows the influences of different aspects of the structure to be isolated. 2-D and 3-D modelling of the 1985 Michoacan and 1995 Copala earthquakes clearly demonstrates that the origin of the long duration of strong ground shaking comes from the Sn and Lg wave trains. These S-wave arrivals are produced efficiently from shallow subduction earthquakes and are strongly enhanced during their propagation within the laterally heterogeneous waveguide produced by the subduction of the Cocos Plate beneath the Mexican mainland. The amplitude and duration of the Lg coda is also strongly reinforced by transmission through the Mexican Volcanic Belt from the amplification of S waves in the low-velocity surficial layer associated with S-to-P conversions in the volcanic zone. The further amplification of the large and long Lg wave train impinging on the shallow structure in the basin of Mexico City, with very soft soil underlain by nearly rigid bedrock with a strong impedance contrast, gives rise to the destructive strong ground shaking from the Mexican subduction earthquakes.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The report is an overview of electric energy efficiency programs. It takes a concise look at what states are doing to encourage energy efficiency and how it impacts electric utilities. Energy efficiency programs began to be offered by utilities as a response to the energy crises of the 1970s. These regulatory-driven programs peaked in the early 1990s and then tapered off as deregulation took hold. Today, rising electricity prices, environmental concerns, and national security issues have renewed interest in increasing energy efficiency as an alternative to additional supply. In response, new methods for administering, managing, and delivering energy efficiency programs are being implemented. Topics covered in the report include: analysis of the benefits of energy efficiency and key methods for achieving energy efficiency; evaluation of the business drivers spurring increased energy efficiency; discussion of the major barriers to expanding energy efficiency programs; evaluation of the economic impacts of energy efficiency; discussion of the history of electric utility energy efficiency efforts; analysis of the impact of energy efficiency on utility profits and methods for protecting profitability; discussion of non-utility management of energy efficiency programs; evaluation of major methods to spur energy efficiency - systems benefit charges, resource planning, and resource standards; and analysis of the alternatives for encouraging customer participation in energy efficiency programs.

  1. Comparative analysis of quantitative efficiency evaluation methods for transportation networks

    PubMed Central

    He, Yuxin; Hong, Jian

    2017-01-01

    An effective evaluation of transportation network efficiency could offer guidance for the optimal control of urban traffic. Based on the introduction and related mathematical analysis of three quantitative evaluation methods for transportation network efficiency, this paper compares the information measured by them, including network structure, traffic demand, travel choice behavior and other factors which affect network efficiency. Accordingly, the applicability of various evaluation methods is discussed. Analysis of different transportation network examples shows that the Q-H method reflects well the influence of network structure, traffic demand and user route choice behavior on transportation network efficiency. In addition, the transportation network efficiency measured by this method and Braess’s Paradox can be explained in terms of each other, which indicates a better evaluation of the real operating condition of the transportation network. Analysis of the network efficiency calculated by the Q-H method also shows that a specific appropriate demand exists for a given transportation network. Meanwhile, under fixed demand, both the critical network structure that guarantees the stability and basic operation of the network and the specific network structure contributing to the largest value of the transportation network efficiency can be identified. PMID:28399165

  2. Comparative analysis of quantitative efficiency evaluation methods for transportation networks.

    PubMed

    He, Yuxin; Qin, Jin; Hong, Jian

    2017-01-01

    An effective evaluation of transportation network efficiency could offer guidance for the optimal control of urban traffic. Based on the introduction and related mathematical analysis of three quantitative evaluation methods for transportation network efficiency, this paper compares the information measured by them, including network structure, traffic demand, travel choice behavior and other factors which affect network efficiency. Accordingly, the applicability of various evaluation methods is discussed. Analysis of different transportation network examples shows that the Q-H method reflects well the influence of network structure, traffic demand and user route choice behavior on transportation network efficiency. In addition, the transportation network efficiency measured by this method and Braess's Paradox can be explained in terms of each other, which indicates a better evaluation of the real operating condition of the transportation network. Analysis of the network efficiency calculated by the Q-H method also shows that a specific appropriate demand exists for a given transportation network. Meanwhile, under fixed demand, both the critical network structure that guarantees the stability and basic operation of the network and the specific network structure contributing to the largest value of the transportation network efficiency can be identified.

  3. Efficiencies of Dye-Sensitized Solar Cells using Ferritin-Encapsulated Quantum Dots with Various Staining Methods

    NASA Astrophysics Data System (ADS)

    Perez, Luis

    Dye-sensitized solar cells (DSSC) have the potential to replace traditional and cost-inefficient crystalline silicon or ruthenium solar cells. This can only be accomplished by optimizing DSSC's energy efficiency. One of the major components in a dye-sensitized solar cell is the porous layer of titanium dioxide. This layer is coated with a molecular dye that absorbs sunlight. The research conducted for this paper focuses on the different methods used to dye the porous TiO2 layer with ferritin-encapsulated quantum dots. Multiple anodes were dyed using a method known as SILAR which involves deposition through alternate immersion in two different solutions. The efficiencies of DSSCs with ferritin-encapsulated lead sulfide dye deposited using SILAR were subsequently compared against the efficiencies produced by cells using the traditional immersion method. It was concluded that both methods resulted in similar efficiencies (≈0.074%); however, the SILAR method dyed the TiO2 coating significantly faster than the immersion method. On a related note, our experiments concluded that conducting 2 SILAR cycles yields the highest possible efficiency for this particular binding method. National Science Foundation.

  4. High accurate interpolation of NURBS tool path for CNC machine tools

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Liu, Huan; Yuan, Songmei

    2016-09-01

    Feedrate fluctuation caused by approximation errors of interpolation methods has a great effect on machining quality in NURBS interpolation, but at present few methods can efficiently eliminate or reduce it to a satisfactory level without sacrificing computing efficiency. To solve this problem, a highly accurate interpolation method for NURBS tool paths is proposed. The proposed method efficiently reduces the feedrate fluctuation by forming a quartic equation with respect to the curve parameter increment, which can be solved efficiently by analytic methods in real time. Theoretically, the proposed method can totally eliminate the feedrate fluctuation for any 2nd-degree NURBS curve and can interpolate 3rd-degree NURBS curves with minimal feedrate fluctuation. Moreover, a smooth feedrate planning algorithm is also proposed to generate smooth tool motion while considering multiple constraints and scheduling errors through an efficient planning strategy. Experiments are conducted to verify the feasibility and applicability of the proposed method. This research presents a novel NURBS interpolation method with not only high accuracy but also satisfactory computing efficiency.
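    The quartic mentioned in the abstract follows from requiring the chord traversed in one interpolation period to match the commanded feedrate. Below is a minimal sketch, not the paper's analytic solver: it uses a second-order Taylor expansion of a generic parametric tool path and a numerical root finder, and the derivative vectors and numbers are purely hypothetical.

    ```python
    import numpy as np

    def parameter_increment(c1, c2, feedrate, T):
        """Solve |C'(u)*du + 0.5*C''(u)*du^2| = F*T for the parameter increment du.

        c1, c2   : first and second derivatives of the tool path at the current
                   parameter (assumed available from the NURBS evaluator)
        feedrate : commanded feedrate F
        T        : interpolation period
        Squaring both sides gives a quartic in du; here it is solved numerically
        with numpy.roots, whereas the paper solves it analytically in real time.
        """
        c1 = np.asarray(c1, dtype=float)
        c2 = np.asarray(c2, dtype=float)
        s = feedrate * T                       # desired chord length per period
        coeffs = [0.25 * (c2 @ c2),            # du^4
                  c1 @ c2,                     # du^3
                  c1 @ c1,                     # du^2
                  0.0,                         # du^1
                  -s * s]                      # constant term
        roots = np.roots(coeffs)
        real = roots.real[np.abs(roots.imag) < 1e-9]
        return min(r for r in real if r > 0)   # smallest positive real root

    # example: locally straight segment, F = 50 mm/s, T = 1 ms
    print(parameter_increment([10.0, 0.0], [0.0, 2.0], 50.0, 0.001))  # ~0.005
    ```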

  5. Chapter 1: Introduction. The Uniform Methods Project: Methods for Determining Energy-Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Michael; Haeri, Hossein; Reynolds, Arlis

    This chapter provides a set of model protocols for determining energy and demand savings that result from specific energy efficiency measures implemented through state and utility efficiency programs. The methods described here are approaches that are or are among the most commonly used and accepted in the energy efficiency industry for certain measures or programs. As such, they draw from the existing body of research and best practices for energy efficiency program evaluation, measurement, and verification (EM&V). These protocols were developed as part of the Uniform Methods Project (UMP), funded by the U.S. Department of Energy (DOE). The principal objective for the project was to establish easy-to-follow protocols based on commonly accepted methods for a core set of widely deployed energy efficiency measures.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biyikli, Emre; To, Albert C., E-mail: albertto@pitt.edu

    Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called the Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the software implemented along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. Speed-up of the MMM method is shown to be directly proportional to the reduction of the number of the atoms visited in force computation. Finally, an adaptive MMM analysis on a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3–8.5 times in efficiency, over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.

  7. Evaluation of Thermoelectric Devices by the Slope-Efficiency Method

    DTIC Science & Technology

    2016-09-01

    ARL-TR-7837 ● SEP 2016. US Army Research Laboratory. Evaluation of Thermoelectric Devices by the Slope-Efficiency Method, by Patrick J Taylor, Sensors and Electron Devices Directorate, ARL; Jay R...

  8. Chapter 4: Small Commercial and Residential Unitary and Split System HVAC Heating and Cooling Equipment-Efficiency Upgrade Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Jacobson, David; Metoyer, Jarred

    The specific measure described here involves improving the overall efficiency in air-conditioning systems as a whole (compressor, evaporator, condenser, and supply fan). The efficiency rating is expressed as the energy efficiency ratio (EER), seasonal energy efficiency ratio (SEER), and integrated energy efficiency ratio (IEER). The higher the EER, SEER or IEER, the more efficient the unit is.

  9. Efficient path-based computations on pedigree graphs with compact encodings

    PubMed Central

    2012-01-01

    A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and accumulation of genealogy information, pedigree data is becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path encoding scheme on large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utilization of our proposed method by applying it to the inbreeding coefficient computation. We present time and space complexity analysis, and also manifest the efficiency of our method for evaluating inbreeding coefficients as compared to previous methods by experimental results using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements. PMID:22536898
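    For context, the classical quantity that such path-based computations evaluate is Wright's inbreeding coefficient, which sums a contribution over every path joining the sire and dam through a common ancestor. The sketch below implements only that textbook formula; it does not reproduce the paper's compact path encoding or identification algorithm.

    ```python
    def inbreeding_coefficient(paths):
        """Wright's path-counting formula for the inbreeding coefficient.

        Each path is a tuple (n_sire, n_dam, F_ancestor):
          n_sire, n_dam : generations from the sire / dam up to the common
                          ancestor along this path
          F_ancestor    : inbreeding coefficient of that common ancestor
        F_X = sum over paths of (1/2)**(n_sire + n_dam + 1) * (1 + F_ancestor)
        """
        return sum(0.5 ** (ns + nd + 1) * (1.0 + fa) for ns, nd, fa in paths)

    # offspring of a full-sib mating: two common ancestors (the grandparents),
    # each one generation above both parents and themselves non-inbred
    print(inbreeding_coefficient([(1, 1, 0.0), (1, 1, 0.0)]))   # 0.25
    ```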

  10. The Gassmann-Burgers Model to Simulate Seismic Waves at the Earth Crust And Mantle

    NASA Astrophysics Data System (ADS)

    Carcione, José M.; Poletto, Flavio; Farina, Biancamaria; Craglietto, Aronne

    2017-03-01

    The upper part of the crust shows generally brittle behaviour while deeper zones, including the mantle, may present ductile behaviour, depending on the pressure-temperature conditions; moreover, some parts are melted. Seismic waves can be used to detect these conditions on the basis of reflection and transmission events. Basically, from the elastic-plastic point of view the seismic properties (seismic velocity and density) depend on effective pressure and temperature. Confining and pore pressures have opposite effects on these properties, such that very small effective pressures (the presence of overpressured fluids) may substantially decrease the P- and S-wave velocities, mainly the latter, by opening of cracks and weakening of grain contacts. Similarly, high temperatures induce the same effect by partial melting. To model these effects, we consider a poro-viscoelastic model based on Gassmann equations and Burgers mechanical model to represent the properties of the rock frame and describe ductility in which deformation takes place by shear plastic flow. The Burgers elements allow us to model the effects of seismic attenuation, velocity dispersion and steady-state creep flow, respectively. The stiffness components of the brittle and ductile media depend on stress and temperature through the shear viscosity, which is obtained by the Arrhenius equation and the octahedral stress criterion. Effective pressure effects are taken into account in the dry-rock moduli using exponential functions whose parameters are obtained by fitting experimental data as a function of confining pressure. Since fluid effects are important, the density and bulk modulus of the saturating fluids (water and steam) are modeled using the equations provided by the NIST website, including supercritical behaviour. The theory allows us to obtain the phase velocity and quality factor as a function of depth and geological pressure and temperature as well as time frequency. We then obtain the PS and SH equations of motion recast in the velocity-stress formulation, including memory variables to avoid the computation of time convolutions. The equations correspond to isotropic anelastic and inhomogeneous media and are solved by a direct grid method based on the Runge-Kutta time stepping technique and the Fourier pseudospectral method. The algorithm is tested with success against known analytical solutions for different shear viscosities. An example shows how anomalous conditions of pressure and temperature can in principle be detected with seismic waves.
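    As a minimal illustration of the two numerical ingredients named at the end of the abstract, the sketch below combines a Fourier pseudospectral spatial derivative with classical Runge-Kutta time stepping, applied to a 1D periodic advection equation rather than the paper's viscoelastic PS/SH system; the grid and time-step values are arbitrary.

    ```python
    import numpy as np

    N, L, c = 256, 2.0 * np.pi, 1.0
    x = np.linspace(0.0, L, N, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)     # wavenumbers
    u = np.exp(-40.0 * (x - np.pi) ** 2)             # initial pulse

    def rhs(u):
        """du/dt = -c du/dx with the spatial derivative computed spectrally."""
        ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
        return -c * ux

    dt, nsteps = 1e-3, 2000
    for _ in range(nsteps):                          # classical 4th-order Runge-Kutta
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2)
        k4 = rhs(u + dt * k3)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    # after t = nsteps*dt = 2.0 the pulse should simply have translated by c*t
    print(x[np.argmax(u)])                           # ~ pi + 2
    ```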

  11. Development of a novel and highly efficient method of isolating bacteriophages from water.

    PubMed

    Liu, Weili; Li, Chao; Qiu, Zhi-Gang; Jin, Min; Wang, Jing-Feng; Yang, Dong; Xiao, Zhong-Hai; Yuan, Zhao-Kang; Li, Jun-Wen; Xu, Qun-Ying; Shen, Zhi-Qiang

    2017-08-01

    Bacteriophages are widely used in the treatment of drug-resistant bacteria and the improvement of food safety through bacterial lysis. However, the limited investigation of bacteriophages restricts their further application. In this study, a novel and highly efficient method was developed for isolating bacteriophages from water based on electropositive silica gel particles (ESPs). To optimize the ESPs method, we evaluated the eluent type, flow rate, pH, temperature, and inoculation concentration of bacteriophage using bacteriophage f2. Quantitative detection showed that the recovery of the ESPs method reached over 90%. Qualitative detection demonstrated that the ESPs method effectively isolated 70% of extremely low-concentration bacteriophage (10^0 PFU/100 L). Based on host bacteria composed of 33 standard strains and 10 isolated strains, the bacteriophages in 18 water samples collected from three sites in the Tianjin Haihe River Basin were isolated by the ESPs and traditional methods. Results showed that the ESPs method was significantly superior to the traditional method. The ESPs method isolated 32 strains of bacteriophage, whereas the traditional method isolated 15 strains. The sample isolation efficiency and bacteriophage isolation efficiency of the ESPs method were 3.28 and 2.13 times higher than those of the traditional method. The developed ESPs method was characterized by high isolation efficiency, efficient handling of large water sample volumes and low requirements on water quality. Copyright © 2017. Published by Elsevier B.V.

  12. Development of a GPU Compatible Version of the Fast Radiation Code RRTMG

    NASA Astrophysics Data System (ADS)

    Iacono, M. J.; Mlawer, E. J.; Berthiaume, D.; Cady-Pereira, K. E.; Suarez, M.; Oreopoulos, L.; Lee, D.

    2012-12-01

    The absorption of solar radiation and emission/absorption of thermal radiation are crucial components of the physics that drive Earth's climate and weather. Therefore, accurate radiative transfer calculations are necessary for realistic climate and weather simulations. Efficient radiation codes have been developed for this purpose, but their accuracy requirements still necessitate that as much as 30% of the computational time of a GCM is spent computing radiative fluxes and heating rates. The overall computational expense constitutes a limitation on a GCM's predictive ability if it becomes an impediment to adding new physics to or increasing the spatial and/or vertical resolution of the model. The emergence of Graphics Processing Unit (GPU) technology, which will allow the parallel computation of multiple independent radiative calculations in a GCM, will lead to a fundamental change in the competition between accuracy and speed. Processing time previously consumed by radiative transfer will now be available for the modeling of other processes, such as physics parameterizations, without any sacrifice in the accuracy of the radiative transfer. Furthermore, fast radiation calculations can be performed much more frequently and will allow the modeling of radiative effects of rapid changes in the atmosphere. The fast radiation code RRTMG, developed at Atmospheric and Environmental Research (AER), is utilized operationally in many dynamical models throughout the world. We will present the results from the first stage of an effort to create a version of the RRTMG radiation code designed to run efficiently in a GPU environment. This effort will focus on the RRTMG implementation in GEOS-5. RRTMG has an internal pseudo-spectral vector of length of order 100 that, when combined with the much greater length of the global horizontal grid vector from which the radiation code is called in GEOS-5, makes RRTMG/GEOS-5 particularly suited to achieving a significant speed improvement through GPU technology. This large number of independent cases will allow us to take full advantage of the computational power of the latest GPUs, ensuring that all thread cores in the GPU remain active, a key criterion for obtaining significant speedup. The CUDA (Compute Unified Device Architecture) Fortran compiler developed by PGI and Nvidia will allow us to construct this parallel implementation on the GPU while remaining in the Fortran language. This implementation will scale very well across various CUDA-supported GPUs such as the recently released Fermi Nvidia cards. We will present the computational speed improvements of the GPU-compatible code relative to the standard CPU-based RRTMG with respect to a very large and diverse suite of atmospheric profiles. This suite will also be utilized to demonstrate the minimal impact of the code restructuring on the accuracy of radiation calculations. The GPU-compatible version of RRTMG will be directly applicable to future versions of GEOS-5, but it is also likely to provide significant associated benefits for other GCMs that employ RRTMG.

  13. Biological optimization systems for enhancing photosynthetic efficiency and methods of use

    DOEpatents

    Hunt, Ryan W.; Chinnasamy, Senthil; Das, Keshav C.; de Mattos, Erico Rolim

    2012-11-06

    Biological optimization systems for enhancing photosynthetic efficiency and methods of use. Specifically, methods for enhancing photosynthetic efficiency including applying pulsed light to a photosynthetic organism, using a chlorophyll fluorescence feedback control system to determine one or more photosynthetic efficiency parameters, and adjusting one or more of the photosynthetic efficiency parameters to drive the photosynthesis by the delivery of an amount of light to optimize light absorption of the photosynthetic organism while providing enough dark time between light pulses to prevent oversaturation of the chlorophyll reaction centers are disclosed.

  14. 10 CFR 429.70 - Alternative methods for determining energy efficiency or energy use.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 3 2012-01-01 2012-01-01 false Alternative methods for determining energy efficiency or energy use. 429.70 Section 429.70 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION CERTIFICATION....70 Alternative methods for determining energy efficiency or energy use. (a) General. A manufacturer...

  15. 10 CFR 429.70 - Alternative methods for determining energy efficiency or energy use.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Alternative methods for determining energy efficiency or energy use. 429.70 Section 429.70 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION CERTIFICATION....70 Alternative methods for determining energy efficiency or energy use. Link to an amendment...

  16. An efficient method for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    An efficient method of computation of the manipulator inertia matrix is presented. Using spatial notations, the method leads to the definition of the composite rigid-body spatial inertia, which is a spatial representation of the notion of augmented body. The previously proposed methods, the physical interpretations leading to their derivation, and their redundancies are analyzed. The proposed method achieves a greater efficiency by eliminating the redundancy in the intrinsic equations as well as by a better choice of coordinate frame for their projection. In this case, removing the redundancy leads to greater efficiency of the computation in both serial and parallel senses.

  17. A study of two subgrid-scale models and their effects on wake breakdown behind a wind turbine in uniform inflow

    NASA Astrophysics Data System (ADS)

    Martinez, Luis; Meneveau, Charles

    2014-11-01

    Large Eddy Simulations (LES) of the flow past a single wind turbine with uniform inflow have been performed. A goal of the simulations is to compare two turbulence subgrid-scale models and their effects in predicting the initial breakdown, transition and evolution of the wake behind the turbine. Prior works have often observed negligible sensitivities to subgrid-scale models. The flow is modeled using an in-house LES with pseudo-spectral discretization in horizontal planes and centered finite differencing in the vertical direction. Turbines are represented using the actuator line model. We compare the standard constant-coefficient Smagorinsky subgrid-scale model with the Lagrangian Scale Dependent Dynamic model (LSDM). The LSDM model predicts faster transition to turbulence in the wake, whereas the standard Smagorinsky model predicts significantly delayed transition. The specified Smagorinsky coefficient is larger than the dynamic one on average, increasing diffusion thus delaying transition. A second goal is to compare the resulting near-blade properties such as local aerodynamic forces from the LES with Blade Element Momentum Theory. Results will also be compared with those of the SOWFA package, the wind energy CFD framework from NREL. This work is supported by NSF (IGERT and IIA-1243482) and computations use XSEDE resources, and has benefitted from interactions with Dr. M. Churchfield of NREL.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vincenti, H.; Vay, J. -L.

    Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications for instance occur when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of arbitrary order Maxwell solver with domain decomposition technique that may under some conditions involve stencil truncations at subdomain boundaries, resulting in small spurious errors that do eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional arbitrary order finite-difference Maxwell solver, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of domain decomposition technique and PMLs, when these are used with very high-order Maxwell solver, as well as in the infinite order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. It also confirms that the domain decomposition technique can be used with very high-order Maxwell solver and a reasonably low number of guard cells with negligible effects on the whole accuracy of the simulation.

  19. 10 CFR 431.12 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... method or AEDM means, with respect to an electric motor, a method of calculating the total power loss and average full load efficiency. Average full load efficiency means the arithmetic mean of the full load efficiencies of a population of electric motors of duplicate design, where the full load efficiency of each...

  20. EPA’s Travel Efficiency Method (TEAM) AMPO Presentation

    EPA Pesticide Factsheets

    Presentation describes EPA’s Travel Efficiency Assessment Method (TEAM) assessing potential travel efficiency strategies for reducing travel activity and emissions, includes reduction estimates in Vehicle Miles Traveled in four different geographic areas.

  1. Data-Driven Benchmarking of Building Energy Efficiency Utilizing Statistical Frontier Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kavousian, A; Rajagopal, R

    2014-01-01

    Frontier methods quantify the energy efficiency of buildings by forming an efficient frontier (best-practice technology) and by comparing all buildings against that frontier. Because energy consumption fluctuates over time, the efficiency scores are stochastic random variables. Existing applications of frontier methods in energy efficiency either treat efficiency scores as deterministic values or estimate their uncertainty by resampling from one set of measurements. Availability of smart meter data (repeated measurements of energy consumption of buildings) enables using actual data to estimate the uncertainty in efficiency scores. Additionally, existing applications assume a linear form for an efficient frontier; i.e., they assume that the best-practice technology scales up and down proportionally with building characteristics. However, previous research shows that buildings are nonlinear systems. This paper proposes a statistical method called stochastic energy efficiency frontier (SEEF) to estimate a bias-corrected efficiency score and its confidence intervals from measured data. The paper proposes an algorithm to specify the functional form of the frontier, identify the probability distribution of the efficiency score of each building using measured data, and rank buildings based on their energy efficiency. To illustrate the power of SEEF, this paper presents the results from applying SEEF on a smart meter data set of 307 residential buildings in the United States. SEEF efficiency scores are used to rank individual buildings based on energy efficiency, to compare subpopulations of buildings, and to identify irregular behavior of buildings across different time-of-use periods. SEEF is an improvement to the energy-intensity method (comparing kWh/sq.ft.): whereas SEEF identifies efficient buildings across the entire spectrum of building sizes, the energy-intensity method showed bias toward smaller buildings. The results of this research are expected to assist researchers and practitioners compare and rank (i.e., benchmark) buildings more robustly and over a wider range of building types and sizes. Eventually, doing so is expected to result in improved resource allocation in energy-efficiency programs.

  2. Chapter 20: Data Center IT Efficiency Measures Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Huang, Robert; Masanet, Eric

    This chapter focuses on IT measures in the data center and examines the techniques and analysis methods used to verify savings that result from improving the efficiency of two specific pieces of IT equipment: servers and data storage.

  3. Entanglement verification with detection efficiency mismatch

    NASA Astrophysics Data System (ADS)

    Zhang, Yanbao; Lütkenhaus, Norbert

    Entanglement is a necessary condition for secure quantum key distribution (QKD). When there is an efficiency mismatch between various detectors used in the QKD system, it is still an open problem how to verify entanglement. Here we present a method to address this problem, given that the detection efficiency mismatch is characterized and known. The method works without assuming an upper bound on the number of photons going to each threshold detector. Our results suggest that the efficiency mismatch affects the ability to verify entanglement: the larger the efficiency mismatch is, the smaller the set of entangled states that can be verified becomes. When there is no mismatch, our method can verify entanglement even if the method based on squashing maps [PRL 101, 093601 (2008)] fails.

  4. Efficient parameter estimation in longitudinal data analysis using a hybrid GEE method.

    PubMed

    Leung, Denis H Y; Wang, You-Gan; Zhu, Min

    2009-07-01

    The method of generalized estimating equations (GEEs) provides consistent estimates of the regression parameters in a marginal regression model for longitudinal data, even when the working correlation model is misspecified (Liang and Zeger, 1986). However, the efficiency of a GEE estimate can be seriously affected by the choice of the working correlation model. This study addresses this problem by proposing a hybrid method that combines multiple GEEs based on different working correlation models, using the empirical likelihood method (Qin and Lawless, 1994). Analyses show that this hybrid method is more efficient than a GEE using a misspecified working correlation model. Furthermore, if one of the working correlation structures correctly models the within-subject correlations, then this hybrid method provides the most efficient parameter estimates. In simulations, the hybrid method's finite-sample performance is superior to a GEE under any of the commonly used working correlation models and is almost fully efficient in all scenarios studied. The hybrid method is illustrated using data from a longitudinal study of the respiratory infection rates in 275 Indonesian children.

  5. Development of an Itemwise Efficiency Scoring Method: Concurrent, Convergent, Discriminant, and Neuroimaging-Based Predictive Validity Assessed in a Large Community Sample

    PubMed Central

    Moore, Tyler M.; Reise, Steven P.; Roalf, David R.; Satterthwaite, Theodore D.; Davatzikos, Christos; Bilker, Warren B.; Port, Allison M.; Jackson, Chad T.; Ruparel, Kosha; Savitt, Adam P.; Baron, Robert B.; Gur, Raquel E.; Gur, Ruben C.

    2016-01-01

    Traditional “paper-and-pencil” testing is imprecise in measuring speed and hence limited in assessing performance efficiency, but computerized testing permits precision in measuring itemwise response time. We present a method of scoring performance efficiency (combining information from accuracy and speed) at the item level. Using a community sample of 9,498 youths age 8-21, we calculated item-level efficiency scores on four neurocognitive tests, and compared the concurrent, convergent, discriminant, and predictive validity of these scores to simple averaging of standardized speed and accuracy-summed scores. Concurrent validity was measured by the scores' abilities to distinguish men from women and their correlations with age; convergent and discriminant validity were measured by correlations with other scores inside and outside of their neurocognitive domains; predictive validity was measured by correlations with brain volume in regions associated with the specific neurocognitive abilities. Results provide support for the ability of itemwise efficiency scoring to detect signals as strong as those detected by standard efficiency scoring methods. We find no evidence of superior validity of the itemwise scores over traditional scores, but point out several advantages of the former. The itemwise efficiency scoring method shows promise as an alternative to standard efficiency scoring methods, with overall moderate support from tests of four different types of validity. This method allows the use of existing item analysis methods and provides the convenient ability to adjust the overall emphasis of accuracy versus speed in the efficiency score, thus adjusting the scoring to the real-world demands the test is aiming to fulfill. PMID:26866796
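    A hedged sketch of what an item-level efficiency score combining accuracy and speed might look like: both components are standardized per item across subjects, and a single weight controls the accuracy-versus-speed emphasis mentioned in the abstract. The weighting and standardization here are illustrative assumptions, not the authors' exact scoring rule.

    ```python
    import numpy as np

    def itemwise_efficiency(accuracy, rt, speed_weight=0.5):
        """Item-level efficiency scores from accuracy and response time.

        accuracy     : (subjects x items) array of 0/1 correctness
        rt           : (subjects x items) array of response times in seconds
        speed_weight : relative emphasis on speed versus accuracy
        Speed enters as the negative log response time so faster responses
        score higher; both parts are z-scored per item across subjects.
        """
        z = lambda a: (a - a.mean(axis=0)) / a.std(axis=0)
        return ((1.0 - speed_weight) * z(accuracy.astype(float))
                + speed_weight * z(-np.log(rt)))

    rng = np.random.default_rng(0)
    acc = rng.integers(0, 2, size=(100, 20))                 # fake correctness data
    rt = rng.lognormal(mean=0.0, sigma=0.3, size=(100, 20))  # fake response times
    print(itemwise_efficiency(acc, rt).shape)                # one score per subject per item
    ```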

  6. Multi-fidelity stochastic collocation method for computation of statistical moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Xueyu, E-mail: xueyu-zhu@uiowa.edu; Linebarger, Erin M., E-mail: aerinline@sci.utah.edu; Xiu, Dongbin, E-mail: xiu.16@osu.edu

    We present an efficient numerical algorithm to approximate the statistical moments of stochastic problems, in the presence of models with different fidelities. The method extends the multi-fidelity approximation method developed in earlier work. By combining the efficiency of low-fidelity models and the accuracy of high-fidelity models, our method exhibits fast convergence with a limited number of high-fidelity simulations. We establish an error bound of the method and present several numerical examples to demonstrate the efficiency and applicability of the multi-fidelity algorithm.

  7. An evaluation of the efficiency of cleaning methods in a bacon factory

    PubMed Central

    Dempster, J. F.

    1971-01-01

    The germicidal efficiencies of hot water (140-150° F.) under pressure (method 1), hot water + 2% (w/v) detergent solution (method 2) and hot water + detergent + 200 p.p.m. solution of available chlorine (method 3) were compared at six sites in a bacon factory. Results indicated that sites 1 and 2 (tiled walls) were satisfactorily cleaned by each method. It was therefore considered more economical to clean such surfaces routinely by method 1. However, this method was much less efficient (31% survival of micro-organisms) on site 3 (wooden surface) than methods 2 (7% survival) and 3 (1% survival). Likewise the remaining sites (dehairing machine, black scraper and table) were least efficiently cleaned by method 1. The most satisfactory results were obtained when these surfaces were treated by method 3. Pig carcasses were shown to be contaminated by an improperly cleaned black scraper. Repeated cleaning and sterilizing (method 3) of this equipment reduced the contamination on carcasses from about 70% to less than 10%. PMID:5291745

  8. Semi-automating the manual literature search for systematic reviews increases efficiency.

    PubMed

    Chapman, Andrea L; Morgan, Laura C; Gartlehner, Gerald

    2010-03-01

    To minimise retrieval bias, manual literature searches are a key part of the search process of any systematic review. Considering the need to have accurate information, valid results of the manual literature search are essential to ensure scientific standards; likewise, efficient approaches that minimise the amount of personnel time required to conduct a manual literature search are of great interest. The objective of this project was to determine the validity and efficiency of a new manual search method that utilises the Scopus database. We used the traditional manual search approach as the gold standard to determine the validity and efficiency of the proposed Scopus method. Outcome measures included completeness of article detection and personnel time involved. Using both methods independently, we compared the results based on accuracy of the results (validity) and time spent conducting the search (efficiency). Regarding accuracy, the Scopus method identified the same studies as the traditional approach, indicating its validity. In terms of efficiency, using Scopus led to a time saving of 62.5% compared with the traditional approach (3 h versus 8 h). The Scopus method can significantly improve the efficiency of manual searches and thus of systematic reviews.

  9. Division of methods for counting helminths' eggs and the problem of efficiency of these methods.

    PubMed

    Jaromin-Gleń, Katarzyna; Kłapeć, Teresa; Łagód, Grzegorz; Karamon, Jacek; Malicki, Jacek; Skowrońska, Agata; Bieganowski, Andrzej

    2017-03-21

    From the sanitary and epidemiological standpoint, information concerning the developmental forms of intestinal parasites, especially the eggs of helminths present in the environment in water, soil, sandpits, sewage sludge and crops watered with wastewater, is very important. The methods described in the relevant literature may be classified in various ways, primarily according to how samples from environmental matrices are prepared for analysis, and according to the counting methods and the chambers/instruments used for this purpose. In addition, the research methods can be classified by the way and time at which the counted individuals are identified, or by whether staining is necessary. Standard methods for identification of helminths' eggs from environmental matrices are usually characterized by low efficiency, i.e. from 30% to approximately 80%. The efficiency of the applied method may be measured in two ways, either by the internal-standard method or by the 'Split/Spike' method. When the efficiency of the method and the number of eggs are measured simultaneously in an examined object, the 'actual' number of eggs may be calculated by multiplying the number of helminth eggs detected by the inverse of the efficiency.
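    The correction described in the last sentence amounts to scaling the observed count by the inverse of the measured recovery efficiency. A minimal sketch with made-up numbers:

    ```python
    def corrected_egg_count(observed_count, recovered_spike, added_spike):
        """Correct a helminth-egg count for the recovery efficiency of the method.

        The efficiency is estimated from a spiked (or internal-standard) sample as
        recovered_spike / added_spike; the 'actual' count is the observed count
        multiplied by the inverse of that efficiency.
        """
        efficiency = recovered_spike / added_spike
        return observed_count / efficiency, efficiency

    # e.g. 24 eggs counted while only 60 of 100 spiked eggs were recovered
    count, eff = corrected_egg_count(24, recovered_spike=60, added_spike=100)
    print(eff, count)   # efficiency 0.6 -> an estimated 40 eggs actually present
    ```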

  10. Assessing Regional Emissions Reductions from Travel Efficiency: Applying the Travel Efficiency Assessment Method

    EPA Pesticide Factsheets

    This presentation from the 2016 TRB Summer Conference on Transportation Planning and Air Quality summarizes the application of the Travel Efficiency Assessment Method (TEAM) which analyzed selected transportation emission reduction strategies in three case

  11. 10 CFR 431.324 - Uniform test method for the measurement of energy efficiency of metal halide ballasts.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... efficiency of metal halide ballasts. 431.324 Section 431.324 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Metal Halide Lamp Ballasts and Fixtures Test Procedures § 431.324 Uniform test method for the measurement of energy efficiency of metal...

  12. Efficiency of personal dosimetry methods in vascular interventional radiology.

    PubMed

    Bacchim Neto, Fernando Antonio; Alves, Allan Felipe Fattori; Mascarenhas, Yvone Maria; Giacomini, Guilherme; Maués, Nadine Helena Pelegrino Bastos; Nicolucci, Patrícia; de Freitas, Carlos Clayton Macedo; Alvarez, Matheus; Pina, Diana Rodrigues de

    2017-05-01

    The aim of the present study was to determine the efficiency of six methods for calculating the effective dose (E) received by health professionals during vascular interventional procedures. We evaluated the efficiency of six methods that are currently used to estimate professionals' E, based on national and international recommendations for interventional radiology. Equivalent doses on the head, neck, chest, abdomen, feet, and hands of seven professionals were monitored during 50 vascular interventional radiology procedures. Professionals' E was calculated for each procedure according to six methods that are commonly employed internationally. To determine the best method, a more efficient E calculation method was used to determine the reference value (reference E) for comparison. The highest equivalent doses were found for the hands (0.34±0.93 mSv). The two methods that are described by Brazilian regulations overestimated E by approximately 100% and 200%. The more efficient method was the one that is recommended by the United States National Council on Radiological Protection and Measurements (NCRP). The mean and median differences of this method relative to reference E were close to 0%, and its standard deviation was the lowest among the six methods. The present study showed that the most precise method was the one that is recommended by the NCRP, which uses two dosimeters (one over and one under protective aprons). The use of methods that employ at least two dosimeters is more efficient and provides better information regarding estimates of E and doses for shielded and unshielded regions. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
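    A sketch of the two-dosimeter (over/under apron) estimate of E referred to above. The weighting coefficients follow the commonly cited NCRP-style algorithm E = 0.5*H_under + 0.025*H_over; since the abstract does not state the exact coefficients used in the study, treat them as assumptions for illustration only.

    ```python
    def effective_dose_two_dosimeters(h_under_apron, h_over_apron,
                                      w_under=0.5, w_over=0.025):
        """Two-dosimeter estimate of effective dose E (same units as the inputs).

        h_under_apron : reading of the dosimeter worn under the protective apron
        h_over_apron  : reading of the dosimeter worn over the apron (collar)
        The default weights are the commonly quoted values and are an assumption
        here, not taken from the paper.
        """
        return w_under * h_under_apron + w_over * h_over_apron

    # one hypothetical procedure, readings in mSv
    print(effective_dose_two_dosimeters(h_under_apron=0.05, h_over_apron=0.34))  # 0.0335
    ```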

  13. High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin

    2016-01-01

    Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding with an efficiency of 93.7%.
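    Reconciliation efficiency is conventionally the ratio of the code rate to the mutual information of the Gaussian channel at the operating signal-to-noise ratio. The sketch below only evaluates that ratio; the rate and SNR values are invented and chosen merely to land near the reported 93.7%.

    ```python
    import math

    def reconciliation_efficiency(code_rate, snr):
        """beta = R / I(X;Y) with I(X;Y) = 0.5*log2(1 + SNR) for a Gaussian channel."""
        capacity = 0.5 * math.log2(1.0 + snr)
        return code_rate / capacity

    print(reconciliation_efficiency(code_rate=0.02, snr=0.03))   # ~0.94
    ```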

  14. Towards a Highly Efficient Meshfree Simulation of Non-Newtonian Free Surface Ice Flow: Application to the Haut Glacier d'Arolla

    NASA Astrophysics Data System (ADS)

    Shcherbakov, V.; Ahlkrona, J.

    2016-12-01

    In this work we develop a highly efficient meshfree approach to ice sheet modeling. Traditionally, mesh-based methods such as finite element methods are employed to simulate glacier and ice sheet dynamics. These methods are mature and well developed. However, despite numerous advantages, these methods suffer from some drawbacks, such as the necessity to remesh the computational domain every time it changes its shape, which significantly complicates the implementation on moving domains, or a costly assembly procedure for nonlinear problems. We introduce a novel meshfree approach that frees us from all these issues. The approach is built upon a radial basis function (RBF) method that, thanks to its meshfree nature, allows for an efficient handling of moving margins and the free ice surface. RBF methods are also accurate and easy to implement. Since the formulation is stated in strong form, it allows for a substantial reduction of the computational cost associated with the linear system assembly inside the nonlinear solver. We implement a global RBF method that defines an approximation on the entire computational domain. This method exhibits high accuracy properties. However, it suffers from the disadvantage that the coefficient matrix is dense, and therefore the computational efficiency decreases. In order to overcome this issue we also implement a localized RBF method that rests upon a partition of unity approach to subdivide the domain into several smaller subdomains. The radial basis function partition of unity method (RBF-PUM) inherits high approximation characteristics from the global RBF method while resulting in a sparse system of equations, which substantially increases the computational efficiency. To demonstrate the usefulness of the RBF methods we model the velocity field of ice flow in the Haut Glacier d'Arolla. We assume that the flow is governed by the nonlinear Blatter-Pattyn equations. We test the methods for different basal conditions and for a free moving surface. Both RBF methods are compared with a classical finite element method in terms of accuracy and efficiency. We find that the RBF methods are more efficient than the finite element method and well suited for ice dynamics modeling, especially the partition of unity approach.
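    A minimal sketch of the global-RBF ingredient described above: scattered-data interpolation with multiquadric basis functions and a dense collocation matrix. The ice-flow equations, basal conditions and the RBF-PUM partitioning are not reproduced; the node set and test field are arbitrary.

    ```python
    import numpy as np

    def multiquadric(r, eps=2.0):
        return np.sqrt(1.0 + (eps * r) ** 2)

    rng = np.random.default_rng(1)
    nodes = rng.uniform(0.0, 1.0, size=(50, 2))              # scattered nodes
    f = np.sin(2 * np.pi * nodes[:, 0]) * nodes[:, 1]        # sampled field

    # dense global RBF system A w = f (RBF-PUM would make this sparse)
    dist = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    weights = np.linalg.solve(multiquadric(dist), f)

    def evaluate(points):
        d = np.linalg.norm(points[:, None, :] - nodes[None, :, :], axis=-1)
        return multiquadric(d) @ weights

    test = np.array([[0.3, 0.7]])
    print(evaluate(test), np.sin(2 * np.pi * 0.3) * 0.7)     # should be close
    ```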

  15. 10 CFR 431.107 - Uniform test method for the measurement of energy efficiency of commercial heat pump water...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Uniform test method for the measurement of energy efficiency of commercial heat pump water heaters. [Reserved] 431.107 Section 431.107 Energy DEPARTMENT OF....107 Uniform test method for the measurement of energy efficiency of commercial heat pump water heaters...

  16. 10 CFR 431.107 - Uniform test method for the measurement of energy efficiency of commercial heat pump water...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 3 2013-01-01 2013-01-01 false Uniform test method for the measurement of energy efficiency of commercial heat pump water heaters. [Reserved] 431.107 Section 431.107 Energy DEPARTMENT OF....107 Uniform test method for the measurement of energy efficiency of commercial heat pump water heaters...

  17. 10 CFR 431.107 - Uniform test method for the measurement of energy efficiency of commercial heat pump water...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Uniform test method for the measurement of energy efficiency of commercial heat pump water heaters. [Reserved] 431.107 Section 431.107 Energy DEPARTMENT OF....107 Uniform test method for the measurement of energy efficiency of commercial heat pump water heaters...

  18. Calorimetric Measurement for Internal Conversion Efficiency of Photovoltaic Cells/Modules Based on Electrical Substitution Method

    NASA Astrophysics Data System (ADS)

    Saito, Terubumi; Tatsuta, Muneaki; Abe, Yamato; Takesawa, Minato

    2018-02-01

    We have succeeded in the direct measurement of solar cell/module internal conversion efficiency based on a calorimetric method or electrical substitution method, by which the absorbed radiant power is determined by replacing the heat absorbed in the cell/module with electrical power. The technique is advantageous in that the reflectance and transmittance measurements, which are required in the conventional methods, are not necessary. Also, the internal quantum efficiency can be derived from conversion efficiencies by using the average photon energy. Agreement of the measured data with the values estimated from the nominal values supports the validity of this technique.

  19. A rapid, highly efficient and economical method of Agrobacterium-mediated in planta transient transformation in living onion epidermis.

    PubMed

    Xu, Kedong; Huang, Xiaohui; Wu, Manman; Wang, Yan; Chang, Yunxia; Liu, Kun; Zhang, Ju; Zhang, Yi; Zhang, Fuli; Yi, Liming; Li, Tingting; Wang, Ruiyue; Tan, Guangxuan; Li, Chengwei

    2014-01-01

    Transient transformation is simpler, more efficient and more economical for analyzing protein subcellular localization than stable transformation. Fluorescent fusion proteins are often used in transient transformation to follow the in vivo behavior of proteins. Onion epidermis, which has large, living and transparent cells in a monolayer, is suitable for visualizing fluorescent fusion proteins. Commonly used transient transformation methods include particle bombardment, protoplast transfection and Agrobacterium-mediated transformation. Particle bombardment in onion epidermis was successfully established; however, it is expensive, dependent on biolistic equipment, and has low transformation efficiency. We developed a highly efficient in planta transient transformation method in onion epidermis by using a special agroinfiltration method, which can be completed within 5 days from the pretreatment of the onion bulb to the best time-point for analyzing gene expression. The transformation conditions were optimized to achieve 43.87% transformation efficiency in living onion epidermis. The developed method has advantages in cost, time consumption, equipment dependency and transformation efficiency compared with particle bombardment in onion epidermal cells, protoplast transfection, and Agrobacterium-mediated transient transformation in leaf epidermal cells of other plants. It will facilitate the analysis of protein subcellular localization on a large scale.

  20. Design of compact and ultra efficient aspherical lenses for extended Lambertian sources in two-dimensional geometry

    PubMed Central

    Wu, Rengmao; Hua, Hong; Benítez, Pablo; Miñano, Juan C.; Liang, Rongguang

    2016-01-01

    The energy efficiency and compactness of an illumination system are two main concerns in illumination design for extended sources. In this paper, we present two methods to design compact, ultra efficient aspherical lenses for extended Lambertian sources in two-dimensional geometry. The light rays are directed by using two aspherical surfaces in the first method and one aspherical surface along with an optimized parabola in the second method. The principles and procedures of each design method are introduced in detail. Three examples are presented to demonstrate the effectiveness of these two methods in terms of performance and capacity in designing compact, ultra efficient aspherical lenses. The comparisons made between the two proposed methods indicate that the second method is much simpler and easier to be implemented, and has an excellent extensibility to three-dimensional designs. PMID:29092336

  1. An approximate solution to improve computational efficiency of impedance-type payload load prediction

    NASA Technical Reports Server (NTRS)

    White, C. W.

    1981-01-01

    The computational efficiency of the impedance type loads prediction method was studied. Three goals were addressed: devise a method to make the impedance method operate more efficiently in the computer; assess the accuracy and convenience of the method for determining the effect of design changes; and investigate the use of the method to identify design changes for reduction of payload loads. The method is suitable for calculation of dynamic response in either the frequency or time domain. It is concluded that: the choice of an orthogonal coordinate system will allow the impedance method to operate more efficiently in the computer; the approximate mode impedance technique is adequate for determining the effect of design changes, and is applicable for both statically determinate and statically indeterminate payload attachments; and beneficial design changes to reduce payload loads can be identified by the combined application of impedance techniques and energy distribution review techniques.

  2. Recurrent neural network based virtual detection line

    NASA Astrophysics Data System (ADS)

    Kadikis, Roberts

    2018-04-01

    The paper proposes an efficient method for detection of moving objects in the video. The objects are detected when they cross a virtual detection line. Only the pixels of the detection line are processed, which makes the method computationally efficient. A Recurrent Neural Network processes these pixels. The machine learning approach allows one to train a model that works in different and changing outdoor conditions. Also, the same network can be trained for various detection tasks, which is demonstrated by the tests on vehicle and people counting. In addition, the paper proposes a method for semi-automatic acquisition of labeled training data. The labeling method is used to create training and testing datasets, which in turn are used to train and evaluate the accuracy and efficiency of the detection method. The method shows similar accuracy as the alternative efficient methods but provides greater adaptability and usability for different tasks.

  3. 77 FR 32038 - Energy Conservation Program: Alternative Efficiency Determination Methods and Alternative Rating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-31

    ... Determination Methods and Alternative Rating Methods AGENCY: Office of Energy Efficiency and Renewable Energy... proposing to revise and expand its existing regulations governing the use of particular methods as...- TP-0024, by any of the following methods: Email: to AED/[email protected] . Include EERE...

  4. A modified indirect mathematical model for evaluation of ethanol production efficiency in industrial-scale continuous fermentation processes.

    PubMed

    Canseco Grellet, M A; Castagnaro, A; Dantur, K I; De Boeck, G; Ahmed, P M; Cárdenas, G J; Welin, B; Ruiz, R M

    2016-10-01

    To calculate fermentation efficiency in a continuous ethanol production process, we aimed to develop a robust mathematical method based on the analysis of metabolic by-product formation. This method is in contrast to the traditional way of calculating ethanol fermentation efficiency, where the ratio between the ethanol produced and the sugar consumed is expressed as a percentage of the theoretical conversion yield. Comparison between the two methods, at industrial scale and in sensitivity studies, showed that the indirect method was more robust and gave slightly higher fermentation efficiency values, although fermentation efficiency of the industrial process was found to be low (~75%). The traditional calculation method is simpler than the indirect method as it only requires a few chemical determinations in samples collected. However, a minor error in any measured parameter will have an important impact on the calculated efficiency. In contrast, the indirect method of calculation requires a greater number of determinations but is much more robust since an error in any parameter will only have a minor effect on the fermentation efficiency value. The application of the indirect calculation methodology in order to evaluate the real situation of the process and to reach an optimum fermentation yield for an industrial-scale ethanol production is recommended. Once a high fermentation yield has been reached the traditional method should be used to maintain the control of the process. Upon detection of lower yields in an optimized process the indirect method should be employed as it permits a more accurate diagnosis of causes of yield losses in order to correct the problem rapidly. The low fermentation efficiency obtained in this study shows an urgent need for industrial process optimization where the indirect calculation methodology will be an important tool to determine process losses. © 2016 The Society for Applied Microbiology.
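    For reference, the traditional (direct) efficiency calculation mentioned in the abstract reduces to a single ratio; the stoichiometric yield of roughly 0.511 g ethanol per g of hexose is the usual theoretical conversion factor, and the sample numbers below are invented. The indirect by-product-balance method of the paper requires additional analytical data and is not reproduced here.

    ```python
    def traditional_fermentation_efficiency(ethanol_g_per_l, sugar_consumed_g_per_l,
                                            theoretical_yield=0.511):
        """Ethanol produced divided by the ethanol obtainable from the consumed
        sugar at the theoretical yield, expressed as a percentage."""
        return 100.0 * ethanol_g_per_l / (sugar_consumed_g_per_l * theoretical_yield)

    # e.g. 60 g/L ethanol obtained from 156 g/L of sugars consumed
    print(traditional_fermentation_efficiency(60.0, 156.0))   # ~75%
    ```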

  5. Compatibility of Segments of Thermoelectric Generators

    NASA Technical Reports Server (NTRS)

    Snyder, G. Jeffrey; Ursell, Tristan

    2009-01-01

    A method of calculating (usually for the purpose of maximizing) the power-conversion efficiency of a segmented thermoelectric generator is based on equations derived from the fundamental equations of thermoelectricity. Because it is directly traceable to first principles, the method provides physical explanations in addition to predictions of phenomena involved in segmentation. In comparison with the finite-element method used heretofore to predict (without being able to explain) the behavior of a segmented thermoelectric generator, this method is much simpler to implement in practice: in particular, the efficiency of a segmented thermoelectric generator can be estimated with this method by evaluating equations using only a hand-held calculator. In addition, the method provides for determination of cascading ratios. The concept of cascading is illustrated in a figure accompanying the original article, and the cascading ratio is defined in that figure's caption. An important aspect of the method is its approach to the issue of compatibility among segments, in combination with introduction of the concept of compatibility within a segment. Prior approaches involved the use of only averaged material properties. Two materials in direct contact could be examined for compatibility with each other, but there was no general framework for analysis of compatibility. The present method establishes such a framework. The mathematical derivation of the method begins with the definition of reduced efficiency of a thermoelectric generator as the ratio between (1) its thermal-to-electric power-conversion efficiency and (2) its Carnot efficiency (the maximum efficiency theoretically attainable, given its hot- and cold-side temperatures). The derivation involves calculation of the reduced efficiency of a model thermoelectric generator for which the hot-side temperature is only infinitesimally greater than the cold-side temperature. The derivation includes consideration of the ratio (u) between the electric current and heat-conduction power and leads to the concept of compatibility factor (s) for a given thermoelectric material, defined as the value of u that maximizes the reduced efficiency of the aforementioned model thermoelectric generator.
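    A short sketch of the compatibility factor computed from standard material properties, where s is the value of the relative current density u that maximizes the reduced efficiency; the expression s = (sqrt(1 + zT) - 1)/(alpha*T) is the standard formula, and the material numbers are generic assumptions rather than values from the report.

    ```python
    import math

    def compatibility_factor(seebeck, resistivity, conductivity, T):
        """s = (sqrt(1 + zT) - 1) / (alpha * T)  [1/V]

        seebeck      : Seebeck coefficient alpha (V/K)
        resistivity  : electrical resistivity rho (ohm m)
        conductivity : thermal conductivity kappa (W/m/K)
        T            : absolute temperature (K)
        Segments whose s values differ strongly cannot both operate near their
        optimum, which is the compatibility issue discussed in the abstract.
        """
        zT = seebeck ** 2 * T / (resistivity * conductivity)
        return (math.sqrt(1.0 + zT) - 1.0) / (seebeck * T)

    print(compatibility_factor(seebeck=200e-6, resistivity=1e-5,
                               conductivity=1.5, T=600.0))   # a few 1/V
    ```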

  6. Combustor kinetic energy efficiency analysis of the hypersonic research engine data

    NASA Astrophysics Data System (ADS)

    Hoose, K. V.

    1993-11-01

    A one-dimensional method for measuring combustor performance is needed to facilitate the design and development of scramjet engines. A one-dimensional kinetic energy efficiency method is used for measuring inlet and nozzle performance. The objective of this investigation was to assess the use of kinetic energy efficiency as an indicator for scramjet combustor performance. A combustor kinetic energy efficiency analysis was performed on the Hypersonic Research Engine (HRE) data. The HRE data was chosen for this analysis due to its thorough documentation and availability. The combustor, inlet, and nozzle kinetic energy efficiency values were utilized to determine an overall engine kinetic energy efficiency. Finally, a kinetic energy effectiveness method was developed to eliminate thermochemical losses from the combustion of fuel and air. All calculated values exhibit consistency over the flight speed range. Effects from fuel injection, altitude, angle of attack, subsonic-supersonic combustion transition, and inlet spike position are shown and discussed. The results of analyzing the HRE data indicate that the kinetic energy efficiency method is effective as a measure of scramjet combustor performance.

  7. A flow calorimeter for determining combustion efficiency from residual enthalpy of exhaust gases

    NASA Technical Reports Server (NTRS)

    Evans, Albert; Hibbard, Robert R

    1954-01-01

    A flow calorimeter for determining the combustion efficiency of turbojet and ram-jet combustors from measurement of the residual enthalpy of combustion of the exhaust gas is described. Briefly, the calorimeter catalytically oxidizes the combustible constituents of exhaust-gas samples, and the resultant temperature rise is measured. This temperature rise is related to the residual enthalpy of combustion of the sample by previous calibration of the calorimeter. Combustion efficiency can be calculated from a knowledge of the residual enthalpy of the exhaust gas and the combustor input enthalpy. An accuracy of ±0.2 Btu per cubic foot was obtained with prepared fuel-air mixtures, and the combustion efficiencies of single turbojet combustors measured by both the flow-calorimeter and heat-balance methods compared within 3 percentage units. Flow calorimetry appears to be a suitable method for determining combustion efficiencies at high combustor temperatures where ordinary thermocouples cannot be used. The method is fundamentally more accurate than heat-balance methods at high combustion efficiencies and can be used to verify near-100-percent efficiency data.
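    The efficiency calculation described above reduces to one ratio once the residual and input enthalpies are expressed on the same basis; a minimal sketch with illustrative numbers:

    ```python
    def combustion_efficiency(residual_enthalpy, input_enthalpy):
        """Combustion efficiency = 1 - residual/input.

        residual_enthalpy : unreleased heat of combustion remaining in the exhaust
        input_enthalpy    : heat of combustion supplied to the combustor
        Both on the same basis (e.g. per unit mass of mixture); units here are
        illustrative assumptions.
        """
        return 1.0 - residual_enthalpy / input_enthalpy

    print(combustion_efficiency(residual_enthalpy=2.0, input_enthalpy=100.0))   # 0.98
    ```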

  8. A conjugate gradient method with descent properties under strong Wolfe line search

    NASA Astrophysics Data System (ADS)

    Zull, N.; ‘Aini, N.; Shoid, S.; Ghani, N. H. A.; Mohamed, N. S.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is one of the optimization methods that are often used in practical applications. The continuous and numerous studies conducted on the CG method have led to vast improvements in its convergence properties and efficiency. In this paper, a new CG method possessing the sufficient descent and global convergence properties is proposed. The efficiency of the new CG algorithm relative to existing CG methods is evaluated by testing them all on a set of test functions using MATLAB. The tests are measured in terms of iteration numbers and CPU time under the strong Wolfe line search. Overall, the new method performs efficiently and is comparable to the other well-known methods.
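
    The abstract does not give the new beta formula, so the sketch below only illustrates the general setting it describes: a nonlinear CG iteration driven by a strong-Wolfe line search (here SciPy's line_search, with the classical Fletcher-Reeves beta as a stand-in for the paper's coefficient).

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def cg_strong_wolfe(f, grad, x0, tol=1e-6, max_iter=500):
    """Nonlinear CG with a strong-Wolfe line search; Fletcher-Reeves beta."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # SciPy's line_search enforces the strong Wolfe conditions.
        alpha, *_ = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)
        if alpha is None:            # line search failed: restart along -grad
            d = -g
            alpha, *_ = line_search(f, grad, x, d, gfk=g)
            if alpha is None:
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

print(cg_strong_wolfe(rosen, rosen_der, [-1.2, 1.0]))   # approaches [1, 1]
```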

  9. Comparison of Relative Bias, Precision, and Efficiency of Sampling Methods for Natural Enemies of Soybean Aphid (Hemiptera: Aphididae).

    PubMed

    Bannerman, J A; Costamagna, A C; McCornack, B P; Ragsdale, D W

    2015-06-01

    Generalist natural enemies play an important role in controlling soybean aphid, Aphis glycines (Hemiptera: Aphididae), in North America. Several sampling methods are used to monitor natural enemy populations in soybean, but there has been little work investigating their relative bias, precision, and efficiency. We compare five sampling methods: quadrats, whole-plant counts, sweep-netting, walking transects, and yellow sticky cards to determine the most practical methods for sampling the three most prominent species, which included Harmonia axyridis (Pallas), Coccinella septempunctata L. (Coleoptera: Coccinellidae), and Orius insidiosus (Say) (Hemiptera: Anthocoridae). We show an important time-by-sampling-method interaction, indicated by diverging community similarities within and between sampling methods as the growing season progressed. Similarly, correlations between sampling methods for the three most abundant species over multiple time periods indicated differences in relative bias between sampling methods and suggest that bias is not consistent throughout the growing season, particularly for sticky cards and whole-plant samples. Furthermore, we show that sticky cards produce strongly biased capture rates relative to the other four sampling methods. Precision and efficiency differed between sampling methods; sticky cards produced the most precise (but highly biased) results for adult natural enemies, while walking transects and whole-plant counts were the most efficient methods for detecting coccinellids and O. insidiosus, respectively. Based on bias, precision, and efficiency considerations, the most practical sampling methods for monitoring in soybean include walking transects for coccinellid detection and whole-plant counts for detection of small predators like O. insidiosus. Sweep-netting and quadrat samples are also useful for some applications, when efficiency is not paramount. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. A Comparative Investigation of the Efficiency of Two Classroom Observational Methods.

    ERIC Educational Resources Information Center

    Kissel, Mary Ann

    The problem of this study was to determine whether Method A is a more efficient observational method for obtaining activity type behaviors in an individualized classroom than Method B. Method A requires the observer to record the activities of the entire class at given intervals while Method B requires only the activities of selected individuals…

  11. An Efficient Numerical Method for Computing Synthetic Seismograms for a Layered Half-space with Sources and Receivers at Close or Same Depths

    NASA Astrophysics Data System (ADS)

    Zhang, H.-m.; Chen, X.-f.; Chang, S.

    - It is difficult to compute synthetic seismograms for a layered half-space with sources and receivers at close to or the same depths using the generalized R/T coefficient method (Kennett, 1983; Luco and Apsel, 1983; Yao and Harkrider, 1983; Chen, 1993), because the wavenumber integration converges very slowly. A semi-analytic method for accelerating the convergence, in which part of the integration is implemented analytically, was adopted by some authors (Apsel and Luco, 1983; Hisada, 1994, 1995). In this study, based on the principle of the Repeated Averaging Method (Dahlquist and Björck, 1974; Chang, 1988), we propose an alternative, efficient, numerical method, the peak-trough averaging method (PTAM), to overcome the difficulty mentioned above. Compared with the semi-analytic method, PTAM is not only much simpler mathematically and easier to implement in practice, but also more efficient. Using numerical examples, we illustrate the validity, accuracy and efficiency of the new method.
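
    As a minimal illustration of the repeated-averaging principle underlying PTAM (not the wavenumber-integration implementation of the paper), the sketch below evaluates the slowly convergent oscillatory integral of sin(x)/x by summing segments between successive zeros and then repeatedly averaging the resulting partial sums.

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.sinc(x / np.pi)             # sin(x)/x, finite at x = 0

nodes = np.pi * np.arange(0, 13)             # successive zeros of sin(x)
segments = [quad(f, a, b)[0] for a, b in zip(nodes[:-1], nodes[1:])]
partial = np.cumsum(segments)                # partial sums oscillate about pi/2

table = partial.copy()
while len(table) > 1:                        # repeated pairwise averaging
    table = 0.5 * (table[:-1] + table[1:])

print(partial[-1], table[0], np.pi / 2)      # raw tail vs accelerated vs exact
```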

  12. System and method to determine electric motor efficiency using an equivalent circuit

    DOEpatents

    Lu, Bin; Habetler, Thomas G.

    2015-10-27

    A system and method for determining electric motor efficiency includes a monitoring system having a processor programmed to determine efficiency of an electric motor under load while the electric motor is online. The determination of motor efficiency is independent of a rotor speed measurement. Further, the efficiency is based on a determination of stator winding resistance, an input voltage, and an input current. The determination of the stator winding resistance occurs while the electric motor under load is online.

  13. System and method to determine electric motor efficiency using an equivalent circuit

    DOEpatents

    Lu, Bin [Kenosha, WI; Habetler, Thomas G [Snellville, GA

    2011-06-07

    A system and method for determining electric motor efficiency includes a monitoring system having a processor programmed to determine efficiency of an electric motor under load while the electric motor is online. The determination of motor efficiency is independent of a rotor speed measurement. Further, the efficiency is based on a determination of stator winding resistance, an input voltage, and an input current. The determination of the stator winding resistance occurs while the electric motor under load is online.

  14. Fast optimization of binary clusters using a novel dynamic lattice searching method.

    PubMed

    Wu, Xia; Cheng, Wen

    2014-09-28

    Global optimization of binary clusters has been a difficult task despite much effort and many efficient methods. To address the two types of elements in binary clusters (i.e., the homotop problem), two classes of virtual dynamic lattices are constructed and a modified dynamic lattice searching (DLS) method, i.e., the binary DLS (BDLS) method, is developed. However, it was found that the BDLS can only be utilized for the optimization of binary clusters of small sizes because the homotop problem is hard to solve without an atomic exchange operation. Therefore, the iterated local search (ILS) method is adopted to solve the homotop problem, and an efficient method based on the BDLS method and ILS, named BDLS-ILS, is presented for global optimization of binary clusters. In order to assess the efficiency of the proposed method, binary Lennard-Jones clusters with up to 100 atoms are investigated. Results show that the method is efficient. Furthermore, the BDLS-ILS method is also adopted to study the geometrical structures of (AuPd)79 clusters with DFT-fit parameters of the Gupta potential.

  15. Numerical Simulation of the Variation of Schumann Resonance Associated with Seismogenic Processe in the Lithosphere-Atmosphere-Ionosphere system

    NASA Astrophysics Data System (ADS)

    Liu, L.; Huang, Q.; Wang, Y.

    2012-12-01

    Variations in the strength and frequency shift of the Schumann resonance (SR) of the electromagnetic (EM) field prior to some significant earthquakes have been reported by a number of researchers. Because SR is a robust physical phenomenon that constantly exists in the resonant cavity formed by the lithosphere-atmosphere-ionosphere system, irregular variations in SR parameters are natural candidates for precursory observables for forecasting earthquake occurrences. Schumann resonance of the EM field between the lithosphere and the ionosphere occurs because the space between the surface of the Earth and the conductive ionosphere acts as a closed waveguide. The cavity is naturally excited by electric currents generated by lightning. SR is the principal background in the electromagnetic spectrum at extremely low frequencies (ELF), between 3 and 69 Hz. We simulated the EM field in the lithosphere-ionosphere waveguide with a 2-dimensional (2D), cylindrical whole-earth model using the hybrid pseudo-spectral and finite-difference time-domain method. Considering seismogenesis as a fully coupled seismoelectric process, we simulate the seismic and EM waves in this 2D model. The excitation of SR in the background EM field is generated by electric-current impulses due to lightning within the lowest 10 kilometers of the atmosphere. The diurnal variation and the latitude dependence of the ion concentration in the ionosphere are included in the model. After the SR has reached a steady state, the impulse generated by the seismogenic process (pre-, co-, and post-seismic) in the crust is introduced to assess the possible precursory effects on SR strength and frequency. The modeling results explain why SR responds much more sensitively to continental earthquakes than to oceanic events; the reason is the shielding effect of the conductive ocean, which prevents effective radiation of the seismoelectric signals into the lithosphere-ionosphere waveguide. [Figure: resonant cavity model formed by the lithosphere-atmosphere-ionosphere system (illustrative, not to the scale of the Earth).]

  16. Gaussian vs non-Gaussian turbulence: impact on wind turbine loads

    NASA Astrophysics Data System (ADS)

    Berg, J.; Mann, J.; Natarajan, A.; Patton, E. G.

    2014-12-01

    In wind energy applications the turbulent velocity field of the Atmospheric Boundary Layer (ABL) is often characterised by Gaussian probability density functions. When estimating the dynamical loads on wind turbines, this has been the rule more than anything else. From numerous studies in the laboratory, in Direct Numerical Simulations, and from in-situ measurements of the ABL we know, however, that turbulence is not purely Gaussian: the smallest and fastest scales often exhibit extreme behaviour characterised by strong non-Gaussian statistics. In this contribution we investigate whether these non-Gaussian effects are important when determining wind turbine loads, and hence of utmost importance to the design criteria and lifetime of a wind turbine. We devise a method based on Principal Orthogonal Decomposition in which non-Gaussian velocity fields generated by high-resolution pseudo-spectral Large-Eddy Simulation (LES) of the ABL are transformed so that they maintain exactly the same second-order statistics, including variations of the statistics with height, but are otherwise Gaussian. In that way we can investigate in isolation whether it is important for wind turbine loads to include non-Gaussian properties of atmospheric turbulence. As an illustration, the figure shows both a non-Gaussian velocity field (left) from our LES and its transformed Gaussian counterpart (right). Whereas the horizontal velocity components (top) look close to identical, the vertical components (bottom) do not: the non-Gaussian case is much more fluid-like (as in a sketch by Michelangelo). The question is then: does the wind turbine see this? Using the load simulation software HAWC2 with both the non-Gaussian and the newly constructed Gaussian fields, respectively, we show that the fatigue loads and most of the extreme loads are unaltered when using non-Gaussian velocity fields. The turbine thus acts like a low-pass filter which averages out the non-Gaussian behaviour on time scales close to and faster than the revolution time of the turbine. For a few of the extreme load estimations, on the other hand, there is a tendency for non-Gaussian effects to increase the overall dynamical load, and hence they can be of importance in wind energy load estimations.

  17. Ground motions from induced earthquakes in Oklahoma and Kansas and the implications for seismic hazard

    NASA Astrophysics Data System (ADS)

    Moschetti, M. P.; Rennolet, S.; Thompson, E.; Yeck, W.; McNamara, D. E.; Herrmann, R. B.; Powers, P.; Hoover, S. M.

    2016-12-01

    Recent efforts to characterize the seismic hazard resulting from increased seismicity rates in Oklahoma and Kansas highlight the need for a regionalized ground motion characterization. To support these efforts, we measure and compile strong ground motions and compare the average ground-motion intensity measures (IMs) with existing ground motion prediction equations (GMPEs). IMs are computed for available broadband and strong-motion records from M≥3 earthquakes occurring January 2009-April 2016, using standard strong-motion processing guidelines. We verified our methods by comparing results from specific earthquakes to other standard procedures such as the USGS ShakeMap system. The large number of records required an automated processing scheme, which was complicated by the extremely high rate of small-magnitude earthquakes during 2014-2016. Orientation-independent IMs include peak ground motions (acceleration and velocity) and pseudo-spectral accelerations (5 percent damping, 0.1-10 s period). Metadata for the records included relocated event hypocenters. The database includes more than 160,000 records from about 3200 earthquakes. Estimates of the mean and standard deviation of the IMs are computed by distance binning at intervals of 2 km. Mean IMs exhibit a clear break in geometrical attenuation at epicentral distances of about 50-70 km, which is consistent with previous studies in the CEUS. Comparisons of these ground motions with modern GMPEs provide some insight into the IMs of induced earthquakes in Oklahoma and Kansas relative to those in the western U.S. and the central and eastern U.S. The site response for these stations is uncertain because very little is known about shallow seismic velocity in the region, and we make no attempt to correct observed IMs to reference site conditions. At close distances, the observed IMs are lower than the predictions of the seed GMPEs of the NGA-East project (and roughly consistent with NGA-West-2 ground motions). This ground motion database may be used to inform future seismic hazard forecast models and the development of regionally appropriate GMPEs.

  18. 10 CFR 430.3 - Materials incorporated by reference.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Energy Efficiency and Renewable Energy, Building Technologies Program, 6th Floor, 950 L'Enfant Plaza, SW... B. (8) ASHRAE 103-1993, Methods of Testing for Annual Fuel Utilization Efficiency of Residential...) ASHRAE 116-1995 (RA 2005), Methods of Testing for Rating Seasonal Efficiency of Unitary Air Conditioners...

  19. Method of optimizing performance of Rankine cycle power plants

    DOEpatents

    Pope, William L.; Pines, Howard S.; Doyle, Padraic A.; Silvester, Lenard F.

    1982-01-01

    A method for efficiently operating a Rankine cycle power plant (10) to maximize fuel utilization efficiency or energy conversion efficiency or minimize costs by selecting a turbine (22) fluid inlet state which is substantially in the area adjacent and including the transposed critical temperature line (46).

  20. Comparative Efficiency of the Fenwick Can and Schuiling Centrifuge in Extracting Nematode Cysts from Different Soil Types

    PubMed Central

    Bellvert, Joaquim; Crombie, Kieran; Horgan, Finbarr G.

    2008-01-01

    The Fenwick can and Schuiling centrifuge are widely used to extract nematode cysts from soil samples. The comparative efficiencies of these two methods during cyst extraction have not been determined for different soil types under different cyst densities. Such information is vital for statutory laboratories that must choose a method for routine, high-throughput soil monitoring. In this study, samples of different soil types seeded with varying densities of potato cyst nematode (Globodera rostochiensis) cysts were processed using both methods. In one experiment, with 200 ml samples, recovery was similar between methods. In a second experiment with 500 ml samples, cyst recovery was higher using the Schuiling centrifuge. For each method and soil type, cyst extraction efficiency was similar across all densities tested. Extraction was efficient from pure sand (Fenwick 72%, Schuiling 84%) and naturally sandy soils (Fenwick 62%, Schuiling 73%), but was significantly less efficient from clay-soil (Fenwick 42%, Schuiling 44%) and peat-soil with high organic matter content (Fenwick 35%, Schuiling 33%). Residual moisture (<10% w/w) in samples prior to analyses reduced extraction efficiency, particularly for sand and sandy soils. For each soil type and method, there were significant linear relationships between the number of cysts extracted and the numbers of cysts in the samples. We discuss the advantages and disadvantages of each extraction method for cyst extraction in statutory soil laboratories. PMID:19259516

  1. The diffusive finite state projection algorithm for efficient simulation of the stochastic reaction-diffusion master equation.

    PubMed

    Drawert, Brian; Lawson, Michael J; Petzold, Linda; Khammash, Mustafa

    2010-02-21

    We have developed a computational framework for accurate and efficient simulation of stochastic spatially inhomogeneous biochemical systems. The new computational method employs a fractional step hybrid strategy. A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport. Reactions are handled by the stochastic simulation algorithm.

  2. Application of the conjugate-gradient method to ground-water models

    USGS Publications Warehouse

    Manteuffel, T.A.; Grove, D.B.; Konikow, Leonard F.

    1984-01-01

    The conjugate-gradient method can efficiently and accurately solve finite-difference approximations to the ground-water flow equation. An aquifer-simulation model using the conjugate-gradient method was applied to a problem of ground-water flow in an alluvial aquifer at the Rocky Mountain Arsenal, Denver, Colorado. For this application, the accuracy and efficiency of the conjugate-gradient method compared favorably with other available methods for steady-state flow. However, its efficiency relative to other available methods depends on the nature of the specific problem. The main advantage of the conjugate-gradient method is that it does not require the use of iteration parameters, thereby eliminating this partly subjective procedure. (USGS)

  3. A Rapid, Highly Efficient and Economical Method of Agrobacterium-Mediated In planta Transient Transformation in Living Onion Epidermis

    PubMed Central

    Xu, Kedong; Huang, Xiaohui; Wu, Manman; Wang, Yan; Chang, Yunxia; Liu, Kun; Zhang, Ju; Zhang, Yi; Zhang, Fuli; Yi, Liming; Li, Tingting; Wang, Ruiyue; Tan, Guangxuan; Li, Chengwei

    2014-01-01

    Transient transformation is simpler, more efficient, and more economical than stable transformation for analyzing protein subcellular localization. Fluorescent fusion proteins are often used in transient transformation to follow the in vivo behavior of proteins. Onion epidermis, which has large, living and transparent cells in a monolayer, is suitable for visualizing fluorescent fusion proteins. Commonly used transient transformation methods include particle bombardment, protoplast transfection and Agrobacterium-mediated transformation. Particle bombardment in onion epidermis was successfully established; however, it is expensive, depends on biolistic equipment, and has low transformation efficiency. We developed a highly efficient in planta transient transformation method in onion epidermis by using a special agroinfiltration method, which can be completed within 5 days from the pretreatment of the onion bulb to the best time point for analyzing gene expression. The transformation conditions were optimized to achieve 43.87% transformation efficiency in living onion epidermis. The developed method has advantages in cost, time consumption, equipment dependency and transformation efficiency compared with particle bombardment in onion epidermal cells, protoplast transfection, and Agrobacterium-mediated transient transformation in leaf epidermal cells of other plants. It will facilitate the analysis of protein subcellular localization on a large scale. PMID:24416168

  4. Hybrid ODE/SSA methods and the cell cycle model

    NASA Astrophysics Data System (ADS)

    Wang, S.; Chen, M.; Cao, Y.

    2017-07-01

    Stochastic effect in cellular systems has been an important topic in systems biology. Stochastic modeling and simulation methods are important tools to study stochastic effect. Given the low efficiency of stochastic simulation algorithms, the hybrid method, which combines an ordinary differential equation (ODE) system with a stochastic chemically reacting system, shows its unique advantages in the modeling and simulation of biochemical systems. The efficiency of hybrid method is usually limited by reactions in the stochastic subsystem, which are modeled and simulated using Gillespie's framework and frequently interrupt the integration of the ODE subsystem. In this paper we develop an efficient implementation approach for the hybrid method coupled with traditional ODE solvers. We also compare the efficiency of hybrid methods with three widely used ODE solvers RADAU5, DASSL, and DLSODAR. Numerical experiments with three biochemical models are presented. A detailed discussion is presented for the performances of three ODE solvers.
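
    A minimal sketch of one common hybrid ODE/SSA construction (integrate the ODE subsystem together with the accumulated stochastic propensity and fire a stochastic reaction when that integral reaches an exponentially distributed threshold) is given below; it is a generic illustration, not the implementation approach of the paper, and the example model at the end is hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hybrid_simulate(x0, t_end, ode_rhs, propensities, stoich, rng):
    """One trajectory of a hybrid ODE/SSA model.

    ode_rhs(t, x)    -> dx/dt of the deterministic (ODE) subsystem
    propensities(x)  -> propensities of the stochastic reactions
    stoich[j]        -> state-change vector of stochastic reaction j
    """
    t, x = 0.0, np.asarray(x0, dtype=float)
    ts, xs = [t], [x.copy()]
    while t < t_end:
        xi = -np.log(rng.random())            # threshold for next stochastic firing

        def rhs(tau, y):                      # state augmented with propensity integral
            return np.append(ode_rhs(tau, y[:-1]), propensities(y[:-1]).sum())

        def fire(tau, y):                     # event: integrated propensity hits xi
            return y[-1] - xi
        fire.terminal = True
        fire.direction = 1

        sol = solve_ivp(rhs, (t, t_end), np.append(x, 0.0),
                        events=fire, max_step=(t_end - t) / 100, rtol=1e-8)
        t, x = sol.t[-1], sol.y[:-1, -1]
        if sol.status == 1:                   # a stochastic reaction fired
            a = propensities(x)
            j = rng.choice(len(a), p=a / a.sum())
            x = x + stoich[j]
        ts.append(t); xs.append(x.copy())
    return np.array(ts), np.array(xs)

# Hypothetical test model: deterministic production, stochastic degradation.
rng = np.random.default_rng(0)
ts, xs = hybrid_simulate(
    x0=[0.0], t_end=50.0,
    ode_rhs=lambda t, x: np.array([10.0]),            # constant production rate
    propensities=lambda x: np.array([0.1 * x[0]]),    # degradation propensity
    stoich=[np.array([-1.0])], rng=rng)
print(xs[-1])    # fluctuates around the deterministic balance of ~100
```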

  5. Design method of high-efficient LED headlamp lens.

    PubMed

    Chen, Fei; Wang, Kai; Qin, Zong; Wu, Dan; Luo, Xiaobing; Liu, Sheng

    2010-09-27

    Low optical efficiency of light-emitting diode (LED) based headlamp is one of the most important issues to obstruct applications of LEDs in headlamp. An effective high-efficient LED headlamp freeform lens design method is introduced in this paper. A low-beam lens and a high-beam lens for LED headlamp are designed according to this method. Monte Carlo ray tracing simulation results demonstrate that the LED headlamp with these two lenses can fully comply with the ECE regulation without any other lens or reflector. Moreover, optical efficiencies of both these two lenses are more than 88% in theory.

  6. Method of optimizing performance of Rankine cycle power plants. [US DOE Patent

    DOEpatents

    Pope, W.L.; Pines, H.S.; Doyle, P.A.; Silvester, L.F.

    1980-06-23

    A method is described for efficiently operating a Rankine cycle power plant to maximize fuel utilization efficiency or energy conversion efficiency or minimize costs by selecting a turbine fluid inlet state which is substantially on the area adjacent and including the transposed critical temperature line.

  7. 10 CFR 430.3 - Materials incorporated by reference.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... is available for inspection at U.S. Department of Energy, Office of Energy Efficiency and Renewable... appendix M to subpart B. (9) ASHRAE 103-1993, Methods of Testing for Annual Fuel Utilization Efficiency of... subpart B. (10) ASHRAE 116-1995 (RA 2005), Methods of Testing for Rating Seasonal Efficiency of Unitary...

  8. 10 CFR 430.3 - Materials incorporated by reference.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    .... Department of Energy, Office of Energy Efficiency and Renewable Energy, Building Technologies Program, 6th... appendix M to subpart B. (9) ASHRAE 103-1993, Methods of Testing for Annual Fuel Utilization Efficiency of... subpart B. (10) ASHRAE 116-1995 (RA 2005), Methods of Testing for Rating Seasonal Efficiency of Unitary...

  9. Sampling for Patient Exit Interviews: Assessment of Methods Using Mathematical Derivation and Computer Simulations.

    PubMed

    Geldsetzer, Pascal; Fink, Günther; Vaikath, Maria; Bärnighausen, Till

    2018-02-01

    (1) To evaluate the operational efficiency of various sampling methods for patient exit interviews; (2) to discuss under what circumstances each method yields an unbiased sample; and (3) to propose a new, operationally efficient, and unbiased sampling method. Literature review, mathematical derivation, and Monte Carlo simulations. Our simulations show that in patient exit interviews it is most operationally efficient if the interviewer, after completing an interview, selects the next patient exiting the clinical consultation. We demonstrate mathematically that this method yields a biased sample: patients who spend a longer time with the clinician are overrepresented. This bias can be removed by selecting the next patient who enters, rather than exits, the consultation room. We show that this sampling method is operationally more efficient than alternative methods (systematic and simple random sampling) in most primary health care settings. Under the assumption that the order in which patients enter the consultation room is unrelated to the length of time spent with the clinician and the interviewer, selecting the next patient entering the consultation room tends to be the operationally most efficient unbiased sampling method for patient exit interviews. © 2016 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
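
    A stylized Monte Carlo sketch of the length-bias argument (our illustration with a hypothetical duration distribution, not the simulations of the paper): selecting the next patient to exit favors long consultations roughly in proportion to their length, whereas selecting the next patient to enter does not.

```python
import numpy as np

rng = np.random.default_rng(0)
durations = rng.exponential(10.0, 200_000)     # hypothetical consultation lengths (min)

# "Next patient to exit" after a random instant: the chance of catching a given
# patient grows with that patient's consultation length (length-biased sampling).
exit_sample = rng.choice(durations, size=50_000, p=durations / durations.sum())

# "Next patient to enter": independent of consultation length, so a simple
# random draw represents it here.
enter_sample = rng.choice(durations, size=50_000)

print(durations.mean(), enter_sample.mean(), exit_sample.mean())
# true mean ~10, next-entering ~10, next-exiting ~20 (biased upward)
```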

  10. Methods comparison for microsatellite marker development: Different isolation methods, different yield efficiency

    NASA Astrophysics Data System (ADS)

    Zhan, Aibin; Bao, Zhenmin; Hu, Xiaoli; Lu, Wei; Hu, Jingjie

    2009-06-01

    Microsatellite markers have become some of the most important molecular tools used in a wide range of research. A large number of microsatellite markers are required for whole-genome surveys in the fields of molecular ecology, quantitative genetics and genomics. Therefore, it is extremely necessary to select several versatile, low-cost, efficient and time- and labor-saving methods to develop a large panel of microsatellite markers. In this study, we used the Zhikong scallop (Chlamys farreri) as the target species to compare the efficiency of five methods derived from three strategies for microsatellite marker development. The results showed that the strategy of constructing a small-insert genomic DNA library resulted in poor efficiency, while the microsatellite-enriched strategy greatly improved the isolation efficiency. Although the public-database mining strategy is time- and cost-saving, it is difficult to obtain a large number of microsatellite markers with it, mainly due to the limited sequence data of non-model species deposited in public databases. Based on the results of this study, we recommend two methods, the microsatellite-enriched library construction method and the FIASCO-colony hybridization method, for large-scale microsatellite marker development. Both methods were derived from the microsatellite-enriched strategy. The experimental results obtained from the Zhikong scallop also provide a reference for microsatellite marker development in other species with large genomes.

  11. The method of constant stimuli is inefficient

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Fitzhugh, Andrew

    1990-01-01

    Simpson (1988) has argued that the method of constant stimuli is as efficient as adaptive methods of threshold estimation and has supported this claim with simulations. It is shown that Simpson's simulations are not a reasonable model of the experimental process and that more plausible simulations confirm that adaptive methods are much more efficient than the method of constant stimuli.

  12. Increasing the volumetric efficiency of Diesel engines by intake pipes

    NASA Technical Reports Server (NTRS)

    List, Hans

    1933-01-01

    Development of a method for calculating the volumetric efficiency of piston engines with intake pipes. Application of this method to the scavenging pumps of two-stroke-cycle engines with crankcase scavenging and to four-stroke-cycle engines. The utility of the method is demonstrated by volumetric-efficiency tests of the two-stroke-cycle engines with crankcase scavenging. Its practical application to the calculation of intake pipes is illustrated by example.

  13. A method for determining the conversion efficiency of multiple-cell photovoltaic devices

    NASA Astrophysics Data System (ADS)

    Glatfelter, Troy; Burdick, Joseph

    A method for accurately determining the conversion efficiency of any multiple-cell photovoltaic device under any arbitrary reference spectrum is presented. This method makes it possible to obtain not only the short-circuit current, but also the fill factor, the open-circuit voltage, and hence the conversion efficiency of a multiple-cell device under any reference spectrum. Results are presented which allow a comparison of the I-V parameters of two-terminal, two- and three-cell tandem devices measured under a multiple-source simulator with the same parameters measured under different reference spectra. It is determined that the uncertainty in the conversion efficiency of a multiple-cell photovoltaic device obtained with this method is less than +/-3 percent.

  14. Efficient Testing Combining Design of Experiment and Learn-to-Fly Strategies

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Brandon, Jay M.

    2017-01-01

    Rapid modeling and efficient testing methods are important in a number of aerospace applications. In this study efficient testing strategies were evaluated in a wind tunnel test environment and combined to suggest a promising approach for both ground-based and flight-based experiments. Benefits of using Design of Experiment techniques, well established in scientific, military, and manufacturing applications are evaluated in combination with newly developing methods for global nonlinear modeling. The nonlinear modeling methods, referred to as Learn-to-Fly methods, utilize fuzzy logic and multivariate orthogonal function techniques that have been successfully demonstrated in flight test. The blended approach presented has a focus on experiment design and identifies a sequential testing process with clearly defined completion metrics that produce increased testing efficiency.

  15. Efficient discovery of risk patterns in medical data.

    PubMed

    Li, Jiuyong; Fu, Ada Wai-chee; Fahey, Paul

    2009-01-01

    This paper studies the problem of efficiently discovering risk patterns in medical data. Risk patterns are defined by a statistical metric, relative risk, which has been widely used in epidemiological research. To avoid fruitless search in the complete exploration of risk patterns, we define an optimal risk pattern set that excludes superfluous patterns, i.e., complicated patterns with lower relative risk than their corresponding simpler forms. We prove that mining optimal risk pattern sets conforms to an anti-monotone property that supports an efficient mining algorithm. We propose an efficient algorithm for mining optimal risk pattern sets based on this property. We also propose a hierarchical structure to present discovered patterns for easy perusal by domain experts. The proposed approach is compared with two well-known rule discovery methods, decision tree and association rule mining approaches, on benchmark data sets and applied to a real-world application. The proposed method discovers more and better quality risk patterns than a decision tree approach. The decision tree method is not designed for such applications and is inadequate for pattern exploring. The proposed method does not discover a large number of uninteresting superfluous patterns as an association mining approach does. The proposed method is more efficient than an association rule mining method. A real-world case study shows that the method reveals some interesting risk patterns to medical practitioners. The proposed method is an efficient approach to explore risk patterns. It quickly identifies cohorts of patients that are vulnerable to a risk outcome from a large data set. The proposed method is useful for exploratory study of large medical data to generate and refine hypotheses. The method is also useful for designing medical surveillance systems.
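
    For reference, the relative risk statistic used to define risk patterns is the ratio of outcome rates with and without the pattern; a small sketch with hypothetical counts:

```python
def relative_risk(with_cases, with_total, without_cases, without_total):
    """Relative risk of the outcome for patients matching a pattern."""
    return (with_cases / with_total) / (without_cases / without_total)

# Hypothetical cohort: 30 of 200 patients matching the pattern have the outcome,
# versus 25 of 800 patients not matching it.
print(relative_risk(30, 200, 25, 800))   # 0.150 / 0.03125 = 4.8
```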

  16. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.

  17. Comparison of measured efficiencies of nine turbine designs with efficiencies predicted by two empirical methods

    NASA Technical Reports Server (NTRS)

    English, Robert E; Cavicchi, Richard H

    1951-01-01

    Empirical methods of Ainley and of Kochendorfer and Nettles were used to predict the performances of nine turbine designs. Measured and predicted performances were compared. Appropriate values of the blade-loss parameter were determined for the method of Kochendorfer and Nettles. The measured design-point efficiencies were lower than predicted by as much as 0.09 (Ainley) and 0.07 (Kochendorfer and Nettles). For the method of Kochendorfer and Nettles, appropriate values of the blade-loss parameter ranged from 0.63 to 0.87, and the off-design performance was accurately predicted.

  18. Ranking of options of real estate use by expert assessments mathematical processing

    NASA Astrophysics Data System (ADS)

    Lepikhina, O. Yu; Skachkova, M. E.; Mihaelyan, T. A.

    2018-05-01

    The article is devoted to the development of a real estate assessment concept. For conditions in which real estate can be used in multiple ways, a method based on calculating an integral indicator of the efficiency of each use variant is proposed. To calculate the weights of the efficiency criteria, expert assessments processed with the Analytic Hierarchy Process and its mathematical apparatus are used. The method allows alternative types of real estate use to be ranked according to their efficiency. The method was applied to one of the land parcels located in the Primorsky district of Saint Petersburg.
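
    A minimal sketch of the Analytic Hierarchy Process weight calculation described above (the comparison matrix and criteria are hypothetical, not those of the article):

```python
import numpy as np

# Pairwise comparison matrix for four hypothetical efficiency criteria
# (A[i, j] = how strongly criterion i is preferred over criterion j).
A = np.array([[1,   3,   5,   7],
              [1/3, 1,   3,   5],
              [1/5, 1/3, 1,   3],
              [1/7, 1/5, 1/3, 1]], dtype=float)

# Criterion weights: normalized principal right eigenvector.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Saaty consistency check: CR below ~0.1 is usually considered acceptable.
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]   # random-index table
print("weights:", w.round(3), " consistency ratio:", round(CI / RI, 3))
```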

  19. Efficient iterative method for solving the Dirac-Kohn-Sham density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Lin; Shao, Sihong; E, Weinan

    2012-11-06

    We present for the first time an efficient iterative method to directly solve the four-component Dirac-Kohn-Sham (DKS) density functional theory. Due to the existence of the negative energy continuum in the DKS operator, the existing iterative techniques for solving the Kohn-Sham systems cannot be efficiently applied to solve the DKS systems. The key component of our method is a novel filtering step (F) which acts as a preconditioner in the framework of the locally optimal block preconditioned conjugate gradient (LOBPCG) method. The resulting method, dubbed the LOBPCG-F method, is able to compute the desired eigenvalues and eigenvectors in the positive energy band without computing any state in the negative energy band. The LOBPCG-F method introduces mild extra cost compared to the standard LOBPCG method and can be easily implemented. We demonstrate our method in the pseudopotential framework with a planewave basis set which naturally satisfies the kinetic balance prescription. Numerical results for Pt2, Au2, TlF, and Bi2Se3 indicate that the LOBPCG-F method is a robust and efficient method for investigating the relativistic effect in systems containing heavy elements.

  20. Comparative study on antibody immobilization strategies for efficient circulating tumor cell capture.

    PubMed

    Ates, Hatice Ceren; Ozgur, Ebru; Kulah, Haluk

    2018-03-23

    Methods for isolation and quantification of circulating tumor cells (CTCs) are attracting more attention every day, as the data for their unprecedented clinical utility continue to grow. However, the challenge is that CTCs are extremely rare (as low as 1 in a billion blood cells) and a highly sensitive and specific technology is required to isolate CTCs from blood cells. Methods utilizing microfluidic systems for immunoaffinity-based CTC capture are preferred, especially when purity is the prime requirement. However, the antibody immobilization strategy significantly affects the efficiency of such systems. In this study, two covalent and two bioaffinity antibody immobilization methods were assessed with respect to their CTC capture efficiency and selectivity, using an anti-epithelial cell adhesion molecule (EpCAM) as the capture antibody. Surface functionalization was realized on plain SiO2 surfaces, as well as in microfluidic channels. Surfaces functionalized with the different antibody immobilization methods were physically and chemically characterized at each step of functionalization. MCF-7 breast cancer and CCRF-CEM acute lymphoblastic leukemia cell lines were used as EpCAM-positive and EpCAM-negative cell models, respectively, to assess CTC capture efficiency and selectivity. Comparisons reveal that bioaffinity-based antibody immobilization involving streptavidin attachment with a glutaraldehyde linker gave the highest cell capture efficiency. On the other hand, a covalent antibody immobilization method involving direct antibody binding by N-(3-dimethylaminopropyl)-N'-ethylcarbodiimide hydrochloride (EDC)-N-hydroxysuccinimide (NHS) reaction was found to be more time- and cost-efficient with a similar cell capture efficiency. All methods provided very high selectivity for CTCs with EpCAM expression. It was also demonstrated that antibody immobilization via the EDC-NHS reaction in a microfluidic channel leads to high capture efficiency and selectivity.

  1. Nonlinear dynamics and anisotropic structure of rotating sheared turbulence.

    PubMed

    Salhi, A; Jacobitz, F G; Schneider, K; Cambon, C

    2014-01-01

    Homogeneous turbulence in rotating shear flows is studied by means of pseudospectral direct numerical simulation and analytical spectral linear theory (SLT). The ratio of the Coriolis parameter to shear rate is varied over a wide range by changing the rotation strength, while a constant moderate shear rate is used to enable significant contributions to the nonlinear interscale energy transfer and to the nonlinear intercomponental redistribution terms. In the destabilized and neutral cases, in the sense of kinetic energy evolution, nonlinearity cannot saturate the growth of the largest scales. It permits the smallest scale to stabilize by a scale-by-scale quasibalance between the nonlinear energy transfer and the dissipation spectrum. In the stabilized cases, the role of rotation is mainly nonlinear, and interacting inertial waves can affect almost all scales as in purely rotating flows. In order to isolate the nonlinear effect of rotation, the two-dimensional manifold with vanishing spanwise wave number is revisited and both two-component spectra and single-point two-dimensional energy components exhibit an important effect of rotation, whereas the SLT as well as the purely two-dimensional nonlinear analysis are unaffected by rotation as stated by the Proudman theorem. The other two-dimensional manifold with vanishing streamwise wave number is analyzed with similar tools because it is essential for any shear flow. Finally, the spectral approach is used to disentangle, in an analytical way, the linear and nonlinear terms in the dynamical equations.

  2. Numerical simulation of the geometrical-optics reduction of CE2 and comparisons to quasilinear dynamics

    NASA Astrophysics Data System (ADS)

    Parker, Jeffrey B.

    2018-05-01

    Zonal flows have been observed to appear spontaneously from turbulence in a number of physical settings. A complete theory for their behavior is still lacking. Recently, a number of studies have investigated the dynamics of zonal flows using quasilinear (QL) theories and the statistical framework of a second-order cumulant expansion (CE2). A geometrical-optics (GO) reduction of CE2, derived under an assumption of separation of scales between the fluctuations and the zonal flow, is studied here numerically. The reduced model, CE2-GO, has a similar phase-space mathematical structure to the traditional wave-kinetic equation, but that wave-kinetic equation has been shown to fail to preserve enstrophy conservation and to exhibit an ultraviolet catastrophe. CE2-GO, in contrast, preserves nonlinear conservation of both energy and enstrophy. We show here how to retain these conservation properties in a pseudospectral simulation of CE2-GO. We then present nonlinear simulations of CE2-GO and compare with direct simulations of quasilinear (QL) dynamics. We find that CE2-GO retains some similarities to QL. The partitioning of energy that resides in the zonal flow is in good quantitative agreement between CE2-GO and QL. On the other hand, the length scale of the zonal flow does not follow the same qualitative trend in the two models. Overall, these simulations indicate that CE2-GO provides a simpler and more tractable statistical paradigm than CE2, but CE2-GO is missing important physics.

  3. Optimal control of energy extraction in LES of large wind farms

    NASA Astrophysics Data System (ADS)

    Meyers, Johan; Goit, Jay; Munters, Wim

    2014-11-01

    We investigate the use of optimal control combined with Large-Eddy Simulations (LES) of wind-farm boundary layer interaction to increase total energy extraction in very large ``infinite'' wind farms and in finite farms. We consider the individual wind turbines as flow actuators, whose energy extraction can be dynamically regulated in time so as to optimally influence the turbulent flow field, maximizing the wind farm power. For the simulation of wind-farm boundary layers we use large-eddy simulations in combination with an actuator-disk representation of wind turbines. Simulations are performed in our in-house pseudo-spectral code SP-Wind. For the optimal control study, we consider the dynamic control of turbine-thrust coefficients in the actuator-disk model. They represent the effect of turbine blades that can actively pitch in time, changing the lift and drag coefficients of the turbine blades. In a first infinite wind-farm case, we find that farm power is increased by approximately 16% over one hour of operation. This comes at the cost of a deceleration of the outer layer of the boundary layer. A detailed analysis of energy balances is presented, and a comparison is made between infinite and finite farm cases, for which boundary layer entrainment plays an important role. The authors acknowledge support from the European Research Council (FP7-Ideas, Grant No. 306471). Simulations were performed on the computing infrastructure of the VSC Flemish Supercomputer Center, funded by the Hercules Foundation and the Flemish Government.

  4. Where Tori Fear to Tread: Hypermassive Neutron Star Remnants and Absolute Event Horizons or Topics in Computational General Relativity

    NASA Astrophysics Data System (ADS)

    Kaplan, Jeffrey Daniel

    2014-01-01

    Computational general relativity is a field of study which has reached maturity only within the last decade. This thesis details several studies that elucidate phenomena related to the coalescence of compact object binaries. Chapters 2 and 3 recount work towards developing new analytical tools for visualizing and reasoning about dynamics in strongly curved spacetimes. In both studies, the results employ analogies with the classical theory of electricity and magnetism, first (Ch. 2) in the post-Newtonian approximation to general relativity and then (Ch. 3) in full general relativity, though in the absence of matter sources. In Chapter 4, we examine the topological structure of absolute event horizons during binary black hole merger simulations conducted with the SpEC code. Chapter 6 reports on the progress of the SpEC code in simulating the coalescence of neutron star-neutron star binaries, while Chapter 7 tests the effects of various numerical gauge conditions on the robustness of black hole formation from stellar collapse in SpEC. In Chapter 5, we examine the nature of pseudospectral expansions of non-smooth functions, motivated by the need to simulate the stellar surface in Chapters 6 and 7. In Chapter 8, we study how thermal effects in the nuclear equation of state affect the equilibria and stability of hypermassive neutron stars. Chapter 9 presents supplements to the work in Chapter 8, including an examination of the stability question raised in Chapter 8 in greater mathematical detail.

  5. Anisotropy in pair dispersion of inertial particles in turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Pitton, Enrico; Marchioli, Cristian; Lavezzo, Valentina; Soldati, Alfredo; Toschi, Federico

    2012-07-01

    The rate at which two particles separate in turbulent flows is of central importance to predict the inhomogeneities of particle spatial distribution and to characterize mixing. Pair separation is analyzed for the specific case of small, inertial particles in turbulent channel flow to examine the role of mean shear and small-scale turbulent velocity fluctuations. To this aim an Eulerian-Lagrangian approach based on pseudo-spectral direct numerical simulation (DNS) of fully developed gas-solid flow at shear Reynolds number Reτ = 150 is used. Pair separation statistics have been computed for particles with different inertia (and for inertialess tracers) released from different regions of the channel. Results confirm that shear-induced effects predominate when the pair separation distance becomes comparable to the largest scale of the flow. Results also reveal the fundamental role played by particles-turbulence interaction at the small scales in triggering separation during the initial stages of pair dispersion. These findings are discussed examining Lagrangian observables, including the mean square separation, which provide prima facie evidence that pair dispersion in non-homogeneous anisotropic turbulence has a superdiffusive nature and may generate non-Gaussian number density distributions of both particles and tracers. These features appear to persist even when the effects of shear dispersion are filtered out, and exhibit strong dependency on particle inertia. Application of present results is discussed in the context of modelling approaches for particle dispersion in wall-bounded turbulent flows.

  6. Probabilistic Relationships between Ground‐Motion Parameters and Modified Mercalli Intensity in California

    USGS Publications Warehouse

    Worden, C.B.; Wald, David J.; Rhoades, D.A.

    2012-01-01

    We use a database of approximately 200,000 modified Mercalli intensity (MMI) observations of California earthquakes collected from USGS "Did You Feel It?" (DYFI) reports, along with a comparable number of peak ground-motion amplitudes from California seismic networks, to develop probabilistic relationships between MMI and peak ground velocity (PGV), peak ground acceleration (PGA), and 0.3-s, 1-s, and 3-s 5% damped pseudospectral acceleration (PSA). After associating each ground-motion observation with an MMI computed from all the DYFI responses within 2 km of the observation, we derived a joint probability distribution between MMI and ground motion. We then derived reversible relationships between MMI and each ground-motion parameter by using a total least squares regression to fit a bilinear function to the median of the stacked probability distributions. Among the relationships, the fit to peak ground velocity has the smallest errors, though linear combinations of PGA and PGV give nominally better results. We also find that magnitude and distance terms reduce the overall residuals and are justifiable on an information theoretic basis. For intensities MMI≥5, our results are in close agreement with the relations of Wald, Quitoriano, Heaton, and Kanamori (1999); for lower intensities, our results fall midway between Wald, Quitoriano, Heaton, and Kanamori (1999) and those of Atkinson and Kaka (2007). The earthquakes in the study ranged in magnitude from 3.0 to 7.3, and the distances ranged from less than a kilometer to about 400 km from the source.
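
    A hedged sketch of the kind of fit described above: a bilinear (two-slope) relation between MMI and log10(PGV) estimated by orthogonal-distance (total least squares) regression, here on synthetic data with illustrative parameter values rather than the study's coefficients.

```python
import numpy as np
from scipy import odr

def bilinear(beta, x):
    c1, m1, m2, xk = beta                       # intercept, two slopes, knee point
    return np.where(x <= xk, c1 + m1 * x, c1 + m1 * xk + m2 * (x - xk))

rng = np.random.default_rng(1)
logpgv = rng.uniform(-1.0, 2.0, 1000)           # synthetic log10(PGV)
mmi = bilinear([3.8, 1.5, 3.4, 0.5], logpgv) + rng.normal(0.0, 0.4, 1000)

fit = odr.ODR(odr.RealData(logpgv, mmi),        # orthogonal-distance regression
              odr.Model(bilinear),
              beta0=[3.5, 1.0, 3.0, 0.3]).run()
print(fit.beta)                                 # recovered [c1, m1, m2, xk]
```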

  7. Numerical simulations of flow fields through conventionally controlled wind turbines & wind farms

    NASA Astrophysics Data System (ADS)

    Emre Yilmaz, Ali; Meyers, Johan

    2014-06-01

    In the current study, an Actuator-Line Model (ALM) is implemented in our in-house pseudo-spectral LES solver SP-WIND, including a turbine controller. Below rated wind speed, turbines are controlled by a standard-torque-controller aiming at maximum power extraction from the wind. Above rated wind speed, the extracted power is limited by a blade pitch controller which is based on a proportional-integral type control algorithm. This model is used to perform a series of single turbine and wind farm simulations using the NREL 5MW turbine. First of all, we focus on below-rated wind speed, and investigate the effect of the farm layout on the controller calibration curves. These calibration curves are expressed in terms of nondimensional torque and rotational speed, using the mean turbine-disk velocity as reference. We show that this normalization leads to calibration curves that are independent of wind speed, but the calibration curves do depend on the farm layout, in particular for tightly spaced farms. Compared to turbines in a lone-standing set-up, turbines in a farm experience a different wind distribution over the rotor due to the farm boundary-layer interaction. We demonstrate this for fully developed wind-farm boundary layers with aligned turbine arrangements at different spacings (5D, 7D, 9D). Further we also compare calibration curves obtained from full farm simulations with calibration curves that can be obtained at a much lower cost using a minimal flow unit.
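
    For context, a minimal sketch of the "k omega-squared" standard torque controller mentioned above, with illustrative stand-in parameters rather than the NREL 5MW values used in the study:

```python
import numpy as np

rho = 1.225        # air density [kg/m^3]
R = 63.0           # rotor radius [m] (illustrative)
Cp_max = 0.48      # assumed peak power coefficient
lam_opt = 7.5      # assumed optimal tip-speed ratio

# Below rated wind speed, the generator torque tracks tau = k * omega^2 so that
# the rotor settles at the tip-speed ratio of maximum power extraction.
k = 0.5 * rho * np.pi * R**5 * Cp_max / lam_opt**3

def torque_command(omega_rotor):
    """Region-2 torque command [N m] for rotor speed omega_rotor [rad/s]."""
    return k * omega_rotor**2

print(torque_command(1.0))
```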

  8. Detailed analysis of the effects of stencil spatial variations with arbitrary high-order finite-difference Maxwell solver

    DOE PAGES

    Vincenti, H.; Vay, J. -L.

    2015-11-22

    Due to discretization effects and truncation to finite domains, many electromagnetic simulations present non-physical modifications of Maxwell's equations in space that may generate spurious signals affecting the overall accuracy of the result. Such modifications for instance occur when Perfectly Matched Layers (PMLs) are used at simulation domain boundaries to simulate open media. Another example is the use of arbitrary order Maxwell solver with domain decomposition technique that may under some condition involve stencil truncations at subdomain boundaries, resulting in small spurious errors that do eventually build up. In each case, a careful evaluation of the characteristics and magnitude of the errors resulting from these approximations, and their impact at any frequency and angle, requires detailed analytical and numerical studies. To this end, we present a general analytical approach that enables the evaluation of numerical discretization errors of fully three-dimensional arbitrary order finite-difference Maxwell solver, with arbitrary modification of the local stencil in the simulation domain. The analytical model is validated against simulations of domain decomposition technique and PMLs, when these are used with very high-order Maxwell solver, as well as in the infinite order limit of pseudo-spectral solvers. Results confirm that the new analytical approach enables exact predictions in each case. It also confirms that the domain decomposition technique can be used with very high-order Maxwell solver and a reasonably low number of guard cells with negligible effects on the whole accuracy of the simulation.

  9. Driving reconnection in sheared magnetic configurations with forced fluctuations

    NASA Astrophysics Data System (ADS)

    Pongkitiwanichakul, Peera; Makwana, Kirit D.; Ruffolo, David

    2018-02-01

    We investigate reconnection of magnetic field lines in sheared magnetic field configurations due to fluctuations driven by random forcing by means of numerical simulations. The simulations are performed with an incompressible, pseudo-spectral magnetohydrodynamics code in 2D where we take thick, resistively decaying, current-sheet like sheared magnetic configurations which do not reconnect spontaneously. We describe and test the forcing that is introduced in the momentum equation to drive fluctuations. It is found that the forcing does not change the rate of decay; however, it adds and removes energy faster in the presence of the magnetic shear structure compared to when it has decayed away. We observe that such a forcing can induce magnetic reconnection due to field line wandering leading to the formation of magnetic islands and O-points. These reconnecting field lines spread out as the current sheet decays with time. A semi-empirical formula is derived which reasonably explains the formation and spread of O-points. We find that reconnection spreads faster with stronger forcing and longer correlation time of forcing, while the wavenumber of forcing does not have a significant effect. When the field line wandering becomes large enough, the neighboring current sheets with opposite polarity start interacting, and then the magnetic field is rapidly annihilated. This work is useful to understand how forced fluctuations can drive reconnection in large scale current structures in space and astrophysical plasmas that are not susceptible to reconnection.

  10. Costs and Efficiency of Online and Offline Recruitment Methods: A Web-Based Cohort Study

    PubMed Central

    Riis, Anders H; Hatch, Elizabeth E; Wise, Lauren A; Nielsen, Marie G; Rothman, Kenneth J; Toft Sørensen, Henrik; Mikkelsen, Ellen M

    2017-01-01

    Background: The Internet is widely used to conduct research studies on health issues. Many different methods are used to recruit participants for such studies, but little is known about how various recruitment methods compare in terms of efficiency and costs. Objective: The aim of our study was to compare online and offline recruitment methods for Internet-based studies in terms of efficiency (number of recruited participants) and costs per participant. Methods: We employed several online and offline recruitment methods to enroll 18- to 45-year-old women in an Internet-based Danish prospective cohort study on fertility. Offline methods included press releases, posters, and flyers. Online methods comprised advertisements placed on five different websites, including Facebook and Netdoktor.dk. We defined seven categories of mutually exclusive recruitment methods and used electronic tracking via unique Uniform Resource Locator (URL) and self-reported data to identify the recruitment method for each participant. For each method, we calculated the average cost per participant and efficiency, that is, the total number of recruited participants. Results: We recruited 8252 study participants. Of these, 534 were excluded as they could not be assigned to a specific recruitment method. The final study population included 7724 participants, of whom 803 (10.4%) were recruited by offline methods, 3985 (51.6%) by online methods, 2382 (30.8%) by online methods not initiated by us, and 554 (7.2%) by other methods. Overall, the average cost per participant was €6.22 for online methods initiated by us versus €9.06 for offline methods. Costs per participant ranged from €2.74 to €105.53 for online methods and from €0 to €67.50 for offline methods. Lowest average costs per participant were for those recruited from Netdoktor.dk (€2.99) and from Facebook (€3.44). Conclusions: In our Internet-based cohort study, online recruitment methods were superior to offline methods in terms of efficiency (total number of participants enrolled). The average cost per recruited participant was also lower for online than for offline methods, although costs varied greatly among both online and offline recruitment methods. We observed a decrease in the efficiency of some online recruitment methods over time, suggesting that it may be optimal to adopt multiple online methods. PMID:28249833

  11. Efficient method to design RF pulses for parallel excitation MRI using gridding and conjugate gradient

    PubMed Central

    Feng, Shuo

    2014-01-01

    Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high-field MRI to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method with Fourier domain gridding and a conjugate gradient method. Simulation results show that the proposed method can design pTx pulses at an efficiency 10 times higher than that of the conventional conjugate-gradient-based method, without reducing the accuracy of the desired excitation patterns. PMID:24834420

  12. Efficient method to design RF pulses for parallel excitation MRI using gridding and conjugate gradient.

    PubMed

    Feng, Shuo; Ji, Jim

    2014-04-01

    Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high-field MRI to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method with Fourier domain gridding and a conjugate gradient method. Simulation results show that the proposed method can design pTx pulses at an efficiency 10 times higher than that of the conventional conjugate-gradient-based method, without reducing the accuracy of the desired excitation patterns.
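
    The design step in both records above is a regularized least-squares problem solved by conjugate gradient, with gridding used to apply the system matrix quickly. The sketch below shows only the conjugate-gradient part on the normal equations for a small dense toy matrix; the toy matrix, its size, and the iteration count are assumptions, and the paper's speed-up comes from replacing the explicit matrix products with gridding-based operators.

        import numpy as np

        def cg_normal_equations(A, d, n_iter=30, tol=1e-8):
            """Solve min_b ||A b - d||^2 by conjugate gradient on A^H A b = A^H d.

            In small-tip-angle pTx design, A encodes coil sensitivities and the
            k-space trajectory and d is the desired excitation pattern; here A is
            a small dense toy matrix, whereas a fast implementation would replace
            A @ b and A^H @ r with gridding (NUFFT-like) operators."""
            b = np.zeros(A.shape[1], dtype=complex)
            r = A.conj().T @ (d - A @ b)          # residual of the normal equations
            p = r.copy()
            rs_old = np.vdot(r, r).real
            for _ in range(n_iter):
                Ap = A.conj().T @ (A @ p)
                alpha = rs_old / np.vdot(p, Ap).real
                b += alpha * p
                r -= alpha * Ap
                rs_new = np.vdot(r, r).real
                if rs_new < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return b

        # Toy usage with random complex data standing in for the pTx system matrix:
        rng = np.random.default_rng(0)
        A = rng.standard_normal((64, 32)) + 1j * rng.standard_normal((64, 32))
        d = rng.standard_normal(64) + 1j * rng.standard_normal(64)
        b = cg_normal_equations(A, d)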

  13. Second harmonic generation efficiency affected by radiation force of a high-energy laser beam through stress within a mounted potassium dihydrogen phosphate crystal

    NASA Astrophysics Data System (ADS)

    Su, Ruifeng; Zhu, Mingzhi; Huang, Zhan; Wang, Baoxu; Wu, Wenkai

    2018-01-01

    The influence of the radiation force of a high-energy laser beam on the second harmonic generation (SHG) efficiency, acting through stress within a mounted potassium dihydrogen phosphate (KDP) crystal, is studied, and an active method of improving the SHG efficiency by controlling this stress is proposed. First, a model for studying the influence of the radiation force on the SHG efficiency is established: the radiation force is analyzed theoretically, the stress it causes is analyzed theoretically and calculated numerically with the finite-element method, and the influence of that stress on the SHG efficiency is analyzed theoretically. Then, a method of improving the SHG efficiency by controlling the stress through adjustment of the structural parameters of the crystal's mounting set is examined. The results demonstrate that the radiation force induces stress within the KDP crystal and thereby degrades the SHG efficiency; however, the SHG efficiency can be improved by controlling this stress through adjustment of the mounting-set parameters.

  14. An efficient unstructured WENO method for supersonic reactive flows

    NASA Astrophysics Data System (ADS)

    Zhao, Wen-Geng; Zheng, Hong-Wei; Liu, Feng-Jun; Shi, Xiao-Tian; Gao, Jun; Hu, Ning; Lv, Meng; Chen, Si-Cong; Zhao, Hong-Da

    2018-03-01

    An efficient high-order numerical method for supersonic reactive flows is proposed in this article. The reactive source term and the convection term are solved separately by a splitting scheme. In the reaction step, an adaptive time-step method is presented, which improves the efficiency greatly. In the convection step, a third-order accurate weighted essentially non-oscillatory (WENO) method is adopted to reconstruct the solution on the unstructured grids. Numerical results show that our new method can capture the correct propagation speed of the detonation wave even on coarse grids, while high-order accuracy is achieved in smooth regions. In addition, the proposed adaptive splitting method reduces the computational cost greatly compared with the traditional splitting method.
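
    A minimal 1D sketch of the splitting structure described above follows: a Strang step that advances the stiff reaction term with adaptive sub-stepping, wrapped around a convection step. The logistic source term, the sub-step limiter, and the first-order upwind convection (standing in for the paper's unstructured WENO reconstruction) are all illustrative assumptions.

        import numpy as np

        def react_adaptive(u, dt, rate=200.0, max_rel_change=0.1):
            """Advance the stiff source du/dt = rate*u*(1-u) over dt with explicit
            sub-steps chosen so that no cell changes by more than max_rel_change
            per sub-step (a simple adaptive sub-stepping rule)."""
            t = 0.0
            while t < dt:
                dudt = rate * u * (1.0 - u)
                scale = np.max(np.abs(dudt) / (np.abs(u) + 1e-12))
                dt_sub = min(dt - t, max_rel_change / (scale + 1e-12))
                u = u + dt_sub * dudt
                t += dt_sub
            return u

        def convect_upwind(u, dt, dx, a=1.0):
            """First-order upwind convection step (periodic boundaries), standing
            in for the WENO reconstruction used in the paper."""
            return u - a * dt / dx * (u - np.roll(u, 1))

        def strang_step(u, dt, dx):
            """One Strang-split step: half reaction, full convection, half reaction."""
            u = react_adaptive(u, 0.5 * dt)
            u = convect_upwind(u, dt, dx)
            u = react_adaptive(u, 0.5 * dt)
            return u

        # Toy setup: a smooth pulse on a periodic 1D grid.
        n, dx = 200, 1.0 / 200
        x = np.arange(n) * dx
        u = 0.01 + 0.5 * np.exp(-((x - 0.3) / 0.05) ** 2)
        for _ in range(100):
            u = strang_step(u, dt=0.4 * dx, dx=dx)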

  15. Determination of efficiency of an aged HPGe detector for gaseous sources by self absorption correction and point source methods

    NASA Astrophysics Data System (ADS)

    Sarangapani, R.; Jose, M. T.; Srinivasan, T. K.; Venkatraman, B.

    2017-07-01

    Methods for the determination of the efficiency of an aged high purity germanium (HPGe) detector for gaseous sources are presented in this paper. X-ray radiography of the detector has been performed to obtain detector dimensions for computational purposes. The dead-layer thickness of the HPGe detector has been ascertained from experiments and Monte Carlo computations. Experimental work with standard point and liquid sources in several cylindrical geometries has been undertaken to obtain the energy-dependent efficiency. Monte Carlo simulations have been performed for computing efficiencies for point, liquid and gaseous sources. Self-absorption correction factors have been obtained using mathematical equations for volume sources and MCNP simulations. Self-absorption correction and point source methods have been used to estimate the efficiency for gaseous sources. The efficiencies determined in the present work have been used to estimate the activity of a cover gas sample from a fast reactor.

  16. 10 CFR 431.86 - Uniform test method for the measurement of energy efficiency of commercial packaged boilers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 3 2013-01-01 2013-01-01 false Uniform test method for the measurement of energy efficiency of commercial packaged boilers. 431.86 Section 431.86 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Commercial Packaged...

  17. 10 CFR 431.86 - Uniform test method for the measurement of energy efficiency of commercial packaged boilers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Uniform test method for the measurement of energy efficiency of commercial packaged boilers. 431.86 Section 431.86 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Commercial Packaged...

  18. 10 CFR 431.86 - Uniform test method for the measurement of energy efficiency of commercial packaged boilers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 3 2012-01-01 2012-01-01 false Uniform test method for the measurement of energy efficiency of commercial packaged boilers. 431.86 Section 431.86 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Commercial Packaged...

  19. 10 CFR 431.86 - Uniform test method for the measurement of energy efficiency of commercial packaged boilers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Uniform test method for the measurement of energy efficiency of commercial packaged boilers. 431.86 Section 431.86 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Commercial Packaged...

  20. Effectiveness and Efficiency of Flashcard Drill Instructional Methods on Urban First-Graders' Word Recognition, Acquisition, Maintenance, and Generalization

    ERIC Educational Resources Information Center

    Nist, Lindsay; Joseph, Laurice M.

    2008-01-01

    This investigation built upon previous studies that compared effectiveness and efficiency among instructional methods. Instructional effectiveness and efficiency were compared among three conditions: an incremental rehearsal, a more challenging ratio of known to unknown interspersal word procedure, and a traditional drill and practice flashcard…

  1. Real time charge efficiency monitoring for nickel electrodes in NICD and NIH2 cells

    NASA Astrophysics Data System (ADS)

    Zimmerman, A. H.

    1987-09-01

    The charge efficiency of nickel-cadmium and nickel-hydrogen battery cells is critical in spacecraft applications for determining the amount of time required for a battery to reach a full state of charge. As the nickel-cadmium or nickel-hydrogen batteries approach about 90 percent state of charge, the charge efficiency begins to drop towards zero, making estimation of the total amount of stored charge uncertain. Charge efficiency estimates are typically based on prior history of available capacity following standardized conditions for charge and discharge. These methods work well as long as performance does not change significantly. A relatively simple method for determining charge efficiencies during real time operation for these battery cells would be a tremendous advantage. Such a method was explored and appears to be quite well suited for application to nickel-cadmium and nickel-hydrogen battery cells. The charge efficiency is monitored in real time, using only voltage measurements as inputs. With further evaluation such a method may provide a means to better manage charge control of batteries, particularly in systems where a high degree of autonomy or system intelligence is required.

  2. Real time charge efficiency monitoring for nickel electrodes in NICD and NIH2 cells

    NASA Technical Reports Server (NTRS)

    Zimmerman, A. H.

    1987-01-01

    The charge efficiency of nickel-cadmium and nickel-hydrogen battery cells is critical in spacecraft applications for determining the amount of time required for a battery to reach a full state of charge. As the nickel-cadmium or nickel-hydrogen batteries approach about 90 percent state of charge, the charge efficiency begins to drop towards zero, making estimation of the total amount of stored charge uncertain. Charge efficiency estimates are typically based on prior history of available capacity following standardized conditions for charge and discharge. These methods work well as long as performance does not change significantly. A relatively simple method for determining charge efficiencies during real time operation for these battery cells would be a tremendous advantage. Such a method was explored and appears to be quite well suited for application to nickel-cadmium and nickel-hydrogen battery cells. The charge efficiency is monitored in real time, using only voltage measurements as inputs. With further evaluation such a method may provide a means to better manage charge control of batteries, particularly in systems where a high degree of autonomy or system intelligence is required.

  3. Extracting Communities from Complex Networks by the k-Dense Method

    NASA Astrophysics Data System (ADS)

    Saito, Kazumi; Yamada, Takeshi; Kazama, Kazuhiro

    To understand the structural and functional properties of large-scale complex networks, it is crucial to efficiently extract a set of cohesive subnetworks as communities. Several such community extraction methods have been proposed in the literature, including the classical k-core decomposition method and, more recently, the k-clique based community extraction method. The k-core method, although computationally efficient, is often not powerful enough for uncovering a detailed community structure and it produces only coarse-grained and loosely connected communities. The k-clique method, on the other hand, can extract fine-grained and tightly connected communities but requires a substantial amount of computational load for large-scale complex networks. In this paper, we present a new notion of a subnetwork called k-dense, and propose an efficient algorithm for extracting k-dense communities. We applied our method to three different types of networks assembled from real data, namely, from blog trackbacks, word associations and Wikipedia references, and demonstrated that the k-dense method could extract communities almost as efficiently as the k-core method, while the qualities of the extracted communities are comparable to those obtained by the k-clique method.
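
    In the k-dense definition used here, every edge of the extracted subnetwork must join two nodes that share at least k-2 common neighbours within that subnetwork. The sketch below is a straightforward reading of that definition using networkx (repeatedly prune violating edges, then take connected components); it is not the authors' optimized algorithm, and the example graph is purely illustrative.

        import networkx as nx

        def k_dense_subgraph(G, k):
            """Maximal subgraph in which every edge (u, v) has at least k-2 common
            neighbours, obtained by repeatedly pruning violating edges."""
            H = G.copy()
            changed = True
            while changed:
                changed = False
                for u, v in list(H.edges()):
                    if len(set(H[u]) & set(H[v])) < k - 2:
                        H.remove_edge(u, v)
                        changed = True
            H.remove_nodes_from(list(nx.isolates(H)))
            return H

        def k_dense_communities(G, k):
            """Connected components of the k-dense subgraph, as candidate communities."""
            H = k_dense_subgraph(G, k)
            return [set(c) for c in nx.connected_components(H)]

        # Example: two tightly knit 5-cliques joined by a single bridge edge.
        G = nx.Graph()
        G.add_edges_from(nx.complete_graph(5).edges())
        G.add_edges_from((u + 5, v + 5) for u, v in nx.complete_graph(5).edges())
        G.add_edge(0, 5)
        print(k_dense_communities(G, k=4))   # the bridge edge is pruned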

  4. 10 CFR 431.17 - Determination of efficiency.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... method or methods used; the mathematical model, the engineering or statistical analysis, computer... accordance with § 431.16 of this subpart, or by application of an alternative efficiency determination method... must be: (i) Derived from a mathematical model that represents the mechanical and electrical...

  5. Method of preparing and handling chopped plant materials

    DOEpatents

    Bransby, David I.

    2002-11-26

    The method improves the efficiency of harvesting, storage, transport, and feeding of dry plant material to animals, and is a more efficient method for harvesting, handling and transporting dry plant material for industrial purposes, such as the production of bioenergy and composite panels.

  6. Chapter 13: Assessing Persistence and Other Evaluation Issues Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Violette, Daniel M.

    Addressing other evaluation issues that have been raised in the context of energy efficiency programs, this chapter focuses on methods used to address the persistence of energy savings, which is an important input to the benefit/cost analysis of energy efficiency programs and portfolios. In addition to discussing 'persistence' (which refers to the stream of benefits over time from an energy efficiency measure or program), this chapter provides a summary treatment of these issues: synergies across programs, rebound, dual baselines, and errors in variables (the measurement and/or accuracy of input variables to the evaluation).

  7. Retrieval of spheroid particle size distribution from spectral extinction data in the independent mode using PCA approach

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Lin, Jian-Zhong

    2013-01-01

    An improved anomalous diffraction approximation (ADA) method is first presented for calculating the extinction efficiency of spheroids. In this approach, the extinction efficiency of spheroid particles can be calculated with good accuracy and high efficiency over a wider size range by combining the Latimer method and the ADA theory, and the method yields a more general expression for the extinction efficiency of spheroid particles with various complex refractive indices and aspect ratios. Meanwhile, the visible spectral extinction for various spheroid particle size distributions and complex refractive indices is surveyed. Furthermore, a selection principle for the spectral extinction data is developed based on PCA (principal component analysis) of the first-derivative spectral extinction. By calculating the contribution rate of the first-derivative spectral extinction, the spectra with more significant features can be selected as the input data, while those with fewer features are removed from the inversion data. In addition, we propose an improved Tikhonov iteration method to retrieve the spheroid particle size distributions in the independent mode. Simulation experiments indicate that the spheroid particle size distributions obtained with the proposed method coincide fairly well with the given distributions, and this inversion method provides a simple, reliable and efficient way to retrieve spheroid particle size distributions from spectral extinction data.
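
    A compact sketch of the selection step described above follows: differentiate each measured extinction spectrum, run PCA across the derivative spectra, and keep the spectra that contribute most to the leading components. The scoring rule (energy captured by the leading principal components) and the keep-fraction are one plausible reading of the abstract, not the authors' exact criterion, and the synthetic spectra are illustrative.

        import numpy as np

        def select_informative_spectra(extinction, keep_fraction=0.5, n_components=3):
            """Rank spectral extinction curves (rows of `extinction`, one per
            measurement) by the feature content of their first derivatives using
            PCA, and keep the most informative fraction."""
            d1 = np.gradient(extinction, axis=1)            # first-derivative spectra
            d1c = d1 - d1.mean(axis=0)                       # centre across spectra
            U, s, Vt = np.linalg.svd(d1c, full_matrices=False)   # PCA via SVD
            scores = U[:, :n_components] * s[:n_components]     # projections on leading PCs
            feature_energy = np.sum(scores**2, axis=1)          # per-spectrum contribution
            order = np.argsort(feature_energy)[::-1]
            n_keep = max(1, int(keep_fraction * extinction.shape[0]))
            return order[:n_keep], feature_energy

        # Toy usage with synthetic visible-range spectra:
        rng = np.random.default_rng(1)
        wl = np.linspace(0.4, 0.8, 120)                      # wavelengths in micrometres
        spectra = (1.5 + 0.3 * np.sin(8 * np.pi * wl)[None, :] * rng.random((20, 1))
                   + 0.02 * rng.standard_normal((20, 120)))
        kept, energy = select_informative_spectra(spectra)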

  8. The efficiency of parameter estimation of latent path analysis using summated rating scale (SRS) and method of successive interval (MSI) for transformation of score to scale

    NASA Astrophysics Data System (ADS)

    Solimun; Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang

    2017-12-01

    Research in various fields generally investigates systems and involves latent variables. One method to analyze the model representing such a system is path analysis. Latent variables measured using questionnaires with an attitude scale model yield data in the form of scores, which should be transformed into scale data before analysis. Path coefficients, the parameter estimators, are calculated from the scale data obtained with the method of successive interval (MSI) and the summated rating scale (SRS). This research identifies which data transformation method is better. Path coefficients with smaller variances are said to be more efficient; the transformation method that produces scale data yielding path coefficients (parameter estimators) with smaller variances in path analysis is therefore considered better. Analysis of real data shows that, for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency is RE = 1, indicating that analyses using the MSI and SRS transformations are equally efficient. For simulated data with high correlation between items (0.7-0.9), on the other hand, the MSI method is about 1.3 times more efficient than the SRS method.

  9. Development of Quenching-qPCR (Q-Q) assay for measuring absolute intracellular cleavage efficiency of ribozyme.

    PubMed

    Kim, Min Woo; Sun, Gwanggyu; Lee, Jung Hyuk; Kim, Byung-Gee

    2018-06-01

    The ribozyme (Rz) is a very attractive RNA molecule in the metabolic engineering and synthetic biology fields, where its cleavage reaction is used for RNA processing as a control unit or ON/OFF signal. In order to use Rz for such RNA processing, the Rz must have highly active and specific catalytic activity. However, current methods for assessing the intracellular activity of Rz have limitations such as difficulty in handling and inaccuracies in the evaluation of the true cleavage activity. In this paper, we propose a simple method to accurately measure the "intracellular cleavage efficiency" of Rz. The method uses a DNA quenching step to suppress unwanted Rz activity that can continue after cell lysis, and calculates the cleavage efficiency from the fraction of mRNA cleaved by the Rz relative to the total amount of Rz-containing mRNA, measured by quantitative real-time PCR (qPCR). The proposed method was applied to measure the "intracellular cleavage efficiency" of sTRSV, a representative Rz, and its mutant; their intracellular cleavage efficiencies were calculated as 89% and 93%, respectively. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.

    PubMed

    Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai

    2017-11-01

    For big data analysis, the high computational cost of Bayesian methods often limits their application in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in the parameter space of the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.

  11. 40 CFR Appendix D to Part 60 - Required Emission Inventory Information

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... one device operates in series. The method of control efficiency determination shall be indicated (e.g., design efficiency, measured efficiency, estimated efficiency). (iii) Annual average control efficiency, in percent, taking into account control equipment down time. This shall be a combined efficiency when...

  12. 40 CFR Appendix D to Part 60 - Required Emission Inventory Information

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... one device operates in series. The method of control efficiency determination shall be indicated (e.g., design efficiency, measured efficiency, estimated efficiency). (iii) Annual average control efficiency, in percent, taking into account control equipment down time. This shall be a combined efficiency when...

  13. 40 CFR Appendix D to Part 60 - Required Emission Inventory Information

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... one device operates in series. The method of control efficiency determination shall be indicated (e.g., design efficiency, measured efficiency, estimated efficiency). (iii) Annual average control efficiency, in percent, taking into account control equipment down time. This shall be a combined efficiency when...

  14. 40 CFR Appendix D to Part 60 - Required Emission Inventory Information

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... one device operates in series. The method of control efficiency determination shall be indicated (e.g., design efficiency, measured efficiency, estimated efficiency). (iii) Annual average control efficiency, in percent, taking into account control equipment down time. This shall be a combined efficiency when...

  15. 40 CFR Appendix D to Part 60 - Required Emission Inventory Information

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... one device operates in series. The method of control efficiency determination shall be indicated (e.g., design efficiency, measured efficiency, estimated efficiency). (iii) Annual average control efficiency, in percent, taking into account control equipment down time. This shall be a combined efficiency when...

  16. Selection of appropriate isolation method based on morphology of blastocyst for efficient derivation of buffalo embryonic stem cells.

    PubMed

    Kumar, R; Ahlawat, S P S; Sharma, M; Verma, O P; Sai Kumar, G; Taru Sharma, G

    2014-03-01

    The efficiency of embryonic stem cell (ESC) derivation from all species except rodents and primates is very low. There is, however, considerable interest in obtaining pluripotent cells from these animals, mainly for transgenesis, cloning, regenerative medicine and tissue engineering. Research is being carried out in laboratories throughout the world to increase the efficiency of ESC isolation for such downstream applications. Thus, the present study was undertaken to examine the effect of different isolation methods, chosen according to blastocyst morphology, on the efficient derivation of buffalo ESCs. Embryos were produced in vitro through the procedures of maturation, fertilization and culture. Hatched blastocysts or isolated inner cell masses (ICMs) were seeded on a mitomycin-C inactivated buffalo fetal fibroblast monolayer for the development of ESC colonies. The ESCs were analyzed for alkaline phosphatase activity, expression of pluripotency markers and karyotypic stability. Primary ESC colonies were obtained 2-5 days after seeding hatched blastocysts or isolated ICMs on the mitomycin-C inactivated feeder layer. Mechanically isolated ICMs attached and formed primary cell colonies more efficiently than ICMs isolated enzymatically. For derivation of ESCs from poorly defined ICMs, culture of the intact hatched blastocyst was the most successful method. The results imply that although ESCs can be obtained using all three methods examined, the efficiency varies depending upon the morphology of the blastocyst and the isolation method used. The appropriate isolation method must therefore be selected according to blastocyst quality for efficient derivation of ESCs.

  17. Reuse of imputed data in microarray analysis increases imputation efficiency

    PubMed Central

    Kim, Ki-Yeol; Kim, Byoung-Jin; Yi, Gwan-Su

    2004-01-01

    Background The imputation of missing values is necessary for the efficient use of DNA microarray data, because many clustering algorithms and some statistical analyses require a complete data set. A few imputation methods for DNA microarray data have been introduced, but their efficiency was low and the validity of the imputed values had not been fully checked. Results We developed a new cluster-based imputation method called the sequential K-nearest neighbor (SKNN) method. This imputes the missing values sequentially, starting from the gene having the fewest missing values, and uses the imputed values for later imputations. Although it reuses imputed values, the new method is greatly improved in accuracy and computational complexity over the conventional KNN-based method and other methods based on maximum likelihood estimation. The performance of SKNN was in particular higher than that of other imputation methods for data with high missing rates and a large number of experiments. Application of Expectation Maximization (EM) to the SKNN method improved the accuracy, but increased the computational time in proportion to the number of iterations. The Multiple Imputation (MI) method, which is well known but had not previously been applied to microarray data, showed a similarly high accuracy to the SKNN method, with a slightly higher dependency on the type of data set. Conclusions Sequential reuse of imputed data in KNN-based imputation greatly increases the efficiency of imputation. The SKNN method should be practically useful for salvaging data from microarray experiments that have large numbers of missing entries. The SKNN method generates reliable imputed values which can be used for further cluster-based analysis of microarray data. PMID:15504240
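
    A minimal sketch of the sequential KNN idea described above follows: genes are imputed in order of increasing missingness, and each imputed gene immediately joins the pool of reference genes available to later imputations. The Euclidean neighbour metric, the value of k, and the toy data are illustrative assumptions.

        import numpy as np

        def sknn_impute(X, k=10):
            """Sequential K-nearest-neighbour imputation of a genes x experiments
            matrix X containing NaNs.  Genes are processed from fewest to most
            missing values; once imputed, a gene joins the pool of complete
            reference genes, so later imputations can reuse earlier imputed values."""
            X = X.astype(float).copy()
            missing_per_gene = np.isnan(X).sum(axis=1)
            complete = list(np.where(missing_per_gene == 0)[0])
            order = [g for g in np.argsort(missing_per_gene) if missing_per_gene[g] > 0]

            for g in order:
                row = X[g]
                miss = np.isnan(row)
                obs = ~miss
                ref = np.array(complete)
                # Distance to reference genes over the observed experiments only
                d = np.sqrt(np.mean((X[ref][:, obs] - row[obs]) ** 2, axis=1))
                nn = ref[np.argsort(d)[:k]]
                row[miss] = X[nn][:, miss].mean(axis=0)
                X[g] = row
                complete.append(g)               # the imputed gene becomes a reference
            return X

        # Toy usage: 50 genes x 8 experiments with about 10% of entries missing.
        rng = np.random.default_rng(2)
        data = rng.standard_normal((50, 8))
        data[rng.random(data.shape) < 0.1] = np.nan
        filled = sknn_impute(data, k=5)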

  18. A Drive Method of Permanent Magnet Synchronous Motor Using Torque Angle Estimation without Position Sensor

    NASA Astrophysics Data System (ADS)

    Tanaka, Takuro; Takahashi, Hisashi

    In some motor applications, it is very difficult to attach a position sensor to the motor within its housing; one example is the dental handpiece motor. In such designs, it is necessary to drive the motor with high efficiency at low speed and under variable load without a position sensor. We developed a method to control a motor efficiently and smoothly at low speed without a position sensor. In this paper, a method in which a permanent magnet synchronous motor is controlled smoothly and efficiently by using torque angle control in synchronized operation is presented, and its usefulness is confirmed by experimental results. In conclusion, the proposed sensorless control method achieves highly efficient and smooth operation.

  19. Tild-CRISPR Allows for Efficient and Precise Gene Knockin in Mouse and Human Cells.

    PubMed

    Yao, Xuan; Zhang, Meiling; Wang, Xing; Ying, Wenqin; Hu, Xinde; Dai, Pengfei; Meng, Feilong; Shi, Linyu; Sun, Yun; Yao, Ning; Zhong, Wanxia; Li, Yun; Wu, Keliang; Li, Weiping; Chen, Zi-Jiang; Yang, Hui

    2018-05-21

    The targeting efficiency of knockin sequences via homologous recombination (HR) is generally low. Here we describe a method we call Tild-CRISPR (targeted integration with linearized dsDNA-CRISPR), a targeting strategy in which a PCR-amplified or precisely enzyme-cut transgene donor with 800-bp homology arms is injected with Cas9 mRNA and single guide RNA into mouse zygotes. Compared with existing targeting strategies, this method achieved much higher knockin efficiency in mouse embryos, as well as brain tissue. Importantly, the Tild-CRISPR method also yielded up to 12-fold higher knockin efficiency than HR-based methods in human embryos, making it suitable for studying gene functions in vivo and developing potential gene therapies. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. A novel scene management technology for complex virtual battlefield environment

    NASA Astrophysics Data System (ADS)

    Sheng, Changchong; Jiang, Libing; Tang, Bo; Tang, Xiaoan

    2018-04-01

    Efficient scene management of a virtual environment is an important topic in real-time computer visualization and has a decisive influence on rendering efficiency. However, traditional scene management methods are not suitable for complex virtual battlefield environments. This paper combines the advantages of traditional scene graph technology and spatial data structure methods: following the idea of separating management from rendering, a loose object-oriented scene graph structure is established to manage the entity model data in the scene, and a performance-based quad-tree structure is created for traversal and rendering. In addition, a collaborative update relationship between the two structural trees is designed to achieve efficient scene management. Compared with previous scene management methods, this method is more efficient and meets the needs of real-time visualization.

  1. Design of A Cyclone Separator Using Approximation Method

    NASA Astrophysics Data System (ADS)

    Sin, Bong-Su; Choi, Ji-Won; Lee, Kwon-Hee

    2017-12-01

    A separator is a device installed in industrial applications to separate mixed substances. The separator of interest in this research is a cyclone type used to separate a steam-brine mixture in a geothermal plant. The most important performance measure of the cyclone separator is the collection efficiency, which in this study is predicted by CFD (computational fluid dynamics) analysis. This research defines six shape design variables to maximize the collection efficiency, which is set up as the objective function in the optimization process. Since the CFD analysis requires substantial computation time, it is impractical to obtain the optimal solution by coupling a gradient-based optimization algorithm directly to it. Thus, two approximation methods are introduced to obtain an optimum design: an L18 orthogonal array is adopted as the DOE method, and a kriging interpolation method is used to generate the metamodel for the collection efficiency. Based on the 18 analysis results, the relative importance of each variable to the collection efficiency is obtained through ANOVA (analysis of variance). The final design is suggested considering the results obtained from the two optimization methods. The fluid flow analysis of the cyclone separator is conducted using the commercial CFD software ANSYS-CFX.
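
    The approximation step described above (a small designed set of expensive CFD runs, a kriging metamodel, then optimization on the metamodel instead of the solver) can be sketched as follows. The random stand-in data, the Matern kernel, and the differential-evolution optimizer are illustrative assumptions; in the study the 18 samples come from the L18 orthogonal array and the responses from ANSYS-CFX runs.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern
        from scipy.optimize import differential_evolution

        # Placeholder for 18 CFD runs: six normalized shape variables per design
        # and the corresponding collection efficiencies.
        rng = np.random.default_rng(3)
        X_doe = rng.random((18, 6))
        eff_doe = 0.7 + 0.2 * X_doe[:, 0] - 0.1 * X_doe[:, 3] + 0.02 * rng.standard_normal(18)

        # Kriging (Gaussian-process) metamodel of the collection efficiency
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X_doe, eff_doe)

        # Maximize the predicted efficiency over the design space instead of
        # calling the expensive CFD solver inside the optimizer.
        res = differential_evolution(lambda x: -gp.predict(x.reshape(1, -1))[0],
                                     bounds=[(0.0, 1.0)] * 6, seed=0)
        print("predicted optimum:", res.x, "efficiency:", -res.fun)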

  2. Tensor Factorization for Low-Rank Tensor Completion.

    PubMed

    Zhou, Pan; Lu, Canyi; Lin, Zhouchen; Zhang, Chao

    2018-03-01

    Recently, a tensor nuclear norm (TNN) based method was proposed to solve the tensor completion problem, which has achieved state-of-the-art performance on image and video inpainting tasks. However, it requires computing the tensor singular value decomposition (t-SVD), which is computationally expensive and thus cannot efficiently handle tensor data, whose scale is naturally large. Motivated by TNN, we propose a novel low-rank tensor factorization method for efficiently solving the 3-way tensor completion problem. Our method preserves the low-rank structure of a tensor by factorizing it into the product of two tensors of smaller sizes. In the optimization process, our method only needs to update two smaller tensors, which can be done more efficiently than computing the t-SVD. Furthermore, we prove that the proposed alternating minimization algorithm converges to a Karush-Kuhn-Tucker point. Experimental results on synthetic data recovery and image and video inpainting tasks clearly demonstrate the superior performance and efficiency of our method over the state-of-the-art, including the TNN and matricization methods.

  3. Computational efficiency for the surface renewal method

    NASA Astrophysics Data System (ADS)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and that were tested for sensitivity to the length of the flux averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications that demonstrate simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased speed of computation grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
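
    Much of the cost in an automated SR implementation lies in tabulating structure functions of the high-frequency scalar record over many time lags and averaging periods. The sketch below shows that vectorized tabulation only; the synthetic 10 Hz series is an illustrative assumption, and the subsequent Van Atta-style ramp-amplitude solution, which uses these structure functions, is only indicated in a comment.

        import numpy as np

        def structure_functions(T, lags, orders=(2, 3, 5)):
            """Structure functions S_n(r) = <(T(t) - T(t-r))^n> of a high-frequency
            scalar series T, evaluated for every lag r (in samples) in `lags`.
            Vectorizing this tabulation over lags is where most of the speed-up of
            an automated surface-renewal implementation comes from."""
            out = {n: np.empty(len(lags)) for n in orders}
            for i, r in enumerate(lags):
                d = T[r:] - T[:-r]
                for n in orders:
                    out[n][i] = np.mean(d ** n)
            return out

        # Toy usage: a synthetic 10 Hz temperature record over a 30-minute period.
        rng = np.random.default_rng(4)
        T = np.cumsum(0.01 * rng.standard_normal(10 * 60 * 30))
        S = structure_functions(T, lags=np.arange(1, 21))
        # A Van Atta-style analysis would then solve a cubic in the ramp amplitude
        # using S[2], S[3] and S[5] at a chosen lag (not shown here).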

  4. A combined experimental-modelling method for the detection and analysis of pollution in coastal zones

    NASA Astrophysics Data System (ADS)

    Limić, Nedzad; Valković, Vladivoj

    1996-04-01

    Pollution of coastal seas with toxic substances can be efficiently detected by examining toxic materials in sediment samples. These samples contain information on the overall pollution from surrounding sources such as yacht anchorages, nearby industries, sewage systems, etc. In an efficient analysis of pollution one must determine the contribution from each individual source. In this work it is demonstrated that a modelling method can be utilized for solving this latter problem. The modelling method is based on a unique interpretation of concentrations in sediments from all sampling stations. The proposed method is a synthesis consisting of the utilization of PIXE as an efficient method of pollution concentration determination and the code ANCOPOL (N. Limic and R. Benis, The computer code ANCOPOL, SimTel/msdos/geology, 1994 [1]) for the calculation of contributions from the main polluters. The efficiency and limits of the proposed method are demonstrated by discussing trace element concentrations in sediments of Punat Bay on the island of Krk in Croatia.

  5. Efficiency trade-offs of steady-state methods using FEM and FDM. [iterative solutions for nonlinear flow equations

    NASA Technical Reports Server (NTRS)

    Gartling, D. K.; Roache, P. J.

    1978-01-01

    The efficiency characteristics of finite element and finite difference approximations for the steady-state solution of the Navier-Stokes equations are examined. The finite element method discussed is a standard Galerkin formulation of the incompressible, steady-state Navier-Stokes equations. The finite difference formulation uses simple centered differences that are O(delta x-squared). Operation counts indicate that a rapidly converging Newton-Raphson-Kantorovitch iteration scheme is generally preferable over a Picard method. A split NOS Picard iterative algorithm for the finite difference method was most efficient.

  6. Tracking of Indels by DEcomposition is a Simple and Effective Method to Assess Efficiency of Guide RNAs in Zebrafish.

    PubMed

    Etard, Christelle; Joshi, Swarnima; Stegmaier, Johannes; Mikut, Ralf; Strähle, Uwe

    2017-12-01

    A bottleneck in CRISPR/Cas9 genome editing is variable efficiencies of in silico-designed gRNAs. We evaluated the sensitivity of the TIDE method (Tracking of Indels by DEcomposition) introduced by Brinkman et al. in 2014 for assessing the cutting efficiencies of gRNAs in zebrafish. We show that this simple method, which involves bulk polymerase chain reaction amplification and Sanger sequencing, is highly effective in tracking well-performing gRNAs in pools of genomic DNA derived from injected embryos. The method is equally effective for tracing INDELs in heterozygotes.

  7. An Evaluation of the Efficiency of Different Hygienisation Methods

    NASA Astrophysics Data System (ADS)

    Zrubková, M.

    2017-10-01

    The aim of this study is to evaluate the efficiency of hygienisation by pasteurisation, temperature-phased anaerobic digestion and sludge liming. A summary of the legislation concerning sludge treatment, disposal and recycling is included. The hygienisation methods are compared not only in terms of hygienisation efficiency but a comparison of other criteria is also included.

  8. 10 CFR 431.96 - Uniform test method for the measurement of energy efficiency of commercial air conditioners and...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... efficiency of commercial air conditioners and heat pumps. 431.96 Section 431.96 Energy DEPARTMENT OF ENERGY... Air Conditioners and Heat Pumps Test Procedures § 431.96 Uniform test method for the measurement of energy efficiency of commercial air conditioners and heat pumps. (a) Scope. This section contains test...

  9. 10 CFR 431.96 - Uniform test method for the measurement of energy efficiency of commercial air conditioners and...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... efficiency of commercial air conditioners and heat pumps. 431.96 Section 431.96 Energy DEPARTMENT OF ENERGY... Air Conditioners and Heat Pumps Test Procedures § 431.96 Uniform test method for the measurement of energy efficiency of commercial air conditioners and heat pumps. (a) Scope. This section contains test...

  10. Determination of GTA Welding Efficiencies

    DTIC Science & Technology

    1993-03-01

    A method is developed for estimating welding efficiencies for moving arc GTAW processes.

  11. Efficient l1 -norm-based low-rank matrix approximations for large-scale problems using alternating rectified gradient method.

    PubMed

    Kim, Eunwoo; Lee, Minsik; Choi, Chong-Ho; Kwak, Nojun; Oh, Songhwai

    2015-02-01

    Low-rank matrix approximation plays an important role in the area of computer vision and image processing. Most of the conventional low-rank matrix approximation methods are based on the l2 -norm (Frobenius norm) with principal component analysis (PCA) being the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2 -norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1 -norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite the robustness of the methods, they require heavy computational effort and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1 -norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that our proposals are efficient in both execution time and reconstruction performance unlike other state-of-the-art methods.

  12. Spectral difference Lanczos method for efficient time propagation in quantum control theory

    NASA Astrophysics Data System (ADS)

    Farnum, John D.; Mazziotti, David A.

    2004-04-01

    Spectral difference methods represent the real-space Hamiltonian of a quantum system as a banded matrix which possesses the accuracy of the discrete variable representation (DVR) and the efficiency of finite differences. When applied to time-dependent quantum mechanics, spectral differences enhance the efficiency of propagation methods for evolving the Schrödinger equation. We develop a spectral difference Lanczos method which is computationally more economical than the sinc-DVR Lanczos method, the split-operator technique, and even the fast-Fourier-Transform Lanczos method. Application of fast propagation is made to quantum control theory where chirped laser pulses are designed to dissociate both diatomic and polyatomic molecules. The specificity of the chirped laser fields is also tested as a possible method for molecular identification and discrimination.
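
    The propagation step underlying the record above is a short-iterative-Lanczos approximation of exp(-iHΔt) applied to the wavefunction, with the Hamiltonian available only through cheap matrix-vector products (banded, in the spectral-difference representation). The sketch below shows a generic Lanczos step; the subspace size and the dense random Hermitian matrix standing in for the spectral-difference Hamiltonian are assumptions.

        import numpy as np
        from scipy.linalg import expm

        def lanczos_step(apply_H, psi, dt, m=12):
            """Propagate psi by exp(-i*H*dt) in an m-dimensional Krylov (Lanczos)
            subspace.  apply_H(v) returns H @ v; for a spectral-difference
            Hamiltonian this is a cheap banded matrix-vector product."""
            n = psi.size
            V = np.zeros((m, n), dtype=complex)
            alpha = np.zeros(m)
            beta = np.zeros(m - 1)
            V[0] = psi / np.linalg.norm(psi)
            w = apply_H(V[0])
            alpha[0] = np.vdot(V[0], w).real
            w = w - alpha[0] * V[0]
            for j in range(1, m):
                beta[j - 1] = np.linalg.norm(w)
                if beta[j - 1] < 1e-14:          # invariant subspace found early
                    m = j
                    break
                V[j] = w / beta[j - 1]
                w = apply_H(V[j])
                alpha[j] = np.vdot(V[j], w).real
                w = w - alpha[j] * V[j] - beta[j - 1] * V[j - 1]
            T = np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
            # exp(-i*T*dt) applied to the first unit vector, lifted back to full space
            small = expm(-1j * dt * T)[:, 0]
            return np.linalg.norm(psi) * (V[:m].T @ small)

        # Toy usage: a random Hermitian "Hamiltonian" on 200 grid points.
        rng = np.random.default_rng(5)
        A = rng.standard_normal((200, 200))
        H = (A + A.T) / 2
        psi0 = rng.standard_normal(200) + 1j * rng.standard_normal(200)
        psi1 = lanczos_step(lambda v: H @ v, psi0, dt=0.05)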

  13. CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.

    PubMed

    Saegusa, Jun

    2008-01-01

    The representative point method for the efficiency calibration of volume samples has been proposed previously. To facilitate implementation of the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of a representative point, which is intrinsic to each shape of volume sample. Self-absorption correction factors are also given to correct the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.

  14. Power and spectrally efficient M-ARY QAM schemes for future mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Sreenath, K.; Feher, K.

    1990-01-01

    An effective method to compensate for nonlinear phase distortion caused by the mobile amplifier is proposed. As a first step towards the future use of spectrally efficient modulation schemes for mobile satellite applications, we have investigated the effects of nonlinearities and of the phase compensation method on 16-QAM. The new method provides about 2 dB of power savings for 16-QAM operation with cost-effective amplifiers near saturation, thereby making spectrally efficient linear modulation schemes promising for future mobile satellite applications.

  15. Efficient generation of integration-free human induced pluripotent stem cells from keratinocytes by simple transfection of episomal vectors.

    PubMed

    Piao, Yulan; Hung, Sandy Shen-Chi; Lim, Shiang Y; Wong, Raymond Ching-Bong; Ko, Minoru S H

    2014-07-01

    Keratinocytes represent an easily accessible cell source for derivation of human induced pluripotent stem (hiPS) cells, reportedly achieving higher reprogramming efficiency than fibroblasts. However, most studies utilized a retroviral or lentiviral method for reprogramming of keratinocytes, which introduces undesirable transgene integrations into the host genome. Moreover, current protocols of generating integration-free hiPS cells from keratinocytes are mostly inefficient. In this paper, we describe a more efficient, simple-to-use, and cost-effective method for generating integration-free hiPS cells from keratinocytes. Our improved method using lipid-mediated transfection achieved a reprogramming efficiency of ∼0.14% on average. Keratinocyte-derived hiPS cells showed no integration of episomal vectors, expressed stem cell-specific markers and possessed potentials to differentiate into all three germ layers by in vitro embryoid body formation as well as in vivo teratoma formation. To our knowledge, this represents the most efficient method to generate integration-free hiPS cells from keratinocytes. ©AlphaMed Press.

  16. A Generalized Method for the Comparable and Rigorous Calculation of the Polytropic Efficiencies of Turbocompressors

    NASA Astrophysics Data System (ADS)

    Dimitrakopoulos, Panagiotis

    2018-03-01

    The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units, like compressor impellers, stages and stage groups. Such calculations are also crucial for the determination of the performance of a whole compressor. As processors and computational capacities have improved substantially in recent years, the need has emerged for a new, rigorous, robust, accurate and at the same time standardized method for computing polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge), for a given working fluid. The average relative error for the studied cases was 0.536 %. Thus, this high-accuracy method is proposed for efficiency calculations related to turbocompressors and their compression units, especially when they are operating at high power levels, for example in jet engines and high-power plants.
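
    For orientation only, the ideal-gas limit of the polytropic efficiency has the familiar closed form eta_p = ((kappa-1)/kappa) * ln(p2/p1) / ln(T2/T1), which uses exactly the same suction/discharge inputs as the rigorous real-gas method described above. The sketch below implements that ideal-gas limit (constant kappa is an assumption), not the paper's real-gas procedure.

        import math

        def polytropic_efficiency_ideal_gas(p1, T1, p2, T2, kappa=1.4):
            """Ideal-gas polytropic efficiency of a compression from (p1, T1) to
            (p2, T2): eta_p = ((kappa-1)/kappa) * ln(p2/p1) / ln(T2/T1).
            This is only the ideal-gas limit of the rigorous real-gas calculation
            described above; kappa is the (assumed constant) isentropic exponent."""
            return (kappa - 1.0) / kappa * math.log(p2 / p1) / math.log(T2 / T1)

        # Example: air compressed from 1 bar, 293 K to 4 bar, 470 K.
        eta = polytropic_efficiency_ideal_gas(1e5, 293.15, 4e5, 470.0)
        print(f"polytropic efficiency ~ {eta:.3f}")   # roughly 0.84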

  17. A synthetic visual plane algorithm for visibility computation in consideration of accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Yu, Jieqing; Wu, Lixin; Hu, Qingsong; Yan, Zhigang; Zhang, Shaoliang

    2017-12-01

    Visibility computation is of great interest to location optimization, environmental planning, ecology, and tourism, and many algorithms have been developed for it. In this paper, we propose a novel method of visibility computation, called the synthetic visual plane (SVP), to achieve better performance with respect to efficiency, accuracy, or both. The method uses a global horizon, which is a synthesis of the line-of-sight information of all nearer points, to determine the visibility of a point, which makes it an accurate visibility method. We discretize the horizon to gain efficiency. After discretization, the accuracy and efficiency of SVP depend on the scale of discretization (i.e., the zone width). The method is more accurate at smaller zone widths, but this requires a longer operating time; users must strike a balance between accuracy and efficiency at their discretion. According to our experiments, SVP is less accurate but more efficient than R2 if the zone width is set to one grid cell. However, SVP becomes more accurate than R2 when the zone width is set to 1/24 grid cell, while it continues to perform as fast as or faster than R2. Although SVP performs worse than the reference plane and depth map methods with respect to efficiency, it is superior in accuracy to these two algorithms.

  18. 3D seismic modeling in geothermal reservoirs with a distribution of steam patch sizes, permeabilities and saturations, including ductility of the rock frame

    NASA Astrophysics Data System (ADS)

    Carcione, José M.; Poletto, Flavio; Farina, Biancamaria; Bellezza, Cinzia

    2018-06-01

    Seismic propagation in the upper part of the crust, where geothermal reservoirs are located, generally shows strong velocity dispersion and attenuation due to varying permeability and saturation conditions, and is affected by the brittleness and/or ductility of the rocks, including zones of partial melting. From the elastic-plastic standpoint, the seismic properties (seismic velocity, quality factor and density) depend on effective pressure and temperature. We describe the related effects with a Burgers mechanical element for the shear modulus of the dry-rock frame. The Arrhenius equation combined with the octahedral stress criterion defines the Burgers viscosity responsible for the brittle-ductile behaviour. The effects of permeability, partial saturation, varying porosity and mineral composition on the seismic properties are described by a generalization of the White mesoscopic-loss model to the case of a distribution of heterogeneities of those properties. White's model involves the wave-induced fluid-flow attenuation mechanism, by which seismic waves propagating through small-scale heterogeneities induce pressure gradients between regions of dissimilar properties, so that part of the energy of the fast P-wave is converted to the slow P (Biot) wave. We consider a range of variations of the radius and size of the patches and thin layers, whose probability density function is defined by different distributions. The White models used here are those of spherical patches (for partial saturation) and thin layers (for permeability heterogeneities). The complex bulk modulus of the composite medium is obtained with the Voigt-Reuss-Hill average. Effective pressure effects are taken into account by using exponential functions. We then solve the 3D equation of motion in the space-time domain by approximating the White complex bulk modulus with that of a set of Zener elements connected in series. The Burgers and generalized Zener models allow us to solve the equations with a direct grid method through the introduction of memory variables. The algorithm uses the Fourier pseudospectral method to compute the spatial derivatives. It is tested against an analytical solution obtained with the correspondence principle. We consider two main cases, namely the same rock frame (uniform porosity and permeability) saturated with water and a distribution of steam patches, and a water-saturated background medium with thin layers of dissimilar permeability. Our model indicates how the seismic properties change with the geothermal reservoir temperature and pressure, showing that both seismic velocity and attenuation can be used as diagnostic tools to estimate the in situ conditions.
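
    The abstract above notes that spatial derivatives are computed with the Fourier pseudospectral method. As a minimal, generic illustration (a 1D periodic derivative, not the authors' 3D viscoelastic solver), the derivative is obtained by multiplying the FFT of the field by i*k:

        import numpy as np

        def pseudospectral_dx(f, L):
            """First derivative of a 1D periodic field f of physical length L,
            computed with the Fourier pseudospectral method (spectral accuracy)."""
            n = f.size
            k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
            return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

        # Accuracy check on f(x) = sin(3x) over [0, 2*pi): derivative is 3*cos(3x).
        n, L = 128, 2.0 * np.pi
        x = np.arange(n) * L / n
        err = np.max(np.abs(pseudospectral_dx(np.sin(3 * x), L) - 3 * np.cos(3 * x)))
        print(f"max error ~ {err:.2e}")   # near machine precision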

  19. Combined chemical and physical transformation method with RbCl and sepiolite for the transformation of various bacterial species.

    PubMed

    Ren, Jun; Lee, Haram; Yoo, Seung Min; Yu, Myeong-Sang; Park, Hansoo; Na, Dokyun

    2017-04-01

    DNA transformation that delivers plasmid DNAs into bacterial cells is fundamental in genetic manipulation to engineer and study bacteria. Transformation methods developed to date are optimized for specific bacterial species to achieve high efficiency. Thus, there is always a demand for simple and species-independent transformation methods. We herein describe the development of a chemico-physical transformation method that combines a rubidium chloride (RbCl)-based chemical method and a sepiolite-based physical method, and report its use for the simple and efficient delivery of DNA into various bacterial species. Using this method, the best transformation efficiency for Escherichia coli DH5α was 4.3×10^6 CFU/μg of pUC19 plasmid, which is higher than or comparable to the transformation efficiencies reported to date. This method also allowed the introduction of plasmid DNAs into Bacillus subtilis (5.7×10^3 CFU/μg of pSEVA3b67Rb), Bacillus megaterium (2.5×10^3 CFU/μg of pSPAsp-hp), Lactococcus lactis subsp. lactis (1.0×10^2 CFU/μg of pTRKH3-ermGFP), and Lactococcus lactis subsp. cremoris (2.2×10^2 CFU/μg of pMSP3535VA). Remarkably, even when the conventional chemical and physical methods failed to generate transformed cells in Bacillus sp. and Enterococcus faecalis, E. malodoratus and E. mundtii, our combined method showed a significant transformation efficiency (2.4×10^4, 4.5×10^2, 2×10^1, and 0.5×10^1 CFU/μg of plasmid DNA, respectively). Based on our results, we anticipate that our simple and efficient transformation method should prove useful for introducing DNA into various bacterial species without complicated optimization of the parameters affecting DNA entry into the cell. Copyright © 2017. Published by Elsevier B.V.

  20. Is there an efficient trap or collection method for sampling Anopheles darlingi and other malaria vectors that can describe the essential parameters affecting transmission dynamics as effectively as human landing catches? - A Review

    PubMed Central

    Lima, José Bento Pereira; Rosa-Freitas, Maria Goreti; Rodovalho, Cynara Melo; Santos, Fátima; Lourenço-de-Oliveira, Ricardo

    2014-01-01

    Distribution, abundance, feeding behaviour, host preference, parity status and human-biting and infection rates are among the medical entomological parameters evaluated when determining the vector capacity of mosquito species. To evaluate these parameters, mosquitoes must be collected using an appropriate method. Malaria is primarily transmitted by anthropophilic and synanthropic anophelines. Thus, collection methods must result in the identification of the anthropophilic species and efficiently evaluate the parameters involved in malaria transmission dynamics. Consequently, human landing catches would be the most appropriate method if not for their inherent risk. The choice of alternative anopheline collection methods, such as traps, must consider their effectiveness in reproducing the efficiency of human attraction. Collection methods lure mosquitoes by using a mixture of olfactory, visual and thermal cues. Here, we reviewed, classified and compared the efficiency of anopheline collection methods, with an emphasis on Neotropical anthropophilic species, especially Anopheles darlingi, in distinct malaria epidemiological conditions in Brazil. PMID:25185008

  1. Evaluation of Saltzman and phenoldisulfonic acid methods for determining NO/sub x/ in engine exhaust gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groth, R.H.; Calabro, D.S.

    1969-11-01

    The two methods normally used for the analysis of NOx are the Saltzman and the phenoldisulfonic acid techniques. This paper describes an evaluation of these wet chemical methods to determine their practical application to engine exhaust gas analysis. Parameters considered for the Saltzman method included bubbler collection efficiency, NO to NO2 conversion efficiency, the masking effect of other contaminants usually present in exhaust gases, and the time-temperature effect of these contaminants on stored developed solutions. Collection efficiency and the effects of contaminants were also considered for the phenoldisulfonic acid method. Test results indicated satisfactory collection and conversion efficiencies for the Saltzman method, but contaminants seriously affected the measurement accuracy, particularly if the developed solution was stored for a number of hours at room temperature before analysis. Storage at 32 °F minimized this effect. The standard procedure for the phenoldisulfonic acid method gave good results, but the process was found to be too time consuming for routine analysis and measured only total NOx. 3 references, 9 tables.

  2. The application of midbond basis sets in efficient and accurate ab initio calculations on electron-deficient systems

    NASA Astrophysics Data System (ADS)

    Choi, Chu Hwan

    2002-09-01

    Ab initio chemistry has shown great promise in reproducing experimental results and in its predictive power. The many complicated computational models and methods seem impenetrable to an inexperienced scientist, and the reliability of the results is not easily interpreted. The application of midbond orbitals is used to determine a general method for use in calculating weak intermolecular interactions, especially those involving electron-deficient systems. Using the criteria of consistency, flexibility, accuracy and efficiency we propose a supermolecular method of calculation using the full counterpoise (CP) method of Boys and Bernardi, coupled with Moller-Plesset (MP) perturbation theory as an efficient electron-correlative method. We also advocate the use of the highly efficient and reliable correlation-consistent polarized valence basis sets of Dunning. To these basis sets, we add a general set of midbond orbitals and demonstrate greatly enhanced efficiency in the calculation. The H2-H2 dimer is taken as a benchmark test case for our method, and details of the computation are elaborated. Our method reproduces with great accuracy the dissociation energies of other previous theoretical studies. The added efficiency of extending the basis sets with conventional means is compared with the performance of our midbond-extended basis sets. The improvement found with midbond functions is notably superior in every case tested. Finally, a novel application of midbond functions to the BH5 complex is presented. The system is an unusual van der Waals complex. The interaction potential curves are presented for several standard basis sets and midbond-enhanced basis sets, as well as for two popular, alternative correlation methods. We report that MP theory appears to be superior to coupled-cluster (CC) in speed, while it is more stable than B3LYP, a widely-used density functional theory (DFT). Application of our general method yields excellent results for the midbond basis sets. Again they prove superior to conventional extended basis sets. Based on these results, we recommend our general approach as a highly efficient, accurate method for calculating weakly interacting systems.

  3. Multiplicity-dependent and nonbinomial efficiency corrections for particle number cumulants

    NASA Astrophysics Data System (ADS)

    Bzdak, Adam; Holzmann, Romain; Koch, Volker

    2016-12-01

    In this article we extend previous work on efficiency corrections for cumulant measurements [Bzdak and Koch, Phys. Rev. C 86, 044904 (2012), 10.1103/PhysRevC.86.044904; Phys. Rev. C 91, 027901 (2015), 10.1103/PhysRevC.91.027901]. We will discuss the limitations of the methods presented in these papers. Specifically, we will consider multiplicity-dependent efficiencies as well as nonbinomial efficiency distributions. We will discuss the simplest and most straightforward methods to implement those corrections.
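
    For orientation, the sketch below (Python with NumPy; the Poisson source, sample size and efficiency of 0.7 are illustrative assumptions) shows the standard constant-efficiency binomial correction that this work extends: measured factorial moments are divided by powers of the efficiency to recover the true mean and variance. Multiplicity-dependent or nonbinomial efficiencies require the generalized treatment described in the article.

      import numpy as np

      def corrected_mean_and_variance(n_measured, eps):
          """Undo a constant binomial efficiency eps on the first two cumulants."""
          n = np.asarray(n_measured, dtype=float)
          f1 = n.mean()                    # measured <n>
          f2 = (n * (n - 1.0)).mean()      # measured <n(n-1)>
          F1 = f1 / eps                    # true <N>
          F2 = f2 / eps**2                 # true <N(N-1)>
          return F1, F2 + F1 - F1**2       # true mean and variance

      rng = np.random.default_rng(0)
      true_n = rng.poisson(20.0, size=100_000)
      measured = rng.binomial(true_n, 0.7)               # binomial detection losses
      print(corrected_mean_and_variance(measured, 0.7))  # both close to 20 for a Poisson source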

  4. Design of spur gears for improved efficiency

    NASA Technical Reports Server (NTRS)

    Anderson, N. E.; Loewenthal, S. H.

    1981-01-01

    A method to calculate spur gear system power loss for a wide range of gear geometries and operating conditions is used to determine design requirements for an efficient gearset. The effects of spur gear size, pitch, ratio, pitch-line-velocity and load on efficiency are shown. A design example is given to illustrate how the method is to be applied. In general, peak efficiencies were found to be greater for larger diameter and fine pitched gears and tare (no-load) losses were found to be significant.

  5. Calculated Coupling Efficiency Between an Elliptical-Core Optical Fiber and a Silicon Oxynitride Rib Waveguide [Corrected Copy]

    NASA Technical Reports Server (NTRS)

    Tuma, Margaret L.; Beheim, Glenn

    1995-01-01

    The effective-index method and Marcatili's technique were utilized independently to calculate the electric field profile of a rib channel waveguide. Using the electric field profile calculated from each method, the theoretical coupling efficiency between a single-mode optical fiber and a rib waveguide was calculated using the overlap integral. Perfect alignment was assumed and the coupling efficiency calculated. The coupling efficiency calculation was then repeated for a range of transverse offsets.
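
    As a rough illustration of the overlap-integral step described above, the sketch here (Python with NumPy) evaluates the power coupling efficiency between two sampled mode fields; the elliptical Gaussian profiles, spot sizes and the 1-µm transverse offset are illustrative stand-ins, not the fiber or rib-waveguide modes of the paper.

      import numpy as np

      def coupling_efficiency(e1, e2, dx, dy):
          """|<E1|E2>|^2 / (<E1|E1> <E2|E2>) on a common sampling grid."""
          overlap = np.sum(e1 * np.conj(e2)) * dx * dy
          p1 = np.sum(np.abs(e1) ** 2) * dx * dy
          p2 = np.sum(np.abs(e2) ** 2) * dx * dy
          return np.abs(overlap) ** 2 / (p1 * p2)

      x = np.linspace(-10e-6, 10e-6, 401)
      y = np.linspace(-10e-6, 10e-6, 401)
      X, Y = np.meshgrid(x, y)
      dx, dy = x[1] - x[0], y[1] - y[0]
      offset = 1e-6                                                       # transverse misalignment
      e_fiber = np.exp(-(((X - offset) / 3e-6) ** 2 + (Y / 2e-6) ** 2))   # elliptical fiber mode
      e_guide = np.exp(-((X / 2.5e-6) ** 2 + (Y / 1.5e-6) ** 2))          # rib-guide mode
      print(coupling_efficiency(e_fiber, e_guide, dx, dy))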

  6. Multiplicity-dependent and nonbinomial efficiency corrections for particle number cumulants

    DOE PAGES

    Bzdak, Adam; Holzmann, Romain; Koch, Volker

    2016-12-19

    Here, we extend previous work on efficiency corrections for cumulant measurements [Bzdak and Koch, Phys. Rev. C 86, 044904 (2012), 10.1103/PhysRevC.86.044904; Phys. Rev. C 91, 027901 (2015), 10.1103/PhysRevC.91.027901]. We will then discuss the limitations of the methods presented in these papers. Specifically, we will consider multiplicity-dependent efficiencies as well as nonbinomial efficiency distributions. We will discuss the simplest and most straightforward methods to implement those corrections.

  7. Experimental study of the influence of the counter and scintillator on the universal curves in the cross-efficiency method in LSC.

    PubMed

    Cassette, P; Tartès, I

    2014-05-01

    The cross-efficiency method in LSC is one of the approaches proposed for the extension of the Système International de Référence (SIR) to radionuclides emitting no gamma radiation. This method is based on a so-called "universal cross-efficiency curve", establishing a relationship between the detection efficiency of the radionuclide to be measured and the detection efficiency of a suitable tracer. This paper reports a study at LNHB on the influence of the scintillator and of the LS counter on the cross-efficiency curves. This was done by measuring the cross-efficiency curves obtained for (63)Ni and (55)Fe vs. (3)H, using three different commercial LS counters (Guardian 1414, Tricarb 3170 and Quantulus 1220), three different liquid scintillator cocktails (Ultima Gold, Hionic Fluor and PicoFluor 15 from Perkin Elmer(®)), and for chemical and colour-quenched sources. This study shows that these cross-efficiency curves are dependent on the scintillator, on the counter used and on the nature of the quenching phenomenon, and thus cannot definitively be considered as "universal". © 2013 Published by Elsevier Ltd.

  8. Agarose droplet microfluidics for highly parallel and efficient single molecule emulsion PCR.

    PubMed

    Leng, Xuefei; Zhang, Wenhua; Wang, Chunming; Cui, Liang; Yang, Chaoyong James

    2010-11-07

    An agarose droplet method was developed for highly parallel and efficient single molecule emulsion PCR. The method capitalizes on the unique thermoresponsive sol-gel switching property of agarose for highly efficient DNA amplification and amplicon trapping. Uniform agarose solution droplets generated via a microfluidic chip serve as robust and inert nanolitre PCR reactors for single copy DNA molecule amplification. After PCR, agarose droplets are gelated to form agarose beads, trapping all amplicons in each reactor to maintain the monoclonality of each droplet. This method does not require co-encapsulation of primer-labeled microbeads, allows high throughput generation of uniform droplets and enables high PCR efficiency, making it a promising platform for many single copy genetic studies.

  9. Efficient kinetic Monte Carlo method for reaction-diffusion problems with spatially varying annihilation rates

    NASA Astrophysics Data System (ADS)

    Schwarz, Karsten; Rieger, Heiko

    2013-03-01

    We present an efficient Monte Carlo method to simulate reaction-diffusion processes with spatially varying particle annihilation or transformation rates as it occurs for instance in the context of motor-driven intracellular transport. Like Green's function reaction dynamics and first-passage time methods, our algorithm avoids small diffusive hops by propagating sufficiently distant particles in large hops to the boundaries of protective domains. Since for spatially varying annihilation or transformation rates the single particle diffusion propagator is not known analytically, we present an algorithm that generates efficiently either particle displacements or annihilations with the correct statistics, as we prove rigorously. The numerical efficiency of the algorithm is demonstrated with an illustrative example.
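
    For orientation, the sketch below (Python with NumPy) shows the naive small-hop baseline that such algorithms are designed to outperform: a single random walker whose spatially varying annihilation rate k(x) is handled by simple thinning at every time step. It is not the protective-domain algorithm of the paper; the rate profile, diffusivity and time step are illustrative choices.

      import numpy as np

      def survival_time(k, x0=0.0, D=1.0, dt=1e-4, rng=None):
          """Time until annihilation for one diffusing particle with rate k(x)."""
          rng = rng or np.random.default_rng()
          x, t = x0, 0.0
          while True:
              if rng.random() < k(x) * dt:               # annihilation event this step
                  return t
              x += np.sqrt(2.0 * D * dt) * rng.normal()  # small diffusive hop
              t += dt

      k = lambda x: 0.5 + 2.0 * np.exp(-x**2)            # position-dependent annihilation rate
      times = [survival_time(k) for _ in range(200)]
      print(np.mean(times))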

  10. Discussion on Boiler Efficiency Correction Method with Low Temperature Economizer-Air Heater System

    NASA Astrophysics Data System (ADS)

    Ke, Liu; Xing-sen, Yang; Fan-jun, Hou; Zhi-hong, Hu

    2017-05-01

    This paper points out that it is wrong to take the outlet flue gas temperature of the low temperature economizer as the exhaust gas temperature in boiler efficiency calculations based on GB10184-1988. Furthermore, this paper proposes a new correction method, which decomposes the low temperature economizer-air heater system into two hypothetical parts, an air preheater and a pre-condensed water heater, and takes the equivalent outlet gas temperature of the air preheater as the exhaust gas temperature in the boiler efficiency calculation. This method makes the boiler efficiency calculation more concise, with no air heater correction, and it provides a useful reference for dealing with this kind of problem correctly.

  11. Estimating and testing interactions when explanatory variables are subject to non-classical measurement error.

    PubMed

    Murad, Havi; Kipnis, Victor; Freedman, Laurence S

    2016-10-01

    Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.

  12. An Efficient Model-Based Image Understanding Method for an Autonomous Vehicle.

    DTIC Science & Technology

    1997-09-01

    The problem discussed in this dissertation is the development of an efficient method for visual navigation of autonomous vehicles. The approach is to... autonomous vehicles. Thus the new method is implemented as a component of the image-understanding system in the autonomous mobile robot Yamabico-11 at

  13. Efficient model checking of network authentication protocol based on SPIN

    NASA Astrophysics Data System (ADS)

    Tan, Zhi-hua; Zhang, Da-fang; Miao, Li; Zhao, Dan

    2013-03-01

    Model checking is a very useful technique for verifying network authentication protocols. In order to improve the efficiency of modeling and verification of such protocols with model checking technology, this paper first proposes a universal formalization description method for the protocol. Combined with the model checker SPIN, the method can conveniently verify the properties of the protocol. By using some simplified modeling strategies, this paper models several protocols efficiently and reduces the state space of the model. Compared with the previous literature, this paper achieves a higher degree of automation and better verification efficiency. Finally, based on the method described in the paper, we model and verify the Privacy and Key Management (PKM) authentication protocol. The experimental results show that the model checking method is effective and useful for other authentication protocols.

  14. Parallel scalability and efficiency of vortex particle method for aeroelasticity analysis of bluff bodies

    NASA Astrophysics Data System (ADS)

    Tolba, Khaled Ibrahim; Morgenthal, Guido

    2018-01-01

    This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied for the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method being applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available for a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming type computer.

  15. Toward textbook multigrid efficiency for fully implicit resistive magnetohydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Mark F.; Samtaney, Ravi, E-mail: samtaney@pppl.go; Brandt, Achi

    2010-09-01

    Multigrid methods can solve some classes of elliptic and parabolic equations to accuracy below the truncation error with a work-cost equivalent to a few residual calculations - so-called 'textbook' multigrid efficiency. We investigate methods to solve the system of equations that arise in time dependent magnetohydrodynamics (MHD) simulations with textbook multigrid efficiency. We apply multigrid techniques such as geometric interpolation, full approximate storage, Gauss-Seidel smoothers, and defect correction for fully implicit, nonlinear, second-order finite volume discretizations of MHD. We apply these methods to a standard resistive MHD benchmark problem, the GEM reconnection problem, and add a strong magnetic guide field, which is a critical characteristic of magnetically confined fusion plasmas. We show that our multigrid methods can achieve near textbook efficiency on fully implicit resistive MHD simulations.

  16. Toward textbook multigrid efficiency for fully implicit resistive magnetohydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Mark F.; Samtaney, Ravi; Brandt, Achi

    2010-09-01

    Multigrid methods can solve some classes of elliptic and parabolic equations to accuracy below the truncation error with a work-cost equivalent to a few residual calculations – so-called “textbook” multigrid efficiency. We investigate methods to solve the system of equations that arise in time dependent magnetohydrodynamics (MHD) simulations with textbook multigrid efficiency. We apply multigrid techniques such as geometric interpolation, full approximate storage, Gauss–Seidel smoothers, and defect correction for fully implicit, nonlinear, second-order finite volume discretizations of MHD. We apply these methods to a standard resistive MHD benchmark problem, the GEM reconnection problem, and add a strong magnetic guide field, which is a critical characteristic of magnetically confined fusion plasmas. We show that our multigrid methods can achieve near textbook efficiency on fully implicit resistive MHD simulations.

  17. Toward textbook multigrid efficiency for fully implicit resistive magnetohydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Mark F.; Samtaney, Ravi; Brandt, Achi

    2013-12-14

    Multigrid methods can solve some classes of elliptic and parabolic equations to accuracy below the truncation error with a work-cost equivalent to a few residual calculations – so-called “textbook” multigrid efficiency. We investigate methods to solve the system of equations that arise in time dependent magnetohydrodynamics (MHD) simulations with textbook multigrid efficiency. We apply multigrid techniques such as geometric interpolation, full approximate storage, Gauss-Seidel smoothers, and defect correction for fully implicit, nonlinear, second-order finite volume discretizations of MHD. We apply these methods to a standard resistive MHD benchmark problem, the GEM reconnection problem, and add a strong magnetic guide field, which is a critical characteristic of magnetically confined fusion plasmas. We show that our multigrid methods can achieve near textbook efficiency on fully implicit resistive MHD simulations.

  18. 10 CFR 431.107 - Uniform test method for the measurement of energy efficiency of commercial heat pump water...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Uniform test method for the measurement of energy efficiency of commercial heat pump water heaters. [Reserved] 431.107 Section 431.107 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Commercial Water Heaters, Hot Water Supply Boilers...

  19. 10 CFR 431.107 - Uniform test method for the measurement of energy efficiency of commercial heat pump water...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 3 2012-01-01 2012-01-01 false Uniform test method for the measurement of energy efficiency of commercial heat pump water heaters. [Reserved] 431.107 Section 431.107 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Commercial Water Heaters, Hot Water Supply Boilers...

  20. Technical efficiency and resources allocation in university hospitals in Tehran, 2009-2012

    PubMed Central

    Rezapour, Aziz; Ebadifard Azar, Farbod; Yousef Zadeh, Negar; Roumiani, YarAllah; Bagheri Faradonbeh, Saeed

    2015-01-01

    Background: Assessment of hospitals’ performance in achieving their goals is a basic necessity. Measuring the efficiency of hospitals in order to boost resource productivity in healthcare organizations is extremely important. The aim of this study was to measure technical efficiency and determine the status of resource allocation in selected university hospitals in Tehran, Iran. Methods: This study was conducted in 2012; the research population consisted of all hospitals affiliated with the Iran and Tehran universities of medical sciences. Required data, such as human and capital resources information as well as production variables (hospital outputs), were collected from the data centers of the studied hospitals. Data were analyzed using the data envelopment analysis (DEA) method with DEAP 2.1 software and the stochastic frontier analysis (SFA) method with Frontier 4.1 software. Results: According to the DEA method, the average technical, management (pure), and scale efficiency of the studied hospitals during the study period were 0.87, 0.971, and 0.907, respectively. None of the efficiency measures followed a fixed trend over the study period; all were constantly changing. In the stochastic frontier production function analysis, the technical efficiency of the studied hospitals during the study period was estimated to be 0.389. Conclusion: This study identified the hospitals with the highest and lowest efficiency. Reference hospitals (more efficient peers) were indicated for the inefficient centers. According to the findings, the hospitals that do not operate efficiently have the capacity to improve technical efficiency by removing excess inputs without changing the level of outputs. Moreover, through optimal allocation of resources, most of the studied hospitals could achieve very important economies of scale. PMID:26793657
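
    For readers unfamiliar with DEA, the sketch below (Python with scipy.optimize.linprog) solves the textbook input-oriented, constant-returns-to-scale envelopment model for one decision-making unit; the two-input, one-output hospital numbers are made up for illustration, and this is not the DEAP 2.1 or Frontier 4.1 computation used in the study.

      import numpy as np
      from scipy.optimize import linprog

      def dea_ccr_input(inputs, outputs, unit):
          """Technical efficiency (theta) of the DMU with row index `unit`."""
          X, Y = np.asarray(inputs, float), np.asarray(outputs, float)
          n = X.shape[0]
          c = np.zeros(n + 1)
          c[0] = 1.0                                     # minimise theta
          A_ub, b_ub = [], []
          for i in range(X.shape[1]):                    # sum_j lam_j * x_ij <= theta * x_i0
              A_ub.append(np.concatenate(([-X[unit, i]], X[:, i])))
              b_ub.append(0.0)
          for r in range(Y.shape[1]):                    # sum_j lam_j * y_rj >= y_r0
              A_ub.append(np.concatenate(([0.0], -Y[:, r])))
              b_ub.append(-Y[unit, r])
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
          return res.x[0]

      beds_staff = [[100, 250], [120, 300], [80, 260], [150, 420]]   # illustrative inputs
      discharges = [[5200], [5600], [4900], [5400]]                  # illustrative output
      for h in range(4):
          print(h, round(dea_ccr_input(beds_staff, discharges, h), 3))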

  1. Numerical solution of the Saint-Venant equations by an efficient hybrid finite-volume/finite-difference method

    NASA Astrophysics Data System (ADS)

    Lai, Wencong; Khan, Abdul A.

    2018-04-01

    A computationally efficient hybrid finite-volume/finite-difference method is proposed for the numerical solution of the Saint-Venant equations in one-dimensional open channel flows. The method adopts a mass-conservative finite volume discretization for the continuity equation and a semi-implicit finite difference discretization for the dynamic-wave momentum equation. The spatial discretization of the convective flux term in the momentum equation employs an upwind scheme and the water-surface gradient term is discretized using three different schemes. The performance of the numerical method is investigated in terms of efficiency and accuracy using various examples, including steady flow over a bump, dam-break flow over wet and dry downstream channels, wetting and drying in a parabolic bowl, and dam-break floods in laboratory physical models. Numerical solutions from the hybrid method are compared with solutions from a finite volume method along with analytic solutions or experimental measurements. Comparisons demonstrate that the hybrid method is efficient, accurate, and robust in modeling various flow scenarios, including subcritical, supercritical, and transcritical flows. In this method, the QUICK scheme for the surface slope discretization is more accurate and less diffusive than the center difference and the weighted average schemes.

  2. Costs and Efficiency of Online and Offline Recruitment Methods: A Web-Based Cohort Study.

    PubMed

    Christensen, Tina; Riis, Anders H; Hatch, Elizabeth E; Wise, Lauren A; Nielsen, Marie G; Rothman, Kenneth J; Toft Sørensen, Henrik; Mikkelsen, Ellen M

    2017-03-01

    The Internet is widely used to conduct research studies on health issues. Many different methods are used to recruit participants for such studies, but little is known about how various recruitment methods compare in terms of efficiency and costs. The aim of our study was to compare online and offline recruitment methods for Internet-based studies in terms of efficiency (number of recruited participants) and costs per participant. We employed several online and offline recruitment methods to enroll 18- to 45-year-old women in an Internet-based Danish prospective cohort study on fertility. Offline methods included press releases, posters, and flyers. Online methods comprised advertisements placed on five different websites, including Facebook and Netdoktor.dk. We defined seven categories of mutually exclusive recruitment methods and used electronic tracking via unique Uniform Resource Locator (URL) and self-reported data to identify the recruitment method for each participant. For each method, we calculated the average cost per participant and efficiency, that is, the total number of recruited participants. We recruited 8252 study participants. Of these, 534 were excluded as they could not be assigned to a specific recruitment method. The final study population included 7724 participants, of whom 803 (10.4%) were recruited by offline methods, 3985 (51.6%) by online methods, 2382 (30.8%) by online methods not initiated by us, and 554 (7.2%) by other methods. Overall, the average cost per participant was €6.22 for online methods initiated by us versus €9.06 for offline methods. Costs per participant ranged from €2.74 to €105.53 for online methods and from €0 to €67.50 for offline methods. Lowest average costs per participant were for those recruited from Netdoktor.dk (€2.99) and from Facebook (€3.44). In our Internet-based cohort study, online recruitment methods were superior to offline methods in terms of efficiency (total number of participants enrolled). The average cost per recruited participant was also lower for online than for offline methods, although costs varied greatly among both online and offline recruitment methods. We observed a decrease in the efficiency of some online recruitment methods over time, suggesting that it may be optimal to adopt multiple online methods. ©Tina Christensen, Anders H Riis, Elizabeth E Hatch, Lauren A Wise, Marie G Nielsen, Kenneth J Rothman, Henrik Toft Sørensen, Ellen M Mikkelsen. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 01.03.2017.

  3. Introduction of the trapezoidal thermodynamic technique method for measuring and mapping the efficiency of waste-to-energy plants: A potential replacement to the R1 formula.

    PubMed

    Vakalis, Stergios; Moustakas, Konstantinos; Loizidou, Maria

    2018-06-01

    Waste-to-energy plants have the peculiarity of being considered both as energy production and as waste destruction facilities, and this distinction is important for legislative reasons. The efficiency of waste-to-energy plants must be assessed objectively and consistently, independently of whether the focus is the production of energy, the destruction of waste or the recovery/upgrade of materials. With the introduction of polygeneration technologies, like gasification, the production of energy and the recovery/upgrade of materials are interconnected. The existing methodology for assessing the efficiency of waste-to-energy plants is the R1 formula, which does not take into consideration the full spectrum of the operations that take place in waste-to-energy plants. This study introduces a novel methodology for assessing the efficiency of waste-to-energy plants, defined as the 3T method, which stands for 'trapezoidal thermodynamic technique'. The 3T method is an integrated approach for assessing the efficiency of waste-to-energy plants, which takes into consideration not only the production of energy but also the quality of the products. The value returned by the 3T method can be placed in a ternary diagram, and the global efficiency map of waste-to-energy plants can be produced. The application of the 3T method showed that waste-to-energy plants with high combined heat and power efficiency and high recovery of materials are favoured; these outcomes are in accordance with the cascade principle and with the high cogeneration standards set by the EU Energy Efficiency Directive.

  4. An efficient graph theory based method to identify every minimal reaction set in a metabolic network

    PubMed Central

    2014-01-01

    Background Development of cells with minimal metabolic functionality is gaining importance due to their efficiency in producing chemicals and fuels. Existing computational methods to identify minimal reaction sets in metabolic networks are computationally expensive. Further, they identify only one of the several possible minimal reaction sets. Results In this paper, we propose an efficient graph theory based recursive optimization approach to identify all minimal reaction sets. Graph theoretical insights offer systematic methods to not only reduce the number of variables in math programming and increase its computational efficiency, but also provide efficient ways to find multiple optimal solutions. The efficacy of the proposed approach is demonstrated using case studies from Escherichia coli and Saccharomyces cerevisiae. In case study 1, the proposed method identified three minimal reaction sets each containing 38 reactions in the Escherichia coli central metabolic network with 77 reactions. Analysis of these three minimal reaction sets revealed that one of them is more suitable for developing a minimal metabolism cell compared to the other two due to a practically achievable internal flux distribution. In case study 2, the proposed method identified 256 minimal reaction sets from the Saccharomyces cerevisiae genome scale metabolic network with 620 reactions. The proposed method required only 4.5 hours to identify all the 256 minimal reaction sets and has shown a significant reduction (approximately 80%) in the solution time when compared to the existing methods for finding minimal reaction sets. Conclusions Identification of all minimal reaction sets in metabolic networks is essential since different minimal reaction sets have different properties that affect bioprocess development. The proposed method correctly identified all minimal reaction sets in both case studies. The proposed method is computationally efficient compared to other methods for finding minimal reaction sets and is useful for genome-scale metabolic networks. PMID:24594118

  5. A sub-space greedy search method for efficient Bayesian Network inference.

    PubMed

    Zhang, Qing; Cao, Yong; Li, Yong; Zhu, Yanming; Sun, Samuel S M; Guo, Dianjing

    2011-09-01

    Bayesian network (BN) has been successfully used to infer the regulatory relationships of genes from microarray datasets. However, one major limitation of the BN approach is its computational cost, because the calculation time grows more than exponentially with the dimension of the dataset. In this paper, we propose a sub-space greedy search method for efficient Bayesian network inference. In particular, this method limits the greedy search space by only selecting gene pairs with higher partial correlation coefficients. Using both synthetic and real data, we demonstrate that the proposed method achieved results comparable to the standard greedy search method yet saved ∼50% of the computational time. We believe that the sub-space search method can be widely used for efficient BN inference in systems biology. Copyright © 2011 Elsevier Ltd. All rights reserved.
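
    A minimal sketch of the pre-filtering idea (Python with NumPy) is given below: partial correlations are read off the inverse covariance matrix of the expression data and only pairs above a threshold are kept as candidate edges for the subsequent greedy structure search, which is not shown. The threshold and the synthetic data are illustrative assumptions, not values from the paper.

      import numpy as np

      def candidate_edges(expr, threshold=0.3):
          """expr: samples x genes matrix; return (i, j) pairs passing the filter."""
          prec = np.linalg.pinv(np.cov(expr, rowvar=False))
          d = np.sqrt(np.diag(prec))
          partial = -prec / np.outer(d, d)                # partial correlation matrix
          n = partial.shape[0]
          return [(i, j) for i in range(n) for j in range(i + 1, n)
                  if abs(partial[i, j]) >= threshold]

      rng = np.random.default_rng(1)
      expr = rng.normal(size=(200, 10))
      expr[:, 1] += 0.8 * expr[:, 0]                      # plant one strong dependency
      print(candidate_edges(expr))                        # expected to return [(0, 1)]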

  6. Calibration of а single hexagonal NaI(Tl) detector using a new numerical method based on the efficiency transfer method

    NASA Astrophysics Data System (ADS)

    Abbas, Mahmoud I.; Badawi, M. S.; Ruskov, I. N.; El-Khatib, A. M.; Grozdanov, D. N.; Thabet, A. A.; Kopatch, Yu. N.; Gouda, M. M.; Skoy, V. R.

    2015-01-01

    Gamma-ray detector systems are important instruments in a broad range of science, and new setups are continually being developed. The most recent step in the evolution of detectors for nuclear spectroscopy is the construction of large arrays of detectors of different forms (for example, conical, pentagonal, hexagonal, etc.) and sizes, where the performance and the efficiency can be increased. In this work, a new direct numerical method (NAM), in an integral form and based on the efficiency transfer (ET) method, is used to calculate the full-energy peak efficiency of a single hexagonal NaI(Tl) detector. The algorithms and the calculations of the effective solid angle ratios for a point (isotropically radiating) gamma source situated coaxially at different distances from the detector front-end surface, taking into account the attenuation of the gamma-rays in the detector's material, end-cap and the other materials in-between the gamma-source and the detector, are considered as the core of this (ET) method. The full-energy peak efficiency values calculated by the (NAM) are found to be in good agreement with the measured experimental data.

  7. Leveraging Gibbs Ensemble Molecular Dynamics and Hybrid Monte Carlo/Molecular Dynamics for Efficient Study of Phase Equilibria.

    PubMed

    Gartner, Thomas E; Epps, Thomas H; Jayaraman, Arthi

    2016-11-08

    We describe an extension of the Gibbs ensemble molecular dynamics (GEMD) method for studying phase equilibria. Our modifications to GEMD allow for direct control over particle transfer between phases and improve the method's numerical stability. Additionally, we found that the modified GEMD approach had advantages in computational efficiency in comparison to a hybrid Monte Carlo (MC)/MD Gibbs ensemble scheme in the context of the single component Lennard-Jones fluid. We note that this increase in computational efficiency does not compromise the close agreement of phase equilibrium results between the two methods. However, numerical instabilities in the GEMD scheme hamper GEMD's use near the critical point. We propose that the computationally efficient GEMD simulations can be used to map out the majority of the phase window, with hybrid MC/MD used as a follow up for conditions under which GEMD may be unstable (e.g., near-critical behavior). In this manner, we can capitalize on the contrasting strengths of these two methods to enable the efficient study of phase equilibria for systems that present challenges for a purely stochastic GEMC method, such as dense or low temperature systems, and/or those with complex molecular topologies.

  8. An accurate and efficient method for piezoelectric coated functional devices based on the two-dimensional Green’s function for a normal line force and line charge

    NASA Astrophysics Data System (ADS)

    Hou, Peng-Fei; Zhang, Yang

    2017-09-01

    Because most piezoelectric functional devices, including sensors, actuators and energy harvesters, are in the form of a piezoelectric coated structure, it is valuable to present an accurate and efficient method for obtaining the electro-mechanical coupling fields of this coated structure under mechanical and electrical loads. With this aim, the two-dimensional Green’s function for a normal line force and line charge on the surface of coated structure, which is a combination of an orthotropic piezoelectric coating and orthotropic elastic substrate, is presented in the form of elementary functions based on the general solution method. The corresponding electro-mechanical coupling fields of this coated structure under arbitrary mechanical and electrical loads can then be obtained by the superposition principle and Gauss integration. Numerical results show that the presented method has high computational precision, efficiency and stability. It can be used to design the best coating thickness in functional devices, improve the sensitivity of sensors, and improve the efficiency of actuators and energy harvesters. This method could be an efficient tool for engineers in engineering applications.
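
    The superposition step described above can be illustrated with a short sketch (Python with NumPy): the response to a distributed surface load is assembled by Gauss-Legendre quadrature of a line-load Green's function. The logarithmic kernel below is only a placeholder; the paper derives the actual Green's function for the orthotropic piezoelectric coating on an orthotropic substrate.

      import numpy as np

      def greens_placeholder(x_field, x_source):
          return -np.log(np.abs(x_field - x_source) + 1e-9)    # illustrative kernel only

      def response_to_distributed_load(x_field, load, a, b, n_gauss=16):
          """Response at x_field to a surface load q(x) applied on [a, b]."""
          xi, wi = np.polynomial.legendre.leggauss(n_gauss)
          x = 0.5 * (b - a) * xi + 0.5 * (a + b)               # map [-1, 1] -> [a, b]
          w = 0.5 * (b - a) * wi
          return sum(wk * load(xk) * greens_placeholder(x_field, xk)
                     for xk, wk in zip(x, w))

      uniform_pressure = lambda x: 1.0
      print(response_to_distributed_load(2.0, uniform_pressure, a=-1.0, b=1.0))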

  9. Fast Reduction Method in Dominance-Based Information Systems

    NASA Astrophysics Data System (ADS)

    Li, Yan; Zhou, Qinghua; Wen, Yongchuan

    2018-01-01

    In real world applications, there are often data with continuous values or preference-ordered values. Rough sets based on dominance relations can effectively deal with these kinds of data. Attribute reduction can be done in the framework of the dominance-relation based approach to better extract decision rules. However, the computational cost of computing the dominance classes greatly affects the efficiency of attribute reduction and rule extraction. This paper presents an efficient method of computing dominance classes and compares it with the traditional method as the numbers of attributes and samples increase. Experiments on UCI data sets show that the proposed algorithm obviously improves on the efficiency of the traditional method, especially for large-scale data.
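
    For context, the sketch below (Python with NumPy) shows the straightforward way of computing dominating sets in a dominance-based table, which is exactly the step whose cost the paper attacks; the faster algorithm itself is not reproduced here, and the small preference-ordered table is invented for illustration.

      import numpy as np

      def dominating_sets(table):
          """table: objects x criteria, larger values preferred.
          Returns D_plus[i] = indices of objects dominating object i on every criterion."""
          t = np.asarray(table, dtype=float)
          return [set(np.flatnonzero((t >= t[i]).all(axis=1))) for i in range(t.shape[0])]

      table = [[3, 2, 5],
               [3, 3, 5],
               [1, 2, 4]]
      for i, d in enumerate(dominating_sets(table)):
          print(i, sorted(d))          # e.g. object 2 is dominated by objects 0, 1 and itself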

  10. General method for extracting the quantum efficiency of dispersive qubit readout in circuit QED

    NASA Astrophysics Data System (ADS)

    Bultink, C. C.; Tarasinski, B.; Haandbæk, N.; Poletto, S.; Haider, N.; Michalak, D. J.; Bruno, A.; DiCarlo, L.

    2018-02-01

    We present and demonstrate a general three-step method for extracting the quantum efficiency of dispersive qubit readout in circuit QED. We use active depletion of post-measurement photons and optimal integration weight functions on two quadratures to maximize the signal-to-noise ratio of the non-steady-state homodyne measurement. We derive analytically and demonstrate experimentally that the method robustly extracts the quantum efficiency for arbitrary readout conditions in the linear regime. We use the proven method to optimally bias a Josephson traveling-wave parametric amplifier and to quantify different noise contributions in the readout amplification chain.

  11. The efficiency and effectiveness of utilizing diagrams in interviews: an assessment of participatory diagramming and graphic elicitation.

    PubMed

    Umoquit, Muriah J; Dobrow, Mark J; Lemieux-Charles, Louise; Ritvo, Paul G; Urbach, David R; Wodchis, Walter P

    2008-08-08

    This paper focuses on measuring the efficiency and effectiveness of two diagramming methods employed in key informant interviews with clinicians and health care administrators. The two methods are 'participatory diagramming', where the respondent creates a diagram that assists in their communication of answers, and 'graphic elicitation', where a researcher-prepared diagram is used to stimulate data collection. These two diagramming methods were applied in key informant interviews and their value in efficiently and effectively gathering data was assessed based on quantitative measures and qualitative observations. Assessment of the two diagramming methods suggests that participatory diagramming is an efficient method for collecting data in graphic form, but may not generate the depth of verbal response that many qualitative researchers seek. In contrast, graphic elicitation was more intuitive, better understood and preferred by most respondents, and often provided more contemplative verbal responses, however this was achieved at the expense of more interview time. Diagramming methods are important for eliciting interview data that are often difficult to obtain through traditional verbal exchanges. Subject to the methodological limitations of the study, our findings suggest that while participatory diagramming and graphic elicitation have specific strengths and weaknesses, their combined use can provide complementary information that would not likely occur with the application of only one diagramming method. The methodological insights gained by examining the efficiency and effectiveness of these diagramming methods in our study should be helpful to other researchers considering their incorporation into qualitative research designs.

  12. A Sector Capacity Assessment Method Based on Airspace Utilization Efficiency

    NASA Astrophysics Data System (ADS)

    Zhang, Jianping; Zhang, Ping; Li, Zhen; Zou, Xiang

    2018-02-01

    Sector capacity is one of the core factors affecting the safety and the efficiency of the air traffic system. Most previous sector capacity assessment methods considered only the air traffic controller’s (ATCO’s) workload. These methods are limited in that they are concerned only with safety, and they are also not accurate enough. In this paper, we employ the integrated quantitative index system proposed in one of our previous publications. We use principal component analysis (PCA) to find the principal indicators among the indicators so as to calculate the airspace utilization efficiency. In addition, we use a series of fitting functions to test and define the correlation between the density of air traffic flow and the airspace utilization efficiency. The sector capacity is then determined as the value of the density of air traffic flow corresponding to the maximum airspace utilization efficiency. We also use the same series of fitting functions to test the correlation between the density of air traffic flow and the ATCOs’ workload. We examine our method with a large amount of empirical operating data from the Chengdu Controlling Center and obtain a reliable sector capacity value. Experiment results also show the superiority of our method against those that only consider the ATCO’s workload, in terms of a better correlation between the airspace utilization efficiency and the density of air traffic flow.
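
    A toy sketch of the two analysis steps (Python with NumPy) is given below: several utilization indicators are collapsed into one efficiency score with PCA, a simple curve is fitted to efficiency versus traffic-flow density, and the capacity is read off as the density at the fitted peak. The quadratic fit, the synthetic indicators and the peak at a density of 40 are illustrative assumptions, not results from the Chengdu data.

      import numpy as np

      def pca_score(indicators):
          """First principal component score of a samples x indicators matrix."""
          z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
          _, _, vt = np.linalg.svd(z, full_matrices=False)
          return z @ vt[0]

      rng = np.random.default_rng(3)
      density = np.linspace(5, 60, 120)                        # traffic-flow density
      true_eff = 1.0 - (density - 40.0) ** 2 / 1600.0          # peaks at density 40
      indicators = np.column_stack([true_eff + rng.normal(0, 0.05, density.size)
                                    for _ in range(4)])
      eff = pca_score(indicators)
      a, b, c = np.polyfit(density, eff, 2)
      print("estimated sector capacity:", -b / (2 * a))        # vertex of the fitted parabola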

  13. Calculated coupling efficiency between an elliptical-core optical fiber and an optical waveguide over temperature

    NASA Technical Reports Server (NTRS)

    Tuma, Margaret L.; Weisshaar, Andreas; Li, Jian; Beheim, Glenn

    1995-01-01

    To determine the feasibility of coupling the output of a single-mode optical fiber into a single-mode rib waveguide in a temperature varying environment, a theoretical calculation of the coupling efficiency between the two was investigated. Due to the complex geometry of the rib guide, there is no analytical solution to the wave equation for the guided modes, thus, approximation and/or numerical techniques must be utilized to determine the field patterns of the guide. In this study, three solution methods were used for both the fiber and guide fields; the effective-index method (EIM), Marcatili's approximation, and a Fourier method. These methods were utilized independently to calculate the electric field profile of each component at two temperatures, 20 C and 300 C, representing a nominal and high temperature. Using the electric field profile calculated from each method, the theoretical coupling efficiency between an elliptical-core optical fiber and a rib waveguide was calculated using the overlap integral and the results were compared. It was determined that a high coupling efficiency can be achieved when the two components are aligned. The coupling efficiency was more sensitive to alignment offsets in the y direction than the x, due to the elliptical modal field profile of both components. Changes in the coupling efficiency over temperature were found to be minimal.

  14. Method to monitor HC-SCR catalyst NOx reduction performance for lean exhaust applications

    DOEpatents

    Viola, Michael B [Macomb Township, MI; Schmieg, Steven J [Troy, MI; Sloane, Thompson M [Oxford, MI; Hilden, David L [Shelby Township, MI; Mulawa, Patricia A [Clinton Township, MI; Lee, Jong H [Rochester Hills, MI; Cheng, Shi-Wai S [Troy, MI

    2012-05-29

    A method for initiating a regeneration mode in selective catalytic reduction device utilizing hydrocarbons as a reductant includes monitoring a temperature within the aftertreatment system, monitoring a fuel dosing rate to the selective catalytic reduction device, monitoring an initial conversion efficiency, selecting a determined equation to estimate changes in a conversion efficiency of the selective catalytic reduction device based upon the monitored temperature and the monitored fuel dosing rate, estimating changes in the conversion efficiency based upon the determined equation and the initial conversion efficiency, and initiating a regeneration mode for the selective catalytic reduction device based upon the estimated changes in conversion efficiency.
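
    A schematic sketch of the decision flow described in the claim (Python, with all numerical values invented placeholders) might look as follows: pick a decay model keyed on the monitored temperature and fuel dosing rate, propagate the conversion-efficiency estimate from its initial value, and request regeneration once the estimate falls below a threshold.

      def should_regenerate(initial_eff, temperature_c, dosing_rate_g_s, hours,
                            threshold=0.70):
          """Return (regenerate?, estimated conversion efficiency)."""
          # placeholder selection of a decay constant per operating regime
          if temperature_c > 350.0 and dosing_rate_g_s > 1.0:
              decay_per_hour = 0.010
          else:
              decay_per_hour = 0.004
          estimated_eff = initial_eff * (1.0 - decay_per_hour) ** hours
          return estimated_eff < threshold, estimated_eff

      print(should_regenerate(initial_eff=0.92, temperature_c=380.0,
                              dosing_rate_g_s=1.5, hours=30))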

  15. High-speed high-efficiency 500-W cw CO2 laser hermetization of metal frames of microelectronics devices

    NASA Astrophysics Data System (ADS)

    Levin, Andrey V.

    1996-04-01

    A high-speed, efficient method of laser surface treatment has been developed using a 500-W cw CO2 laser. The principal advantages of CO2 laser surface treatment in comparison with solid state lasers form the basis of the method. It was found that the high efficiency of welding is a consequence of the fundamental properties of the interaction of 10.6-µm IR radiation with metal. CO2 laser hermetization of metal frames of microelectronic devices is described as an example application of the proposed method.

  16. On the enhanced sampling over energy barriers in molecular dynamics simulations.

    PubMed

    Gao, Yi Qin; Yang, Lijiang

    2006-09-21

    We present here calculations of free energies of multidimensional systems using an efficient sampling method. The method uses a transformed potential energy surface, which allows an efficient sampling of both low and high energy spaces and accelerates transitions over barriers. It allows efficient sampling of the configuration space over and only over the desired energy range(s). It does not require predetermined or selected reaction coordinate(s). We apply this method to study the dynamics of slow barrier crossing processes in a disaccharide and a dipeptide system.

  17. Word aligned bitmap compression method, data structure, and apparatus

    DOEpatents

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
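
    The flavor of word-aligned run-length coding can be conveyed with a didactic sketch (Python): the bitmap is cut into 31-bit groups, runs of all-zero or all-one groups become fill words with the most significant bit set, and everything else is stored as literal words. This is a simplification for illustration, not the patented Berkeley Lab implementation.

      def wah_encode(bits):
          """bits: iterable of 0/1. Returns a list of 32-bit code words (ints)."""
          bits = list(bits)
          groups = []
          for i in range(0, len(bits), 31):
              chunk = bits[i:i + 31]
              chunk += [0] * (31 - len(chunk))                  # pad the last group
              groups.append(int("".join(map(str, chunk)), 2))
          ALL_ONES = (1 << 31) - 1
          words, i = [], 0
          while i < len(groups):
              g = groups[i]
              if g in (0, ALL_ONES):                            # start of a fill run
                  j = i
                  while j < len(groups) and groups[j] == g:
                      j += 1
                  fill_bit = 1 if g == ALL_ONES else 0
                  words.append((1 << 31) | (fill_bit << 30) | (j - i))
                  i = j
              else:
                  words.append(g)                               # literal word, MSB clear
                  i += 1
          return words

      print([hex(w) for w in wah_encode([1, 0, 1] + [0] * 200 + [1] * 62)])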

  18. High-efficiency power transfer for silicon-based photonic devices

    NASA Astrophysics Data System (ADS)

    Son, Gyeongho; Yu, Kyoungsik

    2018-02-01

    We demonstrate an efficient coupling of guided light of 1550 nm from a standard single-mode optical fiber to a silicon waveguide using the finite-difference time-domain method and propose a fabrication method of tapered optical fibers for efficient power transfer to silicon-based photonic integrated circuits. Adiabatically-varying fiber core diameters with a small tapering angle can be obtained using the tube etching method with hydrofluoric acid and standard single-mode fibers covered by plastic jackets. The optical power transmission of the fundamental HE11 and TE-like modes between the fiber tapers and the inversely-tapered silicon waveguides was calculated with the finite-difference time-domain method to be more than 99% at a wavelength of 1550 nm. The proposed method for adiabatic fiber tapering can be applied in quantum optics, silicon-based photonic integrated circuits, and nanophotonics. Furthermore, efficient coupling within the telecommunication C-band is a promising approach for quantum networks in the future.

  19. Rapid and efficient method to extract metagenomic DNA from estuarine sediments.

    PubMed

    Shamim, Kashif; Sharma, Jaya; Dubey, Santosh Kumar

    2017-07-01

    Metagenomic DNA from sediments of selected estuaries of Goa, India was extracted using a simple, fast, efficient and environmentally friendly method. The recovery of pure metagenomic DNA with our method was significantly higher than that of other well-known methods, since the concentration of recovered metagenomic DNA ranged from 1185.1 to 4579.7 µg/g of sediment. The purity of the metagenomic DNA was also considerably high, as the ratio of absorbance at 260 and 280 nm ranged from 1.88 to 1.94. Therefore, the recovered metagenomic DNA was directly used to perform various molecular biology experiments, viz. restriction digestion, PCR amplification, cloning and metagenomic library construction. This clearly proved that our protocol for metagenomic DNA extraction using silica gel efficiently removed the contaminants and prevented shearing of the metagenomic DNA. Thus, this modified method can be used to recover pure metagenomic DNA from various estuarine sediments in a rapid, efficient and eco-friendly manner.

  20. A new integrated evaluation method of heavy metals pollution control during melting and sintering of MSWI fly ash.

    PubMed

    Li, Rundong; Li, Yanlong; Yang, Tianhua; Wang, Lei; Wang, Weiyun

    2015-05-30

    Evaluations of technologies for heavy metal control mainly examine the residual and leaching rates of a single heavy metal, so the evaluation methods developed so far lack coordination and uniqueness and are therefore unsuitable for evaluating hazard control effects. An overall pollution toxicity index (OPTI) is established in this paper, and based on this index an integrated evaluation method for heavy metal pollution control is developed. Application of this method to the melting and sintering of MSWI fly ash revealed the following results: the integrated control efficiency of the melting process was higher in all instances than that of the sintering process; the lowest integrated control efficiency of melting was 56.2%, and the highest integrated control efficiency of sintering was 46.6%. Using the same technology, higher integrated control efficiencies were all achieved with lower temperatures and shorter times. This study demonstrates the unification and consistency of the method. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Adaptive radial basis function mesh deformation using data reduction

    NASA Astrophysics Data System (ADS)

    Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.

    2016-09-01

    Radial Basis Function (RBF) mesh deformation is one of the most robust mesh deformation methods available. Using the greedy (data reduction) method in combination with an explicit boundary correction, results in an efficient method as shown in literature. However, to ensure the method remains robust, two issues are addressed: 1) how to ensure that the set of control points remains an accurate representation of the geometry in time and 2) how to use/automate the explicit boundary correction, while ensuring a high mesh quality. In this paper, we propose an adaptive RBF mesh deformation method, which ensures the set of control points always represents the geometry/displacement up to a certain (user-specified) criteria, by keeping track of the boundary error throughout the simulation and re-selecting when needed. Opposed to the unit displacement and prescribed displacement selection methods, the adaptive method is more robust, user-independent and efficient, for the cases considered. Secondly, the analysis of a single high aspect ratio cell is used to formulate an equation for the correction radius needed, depending on the characteristics of the correction function used, maximum aspect ratio, minimum first cell height and boundary error. Based on the analysis two new radial basis correction functions are derived and proposed. This proposed automated procedure is verified while varying the correction function, Reynolds number (and thus first cell height and aspect ratio) and boundary error. Finally, the parallel efficiency is studied for the two adaptive methods, unit displacement and prescribed displacement for both the CPU as well as the memory formulation with a 2D oscillating and translating airfoil with oscillating flap, a 3D flexible locally deforming tube and deforming wind turbine blade. Generally, the memory formulation requires less work (due to the large amount of work required for evaluating RBF's), but the parallel efficiency reduces due to the limited bandwidth available between CPU and memory. In terms of parallel efficiency/scaling the different studied methods perform similarly, with the greedy algorithm being the bottleneck. In terms of absolute computational work the adaptive methods are better for the cases studied due to their more efficient selection of the control points. By automating most of the RBF mesh deformation, a robust, efficient and almost user-independent mesh deformation method is presented.
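
    To make the greedy control-point idea concrete, the sketch below (Python with NumPy) interpolates boundary displacements to interior nodes with a compactly supported RBF and keeps adding the worst-approximated boundary point until the boundary error drops below a tolerance. The Wendland C2 basis, support radius, tolerance and circular test geometry are illustrative assumptions; the explicit boundary correction and the adaptive re-selection logic of the paper are not included.

      import numpy as np

      def wendland_c2(r, radius):
          x = np.clip(r / radius, 0.0, 1.0)
          return (1.0 - x) ** 4 * (4.0 * x + 1.0)

      def greedy_rbf_deformation(bpts, bdisp, ipts, tol=1e-3, radius=2.0):
          selected = [int(np.argmax(np.linalg.norm(bdisp, axis=1)))]
          while True:
              ctrl = bpts[selected]
              A = wendland_c2(np.linalg.norm(ctrl[:, None] - ctrl[None, :], axis=2), radius)
              coeffs = np.linalg.solve(A, bdisp[selected])
              Pb = wendland_c2(np.linalg.norm(bpts[:, None] - ctrl[None, :], axis=2), radius)
              err = np.linalg.norm(Pb @ coeffs - bdisp, axis=1)
              if err.max() < tol or len(selected) == len(bpts):
                  break
              selected.append(int(np.argmax(err)))              # add worst boundary point
          Pi = wendland_c2(np.linalg.norm(ipts[:, None] - ctrl[None, :], axis=2), radius)
          return Pi @ coeffs                                     # interior displacements

      theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
      boundary = np.column_stack([np.cos(theta), np.sin(theta)])
      disp = np.column_stack([0.1 * np.cos(theta), np.zeros_like(theta)])   # stretch in x
      interior = np.random.default_rng(4).uniform(-0.8, 0.8, size=(50, 2))
      print(greedy_rbf_deformation(boundary, disp, interior)[:3])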

  2. On Efficient Multigrid Methods for Materials Processing Flows with Small Particles

    NASA Technical Reports Server (NTRS)

    Thomas, James (Technical Monitor); Diskin, Boris; Harik, VasylMichael

    2004-01-01

    Multiscale modeling of materials requires simulations of multiple levels of structural hierarchy. The computational efficiency of numerical methods becomes a critical factor for simulating large physical systems with highly disparate length scales. Multigrid methods are known for their superior efficiency in representing/resolving different levels of physical details. The efficiency is achieved by employing interactively different discretizations on different scales (grids). To assist optimization of manufacturing conditions for materials processing with numerous particles (e.g., dispersion of particles, controlling flow viscosity and clusters), a new multigrid algorithm has been developed for a case of multiscale modeling of flows with small particles that have various length scales. The optimal efficiency of the algorithm is crucial for accurate predictions of the effect of processing conditions (e.g., pressure and velocity gradients) on the local flow fields that control the formation of various microstructures or clusters.

  3. The efficiency and effectiveness of utilizing diagrams in interviews: an assessment of participatory diagramming and graphic elicitation

    PubMed Central

    Umoquit, Muriah J; Dobrow, Mark J; Lemieux-Charles, Louise; Ritvo, Paul G; Urbach, David R; Wodchis, Walter P

    2008-01-01

    Background This paper focuses on measuring the efficiency and effectiveness of two diagramming methods employed in key informant interviews with clinicians and health care administrators. The two methods are 'participatory diagramming', where the respondent creates a diagram that assists in their communication of answers, and 'graphic elicitation', where a researcher-prepared diagram is used to stimulate data collection. Methods These two diagramming methods were applied in key informant interviews and their value in efficiently and effectively gathering data was assessed based on quantitative measures and qualitative observations. Results Assessment of the two diagramming methods suggests that participatory diagramming is an efficient method for collecting data in graphic form, but may not generate the depth of verbal response that many qualitative researchers seek. In contrast, graphic elicitation was more intuitive, better understood and preferred by most respondents, and often provided more contemplative verbal responses, however this was achieved at the expense of more interview time. Conclusion Diagramming methods are important for eliciting interview data that are often difficult to obtain through traditional verbal exchanges. Subject to the methodological limitations of the study, our findings suggest that while participatory diagramming and graphic elicitation have specific strengths and weaknesses, their combined use can provide complementary information that would not likely occur with the application of only one diagramming method. The methodological insights gained by examining the efficiency and effectiveness of these diagramming methods in our study should be helpful to other researchers considering their incorporation into qualitative research designs. PMID:18691410

  4. Five-Junction Solar Cell Optimization Using Silvaco Atlas

    DTIC Science & Technology

    2017-09-01

    experimental sources [1], [4], [6]. f. Numerical Method The method selected for solving the non-linear equations that make up the simulation can be... and maximize efficiency. Optimization of solar cell efficiency is carried out via nearly orthogonal balanced design of experiments methodology. Silvaco ATLAS is utilized to

  5. 10 CFR 431.106 - Uniform test method for the measurement of energy efficiency of commercial water heaters and hot...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 3 2013-01-01 2013-01-01 false Uniform test method for the measurement of energy efficiency of commercial water heaters and hot water supply boilers (other than commercial heat pump water heaters). 431.106 Section 431.106 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL...

  6. 10 CFR 431.106 - Uniform test method for the measurement of energy efficiency of commercial water heaters and hot...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Uniform test method for the measurement of energy efficiency of commercial water heaters and hot water supply boilers (other than commercial heat pump water heaters). 431.106 Section 431.106 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL...

  7. 10 CFR 431.106 - Uniform test method for the measurement of energy efficiency of commercial water heaters and hot...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 3 2012-01-01 2012-01-01 false Uniform test method for the measurement of energy efficiency of commercial water heaters and hot water supply boilers (other than commercial heat pump water heaters). 431.106 Section 431.106 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL...

  8. 10 CFR 431.106 - Uniform test method for the measurement of energy efficiency of commercial water heaters and hot...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Uniform test method for the measurement of energy efficiency of commercial water heaters and hot water supply boilers (other than commercial heat pump water heaters). 431.106 Section 431.106 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN COMMERCIAL AND INDUSTRIAL...

  9. Simultaneous analysis of 70 pesticides using HPlc/MS/MS: a comparison of the multiresidue method of Klein and Alder and the QuEChERS method.

    PubMed

    Riedel, Melanie; Speer, Karl; Stuke, Sven; Schmeer, Karl

    2010-01-01

    Since 2003, two new multipesticide residue methods for screening crops for a large number of pesticides, developed by Klein and Alder and Anastassiades et al. (Quick, Easy, Cheap, Effective, Rugged, and Safe; QuEChERS), have been published. Our intention was to compare these two important methods on the basis of their extraction efficiency, reproducibility, ruggedness, ease of use, and speed. In total, 70 pesticides belonging to numerous different substance classes were analyzed at two concentration levels by applying both methods, using five different representative matrixes. In the case of the QuEChERS method, the results of the three sample preparation steps (crude extract, extract after SPE, and extract after SPE and acidification) were compared with each other and with the results obtained with the Klein and Alder method. The extraction efficiencies of the QuEChERS method were far higher, and the sample preparation was much quicker when the last two steps were omitted. In most cases, the extraction efficiencies after the first step were approximately 100%. With extraction efficiencies of mostly less than 70%, the Klein and Alder method did not compare favorably. Some analytes caused problems during evaluation, mostly due to matrix influences.

  10. A Graphical Method for Estimation of Barometric Efficiency from Continuous Data - Concepts and Application to a Site in the Piedmont, Air Force Plant 6, Marietta, Georgia

    USGS Publications Warehouse

    Gonthier, Gerard

    2007-01-01

    A graphical method that uses continuous water-level and barometric-pressure data was developed to estimate barometric efficiency. A plot of nearly continuous water level (on the y-axis), as a function of nearly continuous barometric pressure (on the x-axis), will plot as a line curved into a series of connected elliptical loops. Each loop represents a barometric-pressure fluctuation. The negative of the slope of the major axis of an elliptical loop will be the ratio of water-level change to barometric-pressure change, which is the sum of the barometric efficiency plus the error. The negative of the slope of the preferred orientation of many elliptical loops is an estimate of the barometric efficiency. The slope of the preferred orientation of many elliptical loops is approximately the median of the slopes of the major axes of the elliptical loops. If water-level change that is not caused by barometric-pressure change does not correlate with barometric-pressure change, the probability that the error will be greater than zero will be the same as the probability that it will be less than zero. As a result, the negative of the median of the slopes for many loops will be close to the barometric efficiency. The graphical method provided a rapid assessment of whether a well was affected by barometric-pressure change and also provided a rapid estimate of barometric efficiency. The graphical method was used to assess which wells at Air Force Plant 6, Marietta, Georgia, had water levels affected by barometric-pressure changes during a 2003 constant-discharge aquifer test. The graphical method was also used to estimate barometric efficiency. Barometric-efficiency estimates from the graphical method were compared to those of four other methods: average of ratios, median of ratios, Clark, and slope. The two methods (the graphical and median-of-ratios methods) that used the median values of water-level change divided by barometric-pressure change appeared to be most resistant to error caused by barometric-pressure-independent water-level change. The graphical method was particularly resistant to large amounts of barometric-pressure-independent water-level change, having an average and standard deviation of error for control wells that was less than one-quarter that of the other four methods. When using the graphical method, it is advisable that more than one person select the slope or that the same person fits the same data several times to minimize the effect of subjectivity. Also, a long study period should be used (at least 60 days) to ensure that loops affected by large amounts of barometric-pressure-independent water-level change do not significantly contribute to error in the barometric-efficiency estimate.
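
    A minimal sketch of the loop-slope idea described above, not the authors' implementation: for each barometric-pressure fluctuation, the major axis of the water-level versus barometric-pressure loop is found from a principal-component fit, and the barometric efficiency is taken as the negative of the median slope over many loops. The function names, the loop-boundary input, and the use of NumPy are assumptions made for illustration.

```python
import numpy as np

def loop_slope(baro, wl):
    # Slope of the major axis of one elliptical loop (water level vs. barometric
    # pressure), taken from the leading eigenvector of the 2x2 covariance matrix.
    pts = np.column_stack([baro - np.mean(baro), wl - np.mean(wl)])
    eigvals, eigvecs = np.linalg.eigh(np.cov(pts, rowvar=False))
    major = eigvecs[:, np.argmax(eigvals)]      # direction of largest variance
    return major[1] / major[0]                  # d(water level) / d(pressure)

def barometric_efficiency(baro, wl, loop_bounds):
    # Negative of the median major-axis slope over many loops; `loop_bounds` is a
    # list of (start, stop) index pairs, one pair per barometric-pressure fluctuation.
    slopes = [loop_slope(baro[i:j], wl[i:j]) for i, j in loop_bounds]
    return -np.median(slopes)
```

    Using the median of the loop slopes rather than their mean mirrors the report's observation that the median is resistant to water-level changes that are independent of barometric pressure.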

  11. Efficient integration method for fictitious domain approaches

    NASA Astrophysics Data System (ADS)

    Duczek, Sascha; Gabbert, Ulrich

    2015-10-01

    In the current article, we present an efficient and accurate numerical method for the integration of the system matrices in fictitious domain approaches such as the finite cell method (FCM). In the framework of the FCM, the physical domain is embedded in a geometrically larger domain of simple shape which is discretized using a regular Cartesian grid of cells. Therefore, a spacetree-based adaptive quadrature technique is normally deployed to resolve the geometry of the structure. Depending on the complexity of the structure under investigation, this method accounts for most of the computational effort. To reduce the computational costs of computing the system matrices, an efficient quadrature scheme based on the divergence theorem (Gauß-Ostrogradsky theorem) is proposed. Using this theorem, the dimension of the integral is reduced by one, i.e. instead of evaluating the integral over the whole domain, only its contour needs to be considered. In the current paper, we present the general principles of the integration method and its implementation. The results for several two-dimensional benchmark problems highlight its properties. The efficiency of the proposed method is compared to conventional spacetree-based integration techniques.
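
    The dimension reduction can be written out for a monomial integrand of the kind that appears in the entries of the FCM system matrices; the particular antiderivative chosen below is one of many admissible choices and serves only to illustrate the principle.

```latex
% Divergence theorem applied to f = x^a y^b with F = (x^{a+1} y^b/(a+1), 0),
% so that \nabla \cdot F = f: the domain integral becomes a contour integral.
\int_{\Omega} x^{a} y^{b} \,\mathrm{d}\Omega
  = \int_{\Omega} \nabla \cdot F \,\mathrm{d}\Omega
  = \oint_{\partial\Omega} F \cdot n \,\mathrm{d}s
  = \oint_{\partial\Omega} \frac{x^{a+1} y^{b}}{a+1}\, n_{x} \,\mathrm{d}s
```

    Only the contour of the physical domain has to be resolved by the quadrature, which is where the savings over spacetree-based cell subdivision come from.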

  12. Theory and implementation of H-matrix based iterative and direct solvers for Helmholtz and elastodynamic oscillatory kernels

    NASA Astrophysics Data System (ADS)

    Chaillat, Stéphanie; Desiderio, Luca; Ciarlet, Patrick

    2017-12-01

    In this work, we study the accuracy and efficiency of hierarchical matrix (H-matrix) based fast methods for solving dense linear systems arising from the discretization of the 3D elastodynamic Green's tensors. It is well known in the literature that standard H-matrix based methods, although very efficient tools for asymptotically smooth kernels, are not optimal for oscillatory kernels. H2-matrix and directional approaches have been proposed to overcome this problem. However, the implementation of such methods is much more involved than the standard H-matrix representation. The central questions we address are twofold. (i) What is the frequency range in which the H-matrix format is an efficient representation for 3D elastodynamic problems? (ii) What can be expected of such an approach to model problems in mechanical engineering? We show that even though the method is not optimal (in the sense that more involved representations can lead to faster algorithms), an efficient solver can be easily developed. The capabilities of the method are illustrated on numerical examples using the Boundary Element Method.
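
    To make the H-matrix idea concrete, the fragment below compresses a single admissible (well-separated) off-diagonal block with a truncated SVD; production codes typically use adaptive cross approximation instead, and the tolerance-based rank choice here is only an assumption for the sketch.

```python
import numpy as np

def compress_block(block, tol=1e-6):
    # Low-rank factors U_r, V_r with block ~= U_r @ V_r for one admissible
    # H-matrix block; the rank follows the singular-value decay relative to `tol`.
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    rank = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :rank] * s[:rank], Vt[:rank, :]   # shapes (m, r) and (r, n)
```

    For oscillatory Helmholtz or elastodynamic kernels, the rank needed for a given tolerance grows with frequency, which is precisely the behaviour the frequency-range question above is probing.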

  13. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions.

    PubMed

    Tao, Guohua; Miller, William H

    2011-07-14

    An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can generally be applied to sampling rare events efficiently while avoiding becoming trapped in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.
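
    A minimal sketch of the sampling strategy only (not of the SC-IVR trajectory machinery): a Metropolis walk over phase-space initial conditions whose target weight is the magnitude of a trajectory's contribution, with occasional global trial moves drawn uniformly over a box so that the walk does not stay trapped in one region. The weight function, box size, and step sizes are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_initial_conditions(weight, n_samples, dim, box=5.0, p_global=0.1, step=0.3):
    # Metropolis sampling of phase-space points x with target density proportional
    # to weight(x) >= 0; a global uniform move is attempted with probability p_global,
    # otherwise a small local Gaussian move is used.
    x = np.zeros(dim)
    w = weight(x)
    samples = []
    for _ in range(n_samples):
        if rng.random() < p_global:
            trial = rng.uniform(-box, box, dim)       # global move
        else:
            trial = x + step * rng.normal(size=dim)   # local move
        w_trial = weight(trial)
        if rng.random() * w < w_trial:                # Metropolis acceptance
            x, w = trial, w_trial
        samples.append(x.copy())
    return np.array(samples)
```

    In an actual correlation-function estimate, each sampled trajectory's contribution would be divided by its sampling weight so that the estimator remains unbiased.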

  14. Efficient simulation of intensity profile of light through subpixel-matched lenticular lens array for two- and four-view auto-stereoscopic liquid-crystal display.

    PubMed

    Chang, Yia-Chung; Tang, Li-Chuan; Yin, Chun-Yi

    2013-01-01

    Both an analytical formula and an efficient numerical method for simulation of the accumulated intensity profile of light that is refracted through a lenticular lens array placed on top of a liquid-crystal display (LCD) are presented. The influence of light refracted through adjacent lenses is examined in the two-view and four-view systems. Our simulation results are in good agreement with those obtained by a piece of commercial software, ASAP, but our method is much more efficient. The proposed method allows one to adjust the design parameters and carry out simulations of the performance of a subpixel-matched auto-stereoscopic LCD more efficiently and easily.

  15. Efficient computation of photonic crystal waveguide modes with dispersive material.

    PubMed

    Schmidt, Kersten; Kappeler, Roman

    2010-03-29

    The optimization of PhC waveguides is a key issue for successfully designing PhC devices. Since this design task is computationally expensive, efficient methods are demanded. The available codes for computing photonic bands are also applied to PhC waveguides. They are reliable but not very efficient, a shortcoming that is even more pronounced for dispersive materials. We present a method based on higher-order finite elements with curved cells, which allows one to solve for the band structure while directly taking into account the dispersiveness of the materials. This is accomplished by reformulating the wave equations as a linear eigenproblem in the complex wave vectors k. For this method, we demonstrate by a convergence analysis the high efficiency of the computation of guided PhC waveguide modes.
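
    The reformulation described above can be illustrated with a standard companion-form linearization: a discretized problem that is quadratic in the wave number k is rewritten as a linear eigenproblem of twice the size. The matrices A_0, A_1, A_2 below stand for frequency-dependent finite-element matrices and are assumptions for the sketch, not the authors' notation.

```latex
% Quadratic eigenproblem in k (A_0, A_1, A_2 depend on \omega and on the
% dispersive material parameters):
\left( k^{2} A_{2} + k\, A_{1} + A_{0} \right) u = 0
% Companion-form linearization: a linear eigenproblem in k of twice the size.
\begin{pmatrix} -A_{1} & -A_{0} \\ I & 0 \end{pmatrix}
\begin{pmatrix} k\,u \\ u \end{pmatrix}
= k
\begin{pmatrix} A_{2} & 0 \\ 0 & I \end{pmatrix}
\begin{pmatrix} k\,u \\ u \end{pmatrix}
```

    Because the material dispersion enters only through the frequency-dependent coefficients, the band structure can be obtained by fixing the frequency and solving for the complex k, as the abstract describes.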

  16. Hot Fusion: an efficient method to clone multiple DNA fragments as well as inverted repeats without ligase.

    PubMed

    Fu, Changlin; Donovan, William P; Shikapwashya-Hasser, Olga; Ye, Xudong; Cole, Robert H

    2014-01-01

    Molecular cloning is utilized in nearly every facet of biological and medical research. We have developed a method, termed Hot Fusion, to efficiently clone one or multiple DNA fragments into plasmid vectors without the use of ligase. The method is directional, produces seamless junctions and is not dependent on the availability of restriction sites for inserts. Fragments are assembled based on shared homology regions of 17-30 bp at the junctions, which greatly simplifies the construct design. Hot Fusion is carried out in a one-step, single tube reaction at 50 °C for one hour followed by cooling to room temperature. In addition to its utility for multi-fragment assembly Hot Fusion provides a highly efficient method for cloning DNA fragments containing inverted repeats for applications such as RNAi. The overall cloning efficiency is in the order of 90-95%.

  17. Hot Fusion: An Efficient Method to Clone Multiple DNA Fragments as Well as Inverted Repeats without Ligase

    PubMed Central

    Fu, Changlin; Donovan, William P.; Shikapwashya-Hasser, Olga; Ye, Xudong; Cole, Robert H.

    2014-01-01

    Molecular cloning is utilized in nearly every facet of biological and medical research. We have developed a method, termed Hot Fusion, to efficiently clone one or multiple DNA fragments into plasmid vectors without the use of ligase. The method is directional, produces seamless junctions and is not dependent on the availability of restriction sites for inserts. Fragments are assembled based on shared homology regions of 17–30 bp at the junctions, which greatly simplifies the construct design. Hot Fusion is carried out in a one-step, single tube reaction at 50°C for one hour followed by cooling to room temperature. In addition to its utility for multi-fragment assembly Hot Fusion provides a highly efficient method for cloning DNA fragments containing inverted repeats for applications such as RNAi. The overall cloning efficiency is in the order of 90–95%. PMID:25551825

  18. Efficient biprediction decision scheme for fast high efficiency video coding encoding

    NASA Astrophysics Data System (ADS)

    Park, Sang-hyo; Lee, Seung-ho; Jang, Euee S.; Jun, Dongsan; Kang, Jung-Won

    2016-11-01

    An efficient biprediction decision scheme for high efficiency video coding (HEVC) is proposed for fast-encoding applications. For low-delay video applications, bidirectional prediction can be used to increase compression performance efficiently with previous reference frames. However, at the same time, the computational complexity of the HEVC encoder is significantly increased due to the additional biprediction search. Although some research has attempted to reduce this complexity, whether biprediction is strongly related to both motion complexity and prediction modes in a coding unit has not yet been investigated. A method that avoids most compression-inefficient search points is proposed so that the computational complexity of the motion estimation process can be dramatically decreased. To determine whether biprediction is critical, the proposed method exploits the stochastic correlation of the context of prediction units (PUs): the direction of a PU and the accuracy of a motion vector. Experimental results show that the time complexity of biprediction can be reduced to 30% on average, outperforming existing methods in terms of encoding time, number of function calls, and memory access.
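
    The abstract does not spell out the exact decision rule, so the following is only an illustrative sketch of the kind of early-termination test described: the biprediction search is skipped when the uni-directional PU context suggests little is to be gained. The feature names and the threshold are hypothetical.

```python
def skip_biprediction(neighbour_direction, mv_is_fractional, uni_cost, cost_threshold):
    # Illustrative early-termination test for the biprediction motion search.
    #   neighbour_direction -- dominant prediction direction of neighbouring PUs
    #   mv_is_fractional    -- True if the best uni-directional MV needed sub-pel
    #                          accuracy (used here as a proxy for complex motion)
    #   uni_cost            -- rate-distortion cost of the best uni-directional candidate
    #   cost_threshold      -- cost below which biprediction is unlikely to help
    if neighbour_direction == "uni" and not mv_is_fractional:
        return True                     # simple motion, already well predicted
    return uni_cost < cost_threshold    # uni-directional candidate is already cheap
```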

  19. An assessment of the efficiency of fungal DNA extraction methods for maximizing the detection of medically important fungi using PCR.

    PubMed

    Karakousis, A; Tan, L; Ellis, D; Alexiou, H; Wormald, P J

    2006-04-01

    To date, no single reported DNA extraction method is suitable for the efficient extraction of DNA from all fungal species. The efficiency of extraction is of particular importance in PCR-based medical diagnostic applications where the quantity of fungus in a tissue biopsy may be limited. We subjected 16 medically relevant fungi to physical, chemical and enzymatic cell wall disruption methods which constitutes the first step in extracting DNA. Examination by light microscopy showed that grinding with mortar and pestle was the most efficient means of disrupting the rigid fungal cell walls of hyphae and conidia. We then trialled several published DNA isolation protocols to ascertain the most efficient method of extraction. Optimal extraction was achieved by incorporating a lyticase and proteinase K enzymatic digestion step and adapting a DNA extraction procedure from a commercial kit (MO BIO) to generate high yields of high quality DNA from all 16 species. DNA quality was confirmed by the successful PCR amplification of the conserved region of the fungal 18S small-subunit rRNA multicopy gene.

  20. Evaluation Method for Fieldlike-Torque Efficiency by Modulation of the Resonance Field

    NASA Astrophysics Data System (ADS)

    Kim, Changsoo; Kim, Dongseuk; Chun, Byong Sun; Moon, Kyoung-Woong; Hwang, Chanyong

    2018-05-01

    The spin Hall effect has attracted a lot of interest in spintronics because it offers the possibility of a faster switching route with an electric current than with a spin-transfer-torque device. Recently, fieldlike spin-orbit torque has been shown to play an important role in the magnetization switching mechanism. However, there is no simple method for observing the fieldlike spin-orbit torque efficiency. We suggest a method for measuring fieldlike spin-orbit torque using a linear change in the resonance field in spectra of direct-current (dc)-tuned spin-torque ferromagnetic resonance. The fieldlike spin-orbit torque efficiency can be obtained both in a macrospin simulation and in experiments by simply subtracting the Oersted field from the shift of the resonance field. This method analyzes the effect of fieldlike torque using dc in a normal metal; therefore, only the dc resistivity and the dimensions of each layer are considered in estimating the fieldlike spin-torque efficiency. The evaluation of fieldlike-torque efficiency of a newly emerging material by modulation of the resonance field provides a shortcut in the development of an alternative magnetization switching device.
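
    As a back-of-the-envelope illustration of the subtraction step only: the effective fieldlike field is the dc-induced shift of the resonance field minus the Oersted field produced by the current in the normal-metal layer. The current-sheet estimate H_Oe = I_NM / (2w) and all parameter names below are assumptions for the sketch, not values or notation from the paper.

```python
def fieldlike_field_per_current(dHres_dI, nm_current_fraction, width_m):
    # Effective fieldlike field per unit applied dc current, in (A/m) per A.
    #   dHres_dI            -- measured resonance-field shift per unit total dc current
    #   nm_current_fraction -- fraction of the current flowing in the normal metal,
    #                          estimated from the layer resistivities and dimensions
    #   width_m             -- wire width in metres
    # Oersted field of a thin current sheet: H = K / 2 with sheet current K = I_NM / w.
    oersted_per_current = nm_current_fraction / (2.0 * width_m)
    return dHres_dI - oersted_per_current
```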
