Science.gov

Sample records for adjoint solution algorithm

  1. GPU-Accelerated Adjoint Algorithmic Differentiation

    PubMed Central

    Gremse, Felix; Höfter, Andreas; Razik, Lukas; Kiessling, Fabian; Naumann, Uwe

    2015-01-01

    Many scientific problems such as classifier training or medical image reconstruction can be expressed as minimization of differentiable real-valued cost functions and solved with iterative gradient-based methods. Adjoint algorithmic differentiation (AAD) enables automated computation of gradients of such cost functions implemented as computer programs. To backpropagate adjoint derivatives, excessive memory is potentially required to store the intermediate partial derivatives on a dedicated data structure, referred to as the “tape”. Parallelization is difficult because threads need to synchronize their accesses during taping and backpropagation. This situation is aggravated for many-core architectures, such as Graphics Processing Units (GPUs), because of the large number of light-weight threads and the limited memory size in general as well as per thread. We show how these limitations can be mediated if the cost function is expressed using GPU-accelerated vector and matrix operations which are recognized as intrinsic functions by our AAD software. We compare this approach with naive and vectorized implementations for CPUs. We use four increasingly complex cost functions to evaluate the performance with respect to memory consumption and gradient computation times. Using vectorization, CPU and GPU memory consumption could be substantially reduced compared to the naive reference implementation, in some cases even by an order of complexity. The vectorization allowed usage of optimized parallel libraries during forward and reverse passes which resulted in high speedups for the vectorized CPU version compared to the naive reference implementation. The GPU version achieved an additional speedup of 7.5 ± 4.4, showing that the processing power of GPUs can be utilized for AAD using this concept. Furthermore, we show how this software can be systematically extended for more complex problems such as nonlinear absorption reconstruction for fluorescence-mediated tomography.
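
    The taping and backpropagation workflow described in this record can be illustrated with a minimal reverse-mode AD sketch in which whole vector operations are recorded as single tape entries, loosely analogous to the intrinsic functions mentioned in the abstract. This is an illustrative sketch, not the authors' software; the Var/Tape classes and the quadratic cost function are assumptions made for the example.

```python
import numpy as np

class Var:
    """A value together with its adjoint (gradient accumulator)."""
    def __init__(self, value):
        self.value = np.asarray(value, dtype=float)
        self.adj = np.zeros_like(self.value)

class Tape:
    """Records whole vector/matrix operations (the 'intrinsics') for the reverse sweep."""
    def __init__(self):
        self.backward_ops = []

    def matvec(self, A, x):                        # y = A x
        out = Var(A @ x.value)
        self.backward_ops.append(lambda: x.adj.__iadd__(A.T @ out.adj))
        return out

    def sub(self, y, b):                           # r = y - b  (b constant)
        out = Var(y.value - b)
        self.backward_ops.append(lambda: y.adj.__iadd__(out.adj))
        return out

    def half_sumsq(self, r):                       # c = 0.5 * ||r||^2
        out = Var(0.5 * r.value @ r.value)
        self.backward_ops.append(lambda: r.adj.__iadd__(out.adj * r.value))
        return out

    def backpropagate(self, cost):
        cost.adj = np.ones_like(cost.adj)          # seed: d(cost)/d(cost) = 1
        for op in reversed(self.backward_ops):     # reverse sweep over the tape
            op()

# Gradient of f(x) = 0.5 * ||A x - b||^2 via taping + backpropagation
rng = np.random.default_rng(0)
A, b = rng.normal(size=(5, 3)), rng.normal(size=5)
x = Var(rng.normal(size=3))

tape = Tape()
cost = tape.half_sumsq(tape.sub(tape.matvec(A, x), b))
tape.backpropagate(cost)

print(np.allclose(x.adj, A.T @ (A @ x.value - b)))  # True: matches the analytic gradient
```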

  2. GPU-accelerated adjoint algorithmic differentiation

    NASA Astrophysics Data System (ADS)

    Gremse, Felix; Höfter, Andreas; Razik, Lukas; Kiessling, Fabian; Naumann, Uwe

    2016-03-01

    Many scientific problems such as classifier training or medical image reconstruction can be expressed as minimization of differentiable real-valued cost functions and solved with iterative gradient-based methods. Adjoint algorithmic differentiation (AAD) enables automated computation of gradients of such cost functions implemented as computer programs. To backpropagate adjoint derivatives, excessive memory is potentially required to store the intermediate partial derivatives on a dedicated data structure, referred to as the "tape". Parallelization is difficult because threads need to synchronize their accesses during taping and backpropagation. This situation is aggravated for many-core architectures, such as Graphics Processing Units (GPUs), because of the large number of light-weight threads and the limited memory size in general as well as per thread. We show how these limitations can be mediated if the cost function is expressed using GPU-accelerated vector and matrix operations which are recognized as intrinsic functions by our AAD software. We compare this approach with naive and vectorized implementations for CPUs. We use four increasingly complex cost functions to evaluate the performance with respect to memory consumption and gradient computation times. Using vectorization, CPU and GPU memory consumption could be substantially reduced compared to the naive reference implementation, in some cases even by an order of complexity. The vectorization allowed usage of optimized parallel libraries during forward and reverse passes which resulted in high speedups for the vectorized CPU version compared to the naive reference implementation. The GPU version achieved an additional speedup of 7.5 ± 4.4, showing that the processing power of GPUs can be utilized for AAD using this concept. Furthermore, we show how this software can be systematically extended for more complex problems such as nonlinear absorption reconstruction for fluorescence-mediated tomography.

  3. An Exact Dual Adjoint Solution Method for Turbulent Flows on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Lu, James; Park, Michael A.; Darmofal, David L.

    2003-01-01

    An algorithm for solving the discrete adjoint system based on an unstructured-grid discretization of the Navier-Stokes equations is presented. The method is constructed such that an adjoint solution exactly dual to a direct differentiation approach is recovered at each time step, yielding a convergence rate which is asymptotically equivalent to that of the primal system. The new approach is implemented within a three-dimensional unstructured-grid framework and results are presented for inviscid, laminar, and turbulent flows. Improvements to the baseline solution algorithm, such as line-implicit relaxation and a tight coupling of the turbulence model, are also presented. By storing nearest-neighbor terms in the residual computation, the dual scheme is computationally efficient, while requiring twice the memory of the flow solution. The scheme is expected to have a broad impact on computational problems related to design optimization as well as error estimation and grid adaptation efforts.

  4. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.
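
    The efficiency claim above (discretely consistent sensitivities for hundreds of design variables at roughly the cost of one extra solve) rests on the generic discrete-adjoint identity. The sketch below illustrates it on a toy two-equation residual; the residual, objective, and Newton solver are hypothetical stand-ins, not the NASA Langley implementation.

```python
import numpy as np

# Toy discrete-adjoint sensitivity: a residual R(q, d) = 0 defines the state q(d),
# and df/dd for an objective f(q, d) is obtained with ONE extra (transposed) linear
# solve, independent of the number of design variables d:
#   (dR/dq)^T lam = (df/dq)^T,   total df/dd = df/dd - lam^T dR/dd

def R(q, d):                  # nonlinear residual (stand-in for a discretized PDE)
    return np.array([q[0] ** 2 + d[0] * q[1] - 1.0,
                     q[0] + q[1] ** 3 - d[1]])

def dRdq(q, d):
    return np.array([[2 * q[0], d[0]],
                     [1.0, 3 * q[1] ** 2]])

def dRdd(q, d):
    return np.array([[q[1], 0.0],
                     [0.0, -1.0]])

def f(q, d):                  # objective (a drag-like functional, for illustration)
    return q[0] ** 2 + 0.1 * d[1] ** 2

def dfdq(q, d):
    return np.array([2 * q[0], 0.0])

def dfdd(q, d):
    return np.array([0.0, 0.2 * d[1]])

def solve_state(d):
    q = np.array([0.5, 0.5])
    for _ in range(50):                            # Newton iterations
        q = q - np.linalg.solve(dRdq(q, d), R(q, d))
    return q

d = np.array([0.3, 1.2])
q = solve_state(d)

lam = np.linalg.solve(dRdq(q, d).T, dfdq(q, d))    # single adjoint solve
grad = dfdd(q, d) - lam @ dRdd(q, d)               # full sensitivity for ALL design variables

eps = 1e-6                                          # finite-difference check of component 0
dp = d.copy(); dp[0] += eps
print(grad[0], (f(solve_state(dp), dp) - f(q, d)) / eps)   # should agree to ~1e-5
```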

  5. Solution of the self-adjoint radiative transfer equation on hybrid computer systems

    NASA Astrophysics Data System (ADS)

    Gasilov, V. A.; Kuchugov, P. A.; Olkhovskaya, O. G.; Chetverushkin, B. N.

    2016-06-01

    A new technique for simulating three-dimensional radiative energy transfer for the use in the software designed for the predictive simulation of plasma with high energy density on parallel computers is proposed. A highly scalable algorithm that takes into account the angular dependence of the radiation intensity and is free of the ray effect is developed based on the solution of a second-order equation with a self-adjoint operator. A distinctive feature of this algorithm is a preliminary transformation of rotation to eliminate mixed derivatives with respect to the spatial variables, simplify the structure of the difference operator, and accelerate the convergence of the iterative solution of the equation. It is shown that the proposed method correctly reproduces the limiting cases—isotropic radiation and the directed radiation with a δ-shaped angular distribution.

  6. A three-dimensional finite-volume Eulerian-Lagrangian Localized Adjoint Method (ELLAM) for solute-transport modeling

    USGS Publications Warehouse

    Heberton, C.I.; Russell, T.F.; Konikow, L.F.; Hornberger, G.Z.

    2000-01-01

    This report documents the U.S. Geological Survey Eulerian-Lagrangian Localized Adjoint Method (ELLAM) algorithm that solves an integral form of the solute-transport equation, incorporating an implicit-in-time difference approximation for the dispersive and sink terms. Like the algorithm in the original version of the U.S. Geological Survey MOC3D transport model, ELLAM uses a method of characteristics approach to solve the transport equation on the basis of the velocity field. The ELLAM algorithm, however, is based on an integral formulation of conservation of mass and uses appropriate numerical techniques to obtain global conservation of mass. The implicit procedure eliminates several stability criteria required for an explicit formulation. Consequently, ELLAM allows large transport time increments to be used. ELLAM can produce qualitatively good results using a small number of transport time steps. A description of the ELLAM numerical method, the data-input requirements and output options, and the results of simulator testing and evaluation are presented. The ELLAM algorithm was evaluated for the same set of problems used to test and evaluate Version 1 and Version 2 of MOC3D. These test results indicate that ELLAM offers a viable alternative to the explicit and implicit solvers in MOC3D. Its use is desirable when mass balance is imperative or a fast, qualitative model result is needed. Although accurate solutions can be generated using ELLAM, its efficiency relative to the two previously documented solution algorithms is problem dependent.

  7. Nonlinear self-adjointness and invariant solutions of a 2D Rossby wave equation

    NASA Astrophysics Data System (ADS)

    Cimpoiasu, Rodica; Constantinescu, Radu

    2014-02-01

    The paper investigates the nonlinear self-adjointness of the nonlinear inviscid barotropic nondivergent vorticity equation in a beta-plane. It is a particular form of the Rossby equation which does not possess a variational structure, and it is studied using a method recently developed by Ibragimov. The conservation laws associated with the infinite-dimensional symmetry Lie algebra models are constructed and analyzed. Based on this Lie algebra, some classes of similarity invariant solutions with nonconstant linear and nonlinear shears are obtained. It is also shown how one of the conservation laws generates a particular wave solution of this equation.

  8. Comparison of Evolutionary (Genetic) Algorithm and Adjoint Methods for Multi-Objective Viscous Airfoil Optimizations

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) Method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function evaluation code, the steady-state explicit part of the code, ARC2D. The parameterization of the design space is a common B-spline approach for an airfoil surface, which, together with a common gridding approach, restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy, and design consistency.

  9. Contact solution algorithms

    NASA Technical Reports Server (NTRS)

    Tielking, John T.

    1989-01-01

    Two algorithms for obtaining static contact solutions are described in this presentation. Although they were derived for contact problems involving specific structures (a tire and a solid rubber cylinder), they are sufficiently general to be applied to other shell-of-revolution and solid-body contact problems. The shell-of-revolution contact algorithm is a method of obtaining a point load influence coefficient matrix for the portion of shell surface that is expected to carry a contact load. If the shell is sufficiently linear with respect to contact loading, a single influence coefficient matrix can be used to obtain a good approximation of the contact pressure distribution. Otherwise, the matrix will be updated to reflect nonlinear load-deflection behavior. The solid-body contact algorithm utilizes a Lagrange multiplier to include the contact constraint in a potential energy functional. The solution is found by applying the principle of minimum potential energy. The Lagrange multiplier is identified as the contact load resultant for a specific deflection. At present, only frictionless contact solutions have been obtained with these algorithms. A sliding tread element has been developed to calculate friction shear force in the contact region of the rolling shell-of-revolution tire model.

  10. Spectral Solutions of Self-adjoint Elliptic Problems with Immersed Interfaces

    SciTech Connect

    Auchmuty, G.; Kloucek, P.

    2011-12-15

    This paper describes a spectral representation of solutions of self-adjoint elliptic problems with immersed interfaces. The interface is assumed to be a simple non-self-intersecting closed curve that obeys some weak regularity conditions. The problem is decomposed into two problems, one with zero interface data and the other with zero exterior boundary data. The problem with zero interface data is solved by standard spectral methods. The problem with non-zero interface data is solved by introducing an interface space H_Γ(Ω) and constructing an orthonormal basis of this space. This basis is constructed using a special class of orthogonal eigenfunctions analogously to the methods used for standard trace spaces by Auchmuty (SIAM J. Math. Anal. 38, 894-915, 2006). Analytical and numerical approximations of these eigenfunctions are described and some simulations are presented.

  11. Adjoint-weighted variational formulation for a direct computational solution of an inverse heat conduction problem

    NASA Astrophysics Data System (ADS)

    Barbone, Paul E.; Oberai, Assad A.; Harari, Isaac

    2007-12-01

    We consider the direct (i.e. non-iterative) solution of the inverse problem of heat conduction for which at least two interior temperature fields are available. The strong form of the problem for the single, unknown, thermal conductivity field is governed by two partial differential equations of pure advective transport. The given temperature fields must satisfy a compatibility condition for the problem to have a solution. We introduce a novel variational formulation, the adjoint-weighted equation (AWE), for solving the two-field problem. In this case, the gradients of two given temperature fields must be linearly independent in the entire domain, a weaker condition than the compatibility required by the strong form. We show that the solution of the AWE formulation is equivalent to that of the strong form when both are well posed. We prove that the Galerkin discretization of the AWE formulation leads to a stable, convergent numerical method that has optimal rates of convergence. We show computational examples that confirm these optimal rates. The AWE formulation shows good numerical performance on problems with both smooth and rough coefficients and solutions.

  12. Self-adjoint extensions of the Dirac Hamiltonian in the magnetic-solenoid field and related exact solutions

    SciTech Connect

    Gavrilov, S.P.; Gitman, D.M.; Smirnov, A.A.

    2003-02-01

    We study solutions of the Dirac equation in the field of an Aharonov-Bohm solenoid and a collinear uniform magnetic field. On this basis we construct self-adjoint extensions of the Dirac Hamiltonian using von Neumann's theory of deficiency indices. We reduce the (3+1)-dimensional problem to a (2+1)-dimensional one by a proper choice of the spin operator. We then study the problem using a finite-radius regularization of the solenoid field. We exploit solutions of the latter problem to specify boundary conditions in the singular case.

  13. Nonlinear self-adjointness, conservation laws, and the construction of solutions of partial differential equations using conservation laws

    NASA Astrophysics Data System (ADS)

    Ibragimov, N. Kh; Avdonina, E. D.

    2013-10-01

    The method of nonlinear self-adjointness, which was recently developed by the first author, gives a generalization of Noether's theorem. This new method significantly extends approaches to constructing conservation laws associated with symmetries, since it does not require the existence of a Lagrangian. In particular, it can be applied to any linear equations and any nonlinear equations that possess at least one local conservation law. The present paper provides a brief survey of results on conservation laws which have been obtained by this method and published mostly in recent preprints of the authors, along with a method for constructing exact solutions of systems of partial differential equations with the use of conservation laws. In most cases the solutions obtained by the method of conservation laws cannot be found as invariant or partially invariant solutions. Bibliography: 23 titles.

  14. A finite-volume Eulerian-Lagrangian localized adjoint method for solution of the advection-dispersion equation

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1993-01-01

    Test results demonstrate that the finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) outperforms standard finite-difference methods for solute transport problems that are dominated by advection. FVELLAM systematically conserves mass globally with all types of boundary conditions. Integrated finite differences, instead of finite elements, are used to approximate the governing equation. This approach, in conjunction with a forward tracking scheme, greatly facilitates mass conservation. The mass storage integral is numerically evaluated at the current time level, and quadrature points are then tracked forward in time to the next level. Forward tracking permits straightforward treatment of inflow boundaries, thus avoiding the inherent problem in backtracking of characteristic lines intersecting inflow boundaries. FVELLAM extends previous results by obtaining mass conservation locally on Lagrangian space-time elements. -from Authors

  15. Solution of the advection-dispersion equation by a finite-volume eulerian-lagrangian local adjoint method

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1992-01-01

    A finite-volume Eulerian-Lagrangian local adjoint method for solution of the advection-dispersion equation is developed and discussed. The method is mass conservative and can solve advection-dominated ground-water solute-transport problems accurately and efficiently. An integrated finite-difference approach is used in the method. A key component of the method is that the integral representing the mass-storage term is evaluated numerically at the current time level. Integration points, and the mass associated with these points, are then forward tracked up to the next time level. The number of integration points required to reach a specified level of accuracy is problem dependent and increases as the sharpness of the simulated solute front increases. Integration points are generally equally spaced within each grid cell. For problems involving variable coefficients it has been found to be advantageous to include additional integration points at strategic locations in each cell. These locations are determined by backtracking. Forward tracking of boundary fluxes by the method alleviates problems that are encountered in the backtracking approaches of most characteristic methods. A test problem is used to illustrate that the new method offers substantial advantages over other numerical methods for a wide range of problems.
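
    A minimal sketch of the forward-tracking mass-storage step described above is given below for pure 1D advection on a periodic domain: mass in each cell is assigned to equally spaced integration points, the points are advected forward one time step, and the mass is re-accumulated on the grid. Dispersion, boundary fluxes, and the variable-coefficient refinements are omitted; this is an illustration of the idea, not the USGS code.

```python
import numpy as np

# Forward-tracking sketch of the mass-storage step for pure 1D advection on a
# periodic domain: cell mass is assigned to integration points, the points are
# moved with the velocity over one step, and the mass is re-accumulated on the
# grid.  Dispersion, sources, and boundary-flux integrals are omitted.

nx, L, v, dt = 100, 1.0, 0.4, 0.05            # cells, domain length, velocity, time step
dx = L / nx                                    # note: Courant number v*dt/dx = 2 is allowed
xc = (np.arange(nx) + 0.5) * dx
c = np.exp(-200.0 * (xc - 0.3) ** 2)           # initial concentration pulse

npts = 4                                       # integration points per cell
offsets = (np.arange(npts) + 0.5) / npts       # equally spaced within each cell

def advect_step(c):
    new_mass = np.zeros(nx)
    for i in range(nx):
        cell_mass = c[i] * dx
        for s in offsets:
            x_new = ((i + s) * dx + v * dt) % L     # forward track the point
            j = int(x_new / dx) % nx
            new_mass[j] += cell_mass / npts         # mass carried by this point
    return new_mass / dx                            # back to cell-average concentration

total0 = c.sum() * dx
for _ in range(40):
    c = advect_step(c)
print(abs(c.sum() * dx - total0) < 1e-12)            # global mass is conserved exactly
```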

  16. Mesh-free adjoint methods for nonlinear filters

    NASA Astrophysics Data System (ADS)

    Daum, Fred

    2005-09-01

    We apply a new industrial strength numerical approximation, called the "mesh-free adjoint method", to solve the nonlinear filtering problem. This algorithm exploits the smoothness of the problem, unlike particle filters, and hence we expect that mesh-free adjoints are superior to particle filters for many practical applications. The nonlinear filter problem is equivalent to solving the Fokker-Planck equation in real time. The key idea is to use a good adaptive non-uniform quantization of state space to approximate the solution of the Fokker-Planck equation. In particular, the adjoint method computes the location of the nodes in state space to minimize errors in the final answer. This use of an adjoint is analogous to optimal control algorithms, but it is more interesting. The adjoint method is also analogous to importance sampling in particle filters, but it is better for four reasons: (1) it exploits the smoothness of the problem; (2) it explicitly minimizes the errors in the relevant functional; (3) it explicitly models the dynamics in state space; and (4) it can be used to compute a corrected value for the desired functional using the residuals. We will attempt to make this paper accessible to normal engineers who do not have PDEs for breakfast.

  17. Adjoint Error Estimation for Linear Advection

    SciTech Connect

    Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S

    2011-03-30

    An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.
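
    The core identity behind adjoint error estimation can be stated in a few lines of linear algebra: for A u = b and a functional J(u) = g^T u, the error of an approximate solution u_h satisfies J(u) - J(u_h) = w^T (b - A u_h), where w solves the adjoint system A^T w = g. The sketch below checks this on a random linear system; it is a generic illustration, not the finite-volume machinery of the report.

```python
import numpy as np

# Adjoint-weighted residual error estimate for a functional J(u) = g^T u of the
# solution of A u = b.  With the adjoint solution A^T w = g, the functional error
# of any approximation u_h is exactly  J(u) - J(u_h) = w^T (b - A u_h).

rng = np.random.default_rng(1)
n = 50
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))
b = rng.normal(size=n)
g = rng.normal(size=n)

u = np.linalg.solve(A, b)                 # "exact" forward solution
u_h = u + 1e-3 * rng.normal(size=n)       # a perturbed / under-resolved solution

w = np.linalg.solve(A.T, g)               # adjoint solution
residual = b - A @ u_h
estimate = w @ residual                   # computable error estimate
true_err = g @ u - g @ u_h

print(np.isclose(estimate, true_err))     # True: exact for linear problems
```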

  18. Introduction to Adjoint Models

    NASA Technical Reports Server (NTRS)

    Errico, Ronald M.

    2015-01-01

    In this lecture, some fundamentals of adjoint models will be described. This includes a basic derivation of tangent linear and corresponding adjoint models from a parent nonlinear model, the interpretation of adjoint-derived sensitivity fields, a description of methods of automatic differentiation, and the use of adjoint models to solve various optimization problems, including singular vectors. Concluding remarks will attempt to correct common misconceptions about adjoint models and their utilization.
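
    The tangent-linear/adjoint relationship summarized above is commonly verified with the dot-product test <M dx, dy> = <dx, M^T dy>. The sketch below builds a toy nonlinear model step, its tangent linear model, and its adjoint; the three-variable model is an assumption made for the example, not any particular atmospheric model.

```python
import numpy as np

# Toy nonlinear "model" step, its tangent linear model (TLM), and the adjoint
# (the transpose of the TLM), checked with the standard dot-product test.

def model(x):                            # nonlinear forecast step (illustrative)
    return np.array([x[0] * x[1], np.sin(x[1]) + x[2], x[0] + x[2] ** 2])

def jacobian(x):                         # d(model)/dx at the linearization state x
    return np.array([[x[1],        x[0],       0.0],
                     [0.0, np.cos(x[1]),       1.0],
                     [1.0,         0.0, 2.0 * x[2]]])

def tlm(x, dx):                          # tangent linear model: J(x) dx
    return jacobian(x) @ dx

def adjoint(x, dy):                      # adjoint model: J(x)^T dy
    return jacobian(x).T @ dy

rng = np.random.default_rng(2)
x, dx, dy = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

# TLM consistency with the nonlinear model (finite-difference check)
eps = 1e-6
fd = (model(x + eps * dx) - model(x)) / eps
print(np.allclose(fd, tlm(x, dx), atol=1e-4))

# Dot-product (adjoint) test: <M dx, dy> == <dx, M^T dy>
print(np.isclose(tlm(x, dx) @ dy, dx @ adjoint(x, dy)))

# Adjoint-derived sensitivity of J = y[0] (first model output) to the initial state
dJ_dx = adjoint(x, np.array([1.0, 0.0, 0.0]))
print(dJ_dx)
```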

  19. Extraction of macroscopic and microscopic adjoint concepts using a lattice Boltzmann method and discrete adjoint approach.

    PubMed

    Hekmat, Mohamad Hamed; Mirzaei, Masoud

    2015-01-01

    In the present research, we tried to improve the performance of the lattice Boltzmann (LB)-based adjoint approach by utilizing the mesoscopic nature of the LB method. In this regard, two approaches, the macroscopic discrete adjoint (MADA) and the microscopic discrete adjoint (MIDA), are used to answer the following two challenging questions. Is it possible to extend the concept of the macroscopic and microscopic variables of the flow field to the corresponding adjoint ones? Further, similar to the conservation laws in the LB method, is it possible to find comparable conservation equations in the adjoint approach? If so, then a definite framework, similar to that used in the flow solution by the LB method, can be employed in the flow sensitivity analysis by the MIDA approach. This achievement can decrease the implementation cost and coding effort of the MIDA method in complicated sensitivity analysis problems. First, the MADA and MIDA equations are extracted based on the LB method using the duality viewpoint. Meanwhile, using an elementary case, inverse design of a two-dimensional unsteady Poiseuille flow in a periodic channel with constant body forces, the procedure for analytical evaluation of the adjoint variables is described. The numerical results show that correlations similar to those between the flow distribution functions also hold between the corresponding adjoint ones. The results are promising in showing that the flow-field adjoint variables can be evaluated via the adjoint distribution functions. Finally, the adjoint conservation laws are introduced. PMID:25679735

  20. Double-difference Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Yuan, Y. O.; Simons, F. J.; Tromp, J.

    2015-12-01

    We introduce the "double-difference" method, hugely popular in source inversion, in adjoint tomography. Differences between seismic observations and simulations may be explained in terms of many factors besides structural heterogeneity, e.g., errors in the source-time function, inaccurate timing, and systematic uncertainties. To alleviate nonuniqueness in the inverse problem, we make a differential measurement between stations, which largely cancels out the source signature and systematic errors. We seek to minimize the difference between differential measurements of observations and simulations at distinct stations. We show how to implement the double-difference concept in adjoint tomography, both theoretically and in practice. In contrast to conventional inversions aiming to maximize absolute agreement between observations and simulations, by differencing pairs of measurements at distinct locations, we obtain gradients of the new differential misfit function with respect to structural perturbations which are relatively insensitive to an incorrect source signature or timing errors. Furthermore, we analyze sensitivities of absolute and differential measurements. The former provide absolute information on structure along the ray paths between stations and sources, whereas the latter explain relative (and thus high-resolution) structural variations in areas close to the stations. In conventional tomography, one earthquake provides very limited structural resolution, as reflected in a misfit gradient consisting of "streaks" between the stations and the source. In double-difference tomography, one earthquake can actually resolve significant details of the structure, i.e., the double-differences provide a hugely powerful constraint on structural variations. Algorithmically, we incorporate the double-difference concept into the conventional adjoint tomography workflow by simply pairing up all regular measurements. Thus, the computational cost of the related adjoint
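
    The differential measurement idea above can be summarized in a few lines: instead of the absolute misfit sum_i (t_i^syn - t_i^obs)^2, one minimizes sum over station pairs of ((t_i^syn - t_j^syn) - (t_i^obs - t_j^obs))^2, which is insensitive to a common shift such as an origin-time or source error. The sketch below demonstrates this with synthetic traveltimes; the station count and error sizes are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

# Double-difference misfit: differencing pairs of measurements cancels errors
# common to all stations (e.g. a source origin-time error).

rng = np.random.default_rng(3)
n_sta = 6
t_true = rng.uniform(20.0, 60.0, size=n_sta)      # "true" traveltimes
t_obs = t_true + 1.5                               # observations with a common 1.5 s source/clock error
t_syn = t_true + 0.01 * rng.normal(size=n_sta)     # synthetics in a near-correct model

def absolute_misfit(syn, obs):
    return 0.5 * np.sum((syn - obs) ** 2)

def double_difference_misfit(syn, obs):
    pairs = combinations(range(len(syn)), 2)
    return 0.5 * sum(((syn[i] - syn[j]) - (obs[i] - obs[j])) ** 2 for i, j in pairs)

print(absolute_misfit(t_syn, t_obs))               # dominated by the spurious 1.5 s shift
print(double_difference_misfit(t_syn, t_obs))      # small: the common error cancels
```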

  1. Global Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Bozdag, Ebru; Lefebvre, Matthieu; Lei, Wenjie; Peter, Daniel; Smith, James; Komatitsch, Dimitri; Tromp, Jeroen

    2015-04-01

    We will present our initial results of global adjoint tomography based on 3D seismic wave simulations, one of the most challenging problems in seismology in terms of intense computational requirements and the vast amount of high-quality seismic data that can potentially be assimilated in inversions. Using a spectral-element method, we incorporate full 3D wave propagation in seismic tomography by computing synthetic seismograms and running adjoint simulations to obtain exact sensitivity kernels in realistic 3D background models. We run our global simulations on the Oak Ridge National Laboratory's Cray XK7 "Titan" system, taking advantage of the GPU version of the SPECFEM3D_GLOBE package. We have started iterations with an initial selection of 253 earthquakes within the magnitude range 5.5 < Mw < 7.0 and numerical simulations having resolution down to ~27 s to invert for a transversely isotropic crust and mantle model using a non-linear conjugate gradient algorithm. The measurements are currently based on frequency-dependent traveltime misfits. We use both minor- and major-arc body and surface waves by running 200 min simulations, where inversions are performed with more than 2.6 million measurements. Our initial results after 12 iterations already indicate several prominent features, such as enhanced slab (e.g., Hellenic, Japan, Bismarck, Sandwich) and plume/hotspot (e.g., the Pacific superplume, Caroline, Yellowstone, Hawaii) images. To improve the resolution and ray coverage, particularly in the lower mantle, our aim is to increase the resolution of the numerical simulations, first going down to ~17 s and then to ~9 s, to incorporate high-frequency body waves in inversions. While keeping track of the progress and illumination of features in our models with a limited data set, we work towards assimilating all available data from all seismic networks and earthquakes in the global CMT catalogue.

  2. On the adjoint operator in photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Arridge, Simon R.; Betcke, Marta M.; Cox, Ben T.; Lucka, Felix; Treeby, Brad E.

    2016-11-01

    Photoacoustic tomography (PAT) is an emerging biomedical imaging from coupled physics technique, in which the image contrast is due to optical absorption, but the information is carried to the surface of the tissue as ultrasound pulses. Many algorithms and formulae for PAT image reconstruction have been proposed for the case when a complete data set is available. In many practical imaging scenarios, however, it is not possible to obtain the full data, or the data may be sub-sampled for faster data acquisition. In such cases, image reconstruction algorithms that can incorporate prior knowledge to ameliorate the loss of data are required. Hence, recently there has been an increased interest in using variational image reconstruction. A crucial ingredient for the application of these techniques is the adjoint of the PAT forward operator, which is described in this article from physical, theoretical and numerical perspectives. First, a simple mathematical derivation of the adjoint of the PAT forward operator in the continuous framework is presented. Then, an efficient numerical implementation of the adjoint using a k-space time domain wave propagation model is described and illustrated in the context of variational PAT image reconstruction, on both 2D and 3D examples including inhomogeneous sound speed. The principal advantage of this analytical adjoint over an algebraic adjoint (obtained by taking the direct adjoint of the particular numerical forward scheme used) is that it can be implemented using currently available fast wave propagation solvers.

  3. Solution of the advection-dispersion equation in two dimensions by a finite-volume Eulerian-Lagrangian localized adjoint method

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1998-01-01

    We extend the finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) for solution of the advection-dispersion equation to two dimensions. The method can conserve mass globally and is not limited by restrictions on the size of the grid Peclet or Courant number. Therefore, it is well suited for solution of advection-dominated ground-water solute transport problems. In test problem comparisons with standard finite differences, FVELLAM is able to attain accurate solutions on much coarser space and time grids. On fine grids, the accuracy of the two methods is comparable. A critical aspect of FVELLAM (and all other ELLAMs) is evaluation of the mass storage integral from the preceding time level. In FVELLAM this may be accomplished with either a forward or backtracking approach. The forward tracking approach conserves mass globally and is the preferred approach. The backtracking approach is less computationally intensive, but not globally mass conservative. Boundary terms are systematically represented as integrals in space and time which are evaluated by a common integration scheme in conjunction with forward tracking through time. Unlike the one-dimensional case, local mass conservation cannot be guaranteed, so slight oscillations in concentration can develop, particularly in the vicinity of inflow or outflow boundaries. Published by Elsevier Science Ltd.

  4. Finite Element Solution of the Self-Adjoint Angular Flux Equation for Coupled Electron-Photon Transport

    SciTech Connect

    Liscum-Powell, Jennifer L.; Prinja, Anil B.; Morel, Jim E.; Lorence, Leonard J Jr.

    2002-11-15

    A novel approach is proposed for charged particle transport calculations using a recently developed second-order, self-adjoint angular flux (SAAF) form of the Boltzmann transport equation with continuous slowing-down. A finite element discretization that is linear continuous in space and linear discontinuous (LD) in energy is described and implemented in a one-dimensional, planar geometry, multigroup, discrete ordinates code for charged particle transport. The cross-section generating code CEPXS is used to generate the electron and photon transport cross sections employed in this code. The discrete ordinates SAAF transport equation is solved using source iteration in conjunction with an inner iteration acceleration scheme and an outer iteration acceleration scheme. Outer iterations are required with the LD energy discretization scheme because the two angular flux unknowns within each group are coupled, which gives rise to effective upscattering. The inner iteration convergence is accelerated using diffusion synthetic acceleration, and the outer iteration convergence is accelerated using a diamond difference approximation to the LD energy discretization. Computational results are given that demonstrate the effectiveness of our convergence acceleration schemes and the accuracy of our discretized SAAF equation.

  5. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2005-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.
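
    The cost argument above can be made concrete with a small linear stand-in for the mesh-movement operator: if the volume mesh follows the surface through K x_v = B x_s(D) and the objective J depends on x_v, then dJ/dD can be evaluated either with one mesh-movement solve per design variable or with a single transposed (adjoint) solve followed by cheap matrix-vector products. The operators below are random placeholders, not the elasticity-based mesh movement of the paper.

```python
import numpy as np

# Mesh-sensitivity bookkeeping with one adjoint solve:
#   dJ/dD = (dJ/dx_v)^T K^{-1} B (dx_s/dD)
# evaluated left-to-right with ONE adjoint mesh-movement solve, K^T lam = dJ/dx_v,
# instead of one mesh-movement solve per design variable.

rng = np.random.default_rng(4)
n_v, n_s, n_design = 200, 40, 25

K = np.eye(n_v) + 0.05 * rng.normal(size=(n_v, n_v))   # stand-in mesh-movement operator
B = np.zeros((n_v, n_s)); B[:n_s, :] = np.eye(n_s)     # surface nodes drive the right-hand side
dxs_dD = rng.normal(size=(n_s, n_design))               # surface-mesh sensitivities (e.g. from a CAD parameterization)
dJ_dxv = rng.normal(size=n_v)                           # objective sensitivity to the volume mesh

# Direct way: n_design mesh-movement solves
direct = dJ_dxv @ np.linalg.solve(K, B @ dxs_dD)

# Adjoint way: one transposed solve, then a cheap matrix-vector product
lam = np.linalg.solve(K.T, dJ_dxv)
adjoint = (B.T @ lam) @ dxs_dD

print(np.allclose(direct, adjoint))   # identical sensitivities, far fewer solves
```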

  6. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2006-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.

  7. Generalized uncertainty principle and self-adjoint operators

    SciTech Connect

    Balasubramanian, Venkat; Das, Saurya; Vagenas, Elias C.

    2015-09-15

    In this work we explore the self-adjointness of the GUP-modified momentum and Hamiltonian operators over different domains. In particular, we utilize the theorem of von Neumann for symmetric operators in order to determine whether the momentum and Hamiltonian operators are self-adjoint or not, or whether they have self-adjoint extensions over the given domain. In addition, a simple example of the Hamiltonian operator describing a particle in a box is given. The solutions of the boundary conditions that describe the self-adjoint extensions of the specific Hamiltonian operator are obtained.

  8. Towards Global Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Bozdag, E.; Zhu, H.; Peter, D. B.; Tromp, J.

    2012-12-01

    Seismic tomography is at a stage where we can harness entire seismograms using the opportunities offered by advances in numerical wave propagation solvers and high-performance computing. Adjoint methods provide an efficient way of incorporating the full nonlinearity of wave propagation and 3D Fréchet kernels in iterative seismic inversions, which have so far given promising results at continental and regional scales. Our goal is to take adjoint tomography forward to image the entire planet. Using an iterative conjugate gradient scheme, our initial aim is to obtain a global crustal and mantle model with confined transverse isotropy in the upper mantle. We have started with around 255 global CMT events having moment magnitudes between 5.8 and 7, and used GSN stations as well as some local networks such as USArray, European stations, etc. Prior to the structure inversion, we reinvert global CMT solutions by computing Green functions in our 3D reference model to take into account the effects of crustal variations on source parameters. Using the advantages of numerical simulations, our strategy is to invert crustal and mantle structure together to avoid any bias introduced into upper-mantle images due to "crustal corrections", which are commonly used in classical tomography. 3D simulations dramatically increase the usable amount of data so that, with the current earthquake-station setup, we perform each iteration with more than two million measurements. Multi-resolution smoothing based on ray density is applied to the gradient to better deal with the imperfect source-station distribution on the globe and extract more information underneath regions with dense ray coverage and vice versa. Similar to the frequency-domain approach, we reduce nonlinearities by starting from long periods and gradually increasing the frequency content of the data after successive model updates. To simplify the problem, we primarily focus on the elastic structure and therefore our measurements are based on

  9. Towards efficient backward-in-time adjoint computations using data compression techniques

    SciTech Connect

    Cyr, E. C.; Shadid, J. N.; Wildey, T.

    2014-12-16

    In the context of a posteriori error estimation for nonlinear time-dependent partial differential equations, the state-of-the-practice is to use adjoint approaches which require the solution of a backward-in-time problem defined by a linearization of the forward problem. One of the major obstacles in the practical application of these approaches, we found, is the need to store, or recompute, the forward solution to define the adjoint problem and to evaluate the error representation. Our study considers the use of data compression techniques to approximate forward solutions employed in the backward-in-time integration. The development derives an error representation that accounts for the difference between the standard approach and the compressed approximation of the forward solution. This representation is algorithmically similar to the standard representation and only requires the computation of the quantity of interest for the forward solution and the data-compressed reconstructed solution (i.e. scalar quantities that can be evaluated as the forward problem is integrated). This approach is then compared with existing techniques, such as checkpointing and time-averaged adjoints. Lastly, we provide numerical results indicating the potential efficiency of our approach on a transient diffusion–reaction equation and on the Navier–Stokes equations. These results demonstrate memory compression ratios up to 450× while maintaining reasonable accuracy in the error estimates.
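
    The storage bottleneck and the compression workaround can be sketched as follows: store a lossy compression of the forward trajectory (here a truncated SVD of the snapshot matrix) and reconstruct states on the fly during the backward adjoint sweep. The reaction-diffusion model, rank, and SVD compressor below are assumptions made for illustration; the paper's error representation accounting for this approximation is not reproduced here.

```python
import numpy as np

# The backward-in-time adjoint of a nonlinear model needs the forward states
# u_0..u_N in reverse order.  Instead of storing the whole trajectory, keep a
# lossy compression (a truncated SVD of the snapshot matrix) and reconstruct
# states during the backward sweep.

n, nsteps, rank, dt = 400, 200, 15, 1e-3
x = np.linspace(0.0, 1.0, n)
lap = (np.eye(n, k=1) - 2 * np.eye(n) + np.eye(n, k=-1)) / (x[1] - x[0]) ** 2

def step(u):                              # forward model: diffusion + cubic reaction
    return u + dt * (1e-3 * lap @ u - u ** 3)

def step_adjoint(u, lam):                 # transpose of the linearized step at state u
    return lam + dt * (1e-3 * lap.T @ lam - 3.0 * u ** 2 * lam)

u = np.exp(-0.5 * ((x - 0.4) / 0.05) ** 2)
snaps = [u.copy()]
for _ in range(nsteps):
    u = step(u)
    snaps.append(u.copy())
snaps = np.column_stack(snaps)

# Compress the forward trajectory
U, s, Vt = np.linalg.svd(snaps, full_matrices=False)
Ur, sr, Vtr = U[:, :rank], s[:rank], Vt[:rank, :]
ratio = snaps.size / (Ur.size + sr.size + Vtr.size)

def reconstruct(k):
    """Approximate forward state at step k from the compressed representation."""
    return Ur @ (sr * Vtr[:, k])

# Backward adjoint sweep for J = 0.5*||u_N||^2, with exact vs reconstructed states
lam_exact, lam_comp = snaps[:, -1].copy(), snaps[:, -1].copy()
for k in range(nsteps - 1, -1, -1):
    lam_exact = step_adjoint(snaps[:, k], lam_exact)
    lam_comp = step_adjoint(reconstruct(k), lam_comp)

drift = np.linalg.norm(lam_comp - lam_exact) / np.linalg.norm(lam_exact)
print(f"compression {ratio:.0f}x, relative adjoint drift {drift:.1e}")
```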

  10. Towards efficient backward-in-time adjoint computations using data compression techniques

    DOE PAGES

    Cyr, E. C.; Shadid, J. N.; Wildey, T.

    2014-12-16

    In the context of a posteriori error estimation for nonlinear time-dependent partial differential equations, the state-of-the-practice is to use adjoint approaches which require the solution of a backward-in-time problem defined by a linearization of the forward problem. One of the major obstacles in the practical application of these approaches, we found, is the need to store, or recompute, the forward solution to define the adjoint problem and to evaluate the error representation. Our study considers the use of data compression techniques to approximate forward solutions employed in the backward-in-time integration. The development derives an error representation that accounts for the difference between the standard approach and the compressed approximation of the forward solution. This representation is algorithmically similar to the standard representation and only requires the computation of the quantity of interest for the forward solution and the data-compressed reconstructed solution (i.e. scalar quantities that can be evaluated as the forward problem is integrated). This approach is then compared with existing techniques, such as checkpointing and time-averaged adjoints. Lastly, we provide numerical results indicating the potential efficiency of our approach on a transient diffusion–reaction equation and on the Navier–Stokes equations. These results demonstrate memory compression ratios up to 450× while maintaining reasonable accuracy in the error estimates.

  11. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies which each suffer from their own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation

  12. Application of adjoint operators to neural learning

    NASA Technical Reports Server (NTRS)

    Barhen, J.; Toomarian, N.; Gulati, S.

    1990-01-01

    A technique for the efficient analytical computation of such parameters of the neural architecture as synaptic weights and neural gain is presented as a single solution of a set of adjoint equations. The learning model discussed concentrates on the adiabatic approximation only. A problem of interest is represented by a system of N coupled equations, and then adjoint operators are introduced. A neural network is formalized as an adaptive dynamical system whose temporal evolution is governed by a set of coupled nonlinear differential equations. An approach based on the minimization of a constrained neuromorphic energylike function is applied, and the complete learning dynamics are obtained as a result of the calculations.

  13. An exact and consistent adjoint method for high-fidelity discretization of the compressible flow equations

    NASA Astrophysics Data System (ADS)

    Subramanian, Ramanathan Vishnampet Ganapathi

    Methods and computing hardware advances have enabled accurate predictions of complex compressible turbulence phenomena, such as the generation of jet noise that motivates the present effort. However, limited understanding of underlying physical mechanisms restricts the utility of such predictions since they do not, by themselves, indicate a route to design improvement. Gradient-based optimization using adjoints can circumvent the flow complexity to guide designs. Such methods have enabled sensitivity analysis and active control of turbulence at engineering flow conditions by providing gradient information at computational cost comparable to that of simulating the flow. They accelerate convergence of numerical design optimization algorithms, though this is predicated on the availability of an accurate gradient of the discretized flow equations. This is challenging to obtain, since both the chaotic character of the turbulence and the typical use of discretizations near their resolution limits in order to efficiently represent its smaller scales will amplify any approximation errors made in the adjoint formulation. Formulating a practical exact adjoint that avoids such errors is especially challenging if it is to be compatible with state-of-the-art simulation methods used for the turbulent flow itself. Automatic differentiation (AD) can provide code to calculate a nominally exact adjoint, but existing general-purpose AD codes are inefficient to the point of being prohibitive for large-scale turbulence simulations. We analyze the compressible flow equations as discretized using the same high-order workhorse methods used for many high-fidelity compressible turbulence simulations, and formulate a practical space-time discrete-adjoint method without changing the basic discretization. A key step is the definition of a particular discrete analog of the continuous norm that defines our cost functional; our selection leads directly to an efficient Runge-Kutta-like scheme

  14. Adjoint affine fusion and tadpoles

    NASA Astrophysics Data System (ADS)

    Urichuk, Andrew; Walton, Mark A.

    2016-06-01

    We study affine fusion with the adjoint representation. For simple Lie algebras, elementary and universal formulas determine the decomposition of a tensor product of an integrable highest-weight representation with the adjoint representation. Using the (refined) affine depth rule, we prove that equally striking results apply to adjoint affine fusion. For diagonal fusion, a coefficient equals the number of nonzero Dynkin labels of the relevant affine highest weight, minus 1. A nice lattice-polytope interpretation follows and allows the straightforward calculation of the genus-1 1-point adjoint Verlinde dimension, the adjoint affine fusion tadpole. Explicit formulas, (piecewise) polynomial in the level, are written for the adjoint tadpoles of all classical Lie algebras. We show that off-diagonal adjoint affine fusion is obtained from the corresponding tensor product by simply dropping non-dominant representations.

  15. Optimization of the Direct Discrete Method Using the Solution of the Adjoint Equation and its Application in the Multi-Group Neutron Diffusion Equation

    SciTech Connect

    Ayyoubzadeh, Seyed Mohsen; Vosoughi, Naser

    2011-09-14

    Obtaining the set of algebraic equations that directly correspond to a physical phenomenon has become viable with the recent direct discrete method (DDM). Although this method may find its roots in physical and geometrical considerations, there are still some degrees of freedom that one may suspect are optimizable. Here we have used the information embedded in the corresponding adjoint equation to form a local functional, which, in turn, by its minimization yields suitable dual mesh positioning.

  16. MCNP: Multigroup/adjoint capabilities

    SciTech Connect

    Wagner, J.C.; Redmond, E.L. II; Palmtag, S.P.; Hendricks, J.S.

    1994-04-01

    This report discusses various aspects related to the use and validity of the general purpose Monte Carlo code MCNP for multigroup/adjoint calculations. The increased desire to perform comparisons between Monte Carlo and deterministic codes, along with the ever-present desire to increase the efficiency of large MCNP calculations has produced a greater user demand for the multigroup/adjoint capabilities. To more fully utilize these capabilities, we review the applications of the Monte Carlo multigroup/adjoint method, describe how to generate multigroup cross sections for MCNP with the auxiliary CRSRD code, describe how to use the multigroup/adjoint capability in MCNP, and provide examples and results indicating the effectiveness and validity of the MCNP multigroup/adjoint treatment. This information should assist users in taking advantage of the MCNP multigroup/adjoint capabilities.

  17. A new approach for developing adjoint models

    NASA Astrophysics Data System (ADS)

    Farrell, P. E.; Funke, S. W.

    2011-12-01

    Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, it requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternate approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and
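
    The higher-level abstraction proposed here (a model as a sequence of linear solves) can be caricatured in a few lines: record each solve A_k x_k = B_k x_{k-1} on a tape, then obtain the adjoint by running the tape backwards with transposed operators. This is a conceptual sketch only, not the libadjoint API; the operators and the linear functional are illustrative assumptions.

```python
import numpy as np

# Conceptual sketch: a "model" is recorded as a sequence of linear solves
#   A_k x_k = B_k x_{k-1},
# and the adjoint of the whole sequence is assembled by running the tape
# backwards with transposed operators:  A_k^T lam_k = rhs_k,  rhs_{k-1} = B_k^T lam_k.

rng = np.random.default_rng(6)
n, nsteps = 30, 5
tape = []                                      # records (A_k, B_k) of each annotated solve

x = rng.normal(size=n)                         # initial condition x_0
for _ in range(nsteps):
    A = np.eye(n) + 0.05 * rng.normal(size=(n, n))
    B = np.eye(n) + 0.05 * rng.normal(size=(n, n))
    x = np.linalg.solve(A, B @ x)              # annotated forward solve
    tape.append((A, B))

g = rng.normal(size=n)                         # J = g . x_N  =>  dJ/dx_N = g
rhs = g
for A, B in reversed(tape):                    # reverse sweep over the tape
    lam = np.linalg.solve(A.T, rhs)            # adjoint solve for this step
    rhs = B.T @ lam                            # becomes dJ/dx_{k-1}
dJ_dx0_adjoint = rhs

# Check against the explicitly composed chain of operators
P = np.eye(n)
for A, B in tape:
    P = np.linalg.solve(A, B) @ P
print(np.allclose(dJ_dx0_adjoint, P.T @ g))    # True
```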

  18. Forward and adjoint sensitivity computation of chaotic dynamical systems

    SciTech Connect

    Wang, Qiqi

    2013-02-15

    This paper describes a forward algorithm and an adjoint algorithm for computing sensitivity derivatives in chaotic dynamical systems, such as the Lorenz attractor. The algorithms compute the derivative of long time averaged “statistical” quantities to infinitesimal perturbations of the system parameters. The algorithms are demonstrated on the Lorenz attractor. We show that sensitivity derivatives of statistical quantities can be accurately estimated using a single, short trajectory (over a time interval of 20) on the Lorenz attractor.
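
    For context, the conventional discrete-adjoint sensitivity computation that such methods build on can be sketched for the Lorenz system over a short trajectory, where it still agrees with finite differences (over long windows on chaotic systems it diverges, which is the difficulty these algorithms address). The forward-Euler discretization, window length, and objective below are assumptions made for illustration, not the paper's algorithm.

```python
import numpy as np

# Discrete adjoint sensitivity of a time-averaged quantity for the Lorenz system
# over a SHORT trajectory.  Objective: J = mean of z(t); parameter: rho.

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, nsteps = 0.001, 2000                       # T = 2 (short window)

def f(u, rho):
    x, y, z = u
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def dfdu(u, rho):
    x, y, z = u
    return np.array([[-sigma, sigma, 0.0],
                     [rho - z, -1.0, -x],
                     [y, x, -beta]])

def dfdrho(u):
    return np.array([0.0, u[0], 0.0])

def run(rho):
    u = np.array([1.0, 1.0, 1.0])
    traj = [u.copy()]
    for _ in range(nsteps):
        u = u + dt * f(u, rho)                 # forward Euler
        traj.append(u.copy())
    return traj

traj = run(rho)
J = np.mean([u[2] for u in traj[1:]])

# Backward adjoint sweep for u_{k+1} = u_k + dt f(u_k), J = (1/N) sum_k z_{k+1}
lam = np.zeros(3)
dJdrho = 0.0
for k in range(nsteps - 1, -1, -1):
    lam = lam + np.array([0.0, 0.0, 1.0 / nsteps])      # dJ/du_{k+1} contribution
    dJdrho += dt * lam @ dfdrho(traj[k])                 # rho enters through step k -> k+1
    lam = lam + dt * dfdu(traj[k], rho).T @ lam          # lam_k = (I + dt df/du)^T lam_{k+1}

# Finite-difference check (close agreement expected for this short window)
eps = 1e-4
Jp = np.mean([u[2] for u in run(rho + eps)[1:]])
print(dJdrho, (Jp - J) / eps)
```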

  19. A solution quality assessment method for swarm intelligence optimization algorithms.

    PubMed

    Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool that is widely used in many fields of application. In contrast to its many successful applications, the theoretical foundation is rather weak. Therefore, there are still many problems to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the solution quality obtained by an algorithm for practical problems. This greatly limits the application to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on the analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," "ordinal performance" is used as the evaluation criterion in this method. The feasible solutions are clustered according to distance to divide the solution samples into several parts. Then, the solution space and the "good enough" set can be decomposed based on the clustering results. Last, using relevant statistical knowledge, the evaluation result can be obtained. To validate the proposed method, some intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were used to solve the traveling salesman problem. Computational results indicate the feasibility of the proposed method.
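
    A stripped-down version of the ordinal-performance idea is sketched below: sample the solution space, run a simple heuristic, and report where the returned objective value ranks within the sample rather than the value itself. The nearest-neighbour heuristic, sample size, and the definition of the "good enough" set as the best 1% of samples are illustrative assumptions; the paper's clustering-based decomposition is not reproduced.

```python
import numpy as np

# Ordinal performance check on a small TSP instance: rank the heuristic's tour
# length within a uniform sample of random tours.

rng = np.random.default_rng(7)

def tour_length(perm, D):
    return D[perm, np.roll(perm, -1)].sum()

n_cities = 30
pts = rng.uniform(size=(n_cities, 2))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

# Uniform sample of the solution space (random tours)
sample = np.array([tour_length(rng.permutation(n_cities), D) for _ in range(20000)])

# A deliberately simple heuristic solution: nearest-neighbour tour
tour, visited = [0], {0}
while len(tour) < n_cities:
    last = tour[-1]
    nxt = min((j for j in range(n_cities) if j not in visited), key=lambda j: D[last, j])
    tour.append(nxt); visited.add(nxt)
result = tour_length(np.array(tour), D)

alpha = 0.01                                    # "good enough" = best 1% of sampled solutions
ordinal_rank = (sample < result).mean()         # fraction of samples better than the result
print(f"ordinal rank {ordinal_rank:.4f}; good enough: {ordinal_rank < alpha}")
```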

  20. Algorithm For Solution Of Subset-Regression Problems

    NASA Technical Reports Server (NTRS)

    Verhaegen, Michel

    1991-01-01

    Reliable and flexible algorithm for solution of subset-regression problem performs QR decomposition with new column-pivoting strategy, enables selection of subset directly from originally defined regression parameters. This feature, in combination with number of extensions, makes algorithm very flexible for use in analysis of subset-regression problems in which parameters have physical meanings. Also extended to enable joint processing of columns contaminated by noise with those free of noise, without using scaling techniques.

  1. Estimation of historical groundwater contaminant distribution using the adjoint state method applied to geostatistical inverse modeling

    NASA Astrophysics Data System (ADS)

    Michalak, Anna M.; Kitanidis, Peter K.

    2004-08-01

    As the incidence of groundwater contamination continues to grow, a number of inverse modeling methods have been developed to address forensic groundwater problems. In this work the geostatistical approach to inverse modeling is extended to allow for the recovery of the antecedent distribution of a contaminant at a given point back in time, which is critical to the assessment of historical exposure to contamination. Such problems are typically strongly underdetermined, with a large number of points at which the distribution is to be estimated. To address this challenge, the computational efficiency of the new method is increased through the application of the adjoint state method. In addition, the adjoint problem is presented in a format that allows for the reuse of existing groundwater flow and transport codes as modules in the inverse modeling algorithm. As demonstrated in the presented applications, the geostatistical approach combined with the adjoint state method allows for a historical multidimensional contaminant distribution to be recovered even in heterogeneous media, where a numerical solution is required for the forward problem.

  2. Nonlinear self-adjointness and conservation laws

    NASA Astrophysics Data System (ADS)

    Ibragimov, N. H.

    2011-10-01

    The general concept of nonlinear self-adjointness of differential equations is introduced. It includes the linear self-adjointness as a particular case. Moreover, it embraces the strict self-adjointness (definition 1) and quasi-self-adjointness introduced earlier by the author. It is shown that the equations possessing nonlinear self-adjointness can be written equivalently in a strictly self-adjoint form by using appropriate multipliers. All linear equations possess the property of nonlinear self-adjointness, and hence can be rewritten in a nonlinear strictly self-adjoint form. For example, the heat equation u_t - Δu = 0 becomes strictly self-adjoint after multiplying by u^{-1}. Conservation laws associated with symmetries are given in an explicit form for all nonlinearly self-adjoint partial differential equations and systems.
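
    A compact restatement of the notion above in LaTeX notation, as a sketch following the abstract (the formal-Lagrangian framework of Ibragimov is assumed):

      % For an equation F = 0, introduce the formal Lagrangian and adjoint equation
      %   \mathcal{L} = v\,F, \qquad F^{*} \equiv \frac{\delta(v\,F)}{\delta u} = 0 .
      % Nonlinear self-adjointness: there exists a substitution v = \varphi(u),
      % not identically zero, such that
      %   F^{*}\big|_{v=\varphi(u)} = \lambda\, F
      % for some multiplier \lambda.  Example from the abstract: for
      % F = u_t - \Delta u the adjoint is F^{*} = -v_t - \Delta v; the rescaled
      % equation u^{-1}(u_t - \Delta u) = 0 satisfies the strict condition with
      % the substitution v = u (up to the factor \lambda = -1).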

  3. A new mathematical adjoint for the modified SAAF-SN equations

    SciTech Connect

    Schunert, Sebastian; Wang, Yaqi; Martineau, Richard; DeHart, Mark D.

    2015-01-01

    We present a new adjoint FEM weak form, which can be directly used for evaluating the mathematical adjoint, suitable for perturbation calculations, of the self-adjoint angular flux SN equations (SAAF-SN) without construction and transposition of the underlying coefficient matrix. Stabilization schemes incorporated in the described SAAF-SN method make the mathematical adjoint distinct from the physical adjoint, i.e., the solution of the continuous adjoint SAAF-SN equation. This weak form is implemented in RattleSnake, the MOOSE (Multiphysics Object-Oriented Simulation Environment) based transport solver. Numerical results verify the correctness of the implementation and show its utility for both fixed-source and eigenvalue problems.

  4. Reentry-Vehicle Shape Optimization Using a Cartesian Adjoint Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2006-01-01

    Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and optimization algorithms. A well-known use of the adjoint method is gradient-based shape optimization. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (e.g., geometric parameters that control the shape). Classic aerodynamic applications of gradient-based optimization include the design of cruise configurations for transonic and supersonic flow, as well as the design of high-lift systems. Cartesian-mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric computer-aided design (CAD). In previous work on Cartesian adjoint solvers, Melvin et al. developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the two-dimensional Euler equations using a ghost-cell method to enforce the wall boundary conditions. In Refs. 18 and 19, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm were the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The accuracy of the gradient computation was verified using several three-dimensional test cases, which included design
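
    The cost claim above can be illustrated with a small discrete-adjoint sketch in Python; the dense matrices stand in for linearizations of a real flow solver and are not the Cartesian cut-cell implementation described here.

      # Discrete adjoint sketch: for residual R(q, x) = 0 and objective J(q, x),
      #   dJ/dx = dJ/dx|_explicit - psi^T (dR/dx),  with (dR/dq)^T psi = (dJ/dq)^T.
      # One extra linear solve gives the gradient w.r.t. every design variable x.
      import numpy as np

      def adjoint_gradient(dRdq, dRdx, dJdq, dJdx):
          # dRdq: (n, n), dRdx: (n, m), dJdq: (n,), dJdx: (m,)
          psi = np.linalg.solve(dRdq.T, dJdq)   # single adjoint solve
          return dJdx - dRdx.T @ psi            # gradient over all m design variables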

  5. Multigrid methods for bifurcation problems: The self adjoint case

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1987-01-01

    This paper deals with multigrid methods for computational problems that arise in the theory of bifurcation and is restricted to the self adjoint case. The basic problem is to solve for arcs of solutions, a task that is done successfully with an arc length continuation method. Other important issues are, for example, detecting and locating singular points as part of the continuation process, switching branches at bifurcation points, etc. Multigrid methods have been applied to continuation problems. These methods work well at regular points and at limit points, while they may encounter difficulties in the vicinity of bifurcation points. A new continuation method that is very efficient also near bifurcation points is presented here. The other issues mentioned above are also treated very efficiently with appropriate multigrid algorithms. For example, it is shown that limit points and bifurcation points can be solved for directly by a multigrid algorithm. Moreover, the algorithms presented here solve the corresponding problems in just a few work units (about 10 or less), where a work unit is the work involved in one local relaxation on the finest grid.

  6. Nonradiating sources with connections to the adjoint problem

    SciTech Connect

    Marengo, Edwin A.; Devaney, Anthony J.

    2004-09-01

    A general description of localized nonradiating (NR) sources whose generated fields are confined (nonzero only) within the source's support is developed that is applicable to any linear partial differential equation (PDE) including the usual PDEs of wave theory (e.g., the Helmholtz equation and the vector wave equation) as well as other PDEs arising in other disciplines. This description, which holds for both formally self-adjoint and non-self-adjoint linear partial differential operators (PDOs), is derived in the context of both the governing PDE and the corresponding adjoint PDE of the associated adjoint problem. It is shown that a necessary and sufficient condition for a source to be NR is that it obeys an orthogonality relation with respect to any solution in the source's support of the corresponding homogeneous adjoint PDE. For real linear PDOs, this description takes on a more relaxed form where, in addition to the previous necessary and sufficient condition, the obeying of a complementary orthogonality relation with respect to any solution in the source's support of the homogeneous form of the same governing PDE is also both necessary and sufficient for the source to be NR.

  7. Learning a trajectory using adjoint functions and teacher forcing

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad B.; Barhen, Jacob

    1992-01-01

    A new methodology for faster supervised temporal learning in nonlinear neural networks is presented which builds upon the concept of adjoint operators to allow fast computation of the gradients of an error functional with respect to all parameters of the neural architecture, and exploits the concept of teacher forcing to incorporate information on the desired output into the activation dynamics. The importance of the initial or final time conditions for the adjoint equations is discussed. A new algorithm is presented in which the adjoint equations are solved simultaneously (i.e., forward in time) with the activation dynamics of the neural network. We also indicate how teacher forcing can be modulated in time as learning proceeds. The results obtained show that the learning time is reduced by one to two orders of magnitude with respect to previously published results, while trajectory tracking is significantly improved. The proposed methodology makes hardware implementation of temporal learning attractive for real-time applications.

  8. Double-Difference Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Yuan, Yanhua O.; Simons, Frederik J.; Tromp, Jeroen

    2016-04-01

    We introduce a double-difference method for the inversion of seismic wavespeed structure by adjoint tomography. Differences between seismic observations and model-based predictions at individual stations may arise from factors other than structural heterogeneity, such as errors in the assumed source-time function, inaccurate timings, and systematic uncertainties. To alleviate the corresponding nonuniqueness in the inverse problem, we construct differential measurements between stations, thereby largely canceling out the source signature and systematic errors. We minimize the discrepancy between observations and simulations in terms of differential measurements made on station pairs. We show how to implement the double-difference concept in adjoint tomography, both theoretically and in practice. We compare the sensitivities of absolute and differential measurements. The former provide absolute information on structure along the ray paths between stations and sources, whereas the latter explain relative (and thus higher-resolution) structural variations in areas close to the stations. Whereas in conventional tomography, a measurement made on a single earthquake-station pair provides very limited structural information, in double-difference tomography, one earthquake can actually resolve significant details of the structure. The double-difference methodology can be incorporated into the usual adjoint tomography workflow by simply pairing up all conventional measurements; the computational cost of the necessary adjoint simulations is largely unaffected. Rather than adding to the computational burden, the inversion of double-difference measurements merely modifies the construction of the adjoint sources for data assimilation.
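
    A rough Python sketch of how the differential measurements might be assembled from observed and synthetic waveforms; windowing, weighting, and the construction of the corresponding adjoint sources are omitted, and the lag sign convention simply follows numpy.correlate.

      # Double-difference residual for each station pair (i, j):
      #   (t_i - t_j)_obs - (t_i - t_j)_syn,
      # which cancels a common source signature or origin-time error.
      import numpy as np
      from itertools import combinations

      def cc_delay(a, b, dt):
          # lag of trace a relative to trace b at the cross-correlation maximum
          c = np.correlate(a, b, mode="full")
          lags = (np.arange(c.size) - (len(b) - 1)) * dt
          return lags[np.argmax(c)]

      def double_difference_residuals(obs, syn, dt):
          # obs, syn: dicts mapping station name -> 1-D waveform (same sampling dt)
          res = {}
          for i, j in combinations(sorted(obs), 2):
              dd_obs = cc_delay(obs[i], obs[j], dt)
              dd_syn = cc_delay(syn[i], syn[j], dt)
              res[(i, j)] = dd_obs - dd_syn   # double-difference residual
          return res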

  9. On the Multilevel Solution Algorithm for Markov Chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham

    1997-01-01

    We discuss the recently introduced multilevel algorithm for the steady-state solution of Markov chains. The method is based on an aggregation principle which is well established in the literature and features a multiplicative coarse-level correction. Recursive application of the aggregation principle, which uses an operator-dependent coarsening, yields a multi-level method which has been shown experimentally to give results significantly faster than the typical methods currently in use. When cast as a multigrid-like method, the algorithm is seen to be a Galerkin-Full Approximation Scheme with a solution-dependent prolongation operator. Special properties of this prolongation lead to the cancellation of the computationally intensive terms of the coarse-level equations.
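
    A two-level Python sketch of the aggregation/disaggregation step that the multilevel method applies recursively; the smoothing sweeps, the operator-dependent coarsening, and the recursion itself are omitted, and the partition ("blocks") is assumed to be given.

      # One aggregation/disaggregation step for a row-stochastic matrix P,
      # an approximate stationary vector pi (positive entries), and a partition
      # given as a list of index arrays.
      import numpy as np

      def aggregation_disaggregation_step(P, pi, blocks):
          nb = len(blocks)
          block_mass = np.array([pi[b].sum() for b in blocks])
          # Coarse (aggregated) chain, weighted by the current iterate.
          A = np.zeros((nb, nb))
          for I, bI in enumerate(blocks):
              w = pi[bI] / block_mass[I]
              for J, bJ in enumerate(blocks):
                  A[I, J] = w @ P[np.ix_(bI, bJ)].sum(axis=1)
          # Stationary distribution of the coarse chain (left eigenvector for 1).
          vals, vecs = np.linalg.eig(A.T)
          z = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
          z = np.abs(z) / np.abs(z).sum()
          # Disaggregate: rescale each block to the coarse probability mass.
          new_pi = pi.copy()
          for I, bI in enumerate(blocks):
              new_pi[bI] = z[I] * pi[bI] / block_mass[I]
          return new_pi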

  10. On the multi-level solution algorithm for Markov chains

    SciTech Connect

    Horton, G.

    1996-12-31

    We discuss the recently introduced multi-level algorithm for the steady-state solution of Markov chains. The method is based on the aggregation principle, which is well established in the literature. Recursive application of the aggregation yields a multi-level method which has been shown experimentally to give results significantly faster than the methods currently in use. The algorithm can be reformulated as an algebraic multigrid scheme of Galerkin-full approximation type. The uniqueness of the scheme stems from its solution-dependent prolongation operator which permits significant computational savings in the evaluation of certain terms. This paper describes the modeling of computer systems to derive information on performance, measured typically as job throughput or component utilization, and availability, defined as the proportion of time a system is able to perform a certain function in the presence of component failures and possibly also repairs.

  11. A Posteriori Analysis for Hydrodynamic Simulations Using Adjoint Methodologies

    SciTech Connect

    Woodward, C S; Estep, D; Sandelin, J; Wang, H

    2009-02-26

    This report contains results of analysis done during an FY08 feasibility study investigating the use of adjoint methodologies for a posteriori error estimation for hydrodynamics simulations. We developed an approach to adjoint analysis for these systems through use of modified equations and viscosity solutions. Targeting first the 1D Burgers equation, we include a verification of the adjoint operator for the modified equation for the Lax-Friedrichs scheme, then derivations of an a posteriori error analysis for a finite difference scheme and a discontinuous Galerkin scheme applied to this problem. We include some numerical results showing the use of the error estimate. Lastly, we develop a computable a posteriori error estimate for the MAC scheme applied to stationary Navier-Stokes.

  12. An algorithm for the numerical solution of linear differential games

    SciTech Connect

    Polovinkin, E S; Ivanov, G E; Balashov, M V; Konstantinov, R V; Khorev, A V

    2001-10-31

    A numerical algorithm for the construction of stable Krasovskii bridges, Pontryagin alternating sets, and also of piecewise program strategies solving two-person linear differential (pursuit or evasion) games on a fixed time interval is developed on the basis of a general theory. The aim of the first player (the pursuer) is to hit a prescribed target (terminal) set by the phase vector of the control system at the prescribed time. The aim of the second player (the evader) is the opposite. A description of numerical algorithms used in the solution of differential games of the type under consideration is presented and estimates of the errors resulting from the approximation of the game sets by polyhedra are presented.

  13. Adjoint Sensitivity Computations for an Embedded-Boundary Cartesian Mesh Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis,Michael J.

    2006-01-01

    Cartesian-mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric Computer-Aided Design (CAD) tools. Our goal is to combine the automation capabilities of Cartesian methods with an efficient computation of design sensitivities. We address this issue using the adjoint method, where the computational cost of the design sensitivities, or objective function gradients, is essentially independent of the number of design variables. In previous work, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm included the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The objective of the present work is to extend our adjoint formulation to problems involving general shape changes. Central to this development is the computation of volume-mesh sensitivities to obtain a reliable approximation of the objective function gradient. Motivated by the success of mesh-perturbation schemes commonly used in body-fitted unstructured formulations, we propose an approach based on a local linearization of a mesh-perturbation scheme similar to the spring analogy. This approach circumvents most of the difficulties that arise due to non-smooth changes in the cut-cell layer as the boundary shape evolves and provides a consistent approximation to the exact gradient of the discretized objective function. A detailed gradient accuracy study is presented to verify our approach.

  14. Trajectory Optimization Using Adjoint Method and Chebyshev Polynomial Approximation for Minimizing Fuel Consumption During Climb

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hornby, Gregory; Ishihara, Abe

    2013-01-01

    This paper describes two methods of trajectory optimization to obtain an optimal minimum-fuel-to-climb trajectory for an aircraft. The first method is based on the adjoint method, and the second is a direct trajectory optimization method using a Chebyshev polynomial approximation and a cubic spline approximation. The approximate optimal trajectory is compared with the adjoint-based optimal trajectory, which is considered the true optimal solution of the trajectory optimization problem. The adjoint-based optimization problem leads to a singular optimal control solution which results in a bang-singular-bang optimal control.
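
    A small Python sketch of the Chebyshev-polynomial ingredient of the direct method, assuming values of a trajectory variable are given at Chebyshev-Gauss-Lobatto nodes; the actual optimal-control transcription (dynamics, fuel-burn objective, constraints) is not reproduced.

      # Represent a climb profile on Chebyshev points and differentiate the
      # interpolating polynomial exactly; values_at_nodes must follow the node
      # ordering returned in t_nodes.
      import numpy as np
      from numpy.polynomial import chebyshev as C

      def chebyshev_parameterization(t0, tf, values_at_nodes):
          n = len(values_at_nodes) - 1
          tau = np.cos(np.pi * np.arange(n + 1) / n)     # CGL nodes on [-1, 1]
          t_nodes = 0.5 * (tf - t0) * (tau + 1.0) + t0   # mapped to [t0, tf]
          coeffs = C.chebfit(tau, values_at_nodes, n)    # interpolating series
          dcoeffs = C.chebder(coeffs) * 2.0 / (tf - t0)  # chain rule d(tau)/dt
          h = lambda t: C.chebval(2.0 * (t - t0) / (tf - t0) - 1.0, coeffs)
          hdot = lambda t: C.chebval(2.0 * (t - t0) / (tf - t0) - 1.0, dcoeffs)
          return t_nodes, h, hdot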

  15. Variational nodal solution algorithms for multigroup criticality problems

    SciTech Connect

    Carrico, C.B.; Lewis, E.E.

    1991-01-01

    Variational nodal transport methods are generalized for the treatment of multigroup criticality problems. The generation of variational response matrices is streamlined and automated through the use of symbolic manipulation. A new red-black partitioned matrix algorithm for the solution of the within-group equations is formulated and shown to be at once both a regular matrix splitting and a synthetic acceleration method. The methods are implemented in X-Y geometry as a module of the Argonne National Laboratory code DIF3D. For few group problems highly accurate P3 eigenvalues are obtained with computing times comparable to those of an existing interface-current nodal transport method.

  16. Pseudo-updated constrained solution algorithm for nonlinear heat conduction

    NASA Technical Reports Server (NTRS)

    Tovichakchaikul, S.; Padovan, J.

    1983-01-01

    This paper develops efficiency and stability improvements in the incremental successive substitution (ISS) procedure commonly used to generate the solution to nonlinear heat conduction problems. This is achieved by employing the pseudo-update scheme of Broyden, Fletcher, Goldfarb and Shanno in conjunction with the constrained version of the ISS. The resulting algorithm retains the formulational simplicity associated with ISS schemes while incorporating the enhanced convergence properties of slope driven procedures as well as the stability of constrained approaches. To illustrate the enhanced operating characteristics of the new scheme, the results of several benchmark comparisons are presented.

  17. Weak self-adjoint differential equations

    NASA Astrophysics Data System (ADS)

    Gandarias, M. L.

    2011-07-01

    The concepts of self-adjoint and quasi self-adjoint equations were introduced by Ibragimov (2006 J. Math. Anal. Appl. 318 742-57; 2007 Arch. ALGA 4 55-60). In Ibragimov (2007 J. Math. Anal. Appl. 333 311-28), a general theorem on conservation laws was proved. In this paper, we generalize the concept of self-adjoint and quasi self-adjoint equations by introducing the definition of weak self-adjoint equations. We find a class of weak self-adjoint quasi-linear parabolic equations. The property of a differential equation to be weak self-adjoint is important for constructing conservation laws associated with symmetries of the differential equation.

  18. From analytical solutions of solute transport equations to multidimensional time-domain random walk (TDRW) algorithms

    NASA Astrophysics Data System (ADS)

    Bodin, Jacques

    2015-03-01

    In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.
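
    A one-dimensional Python sketch of the time-domain random walk step for an advective-dispersive cell, using the inverse-Gaussian first-passage-time law (mean L/v, shape L²/(2D)); this is a simplified stand-in for the paper's multi-dimensional algorithms and its mass-recovery treatment.

      # Sample the transit time of a particle across a homogeneous cell of length L
      # with pore velocity v and dispersion coefficient D.
      import numpy as np

      def sample_transit_times(L, v, D, size=1, rng=None):
          rng = np.random.default_rng() if rng is None else rng
          mean, shape = L / v, L**2 / (2.0 * D)
          return rng.wald(mean, shape, size=size)   # inverse-Gaussian samples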

  19. Adjoint method for estimating Jiles-Atherton hysteresis model parameters

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Hansen, Paul C.; Neustock, Lars T.; Padhy, Punnag; Hesselink, Lambertus

    2016-09-01

    A computationally efficient method for identifying the parameters of the Jiles-Atherton hysteresis model is presented. Adjoint analysis is used in conjunction with an accelerated gradient descent optimization algorithm. The proposed method is used to estimate the Jiles-Atherton model parameters of two different materials. The obtained results are found to be in good agreement with the reported values. By comparing with existing methods of model parameter estimation, the proposed method is found to be computationally efficient and fast converging.
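
    A minimal Python sketch of the outer optimization loop with Nesterov-style acceleration; the adjoint-based gradient of the hysteresis-fitting error is abstracted behind a user-supplied `grad` callable and is not derived here, and the step size is illustrative.

      # Accelerated gradient descent over a parameter vector, e.g. the five
      # Jiles-Atherton parameters (Ms, a, k, c, alpha).
      import numpy as np

      def accelerated_descent(grad, p0, step=1e-3, iters=500):
          p = y = np.asarray(p0, dtype=float)
          t = 1.0
          for _ in range(iters):
              p_next = y - step * grad(y)                       # gradient step
              t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t)) # momentum schedule
              y = p_next + (t - 1.0) / t_next * (p_next - p)    # look-ahead point
              p, t = p_next, t_next
          return p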

  20. Constrained Multipoint Aerodynamic Shape Optimization Using an Adjoint Formulation and Parallel Computers

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony; Alonso, Juan Jose; Rimlinger, Mark J.; Saunders, David

    1997-01-01

    An aerodynamic shape optimization method that treats the design of complex aircraft configurations subject to high fidelity computational fluid dynamics (CFD), geometric constraints and multiple design points is described. The design process is greatly accelerated through the use of both control theory and distributed memory computer architectures. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on a higher order CFD method. In order to facilitate the integration of these high fidelity CFD approaches into future multi-disciplinary optimization (MDO) applications, new methods must be developed which are capable of simultaneously addressing complex geometries, multiple objective functions, and geometric design constraints. In our earlier studies, we coupled the adjoint based design formulations with unconstrained optimization algorithms and showed that the approach was effective for the aerodynamic design of airfoils, wings, wing-bodies, and complex aircraft configurations. In many of the results presented in these earlier works, geometric constraints were satisfied either by a projection into feasible space or by posing the design space parameterization such that it automatically satisfied constraints. Furthermore, with the exception of reference 9, where the second author initially explored the use of multipoint design in conjunction with adjoint formulations, our earlier works have focused on single point design efforts. Here we demonstrate that the same methodology may be extended to treat

  1. An algorithm for enforcement of contact constraints in quasistatic applications using matrix-free solution algorithms

    SciTech Connect

    Heinstein, M.W.

    1997-10-01

    A contact enforcement algorithm has been developed for matrix-free quasistatic finite element techniques. Matrix-free (iterative) solution algorithms such as nonlinear Conjugate Gradients (CG) and Dynamic Relaxation (DR) are distinctive in that the number of iterations required for convergence is typically of the same order as the number of degrees of freedom of the model. From iteration to iteration the contact normal and tangential forces vary significantly making contact constraint satisfaction tenuous. Furthermore, global determination and enforcement of the contact constraints every iteration could be questioned on the grounds of efficiency. This work addresses this situation by introducing an intermediate iteration for treating the active gap constraint and at the same time exactly (kinematically) enforcing the linearized gap rate constraint for both frictionless and frictional response.

  2. The development of solution algorithms for compressible flows

    NASA Astrophysics Data System (ADS)

    Slack, David Christopher

    Three main topics were examined. The first is the development and comparison of time integration schemes on 2-D unstructured meshes; both explicit and implicit schemes are presented. Cell-centered and cell-vertex finite volume upwind schemes using Roe's approximate Riemann solver are developed. The second topic involves an interactive adaptive remeshing algorithm which uses a frontal grid generator and is compared to a single-grid calculation. The final topic examined is the capabilities developed for a structured 3-D code called GASP. These capabilities include generalized chemistry and thermodynamic modeling, space marching, memory management through the use of binary C I/O, and algebraic and two-equation eddy viscosity turbulence modeling. Results are given for a Mach 1.7 3-D analytic forebody, a Mach 1.38 axisymmetric nozzle with hydrogen-air combustion, a Mach 14.15 deg ramp, and Mach 0.3 viscous flow over a flat plate.

  3. Self-adjointness of deformed unbounded operators

    SciTech Connect

    Much, Albert

    2015-09-15

    We consider deformations of unbounded operators by using the novel construction tool of warped convolutions. By using the Kato-Rellich theorem, we show that deformed unbounded self-adjoint operators remain self-adjoint if they satisfy a certain condition. This condition proves to be necessary for the oscillatory integral to be well defined. Moreover, different proofs are given for the self-adjointness of deformed unbounded operators in the context of quantum mechanics and quantum field theory.

  4. Application of Adjoint Methodology in Various Aspects of Sonic Boom Design

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Sriram K.

    2014-01-01

    One of the advances in computational design has been the development of adjoint methods allowing efficient calculation of sensitivities in gradient-based shape optimization. This paper discusses two new applications of adjoint methodology that have been developed to aid in sonic boom mitigation exercises. In the first, equivalent area targets are generated using adjoint sensitivities of selected boom metrics. These targets may then be used to drive the vehicle shape during optimization. The second application is the computation of adjoint sensitivities of boom metrics on the ground with respect to parameters such as flight conditions, propagation sampling rate, and selected inputs to the propagation algorithms. These sensitivities enable the designer to make more informed selections of flight conditions at which the chosen cost functionals are less sensitive.

  5. Adjoint sensitivity study on idealized explosive cyclogenesis

    NASA Astrophysics Data System (ADS)

    Chu, Kekuan; Zhang, Yi

    2016-06-01

    The adjoint sensitivity related to explosive cyclogenesis in a conditionally unstable atmosphere is investigated in this study. The PSU/NCAR limited-area, nonhydrostatic primitive equation numerical model MM5 and its adjoint system are employed for numerical simulation and adjoint computation, respectively. To ensure the explosive development of a baroclinic wave, the forecast model is initialized with an idealized condition including an idealized two-dimensional baroclinic jet with a balanced three-dimensional moderate-amplitude disturbance, derived from a potential vorticity inversion technique. Firstly, the validity period of the tangent linear model for this idealized baroclinic wave case is discussed, considering different initial moisture distributions and a dry condition. Secondly, the 48-h forecast surface pressure center and the vertical component of the relative vorticity of the cyclone are selected as the response functions for adjoint computation in a dry and moist environment, respectively. The preliminary results show that the validity of the tangent linear assumption for this idealized baroclinic wave case can extend to 48 h with intense moist convection, and the validity period can last even longer in the dry adjoint integration. Adjoint sensitivity analysis indicates that the rapid development of the idealized baroclinic wave is sensitive to the initial wind and temperature perturbations around the steering level in the upstream. Moreover, the moist adjoint sensitivity can capture a secondary high sensitivity center in the upper troposphere, which cannot be depicted in the dry adjoint run.

  6. A higher-order tangent linear parabolic-equation solution of three-dimensional sound propagation.

    PubMed

    Lin, Ying-Tsong

    2013-08-01

    A higher-order square-root operator splitting algorithm is employed to derive a tangent linear solution for the three-dimensional parabolic wave equation due to small variations of the sound speed in the medium. The solution shown in this paper unifies other solutions obtained from less accurate approximations. Examples of three-dimensional acoustic ducts are presented to demonstrate the accuracy of the solution. Future work on the applications of associated adjoint models for acoustic inversions is proposed and discussed.

  7. An asynchronous metamodel-assisted memetic algorithm for CFD-based shape optimization

    NASA Astrophysics Data System (ADS)

    Kontoleontos, Evgenia A.; Asouti, Varvara G.; Giannakoglou, Kyriakos C.

    2012-02-01

    This article presents an asynchronous metamodel-assisted memetic algorithm for the solution of CFD-based optimization problems. This algorithm is appropriate for use on multiprocessor platforms and may solve computationally expensive optimization problems in reduced wall-clock time, compared to conventional evolutionary or memetic algorithms. It is, in fact, a hybridization of non-generation-based (asynchronous) evolutionary algorithms, assisted by surrogate evaluation models, a local search method and the Lamarckian learning process. For the objective function gradient computation, in CFD applications, the adjoint method is used. Issues concerning the 'smart' implementation of local search in multi-objective problems are discussed. In this respect, an algorithmic scheme for reducing the number of calls to the adjoint equations to just one, irrespective of the number of objectives, is proposed. The algorithm is applied to the CFD-based shape optimization of the tubes of a heat exchanger and of a turbomachinery cascade.

  8. Local-in-Time Adjoint-Based Method for Optimal Control/Design Optimization of Unsteady Compressible Flows

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2009-01-01

    We study local-in-time adjoint-based methods for minimization of flow-matching functionals subject to the 2-D unsteady compressible Euler equations. The key idea of the local-in-time method is to construct a very accurate approximation of the global-in-time adjoint equations and the corresponding sensitivity derivative by using only local information available on each time subinterval. In contrast to conventional time-dependent adjoint-based optimization methods, which require backward-in-time integration of the adjoint equations over the entire time interval, the local-in-time method solves local adjoint equations sequentially over each time subinterval. Since each subinterval contains relatively few time steps, the storage cost of the local-in-time method is much lower than that of the global adjoint formulation, thus making time-dependent optimization feasible for practical applications. The paper presents a detailed comparison of the local- and global-in-time adjoint-based methods for minimization of a tracking functional governed by the Euler equations describing the flow around a circular bump. Our numerical results show that the local-in-time method converges to the same optimal solution obtained with the global counterpart, while drastically reducing the memory cost as compared to the global-in-time adjoint formulation.
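
    A skeletal Python sketch of the control flow described above; the forward and adjoint solvers are user-supplied placeholders, and the zero terminal condition on each subinterval is a simplifying assumption rather than the paper's more accurate local approximation.

      # Accumulate the sensitivity from short adjoint problems solved per time
      # subinterval, storing only the states of the current window.
      def local_in_time_gradient(subintervals, solve_forward,
                                 solve_adjoint_backward, accumulate_sensitivity):
          grad = 0.0
          for (t_start, t_end) in subintervals:
              states = solve_forward(t_start, t_end)        # local storage only
              lam = solve_adjoint_backward(states, lam_terminal=0.0)
              grad += accumulate_sensitivity(states, lam)
          return grad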

  9. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
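
    For context, a naive Python baseline for the simplest instance of this problem class (many nonnegativity-constrained least squares columns); the combinatorial algorithm gains its speed by grouping right-hand-side columns that share an active set, which is not reproduced here.

      # Solve min ||A X - B||_F subject to X >= 0, one column of B at a time.
      import numpy as np
      from scipy.optimize import nnls

      def nnls_multiple_rhs(A, B):
          X = np.zeros((A.shape[1], B.shape[1]))
          for k in range(B.shape[1]):
              X[:, k], _ = nnls(A, B[:, k])   # independent solve per observation vector
          return X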

  10. Adjoint methods for aerodynamic wing design

    NASA Technical Reports Server (NTRS)

    Grossman, Bernard

    1993-01-01

    A model inverse design problem is used to investigate the effect of flow discontinuities on the optimization process. The optimization involves finding the cross-sectional area distribution of a duct that produces velocities that closely match a targeted velocity distribution. Quasi-one-dimensional flow theory is used, and the target is chosen to have a shock wave in its distribution. The objective function which quantifies the difference between the targeted and calculated velocity distributions may become non-smooth due to the interaction between the shock and the discretization of the flowfield. This paper offers two techniques to resolve the resulting problems for the optimization algorithms. The first, shock-fitting, involves careful integration of the objective function through the shock wave. The second, coordinate straining with shock penalty, uses a coordinate transformation to align the calculated shock with the target and then adds a penalty proportional to the square of the distance between the shocks. The techniques are tested using several popular sensitivity and optimization methods, including finite-differences, and direct and adjoint discrete sensitivity methods. Two optimization strategies, Gauss-Newton and sequential quadratic programming (SQP), are used to drive the objective function to a minimum.
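
    A small Python sketch of the kind of objective discussed above: an integrated squared mismatch between computed and target velocity distributions plus a penalty proportional to the squared distance between the computed and target shock locations. The quasi-one-dimensional flow solver and the coordinate-straining step are omitted, and all argument names are illustrative.

      # Inverse-design objective with shock-alignment penalty.
      import numpy as np

      def design_objective(u, u_target, x, x_shock, x_shock_target, weight=1.0):
          mismatch = 0.5 * np.trapz((u - u_target) ** 2, x)   # integrated squared error
          penalty = weight * (x_shock - x_shock_target) ** 2  # shock penalty term
          return mismatch + penalty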

  11. Consistent Adjoint Driven Importance Sampling using Space, Energy and Angle

    SciTech Connect

    Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M

    2012-08-01

    For challenging radiation transport problems, hybrid methods combine the accuracy of Monte Carlo methods with the global information present in deterministic methods. One of the most successful hybrid methods is CADIS (Consistent Adjoint Driven Importance Sampling). This method uses a deterministic adjoint solution to construct a biased source distribution and consistent weight windows to optimize a specific tally in a Monte Carlo calculation. The method has been implemented into transport codes using just the spatial and energy information from the deterministic adjoint and has been used in many applications to compute tallies with much higher figures-of-merit than analog calculations. CADIS also outperforms user-supplied importance values, which usually take long periods of user time to develop. This work extends CADIS to develop weight windows that are a function of the position, energy, and direction of the Monte Carlo particle. Two types of consistent source biasing are presented: one method that biases the source in space and energy while preserving the original directional distribution and one method that biases the source in space, energy, and direction. Seven simple example problems are presented which compare the use of the standard space/energy CADIS with the new space/energy/angle treatments.

  12. ASYMPTOTICALLY OPTIMAL HIGH-ORDER ACCURATE ALGORITHMS FOR THE SOLUTION OF CERTAIN ELLIPTIC PDEs

    SciTech Connect

    Leonid Kunyansky, PhD

    2008-11-26

    The main goal of the project, "Asymptotically Optimal, High-Order Accurate Algorithms for the Solution of Certain Elliptic PDE's" (DE-FG02-03ER25577) was to develop fast, high-order algorithms for the solution of scattering problems and spectral problems of photonic crystals theory. The results we obtained lie in three areas: (1) asymptotically fast, high-order algorithms for the solution of eigenvalue problems of photonics, (2) fast, high-order algorithms for the solution of acoustic and electromagnetic scattering problems in the inhomogeneous media, and (3) inversion formulas and fast algorithms for the inverse source problem for the acoustic wave equation, with applications to thermo- and opto- acoustic tomography.

  13. A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials

    SciTech Connect

    Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A

    2008-12-04

    We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.
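
    A tiny Python sketch of the coordinate-projection step mentioned above, which renormalizes the quaternion field to unit length (the solution invariant) after each implicit step; the trailing-axis layout of four quaternion components is an assumption.

      # Project a quaternion field back onto unit length after a BDF step.
      import numpy as np

      def project_quaternions(q):
          # q: array of shape (..., 4); zero-norm entries are left unchanged
          norm = np.linalg.norm(q, axis=-1, keepdims=True)
          return q / np.where(norm > 0.0, norm, 1.0)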

  14. A multi-level solution algorithm for steady-state Markov chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham; Leutenegger, Scott T.

    1993-01-01

    A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.

  15. NEMOTAM: tangent and adjoint models for the ocean modelling platform NEMO

    NASA Astrophysics Data System (ADS)

    Vidard, A.; Bouttier, P.-A.; Vigilant, F.

    2014-10-01

    The tangent linear and adjoint models (TAM) are efficient tools to analyse and to control dynamical systems such as NEMO. They can be involved in a large range of applications, such as sensitivity analysis, parameter estimation or the computation of characteristic vectors. The TAM is also required by the 4-D-Var algorithm, which is one of the major methods in data assimilation. This paper describes the development and the validation of the tangent linear and adjoint model for the NEMO ocean modelling platform (NEMOTAM). The diagnostic tools that are available alongside NEMOTAM are detailed and discussed, and several applications are also presented.

  16. NEMOTAM: tangent and adjoint models for the ocean modelling platform NEMO

    NASA Astrophysics Data System (ADS)

    Vidard, A.; Bouttier, P.-A.; Vigilant, F.

    2015-04-01

    Tangent linear and adjoint models (TAMs) are efficient tools to analyse and to control dynamical systems such as NEMO. They can be involved in a large range of applications such as sensitivity analysis, parameter estimation or the computation of characteristic vectors. A TAM is also required by the 4D-Var algorithm, which is one of the major methods in data assimilation. This paper describes the development and the validation of the tangent linear and adjoint model for the NEMO ocean modelling platform (NEMOTAM). The diagnostic tools that are available alongside NEMOTAM are detailed and discussed, and several applications are also presented.

  17. A balanced decomposition algorithm for parallel solutions of very large sparse systems

    SciTech Connect

    Zecevic, A.I.; Siljak, D.D.

    1995-12-01

    In this paper we present an algorithm for balanced bordered block diagonal (BBD) decompositions of very large symmetric positive definite or diagonally dominant sparse matrices. The algorithm represents a generalization of the method described, and is primarily aimed at parallel solutions of very large sparse systems (> 20,000 equations). A variety of experimental results are provided to illustrate the performance of the algorithm and demonstrate its potential for computing on massively parallel architectures.

  18. A practical discrete-adjoint method for high-fidelity compressible turbulence simulations

    NASA Astrophysics Data System (ADS)

    Vishnampet, Ramanathan; Bodony, Daniel J.; Freund, Jonathan B.

    2015-03-01

    Methods and computing hardware advances have enabled accurate predictions of complex compressible turbulence phenomena, such as the generation of jet noise that motivates the present effort. However, limited understanding of underlying physical mechanisms restricts the utility of such predictions since they do not, by themselves, indicate a route to design improvements. Gradient-based optimization using adjoints can circumvent the flow complexity to guide designs, though this is predicated on the availability of a sufficiently accurate solution of the forward and adjoint systems. These are challenging to obtain, since both the chaotic character of the turbulence and the typical use of discretizations near their resolution limits in order to efficiently represent its smaller scales will amplify any approximation errors made in the adjoint formulation. Formulating a practical exact adjoint that avoids such errors is especially challenging if it is to be compatible with state-of-the-art simulation methods used for the turbulent flow itself. Automatic differentiation (AD) can provide code to calculate a nominally exact adjoint, but existing general-purpose AD codes are inefficient to the point of being prohibitive for large-scale turbulence simulations. Here, we analyze the compressible flow equations as discretized using the same high-order workhorse methods used for many high-fidelity compressible turbulence simulations, and formulate a practical space-time discrete-adjoint method without changing the basic discretization. A key step is the definition of a particular discrete analog of the continuous norm that defines our cost functional; our selection leads directly to an efficient Runge-Kutta-like scheme, though it would be just first-order accurate if used outside the adjoint formulation for time integration, with finite-difference spatial operators for the adjoint system. Its computational cost only modestly exceeds that of the flow equations. We confirm that its

  19. A practical discrete-adjoint method for high-fidelity compressible turbulence simulations

    SciTech Connect

    Vishnampet, Ramanathan; Bodony, Daniel J.; Freund, Jonathan B.

    2015-03-15

    Methods and computing hardware advances have enabled accurate predictions of complex compressible turbulence phenomena, such as the generation of jet noise that motivates the present effort. However, limited understanding of underlying physical mechanisms restricts the utility of such predictions since they do not, by themselves, indicate a route to design improvements. Gradient-based optimization using adjoints can circumvent the flow complexity to guide designs, though this is predicated on the availability of a sufficiently accurate solution of the forward and adjoint systems. These are challenging to obtain, since both the chaotic character of the turbulence and the typical use of discretizations near their resolution limits in order to efficiently represent its smaller scales will amplify any approximation errors made in the adjoint formulation. Formulating a practical exact adjoint that avoids such errors is especially challenging if it is to be compatible with state-of-the-art simulation methods used for the turbulent flow itself. Automatic differentiation (AD) can provide code to calculate a nominally exact adjoint, but existing general-purpose AD codes are inefficient to the point of being prohibitive for large-scale turbulence simulations. Here, we analyze the compressible flow equations as discretized using the same high-order workhorse methods used for many high-fidelity compressible turbulence simulations, and formulate a practical space–time discrete-adjoint method without changing the basic discretization. A key step is the definition of a particular discrete analog of the continuous norm that defines our cost functional; our selection leads directly to an efficient Runge–Kutta-like scheme, though it would be just first-order accurate if used outside the adjoint formulation for time integration, with finite-difference spatial operators for the adjoint system. Its computational cost only modestly exceeds that of the flow equations. We confirm that

  20. Neural Networks Art: Solving Problems with Multiple Solutions and New Teaching Algorithm

    PubMed Central

    Dmitrienko, V. D; Zakovorotnyi, A. Yu.; Leonov, S. Yu.; Khavina, I. P

    2014-01-01

    A new discrete adaptive resonance theory (ART) neural network that allows solving problems with multiple solutions is developed. New teaching algorithms for ART neural networks are developed that prevent the degradation and reproduction of classes when training on noisy input data. The proposed learning algorithms for discrete ART networks allow different classification methods of the input to be obtained. PMID:25246988

  1. An Enhanced Artificial Bee Colony Algorithm with Solution Acceptance Rule and Probabilistic Multisearch.

    PubMed

    Yurtkuran, Alkın; Emel, Erdal

    2016-01-01

    The artificial bee colony (ABC) algorithm is a popular swarm based technique, which is inspired from the intelligent foraging behavior of honeybee swarms. This paper proposes a new variant of ABC algorithm, namely, enhanced ABC with solution acceptance rule and probabilistic multisearch (ABC-SA) to address global optimization problems. A new solution acceptance rule is proposed where, instead of greedy selection between old solution and new candidate solution, worse candidate solutions have a probability to be accepted. Additionally, the acceptance probability of worse candidates is nonlinearly decreased throughout the search process adaptively. Moreover, in order to improve the performance of the ABC and balance the intensification and diversification, a probabilistic multisearch strategy is presented. Three different search equations with distinctive characters are employed using predetermined search probabilities. By implementing a new solution acceptance rule and a probabilistic multisearch approach, the intensification and diversification performance of the ABC algorithm is improved. The proposed algorithm has been tested on well-known benchmark functions of varying dimensions by comparing against novel ABC variants, as well as several recent state-of-the-art algorithms. Computational results show that the proposed ABC-SA outperforms other ABC variants and is superior to state-of-the-art algorithms proposed in the literature. PMID:26819591
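
    An illustrative Python sketch of the two ingredients named above; the decay law for the acceptance probability and the search-equation probabilities are guesses for illustration, not the paper's exact settings.

      # (i) Acceptance rule: a worse candidate may still replace the old solution
      # with a probability that decays nonlinearly over the run.
      # (ii) Probabilistic multisearch: pick one of several search equations
      # according to predetermined probabilities.
      import random

      def accept(old_f, new_f, iteration, max_iter, p0=0.3, power=2.0):
          if new_f <= old_f:                  # minimization: better is always kept
              return True
          p_accept = p0 * (1.0 - iteration / max_iter) ** power   # nonlinear decay
          return random.random() < p_accept

      def pick_search_equation(probs=(0.5, 0.3, 0.2)):
          r, acc = random.random(), 0.0
          for idx, p in enumerate(probs):
              acc += p
              if r < acc:
                  return idx                  # index of the search equation to use
          return len(probs) - 1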

  2. An Enhanced Artificial Bee Colony Algorithm with Solution Acceptance Rule and Probabilistic Multisearch

    PubMed Central

    Yurtkuran, Alkın

    2016-01-01

    The artificial bee colony (ABC) algorithm is a popular swarm based technique, which is inspired from the intelligent foraging behavior of honeybee swarms. This paper proposes a new variant of ABC algorithm, namely, enhanced ABC with solution acceptance rule and probabilistic multisearch (ABC-SA) to address global optimization problems. A new solution acceptance rule is proposed where, instead of greedy selection between old solution and new candidate solution, worse candidate solutions have a probability to be accepted. Additionally, the acceptance probability of worse candidates is nonlinearly decreased throughout the search process adaptively. Moreover, in order to improve the performance of the ABC and balance the intensification and diversification, a probabilistic multisearch strategy is presented. Three different search equations with distinctive characters are employed using predetermined search probabilities. By implementing a new solution acceptance rule and a probabilistic multisearch approach, the intensification and diversification performance of the ABC algorithm is improved. The proposed algorithm has been tested on well-known benchmark functions of varying dimensions by comparing against novel ABC variants, as well as several recent state-of-the-art algorithms. Computational results show that the proposed ABC-SA outperforms other ABC variants and is superior to state-of-the-art algorithms proposed in the literature. PMID:26819591

  3. An Enhanced Artificial Bee Colony Algorithm with Solution Acceptance Rule and Probabilistic Multisearch.

    PubMed

    Yurtkuran, Alkın; Emel, Erdal

    2016-01-01

    The artificial bee colony (ABC) algorithm is a popular swarm based technique, which is inspired from the intelligent foraging behavior of honeybee swarms. This paper proposes a new variant of ABC algorithm, namely, enhanced ABC with solution acceptance rule and probabilistic multisearch (ABC-SA) to address global optimization problems. A new solution acceptance rule is proposed where, instead of greedy selection between old solution and new candidate solution, worse candidate solutions have a probability to be accepted. Additionally, the acceptance probability of worse candidates is nonlinearly decreased throughout the search process adaptively. Moreover, in order to improve the performance of the ABC and balance the intensification and diversification, a probabilistic multisearch strategy is presented. Three different search equations with distinctive characters are employed using predetermined search probabilities. By implementing a new solution acceptance rule and a probabilistic multisearch approach, the intensification and diversification performance of the ABC algorithm is improved. The proposed algorithm has been tested on well-known benchmark functions of varying dimensions by comparing against novel ABC variants, as well as several recent state-of-the-art algorithms. Computational results show that the proposed ABC-SA outperforms other ABC variants and is superior to state-of-the-art algorithms proposed in the literature.

  4. Double-difference adjoint seismic tomography

    NASA Astrophysics Data System (ADS)

    Yuan, Yanhua O.; Simons, Frederik J.; Tromp, Jeroen

    2016-09-01

    We introduce a `double-difference' method for the inversion for seismic wave speed structure based on adjoint tomography. Differences between seismic observations and model predictions at individual stations may arise from factors other than structural heterogeneity, such as errors in the assumed source-time function, inaccurate timings and systematic uncertainties. To alleviate the corresponding non-uniqueness in the inverse problem, we construct differential measurements between stations, thereby reducing the influence of the source signature and systematic errors. We minimize the discrepancy between observations and simulations in terms of the differential measurements made on station pairs. We show how to implement the double-difference concept in adjoint tomography, both theoretically and practically. We compare the sensitivities of absolute and differential measurements. The former provide absolute information on structure along the ray paths between stations and sources, whereas the latter explain relative (and thus higher resolution) structural variations in areas close to the stations. Whereas in conventional tomography a measurement made on a single earthquake-station pair provides very limited structural information, in double-difference tomography one earthquake can actually resolve significant details of the structure. The double-difference methodology can be incorporated into the usual adjoint tomography workflow by simply pairing up all conventional measurements; the computational cost of the necessary adjoint simulations is largely unaffected. Rather than adding to the computational burden, the inversion of double-difference measurements merely modifies the construction of the adjoint sources for data assimilation.

  5. Double-difference adjoint seismic tomography

    NASA Astrophysics Data System (ADS)

    Yuan, Yanhua O.; Simons, Frederik J.; Tromp, Jeroen

    2016-06-01

    We introduce a `double-difference' method for the inversion for seismic wavespeed structure based on adjoint tomography. Differences between seismic observations and model predictions at individual stations may arise from factors other than structural heterogeneity, such as errors in the assumed source-time function, inaccurate timings, and systematic uncertainties. To alleviate the corresponding nonuniqueness in the inverse problem, we construct differential measurements between stations, thereby reducing the influence of the source signature and systematic errors. We minimize the discrepancy between observations and simulations in terms of the differential measurements made on station pairs. We show how to implement the double-difference concept in adjoint tomography, both theoretically and in practice. We compare the sensitivities of absolute and differential measurements. The former provide absolute information on structure along the ray paths between stations and sources, whereas the latter explain relative (and thus higher-resolution) structural variations in areas close to the stations. Whereas in conventional tomography a measurement made on a single earthquake-station pair provides very limited structural information, in double-difference tomography one earthquake can actually resolve significant details of the structure. The double-difference methodology can be incorporated into the usual adjoint tomography workflow by simply pairing up all conventional measurements; the computational cost of the necessary adjoint simulations is largely unaffected. Rather than adding to the computational burden, the inversion of double-difference measurements merely modifies the construction of the adjoint sources for data assimilation.

  6. Algorithmic solution of arithmetic problems and operands-answer associations in long-term memory.

    PubMed

    Thevenot, C; Barrouillet, P; Fayol, M

    2001-05-01

    Many developmental models of arithmetic problem solving assume that any algorithmic solution of a given problem results in an association of the two operands and the answer in memory (Logan & Klapp, 1991; Siegler, 1996). In this experiment, adults had to perform either an operation or a comparison on the same pairs of two-digit numbers and then a recognition task. It is shown that unlike comparisons, the algorithmic solution of operations impairs the recognition of operands in adults. Thus, the postulate of a necessary and automatic storage of operands-answer associations in memory when young children solve additions by algorithmic strategies needs to be qualified. PMID:11394064

  7. Adjoint-based optimization for understanding and suppressing jet noise

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan B.

    2011-08-01

    Advanced simulation tools, particularly large-eddy simulation techniques, are becoming capable of making quality predictions of jet noise for realistic nozzle geometries and at engineering-relevant flow conditions. Increasing computer resources will be a key factor in improving these predictions still further. Quality prediction, however, is only a necessary condition for the use of such simulations in design optimization. Predictions do not themselves lead to quieter designs. They must be interpreted or harnessed in some way that leads to design improvements. As yet, such simulations have not yielded any simplifying principles that offer general design guidance. The turbulence mechanisms leading to jet noise remain poorly described in their complexity. In this light, we have implemented and demonstrated an aeroacoustic adjoint-based optimization technique that automatically calculates gradients that indicate the direction in which to adjust controls in order to improve designs. This is done with only a single flow solution and a solution of an adjoint system, which is solved at computational cost comparable to that for the flow. Optimization requires iterations, but having the gradient information provided via the adjoint accelerates convergence in a manner that is insensitive to the number of parameters to be optimized. This paper, which follows from a presentation at the 2010 IUTAM Symposium on Computational Aero-Acoustics for Aircraft Noise Prediction, reviews recent and ongoing efforts by the author and co-workers. It provides a new formulation of the basic approach and demonstrates the approach on a series of model flows, culminating with a preliminary result for a turbulent jet.
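
    The key property claimed above (one forward solve plus one adjoint solve yields the gradient with respect to arbitrarily many control parameters) can be illustrated on a small linear model. The sketch below is not the paper's aeroacoustic formulation; its matrices, cost function, and parameter count are invented for the example.

```python
import numpy as np

# State equation A u = f(c) with f(c) = B c, cost J = 0.5 * ||u - d||^2.
# One adjoint solve A^T lam = (u - d) gives dJ/dc = B^T lam for all controls.
rng = np.random.default_rng(0)
n, m = 50, 200                       # state size, number of control parameters
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))      # hypothetical linear control-to-forcing map
d = rng.standard_normal(n)
c = rng.standard_normal(m)

u = np.linalg.solve(A, B @ c)        # one forward solve
lam = np.linalg.solve(A.T, u - d)    # one adjoint solve
grad = B.T @ lam                     # gradient w.r.t. all m controls at once

# finite-difference check on a single component
eps, k = 1e-6, 3
c2 = c.copy(); c2[k] += eps
u2 = np.linalg.solve(A, B @ c2)
fd = (0.5 * np.sum((u2 - d) ** 2) - 0.5 * np.sum((u - d) ** 2)) / eps
print(grad[k], fd)                   # the two numbers should agree closely
```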

  8. Adjoint Techniques for Topology Optimization of Structures Under Damage Conditions

    NASA Technical Reports Server (NTRS)

    Akgun, Mehmet A.; Haftka, Raphael T.

    2000-01-01

    The objective of this cooperative agreement was to seek computationally efficient ways to optimize aerospace structures subject to damage tolerance criteria. Optimization was to involve sizing as well as topology optimization. The work was done in collaboration with Steve Scotti, Chauncey Wu and Joanne Walsh at the NASA Langley Research Center. Computation of constraint sensitivity is normally the most time-consuming step of an optimization procedure. The cooperative work first focused on this issue and implemented the adjoint method of sensitivity computation (Haftka and Gurdal, 1992) in an optimization code (runstream) written in Engineering Analysis Language (EAL). The method was implemented both for bar and plate elements including buckling sensitivity for the latter. Lumping of constraints was investigated as a means to reduce the computational cost. Adjoint sensitivity computation was developed and implemented for lumped stress and buckling constraints. Cost of the direct method and the adjoint method was compared for various structures with and without lumping. The results were reported in two papers (Akgun et al., 1998a and 1999). It is desirable to optimize topology of an aerospace structure subject to a large number of damage scenarios so that a damage tolerant structure is obtained. Including damage scenarios in the design procedure is critical in order to avoid large mass penalties at later stages (Haftka et al., 1983). A common method for topology optimization is that of compliance minimization (Bendsoe, 1995) which has not been used for damage tolerant design. In the present work, topology optimization is treated as a conventional problem aiming to minimize the weight subject to stress constraints. Multiple damage configurations (scenarios) are considered. Each configuration has its own structural stiffness matrix and, normally, requires factoring of the matrix and solution of the system of equations. Damage that is expected to be tolerated is local

  9. An Efficient Algorithm for Partitioning and Authenticating Problem-Solutions of eLearning Contents

    ERIC Educational Resources Information Center

    Dewan, Jahangir; Chowdhury, Morshed; Batten, Lynn

    2013-01-01

    Content authenticity and correctness is one of the important challenges in eLearning as there can be many solutions to one specific problem in cyber space. Therefore, the authors feel it is necessary to map problems to solutions using graph partition and weighted bipartite matching. This article proposes an efficient algorithm to partition…
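
    The abstract only sketches the idea of mapping problems to solutions via weighted bipartite matching. The snippet below shows one standard way such a matching could be computed, using SciPy's assignment solver on a hypothetical score matrix; it is an illustration, not the authors' partitioning and authentication algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical relevance scores: rows = problems, columns = candidate solutions.
# Higher means a better match; the solver returns a one-to-one assignment
# maximizing the total score (maximum-weight bipartite matching).
scores = np.array([[0.9, 0.2, 0.4],
                   [0.1, 0.8, 0.3],
                   [0.5, 0.6, 0.7]])

rows, cols = linear_sum_assignment(scores, maximize=True)
for p, s in zip(rows, cols):
    print(f"problem {p} -> solution {s} (score {scores[p, s]:.1f})")
```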

  10. Adjoint variational methods in nonconservative stability problems.

    NASA Technical Reports Server (NTRS)

    Prasad, S. N.; Herrmann, G.

    1972-01-01

    A general nonself-adjoint eigenvalue problem is examined and it is shown that the commonly employed approximate methods, such as the Galerkin procedure, the method of weighted residuals and the least square technique lack variational descriptions. When used in their previously known forms they do not yield stationary eigenvalues and eigenfunctions. With the help of an adjoint system, however, several analogous variational descriptions may be developed and it is shown in the present study that by properly restating the method of least squares, stationary eigenvalues may be obtained. Several properties of the adjoint eigenvalue problem, known only for a restricted group, are shown to exist for the more general class selected for study.

  11. Nonlinear self-adjointness through differential substitutions

    NASA Astrophysics Data System (ADS)

    Gandarias, M. L.

    2014-10-01

    It is known (Ibragimov, 2011; Galiakberova and Ibragimov, 2013) [14,18] that the property of nonlinear self-adjointness makes it possible to associate conservation laws of the equations under study with their symmetries. In this paper we show that, even when the equation is nonlinearly self-adjoint with a non-differential substitution, finding the explicit form of the differential substitution can provide new conservation laws associated with its symmetries. By using the general theorem on conservation laws (Ibragimov, 2007) [11] and the property of nonlinear self-adjointness, we find some new conservation laws for the modified Harry-Dym equation. By using a differential substitution, we construct a conservation law for the Harry-Dym equation, which has not been derived before using Ibragimov's method.

  12. Comparison of Ensemble and Adjoint Approaches to Variational Optimization of Observational Arrays

    NASA Astrophysics Data System (ADS)

    Nechaev, D.; Panteleev, G.; Yaremchuk, M.

    2015-12-01

    Comprehensive monitoring of the circulation in the Chukchi Sea and Bering Strait is one of the key prerequisites of the successful long-term forecast of the Arctic Ocean state. Since the number of continuously maintained observational platforms is restricted by logistical and political constraints, the configuration of such an observing system should be guided by an objective strategy that optimizes the observing system coverage, design, and the expenses of monitoring. The presented study addresses optimization of a system consisting of a limited number of observational platforms with respect to reduction of the uncertainties in monitoring the volume/freshwater/heat transports through a set of key sections in the Chukchi Sea and Bering Strait. Variational algorithms for optimization of observational arrays are verified in the test bed of the set of 4Dvar optimized summer-fall circulations in the Pacific sector of the Arctic Ocean. The results of an optimization approach based on a low-dimensional ensemble of model solutions are compared against a more conventional algorithm involving application of the tangent linear and adjoint models. Special attention is paid to the computational efficiency and portability of the optimization procedure.

  13. ADGEN: ADjoint GENerator for computer models

    SciTech Connect

    Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.

    1989-05-01

    This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the Low-Level Waste Community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 times that of the reference model for each response of interest. For a single response, this compares with a factor of roughly 3000 for determining these derivatives by parameter perturbations. The automation of the implementation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs.
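
    ADGEN instruments FORTRAN source, but the taping idea it relies on (record local partial derivatives during the forward calculation, then accumulate total derivatives by sweeping the tape backwards) can be sketched in a few lines. The class below is a deliberately minimal, hypothetical Python illustration of that principle, not anything resembling ADGEN's implementation.

```python
class Var:
    """Value plus adjoint; arithmetic records partial derivatives on a tape."""
    def __init__(self, value, tape=None):
        self.value, self.adj, self.tape = value, 0.0, tape if tape is not None else []

    def _new(self, value, parents):
        out = Var(value, self.tape)
        self.tape.append((out, parents))       # parents: [(Var, partial), ...]
        return out

    def __add__(self, other):
        return self._new(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return self._new(self.value * other.value,
                         [(self, other.value), (other, self.value)])

def backward(result):
    """Sweep the tape in reverse, accumulating adjoints (total derivatives)."""
    result.adj = 1.0
    for out, parents in reversed(result.tape):
        for var, partial in parents:
            var.adj += partial * out.adj

tape = []
x, y = Var(3.0, tape), Var(4.0, tape)
f = x * y + x                                   # f = x*y + x
backward(f)
print(x.adj, y.adj)                             # df/dx = y + 1 = 5, df/dy = x = 3
```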

  14. Comparative study of fusion algorithms and implementation of new efficient solution

    NASA Astrophysics Data System (ADS)

    Besrour, Amine; Snoussi, Hichem; Siala, Mohamed; Abdelkefi, Fatma

    2014-05-01

    High Dynamic Range (HDR) imaging has been the subject of significant research over the past years, yet the goal of acquiring the best cinema-quality HDR images of fast-moving scenes using an efficient merging algorithm has not been achieved. Many merging algorithms have been implemented and developed over the years, but they do not handle all situations and lack the speed needed for fast HDR image reconstruction. In this paper, we present a full comparative analysis of the available fusion algorithms, and we describe our own merging algorithm, which is designed to be faster and more highly optimized than the existing ones. This merging algorithm is tied to our hardware solution, which allows us to obtain four pictures with different exposures.
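
    For context, the sketch below shows one common weighted-average exposure merge for a linear sensor response; the weight function, exposure times, and toy scene are assumptions made for illustration and do not represent the authors' hardware-specific merging algorithm.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Weighted HDR merge for an assumed linear sensor response.

    images : list of float arrays scaled to [0, 1], one per exposure
    exposure_times : corresponding exposure times in seconds
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)        # hat weight: favor mid-tones
        num += w * img / t                       # scale each frame to radiance
        den += w
    return num / np.maximum(den, 1e-6)

# toy example: three exposures of the same 2x2 scene
rad = np.array([[0.02, 0.2], [1.5, 6.0]])        # "true" scene radiance
times = [1.0, 0.25, 0.05]
frames = [np.clip(rad * t, 0.0, 1.0) for t in times]
print(merge_exposures(frames, times))            # should approximate rad
```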

  15. FAST TRACK COMMUNICATION Quasi self-adjoint nonlinear wave equations

    NASA Astrophysics Data System (ADS)

    Ibragimov, N. H.; Torrisi, M.; Tracinà, R.

    2010-11-01

    In this paper we generalize the classification of self-adjoint second-order linear partial differential equations to a family of nonlinear wave equations with two independent variables. We find a class of quasi self-adjoint nonlinear equations which includes the self-adjoint linear equations as a particular case. The property of being quasi self-adjoint is important, e.g., for constructing conservation laws associated with symmetries of the differential equation.

  16. Adjoint-Based Uncertainty Quantification with MCNP

    SciTech Connect

    Seifried, Jeffrey E.

    2011-09-01

    This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.

  17. Time dependent adjoint-based optimization for coupled fluid-structure problems

    NASA Astrophysics Data System (ADS)

    Mishra, Asitav; Mani, Karthik; Mavriplis, Dimitri; Sitaraman, Jay

    2015-07-01

    A formulation for sensitivity analysis of fully coupled time-dependent aeroelastic problems is given in this paper. Both forward sensitivity and adjoint sensitivity formulations are derived that correspond to analogues of the fully coupled non-linear aeroelastic analysis problem. Both sensitivity analysis formulations make use of the same iterative disciplinary solution techniques used for analysis, and make use of an analogous coupling strategy. The information passed between fluid and structural solvers is dimensionally equivalent in all cases, enabling the use of the same data structures for analysis, forward and adjoint problems. The fully coupled adjoint formulation is then used to perform rotor blade design optimization for a four bladed HART2 rotor in hover conditions started impulsively from rest. The effect of time step size and mesh resolution on optimization results is investigated.

  18. A new algorithm for generating highly accurate benchmark solutions to transport test problems

    SciTech Connect

    Azmy, Y.Y.

    1997-06-01

    We present a new algorithm for solving the neutron transport equation in its discrete-variable form. The new algorithm is based on computing the full matrix relating the scalar flux spatial moments in all cells to the fixed neutron source spatial moments, foregoing the need to compute the angular flux spatial moments, and thereby eliminating the need for sweeping the spatial mesh in each discrete-angular direction. The matrix equation is solved exactly in test cases, producing a solution vector that is free from iteration convergence error, and subject only to truncation and roundoff errors. Our algorithm is designed to provide method developers with a quick and simple solution scheme to test their new methods on difficult test problems without the need to develop sophisticated solution techniques, e.g. acceleration, before establishing the worthiness of their innovation. We demonstrate the utility of the new algorithm by applying it to the Arbitrarily High Order Transport Nodal (AHOT-N) method, and using it to solve two of Burre's Suite of Test Problems (BSTP). Our results provide highly accurate benchmark solutions that can be distributed electronically and used to verify the pointwise accuracy of other solution methods and algorithms.

  19. Adjoint optimization of natural convection problems: differentially heated cavity

    NASA Astrophysics Data System (ADS)

    Saglietti, Clio; Schlatter, Philipp; Monokrousos, Antonios; Henningson, Dan S.

    2016-06-01

    Optimization of natural convection-driven flows may provide significant improvements to the performance of cooling devices, but a theoretical investigation of such flows has rarely been done. The present paper illustrates an efficient gradient-based optimization method for analyzing such systems. We consider numerically the natural convection-driven flow in a differentially heated cavity with three Prandtl numbers (Pr = 0.15-7) at super-critical conditions. All results and implementations were done with the spectral element code Nek5000. The flow is analyzed using linear direct and adjoint computations about a nonlinear base flow, extracting in particular optimal initial conditions using power iteration and the solution of the full adjoint direct eigenproblem. The cost function for both temperature and velocity is based on the kinetic energy and the concept of entransy, which yields a quadratic functional. Results are presented as a function of Prandtl number, time horizons and weights between kinetic energy and entransy. In particular, it is shown that the maximum transient growth is achieved at time horizons on the order of 5 time units for all cases, whereas for larger time horizons the adjoint mode is recovered as optimal initial condition. For smaller time horizons, the influence of the weights leads either to a concentric temperature distribution or to an initial condition pattern that opposes the mean shear and grows according to the Orr mechanism. For specific cases, it could also be shown that the computation of optimal initial conditions leads to a degenerate problem, with a potential loss of symmetry. In these situations, it turns out that any initial condition lying in a specific span of the eigenfunctions will yield exactly the same transient amplification. As a consequence, the power iteration converges very slowly and fails to extract all possible optimal initial conditions. According to the authors' knowledge, this behavior is illustrated here

  20. An Evolutionary Algorithm for Improved Diversity in DSL Spectrum Balancing Solutions

    NASA Astrophysics Data System (ADS)

    Bezerra, Johelden; Klautau, Aldebaro; Monteiro, Marcio; Pelaes, Evaldo; Medeiros, Eduardo; Dortschy, Boris

    2010-12-01

    There are many spectrum balancing algorithms to combat the deleterious impact of crosstalk interference in digital subscriber lines (DSL) networks. These algorithms aim to find a unique operating point by optimizing the power spectral densities (PSDs) of the modems. Typically, the figure of merit of this optimization is the bit rate, power consumption or margin. This work poses and solves a different problem: instead of providing the solution for one specific operation point, it finds a set of operating points, each one corresponding to a distinct matrix with PSDs. This solution is useful for planning DSL deployment, for example, helping operators to conveniently evaluate their network capabilities and better plan their usage. The proposed method is based on a multiobjective formulation and implemented as an evolutionary genetic algorithm. Simulation results show that this algorithm achieves a better diversity among the operating points with lower computational cost when compared to an alternative approach.

  1. A numerical solution algorithm and its application to studies of pulsed light fields propagation

    NASA Astrophysics Data System (ADS)

    Banakh, V. A.; Gerasimova, L. O.; Smalikho, I. N.; Falits, A. V.

    2016-08-01

    A new method for studying the propagation of pulsed laser beams in a turbulent atmosphere is proposed. The numerical simulation algorithm is based on the solution of the parabolic wave equation for the complex spectral amplitude of the wave field using the method of splitting into physical factors. Examples of the use of the algorithm for the propagation of pulsed Laguerre-Gaussian beams of femtosecond duration in a turbulent atmosphere are shown.
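
    A minimal version of the splitting idea mentioned above, restricted to pure diffraction of a monochromatic Gaussian beam in vacuum, is sketched below; turbulence would enter as random phase screens applied between diffraction steps. All grid sizes, wavelengths, and distances are invented for illustration, and this is not the authors' femtosecond pulsed-beam code.

```python
import numpy as np

# Split-step (operator-splitting) sketch for the parabolic wave equation
# 2ik dU/dz = (d2/dx2 + d2/dy2) U, applied to a Gaussian beam in vacuum.
n, L = 256, 0.2                     # grid points and transverse box size [m]
wavelength, w0 = 1.0e-6, 5.0e-3     # wavelength [m] and beam radius [m]
dz, steps = 10.0, 20                # step size [m] and number of steps

k = 2.0 * np.pi / wavelength
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y = np.meshgrid(x, x)
kx = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(kx, kx)
H = np.exp(-1j * (KX**2 + KY**2) * dz / (2.0 * k))   # diffraction propagator

U0 = np.exp(-(X**2 + Y**2) / w0**2)                  # initial Gaussian field
U = U0.astype(complex)
for _ in range(steps):
    U = np.fft.ifft2(np.fft.fft2(U) * H)             # one diffraction step of dz

def rms_radius(field):
    w = np.abs(field) ** 2
    return np.sqrt(np.sum((X**2 + Y**2) * w) / np.sum(w))

print("rms beam radius [m]:", rms_radius(U0), "->", rms_radius(U))
```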

  2. Finite element solution for energy conservation using a highly stable explicit integration algorithm

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Manhardt, P. D.

    1972-01-01

    Theoretical derivation of a finite element solution algorithm for the transient energy conservation equation in multidimensional, stationary multi-media continua with irregular solution domain closure is considered. The complete finite element matrix forms for arbitrarily irregular discretizations are established, using natural coordinate function representations. The algorithm is embodied into a user-oriented computer program (COMOC) which obtains transient temperature distributions at the node points of the finite element discretization using a highly stable explicit integration procedure with automatic error control features. The finite element algorithm is shown to possess convergence with discretization for a transient sample problem. The condensed form for the specific heat element matrix is shown to be preferable to the consistent form. Computed results for diverse problems illustrate the versatility of COMOC, and easily prepared output subroutines are shown to allow quick engineering assessment of solution behavior.

  3. An efficient parallel algorithm for the solution of a tridiagonal linear system of equations

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1971-01-01

    Tridiagonal linear systems of equations are solved on conventional serial machines in a time proportional to N, where N is the number of equations. The conventional algorithms do not lend themselves directly to parallel computations on computers of the ILLIAC IV class, in the sense that they appear to be inherently serial. An efficient parallel algorithm is presented in which computation time grows as log_2 N. The algorithm is based on recursive doubling solutions of linear recurrence relations, and can be used to solve recurrence relations of all orders.

  4. An efficient parallel algorithm for the solution of a tridiagonal linear system of equations.

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1973-01-01

    Tridiagonal linear systems of equations can be solved on conventional serial machines in a time proportional to N, where N is the number of equations. The conventional algorithms do not lend themselves directly to parallel computation on computers of the Illiac IV class, in the sense that they appear to be inherently serial. An efficient parallel algorithm is presented in which computation time grows as log_2 N. The algorithm is based on recursive doubling solutions of linear recurrence relations, and can be used to solve recurrence relations of all orders.
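
    The two records above concern tridiagonal systems; the recursive-doubling idea is easiest to see on the simpler first-order recurrence x[i] = a[i]*x[i-1] + b[i], where prefix compositions of affine maps are built in about log2 N combining steps. The sketch below is a serial NumPy emulation of that parallel scan on made-up data, not the papers' tridiagonal solver.

```python
import numpy as np

def recursive_doubling(a, b, x0=0.0):
    """Solve x[i] = a[i]*x[i-1] + b[i] (with x[-1] = x0) by recursive doubling.

    Each index holds an affine map (A, B) meaning x[i] = A*x0 + B; maps are
    combined at strides 1, 2, 4, ..., so the full prefix is formed in
    ceil(log2 N) steps, each of which is fully data-parallel on a suitable machine.
    """
    A = np.array(a, dtype=float)          # single-step maps to start with
    B = np.array(b, dtype=float)
    n, d = len(A), 1
    while d < n:
        A_new, B_new = A.copy(), B.copy()
        A_new[d:] = A[d:] * A[:-d]        # compose map i with map ending at i-d
        B_new[d:] = A[d:] * B[:-d] + B[d:]
        A, B, d = A_new, B_new, 2 * d
    return A * x0 + B

# check against the serial recurrence
rng = np.random.default_rng(1)
a, b = rng.uniform(0.5, 1.5, 16), rng.standard_normal(16)
x_serial, x = [], 2.0
for ai, bi in zip(a, b):
    x = ai * x + bi
    x_serial.append(x)
print(np.allclose(recursive_doubling(a, b, x0=2.0), x_serial))  # True
```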

  5. Supersonic biplane design via adjoint method

    NASA Astrophysics Data System (ADS)

    Hu, Rui

    In developing the next generation supersonic transport airplane, two major challenges must be resolved. The fuel efficiency must be significantly improved, and the sonic boom propagating to the ground must be dramatically reduced. Both of these objectives can be achieved by reducing the shockwaves formed in supersonic flight. The Busemann biplane is famous for using favorable shockwave interaction to achieve nearly shock-free supersonic flight at its design Mach number. Its performance at off-design Mach numbers, however, can be very poor. This dissertation studies the performance of supersonic biplane airfoils at design and off-design conditions. The choked flow and flow-hysteresis phenomena of these biplanes are studied. These effects are due to the finite thickness of the airfoils and the non-uniqueness of the solution to the Euler equations, creating over an order of magnitude more wave drag than that predicted by supersonic thin airfoil theory. As a result, the off-design performance is the major barrier to the practical use of supersonic biplanes. The main contribution of this work is to drastically improve the off-design performance of supersonic biplanes by using an adjoint-based aerodynamic optimization technique. The Busemann biplane is used as the baseline design, and its shape is altered to achieve optimal wave drags in a series of Mach numbers ranging from 1.1 to 1.7, during both acceleration and deceleration conditions. The optimized biplane airfoils dramatically reduce the effects of the choked flow and flow-hysteresis phenomena, while maintaining a certain degree of favorable shockwave interaction effects at the design Mach number. Compared to a diamond-shaped single airfoil of the same total thickness, the wave drag of our optimized biplane is lower at almost all Mach numbers, and is significantly lower at the design Mach number. In addition, by performing a Navier-Stokes solution for the optimized airfoil, it is verified that the optimized biplane improves

  6. Adjoint Sensitivity Analysis of Orbital Mechanics: Application to Computations of Observables' Partials with Respect to Harmonics of the Planetary Gravity Fields

    NASA Technical Reports Server (NTRS)

    Ustinov, Eugene A.; Sunseri, Richard F.

    2005-01-01

    An approach is presented to the inversion of gravity fields based on evaluation of partials of observables with respect to gravity harmonics using the solution of the adjoint problem of orbital dynamics of the spacecraft. The corresponding adjoint operator is derived directly from the linear operator of the linearized forward problem of orbital dynamics. The resulting adjoint problem is similar to the forward problem and can be solved by the same methods. For a given highest degree N of gravity harmonics desired, this method involves integration of N adjoint solutions, as compared to integration of N^2 partials of the forward solution with respect to gravity harmonics in the conventional approach. Thus, for higher-resolution gravity models, this approach becomes increasingly more effective in terms of computer resources as compared to the approach based on the solution of the forward problem of orbital dynamics.

  7. Coupling of Monte Carlo adjoint leakages with three-dimensional discrete ordinates forward fluences

    SciTech Connect

    Slater, C.O.; Lillie, R.A.; Johnson, J.O.; Simpson, D.B.

    1998-04-01

    A computer code, DRC3, has been developed for coupling Monte Carlo adjoint leakages with three-dimensional discrete ordinates forward fluences in order to solve a special category of geometrically-complex deep penetration shielding problems. The code extends the capabilities of earlier methods that coupled Monte Carlo adjoint leakages with two-dimensional discrete ordinates forward fluences. The problems involve the calculation of fluences and responses in a perturbation to an otherwise simple two- or three-dimensional radiation field. In general, the perturbation complicates the geometry such that it cannot be modeled exactly using any of the discrete ordinates geometry options and thus a direct discrete ordinates solution is not possible. Also, the calculation of radiation transport from the source to the perturbation involves deep penetration. One approach to solving such problems is to perform the calculations in three steps: (1) a forward discrete ordinates calculation, (2) a localized adjoint Monte Carlo calculation, and (3) a coupling of forward fluences from the first calculation with adjoint leakages from the second calculation to obtain the response of interest (fluence, dose, etc.). A description of this approach is presented along with results from test problems used to verify the method. The test problems that were selected could also be solved directly by the discrete ordinates method. The good agreement between the DRC3 results and the direct-solution results verify the correctness of DRC3.

  8. Development of CO2 inversion system based on the adjoint of the global coupled transport model

    NASA Astrophysics Data System (ADS)

    Belikov, Dmitry; Maksyutov, Shamil; Chevallier, Frederic; Kaminski, Thomas; Ganshin, Alexander; Blessing, Simon

    2014-05-01

    We present the development of an inverse modeling system employing an adjoint of the global coupled transport model consisting of the National Institute for Environmental Studies (NIES) Eulerian transport model (TM) and the Lagrangian plume diffusion model (LPDM) FLEXPART. NIES TM is a three-dimensional atmospheric transport model, which solves the continuity equation for a number of atmospheric tracers on a grid spanning the entire globe. Spatial discretization is based on a reduced latitude-longitude grid and a hybrid sigma-isentropic coordinate in the vertical. NIES TM uses a horizontal resolution of 2.5°×2.5°. However, to resolve synoptic-scale tracer distributions and to have the ability to optimize fluxes at resolutions of 0.5° and higher, we coupled NIES TM with the Lagrangian model FLEXPART. The Lagrangian component of the forward and adjoint models uses precalculated responses of the observed concentration to the surface fluxes and the 3-D concentration field simulated with the FLEXPART model. NIES TM and FLEXPART are driven by the JRA-25/JCDAS reanalysis dataset. Construction of the adjoint of the Lagrangian part is less complicated, as LPDMs calculate the sensitivity of measurements to the surrounding emissions field by tracking a large number of "particles" backwards in time. Development of the adjoint of the Eulerian part was performed with the automatic differentiation tool Transformation of Algorithms in Fortran (TAF) (http://www.FastOpt.com). This method leads to the discrete adjoint of NIES TM. The main advantage of the discrete adjoint is that the resulting gradients of the numerical cost function are exact, even for nonlinear algorithms. The overall advantages of our method are that: 1. No code modification of the Lagrangian model is required, making it applicable to a combination of the global NIES TM and any Lagrangian model; 2. Once run, the Lagrangian output can be applied to any chemically neutral gas; 3. High-resolution results can be obtained over

  9. Complex generalized minimal residual algorithm for iterative solution of quantum-mechanical reactive scattering equations

    NASA Astrophysics Data System (ADS)

    Chatfield, David C.; Reeves, Melissa S.; Truhlar, Donald G.; Duneczky, Csilla; Schwenke, David W.

    1992-12-01

    Complex dense matrices corresponding to the D + H2 and O + HD reactions were solved using a complex generalized minimal residual (GMRes) algorithm described by Saad and Schultz (1986) and Saad (1990). To provide a test case with a different structure, the H + H2 system was also considered. It is shown that the computational effort for solutions with the GMRes algorithm depends on the dimension of the linear system, the total energy of the scattering problem, and the accuracy criterion. In several cases with dimensions in the range 1110-5632, the GMRes algorithm outperformed the LAPACK direct solver, with speedups for the linear equation solution as large as a factor of 23.

  10. Complex generalized minimal residual algorithm for iterative solution of quantum-mechanical reactive scattering equations

    NASA Technical Reports Server (NTRS)

    Chatfield, David C.; Reeves, Melissa S.; Truhlar, Donald G.; Duneczky, Csilla; Schwenke, David W.

    1992-01-01

    Complex dense matrices corresponding to the D + H2 and O + HD reactions were solved using a complex generalized minimal residual (GMRes) algorithm described by Saad and Schultz (1986) and Saad (1990). To provide a test case with a different structure, the H + H2 system was also considered. It is shown that the computational effort for solutions with the GMRes algorithm depends on the dimension of the linear system, the total energy of the scattering problem, and the accuracy criterion. In several cases with dimensions in the range 1110-5632, the GMRes algorithm outperformed the LAPACK direct solver, with speedups for the linear equation solution as large as a factor of 23.
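
    As a small, self-contained illustration of the iterative-versus-direct comparison described above, the snippet below solves a random, well-conditioned complex linear system with a GMRES routine (SciPy's implementation, used here as an assumption) and checks it against a direct solve; it is not the authors' reactive-scattering code.

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Build a random, diagonally dominated dense complex system (invented data),
# then solve it both directly and iteratively and compare the two answers.
rng = np.random.default_rng(0)
n = 200
A = 2.0 * np.eye(n) + 0.02 * (rng.standard_normal((n, n))
                              + 1j * rng.standard_normal((n, n)))
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

x_direct = np.linalg.solve(A, b)     # direct (LAPACK-style) solve
x_iter, info = gmres(A, b)           # GMRES; info == 0 signals convergence

print(info, np.linalg.norm(x_iter - x_direct) / np.linalg.norm(x_direct))
```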

  11. A general algorithm for the solution of Kepler's equation for elliptic orbits

    NASA Technical Reports Server (NTRS)

    Ng, E. W.

    1979-01-01

    An efficient algorithm is presented for the solution of Kepler's equation f(E)=E-M-e sin E=0, where e is the eccentricity, M the mean anomaly and E the eccentric anomaly. This algorithm is based on simple initial approximations that are cubics in M, and an iterative scheme that is a slight generalization of the Newton-Raphson method. Extensive testing of this algorithm has been performed on the UNIVAC 1108 computer. Solutions for 20,000 pairs of values of e and M show that for single precision, 42.0% of the cases require one iteration, 57.8% two and 0.2% three. For double precision one additional iteration is required.
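
    A compact version of the scheme described above is sketched below; for simplicity it uses the starter E0 = M + e*sin(M) rather than the paper's cubic-in-M initial approximations, so the iteration counts quoted in the abstract do not apply to this illustration.

```python
import numpy as np

def solve_kepler(M, e, tol=1e-12, max_iter=20):
    """Newton-Raphson solution of Kepler's equation f(E) = E - M - e*sin(E) = 0.

    The simple starter E0 = M + e*sin(M) is used here; the paper's cubic-in-M
    starters converge in fewer iterations.
    """
    E = M + e * np.sin(M)
    for _ in range(max_iter):
        f = E - M - e * np.sin(E)
        if abs(f) < tol:
            break
        E -= f / (1.0 - e * np.cos(E))   # Newton step: f'(E) = 1 - e*cos(E)
    return E

M, e = 2.0, 0.3                          # mean anomaly [rad], eccentricity
E = solve_kepler(M, e)
print(E, E - e * np.sin(E) - M)          # second number (residual) should be ~0
```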

  12. The adjoint sensitivity method of global electromagnetic induction for CHAMP magnetic data

    NASA Astrophysics Data System (ADS)

    Martinec, Zdeněk; Velímský, Jakub

    2009-12-01

    An existing time-domain spectral-finite element approach for the forward modelling of electromagnetic induction vector data as measured by the CHAMP satellite is, in this paper, supplemented by a new method of computing the sensitivity of the CHAMP electromagnetic induction data to the Earth's mantle electrical conductivity, which we term the adjoint sensitivity method. The forward and adjoint initial boundary-value problems, both solved in the time domain, are identical, except for the specification of prescribed boundary conditions. The respective boundary-value data at the satellite's altitude are the X magnetic component measured by the CHAMP vector magnetometer along the satellite track for the forward method and the difference between the measured and predicted Z magnetic component for the adjoint method. The squares of these differences summed up over all CHAMP tracks determine the misfit. The sensitivities of the CHAMP data, that is the partial derivatives of the misfit with respect to mantle conductivity parameters, are then obtained by the scalar product of the forward and adjoint solutions, multiplied by the gradient of the conductivity and integrated over all CHAMP tracks. Such exactly determined sensitivities are checked against numerical differentiation of the misfit, and good agreement is obtained. The attractiveness of the adjoint method lies in the fact that the adjoint sensitivities are calculated for the price of only an additional forward calculation, regardless of the number of conductivity parameters. However, since the adjoint solution proceeds backwards in time, the forward solution must be stored at each time step, leading to memory requirements that are linear with respect to the number of steps undertaken. Having determined the sensitivities, we apply the conjugate gradient method to infer 1-D and 2-D conductivity structures of the Earth based on the CHAMP residual time series (after the subtraction of static field and secular variations

  13. Utility of a finite element solution algorithm for initial-value problems

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Soliman, M. O.

    1979-01-01

    The Galerkin criterion within a finite element Weighted Residuals formulation is employed to establish an implicit solution algorithm for an initial-value partial differential equation. Numerical solutions of a transient parabolic and a hyperbolic equation, obtained using linear, quadratic and two cubic finite element basis functions, are employed to quantify accuracy and to confirm and refine theoretical convergence rate estimates. The linear basis algorithm for the hyperbolic equation displays excellent accuracy on a coarse computational grid and a high-order convergence rate with discretization refinement. Good accuracy and a strong convergence rate in surface flux are determined for a nonhomogeneous Neumann boundary constraint applied to a parabolic equation. The results amply demonstrate the impact of the nondiagonal finite element initial-value matrix structure on solution accuracy and/or convergence rate.

  14. A structured multi-block solution-adaptive mesh algorithm with mesh quality assessment

    NASA Technical Reports Server (NTRS)

    Ingram, Clint L.; Laflin, Kelly R.; Mcrae, D. Scott

    1995-01-01

    The dynamic solution adaptive grid algorithm, DSAGA3D, is extended to automatically adapt 2-D structured multi-block grids, including adaption of the block boundaries. The extension is general, requiring only input data concerning block structure, connectivity, and boundary conditions. Imbedded grid singular points are permitted, but must be prevented from moving in space. Solutions for workshop cases 1 and 2 are obtained on multi-block grids and illustrate both increased resolution of and alignment with the solution. A mesh quality assessment criterion is proposed to determine how well a given mesh resolves and aligns with the solution obtained upon it. The criterion is used to evaluate the grid quality for solutions of workshop case 6 obtained on both static and dynamically adapted grids. The results indicate that this criterion shows promise as a means of evaluating resolution.

  15. A deterministic annealing algorithm for approximating a solution of the min-bisection problem.

    PubMed

    Dang, Chuangyin; Ma, Wei; Liang, Jiye

    2009-01-01

    The min-bisection problem is an NP-hard combinatorial optimization problem. In this paper an equivalent linearly constrained continuous optimization problem is formulated and an algorithm is proposed for approximating its solution. The algorithm is derived from the introduction of a logarithmic-cosine barrier function, where the barrier parameter behaves as temperature in an annealing procedure and decreases from a sufficiently large positive number to zero. The algorithm searches for a better solution in a feasible descent direction, which has a desired property that lower and upper bounds are always satisfied automatically if the step length is a number between zero and one. We prove that the algorithm converges to at least a local minimum point of the problem if a local minimum point of the barrier problem is generated for a sequence of descending values of the barrier parameter with a limit of zero. Numerical results show that the algorithm is much more efficient than two of the best existing heuristic methods for the min-bisection problem, Kernighan-Lin method with multiple starting points (MSKL) and multilevel graph partitioning scheme (MLGP).

  16. Aerodynamic Shape Optimization using an Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, both single and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings including multi-objective and multi-discipline solutions that lead to the generation of Pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.

  17. Aerodynamic Shape Optimization using an Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    A method for aerodynamic shape optimization based on an evolutionary algorithm approach is presented and demonstrated. Results are presented for a number of model problems to assess the effect of algorithm parameters on convergence efficiency and reliability. A transonic viscous airfoil optimization problem, both single and two-objective variations, is used as the basis for a preliminary comparison with an adjoint-gradient optimizer. The evolutionary algorithm is coupled with a transonic full potential flow solver and is used to optimize the inviscid flow about transonic wings including multi-objective and multi-discipline solutions that lead to the generation of Pareto fronts. The results indicate that the evolutionary algorithm approach is easy to implement, flexible in application and extremely reliable.

  18. Quasi-static solution algorithms for kinematically/materially nonlinear thermomechanical problems

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Pai, S. S.

    1984-01-01

    This paper develops an algorithmic solution strategy which allows the handling of positive/indefinite stiffness characteristics associated with the pre- and post-buckling of structures subject to complex thermomechanical loading fields. The flexibility of the procedure is such that it can be applied to both finite difference and element-type simulations. Due to the generality of the algorithmic approach developed, both kinematic and thermal/mechanical type material nonlinearity including inelastic effects can be treated. This includes the possibility of handling completely general thermomechanical boundary conditions. To demonstrate the scheme, the results of several benchmark problems are presented.

  19. Adaptive-mesh-based algorithm for fluorescence molecular tomography using an analytical solution.

    PubMed

    Wang, Daifa; Song, Xiaolei; Bai, Jing

    2007-07-23

    Fluorescence molecular tomography (FMT) has become an important method for in-vivo imaging of small animals. It has been widely used for tumor genesis, cancer detection, metastasis, drug discovery, and gene therapy. In this study, an algorithm for FMT is proposed to obtain accurate and fast reconstruction by combining an adaptive mesh refinement technique and an analytical solution of diffusion equation. Numerical studies have been performed on a parallel plate FMT system with matching fluid. The reconstructions obtained show that the algorithm is efficient in computation time, and they also maintain image quality.

  20. Automating adjoint wave-equation travel-time tomography using scientific workflow

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaofeng; Chen, Po; Pullammanappallil, Satish

    2013-10-01

    Recent advances in commodity high-performance computing technology have dramatically reduced the computational cost for solving the seismic wave equation in complex earth structure models. As a consequence, wave-equation-based seismic tomography techniques are being actively developed and gradually adopted in routine subsurface seismic imaging practices. Wave-equation travel-time tomography is a seismic tomography technique that inverts cross-correlation travel-time misfits using full-wave Fréchet kernels computed by solving the wave equation. This technique can be implemented very efficiently using the adjoint method, in which the misfits are back-propagated from the receivers (i.e., seismometers) to produce the adjoint wave-field and the interaction between the adjoint wave-field and the forward wave-field from the seismic source gives the gradient of the objective function. Once the gradient is available, a gradient-based optimization algorithm can then be adopted to produce an optimal earth structure model that minimizes the objective function. This methodology is conceptually straightforward, but its implementation in practical situations is highly complex, error-prone and computationally demanding. In this study, we demonstrate the feasibility of automating wave-equation travel-time tomography based on the adjoint method using Kepler, an open-source software package for designing, managing and executing scientific workflows. The workflow technology allows us to abstract away much of the complexity involved in the implementation in a manner that is both robust and scalable. Our automated adjoint wave-equation travel-time tomography package has been successfully applied on a real active-source seismic dataset.

  1. Stochastic coalescence in finite systems: an algorithm for the numerical solution of the multivariate master equation.

    NASA Astrophysics Data System (ADS)

    Alfonso, Lester; Zamora, Jose; Cruz, Pedro

    2015-04-01

    The stochastic approach to coagulation considers the coalescence process going on in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results were obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial conditions is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with an excellent correspondence between the analytical and numerical solutions. In order to increase the speed of the algorithm, software parallelization techniques based on the OpenMP standard were used, along with an implementation designed to take advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.

  2. A finite element solution algorithm for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1974-01-01

    A finite element solution algorithm is established for the two-dimensional Navier-Stokes equations governing the steady-state kinematics and thermodynamics of a variable viscosity, compressible multiple-species fluid. For an incompressible fluid, the motion may be transient as well. The primitive dependent variables are replaced by a vorticity-streamfunction description valid in domains spanned by rectangular, cylindrical and spherical coordinate systems. Use of derived variables provides a uniformly elliptic partial differential equation description for the Navier-Stokes system, for which the finite element algorithm is established. Explicit non-linearity is accepted by the theory, since no pseudo-variational principles are employed, and there is no requirement for either computational mesh or solution domain closure regularity. Boundary condition constraints on the normal flux and tangential distribution of all computational variables, as well as velocity, are routinely piecewise enforceable on domain closure segments arbitrarily oriented with respect to a global reference frame.

  3. Solution algorithms for nonlinear transient heat conduction analysis employing element-by-element iterative strategies

    NASA Technical Reports Server (NTRS)

    Winget, J. M.; Hughes, T. J. R.

    1985-01-01

    The particular problems investigated in the present study arise from nonlinear transient heat conduction. One of two types of nonlinearities considered is related to a material temperature dependence which is frequently needed to accurately model behavior over the range of temperature of engineering interest. The second nonlinearity is introduced by radiation boundary conditions. The finite element equations arising from the solution of nonlinear transient heat conduction problems are formulated. The finite element matrix equations are temporally discretized, and a nonlinear iterative solution algorithm is proposed. Algorithms for solving the linear problem are discussed, taking into account the form of the matrix equations, Gaussian elimination, cost, and iterative techniques. Attention is also given to approximate factorization, implementational aspects, and numerical results.

  4. Investigation of ALEGRA shock hydrocode algorithms using an exact free surface jet flow solution.

    SciTech Connect

    Hanks, Bradley Wright; Robinson, Allen C.

    2014-01-01

    Computational testing of the arbitrary Lagrangian-Eulerian shock physics code, ALEGRA, is presented using an exact solution that is very similar to a shaped charge jet flow. The solution is a steady, isentropic, subsonic free surface flow with significant compression and release and is provided as a steady state initial condition. There should be no shocks and no entropy production throughout the problem. The purpose of this test problem is to present a detailed and challenging computation in order to provide evidence for algorithmic strengths and weaknesses in ALEGRA which should be examined further. The results of this work are intended to be used to guide future algorithmic improvements in the spirit of test-driven development processes.

  5. Dynamics analysis of electrodynamic satellite tethers. Equations of motion and numerical solution algorithms for the tether

    NASA Technical Reports Server (NTRS)

    Nacozy, P. E.

    1984-01-01

    The equations of motion are developed for a perfectly flexible, inelastic tether with a satellite at its extremity. The tether is attached to a space vehicle in orbit. The tether is allowed to possess electrical conductivity. A numerical solution algorithm to provide the motion of the tether and satellite system is presented. The resulting differential equations can be solved by various existing standard numerical integration computer programs. The resulting differential equations allow the introduction of approximations that can lead to analytical, approximate general solutions. The differential equations allow more dynamical insight of the motion.

  6. Parallelization of an Adaptive Multigrid Algorithm for Fast Solution of Finite Element Structural Problems

    SciTech Connect

    Crane, N K; Parsons, I D; Hjelmstad, K D

    2002-03-21

    Adaptive mesh refinement selectively subdivides the elements of a coarse user supplied mesh to produce a fine mesh with reduced discretization error. Effective use of adaptive mesh refinement coupled with an a posteriori error estimator can produce a mesh that solves a problem to a given discretization error using far fewer elements than uniform refinement. A geometric multigrid solver uses increasingly finer discretizations of the same geometry to produce a very fast and numerically scalable solution to a set of linear equations. Adaptive mesh refinement is a natural method for creating the different meshes required by the multigrid solver. This paper describes the implementation of a scalable adaptive multigrid method on a distributed memory parallel computer. Results are presented that demonstrate the parallel performance of the methodology by solving a linear elastic rocket fuel deformation problem on an SGI Origin 3000. Two challenges must be met when implementing adaptive multigrid algorithms on massively parallel computing platforms. First, although the fine mesh for which the solution is desired may be large and scaled to the number of processors, the multigrid algorithm must also operate on much smaller fixed-size data sets on the coarse levels. Second, the mesh must be repartitioned as it is adapted to maintain good load balancing. In an adaptive multigrid algorithm, separate mesh levels may require separate partitioning, further complicating the load balance problem. This paper shows that, when the proper optimizations are made, parallel adaptive multigrid algorithms perform well on machines with several hundreds of processors.

  7. A join algorithm for combining AND parallel solutions in AND/OR parallel systems

    SciTech Connect

    Ramkumar, B.; Kale, L.V.

    1992-02-01

    When two or more literals in the body of a Prolog clause are solved in (AND) parallel, their solutions need to be joined to compute solutions for the clause. This is often a difficult problem in parallel Prolog systems that exploit OR and independent AND parallelism in Prolog programs. In several AND/OR parallel systems proposed recently, this problem is side-stepped at the cost of unexploited OR parallelism in the program, in part due to the complexity of the backtracking algorithm beneath AND parallel branches. In some cases, the data dependency graphs used by these systems cannot represent all the exploitable independent AND parallelism known at compile time. In this paper, we describe the compile time analysis for an optimized join algorithm for supporting independent AND parallelism in logic programs efficiently without leaving any OR parallelism unexploited. We then discuss how this analysis can be used to yield very efficient runtime behavior. We also discuss problems associated with a tree representation of the search space when arbitrarily complex data dependency graphs are permitted. We describe how these problems can be resolved by mapping the search space onto data dependency graphs themselves. The algorithm has been implemented in a compiler for parallel Prolog based on the reduce-OR process model. The algorithm is suitable for the implementation of AND/OR systems on both shared and nonshared memory machines. Performance on benchmark programs is reported.

  8. Adjoint sensitivity analysis of an ultrawideband antenna

    SciTech Connect

    Stephanson, M B; White, D A

    2011-07-28

    The frequency domain finite element method using H(curl)-conforming finite elements is a robust technique for full-wave analysis of antennas. As computers become more powerful, it is becoming feasible to not only predict antenna performance, but also to compute the sensitivity of antenna performance with respect to multiple parameters. This sensitivity information can then be used for optimization of the design or specification of manufacturing tolerances. In this paper we review the Adjoint Method for sensitivity calculation, and apply it to the problem of optimizing an ultrawideband antenna.

  9. Elementary operators on self-adjoint operators

    NASA Astrophysics Data System (ADS)

    Molnar, Lajos; Semrl, Peter

    2007-03-01

    Let H be a Hilbert space and let 𝒜 and ℬ be standard *-operator algebras on H. Denote by 𝒜_s and ℬ_s the sets of all self-adjoint operators in 𝒜 and ℬ, respectively. Assume that M : 𝒜_s → ℬ_s and M* : ℬ_s → 𝒜_s are surjective maps such that M(AM*(B)A) = M(A)BM(A) and M*(BM(A)B) = M*(B)AM*(B) for every pair A ∈ 𝒜_s, B ∈ ℬ_s. Then there exist an invertible bounded linear or conjugate-linear operator T on H and a constant c ∈ {-1, 1} such that M(A) = cTAT* for all A ∈ 𝒜_s, and M*(B) = cT*BT for all B ∈ ℬ_s.

  10. Hybrid solution of stochastic optimal control problems using Gauss pseudospectral method and generalized polynomial chaos algorithms

    NASA Astrophysics Data System (ADS)

    Cottrill, Gerald C.

    A hybrid numerical algorithm combining the Gauss Pseudospectral Method (GPM) with a Generalized Polynomial Chaos (gPC) method to solve nonlinear stochastic optimal control problems with constraint uncertainties is presented. The GPM and gPC have been shown to be spectrally accurate numerical methods for solving deterministic optimal control problems and stochastic differential equations, respectively. The gPC uses collocation nodes to sample the random space, which are then inserted into the differential equations and solved by applying standard differential equation methods. The resulting set of deterministic solutions is used to characterize the distribution of the solution by constructing a polynomial representation of the output as a function of uncertain parameters. Optimal control problems are especially challenging to solve since they often include path constraints, bounded controls, boundary conditions, and require solutions that minimize a cost functional. Adding random parameters can make these problems even more challenging. The hybrid algorithm presented in this dissertation is the first time the GPM and gPC algorithms have been combined to solve optimal control problems with random parameters. Using the GPM in the gPC construct provides minimum cost deterministic solutions used in stochastic computations that meet path, control, and boundary constraints, thus extending current gPC methods to be applicable to stochastic optimal control problems. The hybrid GPM-gPC algorithm was applied to two concept demonstration problems: a nonlinear optimal control problem with multiplicative uncertain elements and a trajectory optimization problem simulating an aircraft flying through a threat field where exact locations of the threats are unknown. The results show that the expected value, variance, and covariance statistics of the polynomial output function approximations of the state, control, cost, and terminal time variables agree with Monte-Carlo simulation

  11. Application of Harmony Search algorithm to the solution of groundwater management models

    NASA Astrophysics Data System (ADS)

    Tamer Ayvaz, M.

    2009-06-01

    This study proposes a groundwater resources management model in which the solution is performed through a combined simulation-optimization model. A modular three-dimensional finite difference groundwater flow model, MODFLOW, is used as the simulation model. This model is then combined with a Harmony Search (HS) optimization algorithm, which is based on the musical process of searching for a perfect state of harmony. The performance of the proposed HS based management model is tested on three separate groundwater management problems: (i) maximization of total pumping from an aquifer (steady-state); (ii) minimization of the total pumping cost to satisfy the given demand (steady-state); and (iii) minimization of the pumping cost to satisfy the given demand for multiple management periods (transient). The sensitivity of the HS algorithm is evaluated by performing a sensitivity analysis which aims to determine the impact of related solution parameters on convergence behavior. The results show that HS yields nearly the same or better solutions than the previous solution methods and may be used to solve management problems in groundwater modeling.
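
    A bare-bones continuous Harmony Search loop is sketched below to make the memory-consideration, pitch-adjustment, and random-selection steps concrete. The parameter values and the toy objective are invented for illustration, whereas the paper evaluates candidate pumping schedules with a MODFLOW simulation inside the objective.

```python
import numpy as np

def harmony_search(obj, lower, upper, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=5000, seed=0):
    """Minimal continuous Harmony Search sketch on a toy objective function."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = len(lower)
    hm = rng.uniform(lower, upper, size=(hms, dim))      # harmony memory
    cost = np.array([obj(h) for h in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                      # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:                   # pitch adjustment
                    new[j] += bw * (upper[j] - lower[j]) * rng.uniform(-1, 1)
            else:                                        # random selection
                new[j] = rng.uniform(lower[j], upper[j])
        new = np.clip(new, lower, upper)
        c = obj(new)
        worst = np.argmax(cost)
        if c < cost[worst]:                              # replace worst harmony
            hm[worst], cost[worst] = new, c
    best = np.argmin(cost)
    return hm[best], cost[best]

sol, val = harmony_search(lambda x: np.sum((x - 1.0) ** 2), [-5, -5], [5, 5])
print(sol, val)   # should approach [1, 1] with a small objective value
```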

  12. A family of Eulerian-Lagrangian localized adjoint methods for multi-dimensional advection-reaction equations

    SciTech Connect

    Wang, H.; Man, S.; Ewing, R.E.; Qin, G.; Lyons, S.L.; Al-Lawatia, M.

    1999-06-10

    Many difficult problems arise in the numerical simulation of fluid flow processes within porous media in petroleum reservoir simulation and in subsurface contaminant transport and remediation. The authors develop a family of Eulerian-Lagrangian localized adjoint methods for the solution of the initial-boundary value problems for first-order advection-reaction equations on general multi-dimensional domains. Different tracking algorithms, including the Euler and Runge-Kutta algorithms, are used. The derived schemes, which are fully mass conservative, naturally incorporate inflow boundary conditions into their formulations and do not need any artificial outflow boundary conditions. Moreover, they have regularly structured, well-conditioned, symmetric, and positive-definite coefficient matrices, which can be efficiently solved by the conjugate gradient method in an optimal order number of iterations without any preconditioning needed. Numerical results are presented to compare the performance of the ELLAM schemes with many well-studied and widely used methods, including the upwind finite difference method, the Galerkin and the Petrov-Galerkin finite element methods with backward-Euler or Crank-Nicolson temporal discretization, the streamline diffusion finite element methods, the monotonic upstream-centered scheme for conservation laws (MUSCL), and the Minmod scheme.

  13. Supersymmetric descendants of self-adjointly extended quantum mechanical Hamiltonians

    SciTech Connect

    Al-Hashimi, M.H.; Salman, M.; Shalaby, A.; Wiese, U.-J.

    2013-10-15

    We consider the descendants of self-adjointly extended Hamiltonians in supersymmetric quantum mechanics on a half-line, on an interval, and on a punctured line or interval. While there is a 4-parameter family of self-adjointly extended Hamiltonians on a punctured line, only a 3-parameter sub-family has supersymmetric descendants that are themselves self-adjoint. We also address the self-adjointness of an operator related to the supercharge, and point out that only a sub-class of its most general self-adjoint extensions is physical. Besides a general characterization of self-adjoint extensions and their supersymmetric descendants, we explicitly consider concrete examples, including a particle in a box with general boundary conditions, with and without an additional point interaction. We also discuss bulk-boundary resonances and their manifestation in the supersymmetric descendant. -- Highlights: •Self-adjoint extension theory and contact interactions. •Application of self-adjoint extensions to supersymmetry. •Contact interactions in finite volume with Robin boundary condition.

  14. The compressible adjoint equations in geodynamics: equations and numerical assessment

    NASA Astrophysics Data System (ADS)

    Ghelichkhan, Siavash; Bunge, Hans-Peter

    2016-04-01

    The adjoint method is a powerful means to obtain gradient information in a mantle convection model relative to past flow structure. While the adjoint equations in geodynamics have been derived for the conservation equations of mantle flow in their incompressible form, the applicability of this approximation to Earth is limited, because density increases by almost a factor of two from the surface to the Core Mantle Boundary. Here we introduce the compressible adjoint equations for the conservation equations in the anelastic-liquid approximation. Our derivation applies an operator formulation in Hilbert spaces, to connect to recent work in seismology (Fichtner et al (2006)) and geodynamics (Horbach et al (2014)), where the approach was used to derive the adjoint equations for the wave equation and incompressible mantle flow. We present numerical tests of the newly derived equations based on twin experiments, focusing on three simulations. A first, termed Compressible, assumes the compressible forward and adjoint equations, and represents the consistent means of including compressibility effects. A second, termed Mixed, applies the compressible forward equation, but ignores compressibility effects in the adjoint equations, where the incompressible equations are used instead. A third simulation, termed Incompressible, neglects compressibility effects entirely in the forward and adjoint equations relative to the reference twin. The compressible and mixed formulations successfully restore earlier mantle flow structure, while the incompressible formulation yields noticeable artifacts. Our results suggest the use of a compressible formulation, when applying the adjoint method to seismically derived mantle heterogeneity structure.

  15. Improved Adjoint-Operator Learning For A Neural Network

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1995-01-01

    An improved method of adjoint-operator learning reduces the amount of computation and associated computational memory needed to make an electronic neural network learn a temporally varying pattern (e.g., to recognize a moving object in an image) in real time. The method is an extension of the method described in "Adjoint-Operator Learning for a Neural Network" (NPO-18352).

  16. The solution of the Elrod algorithm for a dynamically loaded journal bearing using multigrid techniques

    NASA Technical Reports Server (NTRS)

    Woods, Claudia M.; Brewe, David E.

    1988-01-01

    A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed utilizing a multigrid iteration technique. The method is compared with a noniterative approach in terms of computational time and accuracy. The computational model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via a smeared mass or striated flow extending to both surfaces in the film gap. The mixed nature of the equations (parabolic in the full film zone and hyperbolic in the cavitated zone) coupled with the dynamic aspects of the problem create interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.

  17. The solution of the Elrod algorithm for a dynamically loaded journal bearing using multigrid techniques

    NASA Technical Reports Server (NTRS)

    Woods, C. M.; Brewe, D. E.

    1989-01-01

    A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed utilizing a multigrid iteration technique. The method is compared with a noniterative approach in terms of computational time and accuracy. The computational model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via a smeared mass or striated flow extending to both surfaces in the film gap. The mixed nature of the equations (parabolic in the full film zone and hyperbolic in the cavitated zone) coupled with the dynamic aspects of the problem create interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.

  18. A Proposed Implementation of Tarjan's Algorithm for Scheduling the Solution Sequence of Systems of Federated Models

    SciTech Connect

    McNunn, Gabriel S; Bryden, Kenneth M

    2013-01-01

    Tarjan's algorithm schedules the solution of systems of equations by noting the coupling and grouping between the equations. Simulating complex systems, e.g., advanced power plants, aerodynamic systems, or the multi-scale design of components, requires the linkage of large groups of coupled models. Currently, this is handled manually in systems modeling packages. That is, the analyst explicitly defines both the method and solution sequence necessary to couple the models. In small systems of models and equations this works well. However, as additional detail is needed across systems and across scales, the number of models grows rapidly. This precludes the manual assembly of large systems of federated models, particularly in systems composed of high fidelity models. This paper examines extending Tarjan's algorithm from sets of equations to sets of models. The proposed implementation of the algorithm is demonstrated using a small one-dimensional system of federated models representing the heat transfer and thermal stress in a gas turbine blade with thermal barrier coating. Enabling the rapid assembly and substitution of different models permits the rapid turnaround needed to support the “what-if” kinds of questions that arise in engineering design.
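
    To make the scheduling idea concrete, here is a minimal Tarjan strongly-connected-components routine applied to a hypothetical model dependency graph; components come out in an order in which they can be solved, with multi-model components flagged for simultaneous (iterative) solution. The graph and model names are invented for illustration.

      def tarjan_scc(graph):
          """SCCs of a dependency graph (node -> nodes it depends on), emitted
          in an order where every model's dependencies appear first."""
          counter = [0]
          index, lowlink, on_stack = {}, {}, {}
          stack, components = [], []

          def strongconnect(v):
              index[v] = lowlink[v] = counter[0]
              counter[0] += 1
              stack.append(v)
              on_stack[v] = True
              for w in graph.get(v, ()):
                  if w not in index:
                      strongconnect(w)
                      lowlink[v] = min(lowlink[v], lowlink[w])
                  elif on_stack.get(w):
                      lowlink[v] = min(lowlink[v], index[w])
              if lowlink[v] == index[v]:          # v is the root of a component
                  component = []
                  while True:
                      w = stack.pop()
                      on_stack[w] = False
                      component.append(w)
                      if w == v:
                          break
                  components.append(component)

          for v in graph:
              if v not in index:
                  strongconnect(v)
          return components

      # Hypothetical federated models: blade heat transfer and coating are mutually coupled
      models = {"heat": ["coating"], "coating": ["heat"], "stress": ["heat"]}
      for group in tarjan_scc(models):
          plan = "solve iteratively" if len(group) > 1 else "solve directly"
          print(group, "->", plan)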

  19. Tsunami waveform inversion by adjoint methods

    NASA Astrophysics Data System (ADS)

    Pires, Carlos; Miranda, Pedro M. A.

    2001-09-01

    An adjoint method for tsunami waveform inversion is proposed, as an alternative to the technique based on Green's functions of the linear long wave model. The method has the advantage of being able to use the nonlinear shallow water equations, or other appropriate equation sets, and to optimize an initial state given as a linear or nonlinear function of any set of free parameters. This last facility is used to perform explicit optimization of the focal fault parameters, characterizing the initial sea surface displacement of tsunamigenic earthquakes. The proposed methodology is validated with experiments using synthetic data, showing the possibility of recovering all relevant details of a tsunami source from tide gauge observations, provided that the adjoint method is constrained in an appropriate manner. It is found, as in other methods, that the inversion skill of tsunami sources increases with the azimuthal and temporal coverage of assimilated tide gauge stations; furthermore, it is shown that the eigenvalue analysis of the Hessian matrix of the cost function provides a consistent and useful methodology to choose the subset of independent parameters that can be inverted with a given dataset of observations and to evaluate the error of the inversion process. The method is also applied to real tide gauge series, from the tsunami of the February 28, 1969, Gorringe Bank earthquake, suggesting some reasonable changes to the assumed focal parameters of that event. It is suggested that the method proposed may be able to deal with transient tsunami sources such as those generated by submarine landslides.

  20. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process

    PubMed Central

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-01-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications. PMID:20577570

  1. An evaluation of solution algorithms and numerical approximation methods for modeling an ion exchange process

    NASA Astrophysics Data System (ADS)

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  2. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process.

    PubMed

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H; Miller, Cass T

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  3. Adjoint of the global Eulerian-Lagrangian coupled atmospheric transport model (A-GELCA v1.0): development and validation

    NASA Astrophysics Data System (ADS)

    Belikov, Dmitry A.; Maksyutov, Shamil; Yaremchuk, Alexey; Ganshin, Alexander; Kaminski, Thomas; Blessing, Simon; Sasakawa, Motoki; Gomez-Pelaez, Angel J.; Starchenko, Alexander

    2016-02-01

    We present the development of the Adjoint of the Global Eulerian-Lagrangian Coupled Atmospheric (A-GELCA) model that consists of the National Institute for Environmental Studies (NIES) model as an Eulerian three-dimensional transport model (TM), and FLEXPART (FLEXible PARTicle dispersion model) as the Lagrangian Particle Dispersion Model (LPDM). The forward tangent linear and adjoint components of the Eulerian model were constructed directly from the original NIES TM code using an automatic differentiation tool known as TAF (Transformation of Algorithms in Fortran; http://www.FastOpt.com), with additional manual pre- and post-processing aimed at improving transparency and clarity of the code and optimizing the performance of the computing, including MPI (Message Passing Interface). The Lagrangian component did not require any code modification, as LPDMs are self-adjoint and track a significant number of particles backward in time in order to calculate the sensitivity of the observations to the neighboring emission areas. The constructed Eulerian adjoint was coupled with the Lagrangian component at a time boundary in the global domain. The simulations presented in this work were performed using the A-GELCA model in forward and adjoint modes. The forward simulation shows that the coupled model improves reproduction of the seasonal cycle and short-term variability of CO2. Mean bias and standard deviation for five of the six Siberian sites considered decrease roughly by 1 ppm when using the coupled model. The adjoint of the Eulerian model was shown, through several numerical tests, to be very accurate (mismatches of around ±6 × 10^-14, i.e. within machine epsilon) compared to direct forward sensitivity calculations. The developed adjoint of the coupled model combines the flux conservation and stability of an Eulerian discrete adjoint formulation with the flexibility, accuracy, and high resolution of a Lagrangian backward trajectory formulation. A-GELCA will be incorporated

  4. Quality analysis of the solution produced by dissection algorithms applied to the traveling salesman problem

    SciTech Connect

    Cesari, G.

    1994-12-31

    The aim of this paper is to analyze experimentally the quality of the solution obtained with dissection algorithms applied to the geometric Traveling Salesman Problem. Starting from Karp's results, we apply a divide-and-conquer strategy, first dividing the plane into subregions where we calculate optimal subtours and then merging these subtours to obtain the final tour. The analysis is restricted to problem instances where points are uniformly distributed in the unit square. For relatively small sets of cities we analyze the quality of the solution by calculating the length of the optimal tour and by comparing it with our approximate solution. When the problem instance is too large we perform an asymptotical analysis estimating the length of the optimal tour. We apply the same dissection strategy also to classical heuristics by calculating approximate subtours and by comparing the results with the average quality of the heuristic. Our main result is the estimate of the rate of convergence of the approximate solution to the optimal solution as a function of the number of dissection steps, of the criterion used for the plane division and of the quality of the subtours. We have implemented our programs on MUSIC (MUlti Signal processor system with Intelligent Communication), a Single-Program-Multiple-Data parallel computer with distributed memory developed at the ETH Zurich.
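
    A compact sketch of a dissection-style heuristic for the Euclidean TSP on the unit square, intended only to illustrate the divide-and-conquer structure discussed above; the within-cell subtour construction (nearest neighbour) and the serpentine merge are simplifications, not the paper's optimal-subtour procedure.

      import math
      import random

      def dissection_tour(points, cells_per_side):
          """Bucket points into cells, build a subtour per cell, then visit the
          cells in a serpentine order and concatenate the subtours."""
          buckets = {}
          for p in points:
              key = (min(int(p[0] * cells_per_side), cells_per_side - 1),
                     min(int(p[1] * cells_per_side), cells_per_side - 1))
              buckets.setdefault(key, []).append(p)
          tour = []
          for ix in range(cells_per_side):
              cols = range(cells_per_side) if ix % 2 == 0 else reversed(range(cells_per_side))
              for iy in cols:
                  cell = buckets.get((ix, iy), [])
                  while cell:                      # nearest-neighbour subtour within the cell
                      last = tour[-1] if tour else cell[0]
                      nxt = min(cell, key=lambda q: math.dist(last, q))
                      cell.remove(nxt)
                      tour.append(nxt)
          return tour

      random.seed(0)
      pts = [(random.random(), random.random()) for _ in range(500)]
      tour = dissection_tour(pts, cells_per_side=5)
      length = sum(math.dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))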

  5. Adjoint equations and analysis of complex systems: Application to virus infection modelling

    NASA Astrophysics Data System (ADS)

    Marchuk, G. I.; Shutyaev, V.; Bocharov, G.

    2005-12-01

    Recent development of applied mathematics is characterized by ever increasing attempts to apply the modelling and computational approaches across various areas of the life sciences. The need for a rigorous analysis of the complex system dynamics in immunology has been recognized for more than three decades. The aim of the present paper is to draw attention to the method of adjoint equations. The methodology makes it possible to obtain information about physical processes and to examine the sensitivity of complex dynamical systems. This provides a basis for a better understanding of the causal relationships between the immune system's performance and its parameters and helps to improve the experimental design in the solution of applied problems. We show how the adjoint equations can be used to explain the changes in hepatitis B virus infection dynamics between individual patients.

  6. On substructuring algorithms and solution techniques for the numerical approximation of partial differential equations

    NASA Technical Reports Server (NTRS)

    Gunzburger, M. D.; Nicolaides, R. A.

    1986-01-01

    Substructuring methods are in common use in mechanics problems where typically the associated linear systems of algebraic equations are positive definite. Here these methods are extended to problems which lead to nonpositive definite, nonsymmetric matrices. The extension is based on an algorithm which carries out the block Gauss elimination procedure without the need for interchanges even when a pivot matrix is singular. Examples are provided wherein the method is used in connection with finite element solutions of the stationary Stokes equations and the Helmholtz equation, and dual methods for second-order elliptic equations.

  7. Factor Analysis with EM Algorithm Never Gives Improper Solutions when Sample Covariance and Initial Parameter Matrices Are Proper

    ERIC Educational Resources Information Center

    Adachi, Kohei

    2013-01-01

    Rubin and Thayer ("Psychometrika," 47:69-76, 1982) proposed the EM algorithm for exploratory and confirmatory maximum likelihood factor analysis. In this paper, we prove the following fact: the EM algorithm always gives a proper solution with positive unique variances and factor correlations with absolute values that do not exceed one, when the…

  8. Efficient checkpointing schemes for depletion perturbation solutions on memory-limited architectures

    SciTech Connect

    Stripling, H. F.; Adams, M. L.; Hawkins, W. D.

    2013-07-01

    We describe a methodology for decreasing the memory footprint and machine I/O load associated with the need to access a forward solution during an adjoint solve. Specifically, we are interested in the depletion perturbation equations, where terms in the adjoint Bateman and transport equations depend on the forward flux solution. Checkpointing is the procedure of storing snapshots of the forward solution to disk and using these snapshots to recompute the parts of the forward solution that are necessary for the adjoint solve. For large problems, however, the storage cost of just a few copies of an angular flux vector can exceed the available RAM on the host machine. We propose a methodology that does not checkpoint the angular flux vector; instead, we write and store converged source moments, which are typically of a much lower dimension than the angular flux solution. This reduces the memory footprint and I/O load of the problem, but requires that we perform single sweeps to reconstruct flux vectors on demand. We argue that this trade-off is exactly the kind of algorithm that will scale on advanced, memory-limited architectures. We analyze the cost, in terms of FLOPS and memory footprint, of five checkpointing schemes. We also provide computational results that support the analysis and show that the memory-for-work trade-off does improve time to solution. (authors)
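
    The following toy sketch shows only the generic memory-for-work trade behind such checkpointing schemes, not the authors' transport/Bateman implementation: instead of storing every forward state for the reverse (adjoint) pass, every k-th state is stored and intermediate states are recomputed from the nearest checkpoint on demand.

      def forward_step(state):
          # Placeholder forward update; stands in for one transport/depletion step.
          return 0.5 * state + 1.0

      def run_with_checkpoints(x0, nsteps, stride):
          """Run forward, storing only every `stride`-th state (the checkpoints)."""
          checkpoints, x = {0: x0}, x0
          for n in range(1, nsteps + 1):
              x = forward_step(x)
              if n % stride == 0:
                  checkpoints[n] = x
          return checkpoints

      def forward_state(n, checkpoints, stride):
          """Recompute state n on demand from the nearest earlier checkpoint."""
          base = (n // stride) * stride
          x = checkpoints[base]
          for _ in range(n - base):
              x = forward_step(x)
          return x

      # Reverse sweep touching forward states in reverse order, as an adjoint solve would
      nsteps, stride = 100, 10
      cps = run_with_checkpoints(1.0, nsteps, stride)
      for n in reversed(range(nsteps + 1)):
          xn = forward_state(n, cps, stride)   # recomputed, never all stored at once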

  9. A level-set adjoint-state method for crosswell transmission-reflection traveltime tomography

    NASA Astrophysics Data System (ADS)

    Li, Wenbin; Leung, Shingyu; Qian, Jianliang

    2014-10-01

    We propose a level-set adjoint-state method for crosswell traveltime tomography using both first-arrival transmission and reflection traveltime data. Since our entire formulation is based on solving eikonal and advection equations on finite-difference meshes, our traveltime tomography strategy is carried out without computing rays explicitly. We incorporate reflection traveltime data into the formulation so that possible reflectors (slowness interfaces) in the targeted subsurface model can be recovered as well as the slowness distribution itself. Since a reflector may assume a variety of irregular geometries, we propose to use a level-set function to implicitly parametrize the shape of a reflector. Therefore, a mismatch functional is established to minimize the traveltime data misfit with respect to both the slowness distribution and the level-set function, and the minimization is achieved by using a gradient descent method with gradients computed by solving adjoint state equations. To assess uncertainty or reliability of reconstructed slowness models, we introduce a labelling function to characterize first-arrival ray coverage of the computational domain, and this labelling function satisfies an advection equation. We apply fast-sweeping type methods to solve eikonal, adjoint-state and advection equations arising in our formulation. Numerical examples demonstrate that the proposed algorithm is robust to noise in the measurements, and can recover complicated structure even with little information on the reflector.

  10. Efficient solution of liquid state integral equations using the Newton-GMRES algorithm

    NASA Astrophysics Data System (ADS)

    Booth, Michael J.; Schlijper, A. G.; Scales, L. E.; Haymet, A. D. J.

    1999-06-01

    We present examples of the accurate, robust and efficient solution of Ornstein-Zernike type integral equations which describe the structure of both homogeneous and inhomogeneous fluids. In this work we use the Newton-GMRES algorithm as implemented in the public-domain nonlinear Krylov solvers NKSOL [ P. Brown, Y. Saad, SIAM J. Sci. Stat. Comput. 11 (1990) 450] and NITSOL [ M. Pernice, H.F. Walker, SIAM J. Sci. Comput. 19 (1998) 302]. We compare and contrast this method with more traditional approaches in the literature, using Picard iteration (successive-substitution) and hybrid Newton-Raphson and Picard methods, and a recent vector extrapolation method [ H.H.H. Homeier, S. Rast, H. Krienke, Comput. Phys. Commun. 92 (1995) 188]. We find that both the performance and ease of implementation of these nonlinear solvers recommend them for the solution of this class of problem.
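
    Newton-GMRES solvers of this kind are now available in standard libraries; the sketch below uses SciPy's newton_krylov on a small artificial nonlinear system as a stand-in for an integral-equation closure, rather than the NKSOL/NITSOL setup used in the paper.

      import numpy as np
      from scipy.optimize import newton_krylov

      def residual(x):
          # Toy nonlinear system with nearest-neighbour coupling; a stand-in for
          # a discretized Ornstein-Zernike-type equation, not an actual closure.
          coupling = np.roll(x, 1) + np.roll(x, -1)
          return x + 0.1 * np.exp(-x) * coupling - 1.0

      x0 = np.zeros(200)                               # Picard-style initial guess
      sol = newton_krylov(residual, x0, method="gmres", f_tol=1e-10)
      print("max residual:", np.abs(residual(sol)).max())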

  11. Implementation of a Multichannel Serial Data Streaming Algorithm using the Xilinx Serial RapidIO Solution

    NASA Technical Reports Server (NTRS)

    Doxley, Charles A.

    2016-01-01

    In the current world of applications that use reconfigurable technology implemented on field programmable gate arrays (FPGAs), there is a need for flexible architectures that can grow as the systems evolve. A project has limited resources and a fixed set of requirements that development efforts are tasked to meet. Designers must develop robust solutions that practically meet the current customer demands and also have the ability to grow for future performance. This paper describes the development of a high speed serial data streaming algorithm that allows for transmission of multiple data channels over a single serial link. The technique has the ability to change to meet new applications developed for future design considerations. This approach uses the Xilinx Serial RapidIO LOGICORE Solution to implement a flexible infrastructure to meet the current project requirements with the ability to adapt future system designs.

  12. Introduction of Parallel GPGPU Acceleration Algorithms for the Solution of Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Godoy, William F.; Liu, Xu

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is a recent technique that allows the parallel graphics processing unit (GPU) to accelerate calculations performed sequentially by the central processing unit (CPU). To introduce GPGPU to radiative transfer, the Gauss-Seidel solution of the well-known expressions for 1-D and 3-D homogeneous, isotropic media is selected as a test case. Different algorithms are introduced to balance memory and GPU-CPU communication, critical aspects of GPGPU. Results show that speed-ups of one to two orders of magnitude are obtained when compared to sequential solutions. The underlying value of GPGPU is its potential extension in radiative solvers (e.g., Monte Carlo, discrete ordinates) at a minimal learning curve.

  13. a Generalized Albedo Option for Forward and Adjoint Monte Carlo Calculations

    NASA Astrophysics Data System (ADS)

    Gomes, Itacil Chiari

    1991-02-01

    The advisability of using the albedo procedure for the Monte Carlo solution of deep penetration shielding problems which have ducts and other penetrations is investigated. It is generally accepted that the use of albedo data can dramatically improve the computational efficiency of certain Monte Carlo calculations--however the accuracy of these results may be unacceptable because of lost information during the albedo event and serious errors in the available differential albedo data. This study has been done to evaluate and appropriately modify the MORSE/BREESE package, to develop new methods for generating the required albedo data, and to extend the adjoint capability to the albedo-modified calculations. The major modifications include an option to save for further use information that would be lost at the albedo event, an option to displace the emergent point during an albedo event, and an option to read spatially-dependent albedo data for both forward and adjoint calculations--which includes the emergent point as a new random variable to be selected during an albedo reflection event. The theoretical basis for using TORT-generated forward albedo information to produce adjuncton-albedos is derived. The MORSE/STORM code was developed to perform both forward and adjoint modes of analysis using spatially-dependent albedo data. The results obtained using the MORSE/STORM code package for both forward and adjoint modes were compared with benchmark solutions--excellent agreements along with improved computational efficiencies were achieved, demonstrating the full utilization of the albedo option in the MORSE code.

  14. Aerodynamic design optimization by using a continuous adjoint method

    NASA Astrophysics Data System (ADS)

    Luo, JiaQi; Xiong, JunTao; Liu, Feng

    2014-07-01

    This paper presents the fundamentals of a continuous adjoint method and the applications of this method to the aerodynamic design optimization of both external and internal flows. General formulation of the continuous adjoint equations and the corresponding boundary conditions are derived. With the adjoint method, the complete gradient information needed in the design optimization can be obtained by solving the governing flow equations and the corresponding adjoint equations only once for each cost function, regardless of the number of design parameters. An inverse design of airfoil is firstly performed to study the accuracy of the adjoint gradient and the effectiveness of the adjoint method as an inverse design method. Then the method is used to perform a series of single and multiple point design optimization problems involving the drag reduction of airfoil, wing, and wing-body configuration, and the aerodynamic performance improvement of turbine and compressor blade rows. The results demonstrate that the continuous adjoint method can efficiently and significantly improve the aerodynamic performance of the design in a shape optimization problem.
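
    A minimal discrete analogue of the adjoint gradient argument above (a sketch, not the paper's continuous flow formulation): for a state u satisfying R(u, a) = A(a)u - b = 0 and a cost J(u), one adjoint solve A^T λ = ∂J/∂u yields the gradient with respect to every design parameter in a at once.

      import numpy as np

      def assemble(a):
          # Hypothetical discrete "flow" model A(a) u = b with design parameters a
          n = len(a)
          A = (np.diag(2.0 + a)
               + np.diag(-0.5 * np.ones(n - 1), 1)
               + np.diag(-0.5 * np.ones(n - 1), -1))
          return A, np.ones(n)

      def cost_and_gradient(a):
          A, b = assemble(a)
          u = np.linalg.solve(A, b)           # forward (state) solve
          J = 0.5 * u @ u                     # cost functional J(u)
          lam = np.linalg.solve(A.T, u)       # single adjoint solve: A^T lam = dJ/du
          # dA/da_k is e_k e_k^T, so dR/da_k = e_k * u_k and dJ/da_k = -lam_k * u_k
          return J, -lam * u

      a = np.full(5, 0.1)
      J, grad = cost_and_gradient(a)
      eps, k = 1e-6, 2                        # finite-difference check of one component
      a_pert = a.copy(); a_pert[k] += eps
      J_pert, _ = cost_and_gradient(a_pert)
      print(grad[k], (J_pert - J) / eps)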

  15. Adjoints and Low-rank Covariance Representation

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.

    2000-01-01

    Quantitative measures of the uncertainty of Earth System estimates can be as important as the estimates themselves. Second moments of estimation errors are described by the covariance matrix, whose direct calculation is impractical when the number of degrees of freedom of the system state is large. Ensemble and reduced-state approaches to prediction and data assimilation replace full estimation error covariance matrices by low-rank approximations. The appropriateness of such approximations depends on the spectrum of the full error covariance matrix, whose calculation is also often impractical. Here we examine the situation where the error covariance is a linear transformation of a forcing error covariance. We use operator norms and adjoints to relate the appropriateness of low-rank representations to the conditioning of this transformation. The analysis is used to investigate low-rank representations of the steady-state response to random forcing of an idealized discrete-time dynamical system.

  16. Algorithmic Construction of Exact Solutions for Neutral Static Perfect Fluid Spheres

    NASA Astrophysics Data System (ADS)

    Hansraj, Sudan; Krupanandan, Daniel

    2013-07-01

    Although it ranks amongst the oldest of problems in classical general relativity, the challenge of finding new exact solutions for spherically symmetric perfect fluid spacetimes is still ongoing because of a paucity of solutions which exhibit the necessary qualitative features compatible with observational evidence. The problem amounts to solving a system of three partial differential equations in four variables, which means that any one of four geometric or dynamical quantities must be specified at the outset and the others should follow by integration. The condition of pressure isotropy yields a differential equation that may be interpreted as second-order in one of the space variables or also as first-order Riccati type in the other space variable. This second option has been fruitful in allowing us to construct an algorithm to generate a complete solution to the Einstein field equations once a geometric variable is specified ab initio. We then demonstrate the construction of previously unreported solutions and examine these for physical plausibility as candidates to represent real matter. In particular we demand positive definiteness of pressure and density, as well as a subluminal sound speed. Additionally, we require the existence of a hypersurface of vanishing pressure to identify a radius for the closed distribution of fluid. Finally, we examine the energy conditions. We exhibit models which display all of these elementary physical requirements.

  17. Optimization of the K-means algorithm for the solution of high dimensional instances

    NASA Astrophysics Data System (ADS)

    Pérez, Joaquín; Pazos, Rodolfo; Olivares, Víctor; Hidalgo, Miguel; Ruiz, Jorge; Martínez, Alicia; Almanza, Nelva; González, Moisés

    2016-06-01

    This paper addresses the problem of clustering instances with a high number of dimensions. In particular, a new heuristic for reducing the complexity of the K-means algorithm is proposed. Traditionally, there are two approaches that deal with the clustering of instances with high dimensionality. The first executes a preprocessing step to remove those attributes of limited importance. The second, called divide and conquer, creates subsets that are clustered separately and later their results are integrated through post-processing. In contrast, this paper proposes a new solution which consists of the reduction of distance calculations from the objects to the centroids at the classification step. This heuristic is derived from the visual observation of the clustering process of K-means, in which it was found that the objects can only migrate to adjacent clusters without crossing distant clusters. Therefore, this heuristic can significantly reduce the number of distance calculations from an object to the centroids of the potential clusters that it may be classified to. To validate the proposed heuristic, a set of experiments with synthetic, high-dimensional instances was designed. One of the most notable results was obtained for an instance of 25,000 objects and 200 dimensions, where its execution time was reduced by up to 96.5% and the quality of the solution decreased by only 0.24% when compared to the K-means algorithm.
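
    A minimal sketch of the general idea of reducing distance calculations (not the authors' exact heuristic): after a first full assignment pass, each object recomputes distances only to a small candidate set of nearby centroids instead of all k.

      import numpy as np

      def kmeans_reduced(X, k, n_candidates=3, iterations=20, seed=0):
          """K-means variant limiting per-object distance calculations to a few
          candidate centroids after the initial full assignment."""
          rng = np.random.default_rng(seed)
          centroids = X[rng.choice(len(X), k, replace=False)]
          dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
          labels = dists.argmin(axis=1)
          for _ in range(iterations):
              candidates = np.argsort(dists, axis=1)[:, :n_candidates]
              for i, cand in enumerate(candidates):
                  d = np.linalg.norm(X[i] - centroids[cand], axis=1)
                  dists[i, cand] = d            # only a few distances per object
                  labels[i] = cand[d.argmin()]
              for j in range(k):                # recompute centroids
                  members = X[labels == j]
                  if len(members):
                      centroids[j] = members.mean(axis=0)
          return labels, centroids

      X = np.random.default_rng(1).normal(size=(1000, 50))   # synthetic instance
      labels, centroids = kmeans_reduced(X, k=8)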

  18. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    PubMed

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.

  19. Weak self-adjointness and conservation laws for a porous medium equation

    NASA Astrophysics Data System (ADS)

    Gandarias, M. L.

    2012-06-01

    The concepts of self-adjoint and quasi self-adjoint equations were introduced by Ibragimov (2006, 2007) [4,7]. In Ibragimov (2007) [6] a general theorem on conservation laws was proved. In Gandarias (2011) [3] we generalized the concept of self-adjoint and quasi self-adjoint equations by introducing the definition of weak self-adjoint equations. In this paper we find the subclasses of weak self-adjoint porous medium equations. By using the property of weak self-adjointness we construct some conservation laws associated with symmetries of the differential equation.

  20. A finite-difference approximate-factorization algorithm for solution of the unsteady transonic small-disturbance equation

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    1992-01-01

    A time-accurate approximate-factorization (AF) algorithm is described for solution of the three-dimensional unsteady transonic small-disturbance equation. The AF algorithm consists of a time-linearization procedure coupled with a subiteration technique. The algorithm is the basis for the Computational Aeroelasticity Program-Transonic Small Disturbance (CAP-TSD) computer code, which was developed for the analysis of unsteady aerodynamics and aeroelasticity of realistic aircraft configurations. The paper describes details on the governing flow equations and boundary conditions, with an emphasis on documenting the finite-difference formulas of the AF algorithm.

  1. A rescaling algorithm for the numerical solution to the porous medium equation in a two-component domain

    NASA Astrophysics Data System (ADS)

    Filo, Ján; Hundertmark-Zaušková, Anna

    2016-10-01

    The aim of this paper is to design a rescaling algorithm for the numerical solution to the system of two porous medium equations defined on two different components of the real line, that are connected by the nonlinear contact condition. The algorithm is based on the self-similarity of solutions on different scales, and it presents a space-time adaptive method producing a more accurate numerical solution in the area of the interface between the components, while the number of grid points stays fixed.

  2. Adjoint-based error estimation and mesh adaptation for the correction procedure via reconstruction method

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Wang, Z. J.

    2015-08-01

    Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction formulation (CPR) to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. Superconvergent functional and error estimates for the output are obtained with the CPR method. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost compared with uniform grid refinement.

  3. Equilibrium sensitivities of the Greenland ice sheet inferred from the adjoint of the three- dimensional thermo-mechanical model SICOPOLIS

    NASA Astrophysics Data System (ADS)

    Heimbach, P.; Bugnion, V.

    2008-12-01

    We present a new and original approach to understanding the sensitivity of the Greenland ice sheet to key model parameters and environmental conditions. At the heart of this approach is the use of an adjoint ice sheet model. MacAyeal (1992) introduced adjoints in the context of applying control theory to estimate basal sliding parameters (basal shear stress, basal friction) of an ice stream model which minimize a least-squares model vs. observation misfit. Since then, this method has become widespread to fit ice stream models to the increasing number and diversity of satellite observations, and to estimate uncertain model parameters. However, no attempt has been made to extend this method to comprehensive ice sheet models. Here, we present a first step toward moving beyond limiting the use of control theory to ice stream models. We have generated an adjoint of the three-dimensional thermo-mechanical ice sheet model SICOPOLIS of Greve (1997). The adjoint was generated using the automatic differentiation (AD) tool TAF. TAF generates exact source code representing the tangent linear and adjoint model of the parent model provided. Model sensitivities are given by the partial derivatives of a scalar-valued model diagnostic or "cost function" with respect to the controls, and can be efficiently calculated via the adjoint. An effort to generate an efficient adjoint with the newly developed open-source AD tool OpenAD is also under way. To gain insight into the adjoint solutions, we explore various cost functions, such as local and domain-integrated ice temperature, total ice volume or the velocity of ice at the margins of the ice sheet. Elements of our control space include initial cold ice temperatures, surface mass balance, as well as parameters such as appear in Glen's flow law, or in the surface degree-day or basal sliding parameterizations. Sensitivity maps provide a comprehensive view, and allow a quantification of where and to which variables the ice sheet model is

  4. Sensitivity of Lumped Constraints Using the Adjoint Method

    NASA Technical Reports Server (NTRS)

    Akgun, Mehmet A.; Haftka, Raphael T.; Wu, K. Chauncey; Walsh, Joanne L.

    1999-01-01

    Adjoint sensitivity calculation of stress, buckling and displacement constraints may be much less expensive than direct sensitivity calculation when the number of load cases is large. Adjoint stress and displacement sensitivities are available in the literature. Expressions for local buckling sensitivity of isotropic plate elements are derived in this study. Computational efficiency of the adjoint method is sensitive to the number of constraints and, therefore, the method benefits from constraint lumping. A continuum version of the Kreisselmeier-Steinhauser (KS) function is chosen to lump constraints. The adjoint and direct methods are compared for three examples: a truss structure, a simple HSCT wing model, and a large HSCT model. These sensitivity derivatives are then used in optimization.

  5. Solution of the optimal plant location and sizing problem using simulated annealing and genetic algorithms

    SciTech Connect

    Rao, R.; Buescher, K.L.; Hanagandi, V.

    1995-12-31

    In the optimal plant location and sizing problem it is desired to optimize a cost function involving plant sizes, locations, and production schedules in the face of supply-demand and plant capacity constraints. We will use simulated annealing (SA) and a genetic algorithm (GA) to solve this problem. We will compare these techniques with respect to computational expenses, constraint handling capabilities, and the quality of the solution obtained in general. Simulated annealing is a combinatorial stochastic optimization technique which has been shown to be effective in obtaining fast suboptimal solutions for computationally hard problems. The technique is especially attractive since solutions are obtained in polynomial time for problems where an exhaustive search for the global optimum would require exponential time. We propose a synergy between the cluster analysis technique, popular in classical stochastic global optimization, and the GA to accomplish global optimization. This synergy minimizes redundant searches around local optima and enhances the capability of the GA to explore new areas in the search space.
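
    As a generic illustration of the simulated annealing component only (the cost function, capacities, and penalty below are placeholders, not the paper's plant location model), the sketch anneals a binary open/closed vector for candidate plants.

      import math
      import random

      def simulated_annealing(cost, n, t0=1.0, cooling=0.995, steps=5000, seed=0):
          """Anneal a binary decision vector, e.g. plant open/closed choices."""
          rng = random.Random(seed)
          x = [rng.randint(0, 1) for _ in range(n)]
          current = cost(x)
          best, best_cost, t = list(x), current, t0
          for _ in range(steps):
              i = rng.randrange(n)
              x[i] ^= 1                            # propose flipping one decision
              new = cost(x)
              if new < current or rng.random() < math.exp((current - new) / t):
                  current = new                    # accept the move
                  if new < best_cost:
                      best, best_cost = list(x), new
              else:
                  x[i] ^= 1                        # reject: undo the flip
              t *= cooling                         # geometric cooling schedule
          return best, best_cost

      # Placeholder cost: fixed opening costs plus a penalty for unmet demand
      fixed, capacity, demand = [4.0, 2.0, 3.0, 5.0, 1.0], [30, 20, 25, 40, 10], 60
      def total_cost(x):
          supply = sum(c for c, on in zip(capacity, x) if on)
          return sum(f for f, on in zip(fixed, x) if on) + 10.0 * max(0, demand - supply)

      best, best_cost = simulated_annealing(total_cost, n=5)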

  6. Fast inverse scattering solutions using the distorted Born iterative method and the multilevel fast multipole algorithm

    PubMed Central

    Hesford, Andrew J.; Chew, Weng C.

    2010-01-01

    The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438

  7. Adjoint Function: Physical Basis of Variational & Perturbation Theory in Transport

    2009-07-27

    Version 00 Dr. J.D. Lewins has now released the following legacy book for free distribution: Importance: The Adjoint Function: The Physical Basis of Variational and Perturbation Theory in Transport and Diffusion Problems, North-Holland Publishing Company - Amsterdam, 582 pages, 1966 Introduction: Continuous Systems and the Variational Principle 1. The Fundamental Variational Principle 2. The Importance Function 3. Adjoint Equations 4. Variational Methods 5. Perturbation and Iterative Methods 6. Non-Linear Theory

  8. Global Adjoint Tomography: First-Generation Model

    NASA Astrophysics Data System (ADS)

    Bozdağ, Ebru; Peter, Daniel; Lefebvre, Matthieu; Komatitsch, Dimitri; Tromp, Jeroen; Hill, Judith; Podhorszki, Norbert; Pugmire, David

    2016-09-01

    We present the first-generation global tomographic model constructed based on adjoint tomography, an iterative full-waveform inversion technique. Synthetic seismograms were calculated using GPU-accelerated spectral-element simulations of global seismic wave propagation, accommodating effects due to 3D anelastic crust & mantle structure, topography & bathymetry, the ocean load, ellipticity, rotation, and self-gravitation. Fréchet derivatives were calculated in 3D anelastic models based on an adjoint-state method. The simulations were performed on the Cray XK7 named "Titan", a computer with 18,688 GPU accelerators housed at Oak Ridge National Laboratory. The transversely isotropic global model is the result of 15 tomographic iterations, which systematically reduced differences between observed and simulated three-component seismograms. Our starting model combined 3D mantle model S362ANI (Kustowski et al. 2008) with 3D crustal model Crust2.0 (Bassin et al. 2000). We simultaneously inverted for structure in the crust and mantle, thereby eliminating the need for widely used "crustal corrections". We used data from 253 earthquakes in the magnitude range 5.8 ≤ Mw ≤ 7.0. For the first 12 iterations, we combined ~30 s body-wave data with ~60 s surface-wave data. The shortest period of the surface waves was gradually decreased, and in the last three iterations we combined ~17 s body waves with ~45 s surface waves. We started using 180 min-long seismograms after the 12th iteration and assimilated minor- and major-arc body and surface waves. The 15th iteration model features enhancements of well-known slabs, an enhanced image of the Samoa/Tahiti plume, as well as various other plumes and hotspots, such as Caroline, Galapagos, Yellowstone, and Erebus. Furthermore, we see clear improvements in slab resolution along the Hellenic and Japan Arcs, as well as subduction along the East of Scotia Plate, which does not exist in the starting model. Point-spread function

  9. A gridless Euler/Navier-Stokes solution algorithm for complex two-dimensional applications

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    1992-01-01

    The development of a gridless computational fluid dynamics (CFD) method for the solution of the two-dimensional Euler and Navier-Stokes equations is described. The method uses only clouds of points and does not require that the points be connected to form a grid as is necessary in conventional CFD algorithms. The gridless CFD approach appears to resolve the problems and inefficiencies encountered with structured or unstructured grid methods. As a result, the method offers the greatest potential for accurately and efficiently solving viscous flows about complex aircraft configurations. The method is described in detail, and calculations are presented for standard Euler and Navier-Stokes cases to assess the accuracy and efficiency of the capability.

  10. Modeling the Pulse Line Ion Accelerator (PLIA): an algorithm for quasi-static field solution.

    SciTech Connect

    Friedman, A; Briggs, R J; Grote, D P; Henestroza, E; Waldron, W L

    2007-06-18

    The Pulse-Line Ion Accelerator (PLIA) is a helical distributed transmission line. A rising pulse applied to the upstream end appears as a moving spatial voltage ramp, on which an ion pulse can be accelerated. This is a promising approach to acceleration and longitudinal compression of an ion beam at high line charge density. In most of the studies carried out to date, using both a simple code for longitudinal beam dynamics and the Warp PIC code, a circuit model for the wave behavior was employed; in Warp, the helix I and V are source terms in elliptic equations for E and B. However, it appears possible to obtain improved fidelity using a ''sheath helix'' model in the quasi-static limit. Here we describe an algorithmic approach that may be used to effect such a solution.

  11. Periodic differential equations with self-adjoint monodromy operator

    NASA Astrophysics Data System (ADS)

    Yudovich, V. I.

    2001-04-01

    A linear differential equation u̇ = A(t)u with a p-periodic (generally speaking, unbounded) operator coefficient in a Euclidean or a Hilbert space H is considered. It is proved under natural constraints that the monodromy operator U_p is self-adjoint and strictly positive if A*(-t) = A(t) for all t ∈ ℝ. It is shown that Hamiltonian systems in the class under consideration are usually unstable and, if they are stable, then the operator U_p reduces to the identity and all solutions are p-periodic. For higher frequencies averaged equations are derived. Remarkably, high-frequency modulation may double the number of critical values. General results are applied to rotational flows with cylindrical velocity components a_r = a_z = 0, a_θ = λ c(t) r^β, β < -1, where c(t) is an even p-periodic function, and also to several problems of free gravitational convection of fluids in periodic fields.

  12. Active and passive computed tomography algorithm with a constrained conjugate gradient solution

    SciTech Connect

    Goodman, D.; Jackson, J. A.; Martz, H. E.; Roberson, G. P.

    1998-10-01

    An active and passive computed tomographic technique (A&PCT) has been developed at the Lawrence Livermore National Laboratory (LLNL). The technique uses an external radioactive source and active tomography to map the attenuation within a waste drum as a function of mono-energetic gamma-ray energy. Passive tomography is used to localize and identify specific radioactive waste within the same container. The passive data is corrected for attenuation using the active data and this yields a quantitative assay of drum activity. A&PCT involves the development of a detailed system model that combines the data from the active scans with the geometry of the imaging system. Using the system model, iterative optimization techniques are used to reconstruct the image from the passive data. Requirements for high throughput yield measured emission levels in waste barrels that are too low to apply optimization techniques involving the usual Gaussian statistics. In this situation a Poisson distribution, typically used for cases with low counting statistics, is used to create an effective maximum likelihood estimation function. An optimization algorithm, Constrained Conjugate Gradient (CCG), is used to determine a solution for A&PCT quantitative assay. CCG, which was developed at LLNL, has proven to be an efficient and effective optimization method to solve limited-data problems. A detailed explanation of the algorithms used in developing the model and optimization codes is given.

  13. Comparison of adjoint and nudging methods to initialise ice sheet model basal conditions

    NASA Astrophysics Data System (ADS)

    Mosbeux, Cyrille; Gillet-Chaulet, Fabien; Gagliardini, Olivier

    2016-07-01

    Ice flow models are now routinely used to forecast the ice sheets' contribution to 21st century sea-level rise. For such short term simulations, the model response is greatly affected by the initial conditions. Data assimilation algorithms have been developed to invert for the friction of the ice on its bedrock using observed surface velocities. A drawback of these methods is that remaining uncertainties, especially in the bedrock elevation, lead to non-physical ice flux divergence anomalies resulting in undesirable transient effects. In this study, we compare two different assimilation algorithms based on adjoints and nudging to constrain both bedrock friction and elevation. Using synthetic twin experiments with realistic observation errors, we show that the two algorithms lead to similar performances in reconstructing both variables and allow the flux divergence anomalies to be significantly reduced.

  14. Baryogenesis via leptogenesis in adjoint SU(5)

    SciTech Connect

    Blanchet, Steve; Fileviez Perez, Pavel E-mail: fileviez@physics.wisc.edu

    2008-08-15

    The possibility of explaining the baryon asymmetry in the Universe through the leptogenesis mechanism in the context of adjoint SU(5) is investigated. In this model neutrino masses are generated through the type I and type III seesaw mechanisms, and the field responsible for the type III seesaw, called ρ_3, generates the B-L asymmetry needed to satisfy the observed value of the baryon asymmetry in the Universe. We find that the CP asymmetry originates only from the vertex correction, since the self-energy contribution is not present. When neutrino masses have a normal hierarchy, successful leptogenesis is possible for 10^{11} GeV ≲

  15. Adjoint estimation of ozone climate penalties

    NASA Astrophysics Data System (ADS)

    Zhao, Shunliu; Pappin, Amanda J.; Morteza Mesbah, S.; Joyce Zhang, J. Y.; MacDonald, Nicole L.; Hakami, Amir

    2013-10-01

    The adjoint of a regional chemical transport model is used to calculate location-specific temperature influences (climate penalties) on two policy-relevant ozone metrics: concentrations in polluted regions (>65 ppb) and short-term mortality in Canada and the U.S. Temperature influences through changes in chemical reaction rates, atmospheric moisture content, and biogenic emissions exhibit significant spatial variability. In particular, high-NOx, polluted regions are prominently distinguished by substantial climate penalties (up to 6.2 ppb/K in major urban areas) as a result of large temperature influences through increased biogenic emissions and nonnegative water vapor sensitivities. Temperature influences on ozone mortality, when integrated across the domain, result in 369 excess deaths/K in Canada and the U.S. over a summer season—an impact comparable to a 5% change in anthropogenic NOx emissions. As such, we suggest that NOx control can also be regarded as a climate change adaptation strategy with regard to ozone air quality.

  16. Development of Gis Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm

    NASA Astrophysics Data System (ADS)

    Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.

    2014-11-01

    The minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network consisting of all its nodes such that the sum of the weights of its edges is minimum among all possible spanning trees of the same network. In this study, we have developed a new GIS tool using the most commonly known rudimentary algorithm, Prim's algorithm, to construct the minimum spanning tree of a connected, undirected and weighted road network. This algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve the complex network MST problem easily, efficiently and effectively. The selection of an appropriate algorithm is essential, otherwise it will be very hard to obtain an optimal result. In the case of a road transportation network, it is essential to find optimal results by considering all the necessary points based on a cost factor (time or distance). This paper addresses the minimum spanning tree (MST) problem of a road network by finding its minimum span while considering all the important network junction points. GIS technology is usually used to solve network-related problems such as the optimal path problem, the travelling salesman problem, vehicle routing problems, location-allocation problems, etc. Therefore, in this study we have developed a customized GIS tool using a Python script in ArcGIS software for the solution of the MST problem for the road transportation network of Dehradun city, considering distance and time as the impedance (cost) factors. The tool has a number of advantages: users do not need deep knowledge of the subject, as the tool is user-friendly and gives access to varied information adapted to their needs. This GIS tool for MST can be applied to a nationwide plan called Prime Minister Gram Sadak Yojana in India to provide optimal all-weather road connectivity to unconnected villages (points). This tool is also useful for constructing highways or railways spanning several
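    For reference, the sketch below is a generic, textbook implementation of Prim's algorithm on a dense weight (adjacency) matrix; it is not the ArcGIS/Python tool described in the paper, and the matrix format is an assumption made here for illustration.

    ```python
    import numpy as np

    def prim_mst(w):
        """Prim's algorithm on a dense weight matrix w, where w[i][j] is the
        edge weight and np.inf marks a missing edge.  Returns the MST edges."""
        n = len(w)
        in_tree = [False] * n
        best_cost = np.full(n, np.inf)          # cheapest edge linking node to tree
        best_from = np.full(n, -1, dtype=int)
        best_cost[0] = 0.0                      # grow the tree from node 0
        edges = []
        for _ in range(n):
            u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best_cost[i])
            in_tree[u] = True
            if best_from[u] >= 0:
                edges.append((int(best_from[u]), u, float(w[best_from[u]][u])))
            for v in range(n):
                if not in_tree[v] and w[u][v] < best_cost[v]:
                    best_cost[v] = w[u][v]
                    best_from[v] = u
        return edges
    ```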

  17. Second-order p-iterative solution of the Lambert/Gauss problem. [algorithm for efficient orbit determination

    NASA Technical Reports Server (NTRS)

    Boltz, F. W.

    1984-01-01

    An algorithm is presented for efficient p-iterative solution of the Lambert/Gauss orbit-determination problem using second-order Newton iteration. The algorithm is based on a universal transformation of Kepler's time-of-flight equation and approximate inverse solutions of this equation for short-way and long-way flight paths. The approximate solutions provide both good starting values for iteration and simplified computation of the second-order term in the iteration formula. Numerical results are presented which indicate that in many cases of practical significance (except those having collinear position vectors) the algorithm produces at least eight significant digits of accuracy with just two or three steps of iteration.
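    For context, a common second-order (Chebyshev-type) refinement of Newton's iteration for a root of f(x)=0 is

    \[
      x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}\left(1 + \frac{f(x_k)\,f''(x_k)}{2\,f'(x_k)^{2}}\right),
    \]

    i.e., the first-order Newton step multiplied by a curvature correction; the paper's specific update, expressed in the universal variable of Kepler's time-of-flight equation, is not reproduced here.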

  18. SOLA-DM: A numerical solution algorithm for transient three-dimensional flows

    SciTech Connect

    Wilson, T.L.; Nichols, B.D.; Hirt, C.W.; Stein, L.R.

    1988-02-01

    SOLA-DM is a three-dimensional time-explicit, finite-difference, Eulerian, fluid-dynamics computer code for solving the time-dependent incompressible Navier-Stokes equations. The solution algorithm (SOLA) evolved from the marker-and-cell (MAC) method, and the code is highly vectorized for efficient performance on a Cray computer. The computational domain is discretized by a mesh of parallelepiped cells in either cartesian or cylindrical geometry. The primary hydrodynamic variables for approximating the solution of the momentum equations are cell-face-centered velocity components and cell-centered pressures. Spatial accuracy is selected by the user to be first or second order; the time differencing is first-order accurate. The incompressibility condition results in an elliptic equation for pressure that is solved by a conjugate gradient method. Boundary conditions of five general types may be chosen: free-slip, no-slip, continuative, periodic, and specified pressure. In addition, internal mesh specifications to model obstacles and walls are provided. SOLA-DM also solves the equations for discrete particle dynamics, permitting the transport of marker particles or other solid particles through the fluid to be modeled. 7 refs., 7 figs.

  19. On the utility of the multi-level algorithm for the solution of nearly completely decomposable Markov chains

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Horton, Graham

    1994-01-01

    Recently the Multi-Level algorithm was introduced as a general-purpose solver for the solution of steady-state Markov chains. In this paper, we consider the performance of the Multi-Level algorithm for solving Nearly Completely Decomposable (NCD) Markov chains, for which special-purpose iterative aggregation/disaggregation algorithms such as the Koury-McAllister-Stewart (KMS) method have been developed that can exploit the decomposability of the Markov chain. We present experimental results indicating that the general-purpose Multi-Level algorithm is competitive, and can be significantly faster than the special-purpose KMS algorithm when Gauss-Seidel and Gaussian elimination are used for solving the individual blocks.
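    As a baseline for what the Multi-Level and KMS methods accelerate, the sketch below shows the plain fixed-point (power) iteration for the stationary distribution of a row-stochastic matrix P; for NCD chains this iteration converges very slowly, which is what motivates aggregation/disaggregation and multi-level schemes. The code is a generic illustration, not one of the algorithms compared in the paper.

    ```python
    import numpy as np

    def steady_state_power(P, tol=1e-10, max_iter=1_000_000):
        """Power iteration for pi = pi P with a row-stochastic matrix P."""
        n = P.shape[0]
        pi = np.full(n, 1.0 / n)
        for _ in range(max_iter):
            pi_next = pi @ P
            pi_next /= pi_next.sum()            # guard against numerical drift
            if np.max(np.abs(pi_next - pi)) < tol:
                return pi_next
            pi = pi_next
        return pi
    ```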

  20. Dynamic simulation of concentrated macromolecular solutions with screened long-range hydrodynamic interactions: Algorithm and limitations

    PubMed Central

    Ando, Tadashi; Chow, Edmond; Skolnick, Jeffrey

    2013-01-01

    Hydrodynamic interactions exert a critical effect on the dynamics of macromolecules. As the concentration of macromolecules increases, by analogy to the behavior of semidilute polymer solutions or the flow in porous media, one might expect hydrodynamic screening to occur. Hydrodynamic screening would have implications both for the understanding of macromolecular dynamics as well as practical implications for the simulation of concentrated macromolecular solutions, e.g., in cells. Stokesian dynamics (SD) is one of the most accurate methods for simulating the motions of N particles suspended in a viscous fluid at low Reynolds number, in that it considers both far-field and near-field hydrodynamic interactions. This algorithm traditionally involves an O(N^3) operation to compute Brownian forces at each time step, although asymptotically faster but more complex SD methods are now available. Motivated by the idea of hydrodynamic screening, the far-field part of the hydrodynamic matrix in SD may be approximated by a diagonal matrix, which is equivalent to assuming that long range hydrodynamic interactions are completely screened. This approximation allows sparse matrix methods to be used, which can reduce the apparent computational scaling to O(N). Previously there were several simulation studies using this approximation for monodisperse suspensions. Here, we employ newly designed preconditioned iterative methods for both the computation of Brownian forces and the solution of linear systems, and consider the validity of this approximation in polydisperse suspensions. We evaluate the accuracy of the diagonal approximation method using an intracellular-like suspension. The diffusivities of particles obtained with this approximation are close to those with the original method. However, this approximation underestimates intermolecular correlated motions, which is a trade-off between accuracy and computing efficiency. The new method makes it possible to perform large-scale and

  1. Diffusion Acceleration Schemes for Self-Adjoint Angular Flux Formulation with a Void Treatment

    SciTech Connect

    Yaqi Wang; Hongbin Zhang; Richard C. Martineau

    2014-02-01

    A Galerkin weak form for the monoenergetic neutron transport equation with a continuous finite element method and discrete ordinate method is developed based on self-adjoint angular flux formulation. This weak form is modified for treating void regions. A consistent diffusion scheme is developed with projection. Correction terms of the diffusion scheme are derived to reproduce the transport scalar flux. A source iteration that decouples the solution of all directions with both linear and nonlinear diffusion accelerations is developed and demonstrated. One-dimensional Fourier analysis is conducted to demonstrate the stability of the linear and nonlinear diffusion accelerations. Numerical results of these schemes are presented.

  2. Receptivity in parallel flows: An adjoint approach

    NASA Technical Reports Server (NTRS)

    Hill, D. Christopher

    1993-01-01

    Linear receptivity studies in parallel flows are aimed at understanding how external forcing couples to the natural unstable motions which a flow can support. The vibrating ribbon problem models the original Schubauer and Skramstad boundary layer experiment and represents the classic boundary layer receptivity problem. The process by which disturbances are initiated in convectively-unstable jets and shear layers has also received attention. Gaster was the first to handle the boundary layer analysis with the recognition that spatial modes, rather than temporal modes, were relevant when studying convectively-unstable flows that are driven by a time-harmonic source. The amplitude of the least stable spatial mode, far downstream of the source, is related to the source strength by a coupling coefficient. The determination of this coefficient is at the heart of this type of linear receptivity study. The first objective of the present study was to determine whether the various wave number derivative factors, appearing in the coupling coefficients for linear receptivity problems, could be reexpressed in a simpler form involving adjoint eigensolutions. Secondly, it was hoped that the general nature of this simplification could be shown; indeed, a rather elegant characterization of the receptivity properties of spatial instabilities does emerge. The analysis is quite distinct from the usual Fourier-inversion procedures, although a detailed knowledge of the spectrum of the Orr-Sommerfeld equation is still required. Since the cylinder wake analysis proved very useful in addressing control considerations, the final objective was to provide a foundation upon which boundary layer control theory may be developed.

  3. Convection equation modeling: A non-iterative direct matrix solution algorithm for use with SINDA

    NASA Technical Reports Server (NTRS)

    Schrage, Dean S.

    1993-01-01

    The determination of the boundary conditions for a component-level analysis, applying discrete finite element and finite difference modeling techniques, often requires an analysis of complex coupled phenomena that cannot be described algebraically. For example, an analysis of the temperature field of a coldplate surface with an integral fluid loop requires a solution to the parabolic heat equation and also requires the boundary conditions that describe the local fluid temperature. However, the local fluid temperature is described by a convection equation that can only be solved with the knowledge of the locally-coupled coldplate temperatures. Generally speaking, it is not computationally efficient, and sometimes not even possible, to perform a direct, coupled phenomenon analysis of the component-level and boundary condition models within a single analysis code. An alternative is to perform a disjoint analysis, but transmit the necessary information between models during the simulation to provide an indirect coupling. For this approach to be effective, the component-level model retains full detail while the boundary condition model is simplified to provide a fast, first-order prediction of the phenomenon in question. Specifically for the present study, the coldplate structure is analyzed with a discrete, numerical model (SINDA) while the fluid loop convection equation is analyzed with a discrete, analytical model (direct matrix solution). This indirect coupling allows a satisfactory prediction of the boundary condition, while not degrading the overall computational efficiency of the component-level analysis. In the present study a complete discussion of the derivation and direct matrix solution algorithm of the convection equation is presented. Discretization is analyzed and discussed with regard to solution accuracy, stability, and computation speed. Case studies considering a pulsed and harmonic inlet disturbance to the fluid loop are analyzed to

  4. Adjoint simulation of stream depletion due to aquifer pumping.

    PubMed

    Neupauer, Roseanna M; Griebling, Scott A

    2012-01-01

    If an aquifer is hydraulically connected to an adjacent stream, a pumping well operating in the aquifer will draw some water from aquifer storage and some water from the stream, causing stream depletion. Several analytical, semi-analytical, and numerical approaches have been developed to estimate stream depletion due to pumping. These approaches are effective if the well location is known. If a new well is to be installed, it may be desirable to install the well at a location where stream depletion is minimal. If several possible locations are considered for the location of a new well, stream depletion would have to be estimated for all possible well locations, which can be computationally inefficient. The adjoint approach for estimating stream depletion is a more efficient alternative because with one simulation of the adjoint model, stream depletion can be estimated for pumping at a well at any location. We derive the adjoint equations for a coupled system with a confined aquifer, an overlying unconfined aquifer, and a river that is hydraulically connected to the unconfined aquifer. We assume that the stage in the river is known, and is independent of the stream depletion, consistent with the assumptions of the MODFLOW river package. We describe how the adjoint equations can be solved using MODFLOW. In an illustrative example, we show that for this scenario, the adjoint approach is as accurate as standard forward numerical simulation methods, and requires substantially less computational effort.

  5. The discrete adjoint approach to aerodynamic shape optimization

    NASA Astrophysics Data System (ADS)

    Nadarajah, Siva Kumaran

    A viscous discrete adjoint approach to automatic aerodynamic shape optimization is developed, and the merits of the viscous discrete and continuous adjoint approaches are discussed. The viscous discrete and continuous adjoint gradients for inverse design and drag minimization cost functions are compared with finite-difference and complex-step gradients. The optimization of airfoils in two-dimensional flow for inverse design and drag minimization is illustrated. Both the discrete and continuous adjoint methods are used to formulate two new design problems. First, the time-dependent optimal design problem is established, and both the time accurate discrete and continuous adjoint equations are derived. An application to the reduction of the time-averaged drag coefficient while maintaining time-averaged lift and thickness distribution of a pitching airfoil in transonic flow is demonstrated. Second, the remote inverse design problem is formulated. The optimization of a three-dimensional biconvex wing in supersonic flow verifies the feasibility to reduce the near field pressure peak. Coupled drag minimization and remote inverse design cases produce wings with a lower drag and a reduced near field peak pressure signature.

  6. Efficient solution of the Euler and Navier-Stokes equations with a vectorized multiple-grid algorithm

    NASA Technical Reports Server (NTRS)

    Chima, R. V.; Johnson, G. M.

    1983-01-01

    A multiple-grid algorithm for use in efficiently obtaining steady solutions to the Euler and Navier-Stokes equations is presented. The convergence of the explicit MacCormack algorithm on a fine grid is accelerated by propagating transients from the domain using a sequence of successively coarser grids. Both the fine and coarse grid schemes are readily vectorizable. The combination of multiple-gridding and vectorization results in substantially reduced computational times for the numerical solution of a wide range of flow problems. Results are presented for subsonic, transonic, and supersonic inviscid flows and for subsonic attached and separated laminar viscous flows. Work reduction factors over a scalar, single-grid algorithm range as high as 76.8. Previously announced in STAR as N83-24467

  8. A Multilevel Algorithm for the Solution of Second Order Elliptic Differential Equations on Sparse Grids

    NASA Technical Reports Server (NTRS)

    Pflaum, Christoph

    1996-01-01

    A multilevel algorithm is presented that solves general second order elliptic partial differential equations on adaptive sparse grids. The multilevel algorithm consists of several V-cycles. Suitable discretizations ensure that the discrete equation system can be solved in an efficient way. Numerical experiments show a convergence rate of order O(1) for the multilevel algorithm.
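    The sketch below is a generic illustration of one multigrid V-cycle for the 1-D Poisson problem -u'' = f on a regular grid with homogeneous Dirichlet boundaries (weighted-Jacobi smoothing, restriction by injection, linear-interpolation prolongation); the paper's adaptive sparse-grid discretization of general second-order operators is not reproduced here.

    ```python
    import numpy as np

    def vcycle(u, f, h, n_smooth=3):
        """One V-cycle for -u'' = f; u and f include the boundary points and
        the grid should have 2**k + 1 points so that coarsening by 2 works."""
        def smooth(u, f, h, sweeps, omega=2.0 / 3.0):
            for _ in range(sweeps):             # weighted-Jacobi relaxation
                u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
            return u

        u = smooth(u, f, h, n_smooth)
        if len(u) <= 5:                         # coarsest level: just smooth harder
            return smooth(u, f, h, 50)
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)   # residual
        rc = r[::2].copy()                      # restriction by injection
        ec = vcycle(np.zeros_like(rc), rc, 2 * h, n_smooth)
        e = np.zeros_like(u)
        e[::2] = ec                             # prolongation: copy coarse points
        e[1:-1:2] = 0.5 * (e[:-2:2] + e[2::2])  # ... and interpolate in between
        return smooth(u + e, f, h, n_smooth)
    ```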

  9. A solution algorithm for the fluid dynamic equations based on a stochastic model for molecular motion

    SciTech Connect

    Jenny, Patrick Torrilhon, Manuel; Heinz, Stefan

    2010-02-20

    In this paper, a stochastic model is presented to simulate the flow of gases which are not in thermodynamic equilibrium, as in rarefied or micro situations. For the interaction of a particle with others, statistical moments of the local ensemble have to be evaluated, but unlike in molecular dynamics simulations or DSMC, no collisions between computational particles are considered. In addition, a novel integration technique allows for time steps independent of the stochastic time scale. The stochastic model represents a Fokker-Planck equation in the kinetic description, which can be viewed as an approximation to the Boltzmann equation. This allows for a rigorous investigation of the relation between the new model and classical fluid and kinetic equations. The fluid dynamic equations of Navier-Stokes and Fourier are fully recovered for small relaxation times, while for larger values the new model extends into the kinetic regime. Numerical studies demonstrate that the stochastic model is consistent with Navier-Stokes in that limit, but also that the results become significantly different if the conditions for equilibrium are invalid. The application to the Knudsen paradox demonstrates the correctness and relevance of this development, and comparisons with existing kinetic equations and standard solution algorithms reveal its advantages. Moreover, results of a test case with geometrically complex boundaries are presented.

  10. Sonic Boom Mitigation Through Aircraft Design and Adjoint Methodology

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Sriram K.; Diskin, Boris; Nielsen, Eric J.

    2012-01-01

    This paper presents a novel approach to design of the supersonic aircraft outer mold line (OML) by optimizing the A-weighted loudness of sonic boom signature predicted on the ground. The optimization process uses the sensitivity information obtained by coupling the discrete adjoint formulations for the augmented Burgers Equation and Computational Fluid Dynamics (CFD) equations. This coupled formulation links the loudness of the ground boom signature to the aircraft geometry thus allowing efficient shape optimization for the purpose of minimizing the impact of loudness. The accuracy of the adjoint-based sensitivities is verified against sensitivities obtained using an independent complex-variable approach. The adjoint based optimization methodology is applied to a configuration previously optimized using alternative state of the art optimization methods and produces additional loudness reduction. The results of the optimizations are reported and discussed.

  11. The existence uniqueness and the fixed iterative algorithm of the solution for the discrete coupled algebraic Riccati equation

    NASA Astrophysics Data System (ADS)

    Liu, Jianzhou; Zhang, Juan

    2011-08-01

    In this article, applying the properties of M-matrices and non-negative matrices and utilising eigenvalue inequalities for matrix sums and products, we first develop new upper and lower matrix bounds on the solution of the discrete coupled algebraic Riccati equation (DCARE). Secondly, we discuss the existence and uniqueness conditions for the solution of the DCARE using the developed upper and lower matrix bounds and a fixed point theorem. Thirdly, a new fixed-point iterative algorithm for the solution of the DCARE is presented. Finally, corresponding numerical examples are given to illustrate the effectiveness of the developed results.

  12. Adjoint-based shape optimization of fin geometry for enhanced solid/liquid phase-change process

    NASA Astrophysics Data System (ADS)

    Morimoto, Kenichi; Suzuki, Yuji

    2015-11-01

    In recent years, the control of heat transfer processes, which play a critical role in various engineering devices and systems, has gained renewed attention. The present study aims to establish an adjoint-based shape optimization method for high-performance heat transfer processes involving phase-change phenomena. A possible example is the application to thermal management techniques using phase-change materials. Adjoint-based shape optimization is well suited to the optimal shape design and optimal control of systems for which the basis functions of the solution are unknown and the solution involves an infinite number of degrees of freedom. Here we formulate the shape-optimization scheme based on adjoint heat conduction analyses, focusing on the shape optimization of fin geometry. In the computation of the developed scheme, a meshless local Petrov-Galerkin (MLPG) method, which is suited for dealing with complex boundary geometry, is employed, and the enthalpy method is adopted for analyzing the motion of the phase-change interface. We examine in detail the effect of the initial geometry and of the node distribution in the MLPG analysis upon the final solution of the shape optimization. We also present a new strategy for the computation using bubble mesh.

  13. Application of variational principles and adjoint integrating factors for constructing numerical GFD models

    NASA Astrophysics Data System (ADS)

    Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey

    2015-04-01

    The proposed method is considered on an example of hydrothermodynamics and atmospheric chemistry models [1,2]. In the development of the existing methods for constructing numerical schemes possessing the properties of total approximation for operators of multiscale process models, we have developed a new variational technique, which uses the concept of adjoint integrating factors. The technique is as follows. First, a basic functional of the variational principle (the integral identity that unites the model equations, initial and boundary conditions) is transformed using Lagrange's identity and the second Green's formula. As a result, the action of the operators of the main problem in the space of state functions is transferred to the adjoint operators defined in the space of sufficiently smooth adjoint functions. By the choice of adjoint functions, the order of the derivatives becomes one lower than in the original equations. We obtain a set of new balance relationships that take into account the sources and boundary conditions. Next, we introduce the decomposition of the model domain into a set of finite volumes. For multi-dimensional non-stationary problems, this technique is applied in the framework of the variational principle and schemes of decomposition and splitting on the set of physical processes for each coordinate direction successively at each time step. For each direction within the finite volume, the analytical solutions of one-dimensional homogeneous adjoint equations are constructed. In this case, the solutions of adjoint equations serve as integrating factors. The results are hybrid discrete-analytical schemes. They have the properties of stability, approximation and unconditional monotonicity for convection-diffusion operators. These schemes are discrete in time and analytic in the spatial variables. They are exact in case of piecewise-constant coefficients within the finite volume and along the coordinate lines of the grid area in each

  14. Analytical sensitivity analysis of transient groundwater flow in a bounded model domain using the adjoint method

    NASA Astrophysics Data System (ADS)

    Lu, Zhiming; Vesselinov, Velimir V.

    2015-07-01

    Sensitivity analyses are an important component of any modeling exercise. We have developed an analytical methodology based on the adjoint method to compute sensitivities of a state variable (hydraulic head) to model parameters (hydraulic conductivity and storage coefficient) for transient groundwater flow in a confined and randomly heterogeneous aquifer under ambient and pumping conditions. For a special case of two-dimensional rectangular domains, these sensitivities are represented in terms of the problem configuration (the domain size, boundary configuration, medium properties, pumping schedules and rates, and observation locations and times), and there is no need to actually solve the adjoint equations. As an example, we present analyses of the obtained solution for typical groundwater flow conditions. Analytical solutions allow us to calculate sensitivities efficiently, which can be useful for model-based analyses such as parameter estimation, data-worth evaluation, and optimal experimental design related to sampling frequency and locations of observation wells. The analytical approach is not limited to groundwater applications but can be extended to any other mathematical problem with similar governing equations and under similar conceptual conditions.

  15. A Numerical Algorithm for Finding Solution of Cross-Coupled Algebraic Riccati Equations

    NASA Astrophysics Data System (ADS)

    Mukaidani, Hiroaki; Yamamoto, Seiji; Yamamoto, Toru

    In this letter, a computational approach for solving cross-coupled algebraic Riccati equations (CAREs) is investigated. The main purpose of this letter is to propose a new algorithm that combines Newton's method with a gradient-based iterative (GI) algorithm for solving CAREs. In particular, it is noteworthy that both quadratic convergence under an appropriate initial condition and a reduction in dimensions for matrix computation are achieved. A numerical example is provided to demonstrate the efficiency of the proposed algorithm.

  16. Magnetic Field Separation Around Planets Using an Adjoint-Method Approach

    NASA Astrophysics Data System (ADS)

    Nabert, Christian; Glassmeier, Karl-Heinz; Heyner, Daniel; Othmer, Carsten

    The two spacecraft of the BepiColombo mission will reach planet Mercury in 2022. The magnetometers on board these polar-orbiting spacecraft will provide a detailed map of the magnetic field in Mercury's environment. Unfortunately, a separation of the magnetic field into internal and external parts using the classical Gauss algorithm is not possible due to strong electric currents in the orbit region of the spacecraft. These currents are due to the interaction of the solar wind with Mercury's planetary magnetic field. We use an MHD code to simulate this interaction process. This requires a first choice of Mercury's planetary field, which is modified until the simulation results fit the actual measurements. This optimization process is carried out most efficiently using an adjoint method. The adjoint method is well known for its low computational cost in determining the sensitivities required for the minimization. In a first step, the validity of our approach to separating magnetic field contributions into internal and external parts is demonstrated using synthetically generated data. Furthermore, we apply our approach to satellite measurements of the Earth's magnetic field. We can compare the results with the well-known planetary field of the Earth to prove practical suitability.

  17. Ocean acoustic tomography from different receiver geometries using the adjoint method.

    PubMed

    Zhao, Xiaofeng; Wang, Dongxiao

    2015-12-01

    In this paper, an ocean acoustic tomography inversion using the adjoint method in a shallow water environment is presented. The propagation model used is an implicit Crank-Nicolson finite difference parabolic equation solver with a non-local boundary condition. Unlike previous matched-field processing works using the complex pressure fields as the observations, here, the observed signals are the transmission losses. Based on the code tests of the tangent linear model, the adjoint model, and the gradient, the optimization problem is solved by a gradient-based minimization algorithm. The inversions are performed in numerical simulations for two geometries: one in which hydrophones are sparsely distributed in the horizontal direction, and another in which the hydrophones are distributed vertically. The spacing in both cases is well beyond the half-wavelength threshold at which beamforming could be used. To deal with the ill-posedness of the inverse problem, a linear differential regularization operator of the sound-speed profile is used to smooth the inversion results. The L-curve criterion is adopted to select the regularization parameter, and the optimal value can be easily determined at the elbow of the logarithms of the residual norm of the measured-predicted fields and the norm of the penalty function.

  18. Hybrid algorithm: A cost efficient solution for ONU placement in Fiber-Wireless (FiWi) network

    NASA Astrophysics Data System (ADS)

    Bhatt, Uma Rathore; Chouhan, Nitin; Upadhyay, Raksha

    2015-03-01

    Fiber-Wireless (FiWi) network is a promising access technology as it integrates the technical merits of optical and wireless access networks: it combines the large bandwidth and high stability of an optical network with the lower cost of a wireless network. Therefore, FiWi lets users access broadband services in an "anywhere-anytime" way. One of the key issues in a FiWi network is its deployment cost, which depends on the number of ONUs in the network. Therefore, optimal placement of ONUs is desirable to design a cost-effective network. In this paper, we propose an algorithm for optimal placement of ONUs. First we place an ONU in the center of each grid cell, then we form a set of wireless routers associated with each ONU according to the wireless hop number. The number of ONUs is minimized in such a way that all the wireless routers can communicate with at least one of the ONUs. The number of ONUs in the network is further reduced by using a genetic algorithm. The effectiveness of the proposed algorithm is tested by considering Internet traffic as well as peer-to-peer (p2p) traffic in the network, which is a current need. Simulation results show that the proposed algorithm is better than existing algorithms in minimizing the number of ONUs in the network for both types of traffic. Hence the proposed algorithm offers a cost-effective solution for designing FiWi networks.
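    The genetic-algorithm step can be illustrated with the toy sketch below; it assumes a hypothetical precomputed coverage list mapping each candidate ONU site to the set of wireless routers reachable within the allowed hop count, and a fitness that heavily penalizes uncovered routers and, secondarily, counts ONUs. It is not the authors' implementation.

    ```python
    import random

    def ga_min_onus(coverage, pop_size=60, generations=200, p_mut=0.02):
        """coverage[i]: set of router indices reachable from candidate ONU site i.
        A chromosome is a 0/1 list selecting ONU sites; lower fitness is better."""
        n_sites = len(coverage)
        routers = set().union(*coverage)

        def fitness(bits):
            covered = set().union(*(coverage[i] for i in range(n_sites) if bits[i]))
            return 1000 * len(routers - covered) + sum(bits)

        pop = [[random.randint(0, 1) for _ in range(n_sites)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            parents = pop[: pop_size // 2]                  # elitist selection
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_sites)          # one-point crossover
                child = a[:cut] + b[cut:]
                child = [1 - g if random.random() < p_mut else g for g in child]
                children.append(child)
            pop = parents + children
        return min(pop, key=fitness)
    ```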

  19. Exact and approximate Fourier rebinning algorithms for the solution of the data truncation problem in 3-D PET.

    PubMed

    Bouallègue, Fayçal Ben; Crouzet, Jean-François; Comtat, Claude; Fourcade, Marjolaine; Mohammadi, Bijan; Mariano-Goulart, Denis

    2007-07-01

    This paper presents an extended 3-D exact rebinning formula in the Fourier space that leads to an iterative reprojection algorithm (iterative FOREPROJ), which enables the estimation of unmeasured oblique projection data on the basis of the whole set of measured data. In first approximation, this analytical formula also leads to an extended Fourier rebinning equation that is the basis for an approximate reprojection algorithm (extended FORE). These algorithms were evaluated on numerically simulated 3-D positron emission tomography (PET) data for the solution of the truncation problem, i.e., the estimation of the missing portions in the oblique projection data, before the application of algorithms that require complete projection data such as some rebinning methods (FOREX) or 3-D reconstruction algorithms (3DRP or direct Fourier methods). By taking advantage of all the 3-D data statistics, the iterative FOREPROJ reprojection provides a reliable alternative to the classical FOREPROJ method, which only exploits the low-statistics nonoblique data. It significantly improves the quality of the external reconstructed slices without loss of spatial resolution. As for the approximate extended FORE algorithm, it clearly exhibits limitations due to axial interpolations, but will require clinical studies with more realistic measured data in order to decide on its pertinence. PMID:17649913

  20. Investigation of the Solution Space of Marine Controlled-Source Electromagnetic Inversion Problems By Using a Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Hunziker, J.; Thorbecke, J.; Slob, E. C.

    2014-12-01

    Commonly, electromagnetic measurements for exploring and monitoring hydrocarbon reservoirs are inverted for the subsurface conductivity distribution by minimizing the difference between the actual data and a forward-modeled dataset. The convergence of the inversion process to the correct solution strongly depends on the shape of the solution space. Since this is a non-linear problem, there exist a multitude of minima of which only the global one provides the correct conductivity values. To easily find the global minimum we desire it to have a broad cone of attraction, while it should also feature a very narrow bottom in order to obtain the subsurface conductivity with high resolution. In this study, we aim to determine which combination of input data corresponds to a favorable shape of the solution space. Since the solution space is N-dimensional, with N being the number of unknown subsurface parameters, plotting it is out of the question. In our approach, we use a genetic algorithm (Goldberg, 1989) to probe the solution space. Such algorithms have the advantage that every run of the same problem will end up at a different solution. Most of these solutions are expected to lie close to the global minimum. A situation where only few runs end up in the global minimum indicates that the solution space consists of a lot of local minima or that the cone of attraction of the global minimum is small. If a lot of runs end up with a similar data misfit but with a large spread of the subsurface medium parameters in one or more directions, it can be concluded that the chosen data input is not sensitive with respect to that direction. Compared to the study of Hunziker et al. 2014, we also allow inversion for subsurface boundaries and include more combinations of input datasets. The results so far suggest that it is essential to include the magnetic field in the inversion process in order to find the anisotropic conductivity values. References: Goldberg, D. E., 1989. Genetic

  1. Non-self-adjoint hamiltonians defined by Riesz bases

    SciTech Connect

    Bagarello, F.; Inoue, A.; Trapani, C.

    2014-03-15

    We discuss some features of non-self-adjoint Hamiltonians with real discrete simple spectrum under the assumption that the eigenvectors form a Riesz basis of Hilbert space. Among other things, we give conditions under which these Hamiltonians can be factorized in terms of generalized lowering and raising operators.

  2. Adjoint electron-photon transport Monte Carlo calculations with ITS

    SciTech Connect

    Lorence, L.J.; Kensek, R.P.; Halbleib, J.A.; Morel, J.E.

    1995-02-01

    A general adjoint coupled electron-photon Monte Carlo code for solving the Boltzmann-Fokker-Planck equation has recently been created. It is a modified version of ITS 3.0, a coupled electron-photon Monte Carlo code that has world-wide distribution. The applicability of the new code to radiation-interaction problems of the type found in space environments is demonstrated.

  3. Assimilating Remote Ammonia Observations with a Refined Aerosol Thermodynamics Adjoint

    EPA Science Inventory

    Ammonia emissions parameters in North America can be refined in order to improve the evaluation of modeled concentrations against observations. Here, we seek to do so by developing and applying the GEOS-Chem adjoint nested over North America to conduct assimilation of observations...

  4. The efficiency of geophysical adjoint codes generated by automatic differentiation tools

    NASA Astrophysics Data System (ADS)

    Vlasenko, A. V.; Köhl, A.; Stammer, D.

    2016-02-01

    The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the
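    For readers unfamiliar with the operator-overloading style of AD discussed above, the toy class below shows the idea behind adjoint (reverse-mode) differentiation: every arithmetic operation records its local partial derivatives, and backward() propagates adjoints back to the inputs. It is a didactic sketch only (and inefficient for shared subexpressions, since it does not use a topologically ordered tape), not one of the production tools evaluated in the study.

    ```python
    class Var:
        """A value plus its recorded parents (parent, local_partial) and adjoint."""
        def __init__(self, value, parents=()):
            self.value = value
            self.parents = parents
            self.adjoint = 0.0

        def __add__(self, other):
            return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

        def __mul__(self, other):
            return Var(self.value * other.value,
                       [(self, other.value), (other, self.value)])

        def backward(self, seed=1.0):
            self.adjoint += seed
            for parent, partial in self.parents:
                parent.backward(seed * partial)

    # gradient of f(x, y) = x*y + x at (3, 4)
    x, y = Var(3.0), Var(4.0)
    f = x * y + x
    f.backward()
    print(x.adjoint, y.adjoint)   # 5.0 (= y + 1) and 3.0 (= x)
    ```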

  5. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    required by human intervention and analysis. Specifying an objective functional that quantifies the misfit between the simulation outcome and known constraints and then minimizing it through numerical optimization can serve as an automated technique for parameter identification. As suggested by the similarity in formulation, the numerical algorithm is closely related to the one used for goal-oriented error estimation. One common point is that the so-called adjoint equation needs to be solved numerically. We will outline the derivation and implementation of these methods and discuss some of their pros and cons, supported by numerical results.

  6. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: II. Solutions and applications

    NASA Astrophysics Data System (ADS)

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    In a companion manuscript (Frolov et al 2014 New J. Phys. 16 art. no.) , we developed a novel optimization method for the placement, sizing, and operation of flexible alternating current transmission system (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide series compensation (SC) via modification of line inductance. In this sequel manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (˜2700 nodes and ˜3300 lines). The results from the 30-bus network are used to study the general properties of the solutions, including nonlocality and sparsity. The Polish grid is used to demonstrate the computational efficiency of the heuristics that leverage sequential linearization of power flow constraints, and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, we can use the algorithm to solve a Polish transmission grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (i) uniform load growth, (ii) multiple overloaded configurations, and (iii) sequential generator retirements.

  7. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: II. Solutions and applications

    SciTech Connect

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~ 2700 nodes and ~ 3300 lines). The results on the 30-bus network are used to study the general properties of the solutions including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics that leverages sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  8. Efficient Algorithm for Locating and Sizing Series Compensation Devices in Large Transmission Grids: Solutions and Applications (PART II)

    SciTech Connect

    Frolov, Vladimir; Backhaus, Scott N.; Chertkov, Michael

    2014-01-14

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~2700 nodes and ~3300 lines). The results on the 30-bus network are used to study the general properties of the solutions including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics that leverages sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements

  9. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: II. Solutions and applications

    DOE PAGES

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~ 2700 nodes and ~ 3300 lines). The results on the 30-bus network are used to study the general properties of the solutions including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics that leverages sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  10. Marble Algorithm: a solution to estimating ecological niches from presence-only records

    PubMed Central

    Qiao, Huijie; Lin, Congtian; Jiang, Zhigang; Ji, Liqiang

    2015-01-01

    We describe an algorithm that helps to predict potential distributional areas for species using presence-only records. The Marble Algorithm is a density-based clustering program based on Hutchinson’s concept of ecological niches as multidimensional hypervolumes in environmental space. The algorithm characterizes this niche space using the density-based spatial clustering of applications with noise (DBSCAN) algorithm. When MA is provided with a set of occurrence points in environmental space, the algorithm determines two parameters that allow the points to be grouped into several clusters. These clusters are used as reference sets describing the ecological niche, which can then be mapped onto geographic space and used as the potential distribution of the species. We used both virtual species and ten empirical datasets to compare MA with other distribution-modeling tools, including Bioclimate Analysis and Prediction System, Environmental Niche Factor Analysis, the Genetic Algorithm for Rule-set Production, Maximum Entropy Modeling, Artificial Neural Networks, Climate Space Models, Classification Tree Analysis, Generalised Additive Models, Generalised Boosted Models, Generalised Linear Models, Multivariate Adaptive Regression Splines and Random Forests. Results indicate that MA predicts potential distributional areas with high accuracy, moderate robustness, and above-average transferability on all datasets, particularly when dealing with small numbers of occurrences. PMID:26387771
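    The clustering step at the heart of the method can be illustrated with scikit-learn's DBSCAN (an assumed dependency); unlike the published Marble Algorithm, which determines the two DBSCAN parameters automatically, this sketch simply takes them as arguments and labels records that fall outside every cluster as noise.

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.preprocessing import StandardScaler

    def niche_clusters(occurrence_env, eps=0.5, min_samples=5):
        """Cluster occurrence records in environmental space.
        occurrence_env: array (n_records, n_environmental_variables).
        Returns DBSCAN labels; -1 marks noise, other labels are clusters."""
        scaled = StandardScaler().fit_transform(occurrence_env)
        return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(scaled)

    def in_niche(candidate_env, occurrence_env, labels, radius=0.5):
        """Crude suitability test: a candidate environment is 'inside the niche'
        if it lies within `radius` of any clustered (non-noise) record."""
        scaler = StandardScaler().fit(occurrence_env)
        core = scaler.transform(occurrence_env)[np.asarray(labels) >= 0]
        cand = scaler.transform(candidate_env)
        dists = np.linalg.norm(cand[:, None, :] - core[None, :, :], axis=-1)
        return dists.min(axis=1) <= radius
    ```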

  11. Aerodynamic Shape Optimization of Complex Aircraft Configurations via an Adjoint Formulation

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony; Farmer, James; Martinelli, Luigi; Saunders, David

    1996-01-01

    This work describes the implementation of optimization techniques based on control theory for complex aircraft configurations. Here control theory is employed to derive the adjoint differential equations, the solution of which allows for a drastic reduction in computational costs over previous design methods (13, 12, 43, 38). In our earlier studies (19, 20, 22, 23, 39, 25, 40, 41, 42) it was shown that this method could be used to devise effective optimization procedures for airfoils, wings and wing-bodies subject to either analytic or arbitrary meshes. Design formulations for both potential flows and flows governed by the Euler equations have been demonstrated, showing that such methods can be devised for various governing equations (39, 25). In our most recent works (40, 42) the method was extended to treat wing-body configurations with a large number of mesh points, verifying that significant computational savings can be gained for practical design problems. In this paper the method is extended for the Euler equations to treat complete aircraft configurations via a new multiblock implementation. New elements include a multiblock-multigrid flow solver, a multiblock-multigrid adjoint solver, and a multiblock mesh perturbation scheme. Two design examples are presented in which the new method is used for the wing redesign of a transonic business jet.

  12. Adjoint Tomography of Taiwan Region: From Travel-Time Toward Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Huang, H. H.; Lee, S. J.; Tromp, J.

    2014-12-01

    The complicated tectonic environment of the Taiwan region can modulate seismic waveforms severely and hamper the discrimination and utilization of later phases. Restricted to the use of only the first arrivals of P- and S-waves, the travel-time tomographic models of Taiwan can to date simulate the seismic waveform only up to a frequency of 0.2 Hz. While this has been sufficient for long-period studies, e.g. source inversion, this frequency band is still far from community applications and high-resolution studies. To achieve higher-frequency simulations, more data and the consideration of off-path and finite-frequency effects are necessary. Based on the spectral-element and adjoint methods recently developed, we prepared 94 MW 3.5-6.0 earthquakes with well-defined locations and focal mechanism solutions from the Real-Time Moment Tensor Monitoring System (RMT), and performed an iterative gradient-based inversion employing waveform modeling and finite-frequency measurements of the adjoint method. In this way, the 3-D sensitivity kernels are taken into account realistically and the full waveform information is naturally exploited, without the need for any phase picks. A preliminary model m003 using 10-50 sec data is demonstrated and compared with previous travel-time models. The primary difference appears in the mountainous area, where the previous travel-time model may underestimate the S-wave speed in the upper crust, but overestimates it in the lower crust.

  13. Adjoint Optimization of Multistage Axial Compressor Blades with Static Pressure Constraint at Blade Row Interface

    NASA Astrophysics Data System (ADS)

    Yu, Jia; Ji, Lucheng; Li, Weiwei; Yi, Weilin

    2016-06-01

    The adjoint method is an important tool for design refinement of multistage compressors. However, the radial static pressure distribution deviates during the optimization procedure and deteriorates the overall performance, producing final designs that are not well suited for realistic engineering applications. In previous development work on multistage turbomachinery blade optimization using the adjoint method and the thin shear-layer N-S equations, the entropy production is selected as the objective function with given mass flow rate and total pressure ratio as imposed constraints. The radial static pressure distribution at the interfaces between rows is introduced as a new constraint in the present paper. The approach is applied to the redesign of a five-stage axial compressor, and the results obtained with and without the constraint on the radial static pressure distribution at the interfaces between rows are discussed in detail. The results show that the redesign without the radial static pressure distribution constraint (RSPDC) gives an optimal solution that shows deviations in the radial static pressure distribution, especially at the rotor exit tip region. On the other hand, the redesign with the RSPDC successfully keeps the radial static pressure distribution at the interfaces between rows and ensures that the optimization results are applicable in practical engineering design.

  14. Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow

    NASA Astrophysics Data System (ADS)

    Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar

    2014-09-01

    We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially-varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. The adjoint (linearized) Stokes equations, which are characterized by a 4th-order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton's method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite-element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially-varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. Finally, a model of a subducting plate shows that a localized weak zone at the

  15. Reasons why current speech-enhancement algorithms do not improve speech intelligibility and suggested solutions

    PubMed Central

    Loizou, Philipos C.; Kim, Gibak

    2011-01-01

    Existing speech enhancement algorithms can improve speech quality but not speech intelligibility, and the reasons for that are unclear. In the present paper, we present a theoretical framework that can be used to analyze potential factors that can influence the intelligibility of processed speech. More specifically, this framework focuses on the fine-grain analysis of the distortions introduced by speech enhancement algorithms. It is hypothesized that if these distortions are properly controlled, then large gains in intelligibility can be achieved. To test this hypothesis, intelligibility tests are conducted with human listeners in which we present processed speech with controlled speech distortions. The aim of these tests is to assess the perceptual effect of the various distortions that can be introduced by speech enhancement algorithms on speech intelligibility. Results with three different enhancement algorithms indicated that certain distortions are more detrimental to speech intelligibility than others. When these distortions were properly controlled, however, large gains in intelligibility were obtained by human listeners, even with spectral-subtractive algorithms, which are known to degrade speech quality and intelligibility. PMID:21909285

  16. Current algorithmic solutions for peptide-based proteomics data generation and identification.

    PubMed

    Hoopmann, Michael R; Moritz, Robert L

    2013-02-01

    Peptide-based proteomic data sets are ever increasing in size and complexity. These data sets provide computational challenges when attempting to quickly analyze spectra and obtain correct protein identifications. Database search and de novo algorithms must consider high-resolution MS/MS spectra and alternative fragmentation methods. Protein inference is a tricky problem when analyzing large data sets of degenerate peptide identifications. Combining multiple algorithms for improved peptide identification puts significant strain on computational systems when investigating large data sets. This review highlights some of the recent developments in peptide and protein identification algorithms for analyzing shotgun mass spectrometry data when encountering the aforementioned hurdles. Also explored are the roles that analytical pipelines, public spectral libraries, and cloud computing play in the evolution of peptide-based proteomics.

  17. Solution of the hydrodynamic device model using high-order non-oscillatory shock capturing algorithms

    NASA Technical Reports Server (NTRS)

    Fatemi, Emad; Jerome, Joseph; Osher, Stanley

    1989-01-01

    A micron n+ - n - n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially non-oscillatory, were successfully applied in other contexts to model the flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws in fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.
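
    The conservative, shock-capturing update that such schemes build on can be illustrated with a minimal first-order example. The sketch below uses a simple Lax-Friedrichs flux on a scalar conservation law (Burgers' equation), not the essentially non-oscillatory fluxes or the coupled hydrodynamic system of the paper; all names and parameters are illustrative.

    import numpy as np

    # First-order Lax-Friedrichs update for a scalar conservation law u_t + f(u)_x = 0.
    def lax_friedrichs_step(u, flux, dx, dt):
        f = flux(u)
        alpha = np.max(np.abs(u))                           # wave-speed bound for a Burgers-type flux
        up, fp = np.roll(u, -1), np.roll(f, -1)
        f_half = 0.5 * (f + fp) - 0.5 * alpha * (up - u)    # numerical flux at interface i+1/2
        return u - dt / dx * (f_half - np.roll(f_half, 1))  # conservative difference

    # Usage: Burgers' equation with a step initial condition (periodic boundaries).
    nx = 200
    dx = 1.0 / nx
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    u = np.where(x < 0.5, 1.0, 0.0)
    for _ in range(100):
        u = lax_friedrichs_step(u, lambda v: 0.5 * v**2, dx, dt=0.4 * dx)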

  18. Monte Carlo solution methods in a moment-based scale-bridging algorithm for thermal radiative transfer problems: Comparison with Fleck and Cummings

    SciTech Connect

    Park, H.; Densmore, J. D.; Wollaber, A. B.; Knoll, D. A.; Rauenzahn, R. M.

    2013-07-01

    We have developed a moment-based scale-bridging algorithm for thermal radiative transfer problems. The algorithm takes the form of the well-known nonlinear-diffusion acceleration, which utilizes a low-order (LO) continuum problem to accelerate the solution of a high-order (HO) kinetic problem. The coupled nonlinear equations that form the LO problem are efficiently solved using a preconditioned Jacobian-free Newton-Krylov method. This work demonstrates the applicability of the scale-bridging algorithm with a Monte Carlo HO solver and reports the computational efficiency of the algorithm in comparison to the well-known Fleck-Cummings algorithm. (authors)
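
    The Jacobian-free Newton-Krylov (JFNK) idea mentioned above can be sketched very compactly: the Krylov solver only needs Jacobian-vector products, which are approximated with a finite difference of the nonlinear residual. The residual F below is a toy stand-in, not the coupled LO radiation-material equations, and no preconditioner is shown.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def jfnk_solve(F, u0, tol=1e-8, max_newton=20):
        """Inexact Newton iteration with a matrix-free GMRES linear solve."""
        u = u0.copy()
        n = u.size
        for _ in range(max_newton):
            r = F(u)
            if np.linalg.norm(r) < tol:
                break
            eps = 1e-7 * (1.0 + np.linalg.norm(u))
            # J(u) v is approximated by (F(u + eps v) - F(u)) / eps
            J = LinearOperator((n, n), matvec=lambda v: (F(u + eps * v) - r) / eps)
            du, _ = gmres(J, -r)
            u = u + du
        return u

    # Usage on a toy componentwise nonlinear system F(u) = u**3 + u - 1.
    u = jfnk_solve(lambda v: v**3 + v - 1.0, np.zeros(4))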

  19. Fast Time and Space Parallel Algorithms for Solution of Parabolic Partial Differential Equations

    NASA Technical Reports Server (NTRS)

    Fijany, Amir

    1993-01-01

    In this paper, fast time- and space-parallel algorithms for the solution of linear parabolic PDEs are developed. It is shown that the seemingly strictly serial iterations of the time-stepping procedure for the solution of the problem can be completely decoupled.

  20. Three-Dimensional Turbulent RANS Adjoint-Based Error Correction

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2003-01-01

    Engineering problems commonly require functional outputs of computational fluid dynamics (CFD) simulations with specified accuracy. These simulations are performed with limited computational resources. Computable error estimates offer the possibility of quantifying accuracy on a given mesh and predicting a fine grid functional on a coarser mesh. Such an estimate can be computed by solving the flow equations and the associated adjoint problem for the functional of interest. An adjoint-based error correction procedure is demonstrated for transonic inviscid and subsonic laminar and turbulent flow. A mesh adaptation procedure is formulated to target uncertainty in the corrected functional and terminate when the error remaining in the calculation is less than a user-specified error tolerance. This adaptation scheme is shown to yield anisotropic meshes with corrected functionals that are more accurate for a given number of grid points than isotropically adapted and uniformly refined grids.
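
    A common form of the adjoint-based output correction sketched above (notation generic, not taken from the paper) estimates a fine-grid functional from a coarse-grid solution as

    \[
      f_h(u_h) \;\approx\; f_h(u_H^h) \;-\; \psi_h^{T} R_h(u_H^h),
    \]

    where \(u_H^h\) is the coarse-grid solution reconstructed on the fine grid, \(R_h\) is the fine-grid flow residual (which vanishes for the true fine-grid solution), and \(\psi_h\) is the adjoint solution associated with the functional \(f\); the size of the correction term, or the error remaining after the correction, is what the adaptation procedure targets.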

  1. Accurate adjoint design sensitivities for nano metal optics.

    PubMed

    Hansen, Paul; Hesselink, Lambertus

    2015-09-01

    We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature we obtain highly-accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics. PMID:26368483

  2. On improving storm surge forecasting using an adjoint optimal technique

    NASA Astrophysics Data System (ADS)

    Li, Yineng; Peng, Shiqiu; Yan, Jing; Xie, Lian

    2013-12-01

    A three-dimensional ocean model and its adjoint model are used to simultaneously optimize the initial conditions (IC) and the wind stress drag coefficient (Cd) for improving storm surge forecasting. To demonstrate the effect of this proposed method, a number of identical twin experiments (ITEs) with a prescription of different error sources and two real data assimilation experiments are performed. Results from both the idealized and real data assimilation experiments show that adjusting IC and Cd simultaneously yields much greater improvement in storm surge forecasting than adjusting IC or Cd alone. A diagnosis of the dynamical balance indicates that adjusting IC alone may introduce unrealistic oscillations outside the assimilation window, which can be suppressed by the adjustment of the wind stress when simultaneously adjusting IC and Cd. Therefore, it is recommended to simultaneously adjust IC and Cd to improve storm surge forecasting using an adjoint technique.

  3. Solution of basic tasks in eclipsing binary period analysis by genetic and LSM algorithms

    NASA Astrophysics Data System (ADS)

    Chrastina, M.; Mikulášek, Z.; Zejda, M.

    2014-03-01

    A period analysis of eclipsing binaries can be performed effectively when using fine-tuned phenomenological models. The combination of a regression analysis and genetic algorithms is a powerful tool for such astrophysical tasks as light curve analysis, mid-eclipse time determination and O-C diagram investigation — even the apsidal motion and the light time effect can be resolved.

  4. A comparison of adjoint and data-centric verification techniques.

    SciTech Connect

    Wildey, Timothy Michael; Cyr, Eric C; Shadid, John N; Pawlowski, Roger P; Smith, Thomas Michael

    2013-03-01

    This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. We compare the adjoint-based a posteriori error estimation approach with a recent variant of a data-centric verification technique. We provide a brief overview of each technique and then we discuss their relative advantages and disadvantages. We use Drekar::CFD to produce numerical results for steady-state Navier-Stokes and SARANS approximations.

  5. Seismic Window Selection and Misfit Measurements for Global Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Lei, W.; Bozdag, E.; Lefebvre, M.; Podhorszki, N.; Smith, J. A.; Tromp, J.

    2013-12-01

    Global Adjoint Tomography requires fast parallel processing of large datasets. After obtaining the preprocessed observed and synthetic seismograms, we use the open source software packages FLEXWIN (Maggi et al. 2007) to select time windows and MEASURE_ADJ to make measurements. These measurements define adjoint sources for data assimilation. Previous versions of these tools work on a pair of SAC files (observed and synthetic seismic data for the same component and station) and loop over all seismic records associated with one earthquake. Given the large number of stations and earthquakes, the frequent read and write operations create severe I/O bottlenecks on modern computing platforms. We present new versions of these tools utilizing a new seismic data format, namely the Adaptive Seismic Data Format (ASDF). This new format shows superior scalability for applications on high-performance computers and accommodates various types of data, including earthquake, industry and seismic interferometry datasets. ASDF also provides user-friendly APIs, which can be easily integrated into the adjoint tomography workflow and combined with other data processing tools. In addition to solving the I/O bottleneck, we are making several improvements to these tools. For example, FLEXWIN is tuned to select windows for different types of earthquakes. To capture their distinct features, we categorize earthquakes by their depths and frequency bands. Moreover, instead of only picking phases between the first P arrival and the surface-wave arrivals, our aim is to select and assimilate many other later prominent phases in adjoint tomography. For example, in the body-wave band (17 s - 60 s), we include SKS, sSKS and their multiples, while in the surface-wave band (60 s - 120 s) we incorporate major-arc surface waves.

  6. The fast neutron fluence and the activation detector activity calculations using the effective source method and the adjoint function

    SciTech Connect

    Hep, J.; Konecna, A.; Krysl, V.; Smutny, V.

    2011-07-01

    This paper describes the application of the effective source in forward calculations and the adjoint method to the solution of fast neutron fluence and activation detector activities in the reactor pressure vessel (RPV) and RPV cavity of a VVER-440 reactor. Its objective is the demonstration of both methods on a practical task. The effective source method applies the Boltzmann transport operator to time-integrated source data in order to obtain neutron fluence and detector activities. By weighting the source data by the time-dependent decay of the detector activity, the result of the calculation is the detector activity. Alternatively, if the weighting is uniform with respect to time, the result is the fluence. The approach works because of the inherent linearity of radiation transport in non-multiplying time-invariant media. Integrated in this way, the source data are referred to as the effective source. The effective source in the forward calculations method thereby enables the analyst to replace numerous intensive transport calculations with a single transport calculation in which the time dependence and magnitude of the source are correctly represented. In this work, the effective source method has been expanded slightly in the following way: the neutron source data were prepared with a few-group calculation using the active core calculation code MOBY-DICK. The follow-up neutron transport calculation was performed in multigroup form using the neutron transport code TORT. For comparison, an alternative method of calculation has been used based upon adjoint functions of the Boltzmann transport equation. Calculation of the three-dimensional (3-D) adjoint function for each required computational outcome has been obtained using the deterministic code TORT and the cross section library BGL440. Adjoint functions appropriate to the required fast neutron flux density and neutron reaction rates have been calculated for several significant points within the RPV

  7. A planning model with a solution algorithm for ready mixed concrete production and truck dispatching under stochastic travel times

    NASA Astrophysics Data System (ADS)

    Yan, S.; Lin, H. C.; Jiang, X. Y.

    2012-04-01

    In this study the authors employ network flow techniques to construct a systematic model that helps ready mixed concrete carriers effectively plan production and truck dispatching schedules under stochastic travel times. The model is formulated as a mixed integer network flow problem with side constraints. Problem decomposition and relaxation techniques, coupled with the CPLEX mathematical programming solver, are employed to develop an algorithm that is capable of efficiently solving the problems. A simulation-based evaluation method is also proposed to evaluate the model, coupled with a deterministic model, and the method currently used in actual operations. Finally, a case study is performed using real operating data from a Taiwan RMC firm. The test results show that the system operating cost obtained using the stochastic model is a significant improvement over that obtained using the deterministic model or the manual approach. Consequently, the model and the solution algorithm could be useful for actual operations.

  8. Unsteady Adjoint Approach for Design Optimization of Flapping Airfoils

    NASA Technical Reports Server (NTRS)

    Lee, Byung Joon; Liou, Meng-Sing

    2012-01-01

    This paper describes the work for optimizing the propulsive efficiency of flapping airfoils, i.e., improving the thrust while constraining the aerodynamic work during flapping flight, by changing the airfoil shape and trajectory of motion with the unsteady discrete adjoint approach. For unsteady problems, it is essential to properly resolve the time scales of the motion under consideration, and the temporal resolution must be compatible with the objective sought. We include both the instantaneous and time-averaged (periodic) formulations in this study. For design optimization with shape parameters or motion parameters, the time-averaged objective function is found to be more useful, while the instantaneous one is more suitable for flow control. The instantaneous objective function is operationally straightforward. On the other hand, the time-averaged objective function requires additional steps in the adjoint approach; the unsteady discrete adjoint equations for a periodic flow must be reformulated and the corresponding system of equations solved iteratively. We compare the design results from shape and trajectory optimizations and investigate the physical relevance of the design variables to the flapping motion at on- and off-design conditions.
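
    As a generic illustration of the two formulations contrasted above (symbols are not from the paper), an instantaneous objective is evaluated at a single time, whereas the time-averaged objective over a flapping period \(T\) is

    \[
      \bar{J}(\alpha) \;=\; \frac{1}{T}\int_{t_0}^{t_0+T} J\big(\mathbf{w}(t),\alpha\big)\,\mathrm{d}t,
    \]

    where \(\mathbf{w}(t)\) is the periodic unsteady flow state and \(\alpha\) collects the shape or motion design variables; differentiating \(\bar{J}\) leads to the time-periodic adjoint problem that must be solved iteratively, as noted in the abstract.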

  9. Spectral monodromy of non-self-adjoint operators

    SciTech Connect

    Phan, Quang Sang

    2014-01-15

    In the present paper, we build a combinatorial invariant, called the “spectral monodromy” from the spectrum of a single (non-self-adjoint) h-pseudodifferential operator with two degrees of freedom in the semi-classical limit. Our inspiration comes from the quantum monodromy defined for the joint spectrum of an integrable system of n commuting self-adjoint h-pseudodifferential operators, given by S. Vu Ngoc [“Quantum monodromy in integrable systems,” Commun. Math. Phys. 203(2), 465–479 (1999)]. The first simple case that we treat in this work is a normal operator. In this case, the discrete spectrum can be identified with the joint spectrum of an integrable quantum system. The second more complex case we propose is a small perturbation of a self-adjoint operator with a classical integrability property. We show that the discrete spectrum (in a small band around the real axis) also has a combinatorial monodromy. The main difficulty in this case is that we do not know the description of the spectrum everywhere, but only in a Cantor type set. In addition, we also show that the corresponding monodromy can be identified with the classical monodromy, defined by J. Duistermaat [“On global action-angle coordinates,” Commun. Pure Appl. Math. 33(6), 687–706 (1980)].

  10. Optimization of a neutron detector design using adjoint transport simulation

    SciTech Connect

    Yi, C.; Manalo, K.; Huang, M.; Chin, M.; Edgar, C.; Applegate, S.; Sjoden, G.

    2012-07-01

    A synthetic aperture approach has been developed and investigated for Special Nuclear Materials (SNM) detection in vehicles passing a checkpoint at highway speeds. SNM is postulated to be stored in a moving vehicle and detector assemblies are placed on the road-side or in chambers embedded below the road surface. In addition to high efficiency, neutron and gamma spectral awareness is important for the detector assembly design, so that different SNMs can be detected and identified under various possible shielding configurations. The detector assembly design is composed of a CsI gamma-ray detector block and five neutron detector blocks, with peak efficiencies targeting different energy ranges determined by adjoint simulations. In this study, formulations are derived using adjoint transport simulations to estimate detector efficiencies. The formulation is applied to investigate several neutron detector designs for Block IV, which has its peak efficiency in the thermal range, and Block V, designed to maximize the total neutron counts over the entire energy spectrum. The other blocks detect different neutron energies. All five neutron detector blocks and the gamma-ray block are assembled in both MCNP and deterministic simulation models, with detector responses calculated to validate the fully assembled design using a 30-group library. The simulation results show that the 30-group library, collapsed from an 80-group library using an adjoint-weighting approach with the YGROUP code, significantly reduced the computational cost while maintaining accuracy. (authors)

  11. A uniformly valid approximation algorithm for nonlinear ordinary singular perturbation problems with boundary layer solutions.

    PubMed

    Cengizci, Süleyman; Atay, Mehmet Tarık; Eryılmaz, Aytekin

    2016-01-01

    This paper is concerned with two-point boundary value problems for singularly perturbed nonlinear ordinary differential equations. The case when the solution has only one boundary layer is examined. An efficient method, the so-called Successive Complementary Expansion Method (SCEM), is used to obtain uniformly valid approximations to this kind of solution. Four test problems are considered to check the efficiency and accuracy of the proposed method. The numerical results are found to be in good agreement with exact and existing solutions in the literature. The results confirm that SCEM has an advantage over other existing methods in terms of ease of application and effectiveness. PMID:27006888

  12. Nonlinear self-adjointness and conservation laws of Klein-Gordon-Fock equation with central symmetry

    NASA Astrophysics Data System (ADS)

    Abdulwahhab, Muhammad Alim

    2015-05-01

    The concept of nonlinear self-adjointness, introduced by Ibragimov, has significantly extended approaches to constructing conservation laws associated with symmetries, since it incorporates strict self-adjointness and quasi self-adjointness as well as the usual linear self-adjointness. Using this concept, the nonlinear self-adjointness condition for the Klein-Gordon-Fock equation was established and subsequently used to construct simplified but infinitely many nontrivial and independent conserved vectors. Noether's theorem was further applied to the Klein-Gordon-Fock equation to explore more distinct first integrals; the results show that conservation laws constructed through this approach are exactly the same as those obtained under the strict self-adjointness of Ibragimov's method.

  13. A Nested Genetic Algorithm for the Numerical Solution of Non-Linear Coupled Equations in Water Quality Modeling

    NASA Astrophysics Data System (ADS)

    García, Hermes A.; Guerrero-Bolaño, Francisco J.; Obregón-Neira, Nelson

    2010-05-01

    Due to both mathematical tractability and efficiency with computational resources, it is very common in numerical modeling for hydro-engineering to find that regular linearization techniques have been applied to the nonlinear partial differential equations obtained in environmental flow studies. Sometimes this simplification is also made along with the omission of nonlinear terms involved in such equations, which in turn diminishes the performance of any implemented approach. This is the case, for example, for contaminant transport modeling in streams. Nowadays, one of the most traditional and commonly used water quality models, QUAL2K, preserves its original algorithm, which omits nonlinear terms through linearization techniques, in spite of continuous algorithmic development and computer power enhancement. For that reason, the main objective of this research was to generate a flexible tool for non-linear water quality modeling. The solution implemented here was based on two genetic algorithms, used in a nested way in order to find two different types of solution sets: the first set is composed of the concentrations of the physical-chemical variables used in the modeling approach (16 variables), which satisfy the non-linear equation system. The second set is the typical solution of the inverse problem, the parameter and constant values for the model when it is applied to a particular stream. Of a total of sixteen (16) variables, thirteen (13) were modeled by using non-linear coupled equation systems and three (3) were modeled independently. The model used here required fifty (50) parameters. The nested genetic algorithm used for the numerical solution of a non-linear equation system proved to serve as a flexible tool to handle the intrinsic non-linearity that emerges from the interactions occurring between the multiple variables involved in water quality studies. However because there is a strong data limitation in

  14. Solution algorithm of a quasi-Lambert's problem with fixed flight-direction angle constraint

    NASA Astrophysics Data System (ADS)

    Luo, Qinqin; Meng, Zhanfeng; Han, Chao

    2011-04-01

    A two-point boundary value problem of the Kepler orbit similar to Lambert's problem is proposed. The problem is to find a Kepler orbit that will travel through the initial and final points in a specified flight time given the radial distances of the two points and the flight-direction angle at the initial point. The Kepler orbits that meet the geometric constraints are parameterized via the universal variable z introduced by Bate. The formula for flight time of the orbits is derived. The admissible interval of the universal variable and the variation pattern of the flight time are explored intensively. A numerical iteration algorithm based on the analytical results is presented to solve the problem. A large number of randomly generated examples are used to test the reliability and efficiency of the algorithm.

  15. Adjoint-Based Sensitivity Maps for the Nearshore

    NASA Astrophysics Data System (ADS)

    Orzech, Mark; Veeramony, Jay; Ngodock, Hans

    2013-04-01

    The wave model SWAN (Booij et al., 1999) solves the spectral action balance equation to produce nearshore wave forecasts and climatologies. It is widely used by the coastal modeling community and is part of a variety of coupled ocean-wave-atmosphere model systems. A variational data assimilation system (Orzech et al., 2013) has recently been developed for SWAN and is presently being transitioned to operational use by the U.S. Naval Oceanographic Office. This system is built around a numerical adjoint to the fully nonlinear, nonstationary SWAN code. When provided with measured or artificial "observed" spectral wave data at a location of interest on a given nearshore bathymetry, the adjoint can compute the degree to which spectral energy levels at other locations are correlated with - or "sensitive" to - variations in the observed spectrum. Adjoint output may be used to construct a sensitivity map for the entire domain, tracking correlations of spectral energy throughout the grid. When access is denied to the actual locations of interest, sensitivity maps can be used to determine optimal alternate locations for data collection by identifying regions of greatest sensitivity in the mapped domain. The present study investigates the properties of adjoint-generated sensitivity maps for nearshore wave spectra. The adjoint and forward SWAN models are first used in an idealized test case at Duck, NC, USA, to demonstrate the system's effectiveness at optimizing forecasts of shallow water wave spectra for an inaccessible surf-zone location. Then a series of simulations is conducted for a variety of different initializing conditions, to examine the effects of seasonal changes in wave climate, errors in bathymetry, and variations in size and shape of the inaccessible region of interest. Model skill is quantified using two methods: (1) a more traditional correlation of observed and modeled spectral statistics such as significant wave height, and (2) a recently developed RMS

  16. Using Adjoint Methods to Improve 3-D Velocity Models of Southern California

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Tape, C.; Maggi, A.; Tromp, J.

    2006-12-01

    We use adjoint methods popular in climate and ocean dynamics to calculate Fréchet derivatives for tomographic inversions in southern California. The Fréchet derivative of an objective function χ(m), where m denotes the Earth model, may be written in the generic form δχ = ∫ K_m(x) δln m(x) d³x, where δln m = δm/m denotes the relative model perturbation. For illustrative purposes, we construct the 3-D finite-frequency banana-doughnut kernel K_m, corresponding to the misfit of a single traveltime measurement, by simultaneously computing the 'adjoint' wave field s† forward in time and reconstructing the regular wave field s backward in time. The adjoint wave field is produced by using the time-reversed velocity at the receiver as a fictitious source, while the regular wave field is reconstructed on the fly by propagating the last frame of the wave field saved by a previous forward simulation backward in time. The approach is based upon the spectral-element method, and only two simulations are needed to produce density, shear-wave, and compressional-wave sensitivity kernels. This method is applied to the SCEC southern California velocity model. Various density, shear-wave, and compressional-wave sensitivity kernels are presented for different phases in the seismograms. We also generate 'event' kernels for Pnl, S and surface waves, which are the Fréchet kernels of misfit functions that measure the P, S or surface wave traveltime residuals at all the receivers simultaneously for one particular event. Effectively, an event kernel is a sum of weighted Fréchet kernels, with weights determined by the associated traveltime anomalies. By the nature of the 3-D simulation, every event kernel is also computed based upon just two simulations, i.e., its construction costs the same amount of computation time as an individual banana-doughnut kernel. One can think of the sum of the event kernels for all available earthquakes, called the 'misfit' kernel, as a graphical

  17. Modeling Finite Faults Using the Adjoint Wave Field

    NASA Astrophysics Data System (ADS)

    Hjörleifsdóttir, V.; Liu, Q.; Tromp, J.

    2004-12-01

    Time-reversal acoustics, a technique in which an acoustic signal is recorded by an array of transducers, time-reversed, and retransmitted, is used, e.g., in medical therapy to locate and destroy gallstones (for a review see Fink, 1997). As discussed by Tromp et al. (2004), time-reversal techniques for locating sources are closely linked to so-called 'adjoint methods' (Talagrand and Courtier, 1987), which may be used to evaluate the gradient of a misfit function. Tromp et al. (2004) illustrate how a (finite) source inversion may be implemented based upon the adjoint wave field by writing the change in the misfit function, δχ, due to a change in the moment-density tensor, δm, as an integral of the adjoint strain field ε(x, t) over the fault plane Σ: δχ = ∫_0^T ∫_Σ ε(x, T−t) : δm(x, t) d²x dt. We find that if the real fault plane is located at a distance δh in the direction of the fault normal n̂, then to first order an additional term ∫_0^T ∫_Σ δh(x) ∂_n ε(x, T−t) : m(x, t) d²x dt is added to the change in the misfit function. The adjoint strain is computed by using the time-reversed difference between data and synthetics recorded at all receivers as simultaneous sources and recording the resulting strain on the fault plane. In accordance with time-reversal acoustics, all the resulting waves will constructively interfere at the position of the original source in space and time. The level of convergence will be determined by factors such as the source-receiver geometry, the frequency of the recorded data and synthetics, and the accuracy of the velocity structure used when back propagating the wave field. The terms ε(x, T−t) and ∂_n ε(x, T−t) : m(x, t) can be viewed as sensitivity kernels for the moment density and the fault-plane location respectively. By looking at these quantities we can make an educated choice of fault parametrization given the data in hand. The process can then be repeated to invert for the best source model, as

  18. A local anisotropic adaptive algorithm for the solution of low-Mach transient combustion problems

    NASA Astrophysics Data System (ADS)

    Carpio, Jaime; Prieto, Juan Luis; Vera, Marcos

    2016-02-01

    A novel numerical algorithm for the simulation of transient combustion problems at low Mach and moderately high Reynolds numbers is presented. These problems are often characterized by the existence of a large disparity of length and time scales, resulting in the development of directional flow features, such as slender jets, boundary layers, mixing layers, or flame fronts. This makes local anisotropic adaptive techniques quite advantageous computationally. In this work we propose a local anisotropic refinement algorithm using, for the spatial discretization, unstructured triangular elements in a finite element framework. For the time integration, the problem is formulated in the context of semi-Lagrangian schemes, introducing the semi-Lagrange-Galerkin (SLG) technique as a better alternative to the classical semi-Lagrangian (SL) interpolation. The good performance of the numerical algorithm is illustrated by solving a canonical laminar combustion problem: the flame/vortex interaction. First, a premixed methane-air flame/vortex interaction with simplified transport and chemistry description (Test I) is considered. Results are found to be in excellent agreement with those in the literature, proving the superior performance of the SLG scheme when compared with the classical SL technique, and the advantage of using anisotropic adaptation instead of uniform meshes or isotropic mesh refinement. As a more realistic example, we then conduct simulations of non-premixed hydrogen-air flame/vortex interactions (Test II) using a more complex combustion model which involves state-of-the-art transport and chemical kinetics. In addition to the analysis of the numerical features, this second example allows us to perform a satisfactory comparison with experimental visualizations taken from the literature.

  19. The Crystal-T algorithm: a new approach to calculate the SLE of lipidic mixtures presenting solid solutions.

    PubMed

    Maximo, Guilherme J; Costa, Mariana C; Meirelles, Antonio J A

    2014-08-21

    Lipidic mixtures present a particular phase change profile highly affected by their unique crystalline structure. However, classical solid-liquid equilibrium (SLE) thermodynamic modeling approaches, which assume the solid phase to be a pure component, sometimes fail in the correct description of the phase behavior. In addition, their inadequacy increases with the complexity of the system. To overcome some of these problems, this study describes a new procedure to depict the SLE of fatty binary mixtures presenting solid solutions, namely the "Crystal-T algorithm". Considering the non-ideality of both the liquid and solid phases, this algorithm is aimed at determining the temperatures at which the first and the last crystal of the mixture melt. The evaluation is focused on experimental data measured and reported in this work for systems composed of triacylglycerols and fatty alcohols. The liquidus and solidus lines of the SLE phase diagrams were described by using excess Gibbs energy based equations, and the group contribution UNIFAC model for the calculation of the activity coefficients of both the liquid and solid phases. Very low deviations between theoretical and experimental data evidenced the strength of the algorithm, contributing to the enlargement of the scope of SLE modeling.
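
    For reference, the classical SLE relation that such procedures typically build on, extended to a non-ideal solid solution and neglecting heat-capacity terms, reads (generic notation, not the paper's):

    \[
      \ln\frac{x_i^{L}\,\gamma_i^{L}}{x_i^{S}\,\gamma_i^{S}}
      \;=\;
      \frac{\Delta h_{\mathrm{fus},i}}{R}\left(\frac{1}{T_{\mathrm{m},i}} - \frac{1}{T}\right),
    \]

    where \(x_i\) and \(\gamma_i\) are the mole fraction and activity coefficient of component \(i\) in the liquid (L) or solid (S) phase, \(\Delta h_{\mathrm{fus},i}\) is its enthalpy of fusion and \(T_{\mathrm{m},i}\) its melting temperature; setting \(x_i^{S}\gamma_i^{S} = 1\) recovers the pure-solid assumption that the procedure described above is designed to avoid.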

  20. An improved independent component analysis model for 3D chromatogram separation and its solution by multi-areas genetic algorithm

    PubMed Central

    2014-01-01

    Background The 3D chromatogram generated by High Performance Liquid Chromatography-Diode Array Detector (HPLC-DAD) has been researched widely in the fields of herbal medicine, grape wine, agriculture, petroleum and so on. Currently, most of the methods used for separating a 3D chromatogram need to know the number of compounds in advance, which may be impossible, especially when the compounds are complex or white noise exists. A new method that extracts compounds from the 3D chromatogram directly is needed. Methods In this paper, a new separation model named parallel Independent Component Analysis constrained by Reference Curve (pICARC) was proposed to transform the separation problem into a multi-parameter optimization issue. It was not necessary to know the number of compounds in the optimization. In order to find all the solutions, an algorithm named multi-areas Genetic Algorithm (mGA) was proposed, where multiple areas of candidate solutions were constructed according to the fitness and the distances among the chromosomes. Results Simulations and experiments on a real-life HPLC-DAD data set were used to demonstrate our method and its effectiveness. Through simulations, it can be seen that our method can successfully separate a 3D chromatogram into chromatographic peaks and spectra, even when they severely overlap. The experiments also show that our method is effective on a real HPLC-DAD data set. Conclusions Our method can separate a 3D chromatogram successfully without knowing the number of compounds in advance, and it is fast and effective. PMID:25474487

  1. A weighted adjoint-source for weight-window generation by means of a linear tally combination

    SciTech Connect

    Sood, Avneet; Booth, Thomas E; Solomon, Clell J

    2009-01-01

    A new importance estimation technique has been developed that allows weight-window optimization for a linear combination of tallies. This technique has been implemented in a local version of MCNP and effectively weights the adjoint source term for each tally in the combination. Optimizing weight window parameters for the linear tally combination allows the user to optimize weight windows for multiple regions at once. In this work, we present our results of solutions to an analytic three-tally-region test problem and a flux calculation on a 100,000 voxel oil-well logging tool problem.

  2. Nonlinear Acceleration of a Continuous Finite Element Discretization of the Self-Adjoint Angular Flux Form of the Transport Equation

    SciTech Connect

    Richard Sanchez; Cristian Rabiti; Yaqi Wang

    2013-11-01

    Nonlinear acceleration of a continuous finite element (CFE) discretization of the transport equation requires a modification of the transport solution in order to achieve local conservation, a condition used in nonlinear acceleration to define the stopping criterion. In this work we implement a coarse-mesh finite difference acceleration for a CFE discretization of the second-order self-adjoint angular flux (SAAF) form of the transport equation and use a postprocessing to enforce local conservation. Numerical results are given for one-group source calculations of one-dimensional slabs. We also give a novel formal derivation of the boundary conditions for the SAAF.

  3. Numerical solution of an optimal control problem governed by three-phase non-isothermal flow equations

    NASA Astrophysics Data System (ADS)

    Temirbekov, Nurlan M.; Baigereyev, Dossan R.

    2016-08-01

    The paper focuses on the numerical implementation of a model optimal control problem governed by equations of three-phase non-isothermal flow in porous media. The objective is to achieve preassigned temperature distribution along the reservoir at a given time of development by controlling mass flow rate of heat transfer agent on the injection well. The problem of optimal control is formulated, the adjoint problem is presented, and an algorithm for the numerical solution is proposed. Results of computational experiments are presented for a test problem.

  4. Gigaflop speed algorithm for the direct solution of large block-tridiagonal systems in 3-D physics applications

    SciTech Connect

    Anderson, D.V.; Fry, A.R.; Gruber, R.; Roy, A.

    1989-03-01

    In the discretization of the 3-D partial differential equations of many physics problems, it is found that the resultant system of linear equations can be represented by a block tridiagonal matrix. Depending on the substructure of the blocks, one can devise many algorithms for the solution of these systems. For plasma physics problems of interest to the authors, several interesting matrix problems arise that should be useful in other applications as well. In one case, where the blocks are dense, it was found that by using a multitasked cyclic reduction procedure, it was possible to reach gigaflop rates on a Cray-2 for the direct solve of these large linear systems. The recently built code PAMS (parallelized matrix solver) embodies this technique and uses fast vendor-supplied routines and obtains this good performance. Manipulations within the blocks are done by these highly optimized linear algebra subroutines that exploit vectorization as well as overlap of the functional units within each CPU. In unitasking mode, speeds well above 340 Mflops have been measured. The cyclic reduction method multitasks quite well with overlap factors in the range of three to four. In multitasking mode, average speeds of 1.1 gigaflops have been measured for the entire PAMS algorithm. In addition to the presentation of the PAMS algorithm, it is shown how related systems having banded blocks may be treated efficiently by multitasked cyclic reduction in the Cray-2 multiprocessor environment. The PAMS method is intended for multiprocessors and would not be a method of choice on a uniprocessor. Furthermore, this method's advantage was found to be critically dependent on the hardware, software, and charging algorithm installed on any given multiprocessor system.
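
    For reference, the serial baseline that block cyclic reduction parallelizes can be sketched as a block Thomas (block LU) elimination; the sketch below is not the multitasked PAMS algorithm, and the block sizes and data are illustrative only.

    import numpy as np

    def block_thomas(A, B, C, d):
        """Solve a block tridiagonal system with sub-, main- and super-diagonal
        blocks A[i], B[i], C[i] (A[0] and C[-1] unused) and right-hand sides d[i]."""
        n = len(B)
        Bp = [B[0].copy()]
        dp = [d[0].copy()]
        for i in range(1, n):                          # forward elimination
            W = A[i] @ np.linalg.inv(Bp[i - 1])
            Bp.append(B[i] - W @ C[i - 1])
            dp.append(d[i] - W @ dp[i - 1])
        x = [None] * n
        x[n - 1] = np.linalg.solve(Bp[n - 1], dp[n - 1])
        for i in range(n - 2, -1, -1):                 # back substitution
            x[i] = np.linalg.solve(Bp[i], dp[i] - C[i] @ x[i + 1])
        return x

    # Usage: random diagonally dominant 3x3 blocks, 10 block rows.
    rng = np.random.default_rng(0)
    m, n = 3, 10
    A = [rng.random((m, m)) for _ in range(n)]
    C = [rng.random((m, m)) for _ in range(n)]
    B = [rng.random((m, m)) + 4 * m * np.eye(m) for _ in range(n)]
    d = [rng.random(m) for _ in range(n)]
    x = block_thomas(A, B, C, d)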

  5. Direct Linearization and Adjoint Approaches to Evaluation of Atmospheric Weighting Functions and Surface Partial Derivatives: General Principles, Synergy and Areas of Application

    NASA Technical Reports Server (NTRS)

    Ustino, Eugene A.

    2006-01-01

    This slide presentation reviews the observable radiances as functions of atmospheric and surface parameters, presents the mathematics of atmospheric weighting functions (WFs) and surface partial derivatives (PDs), and states the equation of the forward radiative transfer (RT) problem. For non-scattering atmospheres this equation can be solved analytically, and all WFs and PDs can be computed analytically using the direct linearization approach. For scattering atmospheres, in the general case, the solution of the forward RT problem can be obtained only numerically, but only two numerical solutions are needed: one of the forward RT problem and one of the adjoint RT problem, from which all WFs and PDs of interest can be computed. In this presentation we discuss applications of both the linearization and adjoint approaches.

  6. Applications of an adaptive unstructured solution algorithm to the analysis of high speed flows

    NASA Technical Reports Server (NTRS)

    Thareja, R. R.; Prabhu, R. K.; Morgan, K.; Peraire, J.; Peiro, J.

    1990-01-01

    An upwind cell-centered scheme for the solution of steady laminar viscous high-speed flows is implemented on unstructured two-dimensional meshes. The first-order implementation employs Roe's (1981) approximate Riemann solver, and a higher-order extension is produced by using linear reconstruction with limiting. The procedure is applied to the solution of inviscid subsonic flow over an airfoil, inviscid supersonic flow past a cylinder, and viscous hypersonic flow past a double ellipse. A detailed study is then made of a hypersonic laminar viscous flow on a 24-deg compression corner. It is shown that good agreement is achieved with previous predictions using finite-difference and finite-volume schemes. However, these predictions do not agree with experimental observations. With refinement of the structured grid at the leading edge, good agreement with experimental observations for the distributions of wall pressure, heating rate and skin friction is obtained.

  7. Conical intersections in solution: Formulation, algorithm, and implementation with combined quantum mechanics/molecular mechanics method

    NASA Astrophysics Data System (ADS)

    Cui, Ganglong; Yang, Weitao

    2011-05-01

    The significance of conical intersections in the photophysics, photochemistry, and photodissociation of polyatomic molecules in the gas phase has been demonstrated by numerous experimental and theoretical studies. Optimization of conical intersections of small- and medium-size molecules in the gas phase has become a routine process, as it has been implemented in many electronic structure packages. However, optimization of conical intersections of small- and medium-size molecules in solution or in macromolecules remains inefficient, even poorly defined, due to the large number of degrees of freedom and the costly evaluations of gradient difference and nonadiabatic coupling vectors. In this work, based on the sequential quantum mechanics and molecular mechanics (QM/MM) and QM/MM-minimum free energy path methods, we have designed two conical intersection optimization methods for small- and medium-size molecules in solution or macromolecules. The first is sequential QM conical intersection optimization and MM minimization for potential energy surfaces; the second is sequential QM conical intersection optimization and MM sampling for potential of mean force surfaces, i.e., free energy surfaces. In such methods, the region where the electronic structure changes remarkably is placed into the QM subsystem, while the rest of the system is placed into the MM subsystem; thus, the dimensionalities of the gradient difference and nonadiabatic coupling vectors are decreased due to the relatively small QM subsystem. Furthermore, in comparison with the concurrent optimization scheme, sequential QM conical intersection optimization and MM minimization or sampling reduce the number of evaluations of gradient difference and nonadiabatic coupling vectors because these vectors need to be calculated only when the QM subsystem moves, independent of the MM minimization or sampling. Taken together, costly evaluations of gradient difference and nonadiabatic coupling vectors in solution or

  8. Development of Web-Based Menu Planning Support System and its Solution Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Kashima, Tomoko; Matsumoto, Shimpei; Ishii, Hiroaki

    2009-10-01

    Recently lifestyle-related diseases have become an object of public concern, while at the same time people are becoming more health conscious. We assume that insufficient circulation of knowledge about dietary habits is an essential factor contributing to lifestyle-related diseases. This paper focuses on everyday meals close to our life and proposes a well-balanced menu planning system as a preventive measure against lifestyle-related diseases. The system is developed using a Web-based frontend and provides multi-user services and menu information sharing capabilities like social networking services (SNS). The system is implemented on a Web server running Apache (HTTP server software), MySQL (database management system), and PHP (scripting language for dynamic Web pages). For the menu planning, a genetic algorithm is applied by formulating the problem as multidimensional 0-1 integer programming, as sketched below.
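
    A minimal sketch of such a 0-1 formulation follows; each gene decides whether a candidate dish enters the menu, and nutrient targets are enforced through penalties. All dish data, targets, and GA settings here are hypothetical placeholders, not values from the described system.

    import random

    DISHES = [  # (energy kcal, protein g, cost) -- hypothetical candidate dishes
        (450, 20, 3.0), (300, 8, 2.0), (600, 25, 4.5), (150, 5, 1.0),
        (350, 15, 2.5), (500, 30, 5.0), (200, 10, 1.5), (400, 12, 3.5),
    ]
    TARGET_ENERGY, TARGET_PROTEIN = 700, 30      # hypothetical per-meal targets

    def fitness(chrom):
        energy = sum(d[0] for d, g in zip(DISHES, chrom) if g)
        protein = sum(d[1] for d, g in zip(DISHES, chrom) if g)
        cost = sum(d[2] for d, g in zip(DISHES, chrom) if g)
        # penalize deviation from the nutrient targets, reward low cost
        return -(abs(energy - TARGET_ENERGY) + 10 * abs(protein - TARGET_PROTEIN) + cost)

    def evolve(pop_size=40, generations=200, p_mut=0.05):
        pop = [[random.randint(0, 1) for _ in DISHES] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]              # truncation selection
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(DISHES))  # one-point crossover
                child = a[:cut] + b[cut:]
                child = [1 - g if random.random() < p_mut else g for g in child]  # mutation
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    print(evolve())   # best menu found, as a 0-1 selection vector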

  9. The development of three-dimensional adjoint method for flow control with blowing in convergent-divergent nozzle flows

    NASA Astrophysics Data System (ADS)

    Sikarwar, Nidhi

    multiple experiments or numerical simulations. Alternatively, an inverse design method can be used. An adjoint optimization method can be used to achieve the optimum blowing rate. It is shown that the method works for both geometry optimization and active control of the flow in order to deflect the flow in desirable ways. An adjoint optimization method is described. It is used to determine the blowing distribution in the diverging section of a convergent-divergent nozzle that gives a desired pressure distribution in the nozzle. Both the direct and adjoint problems and their associated boundary conditions are developed. The adjoint method is used to determine the blowing distribution required to minimize the shock strength in the nozzle, to achieve a known target pressure, and to achieve a pressure close to that of an ideally expanded flow. A multi-block structured solver is developed to calculate the flow solution and the associated adjoint variables. Two- and three-dimensional calculations are performed for the internal and external nozzle domains. A two-step MacCormack scheme based on a predictor-corrector technique was used for some calculations. Four- and five-stage Runge-Kutta schemes are also used to artificially march in time. A modified Runge-Kutta scheme is used to accelerate the convergence to a steady state. Second-order artificial dissipation has been added to stabilize the calculations. The steepest descent method has been used for the optimization of the blowing velocity after the gradients of the cost function with respect to the blowing velocity are calculated using the adjoint method. Several examples are given of the optimization of blowing using the adjoint method.

  10. A self-adjoint decomposition of the radial momentum operator

    NASA Astrophysics Data System (ADS)

    Liu, Q. H.; Xiao, S. F.

    2015-12-01

    With acceptance of Dirac's observation that canonical quantization entails using Cartesian coordinates, we examine the operator e_r P_r rather than P_r itself and demonstrate that there is a decomposition of e_r P_r into a difference of two self-adjoint but noncommutative operators, one of which is the total momentum and the other the transverse one. This study renders the operator P_r indirectly measurable and physically meaningful, offering an explanation of why the mean value of P_r over a quantum mechanical state makes sense, and supporting Dirac's claim that P_r "is real and is the true momentum conjugate to r".

  11. Examining Tropical Cyclone - Kelvin Wave Interactions using Adjoint Diagnostics

    NASA Astrophysics Data System (ADS)

    Reynolds, C. A.; Doyle, J. D.; Hong, X.

    2015-12-01

    Adjoint-based tools can provide valuable insight into the mechanisms that influence the evolution and predictability of atmospheric phenomena, as they allow for the efficient and rigorous computation of forecast sensitivity to changes in the initial state. We apply adjoint-based tools from the non-hydrostatic Coupled Atmosphere/Ocean Mesoscale Prediction System (COAMPS) to explore the initial-state sensitivity and interactions between a tropical cyclone and atmospheric equatorial waves associated with the Madden Julian Oscillation (MJO) in the Indian Ocean during the DYNAMO field campaign. The development of Tropical Cyclone 5 (TC05) coincided with the passage of an equatorial Kelvin wave and westerly wind burst associated with an MJO that developed in the Indian Ocean in late November 2011, but it was unclear if and how one affected the other. COAMPS 24-h and 36-h adjoint sensitivities are analyzed for both TC05 and the equatorial waves to understand how the evolution of each system is sensitive to the other. The sensitivity of equatorial westerlies in the western Indian Ocean on 23 November shares characteristics with the classic Gill (1980) Rossby and Kelvin wave response to symmetric heating about the equator, including symmetric cyclonic circulations to the north and south of the westerlies, and enhanced heating in the area of convergence between the equatorial westerlies and easterlies. In addition, there is sensitivity in the Bay of Bengal associated with the cyclonic circulation that eventually develops into TC05. At the same time, the developing TC05 system shows strongest sensitivity to local wind and heating perturbations, but sensitivity to the equatorial westerlies is also clear. On 24 November, when the Kelvin wave is immediately south of the developing tropical cyclone, both phenomena are sensitive to each other. On 25 November TC05 no longer shows sensitivity to the Kelvin wave, while the Kelvin Wave still exhibits some weak sensitivity to TC05. In

  12. Advances in Global Adjoint Tomography -- Massive Data Assimilation

    NASA Astrophysics Data System (ADS)

    Ruan, Y.; Lei, W.; Bozdag, E.; Lefebvre, M. P.; Smith, J. A.; Krischer, L.; Tromp, J.

    2015-12-01

    Azimuthal anisotropy and anelasticity are key to understanding a myriad of processes in Earth's interior. Resolving these properties requires accurate simulations of seismic wave propagation in complex 3-D Earth models and an iterative inversion strategy. In the wake of successes in regional studies (e.g., Chen et al., 2007; Tape et al., 2009, 2010; Fichtner et al., 2009, 2010; Chen et al., 2010; Zhu et al., 2012, 2013; Chen et al., 2015), we are employing adjoint tomography based on a spectral-element method (Komatitsch & Tromp 1999, 2002) on a global scale using the supercomputer "Titan" at Oak Ridge National Laboratory. After 15 iterations, we have obtained a high-resolution transversely isotropic Earth model (M15) using traveltime data from 253 earthquakes. To obtain higher-resolution images of the emerging new features and to prepare the inversion for azimuthal anisotropy and anelasticity, we expanded the original dataset with approximately 4,220 additional global earthquakes (Mw 5.5-7.0) occurring between 1995 and 2014, and downloaded 300-minute-long time series for all available data archived at the IRIS Data Management Center, ORFEUS, and F-net. Ocean Bottom Seismograph data from the last decade are also included to maximize data coverage. In order to handle the huge dataset and solve the I/O bottleneck in global adjoint tomography, we implemented a Python-based parallel data processing workflow based on the newly developed Adaptable Seismic Data Format (ASDF). With the help of the data selection tool MUSTANG developed by IRIS, we cleaned our dataset and assembled event-based ASDF files for parallel processing. We have started Centroid Moment Tensor (CMT) inversions for all 4,220 earthquakes with the latest model M15, and selected high-quality data for measurement. We will statistically investigate each channel using synthetic seismograms calculated in M15 for updated CMTs and identify problematic channels. In addition to data screening, we also modified

  13. A deflation based parallel algorithm for spectral element solution of the incompressible Navier-Stokes equations

    SciTech Connect

    Fischer, P.F.

    1996-12-31

    Efficient solution of the Navier-Stokes equations in complex domains is dependent upon the availability of fast solvers for sparse linear systems. For unsteady incompressible flows, the pressure operator is the leading contributor to stiffness, as the characteristic propagation speed is infinite. In the context of operator splitting formulations, it is the pressure solve which is the most computationally challenging, despite its elliptic origins. We seek to improve existing spectral element iterative methods for the pressure solve in order to overcome the slow convergence frequently observed in the presence of highly refined grids or high-aspect ratio elements.

  14. SOLA-VOF: A solution algorithm for transient fluid flow with multiple free boundaries

    NASA Astrophysics Data System (ADS)

    Nichols, B. D.; Hirt, C. W.; Hotchkiss, R. S.

    1980-08-01

    A computer program is presented for the solution of two-dimensional transient fluid flow with free boundaries. The SOLA-VOF program, which is based on the concept of a fractional volume of fluid, is more flexible and efficient than other methods for treating arbitrary free boundaries. Its basic mode of operation is for single-fluid calculations having multiple free surfaces. However, SOLA-VOF can also be used for calculations involving two fluids separated by a sharp interface. In either case, the fluids may be treated as incompressible or as having limited compressibility. Surface tension forces with wall adhesion are permitted in both cases.

  15. A globally convergent algorithm for the solution of the steady-state semiconductor device equations

    NASA Astrophysics Data System (ADS)

    Korman, Can E.; Mayergoyz, Isaak D.

    1990-08-01

    An iterative method for solving the discretized steady-state semiconductor device equations is presented. This method uses Gummel's block iteration technique to decouple the nonlinear Poisson and electron-hole current continuity equations. However, the main feature of this method is that it takes advantage of the diagonal nonlinearity of the discretized equations, and solves each equation iteratively by using the nonlinear Jacobi method. Using the fact that the diagonal nonlinearities are monotonically increasing functions, it is shown that this method has two important advantages. First, it has global convergence, i.e., convergence is guaranteed for any initial guess. Second, the solution of simultaneous algebraic equations is avoided by updating the value of the electrostatic and quasi-Fermi potentials at each mesh point by means of explicit formulae. This allows the implementation of this method on computers with small random access memories, such as personal computers, and also makes it very attractive to use on parallel processor machines. Furthermore, for serial computations, this method is generalized to the faster nonlinear successive overrelaxation method which has global convergence as well. The iterative solution of the nonlinear Poisson equation is formulated with energy- and position-dependent interface traps. It is shown that the iterative method is globally convergent for arbitrary distributions of interface traps. This is an important step in analyzing hot-electron effects in metal-oxide-silicon field-effect transistors (MOSFETs). Various numerical results on two- and three-dimensional MOSFET geometries are presented as well.
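
    The point-wise update idea can be sketched for a 1-D model problem: each mesh node satisfies a scalar equation with a monotonically increasing nonlinearity, so it can be updated by an explicit Newton step with its neighbors frozen, which is the nonlinear Jacobi iteration described above. The model nonlinearity and data below are illustrative, not the full coupled device equations.

    import numpy as np

    def nonlinear_jacobi(u, rho, h, sweeps=500):
        """Nonlinear Jacobi sweeps for -(u[i-1] - 2 u[i] + u[i+1]) / h**2 + sinh(u[i]) = rho[i]."""
        for _ in range(sweeps):
            u_old = u.copy()
            for i in range(1, len(u) - 1):
                # scalar residual and derivative at node i, with neighbors frozen at u_old
                g = (2 * u_old[i] - u_old[i - 1] - u_old[i + 1]) / h**2 + np.sinh(u_old[i]) - rho[i]
                dg = 2.0 / h**2 + np.cosh(u_old[i])   # positive, since the nonlinearity is increasing
                u[i] = u_old[i] - g / dg              # explicit per-node Newton update
        return u

    # Usage: zero Dirichlet boundaries and a localized source term.
    n = 51
    h = 1.0 / (n - 1)
    u = np.zeros(n)
    rho = np.zeros(n)
    rho[20:30] = 50.0
    u = nonlinear_jacobi(u, rho, h)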

  16. Benchmarking algorithms for the solution of Collisional Radiative Model (CRM) equations.

    NASA Astrophysics Data System (ADS)

    Klapisch, Marcel; Busquet, Michel

    2007-11-01

    Elements used in ICF target designs can have many charge states in the same plasma conditions, each charge state having numerous energy levels. When LTE conditions are not met, one has to solve CRM equations for the populations of energy levels, which are necessary for opacities/emissivities, Z*, etc. In the case of sparse spectra, or when configuration interaction is important (open d or f shells), statistical methods [1] are insufficient. For these cases one must resort to a detailed-level CRM rate generator [2]. The equations to be solved may involve tens of thousands of levels. The system is by nature ill-conditioned. We show that some classical methods do not converge. Improvements of the latter will be compared with new algorithms [3] with respect to performance, robustness, and accuracy. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectrosc. Radiat. Transfer 65, 43 (2000). [2] M. Klapisch, M. Busquet and A. Bar-Shalom, Proceedings of APIP'07, AIP series (to be published). [3] M. Klapisch and M. Busquet, High Energy Density Phys. 3, 143 (2007).

  17. Study on adjoint-based optimization method for multi-stage turbomachinery

    NASA Astrophysics Data System (ADS)

    Li, Weiwei; Tian, Yong; Yi, Weilin; Ji, Lucheng; Shao, Weiwei; Xiao, Yunhan

    2011-10-01

    Adjoint-based optimization is an active research topic in turbomachinery. First, this paper presents the principles of the adjoint method from the Lagrange-multiplier viewpoint. Second, combining a continuous route with the thin-layer RANS equations, we formulate the adjoint equations and anti-physical boundary conditions. Because of the multi-stage environment in turbomachinery, an adjoint interrow mixing method is introduced. The numerical techniques for solving the flow equations and the adjoint equations are almost the same, and once each is converged, the gradients of an objective function with respect to the design variables can be calculated efficiently using the complex method. Third, integrating a shape-perturbation parameterization and a simple steepest-descent method, a framework of adjoint-based aerodynamic shape optimization for multi-stage turbomachinery is constructed. Finally, an inverse design of an annular cascade is employed to validate the above approach, and the adjoint field of an Aachen 1.5-stage turbine demonstrates the conservation and areflexia of the adjoint interrow mixing method. A direct redesign of a 1+1 counter-rotating turbine, aiming to increase efficiency while constraining the mass flow rate and pressure ratio, is then carried out.
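
    As a hedged illustration of the Lagrange-multiplier viewpoint mentioned above (generic notation, not the paper's): for a cost function I(w, α), flow residual R(w, α) = 0, flow state w and design variables α, one forms

```latex
\[
\mathcal{L}(w,\alpha,\lambda) = I(w,\alpha) + \lambda^{T} R(w,\alpha), \qquad
\left(\frac{\partial R}{\partial w}\right)^{\!T}\!\lambda = -\left(\frac{\partial I}{\partial w}\right)^{\!T}, \qquad
\frac{\mathrm{d} I}{\mathrm{d}\alpha} = \frac{\partial I}{\partial \alpha} + \lambda^{T}\frac{\partial R}{\partial \alpha}.
\]
```

    A single adjoint solve for λ thus yields the gradient with respect to all design variables, which is what makes the approach attractive for multi-stage turbomachinery with many shape parameters.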

  18. On rational R-matrices with adjoint SU(n) symmetry

    NASA Astrophysics Data System (ADS)

    Stronks, Laurens; van de Leur, Johan; Schuricht, Dirk

    2016-11-01

    Using the representation theory of Yangians we construct the rational R-matrix which takes values in the adjoint representation of SU(n). From this we derive an integrable SU(n) spin chain with lattice spins transforming under the adjoint representation. However, the resulting Hamiltonian is found to be non-Hermitian. Dedicated to the memory of Petr Petrovich Kulish.

  19. Comparison of the Monte Carlo adjoint-weighted and differential operator perturbation methods

    SciTech Connect

    Kiedrowski, Brian C; Brown, Forrest B

    2010-01-01

    Two perturbation theory methodologies are implemented for k-eigenvalue calculations in the continuous-energy Monte Carlo code, MCNP6. A comparison of the accuracy of these techniques, the differential operator and adjoint-weighted methods, is performed numerically and analytically. Typically, the adjoint-weighted method shows better performance over a larger range; however, there are exceptions.

  20. Nonlinear self-adjointness and conservation laws for a porous medium equation with absorption

    NASA Astrophysics Data System (ADS)

    Gandarias, M. L.; Bruzón, M. S.

    2013-10-01

    We give conditions for a general porous medium equation to be nonlinear self-adjoint. By using the property of nonlinear self-adjointness we construct some conservation laws associated with classical and nonclassical generators of a porous medium equation with absorption.

  1. A generic implementation of replica exchange with solute tempering (REST2) algorithm in NAMD for complex biophysical simulations

    NASA Astrophysics Data System (ADS)

    Jo, Sunhwan; Jiang, Wei

    2015-12-01

    Replica Exchange with Solute Tempering (REST2) is a powerful sampling-enhancement algorithm for molecular dynamics (MD) in that it needs a significantly smaller number of replicas yet achieves higher sampling efficiency than the standard temperature-exchange algorithm. In this paper, we extend the applicability of REST2 to quantitative biophysical simulations through a robust and generic implementation in the highly scalable MD software NAMD. The rescaling procedure for the force-field parameters controlling the REST2 "hot region" is implemented in NAMD at the source-code level. A user can conveniently select the hot region through VMD and write the selection information into a PDB file. The rescaling keyword/parameter is exposed through the NAMD Tcl scripting interface, which enables on-the-fly changes of simulation parameters. Our implementation of REST2 lies within communication-enabled Tcl scripting built on top of Charm++, so the communication overhead of an exchange attempt is vanishingly small. Such a generic implementation facilitates seamless cooperation between REST2 and other modules of NAMD to provide enhanced sampling for complex biomolecular simulations. Three challenging applications, including a native REST2 simulation of a peptide folding-unfolding transition, free energy perturbation/REST2 for the absolute binding affinity of a protein-ligand complex, and umbrella sampling/REST2 Hamiltonian exchange for a free-energy landscape calculation, were carried out on an IBM Blue Gene/Q supercomputer to demonstrate the efficacy of REST2 based on the present implementation.
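
    For context, the rescaling referred to above is commonly written in the standard REST2 form of Wang, Friesner and Berne (the abstract itself does not spell out the formula), in which replica m samples the scaled potential

```latex
\[
E_{m}(X) \;=\; \frac{\beta_{m}}{\beta_{0}}\,E_{\mathrm{pp}}(X)
\;+\; \sqrt{\frac{\beta_{m}}{\beta_{0}}}\;E_{\mathrm{pw}}(X)
\;+\; E_{\mathrm{ww}}(X),
\]
```

    where pp denotes interactions within the "hot region", pw the cross terms between the hot region and its environment, and ww the environment-environment terms; only the hot-region force-field parameters need to be rescaled to realize this Hamiltonian.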

  2. Probability density adjoint for sensitivity analysis of the Mean of Chaos

    SciTech Connect

    Blonigan, Patrick J.; Wang, Qiqi

    2014-08-01

    Sensitivity analysis, especially adjoint based sensitivity analysis, is a powerful tool for engineering design which allows for the efficient computation of sensitivities with respect to many parameters. However, these methods break down when used to compute sensitivities of long-time averaged quantities in chaotic dynamical systems. This paper presents a new method for sensitivity analysis of ergodic chaotic dynamical systems, the density adjoint method. The method involves solving the governing equations for the system's invariant measure and its adjoint on the system's attractor manifold rather than in phase-space. This new approach is derived for and demonstrated on one-dimensional chaotic maps and the three-dimensional Lorenz system. It is found that the density adjoint computes very finely detailed adjoint distributions and accurate sensitivities, but suffers from large computational costs.

  3. Integrated algorithms for RFID-based multi-sensor indoor/outdoor positioning solutions

    NASA Astrophysics Data System (ADS)

    Zhu, Mi.; Retscher, G.; Zhang, K.

    2011-12-01

    Position information is very important, as people need it almost everywhere and at all times. However, it is a challenging task to provide precise positions indoors and outdoors seamlessly. Outdoor positioning has been widely studied, and accurate positions can usually be achieved by well-developed GPS techniques, but these techniques are difficult to use indoors since GPS signal reception is limited. Alternative techniques that can be used for indoor positioning include, to name a few, Wireless Local Area Network (WLAN), Bluetooth and Ultra Wideband (UWB). However, all of these have limitations. The main objectives of this paper are to investigate and develop algorithms for a low-cost and portable indoor personal positioning system using Radio Frequency Identification (RFID) and its integration with other positioning systems. An RFID system consists of three components, namely a control unit, an interrogator, and a transponder that transmits data and communicates with the reader. An RFID tag can be incorporated into a product, animal or person for the purpose of identification and tracking using radio waves. In general, for RFID positioning in urban and indoor environments three different methods can be used: cellular positioning, trilateration and location fingerprinting. In addition, the integration of RFID with other technologies is also discussed in this paper. A typical combination is to integrate RFID with relative positioning technologies such as MEMS INS to bridge the gaps between RFID tags for continuous positioning applications. Experiments are shown to demonstrate the improvements of integrating multiple sensors with RFID, which can be employed successfully for personal positioning.
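
    As a hedged sketch of the trilateration option mentioned above, the snippet below computes a least-squares position fix from ranges to RFID tags at known positions by linearizing the range equations against a reference tag; the tag layout and range bias are made up for illustration, and a real RFID system would additionally weight measurements by their expected accuracy.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares 2D position fix from ranges to tags at known positions.
    Linearizes |x - a_i|^2 = r_i^2 against the last tag (a common textbook
    trick), then solves the resulting overdetermined linear system."""
    a_n, r_n = anchors[-1], ranges[-1]
    A = 2.0 * (anchors[:-1] - a_n)
    b = (r_n ** 2 - ranges[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(a_n ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical tag layout and a small additive range bias.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
truth = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - truth, axis=1) + 0.05
print(trilaterate(anchors, ranges))   # close to (3, 4)
```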

  4. Virtual Seismometer and Adjoint Methods for Induced Seismicity Monitoring

    NASA Astrophysics Data System (ADS)

    Morency, C.; Matzel, E.

    2014-12-01

    Induced seismicity is associated with subsurface fluid injection and puts at risk efforts to develop geologic carbon sequestration and enhanced geothermal systems. We are developing methods to monitor the microseismically active zone so that we can identify faults at risk of slipping. We are using the Virtual Seismometer Method (VSM), which is an interferometric technique that is very sensitive to the source parameters (location, mechanism and magnitude) and to the earth structure in the source region. Given an ideal geometry, that is, when two quakes are roughly in line with a recording station, the correlation of their waveforms provides a precise estimate of the Green's function between them, modified by their source mechanisms. When measuring microseismicity, this geometry is rarely ideal and we need to account for variations in the geometry as well. In addition, we also investigate the adjoint method to calculate sensitivity kernels, which define the sensitivity of an observable to model parameters. Classically, adjoint tomography relies on the interaction between a forward waveform, propagating from the source to the recording station, and a backpropagated waveform, propagating from the recording station to the source. By combining the two approaches we can focus on properties directly between induced micro-events, and in doing so, monitor the evolution of the seismicity and precisely image potential fault zones. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  5. Essential self-adjointness of the graph-Laplacian

    NASA Astrophysics Data System (ADS)

    Jorgensen, Palle E. T.

    2008-07-01

    We study the operator theory associated with such infinite graphs G as occur in electrical networks, in fractals, in statistical mechanics, and even in internet search engines. Our emphasis is on the determination of spectral data for a natural Laplace operator associated with the graph in question. This operator Δ will depend not only on G but also on a prescribed positive real-valued function c defined on the edges in G. In electrical network models, this function c will determine a conductance number for each edge. We show that the corresponding Laplace operator Δ is automatically essentially self-adjoint. By this we mean that Δ is defined on the dense subspace D (of all the real-valued functions on the set of vertices G0 with finite support) in the Hilbert space l2(G0). The conclusion is that the closure of the operator Δ is self-adjoint in l2(G0), and so, in particular, it has a unique spectral resolution, determined by a projection-valued measure on the Borel subsets of the infinite half-line. We prove that generically our graph Laplace operator Δ = Δc will have continuous spectrum. For a given infinite graph G with conductance function c, we set up a system of finite graphs with periodic boundary conditions such that the finite spectra, for an ascending family of finite graphs, will have the Laplace operator for G as its limit.
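
    For reference, the weighted graph Laplacian discussed here is usually taken (up to a sign convention) as

```latex
\[
(\Delta_{c} u)(x) \;=\; \sum_{y \sim x} c(x,y)\,\bigl(u(x) - u(y)\bigr),
\qquad u \in \mathcal{D} \subset \ell^{2}(G^{0}),
\]
```

    with the sum running over the neighbors y of the vertex x and D the space of finitely supported functions; essential self-adjointness then means that the closure of this symmetric operator is its unique self-adjoint extension.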

  6. Application of Parallel Adjoint-Based Error Estimation and Anisotropic Grid Adaptation for Three-Dimensional Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Park, M. A.; Jones, W. T.; Hammond, D. P.; Nielsen, E. J.

    2005-01-01

    This paper demonstrates the extension of error estimation and adaptation methods to parallel computations enabling larger, more realistic aerospace applications and the quantification of discretization errors for complex 3-D solutions. Results were shown for an inviscid sonic-boom prediction about a double-cone configuration and a wing/body segmented leading edge (SLE) configuration where the output function of the adjoint was pressure integrated over a part of the cylinder in the near field. After multiple cycles of error estimation and surface/field adaptation, a significant improvement in the inviscid solution for the sonic boom signature of the double cone was observed. Although the double-cone adaptation was initiated from a very coarse mesh, the near-field pressure signature from the final adapted mesh compared very well with the wind-tunnel data which illustrates that the adjoint-based error estimation and adaptation process requires no a priori refinement of the mesh. Similarly, the near-field pressure signature for the SLE wing/body sonic boom configuration showed a significant improvement from the initial coarse mesh to the final adapted mesh in comparison with the wind tunnel results. Error estimation and field adaptation results were also presented for the viscous transonic drag prediction of the DLR-F6 wing/body configuration, and results were compared to a series of globally refined meshes. Two of these globally refined meshes were used as a starting point for the error estimation and field-adaptation process where the output function for the adjoint was the total drag. The field-adapted results showed an improvement in the prediction of the drag in comparison with the finest globally refined mesh and a reduction in the estimate of the remaining drag error. The adjoint-based adaptation parameter showed a need for increased resolution in the surface of the wing/body as well as a need for wake resolution downstream of the fuselage and wing trailing edge

  7. SOLA-VOF: a solution algorithm for transient fluid flow with multiple free boundaries

    SciTech Connect

    Nichols, B.D.; Hirt, C.W.; Hotchkiss, R.S.

    1980-08-01

    In this report a simple, but powerful, computer program is presented for the solution of two-dimensional transient fluid flow with free boundaries. The SOLA-VOF program, which is based on the concept of a fractional volume of fluid (VOF), is more flexible and efficient than other methods for treating arbitrary free boundaries. SOLA-VOF has a variety of user options that provide capabilities for a wide range of applications. Its basic mode of operation is for single fluid calculations having multiple free surfaces. However, SOLA-VOF can also be used for calculations involving two fluids separated by a sharp interface. In either case, the fluids may be treated as incompressible or as having limited compressibility. Surface tension forces with wall adhesion are permitted in both cases. Internal obstacles may be defined by blocking out any desired combination of cells in the mesh, which is composed of rectangular cells of variable size. SOLA-VOF is an easy-to-use program. Its logical parts are isolated in separate subroutines, and numerous special features have been included to simplify its operation, such as an automatic time-step control, a flexible mesh generator, extensive output capabilities, a variety of optional boundary conditions, and instructive internal documentation.

  8. Adjoint-state inversion of electric resistivity tomography data of seawater intrusion at the Argentona coastal aquifer (Spain)

    NASA Astrophysics Data System (ADS)

    Fernández-López, Sheila; Carrera, Jesús; Ledo, Juanjo; Queralt, Pilar; Luquot, Linda; Martínez, Laura; Bellmunt, Fabián

    2016-04-01

    Seawater intrusion in aquifers is a complex phenomenon that can be characterized with the help of electric resistivity tomography (ERT) because of the low resistivity of seawater, which underlies the freshwater floating on top. The problem is complex because of the need for joint inversion of electrical and hydraulic (density-dependent flow) data. Here we present an adjoint-state algorithm to treat the electrical data. The adjoint-state method is a common technique for obtaining derivatives of an objective function that depends on potentials with respect to model parameters. Its main advantages are its simplicity in stationary problems and its reduced computational cost compared with other methodologies. The relationship between the concentration of chlorides and the resistivity values of the field is well known. These resistivities are in turn related to the values of the potentials measured using ERT. Taking this into account, it is possible to define the different resistivity zones from the field data of the potential distribution using inverse-problem theory. In this case, the studied zone is situated in Argentona (Baix Maresme, Catalonia), where the chloride values obtained in some wells of the zone are too high. The adjoint-state method will be used to invert the measured data using a new finite element code written in C++ within an open-source framework called Kratos. Finally, the information obtained numerically with our code will be checked against the information obtained with other codes.

  9. Algorithmic solution for autonomous vision-based off-road navigation

    NASA Astrophysics Data System (ADS)

    Kolesnik, Marina; Paar, Gerhard; Bauer, Arnold; Ulm, Michael

    1998-07-01

    A vision-based navigation system is a basic tool for providing autonomous operation of unmanned vehicles. For off-road navigation this means that a vehicle equipped with a stereo vision system, and perhaps a laser ranging device, shall be able to maintain a high level of autonomy under various illumination conditions and with little a priori information about the underlying scene. The task becomes particularly important for unmanned planetary exploration with the help of autonomous rovers. For example, in the LEDA Moon exploration project currently under study by the European Space Agency (ESA), the vehicle (rover) should perform the following operations during the autonomous mode: on-board absolute localization, digital elevation model (DEM) generation, obstacle detection and relative localization, and global path planning and execution. The focus of this article is a computational solution for fully autonomous path planning and path execution. An operational DEM-generation method based on stereoscopy is introduced. Self-localization on the DEM and robust natural-feature tracking are used as basic navigation steps, supported by inertial sensor systems. The following operations are performed on the basis of stereo image sequences: 3D scene reconstruction, risk-map generation, local path planning, camera-position update during motion on the basis of landmark tracking, and obstacle avoidance. Experimental verification is done with the help of a laboratory terrain mockup and a high-precision camera mounting device. It is shown that standalone tracking using automatically identified landmarks is robust enough to give navigation data for further stereoscopic reconstruction of the surrounding terrain. Iterative tracking and reconstruction lead to a complete description of the vehicle path and its surroundings with an accuracy high enough to meet the specifications for autonomous outdoor navigation.

  10. Joint inversion of seismic velocities and source location without rays using the truncated Newton and the adjoint-state method

    NASA Astrophysics Data System (ADS)

    Virieux, J.; Bretaudeau, F.; Metivier, L.; Brossier, R.

    2013-12-01

    The simultaneous inversion of seismic velocities and source parameters has been a long-standing challenge in seismology since the first attempts to mitigate the trade-offs between the very different parameters influencing travel times (Spencer and Gubbins 1980, Pavlis and Booker 1980), following the early developments of the 1970s (Aki et al 1976, Aki and Lee 1976, Crosson 1976). There is a strong trade-off between earthquake source positions, initial times and velocities during the tomographic inversion: mitigating these trade-offs is usually done empirically (Lemeur et al 1997). This procedure is not optimal and may lead to errors in the velocity reconstruction as well as in the source localization. For a better simultaneous estimation in such a multi-parametric reconstruction problem, one may benefit from improved local optimization such as a full Newton method, where the influence of the Hessian helps balance the different physical parameter quantities and improves the coverage at the point of reconstruction. Unfortunately, the full Hessian operator is not easily computed for large models and large datasets. Truncated Newton (TCN) is an alternative optimization approach (Métivier et al. 2012) that solves the normal equation H Δm = - g using a matrix-free conjugate-gradient algorithm. It only requires the ability to compute the gradient of the misfit function and Hessian-vector products. Traveltime maps can be computed in the whole domain by numerical modeling (Vidale 1998, Zhao 2004). The gradient and the Hessian-vector products for velocities can be computed without ray tracing using first- and second-order adjoint-state methods, at the cost of one and two additional modeling steps, respectively (Plessix 2006, Métivier et al. 2012). Reciprocity allows accurate computation of the gradient and the full Hessian for each coordinate of the sources and for their initial times. The resolution of the problem is then done through two nested loops. The model update Δm is
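
    The inner loop described above, solving H Δm = -g with a matrix-free conjugate gradient driven only by Hessian-vector products, can be sketched as follows in generic Python; the hvp callback stands in for the second-order adjoint-state modelings and is purely hypothetical.

```python
import numpy as np

def truncated_newton_step(grad, hvp, cg_iters=20, tol=1e-6):
    """Inner loop of a truncated-Newton update: solve H dm = -g with a
    matrix-free conjugate-gradient iteration, where hvp(v) returns H @ v
    without ever forming H explicitly (illustrative sketch only)."""
    dm = np.zeros_like(grad)
    r = -grad - hvp(dm)          # residual of H dm = -g
    p = r.copy()
    rs = r @ r
    for _ in range(cg_iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        dm += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return dm

# Toy quadratic misfit 0.5 m^T H m - b^T m, so g = H m - b and hvp(v) = H v.
H = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
m = np.zeros(2)
g = H @ m - b
m += truncated_newton_step(g, lambda v: H @ v)
```

    In the seismic setting each call to hvp costs a few extra wavefield modelings, so the number of inner CG iterations (the "truncation") is kept deliberately small.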

  11. Reciprocal Grids: A Hierarchical Algorithm for Computing Solution X-ray Scattering Curves from Supramolecular Complexes at High Resolution.

    PubMed

    Ginsburg, Avi; Ben-Nun, Tal; Asor, Roi; Shemesh, Asaf; Ringel, Israel; Raviv, Uri

    2016-08-22

    In many biochemical processes large biomolecular assemblies play important roles. X-ray scattering is a label-free bulk method that can probe the structure of large self-assembled complexes in solution. As we demonstrate in this paper, solution X-ray scattering can measure complex supramolecular assemblies at high sensitivity and resolution. At high resolution, however, data analysis of larger complexes is computationally demanding. We present an efficient method to compute the scattering curves from complex structures over a wide range of scattering angles. In our computational method, structures are defined as hierarchical trees in which repeating subunits are docked into their assembly symmetries, describing the manner in which subunits repeat in the structure (in other words, the locations and orientations of the repeating subunits). The amplitude of the assembly is calculated by computing the amplitudes of the basic subunits on 3D reciprocal-space grids, moving up in the hierarchy, calculating the grids of larger structures, and repeating this process for all the leaves and nodes of the tree. For very large structures, we developed a hybrid method that sums grids of smaller subunits in order to avoid numerical artifacts. We developed protocols for obtaining high-resolution solution X-ray scattering data from taxol-free microtubules at a wide range of scattering angles. We then validated our method by adequately modeling these high-resolution data. The higher speed and accuracy of our method, over existing methods, are demonstrated for smaller structures: a short microtubule and tobacco mosaic virus. Our algorithm may be integrated into various structure-prediction computational tools, simulations, and theoretical models, and provide a means of testing their predicted structural model, by calculating the expected X-ray scattering curve and comparing it with experimental data. PMID:27410762

  12. I-BIEM, an iterative boundary integral equation method for computer solutions of current distribution problems with complex boundaries: A new algorithm. I - Theoretical

    NASA Technical Reports Server (NTRS)

    Cahan, B. D.; Scherson, Daniel; Reid, Margaret A.

    1988-01-01

    A new algorithm for an iterative computation of solutions of Laplace's or Poisson's equations in two dimensions, using Green's second identity, is presented. This algorithm converges strongly and geometrically and can be applied to curved, irregular, or moving boundaries with nonlinear and/or discontinuous boundary conditions. It has been implemented in Pascal on a number of micro- and minicomputers and applied to several geometries. Cases with known analytic solutions have been tested. Convergence to within 0.1 percent to 0.01 percent of the theoretical values is obtained in a few minutes on a microcomputer.
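
    For reference, the identity underlying the iterative scheme is Green's second identity, written here in its standard two-dimensional form (this is the textbook statement, not the paper's specific discretization):

```latex
\[
\int_{\Omega} \bigl(\phi\,\nabla^{2}\psi - \psi\,\nabla^{2}\phi\bigr)\,\mathrm{d}A
\;=\; \oint_{\partial\Omega} \left(\phi\,\frac{\partial \psi}{\partial n}
- \psi\,\frac{\partial \phi}{\partial n}\right)\mathrm{d}s .
\]
```

    In a boundary-integral method, choosing ψ as a free-space Green's function reduces the problem to an equation on the boundary alone, which is then solved iteratively.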

  13. Adjoint ITS calculations using the CEPXS electron-photon cross sections

    SciTech Connect

    Lorence, L.J.; Kensek, R.P.; Halbleib, J.A.

    1995-12-31

    Continuous-energy Monte Carlo codes are not generally suited for adjoint coupled electron-photon transport. Line radiation (e.g., fluorescence) is especially difficult to implement in adjoint mode with continuous-energy codes. The only published work on adjoint electron Monte Carlo transport is that of Jordan. The adjoint capability of his NOVICE code is expedited by a multigroup approximation. More recently, a Boltzmann-Fokker-Planck (BFP) Monte Carlo technique has been developed for adjoint electron transport. As in NOVICE, particle transport with BFP Monte Carlo is neither entirely continuous energy nor entirely multigroup. The BFP method has been tested in the multigroup version of MCNP and is being integrated into the ITS code package. Multigroup data produced by the CEPXS cross-section-generating code are needed to operate the BFP codes in adjoint electron-photon mode. In this paper, we present adjoint electron-photon transport results obtained with a new version of CEPXS and a new multigroup version of ITS.

  14. Efficiency of a POD-based reduced second-order adjoint model in 4D-Var data assimilation

    NASA Astrophysics Data System (ADS)

    Daescu, D. N.; Navon, I. M.

    2007-02-01

    Order reduction strategies aim to alleviate the computational burden of the four-dimensional variational data assimilation by performing the optimization in a low-order control space. The proper orthogonal decomposition (POD) approach to model reduction is used to identify a reduced-order control space for a two-dimensional global shallow water model. A reduced second-order adjoint (SOA) model is developed and used to facilitate the implementation of a Hessian-free truncated-Newton (HFTN) minimization algorithm in the POD-based space. The efficiency of the SOA/HFTN implementation is analysed by comparison with the quasi-Newton BFGS and a nonlinear conjugate gradient algorithm. Several data assimilation experiments that differ only in the optimization algorithm employed are performed in the reduced control space. Numerical results indicate that first-order derivative methods are effective during the initial stages of the assimilation; in the later stages, the use of second-order derivative information is of benefit and HFTN provided significant CPU time savings when compared to the BFGS and CG algorithms. A comparison with data assimilation experiments in the full model space shows that with an appropriate selection of the basis functions the optimization in the POD space is able to provide accurate results at a reduced computational cost. The HFTN algorithm benefited most from the order reduction since computational savings were achieved both in the outer and inner iterations of the method. Further experiments are required to validate the approach for comprehensive global circulation models.

  15. Mass anomalous dimension in SU(2) with two adjoint fermions

    SciTech Connect

    Bursa, Francis; Del Debbio, Luigi; Keegan, Liam; Pica, Claudio; Pickup, Thomas

    2010-01-01

    We study SU(2) lattice gauge theory with two flavors of Dirac fermions in the adjoint representation. We measure the running of the coupling in the Schroedinger functional scheme and find it is consistent with existing results. We discuss how systematic errors affect the evidence for an infrared fixed point (IRFP). We present the first measurement of the running of the mass in the Schroedinger functional scheme. The anomalous dimension of the chiral condensate, which is relevant for phenomenological applications, can be easily extracted from the running of the mass, under the assumption that the theory has an IRFP. At the current level of accuracy, we can estimate 0.05 < γ < 0.56 at the IRFP.

  16. Optimizing spectral wave estimates with adjoint-based sensitivity maps

    NASA Astrophysics Data System (ADS)

    Orzech, Mark; Veeramony, Jay; Flampouris, Stylianos

    2014-04-01

    A discrete numerical adjoint has recently been developed for the stochastic wave model SWAN. In the present study, this adjoint code is used to construct spectral sensitivity maps for two nearshore domains. The maps display the correlations of spectral energy levels throughout the domain with the observed energy levels at a selected location or region of interest (LOI/ROI), providing a full spectrum of values at all locations in the domain. We investigate the effectiveness of sensitivity maps based on significant wave height (Hs) in determining alternate offshore instrument deployment sites when a chosen nearshore location or region is inaccessible. Wave and bathymetry datasets are employed from one shallower, small-scale domain (Duck, NC) and one deeper, larger-scale domain (San Diego, CA). The effects of seasonal changes in wave climate, errors in bathymetry, and multiple assimilation points on sensitivity map shapes and model performance are investigated. Model accuracy is evaluated by comparing spectral statistics as well as with an RMS skill score, which estimates a mean model-data error across all spectral bins. Results indicate that data assimilation from identified high-sensitivity alternate locations consistently improves model performance at nearshore LOIs, while assimilation from low-sensitivity locations results in lesser or no improvement. Use of sub-sampled or alongshore-averaged bathymetry has a domain-specific effect on model performance when assimilating from a high-sensitivity alternate location. When multiple alternate assimilation locations are used from areas of lower sensitivity, model performance may be worse than with a single, high-sensitivity assimilation point.

  17. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    NASA Astrophysics Data System (ADS)

    Shadid, J. N.; Smith, T. M.; Cyr, E. C.; Wildey, T. M.; Pawlowski, R. P.

    2016-09-01

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. In this respect the understanding of numerical error, and of the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models, is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In this study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds-averaged Navier-Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. Initial results are presented that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.

  18. Admitting the Inadmissible: Adjoint Formulation for Incomplete Cost Functionals in Aerodynamic Optimization

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Salas, Manuel D.

    1997-01-01

    We derive the adjoint equations for problems in aerodynamic optimization which are improperly considered as "inadmissible." For example, a cost functional which depends on the density, rather than on the pressure, is considered "inadmissible" for an optimization problem governed by the Euler equations. We show that for such problems additional terms should be included in the Lagrangian functional when deriving the adjoint equations. These terms are obtained from the restriction of the interior PDE to the control surface. Demonstrations of the explicit derivation of the adjoint equations for "inadmissible" cost functionals are given for the potential, Euler, and Navier-Stokes equations.

  19. Adjoint-Based Methods for Estimating CO2 Sources and Sinks from Atmospheric Concentration Data

    NASA Technical Reports Server (NTRS)

    Andrews, Arlyn E.

    2003-01-01

    Work to develop adjoint-based methods for estimating CO2 sources and sinks from atmospheric concentration data was initiated in preparation for last year's summer institute on Carbon Data Assimilation (CDAS) at the National Center for Atmospheric Research in Boulder, CO. The workshop exercises used the GSFC Parameterized Chemistry and Transport Model and its adjoint. Since the workshop, a number of simulations have been run to evaluate the performance of the model adjoint. Results from these simulations will be presented, along with an outline of challenges associated with incorporating a variety of disparate data sources, from sparse, but highly precise, surface in situ observations to less accurate, global future satellite observations.

  20. MUFACT: An Algorithm for Multiple Factor Analyses of Singular and Nonsingular Data with Orthogonal and Oblique Transformation Solutions

    ERIC Educational Resources Information Center

    Hofmann, Richard J.

    1978-01-01

    A general factor analysis computer algorithm is briefly discussed. The algorithm is highly transportable with minimum limitations on the number of observations. Both singular and non-singular data can be analyzed. (Author/JKS)

  1. Full Seismic Waveform Tomography of the Japan region using Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Steptoe, Hamish; Fichtner, Andreas; Rickers, Florian; Trampert, Jeannot

    2013-04-01

    We present a full-waveform tomographic model of the Japan region based on spectral-element wave propagation, adjoint techniques and seismic data from dense station networks. This model is intended to further our understanding of both the complex regional tectonics and the finite rupture processes of large earthquakes. The shallow Earth structure of the Japan region has been the subject of considerable tomographic investigation. The islands of Japan exist in an area of significant plate complexity: subduction related to the Pacific and Philippine Sea plates is responsible for the majority of seismicity and volcanism of Japan, whilst smaller micro-plates in the region, including the Okhotsk, and Okinawa and Amur, part of the larger North America and Eurasia plates respectively, contribute significant local intricacy. In response to the need to monitor and understand the motion of these plates and their associated faults, numerous seismograph networks have been established, including the 768 station high-sensitivity Hi-net network, 84 station broadband F-net and the strong-motion seismograph networks K-net and KiK-net in Japan. We also include the 55 station BATS network of Taiwan. We use this exceptional coverage to construct a high-resolution model of the Japan region from the full-waveform inversion of over 15,000 individual component seismograms from 53 events that occurred between 1997 and 2012. We model these data using spectral-element simulations of seismic wave propagation at a regional scale over an area from 120°-150°E and 20°-50°N to a depth of around 500 km. We quantify differences between observed and synthetic waveforms using time-frequency misfits allowing us to separate both phase and amplitude measurements whilst exploiting the complete waveform at periods of 15-60 seconds. Fréchet kernels for these misfits are calculated via the adjoint method and subsequently used in an iterative non-linear conjugate-gradient optimization. Finally, we employ

  2. An implicit dispersive transport algorithm for the US Geological Survey MOC3D solute-transport model

    USGS Publications Warehouse

    Kipp, K.L.; Konikow, L.F.; Hornberger, G.Z.

    1998-01-01

    This report documents an extension to the U.S. Geological Survey MOC3D transport model that incorporates an implicit-in-time difference approximation for the dispersive transport equation, including source/sink terms. The original MOC3D transport model (Version 1) uses the method of characteristics to solve the transport equation on the basis of the velocity field. The original MOC3D solution algorithm incorporates particle tracking to represent advective processes and an explicit finite-difference formulation to calculate dispersive fluxes. The new implicit procedure eliminates several stability criteria required for the previous explicit formulation. This allows much larger transport time increments to be used in dispersion-dominated problems. The decoupling of advective and dispersive transport in MOC3D, however, is unchanged. With the implicit extension, the MOC3D model is upgraded to Version 2. A description of the numerical method of the implicit dispersion calculation, the data-input requirements and output options, and the results of simulator testing and evaluation are presented. Version 2 of MOC3D was evaluated for the same set of problems used for verification of Version 1. These test results indicate that the implicit calculation of Version 2 matches the accuracy of Version 1, yet is more efficient than the explicit calculation for transport problems that are characterized by a grid Peclet number less than about 1.0.
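
    Schematically (generic notation, not the report's exact matrices), the decoupled step described above amounts to an advective particle-tracking update followed by an implicit solve for the dispersive fluxes,

```latex
\[
\bigl(\mathbf{I} - \Delta t\,\mathbf{D}\bigr)\,\mathbf{c}^{\,n+1}
\;=\; \mathbf{c}^{\,n,\mathrm{adv}} + \Delta t\,\mathbf{q}^{\,n+1},
\qquad \mathrm{Pe} = \frac{v\,\Delta x}{D_{L}},
\]
```

    where D is the discrete dispersion operator, q the source/sink term, and c^{n,adv} the concentration after the advective sub-step; the grid Peclet number Pe indicates when dispersion dominates and the larger implicit time steps pay off.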

  3. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J.D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  4. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGES

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  5. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  6. A complete solution classification and unified algorithmic treatment for the one- and two-step asymmetric S-transverse mass event scale statistic

    NASA Astrophysics Data System (ADS)

    Walker, Joel W.

    2014-08-01

    The M_T2, or "s-transverse mass", statistic was developed to associate a parent mass scale to a missing transverse energy signature, given that escaping particles are generally expected in pairs, while collider experiments are sensitive to just a single transverse momentum vector sum. This document focuses on the generalized extension of that statistic to asymmetric one- and two-step decay chains, with arbitrary child particle masses and upstream missing transverse momentum. It provides a unified theoretical formulation, complete solution classification, taxonomy of critical points, and technical algorithmic prescription for treatment of the event scale. An implementation of the described algorithm is available for download, and is also a deployable component of the author's selection cut software package AEACuS (Algorithmic Event Arbiter and Cut Selector). Appendices address combinatoric event assembly, algorithm validation, and a complete pseudocode.
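
    For reference, the underlying statistic has the standard Lester-Summers form (stated here generically; child masses and upstream momentum enter through the transverse masses M_T):

```latex
\[
M_{T2} \;\equiv\; \min_{\vec{q}_{T}^{\,(1)} + \vec{q}_{T}^{\,(2)} \,=\, \vec{p}_{T}^{\,\mathrm{miss}}}
\Bigl[\, \max\Bigl\{ M_{T}\bigl(\vec{p}_{T}^{\,a}, \vec{q}_{T}^{\,(1)}\bigr),\;
M_{T}\bigl(\vec{p}_{T}^{\,b}, \vec{q}_{T}^{\,(2)}\bigr) \Bigr\} \Bigr],
\]
```

    i.e. the missing transverse momentum is partitioned between the two invisible children in the way that minimizes the larger of the two transverse masses, giving a lower bound on the parent mass scale.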

  7. A new Green's function Monte Carlo algorithm for the estimation of the derivative of the solution of Helmholtz equation subject to Neumann and mixed boundary conditions

    NASA Astrophysics Data System (ADS)

    Chatterjee, Kausik

    2016-06-01

    The objective of this paper is the extension and application of a newly-developed Green's function Monte Carlo (GFMC) algorithm to the estimation of the derivative of the solution of the one-dimensional (1D) Helmholtz equation subject to Neumann and mixed boundary conditions. The traditional GFMC approach for the solution of partial differential equations subject to these boundary conditions involves "reflecting boundaries" resulting in relatively large computational times. My work, inspired by the work of K.K. Sabelfeld, is philosophically different in that there is no requirement for reflection at these boundaries. The underlying feature of this algorithm is the elimination of the use of reflecting boundaries through the use of novel Green's functions that mimic the boundary conditions of the problem of interest. My past work has involved the application of this algorithm to the estimation of the solution of the 1D Laplace equation, the Helmholtz equation and the modified Helmholtz equation. In this work, this algorithm has been adapted to the estimation of the derivative of the solution, which is a very important development. In the traditional approach involving reflection, to estimate the derivative at a certain number of points, one has to a priori estimate the solution at a larger number of points. In the case of a one-dimensional problem for instance, to obtain the derivative of the solution at a point, one has to obtain the solution at two points, one on each side of the point of interest. These points have to be close enough so that the validity of the first-order approximation for the derivative operator is justified and at the same time, the actual difference between the solutions at these two points has to be at least an order of magnitude higher than the statistical error in the estimation of the solution, thus requiring a significantly larger number of random-walks than that required for the estimation of the solution. In this new approach

  8. MS S4.03.002 - Adjoint-Based Design for Configuration Shaping

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2009-01-01

    This slide presentation discusses a method of inverse design for low sonic boom using adjoint-based gradient computations. It outlines a method for shaping a configuration in order to match a prescribed near-field signature.

  9. Discrete Adjoint-Based Design Optimization of Unsteady Turbulent Flows on Dynamic Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Diskin, Boris; Yamaleev, Nail K.

    2009-01-01

    An adjoint-based methodology for design optimization of unsteady turbulent flows on dynamic unstructured grids is described. The implementation relies on an existing unsteady three-dimensional unstructured grid solver capable of dynamic mesh simulations and discrete adjoint capabilities previously developed for steady flows. The discrete equations for the primal and adjoint systems are presented for the backward-difference family of time-integration schemes on both static and dynamic grids. The consistency of sensitivity derivatives is established via comparisons with complex-variable computations. The current work is believed to be the first verified implementation of an adjoint-based optimization methodology for the true time-dependent formulation of the Navier-Stokes equations in a practical computational code. Large-scale shape optimizations are demonstrated for turbulent flows over a tiltrotor geometry and a simulated aeroelastic motion of a fighter jet.

  10. An adjoint method for the calculation of remote sensitivities in supersonic flow

    NASA Astrophysics Data System (ADS)

    Nadarajah, Siva K.; Jameson, Antony; Alonso, Juan

    2006-02-01

    This paper presents an adjoint method for the calculation of remote sensitivities in supersonic flow. The goal is to develop a set of discrete adjoint equations and their corresponding boundary conditions in order to quantify the influence of geometry modifications on the pressure distribution at an arbitrary location within the domain of interest. First, this paper presents the complete formulation and discretization of the discrete adjoint equations. The special treatment of the adjoint boundary condition to obtain remote sensitivities or sensitivities of pressure distributions at points remotely located from the wing surface are discussed. Secondly, we present results that demonstrate the application of the theory to a three-dimensional remote inverse design problem using a low sweep biconvex wing and a highly swept blunt leading edge wing. Lastly, we present results that establish the added benefit of using an objective function that contains the sum of the remote inverse and drag minimization cost functions.

  11. Comparison of the adjoint and adjoint-free 4dVar assimilation of the hydrographic and velocity observations in the Adriatic Sea

    NASA Astrophysics Data System (ADS)

    Yaremchuk, Max; Martin, Paul; Koch, Andrey; Beattie, Christopher

    2016-01-01

    Performance of the adjoint and adjoint-free 4-dimensional variational (4dVar) data assimilation techniques is compared in application to the hydrographic surveys and velocity observations collected in the Adriatic Sea in 2006. Assimilating the data into the Navy Coastal Ocean Model (NCOM) has shown that both methods deliver similar reduction of the cost function and demonstrate comparable forecast skill at approximately the same computational expense. The obtained optimal states were, however, significantly different in terms of distance from the background state: application of the adjoint method resulted in a 30-40% larger departure, mostly due to the excessive level of ageostrophic motions in the southern basin of the Sea that was not covered by observations.

  12. Preliminary Results from the Application of Automated Adjoint Code Generation to CFL3D

    NASA Technical Reports Server (NTRS)

    Carle, Alan; Fagan, Mike; Green, Lawrence L.

    1998-01-01

    This report describes preliminary results obtained using an automated adjoint code generator for Fortran to augment a widely-used computational fluid dynamics flow solver to compute derivatives. These preliminary results with this augmented code suggest that, even in its infancy, the automated adjoint code generator can accurately and efficiently deliver derivatives for use in transonic Euler-based aerodynamic shape optimization problems with hundreds to thousands of independent design variables.

  13. GRASP (GRound-Water Adjunct Sensitivity Program): A computer code to perform post-SWENT (simulator for water, energy, and nuclide transport) adjoint sensitivity analysis of steady-state ground-water flow: Technical report

    SciTech Connect

    Wilson, J.L.; RamaRao, B.S.; McNeish, J.A.

    1986-11-01

    GRASP (GRound-Water Adjunct Sensitivity Program) computes measures of the behavior of a ground-water system and the system's performance for waste isolation, and estimates the sensitivities of these measures to system parameters. The computed measures are referred to as "performance measures" and include weighted squared deviations of computed and observed pressures or heads, local Darcy velocity components and magnitudes, boundary fluxes, and travel distance and time along travel paths. The sensitivities are computed by the adjoint method and are exact derivatives of the performance measures with respect to the parameters for the modeled system, taken about the assumed parameter values. GRASP presumes steady-state, saturated groundwater flow, and post-processes the results of a multidimensional (1-D, 2-D, 3-D) finite-difference flow code. This document describes the mathematical basis for the model, the algorithms and solution techniques used, and the computer code design. The implementation of GRASP is verified with simple one- and two-dimensional flow problems, for which analytical expressions of performance measures and sensitivities are derived. The linkage between GRASP and multidimensional finite-difference flow codes is described. This document also contains a detailed user's manual. The use of GRASP to evaluate nuclear waste disposal issues has been emphasized throughout the report. The performance measures and their sensitivities can be employed to assist in directing data collection programs, expedite model calibration, and objectively determine the sensitivity of projected system performance to parameters.
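
    In generic discrete form (a sketch of the adjoint sensitivity idea, not GRASP's actual equations), a steady flow solve A(p) h = b(p) together with a performance measure P(h, p) gives

```latex
\[
\mathbf{A}^{T}\boldsymbol{\psi} = \left(\frac{\partial P}{\partial \mathbf{h}}\right)^{\!T},
\qquad
\frac{\mathrm{d}P}{\mathrm{d}p_{k}} = \frac{\partial P}{\partial p_{k}}
+ \boldsymbol{\psi}^{T}\!\left(\frac{\partial \mathbf{b}}{\partial p_{k}}
- \frac{\partial \mathbf{A}}{\partial p_{k}}\,\mathbf{h}\right),
\]
```

    so one adjoint solve per performance measure yields exact sensitivities with respect to every parameter p_k, which is why the approach is attractive for directing data collection and expediting model calibration.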

  14. Adjoint-field errors in high fidelity compressible turbulence simulations for sound control

    NASA Astrophysics Data System (ADS)

    Vishnampet, Ramanathan; Bodony, Daniel; Freund, Jonathan

    2013-11-01

    A consistent discrete adjoint for high-fidelity discretization of the three-dimensional Navier-Stokes equations is used to quantify the error in the sensitivity gradient predicted by the continuous adjoint method, and examine the aeroacoustic flow-control problem for free-shear-flow turbulence. A particular quadrature scheme for approximating the cost functional makes our discrete adjoint formulation for a fourth-order Runge-Kutta scheme with high-order finite differences practical and efficient. The continuous adjoint-based sensitivity gradient is shown to be inconsistent due to discretization truncation errors, grid stretching and filtering near boundaries. These errors cannot be eliminated by increasing the spatial or temporal resolution since chaotic interactions lead them to become O(1) at the time of control actuation. Although this is a known behavior for chaotic systems, its effect on noise control is much harder to anticipate, especially given the different resolution needs of different parts of the turbulence and acoustic spectra. A comparison of energy spectra of the adjoint pressure fields shows significant error in the continuous adjoint at all wavenumbers, even though they are well-resolved. The effect of this error on the noise control mechanism is analyzed.

  15. A Generalized Adjoint Approach for Quantifying Reflector Assembly Discontinuity Factor Uncertainties

    SciTech Connect

    Yankov, Artem; Collins, Benjamin; Jessee, Matthew Anderson; Downar, Thomas

    2012-01-01

    Sensitivity-based uncertainty analysis of assembly discontinuity factors (ADFs) can be readily performed using adjoint methods for infinite lattice models. However, there is currently no adjoint-based methodology to obtain uncertainties for ADFs along an interface between a fuel and reflector region. To accommodate leakage effects in a reflector region, a 1D approximation is usually made in order to obtain the homogeneous interface flux required to calculate the ADF. Within this 1D framework an adjoint-based method is proposed that is capable of efficiently calculating ADF uncertainties. In the proposed method the sandwich rule is utilized to relate the covariance of the input parameters of 1D diffusion theory in the reflector region to the covariance of the interface ADFs. The input parameters covariance matrix can be readily obtained using sampling-based codes such as XSUSA or adjoint-based codes such as TSUNAMI. The sensitivity matrix is constructed using a fixed-source adjoint approach for inputs characterizing the reflector region. An analytic approach is then used to determine the sensitivity of the ADFs to fuel parameters using the neutron balance equation. A stochastic approach is used to validate the proposed adjoint-based method.
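
    The sandwich rule referred to above takes the familiar first-order form (generic notation):

```latex
\[
\mathrm{Cov}(\mathrm{ADF}) \;\approx\; \mathbf{S}\,\boldsymbol{\Sigma}_{p}\,\mathbf{S}^{T},
\qquad S_{ij} = \frac{\partial\,\mathrm{ADF}_{i}}{\partial p_{j}},
\]
```

    where Σ_p is the covariance of the input parameters and the sensitivity matrix S is what the fixed-source adjoint calculations provide.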

  16. Plumes, Hotspot & Slabs Imaged by Global Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Bozdag, E.; Lefebvre, M. P.; Lei, W.; Peter, D. B.; Smith, J. A.; Komatitsch, D.; Tromp, J.

    2015-12-01

    We present the "first generation" global adjoint tomography model based on 3D wave simulations, which is the result of 15 conjugate-gradient iterations with confined transverse isotropy to the upper mantle. Our starting model is the 3D mantle and crustal models S362ANI (Kustowski et al. 2008) and Crust2.0 (Bassin et al. 2000), respectively. We take into account the full nonlinearity of wave propagation in numerical simulations including attenuation (both in forward and adjoint simulations), topography/bathymetry, etc., using the GPU version of the SPECFEM3D_GLOBE package. We invert for crust and mantle together without crustal corrections to avoid any bias in mantle structure. We started with an initial selection of 253 global CMT events within the magnitude range 5.8 ≤ Mw ≤ 7.0 with numerical simulations having resolution down to 27 s combining 30-s body and 60-s surface waves. After the 12th iteration we increased the resolution to 17 s, including higher-frequency body waves as well as going down to 45 s in surface-wave measurements. We run 180-min seismograms and assimilate all minor- and major-arc body and surface waves. Our 15th iteration model update shows a tantalisingly enhanced image of the Tahiti plume as well as various other plumes and hotspots, such as Caroline, Galapagos, Yellowstone, Erebus, etc. Furthermore, we see clear improvements in slab resolution along the Hellenic and Japan Arcs, as well as subduction along the East of Scotia Plate, which does not exist in the initial model. Point-spread function tests (Fichtner & Trampert 2011) suggest that we are close to the resolution of continental-scale studies in our global inversions and able to confidently map features, for instance, at the scale of the Yellowstone hotspot. This is a clear consequence of our multi-scale smoothing strategy, in which we define our smoothing operator as a function of the approximate Hessian kernel and smooth our gradients less wherever we have good ray coverage

  17. Big Data Challenges in Global Seismic 'Adjoint Tomography' (Invited)

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Bozdag, E.; Krischer, L.; Lefebvre, M.; Lei, W.; Smith, J.

    2013-12-01

    The challenge of imaging Earth's interior on a global scale is closely linked to the challenge of handling large data sets. The related iterative workflow involves five distinct phases, namely, 1) data gathering and culling, 2) synthetic seismogram calculations, 3) pre-processing (time-series analysis and time-window selection), 4) data assimilation and adjoint calculations, 5) post-processing (pre-conditioning, regularization, model update). In order to implement this workflow on modern high-performance computing systems, a new seismic data format is being developed. The Adaptable Seismic Data Format (ASDF) is designed to replace currently used data formats with a more flexible format that allows for fast parallel I/O. The metadata is divided into abstract categories, such as "source" and "receiver", along with provenance information for complete reproducibility. The structure of ASDF is designed keeping in mind three distinct applications: earthquake seismology, seismic interferometry, and exploration seismology. Existing time-series analysis tool kits, such as SAC and ObsPy, can be easily interfaced with ASDF so that seismologists can use robust, previously developed software packages. ASDF accommodates an automated, efficient workflow for global adjoint tomography. Manually managing the large number of simulations associated with the workflow can rapidly become a burden, especially with increasing numbers of earthquakes and stations. Therefore, it is of importance to investigate the possibility of automating the entire workflow. Scientific Workflow Management Software (SWfMS) allows users to execute workflows almost routinely. SWfMS provides additional advantages. In particular, it is possible to group independent simulations in a single job to fit the available computational resources. They also give a basic level of fault resilience as the workflow can be resumed at the correct state preceding a failure. Some of the best candidates for our particular workflow

  18. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown, and conservatively accurate solutions may be computed at excessive cost. Computable error estimates offer the possibility of minimizing computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user-specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high-lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functionals that are as accurate as those calculated on uniformly refined grids with ten times as many grid points.
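
    The adjoint-weighted-residual idea behind such functional error estimates can be illustrated on a much simpler model problem. The sketch below is a toy 1D Poisson example, with the functional, grids, and source term all assumed for illustration: the coarse-grid solution is injected onto a finer grid, the fine-grid residual is weighted by a fine-grid adjoint, and the result predicts the error in the output functional.

        import numpy as np

        def poisson_matrix(n, h):
            """Second-order FD matrix for -u'' on n interior points, u(0)=u(1)=0."""
            return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
                    - np.diag(np.ones(n - 1), -1)) / h**2

        def solve(n, f):
            h = 1.0 / (n + 1)
            x = np.linspace(h, 1.0 - h, n)
            A = poisson_matrix(n, h)
            return x, h, A, np.linalg.solve(A, f(x))

        f = lambda x: np.sin(np.pi * x)          # source term
        J = lambda u, h: h * np.sum(u)           # output functional: integral of u

        xc, hc, Ac, uc = solve(20, f)            # coarse solution
        xf, hf, Af, uf = solve(200, f)           # fine solution (reference)

        # Inject the coarse solution onto the fine grid and form the fine-grid residual
        uc_on_fine = np.interp(xf, np.concatenate(([0.0], xc, [1.0])),
                               np.concatenate(([0.0], uc, [0.0])))
        residual = Af @ uc_on_fine - f(xf)

        # Fine-grid adjoint for the functional: A^T psi = dJ/du
        psi = np.linalg.solve(Af.T, hf * np.ones_like(xf))

        # Adjoint-weighted residual estimate of the functional error
        dJ_est = -psi @ residual
        dJ_true = J(uf, hf) - J(uc_on_fine, hf)
        print(f"estimated error {dJ_est:.3e}, actual error {dJ_true:.3e}")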

  19. Adjoint based data assimilation for phase field model using second order information of a posterior distribution

    NASA Astrophysics Data System (ADS)

    Ito, Shin-Ichi; Nagao, Hiromichi; Yamanaka, Akinori; Tsukada, Yuhki; Koyama, Toshiyuki; Inoue, Junya

    The phase field (PF) method, which phenomenologically describes the dynamics of microstructure evolution during solidification and phase transformation, has progressed in the fields of hydromechanics and materials engineering. How to determine, from observation data, the initial state and model parameters of a PF model is an important open issue, since previous estimation methods require too much computational cost. We propose data assimilation (DA), which enables us to estimate the parameters and states by integrating the PF model and observation data on the basis of Bayesian statistics. The adjoint method implemented within DA not only finds an optimum solution by maximizing the posterior distribution but also evaluates the uncertainty of the estimates by utilizing second-order information of the posterior distribution. We carried out an estimation test using synthetic data generated by the two-dimensional Kobayashi PF model. The proposed method is confirmed to reproduce the true initial state and model parameters assumed in advance, and simultaneously to estimate their uncertainties due to the quality and quantity of the data. This result indicates that the proposed method is capable of suggesting the experimental design needed to achieve a required accuracy.
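
    The role of second-order posterior information can be sketched independently of the phase-field model: after locating the maximum a posteriori (MAP) point, the inverse Hessian of the negative log posterior provides an approximate (Laplace) covariance, i.e., the uncertainty of the estimates. The toy example below fits two parameters of an assumed exponential-decay model to synthetic data; it only illustrates the statistical machinery, not the authors' PF implementation.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 5.0, 50)
        true_theta = np.array([2.0, 0.7])                 # amplitude, decay rate
        sigma = 0.05                                      # observation noise std
        data = true_theta[0] * np.exp(-true_theta[1] * t) + sigma * rng.standard_normal(t.size)

        def neg_log_posterior(theta):
            """Gaussian likelihood with a broad Gaussian prior."""
            model = theta[0] * np.exp(-theta[1] * t)
            misfit = 0.5 * np.sum((model - data) ** 2) / sigma**2
            prior = 0.5 * np.sum((theta / 10.0) ** 2)
            return misfit + prior

        # MAP estimate (here by quasi-Newton; an adjoint code would supply the gradient)
        theta_map = minimize(neg_log_posterior, x0=np.array([1.0, 1.0]), method="BFGS").x

        # Second-order information: finite-difference Hessian of the negative log posterior
        def hessian(fun, x, eps=1e-5):
            n = x.size
            H = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    ei, ej = np.zeros(n), np.zeros(n)
                    ei[i], ej[j] = eps, eps
                    H[i, j] = (fun(x + ei + ej) - fun(x + ei) - fun(x + ej) + fun(x)) / eps**2
            return H

        cov = np.linalg.inv(hessian(neg_log_posterior, theta_map))   # Laplace approximation
        print("MAP estimate:", theta_map)
        print("1-sigma uncertainties:", np.sqrt(np.diag(cov)))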

  20. Simulations of emissivity in passive microwave remote sensing with three-dimensional numerical solutions of Maxwell equations and fast algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Lin

    In the first part of the work, we developed code for large-scale computation to solve the 3-dimensional microwave scattering problem. Maxwell integral equations are solved using MoM with RWG basis functions in conjunction with fast computation algorithms. Cost-effective parallel and distributed simulations were implemented on a low-cost PC cluster, which consists of 32 processors connected to a fast Ethernet switch. More than a million surface-current unknowns were solved at unprecedented speeds. Accurate simulations of emissivities and bistatic coefficients from ocean and soil were achieved. An exponential correlation function and an ocean spectrum are implemented for generating soil and ocean surfaces. They have fine-scale features with large rms slope. The results were justified by comparison with numerical results from the original code, which is based on pulse basis functions, with analytic methods such as SPM, and with experiments. In the second part of the work, fully polarimetric microwave emissions from wind-generated foam-covered ocean surfaces were investigated. The foam is treated as densely packed air bubbles coated with a thin seawater layer. The absorption, scattering and extinction coefficients were calculated by Monte Carlo simulations of solutions of Maxwell equations for a collection of coated particles. The effects of boundary roughness of ocean surfaces were included by using the second-order small perturbation method (SPM) describing the reflection coefficients between foam and ocean. An empirical wave-number spectrum was used to represent the small-scale wind-generated sea surfaces. The theoretical results of the four Stokes brightness temperatures with typical foam parameters in passive remote sensing at 10.8 GHz, 19.0 GHz and 36.5 GHz were illustrated. The azimuthal variations of polarimetric brightness temperature were calculated. Emission for various wind speeds and foam-layer thicknesses was studied. The results were also compared

  1. Skyrmions in Yang-Mills theories with massless adjoint quarks

    SciTech Connect

    Auzzi, R.; Bolognesi, S.; Shifman, M.

    2008-06-15

    Dynamics of SU(N_c) Yang-Mills theories with N_f adjoint Weyl fermions is quite different from that of SU(N_c) gauge theories with fundamental quarks. The symmetry breaking pattern is SU(N_f) → SO(N_f). The corresponding sigma model supports Skyrmions whose microscopic identification is not immediately clear. We address this issue as well as the issue of the Skyrmion stability. The case of N_f = 2 had been considered previously. Here we discuss N_f ≥ 3. We discuss the coupling between the massless Goldstone bosons and massive composite fermions [with mass O(N_c^0)] from the standpoint of the low-energy chiral sigma model. We derive the Wess-Zumino-Novikov-Witten term and then determine Skyrmion statistics. We also determine their fermion number (mod 2) and observe an abnormal relation between the statistics and the fermion number. This explains the Skyrmion stability. In addition, we consider another microscopic theory--SO(N_c) Yang-Mills with N_f Weyl fermions in the vectorial representation--which has the same chiral symmetry breaking pattern and the same chiral Lagrangian. We discuss distinctive features of these two scenarios.

  2. An accelerated photo-magnetic imaging reconstruction algorithm based on an analytical forward solution and a fast Jacobian assembly method

    NASA Astrophysics Data System (ADS)

    Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.

    2016-10-01

    We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high-resolution optical absorption images from these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods during the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed using first synthetic data and afterwards real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold compared to a single iteration of the FEM-based algorithm.
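
    In spirit, a non-iterative reconstruction with a precomputed sensitivity matrix reduces to a single regularized linear solve. The following generic Tikhonov sketch uses a random Jacobian and synthetic measurements purely for illustration; it is not the PMI forward model or the authors' analytic Jacobian assembly.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical sensitivity matrix J (n_measurements x n_voxels) and a sparse
        # "true" absorption update confined to a few voxels.
        n_meas, n_vox = 200, 100
        J = rng.standard_normal((n_meas, n_vox))
        x_true = np.zeros(n_vox)
        x_true[[20, 55, 70]] = [1.0, 0.5, 0.8]

        # Simulated temperature-change measurements with noise
        y = J @ x_true + 0.01 * rng.standard_normal(n_meas)

        # One-step Tikhonov-regularized reconstruction: x = (J^T J + lam*I)^(-1) J^T y
        lam = 1.0
        x_rec = np.linalg.solve(J.T @ J + lam * np.eye(n_vox), J.T @ y)

        print("largest reconstructed voxels:", np.sort(np.argsort(x_rec)[-3:]))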

  3. Analysis Of Solute Concentration And Concentration Derivative Distribution By Means Of Frameshift Fourier And Other Algorithms Applied To Rayleigh Interferometric And Fresnel Fringe Patterns

    NASA Astrophysics Data System (ADS)

    Rowe, Arthur J.; Jones, S. W.; Thomas, D.; Harding, Stephen E.

    1989-11-01

    The equilibrium distribution of particles dispersed in an aqueous solution situated in a centrifugal accelerative field is routinely studied by means of an optical trace recorded photographically. Rayleigh interferometric fringe patterns have been widely used to give this trace, in which the displacement of the parallel fringes is directly related to particle concentration differences. We have developed a simple but highly efficient frameshift algorithm for automatic interpretation of these patterns. Results obtained from extensive use and further refinement of this algorithm confirm its validity and utility. We have also studied algorithms for the interpretation of Fresnel fringe patterns yielded by an alternative optical system. These more complex patterns, involving non-parallel fringes, can be analysed successfully, subject to certain conditions, with a precision similar to that obtained using Rayleigh interference optics.
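
    A much simpler relative of such fringe-analysis algorithms is estimating the displacement between two fringe profiles by FFT cross-correlation. The sketch below is a generic shift estimator on synthetic fringes (envelope, period, and shift are assumed values), not the frameshift algorithm of the paper.

        import numpy as np

        # Two synthetic fringe profiles: the second is the first displaced by 7 pixels.
        n = 512
        x = np.arange(n)
        envelope = np.exp(-0.5 * ((x - n / 2) / 80.0) ** 2)        # keeps the peak unique
        fringe_a = envelope * (1.0 + np.cos(2 * np.pi * x / 16.0))
        fringe_b = np.roll(fringe_a, 7)                            # displaced pattern

        # Cross-correlation via FFT; the location of the peak gives the displacement.
        corr = np.fft.ifft(np.fft.fft(fringe_a).conj() * np.fft.fft(fringe_b)).real
        shift = int(np.argmax(corr))
        if shift > n // 2:                                         # map to a signed shift
            shift -= n
        print("estimated fringe displacement (pixels):", shift)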

  4. The application of the gradient-based adjoint multi-point optimization of single and double shock control bumps for transonic airfoils

    NASA Astrophysics Data System (ADS)

    Mazaheri, K.; Nejati, A.; Chaharlang Kiani, K.; Taheri, R.

    2016-07-01

    A shock control bump (SCB) is a flow control method that uses local small deformations in a flexible wing surface to considerably reduce the strength of shock waves and the resulting wave drag in transonic flows. Most of the reported research is devoted to optimization at a single flow condition. Here, we have used a multi-point adjoint optimization scheme to optimize the shape and location of the SCB. In practice, this yields transonic airfoils equipped with SCBs that are simultaneously optimized for several off-design transonic flight conditions. We use this optimization algorithm to enhance and optimize the performance of SCBs on two benchmark airfoils, i.e., the RAE-2822 and the NACA-64-A010, over a wide range of off-design Mach numbers. All results are compared with the usual single-point optimization. We use numerical simulation of the turbulent viscous flow and a gradient-based adjoint algorithm to find the optimum location and shape of the SCB. We show that the application of SCBs may increase the aerodynamic performance by 21.9 % for the RAE-2822 airfoil and by 22.8 % for the NACA-64-A010 airfoil compared to the no-bump design in a particular flight condition. We have also investigated the simultaneous use of two bumps on the upper and lower surfaces of the airfoil. This resulted in a 26.1 % improvement for the RAE-2822 compared to the clean airfoil in one flight condition.
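
    Multi-point optimization amounts to minimizing a weighted sum of the objective evaluated at several operating conditions, with the total gradient assembled from the per-condition gradients (each of which would come from one adjoint solve in the real problem). A generic steepest-descent sketch with quadratic surrogates standing in for the drag at each assumed Mach number:

        import numpy as np

        conditions = [0.70, 0.74, 0.78]            # off-design Mach numbers (illustrative)
        weights = np.array([0.25, 0.5, 0.25])      # relative importance of each condition

        def drag(x, m):
            """Hypothetical surrogate for the drag at condition m; design variables x."""
            target = np.array([0.1 * m, -0.05 * m])   # condition-dependent optimum
            return np.sum((x - target) ** 2) + 0.01 * m

        def drag_grad(x, m):
            """In the real problem this gradient would come from one adjoint solve."""
            target = np.array([0.1 * m, -0.05 * m])
            return 2.0 * (x - target)

        x = np.zeros(2)                            # design variables (e.g. bump height, location)
        for it in range(200):
            g = sum(w * drag_grad(x, m) for w, m in zip(weights, conditions))
            x -= 0.05 * g                          # steepest-descent update

        total = sum(w * drag(x, m) for w, m in zip(weights, conditions))
        print("optimized design variables:", x, " weighted objective:", total)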

  5. New Factorization Techniques and Parallel O(log N) Algorithms for Forward Dynamics Solution of Single Closed-Chain Robot Manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir

    1993-01-01

    In this paper parallel O(log N) algorithms for dynamic simulation of single closed-chain rigid multibody systems, specialized to the case of a robot manipulator in contact with the environment, are developed.

  6. The Research of Solution to the Problems of Complex Task Scheduling Based on Self-adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Li; He, Yongxiang; Xue, Haidong; Chen, Leichen

    Traditional genetic algorithms (GAs) display a tendency toward premature convergence when dealing with scheduling problems. To adapt the crossover and mutation operators self-adaptively, this paper proposes a self-adaptive GA aimed at multitask scheduling optimization under limited resources. The experimental results show that the proposed algorithm outperforms the traditional GA in its evolutionary ability to deal with complex task scheduling optimization.
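
    A common way to make crossover and mutation self-adaptive is to scale their probabilities with fitness, so that above-average individuals are disturbed less and poor ones are explored more (in the spirit of Srinivas and Patnaik's adaptive GA). The sketch below applies that idea to a toy task-to-machine scheduling problem; the problem data and rate constants are assumptions, and this is a generic illustration rather than the algorithm proposed in the paper.

        import numpy as np

        rng = np.random.default_rng(42)
        n_tasks, n_machines, pop_size, n_gen = 20, 4, 40, 100
        cost = rng.uniform(1.0, 10.0, size=(n_tasks, n_machines))   # task processing times

        def makespan(assign):
            """Schedule length: the load of the most loaded machine."""
            loads = np.zeros(n_machines)
            for task, machine in enumerate(assign):
                loads[machine] += cost[task, machine]
            return loads.max()

        def fitness(assign):
            return 1.0 / makespan(assign)

        pop = rng.integers(0, n_machines, size=(pop_size, n_tasks))
        for gen in range(n_gen):
            fit = np.array([fitness(ind) for ind in pop])
            f_max, f_avg = fit.max(), fit.mean()
            new_pop = []
            while len(new_pop) < pop_size:
                # Tournament selection of two parents
                i, j = rng.integers(0, pop_size, 2), rng.integers(0, pop_size, 2)
                p1 = pop[i[np.argmax(fit[i])]].copy()
                p2 = pop[j[np.argmax(fit[j])]].copy()
                # Self-adaptive crossover probability: lower for above-average parents
                f_pair = max(fitness(p1), fitness(p2))
                pc = 0.9 if f_pair < f_avg else 0.9 * (f_max - f_pair) / (f_max - f_avg + 1e-12)
                if rng.random() < pc:
                    cut = rng.integers(1, n_tasks)
                    p1[cut:], p2[cut:] = p2[cut:].copy(), p1[cut:].copy()
                # Self-adaptive mutation probability, evaluated per child
                for child in (p1, p2):
                    f_c = fitness(child)
                    pm = 0.1 if f_c < f_avg else 0.1 * (f_max - f_c) / (f_max - f_avg + 1e-12)
                    mask = rng.random(n_tasks) < pm
                    child[mask] = rng.integers(0, n_machines, mask.sum())
                    new_pop.append(child)
            pop = np.array(new_pop[:pop_size])

        best = min(pop, key=makespan)
        print("best makespan found:", makespan(best))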

  7. Adjoint-based airfoil shape optimization in transonic flow

    NASA Astrophysics Data System (ADS)

    Gramanzini, Joe-Ray

    The primary focus of this work is efficient aerodynamic shape optimization in transonic flow. Adjoint-based optimization techniques are employed on airfoil sections and evaluated in terms of computational accuracy as well as efficiency. This study examines two test cases proposed by the AIAA Aerodynamic Design Optimization Discussion Group. The first is a two-dimensional, transonic, inviscid, non-lifting optimization of a modified NACA 0012 airfoil. The second is a two-dimensional, transonic, viscous optimization problem using an RAE 2822 airfoil. The FUN3D CFD code of NASA Langley Research Center is used as the flow solver for the gradient-based optimization cases. Two shape parameterization techniques are employed to study the effect of the parameterization and of the number of design variables on the final optimized shape: Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD) and the BandAids free-form deformation technique. For the two airfoil cases, angle of attack is treated as a global design variable. The thickness and camber distributions are the local design variables for MASSOUD, and selected airfoil surface grid points are the local design variables for BandAids. Using the MASSOUD technique, a drag reduction of 72.14% is achieved for the NACA 0012 case, reducing the total drag from 473.91 to 130.59 counts. Employing the BandAids technique yields a 78.67% drag reduction, from 473.91 to 99.98 counts. The RAE 2822 case exhibited a drag reduction from 217.79 to 132.79 counts, a 39.05% decrease, using BandAids.

  8. Generalized adjoint consistent treatment of wall boundary conditions for compressible flows

    NASA Astrophysics Data System (ADS)

    Hartmann, Ralf; Leicht, Tobias

    2015-11-01

    In this article, we revisit the adjoint consistency analysis of Discontinuous Galerkin discretizations of the compressible Euler and Navier-Stokes equations with application to the Reynolds-averaged Navier-Stokes and k-ω turbulence equations. Here, particular emphasis is laid on the discretization of wall boundary conditions. While previously only one specific combination of discretizations of wall boundary conditions and of aerodynamic force coefficients has been shown to give an adjoint consistent discretization, in this article we generalize this analysis and provide a discretization of the force coefficients for any consistent discretization of wall boundary conditions. Furthermore, we demonstrate that a related evaluation of the cp- and cf-distributions is required. The freedom gained in choosing the discretization of boundary conditions without losing adjoint consistency is used to devise a new adjoint consistent discretization including numerical fluxes on the wall boundary which is more robust than the adjoint consistent discretization known up to now. While this work is presented in the framework of Discontinuous Galerkin discretizations, the insight gained is also applicable to (and thus valuable for) other discretization schemes. In particular, the discretization of integral quantities, like the drag, lift and moment coefficients, as well as the discretization of local quantities at the wall like surface pressure and skin friction should follow as closely as possible the discretization of the flow equations and boundary conditions at the wall boundary.

  9. Assessing the Impact of Observations on Numerical Weather Forecasts Using the Adjoint Method

    NASA Technical Reports Server (NTRS)

    Gelaro, Ronald

    2012-01-01

    The adjoint of a data assimilation system provides a flexible and efficient tool for estimating observation impacts on short-range weather forecasts. The impacts of any or all observations can be estimated simultaneously based on a single execution of the adjoint system. The results can be easily aggregated according to data type, location, channel, etc., making this technique especially attractive for examining the impacts of new hyper-spectral satellite instruments and for conducting regular, even near-real time, monitoring of the entire observing system. This talk provides a general overview of the adjoint method, including the theoretical basis and practical implementation of the technique. Results are presented from the adjoint-based observation impact monitoring tool in NASA's GEOS-5 global atmospheric data assimilation and forecast system. When performed in conjunction with standard observing system experiments (OSEs), the adjoint results reveal both redundancies and dependencies between observing system impacts as observations are added or removed from the assimilation system. Understanding these dependencies may be important for optimizing the use of the current observational network and defining requirements for future observing systems
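
    The mechanics are easy to write down in a toy linear setting with an explicit gain matrix: the adjoint of the analysis step is just the transpose of the gain, and a single adjoint application distributes a forecast-error gradient back onto every observation at once (loosely in the spirit of the Langland-Baker formulation; this is not the GEOS-5 tool, and all matrices below are made up for illustration).

        import numpy as np

        rng = np.random.default_rng(3)
        n, p = 10, 6                                    # state size, number of observations

        # Toy linear assimilation and forecast: x_a = x_b + K (y - H x_b),  forecast = M x
        H = 0.5 * rng.standard_normal((p, n))
        B = np.eye(n)                                   # background error covariance
        R = 0.1 * np.eye(p)                             # observation error covariance
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # gain matrix
        M = np.eye(n) + 0.1 * rng.standard_normal((n, n))

        x_true = rng.standard_normal(n)
        x_b = x_true + rng.standard_normal(n)           # background (with error)
        y = H @ x_true + 0.1 * rng.standard_normal(p)   # observations

        d = y - H @ x_b                                 # innovations
        x_a = x_b + K @ d                               # analysis

        # Forecast error measure e(x) = ||M x - x_true||^2; average of its gradients at x_a, x_b
        e = lambda x: np.sum((M @ x - x_true) ** 2)
        g = M.T @ (M @ x_a - x_true) + M.T @ (M @ x_b - x_true)

        # One adjoint application (K^T) maps the forecast-error gradient onto observation
        # space; the elementwise product with the innovations gives every impact at once.
        impact = d * (K.T @ g)
        print("per-observation impacts:", impact)
        print("sum of impacts:", impact.sum(), "  e(x_a) - e(x_b):", e(x_a) - e(x_b))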

  10. Bracketing to speed convergence illustrated on the von Neumann algorithm for finding a feasible solution to a linear program with a convexity constraint

    SciTech Connect

    Dantzig, G.B.

    1992-10-01

    Analogous to gunners firing trial shots to bracket a target in order to adjust direction and distance, we demonstrate that it is sometimes faster not to apply an algorithm directly, but to approximately solve several perturbations of the problem and then combine these rough approximations to get an exact solution. To find a feasible solution to an m-equation linear program with a convexity constraint, the von Neumann algorithm generates a sequence of approximate solutions which converge very slowly to the right-hand side b^0. However, it can be redirected so that in the first few iterations it is guaranteed to move rapidly towards the neighborhood of one of m + 1 perturbed right-hand sides b̂^i, then redirected in turn to the next b̂^i. Once within the neighborhood of each b̂^i, a weighted sum of the approximate solutions x̄^i yields the exact solution of the unperturbed problem, where the weights are found by solving a system of m + 1 equations in m + 1 unknowns. It is assumed an r > 0 is given for which the problem is feasible for all right-hand sides b whose distance ‖b − b^0‖₂ ≤ r. The feasible solution is found in less than 4(m + 1)³/r² iterations. The work per iteration is δmn + 2m + n + 9 multiplications plus δmn + m + n + 9 additions or comparisons, where δ is the density of nonzero coefficients in the matrix.
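
    The underlying von Neumann iteration itself is simple: repeatedly pick the column most aligned with the remaining discrepancy and take the best convex step toward it. A minimal sketch under the assumption that the right-hand side lies in the convex hull of the columns; it shows only the basic, slowly converging iteration, not the bracketing acceleration of the report.

        import numpy as np

        rng = np.random.default_rng(7)
        m, n = 5, 30
        A = rng.standard_normal((m, n))

        # Construct a right-hand side that is a convex combination of the columns,
        # so that the feasibility problem  A x = b, sum(x) = 1, x >= 0  has a solution.
        w = rng.random(n); w /= w.sum()
        b = A @ w

        # von Neumann-type iteration: start at an arbitrary column of A.
        x = np.zeros(n); x[0] = 1.0
        u = A[:, 0].copy()                       # current approximation u = A x
        for k in range(50000):
            r = u - b
            j = np.argmin(A.T @ r)               # column making the most progress
            d = A[:, j] - u
            lam = np.clip(-(r @ d) / (d @ d + 1e-300), 0.0, 1.0)  # best convex step
            u += lam * d
            x *= (1.0 - lam); x[j] += lam
            if np.linalg.norm(u - b) < 1e-4:     # the basic iteration converges slowly,
                break                            # which is what bracketing accelerates

        print("iterations:", k + 1, " residual:", np.linalg.norm(A @ x - b),
              " sum(x):", x.sum(), " min(x):", x.min())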

  11. Bracketing to speed convergence illustrated on the von Neumann algorithm for finding a feasible solution to a linear program with a convexity constraint. Technical report

    SciTech Connect

    Dantzig, G.B.

    1992-10-01

    Analogous to gunners firing trial shots to bracket a target in order to adjust direction and distance, we demonstrate that it is sometimes faster not to apply an algorithm directly, but to approximately solve several perturbations of the problem and then combine these rough approximations to get an exact solution. To find a feasible solution to an m-equation linear program with a convexity constraint, the von Neumann algorithm generates a sequence of approximate solutions which converge very slowly to the right-hand side b^0. However, it can be redirected so that in the first few iterations it is guaranteed to move rapidly towards the neighborhood of one of m + 1 perturbed right-hand sides b̂^i, then redirected in turn to the next b̂^i. Once within the neighborhood of each b̂^i, a weighted sum of the approximate solutions x̄^i yields the exact solution of the unperturbed problem, where the weights are found by solving a system of m + 1 equations in m + 1 unknowns. It is assumed an r > 0 is given for which the problem is feasible for all right-hand sides b whose distance ‖b − b^0‖₂ ≤ r. The feasible solution is found in less than 4(m + 1)³/r² iterations. The work per iteration is δmn + 2m + n + 9 multiplications plus δmn + m + n + 9 additions or comparisons, where δ is the density of nonzero coefficients in the matrix.

  12. Adjoint Methods for Adjusting Three-Dimensional Atmosphere and Surface Properties to Fit Multi-Angle Multi-Pixel Polarimetric Measurements

    NASA Technical Reports Server (NTRS)

    Martin, William G.; Cairns, Brian; Bal, Guillaume

    2014-01-01

    This paper derives an efficient procedure for using the three-dimensional (3D) vector radiative transfer equation (VRTE) to adjust atmosphere and surface properties and improve their fit with multi-angle/multi-pixel radiometric and polarimetric measurements of scattered sunlight. The proposed adjoint method uses the 3D VRTE to compute the measurement misfit function and the adjoint 3D VRTE to compute its gradient with respect to all unknown parameters. In the remote sensing problems of interest, the scalar-valued misfit function quantifies agreement with data as a function of atmosphere and surface properties, and its gradient guides the search through this parameter space. Remote sensing of the atmosphere and surface in a three-dimensional region may require thousands of unknown parameters and millions of data points. Many approaches would require calls to the 3D VRTE solver in proportion to the number of unknown parameters or measurements. To avoid this issue of scale, we focus on computing the gradient of the misfit function as an alternative to the Jacobian of the measurement operator. The resulting adjoint method provides a way to adjust 3D atmosphere and surface properties with only two calls to the 3D VRTE solver for each spectral channel, regardless of the number of retrieval parameters, measurement view angles or pixels. This gives a procedure for adjusting atmosphere and surface parameters that will scale to the large problems of 3D remote sensing. For certain types of multi-angle/multi-pixel polarimetric measurements, this encourages the development of a new class of three-dimensional retrieval algorithms with more flexible parametrizations of spatial heterogeneity, less reliance on data screening procedures, and improved coverage in terms of the resolved physical processes in the Earth's atmosphere.
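
    The computational point, that the gradient of a scalar misfit needs only one forward and one adjoint application regardless of the number of unknowns, can be shown with a toy matrix-free forward operator (the operator below is an arbitrary smooth map, not a radiative transfer solver):

        import numpy as np

        rng = np.random.default_rng(5)
        n_params, n_data = 1000, 300
        W = rng.standard_normal((n_data, n_params)) / np.sqrt(n_params)

        def forward(x):
            """Toy 'solver': maps parameters to measurements (one forward call)."""
            return np.tanh(W @ x)

        def adjoint(x, v):
            """Adjoint of the linearized forward model applied to v (one adjoint call)."""
            return W.T @ ((1.0 - np.tanh(W @ x) ** 2) * v)

        y_obs = rng.standard_normal(n_data)          # synthetic measurements
        x = rng.standard_normal(n_params)            # current parameter estimate

        # Misfit gradient for all 1000 parameters from exactly two "solver" calls
        residual = forward(x) - y_obs                # forward call
        grad = adjoint(x, residual)                  # adjoint call

        # Check one component against a central finite difference
        i, eps = 123, 1e-6
        e = np.zeros(n_params); e[i] = eps
        fd = (0.5 * np.sum((forward(x + e) - y_obs) ** 2)
              - 0.5 * np.sum((forward(x - e) - y_obs) ** 2)) / (2 * eps)
        print("adjoint gradient:", grad[i], " finite difference:", fd)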

  13. Iterative algorithm to compute the maximal and stabilising solutions of a general class of discrete-time Riccati-type equations

    NASA Astrophysics Data System (ADS)

    Dragan, Vasile; Morozan, Toader; Stoica, Adrian-Mihail

    2010-04-01

    In this article an iterative method to compute the maximal solution and the stabilising solution, respectively, of a wide class of discrete-time nonlinear equations on the linear space of symmetric matrices is proposed. The class of discrete-time nonlinear equations under consideration contains, as special cases, different types of discrete-time Riccati equations involved in various control problems for discrete-time stochastic systems. This article may be viewed as an addendum of the work of Dragan and Morozan (Dragan, V. and Morozan, T. (2009), 'A Class of Discrete Time Generalized Riccati Equations', Journal of Difference Equations and Applications, first published on 11 December 2009 (iFirst), doi: 10.1080/10236190802389381) where necessary and sufficient conditions for the existence of the maximal solution and stabilising solution of this kind of discrete-time nonlinear equations are given. The aim of this article is to provide a procedure for numerical computation of the maximal solution and the stabilising solution, respectively, simpler than the method based on the Newton-Kantorovich algorithm.
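
    For the standard discrete-time algebraic Riccati equation, one simple member of this family of iterations is the fixed-point (value-iteration) recursion: starting from X0 = Q and repeatedly applying the Riccati operator converges, under stabilizability and detectability assumptions, to the stabilizing solution. The sketch below is the generic textbook recursion with made-up system matrices, not the specific algorithm of the article.

        import numpy as np

        # A stabilizable/detectable discrete-time system (illustrative numbers)
        A = np.array([[1.0, 0.1],
                      [0.0, 0.9]])
        B = np.array([[0.0],
                      [0.1]])
        Q = np.eye(2)
        R = np.array([[1.0]])

        def riccati_map(X):
            """X -> A'XA - A'XB (R + B'XB)^(-1) B'XA + Q."""
            S = R + B.T @ X @ B
            return A.T @ X @ A - A.T @ X @ B @ np.linalg.solve(S, B.T @ X @ A) + Q

        X = Q.copy()
        for k in range(10000):
            X_new = riccati_map(X)
            diff = np.max(np.abs(X_new - X))
            X = X_new
            if diff < 1e-12:
                break

        # The stabilizing property: the closed-loop matrix A - B K has spectral radius < 1
        K = np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A)
        rho = np.max(np.abs(np.linalg.eigvals(A - B @ K)))
        print("iterations:", k + 1, " closed-loop spectral radius:", rho)
        print("residual of the Riccati equation:", np.max(np.abs(riccati_map(X) - X)))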

  14. Adjoint Sensitivity Analysis of a Coupled Groundwater-Surface Water Model

    NASA Astrophysics Data System (ADS)

    Kelley, V. A.

    2013-12-01

    Derivation of the exact equations of adjoint sensitivity analysis for a coupled groundwater-surface water model is presented here, with reference to the Stream package in MODFLOW-2005. MODFLOW-2005 offers two distinct packages to simulate river boundary conditions in an aquifer model: the RIV (RIVer) package and the STR (STReam) package. The STR package simulates a coupled groundwater and surface water flow model. As a result of the coupling between the groundwater and surface water flows, the flows to/from the aquifer depend not just on the river stage and aquifer head at that location (as would happen in the RIV package), but on the river stages and aquifer heads at all upstream locations in the complex network of streams with all its distributaries and diversions. This requires a substantial modification of the adjoint state equations (not required for the RIV package). The necessary equations for the STR package have now been developed and implemented in the MODFLOW-ADJOINT code. The exact STR adjoint code has been validated by comparison with results from the parameter perturbation method, for the San Pedro model (USGS) and the Northern Arizona Regional Aquifer model (USGS). When the RIV package is used for the same models, the sensitivity analysis results are incorrect for some nodes, indicating the advantage of using the exact methods of the STR package in the MODFLOW-ADJOINT code. This exact analysis has been used for deriving capture functions in the management of groundwater, subject to constraints on the depletion of surface water supplies. Capture maps are used for optimal location of pumping wells, their rates of withdrawal, and their timing. Because of the immense savings in computational time with this adjoint strategy, it is feasible to embed the groundwater management problem in a stochastic framework (probabilistic approach) to address the uncertainties in the groundwater model.

  15. Tracking influential haze source areas in North China using an adjoint model, GRAPES-CUACE

    NASA Astrophysics Data System (ADS)

    An, X. Q.; Zhai, S. X.; Jin, M.; Gong, S. L.; Wang, Y.

    2015-08-01

    Based upon adjoint theory, the adjoint of the aerosol module in the atmospheric chemical modeling system GRAPES-CUACE (Global/Regional Assimilation and PrEdiction System coupled with the CMA Unified Atmospheric Chemistry Environment) was developed and tested for its correctness. Through statistical comparison, BC (black carbon aerosol) concentrations simulated by GRAPES-CUACE were generally consistent with observations from the Nanjiao (urban) and Shangdianzi (rural) observation stations. To track the most influential emission-source regions and the most influential time intervals for the high BC concentration during the simulation period, the adjoint model was adopted to compute the sensitivity of the average BC concentration over Beijing at the time of peak concentration (referred to as the objective function) with respect to the BC emission amount over the Beijing-Tianjin-Hebei region. Four types of regions were selected based on administrative division and sensitivity coefficient distribution. The adjoint model was used to quantify the effects of emission-source reductions in different time intervals over different regions in a single independent simulation. Effects of different emission reduction strategies based on adjoint sensitivity information show that the more influential regions (regions with relatively larger sensitivity coefficients) do not necessarily correspond to the administrative regions, and that controlling sensitivity-oriented regions is more effective than controlling administrative divisions. The influence of emissions on the objective function decreases sharply for pollutants emitted roughly 17-18 h earlier in this episode. Therefore, controlling critical emission regions during critical time intervals on the basis of adjoint sensitivity analysis is much more efficient than controlling administratively specified regions over an empirically chosen time period.

  16. Aerodynamic Shape Optimization of Supersonic Aircraft Configurations via an Adjoint Formulation on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Reuther, James; Alonso, Juan Jose; Rimlinger, Mark J.; Jameson, Antony

    1996-01-01

    This work describes the application of a control theory-based aerodynamic shape optimization method to the problem of supersonic aircraft design. The design process is greatly accelerated through the use of both control theory and a parallel implementation on distributed memory computers. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is then implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) Standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on higher order computational fluid dynamics methods (CFD). In our earlier studies, the serial implementation of this design method was shown to be effective for the optimization of airfoils, wings, wing-bodies, and complex aircraft configurations using both the potential equation and the Euler equations. In our most recent paper, the Euler method was extended to treat complete aircraft configurations via a new multiblock implementation. Furthermore, during the same conference, we also presented preliminary results demonstrating that this basic methodology could be ported to distributed memory parallel computing architectures. In this paper, our concern will be to demonstrate that the combined power of these new technologies can be used routinely in an industrial design environment by applying it to the case study of the design of typical supersonic transport configurations. A particular difficulty of this test case is posed by the propulsion/airframe integration.

  17. Aerodynamic Shape Optimization of Supersonic Aircraft Configurations via an Adjoint Formulation on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Reuther, James; Alonso, Juan Jose; Rimlinger, Mark J.; Jameson, Antony

    1996-01-01

    This work describes the application of a control theory-based aerodynamic shape optimization method to the problem of supersonic aircraft design. The design process is greatly accelerated through the use of both control theory and a parallel implementation on distributed memory computers. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods [13, 12, 44, 38]. The resulting problem is then implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) Standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on higher order computational fluid dynamics methods (CFD). In our earlier studies, the serial implementation of this design method [19, 20, 21, 23, 39, 25, 40, 41, 42, 43, 9] was shown to be effective for the optimization of airfoils, wings, wing-bodies, and complex aircraft configurations using both the potential equation and the Euler equations [39, 25]. In our most recent paper, the Euler method was extended to treat complete aircraft configurations via a new multiblock implementation. Furthermore, during the same conference, we also presented preliminary results demonstrating that the basic methodology could be ported to distributed memory parallel computing architectures [24]. In this paper, our concern will be to demonstrate that the combined power of these new technologies can be used routinely in an industrial design environment by applying it to the case study of the design of typical supersonic transport configurations. A particular difficulty of this test case is posed by the propulsion/airframe integration.

  18. Dirac lattices, zero-range potentials, and self-adjoint extension

    NASA Astrophysics Data System (ADS)

    Bordag, M.; Muñoz-Castañeda, J. M.

    2015-03-01

    We consider the electromagnetic field in the presence of polarizable point dipoles. In the corresponding effective Maxwell equation these dipoles are described by three-dimensional delta function potentials. We review the approaches for handling these: the self-adjoint extension, regularization/renormalization and the zero-range potential methods. Their close interrelations are discussed in detail and compared with the electrostatic approach, which drops the contributions from the self fields. For a homogeneous two-dimensional lattice of dipoles we write down the complete solutions, which allow, for example, for an easy numerical treatment of the scattering of the electromagnetic field on the lattice or for investigating plasmons. Using these formulas, we consider the limiting case of vanishing lattice spacing, i.e., the transition to a continuous sheet. For a scalar field and for the TE polarization of the electromagnetic field this transition is smooth and recovers the known results for a continuous sheet. In particular, for the TE polarization we reproduce the results known from the hydrodynamic model describing a two-dimensional electron gas. For the TM polarization, for polarizability parallel and perpendicular to the lattice, the transition is singular in both cases. For the parallel polarizability this is surprising and different from the hydrodynamic model. For perpendicular polarizability this is what was known in the literature. We also investigate the case when the transition is done with dipoles described by a smeared delta function, i.e., keeping a regularization. Here, for TM polarization with parallel polarizability, when subsequently taking the limit of vanishing lattice spacing, we reproduce the result known from the hydrodynamic model. In the case of perpendicular polarizability we need an additional renormalization to reproduce the result obtained previously by stepping back from the dipole approximation.

  19. CMT Source Inversions for Massive Data Assimilation in Global Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Lei, W.; Ruan, Y.; Bozdag, E.; Lefebvre, M. P.; Smith, J. A.; Modrak, R. T.; Komatitsch, D.; Song, X.; Liu, Q.; Tromp, J.; Peter, D. B.

    2015-12-01

    Full Waveform Inversion (FWI) is a vital tool for probing the Earth's interior and enhancing our knowledge of the underlying dynamical processes [e.g., Liu et al., 2012]. Using the adjoint tomography method, we have successfully obtained a first-generation global FWI model named M15 [Bozdag et al., 2015]. To achieve higher resolution of the emerging new structural features and to accommodate azimuthal anisotropy and anelasticity in the next-generation model, we expanded our database from 256 to 4,224 earthquakes. Previous studies have shown that ray-theory-based Centroid Moment Tensor (CMT) inversion algorithms can produce systematic biases in earthquake source parameters due to tradeoffs with 3D crustal and mantle heterogeneity [e.g., Hjorleifsdottir et al., 2010]. To reduce these well-known tradeoffs, we performed CMT inversions in our current 3D global model before resuming the structural inversion with the expanded database. Initial source parameters are selected from the global CMT database [Ekstrom et al., 2012], with moment magnitudes ranging from 5.5 to 7.0 and occurring between 1994 and 2015. Data from global and regional networks were retrieved from the IRIS DMC. Synthetic seismograms were generated based on the spectral-element-based seismic wave propagation solver (SPECFEM3D GLOBE) in model M15. We used a source inversion algorithm based on a waveform misfit function while allowing time shifts between data and synthetics to accommodate additional unmodeled 3D heterogeneity [Liu et al., 2004]. To accommodate the large number of earthquakes and time series (more than 10,000,000 records), we implemented a source inversion workflow based on the newly developed Adaptive Seismic Data Format (ASDF) [Krischer, Smith, et al., 2015] and ObsPy [Krischer et al., 2015]. In ASDF, each earthquake is associated with a single file, thereby eliminating I/O bottlenecks in the workflow and facilitating fast parallel processing. Our preliminary results indicate that errors

  20. Global Adjoint Tomography: Combining Big Data with HPC Simulations

    NASA Astrophysics Data System (ADS)

    Bozdag, E.; Lefebvre, M. P.; Lei, W.; Peter, D. B.; Smith, J. A.; Komatitsch, D.; Tromp, J.

    2014-12-01

    The steady increase in data quality and the number of global seismographic stations have substantially grown the amount of data available for the construction of Earth models. Meanwhile, developments in the theory of wave propagation, numerical methods and HPC systems have enabled unprecedented simulations of seismic wave propagation in realistic 3D Earth models, which enable the extraction of more information from data, ultimately culminating in the use of entire three-component seismograms. Our aim is to take adjoint tomography further and image the entire planet, which is one of the extreme cases in seismology due to its intense computational requirements and the vast amount of high-quality seismic data that can potentially be assimilated in inversions. We have started low-resolution (T > 27 s, soon to be T > 17 s) global inversions with 253 earthquakes for a transversely isotropic crust and mantle model on Oak Ridge National Laboratory's Cray XK7 "Titan" system. Recent improvements in our 3D solvers, such as the GPU version of the SPECFEM3D_GLOBE package, will allow us to perform higher-resolution (T > 9 s) and longer-duration (~180 min) simulations to take advantage of high-frequency body waves and major-arc surface waves to improve the imbalanced ray coverage resulting from the uneven distribution of sources and receivers on the globe. Our initial results after 10 iterations already indicate several prominent features reported in high-resolution continental studies, such as major slabs (Hellenic, Japan, Bismarck, Sandwich, etc.) and enhancement in plume structures (the Pacific superplume, the Hawaii hot spot, etc.). Our ultimate goal is to assimilate seismic data from more than 6,000 earthquakes within the magnitude range 5.5 ≤ Mw ≤ 7.0. To take full advantage of this data set on ORNL's computational resources, we need a solid framework for managing big data sets during pre-processing (e.g., data requests and quality checks), gradient calculations, and post-processing (e

  1. A fast algorithm for parabolic PDE-based inverse problems based on Laplace transforms and flexible Krylov solvers

    SciTech Connect

    Bakhos, Tania; Saibaba, Arvind K.; Kitanidis, Peter K.

    2015-10-15

    We consider the problem of estimating parameters in large-scale weakly nonlinear inverse problems for which the underlying governing equation is a linear, time-dependent, parabolic partial differential equation. A major challenge in solving these inverse problems using Newton-type methods is the computational cost associated with solving the forward problem and with repeated construction of the Jacobian, which represents the sensitivity of the measurements to the unknown parameters. Forming the Jacobian can be prohibitively expensive because it requires repeated solutions of the forward and adjoint time-dependent parabolic partial differential equations corresponding to multiple sources and receivers. We propose an efficient method based on a Laplace-transform exponential time integrator combined with a flexible Krylov subspace approach that solves the resulting shifted systems of equations efficiently. Our proposed solver speeds up the computation of the forward and adjoint problems, thus yielding significant speedup in total inversion time. We consider an application from transient hydraulic tomography (THT), which is an imaging technique to estimate hydraulic parameters related to the subsurface from pressure measurements obtained by a series of pumping tests. The algorithms discussed are applied to a synthetic example taken from THT to demonstrate the resulting computational gains of this proposed method.

  2. Adjoint sensitivity structures of typhoon DIANMU (2010) based on a global model

    NASA Astrophysics Data System (ADS)

    Kim, S.; Kim, H.; Joo, S.; Shin, H.; Won, D.

    2010-12-01

    The path and intensity forecasts of typhoons (TYs) depend on the initial condition of the TY itself and the surrounding background fields. Because TYs evolve over the ocean, not many observational data are available. In this sense, additional observations over the western North Pacific are necessary to obtain a proper initial condition for TYs. Given the limited observing resources, identifying the regions that are sensitive for a specific forecast aspect in the forecast region of interest is very beneficial for deciding where to deploy additional observations. The additional observations deployed in those sensitive regions are called adaptive observations, and the strategies used to decide the sensitive regions are called adaptive observation strategies. Among the adaptive observation strategies, the adjoint sensitivity represents the gradient of some forecast aspect with respect to the control variables of the model (i.e., initial conditions, boundary conditions, and parameters) (Errico 1997). According to a recent study of the adjoint sensitivity of a TY based on a regional model, the sensitive regions are located horizontally in the right half circle of the TY, and vertically in the lower and upper troposphere near the TY (Kim and Jung 2006). Because the adjoint sensitivity based on a regional model is calculated in a relatively small domain, the adjoint sensitivity structures may be affected by the size and location of the domain. In this study, the adjoint sensitivity distributions for TY DIANMU (2010) based on a global model are investigated. The adjoint sensitivity based on a global model is calculated by using the perturbation forecast (PF) and adjoint PF model of the Unified Model at

  3. Measurements of phase response in an oscillatory reaction and deduction of components of the adjoint eigenvector

    SciTech Connect

    Millett, I.; Vance, W.; Ross, J.

    1999-10-14

    The authors present the first experiments that use the phase response method to determine components of the adjoint eigenvector (of the Jacobian matrix of the linearized system) of an oscillating reaction system. The Briggs-Rauscher reaction was studied near a supercritical Hopf bifurcation. Phase response curves for I⁻ and Mn²⁺ have been determined, and from them corresponding components of the adjoint eigenvector have been deduced. The relative magnitudes and difference in arguments of these components agree reasonably well with those from a reduced model of the Briggs-Rauscher reaction, whereas agreement with results from quenching experiments is mixed.

  4. Adjoint sensitivity studies of loop current and eddy shedding in the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Gopalakrishnan, Ganesh; Cornuelle, Bruce D.; Hoteit, Ibrahim

    2013-07-01

    Adjoint model sensitivity analyses were applied for the loop current (LC) and its eddy shedding in the Gulf of Mexico (GoM) using the MIT general circulation model (MITgcm). The circulation in the GoM is mainly driven by the energetic LC and subsequent LC eddy separation. In order to understand which ocean regions and features control the evolution of the LC, including anticyclonic warm-core eddy shedding in the GoM, forward and adjoint sensitivities with respect to previous model state and atmospheric forcing were computed using the MITgcm and its adjoint. Since the validity of the adjoint model sensitivities depends on the capability of the forward model to simulate the real LC system and the eddy shedding processes, a 5 year (2004-2008) forward model simulation was performed for the GoM using realistic atmospheric forcing, initial, and boundary conditions. This forward model simulation was compared to satellite measurements of sea-surface height (SSH) and sea-surface temperature (SST), and observed transport variability. Despite realistic mean state, standard deviations, and LC eddy shedding period, the simulated LC extension shows less variability and more regularity than the observations. However, the model is suitable for studying the LC system and can be utilized for examining the ocean influences leading to a simple, and hopefully generic LC eddy separation in the GoM. The adjoint sensitivities of the LC show influences from the Yucatan Channel (YC) flow and Loop Current Frontal Eddy (LCFE) on both LC extension and eddy separation, as suggested by earlier work. Some of the processes that control LC extension after eddy separation differ from those controlling eddy shedding, but include YC through-flow. The sensitivity remains stable for more than 30 days and moves generally upstream, entering the Caribbean Sea. The sensitivities of the LC for SST generally remain closer to the surface and move at speeds consistent with advection by the high-speed core of

  5. The truncated Newton using 1st and 2nd order adjoint-state method: a new approach for traveltime tomography without rays

    NASA Astrophysics Data System (ADS)

    Bretaudeau, F.; Metivier, L.; Brossier, R.; Virieux, J.

    2013-12-01

    Traveltime tomography algorithms generally use ray tracing. The use of rays in tomography may not be suitable for handling very large datasets or for performing tomography in very complex media. Traveltime maps can be computed with a finite-difference (FD) approach, avoiding complex ray-tracing algorithms for the forward modeling (Vidale 1998, Zhao 2004). However, rays back-traced from receiver to source following the gradient of traveltime are still used to compute the Fréchet derivatives. As a consequence, the sensitivity information computed using back-traced rays is not numerically consistent with the FD modeling used (the derivatives are only a rough approximation of the true derivatives of the forward modeling). Leung & Qian (2006) proposed a new approach that avoids ray tracing, in which the gradient of the misfit function is computed using the adjoint-state method. An adjoint-state variable is thus computed simultaneously for all receivers using a numerical method consistent with the forward modeling, and for the computational cost of one forward modeling. However, in their formulation, the receivers have to be located at the boundary of the investigated model, and the optimization approach is limited to simple gradient-based methods (i.e., steepest descent, conjugate gradient), as only the gradient is computed. However, the Hessian operator has an important role in gradient-based reconstruction methods, providing the necessary information to rescale the gradient, correct for illumination deficit and remove artifacts. Leung & Qian (2006) use L-BFGS, a quasi-Newton method that provides an improved estimate of the action of the inverse Hessian. Lelievre et al. (2011) also proposed a tomography approach in which the Fréchet derivatives are computed directly during the forward modeling using explicit symbolic differentiation of the modeling equations, resulting in a consistent Gauss-Newton inversion. We are interested here in the use of a new optimization approach

  6. Traveling front solutions to directed diffusion-limited aggregation, digital search trees, and the Lempel-Ziv data compression algorithm

    NASA Astrophysics Data System (ADS)

    Majumdar, Satya N.

    2003-08-01

    We use the traveling front approach to derive exact asymptotic results for the statistics of the number of particles in a class of directed diffusion-limited aggregation models on a Cayley tree. We point out that some aspects of these models are closely connected to two different problems in computer science, namely, the digital search tree problem in data structures and the Lempel-Ziv algorithm for data compression. The statistics of the number of particles studied here is related to the statistics of height in digital search trees which, in turn, is related to the statistics of the length of the longest word formed by the Lempel-Ziv algorithm. Implications of our results to these computer science problems are pointed out.

  7. Towards magnetic sounding of the Earth's core by an adjoint method

    NASA Astrophysics Data System (ADS)

    Li, K.; Jackson, A.; Livermore, P. W.

    2013-12-01

    Earth's magnetic field is generated and sustained by the so called geodynamo system in the core. Measurements of the geomagnetic field taken at the surface, downwards continued through the electrically insulating mantle to the core-mantle boundary (CMB), provide important constraints on the time evolution of the velocity, magnetic field and temperature anomaly in the fluid outer core. The aim of any study in data assimilation applied to the Earth's core is to produce a time-dependent model consistent with these observations [1]. Snapshots of these ``tuned" models provide a window through which the inner workings of the Earth's core, usually hidden from view, can be probed. We apply a variational data assimilation framework to an inertia-free magnetohydrodynamic system (MHD) [2]. Such a model is close to magnetostrophic balance [3], to which we have added viscosity to the dominant forces of Coriolis, pressure, Lorentz and buoyancy, believed to be a good approximation of the Earth's dynamo in the convective time scale. We chose to study the MHD system driven by a static temperature anomaly to mimic the actual inner working of Earth's dynamo system, avoiding at this stage the further complication of solving for the time dependent temperature field. At the heart of the models is a time-dependent magnetic field to which the core-flow is enslaved. In previous work we laid the foundation of the adjoint methodology, applied to a subset of the full equations [4]. As an intermediate step towards our ultimate vision of applying the techniques to a fully dynamic mode of the Earth's core tuned to geomagnetic observations, we present the intermediate step of applying the adjoint technique to the inertia-free Navier-Stokes equation in continuous form. We use synthetic observations derived from evolving a geophysically-reasonable magnetic field profile as the initial condition of our MHD system. Based on our study, we also propose several different strategies for accurately

  8. Analysis of Correlated Coupling of Monte Carlo Forward and Adjoint Histories

    SciTech Connect

    Ueki, Taro; Hoogenboom, J.E.; Kloosterman, J. L.

    2001-02-15

    In Monte Carlo correlated coupling, forward and adjoint particle histories are initiated in exactly opposite directions at an arbitrarily placed surface between a physical source and a physical detector. It is shown that this coupling calculation can become more efficient than standard forward calculations. In many cases, the basic form of correlated coupling is less efficient than standard forward calculations. This inherent inefficiency can be overcome by applying a black absorber perturbation to either the forward or the adjoint problem and by processing the product of batch averages as one statistical entity. The usage of the black absorber is based on the invariance of the response flow integral with a material perturbation in either the physical detector side volume in the forward problem or the physical source side volume in the adjoint problem. The batch-average product processing makes use of a quadratic increase of the nonzero coupled-score probability. All the developments have been done in such a way that improved efficiency schemes available in widely distributed Monte Carlo codes can be applied to both the forward and adjoint simulations. Also, the physical meaning of the black absorber perturbation is interpreted based on surface crossing and is numerically validated. In addition, the immediate reflection at the intermediate surface with a controlled direction change is investigated within the invariance framework. This approach can be advantageous for a void streaming problem.

  9. Using adjoint-based optimization to study wing flexibility in flapping flight

    NASA Astrophysics Data System (ADS)

    Wei, Mingjun; Xu, Min; Dong, Haibo

    2014-11-01

    In the study of flapping-wing flight of birds and insects, it is important to understand the impact of wing flexibility/deformation on aerodynamic performance. However, the large control space from the complexity of wing deformation and kinematics makes usual parametric study very difficult or sometimes impossible. Since the adjoint-based approach for sensitivity study and optimization strategy is a process with its cost independent of the number of input parameters, it becomes an attractive approach in our study. Traditionally, adjoint equation and sensitivity are derived in a fluid domain with fixed solid boundaries. Moving boundary is only allowed when its motion is not part of control effort. Otherwise, the derivation becomes either problematic or too complex to be feasible. Using non-cylindrical calculus to deal with boundary deformation solves this problem in a very simple and still mathematically rigorous manner. Thus, it allows to apply adjoint-based optimization in the study of flapping wing flexibility. We applied the ``improved'' adjoint-based method to study the flexibility of both two-dimensional and three-dimensional flapping wings, where the flapping trajectory and deformation are described by either model functions or real data from the flight of dragonflies. Supported by AFOSR.

  10. Sensitivity analysis of a model of CO2 exchange in tundra ecosystems by the adjoint method

    NASA Technical Reports Server (NTRS)

    Waelbroek, C.; Louis, J.-F.

    1995-01-01

    A model of net primary production (NPP), decomposition, and nitrogen cycling in tundra ecosystems has been developed. The adjoint technique is used to study the sensitivity of the computed annual net CO2 flux to perturbations in initial conditions, climatic inputs, and the model's main parameters describing current seasonal CO2 exchange in wet sedge tundra at Barrow, Alaska. The results show that the net CO2 flux is most sensitive to parameters characterizing litter chemical composition and more sensitive to decomposition parameters than to NPP parameters. This underlines the fact that in nutrient-limited ecosystems, decomposition drives net CO2 exchange by controlling mineralization of the main nutrients. The results also indicate that the short-term (1 year) response of wet sedge tundra to CO2-induced warming is a significant increase in CO2 emission, creating a positive feedback to atmospheric CO2 accumulation. However, a cloudiness increase during the same year can severely alter this response and lead to either a slight decrease or a strong increase in emitted CO2, depending on its exact timing. These results demonstrate that the adjoint method is well suited to the study of systems encountering regime changes, as a single run of the adjoint model provides sensitivities of the net CO2 flux to perturbations in all parameters and variables at any time of the year. Moreover, it is shown that large errors due to the presence of thresholds can be avoided by first delimiting the range of applicability of the adjoint results.
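
    The key computational property, that one backward (adjoint) sweep yields sensitivities with respect to all parameters and to the initial state at once, can be demonstrated on any small time-stepping model. The sketch below differentiates a toy two-pool carbon model (entirely made up, not the tundra model) through the discrete adjoint of an explicit-Euler recursion and checks the result against finite differences.

        import numpy as np

        # Toy two-pool carbon model (illustrative only):
        #   dc1/dt = npp - k1*c1        (fast pool)
        #   dc2/dt = k1*c1 - k2*c2      (slow pool)
        # Output J = cumulative net CO2 flux to the atmosphere over one year,
        #   J = sum_k dt * (k2*c2_k - npp)
        p = np.array([1.0, 0.5, 0.1])          # parameters: npp, k1, k2
        c0 = np.array([2.0, 10.0])             # initial pool sizes
        dt, n_steps = 1.0 / 365.0, 365

        def f(c, p):
            npp, k1, k2 = p
            return np.array([npp - k1 * c[0], k1 * c[0] - k2 * c[1]])

        def forward(p):
            c, traj, J = c0.copy(), [c0.copy()], 0.0
            for _ in range(n_steps):
                J += dt * (p[2] * c[1] - p[0])
                c = c + dt * f(c, p)
                traj.append(c.copy())
            return J, traj

        J, traj = forward(p)

        # Discrete adjoint sweep: ONE backward pass yields dJ/dp for all parameters.
        npp, k1, k2 = p
        dfdc = np.array([[-k1, 0.0], [k1, -k2]])          # Jacobian of f w.r.t. the state
        lam = np.zeros(2)                                  # adjoint state, lambda_N = 0
        grad = np.zeros(3)
        for k in reversed(range(n_steps)):
            c = traj[k]
            dfdp = np.array([[1.0, -c[0], 0.0], [0.0, c[0], -c[1]]])
            grad += dt * np.array([-1.0, 0.0, c[1]]) + dt * dfdp.T @ lam
            lam = dt * np.array([0.0, k2]) + (np.eye(2) + dt * dfdc).T @ lam

        print("dJ/dc0 (sensitivity to initial pools):", lam)
        eps = 1e-6
        for i in range(3):
            pp, pm = p.copy(), p.copy()
            pp[i] += eps; pm[i] -= eps
            fd = (forward(pp)[0] - forward(pm)[0]) / (2 * eps)
            print(f"dJ/dp[{i}]: adjoint {grad[i]: .6f}  finite difference {fd: .6f}")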

  11. Sensitivity analysis of a model of CO2 exchange in tundra ecosystems by the adjoint method

    SciTech Connect

    Waelbroek, C.; Louis, J.F.

    1995-02-01

    A model of net primary production (NPP), decomposition, and nitrogen cycling in tundra ecosystems has been developed. The adjoint technique is used to study the sensitivity of the computed annual net CO2 flux to perturbations in initial conditions, climatic inputs, and the model's main parameters describing current seasonal CO2 exchange in wet sedge tundra at Barrow, Alaska. The results show that net CO2 flux is most sensitive to parameters characterizing litter chemical composition and more sensitive to decomposition parameters than to NPP parameters. This underlines the fact that in nutrient-limited ecosystems, decomposition drives net CO2 exchange by controlling the mineralization of the main nutrients. The results also indicate that the short-term (1 year) response of wet sedge tundra to CO2-induced warming is a significant increase in CO2 emission, creating a positive feedback to atmospheric CO2 accumulation. However, a cloudiness increase during the same year can severely alter this response and lead to either a slight decrease or a strong increase in emitted CO2, depending on its exact timing. These results demonstrate that the adjoint method is well suited to studying systems encountering regime changes, as a single run of the adjoint model provides sensitivities of the net CO2 flux to perturbations in all parameters and variables at any time of the year. Moreover, it is shown that large errors due to the presence of thresholds can be avoided by first delimiting the range of applicability of the adjoint results.

  12. The MASH 1.0 code system: Utilization of MORSE in the adjoint mode

    SciTech Connect

    Johnson, J.O.; Santoro, R.T.

    1993-06-01

    The Monte Carlo Adjoint Shielding Code System -- MASH 1.0, principally developed at Oak Ridge National Laboratory (ORNL), represents an advanced method of calculating neutron and gamma-ray environments and radiation protection factors for complex shielding configurations by coupling a forward discrete ordinates radiation environment (i.e. air-over-ground) transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. The primary application to date has been to determine the radiation shielding characteristics of armored vehicles exposed to prompt radiation from a nuclear weapon detonation. Other potential applications include analyses of the mission equipment associated with space exploration, the civilian airline industry, and other problems associated with an external neutron and gamma-ray radiation environment. This paper will provide an overview of the MASH 1.0 code system, including the verification, validation, and application to "benchmark" experimental data. Attention will be given to the adjoint Monte Carlo calculation, the use of "in-group" biasing to control the weights of the adjoint particles, and the coupling of a new graphics package for the diagnosis of combinatorial geometry descriptions and visualization of radiation transport results.

  13. A block iterative finite element algorithm for numerical solution of the steady-state, compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Cooke, C. H.

    1976-01-01

    An iterative method for numerically solving the time-independent Navier-Stokes equations for viscous compressible flows is presented. The method is based upon partial application of the Gauss-Seidel principle in block form to the systems of nonlinear algebraic equations which arise in the construction of finite element (Galerkin) models approximating solutions of fluid dynamic problems. The C⁰-cubic element on triangles is employed for function approximation. Computational results for a free shear flow at Re = 1,000 indicate a significant economy in iterative convergence rate over finite element and finite difference models which employ the customary time-dependent equations and asymptotic time marching procedure to reach the steady solution. Numerical results are in excellent agreement with those obtained for the same test problem employing time-marching finite element and finite difference solution techniques.
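
    A minimal sketch of the block Gauss-Seidel idea is shown below on a linear system (the paper applies it to nonlinear Galerkin equations; the matrix, block partition, and tolerances here are illustrative assumptions):

```python
import numpy as np

def block_gauss_seidel(A, b, blocks, tol=1e-10, maxit=500):
    """Sweep over index blocks, solving each block exactly while the
    remaining unknowns are frozen at their most recent values."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(maxit):
        for idx in blocks:
            # block equation: A[idx, idx] x[idx] = b[idx] - sum_{j not in idx} A[idx, j] x[j]
            rhs = b[idx] - A[idx] @ x + A[np.ix_(idx, idx)] @ x[idx]
            x[idx] = np.linalg.solve(A[np.ix_(idx, idx)], rhs)
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            break
    return x

# small diagonally dominant test system split into two blocks
rng = np.random.default_rng(0)
n = 6
A = rng.random((n, n)) + n * np.eye(n)
b = rng.random(n)
blocks = [np.arange(0, 3), np.arange(3, 6)]
x = block_gauss_seidel(A, b, blocks)
print(np.allclose(A @ x, b))
```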

  14. Hybrid algorithm for common solution of monotone inclusion problem and fixed point problem and applications to variational inequalities.

    PubMed

    Zhang, Jingling; Jiang, Nan

    2016-01-01

    The aim of this paper is to investigate a hybrid algorithm for finding a common zero point of the sum of two monotone operators that is also a fixed point of a family of countable quasi-nonexpansive mappings. We point out two incorrect proofs in the paper (Hecai in Fixed Point Theory Appl 2013:11, 2013). Further, we modify and generalize the results of Hecai's paper, in which only a quasi-nonexpansive mapping was considered. In addition, two examples of families of countable quasi-nonexpansive mappings with uniform closeness are provided to demonstrate our results. Finally, the results are applied to variational inequalities.

  15. Modeling Design Iteration in Product Design and Development and Its Solution by a Novel Artificial Bee Colony Algorithm

    PubMed Central

    2014-01-01

    Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally increases product cost and delays development time, so identifying and modeling the couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and the tearing approach as well as an inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is introduced to find optimal decoupling schemes. In this paper, firstly, the tearing approach and the inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two technologies is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify its reasonability and effectiveness. PMID:25431584
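
    For reference, a generic continuous-variable artificial bee colony skeleton is sketched below (the paper applies ABC to a discrete decoupling problem; the objective, bounds, colony size, and limit parameter here are illustrative assumptions):

```python
import numpy as np

def abc_minimize(f, bounds, n_food=20, limit=30, iters=200, seed=0):
    """Generic artificial bee colony: employed, onlooker and scout phases."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    X = rng.uniform(lo, hi, (n_food, dim))          # food sources (candidate solutions)
    fX = np.array([f(x) for x in X])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbor(i):
        k = rng.integers(n_food - 1)                # random partner different from i
        k = k + 1 if k >= i else k
        j = rng.integers(dim)
        cand = X[i].copy()
        cand[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
        cand = np.clip(cand, lo, hi)
        fc = f(cand)
        if fc < fX[i]:
            X[i], fX[i], trials[i] = cand, fc, 0    # greedy replacement
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                     # employed bee phase
            try_neighbor(i)
        fit = 1.0 / (1.0 + fX - fX.min())           # fitness for roulette-wheel selection
        for _ in range(n_food):                     # onlooker bee phase
            try_neighbor(int(rng.choice(n_food, p=fit / fit.sum())))
        worn = trials > limit                       # scout bee phase: abandon stale sources
        X[worn] = rng.uniform(lo, hi, (int(worn.sum()), dim))
        fX[worn] = [f(x) for x in X[worn]]
        trials[worn] = 0

    best = int(np.argmin(fX))
    return X[best], fX[best]

best_x, best_f = abc_minimize(lambda x: np.sum((x - 0.5) ** 2), [(-2, 2)] * 4)
print(best_x, best_f)
```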

  16. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  17. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop
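
    To see why the reverse mode pays off when there are many design variables, a hand-written adjoint (reverse) sweep of a toy scalar objective is compared below with one-sided finite differences; the function is invented for illustration and has nothing to do with CFL3D or ADJIFOR.

```python
import numpy as np

def obj(x):
    """Toy scalar objective of many design variables (purely illustrative)."""
    y = np.tanh(x)
    return y.sum() / (1.0 + np.dot(x, x))

def grad_reverse(x):
    """Hand-written adjoint (reverse) sweep: one pass yields all n derivatives."""
    y = np.tanh(x)
    s, q = y.sum(), 1.0 + np.dot(x, x)
    # reverse sweep of: f = s / q,  s = sum(y),  y = tanh(x),  q = 1 + x.x
    s_bar = 1.0 / q
    q_bar = -s / q ** 2
    y_bar = s_bar * np.ones_like(x)
    return y_bar * (1.0 - y ** 2) + 2.0 * x * q_bar

def grad_forward_fd(x, h=1e-7):
    """Forward differencing: one extra function evaluation per design variable."""
    g = np.empty_like(x)
    f0 = obj(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (obj(x + e) - f0) / h
    return g

x = np.linspace(-1.0, 1.0, 8)
print(np.max(np.abs(grad_reverse(x) - grad_forward_fd(x))))   # agreement to ~1e-7
```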

  18. A two-dimensional, finite-element, flux-corrected transport algorithm for the solution of gas discharge problems

    NASA Astrophysics Data System (ADS)

    Georghiou, G. E.; Morrow, R.; Metaxas, A. C.

    2000-10-01

    An improved finite-element flux-corrected transport (FE-FCT) scheme, which was demonstrated in one dimension by the authors, is now extended to two dimensions and applied to gas discharge problems. The low-order positive ripple-free scheme, required to produce a FCT algorithm, is obtained by introducing diffusion to the high-order scheme (two-step Taylor-Galerkin). A self-adjusting variable diffusion coefficient is introduced, which reduces the high-order scheme to the equivalent of the upwind difference scheme, but without the complexities of an upwind scheme in a finite-element setting. Results are presented which show that the high-order scheme reduces to the equivalent of upwinding when the new diffusion coefficient is used. The proposed FCT scheme is shown to give similar results in comparison to a finite-difference time-split FCT code developed by Boris and Book. Finally, the new method is applied for the first time to a streamer propagation problem in its two-dimensional form.

  19. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  20. The continuous adjoint approach to the k-ω SST turbulence model with applications in shape optimization

    NASA Astrophysics Data System (ADS)

    Kavvadias, I. S.; Papoutsis-Kiachagias, E. M.; Dimitrakopoulos, G.; Giannakoglou, K. C.

    2015-11-01

    In this article, the gradient of aerodynamic objective functions with respect to design variables, in problems governed by the incompressible Navier-Stokes equations coupled with the k-ω SST turbulence model, is computed using the continuous adjoint method, for the first time. Shape optimization problems for minimizing drag, in external aerodynamics (flows around isolated airfoils), or viscous losses in internal aerodynamics (duct flows) are considered. Sensitivity derivatives computed with the proposed adjoint method are compared to those computed with finite differences or a continuous adjoint variant based on the frequently used assumption of frozen turbulence; the latter proves the need for differentiating the turbulence model. Geometries produced by optimization runs performed with sensitivities computed by the proposed method and the 'frozen turbulence' assumption are also compared to quantify the gain from formulating and solving the adjoint to the turbulence model equations.

  1. In Silico Calculation of Infinite Dilution Activity Coefficients of Molecular Solutes in Ionic Liquids: Critical Review of Current Methods and New Models Based on Three Machine Learning Algorithms.

    PubMed

    Paduszyński, Kamil

    2016-08-22

    The aim of the paper is to address all the disadvantages of currently available models for calculating infinite dilution activity coefficients (γ(∞)) of molecular solutes in ionic liquids (ILs)-a relevant property from the point of view of many applications of ILs, particularly in separations. Three new models are proposed, each of them based on a distinct machine learning algorithm: stepwise multiple linear regression (SWMLR), feed-forward artificial neural network (FFANN), and least-squares support vector machine (LSSVM). The models were established based on the most comprehensive γ(∞) data bank reported so far (>34 000 data points for 188 ILs and 128 solutes). Following the paper published previously [J. Chem. Inf. Model 2014, 54, 1311-1324], the ILs were treated in terms of group contributions, whereas the Abraham solvation parameters were used to quantify the impact of solute structure. Temperature is also included in the input data of the models so that they can be utilized to obtain temperature-dependent data and thus related thermodynamic functions. Both internal and external validation techniques were applied to assess the statistical significance and explanatory power of the final correlations. A comparative study of the overall performance of the investigated SWMLR/FFANN/LSSVM approaches is presented in terms of root-mean-square error and average absolute relative deviation between calculated and experimental γ(∞), evaluated for different families of ILs and solutes, as well as between calculated and experimental infinite dilution selectivity for the separation problems of benzene from n-hexane and thiophene from n-heptane. LSSVM is shown to be the method with the lowest values of both training and generalization errors. It is finally demonstrated that the established models exhibit an improved accuracy compared to the state-of-the-art model, namely, the temperature-dependent group contribution linear solvation energy relationship, published in 2011 [J. Chem
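
    A schematic comparison of three regressors of this kind is sketched below on synthetic descriptor data: scikit-learn's LinearRegression stands in for the stepwise MLR, MLPRegressor for the FFANN, and SVR for the LSSVM. The data, descriptors, and hyperparameters are invented for illustration and are not the γ(∞) data bank used in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# synthetic "descriptors" (e.g., group counts, solvation parameters, temperature) and target
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))
y = X @ rng.standard_normal(8) + 0.3 * np.sin(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
models = {
    "MLR (stand-in for SWMLR)": LinearRegression(),
    "FFANN": MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
    "SVR (stand-in for LSSVM)": SVR(kernel="rbf", C=10.0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name:28s} test RMSE = {rmse:.3f}")
```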

  2. In Silico Calculation of Infinite Dilution Activity Coefficients of Molecular Solutes in Ionic Liquids: Critical Review of Current Methods and New Models Based on Three Machine Learning Algorithms.

    PubMed

    Paduszyński, Kamil

    2016-08-22

    The aim of the paper is to address all the disadvantages of currently available models for calculating infinite dilution activity coefficients (γ(∞)) of molecular solutes in ionic liquids (ILs)-a relevant property from the point of view of many applications of ILs, particularly in separations. Three new models are proposed, each of them based on a distinct machine learning algorithm: stepwise multiple linear regression (SWMLR), feed-forward artificial neural network (FFANN), and least-squares support vector machine (LSSVM). The models were established based on the most comprehensive γ(∞) data bank reported so far (>34 000 data points for 188 ILs and 128 solutes). Following the paper published previously [J. Chem. Inf. Model 2014, 54, 1311-1324], the ILs were treated in terms of group contributions, whereas the Abraham solvation parameters were used to quantify the impact of solute structure. Temperature is also included in the input data of the models so that they can be utilized to obtain temperature-dependent data and thus related thermodynamic functions. Both internal and external validation techniques were applied to assess the statistical significance and explanatory power of the final correlations. A comparative study of the overall performance of the investigated SWMLR/FFANN/LSSVM approaches is presented in terms of root-mean-square error and average absolute relative deviation between calculated and experimental γ(∞), evaluated for different families of ILs and solutes, as well as between calculated and experimental infinite dilution selectivity for the separation problems of benzene from n-hexane and thiophene from n-heptane. LSSVM is shown to be the method with the lowest values of both training and generalization errors. It is finally demonstrated that the established models exhibit an improved accuracy compared to the state-of-the-art model, namely, the temperature-dependent group contribution linear solvation energy relationship, published in 2011 [J. Chem

  3. GEN-HELS: Improving the efficiency of the CRAFT acoustic holography algorithm via an alternative approach to formulation of the complete general solution to the Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Chapman, Alexander Lloyd

    Recently, a sound source identification technique called CRAFT was developed as an advance in the state of the art in inverse noise problems. It addressed some limitations associated with nearfield acoustic holography and a few of the issues with inverse boundary element method. This work centers on two critical issues associated with the CRAFT algorithm. Although CRAFT employs the complete general solution associated with the Helmholtz equation, the approach taken to derive those equations results in computational inefficiency when implemented numerically. In this work, a mathematical approach to derivation of the basis equations results in a doubling in efficiency. This formulation of CRAFT is termed general Helmholtz equation, least-squares method (GEN-HELS). Additionally, the numerous singular points present in the gradient of the basis functions are shown here to resolve to finite limits. As a realistic test case, a diesel engine surface pressure and velocity are reconstructed to show the increase in efficiency from CRAFT to GEN-HELS. Keywords: Inverse Numerical Acoustics, Acoustic Holography, Helmholtz Equation, HELS Method, CRAFT Algorithm.

  4. Application of Adjoint Methodology to Supersonic Aircraft Design Using Reversed Equivalent Areas

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Sriram K.

    2013-01-01

    This paper presents an approach to shaping an aircraft to meet equivalent-area-based objectives using the discrete adjoint approach. Equivalent areas can be obtained either using the reversed augmented Burgers equation or by direct conversion of off-body pressures into equivalent area. Formal coupling with CFD allows computation of sensitivities of equivalent-area objectives with respect to aircraft shape parameters. The exactness of the adjoint sensitivities is verified against derivatives obtained using the complex-step approach. This methodology has the benefit of using designer-friendly equivalent areas in the shape design of low-boom aircraft. Shape optimization results with equivalent-area cost functionals are discussed and further refined using ground-loudness-based objectives.
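
    The complex-step check mentioned above is easy to reproduce on a scalar toy function (the function and step size below are illustrative; in the paper the check is applied to the adjoint sensitivities of equivalent-area objectives):

```python
import numpy as np

def complex_step(f, x, h=1e-30):
    """Complex-step derivative df/dx ~= Im(f(x + i*h)) / h; no subtractive cancellation."""
    return np.imag(f(x + 1j * h)) / h

f  = lambda x: np.exp(x) * np.sin(3.0 * x)                     # smooth toy function
df = lambda x: np.exp(x) * (np.sin(3.0 * x) + 3.0 * np.cos(3.0 * x))
x0 = 0.7
print(complex_step(f, x0), df(x0))                             # agree to machine precision
```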

  5. Optimal Shape Design of Compact Heat Exchangers Based on Adjoint Analysis of Momentum and Heat Transfer

    NASA Astrophysics Data System (ADS)

    Morimoto, Kenichi; Suzuki, Yuji; Kasagi, Nobuhide

    An adjoint-based shape optimization method for heat exchangers, which takes into account the heat transfer performance together with the pressure loss penalty, is proposed, and its effectiveness is examined through a series of numerical simulations. An undulated heat transfer surface is optimized under an isothermally heated condition based on the variational method with the first derivative of the cost function, which is determined by an adjoint analysis of momentum and heat transfer. When applied to a modeled heat-exchanger passage with a pair of oblique wavy walls, the present optimization method refines the duct shape so as to enhance the heat transfer while suppressing flow separation. It is shown that the j/f factor is further increased by 4% from the best value of the initial obliquely wavy duct. The effects of the initial wave amplitude upon the shape evolution process are also investigated.

  6. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.

  7. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids by Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.

  8. Self-adjoint elliptic operators with boundary conditions on not closed hypersurfaces

    NASA Astrophysics Data System (ADS)

    Mantile, Andrea; Posilicano, Andrea; Sini, Mourad

    2016-07-01

    The theory of self-adjoint extensions of symmetric operators is used to construct self-adjoint realizations of a second-order elliptic differential operator on Rn with linear boundary conditions on (a relatively open part of) a compact hypersurface. Our approach allows us to obtain Kreĭn-like resolvent formulae where the reference operator coincides with the "free" operator with domain H2(Rn); this provides a useful tool for the scattering problem from a hypersurface. Concrete examples of this construction are developed in connection with the standard boundary conditions, Dirichlet, Neumann, Robin, δ and δ′-type, assigned either on an (n - 1)-dimensional compact boundary Γ = ∂Ω or on a relatively open part Σ ⊂ Γ. Schatten-von Neumann estimates for the difference of the powers of resolvents of the free and the perturbed operators are also proven; these give existence and completeness of the wave operators of the associated scattering systems.

  9. Some results on the dynamics and transition probabilities for non self-adjoint hamiltonians

    SciTech Connect

    Bagarello, F.

    2015-05-15

    We discuss systematically several possible inequivalent ways to describe the dynamics and the transition probabilities of a quantum system when its hamiltonian is not self-adjoint. In order to simplify the treatment, we mainly restrict our analysis to finite dimensional Hilbert spaces. In particular, we propose some experiments which could discriminate between the various possibilities considered in the paper. An example taken from the literature is discussed in detail.

  10. Inequivalence of unitarity and self-adjointness: An example in quantum cosmology

    SciTech Connect

    Lemos, N.A.

    1990-02-15

    An example of a quantum cosmological model is presented whose dynamics is unitary although the time-dependent Hamiltonian operator fails to be self-adjoint (because it is not defined) for a particular value of t. The model is shown to be singular, and this disproves a conjecture put forward by Gotay and Demaret to the effect that unitary quantum dynamics in a "slow-time" gauge is always nonsingular.

  11. Between algorithm and model: different Molecular Surface definitions for the Poisson-Boltzmann based electrostatic characterization of biomolecules in solution.

    PubMed

    Decherchi, Sergio; Colmenares, José; Catalano, Chiara Eva; Spagnuolo, Michela; Alexov, Emil; Rocchia, Walter

    2013-01-01

    The definition of a molecular surface which is physically sound and computationally efficient is a very interesting and long-standing problem in the implicit solvent continuum modeling of biomolecular systems as well as in the molecular graphics field. In this work, two molecular surfaces are evaluated with respect to their suitability for electrostatic computation as alternatives to the widely used Connolly-Richards surface: the blobby surface, an implicit Gaussian atom-centered surface, and the skin surface. As figures of merit, we considered surface differentiability and surface area continuity with respect to atom positions, and the agreement with explicit solvent simulations. Geometric analysis seems to favor the skin over the blobby surface, and points to an unexpected relationship between the non-connectedness of the surface, caused by interstices in the solute volume, and the dependence of the surface area on atomic centers. In order to assess the ability to reproduce explicit solvent results, specific software tools have been developed to enable the use of the skin surface in Poisson-Boltzmann calculations with the DelPhi solver. Results indicate that the skin and Connolly surfaces have comparable performance from this last point of view.

  12. Adjoint-based deviational Monte Carlo methods for phonon transport calculations

    NASA Astrophysics Data System (ADS)

    Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.

    2015-06-01

    In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.

  13. Self-adjointness of the Fourier expansion of quantized interaction field Lagrangians

    PubMed Central

    Paneitz, S. M.; Segal, I. E.

    1983-01-01

    Regularity properties significantly stronger than were previously known are developed for four-dimensional non-linear conformally invariant quantized fields. The Fourier coefficients of the interaction Lagrangian in the interaction representation—i.e., evaluated after substitution of the associated quantized free field—are densely defined operators on the associated free field Hilbert space K. These Fourier coefficients are taken with respect to a natural basis in the universal cosmos M̃, to which such fields canonically and maximally extend from Minkowski space-time M0, which is covariantly a submanifold of M̃. However, conformally invariant free fields over M0 and M̃ are canonically identifiable. The kth Fourier coefficient of the interaction Lagrangian has a domain that includes all vectors in K to which arbitrary powers of the free hamiltonian in M̃ are applicable. Its adjoint in the rigorous Hilbert space sense is a_{-k} in the case of a hermitian Lagrangian. In particular (k = 0), the leading term in the perturbative expansion of the S-matrix for a conformally invariant quantized field in M0 is a self-adjoint operator. Thus, e.g., if ϕ(x) denotes the free massless neutral scalar field in M0, then ∫_{M0} :ϕ(x)⁴: d⁴x is a self-adjoint operator. No coupling constant renormalization is involved here. PMID:16593346

  14. Neural Network Training by Integration of Adjoint Systems of Equations Forward in Time

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad (Inventor); Barhen, Jacob (Inventor)

    1999-01-01

    A method and apparatus for supervised neural learning of time-dependent trajectories exploits the concepts of adjoint operators to enable computation of the gradient of an objective functional with respect to the various parameters of the network architecture in a highly efficient manner. Specifically, it combines the advantage of dramatic reductions in computational complexity inherent in adjoint methods with the ability to solve two adjoint systems of equations together forward in time. Not only is a large amount of computation and storage saved, but the handling of real-time applications also becomes possible. The invention has been applied to two examples of representative complexity which have recently been analyzed in the open literature, demonstrating that a circular trajectory can be learned in approximately 200 iterations compared to the 12000 reported in the literature. A figure-eight trajectory was achieved in under 500 iterations compared to the 20000 previously required. The trajectories computed using our new method are much closer to the target trajectories than was reported in previous studies.

  15. Neural network training by integration of adjoint systems of equations forward in time

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad (Inventor); Barhen, Jacob (Inventor)

    1992-01-01

    A method and apparatus for supervised neural learning of time-dependent trajectories exploits the concepts of adjoint operators to enable computation of the gradient of an objective functional with respect to the various parameters of the network architecture in a highly efficient manner. Specifically, it combines the advantage of dramatic reductions in computational complexity inherent in adjoint methods with the ability to solve two adjoint systems of equations together forward in time. Not only is a large amount of computation and storage saved, but the handling of real-time applications also becomes possible. The invention has been applied to two examples of representative complexity which have recently been analyzed in the open literature, demonstrating that a circular trajectory can be learned in approximately 200 iterations compared to the 12000 reported in the literature. A figure-eight trajectory was achieved in under 500 iterations compared to the 20000 previously required. The trajectories computed using our new method are much closer to the target trajectories than was reported in previous studies.

  16. Iterative solution of multiple radiation and scattering problems in structural acoustics using the BL-QMR algorithm

    SciTech Connect

    Malhotra, M.

    1996-12-31

    Finite-element discretizations of time-harmonic acoustic wave problems in exterior domains result in large sparse systems of linear equations with complex symmetric coefficient matrices. In many situations, these matrix problems need to be solved repeatedly for different right-hand sides, but with the same coefficient matrix. For instance, multiple right-hand sides arise in radiation problems due to multiple load cases, and also in scattering problems when multiple angles of incidence of an incoming plane wave need to be considered. In this talk, we discuss the iterative solution of multiple linear systems arising in radiation and scattering problems in structural acoustics by means of a complex symmetric variant of the BL-QMR method. First, we summarize the governing partial differential equations for time-harmonic structural acoustics, the finite-element discretization of these equations, and the resulting complex symmetric matrix problem. Next, we sketch the special version of BL-QMR method that exploits complex symmetry, and we describe the preconditioners we have used in conjunction with BL-QMR. Finally, we report some typical results of our extensive numerical tests to illustrate the typical convergence behavior of BL-QMR method for multiple radiation and scattering problems in structural acoustics, to identify appropriate preconditioners for these problems, and to demonstrate the importance of deflation in block Krylov-subspace methods. Our numerical results show that the multiple systems arising in structural acoustics can be solved very efficiently with the preconditioned BL-QMR method. In fact, for multiple systems with up to 40 and more different right-hand sides we get consistent and significant speed-ups over solving the systems individually.

  17. Investigating Sensitivity to Saharan Dust in Tropical Cyclone Formation Using Nasa's Adjoint Model

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel

    2015-01-01

    As tropical cyclones develop from easterly waves coming off the coast of Africa, they interact with dust from the Sahara desert. There is a long-standing debate over whether this dust inhibits or advances the developing storm and how much influence it has. Dust can surround the storm and absorb incoming solar radiation, cooling the air below. As a result, an energy source for the system is potentially diminished, inhibiting growth of the storm. Alternatively, dust may interact with clouds through micro-physical processes, for example by causing more moisture to condense, potentially increasing the strength. As a result of climate change, concentrations and amounts of dust in the atmosphere will likely change, so it is important to properly understand its effect on tropical storm formation. The adjoint of an atmospheric general circulation model provides a very powerful tool for investigating sensitivity to initial conditions. The National Aeronautics and Space Administration (NASA) has recently developed an adjoint version of the Goddard Earth Observing System version 5 (GEOS-5) dynamical core, convection scheme, cloud model and radiation schemes. This is extended so that the interaction between dust and radiation is also accounted for in the adjoint model. This provides a framework for examining the sensitivity to dust in the initial conditions. Specifically, the set-up allows for an investigation into the extent to which dust affects cyclone strength through absorption of radiation. In this work we investigate the validity of using an adjoint model for examining sensitivity to dust in hurricane formation. We present sensitivity results for a number of systems that developed during the Atlantic hurricane season of 2006. During this period there was a significant outbreak of Saharan dust, and it has been argued that this outbreak was responsible for the relatively calm season. This period was also covered by an extensive observation campaign. It is shown that the

  18. Investigating sensitivity to Saharan dust in tropical cyclone formation using NASA's adjoint model

    NASA Astrophysics Data System (ADS)

    Holdaway, Daniel

    2015-04-01

    As tropical cyclones develop from easterly waves coming off the coast of Africa, they interact with dust from the Sahara desert. There is a long-standing debate over whether this dust inhibits or advances the developing storm and how much influence it has. Dust can surround the storm and absorb incoming solar radiation, cooling the air below. As a result, an energy source for the system is potentially diminished, inhibiting growth of the storm. Alternatively, dust may interact with clouds through micro-physical processes, for example by causing more moisture to condense, potentially increasing the strength. As a result of climate change, concentrations and amounts of dust in the atmosphere will likely change, so it is important to properly understand its effect on tropical storm formation. The adjoint of an atmospheric general circulation model provides a very powerful tool for investigating sensitivity to initial conditions. The National Aeronautics and Space Administration (NASA) has recently developed an adjoint version of the Goddard Earth Observing System version 5 (GEOS-5) dynamical core, convection scheme, cloud model and radiation schemes. This is extended so that the interaction between dust and radiation is also accounted for in the adjoint model. This provides a framework for examining the sensitivity to dust in the initial conditions. Specifically, the set-up allows for an investigation into the extent to which dust affects cyclone strength through absorption of radiation. In this work we investigate the validity of using an adjoint model for examining sensitivity to dust in hurricane formation. We present sensitivity results for a number of systems that developed during the Atlantic hurricane season of 2006. During this period there was a significant outbreak of Saharan dust, and it has been argued that this outbreak was responsible for the relatively calm season. This period was also covered by an extensive observation campaign. It is shown that the

  19. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    NASA Astrophysics Data System (ADS)

    Goldberg, Daniel N.; Krishna Narayanan, Sri Hari; Hascoet, Laurent; Utke, Jean

    2016-05-01

    We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. The methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.
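
    A minimal numpy sketch of the reverse-accumulation recipe for a fixed point, in the spirit of Christianson (1994), is given below: solve x = f(x, p) by Picard iteration, then iterate the adjoint fixed point w ← (∂f/∂x)ᵀ w + (∂J/∂x)ᵀ and form dJ/dp = (∂f/∂p)ᵀ w. The toy map, objective, and iteration counts are illustrative assumptions, not the land ice model.

```python
import numpy as np

def f(x, p):
    """Contractive fixed-point map x = f(x, p); derivatives are bounded by 0.5."""
    return np.array([0.5 * np.cos(x[1]) + p[0],
                     0.5 * np.sin(x[0]) + p[1]])

def J(x):
    """Scalar objective of the converged state."""
    return x[0] ** 2 + x[1]

def solve_fixed_point(p, iters=200):
    x = np.zeros(2)
    for _ in range(iters):                      # Picard iteration of the forward problem
        x = f(x, p)
    return x

def adjoint_gradient(p, iters=200):
    x = solve_fixed_point(p)
    dfdx = np.array([[0.0, -0.5 * np.sin(x[1])],      # Jacobian of f w.r.t. x at the solution
                     [0.5 * np.cos(x[0]), 0.0]])
    dfdp = np.eye(2)                                  # Jacobian of f w.r.t. p
    dJdx = np.array([2.0 * x[0], 1.0])
    w = np.zeros(2)
    for _ in range(iters):                      # adjoint fixed point: w = dfdx^T w + dJdx
        w = dfdx.T @ w + dJdx
    return dfdp.T @ w                           # dJ/dp

p0 = np.array([0.3, -0.2])
grad = adjoint_gradient(p0)
eps = 1e-6                                      # finite-difference cross-check
fd = np.array([(J(solve_fixed_point(p0 + eps * e)) - J(solve_fixed_point(p0))) / eps
               for e in np.eye(2)])
print(grad, fd)
```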

  20. The topology and geometry of self-adjoint and elliptic boundary conditions for Dirac and Laplace operators

    NASA Astrophysics Data System (ADS)

    Asorey, M.; Ibort, A.; Marmo, G.

    2015-06-01

    The theory of self-adjoint extensions of first- and second-order elliptic differential operators on manifolds with boundary is studied via its most representative instances: Dirac and Laplace operators. The theory is developed by exploiting the geometrical structures attached to them and, by using an adapted Cayley transform in each case, the space {M} of such extensions is shown to have a canonical group composition law structure. The obtained results are compared with von Neumann's theorem characterizing the self-adjoint extensions of densely defined symmetric operators on Hilbert spaces. The 1D case is thoroughly investigated. The geometry of the submanifold of elliptic self-adjoint extensions {M}ellip is studied and it is shown that it is a Lagrangian submanifold of the universal Grassmannian Gr. The topology of {M}ellip is also explored and it is shown that there is a canonical cycle whose dual is the Maslov class of the manifold. This cycle, called the Cayley surface, plays a relevant role in the study of the phenomenon of topology change. Self-adjoint extensions of Laplace operators are discussed in the path integral formalism, identifying a class of them for which both treatments lead to the same results. A theory of dissipative quantum systems is proposed based on this theory, and a unitarization theorem for this class of dissipative systems is proved. The theory of self-adjoint extensions with symmetry of Dirac operators is also discussed, and a reduction theorem for the self-adjoint elliptic Grassmannian is obtained. Finally, an interpretation of spontaneous symmetry breaking is offered from the point of view of the theory of self-adjoint extensions.

  1. Development and application of the WRFPLUS-Chem online chemistry adjoint and WRFDA-Chem assimilation system

    NASA Astrophysics Data System (ADS)

    Guerrette, J. J.; Henze, D. K.

    2015-02-01

    Here we present the online meteorology and chemistry adjoint and tangent linear model, WRFPLUS-Chem, which incorporates modules to treat boundary layer mixing, emission, aging, dry deposition, and advection of black carbon aerosol. We also develop land surface and surface layer adjoints to account for coupling between radiation and vertical mixing. Model performance is verified against finite difference derivative approximations. A second order checkpointing scheme is created to reduce computational costs and enable simulations longer than six hours. The adjoint is coupled to WRFDA-Chem, in order to conduct a sensitivity study of anthropogenic and biomass burning sources throughout California during the 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) field campaign. A cost function weighting scheme was devised to increase adjoint sensitivity robustness in future inverse modeling studies. Results of the sensitivity study show that, for this domain and time period, anthropogenic emissions are over predicted, while wildfire emissions are under predicted. We consider the diurnal variation in emission sensitivities to determine at what time sources should be scaled up or down. Also, adjoint sensitivities for two choices of land surface model indicate that emission inversion results would be sensitive to forward model configuration. The tools described here are the first step in conducting four-dimensional variational data assimilation in a coupled meteorology-chemistry model, which will potentially provide new constraints on aerosol precursor emissions and their distributions. Such analyses will be invaluable to assessments of particulate matter health and climate impacts.

  2. On-line monitoring the extract process of Fu-fang Shuanghua oral solution using near infrared spectroscopy and different PLS algorithms

    NASA Astrophysics Data System (ADS)

    Kang, Qian; Ru, Qingguo; Liu, Yan; Xu, Lingyan; Liu, Jia; Wang, Yifei; Zhang, Yewen; Li, Hui; Zhang, Qing; Wu, Qing

    2016-01-01

    An on-line near infrared (NIR) spectroscopy monitoring method with an appropriate multivariate calibration method was developed for the extraction process of Fu-fang Shuanghua oral solution (FSOS). On-line NIR spectra were collected through two fiber optic probes, which were designed to transmit NIR radiation by a 2 mm flange. Partial least squares (PLS), interval PLS (iPLS) and synergy interval PLS (siPLS) algorithms were compared for building the calibration regression models. During the extraction process, NIR spectroscopy was employed to determine chlorogenic acid (CA) content, total phenolic acid content (TPC), total flavonoid content (TFC) and soluble solid content (SSC). High performance liquid chromatography (HPLC), the ultraviolet spectrophotometric method (UV) and loss-on-drying methods were employed as reference methods. Experimental results showed that the performance of the siPLS model is the best compared with PLS and iPLS. The calibration models for CA, TPC, TFC and SSC had high determination coefficients (R²) (0.9948, 0.9992, 0.9950 and 0.9832) and low root mean square errors of cross validation (RMSECV) (0.0113, 0.0341, 0.1787 and 1.2158), which indicate a good correlation between reference values and NIR predicted values. The overall results show that the on-line detection method could be feasible in real applications and would be of great value for monitoring the mixed decoction process of FSOS and other Chinese patent medicines.
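
    A schematic of a plain PLS calibration of this kind is sketched below with scikit-learn (iPLS/siPLS wavelength-interval selection is not included; the synthetic spectra, reference values, and component count are illustrative assumptions, not the FSOS data):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for an NIR calibration set: spectra X (samples x wavelengths) and a
# reference assay y (e.g., chlorogenic acid content by HPLC).
rng = np.random.default_rng(0)
n_samples, n_wavelengths = 60, 200
conc = rng.uniform(0.1, 1.0, n_samples)                  # "true" analyte concentration
band = np.exp(-0.5 * ((np.arange(n_wavelengths) - 80) / 10.0) ** 2)
X = np.outer(conc, band) + 0.01 * rng.standard_normal((n_samples, n_wavelengths))
y = conc

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()       # cross-validated predictions
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
r2 = 1.0 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"RMSECV = {rmsecv:.4f}, R2 = {r2:.4f}")
```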

  3. Artificial neural network-genetic algorithm based optimization for the adsorption of methylene blue and brilliant green from aqueous solution by graphite oxide nanoparticle.

    PubMed

    Ghaedi, M; Zeinali, N; Ghaedi, A M; Teimuori, M; Tashkhourian, J

    2014-05-01

    In this study, graphite oxide (GO) nanoparticles were synthesized according to the Hummers method and subsequently used for the removal of methylene blue (MB) and brilliant green (BG). Detailed information about the structure and physicochemical properties of GO was obtained by different techniques such as XRD and FTIR analysis. The influence of solution pH, initial dye concentration, contact time and adsorbent dosage was examined in batch mode, and optimum conditions were set as pH=7.0, 2 mg of GO and 10 min contact time. Fitting equilibrium isotherm models to the adsorption capacities of GO shows that the Langmuir model gives the best representation of the experimental data, with maximum adsorption capacities of 476.19 and 416.67 for the MB and BG dyes in single solution. The analysis of adsorption rate at various stirring times shows that the adsorption of both dyes followed a pseudo-second-order kinetic model in combination with the interparticle diffusion model. Subsequently, the adsorption data were modeled with an artificial neural network to evaluate and obtain the real conditions for fast and efficient removal of the dyes. A three-layer artificial neural network (ANN) model is applicable for accurate prediction of dye removal percentage from aqueous solution by GO, based on 336 experimental data points. The network was trained using the experimental data obtained at optimum pH with different GO amounts (0.002-0.008 g) and 5-40 mg/L of both dyes over contact times of 0.5-30 min. The ANN model was able to predict the removal efficiency with the Levenberg-Marquardt algorithm (LMA), a linear transfer function (purelin) at the output layer and a tangent sigmoid transfer function (tansig) at the hidden layer with 10 and 11 neurons for the MB and BG dyes, respectively. The minimum mean squared error (MSE) of 0.0012 and coefficient of determination (R²) of 0.982 were found for prediction and modeling of MB removal, while the respective value for BG was the

  4. A user's manual for MASH 1. 0: A Monte Carlo Adjoint Shielding Code System

    SciTech Connect

    Johnson, J.O.

    1992-03-01

    The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.
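
    Conceptually, the folding step amounts to a weighted inner product of the coupling-surface fluence with the dose importance over energy groups and surface patches; the toy arrays below are placeholders for illustration, not MASH file formats.

```python
import numpy as np

# Folding sketch: the forward discrete-ordinates run supplies an energy- and surface-binned
# fluence, the adjoint Monte Carlo run supplies a dose importance on the same bins, and the
# detector response is their area-weighted product summed over all bins.
n_energy, n_patches = 20, 50
rng = np.random.default_rng(1)
fluence    = rng.random((n_energy, n_patches))     # particles/cm^2 per source particle (toy)
importance = rng.random((n_energy, n_patches))     # dose per unit fluence in each bin (toy)
patch_area = np.full(n_patches, 0.5)               # cm^2 (toy)

dose_response = np.einsum("ep,ep,p->", fluence, importance, patch_area)
print(dose_response)
```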

  5. Mapping Emissions that Contribute to Air Pollution Using Adjoint Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Bastien, L. A. J.; Mcdonald, B. C.; Brown, N. J.; Harley, R.

    2014-12-01

    The adjoint of the Community Multiscale Air Quality model (CMAQ) is used to map emissions that contribute to air pollution at receptors of interest. Adjoint tools provide an efficient way to calculate the sensitivity of a model response to a large number of model inputs, a task that would require thousands of simulations using a more traditional forward sensitivity approach. Initial applications of this technique, demonstrated here, are to benzene and directly-emitted diesel particulate matter, for which atmospheric reactions are neglected. Emissions of these pollutants are strongly influenced by light-duty gasoline vehicles and heavy-duty diesel trucks, respectively. We study air quality responses in three receptor areas where populations have been identified as especially susceptible to, and adversely affected by air pollution. Population-weighted air basin-wide responses for each pollutant are also evaluated for the entire San Francisco Bay area. High-resolution (1 km horizontal grid) emission inventories have been developed for on-road motor vehicle emission sources, based on observed traffic count data. Emission estimates represent diurnal, day of week, and seasonal variations of on-road vehicle activity, with separate descriptions for gasoline and diesel sources. Emissions that contribute to air pollution at each receptor have been mapped in space and time using the adjoint method. Effects on air quality of both relative (multiplicative) and absolute (additive) perturbations to underlying emission inventories are analyzed. The contributions of local versus upwind sources to air quality in each receptor area are quantified, and weekday/weekend and seasonal variations in the influence of emissions from upwind areas are investigated. The contribution of local sources to the total air pollution burden within the receptor areas increases from about 40% in the summer to about 50% in the winter due to increased atmospheric stagnation. The effectiveness of control
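
    As a schematic of how such an adjoint sensitivity map is used, the sketch below combines a per-cell sensitivity field with a relative or an absolute emission perturbation to get the change in a receptor response; the grids, units, and values are invented for illustration and are unrelated to the CMAQ adjoint output.

```python
import numpy as np

# One adjoint run gives d(receptor response)/d(emission) for every grid cell; the response
# change for any emission perturbation is then a single weighted sum over the domain.
nx, ny = 40, 30
rng = np.random.default_rng(2)
sens = rng.random((nx, ny)) * 1e-3        # sensitivity per cell (illustrative)
emis = rng.random((nx, ny)) * 10.0        # baseline emission inventory (illustrative units)

relative_cut = 0.2                        # multiplicative case: 20 % reduction everywhere
d_response_relative = np.sum(sens * (-relative_cut * emis))

absolute_add = np.zeros((nx, ny))         # additive case: one new source in a single cell
absolute_add[10, 15] = 5.0
d_response_absolute = np.sum(sens * absolute_add)
print(d_response_relative, d_response_absolute)
```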

  6. Analysis of Seasonal Chlorophyll-a Using An Adjoint Three-Dimensional Ocean Carbon Cycle Model

    NASA Astrophysics Data System (ADS)

    Tjiputra, J.; Winguth, A.; Polzin, D.

    2004-12-01

    The misfit between a numerical ocean model and observations can be reduced using data assimilation. This can be achieved by optimizing the model parameter values using an adjoint model. The adjoint model minimizes the model-data misfit by estimating the sensitivity or gradient of the cost function with respect to initial conditions, boundary conditions, or parameters. The adjoint technique was used to assimilate seasonal chlorophyll-a data from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) satellite into the marine biogeochemical model HAMOCC5.1. An Identical Twin Experiment (ITE) was conducted to test the robustness of the model and the non-linearity level of the forward model. The ITE successfully recovered most of the perturbed parameters to their initial values and identified the most sensitive ecosystem parameters, which contribute significantly to model-data bias. The regional assimilations of SeaWiFS chlorophyll-a data into the model were able to reduce the model-data misfit (i.e. the cost function) significantly. The cost function reduction mostly occurred in the high latitudes (e.g. the model-data misfit in the northern region during the summer season was reduced by 54%). On the other hand, the equatorial regions appear to be relatively stable, with no strong reduction in cost function. The optimized parameter set is used to forecast the carbon fluxes between marine ecosystem compartments (e.g. Phytoplankton, Zooplankton, Nutrients, Particulate Organic Carbon, and Dissolved Organic Carbon). The a posteriori model run using the regional best-fit parameterization yields approximately 36 PgC/yr of global net primary production in the euphotic zone.

  7. Discrete vacuum superselection rule in Wightman theory with essentially self-adjoint field operators

    SciTech Connect

    Voronin, A.V.

    1986-07-01

    The main results of earlier work by the author, Sushko, and Khoruzhii describing the algebraic structure of quantum-field systems with (discrete) vacuum superselection rules are generalized to the large class of Wightman theories with essentially self-adjoint field operators (a very strong restriction was imposed on the theory, namely, that the polynomial Op algebra of the Wightman fields ρ belongs to the class II, i.e., ρ_{s'} = ρ_{w'}). It is also shown that the field Op algebra of a Wightman theory with a discrete vacuum superselection rule possesses a class II extension.

  8. Intertwining operators for non-self-adjoint Hamiltonians and bicoherent states

    NASA Astrophysics Data System (ADS)

    Bagarello, F.

    2016-10-01

    This paper is devoted to the construction of what we will call exactly solvable models, i.e., quantum mechanical systems described by a Hamiltonian H whose eigenvalues and eigenvectors can be explicitly constructed out of some minimal ingredients. In particular, motivated by PT-quantum mechanics, we will not insist on any self-adjointness feature of the Hamiltonians considered in our construction. We also introduce the so-called bicoherent states, analyze some of their properties, and show how they can be used for quantizing a system. Some examples, both in finite- and infinite-dimensional Hilbert spaces, are discussed.

  9. Spatial and angular variation and discretization of the self-adjoint transport operator

    SciTech Connect

    Roberts, R.M.

    1996-03-11

    This mathematical treatise begins with a variational derivation of a second-order, self-adjoint form of the transport equation. Next, a space variational functional whose minimization solves the transport equation is derived. A one-dimensional example is given. Then, S_N and P_N discretized functionals are expressed. Next, the surface contributions to the functionals are discretized. Finally, the explicit forms of the D and H matrices are given for four different geometries: hexahedron, wedge, tetrahedron, and pyramid.

  10. Adjoint transport calculations for sensitivity analysis of the Hiroshima air-over-ground environment

    SciTech Connect

    Broadhead, B.L.; Cacuci, D.G.; Pace, J.V. III

    1984-01-01

    A major effort within the US Dose Reassessment Program is aimed at recalculating the transport of initial nuclear radiation in an air-over-ground environment. This paper is the first report of results from adjoint calculations in the Hiroshima air-over-ground environment. The calculations use a Hiroshima/Nagasaki multi-element ground, ENDF/B-V nuclear data, one-dimensional ANISN flux weighting for neutron and gamma cross sections, a source obtained by two-dimensional hydrodynamic and three-dimensional transport calculations, and best-estimate atmospheric conditions from Japanese sources. 7 references, 2 figures.

  11. Adjoint gamma ray estimation to the surface of a cylinder: analysis of a remote reprocessing facility

    SciTech Connect

    Cramer, S.N.

    1981-07-01

    The next event estimator in the MORSE multigroup Monte Carlo code has been extended to include Klein-Nishina scattering and annihilation radiation from pair production for both the forward and adjoint modes of calculation. A formulation for the solid angle subtended at a point by a cylinder has also been used in the estimator. These procedures have been included in the investigation of the gamma ray environment in the design of a remote fuel reprocessing facility. Calculational results are presented which indicate the validity and efficiency of the developed methods as compared to those in standard use.

  12. Copper-laser oscillator with adjoint-coupled self-filtering injection.

    PubMed

    Chang, J J

    1995-03-15

    A new injection-controlled laser resonator developed to achieve diffraction-limited beam quality for high-gain short-pulse lasers is reported. The resonator is seeded with a short-pulse laser signal by adjoint-coupled injection. The two-times diffraction-limited injection beam is self-filtered through a prepulse cavity propagation to improve its beam quality. The use of a self-imaging unstable resonator diminishes the edge-diffraction-induced beam deterioration. A beam quality of 1.1-1.3 times diffraction limited is achieved throughout the entire 70-ns laser pulse of a 30-W copper-vapor laser. PMID:19859260

  13. Self adjoint extensions of differential operators in application to shape optimization

    NASA Astrophysics Data System (ADS)

    Nazarov, Serguei A.; Sokolowski, Jan

    2003-10-01

    Two approaches are proposed for the modelling of problems with small geometrical defects. The first approach is based on the theory of self adjoint extensions of differential operators. In the second approach function spaces with separated asymptotics and point asymptotic conditions are introduced, and the variational formulation is established. For both approaches the accuracy estimates are derived. Finally, the spectral problems are considered and the error estimates for eigenvalues are given. To cite this article: S.A. Nazarov, J. Sokolowski, C. R. Mecanique 331 (2003).

  14. Technique for Calculating Solution Derivatives With Respect to Geometry Parameters in a CFD Code

    NASA Technical Reports Server (NTRS)

    Mathur, Sanjay

    2011-01-01

    A solution has been developed to the challenges of computing derivatives with respect to geometry, which is not straightforward because these are not typically direct inputs to the computational fluid dynamics (CFD) solver. To overcome these issues, a procedure has been devised that can be used without having access to the mesh generator, while still being applicable to all types of meshes. The basic approach is inspired by the mesh motion algorithms used to deform the interior mesh nodes in a smooth manner when the surface nodes move, as occurs, for example, in a fluid-structure interaction problem. The general idea is to model the mesh edges and nodes as constituting a spring-mass system. Changes to boundary node locations are propagated to interior nodes by allowing them to assume their new equilibrium positions, i.e., positions where the forces on each node are in balance. The main advantage of the technique is that it is independent of the volumetric mesh generator and can be applied to structured, unstructured, single- and multi-block meshes. It essentially reduces the problem to defining the surface mesh node derivatives with respect to the geometry parameters of interest. For analytical geometries, this is quite straightforward. In the more general case, one would need to be able to interrogate the underlying parametric CAD (computer-aided design) model and to evaluate the derivatives either analytically or by a finite-difference technique. Because the technique is based on a partial differential equation (PDE), it is applicable not only to forward-mode problems (where derivatives of all the output quantities are computed with respect to a single input), but it could also be extended to the adjoint problem, either by using an analytical adjoint of the PDE or a discrete analog.
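    The spring-analogy idea above can be made concrete with a one-dimensional sketch: interior nodes are connected by unit-stiffness springs and moved to the equilibrium positions implied by prescribed boundary displacements. This is a minimal, hypothetical Python illustration, not the CFD implementation described in the record.

```python
import numpy as np

def deform_interior_nodes(x, boundary_disp):
    """Propagate boundary displacements to the interior nodes of a 1-D node chain
    by solving the spring-equilibrium (discrete Laplace) system K d = f."""
    n = len(x)
    d = np.zeros(n)
    d[0], d[-1] = boundary_disp                      # prescribed boundary motion
    # Unit-stiffness springs between neighbours -> tridiagonal stiffness matrix
    K = (np.diag(2.0 * np.ones(n - 2))
         - np.diag(np.ones(n - 3), 1)
         - np.diag(np.ones(n - 3), -1))
    f = np.zeros(n - 2)
    f[0] += d[0]
    f[-1] += d[-1]
    d[1:-1] = np.linalg.solve(K, f)                  # interior equilibrium displacements
    return x + d

x = np.linspace(0.0, 1.0, 6)
print(deform_interior_nodes(x, (0.0, 0.1)))          # displacement varies linearly to 0.1
```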

  15. An Adjoint-based Method for the Inversion of the Juno and Cassini Gravity Measurements into Wind Fields

    NASA Astrophysics Data System (ADS)

    Galanti, Eli; Kaspi, Yohai

    2016-04-01

    During 2016-17, the Juno and Cassini spacecraft will both perform close eccentric orbits, of Jupiter and Saturn respectively, obtaining high-precision gravity measurements for these planets. These data will be used to estimate the depth of the observed surface flows on these planets. All models to date relating the winds to the gravity field have been in the forward direction, thus only allowing the calculation of the gravity field from given wind models. However, the inverse problem needs to be solved, since the new observations will be of the gravity field. Here, an inverse dynamical model is developed to relate the expected measurable gravity field to perturbations of the density and wind fields, and therefore to the observed cloud-level winds. In order to invert the gravity field into the 3D circulation, an adjoint model is constructed for the dynamical model, thus allowing backward integration. This tool is used for the examination of various scenarios, simulating cases in which the depth of the wind depends on latitude. We show that it is possible to use the gravity measurements to derive the depth of the winds, both on Jupiter and Saturn, also taking measurement errors into account. Calculating the solution uncertainties, we show that the wind depth can be determined more precisely in the low-to-mid-latitudes. In addition, the gravitational moments are found to be particularly sensitive to flows at intermediate depths near the equator. Therefore, we expect that if deep winds exist on these planets, they will have a measurable signature in the Juno and Cassini data.

  16. Exact Solutions and Conservation Laws for a New Integrable Equation

    SciTech Connect

    Gandarias, M. L.; Bruzon, M. S.

    2010-09-30

    In this work we study a generalization of an integrable equation proposed by Qiao and Liu from the point of view of the theory of symmetry reductions in partial differential equations. Among the solutions we obtain a travelling wave with decaying velocity and a smooth soliton solution. We determine the subclass of these equations which are quasi-self-adjoint and we get a nontrivial conservation law.

  17. Hilbert-Schmidt Inner Product for an Adjoint Representation of the Quantum Algebra U⌣q(su2)

    NASA Astrophysics Data System (ADS)

    Fakhri, Hossein; Nouraddini, Mojtaba

    2015-10-01

    The Jordan-Schwinger realization of quantum algebra U⌣q(su2) is used to construct the irreducible submodule Tl of the adjoint representation in two different bases. The two bases are known as types of irreducible tensor operators of rank l which are related to each other by the involution map. The bases of the submodules are equipped with q-analogues of the Hilbert-Schmidt inner product and it is also shown that the adjoint representation corresponding to one of those submodules is a *-representation.

  18. Calculation of the response of cylindrical targets to collimated beams of particles using one-dimensional adjoint transport techniques. [LMFBR

    SciTech Connect

    Dupree, S. A.

    1980-06-01

    The use of adjoint techniques to determine the interaction of externally incident collimated beams of particles with cylindrical targets is a convenient means of examining a class of problems important in radiation transport studies. The theory relevant to such applications is derived, and a simple example involving a fissioning target is discussed. Results from both discrete ordinates and Monte Carlo transport-code calculations are presented, and comparisons are made with results obtained from forward calculations. The accuracy of the discrete ordinates adjoint results depends on the order of angular quadrature used in the calculation. Reasonable accuracy can be expected using EQN quadratures of order S16 or higher.

  19. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and

  1. Sources and processes contributing to nitrogen deposition: an adjoint model analysis applied to biodiversity hotspots worldwide.

    PubMed

    Paulot, Fabien; Jacob, Daniel J; Henze, Daven K

    2013-04-01

    Anthropogenic enrichment of reactive nitrogen (Nr) deposition is an ecological concern. We use the adjoint of a global 3-D chemical transport model (GEOS-Chem) to identify the sources and processes that control Nr deposition to an ensemble of biodiversity hotspots worldwide and two U.S. national parks (Cuyahoga and Rocky Mountain). We find that anthropogenic sources dominate deposition at all continental sites and are mainly regional (less than 1000 km) in origin. In Hawaii, Nr supply is controlled by oceanic emissions of ammonia (50%) and anthropogenic sources (50%), with important contributions from Asia and North America. Nr deposition is also sensitive in complicated ways to emissions of SO2, which affect Nr gas-aerosol partitioning, and of volatile organic compounds (VOCs), which affect oxidant concentrations and produce organic nitrate reservoirs. For example, VOC emissions generally inhibit deposition of locally emitted NOx but significantly increase Nr deposition downwind. However, in polluted boreal regions, anthropogenic VOC emissions can promote Nr deposition in winter. Uncertainties in chemical rate constants for OH + NO2 and NO2 hydrolysis also complicate the determination of source-receptor relationships for polluted sites in winter. Application of our adjoint sensitivities to the representative concentration pathways (RCPs) scenarios for 2010-2050 indicates that future decreases in Nr deposition due to NOx emission controls will be offset by concurrent increases in ammonia emissions from agriculture.

  2. Improving the Fit of a Land-Surface Model to Data Using its Adjoint

    NASA Astrophysics Data System (ADS)

    Raoult, N.; Jupp, T. E.; Cox, P. M.; Luke, C.

    2015-12-01

    Land-surface models (LSMs) are of growing importance in the world of climate prediction. They are crucial components of larger Earth system models that are aimed at understanding the effects of land-surface processes on the global carbon cycle. The Joint UK Land Environment Simulator (JULES) is the land-surface model used by the UK Met Office. It has been automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or 'adjoint', of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. adJULES presents an opportunity to confront JULES with many different observations and make improvements to the model parameterisation. In the newest version of adJULES, multiple sites can be used in the calibration, giving a generic set of parameters that can be generalised over plant functional types. We present an introduction to the adJULES system and its applications to data from a variety of flux tower sites. We show that calculation of the 2nd derivative of JULES allows us to produce posterior probability density functions of the parameters and to assess how knowledge of parameter values is constrained by observations.

  3. Improving the Fit of a Land-Surface Model to Data Using its Adjoint

    NASA Astrophysics Data System (ADS)

    Raoult, Nina; Jupp, Tim; Cox, Peter; Luke, Catherine

    2016-04-01

    Land-surface models (LSMs) are crucial components of the Earth System Models (ESMs) which are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. In this study, JULES is automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. We present an introduction to the adJULES system and demonstrate its ability to improve the model-data fit using eddy covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the 5 plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 90% of the FLUXNET sites used in the study. These reductions in error are shown and compared to the reductions found with site-specific optimisations. Finally, we show that calculation of the 2nd derivative of JULES allows us to produce posterior probability density functions of the parameters and to assess how knowledge of parameter values is constrained by observations.

  4. Modeling of HHFW Heating and Current Drive on NSTX with Ray Tracing and Adjoint Techniques

    NASA Astrophysics Data System (ADS)

    Mau, T. K.; Ryan, P. M.; Carter, M. D.; Jaeger, E. F.; Swain, D. W.; Phillips, C. K.; Kaye, S.; LeBlanc, B. P.; Rosenberg, A. L.; Wilson, J. R.; Harvey, R. W.; Bonoli, P.

    2003-12-01

    In recent HHFW current drive experiments on NSTX, the relative phase shift of the antenna array was scanned from 30° to 90° to create k∥ spectral peaks between 3 and 8 m⁻¹, for rf power in the 1.1-4.5 MW range. Typical plasma parameters were Ip ≈ 0.5 MA, BT ≈ 0.45 T, ne0 ≈ 0.6-3×10¹⁹ m⁻³, and Te0 ≈ 0.6-3 keV. In this paper, detailed results from the CURRAY ray tracing code at various time slices of some of the earlier discharges are presented. The complete antenna spectrum is modeled using up to 100 rays with different kφ and kθ. The rf-driven current is calculated by invoking the adjoint technique that is applicable to toroidal plasmas of all aspect ratios and beta values. In these low-β (≈2-3%) discharges, the rf-driven current is peaked on axis, and minority ion absorption displays a tendency to increase at lower k∥. Reasonable agreement with results inferred from the voltage measurements has been obtained, which points to evidence of current drive, while the calculated power deposition profiles agree very well with the HPRT ray code for these discharges. The use of the adjoint method will become more important in future high-β NSTX discharges.

  5. Imaging Earth's Interior based on Spectral-Element and Adjoint Methods (Invited)

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Zhu, H.; Bozdag, E.

    2013-12-01

    We use spectral-element and adjoint methods to iteratively improve 3D tomographic images of Earth's interior, ranging from global to continental to exploration scales. The spectral-element method, a high-order finite-element method with the advantage of a diagonal mass matrix, is used to accurately calculate three-component synthetic seismograms in a complex 3D Earth model. An adjoint method is used to numerically compute Fréchet derivatives of a misfit function based on the interaction between the wavefield for a reference Earth model and a wavefield obtained by using time-reversed differences between data and synthetics at all receivers as simultaneous sources. In combination with gradient-based optimization methods, such as a preconditioned conjugate gradient or L-BFGS method, we are able to iteratively improve 3D images of Earth's interior and gradually minimize discrepancies between observed and simulated seismograms. Various misfit functions may be chosen to quantify these discrepancies, such as cross-correlation traveltime differences, frequency-dependent phase and amplitude anomalies, as well as full-waveform differences. Various physical properties of the Earth are constrained based on this method, such as elastic wavespeeds, radial anisotropy, shear attenuation, and impedance contrasts. We apply this method to study seismic inverse problems at various scales, from global- and continental-scale seismic tomography to exploration-scale full-waveform inversion.
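    The iterative improvement loop sketched above (misfit evaluation, adjoint gradient, gradient-based update) can be illustrated with a toy linear tomography problem. In the sketch below, scipy's L-BFGS-B routine stands in for the preconditioned conjugate-gradient or L-BFGS solvers mentioned in the abstract, and the matrix G is a made-up stand-in for the wave-propagation operator; none of this reproduces the spectral-element machinery.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for a traveltime misfit: the "model" m is a slowness vector and the
# "data" are path-integrated traveltimes G m. The adjoint of the forward operator
# (here simply G.T) supplies the gradient, as in adjoint tomography.
rng = np.random.default_rng(0)
G = rng.random((20, 5))                  # hypothetical ray-path matrix
m_true = np.array([1.0, 1.2, 0.9, 1.1, 1.05])
d_obs = G @ m_true

def misfit_and_gradient(m):
    residual = G @ m - d_obs
    chi = 0.5 * residual @ residual
    grad = G.T @ residual                # adjoint (back-projected) residual
    return chi, grad

result = minimize(misfit_and_gradient, x0=np.ones(5), jac=True, method="L-BFGS-B")
print(result.x)                          # recovers m_true
```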

  6. Seeking Energy System Pathways to Reduce Ozone Damage to Ecosystems through Adjoint-based Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Capps, S. L.; Pinder, R. W.; Loughlin, D. H.; Bash, J. O.; Turner, M. D.; Henze, D. K.; Percell, P.; Zhao, S.; Russell, M. G.; Hakami, A.

    2014-12-01

    Tropospheric ozone (O3) affects the productivity of ecosystems in addition to degrading human health. Concentrations of this pollutant are significantly influenced by precursor gas emissions, many of which emanate from energy production and use processes. Energy system optimization models could inform policy decisions that are intended to reduce these harmful effects if the contribution of precursor gas emissions to human health and ecosystem degradation could be elucidated. Nevertheless, determining the degree to which precursor gas emissions harm ecosystems and human health is challenging because of the photochemical production of ozone and the distinct mechanisms by which ozone causes harm to different crops, tree species, and humans. Here, the adjoint of a regional chemical transport model is employed to efficiently calculate the relative influences of ozone precursor gas emissions on ecosystem and human health degradation, which informs an energy system optimization. Specifically, for the summer of 2007 the Community Multiscale Air Quality (CMAQ) model adjoint is used to calculate the location- and sector-specific influences of precursor gas emissions on potential productivity losses for the major crops and sensitive tree species as well as human mortality attributable to chronic ozone exposure in the continental U.S. The atmospheric concentrations are evaluated with 12-km horizontal resolution with crop production and timber biomass data gridded similarly. These location-specific factors inform the energy production and use technologies selected in the MARKet ALlocation (MARKAL) model.

  7. Adjoint SU(5) GUT model with T7 flavor symmetry

    NASA Astrophysics Data System (ADS)

    Arbeláez, Carolina; Cárcamo Hernández, A. E.; Kovalenko, Sergey; Schmidt, Iván

    2015-12-01

    We propose an adjoint SU(5) GUT model with a T7 family symmetry and an extra Z2⊗Z3⊗Z4⊗Z4'⊗Z12 discrete group that successfully describes the prevailing Standard Model fermion mass and mixing pattern. The observed hierarchy of the charged fermion masses and the quark mixing angles arises from the Z3⊗Z4⊗Z12 symmetry breaking, which occurs near the GUT scale. The light active neutrino masses are generated by type-I and type-III seesaw mechanisms mediated by the fermionic SU(5) singlet and the adjoint 24-plet. The model predicts the effective Majorana neutrino mass parameter of neutrinoless double beta decay to be mββ = 4 and 50 meV for the normal and the inverted neutrino spectra, respectively. We construct several benchmark scenarios, which lead to SU(5) gauge coupling unification and are compatible with the known phenomenological constraints originating from the lightness of neutrinos, proton decay, dark matter, etc. These scenarios contain TeV-scale colored fields, which could give rise to a visible signal or be stringently constrained at the LHC.

  8. Neutron noise calculations in a hexagonal geometry and comparison with analytical solutions

    SciTech Connect

    Tran, H. N.; Demaziere, C.

    2012-07-01

    This paper presents the development of a neutronic and kinetic solver for hexagonal geometries. The tool is developed based on diffusion theory with multiple energy groups and multiple groups of delayed neutron precursors, allowing the solution of forward and adjoint problems of static and dynamic states, and is applicable to both thermal and fast systems with hexagonal geometries. In the dynamic problems, the small stationary fluctuations of macroscopic cross sections are considered as noise sources, and the induced first-order noise is then calculated fully in the frequency domain. Numerical algorithms for solving the static and noise equations are implemented with a spatial discretization based on finite differences and a power iterative solution. A coarse mesh finite difference method has been adopted to speed up the convergence. Since no other numerical tool can calculate frequency-dependent noise in hexagonal geometry, validation calculations have been performed and benchmarked against analytical solutions based on a 2-D homogeneous system with two energy groups and one group of delayed neutron precursors, in which point-like perturbations of the thermal absorption cross section at central and non-central positions are considered as noise sources. (authors)
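    The power-iteration solution mentioned above for the static problem can be sketched as follows; a small dense matrix pair stands in for the discretized multigroup diffusion operators, and no hexagonal meshing or coarse-mesh acceleration is shown. The numbers are hypothetical and only illustrate the iteration.

```python
import numpy as np

def power_iteration(M, F, tol=1e-10, max_iter=500):
    """Solve the generalized eigenvalue problem M phi = (1/k) F phi by power
    iteration, as used for the static multiplication factor in diffusion solvers."""
    phi = np.ones(M.shape[0])
    k = 1.0
    for _ in range(max_iter):
        phi_new = np.linalg.solve(M, F @ phi / k)          # flux update
        k_new = k * np.sum(F @ phi_new) / np.sum(F @ phi)  # eigenvalue update
        if abs(k_new - k) < tol:
            break
        k, phi = k_new, phi_new
    return k_new, phi_new / np.linalg.norm(phi_new)

# Toy two-group-like operators (made-up numbers, not a real reactor model)
M = np.array([[1.0, -0.2], [-0.3, 0.8]])   # losses and scattering
F = np.array([[0.6, 0.5], [0.0, 0.0]])     # fission source
print(power_iteration(M, F))
```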

  9. Coupling of MASH-MORSE Adjoint Leakages with Space- and Time-Dependent Plume Radiation Sources

    SciTech Connect

    Slater, C.O.

    2001-04-20

    In the past, forward-adjoint coupling procedures in air-over-ground geometry have typically involved forward fluences arising from a point source a great distance from a target or vehicle system. Various processing codes were used to create localized forward fluence files that could be used to couple with the MASH-MORSE adjoint leakages. In recent years, radiation plumes that result from reactor accidents or similar incidents have been modeled by others, and the source space and energy distributions as a function of time have been calculated. Additionally, with the point kernel method, they were able to calculate in relatively quick fashion free-field radiation doses for targets moving within the fluence field or for stationary targets within the field, the time dependence for the latter case coming from the changes in position, shape, source strength, and spectra of the plume with time. The work described herein applies the plume source to the MASH-MORSE coupling procedure. The plume source replaces the point source for generating the forward fluences that are folded with MASH-MORSE adjoint leakages. Two types of source calculations are described. The first is a ''rigorous'' calculation using the TORT code and a spatially large air-over-ground geometry. For each time step desired, directional fluences are calculated and are saved over a predetermined region that encompasses a structure within which it is desired to calculate dose rates. Processing codes then create the surface fluences (which may include contributions from radiation sources that deposit on the roof or plateout) that will be coupled with the MASH-MORSE adjoint leakages. Unlike the point kernel calculations of the free-field dose rates, the TORT calculations in practice include the effects of ground scatter on dose rates and directional fluences, although the effects may be underestimated or overestimated because of the use of necessarily coarse mesh and quadrature in order to reduce computational

  10. Practical fully three-dimensional reconstruction algorithms for diffuse optical tomography.

    PubMed

    Biswas, Samir Kumar; Kanhirodan, Rajan; Vasu, Ram Mohan; Roy, Debasish

    2012-06-01

    We have developed an efficient fully three-dimensional (3D) reconstruction algorithm for diffuse optical tomography (DOT). The 3D DOT, a severely ill-posed problem, is tackled through a pseudodynamic (PD) approach wherein an ordinary differential equation representing the evolution of the solution in pseudotime is integrated, which bypasses an explicit inversion of the associated, ill-conditioned system matrix. One of the most computationally expensive parts of the iterative DOT algorithm, the re-evaluation of the Jacobian in each of the iterations, is avoided by using the adjoint-Broyden update formula to provide low-rank updates to the Jacobian. In addition, wherever feasible, we have also made the algorithm efficient by integrating along the quadratic path provided by the perturbation equation containing the Hessian. These algorithms are then proven by reconstruction, using simulated and experimental data and verifying the PD results with those from the popular Gauss-Newton scheme. The major findings of this work are as follows: (i) the PD reconstructions are comparatively artifact free, providing superior absorption coefficient maps in terms of quantitative accuracy and contrast recovery; (ii) the scaling of computation time with the dimension of the measurement set is much less steep with the Jacobian update formula in place than without it; and (iii) an increase in the data dimension, even though it renders the reconstruction problem less ill conditioned and thus provides relatively artifact-free reconstructions, does not necessarily provide better contrast property recovery. For the latter, one should also take care to distribute the measurement points uniformly, avoiding regions close to the source, so that the relative strength of the derivatives for measurements away from the source does not become insignificant.
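    The Jacobian-update idea above can be illustrated with the classical Broyden rank-one formula (the adjoint-Broyden variant used in the paper is not reproduced here); the update enforces the secant condition without re-evaluating the Jacobian. The map F below is a toy example chosen only to make the sketch self-checking.

```python
import numpy as np

def broyden_update(J, dx, df):
    """Classical (good) Broyden rank-one update: the new Jacobian approximation
    satisfies the secant condition J_new @ dx = df without re-evaluating J."""
    return J + np.outer(df - J @ dx, dx) / (dx @ dx)

# Quick check on a toy nonlinear map F(x) = (x0**2, x0*x1)
F = lambda x: np.array([x[0] ** 2, x[0] * x[1]])
x0, x1 = np.array([1.0, 1.0]), np.array([1.1, 0.9])
J0 = np.array([[2.0, 0.0], [1.0, 1.0]])        # exact Jacobian at x0
J1 = broyden_update(J0, x1 - x0, F(x1) - F(x0))
print(J1 @ (x1 - x0), F(x1) - F(x0))           # secant condition holds
```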

  11. Use of DRAGON sources and adjoints for TRIPOLI calculations

    NASA Astrophysics Data System (ADS)

    Camand, Corentin

    usually not significant. The second method is to use the adjoint neutron flux calculated by DRAGON as an importance function for Monte Carlo biasing in TRIPOLI. The objective is to improve the figure of merit of the response of a detector located far away from the neutron source. Initializing the neutron sources of a TRIPOLI calculation required the development of a module in DRAGON that generates a list of sources in the TRIPOLI syntax, including, for each source, its intensity, its position, and the energy domain it covers. We tested our method on a complete 17×17 PWR-UOX assembly and on a reduced 3×3 model. We first verified that the DRAGON and TRIPOLI models were consistent in order to ensure that TRIPOLI receives a coherent source distribution. We then tested the use of DRAGON sources in TRIPOLI with respect to the neutron flux and the effective multiplication factor (keff). We observe slightly better standard deviations, of the order of 10 pcm, on keff for simulations using DRAGON source distributions as compared to simulations with less precise initial sources. Flux convergence is also improved. However, some inconsistencies were also observed in the results, with some fluxes converging more slowly with DRAGON sources when fewer neutrons per batch are considered. In addition, a very large number of sources is too heavy to insert in TRIPOLI. It seems that our method could be refined to improve its implementation and convergence. The study of more complex geometries, with less regular source distributions (for instance using MOX or irradiated fuel), may show better performance with our method. For biasing TRIPOLI calculations using the DRAGON adjoint flux, we created a module that produces importance maps readable by TRIPOLI. We tested our method on a source-detector shielding problem in one dimension. After checking the coherence of the DRAGON and TRIPOLI models, we biased TRIPOLI simulations using the DRAGON adjoint flux and using INIPOND, the internal biasing option of TRIPOLI. We

  12. New optimality criteria methods - Forcing uniqueness of the adjoint strains by corner-rounding at constraint intersections

    NASA Technical Reports Server (NTRS)

    Rozvany, G. I. N.; Sobieszczanski-Sobieski, J.

    1992-01-01

    In new, iterative continuum-based optimality criteria (COC) methods, the strain in the adjoint structure becomes non-unique if the number of active local constraints is greater than the number of design variables for an element. This brief note discusses the use of smooth envelope functions (SEFs) in economically overcoming the computational problems caused by this non-uniqueness.

  13. New Effective Multithreaded Matching Algorithms

    SciTech Connect

    Manne, Fredrik; Halappanavar, Mahantesh

    2014-05-19

    Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of lower quality. We present a new, simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
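    For reference, the textbook greedy 1/2-approximation for maximum-weight matching is sketched below: edges are scanned in decreasing weight order and kept whenever both endpoints are still free. This is not the specific algorithm or the multithreaded variant proposed in the paper, only a minimal illustration of the approximation idea.

```python
def greedy_matching(edges):
    """Greedy 1/2-approximation for maximum-weight matching: scan edges in
    decreasing weight order and keep an edge if both endpoints are unmatched."""
    matched = set()
    matching = []
    for u, v, w in sorted(edges, key=lambda e: e[2], reverse=True):
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching

edges = [("a", "b", 5.0), ("b", "c", 4.0), ("c", "d", 3.0), ("a", "d", 1.0)]
print(greedy_matching(edges))  # [('a', 'b', 5.0), ('c', 'd', 3.0)]
```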

  14. The Orthotran Solution

    ERIC Educational Resources Information Center

    Hofmann, Richard J.

    1978-01-01

    A computational algorithm, called the orthotran solution, is developed for determining oblique factor analytic solutions utilizing orthogonal transformation matrices. Selected results from illustrative studies are provided. (Author/JKS)

  15. Unsteadiness in the wake of disks and spheres: Instability, receptivity and control using direct and adjoint global stability analyses

    NASA Astrophysics Data System (ADS)

    Meliga, P.; Chomaz, J.-M.; Sipp, D.

    2009-05-01

    We consider the stability of the steady, axisymmetric wake of a disk and a sphere as a function of the Reynolds number. Both the direct and adjoint eigenvalue problems are solved. The threshold Reynolds numbers and characteristics of the destabilizing modes agree with those documented in previous studies: for both configurations, the first destabilization occurs for a stationary mode of azimuthal wavenumber m=1, and the second destabilization for an oscillating mode of the same azimuthal wavenumber. For both geometries, the adjoint mode computation allows us to determine the receptivity of each mode to particular initial conditions or forcing and to define control strategies. We show that the adjoint global mode reaches a maximum amplitude within the recirculating bubble and downstream of the separation point for both the disk and the sphere. In the case of the sphere, the optimal forcing corresponds to a displacement of the separation point along the sphere surface with no tilt of the separation line. However, in the case of the disk, its blunt shape does not allow such a displacement, and the optimal forcing corresponds to a tilt of the separation line with no displacement of the separation point. As a result, the magnitudes of the adjoint global modes are larger for the sphere than for the disk, showing that the wake of the sphere is more receptive to forcing than that of the disk. In the case of active control at the boundary through blowing and suction at the body wall, the actuator should be placed close to the separation point, where the magnitude of the adjoint pressure reaches its maximum in all four cases. In the case of passive control, we show that the region of the wake that is most sensitive to local modifications of the linearized Navier-Stokes operator, including base flow alterations, is limited to the recirculating bubble for both geometries and both instability modes. This region may therefore be identified as the intrinsic wavemaker.

  16. Adjoint sensitivity experiments of a meso-beta-scale vortex in the middle reaches of the Yangtze river

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Gao, K.

    2006-03-01

    A relatively independent and small-scale heavy rainfall event occurred to the south of a slow, eastward-moving meso-alpha-scale vortex. The analysis shows that a meso-beta-scale system is largely responsible for the intense precipitation. An attempt to simulate it met with some failures. In view of its small scale, short lifetime, and the relatively sparse observations at the initial time, an adjoint model was used to examine the sensitivity of the meso-beta-scale vortex simulation with respect to the initial conditions. The adjoint sensitivity indicates how small perturbations of initial model variables anywhere in the model domain can influence the central vorticity of the vortex. The largest sensitivity for both the wind and temperature perturbations is located below 700 hPa, especially at low levels. The largest sensitivity for the water vapor perturbation is located below 500 hPa, especially at the middle and low levels. The horizontal adjoint sensitivity for all variables is mainly located toward the upper reaches of the Yangtze River with respect to the simulated meso-beta-scale system in Hunan and Jiangxi provinces, with strong locality. The sensitivity shows that warm cyclonic perturbations in the upper reaches can have a great effect on the development of the meso-beta-scale vortex. Based on the adjoint sensitivity, forward sensitivity experiments were conducted to identify the factors influencing the development of the meso-beta-scale vortex and to explore ways of improving the prediction. A realistic prediction was achieved by using the adjoint sensitivity to modify the initial conditions and implanting a warm cyclone at the initial time in the upper reaches of the river with respect to the meso-beta-scale vortex, as is commonly done in tropical cyclone prediction.

  17. Design and implementation of cost-effective algorithms for direct solution of banded linear systems on the vector processor system 32 supercomputer

    SciTech Connect

    Samba, A.S.

    1985-01-01

    The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail. The performance of the VCR on a three-parameter model problem is also illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is the 'Customized Reduction of Augmented Triangles' (CRAT). CRAT has the dominant characteristics of an efficient VPS 32 algorithm. CRAT is tailored to the pipeline architecture of the VPS 32, and as a consequence the algorithm is implicitly vectorizable.
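    A minimal illustration of the point cyclic reduction algorithm on which VCR is based is given below, restricted to tridiagonal systems of size n = 2^k - 1 and without the vectorization that is the point of the paper; it is a generic sketch, not the VPS 32 implementation.

```python
import numpy as np

def cyclic_reduction_solve(a, b, c, d):
    """Solve a tridiagonal system by odd-even cyclic reduction.
    a: sub-diagonal (a[0] unused/zero), b: diagonal, c: super-diagonal
    (c[-1] unused/zero), d: right-hand side. Requires n = 2**k - 1."""
    a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
    n = len(b)
    levels = int(round(np.log2(n + 1)))
    # Forward reduction: each level eliminates every second remaining unknown.
    for lvl in range(levels - 1):
        off, stride = 2 ** lvl, 2 ** (lvl + 1)
        for i in range(stride - 1, n, stride):
            im, ip = i - off, i + off
            alpha, gamma = -a[i] / b[im], -c[i] / b[ip]
            b[i] += alpha * c[im] + gamma * a[ip]
            d[i] += alpha * d[im] + gamma * d[ip]
            a[i], c[i] = alpha * a[im], gamma * c[ip]
    # Back substitution from the middle unknown outwards.
    x = np.zeros(n)
    for lvl in range(levels - 1, -1, -1):
        h = 2 ** lvl
        for i in range(h - 1, n, 2 * h):
            left = x[i - h] if i - h >= 0 else 0.0
            right = x[i + h] if i + h < n else 0.0
            x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
    return x

# Compare against a dense solve on a 7x7 diagonally dominant test system.
n = 7
a = np.full(n, -1.0); a[0] = 0.0
c = np.full(n, -1.0); c[-1] = 0.0
b = np.full(n, 4.0)
d = np.arange(1.0, n + 1)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(cyclic_reduction_solve(a, b, c, d), np.linalg.solve(A, d)))  # True
```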

  18. A new zonation algorithm with parameter estimation using hydraulic head and subsidence observations.

    PubMed

    Zhang, Meijing; Burbey, Thomas J; Nunes, Vitor Dos Santos; Borggaard, Jeff

    2014-01-01

    Parameter estimation codes such as UCODE_2005 are becoming well-known tools in groundwater modeling investigations. These programs estimate important parameter values such as transmissivity (T) and aquifer storage values (Sa) from known observations of hydraulic head, flow, or other physical quantities. One drawback inherent in these codes is that the parameter zones must be specified by the user. However, such knowledge is often unknown even if a detailed hydrogeological description is available. To overcome this deficiency, we present a discrete adjoint algorithm for identifying suitable zonations from hydraulic head and subsidence measurements, which are highly sensitive to both elastic (Sske) and inelastic (Sskv) skeletal specific storage coefficients. With the advent of interferometric synthetic aperture radar (InSAR), distributed spatial and temporal subsidence measurements can be obtained. A synthetic conceptual model containing seven transmissivity zones, one aquifer storage zone, and three interbed zones for elastic and inelastic storage coefficients was developed to simulate drawdown and subsidence in an aquifer interbedded with clay that exhibits delayed drainage. Simulated delayed land subsidence and groundwater head data are assumed to be the observed measurements, to which the discrete adjoint algorithm is applied to create approximate spatial zonations of T, Sske, and Sskv. UCODE_2005 is then used to obtain the final optimal parameter values. Calibration results indicate that the estimated zonations calculated from the discrete adjoint algorithm closely approximate the true parameter zonations. This automation algorithm reduces the bias established by the initial distribution of zones and provides a robust parameter zonation distribution.

  20. Analytical solution for the advection-dispersion transport equation in layered media

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The advection-dispersion transport equation with first-order decay was solved analytically for multi-layered media using the classic integral transform technique (CITT). The solution procedure used an associated non-self-adjoint advection-diffusion eigenvalue problem that had the same form and coef...

  1. Modularity and 4D-2D spectral equivalences for large- N gauge theories with adjoint matter

    NASA Astrophysics Data System (ADS)

    Basar, Gökçe; Cherman, Aleksey; Dienes, Keith R.; McGady, David A.

    2016-06-01

    In recent work, we demonstrated that the confined-phase spectrum of non-supersymmetric pure Yang-Mills theory coincides with the spectrum of the chiral sector of a two-dimensional conformal field theory in the large- N limit. This was done within the tractable setting in which the gauge theory is compactified on a three-sphere whose radius is small compared to the strong length scale. In this paper, we generalize these observations by demonstrating that similar results continue to hold even when massless adjoint matter fields are introduced. These results hold for both thermal and (-1) F -twisted partition functions, and collectively suggest that the spectra of large- N confining gauge theories are organized by the symmetries of two-dimensional conformal field theories.

  2. Sensitivity Analysis for Reactor Period Induced by Positive Reactivity Using One-point Adjoint Kinetic Equation

    NASA Astrophysics Data System (ADS)

    Chiba, G.; Tsuji, M.; Narabayashi, T.

    2014-04-01

    In order to better predict the kinetic behavior of a nuclear fission reactor, an improvement of the delayed neutron parameters is essential. The present paper identifies important nuclear data for reactor kinetics: fission yield and decay constant data of 86Ge, some bromine isotopes, 94Rb, 98mY, and some iodine isotopes. Their importance is quantified as sensitivities with the help of the adjoint kinetic equation, and it is found that these sensitivities depend on the inserted reactivity (or the reactor period). Moreover, the dependence of the sensitivities on nuclear data files is also quantified using the latest files. Even when the currently evaluated data are used, there are large differences among the different data files from the viewpoint of the delayed neutrons.

  3. Pressure estimation from PIV like data of compressible flows by boundary driven adjoint data assimilation

    NASA Astrophysics Data System (ADS)

    Lemke, Mathias; Reiss, Julius; Sesterhenn, Jörn

    2016-06-01

    Particle image velocimetry (PIV) is one of the major tools for measuring velocity fields in experiments. However, other flow properties like density or pressure are often of vital interest but usually cannot be measured non-intrusively. There are many approaches to overcoming this problem, but none is fully satisfactory. Here, the computational method of adjoint-based data assimilation for this purpose is discussed. A numerical simulation of a flow is adapted to given velocity data. After successful adaptation, previously unknown quantities can be taken from the (necessarily complete) simulation data. The main focus of this work is the efficient implementation of this approach by boundary-driven optimisation. Synthetic test cases are presented to allow an assessment of the method.

  4. Discrete Adjoint-Based Design for Unsteady Turbulent Flows On Dynamic Overset Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Diskin, Boris

    2012-01-01

    A discrete adjoint-based design methodology for unsteady turbulent flows on three-dimensional dynamic overset unstructured grids is formulated, implemented, and verified. The methodology supports both compressible and incompressible flows and is amenable to massively parallel computing environments. The approach provides a general framework for performing highly efficient and discretely consistent sensitivity analysis for problems involving arbitrary combinations of overset unstructured grids which may be static, undergoing rigid or deforming motions, or any combination thereof. General parent-child motions are also accommodated, and the accuracy of the implementation is established using an independent verification based on a complex-variable approach. The methodology is used to demonstrate aerodynamic optimizations of a wind turbine geometry, a biologically-inspired flapping wing, and a complex helicopter configuration subject to trimming constraints. The objective function for each problem is successfully reduced and all specified constraints are satisfied.

  5. Adjoint-Based Methodology for Time-Dependent Optimal Control (AMTOC)

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail; Diskin, Boris; Nishikawa, Hiroaki

    2012-01-01

    During the five years of this project, the AMTOC team developed an adjoint-based methodology for the design and optimization of complex time-dependent flows, implemented AMTOC in a testbed environment, directly assisted in the implementation of this methodology in the state-of-the-art NASA unstructured CFD code FUN3D, and successfully demonstrated applications of this methodology to large-scale optimization of several supersonic and other aerodynamic systems, such as fighter jet, subsonic aircraft, rotorcraft, high-lift, wind-turbine, and flapping-wing configurations. In the course of this project, the AMTOC team published 13 refereed journal articles, 21 refereed conference papers, and 2 NIA reports. The AMTOC team presented the results of this research at 36 international and national conferences, meetings, and seminars, including the International Conference on CFD and numerous AIAA conferences and meetings. Selected publications that include the major results of the AMTOC project are enclosed in this report.

  6. Aerodynamic Design Optimization on Unstructured Grids with a Continuous Adjoint Formulation

    NASA Technical Reports Server (NTRS)

    Anderson, W. Kyle; Venkatakrishnan, V.

    1997-01-01

    A continuous adjoint approach for obtaining sensitivity derivatives on unstructured grids is developed and analyzed. The derivation of the costate equations is presented, and a second-order accurate discretization method is described. The relationship between the continuous formulation and a discrete formulation is explored for inviscid, as well as for viscous flow. Several limitations in a strict adherence to the continuous approach are uncovered, and an approach that circumvents these difficulties is presented. The issue of grid sensitivities, which do not arise naturally in the continuous formulation, is investigated and is observed to be of importance when dealing with geometric singularities. A method is described for modifying inviscid and viscous meshes during the design cycle to accommodate changes in the surface shape. The accuracy of the sensitivity derivatives is established by comparing with finite-difference gradients and several design examples are presented.

  7. Adjoint-tomography Inversion of the Small-scale Surface Sedimentary Structures: Key Methodological Aspects

    NASA Astrophysics Data System (ADS)

    Kubina, Filip; Moczo, Peter; Kristek, Jozef; Michlik, Filip

    2016-04-01

    Adjoint tomography has proven to be an invaluable tool for exploring Earth's structure at regional and global scales. It has not been widely applied to improving models of local surface sedimentary structures (LSSS) in numerical predictions of earthquake ground motion (EGM). Anomalous earthquake motions and the corresponding damage in earthquakes are often due to site effects in local surface sedimentary basins. Because the majority of the world's population is located atop surface sedimentary basins, it is important to predict EGM at these sites during future earthquakes. A major lesson learned from dedicated international tests focused on numerical prediction of EGM in LSSS is that it is hard to reach better agreement between data and synthetics without an improved structural model. If earthquake records are available for sites atop an LSSS, it is natural to consider them for improving the structural model. Computationally efficient adjoint tomography might be a suitable tool. The seismic wavefield in an LSSS is relatively very complex due to diffractions, conversions, interference, and often also resonant phenomena. In shallow basins, the first arrivals are not suitable for inversion due to the almost vertical incidence and thus insufficient vertical resolution. The later wavefield consists mostly of local surface waves, often without separated wave groups. Consequently, the computed kernels are complicated and not suitable for inversion without pre-processing. The spatial complexity of a kernel can be dramatic in a typical situation with a relatively low number of sources (local earthquakes) and surface receivers. This complexity can be reduced by directionally-dependent smoothing and spatially-dependent normalization, which condition reasonable convergence. A multiscale approach seems necessary given the usual difference between the available and true models. Interestingly, only a successive inversion of the μ and λ elastic moduli, with different scale sequences, leads to good results.

  8. A User's Manual for MASH V1.5 - A Monte Carlo Adjoint Shielding Code System

    SciTech Connect

    C. O. Slater; J. M. Barnes; J. O. Johnson; J.D. Drischler

    1998-10-01

    The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. The current version, MASH v1.5, is the successor to the original MASH v1.0 code system initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem.

  9. Adjoint QCD on ℝ³ × S¹ with twisted fermionic boundary conditions

    NASA Astrophysics Data System (ADS)

    Misumi, Tatsuhiro; Kanazawa, Takuya

    2014-06-01

    We investigate QCD with adjoint Dirac fermions on ℝ³ × S¹ with generic boundary conditions for fermions along S¹. By means of perturbation theory, semiclassical methods and a chiral effective model, we elucidate a rich phase structure in the space spanned by the S¹ compactification scale L, twisted fermionic boundary condition ϕ and the fermion mass m. We found various phases with or without chiral and center symmetry breaking, separated by first- and second-order phase transitions, which in specific limits (ϕ = 0, ϕ = π, L → 0 and m → ∞) reproduce known results in the literature. In the center-symmetric phase at small L, we show that Ünsal's bion-induced confinement mechanism is at work but is substantially weakened at ϕ = 0 by a linear potential between monopoles. Through an analytic and numerical study of the PNJL model, we show that the order parameters for center and chiral symmetries (i.e., the Polyakov loop and chiral condensate) are strongly intertwined at ϕ = 0. Due to this correlation, a deconfined phase can intervene between a weak-coupling center-symmetric phase at small L and a strong-coupling one at large L. Whether this happens or not depends on the ratio of the dynamical fermion mass to the energy scale of the Yang-Mills theory. Implication of this possibility for resurgence in gauge theories is briefly discussed. In an appendix, we study the index of the adjoint Dirac operator on ℝ³ × S¹ with twisted boundary conditions, which is important for semiclassical analysis of monopoles.

  10. On the formulation of sea-ice models. Part 2: Lessons from multi-year adjoint sea-ice export sensitivities through the Canadian Arctic Archipelago

    NASA Astrophysics Data System (ADS)

    Heimbach, Patrick; Menemenlis, Dimitris; Losch, Martin; Campin, Jean-Michel; Hill, Chris

    The adjoint of an ocean general circulation model is at the heart of the ocean state estimation system of the Estimating the Circulation and Climate of the Ocean (ECCO) project. As part of an ongoing effort to extend ECCO to a coupled ocean/sea-ice estimation system, a dynamic and thermodynamic sea-ice model has been developed for the Massachusetts Institute of Technology general circulation model (MITgcm). One key requirement is the ability to generate, by means of automatic differentiation (AD), tangent linear (TLM) and adjoint (ADM) model code for the coupled MITgcm ocean/sea-ice system. This second part of a two-part paper describes aspects of the adjoint model. The adjoint ocean and sea-ice model is used to calculate transient sensitivities of solid (ice and snow) freshwater export through Lancaster Sound in the Canadian Arctic Archipelago (CAA). The adjoint state provides a complementary view of the dynamics. In particular, the transient, multi-year sensitivity patterns reflect dominant pathways and propagation timescales through the CAA as resolved by the model, thus shedding light on causal relationships, in the model, across the Archipelago. The computational cost of inferring such causal relationships from forward model diagnostics alone would be prohibitive. The role of the exact model trajectory around which the adjoint is calculated (and therefore of the exactness of the adjoint) is exposed through calculations using free-slip vs. no-slip lateral boundary conditions. Effective ice thickness, sea surface temperature, and precipitation sensitivities are discussed in detail as examples of the coupled sea-ice/ocean and atmospheric forcing control space. To test the reliability of the adjoint, finite-difference perturbation experiments were performed for each of these elements and the cost perturbations were compared to those "predicted" by the adjoint. Overall, remarkable qualitative and quantitative agreement is found. In particular, the adjoint correctly
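    The finite-difference verification described above can be expressed generically as a gradient check: adjoint-derived sensitivities are compared, component by component, against central-difference cost perturbations. The sketch below uses a toy scalar cost function in Python; it is not the MITgcm/ECCO machinery, and the function names are illustrative assumptions.

```python
import numpy as np

def gradient_check(cost, adjoint_gradient, x, eps=1e-6):
    """Compare adjoint-predicted sensitivities against central finite differences,
    one control-vector component at a time."""
    g_adj = adjoint_gradient(x)
    g_fd = np.zeros_like(x)
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        g_fd[i] = (cost(x + dx) - cost(x - dx)) / (2.0 * eps)
    return g_adj, g_fd

# Toy example: cost J(x) = sum(sin(x)), whose adjoint gradient is cos(x).
cost = lambda x: np.sum(np.sin(x))
adj = lambda x: np.cos(x)
g_adj, g_fd = gradient_check(cost, adj, np.array([0.1, 0.5, 1.0]))
print(np.max(np.abs(g_adj - g_fd)))  # ~1e-10, i.e. good agreement
```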

  11. Development of an adjoint model of GRAPES-CUACE and its application in tracking influential haze source areas in north China

    NASA Astrophysics Data System (ADS)

    An, Xing Qin; Xian Zhai, Shi; Jin, Min; Gong, Sunling; Wang, Yu

    2016-06-01

    The aerosol adjoint module of the atmospheric chemical modeling system GRAPES-CUACE (Global-Regional Assimilation and Prediction System coupled with the CMA Unified Atmospheric Chemistry Environment) is constructed based on the adjoint theory. This includes the development and validation of the tangent linear and the adjoint models of the three parts involved in the GRAPES-CUACE aerosol module: CAM (Canadian Aerosol Module), interface programs that connect GRAPES and CUACE, and the aerosol transport processes that are embedded in GRAPES. Meanwhile, strict mathematical validation schemes for the tangent linear and the adjoint models are implemented for all input variables. After each part of the module and the assembled tangent linear and adjoint models is verified, the adjoint model of the GRAPES-CUACE aerosol is developed and used in a black carbon (BC) receptor-source sensitivity analysis to track influential haze source areas in north China. The sensitivity of the average BC concentration over Beijing at the highest concentration time point (referred to as the Objective Function) is calculated with respect to the BC amount emitted over the Beijing-Tianjin-Hebei region. Four types of regions are selected based on the administrative division or the sensitivity coefficient distribution. The adjoint sensitivity results are then used to quantify the effect of reducing the emission sources at different time intervals over different regions. It is indicated that the more influential regions (with relatively larger sensitivity coefficients) do not necessarily correspond to the administrative regions. Instead, the influence per unit area of the sensitivity selected regions is greater. Therefore, controlling the most influential regions during critical time intervals based on the results of the adjoint sensitivity analysis is much more efficient than controlling administrative regions during an experimental time period.
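
    As an illustration of how such receptor-source sensitivities can be turned into a regional ranking, the sketch below weights adjoint coefficients by the emissions in each candidate region and normalizes by area; the region masks, fields, and grid are placeholders, not the study's actual domains:

```python
import numpy as np

def rank_source_regions(sens, emis, masks, cell_area_km2):
    """Rank candidate control regions using adjoint sensitivity coefficients.

    sens  : dJ/dE field (objective response per unit emission), shape (ny, nx)
    emis  : emission field, same shape
    masks : {region name: boolean mask}; administrative or sensitivity-selected
    Returns, per region, the first-order objective reduction from zeroing its
    emissions, in total and per unit area (all inputs are illustrative)."""
    out = {}
    for name, m in masks.items():
        dJ = float(np.sum(sens[m] * emis[m]))
        out[name] = (dJ, dJ / (m.sum() * cell_area_km2))
    return dict(sorted(out.items(), key=lambda kv: kv[1][0], reverse=True))

# tiny synthetic example
rng = np.random.default_rng(0)
sens, emis = rng.random((4, 4)), np.ones((4, 4))
masks = {"north": np.zeros((4, 4), bool), "south": np.zeros((4, 4), bool)}
masks["north"][:2, :] = True
masks["south"][2:, :] = True
print(rank_source_regions(sens, emis, masks, cell_area_km2=2500.0))
```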

  12. Variational data assimilation with a semi-Lagrangian semi-implicit global shallow-water equation model and its adjoint

    NASA Technical Reports Server (NTRS)

    Li, Y.; Navon, I. M.; Courtier, P.; Gauthier, P.

    1993-01-01

    An adjoint model is developed for variational data assimilation using the 2D semi-Lagrangian semi-implicit (SLSI) shallow-water equation global model of Bates et al. with special attention being paid to the linearization of the interpolation routines. It is demonstrated that with larger time steps the limit of the validity of the tangent linear model will be curtailed due to the interpolations, especially in regions where sharp gradients in the interpolated variables coupled with strong advective wind occur, a synoptic situation common in the high latitudes. This effect is particularly evident near the pole in the Northern Hemisphere during the winter season. Variational data assimilation experiments of 'identical twin' type with observations available only at the end of the assimilation period perform well with this adjoint model. It is confirmed that the computational efficiency of the semi-Lagrangian scheme is preserved during the minimization process, related to the variational data assimilation procedure.
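
    A standard way to verify such tangent linear and adjoint code is sketched below, with a toy two-variable map standing in for one SLSI time step (illustrative only): the tangent linear test compares nonlinear perturbation growth against the TLM prediction as the perturbation size shrinks, and the adjoint (dot-product) test verifies <M dx, dy> = <dx, M* dy> to machine precision.

```python
import numpy as np

def tlm_test(forward, tlm, x, dx, eps_list=(1e-2, 1e-4, 1e-6)):
    """Relative error between nonlinear perturbation and tangent-linear prediction."""
    for eps in eps_list:
        lhs = forward(x + eps * dx) - forward(x)
        rhs = eps * tlm(x, dx)
        print(f"eps={eps:.0e}  rel. error = {np.linalg.norm(lhs - rhs) / np.linalg.norm(rhs):.3e}")

def adjoint_test(tlm, adj, x, dx, dy):
    """<M dx, dy> should equal <dx, M* dy> to machine precision."""
    a = np.dot(tlm(x, dx), dy)
    b = np.dot(dx, adj(x, dy))
    print(f"<M dx, dy> = {a:.15e}   <dx, M* dy> = {b:.15e}")

# toy nonlinear step standing in for the model (illustrative only)
A = np.array([[0.9, 0.1], [-0.2, 1.05]])
forward = lambda x: A @ x + 0.01 * x**2
tlm     = lambda x, dx: A @ dx + 0.02 * x * dx     # Jacobian action
adj     = lambda x, dy: A.T @ dy + 0.02 * x * dy   # transposed Jacobian action
x0, dx0, dy0 = np.ones(2), np.array([1.0, -1.0]), np.array([0.3, 0.7])
tlm_test(forward, tlm, x0, dx0)
adjoint_test(tlm, adj, x0, dx0, dy0)
```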

  13. Adjoint free four-dimensional variational data assimilation for a storm surge model of the German North Sea

    NASA Astrophysics Data System (ADS)

    Zheng, Xiangyang; Mayerle, Roberto; Xing, Qianguo; Fernández Jaramillo, José Manuel

    2016-08-01

    In this paper, a data assimilation scheme based on the adjoint-free four-dimensional variational (4DVar) method is applied to an existing storm surge model of the German North Sea. To avoid the need for an adjoint model, an ensemble-like method that explicitly represents the tangent linear equation is adopted. Results of twin experiments show that the method is able to recover the contaminated low-dimensional model parameters to their true values. The data assimilation scheme was applied to a severe storm surge event that occurred in the North Sea on 5 December 2013. By adjusting the wind drag coefficient, the predictive ability of the model increased significantly. Preliminary experiments show that a further increase in predictive ability is attained by narrowing the data assimilation time window.
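
    A minimal sketch of the adjoint-free idea, with hypothetical names and without the paper's specific storm-surge operators: the tangent linear map is represented by an ensemble of perturbed forward runs, and the 4DVar increment is found in the low-dimensional space spanned by those perturbations.

```python
import numpy as np

def ensemble_4dvar_increment(forecast, x0, obs, R_inv, n_ens=20, sigma=0.1, seed=0):
    """Adjoint-free 4DVar sketch: the tangent linear map is represented by an
    ensemble of perturbed forward runs and the increment is sought in the space
    spanned by those perturbations (a generic sketch, not the paper's exact scheme).

    forecast(x) -> model equivalent of the observations over the window; obs has
    the same shape, and R_inv is the inverse observation-error covariance."""
    rng = np.random.default_rng(seed)
    base = forecast(x0)
    X = sigma * rng.standard_normal((n_ens, x0.size))              # control perturbations
    Y = np.stack([(forecast(x0 + dx) - base).ravel() for dx in X]) # their linearized obs effect
    d = (obs - base).ravel()                                       # innovation vector
    # minimize J(w) = 0.5 w.w + 0.5 (Y.T w - d)^T R_inv (Y.T w - d) in ensemble space
    w = np.linalg.solve(np.eye(n_ens) + Y @ R_inv @ Y.T, Y @ R_inv @ d)
    return X.T @ w                                                 # increment to the control x0
```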

  14. Adjoint-Based Design of Rotors using the Navier-Stokes Equations in a Noninertial Reference Frame

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Jones, William T.

    2009-01-01

    Optimization of rotorcraft flowfields using an adjoint method generally requires a time-dependent implementation of the equations. The current study examines an intermediate approach in which a subset of rotor flowfields are cast as steady problems in a noninertial reference frame. This technique permits the use of an existing steady-state adjoint formulation with minor modifications to perform sensitivity analyses. The formulation is valid for isolated rigid rotors in hover or where the freestream velocity is aligned with the axis of rotation. Discrete consistency of the implementation is demonstrated using comparisons with a complex-variable technique, and a number of single- and multi-point optimizations for the rotorcraft figure of merit function are shown for varying blade collective angles. Design trends are shown to remain consistent as the grid is refined.

  15. Adjoint-Based Design of Rotors Using the Navier-Stokes Equations in a Noninertial Reference Frame

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Jones, William T.

    2010-01-01

    Optimization of rotorcraft flowfields using an adjoint method generally requires a time-dependent implementation of the equations. The current study examines an intermediate approach in which a subset of rotor flowfields are cast as steady problems in a noninertial reference frame. This technique permits the use of an existing steady-state adjoint formulation with minor modifications to perform sensitivity analyses. The formulation is valid for isolated rigid rotors in hover or where the freestream velocity is aligned with the axis of rotation. Discrete consistency of the implementation is demonstrated by using comparisons with a complex-variable technique, and a number of single- and multipoint optimizations for the rotorcraft figure of merit function are shown for varying blade collective angles. Design trends are shown to remain consistent as the grid is refined.

  16. The study of single station inverting the sea surface current by HF ground wave radar based on adjoint assimilation technology

    NASA Astrophysics Data System (ADS)

    Han, Shuzong; Yang, Hua; Xue, Wenhu; Wang, Xingchi

    2016-10-01

    This paper introduces assimilation technology into an ocean dynamics model and discusses the feasibility of inverting the sea surface current in the detection zone by assimilating the radial current velocities detected by a single-station HF ground wave radar into the ocean dynamics model. Based on adjoint assimilation and the POM model, the paper successfully inverts the sea surface current from single-station HF ground wave radar data in the Zhoushan sea area. The single-station HF radar inversion results are also compared with the bistatic HF radar composite results and with fixed-point measurements from an Aanderaa current meter. The error analysis shows that retrieving flow velocity and flow direction from a single-station HF radar based on adjoint assimilation and the POM model is viable, and that the retrieved data have a high correlation and consistency with the flow field observed by HF radar.

  17. Development and application of the WRFPLUS-Chem online chemistry adjoint and WRFDA-Chem assimilation system

    NASA Astrophysics Data System (ADS)

    Guerrette, J. J.; Henze, D. K.

    2015-06-01

    Here we present the online meteorology and chemistry adjoint and tangent linear model, WRFPLUS-Chem (Weather Research and Forecasting plus chemistry), which incorporates modules to treat boundary layer mixing, emission, aging, dry deposition, and advection of black carbon aerosol. We also develop land surface and surface layer adjoints to account for coupling between radiation and vertical mixing. Model performance is verified against finite difference derivative approximations. A second-order checkpointing scheme is created to reduce computational costs and enable simulations longer than 6 h. The adjoint is coupled to WRFDA-Chem, in order to conduct a sensitivity study of anthropogenic and biomass burning sources throughout California during the 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) field campaign. A cost-function weighting scheme was devised to reduce the impact of statistically insignificant residual errors in future inverse modeling studies. Results of the sensitivity study show that, for this domain and time period, anthropogenic emissions are overpredicted, while wildfire emission error signs vary spatially. We consider the diurnal variation in emission sensitivities to determine at what time sources should be scaled up or down. Also, adjoint sensitivities for two choices of land surface model (LSM) indicate that emission inversion results would be sensitive to forward model configuration. The tools described here are the first step in conducting four-dimensional variational data assimilation in a coupled meteorology-chemistry model, which will potentially provide new constraints on aerosol precursor emissions and their distributions. Such analyses will be invaluable to assessments of particulate matter health and climate impacts.

  18. Calculating Air Quality and Climate Co-Benefits Metrics from Adjoint Elasticities in Chemistry-Climate Models

    NASA Astrophysics Data System (ADS)

    Spak, S.; Henze, D. K.; Carmichael, G. R.

    2013-12-01

    The science and policy communities both need common metrics that clearly, comprehensively, and intuitively communicate the relative sensitivities of air quality and climate to emissions control strategies, include emissions and process uncertainties, and minimize the range of error that is transferred to the metric. This is particularly important because most emissions control policies impact multiple short-lived climate forcing agents, and non-linear climate and health responses in space and time limit the accuracy and policy value of simple emissions-based calculations. Here we describe and apply new second-order elasticity metrics to support the direct comparison of emissions control policies for air quality and health co-benefits analyses using adjoint chemical transport and chemistry-climate models. Borrowing an econometric concept, the simplest elasticities in the atmospheric system are the percentage changes in concentrations due to a percentage change in the emissions. We propose a second-order elasticity metric, the Emissions Reduction Efficiency, which supports comparison across compounds, to long-lived climate forcing agents like CO2, and to other air quality impacts, at any temporal or spatial scale. These adjoint-based metrics (1) possess a single uncertainty range; (2) allow for the inclusion of related health and other impacts effects within the same framework; (3) take advantage of adjoint and forward sensitivity models; and (4) are easily understood. Using global simulations with the adjoint of GEOS-Chem, we apply these metrics to identify spatial and sectoral variability in the climate and health co-benefits of sectoral emissions controls on black carbon, sulfur dioxide, and PM2.5. We find spatial gradients in optimal control strategies on every continent, along with differences among megacities.
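
    The simplest of these metrics, the first-order elasticity, is a one-line computation once an adjoint sensitivity is available; the numbers below are purely hypothetical and serve only to show the normalization.

```python
def elasticity(dJ_dE, E, J):
    """First-order elasticity: percentage change in a response J (exposure,
    mortality, forcing, ...) per percentage change in an emission E, computed
    from an adjoint sensitivity dJ/dE."""
    return dJ_dE * E / J

# illustrative (hypothetical) numbers: two responses to the same sectoral BC source
E_bc = 120.0                                                # Gg/yr from the sector
eps_deaths  = elasticity(dJ_dE=0.05,   E=E_bc, J=850.0)     # premature-death response
eps_forcing = elasticity(dJ_dE=4.0e-4, E=E_bc, J=0.12)      # regional forcing response, W/m2
print(f"elasticities: deaths {eps_deaths:.3e}, forcing {eps_forcing:.3e} (% per % emission change)")
```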

  19. Estimates of black carbon emissions in the western United States using the GEOS-Chem adjoint model

    NASA Astrophysics Data System (ADS)

    Mao, Y. H.; Li, Q. B.; Henze, D. K.; Jiang, Z.; Jones, D. B. A.; Kopacz, M.; He, C.; Qi, L.; Gao, M.; Hao, W.-M.; Liou, K.-N.

    2015-07-01

    We estimate black carbon (BC) emissions in the western United States for July-September 2006 by inverting surface BC concentrations from the Interagency Monitoring of Protected Visual Environments (IMPROVE) network using a global chemical transport model (GEOS-Chem) and its adjoint. Our best estimate of the BC emissions is 49.9 Gg at 2° × 2.5° (a factor of 2.1 increase) and 47.3 Gg at 0.5° × 0.667° (a factor of 1.9 increase). Model results now capture the observed major fire episodes with substantial bias reductions (35% at 2° × 2.5° and 15% at 0.5° × 0.667°). The emissions are 20-50% larger than those from our earlier analytical inversions (Mao et al., 2014). The discrepancy is especially drastic in the partitioning of anthropogenic versus biomass burning emissions. The August biomass burning BC emissions are 4.6-6.5 Gg and anthropogenic BC emissions 8.6-12.8 Gg, varying with the model resolution, error specifications, and subsets of observations used. On average both anthropogenic and biomass burning emissions in the adjoint inversions increase 2-fold relative to the respective a priori emissions, in distinct contrast to the halving of the anthropogenic and tripling of the biomass burning emissions in the analytical inversions. We attribute these discrepancies to the inability of the adjoint inversion system, with limited spatiotemporal coverage of the IMPROVE observations, to effectively distinguish collocated anthropogenic and biomass burning emissions on model grid scales. This calls for concurrent measurements of other tracers of biomass burning and fossil fuel combustion (e.g., carbon monoxide and carbon isotopes). We find that the adjoint inversion system, as is, has sufficient information content to constrain the total emissions of BC on the model grid scales.
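
    The structure of such an adjoint-based inversion can be sketched with a toy linear forward operator in place of GEOS-Chem; the gradient term marked in the comments is what the adjoint model supplies in the real system, and the fixed-step optimizer here is deliberately simplistic (an operational system would use e.g. L-BFGS).

```python
import numpy as np

def fourdvar_emission_inversion(K, y_obs, e_prior, So_inv, Sa_inv, n_iter=200, lr=0.05):
    """Toy linear emission inversion in scaling-factor space.

    y = K e is a dense stand-in for the chemical transport model; for a real
    CTM the term K.T @ (So_inv @ resid) is exactly what the adjoint provides.
    So_inv and Sa_inv are inverse observation- and prior-error covariances."""
    s = np.ones_like(e_prior)                       # scaling factors, a priori = 1
    for _ in range(n_iter):
        resid = K @ (s * e_prior) - y_obs
        grad = e_prior * (K.T @ (So_inv @ resid)) + Sa_inv @ (s - 1.0)
        s -= lr * grad                              # plain gradient descent, illustration only
    return s * e_prior                              # posterior emissions
```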

  20. Sensitivity analysis of a model of CO₂ exchange in tundra ecosystems by the adjoint method

    SciTech Connect

    Waelbroeck, C.; Louis, J.F.

    1995-02-20

    A model of net primary production (NPP), decomposition, and nitrogen cycling in tundra ecosystems has been developed. The adjoint technique is used to study the sensitivity of the computed annual net CO₂ flux to perturbations in initial conditions, climatic inputs, and the model's main parameters describing current seasonal CO₂ exchange in wet sedge tundra at Barrow, Alaska. The results show that net CO₂ flux is more sensitive to decomposition parameters than to NPP parameters. This underlines the fact that in nutrient-limited ecosystems, decomposition drives net CO₂ exchange by controlling mineralization of main nutrients. The results also indicate that the short-term (1 year) response of wet sedge tundra to CO₂-induced warming is a significant increase in CO₂ emission, creating a positive feedback to atmospheric CO₂ accumulation. However, a cloudiness increase during the same year can severely alter this response and lead to either a slight decrease or a strong increase in emitted CO₂, depending on its exact timing. These results demonstrate that the adjoint method is well suited to study systems encountering regime changes, as a single run of the adjoint model provides sensitivities of the net CO₂ flux to perturbations in all parameters and variables at any time of the year. Moreover, it is shown that large errors due to the presence of thresholds can be avoided by first delimiting the range of applicability of the adjoint results. 38 refs., 10 figs., 7 tabs.

  1. A criterion of the continuous spectrum for elasticity and other self-adjoint systems on sharp peak-shaped domains

    NASA Astrophysics Data System (ADS)

    Nazarov, Sergey A.

    2007-12-01

    The spectra of the elasticity and piezoelectricity systems for a solid with a sharp peak point on a traction-free boundary are not discrete. An algebraic criterion for a non-empty continuous spectrum is established for the Neumann problem for rather arbitrary formally self-adjoint elliptic systems of second-order differential equations on a sharp peak-shaped domain. To cite this article: S.A. Nazarov, C. R. Mecanique 335 (2007).

  2. Adjoint-based computation of U.S. nationwide ozone exposure isopleths

    NASA Astrophysics Data System (ADS)

    Ashok, Akshay; Barrett, Steven R. H.

    2016-05-01

    Population exposure to daily maximum ozone is associated with an increased risk of premature mortality, and efforts to mitigate these impacts involve reducing emissions of nitrogen oxides (NOx) and volatile organic compounds (VOCs). We quantify the dependence of U.S. national exposure to annually averaged daily maximum ozone on ambient VOC and NOx concentrations through ozone exposure isopleths, developed using emissions sensitivities from the adjoint of the GEOS-Chem air quality model for 2006. We develop exposure isopleths for all locations within the contiguous US and derive metrics based on the isopleths that quantify the impact of emissions on national ozone exposure. This work is the first to create ozone exposure isopleths using adjoint sensitivities and at a large scale. We find that across the US, 29% of locations experience VOC-limited conditions (where increased NOx emissions lower ozone) during 51% of the year on average. VOC-limited conditions are approximately evenly distributed diurnally and occur more frequently during the fall and winter months (67% of the time) than in the spring and summer (37% of the time). The VOC/NOx ratio of the ridge line on the isopleth diagram (denoting a local maximum in ozone exposure with respect to NOx concentrations) is 9.2 ppbC/ppb on average across grid cells that experience VOC-limited conditions and 7.9, 10.1 and 6.7 ppbC/ppb at the three most populous US cities of New York, Los Angeles and Chicago, respectively. Emissions that are ozone exposure-neutral during VOC-limited exposure conditions result in VOC/NOx concentration ratios of 0.63, 1.61 and 0.72 ppbC/ppb at each of the three US cities respectively, and between 0.01 and 1.91 ppbC/ppb at other locations. The sensitivity of national ozone exposure to NOx and VOC emissions is found to be highest near major cities in the US. Together, this information can be used to assess the effectiveness of NOx and VOC emission reductions on mitigating ozone exposure in the
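
    The regime classification underlying these statistics reduces, in its simplest form, to the sign of the adjoint sensitivity of exposure to NOx emissions; a minimal sketch with made-up sensitivities:

```python
import numpy as np

def classify_regime(sens_nox):
    """Classify each cell's ozone-exposure regime from the adjoint NOx sensitivity:
    a negative d(exposure)/d(E_NOx) means added NOx lowers ozone exposure, i.e.
    the cell is VOC-limited (a simplified, sign-based criterion)."""
    return np.where(sens_nox < 0.0, "VOC-limited", "NOx-limited")

sens_nox = np.array([-0.4, 0.8, -0.1, 1.2])        # illustrative adjoint sensitivities
regime = classify_regime(sens_nox)
print(regime, "->", 100.0 * np.mean(regime == "VOC-limited"), "% VOC-limited")
```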

  3. NESTLE: Few-group neutron diffusion equation solver utilizing the nodal expansion method for eigenvalue, adjoint, fixed-source steady-state and transient problems

    SciTech Connect

    Turinsky, P.J.; Al-Chalabi, R.M.K.; Engrand, P.; Sarsour, H.N.; Faure, F.X.; Guo, W.

    1994-06-01

    NESTLE is a FORTRAN77 code that solves the few-group neutron diffusion equation utilizing the Nodal Expansion Method (NEM). NESTLE can solve eigenvalue (criticality), eigenvalue adjoint, and external fixed-source steady-state problems, as well as external fixed-source or eigenvalue-initiated transient problems. The code name NESTLE originates from the multi-problem solution capability, abbreviating Nodal Eigenvalue, Steady-state, Transient, Le core Evaluator. The eigenvalue problem allows criticality searches to be completed, and the external fixed-source steady-state problem can search to achieve a specified power level. Transient problems model delayed neutrons via precursor groups. Several core properties can be input as time dependent. Two or four energy groups can be utilized, with all energy groups being thermal groups (i.e., with upscatter) if desired. Core geometries modelled include Cartesian and hexagonal. Three-, two- and one-dimensional models can be utilized with various symmetries. The non-linear iterative strategy associated with the NEM method is employed. An advantage of the non-linear iterative strategy is that NESTLE can be utilized to solve either the nodal or the finite difference method representation of the few-group neutron diffusion equation.
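
    The eigenvalue (criticality) problem that such codes solve is conventionally handled by power (source) iteration; the sketch below uses a toy one-group, 1-D finite-difference discretization rather than NESTLE's nodal expansion method, and all cross sections are illustrative.

```python
import numpy as np

def k_eigenvalue_power_iteration(D=1.0, sigma_a=0.07, nu_sigma_f=0.08,
                                 L=100.0, n=200, tol=1e-8, max_outer=500):
    """Toy 1-D, one-group diffusion k-eigenvalue solve by power (source) iteration."""
    h = L / n
    main = np.full(n, 2.0 * D / h**2 + sigma_a)      # loss operator, zero-flux boundaries
    off = np.full(n - 1, -D / h**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    phi, k = np.ones(n), 1.0
    for _ in range(max_outer):
        phi_new = np.linalg.solve(A, nu_sigma_f * phi / k)             # fission source sweep
        k_new = k * np.sum(nu_sigma_f * phi_new) / np.sum(nu_sigma_f * phi)
        if abs(k_new - k) < tol:
            return k_new, phi_new / np.linalg.norm(phi_new)
        k, phi = k_new, phi_new
    return k, phi / np.linalg.norm(phi)

k_eff, flux = k_eigenvalue_power_iteration()
print(f"k_eff ≈ {k_eff:.5f}")
```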

  4. On a Time-Space Operator (and other Non-Self-Adjoint Operators) for Observables in QM and QFT

    NASA Astrophysics Data System (ADS)

    Recami, Erasmo; Zamboni-Rached, Michel; Licata, Ignazio

    The aim of this paper is to show the possible significance, and usefulness, of various non-self-adjoint operators for suitable Observables in non-relativistic and relativistic quantum mechanics (QM), and in quantum electrodynamics. More specifically, this work deals with: (i) the Hermitian (but not self-adjoint) Time operator in non-relativistic QM and in quantum electrodynamics; (ii) idem, the introduction of Time and Space operators; and (iii) the problem of the four-position and four-momentum operators, each one with its Hermitian and anti-Hermitian parts, for relativistic spin-zero particles. Afterwards, other physical applications of non-self-adjoint (and even non-Hermitian) operators are briefly discussed. We mention how non-Hermitian operators can indeed be used in physics [as it was done, elsewhere, for describing Unstable States]; and some considerations are added on the cases of the nuclear optical potential, of quantum dissipation, and in particular of an approach to the measurement problem in QM in terms of a chronon. This paper is largely based on work developed, along the years, in collaboration with V.S. Olkhovsky, and, in smaller parts, with P. Smrz, with R.H.A. Farias, and with S.P. Maydanyuk.

  5. High-resolution mapping of sources contributing to urban air pollution using adjoint sensitivity analysis: benzene and diesel black carbon.

    PubMed

    Bastien, Lucas A J; McDonald, Brian C; Brown, Nancy J; Harley, Robert A

    2015-06-16

    The adjoint of the Community Multiscale Air Quality (CMAQ) model at 1 km horizontal resolution is used to map emissions that contribute to ambient concentrations of benzene and diesel black carbon (BC) in the San Francisco Bay area. Model responses of interest include population-weighted average concentrations for three highly polluted receptor areas and the entire air basin. We consider both summer (July) and winter (December) conditions. We introduce a novel approach to evaluate adjoint sensitivity calculations that complements existing methods. Adjoint sensitivities to emissions are found to be accurate to within a few percent, except at some locations associated with large sensitivities to emissions. Sensitivity of model responses to emissions is larger in winter, reflecting weaker atmospheric transport and mixing. The contribution of sources located within each receptor area to the same receptor's air pollution burden increases from 38-74% in summer to 56-85% in winter. The contribution of local sources is higher for diesel BC (62-85%) than for benzene (38-71%), reflecting the difference in these pollutants' atmospheric lifetimes. Morning (6-9am) and afternoon (4-7 pm) commuting-related emissions dominate region-wide benzene levels in winter (14 and 25% of the total response, respectively). In contrast, afternoon rush hour emissions do not contribute significantly in summer. Similar morning and afternoon peaks in sensitivity to emissions are observed for the BC response; these peaks are shifted toward midday because most diesel truck traffic occurs during off-peak hours. PMID:26001097

  6. High-resolution mapping of sources contributing to urban air pollution using adjoint sensitivity analysis: benzene and diesel black carbon.

    PubMed

    Bastien, Lucas A J; McDonald, Brian C; Brown, Nancy J; Harley, Robert A

    2015-06-16

    The adjoint of the Community Multiscale Air Quality (CMAQ) model at 1 km horizontal resolution is used to map emissions that contribute to ambient concentrations of benzene and diesel black carbon (BC) in the San Francisco Bay area. Model responses of interest include population-weighted average concentrations for three highly polluted receptor areas and the entire air basin. We consider both summer (July) and winter (December) conditions. We introduce a novel approach to evaluate adjoint sensitivity calculations that complements existing methods. Adjoint sensitivities to emissions are found to be accurate to within a few percent, except at some locations associated with large sensitivities to emissions. Sensitivity of model responses to emissions is larger in winter, reflecting weaker atmospheric transport and mixing. The contribution of sources located within each receptor area to the same receptor's air pollution burden increases from 38-74% in summer to 56-85% in winter. The contribution of local sources is higher for diesel BC (62-85%) than for benzene (38-71%), reflecting the difference in these pollutants' atmospheric lifetimes. Morning (6-9am) and afternoon (4-7 pm) commuting-related emissions dominate region-wide benzene levels in winter (14 and 25% of the total response, respectively). In contrast, afternoon rush hour emissions do not contribute significantly in summer. Similar morning and afternoon peaks in sensitivity to emissions are observed for the BC response; these peaks are shifted toward midday because most diesel truck traffic occurs during off-peak hours.

  7. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  8. Development of a High-Order Space-Time Matrix-Free Adjoint Solver

    NASA Technical Reports Server (NTRS)

    Ceze, Marco A.; Diosady, Laslo T.; Murman, Scott M.

    2016-01-01

    The growth in computational power and algorithm development in the past few decades has granted the science and engineering community the ability to simulate flows over complex geometries, thus making Computational Fluid Dynamics (CFD) tools indispensable in analysis and design. Currently, one of the pacing items limiting the utility of CFD for general problems is the prediction of unsteady turbulent flows [1-3]. Reynolds-averaged Navier-Stokes (RANS) methods, which predict a time-invariant mean flowfield, struggle to provide consistent predictions when encountering even mild separation, such as the side-of-body separation at a wing-body junction. NASA's Transformative Tools and Technologies project is developing both numerical methods and physical modeling approaches to improve the prediction of separated flows. A major focus of this effort is efficient methods for resolving the unsteady fluctuations occurring in these flows to provide valuable engineering data of the time-accurate flow field for buffet analysis, vortex shedding, etc. This approach encompasses unsteady RANS (URANS), large-eddy simulations (LES), and hybrid LES-RANS approaches such as Detached Eddy Simulations (DES). These unsteady approaches are inherently more expensive than traditional engineering RANS approaches, hence every effort to mitigate this cost must be leveraged. Arguably, the most cost-effective approach to improve the efficiency of unsteady methods is the optimal placement of the spatial and temporal degrees of freedom (DOF) using solution-adaptive methods.

  9. Multi-point Adjoint-Based Design of Tilt-Rotors in a Noninertial Reference Frame

    NASA Technical Reports Server (NTRS)

    Jones, William T.; Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Acree, Cecil W.

    2014-01-01

    Optimization of tilt-rotor systems requires the consideration of performance at multiple design points. In the current study, an adjoint-based optimization of a tilt-rotor blade is considered. The optimization seeks to simultaneously maximize the rotorcraft figure of merit in hover and the propulsive efficiency in airplane-mode for a tilt-rotor system. The design is subject to minimum thrust constraints imposed at each design point. The rotor flowfields at each design point are cast as steady-state problems in a noninertial reference frame. Geometric design variables used in the study to control blade shape include: thickness, camber, twist, and taper represented by as many as 123 separate design variables. Performance weighting of each operational mode is considered in the formulation of the composite objective function, and a build up of increasing geometric degrees of freedom is used to isolate the impact of selected design variables. In all cases considered, the resulting designs successfully increase both the hover figure of merit and the airplane-mode propulsive efficiency for a rotor designed with classical techniques.

  10. Anelastic sensitivity kernels with parsimonious storage for adjoint tomography and full waveform inversion

    NASA Astrophysics Data System (ADS)

    Komatitsch, Dimitri; Xie, Zhinan; Bozdağ, Ebru; Sales de Andrade, Elliott; Peter, Daniel; Liu, Qinya; Tromp, Jeroen

    2016-06-01

    We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.
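
    The parsimonious-storage idea can be illustrated with the generic checkpoint-and-recompute pattern below (placeholder step and adjoint-step callables, not the authors' solver-specific time-loop reordering): only sparse snapshots are kept during the forward sweep, and each segment is rebuilt into a small memory buffer during the reverse sweep.

```python
import numpy as np

def adjoint_with_checkpointing(step, adjoint_step, u0, n_steps, buffer_every=50):
    """Generic checkpoint-and-recompute pattern for time-domain adjoints.

    step(u, it) and adjoint_step(lam, u_forward, it) stand in for a real
    wave-propagation kernel and are assumed to return new state arrays."""
    checkpoints = {0: u0.copy()}
    u = u0.copy()
    for it in range(n_steps):                      # forward sweep, sparse snapshots only
        u = step(u, it)
        if (it + 1) % buffer_every == 0:
            checkpoints[it + 1] = u.copy()
    keys = sorted(checkpoints)
    if keys[-1] != n_steps:                        # last segment may be shorter
        keys.append(n_steps)
    lam = np.zeros_like(u0)
    for seg_start, seg_end in reversed(list(zip(keys[:-1], keys[1:]))):
        buf = [checkpoints[seg_start].copy()]      # rebuild this segment's states in memory
        v = buf[0]
        for it in range(seg_start, seg_end - 1):
            v = step(v, it)
            buf.append(v.copy())
        for it in range(seg_end - 1, seg_start - 1, -1):   # reverse through the segment
            lam = adjoint_step(lam, buf[it - seg_start], it)
    return lam
```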

  11. Anelastic sensitivity kernels with parsimonious storage for adjoint tomography and full waveform inversion

    NASA Astrophysics Data System (ADS)

    Komatitsch, Dimitri; Xie, Zhinan; Bozdaǧ, Ebru; Sales de Andrade, Elliott; Peter, Daniel; Liu, Qinya; Tromp, Jeroen

    2016-09-01

    We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.

  12. Improving NOx cap-and-trade system with adjoint-based emission exchange rates.

    PubMed

    Mesbah, S Morteza; Hakami, Amir; Schott, Stephan

    2012-11-01

    Cap-and-trade programs have proven to be effective instruments for achieving environmental goals while incurring minimum cost. The nature of the pollutant, however, affects the design of these programs. NOx, an ozone precursor, is a nonuniformly mixed pollutant with a short atmospheric lifetime. NOx cap-and-trade programs in the U.S. are successful in reducing total NOx emissions but may result in suboptimal environmental performance because location-specific ozone formation potentials are neglected. In this paper, the current NOx cap-and-trade system is contrasted to a hypothetical NOx trading policy with sensitivity-based exchange rates. Location-specific exchange rates, calculated through adjoint sensitivity analysis, are combined with constrained optimization for prediction of NOx emissions trading behavior and post-trade ozone concentrations. The current and proposed policies are examined in a case study for 218 coal-fired power plants that participated in the NOx Budget Trading Program in 2007. We find that better environmental performance at negligibly higher system-wide abatement cost can be achieved through inclusion of emission exchange rates. Exposure-based exchange rates result in better environmental performance than those based on concentrations. PMID:23050674
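
    A stylized version of the sensitivity-weighted trading step can be written as a small constrained optimization; the emission, cost, and sensitivity numbers below are invented for illustration, and the quadratic abatement cost is an assumption rather than the paper's cost model.

```python
import numpy as np
from scipy.optimize import minimize

def exchange_rate_trading(e0, mac, sens, cap_damage):
    """Least-cost abatement under a damage-weighted (exchange-rate) cap.

    e0   : baseline NOx emissions per unit
    mac  : abatement cost slope per unit (total cost assumed 0.5*mac*a^2)
    sens : adjoint marginal damages (e.g. ozone exposure per ton) per unit"""
    rates = sens / sens.mean()                           # exchange rates vs. the average source
    cost = lambda a: np.sum(0.5 * mac * a**2)
    cons = ({"type": "ineq",
             "fun": lambda a: cap_damage - np.sum(rates * (e0 - a))},)
    res = minimize(cost, x0=0.5 * e0, bounds=[(0.0, e) for e in e0], constraints=cons)
    return res.x                                         # abatement per unit

e0, mac, sens = np.array([100.0, 80.0, 120.0]), np.array([2.0, 5.0, 1.0]), np.array([0.9, 0.3, 1.5])
print(exchange_rate_trading(e0, mac, sens, cap_damage=220.0))
```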

  13. Estimates of Asian dust sources using the adjoint of GEOS-Chem

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Park, R.; Ku, B.

    2011-12-01

    Soil dust aerosols, typically originating from northern China, southern Mongolia, and the Taklamakan desert in spring, have large impacts on human health, local visibility, air quality, and climate in Asia. Large uncertainty, however, exists in estimates of dust emissions in 3-D models. We develop the adjoint of dust modeling in a global chemical transport model, GEOS-Chem, using a four-dimensional variational method and apply it to obtain optimized dust sources over East Asia in April 2001 together with surface PM10 aerosol measurements from the Chinese ambient air pollution index, the Korean Ministry of Environment, and the Acid Deposition Monitoring Network. The optimized dust sources from the assimilation show a large decrease in dust emissions over the Gobi Desert. To evaluate the assimilated results, we compare simulated dust aerosol optical depths (AODs) using the optimized sources with the Total Ozone Mapping Spectrometer aerosol index and the Multi-angle Imaging Spectrometer AOD data. We find that the optimized sources result in much better agreement with the observations, especially an improved spatial distribution of the simulated AOD over East Asia.

  14. Focus point gauge mediation with incomplete adjoint messengers and gauge coupling unification

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Gautam; Yanagida, Tsutomu T.; Yokozaki, Norimi

    2015-10-01

    As the mass limits on supersymmetric particles are gradually pushed to higher values due to their continuing non-observation at the CERN LHC, looking for focus point regions in the supersymmetric parameter space, which show considerably reduced fine-tuning, is more important than ever. We explore this in the context of gauge mediated supersymmetry breaking with messengers transforming in the adjoint representation of the gauge group, namely, an octet of color SU(3) and a triplet of weak SU(2). A distinctive feature of this scenario is that the focus point is achieved by fixing a single combination of parameters in the messenger sector, which is invariant under the renormalization group evolution. Because of this invariance, the focus point behavior is well under control once the relevant parameters are fixed by a more fundamental theory. The observed Higgs boson mass is explained with a relatively mild fine-tuning Δ = 60-150. Interestingly, even in the presence of incomplete messenger multiplets of the SU(5) GUT group, the gauge couplings still unify perfectly, but at a scale which is one or two orders of magnitude above the conventional GUT scale. Because of this larger unification scale, the colored Higgs multiplets become too heavy to trigger proton decay at a rate larger than the experimentally allowed limit.

  15. Adjoint sensitivity analysis of thermoacoustic instability in a nonlinear Helmholtz solver

    NASA Astrophysics Data System (ADS)

    Juniper, Matthew; Magri, Luca

    2014-11-01

    Thermoacoustic instability is a persistent problem in aircraft and rocket engines. It occurs when heat release in the combustion chamber synchronizes with acoustic oscillations. It is always noisy and can sometimes result in catastrophic failure of the engine. Typically, the heat release from the flame is assumed to equal the acoustic velocity at a reference point multiplied by a spatially-varying function (the flame envelope) subject to a spatially-varying time delay. This models hydrodynamic perturbations convecting down the flame causing subsequent heat release perturbations. This creates an eigenvalue problem that is linear in the acoustic pressure but nonlinear in the complex frequency, omega. This can be solved as a sequence of linear eigenvalue problems in which the operators are updated with a new value of omega after each iteration. Adjoint methods find the sensitivity of each eigenmode to all the parameters simultaneously and are well suited to thermoacoustic problems because there are a few interesting eigenmodes but many influential parameters. The challenge here is to express the sensitivity of the eigenvalue at the final iteration to an arbitrary change in the parameters of the first iteration. This is a promising new technique for the control of thermoacoustics. European Research Council Grant Number 2590620.
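
    The nonlinear eigenvalue iteration and the adjoint-based sensitivity it enables can be sketched on a small dense matrix problem; the operator form (a frozen exponential time-delay term) and all numbers below are illustrative, not taken from the study's Helmholtz solver.

```python
import numpy as np

def solve_thermoacoustic_eig(A, C, n, tau, omega0, tol=1e-10, max_iter=100):
    """Fixed-point solution of the nonlinear eigenproblem
    L(omega) q = (A + n*exp(1j*omega*tau)*C - omega*I) q = 0, followed by the
    adjoint-based sensitivity d omega / d n."""
    omega = omega0
    for _ in range(max_iter):
        M = A + n * np.exp(1j * omega * tau) * C            # operator frozen at current omega
        vals, vecs = np.linalg.eig(M)
        k = np.argmin(np.abs(vals - omega))                  # track the mode nearest the guess
        omega_new, q = vals[k], vecs[:, k]
        if abs(omega_new - omega) < tol:
            omega = omega_new
            break
        omega = omega_new
    valsH, vecsH = np.linalg.eig(M.conj().T)                 # left (adjoint) eigenvector
    qa = vecsH[:, np.argmin(np.abs(valsH - np.conj(omega)))]
    dL_dn = np.exp(1j * omega * tau) * C
    dL_domega = -np.eye(A.shape[0]) + 1j * tau * n * np.exp(1j * omega * tau) * C
    domega_dn = -(qa.conj() @ dL_dn @ q) / (qa.conj() @ dL_domega @ q)
    return omega, domega_dn

A = np.diag([1.0 + 0.05j, 2.0 - 0.02j])
C = np.array([[0.0, 0.1], [0.1, 0.0]])
print(solve_thermoacoustic_eig(A, C, n=0.3, tau=0.5, omega0=1.0 + 0.0j))
```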

  16. Adjoint-based optimization for the understanding of the aerodynamics of a flapping plate

    NASA Astrophysics Data System (ADS)

    Wei, Mingjun; Xu, Min

    2015-11-01

    An adjoint-based optimization is applied to a rigid flapping plate and a flexible flapping plate for drag reduction and propulsive efficiency. Non-cylindrical calculus is introduced to handle the moving boundary. The rigid plate undergoes a combined plunging and pitching motion in an incoming flow; the control parameter is the phase delay, which is considered first as a constant and then as an arbitrary time-varying function. The optimal controls with different cost functions provide different strategies to reach maximum drag reduction or propulsive efficiency. The flexible plate has plunging, pitching, and a deformation defined by the first two natural modes. With the same optimization goals, the controls are instead the amplitude and phase delay of the pitching motion, the first eigenmode, and the second eigenmode. Similar analyses are performed to understand the conditions for drag reduction and propulsive efficiency when flexibility is involved. It is also shown that flexibility plays a more important role at lower Reynolds numbers. Supported by AFOSR.

  17. Optimal ozone reduction policy design using adjoint-based NOx marginal damage information.

    PubMed

    Mesbah, S Morteza; Hakami, Amir; Schott, Stephan

    2013-01-01

    Despite substantial reductions in nitrogen oxide (NOx) emissions in the United States, the success of emission control programs in optimal ozone reduction is disputable because they do not consider the spatial and temporal differences in health and environmental damages caused by NOx emissions. This shortcoming in the current U.S. NOx control policy is explored, and various methodologies for identifying optimal NOx emission control strategies are evaluated. The proposed approach combines an optimization platform with an adjoint (or backward) sensitivity analysis model and is able to examine the environmental performance of the current cap-and-trade policy and two damage-based emissions-differentiated policies. Using the proposed methodology, a 2007 case study of 218 U.S. electricity generation units participating in the NOx trading program is examined. The results indicate that inclusion of damage information can significantly enhance public health performance of an economic instrument. The net benefit under the policy that minimizes the social cost (i.e., health costs plus abatement costs) is six times larger than that of an exchange rate cap-and-trade policy.

  18. Automated divertor target design by adjoint shape sensitivity analysis and a one-shot method

    SciTech Connect

    Dekeyser, W.; Reiter, D.; Baelmans, M.

    2014-12-01

    As magnetic confinement fusion progresses towards the development of first reactor-scale devices, computational tokamak divertor design is a topic of high priority. Presently, edge plasma codes are used in a forward approach, where magnetic field and divertor geometry are manually adjusted to meet design requirements. Due to the complex edge plasma flows and large number of design variables, this method is computationally very demanding. On the other hand, efficient optimization-based design strategies have been developed in computational aerodynamics and fluid mechanics. Such an optimization approach to divertor target shape design is elaborated in the present paper. A general formulation of the design problems is given, and conditions characterizing the optimal designs are formulated. Using a continuous adjoint framework, design sensitivities can be computed at a cost of only two edge plasma simulations, independent of the number of design variables. Furthermore, by using a one-shot method the entire optimization problem can be solved at an equivalent cost of only a few forward simulations. The methodology is applied to target shape design for uniform power load, in simplified edge plasma geometry.

  19. Discrete SLn-connections and self-adjoint difference operators on 2-dimensional manifolds

    NASA Astrophysics Data System (ADS)

    Grinevich, P. G.; Novikov, S. P.

    2013-10-01

    The programme of discretization of famous completely integrable systems and associated linear operators was launched in the 1990s. In particular, the properties of second-order difference operators on triangulated manifolds and equilateral triangular lattices have been studied by Novikov and Dynnikov since 1996. This study included Laplace transformations, new discretizations of complex analysis, and new discretizations of GLn-connections on triangulated n-dimensional manifolds. A general theory of discrete GLn-connections 'of rank one' has been developed (see the Introduction for definitions). The problem of distinguishing the subclass of SLn-connections (and unimodular SLn±-connections, which satisfy det A = ±1) has not been solved. In the present paper it is shown that these connections play an important role (which is similar to the role of magnetic fields in the continuous case) in the theory of self-adjoint Schrödinger difference operators on equilateral triangular lattices in ℝ². In Appendix 1 a complete characterization is given of unimodular SLn±-connections of rank 1 for all n > 1, thus correcting a mistake (it was wrongly claimed that they reduce to a canonical connection for n > 2). With the help of a communication from Korepanov, a complete clarification is provided of how the classical theory of electrical circuits and star-triangle transformations is connected with the discrete Laplace transformations on triangular lattices. Bibliography: 29 titles.

  20. Fundamental Solutions and Optimal Control of Neutral Systems

    NASA Astrophysics Data System (ADS)

    Liu, Kai

    In this work, we shall consider standard optimal control problems for a class of neutral functional differential equations in Banach spaces. As the basis of a systematic theory of neutral models, the fundamental solution is constructed and a variation of constants formula of mild solutions is established. Necessary conditions in terms of the solutions of neutral adjoint systems are established to deal with the fixed time integral convex cost problem of optimality. Based on optimality conditions, the maximum principle for time varying control domain is presented.

  1. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  2. Numerical Asymptotic Solutions Of Differential Equations

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.

    1992-01-01

    Numerical algorithms are derived and compared with classical analytical methods. In the present method, asymptotic expansions are replaced with integrals that are evaluated numerically. The resulting numerical solutions retain linear independence, the main advantage of asymptotic solutions.

  3. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    SciTech Connect

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.
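
    A stripped-down nested-sampling loop conveys the structure of the algorithm; here the constrained replacement step is a short Metropolis random walk rather than the HMC/SEM machinery of the hybrid method, and the Gaussian test likelihood is chosen so the evidence is known analytically.

```python
import numpy as np

def nested_sampling_logZ(loglike, ndim, n_live=100, n_iter=2000, prior_width=10.0, seed=0):
    """Minimal nested-sampling evidence estimate with a uniform prior on
    [-prior_width/2, prior_width/2]^ndim.  The constrained step is a simple
    random walk, a deliberate simplification of the hybrid algorithm."""
    rng = np.random.default_rng(seed)
    half = prior_width / 2.0
    live = rng.uniform(-half, half, size=(n_live, ndim))
    live_ll = np.array([loglike(p) for p in live])
    logZ, logX_prev = -np.inf, 0.0
    for i in range(1, n_iter + 1):
        worst = int(np.argmin(live_ll))
        logL_star = live_ll[worst]
        logX = -i / n_live                                   # expected shrinkage of prior mass
        logZ = np.logaddexp(logZ, logL_star + np.log(np.exp(logX_prev) - np.exp(logX)))
        logX_prev = logX
        # replace the worst live point by a prior sample with L > L_star,
        # walking from one of the surviving points
        others = np.delete(np.arange(n_live), worst)
        theta = live[rng.choice(others)].copy()
        scale = 0.3 * live.std(axis=0) + 1e-12
        for _ in range(20):
            prop = theta + scale * rng.standard_normal(ndim)
            if np.all(np.abs(prop) <= half) and loglike(prop) > logL_star:
                theta = prop
        live[worst], live_ll[worst] = theta, loglike(theta)
    # remaining live points contribute ~ X_final * mean(L)
    logZ = np.logaddexp(logZ, logX_prev - np.log(n_live) + np.logaddexp.reduce(live_ll))
    return logZ

# 2-D Gaussian likelihood; the analytic evidence is ~1/prior_area = 10^-2
loglike = lambda p: -0.5 * np.sum(p**2) - np.log(2 * np.pi)
print("log Z ≈", nested_sampling_logZ(loglike, ndim=2), "   analytic ≈", -np.log(100.0))
```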

  4. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  5. Aerosol Health Impact Source Attribution Studies with the CMAQ Adjoint Air Quality Model

    NASA Astrophysics Data System (ADS)

    Turner, M. D.

    Fine particulate matter (PM2.5) is an air pollutant consisting of a mixture of solid and liquid particles suspended in the atmosphere. Knowledge of the sources and distributions of PM2.5 is important for many reasons, two of which are that PM2.5 has an adverse effect on human health and also an effect on climate change. Recent studies have suggested that health benefits resulting from a unit decrease in black carbon (BC) are four to nine times larger than benefits resulting from an equivalent change in PM2.5 mass. The goal of this thesis is to quantify the role of emissions from different sectors and different locations in governing the total health impacts, risk, and maximum individual risk of exposure to BC both nationally and regionally in the US. We develop and use the CMAQ adjoint model to quantify the role of emissions from all modeled sectors, times, and locations on premature deaths attributed to exposure to BC. From a national analysis, we find that damages resulting from anthropogenic emissions of BC are strongly correlated with population and premature death. However, we find little correlation between damages and emission magnitude, suggesting that controls on the largest emissions may not be the most efficient means of reducing damages resulting from BC emissions. Rather, the best proxy for locations with damaging BC emissions is locations where premature deaths occur. Onroad diesel and nonroad vehicle emissions are the largest contributors to premature deaths attributed to exposure to BC, while onroad gasoline emissions cause the highest deaths per amount emitted. Additionally, emissions in fall and winter contribute to more premature deaths (and more per amount emitted) than emissions in spring and summer. From a regional analysis, we find that emissions from outside each of six urban areas account for 7% to 27% of the premature deaths attributed to exposure to BC within the region. Within the region encompassing New York City and Philadelphia

  6. A new Green's function Monte Carlo algorithm for the solution of the two-dimensional nonlinear Poisson–Boltzmann equation: Application to the modeling of the communication breakdown problem in space vehicles during re-entry

    SciTech Connect

    Chatterjee, Kausik; Roadcap, John R.; Singh, Surendra

    2014-11-01

    The objective of this paper is the exposition of a recently-developed, novel Green's function Monte Carlo (GFMC) algorithm for the solution of nonlinear partial differential equations and its application to the modeling of the plasma sheath region around a cylindrical conducting object, carrying a potential and moving at low speeds through an otherwise neutral medium. The plasma sheath is modeled in equilibrium through the GFMC solution of the nonlinear Poisson–Boltzmann (NPB) equation. The traditional Monte Carlo based approaches for the solution of nonlinear equations are iterative in nature, involving branching stochastic processes which are used to calculate linear functionals of the solution of nonlinear integral equations. Over the last several years, one of the authors of this paper, K. Chatterjee has been developing a philosophically-different approach, where the linearization of the equation of interest is not required and hence there is no need for iteration and the simulation of branching processes. Instead, an approximate expression for the Green's function is obtained using perturbation theory, which is used to formulate the random walk equations within the problem sub-domains where the random walker makes its walks. However, as a trade-off, the dimensions of these sub-domains have to be restricted by the limitations imposed by perturbation theory. The greatest advantage of this approach is the ease and simplicity of parallelization stemming from the lack of the need for iteration, as a result of which the parallelization procedure is identical to the parallelization procedure for the GFMC solution of a linear problem. The application area of interest is in the modeling of the communication breakdown problem during a space vehicle's re-entry into the atmosphere. However, additional application areas are being explored in the modeling of electromagnetic propagation through the atmosphere/ionosphere in UHF/GPS applications.

  7. A new Green's function Monte Carlo algorithm for the solution of the two-dimensional nonlinear Poisson-Boltzmann equation: Application to the modeling of the communication breakdown problem in space vehicles during re-entry

    NASA Astrophysics Data System (ADS)

    Chatterjee, Kausik; Roadcap, John R.; Singh, Surendra

    2014-11-01

    The objective of this paper is the exposition of a recently-developed, novel Green's function Monte Carlo (GFMC) algorithm for the solution of nonlinear partial differential equations and its application to the modeling of the plasma sheath region around a cylindrical conducting object, carrying a potential and moving at low speeds through an otherwise neutral medium. The plasma sheath is modeled in equilibrium through the GFMC solution of the nonlinear Poisson-Boltzmann (NPB) equation. The traditional Monte Carlo based approaches for the solution of nonlinear equations are iterative in nature, involving branching stochastic processes which are used to calculate linear functionals of the solution of nonlinear integral equations. Over the last several years, one of the authors of this paper, K. Chatterjee has been developing a philosophically-different approach, where the linearization of the equation of interest is not required and hence there is no need for iteration and the simulation of branching processes. Instead, an approximate expression for the Green's function is obtained using perturbation theory, which is used to formulate the random walk equations within the problem sub-domains where the random walker makes its walks. However, as a trade-off, the dimensions of these sub-domains have to be restricted by the limitations imposed by perturbation theory. The greatest advantage of this approach is the ease and simplicity of parallelization stemming from the lack of the need for iteration, as a result of which the parallelization procedure is identical to the parallelization procedure for the GFMC solution of a linear problem. The application area of interest is in the modeling of the communication breakdown problem during a space vehicle's re-entry into the atmosphere. However, additional application areas are being explored in the modeling of electromagnetic propagation through the atmosphere/ionosphere in UHF/GPS applications.

  8. Towards magnetic sounding of the Earth's core by an adjoint method

    NASA Astrophysics Data System (ADS)

    Li, K.; Jackson, A.; Livermore, P. W.

    2012-12-01

    Earth's magnetic field is generated and sustained by the so-called geodynamo system in the core. Measurements of the geomagnetic field taken at the surface, downwards continued through the electrically insulating mantle to the core-mantle boundary (CMB), provide important constraints on the time evolution of the velocity, magnetic field and temperature anomaly in the fluid outer core. The aim of any study in data assimilation applied to the Earth's core is to produce a time-dependent model consistent with these observations [1]. Snapshots of these "tuned" models provide a window through which the inner workings of the Earth's core, usually hidden from view, can be probed. We apply a variational data assimilation framework to an inertia-free magnetohydrodynamic system (MHD) [2]. Such a model is close to magnetostrophic balance [4], to which we have added viscosity to the dominant forces of Coriolis, pressure, Lorentz and buoyancy, believed to be a good approximation of the Earth's dynamo. As a starting point, we have chosen to neglect the buoyancy force, this being another unknown and, at this stage, an unnecessary complication. At the heart of the models is a time-dependent magnetic field which is interacting with the core flow (itself slaved to the magnetic field). Based on the methodology developed in Li et al. (2011) [3], we show further developments in which we apply the adjoint technique to our version of the Navier-Stokes equation in continuous form. In this talk, we present the initial results using perfect synthetic data without any observation error, performing closed-loop tests to demonstrate the ability of our model to retrieve the 3D structure of the velocity and magnetic fields simultaneously.

  9. Forward and adjoint spectral-element simulations of seismic wave propagation using hardware accelerators

    NASA Astrophysics Data System (ADS)

    Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri

    2015-04-01

    Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach where seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing together with advances in multi-core central processing units (CPUs) can greatly accelerate scientific applications. There are mainly two choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards while OpenCL was adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code-generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.

  10. Algorithmic differentiation and the calculation of forces by quantum Monte Carlo.

    PubMed

    Sorella, Sandro; Capriotti, Luca

    2010-12-21

    We describe an efficient algorithm to compute forces in quantum Monte Carlo using adjoint algorithmic differentiation. This allows us to apply the space warp coordinate transformation in differential form, and compute all the 3M force components of a system with M atoms with a computational effort comparable with the one needed to obtain the total energy. A few examples illustrating the method for an electronic system containing several water molecules are presented. With the present technique, the calculation of finite-temperature thermodynamic properties of materials with quantum Monte Carlo will be feasible in the near future.
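
    The following is a minimal, hedged sketch of scalar tape-based adjoint (reverse-mode) algorithmic differentiation, not the authors' implementation: elementary operations are recorded together with their local partial derivatives, and a single reverse sweep accumulates all components of the gradient at roughly the cost of one extra function evaluation.

        import math

        class Var:
            """Scalar value that records its local partial derivatives on a tape."""
            def __init__(self, value, parents=()):
                self.value = value
                self.parents = parents      # tuples of (parent Var, d(self)/d(parent))
                self.adjoint = 0.0

            def __add__(self, other):
                return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

            def __mul__(self, other):
                return Var(self.value * other.value, ((self, other.value), (other, self.value)))

        def exp(x):
            v = math.exp(x.value)
            return Var(v, ((x, v),))

        def backpropagate(output):
            """Accumulate adjoints d(output)/d(Var) by traversing the tape in reverse order."""
            order, seen = [], set()
            def topo(v):
                if id(v) not in seen:
                    seen.add(id(v))
                    for p, _ in v.parents:
                        topo(p)
                    order.append(v)
            topo(output)
            output.adjoint = 1.0
            for v in reversed(order):
                for parent, local_grad in v.parents:
                    parent.adjoint += v.adjoint * local_grad

        # f(a, b) = a*b + exp(a); df/da = b + exp(a), df/db = a
        a, b = Var(1.5), Var(-2.0)
        f = a * b + exp(a)
        backpropagate(f)
        print(a.adjoint, b.adjoint)   # -2.0 + e^1.5, and 1.5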

  11. Application of Adjoint Method and Spectral-Element Method to Tomographic Inversion of Regional Seismological Structure Beneath Japanese Islands

    NASA Astrophysics Data System (ADS)

    Tsuboi, S.; Miyoshi, T.; Obayashi, M.; Tono, Y.; Ando, K.

    2014-12-01

    Recent progress in large-scale computing using waveform modeling techniques and high-performance computing facilities has demonstrated the possibility of performing full-waveform inversion of the three-dimensional (3D) seismological structure inside the Earth. We apply the adjoint method (Liu and Tromp, 2006) to obtain the 3D structure beneath the Japanese Islands. First, we implemented the Spectral-Element Method on the K computer in Kobe, Japan. We optimized SPECFEM3D_GLOBE (Komatitsch and Tromp, 2002) by using OpenMP so that the code fits the hybrid architecture of the K computer. We can now use 82,134 nodes of the K computer (657,072 cores) to compute synthetic waveforms with about 1 s accuracy for a realistic 3D Earth model, and its performance was 1.2 PFLOPS. We use this optimized SPECFEM3D_GLOBE code, take one chunk around the Japanese Islands from the global mesh, and compute synthetic seismograms with an accuracy of about 10 seconds. We use the GAP-P2 mantle tomography model (Obayashi et al., 2009) as an initial 3D model and use as many broadband seismic stations available in this region as possible to perform the inversion. We then use the time windows for body waves and surface waves to compute adjoint sources and calculate adjoint kernels for the seismic structure. We have performed several iterations and obtained an improved 3D structure beneath the Japanese Islands. The result demonstrates that waveform misfits between observed and theoretical seismograms improve as the iteration proceeds. We are now preparing to use much shorter periods in our synthetic waveform computation and to obtain the seismic structure for basin-scale models, such as the Kanto basin, where there is a dense seismic network and high seismic activity. Acknowledgements: This research was partly supported by the MEXT Strategic Program for Innovative Research. We used F-net seismograms of the National Research Institute for Earth Science and Disaster Prevention.

  12. A user's manual for MASH 1.0: A Monte Carlo Adjoint Shielding Code System

    SciTech Connect

    Johnson, J.O.

    1992-03-01

    The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.
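
    As a hedged illustration of the coupling step described above (invented numbers and array shapes, not MASH's actual data formats or group structure), the dose response is obtained by folding the coupling-surface fluence with the adjoint dose importance, summed over surface elements and energy groups:

        import numpy as np

        # Hypothetical forward fluence on the coupling surface: shape (n_surface_cells, n_energy_groups)
        fluence = np.array([[2.0e8, 5.0e7, 1.0e7],
                            [1.5e8, 4.0e7, 8.0e6]])
        # Hypothetical adjoint "dose importance" at the same cells/groups (dose per unit fluence)
        importance = np.array([[3.0e-12, 8.0e-12, 2.0e-11],
                               [2.5e-12, 7.0e-12, 1.8e-11]])
        area = np.array([0.5, 0.5])          # surface-element areas, illustrative

        # Fold: dose response = sum over cells and groups of area * fluence * importance
        dose = np.sum(area[:, None] * fluence * importance)
        print(dose)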

  13. Parametric solution, traveling wave solution for integrable dynamical system

    NASA Astrophysics Data System (ADS)

    Qiao, Zhijun; Holm, Darryl

    2002-11-01

    In this talk, I introduce a new integrable hierarchy of nonlinear dynamical equations. In this hierarchy there are the following representative equations: $u_t=\partial_x^5 u^{-2/3}$, $u_t=\partial_x^5\frac{(u^{-1/3})_{xx}-2[(u^{-1/6})_x]^2}{u}$, and $u_{xxt}+3u_{xx}u_x+u_{xxx}u=0$. The first two are in the positive order hierarchy while the third one is in the negative order hierarchy. The whole hierarchy is shown to be integrable through solving a key $3\times 3$ matrix equation. The $3\times 3$ Lax pairs and their adjoint representations are nonlinearized to give two Liouville-integrable canonical Hamiltonian systems. Based on the integrability of the 6N-dimensional systems, we give the parametric solution of the positive hierarchy. In particular, we obtain the parametric solution of the equation $u_t=\partial_x^5 u^{-2/3}$. Finally, we give the travelling wave solutions (TWSs) of the above three equations. The TWSs of the first two equations have singularities, but the TWS of the third one is continuous. For the fifth-order equation, its smooth parametric solution cannot include its singular TWS. We also analyse the initial Gaussian solutions for the equations $u_t=\partial_x^5 u^{-2/3}$ and $u_{xxt}+3u_{xx}u_x+u_{xxx}u=0$. The former is stable, but the latter is not.

  14. [Diagnosis of liver diseases by classification of laboratory signal factor pattern findings with the Mahalanobis·Taguchi Adjoint method].

    PubMed

    Nakajima, Hisato; Yano, Kouya; Uetake, Shinichirou; Takagi, Ichiro

    2012-02-01

    There are many autoimmune liver diseases in which diagnosis is difficult, so that an overlap diagnosis is accepted, and this negatively affects treatment. The initial diagnosis is therefore important for later treatment and convalescence. We distinguished autoimmune cholangitis, autoimmune hepatitis and primary biliary cirrhosis by the Mahalanobis·Taguchi Adjoint (MTA) method within the Mahalanobis·Taguchi system and analyzed the pattern of factor effects by the MTA method. As a result, the characteristic factor-effect pattern of each disease was classified, enabling the qualitative evaluation of cases, including overlapping cases that were difficult to diagnose.
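
    For readers unfamiliar with the Mahalanobis·Taguchi framework, the sketch below computes only the core quantity, the normalized Mahalanobis distance of a case's laboratory-value vector from a reference group; it does not reproduce the MTA factor-effect analysis, and all laboratory panels and numbers are invented.

        import numpy as np

        def mahalanobis_distance(x, reference):
            """Squared Mahalanobis distance of sample x from the reference-group distribution,
            divided by the number of variables (the normalization used in the MT system)."""
            mean = reference.mean(axis=0)
            std = reference.std(axis=0, ddof=1)
            z_ref = (reference - mean) / std          # standardize by the reference group
            z_x = (x - mean) / std
            corr_inv = np.linalg.inv(np.corrcoef(z_ref, rowvar=False))
            return float(z_x @ corr_inv @ z_x) / len(x)

        # Invented laboratory panel values for a "healthy" reference group of 50 subjects
        rng = np.random.default_rng(1)
        reference = rng.normal(loc=[25, 80, 1200, 150], scale=[8, 20, 250, 40], size=(50, 4))
        case = np.array([180, 310, 2400, 520])        # hypothetical pathological pattern
        print(mahalanobis_distance(case, reference))  # large distance: far from the reference space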

  15. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
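
    A hedged sketch of the shift-and-mask search described above: for a given key set, shift amounts and bit masks are tried until every key maps to a distinct index, after which membership tests are direct look-ups with no secondary hashing or probing. Parameter ranges and the table layout are illustrative assumptions, not the synthesized code the abstract refers to.

        def find_shift_mask(keys, max_shift=32, max_mask_bits=16):
            """Search for (shift, mask) such that (k >> shift) & mask is unique for every key."""
            for shift in range(max_shift):
                for bits in range(1, max_mask_bits + 1):
                    mask = (1 << bits) - 1
                    mapped = {(k >> shift) & mask for k in keys}
                    if len(mapped) == len(keys):        # collision-free: a perfect mapping
                        return shift, mask
            return None

        def build_membership_test(keys):
            shift, mask = find_shift_mask(keys)
            table = [None] * (max((k >> shift) & mask for k in keys) + 1)
            for k in keys:
                table[(k >> shift) & mask] = k          # direct-indexed, no probing needed
            def contains(k):
                i = (k >> shift) & mask
                return i < len(table) and table[i] == k # constant time, no secondary hashing
            return contains

        keys = [0x1A40, 0x2B80, 0x3CC0, 0x9F00, 0x1234]
        contains = build_membership_test(keys)
        print(contains(0x3CC0), contains(0x4444))       # True False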

  16. An Iterative Solution to the Nonlinear Time-Discrete TEM Model - The Occurrence of Chaos and a Control Theoretic Algorithmic Approach

    NASA Astrophysics Data System (ADS)

    Pickl, S.

    2002-09-01

    This paper is concerned with a mathematical derivation of the nonlinear time-discrete Technology-Emissions Means (TEM) model. A detailed introduction to the dynamics modelling a Joint Implementation Program under the Kyoto Protocol is given at the end of the paper. As the nonlinear time-discrete dynamics tends to chaotic behaviour, the necessary introduction of control parameters in the dynamics of the TEM model leads to new results in the field of time-discrete control systems. Furthermore, the numerical results give new insights into a Joint Implementation Program and may thereby improve this important economic tool. The iterative solution presented at the end might be a useful orientation for anticipating and supporting the Kyoto Process.

  17. Transient sensitivities of sea ice export through the Canadian Arctic Archipelago inferred from a coupled ocean/sea-ice adjoint model

    NASA Astrophysics Data System (ADS)

    Heimbach, P.; Losch, M.; Menemenlis, D.; Campin, J.; Hill, C.

    2008-12-01

    The sensitivity of sea-ice export through the Canadian Arctic Archipelago (CAA), measured in terms of its solid freshwater export through Lancaster Sound, to changes in various elements of the ocean and sea-ice state, and to elements of the atmospheric forcing fields through time and space is assessed by means of a coupled ocean/sea-ice adjoint model. The adjoint model furnishes full spatial sensitivity maps (also known as Lagrange multipliers) of the export metric to a variety of model variables at any chosen point in time, providing the unique capability to quantify major drivers of sea-ice export variability. The underlying model is the MIT ocean general circulation model (MITgcm), which is coupled to a Hibler-type dynamic/thermodynamic sea-ice model. The configuration is based on the Arctic face of the ECCO3 high-resolution cubed-sphere model, but coarsened to 36-km horizontal grid spacing. The adjoint of the coupled system has been derived by means of automatic differentiation using the software tool TAF. Finite perturbation simulations are performed to check the information provided by the adjoint. The sea-ice model's performance in the presence of narrow straits is assessed with different sea-ice lateral boundary conditions. The adjoint sensitivity clearly exposes the role of the model trajectory and the transient nature of the problem. The complex interplay between forcing, dynamics, and boundary condition is demonstrated in the comparison between the different calculations. The study is a step towards fully coupled adjoint-based ocean/sea-ice state estimation at basin to global scales as part of the ECCO efforts.

  18. Global Modeling and Data Assimilation. Volume 11; Documentation of the Tangent Linear and Adjoint Models of the Relaxed Arakawa-Schubert Moisture Parameterization of the NASA GEOS-1 GCM; 5.2

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Yang, Wei-Yu; Todling, Ricardo; Navon, I. Michael

    1997-01-01

    A detailed description of the development of the tangent linear model (TLM) and its adjoint model of the Relaxed Arakawa-Schubert moisture parameterization package used in the NASA GEOS-1 C-Grid GCM (Version 5.2) is presented. The notational conventions used in the TLM and its adjoint codes are described in detail.

  19. An Efficient Pattern Matching Algorithm

    NASA Astrophysics Data System (ADS)

    Sleit, Azzam; Almobaideen, Wesam; Baarah, Aladdin H.; Abusitta, Adel H.

    In this study, we present an efficient algorithm for pattern matching based on the combination of hashing and search trees. The proposed solution is classified as an offline algorithm. Although this study demonstrates the merits of the technique for text matching, it can be utilized for various forms of digital data, including images, audio and video. The performance superiority of the proposed solution is validated analytically and experimentally.

  20. Spectral-Element Seismic Wave Propagation Codes for both Forward Modeling in Complex Media and Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.

    2015-12-01

    We present both SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, representing high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale application. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.

  1. High-resolution Adjoint Tomography of the Eastern Venezuelan Crust using Empirical Green's Function Waveforms from Ambient Noise Interferometry

    NASA Astrophysics Data System (ADS)

    Chen, M.; Masy, J.; Niu, F.; Levander, A.

    2014-12-01

    We present a high-resolution 3D crustal model of Eastern Venezuela from a full waveform inversion adjoint tomography technique, based on the spectral-element method. Empirical Green's functions (EGFs) of Rayleigh waves from ambient noise interferometry serve as the observed waveforms. Rayleigh wave signals in the period range of 10 - 50 s were extracted by cross-correlations of 48 stations from both the Venezuelan national seismic network and the BOLIVAR project array. The synthetic Green's functions (SGFs) are calculated with an initial regional 3D shear wave model determined from ballistic Rayleigh wave tomography from earthquake records with periods longer than 20 s. The frequency-dependent traveltime misfits between the SGFs and EGFs are minimized iteratively using adjoint tomography to refine the 3D crustal structure [Chen et al. 2014]. The final 3D model shows lateral shear wave velocity variations that are well correlated with the geological terranes within the continental interior. In particular, the final model reveals low velocities distributed along the axis of the Espino Graben, indicating that the graben has a substantially different crustal structure than the rest of the Eastern Venezuela Basin. We also observe high shear velocities in the lower crust beneath some of the subterranes of the Proterozoic-Archean Guayana Shield.

  2. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near to the optimum for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).
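
    The MSCA itself is not reproduced here; purely as a hedged illustration of the kind of benchmark comparison the abstract describes, the sketch below runs a basic real-coded genetic algorithm on the Rastrigin test function. Population size, operators and parameters are arbitrary choices.

        import numpy as np

        def rastrigin(x):
            """Classic multimodal benchmark; global minimum 0 at x = 0."""
            return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

        def simple_ga(fitness, dim=5, pop_size=60, generations=200, sigma=0.3, seed=0):
            rng = np.random.default_rng(seed)
            pop = rng.uniform(-5.12, 5.12, size=(pop_size, dim))
            for _ in range(generations):
                scores = np.array([fitness(ind) for ind in pop])
                parents = pop[np.argsort(scores)[: pop_size // 2]]    # truncation selection
                children = []
                while len(children) < pop_size:
                    a, b = parents[rng.integers(len(parents), size=2)]
                    alpha = rng.random(dim)
                    child = alpha * a + (1 - alpha) * b               # blend crossover
                    child += sigma * rng.standard_normal(dim)         # Gaussian mutation
                    children.append(np.clip(child, -5.12, 5.12))
                pop = np.array(children)
            scores = np.array([fitness(ind) for ind in pop])
            return pop[np.argmin(scores)], scores.min()

        best_x, best_f = simple_ga(rastrigin)
        print(best_f)    # should approach 0 for the 5-dimensional Rastrigin function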

  3. Stabilization of feedback control and stabilizability optimal solution for nonlinear quadratic problems

    NASA Astrophysics Data System (ADS)

    Popescu, Mihai; Dumitrache, Alexandru

    2011-05-01

    This study refers to the minimization of quadratic functionals in infinite time. The coefficients of the quadratic form are given by a square matrix that is a function of the state variable. Dynamic constraints are represented by bilinear differential systems of the form $\dot{x}=A(x)x+B(x)u$, $x(0)=x_0$. One selects an adequate factorization of A(x) such that the analyzed system is controllable. Employing the Hamilton-Jacobi equation yields the algebraic matrix Riccati equation associated with the optimization problem. The necessary extremum conditions determine the adjoint variables λ and the control variables u as functions of the state variable, as well as the adjoint system corresponding to those functions. Thus one obtains a matrix differential equation whose solution, the positive definite symmetric matrix P(x), verifies the algebraic Riccati equation. The stability analysis for the autonomous system solution resulting from the determined feedback control is performed using the Liapunov function method. Finally we present certain significant cases.
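
    As a hedged, heavily simplified illustration of this state-dependent Riccati construction (not the authors' derivation), one may freeze A(x) and B(x) at the current state, solve the algebraic Riccati equation, and apply the resulting feedback. The system matrices below are invented.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        def sdre_feedback(x, A_of_x, B_of_x, Q, R):
            """Freeze A(x), B(x) at the current state, solve A'P + PA - P B R^-1 B' P + Q = 0,
            and return the feedback input u = -R^-1 B' P x."""
            A, B = A_of_x(x), B_of_x(x)
            P = solve_continuous_are(A, B, Q, R)
            K = np.linalg.solve(R, B.T @ P)
            return -K @ x, P

        # Invented bilinear-style system: A depends (mildly) on the state
        A_of_x = lambda x: np.array([[0.0, 1.0], [-1.0 - 0.1 * x[0]**2, -0.2]])
        B_of_x = lambda x: np.array([[0.0], [1.0]])
        Q, R = np.eye(2), np.array([[1.0]])

        x = np.array([1.0, 0.0])
        u, P = sdre_feedback(x, A_of_x, B_of_x, Q, R)
        print(u, np.all(np.linalg.eigvalsh(P) > 0))   # feedback input; P is positive definite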

  4. An exact solution of the Jackiw-Rebbi equations for a fermion-monopole-Higgs system

    NASA Astrophysics Data System (ADS)

    Din, A. M.; Roy, S. M.

    1983-09-01

    We present an exact solution for arbitrary partial waves to the Jackiw-Rebbi equations for an isospinor fermion in the background of a non-abelian singular magnetic monopole and a Higgs field. The Higgs coupling produces a centrifugal barrier making the hamiltonian self-adjoint with ordinary boundary conditions at the origin. There are infinitely many bound states, each doubly degenerate. The scattering is charge conserving.

  5. An Adjoint-Based Analysis of the Sampling Footprints of Tall Tower, Aircraft and Potential Future Lidar Observations of CO2

    NASA Technical Reports Server (NTRS)

    Andrews, Arlyn; Kawa, Randy; Zhu, Zhengxin; Burris, John; Abshire, Jim

    2004-01-01

    A detailed mechanistic understanding of the sources and sinks of CO2 will be required to reliably predict future CO2 levels and climate. A commonly used technique for deriving information about CO2 exchange with surface reservoirs is to solve an 'inverse problem', where CO2 observations are used with an atmospheric transport model to find the optimal distribution of sources and sinks. Synthesis inversion methods are powerful tools for addressing this question, but the results are disturbingly sensitive to the details of the calculation. Studies done using different atmospheric transport models and combinations of surface station data have produced substantially different distributions of surface fluxes. Adjoint methods are now being developed that will more effectively incorporate diverse datasets in estimates of surface fluxes of CO2. In an adjoint framework, it will be possible to combine CO2 concentration data from longterm surface and aircraft monitoring stations with data from intensive field campaigns and with proposed future satellite observations. We have recently developed an adjoint for the GSFC 3-D Parameterized Chemistry and Transport Model (PCTM). Here, we will present results from a PCTM Adjoint study comparing the sampling footprints of tall tower, aircraft and potential future lidar observations of CO2. The vertical resolution and extent of the profiles and the observation frequency will be considered for several sites in North America.

  6. Spectral-Element Simulations of Wave Propagation in Porous Media: Finite-Frequency Sensitivity Kernels Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Morency, C.; Tromp, J.

    2008-12-01

    We present finite-frequency sensitivity kernels for wave propagation in porous media based upon adjoint methods. We first show that the adjoint equations in porous media are similar to the regular Biot equations upon defining an appropriate adjoint source. Then we present finite-frequency kernels for seismic phases in porous media (e.g., fast P, slow P, and S). These kernels illustrate the sensitivity of seismic observables to structural parameters and form the basis of tomographic inversions. Finally, we show an application of this imaging technique related to the detection of buried landmines and unexploded ordnance (UXO) in porous environments.

  7. Top-Down Inversion of Aerosol Emissions through Adjoint Integration of Satellite Radiance and GEOS-Chem Chemical Transport Model

    NASA Astrophysics Data System (ADS)

    Xu, X.; Wang, J.; Henze, D. K.; Qu, W.; Kopacz, M.

    2012-12-01

    Knowledge of aerosol emissions from both natural and anthropogenic sources is needed to study the impacts of tropospheric aerosol on atmospheric composition, climate, and human health, but large uncertainties persist in quantifying the aerosol sources with current bottom-up methods. This study presents a new top-down approach that spatially constrains the amount of aerosol emissions from satellite (MODIS) observed reflectance with the adjoint of a chemistry transport model (GEOS-Chem). We apply this technique in a one-month case study (April 2008) over East Asia. The bottom-up estimates of sulfate-nitrate-ammonium precursors, such as sulfur dioxide (SO2), ammonia (NH3), and nitrogen oxides (NOx), all from the INTEX-B 2006 inventory, emissions of black carbon (BC) and organic carbon (OC) from the Bond 2007 inventory, and mineral dust simulated with the DEAD dust mobilization scheme, are spatially optimized with the GEOS-Chem model and its adjoint, constrained by the aerosol optical depth (AOD) derived from MODIS reflectance with the GEOS-Chem aerosol single-scattering properties. The adjoint inverse modeling for the study period yields notable decreases in anthropogenic aerosol emissions over China: 436 Gg (33.5%) for SO2, 378 Gg (34.5%) for NH3, 319 Gg (18.8%) for NOx, 10 Gg (9.1%) for BC, and 30 Gg (15.0%) for OC. The total amount of the mineral dust emission is reduced by 56.4% from the DEAD mobilization module, which simulates dust production of 19020 Gg. Sub-regional adjustments are significant and the directions of the changes differ spatially. The model simulation with optimized aerosol emissions shows much better agreement with independent observations from sun-spectrophotometer observed AOD from AERONET, MISR (Multi-angle Imaging SpectroRadiometer) AOD, OMI (Ozone Monitoring Instrument) NO2 and SO2 columns, and surface aerosol concentrations measured over both anthropogenic pollution and dust source regions. Assuming the used bottom-up anthropogenic

  8. New Results in Astrodynamics Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.

    1998-01-01

    Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.

  9. Adjoint Method and Predictive Control for 1-D Flow in NASA Ames 11-Foot Transonic Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan; Ardema, Mark

    2006-01-01

    This paper describes a modeling method and a new optimal control approach to investigate a Mach number control problem for the NASA Ames 11-Foot Transonic Wind Tunnel. The flow in the wind tunnel is modeled by the 1-D unsteady Euler equations whose boundary conditions prescribe a controlling action by a compressor. The boundary control inputs to the compressor are in turn controlled by a drive motor system and an inlet guide vane system whose dynamics are modeled by ordinary differential equations. The resulting Euler equations are thus coupled to the ordinary differential equations via the boundary conditions. Optimality conditions are established by an adjoint method and are used to develop a model predictive linear-quadratic optimal control for regulating the Mach number due to a test model disturbance during a continuous pitch

  10. Numerical study on spatially varying bottom friction coefficient of a 2D tidal model with adjoint method

    NASA Astrophysics Data System (ADS)

    Lu, Xianqing; Zhang, Jicai

    2006-10-01

    Based on the simulation of M2 tide in the Bohai Sea, the Yellow Sea and the East China Sea, TOPEX/Poseidon altimeter data are assimilated into a 2D tidal model to study the spatially varying bottom friction coefficient (BFC) by using the adjoint method. In this study, the BFC at some grid points are selected as the independent BFC, while the BFC at other grid points can be obtained through linear interpolation with the independent BFC. Two strategies for selecting the independent BFC are discussed. In the first strategy, one independent BFC is uniformly selected from each 1°×1° area. In the second one, the independent BFC are selected based on the spatial distribution of water depth. Twin and practical experiments are carried out to compare the two strategies. In the twin experiments, the adjoint method has a strong ability of inverting the prescribed BFC distributions combined with the spatially varying BFC. In the practical experiments, reasonable simulation results can be obtained by optimizing the spatially varying independent BFC. In both twin and practical experiments, the simulation results with the second strategy are better than those with the first one. The BFC distribution obtained from the practical experiment indicates that the BFC in shallow water are larger than those in deep water in the Bohai Sea, the North Yellow Sea, the South Yellow Sea and the East China Sea individually. However, the BFC in the East China Sea are larger than those in the other areas perhaps because of the large difference of water depth or bottom roughness. The sensitivity analysis indicates that the model results are more sensitive to the independent BFC near the land.

  11. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    SciTech Connect

    Finn, John M.

    2015-03-01

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.

  12. Optical tomography reconstruction algorithm with the finite element method: An optimal approach with regularization tools

    SciTech Connect

    Balima, O.; Favennec, Y.; Rousse, D.

    2013-10-15

    Highlights:
    • New strategies to improve the accuracy of the reconstruction through mesh and finite element parameterization.
    • Use of gradient filtering through an alternative inner product within the adjoint method.
    • An integral form of the cost function is used to make the reconstruction compatible with all finite element formulations, continuous and discontinuous.
    • A gradient-based algorithm with the adjoint method is used for the reconstruction.

    Abstract: Optical tomography is mathematically treated as a non-linear inverse problem where the optical properties of the probed medium are recovered through the minimization of the errors between the experimental measurements and their predictions with a numerical model at the locations of the detectors. Owing to the ill-posed behavior of the inverse problem, some regularization tools must be employed, and Tikhonov-type penalization is the most commonly used in optical tomography applications. This paper introduces an optimized approach for optical tomography reconstruction with the finite element method. An integral form of the cost function is used to take into account the surfaces of the detectors and make the reconstruction compatible with all finite element formulations, continuous and discontinuous. Through a gradient-based algorithm where the adjoint method is used to compute the gradient of the cost function, an alternative inner product is employed for preconditioning the reconstruction algorithm. Moreover, an appropriate re-parameterization of the optical properties is performed. These regularization strategies are compared with the classical Tikhonov penalization. It is shown that both the re-parameterization and the use of the Sobolev cost function gradient are efficient for solving such an ill-posed inverse problem.
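
    A hedged one-dimensional illustration of the "alternative inner product" idea mentioned above (not the paper's finite-element implementation): the L2 gradient g is replaced by a Sobolev gradient s obtained from (I - α d²/dx²) s = g, which damps high-frequency noise before the descent update. Grid, data and α are invented.

        import numpy as np

        def sobolev_gradient(g, alpha, dx):
            """Solve (I - alpha * d^2/dx^2) s = g on a 1D grid with Neumann-type ends."""
            n = len(g)
            c = alpha / dx**2
            A = np.zeros((n, n))
            for i in range(n):
                A[i, i] = 1.0 + 2.0 * c
                if i > 0:
                    A[i, i - 1] = -c
                if i < n - 1:
                    A[i, i + 1] = -c
            A[0, 0] -= c          # mirror the missing neighbour at each end
            A[-1, -1] -= c
            return np.linalg.solve(A, g)

        rng = np.random.default_rng(0)
        x = np.linspace(0, 1, 200)
        g_l2 = np.exp(-((x - 0.5) / 0.1) ** 2) + 0.3 * rng.standard_normal(200)  # noisy L2 gradient
        g_h1 = sobolev_gradient(g_l2, alpha=1e-3, dx=x[1] - x[0])
        print(np.std(np.diff(g_l2)), np.std(np.diff(g_h1)))   # the Sobolev gradient is far smoother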

  13. Numerical solution of a semilinear elliptic equation via difference scheme

    NASA Astrophysics Data System (ADS)

    Beigmohammadi, Elif Ozturk; Demirel, Esra

    2016-08-01

    We consider the Bitsadze-Samarskii type nonlocal boundary value problem $-\frac{d^{2}v(t)}{dt^{2}}+Bv(t)=h(t,v(t))$, $0<t<1$, with a self-adjoint positive definite operator B. For the approximate solution of problem (1), we use the first order of accuracy difference scheme. The numerical results are computed by MATLAB.
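
    As a hedged illustration only, the sketch below solves a scalar analogue of such a problem (B a positive number, simple homogeneous Dirichlet conditions instead of the nonlocal Bitsadze-Samarskii ones, and an invented right-hand side) with a standard second-difference scheme and a fixed-point sweep for the semilinear term.

        import numpy as np

        # Scalar analogue of -v''(t) + B v(t) = h(t, v(t)) on (0, 1), v(0) = v(1) = 0, B > 0
        B = 4.0
        N = 100
        tau = 1.0 / N
        t = np.linspace(0.0, 1.0, N + 1)
        h = lambda t, v: np.sin(np.pi * t) - 0.1 * v**3     # invented semilinear right-hand side

        # Tridiagonal matrix of the difference operator (-v'' + B v) at the interior nodes
        main = (2.0 / tau**2 + B) * np.ones(N - 1)
        off = (-1.0 / tau**2) * np.ones(N - 2)
        A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

        v = np.zeros(N + 1)
        for _ in range(50):                                  # simple fixed-point iteration
            v_new = np.zeros_like(v)
            v_new[1:-1] = np.linalg.solve(A, h(t[1:-1], v[1:-1]))
            if np.max(np.abs(v_new - v)) < 1e-10:
                break
            v = v_new
        print(v[N // 2])                                     # midpoint value of the discrete solution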

  14. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  15. Adaptive Routing Algorithm in Wireless Communication Networks Using Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Yan, Xuesong; Wu, Qinghua; Cai, Zhihua

    At present, mobile communications traffic routing designs are complicated because more systems are interconnected with one another. For example, mobile communication in wireless networks has two routing design conditions to consider, i.e., circuit switching and packet switching. The difficulty in packet-switching routing design lies in its use of high-speed transmission links and its dynamic routing nature. In this paper, an evolutionary algorithm is used to determine the best solution and the shortest communication paths. We developed a genetic optimization process that helps network planners find the best solutions, or the best paths for the routing table, in wireless communication networks easily and quickly. The experimental results show that the evolutionary algorithm not only finds good solutions, but also has a more predictable running time compared with a sequential genetic algorithm.

  16. Parallel algorithms for matrix computations

    SciTech Connect

    Plemmons, R.J.

    1990-01-01

    The present conference on parallel algorithms for matrix computations encompasses both shared-memory systems and distributed-memory systems, as well as combinations of the two, to provide an overall perspective on parallel algorithms for both dense and sparse matrix computations in solving systems of linear equations, dense or structured problems related to least-squares computations, eigenvalue computations, singular-value computations, and rapid elliptic solvers. Specific issues addressed include the influence of parallel and vector architectures on algorithm design, computations for distributed-memory architectures such as hypercubes, solutions for sparse symmetric positive definite linear systems, symbolic and numeric factorizations, and triangular solutions. Also addressed are reference sources for parallel and vector numerical algorithms, sources for machine architectures, and sources for programming languages.

  17. How-To-Do-It: Multiple Allelic Frequencies in Populations at Equilibrium: Algorithms and Applications.

    ERIC Educational Resources Information Center

    Nussbaum, Francis, Jr.

    1988-01-01

    Presents an algorithm for solving problems related to multiple allelic frequencies in populations at equilibrium. Considers sample problems and provides their solution using this tabular algorithm. (CW)

  18. Computationally Efficient Algorithms for Parameter Estimation and Uncertainty Propagation in Numerical Models of Groundwater Flow

    NASA Astrophysics Data System (ADS)

    Townley, Lloyd R.; Wilson, John L.

    1985-12-01

    Finite difference and finite element methods are frequently used to study aquifer flow; however, additional analysis is required when model parameters, and hence predicted heads are uncertain. Computational algorithms are presented for steady and transient models in which aquifer storage coefficients, transmissivities, distributed inputs, and boundary values may all be simultaneously uncertain. Innovative aspects of these algorithms include a new form of generalized boundary condition; a concise discrete derivation of the adjoint problem for transient models with variable time steps; an efficient technique for calculating the approximate second derivative during line searches in weighted least squares estimation; and a new efficient first-order second-moment algorithm for calculating the covariance of predicted heads due to a large number of uncertain parameter values. The techniques are presented in matrix form, and their efficiency depends on the structure of sparse matrices which occur repeatedly throughout the calculations. Details of matrix structures are provided for a two-dimensional linear triangular finite element model.
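
    As a hedged illustration of the first-order second-moment step mentioned above (invented matrices, not the paper's finite-element model), the covariance of predicted heads follows from the sensitivity (Jacobian) matrix J and the parameter covariance C_p as J C_p J^T:

        import numpy as np

        # Hypothetical sensitivity matrix: d(head_i)/d(parameter_j), e.g. from adjoint runs
        J = np.array([[0.8, 0.1, -0.3],
                      [0.5, 0.4, -0.1],
                      [0.2, 0.7,  0.2]])
        # Hypothetical covariance of the uncertain parameters (e.g. log-transmissivities, recharge)
        C_p = np.diag([0.25, 0.10, 0.05])

        # First-order second-moment propagation: Cov(heads) ~= J C_p J^T
        C_h = J @ C_p @ J.T
        print(np.sqrt(np.diag(C_h)))   # standard deviations of the predicted heads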

  19. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    DOE PAGES

    Finn, John M.

    2015-03-01

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a 'special divergence-free' property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Ref. [11], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Ref. [35], appears to work very well.

  20. Issues in measure-preserving three dimensional flow integrators: Self-adjointness, reversibility, and non-uniform time stepping

    SciTech Connect

    Finn, John M.

    2015-03-15

    Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012

  1. Scheduling with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.

    1994-01-01

    In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application specific measure of solution fitness, e.g., minimum flowtime, or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs. For a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focussed for this research: how to allocate crews to jobs while satisfying job precedence requirements and personnel, and tooling and fixture (or, more generally, resource) requirements.

  2. Sources and Processes Affecting Fine Particulate Matter Pollution over North China: An Adjoint Analysis of the Beijing APEC Period.

    PubMed

    Zhang, Lin; Shao, Jingyuan; Lu, Xiao; Zhao, Yuanhong; Hu, Yongyun; Henze, Daven K; Liao, Hong; Gong, Sunling; Zhang, Qiang

    2016-08-16

    The stringent emission controls during the APEC 2014 (the Asia-Pacific Economic Cooperation Summit; November 5-11, 2014) offer a unique opportunity to quantify factors affecting fine particulate matter (PM2.5) pollution over North China. Here we apply a four-dimensional variational data assimilation system using the adjoint model of GEOS-Chem to address this issue. Hourly surface measurements of PM2.5 and SO2 for October 15-November 14, 2014 are assimilated into the model to optimize daily aerosol primary and precursor emissions over North China. Measured PM2.5 concentrations in Beijing average 50.3 μg m⁻³ during APEC, 43% lower than the mean concentration (88.2 μg m⁻³) for the whole period including APEC. Model results attribute about half of the reduction to meteorology due to active cold surge occurrences during APEC. Assimilation of surface measurements largely reduces the model biases and estimates 6%-30% lower aerosol emissions in the Beijing-Tianjin-Hebei region during APEC than in late October. We further demonstrate that high PM2.5 events in Beijing during this period can be occasionally contributed by natural mineral dust, but more events show large sensitivities to inorganic aerosol sources, particularly emissions of ammonia (NH3) and nitrogen oxides (NOx) reflecting strong formation of aerosol nitrate in the fall season. PMID:27434821

  3. Evaluating Observational Constraints on N2O Emissions via Information Content Analysis Using GEOS-Chem and its Adjoint

    NASA Astrophysics Data System (ADS)

    Wells, K. C.; Millet, D. B.; Bousserez, N.; Henze, D. K.; Chaliyakunnel, S.; Griffis, T. J.; Dlugokencky, E. J.; Prinn, R. G.; O'Doherty, S.; Weiss, R. F.; Dutton, G. S.; Elkins, J. W.; Krummel, P. B.; Langenfelds, R. L.; Steele, P.

    2015-12-01

    Nitrous oxide (N2O) is a long-lived greenhouse gas with a global warming potential approximately 300 times that of CO2, and plays a key role in stratospheric ozone depletion. Human perturbation of the nitrogen cycle has led to a rise in atmospheric N2O, but large uncertainties exist in the spatial and temporal distribution of its emissions. Here we employ a 4D-Var inversion framework for N2O based on the GEOS-Chem chemical transport model and its adjoint to derive new constraints on the space-time distribution of global land and ocean N2O fluxes. Based on an ensemble of global surface measurements, we find that emissions are overestimated over Northern Hemisphere land areas and underestimated in the Southern Hemisphere. Assigning these biases to particular land or ocean regions is more difficult given the long lifetime of N2O. To quantitatively evaluate where the current N2O observing network provides local and regional emission constraints, we apply a new, efficient information content analysis technique involving radial basis functions. The technique yields optimal state vector dimensions for N2O source inversions, with model grid cells grouped in space and time according to the resolution that can actually be provided by the network of global observations. We then use these optimal state vectors in an analytical inversion to refine current top-down emission estimates.

  4. Projecting Future Changes in Seasonal Vegetative Exposure to Ozone in the Western US Using GEOS-Chem Adjoint

    NASA Astrophysics Data System (ADS)

    Lapina, K.; Henze, D. K.; Milford, J. B.

    2014-12-01

    Frequent exposure to elevated levels of ozone leads to negative impacts on ecosystems including the loss of ozone-sensitive tree species and agricultural crops in many regions of the United States. Information on emission sources contributing to these losses is crucial for developing a successful strategy to mitigate the negative effects of ozone on vegetation. A cumulative ozone exposure metric, W126, has been considered by the US EPA for use as a secondary ozone standard. The rural West of the US has been demonstrated to have an especially great potential for disconnect between attaining primary versus W126-based ozone standards. In this work we separate the relative impact of emission sources on W126 in the Western US using forward and adjoint simulations with the global chemical transport model GEOS-Chem. The obtained source contributions are separated by location, species, and sector and are combined with representative concentration pathway (RCP) anthropogenic emission scenarios to project future changes in W126 through 2050. Focusing on the foreign influences, we find that the change in Chinese emissions alone is projected to lead to up to a 20% increase in W126 levels in the West and is strongly dependent on the RCP scenario. We further use concentration-response functions based on the W126 index to estimate the loss of four ozone-sensitive species in the West: ponderosa pine, Douglas fir, red alder and quaking aspen.
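
    For reference, W126 is a sigmoidally weighted cumulative sum of hourly ozone concentrations. The sketch below implements the commonly cited weighting w(C) = 1/(1 + 4403 exp(-126 C)) with C in ppm; it is illustrative only and omits the regulatory details (daylight-hour screening and the maximum consecutive three-month sum). The ozone series is invented.

        import numpy as np

        def w126(hourly_ozone_ppm):
            """Sigmoidally weighted cumulative exposure (ppm-hours) from hourly O3 values in ppm."""
            c = np.asarray(hourly_ozone_ppm, dtype=float)
            weights = 1.0 / (1.0 + 4403.0 * np.exp(-126.0 * c))
            return float(np.sum(weights * c))

        # Invented month of daytime (12 h/day) hourly ozone values around 60 ppb
        rng = np.random.default_rng(0)
        ozone = 0.060 + 0.015 * rng.standard_normal(30 * 12)
        print(w126(ozone))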

  5. The Adjoint Method for The Optimization of Brachytherapy and Radiotherapy Patient Treatment Planning Procedures Using Monte Carlo Calculations

    SciTech Connect

    D.L. Henderson; S. Yoo; M. Kowalok; T.R. Mackie; B.R. Thomadsen

    2001-10-30

    The goal of this project is to investigate the use of the adjoint method, commonly used in the reactor physics community, for the optimization of radiation therapy patient treatment plans. Two different types of radiation therapy are being examined, interstitial brachytherapy and radiotherapy. In brachytherapy, radioactive sources are surgically implanted within the diseased organ, such as the prostate, to treat the cancerous tissue. With radiotherapy, the x-ray source is usually located at a distance of about 1 meter from the patient and focused on the treatment area. For brachytherapy the optimization phase of the treatment plan consists of determining the optimal placement of the radioactive sources, which delivers the prescribed dose to the diseased tissue while simultaneously sparing (reducing) the dose to sensitive tissue and organs. For external beam radiation therapy the optimization phase of the treatment plan consists of determining the optimal direction and intensity of the beam, which provides complete coverage of the tumor region with the prescribed dose while simultaneously avoiding sensitive tissue areas. For both therapy methods, the optimal treatment plan is one in which the diseased tissue has been treated with the prescribed dose and the dose to the sensitive tissue and organs has been kept to a minimum.

  6. Global approach for transient shear wave inversion based on the adjoint method: a comprehensive 2D simulation study.

    PubMed

    Arnal, B; Pinton, G; Garapon, P; Pernot, M; Fink, M; Tanter, M

    2013-10-01

    Shear wave imaging (SWI) maps soft tissue elasticity by measuring shear wave propagation with ultrafast ultrasound acquisitions (10,000 frames s⁻¹). This spatiotemporal data can be used as an input for an inverse problem that determines a shear modulus map. Common inversion methods are local: the shear modulus at each point is calculated based on the values of its neighbours (e.g. time-of-flight, wave equation inversion). However, these approaches are sensitive to information loss such as noise or the lack of backscattered signal. In this paper, we evaluate the benefits of a global approach for elasticity inversion using a least-squares formulation, which is derived from full waveform inversion in geophysics and is known as the adjoint method. We simulate an acoustic waveform in a medium with a soft and a hard lesion. For this initial application, full elastic propagation and viscosity are ignored. We demonstrate that the reconstruction of the shear modulus map is robust with a non-uniform background or in the presence of noise with regularization. Compared to regular local inversions, the global approach leads to an increase of contrast (∼+3 dB) and a decrease of the quantification error (∼+2%). We demonstrate that the inversion is reliable in the case where no signal is measured within the inclusions, such as hypoechoic lesions, which could have an impact on medical diagnosis.
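
    For context, the least-squares formulation referred to above can be written in the generic adjoint-state form below (standard full-waveform-inversion notation, not necessarily the authors' exact conventions). For a scalar wave equation $\rho\,\partial_t^2 u - \nabla\cdot(\mu\nabla u) = f$ with receivers at positions $\mathbf{x}_r$, the misfit and its gradient with respect to the shear modulus $\mu$ read, up to sign conventions,

        J(\mu) = \tfrac{1}{2}\sum_{r}\int_0^T \left| u(\mathbf{x}_r, t; \mu) - u^{\mathrm{obs}}(\mathbf{x}_r, t) \right|^2 dt,
        \qquad
        \frac{\partial J}{\partial \mu}(\mathbf{x}) = -\int_0^T \nabla\lambda(\mathbf{x}, t)\cdot\nabla u(\mathbf{x}, t)\, dt,

    where the adjoint field $\lambda$ satisfies the same wave equation run backwards in time with the data residuals injected as sources at the receiver locations.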

  7. Continuous-Energy Adjoint Flux and Perturbation Calculation using the Iterated Fission Probability Method in Monte Carlo Code TRIPOLI-4® and Underlying Applications

    NASA Astrophysics Data System (ADS)

    Truchet, G.; Leconte, P.; Peneliau, Y.; Santamarina, A.; Malvagi, F.

    2014-06-01

    Pile-oscillation experiments are performed in the MINERVE reactor at CEA Cadarache to improve nuclear data accuracy. In order to precisely calculate the small reactivity variations (<10 pcm) obtained in these experiments, a reference calculation needs to be performed. This calculation may be accomplished using the continuous-energy Monte Carlo code TRIPOLI-4® with the eigenvalue difference method. This "direct" method has shown limitations in the evaluation of very small reactivity effects because it needs to reach a very small variance associated with the reactivity in both states. To address this problem, it was decided to implement the exact perturbation theory in TRIPOLI-4® and, consequently, to calculate a continuous-energy adjoint flux. The Iterated Fission Probability (IFP) method was chosen because it has shown good results in other Monte Carlo codes. The IFP method uses a forward calculation to compute the adjoint flux, and consequently it does not rely on complex code modifications but on the physical definition of the adjoint flux as a phase-space neutron importance. In the first part of this paper, the IFP method implemented in TRIPOLI-4® is described. To illustrate the efficiency of the method, several adjoint fluxes are calculated and compared with their equivalents obtained by the deterministic code APOLLO-2. The new implementation can also calculate the angular adjoint flux. In the second part, a procedure to carry out an exact perturbation calculation is described. A single-cell benchmark has been used to test the accuracy of the method, compared with the "direct" estimation of the perturbation. Once again the method based on the IFP shows good agreement, for a calculation time far shorter than that of the "direct" method. The main advantage of the method is that the relative accuracy of the reactivity variation does not depend on the magnitude of the variation itself, which allows us to calculate very small reactivity perturbations with high

  8. Integral representation of singular solutions to BVP for the wave equation

    NASA Astrophysics Data System (ADS)

    Nikolov, Aleksey

    2014-12-01

    We consider the Protter problem for the four-dimensional wave equation, where the boundary conditions are posed on a characteristic surface and on a non-characteristic one. In particular, we consider the case when the right-hand side of the equation is a harmonic polynomial. This problem is known to be ill-posed, because its adjoint homogeneous problem has infinitely many nontrivial classical solutions. The solutions of the Protter problem may have a strong power-type singularity isolated at one boundary point. Bounded solutions are possible only if the right-hand side of the equation is orthogonal to all the classical solutions of the adjoint homogeneous problem, which is a necessary but not sufficient condition for the classical solvability of the problem. In this paper we offer an explicit integral form of the solutions of the problem that is simpler than those known so far. Additionally, we give a condition on the coefficients of the harmonic polynomial that yields not only a bounded but also a continuous solution.

  9. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing-data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous values at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both for the individual station series and for the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data.
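
    For concreteness, the sketch below implements two of the performance metrics named above, the centered root-mean-square error against the true homogeneous series and the error in the fitted linear trend, on invented synthetic data; it is not the HOME benchmark code.

```python
# Two of the validation metrics described above, on synthetic monthly data.
import numpy as np

def centered_rmse(homogenized, truth):
    """RMSE after removing each series' mean (ignores constant offsets)."""
    h = homogenized - np.mean(homogenized)
    t = truth - np.mean(truth)
    return np.sqrt(np.mean((h - t) ** 2))

def trend_error(homogenized, truth, time):
    """Difference of least-squares linear trends (per unit of `time`)."""
    slope_h = np.polyfit(time, homogenized, 1)[0]
    slope_t = np.polyfit(time, truth, 1)[0]
    return slope_h - slope_t

# Invented example: 50 years of monthly data with one residual break left in.
time = np.arange(600) / 12.0
truth = 0.02 * time + np.random.default_rng(2).normal(0.0, 0.5, time.size)
homogenized = truth + 0.3 * (time > 25)
print(centered_rmse(homogenized, truth), trend_error(homogenized, truth, time))
```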

  10. A novel chaos danger model immune algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Qingyang; Wang, Song; Zhang, Li; Liang, Ying

    2013-11-01

    Making use of the ergodicity and randomness of chaos, a novel chaos danger model immune algorithm (CDMIA) is presented by combining the benefits of chaos and the danger model immune algorithm (DMIA). To maintain the diversity of antibodies and ensure the performance of the algorithm, two chaotic operators are proposed. Chaotic disturbance is used to update the danger antibody in order to exploit the local solution space, and chaotic regeneration is applied to the safe antibody in order to explore the entire solution space. In addition, the performance of the algorithm is examined on several benchmark problems. The experimental results indicate that the diversity of the population is improved noticeably, and that the CDMIA exhibits higher efficiency than the danger model immune algorithm and other optimization algorithms.
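
    A minimal sketch of the two chaotic operators, using the logistic map as the chaos source, is given below. The update rules, bounds and parameter values are illustrative assumptions rather than the CDMIA equations from the paper.

```python
# Illustrative chaotic operators driven by the fully chaotic logistic map.
import numpy as np

def logistic_sequence(x0, n, r=4.0):
    """Logistic map x <- r*x*(1-x); for r=4 the values are chaotic in (0, 1)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def chaotic_disturbance(antibody, lower, upper, scale=0.1, seed=0.31):
    """Local exploitation: small chaotic perturbation around a danger antibody."""
    c = logistic_sequence(seed, antibody.size)
    step = scale * (upper - lower) * (2.0 * c - 1.0)   # map (0,1) -> (-1,1)
    return np.clip(antibody + step, lower, upper)

def chaotic_regeneration(lower, upper, dim, seed=0.57):
    """Global exploration: regenerate a safe antibody anywhere in the bounds."""
    c = logistic_sequence(seed, dim)
    return lower + c * (upper - lower)

lower, upper = -5.0, 5.0
antibody = np.zeros(4)
print(chaotic_disturbance(antibody, lower, upper))
print(chaotic_regeneration(lower, upper, 4))
```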

  11. Comparison of three different methods of perturbing the potential vorticity field in mesoscale forecasts of Mediterranean heavy precipitation events: PV-gradient, PV-adjoint and PV-satellite

    NASA Astrophysics Data System (ADS)

    Vich, M.; Romero, R.; Richard, E.; Arbogast, P.; Maynard, K.

    2010-09-01

    Heavy precipitation events occur regularly in the western Mediterranean region. These events often have a high impact on society due to economic and personal losses. Improving the mesoscale numerical forecasts of these events can help prevent or minimize their impact on society. In previous studies, two ensemble prediction systems (EPSs) based on perturbing the model initial and boundary conditions were developed and tested for a collection of high-impact MEDEX cyclonic episodes. These EPSs perturb the initial and boundary potential vorticity (PV) field through a PV inversion algorithm. This technique ensures modifications of all the meteorological fields without compromising the mass-wind balance. One EPS introduces the perturbations along the zones of the three-dimensional PV structure presenting the locally most intense values and gradients of the field (a semi-objective choice, PV-gradient), while the other perturbs the PV field over the sensitivity zones calculated with the MM5 adjoint model (an objective method, PV-adjoint). The PV perturbations are set from a PV error climatology (PVEC) that characterizes typical PV errors in the ECMWF forecasts, both in intensity and displacement. The intensity and displacement perturbation of the PV field is chosen randomly, while its location is given by the perturbation zones defined in each ensemble generation method. Encouraged by the good results obtained by these two EPSs that perturb the PV field, a new approach based on a manual perturbation of the PV field has been tested and compared with the previous results. This technique uses satellite water vapor (WV) observations to guide the correction of initial PV structures. The correction of the PV field intends to improve the match between the PV distribution and the WV image, taking advantage of the relation between dark and bright features of WV images and PV anomalies, under some assumptions. Afterwards, the PV inversion algorithm is applied to run

  12. A Generalized Framework for Constrained Design Optimization of General Supersonic Configurations Using Adjoint Based Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Karman, Steve L., Jr.

    2011-01-01

    The Aeronautics Research Mission Directorate (ARMD) issued a NASA Research Announcement (NRA) soliciting proposals for research and technical development. The proposed research program was aimed at addressing the desired milestones and outcomes of ROA (ROA-2006) Subtopic A.4.1.1, Advanced Computational Methods. The second milestone, SUP.1.06.02, Robust, validated mesh adaptation and error quantification for near-field Computational Fluid Dynamics (CFD), was addressed by the proposed research. Additional research utilizing the direct links to geometry through a CAD interface enabled by this work will allow geometric constraints to be applied and address the final milestone, SUP2.07.06, Constrained low-drag supersonic aerodynamic design capability. The original product of the proposed research program was an integrated system of tools that can be used for the mesh mechanics required for rapid high-fidelity analysis and for design of supersonic cruise vehicles. These Euler and Navier-Stokes volume grid manipulation tools were proposed to make efficient use of parallel processing. The mesh adaptation provides a systematic approach for achieving demonstrated levels of accuracy in the solutions. NASA chose to fund only the mesh generation/adaptation portion of the proposal, so this report describes the completion of the proposed tasks for mesh creation, manipulation and adaptation as they pertain to sonic boom prediction of supersonic configurations.

  13. Multiparameter adjoint tomography of the crust and upper mantle beneath East Asia: 1. Model construction and comparisons

    NASA Astrophysics Data System (ADS)

    Chen, Min; Niu, Fenglin; Liu, Qinya; Tromp, Jeroen; Zheng, Xiufen

    2015-03-01

    We present a 3-D radially anisotropic model of the crust and mantle beneath East Asia down to 900 km depth. Adjoint tomography based on a spectral element method is applied to a phenomenal data set comprising 1.7 million frequency-dependent traveltime measurements from waveforms of 227 earthquakes recorded by 1869 stations. Compressional wave speeds are independently constrained and simultaneously inverted along with shear wave speeds (VSH and VSV) using the same waveform data set with comparable resolution. After 20 iterations, the new model (named EARA2014) exhibits sharp and detailed wave speed anomalies with improved correlations with surface tectonic units compared to previous models. In the upper 100 km, high wave speed (high-V) anomalies correlate very well with the Junggar and Tarim Basins, the Ordos Block, and the Yangtze Platform, while strong low wave speed (low-V) anomalies coincide with the Qiangtang Block, the Songpan Ganzi Fold Belt, the Chuandian Block, the Altay-Sayan Mountain Range, and the back-arc basins along the Pacific and Philippine Sea Plate margins. At greater depths, narrow high-V anomalies correspond to major subduction zones and broad high-V anomalies to cratonic roots in the upper mantle and fragmented slabs in the mantle transition zone. In particular, EARA2014 reveals a strong high-V structure beneath Tibet, appearing below 100 km depth and extending to the bottom of the mantle transition zone, and laterally spanning across the Lhasa and Qiangtang Blocks. In this paper we emphasize technical aspects of the model construction and provide a general discussion through comparisons.

  14. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective function. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may still be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability of adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably the basic principles that a starting configuration is randomly selected from within the parameter space and that the algorithm tests other configurations with the goal of finding the globally optimal configuration.
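
    The conventional SA loop described above (not the recursive-branching RBSA variant) can be sketched as follows; the objective function, cooling schedule and neighbourhood size are illustrative choices.

```python
# Minimal conventional simulated-annealing loop (illustrative settings).
import math
import random

def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000):
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    temperature = t0
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)          # random neighbour
        fc = objective(candidate)
        # Accept if better, or with a Boltzmann probability if worse.
        if fc < fx or random.random() < math.exp(-(fc - fx) / temperature):
            x, fx = candidate, fc
            if fc < fbest:
                best, fbest = candidate, fc
        temperature *= cooling                               # anneal
    return best, fbest

# Multimodal test objective; its global minimum lies near x ≈ -0.31.
objective = lambda x: x * x + 2.0 * math.sin(5.0 * x) + 2.0
print(simulated_annealing(objective, x0=4.0))
```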

  15. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best suited to the problem. We also stress the need for such a preprocessor, both for the quality (error) and for the cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all the available information, such as the nature and character of the function or system, the search space, physical or laboratory experimentation (if already done or available), and the physical environment. It also includes information that can be generated through any means - deterministic, nondeterministic, or graphical. Instead of attempting a solution of the problem straightaway through a GA, without having or using this knowledge of the character of the system, we can consciously do a much better job of producing a solution by using the information generated in the very first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
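
    A minimal sketch of an unconstrained-function GA exposing the parameters discussed above (population size, crossover and mutation probabilities, fitness criterion) is given below; all numeric settings are illustrative defaults, not values produced by the proposed preprocessor.

```python
# Minimal real-coded genetic algorithm for unconstrained minimization.
import numpy as np

rng = np.random.default_rng(3)

def genetic_algorithm(fitness, dim, lower, upper,
                      pop_size=40, p_cross=0.9, p_mut=0.1, generations=200):
    pop = rng.uniform(lower, upper, size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(x) for x in pop])
        # Binary tournament selection (minimization).
        parents = np.empty_like(pop)
        for i in range(pop_size):
            a, b = rng.integers(0, pop_size, 2)
            parents[i] = pop[a] if scores[a] < scores[b] else pop[b]
        # Uniform crossover between consecutive parent pairs.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_cross:
                mask = rng.random(dim) < 0.5
                children[i, mask] = parents[i + 1, mask]
                children[i + 1, mask] = parents[i, mask]
        # Gaussian mutation, clipped back into the search space.
        mutate = rng.random(children.shape) < p_mut
        children[mutate] += rng.normal(0.0, 0.1 * (upper - lower), mutate.sum())
        pop = np.clip(children, lower, upper)
    scores = np.array([fitness(x) for x in pop])
    return pop[np.argmin(scores)], float(scores.min())

# Example: minimize the sphere function on [-5, 5]^5.
sphere = lambda x: float(np.sum(x ** 2))
print(genetic_algorithm(sphere, dim=5, lower=-5.0, upper=5.0))
```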

  16. Time-periodic solutions of the Benjamin-Ono equation

    SciTech Connect

    Ambrose, D. M.; Wilkening, Jon

    2008-04-01

    We present a spectrally accurate numerical method for finding non-trivial time-periodic solutions of non-linear partial differential equations. The method is based on minimizing a functional (of the initial condition and the period) that is positive unless the solution is periodic, in which case it is zero. We solve an adjoint PDE to compute the gradient of this functional with respect to the initial condition. We include additional terms in the functional to specify the free parameters, which, in the case of the Benjamin-Ono equation, are the mean, a spatial phase, a temporal phase and the real part of one of the Fourier modes at t = 0. We use our method to study global paths of non-trivial time-periodic solutions connecting stationary and traveling waves of the Benjamin-Ono equation. As a starting guess for each path, we compute periodic solutions of the linearized problem by solving an infinite-dimensional eigenvalue problem in closed form. We then use our numerical method to continue these solutions beyond the realm of linear theory until another traveling wave is reached (or until the solution blows up). By experimentation with data fitting, we identify the analytical form of the solutions on the path connecting the one-hump stationary solution to the two-hump traveling wave. We then derive exact formulas for these solutions by explicitly solving the system of ODEs governing the evolution of solitons, using the ansatz suggested by the numerical simulations.
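
    The variational idea, minimizing a functional of the initial condition and the period that vanishes exactly on time-periodic orbits, can be illustrated on a toy ODE instead of the Benjamin-Ono equation. The paper obtains the gradient from an adjoint PDE; the sketch below simply relies on finite-difference gradients, and the van der Pol oscillator and all settings are illustrative assumptions.

```python
# Toy version of the periodicity functional: minimize G(x0, T) over the
# initial condition x0 and the period T for the van der Pol oscillator.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def vdp(t, y):
    """Van der Pol oscillator, which possesses an attracting periodic orbit."""
    x, v = y
    return [v, (1.0 - x * x) * v - x]

def periodicity_functional(z):
    """G(x0, T) = 0.5 * ||y(T; x0) - x0||^2; zero iff the orbit is T-periodic."""
    x0, T = z[:2], z[2]
    sol = solve_ivp(vdp, (0.0, T), x0, rtol=1e-10, atol=1e-12)
    return 0.5 * np.sum((sol.y[:, -1] - x0) ** 2)

# Initial guess: a point near the limit cycle and a rough period estimate.
# The bound on T keeps the optimizer away from the degenerate T -> 0 solution,
# playing the role of the extra terms the authors add to fix free parameters.
z0 = np.array([2.0, 0.0, 6.5])
result = minimize(periodicity_functional, z0,
                  bounds=[(-4, 4), (-4, 4), (3.0, 10.0)],
                  options={"eps": 1e-6})
print("functional value:", result.fun, "period estimate:", result.x[2])
```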

  17. General formalism for the efficient calculation of the Hessian matrix of EM data misfit and Hessian-vector products based upon adjoint sources approach

    NASA Astrophysics Data System (ADS)

    Pankratov, Oleg; Kuvshinov, Alexey

    2015-03-01

    3-D electromagnetic (EM) studies of the Earth have advanced significantly over the past decade. Despite a certain success of 3-D EM inversions of real data sets, the quantitative assessment of the recovered models remains a challenging problem. It is known that one can gain valuable information about model uncertainties from analysis of the Hessian matrix. However, even with modern computational capabilities, the calculation of the Hessian matrix by numerical differentiation is extremely time consuming. A much more efficient way to compute the Hessian matrix is provided by the 'adjoint sources' methodology. The computation of the Hessian matrix (and of Hessian-vector products) using the adjoint formulation is now a well-established approach, especially in seismic inverse modelling. For EM inverse modelling, however, we did not find in the literature a description of the approach that would allow EM researchers to apply this methodology in a straightforward manner to their scenario of interest. In this paper, we present a formalism for the efficient calculation of the Hessian matrix using the adjoint sources approach. We also show how this technique can be implemented to calculate multiple Hessian-vector products very efficiently. The formalism is general in the sense that it allows one to work with responses that arise in EM problem set-ups with either natural- or controlled-source excitations. The formalism allows for various types of parametrization of the 3-D conductivity distribution. Using this methodology one can readily obtain appropriate formulae for specific sounding methods. To illustrate the concept we provide such formulae for two EM techniques: magnetotellurics and controlled-source sounding with a vertical magnetic dipole as the source.
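
    The efficiency argument can be illustrated with a small sketch: for a least-squares misfit with linearized sensitivity operator J, the Gauss-Newton Hessian is H = J^T J, so a Hessian-vector product H v costs one forward application (J v) and one adjoint application (J^T (J v)), with no need to form H. A random matrix stands in for the EM sensitivity operator; everything below is illustrative, not the formalism of the paper.

```python
# Hessian-vector products via forward/adjoint operator applications only.
import numpy as np

rng = np.random.default_rng(4)
n_data, n_model = 500, 200
J = rng.normal(size=(n_data, n_model))          # stand-in sensitivity operator

def forward(v):
    """Forward (linearized) operator applied to a model-space vector."""
    return J @ v

def adjoint(w):
    """Adjoint operator applied to a data-space vector (the 'adjoint source')."""
    return J.T @ w

def gauss_newton_hvp(v):
    """Hessian-vector product H @ v = J^T (J v), without forming H."""
    return adjoint(forward(v))

v = rng.normal(size=n_model)
hv = gauss_newton_hvp(v)
# Check against the explicit (and much more expensive) Hessian.
H = J.T @ J
print(np.allclose(hv, H @ v))
```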

  18. Relating health and climate impacts to grid-scale emissions using adjoint sensitivity modeling for the Climate and Clean Air Coalition

    NASA Astrophysics Data System (ADS)

    Henze, D. K.; Lacey, F.; Seltzer, M.; Vallack, H.; Kuylenstierna, J.; Bowman, K. W.; Anenberg, S.; Sasser, E.; Lee, C. J.; Martin, R.

    2013-12-01

    The Climate and Clean Air Coalition (CCAC) was initiated in 2012 to develop, understand and promote measures to reduce short-lived climate forcers such as aerosol, ozone and methane. The Coalition now includes over 30 nations and, as a service to these nations, is committed to providing a decision-support toolkit that allows member nations to explore the benefits of a range of emissions mitigation measures in terms of their combined impacts on air quality and climate, and so help in the development of their National Action Plans. Here we present recent modeling work to support the development of the CCAC National Action Plans toolkit. Adjoint sensitivity analysis is presented as a means of efficiently relating air quality, climate and crop impacts back to changes in emissions from each species, sector and location at the grid-scale resolution of typical global air quality model applications. The GEOS-Chem adjoint model is used to estimate the damages per ton of emissions in terms of PM2.5-related mortality, the impacts of ozone precursors on crops and ozone-related health effects, and the combined impacts of these species on regional surface temperature changes. We show how the benefits per unit of emission vary spatially as a function of the surrounding environment, and how this affects the overall benefit of sector-specific control strategies. We present initial findings for Bangladesh, as well as Mexico, Ghana and Colombia, some of the first countries to join the CCAC, and discuss general issues related to adjoint-based metrics for quantifying air quality and climate co-benefits.

  19. Algorithms for automated DNA assembly

    PubMed Central

    Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher

    2010-01-01

    Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset, and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets shows that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162

  20. Coagulation algorithms with size binning

    NASA Technical Reports Server (NTRS)

    Statton, David M.; Gans, Jason; Williams, Eric

    1994-01-01

    The Smoluchowski equation describes the time evolution of an aerosol particle size distribution due to aggregation or coagulation. Any algorithm for computerized solution of this equation requires a scheme for describing the continuum of aerosol particle sizes as a discrete set. One standard form of the Smoluchowski equation accomplishes this by restricting the particle sizes to integer multiples of a basic unit particle size (the monomer size). This can be inefficient when particle concentrations over a large range of particle sizes must be calculated. Two algorithms employing a geometric size-binning convention are examined: the first assumes that the aerosol particle concentration as a function of size can be considered constant within each size bin; the second approximates the concentration as a linear function of particle size within each size bin. The output of each algorithm is compared to an analytical solution in a special case of the Smoluchowski equation for which an exact solution is known. The range of parameters more appropriate for each algorithm is examined.
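
    The standard discrete (monomer-multiple) form of the Smoluchowski equation mentioned above can be sketched as follows; the constant kernel, the explicit Euler time step and the truncation at 100-mers are illustrative choices and do not correspond to the geometric-binning algorithms compared in the paper.

```python
# Discrete Smoluchowski coagulation with sizes as integer multiples of a monomer.
import numpy as np

def smoluchowski_step(n, kernel, dt):
    """Advance concentrations n[k-1] of k-mers (k = 1..N) by one Euler step.

    dn_k/dt = 1/2 * sum_{i+j=k} K(i,j) n_i n_j  -  n_k * sum_j K(k,j) n_j
    """
    N = n.size
    gain = np.zeros(N)
    for k in range(1, N):                        # 0-based index k is the (k+1)-mer
        i = np.arange(k)                         # pairs of sizes (i+1) + (k-i) = k+1
        gain[k] = 0.5 * np.sum(kernel[i, k - 1 - i] * n[i] * n[k - 1 - i])
    loss = n * (kernel @ n)
    return n + dt * (gain - loss)

N = 100
kernel = np.ones((N, N))                         # constant coagulation kernel
n = np.zeros(N)
n[0] = 1.0                                       # monodisperse initial condition
for _ in range(1000):
    n = smoluchowski_step(n, kernel, dt=0.002)
print("total mass (approximately conserved):", np.sum((np.arange(N) + 1) * n))
```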