Science.gov

Sample records for adjoint solution algorithm

  1. GPU-accelerated adjoint algorithmic differentiation

    NASA Astrophysics Data System (ADS)

    Gremse, Felix; Höfter, Andreas; Razik, Lukas; Kiessling, Fabian; Naumann, Uwe

    2016-03-01

    Many scientific problems such as classifier training or medical image reconstruction can be expressed as minimization of differentiable real-valued cost functions and solved with iterative gradient-based methods. Adjoint algorithmic differentiation (AAD) enables automated computation of gradients of such cost functions implemented as computer programs. To backpropagate adjoint derivatives, excessive memory is potentially required to store the intermediate partial derivatives on a dedicated data structure, referred to as the "tape". Parallelization is difficult because threads need to synchronize their accesses during taping and backpropagation. This situation is aggravated for many-core architectures, such as Graphics Processing Units (GPUs), because of the large number of light-weight threads and the limited memory size in general as well as per thread. We show how these limitations can be mediated if the cost function is expressed using GPU-accelerated vector and matrix operations which are recognized as intrinsic functions by our AAD software. We compare this approach with naive and vectorized implementations for CPUs. We use four increasingly complex cost functions to evaluate the performance with respect to memory consumption and gradient computation times. Using vectorization, CPU and GPU memory consumption could be substantially reduced compared to the naive reference implementation, in some cases even by an order of complexity. The vectorization allowed usage of optimized parallel libraries during forward and reverse passes which resulted in high speedups for the vectorized CPU version compared to the naive reference implementation. The GPU version achieved an additional speedup of 7.5 ± 4.4, showing that the processing power of GPUs can be utilized for AAD using this concept. Furthermore, we show how this software can be systematically extended for more complex problems such as nonlinear absorption reconstruction for fluorescence-mediated tomography.
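
    A minimal sketch of the taping idea described above, assuming a hypothetical `Tape`/`Var` API: whole vector and matrix operations are recorded as single tape entries ("intrinsics"), so the tape stays short and the reverse pass reduces to a handful of BLAS-like calls. NumPy stands in here for the GPU-accelerated kernels of the paper.

```python
# Sketch of reverse-mode AAD with vector/matrix intrinsics (hypothetical names;
# NumPy stands in for GPU kernels). Each high-level operation appends ONE
# backpropagation closure to the tape instead of many scalar entries.
import numpy as np

class Tape:
    def __init__(self):
        self.ops = []                 # reverse-pass closures, in recording order

class Var:
    def __init__(self, value, tape):
        self.value = np.asarray(value, dtype=float)
        self.grad = np.zeros_like(self.value)
        self.tape = tape

def matvec(A, x):
    """y = A x, recorded as a single intrinsic."""
    y = Var(A.value @ x.value, x.tape)
    def backprop():
        A.grad += np.outer(y.grad, x.value)   # dJ/dA = ybar x^T
        x.grad += A.value.T @ y.grad          # dJ/dx = A^T ybar
    x.tape.ops.append(backprop)
    return y

def sum_of_squares(r):
    """J = ||r||^2, also a single intrinsic."""
    J = Var(float(r.value @ r.value), r.tape)
    def backprop():
        r.grad += 2.0 * J.grad * r.value
    r.tape.ops.append(backprop)
    return J

# Cost J(x) = ||A x||^2 and its gradient via one forward and one reverse pass.
tape = Tape()
A = Var(np.random.rand(50, 30), tape)
x = Var(np.random.rand(30), tape)
J = sum_of_squares(matvec(A, x))

J.grad = 1.0                          # seed the output adjoint
for backprop in reversed(tape.ops):   # reverse pass over a two-entry tape
    backprop()

print(np.allclose(x.grad, 2.0 * A.value.T @ (A.value @ x.value)))  # True
```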

  2. GPU-Accelerated Adjoint Algorithmic Differentiation

    PubMed Central

    Gremse, Felix; Höfter, Andreas; Razik, Lukas; Kiessling, Fabian; Naumann, Uwe

    2015-01-01

    Many scientific problems such as classifier training or medical image reconstruction can be expressed as minimization of differentiable real-valued cost functions and solved with iterative gradient-based methods. Adjoint algorithmic differentiation (AAD) enables automated computation of gradients of such cost functions implemented as computer programs. To backpropagate adjoint derivatives, excessive memory is potentially required to store the intermediate partial derivatives on a dedicated data structure, referred to as the “tape”. Parallelization is difficult because threads need to synchronize their accesses during taping and backpropagation. This situation is aggravated for many-core architectures, such as Graphics Processing Units (GPUs), because of the large number of light-weight threads and the limited memory size in general as well as per thread. We show how these limitations can be mediated if the cost function is expressed using GPU-accelerated vector and matrix operations which are recognized as intrinsic functions by our AAD software. We compare this approach with naive and vectorized implementations for CPUs. We use four increasingly complex cost functions to evaluate the performance with respect to memory consumption and gradient computation times. Using vectorization, CPU and GPU memory consumption could be substantially reduced compared to the naive reference implementation, in some cases even by an order of complexity. The vectorization allowed usage of optimized parallel libraries during forward and reverse passes which resulted in high speedups for the vectorized CPU version compared to the naive reference implementation. The GPU version achieved an additional speedup of 7.5 ± 4.4, showing that the processing power of GPUs can be utilized for AAD using this concept. Furthermore, we show how this software can be systematically extended for more complex problems such as nonlinear absorption reconstruction for fluorescence-mediated tomography

  3. Adjoint Algorithm for CAD-Based Shape Optimization Using a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2004-01-01

    Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and optimization algorithms. A well-known use of the adjoint method is gradient-based shape optimization. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (geometric parameters that control the shape). More recently, emerging adjoint applications focus on the analysis problem, where the adjoint solution is used to drive mesh adaptation, as well as to provide estimates of functional error bounds and corrections. The attractive feature of this approach is that the mesh-adaptation procedure targets a specific functional, thereby localizing the mesh refinement and reducing computational cost. Our focus is on the development of adjoint-based optimization techniques for a Cartesian method with embedded boundaries. In contrast to implementations on structured and unstructured grids, Cartesian methods decouple the surface discretization from the volume mesh. This feature makes Cartesian methods well suited for the automated analysis of complex geometry problems, and consequently a promising approach to aerodynamic optimization. Melvin et al. developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the Euler equations. In both approaches, a boundary condition is introduced to approximate the effects of the evolving surface shape that results in accurate gradient computation. Central to automated shape optimization algorithms is the issue of geometry modeling and control. The need to optimize complex, "real-life" geometry provides a strong incentive for the use of parametric-CAD systems within the optimization procedure. In previous work, we presented

  4. An Exact Dual Adjoint Solution Method for Turbulent Flows on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Lu, James; Park, Michael A.; Darmofal, David L.

    2003-01-01

    An algorithm for solving the discrete adjoint system based on an unstructured-grid discretization of the Navier-Stokes equations is presented. The method is constructed such that an adjoint solution exactly dual to a direct differentiation approach is recovered at each time step, yielding a convergence rate which is asymptotically equivalent to that of the primal system. The new approach is implemented within a three-dimensional unstructured-grid framework and results are presented for inviscid, laminar, and turbulent flows. Improvements to the baseline solution algorithm, such as line-implicit relaxation and a tight coupling of the turbulence model, are also presented. By storing nearest-neighbor terms in the residual computation, the dual scheme is computationally efficient, while requiring twice the memory of the flow solution. The scheme is expected to have a broad impact on computational problems related to design optimization as well as error estimation and grid adaptation efforts.

  5. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.

  6. Solution of the self-adjoint radiative transfer equation on hybrid computer systems

    NASA Astrophysics Data System (ADS)

    Gasilov, V. A.; Kuchugov, P. A.; Olkhovskaya, O. G.; Chetverushkin, B. N.

    2016-06-01

    A new technique for simulating three-dimensional radiative energy transfer for the use in the software designed for the predictive simulation of plasma with high energy density on parallel computers is proposed. A highly scalable algorithm that takes into account the angular dependence of the radiation intensity and is free of the ray effect is developed based on the solution of a second-order equation with a self-adjoint operator. A distinctive feature of this algorithm is a preliminary transformation of rotation to eliminate mixed derivatives with respect to the spatial variables, simplify the structure of the difference operator, and accelerate the convergence of the iterative solution of the equation. It is shown that the proposed method correctly reproduces the limiting cases—isotropic radiation and the directed radiation with a δ-shaped angular distribution.

  7. A three-dimensional finite-volume Eulerian-Lagrangian Localized Adjoint Method (ELLAM) for solute-transport modeling

    USGS Publications Warehouse

    Heberton, C.I.; Russell, T.F.; Konikow, L.F.; Hornberger, G.Z.

    2000-01-01

    This report documents the U.S. Geological Survey Eulerian-Lagrangian Localized Adjoint Method (ELLAM) algorithm that solves an integral form of the solute-transport equation, incorporating an implicit-in-time difference approximation for the dispersive and sink terms. Like the algorithm in the original version of the U.S. Geological Survey MOC3D transport model, ELLAM uses a method of characteristics approach to solve the transport equation on the basis of the velocity field. The ELLAM algorithm, however, is based on an integral formulation of conservation of mass and uses appropriate numerical techniques to obtain global conservation of mass. The implicit procedure eliminates several stability criteria required for an explicit formulation. Consequently, ELLAM allows large transport time increments to be used. ELLAM can produce qualitatively good results using a small number of transport time steps. A description of the ELLAM numerical method, the data-input requirements and output options, and the results of simulator testing and evaluation are presented. The ELLAM algorithm was evaluated for the same set of problems used to test and evaluate Version 1 and Version 2 of MOC3D. These test results indicate that ELLAM offers a viable alternative to the explicit and implicit solvers in MOC3D. Its use is desirable when mass balance is imperative or a fast, qualitative model result is needed. Although accurate solutions can be generated using ELLAM, its efficiency relative to the two previously documented solution algorithms is problem dependent.

  8. Nonlinear self-adjointness, conservation laws and exact solutions of ill-posed Boussinesq equation

    NASA Astrophysics Data System (ADS)

    Yaşar, Emrullah; San, Sait; Özkan, Yeşim Sağlam

    2016-01-01

    In this work, we consider the ill-posed Boussinesq equation which arises in shallow water waves and non-linear lattices. We prove that the ill-posed Boussinesq equation is nonlinearly self-adjoint. Using this property and Lie point symmetries, we construct conservation laws for the underlying equation. In addition, the generalized solitonary, periodic and compact-like solutions are constructed by the exp-function method.

  9. Comparison of Evolutionary (Genetic) Algorithm and Adjoint Methods for Multi-Objective Viscous Airfoil Optimizations

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) Method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function evaluation code, the steady-state explicit part of the code, ARC2D. The parameterization of the design space is a common B-spline approach for an airfoil surface, which together with a common gridding approach, restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy and design consistency.

  10. Efficient forward and adjoint calculations of normal mode spectra in laterally heterogeneous earth models using an iterative direct solution method

    NASA Astrophysics Data System (ADS)

    Al-Attar, D.; Woodhouse, J. H.

    2011-12-01

    Normal mode spectra provide a valuable data set for global seismic tomography, and, notably, are amongst the few geophysical observables that are sensitive to lateral variations in density structure within the Earth. Nonetheless, the effects of lateral density variations on mode spectra are rather subtle. In order, therefore, to reliably determine density variations within the earth, it is necessary to make use of sufficiently accurate methods for calculating synthetic mode spectra. In particular, recent work has highlighted the need to perform 'full-coupling calculations' that take into account the interaction of large numbers of spherical earth multiplets. However, present methods for performing such full-coupling calculations require diagonalization of large coupling matrices, and so become computationally inefficient as the number of coupled modes is increased. In order to perform full-coupling calculations more efficiently, we describe a new implementation of the direct solution method for calculating synthetic spectra in laterally heterogeneous earth models. This approach is based on the solution of the inhomogeneous mode coupling equations in the frequency domain, and does not require the diagonalization of large matrices. Early implementations of the direct solution method used LU-decomposition to solve the mode coupling equations. However, as the number of coupled modes is increased, this method becomes impractically slow. To circumvent this problem, we solve the mode coupling equations iteratively using the preconditioned biconjugate gradient algorithm. We present a number of numerical tests to display the accuracy and efficiency of this method for performing large full-coupling calculations. In addition, we describe a frequency-domain formulation of the adjoint method for the calculation of Frechet kernels that show the sensitivity of normal mode observations to variations in earth structure. The calculation of such Frechet kernels involves one solution
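
    As a minimal illustration of the solver swap described above (LU factorization replaced by a preconditioned iterative Krylov method), the sketch below solves a stand-in complex linear system with SciPy's BiCGSTAB and a Jacobi preconditioner; the matrix is a random, diagonally dominant surrogate, not an actual mode-coupling matrix.

```python
# Stand-in for solving frequency-domain coupling equations A(w) x = b iteratively
# (preconditioned BiCGSTAB) rather than by LU decomposition. The matrix below is
# a random diagonally dominant complex surrogate, not a real coupling matrix.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
A = sp.random(n, n, density=1e-3, random_state=0)
A = A + A.T                                     # symmetric coupling pattern
A = A + sp.diags(np.full(n, n + 5.0) + 1.0j)    # strong complex diagonal
A = A.tocsr()
rng = np.random.default_rng(0)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

diag = A.diagonal()
M = spla.LinearOperator((n, n), matvec=lambda v: v / diag, dtype=complex)  # Jacobi

x, info = spla.bicgstab(A, b, M=M)
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # 0, small residual
```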

  11. Spectral Solutions of Self-adjoint Elliptic Problems with Immersed Interfaces

    SciTech Connect

    Auchmuty, G.; Kloucek, P.

    2011-12-15

    This paper describes a spectral representation of solutions of self-adjoint elliptic problems with immersed interfaces. The interface is assumed to be a simple non-self-intersecting closed curve that obeys some weak regularity conditions. The problem is decomposed into two problems, one with zero interface data and the other with zero exterior boundary data. The problem with zero interface data is solved by standard spectral methods. The problem with non-zero interface data is solved by introducing an interface space H_Γ(Ω) and constructing an orthonormal basis of this space. This basis is constructed using a special class of orthogonal eigenfunctions analogously to the methods used for standard trace spaces by Auchmuty (SIAM J. Math. Anal. 38, 894-915, 2006). Analytical and numerical approximations of these eigenfunctions are described and some simulations are presented.

  12. Haydock’s recursive solution of self-adjoint problems. Discrete spectrum

    SciTech Connect

    Moroz, Alexander

    2014-12-15

    Haydock’s recursive solution is shown to underlie a number of different concepts such as (i) quasi-exactly solvable models, (ii) exactly solvable models, (iii) three-term recurrence solutions based on Schweber’s quantization criterion in Hilbert spaces of entire analytic functions, and (iv) a discrete quantum mechanics of Odake and Sasaki. A recurrent theme of Haydock’s recursive solution is that the spectral properties of any self-adjoint problem can be mapped onto a corresponding sequence of polynomials (p_n(E)) in the energy variable E. The polynomials (p_n(E)) are orthonormal with respect to the density of states n_0(E), and the energy eigenstate |E〉 is the generating function of (p_n(E)). The generality of Haydock’s recursive solution enables one to see the different concepts from a unified perspective, with each mutually benefiting from the others. Some results obtained within the particular framework of any of (i) to (iv) may have much broader significance.
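
    A finite-dimensional sketch of the central statement (a self-adjoint problem mapped onto a three-term recurrence and a family of orthogonal polynomials): the Lanczos recursion applied to a symmetric matrix and a starting state generates the recurrence coefficients of polynomials p_n(E) orthonormal with respect to the local density of states of that state. This is only a matrix-level illustration, not Haydock's operator-level construction.

```python
# Lanczos three-term recurrence for a symmetric (self-adjoint) matrix H and a
# starting state v0. The coefficients (a_n, b_n) define polynomials p_n(E) via
#   b_n p_{n+1}(E) = (E - a_n) p_n(E) - b_{n-1} p_{n-1}(E),
# orthonormal w.r.t. the local density of states of v0. Finite-matrix sketch only.
import numpy as np

def lanczos(H, v0, m):
    a, b = np.zeros(m), np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    v_prev, beta_prev = np.zeros_like(v), 0.0
    for n in range(m):
        w = H @ v - beta_prev * v_prev
        a[n] = v @ w
        w = w - a[n] * v
        if n < m - 1:
            b[n] = np.linalg.norm(w)
            v_prev, v, beta_prev = v, w / b[n], b[n]
    return a, b

rng = np.random.default_rng(1)
G = rng.standard_normal((200, 200))
H = (G + G.T) / 2                      # a self-adjoint "Hamiltonian"
v0 = rng.standard_normal(200)

a, b = lanczos(H, v0, 30)
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)   # tridiagonal recurrence matrix
# the roots of p_30 (Ritz values) already bracket the spectrum of H well
print(np.linalg.eigvalsh(T)[[0, -1]])
print(np.linalg.eigvalsh(H)[[0, -1]])
```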

  13. Southern California Adjoint Source Inversions

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Kim, Y.

    2007-12-01

    Southern California Centroid-Moment Tensor (CMT) solutions with 9 components (6 moment tensor elements, latitude, longitude, and depth) are sought to minimize a misfit function computed from waveform differences. The gradient of a misfit function is obtained based upon two numerical simulations for each earthquake: one forward calculation for the southern California model, and an adjoint calculation that uses time-reversed signals at the receivers. Conjugate gradient and square-root variable metric methods are used to iteratively improve the earthquake source model while reducing the misfit function. The square-root variable metric algorithm has the advantage of providing a direct approximation to the posterior covariance operator. We test the inversion procedure by perturbing each component of the CMT solution, and see how the algorithm converges. Finally, we demonstrate full inversion capabilities using data for real Southern California earthquakes.
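
    A schematic of the iterative source-inversion loop described above. The misfit and gradient that in practice come from one forward and one adjoint (time-reversed) simulation per event are replaced by a synthetic quadratic placeholder so the script runs, and the optimizer shown is plain nonlinear conjugate gradient rather than the square-root variable metric method.

```python
# Schematic CMT source-inversion loop: iterate over 9 source parameters
# (6 moment-tensor elements + latitude, longitude, depth), reducing a waveform
# misfit with nonlinear conjugate gradient. The misfit/gradient routine is a
# synthetic placeholder for the forward + adjoint simulation pair.
import numpy as np

rng = np.random.default_rng(0)
m_true = rng.standard_normal(9)                  # "true" source
B = rng.standard_normal((9, 9))
H = B @ B.T + 9 * np.eye(9)                      # placeholder misfit curvature

def misfit_and_gradient(m):
    """Stand-in for one forward + one adjoint simulation."""
    r = m - m_true
    return 0.5 * r @ H @ r, H @ r

m = np.zeros(9)                                  # perturbed initial CMT solution
chi, g = misfit_and_gradient(m)
d = -g
for it in range(50):
    alpha = -(g @ d) / (d @ H @ d)               # line search (exact for the quadratic placeholder)
    m = m + alpha * d
    chi_new, g_new = misfit_and_gradient(m)
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # Polak-Ribiere update
    d = -g_new + beta * d
    chi, g = chi_new, g_new
    if np.linalg.norm(g) < 1e-10:
        break
print(it, chi, np.linalg.norm(m - m_true))       # converges to the true source
```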

  14. A finite-volume Eulerian-Lagrangian localized adjoint method for solution of the advection-dispersion equation

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1993-01-01

    Test results demonstrate that the finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) outperforms standard finite-difference methods for solute transport problems that are dominated by advection. FVELLAM systematically conserves mass globally with all types of boundary conditions. Integrated finite differences, instead of finite elements, are used to approximate the governing equation. This approach, in conjunction with a forward tracking scheme, greatly facilitates mass conservation. The mass storage integral is numerically evaluated at the current time level, and quadrature points are then tracked forward in time to the next level. Forward tracking permits straightforward treatment of inflow boundaries, thus avoiding the inherent problem in backtracking of characteristic lines intersecting inflow boundaries. FVELLAM extends previous results by obtaining mass conservation locally on Lagrangian space-time elements. -from Authors

  15. Solution of the advection-dispersion equation by a finite-volume eulerian-lagrangian local adjoint method

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1992-01-01

    A finite-volume Eulerian-Lagrangian local adjoint method for solution of the advection-dispersion equation is developed and discussed. The method is mass conservative and can solve advection-dominated ground-water solute-transport problems accurately and efficiently. An integrated finite-difference approach is used in the method. A key component of the method is that the integral representing the mass-storage term is evaluated numerically at the current time level. Integration points, and the mass associated with these points, are then forward tracked up to the next time level. The number of integration points required to reach a specified level of accuracy is problem dependent and increases as the sharpness of the simulated solute front increases. Integration points are generally equally spaced within each grid cell. For problems involving variable coefficients it has been found to be advantageous to include additional integration points at strategic locations in each cell. These locations are determined by backtracking. Forward tracking of boundary fluxes by the method alleviates problems that are encountered in the backtracking approaches of most characteristic methods. A test problem is used to illustrate that the new method offers substantial advantages over other numerical methods for a wide range of problems.
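
    A toy 1D version of the forward-tracking step described above, under simplifying assumptions (uniform velocity, periodic domain, advection only): integration points carry each cell's mass forward over the time step and deposit it in destination cells, so total mass is conserved by construction even at Courant numbers well above one.

```python
# Toy forward-tracked advection step in the spirit of FVELLAM: integration
# points carry cell mass forward in time and deposit it in destination cells.
# Uniform velocity, periodic 1D domain; dispersion and boundary fluxes omitted.
import numpy as np

def advect_forward_tracking(c, dx, v, dt, npts=4):
    """One advective step; c holds cell-average concentrations."""
    ncell = c.size
    new_mass = np.zeros(ncell)
    # equally spaced integration points per cell (more could be added near
    # sharp fronts or for variable coefficients, as the abstract notes)
    offsets = (np.arange(npts) + 0.5) / npts
    for i in range(ncell):
        mass_per_pt = c[i] * dx / npts
        x = (i + offsets) * dx                       # integration-point positions
        x_new = (x + v * dt) % (ncell * dx)          # forward tracking
        dest = (x_new // dx).astype(int) % ncell
        np.add.at(new_mass, dest, mass_per_pt)       # deposit tracked mass
    return new_mass / dx

dx, v, dt = 0.01, 1.0, 0.035                         # Courant number 3.5
c0 = np.where(np.arange(100) < 30, 1.0, 0.0)         # sharp solute front
c1 = advect_forward_tracking(c0, dx, v, dt)
print(c0.sum() * dx, c1.sum() * dx)                  # total mass unchanged
```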

  16. Mesh-free adjoint methods for nonlinear filters

    NASA Astrophysics Data System (ADS)

    Daum, Fred

    2005-09-01

    We apply a new industrial strength numerical approximation, called the "mesh-free adjoint method", to solve the nonlinear filtering problem. This algorithm exploits the smoothness of the problem, unlike particle filters, and hence we expect that mesh-free adjoints are superior to particle filters for many practical applications. The nonlinear filter problem is equivalent to solving the Fokker-Planck equation in real time. The key idea is to use a good adaptive non-uniform quantization of state space to approximate the solution of the Fokker-Planck equation. In particular, the adjoint method computes the location of the nodes in state space to minimize errors in the final answer. This use of an adjoint is analogous to optimal control algorithms, but it is more interesting. The adjoint method is also analogous to importance sampling in particle filters, but it is better for four reasons: (1) it exploits the smoothness of the problem; (2) it explicitly minimizes the errors in the relevant functional; (3) it explicitly models the dynamics in state space; and (4) it can be used to compute a corrected value for the desired functional using the residuals. We will attempt to make this paper accessible to normal engineers who do not have PDEs for breakfast.

  17. Adjoint operator approach to shape design for internal incompressible flows

    NASA Technical Reports Server (NTRS)

    Cabuk, H.; Sung, C.-H.; Modi, V.

    1991-01-01

    The problem of determining the profile of a channel or duct that provides the maximum static pressure rise is solved. Incompressible, laminar flow governed by the steady state Navier-Stokes equations is assumed. Recent advances in computational resources and algorithms have made it possible to solve the direct problem of determining such a flow through a body of known geometry. It is possible to obtain a set of adjoint equations, the solution to which permits the calculation of the direction and relative magnitude of change in the diffuser profile that leads to a higher pressure rise. The solution to the adjoint problem can be shown to represent an artificially constructed flow. This interpretation provides a means to construct numerical solutions to the adjoint equations that do not compromise the fully viscous nature of the problem. The algorithmic and computational aspects of solving the adjoint equations are addressed. The form of this set of equations is similar but not identical to that of the Navier-Stokes equations. In particular, some issues related to boundary conditions and stability are discussed.

  18. Adjoint Error Estimation for Linear Advection

    SciTech Connect

    Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S

    2011-03-30

    An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.
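
    An algebraic analogue of the error representation described above, under the assumption of a linear problem written as a matrix equation: for A u = b and a functional J(u) = g^T u, the functional error of any approximate solution u_h equals the adjoint-weighted residual psi^T (b - A u_h) when psi solves A^T psi = g, and becomes a computable estimate when psi is itself only approximated. The finite-volume/conservation-law setting of the report is not reproduced here.

```python
# Algebraic version of adjoint error estimation: for A u = b and J(u) = g^T u,
# the error in J of an approximate solution u_h equals psi^T (b - A u_h),
# where psi solves the adjoint problem A^T psi = g.
import numpy as np

rng = np.random.default_rng(3)
n = 400
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned forward operator
b = rng.standard_normal(n)
g = rng.standard_normal(n)                        # defines the output functional

u = np.linalg.solve(A, b)                         # exact forward solution
u_h = u + 1e-3 * rng.standard_normal(n)           # "under-resolved" approximation

psi = np.linalg.solve(A.T, g)                     # well-resolved adjoint solution
estimate = psi @ (b - A @ u_h)                    # computable error estimate
print(g @ u - g @ u_h, estimate)                  # identical up to round-off

# with an only-approximate adjoint the estimate degrades gracefully
psi_h = psi + 1e-4 * rng.standard_normal(n)
print(g @ u - g @ u_h, psi_h @ (b - A @ u_h))
```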

  19. Introduction to Adjoint Models

    NASA Technical Reports Server (NTRS)

    Errico, Ronald M.

    2015-01-01

    In this lecture, some fundamentals of adjoint models will be described. This includes a basic derivation of tangent linear and corresponding adjoint models from a parent nonlinear model, the interpretation of adjoint-derived sensitivity fields, a description of methods of automatic differentiation, and the use of adjoint models to solve various optimization problems, including singular vectors. Concluding remarks will attempt to correct common misconceptions about adjoint models and their utilization.
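
    A minimal, hand-written tangent-linear/adjoint pair for a toy three-variable nonlinear model, together with the standard dot-product test and one adjoint-derived sensitivity; this illustrates the constructions the lecture covers and is not any of NASA's actual models.

```python
# Toy nonlinear model M, its tangent linear model (TLM) and adjoint (ADJ),
# checked with the dot-product test  <M'(x) dx, y> == <dx, M'(x)^T y>.
import numpy as np

def model(x):
    """One step of a toy nonlinear 'forecast' model."""
    return np.array([x[0] * x[1] + np.sin(x[2]),
                     x[1] ** 2,
                     x[0] + np.cos(x[1]) * x[2]])

def tlm(x, dx):
    """Tangent linear model: Jacobian of `model` at x applied to dx."""
    return np.array([x[1] * dx[0] + x[0] * dx[1] + np.cos(x[2]) * dx[2],
                     2 * x[1] * dx[1],
                     dx[0] - np.sin(x[1]) * x[2] * dx[1] + np.cos(x[1]) * dx[2]])

def adj(x, dy):
    """Adjoint model: transpose of the same Jacobian applied to dy."""
    return np.array([x[1] * dy[0] + dy[2],
                     x[0] * dy[0] + 2 * x[1] * dy[1] - np.sin(x[1]) * x[2] * dy[2],
                     np.cos(x[2]) * dy[0] + np.cos(x[1]) * dy[2]])

rng = np.random.default_rng(7)
x, dx, dy = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)
print(np.dot(tlm(x, dx), dy), np.dot(dx, adj(x, dy)))   # equal: adjoint is consistent

# adjoint-derived sensitivity: gradient of J(x) = model(x)[1]**2 w.r.t. x
J_seed = np.array([0.0, 2 * model(x)[1], 0.0])
print(adj(x, J_seed))            # dJ/dx, from a single adjoint application
```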

  20. Extraction of macroscopic and microscopic adjoint concepts using a lattice Boltzmann method and discrete adjoint approach.

    PubMed

    Hekmat, Mohamad Hamed; Mirzaei, Masoud

    2015-01-01

    In the present research, we tried to improve the performance of the lattice Boltzmann (LB)-based adjoint approach by exploiting the mesoscopic nature of the LB method. In this regard, the macroscopic discrete adjoint (MADA) and microscopic discrete adjoint (MIDA) approaches are used to answer the following two challenging questions. Is it possible to extend the concept of the macroscopic and microscopic variables of the flow field to the corresponding adjoint ones? Further, similar to the conservation laws in the LB method, is it possible to find comparable conservation equations in the adjoint approach? If so, then a definite framework, similar to that used in the flow solution by the LB method, can be employed in the flow sensitivity analysis by the MIDA approach. This achievement can decrease the implementation cost and coding efforts of the MIDA method in complicated sensitivity analysis problems. First, the MADA and MIDA equations are extracted based on the LB method using the duality viewpoint. Meanwhile, using an elementary case, inverse design of a two-dimensional unsteady Poiseuille flow in a periodic channel with constant body forces, the procedure of analytical evaluation of the adjoint variables is described. The numerical results show that correlations similar to those between the distribution functions are seen between the corresponding adjoint ones. Moreover, the results are promising, indicating that the flow-field adjoint variables can be evaluated via the adjoint distribution functions. Finally, the adjoint conservation laws are introduced. PMID:25679735

  1. Adjoint Data Assimilative Model Study of the Gulf of Maine Coastal Circulation

    NASA Astrophysics Data System (ADS)

    He, R.; McGillicuddy, D. J.; Lynch, D. R.

    2004-12-01

    Data assimilation (DA) in the coastal ocean can be divided into the categories of sequential estimation and variational adjoint methods. Sequential estimation techniques blend models with observations directly, using a variety of algorithms with which the relative weights of data and model are calculated. Variational adjoint techniques infer model control variables (e.g. parameters, forcing functions, boundary conditions, etc.) that minimize the misfit between observations and predictions. The advantage of the latter techniques over the former is that the resulting model solutions obey model dynamics. In this study, the Gulf of Maine coastal circulation and the material property transport are investigated with the Dartmouth variational adjoint DA modeling system, which assimilates in-situ data via inversion for the unknown sea level elevation at open boundaries. In-situ observations include ADCP currents and coastal sea levels. The adjoint DA model skill is evaluated by the inter-comparisons between modeled and observed drifter trajectories. Excellent model skill is found, demonstrating the utility and effectiveness of the adjoint DA modeling system in bridging in-situ observations with coastal ocean model simulations. Implications of the adjoint DA strategy on the emergent coastal ocean observing systems are discussed.

  2. Solution of the advection-dispersion equation in two dimensions by a finite-volume Eulerian-Lagrangian localized adjoint method

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1998-01-01

    We extend the finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) for solution of the advection-dispersion equation to two dimensions. The method can conserve mass globally and is not limited by restrictions on the size of the grid Peclet or Courant number. Therefore, it is well suited for solution of advection-dominated ground-water solute transport problems. In test problem comparisons with standard finite differences, FVELLAM is able to attain accurate solutions on much coarser space and time grids. On fine grids, the accuracy of the two methods is comparable. A critical aspect of FVELLAM (and all other ELLAMs) is evaluation of the mass storage integral from the preceding time level. In FVELLAM this may be accomplished with either a forward or backtracking approach. The forward tracking approach conserves mass globally and is the preferred approach. The backtracking approach is less computationally intensive, but not globally mass conservative. Boundary terms are systematically represented as integrals in space and time which are evaluated by a common integration scheme in conjunction with forward tracking through time. Unlike the one-dimensional case, local mass conservation cannot be guaranteed, so slight oscillations in concentration can develop, particularly in the vicinity of inflow or outflow boundaries. Published by Elsevier Science Ltd.

  3. Global Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Bozdag, Ebru; Lefebvre, Matthieu; Lei, Wenjie; Peter, Daniel; Smith, James; Komatitsch, Dimitri; Tromp, Jeroen

    2015-04-01

    We will present our initial results of global adjoint tomography based on 3D seismic wave simulations, one of the most challenging examples in seismology in terms of intense computational requirements and the vast amount of high-quality seismic data that can potentially be assimilated in inversions. Using a spectral-element method, we incorporate full 3D wave propagation in seismic tomography by running synthetic seismograms and adjoint simulations to compute exact sensitivity kernels in realistic 3D background models. We run our global simulations on the Oak Ridge National Laboratory's Cray XK7 "Titan" system taking advantage of the GPU version of the SPECFEM3D_GLOBE package. We have started iterations with 253 initially selected earthquakes within the magnitude range of 5.5 < Mw < 7.0 and numerical simulations having resolution down to ~27 s to invert for a transversely isotropic crust and mantle model using a non-linear conjugate gradient algorithm. The measurements are currently based on frequency-dependent traveltime misfits. We use both minor- and major-arc body and surface waves by running 200 min simulations, where inversions are performed with more than 2.6 million measurements. Our initial results after 12 iterations already indicate several prominent features such as enhanced slab (e.g., Hellenic, Japan, Bismarck, Sandwich) and plume/hotspot (e.g., the Pacific superplume, Caroline, Yellowstone, Hawaii) images. To improve the resolution and ray coverage, particularly in the lower mantle, our aim is to increase the resolution of the numerical simulations, first down to ~17 s and then to ~9 s, to incorporate high-frequency body waves in inversions. While keeping track of the progress and illumination of features in our models with a limited data set, we work towards assimilating all available data in inversions, from all seismic networks and all earthquakes in the global CMT catalogue.

  4. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2006-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.
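
    A small linear-algebra illustration of the idea, with random stand-in matrices rather than an actual mesh-movement scheme: differentiating the mesh movement directly costs one linear solve per design variable, whereas a single adjoint solve with the transposed mesh-movement operator, followed by cheap matrix-vector products, gives identical sensitivities.

```python
# Mesh-sensitivity elimination via an adjoint solve (random stand-in matrices).
# Setup:  volume mesh  X_v = K^{-1} (S d)   (linear mesh movement, d = design vars)
#         objective    J   = 0.5 * ||W X_v||^2
# Direct differentiation: one K-solve per design variable.
# Adjoint approach: a single K^T-solve, then matrix-vector products.
import numpy as np

rng = np.random.default_rng(11)
n_v, n_d = 300, 40                                        # mesh dofs, design variables
K = rng.standard_normal((n_v, n_v)) + n_v * np.eye(n_v)   # mesh-movement operator
S = rng.standard_normal((n_v, n_d))                       # surface parameterization term
W = rng.standard_normal((n_v, n_v))

d = rng.standard_normal(n_d)
X_v = np.linalg.solve(K, S @ d)
dJdX = W.T @ (W @ X_v)                                    # partial dJ/dX_v

dXdD = np.linalg.solve(K, S)                              # direct: n_d solves (columns)
grad_direct = dXdD.T @ dJdX

lam = np.linalg.solve(K.T, dJdX)                          # adjoint: ONE solve
grad_adjoint = S.T @ lam

print(np.allclose(grad_direct, grad_adjoint))             # True
```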

  5. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2005-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.

  6. Adjoint Formulation for an Embedded-Boundary Cartesian Method

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.; Murman, Scott M.; Pulliam, Thomas H.

    2004-01-01

    Many problems in aerodynamic design can be characterized by smooth and convex objective functions. This motivates the use of gradient-based algorithms, particularly for problems with a large number of design variables, to efficiently determine optimal shapes and configurations that maximize aerodynamic performance. Accurate and efficient computation of the gradient, however, remains a challenging task. In optimization problems where the number of design variables dominates the number of objectives and flow-dependent constraints, the cost of gradient computations can be significantly reduced by the use of the adjoint method. The problem of aerodynamic optimization using the adjoint method has been analyzed and validated for both structured and unstructured grids. The method has been applied to design problems governed by the potential, Euler, and Navier-Stokes equations and can be subdivided into the continuous and discrete formulations. Giles and Pierce provide a detailed review of both approaches. Most implementations rely on grid-perturbation or mapping procedures during the gradient computation that explicitly couple changes in the surface shape to the volume grid. The solution of the adjoint equation is usually accomplished using the same scheme that solves the governing flow equations. Examples of such code reuse include multistage Runge-Kutta schemes coupled with multigrid, approximate-factorization, line-implicit Gauss-Seidel, and also preconditioned GMRES. The development of the adjoint method for aerodynamic optimization problems on Cartesian grids has been limited. In contrast to implementations on structured and unstructured grids, Cartesian grid methods decouple the surface discretization from the volume grid. This feature makes Cartesian methods well suited for the automated analysis of complex geometry problems, and consequently a promising approach to aerodynamic optimization. Melvin et al. developed an adjoint formulation for the TRANAIR code

  7. Generalized uncertainty principle and self-adjoint operators

    SciTech Connect

    Balasubramanian, Venkat; Das, Saurya; Vagenas, Elias C.

    2015-09-15

    In this work we explore the self-adjointness of the GUP-modified momentum and Hamiltonian operators over different domains. In particular, we utilize von Neumann's theorem for symmetric operators in order to determine whether the momentum and Hamiltonian operators are self-adjoint or not, or whether they have self-adjoint extensions over the given domain. In addition, a simple example of the Hamiltonian operator describing a particle in a box is given. The solutions of the boundary conditions that describe the self-adjoint extensions of the specific Hamiltonian operator are obtained.

  8. Towards Global Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Bozdag, E.; Zhu, H.; Peter, D. B.; Tromp, J.

    2012-12-01

    Seismic tomography is at a stage where we can harness entire seismograms using the opportunities offered by advances in numerical wave propagation solvers and high-performance computing. Adjoint methods provide an efficient way for incorporating full nonlinearity of wave propagation and 3D Fréchet kernels in iterative seismic inversions, which have so far given promising results at continental and regional scales. Our goal is to take adjoint tomography forward to image the entire planet. Using an iterative conjugate gradient scheme, we initially aim to obtain a global crustal and mantle model with transverse isotropy confined to the upper mantle. We have started with around 255 global CMT events having moment magnitudes between 5.8 and 7, and used GSN stations as well as some local networks such as USArray and European stations. Prior to the structure inversion, we reinvert global CMT solutions by computing Green functions in our 3D reference model to take into account the effects of crustal variations on source parameters. Using the advantages of numerical simulations, our strategy is to invert crustal and mantle structure together to avoid any bias introduced into upper-mantle images due to "crustal corrections", which are commonly used in classical tomography. 3D simulations dramatically increase the usable amount of data so that, with the current earthquake-station setup, we perform each iteration with more than two million measurements. Multi-resolution smoothing based on ray density is applied to the gradient to better deal with the imperfect source-station distribution on the globe and to extract more information underneath regions with dense ray coverage and vice versa. Similar to the frequency-domain approach, we reduce nonlinearities by starting from long periods and gradually increasing the frequency content of the data after successive model updates. To simplify the problem, we primarily focus on the elastic structure and therefore our measurements are based on

  9. Towards efficient backward-in-time adjoint computations using data compression techniques

    DOE PAGES

    Cyr, E. C.; Shadid, J. N.; Wildey, T.

    2014-12-16

    In the context of a posteriori error estimation for nonlinear time-dependent partial differential equations, the state-of-the-practice is to use adjoint approaches which require the solution of a backward-in-time problem defined by a linearization of the forward problem. One of the major obstacles in the practical application of these approaches, we found, is the need to store, or recompute, the forward solution to define the adjoint problem and to evaluate the error representation. Our study considers the use of data compression techniques to approximate forward solutions employed in the backward-in-time integration. The development derives an error representation that accounts for the difference between the standard approach and the compressed approximation of the forward solution. This representation is algorithmically similar to the standard representation and only requires the computation of the quantity of interest for the forward solution and the data-compressed reconstructed solution (i.e. scalar quantities that can be evaluated as the forward problem is integrated). This approach is then compared with existing techniques, such as checkpointing and time-averaged adjoints. Lastly, we provide numerical results indicating the potential efficiency of our approach on a transient diffusion–reaction equation and on the Navier–Stokes equations. These results demonstrate memory compression ratios up to 450× while maintaining reasonable accuracy in the error estimates.
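
    A toy version of the storage trade-off discussed above: forward states of a small nonlinear ODE are kept in a lossily compressed form (plain float16 quantization here, standing in for the paper's compression techniques) and reconstructed during the backward adjoint sweep, and the resulting gradient is compared against the one computed from uncompressed states.

```python
# Discrete adjoint of a forward-Euler solve needs the stored forward states.
# Sketch: store them lossily compressed (float16 stands in for the paper's
# compression), reconstruct them in the backward sweep, compare gradients.
import numpy as np

def f(u, p):   return -u**3 + p            # toy nonlinear "reaction" term
def f_u(u):    return -3.0 * u**2          # diagonal Jacobian df/du
def f_p(u):    return np.ones_like(u)      # df/dp

def forward(u0, p, dt, nsteps):
    u, states = u0.copy(), []
    for _ in range(nsteps):
        states.append(u.copy())            # forward states needed later
        u = u + dt * f(u, p)
    return u, states

def adjoint_gradient(states, uN, p, dt, decompress):
    lam, grad = uN.copy(), 0.0             # dJ/du_N for J = 0.5*||u_N||^2
    for u_k in (decompress(s) for s in reversed(states)):
        grad += dt * np.dot(lam, f_p(u_k))
        lam = lam + dt * f_u(u_k) * lam    # lam_k = (I + dt f_u)^T lam_{k+1}
    return grad

u0, p, dt, nsteps = np.linspace(0.5, 2.0, 64), 0.3, 1e-3, 4000
uN, states = forward(u0, p, dt, nsteps)

g_full = adjoint_gradient(states, uN, p, dt, decompress=lambda s: s)
states_cmp = [s.astype(np.float16) for s in states]        # 4x smaller storage
g_cmp = adjoint_gradient(states_cmp, uN, p, dt,
                         decompress=lambda s: s.astype(np.float64))
print(g_full, g_cmp, abs(g_full - g_cmp) / abs(g_full))    # small relative error
```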

  10. Towards efficient backward-in-time adjoint computations using data compression techniques

    SciTech Connect

    Cyr, E. C.; Shadid, J. N.; Wildey, T.

    2014-12-16

    In the context of a posteriori error estimation for nonlinear time-dependent partial differential equations, the state-of-the-practice is to use adjoint approaches which require the solution of a backward-in-time problem defined by a linearization of the forward problem. One of the major obstacles in the practical application of these approaches, we found, is the need to store, or recompute, the forward solution to define the adjoint problem and to evaluate the error representation. Our study considers the use of data compression techniques to approximate forward solutions employed in the backward-in-time integration. The development derives an error representation that accounts for the difference between the standard approach and the compressed approximation of the forward solution. This representation is algorithmically similar to the standard representation and only requires the computation of the quantity of interest for the forward solution and the data-compressed reconstructed solution (i.e. scalar quantities that can be evaluated as the forward problem is integrated). This approach is then compared with existing techniques, such as checkpointing and time-averaged adjoints. Lastly, we provide numerical results indicating the potential efficiency of our approach on a transient diffusion–reaction equation and on the Navier–Stokes equations. These results demonstrate memory compression ratios up to 450× while maintaining reasonable accuracy in the error estimates.

  11. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies which each suffer from their own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation
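
    A schematic, far below the UFL/FEniCS level, of the abstraction described above: the forward model is annotated as a sequence of high-level operations (here, whole linear solves), each of which knows its transpose, and the adjoint model is obtained by replaying the annotation in reverse. All class and function names are hypothetical; this is not the libadjoint or dolfin-adjoint API.

```python
# Deriving an adjoint model from a high-level annotation of the forward model:
# each recorded operation is a whole linear solve, and the adjoint replays the
# record backwards with transposed operators. Hypothetical names throughout.
import numpy as np

class ForwardRecord:
    """Annotation of the forward model as a list of high-level solve steps."""
    def __init__(self):
        self.solves = []                  # (operator A, rhs operator B) per step

    def solve_step(self, A, B, u_prev):
        """Forward step: solve A u = B u_prev, and annotate it."""
        self.solves.append((A, B))
        return np.linalg.solve(A, B @ u_prev)

def adjoint_of_record(record, dJdu_final):
    """Replay the annotation in reverse with transposed operators: dJ/du0."""
    lam = dJdu_final
    for A, B in reversed(record.solves):
        lam = B.T @ np.linalg.solve(A.T, lam)
    return lam

rng = np.random.default_rng(5)
n = 80
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))   # implicit "time step" operator
B = np.eye(n)
u0 = rng.standard_normal(n)

rec, u = ForwardRecord(), u0
for _ in range(3):                        # three implicit steps of a toy model
    u = rec.solve_step(A, B, u)
J = 0.5 * u @ u                           # objective on the final state
grad = adjoint_of_record(rec, u)          # adjoint of the *recorded* model

du, eps = rng.standard_normal(n), 1e-6    # finite-difference check
v = u0 + eps * du
for _ in range(3):
    v = np.linalg.solve(A, B @ v)
print((0.5 * v @ v - J) / eps, grad @ du) # approximately equal
```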

  12. An ordinary differential equation based solution path algorithm.

    PubMed

    Wu, Yichao

    2011-01-01

    Efron, Hastie, Johnstone and Tibshirani (2004) proposed Least Angle Regression (LAR), a solution path algorithm for the least squares regression. They pointed out that a slight modification of the LAR gives the LASSO (Tibshirani, 1996) solution path. However it is largely unknown how to extend this solution path algorithm to models beyond the least squares regression. In this work, we propose an extension of the LAR for generalized linear models and the quasi-likelihood model by showing that the corresponding solution path is piecewise given by solutions of ordinary differential equation systems. Our contribution is twofold. First, we provide a theoretical understanding on how the corresponding solution path propagates. Second, we propose an ordinary differential equation based algorithm to obtain the whole solution path. PMID:21532936
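
    The path ODE for generalized linear models in the paper is piecewise and event-driven; as a much simpler runnable analogue of the same idea (a whole solution path obtained by integrating an ODE), the sketch below traces the ridge-regression path, whose coefficients satisfy d beta / d lambda = -(X^T X + lambda I)^{-1} beta, and checks the integrated path against closed-form ridge solutions.

```python
# Simpler analogue of a solution-path ODE: the ridge path beta(lambda) satisfies
#   d beta / d lambda = -(X^T X + lambda I)^{-1} beta(lambda),
# so the whole path can be obtained by integrating an ODE from a starting point.
# (Only an illustration of the idea, not the paper's LAR/quasi-likelihood path.)
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(4)
n, p = 100, 10
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)
XtX, Xty = X.T @ X, X.T @ y

def ridge(lam):                      # closed-form reference solution
    return np.linalg.solve(XtX + lam * np.eye(p), Xty)

def rhs(lam, beta):                  # path ODE right-hand side
    return -np.linalg.solve(XtX + lam * np.eye(p), beta)

lam0, lam1 = 1e-3, 50.0
sol = solve_ivp(rhs, (lam0, lam1), ridge(lam0), rtol=1e-9, atol=1e-12,
                dense_output=True)

for lam in (0.1, 1.0, 10.0, 50.0):   # path from the ODE vs. direct solves
    print(lam, np.max(np.abs(sol.sol(lam) - ridge(lam))))
```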

  13. Application of adjoint operators to neural learning

    NASA Technical Reports Server (NTRS)

    Barhen, J.; Toomarian, N.; Gulati, S.

    1990-01-01

    A technique for the efficient analytical computation of such parameters of the neural architecture as synaptic weights and neural gain is presented as a single solution of a set of adjoint equations. The learning model discussed concentrates on the adiabatic approximation only. A problem of interest is represented by a system of N coupled equations, and then adjoint operators are introduced. A neural network is formalized as an adaptive dynamical system whose temporal evolution is governed by a set of coupled nonlinear differential equations. An approach based on the minimization of a constrained neuromorphic energylike function is applied, and the complete learning dynamics are obtained as a result of the calculations.

  14. Diagnostics With Adjoint Modelling

    NASA Astrophysics Data System (ADS)

    Blessing, S.; Fraedrich, K.; Kirk, E.; Lunkeit, F.

    The potential usefulness of an adjoint primitive equations global atmospheric circu- lation model for climate diagnostics is demonstrated in a feasibility study. A daily NAO-type index is calculated as one-point correlation of the 300 hPa streamfunction anomaly. By application of the adjoint model we diagnose its temperature forcing on short timescales in terms of spatial temperature sensitivity patterns at different time lags, which, in a first order approximation, induce growth of the index. The dynamical relevance of these sensitivity patterns is confirmed by lag-correlating the index time series and the projection time series of the model temperature on these sensitivity patterns.

  15. Adjoint affine fusion and tadpoles

    NASA Astrophysics Data System (ADS)

    Urichuk, Andrew; Walton, Mark A.

    2016-06-01

    We study affine fusion with the adjoint representation. For simple Lie algebras, elementary and universal formulas determine the decomposition of a tensor product of an integrable highest-weight representation with the adjoint representation. Using the (refined) affine depth rule, we prove that equally striking results apply to adjoint affine fusion. For diagonal fusion, a coefficient equals the number of nonzero Dynkin labels of the relevant affine highest weight, minus 1. A nice lattice-polytope interpretation follows and allows the straightforward calculation of the genus-1 1-point adjoint Verlinde dimension, the adjoint affine fusion tadpole. Explicit formulas, (piecewise) polynomial in the level, are written for the adjoint tadpoles of all classical Lie algebras. We show that off-diagonal adjoint affine fusion is obtained from the corresponding tensor product by simply dropping non-dominant representations.

  16. Optimization of the Direct Discrete Method Using the Solution of the Adjoint Equation and its Application in the Multi-Group Neutron Diffusion Equation

    SciTech Connect

    Ayyoubzadeh, Seyed Mohsen; Vosoughi, Naser

    2011-09-14

    Obtaining the set of algebraic equations that directly correspond to a physical phenomenon has been made viable by the recent direct discrete method (DDM). Although this method may find its roots in physical and geometrical considerations, there are still some degrees of freedom that one may suspect to be optimizable. Here we have used the information embedded in the corresponding adjoint equation to form a local functional which, in turn, by its minimization, yields suitable dual mesh positioning.

  17. Adjoint-based uncertainty quantification and sensitivity analysis for reactor depletion calculations

    NASA Astrophysics Data System (ADS)

    Stripling, Hayes Franklin

    Depletion calculations for nuclear reactors model the dynamic coupling between the material composition and neutron flux and help predict reactor performance and safety characteristics. In order to be trusted as reliable predictive tools and inputs to licensing and operational decisions, the simulations must include an accurate and holistic quantification of errors and uncertainties in their outputs. Uncertainty quantification is a formidable challenge in large, realistic reactor models because of the large number of unknowns and myriad sources of uncertainty and error. We present a framework for performing efficient uncertainty quantification in depletion problems using an adjoint approach, with emphasis on high-fidelity calculations using advanced massively parallel computing architectures. This approach calls for a solution to two systems of equations: (a) the forward, engineering system that models the reactor, and (b) the adjoint system, which is mathematically related to but different from the forward system. We use the solutions of these systems to produce sensitivity and error estimates at a cost that does not grow rapidly with the number of uncertain inputs. We present the framework in a general fashion and apply it to both the source-driven and k-eigenvalue forms of the depletion equations. We describe the implementation and verification of solvers for the forward and adjoint equations in the PDT code, and we test the algorithms on realistic reactor analysis problems. We demonstrate a new approach for reducing the memory and I/O demands on the host machine, which can be overwhelming for typical adjoint algorithms. Our conclusion is that adjoint depletion calculations using full transport solutions are not only computationally tractable, they are the most attractive option for performing uncertainty quantification on high-fidelity reactor analysis problems.
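
    A toy depletion-like illustration of the forward/adjoint pair described above, using a Bateman-type decay chain dn/dt = A n in place of the coupled flux/composition system: one forward sweep plus one backward (adjoint) sweep yields the sensitivity of an end-of-time quantity of interest to every entry of A at once, spot-checked against finite differences. This mirrors only the structure of the approach, not the PDT implementation.

```python
# Toy "depletion" problem: dn/dt = A n (Bateman-type chain), quantity of
# interest J = c^T n(T). One forward sweep plus one adjoint (backward) sweep
# yields dJ/dA_ij for EVERY entry of A at once. All rate data are made up.
import numpy as np

def forward(A, n0, dt, nsteps):
    n, hist = n0.copy(), [n0.copy()]
    for _ in range(nsteps):
        n = n + dt * (A @ n)            # forward Euler depletion step
        hist.append(n.copy())
    return n, hist

def adjoint_sensitivity(A, c, hist, dt):
    lam = c.copy()                      # lam_N = dJ/dn_N
    dJdA = np.zeros_like(A)
    for n_k in reversed(hist[:-1]):     # k = N-1, ..., 0
        dJdA += dt * np.outer(lam, n_k)
        lam = lam + dt * (A.T @ lam)    # lam_k = (I + dt A)^T lam_{k+1}
    return dJdA

A = np.array([[-1.0,  0.0,  0.0, 0.0],  # 4-nuclide decay/transmutation chain
              [ 1.0, -0.5,  0.0, 0.0],
              [ 0.0,  0.5, -0.2, 0.0],
              [ 0.0,  0.0,  0.2, 0.0]])
n0 = np.array([1.0, 0.0, 0.0, 0.0])
c  = np.array([0.0, 0.0, 0.0, 1.0])     # J = amount of the last nuclide at T
dt, nsteps = 1e-3, 5000

nT, hist = forward(A, n0, dt, nsteps)
dJdA = adjoint_sensitivity(A, c, hist, dt)

for (i, j) in [(1, 0), (2, 1)]:         # finite-difference spot checks
    eps = 1e-6
    Ap = A.copy(); Ap[i, j] += eps
    fd = (c @ forward(Ap, n0, dt, nsteps)[0] - c @ nT) / eps
    print((i, j), dJdA[i, j], fd)       # adjoint and FD sensitivities agree
```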

  18. Organization mechanism and counting algorithm on vertex-cover solutions

    NASA Astrophysics Data System (ADS)

    Wei, Wei; Zhang, Renquan; Niu, Baolong; Guo, Binghui; Zheng, Zhiming

    2015-04-01

    Counting the number of solutions of combinatorial optimization problems is an important topic in the study of computational complexity; this paper is concerned with Vertex-Cover. First, we investigate the organization of Vertex-Cover solution spaces through the underlying connectivity of unfrozen vertices and provide facts about the global and local environment. Then, a Vertex-Cover Solution Number Counting Algorithm is proposed and its complexity analysis is provided, the results of which fit very well with the simulations and have a better performance than those of 1-RSB in the neighborhood of c = e for random graphs. Based on the algorithm, the variation and fluctuation of the solution-number statistics are studied to reveal the evolution mechanism of the solution numbers. Furthermore, the marginal probability distributions on the solution space are investigated on both the random graph and the scale-free graph to illustrate the different evolution characteristics of their solution spaces. Thus, solution-number counting based on the graph expression of the solution space should be an alternative and meaningful way to study the hardness of NP-complete and #P-complete problems, and appropriate algorithm design can help achieve better approximations for solving combinatorial optimization problems and the corresponding counting problems.
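
    A brute-force reference for what the counting algorithm computes, usable only on tiny instances: enumerate vertex subsets of a small random graph in order of increasing size, stop at the first size that covers every edge, and count the covers of that minimum size.

```python
# Brute-force reference for minimum vertex-cover solution counting on a small
# random graph (exponential cost; only for checking statistical counting methods).
import itertools
import random

def random_graph(n, m, seed=0):
    rng = random.Random(seed)
    edges = set()
    while len(edges) < m:
        u, v = rng.sample(range(n), 2)
        edges.add((min(u, v), max(u, v)))
    return sorted(edges)

def count_minimum_vertex_covers(n, edges):
    for size in range(n + 1):                 # smallest covering size first
        hits = 0
        for subset in itertools.combinations(range(n), size):
            s = set(subset)
            if all(u in s or v in s for u, v in edges):
                hits += 1
        if hits:
            return size, hits
    return None

n, c = 16, 2.7                                # n vertices, mean degree ~ c
edges = random_graph(n, int(c * n / 2), seed=1)
size, num = count_minimum_vertex_covers(n, edges)
print(f"minimum cover size {size}, number of minimum covers {num}")
```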

  19. A solution quality assessment method for swarm intelligence optimization algorithms.

    PubMed

    Zhang, Zhaojun; Wang, Gai-Ge; Zou, Kuansheng; Zhang, Jianhua

    2014-01-01

    Nowadays, swarm intelligence optimization has become an important optimization tool and is widely used in many fields of application. In contrast to many successful applications, the theoretical foundation is rather weak. Therefore, there are still many problems to be solved. One problem is how to quantify the performance of an algorithm in finite time, that is, how to evaluate the solution quality obtained by an algorithm for practical problems. This greatly limits the application to practical problems. A solution quality assessment method for intelligent optimization is proposed in this paper. It is an experimental analysis method based on the analysis of the search space and the characteristics of the algorithm itself. Instead of "value performance," the "ordinal performance" is used as the evaluation criterion in this method. The feasible solutions are clustered according to distance to divide the solution samples into several parts. Then, the solution space and the "good enough" set can be decomposed based on the clustering results. Last, using statistical knowledge, the evaluation result can be obtained. To validate the proposed method, some intelligent algorithms such as ant colony optimization (ACO), particle swarm optimization (PSO), and the artificial fish swarm algorithm (AFS) were used to solve the traveling salesman problem. Computational results indicate the feasibility of the proposed method. PMID:25013845

  20. MCNP: Multigroup/adjoint capabilities

    SciTech Connect

    Wagner, J.C.; Redmond, E.L. II; Palmtag, S.P.; Hendricks, J.S.

    1994-04-01

    This report discusses various aspects related to the use and validity of the general purpose Monte Carlo code MCNP for multigroup/adjoint calculations. The increased desire to perform comparisons between Monte Carlo and deterministic codes, along with the ever-present desire to increase the efficiency of large MCNP calculations has produced a greater user demand for the multigroup/adjoint capabilities. To more fully utilize these capabilities, we review the applications of the Monte Carlo multigroup/adjoint method, describe how to generate multigroup cross sections for MCNP with the auxiliary CRSRD code, describe how to use the multigroup/adjoint capability in MCNP, and provide examples and results indicating the effectiveness and validity of the MCNP multigroup/adjoint treatment. This information should assist users in taking advantage of the MCNP multigroup/adjoint capabilities.

  1. Adjoint-Operator Learning For A Neural Network

    NASA Technical Reports Server (NTRS)

    Barhen, Jacob; Toomarian, Nikzad

    1993-01-01

    Electronic neural networks are made to synthesize initially unknown mathematical models of time-dependent phenomena, or to learn temporally evolving patterns, by use of algorithms based on adjoint operators. The algorithms are less complicated, involve less computation, and solve the learning equations forward in time, possibly simultaneously with the equations of evolution of the neural network, thereby both increasing computational efficiency and making real-time applications possible.

  2. A genetic algorithm solution to the unit commitment problem

    SciTech Connect

    Kazarlis, S.A.; Bakirtzis, A.G.; Petridis, V.

    1996-02-01

    This paper presents a Genetic Algorithm (GA) solution to the Unit Commitment problem. GAs are general-purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination, and survival of the fittest. A simple GA implementation using the standard crossover and mutation operators could locate near-optimal solutions but in most cases failed to converge to the optimal solution. However, using the Varying Quality Function technique and adding problem-specific operators, satisfactory solutions to the Unit Commitment problem were obtained. Test results for systems of up to 100 units and comparisons with results obtained using Lagrangian Relaxation and Dynamic Programming are also reported.

  3. Dual of QCD with one adjoint fermion

    SciTech Connect

    Mojaza, Matin; Nardecchia, Marco; Pica, Claudio; Sannino, Francesco

    2011-03-15

    We construct the magnetic dual of QCD with one adjoint Weyl fermion. The dual is a consistent solution of the 't Hooft anomaly matching conditions, allows for flavor decoupling, and remarkably constitutes the first nonsupersymmetric dual valid for any number of colors. The dual allows us to bound the anomalous dimension of the Dirac fermion mass operator to be less than one in the conformal window.

  4. Forward and adjoint sensitivity computation of chaotic dynamical systems

    SciTech Connect

    Wang, Qiqi

    2013-02-15

    This paper describes a forward algorithm and an adjoint algorithm for computing sensitivity derivatives in chaotic dynamical systems, such as the Lorenz attractor. The algorithms compute the derivative of long time averaged “statistical” quantities to infinitesimal perturbations of the system parameters. The algorithms are demonstrated on the Lorenz attractor. We show that sensitivity derivatives of statistical quantities can be accurately estimated using a single, short trajectory (over a time interval of 20) on the Lorenz attractor.
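
    The differentiated quantity here is a long-time average such as the mean of z on the Lorenz attractor. For orientation only, the sketch below computes that statistic with a simple RK4 integrator and estimates its sensitivity to the parameter rho by a naive central difference; this is a baseline illustration of the quantity of interest, not the forward or adjoint algorithm of the paper, and all numerical choices (step size, spin-up, averaging window) are illustrative assumptions.

      import numpy as np

      def lorenz_average_z(rho, sigma=10.0, beta=8.0/3.0,
                           dt=0.002, t_spinup=20.0, t_avg=100.0):
          """Time-average of z on the Lorenz attractor (simple RK4 integrator)."""
          def f(u):
              x, y, z = u
              return np.array([sigma * (y - x),
                               x * (rho - z) - y,
                               x * y - beta * z])

          u = np.array([1.0, 1.0, 1.0])
          n_spin = int(t_spinup / dt)
          n_avg = int(t_avg / dt)
          z_sum = 0.0
          for i in range(n_spin + n_avg):
              k1 = f(u)
              k2 = f(u + 0.5 * dt * k1)
              k3 = f(u + 0.5 * dt * k2)
              k4 = f(u + dt * k3)
              u = u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
              if i >= n_spin:
                  z_sum += u[2]
          return z_sum / n_avg

      # Naive central-difference estimate of d<z>/d(rho); for chaotic systems this
      # estimate is noisy, which is precisely what motivates the paper's forward
      # and adjoint sensitivity algorithms.
      drho = 0.5
      grad_fd = (lorenz_average_z(28.0 + drho) - lorenz_average_z(28.0 - drho)) / (2 * drho)
      print("finite-difference d<z>/d(rho) ~", grad_fd)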

  5. Unsteady adjoint of a gas turbine inlet guide vane

    NASA Astrophysics Data System (ADS)

    Talnikar, Chaitanya; Wang, Qiqi

    2015-11-01

    Unsteady fluid flow simulations like large eddy simulation have been shown to be crucial in accurately predicting heat transfer in turbomachinery applications like transonic flow over an inlet guide vane. To compute sensitivities of aerothermal objectives for a vane with respect to design parameters an unsteady adjoint is required. In this talk we present unsteady adjoint solutions for a vane from VKI using pressure loss and heat transfer over the vane surface as the objectives. The boundary layer on the suction side near the trailing edge of the vane is turbulent and this poses a challenge for an adjoint solver. The chaotic dynamics cause the adjoint solution to diverge exponentially to infinity from that region when simulated backwards in time. The prospect of adding artificial viscosity to the adjoint equations to dampen the adjoint fields is investigated. Results for the vane from simulations performed on the Titan supercomputer will be shown and the effect of the additional viscosity on the accuracy of the sensitivities will be discussed.

  6. On the Multilevel Solution Algorithm for Markov Chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham

    1997-01-01

    We discuss the recently introduced multilevel algorithm for the steady-state solution of Markov chains. The method is based on an aggregation principle which is well established in the literature and features a multiplicative coarse-level correction. Recursive application of the aggregation principle, which uses an operator-dependent coarsening, yields a multi-level method which has been shown experimentally to give results significantly faster than the typical methods currently in use. When cast as a multigrid-like method, the algorithm is seen to be a Galerkin-Full Approximation Scheme with a solution-dependent prolongation operator. Special properties of this prolongation lead to the cancellation of the computationally intensive terms of the coarse-level equations.
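
    The aggregation principle with a multiplicative coarse-level correction can be illustrated on a small chain. The sketch below performs one aggregation/disaggregation sweep (in the spirit of iterative aggregation-disaggregation) followed by a single smoothing step; it is a minimal two-level illustration under assumed groupings, not the recursive multilevel cycle described in the abstract.

      import numpy as np

      def iad_sweep(P, x, groups):
          """One aggregation/disaggregation sweep for a row-stochastic chain P.

          x      : current positive approximation of the stationary vector (sums to 1)
          groups : list of index arrays partitioning the states
          """
          nG = len(groups)

          # Aggregate: coarse transition matrix weighted by the current iterate.
          A = np.zeros((nG, nG))
          for I, gI in enumerate(groups):
              wI = x[gI] / x[gI].sum()
              for J, gJ in enumerate(groups):
                  A[I, J] = wI @ P[np.ix_(gI, gJ)].sum(axis=1)

          # Solve the small coarse problem c A = c with sum(c) = 1 exactly.
          M = A.T - np.eye(nG)
          M[-1, :] = 1.0
          rhs = np.zeros(nG); rhs[-1] = 1.0
          c = np.linalg.solve(M, rhs)

          # Disaggregate (multiplicative coarse-level correction) and smooth once.
          y = x.copy()
          for I, gI in enumerate(groups):
              y[gI] = c[I] * x[gI] / x[gI].sum()
          y = y @ P
          return y / y.sum()

      # Tiny 4-state example with two aggregates; repeated sweeps refine the estimate.
      P = np.array([[0.5, 0.5, 0.0, 0.0],
                    [0.3, 0.4, 0.3, 0.0],
                    [0.0, 0.3, 0.4, 0.3],
                    [0.0, 0.0, 0.5, 0.5]])
      x = np.full(4, 0.25)
      for _ in range(20):
          x = iad_sweep(P, x, [np.array([0, 1]), np.array([2, 3])])
      print("stationary estimate:", x)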

  7. On the multi-level solution algorithm for Markov chains

    SciTech Connect

    Horton, G.

    1996-12-31

    We discuss the recently introduced multi-level algorithm for the steady-state solution of Markov chains. The method is based on the aggregation principle, which is well established in the literature. Recursive application of the aggregation yields a multi-level method which has been shown experimentally to give results significantly faster than the methods currently in use. The algorithm can be reformulated as an algebraic multigrid scheme of Galerkin-full approximation type. The uniqueness of the scheme stems from its solution-dependent prolongation operator which permits significant computational savings in the evaluation of certain terms. This paper describes the modeling of computer systems to derive information on performance, measured typically as job throughput or component utilization, and availability, defined as the proportion of time a system is able to perform a certain function in the presence of component failures and possibly also repairs.

  8. An algorithm for the solution of dynamic linear programs

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    1989-01-01

    The algorithm's objective is to efficiently solve Dynamic Linear Programs (DLP) by taking advantage of their special staircase structure. This algorithm constitutes a stepping stone to an improved algorithm for solving Dynamic Quadratic Programs, which, in turn, would make the nonlinear programming method of Successive Quadratic Programs more practical for solving trajectory optimization problems. The ultimate goal is to bring trajectory optimization solution speeds into the realm of real-time control. The algorithm exploits the staircase nature of the large constraint matrix of the equality-constrained DLPs encountered when solving inequality-constrained DLPs by an active set approach. A numerically stable, staircase QL factorization of the staircase constraint matrix is carried out starting from its last rows and columns. The resulting recursion is like the time-varying Riccati equation from multi-stage LQR theory. The resulting factorization increases the efficiency of all of the typical LP solution operations over that of a dense matrix LP code. At the same time numerical stability is ensured. The algorithm also takes advantage of dynamic programming ideas about the cost-to-go by relaxing active pseudo constraints in a backwards sweeping process. This further decreases the cost per update of the LP rank-1 updating procedure, although it may result in more changes of the active set than if pseudo constraints were relaxed in a non-stagewise fashion. The usual stability of closed-loop Linear/Quadratic optimally-controlled systems, if it carries over to strictly linear cost functions, implies that the savings due to reduced factor update effort may outweigh the cost of an increased number of updates. An aerospace example is presented in which a ground-to-ground rocket's distance is maximized. This example demonstrates the applicability of this class of algorithms to aerospace guidance. It also sheds light on the efficacy of the proposed pseudo constraint relaxation
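
    To make the staircase structure concrete, the sketch below assembles a tiny multi-stage LP (a hypothetical point mass maximizing final position under a fuel budget) and hands it to a generic dense LP solver; the paper's rocket example, staircase QL factorization, and active-set updating are not reproduced here.

      import numpy as np
      from scipy.optimize import linprog

      # Hypothetical staircase example: maximize final position x_T of a point mass
      # with limited "fuel", the thrust being split into positive/negative parts.
      T, dt, fuel = 20, 1.0, 5.0
      nx = T + 1                       # positions x_0..x_T
      nv = T + 1                       # velocities v_0..v_T
      nu = T                           # thrust components per stage
      n = nx + nv + 2 * nu             # variable vector [x, v, u_plus, u_minus]

      def xi(t): return t
      def vi(t): return nx + t
      def upi(t): return nx + nv + t
      def umi(t): return nx + nv + nu + t

      A_eq, b_eq = [], []
      def eq(row, rhs=0.0):
          A_eq.append(row); b_eq.append(rhs)

      row = np.zeros(n); row[xi(0)] = 1.0; eq(row)          # x_0 = 0
      row = np.zeros(n); row[vi(0)] = 1.0; eq(row)          # v_0 = 0
      for t in range(T):                                    # staircase dynamics blocks
          row = np.zeros(n)
          row[xi(t + 1)], row[xi(t)], row[vi(t)] = 1.0, -1.0, -dt
          eq(row)                                           # x_{t+1} = x_t + dt*v_t
          row = np.zeros(n)
          row[vi(t + 1)], row[vi(t)] = 1.0, -1.0
          row[upi(t)], row[umi(t)] = -dt, dt
          eq(row)                                           # v_{t+1} = v_t + dt*(u+ - u-)

      A_ub = np.zeros((1, n)); A_ub[0, nx + nv:] = dt       # fuel: dt*sum(u+ + u-) <= fuel
      b_ub = [fuel]

      c = np.zeros(n); c[xi(T)] = -1.0                      # maximize x_T
      bounds = [(None, None)] * (nx + nv) + [(0.0, 1.0)] * (2 * nu)

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                    bounds=bounds, method="highs")
      print("max final position:", -res.fun)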

  9. A new mathematical adjoint for the modified SAAF-SN equations

    SciTech Connect

    Schunert, Sebastian; Wang, Yaqi; Martineau, Richard; DeHart, Mark D.

    2015-01-01

    We present a new adjoint FEM weak form, which can be directly used for evaluating the mathematical adjoint, suitable for perturbation calculations, of the self-adjoint angular flux SN equations (SAAF-SN) without construction and transposition of the underlying coefficient matrix. Stabilization schemes incorporated in the described SAAF-SN method make the mathematical adjoint distinct from the physical adjoint, i.e., the solution of the continuous adjoint equation with SAAF-SN. This weak form is implemented in RattleSnake, the MOOSE (Multiphysics Object-Oriented Simulation Environment) based transport solver. Numerical results verify the correctness of the implementation and show its utility for both fixed-source and eigenvalue problems.

  10. Reentry-Vehicle Shape Optimization Using a Cartesian Adjoint Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2006-01-01

    Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and optimization algorithms. A well-known use of the adjoint method is gradient-based shape optimization. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (e.g., geometric parameters that control the shape). Classic aerodynamic applications of gradient-based optimization include the design of cruise configurations for transonic and supersonic flow, as well as the design of high-lift systems. Cartesian-mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric computer-aided design (CAD). In previous work on Cartesian adjoint solvers, Melvin et al. developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the two-dimensional Euler equations using a ghost-cell method to enforce the wall boundary conditions. In Refs. 18 and 19, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm were the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The accuracy of the gradient computation was verified using several three-dimensional test cases, which included design

  11. Multigrid methods for bifurcation problems: The self adjoint case

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1987-01-01

    This paper deals with multigrid methods for computational problems that arise in the theory of bifurcation and is restricted to the self adjoint case. The basic problem is to solve for arcs of solutions, a task that is done successfully with an arc length continuation method. Other important issues are, for example, detecting and locating singular points as part of the continuation process, switching branches at bifurcation points, etc. Multigrid methods have been applied to continuation problems. These methods work well at regular points and at limit points, while they may encounter difficulties in the vicinity of bifurcation points. A new continuation method that is very efficient also near bifurcation points is presented here. The other issues mentioned above are also treated very efficiently with appropriate multigrid algorithms. For example, it is shown that limit points and bifurcation points can be solved for directly by a multigrid algorithm. Moreover, the algorithms presented here solve the corresponding problems in just a few work units (about 10 or less), where a work unit is the work involved in one local relaxation on the finest grid.

  12. An algorithm for the numerical solution of linear differential games

    SciTech Connect

    Polovinkin, E S; Ivanov, G E; Balashov, M V; Konstantinov, R V; Khorev, A V

    2001-10-31

    A numerical algorithm for the construction of stable Krasovskii bridges, Pontryagin alternating sets, and also of piecewise program strategies solving two-person linear differential (pursuit or evasion) games on a fixed time interval is developed on the basis of a general theory. The aim of the first player (the pursuer) is to hit a prescribed target (terminal) set with the phase vector of the control system at the prescribed time. The aim of the second player (the evader) is the opposite. A description of the numerical algorithms used in the solution of differential games of the type under consideration is presented, together with estimates of the errors resulting from the approximation of the game sets by polyhedra.

  13. Learning a trajectory using adjoint functions and teacher forcing

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad B.; Barhen, Jacob

    1992-01-01

    A new methodology for faster supervised temporal learning in nonlinear neural networks is presented which builds upon the concept of adjoint operators to allow fast computation of the gradients of an error functional with respect to all parameters of the neural architecture, and exploits the concept of teacher forcing to incorporate information on the desired output into the activation dynamics. The importance of the initial or final time conditions for the adjoint equations is discussed. A new algorithm is presented in which the adjoint equations are solved simultaneously (i.e., forward in time) with the activation dynamics of the neural network. We also indicate how teacher forcing can be modulated in time as learning proceeds. The results obtained show that the learning time is reduced by one to two orders of magnitude with respect to previously published results, while trajectory tracking is significantly improved. The proposed methodology makes hardware implementation of temporal learning attractive for real-time applications.

  14. Self-adaptive closed constrained solution algorithms for nonlinear conduction

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Tovichakchaikul, S.

    1982-01-01

    Self-adaptive solution algorithms are developed for nonlinear heat conduction problems encountered in analyzing materials for use in high-temperature or cryogenic conditions. The nonlinear effects are noted to occur due to convection and radiation effects, as well as temperature-dependent properties of the materials. Incremental successive substitution (ISS) and Newton-Raphson (NR) procedures are treated as extrapolation schemes which have solution projections bounded by a hyperline with an externally applied thermal load vector arising from internal heat generation and boundary conditions. Closed constraints are formulated which improve the efficiency and stability of the procedures by employing closed ellipsoidal surfaces to control the size of successive iterations. Governing equations are defined for nonlinear finite element models, and comparisons are made of results obtained using the new method and the ISS and NR schemes for epoxy, PVC, and CuGe.

  15. From analytical solutions of solute transport equations to multidimensional time-domain random walk (TDRW) algorithms

    NASA Astrophysics Data System (ADS)

    Bodin, Jacques

    2015-03-01

    In this study, new multi-dimensional time-domain random walk (TDRW) algorithms are derived from approximate one-dimensional (1-D), two-dimensional (2-D), and three-dimensional (3-D) analytical solutions of the advection-dispersion equation and from exact 1-D, 2-D, and 3-D analytical solutions of the pure-diffusion equation. These algorithms enable the calculation of both the time required for a particle to travel a specified distance in a homogeneous medium and the mass recovery at the observation point, which may be incomplete due to 2-D or 3-D transverse dispersion or diffusion. The method is extended to heterogeneous media, represented as a piecewise collection of homogeneous media. The particle motion is then decomposed along a series of intermediate checkpoints located on the medium interface boundaries. The accuracy of the multi-dimensional TDRW method is verified against (i) exact analytical solutions of solute transport in homogeneous media and (ii) finite-difference simulations in a synthetic 2-D heterogeneous medium of simple geometry. The results demonstrate that the method is ideally suited to purely diffusive transport and to advection-dispersion transport problems dominated by advection. Conversely, the method is not recommended for highly dispersive transport problems because the accuracy of the advection-dispersion TDRW algorithms degrades rapidly for a low Péclet number, consistent with the accuracy limit of the approximate analytical solutions. The proposed approach provides a unified methodology for deriving multi-dimensional time-domain particle equations and may be applicable to other mathematical transport models, provided that appropriate analytical solutions are available.
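
    For the 1-D advection-dispersion building block, travel times over a homogeneous segment can be sampled directly from the inverse-Gaussian (Wald) first-passage-time law for motion with drift v and dispersion D, and times over a piecewise-homogeneous medium accumulate across the segment interfaces. The sketch below illustrates only this idea; it is not the paper's multi-dimensional algorithms and it ignores the incomplete mass recovery caused by transverse dispersion or diffusion.

      import numpy as np

      def tdrw_travel_times(L, v, D, n_particles=100000, rng=None):
          """Sample 1-D advection-dispersion travel times over a distance L.

          Uses the inverse-Gaussian first-passage-time law for motion with
          drift v and dispersion coefficient D (mean L/v, shape L**2/(2*D)).
          """
          rng = np.random.default_rng(rng)
          mean = L / v
          shape = L**2 / (2.0 * D)
          return rng.wald(mean, shape, size=n_particles)

      # Piecewise-homogeneous medium: accumulate the times over each segment,
      # using intermediate checkpoints at the interfaces.
      segments = [dict(L=2.0, v=1.0, D=0.05),
                  dict(L=3.0, v=0.5, D=0.20)]
      t = sum(tdrw_travel_times(**seg) for seg in segments)
      print("mean arrival time:", t.mean(), " (purely advective estimate:", 2.0/1.0 + 3.0/0.5, ")")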

  16. An algorithm for enforcement of contact constraints in quasistatic applications using matrix-free solution algorithms

    SciTech Connect

    Heinstein, M.W.

    1997-10-01

    A contact enforcement algorithm has been developed for matrix-free quasistatic finite element techniques. Matrix-free (iterative) solution algorithms such as nonlinear Conjugate Gradients (CG) and Dynamic Relaxation (DR) are distinctive in that the number of iterations required for convergence is typically of the same order as the number of degrees of freedom of the model. From iteration to iteration the contact normal and tangential forces vary significantly making contact constraint satisfaction tenuous. Furthermore, global determination and enforcement of the contact constraints every iteration could be questioned on the grounds of efficiency. This work addresses this situation by introducing an intermediate iteration for treating the active gap constraint and at the same time exactly (kinematically) enforcing the linearized gap rate constraint for both frictionless and frictional response.

  17. A Posteriori Analysis for Hydrodynamic Simulations Using Adjoint Methodologies

    SciTech Connect

    Woodward, C S; Estep, D; Sandelin, J; Wang, H

    2009-02-26

    This report contains results of analysis done during an FY08 feasibility study investigating the use of adjoint methodologies for a posteriori error estimation for hydrodynamics simulations. We developed an approach to adjoint analysis for these systems through use of modified equations and viscosity solutions. Targeting first the 1D Burgers equation, we include a verification of the adjoint operator for the modified equation for the Lax-Friedrichs scheme, then derivations of an a posteriori error analysis for a finite difference scheme and a discontinuous Galerkin scheme applied to this problem. We include some numerical results showing the use of the error estimate. Lastly, we develop a computable a posteriori error estimate for the MAC scheme applied to stationary Navier-Stokes.

  18. The development of solution algorithms for compressible flows

    NASA Astrophysics Data System (ADS)

    Slack, David Christopher

    Three main topics were examined. The first is the development and comparison of time integration schemes on 2-D unstructured meshes; both explicit and implicit schemes are presented. Cell-centered and cell-vertex finite volume upwind schemes using Roe's approximate Riemann solver are developed. The second topic involves an interactive adaptive remeshing algorithm which uses a frontal grid generator and is compared to a single-grid calculation. The final topic examined is the capabilities developed for a structured 3-D code called GASP. The capabilities include: generalized chemistry and thermodynamic modeling, space marching, memory management through the use of binary C I/O, and algebraic and two-equation eddy viscosity turbulence modeling. Results are given for a Mach 1.7 3-D analytic forebody, a Mach 1.38 axisymmetric nozzle with hydrogen-air combustion, a Mach 14.15 deg ramp, and Mach 0.3 viscous flow over a flat plate.

  19. Adjoint Sensitivity Computations for an Embedded-Boundary Cartesian Mesh Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis,Michael J.

    2006-01-01

    Cartesian-mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric Computer-Aided Design (CAD) tools. Our goal is to combine the automation capabilities of Cartesian methods with an efficient computation of design sensitivities. We address this issue using the adjoint method, where the computational cost of the design sensitivities, or objective function gradients, is essentially independent of the number of design variables. In previous work, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm included the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The objective of the present work is to extend our adjoint formulation to problems involving general shape changes. Central to this development is the computation of volume-mesh sensitivities to obtain a reliable approximation of the objective function gradient. Motivated by the success of mesh-perturbation schemes commonly used in body-fitted unstructured formulations, we propose an approach based on a local linearization of a mesh-perturbation scheme similar to the spring analogy. This approach circumvents most of the difficulties that arise due to non-smooth changes in the cut-cell layer as the boundary shape evolves and provides a consistent approximation to the exact gradient of the discretized objective function. A detailed gradient accuracy study is presented to verify our approach

  20. Trajectory Optimization Using Adjoint Method and Chebyshev Polynomial Approximation for Minimizing Fuel Consumption During Climb

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hornby, Gregory; Ishihara, Abe

    2013-01-01

    This paper describes two methods of trajectory optimization to obtain an optimal minimum-fuel-to-climb trajectory for an aircraft. The first method is based on the adjoint method, and the second method is based on a direct trajectory optimization method using a Chebyshev polynomial approximation and a cubic spline approximation. The approximate optimal trajectory is compared with the adjoint-based optimal trajectory, which is considered to be the true optimal solution of the trajectory optimization problem. The adjoint-based optimization problem leads to a singular optimal control solution which results in a bang-singular-bang optimal control.

  1. Applicability of the Newman-Janis algorithm to black hole solutions of modified gravity theories

    NASA Astrophysics Data System (ADS)

    Hansen, Devin; Yunes, Nicolás

    2013-11-01

    The Newman-Janis algorithm has been widely used to construct rotating black hole solutions from nonrotating counterparts. While this algorithm was developed within general relativity (GR), it has more recently been applied to nonrotating solutions in modified gravity theories. We find that the application of the Newman-Janis algorithm to an arbitrary non-GR spherically symmetric solution introduces pathologies in the resulting axially symmetric metric. This then establishes that, in general, the Newman-Janis algorithm should not be used to construct rotating black hole solutions outside of General Relativity.

  2. Constrained Multipoint Aerodynamic Shape Optimization Using an Adjoint Formulation and Parallel Computers

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony; Alonso, Juan Jose; Rimlinger, Mark J.; Saunders, David

    1997-01-01

    An aerodynamic shape optimization method that treats the design of complex aircraft configurations subject to high fidelity computational fluid dynamics (CFD), geometric constraints and multiple design points is described. The design process will be greatly accelerated through the use of both control theory and distributed memory computer architectures. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on a higher order CFD method. In order to facilitate the integration of these high fidelity CFD approaches into future multi-disciplinary optimization (MDO) applications, new methods must be developed which are capable of simultaneously addressing complex geometries, multiple objective functions, and geometric design constraints. In our earlier studies, we coupled the adjoint based design formulations with unconstrained optimization algorithms and showed that the approach was effective for the aerodynamic design of airfoils, wings, wing-bodies, and complex aircraft configurations. In many of the results presented in these earlier works, geometric constraints were satisfied either by a projection into feasible space or by posing the design space parameterization such that it automatically satisfied constraints. Furthermore, with the exception of reference 9 where the second author initially explored the use of multipoint design in conjunction with adjoint formulations, our earlier works have focused on single point design efforts. Here we demonstrate that the same methodology may be extended to treat

  3. ANISORROPIA: the adjoint of the aerosol thermodynamic model ISORROPIA

    NASA Astrophysics Data System (ADS)

    Capps, S. L.; Henze, D. K.; Hakami, A.; Russell, A. G.; Nenes, A.

    2011-08-01

    We present the development of ANISORROPIA, the discrete adjoint of the ISORROPIA thermodynamic equilibrium model that treats the Na+-SO42--HSO4--NH4+-NO3--Cl--H2O aerosol system, and we demonstrate its sensitivity analysis capabilities. ANISORROPIA calculates sensitivities of an inorganic species in aerosol or gas phase with respect to the total concentrations of each species present with only a two-fold increase in computational time over the forward model execution. Due to the highly nonlinear and discontinuous solution surface of ISORROPIA, evaluation of the adjoint required a new, complex-variable version of the model, which determines first-order sensitivities with machine precision and avoids cancellation errors arising from finite difference calculations. The adjoint is verified over an atmospherically relevant range of concentrations, temperature, and relative humidity. We apply ANISORROPIA to recent field campaign results from Atlanta, GA, USA, and Mexico City, Mexico, to characterize the inorganic aerosol sensitivities of these distinct urban air masses. The variability in the relationship between PM2.5 mass and precursor concentrations shown has important implications for air quality and climate. ANISORROPIA enables efficient elucidation of aerosol concentration dependence on aerosol precursor emissions in the context of atmospheric chemical transport model adjoints.
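
    The "complex-variable version of the model" refers to the complex-step derivative idea: evaluating the model at a complex-perturbed input and taking the imaginary part yields first-order sensitivities without subtractive cancellation. The sketch below demonstrates the technique on a stand-in scalar function; ISORROPIA itself is not used, and the function and step size are illustrative assumptions.

      import numpy as np

      def complex_step_derivative(f, x, h=1e-30):
          """First-order sensitivity df/dx via the complex-step method.

          Evaluating f at x + i*h and taking the imaginary part avoids the
          subtractive cancellation that limits finite-difference accuracy,
          so h can be taken extremely small.
          """
          return np.imag(f(x + 1j * h)) / h

      # Stand-in for a thermodynamic-model output (ISORROPIA itself is not used here).
      def model_output(total_nh3):
          return np.log(1.0 + total_nh3) * np.exp(-0.1 * total_nh3)

      x0 = 5.0
      exact = np.exp(-0.1 * x0) * (1.0 / (1.0 + x0) - 0.1 * np.log(1.0 + x0))
      print("complex-step:", complex_step_derivative(model_output, x0))
      print("analytic    :", exact)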

  4. A fast algorithm for numerical solutions to Fortet's equation

    NASA Astrophysics Data System (ADS)

    Brumen, Gorazd

    2008-10-01

    A fast algorithm for computation of default times of multiple firms in a structural model is presented. The algorithm uses a multivariate extension of Fortet's equation and the structure of Toeplitz matrices to significantly improve the computation time. In a financial market consisting of M (not ≫ 1) firms and N discretization points in every dimension, the algorithm uses O(n log n · M · M! · N^(M(M-1)/2)) operations, where n is the number of discretization points in the time domain. The algorithm is applied to firm survival probability computation and zero-coupon bond pricing.

  5. Self-adjointness of deformed unbounded operators

    SciTech Connect

    Much, Albert

    2015-09-15

    We consider deformations of unbounded operators by using the novel construction tool of warped convolutions. By using the Kato-Rellich theorem, we show that deformations of unbounded self-adjoint operators remain self-adjoint if they satisfy a certain condition. This condition proves to be necessary for the oscillatory integral to be well defined. Moreover, different proofs are given for the self-adjointness of deformed unbounded operators in the context of quantum mechanics and quantum field theory.

  6. Application of Adjoint Methodology in Various Aspects of Sonic Boom Design

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Sriram K.

    2014-01-01

    One of the advances in computational design has been the development of adjoint methods allowing efficient calculation of sensitivities in gradient-based shape optimization. This paper discusses two new applications of adjoint methodology that have been developed to aid in sonic boom mitigation exercises. In the first, equivalent area targets are generated using adjoint sensitivities of selected boom metrics. These targets may then be used to drive the vehicle shape during optimization. The second application is the computation of adjoint sensitivities of boom metrics on the ground with respect to parameters such as flight conditions, propagation sampling rate, and selected inputs to the propagation algorithms. These sensitivities enable the designer to make more informed selections of flight conditions at which the chosen cost functionals are less sensitive.

  7. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.

  8. Numerical Computation of Sensitivities and the Adjoint Approach

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael

    1997-01-01

    We discuss the numerical computation of sensitivities via the adjoint approach in optimization problems governed by differential equations. We focus on the adjoint problem in its weak form. We show how one can avoid some of the problems with the adjoint approach, such as deriving suitable boundary conditions for the adjoint equation. We discuss the convergence of numerical approximations of the costate computed via the weak form of the adjoint problem and show the significance for the discrete adjoint problem.

  9. Evolutionary Algorithms Approach to the Solution of Damage Detection Problems

    NASA Astrophysics Data System (ADS)

    Salazar Pinto, Pedro Yoajim; Begambre, Oscar

    2010-09-01

    In this work, a new self-configured hybrid algorithm is proposed by combining Particle Swarm Optimization (PSO) with a Genetic Algorithm (GA). The aim of the proposed strategy is to increase the stability and accuracy of the search. The central idea is the concept of the guide particle: this particle (the PSO global best in each generation) transmits its information to a particle of the following PSO generation, which is controlled by the GA. Thus, the proposed hybrid has an elitism feature that improves its performance and guarantees the convergence of the procedure. In various tests carried out on benchmark functions reported in the international literature, better performance in stability and accuracy was observed; the new algorithm was therefore used to identify damage in a simply supported beam using modal data. Finally, it is worth noting that the algorithm is independent of the initial definition of the heuristic parameters.

  10. Adjoint calculations for multiple scattering of Compton and Rayleigh effects

    NASA Astrophysics Data System (ADS)

    Fernández, J. E.; Sumini, M.

    1992-08-01

    As is well known, the experimental determination of the Compton profile requires a particular geometry with a scattering angle close to π. That situation involves a narrow multiple-scattering spectrum that overlaps the Compton peak, making it difficult to analyze the different contributions to the profile. We show how the solution of the adjoint problem can help in devising more useful experimental configurations, giving, through its classical "importance" meaning, a formally clear picture of the whole problem.

  11. ASYMPTOTICALLY OPTIMAL HIGH-ORDER ACCURATE ALGORITHMS FOR THE SOLUTION OF CERTAIN ELLIPTIC PDEs

    SciTech Connect

    Leonid Kunyansky, PhD

    2008-11-26

    The main goal of the project, "Asymptotically Optimal, High-Order Accurate Algorithms for the Solution of Certain Elliptic PDE's" (DE-FG02-03ER25577) was to develop fast, high-order algorithms for the solution of scattering problems and spectral problems of photonic crystals theory. The results we obtained lie in three areas: (1) asymptotically fast, high-order algorithms for the solution of eigenvalue problems of photonics, (2) fast, high-order algorithms for the solution of acoustic and electromagnetic scattering problems in the inhomogeneous media, and (3) inversion formulas and fast algorithms for the inverse source problem for the acoustic wave equation, with applications to thermo- and opto- acoustic tomography.

  12. A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials

    SciTech Connect

    Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A

    2008-12-04

    We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.
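
    A minimal sketch of the implicit time-step structure described here, a backward-difference temporal discretization solved with a Jacobian-free Newton-Krylov iteration, is shown below for a simple scalar Allen-Cahn-type order parameter rather than the full quaternion phase-field system; the grid, parameters, and double-well potential are illustrative assumptions.

      import numpy as np
      from scipy.optimize import newton_krylov

      # 1-D periodic grid and a scalar Allen-Cahn-like order parameter phi.
      N, L, dt, eps2 = 128, 1.0, 1e-3, 1e-3
      dx = L / N
      x = np.arange(N) * dx
      phi_old = 0.1 * np.sin(2 * np.pi * x)

      def laplacian(u):
          return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

      def residual(phi_new):
          """Backward-Euler (BDF1) residual for d(phi)/dt = eps2*lap(phi) - W'(phi)."""
          dWdphi = phi_new**3 - phi_new          # derivative of a double-well potential
          return phi_new - phi_old - dt * (eps2 * laplacian(phi_new) - dWdphi)

      # One implicit step: Jacobian-free Newton-Krylov with a GMRES-type inner solver.
      phi_new = newton_krylov(residual, phi_old.copy(), method="lgmres", f_tol=1e-9)
      print("max |phi| after one implicit step:", np.abs(phi_new).max())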

  13. A multi-level solution algorithm for steady-state Markov chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham; Leutenegger, Scott T.

    1993-01-01

    A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.

  14. Fast algorithm for the solution of large-scale non-negativity constrained least squares problems.

    SciTech Connect

    Van Benthem, Mark Hilary; Keenan, Michael Robert

    2004-06-01

    Algorithms for multivariate image analysis and other large-scale applications of multivariate curve resolution (MCR) typically employ constrained alternating least squares (ALS) procedures in their solution. The solution to a least squares problem under general linear equality and inequality constraints can be reduced to the solution of a non-negativity-constrained least squares (NNLS) problem. Thus the efficiency of the solution to any constrained least squares problem rests heavily on the underlying NNLS algorithm. We present a new NNLS solution algorithm that is appropriate to large-scale MCR and other ALS applications. Our new algorithm rearranges the calculations in the standard active set NNLS method on the basis of combinatorial reasoning. This rearrangement serves to reduce substantially the computational burden required for NNLS problems having large numbers of observation vectors.
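
    In alternating least squares applications, each half-step reduces to one NNLS problem per observation vector against a fixed factor matrix. The sketch below solves these column by column with SciPy's standard active-set NNLS as a baseline; the combinatorial rearrangement that gives the paper its speedup is not reproduced, and the synthetic data are illustrative.

      import numpy as np
      from scipy.optimize import nnls

      def nnls_columns(A, B):
          """Solve min ||A X - B||_F with X >= 0, one column of B at a time.

          This is the baseline formulation; the combinatorial algorithm described
          in the abstract reorganizes these per-column active-set solves to share
          work across the (typically very many) observation vectors.
          """
          n_components, n_obs = A.shape[1], B.shape[1]
          X = np.zeros((n_components, n_obs))
          for j in range(n_obs):
              X[:, j], _ = nnls(A, B[:, j])
          return X

      # Synthetic example: mix three non-negative "spectra" into 500 observations.
      rng = np.random.default_rng(0)
      A = np.abs(rng.normal(size=(40, 3)))            # pure-component spectra
      X_true = np.abs(rng.normal(size=(3, 500)))      # non-negative concentrations
      B = A @ X_true + 0.01 * rng.normal(size=(40, 500))
      X_est = nnls_columns(A, B)
      print("relative error:", np.linalg.norm(X_est - X_true) / np.linalg.norm(X_true))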

  15. An algorithm for the systematic disturbance of optimal rotational solutions

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Kaiser, Mary K.

    1989-01-01

    An algorithm for introducing a systematic rotational disturbance into an optimal (i.e., single axis) rotational trajectory is described. This disturbance introduces a motion vector orthogonal to the quaternion-defined optimal rotation axis. By altering the magnitude of this vector, the degree of non-optimality can be controlled. The metric properties of the distortion parameter are described, with analogies to two-dimensional translational motion. This algorithm was implemented in a motion-control program on a three-dimensional graphic workstation. It supports a series of human performance studies on the detectability of rotational trajectory optimality by naive observers.

  16. Using Strassen's algorithm to accelerate the solution of linear systems

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Lee, King; Simon, Horst D.

    1990-01-01

    Strassen's algorithm for fast matrix-matrix multiplication has been implemented for matrices of arbitrary shapes on the CRAY-2 and CRAY Y-MP supercomputers. Several techniques have been used to reduce the scratch space requirement for this algorithm while simultaneously preserving a high level of performance. When the resulting Strassen-based matrix multiply routine is combined with some routines from the new LAPACK library, LU decomposition can be performed with rates significantly higher than those achieved by conventional means. We succeeded in factoring a 2048 x 2048 matrix on the CRAY Y-MP at a rate equivalent to 325 MFLOPS.
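
    The recursion behind Strassen's method is easy to state even though the contribution here is the CRAY implementation and its scratch-space management. A minimal power-of-two sketch, falling back to the ordinary (BLAS-backed) product below a crossover size, is shown below; the crossover value is an arbitrary assumption.

      import numpy as np

      def strassen(A, B, crossover=64):
          """Strassen matrix multiply for square matrices whose size is a power of two."""
          n = A.shape[0]
          if n <= crossover:
              return A @ B                      # fall back to the ordinary product
          h = n // 2
          A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
          B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

          M1 = strassen(A11 + A22, B11 + B22, crossover)
          M2 = strassen(A21 + A22, B11, crossover)
          M3 = strassen(A11, B12 - B22, crossover)
          M4 = strassen(A22, B21 - B11, crossover)
          M5 = strassen(A11 + A12, B22, crossover)
          M6 = strassen(A21 - A11, B11 + B12, crossover)
          M7 = strassen(A12 - A22, B21 + B22, crossover)

          C = np.empty_like(A)
          C[:h, :h] = M1 + M4 - M5 + M7
          C[:h, h:] = M3 + M5
          C[h:, :h] = M2 + M4
          C[h:, h:] = M1 - M2 + M3 + M6
          return C

      A = np.random.rand(256, 256)
      B = np.random.rand(256, 256)
      print("max abs error vs. numpy:", np.max(np.abs(strassen(A, B) - A @ B)))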

  17. Local-in-Time Adjoint-Based Method for Optimal Control/Design Optimization of Unsteady Compressible Flows

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2009-01-01

    We study local-in-time adjoint-based methods for minimization of flow matching functionals subject to the 2-D unsteady compressible Euler equations. The key idea of the local-in-time method is to construct a very accurate approximation of the global-in-time adjoint equations and the corresponding sensitivity derivative by using only local information available on each time subinterval. In contrast to conventional time-dependent adjoint-based optimization methods which require backward-in-time integration of the adjoint equations over the entire time interval, the local-in-time method solves local adjoint equations sequentially over each time subinterval. Since each subinterval contains relatively few time steps, the storage cost of the local-in-time method is much lower than that of the global adjoint formulation, thus making time-dependent optimization feasible for practical applications. The paper presents a detailed comparison of the local- and global-in-time adjoint-based methods for minimization of a tracking functional governed by the Euler equations describing the flow around a circular bump. Our numerical results show that the local-in-time method converges to the same optimal solution obtained with the global counterpart, while drastically reducing the memory cost as compared to the global-in-time adjoint formulation.

  18. Neural Networks Art: Solving Problems with Multiple Solutions and New Teaching Algorithm

    PubMed Central

    Dmitrienko, V. D; Zakovorotnyi, A. Yu.; Leonov, S. Yu.; Khavina, I. P

    2014-01-01

    A new discrete adaptive resonance theory (ART) neural network, which allows problems with multiple solutions to be solved, is developed. New teaching algorithms for ART neural networks are developed that prevent the degradation and reproduction of classes when training on noisy input data. The proposed learning algorithms for discrete ART networks allow different classification methods for the input data to be obtained. PMID:25246988

  19. An Enhanced Artificial Bee Colony Algorithm with Solution Acceptance Rule and Probabilistic Multisearch

    PubMed Central

    Yurtkuran, Alkın

    2016-01-01

    The artificial bee colony (ABC) algorithm is a popular swarm-based technique inspired by the intelligent foraging behavior of honeybee swarms. This paper proposes a new variant of the ABC algorithm, namely, enhanced ABC with solution acceptance rule and probabilistic multisearch (ABC-SA), to address global optimization problems. A new solution acceptance rule is proposed where, instead of greedy selection between the old solution and the new candidate solution, worse candidate solutions have a probability of being accepted. Additionally, the acceptance probability of worse candidates is nonlinearly decreased throughout the search process adaptively. Moreover, in order to improve the performance of the ABC and balance intensification and diversification, a probabilistic multisearch strategy is presented. Three different search equations with distinctive characteristics are employed using predetermined search probabilities. By implementing a new solution acceptance rule and a probabilistic multisearch approach, the intensification and diversification performance of the ABC algorithm is improved. The proposed algorithm has been tested on well-known benchmark functions of varying dimensions by comparing against novel ABC variants, as well as several recent state-of-the-art algorithms. Computational results show that the proposed ABC-SA outperforms other ABC variants and is superior to state-of-the-art algorithms proposed in the literature. PMID:26819591

  20. An Enhanced Artificial Bee Colony Algorithm with Solution Acceptance Rule and Probabilistic Multisearch.

    PubMed

    Yurtkuran, Alkın; Emel, Erdal

    2016-01-01

    The artificial bee colony (ABC) algorithm is a popular swarm-based technique inspired by the intelligent foraging behavior of honeybee swarms. This paper proposes a new variant of the ABC algorithm, namely, enhanced ABC with solution acceptance rule and probabilistic multisearch (ABC-SA), to address global optimization problems. A new solution acceptance rule is proposed where, instead of greedy selection between the old solution and the new candidate solution, worse candidate solutions have a probability of being accepted. Additionally, the acceptance probability of worse candidates is nonlinearly decreased throughout the search process adaptively. Moreover, in order to improve the performance of the ABC and balance intensification and diversification, a probabilistic multisearch strategy is presented. Three different search equations with distinctive characteristics are employed using predetermined search probabilities. By implementing a new solution acceptance rule and a probabilistic multisearch approach, the intensification and diversification performance of the ABC algorithm is improved. The proposed algorithm has been tested on well-known benchmark functions of varying dimensions by comparing against novel ABC variants, as well as several recent state-of-the-art algorithms. Computational results show that the proposed ABC-SA outperforms other ABC variants and is superior to state-of-the-art algorithms proposed in the literature. PMID:26819591
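
    The acceptance rule described in this abstract replaces greedy selection with a probabilistic rule whose tolerance for worse candidates decays nonlinearly over the run. The sketch below is one plausible form of such a rule inside a generic improvement loop; the initial probability and decay schedule are illustrative assumptions, not the constants used by ABC-SA.

      import numpy as np

      def accept(candidate_cost, current_cost, iteration, max_iter, rng,
                 p0=0.3, decay_power=3.0):
          """Solution acceptance rule: always keep improvements, and accept a
          worse candidate with a probability that decreases nonlinearly as the
          search progresses (constants here are illustrative, not the paper's)."""
          if candidate_cost <= current_cost:
              return True
          p_accept = p0 * (1.0 - iteration / max_iter) ** decay_power
          return rng.random() < p_accept

      # Toy usage inside a generic improvement loop on the sphere function.
      rng = np.random.default_rng(1)
      x = rng.normal(size=10)
      cost = float(x @ x)
      max_iter = 2000
      for it in range(max_iter):
          cand = x + 0.1 * rng.normal(size=10)
          cand_cost = float(cand @ cand)
          if accept(cand_cost, cost, it, max_iter, rng):
              x, cost = cand, cand_cost
      print("final cost:", cost)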

  1. Consistent Adjoint Driven Importance Sampling using Space, Energy and Angle

    SciTech Connect

    Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M

    2012-08-01

    For challenging radiation transport problems, hybrid methods combine the accuracy of Monte Carlo methods with the global information present in deterministic methods. One of the most successful hybrid methods is CADIS, Consistent Adjoint Driven Importance Sampling. This method uses a deterministic adjoint solution to construct a biased source distribution and consistent weight windows to optimize a specific tally in a Monte Carlo calculation. The method has been implemented into transport codes using just the spatial and energy information from the deterministic adjoint and has been used in many applications to compute tallies with much higher figures of merit than analog calculations. CADIS also outperforms user-supplied importance values, which usually take long periods of user time to develop. This work extends CADIS to develop weight windows that are a function of the position, energy, and direction of the Monte Carlo particle. Two types of consistent source biasing are presented: one method that biases the source in space and energy while preserving the original directional distribution, and one method that biases the source in space, energy, and direction. Seven simple example problems are presented which compare the use of the standard space/energy CADIS with the new space/energy/angle treatments.
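
    The consistency at the heart of CADIS is that the biased source and the weight-window targets are built from the same deterministic adjoint solution: with the detector response R estimated as the adjoint-weighted source, the biased source is q*phi_adj/R and the birth weights R/phi_adj coincide with the weight-window centers. The toy space/energy sketch below writes down these standard relations; the arrays are placeholders, the angular treatment of this paper is omitted, and production codes derive window bounds from the centers using a user-chosen window ratio.

      import numpy as np

      # Toy space/energy grids (values are placeholders, not from the paper).
      q = np.array([[1.0, 0.5],            # true source density, shape (space, energy)
                    [0.2, 0.0],
                    [0.0, 0.0]])
      adj = np.array([[1e-3, 5e-4],        # deterministic adjoint ("importance") solution
                      [1e-2, 4e-3],
                      [1e-1, 3e-2]])

      # Detector response estimate from the adjoint: R = sum(q * adjoint).
      R = np.sum(q * adj)

      # Consistent biased source: sample source particles proportionally to q*adjoint,
      # so their birth weights q/q_biased = R/adjoint match the weight-window centers.
      q_biased = q * adj / R

      # Weight-window centers (target weights) for the transport phase.
      ww_center = np.where(adj > 0.0, R / adj, np.inf)

      print("biased source (sums to 1):", q_biased.sum())
      print("weight-window centers:\n", ww_center)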

  2. ANISORROPIA: the adjoint of the aerosol thermodynamic model ISORROPIA

    NASA Astrophysics Data System (ADS)

    Capps, S. L.; Henze, D. K.; Hakami, A.; Russell, A. G.; Nenes, A.

    2012-01-01

    We present the development of ANISORROPIA, the discrete adjoint of the ISORROPIA thermodynamic equilibrium model that treats the Na+-SO42--HSO4--NH4+-NO3--Cl--H2O aerosol system, and we demonstrate its sensitivity analysis capabilities. ANISORROPIA calculates sensitivities of an inorganic species in aerosol or gas phase with respect to the total concentrations of each species present with less than a two-fold increase in computational time over the concentration calculations. Due to the highly nonlinear and discontinuous solution surface of ISORROPIA, evaluation of the adjoint required a new, complex-variable version of the model, which determines first-order sensitivities with machine precision and avoids cancellation errors arising from finite difference calculations. The adjoint is verified over an atmospherically relevant range of concentrations, temperature, and relative humidity. We apply ANISORROPIA to recent field campaign results from Atlanta, GA, USA, and Mexico City, Mexico, to characterize the inorganic aerosol sensitivities of these distinct urban air masses. The variability in the relationship between fine mode inorganic aerosol mass and precursor concentrations shown has important implications for air quality and climate.

  3. An Efficient Algorithm for Partitioning and Authenticating Problem-Solutions of eLearning Contents

    ERIC Educational Resources Information Center

    Dewan, Jahangir; Chowdhury, Morshed; Batten, Lynn

    2013-01-01

    Content authenticity and correctness is one of the important challenges in eLearning as there can be many solutions to one specific problem in cyber space. Therefore, the authors feel it is necessary to map problems to solutions using graph partition and weighted bipartite matching. This article proposes an efficient algorithm to partition…

  4. ELASTIC NET FOR COX'S PROPORTIONAL HAZARDS MODEL WITH A SOLUTION PATH ALGORITHM.

    PubMed

    Wu, Yichao

    2012-01-01

    For least squares regression, Efron et al. (2004) proposed an efficient solution path algorithm, the least angle regression (LAR). They showed that a slight modification of the LAR leads to the whole LASSO solution path. Both the LAR and LASSO solution paths are piecewise linear. Recently Wu (2011) extended the LAR to generalized linear models and the quasi-likelihood method. In this work we extend the LAR further to handle Cox's proportional hazards model. The goal is to develop a solution path algorithm for the elastic net penalty (Zou and Hastie (2005)) in Cox's proportional hazards model. This goal is achieved in two steps. First we extend the LAR to optimizing the log partial likelihood plus a fixed small ridge term. Then we define a path modification, which leads to the solution path of the elastic net regularized log partial likelihood. Our solution path is exact and piecewise determined by ordinary differential equation systems. PMID:23226932

  5. Comparative study of fusion algorithms and implementation of new efficient solution

    NASA Astrophysics Data System (ADS)

    Besrour, Amine; Snoussi, Hichem; Siala, Mohamed; Abdelkefi, Fatma

    2014-05-01

    High Dynamic Range (HDR) imaging has been the subject of significant research over the past years, but the goal of acquiring cinema-quality HDR images of fast-moving scenes with an efficient merging algorithm has not yet been achieved. Many algorithms have been implemented and refined over the years; however, they do not handle all situations and lack the speed needed for fast HDR image reconstruction. In this paper, we present a full comparative analysis and study of the available fusion algorithms, and we describe our own algorithm, which is more optimized and faster than the existing ones. This merging algorithm is tied to our hardware solution, which allows us to obtain four pictures with different exposures.

  6. A new algorithm for generating highly accurate benchmark solutions to transport test problems

    SciTech Connect

    Azmy, Y.Y.

    1997-06-01

    We present a new algorithm for solving the neutron transport equation in its discrete-variable form. The new algorithm is based on computing the full matrix relating the scalar flux spatial moments in all cells to the fixed neutron source spatial moments, foregoing the need to compute the angular flux spatial moments, and thereby eliminating the need for sweeping the spatial mesh in each discrete-angular direction. The matrix equation is solved exactly in test cases, producing a solution vector that is free from iteration convergence error, and subject only to truncation and roundoff errors. Our algorithm is designed to provide method developers with a quick and simple solution scheme to test their new methods on difficult test problems without the need to develop sophisticated solution techniques, e.g. acceleration, before establishing the worthiness of their innovation. We demonstrate the utility of the new algorithm by applying it to the Arbitrarily High Order Transport Nodal (AHOT-N) method, and using it to solve two of Burre's Suite of Test Problems (BSTP). Our results provide highly accurate benchmark solutions that can be distributed electronically and used to verify the pointwise accuracy of other solution methods and algorithms.

  7. NEMOTAM: tangent and adjoint models for the ocean modelling platform NEMO

    NASA Astrophysics Data System (ADS)

    Vidard, A.; Bouttier, P.-A.; Vigilant, F.

    2014-10-01

    The tangent linear and adjoint models (TAM) are efficient tools for analysing and controlling dynamical systems such as NEMO. They can be involved in a large range of applications, such as sensitivity analysis, parameter estimation or the computation of characteristic vectors. A TAM is also required by the 4-D-VAR algorithm, which is one of the major methods in data assimilation. This paper describes the development and the validation of the tangent linear and adjoint model for the NEMO ocean modelling platform (NEMOTAM). The diagnostic tools that are available alongside NEMOTAM are detailed and discussed, and several applications are also presented.

  8. Solution algorithms for the two-dimensional Euler equations on unstructured meshes

    NASA Technical Reports Server (NTRS)

    Whitaker, D. L.; Slack, David C.; Walters, Robert W.

    1990-01-01

    The objective of the study was to analyze implicit techniques employed in structured grid algorithms for solving two-dimensional Euler equations and extend them to unstructured solvers in order to accelerate convergence rates. A comparison is made between nine different algorithms for both first-order and second-order accurate solutions. Higher-order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The discussion is illustrated by results for flow over a transonic circular arc.

  9. Finite element solution for energy conservation using a highly stable explicit integration algorithm

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Manhardt, P. D.

    1972-01-01

    Theoretical derivation of a finite element solution algorithm for the transient energy conservation equation in multidimensional, stationary multi-media continua with irregular solution domain closure is considered. The complete finite element matrix forms for arbitrarily irregular discretizations are established, using natural coordinate function representations. The algorithm is embodied into a user-oriented computer program (COMOC) which obtains transient temperature distributions at the node points of the finite element discretization using a highly stable explicit integration procedure with automatic error control features. The finite element algorithm is shown to possess convergence with discretization for a transient sample problem. The condensed form for the specific heat element matrix is shown to be preferable to the consistent form. Computed results for diverse problems illustrate the versatility of COMOC, and easily prepared output subroutines are shown to allow quick engineering assessment of solution behavior.

  10. A practical discrete-adjoint method for high-fidelity compressible turbulence simulations

    NASA Astrophysics Data System (ADS)

    Vishnampet, Ramanathan; Bodony, Daniel J.; Freund, Jonathan B.

    2015-03-01

    Methods and computing hardware advances have enabled accurate predictions of complex compressible turbulence phenomena, such as the generation of jet noise that motivates the present effort. However, limited understanding of underlying physical mechanisms restricts the utility of such predictions since they do not, by themselves, indicate a route to design improvements. Gradient-based optimization using adjoints can circumvent the flow complexity to guide designs, though this is predicated on the availability of a sufficiently accurate solution of the forward and adjoint systems. These are challenging to obtain, since both the chaotic character of the turbulence and the typical use of discretizations near their resolution limits in order to efficiently represent its smaller scales will amplify any approximation errors made in the adjoint formulation. Formulating a practical exact adjoint that avoids such errors is especially challenging if it is to be compatible with state-of-the-art simulation methods used for the turbulent flow itself. Automatic differentiation (AD) can provide code to calculate a nominally exact adjoint, but existing general-purpose AD codes are inefficient to the point of being prohibitive for large-scale turbulence simulations. Here, we analyze the compressible flow equations as discretized using the same high-order workhorse methods used for many high-fidelity compressible turbulence simulations, and formulate a practical space-time discrete-adjoint method without changing the basic discretization. A key step is the definition of a particular discrete analog of the continuous norm that defines our cost functional; our selection leads directly to an efficient Runge-Kutta-like scheme, though it would be just first-order accurate if used outside the adjoint formulation for time integration, with finite-difference spatial operators for the adjoint system. Its computational cost only modestly exceeds that of the flow equations. We confirm that its

  11. A practical discrete-adjoint method for high-fidelity compressible turbulence simulations

    SciTech Connect

    Vishnampet, Ramanathan; Bodony, Daniel J.; Freund, Jonathan B.

    2015-03-15

    Methods and computing hardware advances have enabled accurate predictions of complex compressible turbulence phenomena, such as the generation of jet noise that motivates the present effort. However, limited understanding of underlying physical mechanisms restricts the utility of such predictions since they do not, by themselves, indicate a route to design improvements. Gradient-based optimization using adjoints can circumvent the flow complexity to guide designs, though this is predicated on the availability of a sufficiently accurate solution of the forward and adjoint systems. These are challenging to obtain, since both the chaotic character of the turbulence and the typical use of discretizations near their resolution limits in order to efficiently represent its smaller scales will amplify any approximation errors made in the adjoint formulation. Formulating a practical exact adjoint that avoids such errors is especially challenging if it is to be compatible with state-of-the-art simulation methods used for the turbulent flow itself. Automatic differentiation (AD) can provide code to calculate a nominally exact adjoint, but existing general-purpose AD codes are inefficient to the point of being prohibitive for large-scale turbulence simulations. Here, we analyze the compressible flow equations as discretized using the same high-order workhorse methods used for many high-fidelity compressible turbulence simulations, and formulate a practical space–time discrete-adjoint method without changing the basic discretization. A key step is the definition of a particular discrete analog of the continuous norm that defines our cost functional; our selection leads directly to an efficient Runge–Kutta-like scheme, though it would be just first-order accurate if used outside the adjoint formulation for time integration, with finite-difference spatial operators for the adjoint system. Its computational cost only modestly exceeds that of the flow equations. We confirm that

  12. An efficient parallel algorithm for the solution of a tridiagonal linear system of equations

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1971-01-01

    Tridiagonal linear systems of equations are solved on conventional serial machines in a time proportional to N, where N is the number of equations. The conventional algorithms do not lend themselves directly to parallel computations on computers of the ILLIAC IV class, in the sense that they appear to be inherently serial. An efficient parallel algorithm is presented in which computation time grows as log₂ N. The algorithm is based on recursive doubling solutions of linear recurrence relations, and can be used to solve recurrence relations of all orders.
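
    A minimal sketch of the recursive-doubling idea, shown here for a first-order linear recurrence x[i] = a[i]·x[i-1] + b[i]; the paper applies the same device to the higher-order recurrences that arise from tridiagonal elimination. Each sweep is fully data-parallel and the number of sweeps grows as log₂ N. Function and variable names are illustrative, not taken from the paper.

        import numpy as np

        def solve_recurrence(a, b, x0):
            """Recursive doubling for x[i] = a[i]*x[i-1] + b[i], i = 1..N, with x[0] = x0."""
            A = np.asarray(a, dtype=float).copy()   # current relation: x[i] = A[i]*x[prev] + B[i]
            B = np.asarray(b, dtype=float).copy()
            n, step = len(A), 1
            while step < n:
                A_new, B_new = A.copy(), B.copy()
                # fold in the pair 'step' positions to the left; the reach of every
                # relation doubles, so only log2(N) sweeps are needed
                A_new[step:] = A[step:] * A[:-step]
                B_new[step:] = A[step:] * B[:-step] + B[step:]
                A, B = A_new, B_new
                step *= 2
            return A * x0 + B                        # every x[i] now refers to x[0] directly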

  13. An efficient parallel algorithm for the solution of a tridiagonal linear system of equations.

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1973-01-01

    Tridiagonal linear systems of equations can be solved on conventional serial machines in a time proportional to N, where N is the number of equations. The conventional algorithms do not lend themselves directly to parallel computation on computers of the Illiac IV class, in the sense that they appear to be inherently serial. An efficient parallel algorithm is presented in which computation time grows as log₂ N. The algorithm is based on recursive doubling solutions of linear recurrence relations, and can be used to solve recurrence relations of all orders.

  14. Adjoint tomography of the Middle East

    NASA Astrophysics Data System (ADS)

    Peter, D. B.; Savage, B.; Rodgers, A. J.; Tromp, J.

    2010-12-01

    Improvements in nuclear explosion monitoring require refined seismic models of the target region. In our study, we focus on the Middle East, spanning a region from Turkey to the west and West India to the east. This area represents a complex geologic and tectonic setting with sparse seismic data coverage. This has led to diverging interpretations of crustal and underlying upper-mantle structure by different research groups, complicating seismic monitoring of the Middle East at regional distances. We evaluated an initial 3D seismic model of this region by computing full waveforms for several regional earthquakes by a spectral-element method. We measure traveltime and multitaper phase shifts between observed broadband data and synthetic seismograms for distinct seismic phases within selected time windows using a recently developed automated measurement algorithm. Based on the remaining misfits, we set up an iterative inversion procedure for a fully numerical 3D seismic tomography approach. In order to improve the initial 3D seismic model, the sensitivity to seismic structure of the traveltime and multitaper phase measurements for all available seismic network recordings is computed. As this represents a computationally very intensive task, we take advantage of a fully numerical adjoint approach by using the efficient software package SPECFEM3D_GLOBE on a dedicated cluster. We show examples of such sensitivity kernels for different seismic events and use them in a steepest descent approach to update the 3D seismic model, starting at longer periods between 60 s and up to 200 s and moving towards shorter periods of 11 s. We highlight various improvements in the initial seismic structure during the iterations in order to better fit regional seismic waveforms in the Middle East.

  15. Adjoint tomography of the Middle East

    NASA Astrophysics Data System (ADS)

    Peter, D. B.; Savage, B.; Rodgers, A.; Morency, C.; Tromp, J.

    2011-12-01

    Improvements in nuclear explosion monitoring require refined seismic models of the target region. In our study, we focus on the Middle East, spanning a region from Turkey to the west and West India to the east. This area represents a complex geologic and tectonic setting with sparse seismic data coverage. This has led to diverging interpretations of crustal and underlying upper-mantle structure by different research groups, complicating seismic monitoring of the Middle East at regional distances. We evaluated an initial 3D seismic model of this region by computing full waveforms for several regional earthquakes based on a spectral-element method. We measure traveltime and multitaper phase differences between observed broadband data and synthetic seismograms for distinct seismic phases within selected time windows using a recently developed automated measurement algorithm. Based on the remaining misfits, we set up an iterative inversion procedure for a fully numerical 3D seismic tomography approach. In order to improve the initial 3D seismic model, the sensitivity to seismic structures of the traveltime and multitaper phase measurements for all available seismic network recordings is computed. As this represents a computationally very intensive task, we take advantage of a fully numerical adjoint approach by using the efficient software package SPECFEM3D_GLOBE on a dedicated cluster. We show examples of such sensitivity kernels for different seismic events. All these 'event kernels' are then summed, smoothed and further used in a preconditioned conjugate-gradient approach. Thus we iteratively update the 3D seismic model, starting at longer periods between 60 s and up to 150 s and moving towards shorter periods of 11 s. We highlight various improvements in the initial seismic structure during the iterations in order to better fit regional seismic waveforms in the Middle East.

  16. Double-difference adjoint seismic tomography

    NASA Astrophysics Data System (ADS)

    Yuan, Yanhua O.; Simons, Frederik J.; Tromp, Jeroen

    2016-06-01

    We introduce a `double-difference' method for the inversion for seismic wavespeed structure based on adjoint tomography. Differences between seismic observations and model predictions at individual stations may arise from factors other than structural heterogeneity, such as errors in the assumed source-time function, inaccurate timings, and systematic uncertainties. To alleviate the corresponding nonuniqueness in the inverse problem, we construct differential measurements between stations, thereby reducing the influence of the source signature and systematic errors. We minimize the discrepancy between observations and simulations in terms of the differential measurements made on station pairs. We show how to implement the double-difference concept in adjoint tomography, both theoretically and in practice. We compare the sensitivities of absolute and differential measurements. The former provide absolute information on structure along the ray paths between stations and sources, whereas the latter explain relative (and thus higher-resolution) structural variations in areas close to the stations. Whereas in conventional tomography a measurement made on a single earthquake-station pair provides very limited structural information, in double-difference tomography one earthquake can actually resolve significant details of the structure. The double-difference methodology can be incorporated into the usual adjoint tomography workflow by simply pairing up all conventional measurements; the computational cost of the necessary adjoint simulations is largely unaffected. Rather than adding to the computational burden, the inversion of double-difference measurements merely modifies the construction of the adjoint sources for data assimilation.
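
    The pairing step is simple to state in code. A minimal sketch (generic names, not the authors' implementation) that turns per-station absolute traveltime residuals into the double-difference measurements described above:

        from itertools import combinations

        def double_difference_residuals(obs, syn):
            """For each station pair (i, j), form dd_ij = (obs_i - obs_j) - (syn_i - syn_j).

            obs/syn map station names to measured and synthetic traveltimes; the
            differential residual cancels a common source-time or origin error.
            """
            dd = {}
            for i, j in combinations(sorted(obs), 2):
                dd[(i, j)] = (obs[i] - obs[j]) - (syn[i] - syn[j])
            return dd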

  17. Numerical solutions of the reaction diffusion system by using exponential cubic B-spline collocation algorithms

    NASA Astrophysics Data System (ADS)

    Ersoy, Ozlem; Dag, Idris

    2015-12-01

    The solutions of the reaction-diffusion system are obtained by a collocation method based on exponential B-splines. The reaction-diffusion system thus turns into an iterative banded algebraic matrix equation, whose solution is carried out by way of the Thomas algorithm. The present methods are tested on both linear and nonlinear problems. The results are documented and compared with some earlier studies by use of the L∞ and relative error norms for the respective problems.
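
    Since the collocation step reduces to a banded (tridiagonal) system, the Thomas algorithm mentioned above can be sketched in a few lines. This is the generic solver, assuming a simple tridiagonal structure rather than the exact collocation matrices of the paper:

        def thomas(a, b, c, d):
            """Solve a tridiagonal system: a = sub-diagonal (a[0] unused), b = diagonal,
            c = super-diagonal (c[-1] unused), d = right-hand side."""
            n = len(b)
            cp, dp = [0.0] * n, [0.0] * n
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                       # forward elimination
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = [0.0] * n
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):              # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x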

  18. Adjoint-based optimization for understanding and suppressing jet noise

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan B.

    2011-08-01

    Advanced simulation tools, particularly large-eddy simulation techniques, are becoming capable of making quality predictions of jet noise for realistic nozzle geometries and at engineering-relevant flow conditions. Increasing computer resources will be a key factor in improving these predictions still further. Quality prediction, however, is only a necessary condition for the use of such simulations in design optimization. Predictions do not themselves lead to quieter designs. They must be interpreted or harnessed in some way that leads to design improvements. As yet, such simulations have not yielded any simplifying principles that offer general design guidance. The turbulence mechanisms leading to jet noise remain poorly described in their complexity. In this light, we have implemented and demonstrated an aeroacoustic adjoint-based optimization technique that automatically calculates gradients that point the direction in which to adjust controls in order to improve designs. This is done with only a single flow solution and a solution of an adjoint system, which is solved at a computational cost comparable to that for the flow. Optimization requires iterations, but having the gradient information provided via the adjoint accelerates convergence in a manner that is insensitive to the number of parameters to be optimized. This paper, which follows from a presentation at the 2010 IUTAM Symposium on Computational Aero-Acoustics for Aircraft Noise Prediction, reviews recent and ongoing efforts by the author and co-workers. It provides a new formulation of the basic approach and demonstrates the approach on a series of model flows, culminating with a preliminary result for a turbulent jet.

  19. Adjoint Techniques for Topology Optimization of Structures Under Damage Conditions

    NASA Technical Reports Server (NTRS)

    Akgun, Mehmet A.; Haftka, Raphael T.

    2000-01-01

    The objective of this cooperative agreement was to seek computationally efficient ways to optimize aerospace structures subject to damage tolerance criteria. Optimization was to involve sizing as well as topology optimization. The work was done in collaboration with Steve Scotti, Chauncey Wu and Joanne Walsh at the NASA Langley Research Center. Computation of constraint sensitivity is normally the most time-consuming step of an optimization procedure. The cooperative work first focused on this issue and implemented the adjoint method of sensitivity computation (Haftka and Gurdal, 1992) in an optimization code (runstream) written in Engineering Analysis Language (EAL). The method was implemented both for bar and plate elements including buckling sensitivity for the latter. Lumping of constraints was investigated as a means to reduce the computational cost. Adjoint sensitivity computation was developed and implemented for lumped stress and buckling constraints. Cost of the direct method and the adjoint method was compared for various structures with and without lumping. The results were reported in two papers (Akgun et al., 1998a and 1999). It is desirable to optimize topology of an aerospace structure subject to a large number of damage scenarios so that a damage tolerant structure is obtained. Including damage scenarios in the design procedure is critical in order to avoid large mass penalties at later stages (Haftka et al., 1983). A common method for topology optimization is that of compliance minimization (Bendsoe, 1995) which has not been used for damage tolerant design. In the present work, topology optimization is treated as a conventional problem aiming to minimize the weight subject to stress constraints. Multiple damage configurations (scenarios) are considered. Each configuration has its own structural stiffness matrix and, normally, requires factoring of the matrix and solution of the system of equations. Damage that is expected to be tolerated is local

  20. Comparison of Ensemble and Adjoint Approaches to Variational Optimization of Observational Arrays

    NASA Astrophysics Data System (ADS)

    Nechaev, D.; Panteleev, G.; Yaremchuk, M.

    2015-12-01

    Comprehensive monitoring of the circulation in the Chukchi Sea and Bering Strait is one of the key prerequisites of the successful long-term forecast of the Arctic Ocean state. Since the number of continuously maintained observational platforms is restricted by logistical and political constraints, the configuration of such an observing system should be guided by an objective strategy that optimizes the observing system coverage, design, and the expenses of monitoring. The presented study addresses optimization of a system consisting of a limited number of observational platforms with respect to reduction of the uncertainties in monitoring the volume/freshwater/heat transports through a set of key sections in the Chukchi Sea and Bering Strait. Variational algorithms for optimization of observational arrays are verified in the test bed of the set of 4Dvar optimized summer-fall circulations in the Pacific sector of the Arctic Ocean. The results of an optimization approach based on a low-dimensional ensemble of model solutions are compared against a more conventional algorithm involving application of the tangent linear and adjoint models. Special attention is paid to the computational efficiency and portability of the optimization procedure.

  1. A general algorithm for the solution of Kepler's equation for elliptic orbits

    NASA Technical Reports Server (NTRS)

    Ng, E. W.

    1979-01-01

    An efficient algorithm is presented for the solution of Kepler's equation f(E)=E-M-e sin E=0, where e is the eccentricity, M the mean anomaly and E the eccentric anomaly. This algorithm is based on simple initial approximations that are cubics in M, and an iterative scheme that is a slight generalization of the Newton-Raphson method. Extensive testing of this algorithm has been performed on the UNIVAC 1108 computer. Solutions for 20,000 pairs of values of e and M show that for single precision, 42.0% of the cases require one iteration, 57.8% two and 0.2% three. For double precision one additional iteration is required.
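
    The iteration itself is compact. Below is a hedged sketch of a Newton-Raphson solver for f(E) = E - e sin E - M = 0; the starting value used here is a common simple choice, not the cubic-in-M approximation of the paper.

        import math

        def solve_kepler(M, e, tol=1e-14, max_iter=10):
            """Newton-Raphson iteration for Kepler's equation E - e*sin(E) = M (elliptic case)."""
            E = M + e * math.sin(M)              # crude starter; the paper uses cubics in M
            for _ in range(max_iter):
                f = E - e * math.sin(E) - M
                fp = 1.0 - e * math.cos(E)       # f'(E), never zero for e < 1
                dE = -f / fp
                E += dE
                if abs(dE) < tol:
                    break
            return E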

  2. A generalized front marching algorithm for the solution of the eikonal equation

    NASA Astrophysics Data System (ADS)

    Covello, Paul; Rodrigue, Garry

    2003-07-01

    A new front marching algorithm for solving the eikonal equation is presented. An important property of the algorithm is that it can be used on nodes that are located on highly distorted grids or on nodes that are randomly located. When the nodes are located on an orthogonal grid, the method is first-order accurate and is shown to be a generalization of the front marching algorithm in (Proc. Natl. Acad. Sci. 93 (4) (1996) 1591). The accuracy of the method is also shown to be dependent on the principal curvature of the wave front solution. Numerical results on a variety of node configurations as well as on shadow, nonconvex and nondifferentiable solutions are presented.

  3. ADGEN: ADjoint GENerator for computer models

    SciTech Connect

    Worley, B.A.; Pin, F.G.; Horwedel, J.E.; Oblow, E.M.

    1989-05-01

    This paper presents the development of a FORTRAN compiler and an associated supporting software library called ADGEN. ADGEN reads FORTRAN models as input and produces an enhanced version of the input model. The enhanced version reproduces the original model calculations but also has the capability to calculate derivatives of model results of interest with respect to any and all of the model data and input parameters. The method for calculating the derivatives and sensitivities is the adjoint method. Partial derivatives are calculated analytically using computer calculus and saved as elements of an adjoint matrix on direct access storage. The total derivatives are calculated by solving an appropriate adjoint equation. ADGEN is applied to a major computer model of interest to the Low-Level Waste Community, the PRESTO-II model. PRESTO-II sample problem results reveal that ADGEN correctly calculates derivatives of responses of interest with respect to 300 parameters. The execution time to create the adjoint matrix is a factor of 45 times the execution time of the reference sample problem. Once this matrix is determined, the derivatives with respect to 3000 parameters are calculated in a factor of 6.8 that of the reference model for each response of interest. For a single response, this compares with a factor of approximately 3000 for determining these derivatives by parameter perturbations. The automation of the implementation of the adjoint technique for calculating derivatives and sensitivities eliminates the costly and manpower-intensive task of direct hand-implementation by reprogramming and thus makes the powerful adjoint technique more amenable for use in sensitivity analysis of existing models. 20 refs., 1 fig., 5 tabs.

  4. FAST TRACK COMMUNICATION Quasi self-adjoint nonlinear wave equations

    NASA Astrophysics Data System (ADS)

    Ibragimov, N. H.; Torrisi, M.; Tracinà, R.

    2010-11-01

    In this paper we generalize the classification of self-adjoint second-order linear partial differential equations to a family of nonlinear wave equations with two independent variables. We find a class of quasi self-adjoint nonlinear equations which includes the self-adjoint linear equations as a particular case. The property of a differential equation to be quasi self-adjoint is important, e.g. for constructing conservation laws associated with symmetries of the differential equation.
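
    For orientation, a sketch of the property in Ibragimov's sense, in generic notation not taken from this abstract: given an equation F = 0 with formal Lagrangian L = v F, the adjoint equation follows from a variational derivative, and quasi self-adjointness asks for a substitution v = φ(u) that maps the adjoint equation back onto the original one.

        \[
          F^{*}(x, u, v, \ldots) \;\equiv\; \frac{\delta\,(v\,F)}{\delta u} = 0,
          \qquad
          F^{*}\big|_{v=\varphi(u)} = \lambda\, F
          \quad \text{for some } \varphi(u) \not\equiv 0 .
        \]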

  5. Adjoint-Based Uncertainty Quantification with MCNP

    SciTech Connect

    Seifried, Jeffrey E.

    2011-09-01

    This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.

  6. Optimal Solutions of Multiproduct Batch Chemical Process Using Multiobjective Genetic Algorithm with Expert Decision System

    PubMed Central

    Mokeddem, Diab; Khellaf, Abdelhafid

    2009-01-01

    Optimal design problems are widely known for their multiple performance measures, which are often competing with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II has the capability to achieve fine tuning of variables in determining a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. The NSGA-II ability to identify a set of optimal solutions provides the decision-maker (DM) with a complete picture of the optimal solution space from which to make better and more appropriate choices. Then an outranking with PROMETHEE II helps the decision-maker to finalize the selection of the best compromise. The effectiveness of the NSGA-II method on multiobjective optimization problems is illustrated through two carefully referenced examples. PMID:19543537

  7. A structured multi-block solution-adaptive mesh algorithm with mesh quality assessment

    NASA Technical Reports Server (NTRS)

    Ingram, Clint L.; Laflin, Kelly R.; Mcrae, D. Scott

    1995-01-01

    The dynamic solution adaptive grid algorithm, DSAGA3D, is extended to automatically adapt 2-D structured multi-block grids, including adaptation of the block boundaries. The extension is general, requiring only input data concerning block structure, connectivity, and boundary conditions. Imbedded grid singular points are permitted, but must be prevented from moving in space. Solutions for workshop cases 1 and 2 are obtained on multi-block grids and illustrate both increased resolution of and alignment with the solution. A mesh quality assessment criterion is proposed to determine how well a given mesh resolves and aligns with the solution obtained upon it. The criterion is used to evaluate the grid quality for solutions of workshop case 6 obtained on both static and dynamically adapted grids. The results indicate that this criterion shows promise as a means of evaluating resolution.

  8. A deterministic annealing algorithm for approximating a solution of the min-bisection problem.

    PubMed

    Dang, Chuangyin; Ma, Wei; Liang, Jiye

    2009-01-01

    The min-bisection problem is an NP-hard combinatorial optimization problem. In this paper an equivalent linearly constrained continuous optimization problem is formulated and an algorithm is proposed for approximating its solution. The algorithm is derived from the introduction of a logarithmic-cosine barrier function, where the barrier parameter behaves as temperature in an annealing procedure and decreases from a sufficiently large positive number to zero. The algorithm searches for a better solution in a feasible descent direction, which has a desired property that lower and upper bounds are always satisfied automatically if the step length is a number between zero and one. We prove that the algorithm converges to at least a local minimum point of the problem if a local minimum point of the barrier problem is generated for a sequence of descending values of the barrier parameter with a limit of zero. Numerical results show that the algorithm is much more efficient than two of the best existing heuristic methods for the min-bisection problem, Kernighan-Lin method with multiple starting points (MSKL) and multilevel graph partitioning scheme (MLGP). PMID:18995985

  9. The finite state projection algorithm for the solution of the chemical master equation.

    PubMed

    Munsky, Brian; Khammash, Mustafa

    2006-01-28

    This article introduces the finite state projection (FSP) method for use in the stochastic analysis of chemically reacting systems. One can describe the chemical populations of such systems with probability density vectors that evolve according to a set of linear ordinary differential equations known as the chemical master equation (CME). Unlike Monte Carlo methods such as the stochastic simulation algorithm (SSA) or tau leaping, the FSP directly solves or approximates the solution of the CME. If the CME describes a system that has a finite number of distinct population vectors, the FSP method provides an exact analytical solution. When an infinite or extremely large number of population variations is possible, the state space can be truncated, and the FSP method provides a certificate of accuracy for how closely the truncated space approximation matches the true solution. The proposed FSP algorithm systematically increases the projection space in order to meet a prespecified tolerance on the total probability density error. For any system in which a sufficiently accurate FSP exists, the FSP algorithm is shown to converge in a finite number of steps. The FSP is utilized to solve two examples taken from the field of systems biology, and comparisons are made between the FSP, the SSA, and tau leaping algorithms. In both examples, the FSP outperforms the SSA in terms of accuracy as well as computational efficiency. Furthermore, due to very small molecular counts in these particular examples, the FSP also performs far more effectively than tau leaping methods. PMID:16460146
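
    A minimal finite state projection sketch for a one-species birth-death process (a stand-in for a general reaction network; the names and the simple projection-doubling strategy are illustrative, not the paper's). The retained probability mass supplies the certificate of accuracy: if it is at least 1 - tol, the truncation error is bounded by tol.

        import numpy as np
        from scipy.linalg import expm

        def fsp_birth_death(k_birth, k_death, n_max, x0, t, tol):
            """FSP solution of the CME for births at rate k_birth and deaths at rate x*k_death."""
            while True:
                n = n_max + 1
                A = np.zeros((n, n))                 # truncated CME generator
                for x in range(n):
                    A[x, x] -= k_birth               # outflow (may leave the projection)
                    if x + 1 < n:
                        A[x + 1, x] += k_birth       # birth x -> x + 1
                    if x > 0:
                        A[x - 1, x] += x * k_death   # death x -> x - 1
                        A[x, x] -= x * k_death
                p0 = np.zeros(n)
                p0[x0] = 1.0
                p = expm(A * t) @ p0                 # probability kept inside the projection
                if p.sum() >= 1.0 - tol:             # FSP certificate of accuracy
                    return p
                n_max *= 2                           # enlarge the projection and retry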

  10. Adjoint optimization of natural convection problems: differentially heated cavity

    NASA Astrophysics Data System (ADS)

    Saglietti, Clio; Schlatter, Philipp; Monokrousos, Antonios; Henningson, Dan S.

    2016-06-01

    Optimization of natural convection-driven flows may provide significant improvements to the performance of cooling devices, but a theoretical investigation of such flows has rarely been done. The present paper illustrates an efficient gradient-based optimization method for analyzing such systems. We consider numerically the natural convection-driven flow in a differentially heated cavity with three Prandtl numbers (Pr = 0.15-7) at super-critical conditions. All results and implementations were done with the spectral element code Nek5000. The flow is analyzed using linear direct and adjoint computations about a nonlinear base flow, extracting in particular optimal initial conditions using power iteration and the solution of the full adjoint direct eigenproblem. The cost function for both temperature and velocity is based on the kinetic energy and the concept of entransy, which yields a quadratic functional. Results are presented as a function of Prandtl number, time horizons and weights between kinetic energy and entransy. In particular, it is shown that the maximum transient growth is achieved at time horizons on the order of 5 time units for all cases, whereas for larger time horizons the adjoint mode is recovered as optimal initial condition. For smaller time horizons, the influence of the weights leads either to a concentric temperature distribution or to an initial condition pattern that opposes the mean shear and grows according to the Orr mechanism. For specific cases, it could also be shown that the computation of optimal initial conditions leads to a degenerate problem, with a potential loss of symmetry. In these situations, it turns out that any initial condition lying in a specific span of the eigenfunctions will yield exactly the same transient amplification. As a consequence, the power iteration converges very slowly and fails to extract all possible optimal initial conditions. According to the authors' knowledge, this behavior is illustrated here

  11. An algorithm for constructing polynomial systems whose solution space characterizes quantum circuits

    NASA Astrophysics Data System (ADS)

    Gerdt, Vladimir P.; Severyanov, Vasily M.

    2006-05-01

    An algorithm and its first implementation in C# are presented for assembling arbitrary quantum circuits on the basis of Hadamard and Toffoli gates and for constructing multivariate polynomial systems over the finite field Z_2 arising when applying Feynman's sum-over-paths approach to quantum circuits. The matrix elements determined by a circuit can be computed by counting the number of common roots in Z_2 for the polynomial system associated with the circuit. To determine the number of solutions in Z_2 for the output polynomial system, one can use the Gröbner bases method and the relevant algorithms for computing Gröbner bases.
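
    For very small circuits the root count over Z_2 can be checked by brute force, which makes the connection between the polynomial system and the matrix element concrete. A sketch with generic callables (exhaustive enumeration, not the Gröbner-basis route of the paper):

        from itertools import product

        def count_common_roots_gf2(polys, n_vars):
            """Count assignments in {0,1}^n_vars at which every polynomial vanishes mod 2."""
            count = 0
            for assignment in product((0, 1), repeat=n_vars):
                if all(p(*assignment) % 2 == 0 for p in polys):
                    count += 1
            return count

        # example system over Z_2: x + y = 0 and x*y + x = 0
        polys = [lambda x, y: x + y, lambda x, y: x * y + x]
        print(count_common_roots_gf2(polys, 2))   # 2 common roots: (0, 0) and (1, 1)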

  12. Solution algorithms for non-linear singularly perturbed optimal control problems

    NASA Technical Reports Server (NTRS)

    Ardema, M. D.

    1983-01-01

    The applicability and usefulness of several classical and other methods for solving the two-point boundary-value problem which arises in non-linear singularly perturbed optimal control are assessed. Specific algorithms of the Picard, Newton and averaging types are formally developed for this class of problem. The computational requirements associated with each algorithm are analysed and compared with the computational requirement of the method of matched asymptotic expansions. Approximate solutions to a linear and a non-linear problem are obtained by each method and compared.

  13. Stochastic coalescence in finite systems: an algorithm for the numerical solution of the multivariate master equation.

    NASA Astrophysics Data System (ADS)

    Alfonso, Lester; Zamora, Jose; Cruz, Pedro

    2015-04-01

    The stochastic approach to coagulation considers the coalescence process going on in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results were obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial conditions is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with an excellent correspondence between the analytical and numerical solutions. In order to increase the speedup of the algorithm, software parallelization techniques with the OpenMP standard were used, along with an implementation that takes advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.

  14. Adjoint Sensitivity Analysis of Orbital Mechanics: Application to Computations of Observables' Partials with Respect to Harmonics of the Planetary Gravity Fields

    NASA Technical Reports Server (NTRS)

    Ustinov, Eugene A.; Sunseri, Richard F.

    2005-01-01

    An approach is presented to the inversion of gravity fields based on evaluation of partials of observables with respect to gravity harmonics using the solution of the adjoint problem of orbital dynamics of the spacecraft. The corresponding adjoint operator is derived directly from the linear operator of the linearized forward problem of orbital dynamics. The resulting adjoint problem is similar to the forward problem and can be solved by the same methods. For a given highest degree N of gravity harmonics desired, this method involves integration of N adjoint solutions as compared to integration of N² partials of the forward solution with respect to gravity harmonics in the conventional approach. Thus, for higher resolution gravity models, this approach becomes increasingly more effective in terms of computer resources as compared to the approach based on the solution of the forward problem of orbital dynamics.

  15. Parametric effects of CFL number and artificial smoothing on numerical solutions using implicit approximate factorization algorithm

    NASA Technical Reports Server (NTRS)

    Daso, E. O.

    1986-01-01

    An implicit approximate factorization algorithm is employed to quantify the parametric effects of Courant number and artificial smoothing on numerical solutions of the unsteady 3-D Euler equations for a windmilling propeller (low speed) flow field. The results show that propeller global or performance characteristics vary strongly with Courant number and artificial dissipation parameters, though the variation is much less severe at high Courant numbers. Candidate sets of Courant number and dissipation parameters could result in parameter-dependent solutions. Parameter-independent numerical solutions can be obtained if low values of the dissipation parameter-time step ratio are used in the computations. Furthermore, it is realized that too much artificial damping can degrade numerical stability. Finally, it is demonstrated that highly resolved meshes may, in some cases, delay convergence, thereby suggesting some optimum cell size for a given flow solution. It is suspected that improper boundary treatment may account for the cell size constraint.

  16. Investigation of ALEGRA shock hydrocode algorithms using an exact free surface jet flow solution.

    SciTech Connect

    Hanks, Bradley Wright.; Robinson, Allen Conrad

    2014-01-01

    Computational testing of the arbitrary Lagrangian-Eulerian shock physics code, ALEGRA, is presented using an exact solution that is very similar to a shaped charge jet flow. The solution is a steady, isentropic, subsonic free surface flow with significant compression and release and is provided as a steady state initial condition. There should be no shocks and no entropy production throughout the problem. The purpose of this test problem is to present a detailed and challenging computation in order to provide evidence for algorithmic strengths and weaknesses in ALEGRA which should be examined further. The results of this work are intended to be used to guide future algorithmic improvements in the spirit of test-driven development processes.

  17. Dynamics analysis of electrodynamic satellite tethers. Equations of motion and numerical solution algorithms for the tether

    NASA Technical Reports Server (NTRS)

    Nacozy, P. E.

    1984-01-01

    The equations of motion are developed for a perfectly flexible, inelastic tether with a satellite at its extremity. The tether is attached to a space vehicle in orbit. The tether is allowed to possess electrical conductivity. A numerical solution algorithm to provide the motion of the tether and satellite system is presented. The resulting differential equations can be solved by various existing standard numerical integration computer programs. The resulting differential equations allow the introduction of approximations that can lead to analytical, approximate general solutions. The differential equations allow more dynamical insight into the motion.

  18. The astrometric core solution for the Gaia mission. Overview of models, algorithms, and software implementation

    NASA Astrophysics Data System (ADS)

    Lindegren, L.; Lammers, U.; Hobbs, D.; O'Mullane, W.; Bastian, U.; Hernández, J.

    2012-02-01

    Context. The Gaia satellite will observe about one billion stars and other point-like sources. The astrometric core solution will determine the astrometric parameters (position, parallax, and proper motion) for a subset of these sources, using a global solution approach which must also include a large number of parameters for the satellite attitude and optical instrument. The accurate and efficient implementation of this solution is an extremely demanding task, but crucial for the outcome of the mission. Aims: We aim to provide a comprehensive overview of the mathematical and physical models applicable to this solution, as well as its numerical and algorithmic framework. Methods: The astrometric core solution is a simultaneous least-squares estimation of about half a billion parameters, including the astrometric parameters for some 100 million well-behaved so-called primary sources. The global nature of the solution requires an iterative approach, which can be broken down into a small number of distinct processing blocks (source, attitude, calibration and global updating) and auxiliary processes (including the frame rotator and selection of primary sources). We describe each of these processes in some detail, formulate the underlying models, from which the observation equations are derived, and outline the adopted numerical solution methods with due consideration of robustness and the structure of the resulting system of equations. Appendices provide brief introductions to some important mathematical tools (quaternions and B-splines for the attitude representation, and a modified Cholesky algorithm for positive semidefinite problems) and discuss some complications expected in the real mission data. Results: A complete software system called AGIS (Astrometric Global Iterative Solution) is being built according to the methods described in the paper. Based on simulated data for 2 million primary sources we present some initial results, demonstrating the basic

  19. Parallelization of an Adaptive Multigrid Algorithm for Fast Solution of Finite Element Structural Problems

    SciTech Connect

    Crane, N K; Parsons, I D; Hjelmstad, K D

    2002-03-21

    Adaptive mesh refinement selectively subdivides the elements of a coarse user supplied mesh to produce a fine mesh with reduced discretization error. Effective use of adaptive mesh refinement coupled with an a posteriori error estimator can produce a mesh that solves a problem to a given discretization error using far fewer elements than uniform refinement. A geometric multigrid solver uses increasingly finer discretizations of the same geometry to produce a very fast and numerically scalable solution to a set of linear equations. Adaptive mesh refinement is a natural method for creating the different meshes required by the multigrid solver. This paper describes the implementation of a scalable adaptive multigrid method on a distributed memory parallel computer. Results are presented that demonstrate the parallel performance of the methodology by solving a linear elastic rocket fuel deformation problem on an SGI Origin 3000. Two challenges must be met when implementing adaptive multigrid algorithms on massively parallel computing platforms. First, although the fine mesh for which the solution is desired may be large and scaled to the number of processors, the multigrid algorithm must also operate on much smaller fixed-size data sets on the coarse levels. Second, the mesh must be repartitioned as it is adapted to maintain good load balancing. In an adaptive multigrid algorithm, separate mesh levels may require separate partitioning, further complicating the load balance problem. This paper shows that, when the proper optimizations are made, parallel adaptive multigrid algorithms perform well on machines with several hundreds of processors.

  20. A join algorithm for combining AND parallel solutions in AND/OR parallel systems

    SciTech Connect

    Ramkumar, B. ); Kale, L.V. )

    1992-02-01

    When two or more literals in the body of a Prolog clause are solved in (AND) parallel, their solutions need to be joined to compute solutions for the clause. This is often a difficult problem in parallel Prolog systems that exploit OR and independent AND parallelism in Prolog programs. In several AND/OR parallel systems proposed recently, this problem is side-stepped at the cost of unexploited OR parallelism in the program, in part due to the complexity of the backtracking algorithm beneath AND parallel branches. In some cases, the data dependency graphs used by these systems cannot represent all the exploitable independent AND parallelism known at compile time. In this paper, we describe the compile time analysis for an optimized join algorithm for supporting independent AND parallelism in logic programs efficiently without leaving any OR parallelism unexploited. We then discuss how this analysis can be used to yield very efficient runtime behavior. We also discuss problems associated with a tree representation of the search space when arbitrarily complex data dependency graphs are permitted. We describe how these problems can be resolved by mapping the search space onto data dependency graphs themselves. The algorithm has been implemented in a compiler for parallel Prolog based on the reduce-OR process model. The algorithm is suitable for the implementation of AND/OR systems on both shared and nonshared memory machines. Performance results on benchmark programs are presented.

  1. Implementation of a block Lanczos algorithm for Eigenproblem solution of gyroscopic systems

    NASA Technical Reports Server (NTRS)

    Gupta, Kajal K.; Lawson, Charles L.

    1987-01-01

    The details of implementation of a general numerical procedure developed for the accurate and economical computation of natural frequencies and associated modes of any elastic structure rotating along an arbitrary axis are described. A block version of the Lanczos algorithm is derived for the solution that fully exploits associated matrix sparsity and employs only real numbers in all relevant computations. It is also capable of determining multiple roots and proves to be most efficient when compared to other, similar, existing techniques.

  2. Two new exact solutions for relativistic perfect fluid spheres through Lake's algorithm

    NASA Astrophysics Data System (ADS)

    Maurya, S. K.; Gupta, Y. K.; Jasim, M. K.

    2015-02-01

    Two new exact solutions of Einstein's field equations for a perfect fluid distribution are obtained using Lake's (Phys. Rev. D 67:104015, 2003) algorithm. These are utilized to construct stellar models of physical relevance, possessing maximum masses of 2.6956 M⊙ (quark star) and 0.9643 M⊙ (white dwarf) with corresponding radii of 20.5489 km and 3.1699 km, respectively.

  3. Coupling of Monte Carlo adjoint leakages with three-dimensional discrete ordinates forward fluences

    SciTech Connect

    Slater, C.O.; Lillie, R.A.; Johnson, J.O.; Simpson, D.B.

    1998-04-01

    A computer code, DRC3, has been developed for coupling Monte Carlo adjoint leakages with three-dimensional discrete ordinates forward fluences in order to solve a special category of geometrically-complex deep penetration shielding problems. The code extends the capabilities of earlier methods that coupled Monte Carlo adjoint leakages with two-dimensional discrete ordinates forward fluences. The problems involve the calculation of fluences and responses in a perturbation to an otherwise simple two- or three-dimensional radiation field. In general, the perturbation complicates the geometry such that it cannot be modeled exactly using any of the discrete ordinates geometry options and thus a direct discrete ordinates solution is not possible. Also, the calculation of radiation transport from the source to the perturbation involves deep penetration. One approach to solving such problems is to perform the calculations in three steps: (1) a forward discrete ordinates calculation, (2) a localized adjoint Monte Carlo calculation, and (3) a coupling of forward fluences from the first calculation with adjoint leakages from the second calculation to obtain the response of interest (fluence, dose, etc.). A description of this approach is presented along with results from test problems used to verify the method. The test problems that were selected could also be solved directly by the discrete ordinates method. The good agreement between the DRC3 results and the direct-solution results verify the correctness of DRC3.

  4. Development of CO2 inversion system based on the adjoint of the global coupled transport model

    NASA Astrophysics Data System (ADS)

    Belikov, Dmitry; Maksyutov, Shamil; Chevallier, Frederic; Kaminski, Thomas; Ganshin, Alexander; Blessing, Simon

    2014-05-01

    We present the development of an inverse modeling system employing an adjoint of the global coupled transport model consisting of the National Institute for Environmental Studies (NIES) Eulerian transport model (TM) and the Lagrangian plume diffusion model (LPDM) FLEXPART. NIES TM is a three-dimensional atmospheric transport model, which solves the continuity equation for a number of atmospheric tracers on a grid spanning the entire globe. Spatial discretization is based on a reduced latitude-longitude grid and a hybrid sigma-isentropic coordinate in the vertical. NIES TM uses a horizontal resolution of 2.5°×2.5°. However, to resolve synoptic-scale tracer distributions and to have the ability to optimize fluxes at resolutions of 0.5° and higher we coupled NIES TM with the Lagrangian model FLEXPART. The Lagrangian component of the forward and adjoint models uses precalculated responses of the observed concentration to the surface fluxes and the 3-D concentration field simulated with the FLEXPART model. NIES TM and FLEXPART are driven by the JRA-25/JCDAS reanalysis dataset. Construction of the adjoint of the Lagrangian part is less complicated, as LPDMs calculate the sensitivity of measurements to the surrounding emissions field by tracking a large number of "particles" backwards in time. Development of the adjoint of the Eulerian part was performed with the automatic differentiation tool Transformation of Algorithms in Fortran (TAF) (http://www.FastOpt.com). This method leads to the discrete adjoint of NIES TM. The main advantage of the discrete adjoint is that the resulting gradients of the numerical cost function are exact, even for nonlinear algorithms. The overall advantages of our method are that: 1. No code modification of the Lagrangian model is required, making it applicable to a combination of the global NIES TM and any Lagrangian model; 2. Once run, the Lagrangian output can be applied to any chemically neutral gas; 3. High-resolution results can be obtained over

  5. Airfoil design using a coupled euler and integral boundary layer method with adjoint based sensitivities

    NASA Astrophysics Data System (ADS)

    Edwards, S.; Reuther, J.; Chattot, J. J.

    The objective of this paper is to present a control theory approach for the design of airfoils in the presence of viscous compressible flows. A coupled system of the integral boundary layer and the Euler equations is solved to provide rapid flow simulations. An adjoint approach consistent with the complete coupled state equations is employed to obtain the sensitivities needed to drive a numerical optimization algorithm. Design to a target pressure distribution is demonstrated on an RAE 2822 airfoil at transonic speeds.

  6. Application of Harmony Search algorithm to the solution of groundwater management models

    NASA Astrophysics Data System (ADS)

    Tamer Ayvaz, M.

    2009-06-01

    This study proposes a groundwater resources management model in which the solution is performed through a combined simulation-optimization model. A modular three-dimensional finite difference groundwater flow model, MODFLOW, is used as the simulation model. This model is then combined with a Harmony Search (HS) optimization algorithm which is based on the musical process of searching for a perfect state of harmony. The performance of the proposed HS based management model is tested on three separate groundwater management problems: (i) maximization of total pumping from an aquifer (steady-state); (ii) minimization of the total pumping cost to satisfy the given demand (steady-state); and (iii) minimization of the pumping cost to satisfy the given demand for multiple management periods (transient). The sensitivity of the HS algorithm is evaluated by performing a sensitivity analysis which aims to determine the impact of related solution parameters on convergence behavior. The results show that HS yields nearly the same or better solutions than previous solution methods and may be used to solve management problems in groundwater modeling.
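
    The HS loop itself is short. Below is a generic minimization sketch with the usual HS parameters (harmony memory size hms, memory-considering rate hmcr, pitch-adjusting rate par, bandwidth bw); the coupling to MODFLOW used in the paper would live inside the objective function and is not shown here.

        import random

        def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=5000, seed=0):
            """Minimize 'objective' over the box 'bounds' with a basic Harmony Search."""
            rng = random.Random(seed)
            memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
            costs = [objective(h) for h in memory]
            for _ in range(iters):
                new = []
                for j, (lo, hi) in enumerate(bounds):
                    if rng.random() < hmcr:                      # memory consideration
                        value = memory[rng.randrange(hms)][j]
                        if rng.random() < par:                   # pitch adjustment
                            value += bw * (hi - lo) * rng.uniform(-1, 1)
                    else:                                        # random selection
                        value = rng.uniform(lo, hi)
                    new.append(min(max(value, lo), hi))
                cost = objective(new)
                worst = max(range(hms), key=costs.__getitem__)
                if cost < costs[worst]:                          # replace the worst harmony
                    memory[worst], costs[worst] = new, cost
            best = min(range(hms), key=costs.__getitem__)
            return memory[best], costs[best]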

  7. The solution of the Elrod algorithm for a dynamically loaded journal bearing using multigrid techniques

    NASA Technical Reports Server (NTRS)

    Woods, Claudia M.; Brewe, David E.

    1988-01-01

    A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed utilizing a multigrid iteration technique. The method is compared with a noniterative approach in terms of computational time and accuracy. The computational model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via a smeared mass or striated flow extending to both surfaces in the film gap. The mixed nature of the equations (parabolic in the full film zone and hyperbolic in the cavitated zone) coupled with the dynamic aspects of the problem create interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.

  8. A new algorithm of ionospheric tomography——two-step solution

    NASA Astrophysics Data System (ADS)

    Wen, Debao

    The inherently non-ideal geometry of the ground-based global navigation satellite system (GNSS) observation station distribution results in limited-angle tomographic inverse problems that are ill-posed. To cope with this problem, a new tomographic algorithm, called the two-step solution (TSS), is presented in this paper. In the new method, the electron density is estimated in two steps: 1) the Phillips smoothing method (PSM) is first used to resolve the ill-posedness of the ionospheric tomography system; 2) the "coarse" solution of PSM is then input as the initial value of the multiplicative algebraic reconstruction technique (MART) and improved iteratively. A numerical simulation experiment demonstrates that the two-step solution is feasible for GNSS-based ionospheric tomography and superior to PSM or MART alone.
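
    A sketch of the MART update used in the second step, with generic names: a non-negative ray-geometry matrix A, slant observations y, and an initial guess x0 (the role played by the PSM solution in the two-step scheme). Normalization of the exponent varies between MART formulations; one common damped choice is shown here as an assumption.

        import numpy as np

        def mart(A, y, x0, n_iter=50, relax=1.0):
            """Multiplicative ART: x stays positive while each ray pulls A @ x toward y."""
            x = np.array(x0, dtype=float)
            for _ in range(n_iter):
                for i in range(A.shape[0]):                  # loop over rays
                    proj = A[i] @ x
                    amax = A[i].max()
                    if proj <= 0.0 or amax <= 0.0:
                        continue
                    # damped multiplicative update (one common normalization convention)
                    x *= (y[i] / proj) ** (relax * A[i] / amax)
            return x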

  9. The solution of the Elrod algorithm for a dynamically loaded journal bearing using multigrid techniques

    NASA Technical Reports Server (NTRS)

    Woods, C. M.; Brewe, D. E.

    1989-01-01

    A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed utilizing a multigrid iteration technique. The method is compared with a noniterative approach in terms of computational time and accuracy. The computational model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via a smeared mass or striated flow extending to both surfaces in the film gap. The mixed nature of the equations (parabolic in the full film zone and hyperbolic in the cavitated zone) coupled with the dynamic aspects of the problem create interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.

  10. Wing planform optimization via an adjoint method

    NASA Astrophysics Data System (ADS)

    Leoviriyakit, Kasidit

    This dissertation focuses on the problem of wing planform optimization for transonic aircraft based on flow simulation using Computational Fluid Dynamics (CFD) combined with an adjoint-gradient based numerical optimization procedure. The adjoint method, traditionally used for wing section design has been extended to cover planform variations and to compute the sensitivities of the structural weight of both the wing section and planform variations. The two relevant disciplines accounted for are the aerodynamics and structural weight. A simplified structural weight model is used for the optimization. Results of a variety of long range transports indicate that significant improvement in both aerodynamics and structures can be achieved simultaneously. The proof-of-concept optimal results indicate large improvements for both drag and structural weight. The work is an "enabling step" towards a realistic automated wing designed by a computer.

  11. A Proposed Implementation of Tarjan's Algorithm for Scheduling the Solution Sequence of Systems of Federated Models

    SciTech Connect

    McNunn, Gabriel S; Bryden, Kenneth M

    2013-01-01

    Tarjan's algorithm schedules the solution of systems of equations by noting the coupling and grouping between the equations. Simulating complex systems, e.g., advanced power plants, aerodynamic systems, or the multi-scale design of components, requires the linkage of large groups of coupled models. Currently, this is handled manually in systems modeling packages. That is, the analyst explicitly defines both the method and solution sequence necessary to couple the models. In small systems of models and equations this works well. However, as additional detail is needed across systems and across scales, the number of models grows rapidly. This precludes the manual assembly of large systems of federated models, particularly in systems composed of high fidelity models. This paper examines extending Tarjan's algorithm from sets of equations to sets of models. The proposed implementation of the algorithm is demonstrated using a small one-dimensional system of federated models representing the heat transfer and thermal stress in a gas turbine blade with thermal barrier coating. Enabling the rapid assembly and substitution of different models permits the rapid turnaround needed to support the “what-if” kinds of questions that arise in engineering design.
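
    A sketch of the classical Tarjan strongly-connected-components pass that underlies such a scheduler; the model names in the example are illustrative. Each component is a group of mutually coupled models that must be iterated together, and components are emitted in reverse topological order, which is already a valid solution sequence.

        def tarjan_scc(graph):
            """graph maps a model to the models it depends on; returns SCCs in solve order."""
            index, lowlink, on_stack = {}, {}, set()
            stack, components, counter = [], [], [0]

            def visit(v):
                index[v] = lowlink[v] = counter[0]
                counter[0] += 1
                stack.append(v)
                on_stack.add(v)
                for w in graph.get(v, ()):
                    if w not in index:
                        visit(w)
                        lowlink[v] = min(lowlink[v], lowlink[w])
                    elif w in on_stack:
                        lowlink[v] = min(lowlink[v], index[w])
                if lowlink[v] == index[v]:          # v roots a strongly connected component
                    comp = []
                    while True:
                        w = stack.pop()
                        on_stack.discard(w)
                        comp.append(w)
                        if w == v:
                            break
                    components.append(comp)

            for v in graph:
                if v not in index:
                    visit(v)
            return components

        # heat transfer and thermal stress coupled both ways, coating feeds in one way
        models = {"heat": ["stress", "coating"], "stress": ["heat"], "coating": []}
        print(tarjan_scc(models))   # [['coating'], ['stress', 'heat']] -> solve coating, then the coupled pair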

  12. A conjugate gradient algorithm for the astrometric core solution of Gaia

    NASA Astrophysics Data System (ADS)

    Bombrun, A.; Lindegren, L.; Hobbs, D.; Holl, B.; Lammers, U.; Bastian, U.

    2012-02-01

    Context. The ESA space astrometry mission Gaia, planned to be launched in 2013, has been designed to make angular measurements on a global scale with micro-arcsecond accuracy. A key component of the data processing for Gaia is the astrometric core solution, which must implement an efficient and accurate numerical algorithm to solve the resulting, extremely large least-squares problem. The Astrometric Global Iterative Solution (AGIS) is a framework that allows one to implement a range of different iterative solution schemes suitable for a scanning astrometric satellite. Aims: Our aim is to find a computationally efficient and numerically accurate iteration scheme for the astrometric solution, compatible with the AGIS framework, and a convergence criterion for deciding when to stop the iterations. Methods: We study an adaptation of the classical conjugate gradient (CG) algorithm, and compare it to the so-called simple iteration (SI) scheme that was previously known to converge for this problem, although very slowly. The different schemes are implemented within a software test bed for AGIS known as AGISLab. This allows one to define, simulate and study scaled astrometric core solutions with a much smaller number of unknowns than in AGIS, and therefore to perform a large number of numerical experiments in a reasonable time. After successful testing in AGISLab, the CG scheme has been implemented also in AGIS. Results: The two algorithms CG and SI eventually converge to identical solutions, to within the numerical noise (of the order of 0.00001 micro-arcsec). These solutions are moreover independent of the starting values (initial star catalogue), and we conclude that they are equivalent to a rigorous least-squares estimation of the astrometric parameters. The CG scheme converges up to a factor four faster than SI in the tested cases, and in particular spatially correlated truncation errors are much more efficiently damped out with the CG scheme. While it appears to be difficult
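
    For reference, the plain (unpreconditioned) conjugate-gradient iteration that such a scheme adapts, written matrix-free: the product A·x is supplied as a callable, and how that product would actually be evaluated inside AGIS is beyond this sketch.

        import numpy as np

        def conjugate_gradient(apply_A, b, x0, n_iter=100, tol=1e-12):
            """CG for A x = b with A symmetric positive (semi-)definite, given as a mat-vec."""
            x = np.array(x0, dtype=float)
            r = b - apply_A(x)                 # residual
            p = r.copy()                       # search direction
            rs = r @ r
            for _ in range(n_iter):
                Ap = apply_A(p)
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p      # make the new direction A-conjugate
                rs = rs_new
            return x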

  13. Multiple Moment Tensor Inversions For the December 26, 2004 Sumatra Earthquake Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Ren, L.; Liu, Q.

    2012-12-01

    We present a multiple moment-tensor solution of the December 26, 2004 Sumatra earthquake based upon adjoint methods. An objective function Φ that measures the goodness of waveform fit between data and synthetics is minimized. Synthetics are calculated by spectral-element simulations (SPECFEM3D_GLOBE) in the 3D global earth model S362ANI to reduce the effect of heterogeneous structures. The Fréchet derivatives of Φ, written as δΦ = ∫₀ᵀ ∫_V I(ɛ†ᵢⱼ)(x, T−t) δṁᵢⱼ(x, t) d³x dt, where δmᵢⱼ is the perturbation of the moment density function and I(ɛ†ᵢⱼ)(x, T−t) denotes the time-integrated adjoint strain tensor, are calculated based upon adjoint methods implemented in SPECFEM3D_GLOBE. Our initial source model is obtained by monitoring the time-integrated adjoint strain tensors in the vicinity of the presumed source region. Source model parameters are then iteratively updated by a preconditioned conjugate-gradient method utilizing the calculated Φ and δΦ values. Our final inversion results show both similarities to and differences from previous source inversion results based on 1D background models.

  14. Multiple Moment Tensor Inversions For the December 26, 2004 Sumatra Earthquake Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Ren, L.; Liu, Q.; Hjörleifsdóttir, V.

    2010-12-01

    We present a multiple moment-tensor solution of the December 26, 2004 Sumatra earthquake based upon adjoint methods. An objective function Φ(m), where m is the multiple source model, measures the goodness of waveform fit between data and synthetics. The Fréchet derivatives of Φ, written as δΦ = ∫∫ I(ɛ†)(x, T−t) δṁᵢⱼ(x, t) dV dt, where δmᵢⱼ is the source model perturbation and I(ɛ†)(x, T−t) denotes the time-integrated adjoint strain tensor, are calculated based upon adjoint methods and spectral-element simulations (SPECFEM3D_GLOBE) in the 3D global earth model S362ANI. Our initial source model is obtained independently by monitoring the time-integrated adjoint strain tensors around the presumed source region. We then utilize the Φ and δΦ calculations in a conjugate-gradient method to iteratively invert for the source model. Our final inversion results show both similarities to and differences from previous source inversion results based on 1D earth models.

  15. Supersymmetric descendants of self-adjointly extended quantum mechanical Hamiltonians

    NASA Astrophysics Data System (ADS)

    Al-Hashimi, M. H.; Salman, M.; Shalaby, A.; Wiese, U.-J.

    2013-10-01

    We consider the descendants of self-adjointly extended Hamiltonians in supersymmetric quantum mechanics on a half-line, on an interval, and on a punctured line or interval. While there is a 4-parameter family of self-adjointly extended Hamiltonians on a punctured line, only a 3-parameter sub-family has supersymmetric descendants that are themselves self-adjoint. We also address the self-adjointness of an operator related to the supercharge, and point out that only a sub-class of its most general self-adjoint extensions is physical. Besides a general characterization of self-adjoint extensions and their supersymmetric descendants, we explicitly consider concrete examples, including a particle in a box with general boundary conditions, with and without an additional point interaction. We also discuss bulk-boundary resonances and their manifestation in the supersymmetric descendant.

  16. An Evaluation of Solution Algorithms and Numerical Approximation Methods for Modeling an Ion Exchange Process

    PubMed Central

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-01-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward-difference-formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications. PMID:20577570

  17. An evaluation of solution algorithms and numerical approximation methods for modeling an ion exchange process

    NASA Astrophysics Data System (ADS)

    Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.

    2010-07-01

    The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.

  18. Quality analysis of the solution produced by dissection algorithms applied to the traveling salesman problem

    SciTech Connect

    Cesari, G.

    1994-12-31

    The aim of this paper is to analyze experimentally the quality of the solution obtained with dissection algorithms applied to the geometric Traveling Salesman Problem. Starting from Karp's results, we apply a divide-and-conquer strategy, first dividing the plane into subregions where we calculate optimal subtours and then merging these subtours to obtain the final tour. The analysis is restricted to problem instances where points are uniformly distributed in the unit square. For relatively small sets of cities we analyze the quality of the solution by calculating the length of the optimal tour and by comparing it with our approximate solution. When the problem instance is too large we perform an asymptotical analysis estimating the length of the optimal tour. We apply the same dissection strategy also to classical heuristics by calculating approximate subtours and by comparing the results with the average quality of the heuristic. Our main result is the estimate of the rate of convergence of the approximate solution to the optimal solution as a function of the number of dissection steps, of the criterion used for the plane division and of the quality of the subtours. We have implemented our programs on MUSIC (MUlti Signal processor system with Intelligent Communication), a Single-Program-Multiple-Data parallel computer with distributed memory developed at the ETH Zurich.
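
    A minimal Python sketch of a Karp-style dissection heuristic is given below (a simplified reading of the strategy, not the paper's MUSIC implementation): the unit square is divided into grid cells, small subtours are computed exactly within each cell, and the subtours are concatenated in a snake-like cell order to form the final tour.

        import itertools
        import math
        import random

        def subtour_bruteforce(pts):
            """Optimal open path through a handful of points (only used on small cells)."""
            if len(pts) <= 2:
                return list(pts)
            best, best_len = None, float("inf")
            for perm in itertools.permutations(pts):
                length = sum(math.dist(perm[i], perm[i + 1]) for i in range(len(perm) - 1))
                if length < best_len:
                    best, best_len = perm, length
            return list(best)

        def dissection_tour(points, grid=5):
            cells = [[[] for _ in range(grid)] for _ in range(grid)]
            for p in points:
                i = min(int(p[0] * grid), grid - 1)
                j = min(int(p[1] * grid), grid - 1)
                cells[i][j].append(p)
            tour = []
            for i in range(grid):                      # visit cells in snake order
                cols = range(grid) if i % 2 == 0 else range(grid - 1, -1, -1)
                for j in cols:
                    tour.extend(subtour_bruteforce(cells[i][j]))
            return tour

        def tour_length(tour):
            return sum(math.dist(tour[k], tour[(k + 1) % len(tour)]) for k in range(len(tour)))

        random.seed(0)
        pts = [(random.random(), random.random()) for _ in range(50)]
        print(tour_length(dissection_tour(pts)))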

  19. On substructuring algorithms and solution techniques for the numerical approximation of partial differential equations

    NASA Technical Reports Server (NTRS)

    Gunzburger, M. D.; Nicolaides, R. A.

    1986-01-01

    Substructuring methods are in common use in mechanics problems where typically the associated linear systems of algebraic equations are positive definite. Here these methods are extended to problems which lead to nonpositive definite, nonsymmetric matrices. The extension is based on an algorithm which carries out the block Gauss elimination procedure without the need for interchanges even when a pivot matrix is singular. Examples are provided wherein the method is used in connection with finite element solutions of the stationary Stokes equations and the Helmholtz equation, and dual methods for second-order elliptic equations.

  20. Factor Analysis with EM Algorithm Never Gives Improper Solutions when Sample Covariance and Initial Parameter Matrices Are Proper

    ERIC Educational Resources Information Center

    Adachi, Kohei

    2013-01-01

    Rubin and Thayer ("Psychometrika," 47:69-76, 1982) proposed the EM algorithm for exploratory and confirmatory maximum likelihood factor analysis. In this paper, we prove the following fact: the EM algorithm always gives a proper solution with positive unique variances and factor correlations with absolute values that do not exceed one, when the…

  1. Implementation of a Multichannel Serial Data Streaming Algorithm using the Xilinx Serial RapidIO Solution

    NASA Technical Reports Server (NTRS)

    Doxley, Charles A.

    2016-01-01

    In the current world of applications that use reconfigurable technology implemented on field programmable gate arrays (FPGAs), there is a need for flexible architectures that can grow as the systems evolve. A project has limited resources and a fixed set of requirements that development efforts are tasked to meet. Designers must develop robust solutions that practically meet the current customer demands and also have the ability to grow for future performance. This paper describes the development of a high speed serial data streaming algorithm that allows for transmission of multiple data channels over a single serial link. The technique has the ability to change to meet new applications developed for future design considerations. This approach uses the Xilinx Serial RapidIO LOGICORE Solution to implement a flexible infrastructure to meet the current project requirements with the ability to adapt future system designs.

  2. Introduction of Parallel GPGPU Acceleration Algorithms for the Solution of Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Godoy, William F.; Liu, Xu

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is a recent technique that allows the parallel graphics processing unit (GPU) to accelerate calculations performed sequentially by the central processing unit (CPU). To introduce GPGPU to radiative transfer, the Gauss-Seidel solution of the well-known expressions for 1-D and 3-D homogeneous, isotropic media is selected as a test case. Different algorithms are introduced to balance memory and GPU-CPU communication, critical aspects of GPGPU. Results show that speed-ups of one to two orders of magnitude are obtained when compared to sequential solutions. The underlying value of GPGPU is its potential extension in radiative solvers (e.g., Monte Carlo, discrete ordinates) at a minimal learning curve.

  3. Efficient solution of liquid state integral equations using the Newton-GMRES algorithm

    NASA Astrophysics Data System (ADS)

    Booth, Michael J.; Schlijper, A. G.; Scales, L. E.; Haymet, A. D. J.

    1999-06-01

    We present examples of the accurate, robust and efficient solution of Ornstein-Zernike type integral equations which describe the structure of both homogeneous and inhomogeneous fluids. In this work we use the Newton-GMRES algorithm as implemented in the public-domain nonlinear Krylov solvers NKSOL [ P. Brown, Y. Saad, SIAM J. Sci. Stat. Comput. 11 (1990) 450] and NITSOL [ M. Pernice, H.F. Walker, SIAM J. Sci. Comput. 19 (1998) 302]. We compare and contrast this method with more traditional approaches in the literature, using Picard iteration (successive-substitution) and hybrid Newton-Raphson and Picard methods, and a recent vector extrapolation method [ H.H.H. Homeier, S. Rast, H. Krienke, Comput. Phys. Commun. 92 (1995) 188]. We find that both the performance and ease of implementation of these nonlinear solvers recommend them for the solution of this class of problem.
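
    As a minimal illustration of the Newton-Krylov (Newton-GMRES) approach, SciPy's newton_krylov solver can be applied to a stand-in nonlinear system; the system, tolerance and inner method below are assumptions chosen only to make the sketch runnable, not the Ornstein-Zernike closure itself.

        import numpy as np
        from scipy.optimize import newton_krylov

        n = 200
        b = np.linspace(0.0, 1.0, n)

        def residual(x):
            # A mildly nonlinear, diagonally dominant stand-in system:
            # x + 0.1*sin(x) + 0.05*avg(x) = b, with a weak neighbour coupling.
            avg = 0.5 * (np.roll(x, 1) + np.roll(x, -1))
            return x + 0.1 * np.sin(x) + 0.05 * avg - b

        x = newton_krylov(residual, np.zeros(n), method="gmres", f_tol=1e-10)
        print(np.max(np.abs(residual(x))))   # should be ~1e-10 or smaller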

  4. A Solution Method of Scheduling Problem with Worker Allocation by a Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Osawa, Akira; Ida, Kenichi

    In the scheduling problem with worker allocation (SPWA) proposed by Iima et al., every worker is assumed to have the same skill level on every machine. In the real world, however, each worker has a different skill level on each machine. For that reason, we propose a new model of SPWA in which each worker has a different skill level on each machine. To solve the problem, we propose a new GA for SPWA consisting of three new procedures: shortening of idle time, repairing infeasible solutions into feasible ones, and a new selection method for the GA. The effectiveness of the proposed algorithm is demonstrated by numerical experiments using benchmark problems for job-shop scheduling.

  5. Algorithmic Construction of Exact Solutions for Neutral Static Perfect Fluid Spheres

    NASA Astrophysics Data System (ADS)

    Hansraj, Sudan; Krupanandan, Daniel

    2013-07-01

    Although it ranks amongst the oldest of problems in classical general relativity, the challenge of finding new exact solutions for spherically symmetric perfect fluid spacetimes is still ongoing because of a paucity of solutions which exhibit the necessary qualitative features compatible with observational evidence. The problem amounts to solving a system of three partial differential equations in four variables, which means that any one of four geometric or dynamical quantities must be specified at the outset and the others should follow by integration. The condition of pressure isotropy yields a differential equation that may be interpreted as second-order in one of the space variables or as first-order Riccati type in the other space variable. This second option has been fruitful in allowing us to construct an algorithm to generate a complete solution to the Einstein field equations once a geometric variable is specified ab initio. We then demonstrate the construction of previously unreported solutions and examine these for physical plausibility as candidates to represent real matter. In particular we demand positive definiteness of pressure and density, as well as a subluminal sound speed. Additionally, we require the existence of a hypersurface of vanishing pressure to identify a radius for the closed distribution of fluid. Finally, we examine the energy conditions. We exhibit models which display all of these elementary physical requirements.

  6. Construction of the adjoint MIT ocean general circulation model and application to Atlantic heat transport sensitivity

    NASA Astrophysics Data System (ADS)

    Marotzke, Jochem; Giering, Ralf; Zhang, Kate Q.; Stammer, Detlef; Hill, Chris; Lee, Tong

    1999-12-01

    We first describe the principles and practical considerations behind the computer generation of the adjoint to the Massachusetts Institute of Technology ocean general circulation model (GCM) using R. Giering's software tool Tangent-Linear and Adjoint Model Compiler (TAMC). The TAMC's recipe for (FORTRAN-) line-by-line generation of adjoint code is explained by interpreting an adjoint model strictly as the operator that gives the sensitivity of the output of a model to its input. Then, the sensitivity of 1993 annual mean heat transport across 29°N in the Atlantic, to the hydrography on January 1, 1993, is calculated from a global solution of the GCM. The "kinematic sensitivity" to initial temperature variations is isolated, showing how the latter would influence heat transport if they did not affect the density and hence the flow. Over 1 year the heat transport at 29°N is influenced kinematically from regions up to 20° upstream in the western boundary current and up to 5° upstream in the interior. In contrast, the dynamical influences of initial temperature (and salinity) perturbations spread from as far as the rim of the Labrador Sea to the 29°N section along the western boundary. The sensitivities calculated with the adjoint compare excellently to those from a perturbation calculation with the dynamical model. Perturbations in initial interior salinity influence meridional overturning and heat transport when they have propagated to the western boundary and can thus influence the integrated east-west density difference. Our results support the notion that boundary monitoring of meridional mass and heat transports is feasible.

  7. Optimization of the K-means algorithm for the solution of high dimensional instances

    NASA Astrophysics Data System (ADS)

    Pérez, Joaquín; Pazos, Rodolfo; Olivares, Víctor; Hidalgo, Miguel; Ruiz, Jorge; Martínez, Alicia; Almanza, Nelva; González, Moisés

    2016-06-01

    This paper addresses the problem of clustering instances with a high number of dimensions. In particular, a new heuristic for reducing the complexity of the K-means algorithm is proposed. Traditionally, there are two approaches that deal with the clustering of instances with high dimensionality. The first executes a preprocessing step to remove those attributes of limited importance. The second, called divide and conquer, creates subsets that are clustered separately and later their results are integrated through post-processing. In contrast, this paper proposes a new solution which consists of the reduction of distance calculations from the objects to the centroids at the classification step. This heuristic is derived from the visual observation of the clustering process of K-means, in which it was found that the objects can only migrate to adjacent clusters without crossing distant clusters. Therefore, this heuristic can significantly reduce the number of distance calculations from an object to the centroids of the potential clusters that it may be classified to. To validate the proposed heuristic, we designed a set of experiments with synthetic, high-dimensional instances. One of the most notable results was obtained for an instance of 25,000 objects and 200 dimensions, whose execution time was reduced by up to 96.5% while the quality of the solution decreased by only 0.24% when compared to the K-means algorithm.
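
    One reading of the heuristic can be sketched in Python as follows (an illustrative interpretation, not the authors' code, and the parameter n_adj is an assumption): after one full assignment, each object is compared only against the centroids nearest to its current centroid, which bounds the number of distance calculations per object.

        import numpy as np

        def kmeans_adjacent(X, k, n_adj=3, iters=20, seed=0):
            rng = np.random.default_rng(seed)
            centroids = X[rng.choice(len(X), k, replace=False)]
            # One full assignment: all k distances per object.
            labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
            for _ in range(iters):
                for j in range(k):                    # update centroids
                    pts = X[labels == j]
                    if len(pts):
                        centroids[j] = pts.mean(axis=0)
                # For each centroid, find its n_adj nearest centroids (itself included).
                cdist = ((centroids[:, None, :] - centroids[None]) ** 2).sum(-1)
                adjacent = np.argsort(cdist, axis=1)[:, :n_adj]
                # Each object only checks the clusters adjacent to its current one.
                cand = adjacent[labels]               # shape (n_objects, n_adj)
                d = ((X[:, None, :] - centroids[cand]) ** 2).sum(-1)
                labels = cand[np.arange(len(X)), np.argmin(d, axis=1)]
            return labels, centroids

        X = np.random.default_rng(1).standard_normal((1000, 20))
        labels, cents = kmeans_adjacent(X, k=8)
        print(np.bincount(labels, minlength=8))       # cluster sizes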

  8. A modern solver framework to manage solution algorithms in the Community Earth System Model

    SciTech Connect

    Evans, Katherine J; Worley, Patrick H; Nichols, Dr Jeff A; WhiteIII, James B; Salinger, Andy; Price, Stephen; Lemieux, Jean-Francois; Lipscomb, William; Perego, Mauro; Vertenstein, Mariana; Edwards, Jim

    2012-01-01

    Global Earth-system models (ESMs) can now produce simulations that resolve ~50 km features and include finer-scale, interacting physical processes. In order to achieve these scale-length solutions, ESMs require smaller time steps, which limits parallel performance. Solution methods that overcome these bottlenecks can be quite intricate, and there is no single set of algorithms that perform well across the range of problems of interest. This creates significant implementation challenges, which are further compounded by the complexity of ESMs. Therefore, prototyping and evaluating new algorithms in these models requires a software framework that is flexible, extensible, and easily introduced into the existing software. We describe our efforts to create a parallel solver framework that links the Trilinos library of solvers to Glimmer-CISM, a continental ice sheet model used in the Community Earth System Model (CESM). We demonstrate this framework within both current and developmental versions of Glimmer-CISM and provide strategies for its integration into the rest of the CESM.

  9. A family of Eulerian-Lagrangian localized adjoint methods for multi-dimensional advection-reaction equations

    SciTech Connect

    Wang, H.; Man, S.; Ewing, R.E.; Qin, G.; Lyons, S.L.; Al-Lawatia, M.

    1999-06-10

    Many difficult problems arise in the numerical simulation of fluid flow processes within porous media in petroleum reservoir simulation and in subsurface contaminant transport and remediation. The authors develop a family of Eulerian-Lagrangian localized adjoint methods for the solution of the initial-boundary value problems for first-order advection-reaction equations on general multi-dimensional domains. Different tracking algorithms, including the Euler and Runge-Kutta algorithms, are used. The derived schemes, which are fully mass conservative, naturally incorporate inflow boundary conditions into their formulations and do not need any artificial outflow boundary conditions. Moreover, they have regularly structured, well-conditioned, symmetric, and positive-definite coefficient matrices, which can be efficiently solved by the conjugate gradient method in an optimal-order number of iterations without any preconditioning. Numerical results are presented to compare the performance of the ELLAM schemes with many well studied and widely used methods, including the upwind finite difference method, the Galerkin and the Petrov-Galerkin finite element methods with backward-Euler or Crank-Nicolson temporal discretization, the streamline diffusion finite element methods, the monotonic upstream-centered scheme for conservation laws (MUSCL), and the Minmod scheme.

  10. A finite-difference approximate-factorization algorithm for solution of the unsteady transonic small-disturbance equation

    NASA Technical Reports Server (NTRS)

    Batina, John T.

    1992-01-01

    A time-accurate approximate-factorization (AF) algorithm is described for solution of the three-dimensional unsteady transonic small-disturbance equation. The AF algorithm consists of a time-linearization procedure coupled with a subiteration technique. The algorithm is the basis for the Computational Aeroelasticity Program-Transonic Small Disturbance (CAP-TSD) computer code, which was developed for the analysis of unsteady aerodynamics and aeroelasticity of realistic aircraft configurations. The paper describes details on the governing flow equations and boundary conditions, with an emphasis on documenting the finite-difference formulas of the AF algorithm.

  11. Adjoint sensitivity analysis of an ultrawideband antenna

    SciTech Connect

    Stephanson, M B; White, D A

    2011-07-28

    The frequency domain finite element method using H(curl)-conforming finite elements is a robust technique for full-wave analysis of antennas. As computers become more powerful, it is becoming feasible not only to predict antenna performance, but also to compute the sensitivity of antenna performance with respect to multiple parameters. This sensitivity information can then be used for optimization of the design or specification of manufacturing tolerances. In this paper we review the Adjoint Method for sensitivity calculation, and apply it to the problem of optimizing an ultrawideband antenna.

  12. Improved Adjoint-Operator Learning For A Neural Network

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1995-01-01

    An improved method of adjoint-operator learning reduces the amount of computation and associated memory needed to make an electronic neural network learn a temporally varying pattern (e.g., to recognize a moving object in an image) in real time. The method is an extension of the method described in "Adjoint-Operator Learning for a Neural Network" (NPO-18352).

  13. Adjoint methods for external beam inverse treatment planning

    NASA Astrophysics Data System (ADS)

    Kowalok, Michael E.

    Forward and adjoint radiation transport methods may both be used to determine the dosimetric relationship between source parameters and voxel elements of a phantom. Forward methods consider one specific tuple of source parameters and calculate the response in all voxels of interest. This response is often cast as the dose delivered per unit source-weight. Adjoint transport methods, conversely, consider one particular voxel and calculate the response of that voxel in relation to all possible source parameters. In this regard, adjoint methods provide an "adjoint function" in addition to a dose value. Although the dose is for a single voxel only, the adjoint function illustrates the source parameters, (e.g. beam positions and directions) that are most important to delivering the dose to that voxel. In this regard, adjoint methods of analysis lend themselves in a natural way to optimization problems and perturbation studies. This work investigates the utility of adjoint analytic methods for treatment planning and for Monte Carlo dose calculations. Various methods for implementing this approach are discussed, along with their strengths and weaknesses. The complementary nature of adjoint and forward techniques is illustrated and exploited. Also, several features of the Monte Carlo codes MCNP and MCNPX are reviewed for treatment planning applications.

  14. Supersymmetric descendants of self-adjointly extended quantum mechanical Hamiltonians

    SciTech Connect

    Al-Hashimi, M.H.; Salman, M.; Shalaby, A.; Wiese, U.-J.

    2013-10-15

    We consider the descendants of self-adjointly extended Hamiltonians in supersymmetric quantum mechanics on a half-line, on an interval, and on a punctured line or interval. While there is a 4-parameter family of self-adjointly extended Hamiltonians on a punctured line, only a 3-parameter sub-family has supersymmetric descendants that are themselves self-adjoint. We also address the self-adjointness of an operator related to the supercharge, and point out that only a sub-class of its most general self-adjoint extensions is physical. Besides a general characterization of self-adjoint extensions and their supersymmetric descendants, we explicitly consider concrete examples, including a particle in a box with general boundary conditions, with and without an additional point interaction. We also discuss bulk-boundary resonances and their manifestation in the supersymmetric descendant. -- Highlights: •Self-adjoint extension theory and contact interactions. •Application of self-adjoint extensions to supersymmetry. •Contact interactions in finite volume with Robin boundary condition.

  15. The compressible adjoint equations in geodynamics: equations and numerical assessment

    NASA Astrophysics Data System (ADS)

    Ghelichkhan, Siavash; Bunge, Hans-Peter

    2016-04-01

    The adjoint method is a powerful means to obtain gradient information in a mantle convection model relative to past flow structure. While the adjoint equations in geodynamics have been derived for the conservation equations of mantle flow in their incompressible form, the applicability of this approximation to Earth is limited, because density increases by almost a factor of two from the surface to the Core Mantle Boundary. Here we introduce the compressible adjoint equations for the conservation equations in the anelastic-liquid approximation. Our derivation applies an operator formulation in Hilbert spaces, to connect to recent work in seismology (Fichtner et al (2006)) and geodynamics (Horbach et al (2014)), where the approach was used to derive the adjoint equations for the wave equation and incompressible mantle flow. We present numerical tests of the newly derived equations based on twin experiments, focusing on three simulations. A first, termed Compressible, assumes the compressible forward and adjoint equations, and represents the consistent means of including compressibility effects. A second, termed Mixed, applies the compressible forward equation, but ignores compressibility effects in the adjoint equations, where the incompressible equations are used instead. A third simulation, termed Incompressible, neglects compressibility effects entirely in the forward and adjoint equations relative to the reference twin. The compressible and mixed formulations successfully restore earlier mantle flow structure, while the incompressible formulation yields noticeable artifacts. Our results suggest the use of a compressible formulation, when applying the adjoint method to seismically derived mantle heterogeneity structure.

  16. Efficient checkpointing schemes for depletion perturbation solutions on memory-limited architectures

    SciTech Connect

    Stripling, H. F.; Adams, M. L.; Hawkins, W. D.

    2013-07-01

    We describe a methodology for decreasing the memory footprint and machine I/O load associated with the need to access a forward solution during an adjoint solve. Specifically, we are interested in the depletion perturbation equations, where terms in the adjoint Bateman and transport equations depend on the forward flux solution. Checkpointing is the procedure of storing snapshots of the forward solution to disk and using these snapshots to recompute the parts of the forward solution that are necessary for the adjoint solve. For large problems, however, the storage cost of just a few copies of an angular flux vector can exceed the available RAM on the host machine. We propose a methodology that does not checkpoint the angular flux vector; instead, we write and store converged source moments, which are typically of a much lower dimension than the angular flux solution. This reduces the memory footprint and I/O load of the problem, but requires that we perform single sweeps to reconstruct flux vectors on demand. We argue that this trade-off is exactly the kind of algorithm that will scale on advanced, memory-limited architectures. We analyze the cost, in terms of FLOPS and memory footprint, of five checkpointing schemes. We also provide computational results that support the analysis and show that the memory-for-work trade off does improve time to solution. (authors)
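
    The storage/recompute trade-off can be illustrated with a generic Python sketch (not the authors' transport code): snapshots are stored only every few steps, and the forward states needed by the backward (adjoint) sweep are recomputed from the nearest snapshot; the toy forward map and adjoint update below are placeholders chosen only to make the sketch runnable.

        import numpy as np

        def forward_step(x, k):            # placeholder forward update
            return np.cos(x) + 0.1 * k

        def adjoint_step(lam, x_k, k):     # discrete adjoint of the toy step: d/dx cos(x) = -sin(x)
            return -np.sin(x_k) * lam

        def run_forward(x0, n_steps, stride):
            snapshots = {0: x0.copy()}     # store only every `stride`-th state
            x = x0.copy()
            for k in range(n_steps):
                x = forward_step(x, k)
                if (k + 1) % stride == 0:
                    snapshots[k + 1] = x.copy()
            return x, snapshots

        def adjoint_sweep(lam_T, n_steps, stride, snapshots):
            lam = lam_T.copy()
            for k in reversed(range(n_steps)):
                base = (k // stride) * stride          # nearest stored snapshot
                x_k = snapshots[base].copy()
                for j in range(base, k):               # recompute x_k from it
                    x_k = forward_step(x_k, j)
                lam = adjoint_step(lam, x_k, k)
            return lam

        x0 = np.linspace(0.0, 1.0, 5)
        xT, snaps = run_forward(x0, n_steps=100, stride=10)
        grad = adjoint_sweep(np.ones_like(x0), n_steps=100, stride=10, snapshots=snaps)
        print(len(snaps), grad)            # 11 snapshots stored instead of 100 states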

  17. Modeling the Pulse Line Ion Accelerator (PLIA): an algorithm for quasi-static field solution.

    SciTech Connect

    Friedman, A; Briggs, R J; Grote, D P; Henestroza, E; Waldron, W L

    2007-06-18

    The Pulse-Line Ion Accelerator (PLIA) is a helical distributed transmission line. A rising pulse applied to the upstream end appears as a moving spatial voltage ramp, on which an ion pulse can be accelerated. This is a promising approach to acceleration and longitudinal compression of an ion beam at high line charge density. In most of the studies carried out to date, using both a simple code for longitudinal beam dynamics and the Warp PIC code, a circuit model for the wave behavior was employed; in Warp, the helix I and V are source terms in elliptic equations for E and B. However, it appears possible to obtain improved fidelity using a ''sheath helix'' model in the quasi-static limit. Here we describe an algorithmic approach that may be used to effect such a solution.

  18. Numerical solution of the Richards equation based catchment runoff model with dd-adaptivity algorithm

    NASA Astrophysics Data System (ADS)

    Kuraz, Michal

    2016-06-01

    This paper presents a pseudo-deterministic catchment runoff model based on the Richards equation [1], the governing equation for subsurface flow. The subsurface flow in a catchment is described here by two-dimensional variably saturated flow (unsaturated and saturated). The governing equation is the Richards equation with a slight modification of the time derivative term, as considered e.g. by Neuman [2]. The nonlinear nature of this problem appears in the unsaturated zone only; however, the delineation of the saturated zone boundary is a nonlinear, computationally expensive issue. The simple one-dimensional Boussinesq equation was used here as a rough estimator of the saturated zone boundary. With this estimate the dd-adaptivity algorithm (see Kuraz et al. [4, 5, 6]) could always start with an optimal subdomain split, so it is now possible to avoid solving huge systems of linear equations at the initial iteration level of our Richards equation based runoff model.

  19. Algorithms and analytical solutions for rapidly approximating long-term dispersion from line and area sources

    NASA Astrophysics Data System (ADS)

    Barrett, Steven R. H.; Britter, Rex E.

    Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point source run from an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean

  20. Adjoint of the global Eulerian-Lagrangian coupled atmospheric transport model (A-GELCA v1.0): development and validation

    NASA Astrophysics Data System (ADS)

    Belikov, Dmitry A.; Maksyutov, Shamil; Yaremchuk, Alexey; Ganshin, Alexander; Kaminski, Thomas; Blessing, Simon; Sasakawa, Motoki; Gomez-Pelaez, Angel J.; Starchenko, Alexander

    2016-02-01

    We present the development of the Adjoint of the Global Eulerian-Lagrangian Coupled Atmospheric (A-GELCA) model that consists of the National Institute for Environmental Studies (NIES) model as an Eulerian three-dimensional transport model (TM), and FLEXPART (FLEXible PARTicle dispersion model) as the Lagrangian Particle Dispersion Model (LPDM). The forward tangent linear and adjoint components of the Eulerian model were constructed directly from the original NIES TM code using an automatic differentiation tool known as TAF (Transformation of Algorithms in Fortran; http://www.FastOpt.com), with additional manual pre- and post-processing aimed at improving the transparency and clarity of the code and optimizing computational performance, including MPI (Message Passing Interface). The Lagrangian component did not require any code modification, as LPDMs are self-adjoint and track a significant number of particles backward in time in order to calculate the sensitivity of the observations to the neighboring emission areas. The constructed Eulerian adjoint was coupled with the Lagrangian component at a time boundary in the global domain. The simulations presented in this work were performed using the A-GELCA model in forward and adjoint modes. The forward simulation shows that the coupled model improves reproduction of the seasonal cycle and short-term variability of CO2. Mean bias and standard deviation for five of the six Siberian sites considered decrease roughly by 1 ppm when using the coupled model. The adjoint of the Eulerian model was shown, through several numerical tests, to be very accurate (mismatches of around ±6e-14, i.e., within machine epsilon) compared to direct forward sensitivity calculations. The developed adjoint of the coupled model combines the flux conservation and stability of an Eulerian discrete adjoint formulation with the flexibility, accuracy, and high resolution of a Lagrangian backward trajectory formulation. A-GELCA will be incorporated

  1. Adjoint equations and analysis of complex systems: Application to virus infection modelling

    NASA Astrophysics Data System (ADS)

    Marchuk, G. I.; Shutyaev, V.; Bocharov, G.

    2005-12-01

    The recent development of applied mathematics is characterized by ever increasing attempts to apply modelling and computational approaches across various areas of the life sciences. The need for a rigorous analysis of complex system dynamics in immunology has been recognized for more than three decades. The aim of the present paper is to draw attention to the method of adjoint equations. The methodology makes it possible to obtain information about physical processes and to examine the sensitivity of complex dynamical systems. This provides a basis for a better understanding of the causal relationships between the immune system's performance and its parameters, and helps to improve experimental design in the solution of applied problems. We show how the adjoint equations can be used to explain the changes in hepatitis B virus infection dynamics between individual patients.
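
    As a minimal illustration of the adjoint-equation idea for a dynamical system (a generic sketch, not the authors' immunological model), the discrete adjoint of a forward-Euler integration yields the sensitivity of a terminal cost to a model parameter from a single backward pass; the dynamics and cost below are illustrative assumptions.

        import numpy as np

        def f(x, p):                 # toy dynamics: exponential decay with rate p
            return -p * x

        def dfdx(x, p):
            return -p

        def dfdp(x, p):
            return -x

        def cost_and_gradient(x0, p, h, n_steps):
            xs = [x0]                            # forward pass: store the trajectory
            for _ in range(n_steps):
                xs.append(xs[-1] + h * f(xs[-1], p))
            J = 0.5 * xs[-1] ** 2                # terminal cost, so dJ/dx_T = x_T
            lam = xs[-1]
            grad = 0.0
            for k in reversed(range(n_steps)):   # backward (adjoint) pass
                grad += h * dfdp(xs[k], p) * lam
                lam = lam + h * dfdx(xs[k], p) * lam
            return J, grad

        J, dJdp = cost_and_gradient(x0=1.0, p=0.3, h=0.01, n_steps=500)
        eps = 1e-6                               # finite-difference check
        Jp, _ = cost_and_gradient(1.0, 0.3 + eps, 0.01, 500)
        print(dJdp, (Jp - J) / eps)              # the two values should agree closely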

  2. Development of Gis Tool for the Solution of Minimum Spanning Tree Problem using Prim's Algorithm

    NASA Astrophysics Data System (ADS)

    Dutta, S.; Patra, D.; Shankar, H.; Alok Verma, P.

    2014-11-01

    A minimum spanning tree (MST) of a connected, undirected and weighted network is a tree of that network that contains all of its nodes and whose total edge weight is minimum among all possible spanning trees of the same network. In this study, we have developed a new GIS tool that uses the well-known rudimentary Prim's algorithm to construct the minimum spanning tree of a connected, undirected and weighted road network. This algorithm is based on the weight (adjacency) matrix of a weighted network and helps to solve complex network MST problems easily, efficiently and effectively. The selection of an appropriate algorithm is essential; otherwise it is very hard to obtain an optimal result. In the case of a road transportation network, it is essential to find optimal results by considering all the necessary points based on a cost factor (time or distance). This paper solves the minimum spanning tree (MST) problem of a road network by finding its minimum span while considering all the important network junction points. GIS technology is usually used to solve network-related problems such as the optimal path problem, the travelling salesman problem, vehicle routing problems, and location-allocation problems. Therefore, in this study we have developed a customized GIS tool, written as a Python script in ArcGIS software, for the solution of the MST problem for the road transportation network of Dehradun city, considering distance and time as the impedance (cost) factors. The tool has a number of advantages: users do not need deep knowledge of the subject, as the tool is user-friendly and provides access to information that is varied and adapted to their needs. This GIS tool for MST can be applied to a nationwide plan called the Prime Minister Gram Sadak Yojana in India to provide optimal all-weather road connectivity to unconnected villages (points). This tool is also useful for constructing highways or railways spanning several
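
    The underlying construction can be illustrated with a minimal, stand-alone Python sketch of Prim's algorithm using a binary heap (the tool described above is an ArcGIS script; the small road graph below is a made-up example).

        import heapq

        def prim_mst(graph, start):
            """graph: dict node -> list of (neighbour, weight) pairs (undirected).
            Returns the MST edges and the total weight."""
            visited = {start}
            heap = [(w, start, v) for v, w in graph[start]]
            heapq.heapify(heap)
            mst, total = [], 0.0
            while heap and len(visited) < len(graph):
                w, u, v = heapq.heappop(heap)
                if v in visited:
                    continue
                visited.add(v)
                mst.append((u, v, w))
                total += w
                for nxt, w2 in graph[v]:
                    if nxt not in visited:
                        heapq.heappush(heap, (w2, v, nxt))
            return mst, total

        # Made-up road junctions A-E with distances (or travel times) as weights.
        roads = {
            "A": [("B", 4), ("C", 2)],
            "B": [("A", 4), ("C", 1), ("D", 5)],
            "C": [("A", 2), ("B", 1), ("D", 8), ("E", 10)],
            "D": [("B", 5), ("C", 8), ("E", 2)],
            "E": [("C", 10), ("D", 2)],
        }
        print(prim_mst(roads, "A"))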

  3. A direct algorithm for solution of incompressible three-dimensional unsteady Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Osswald, G. A.; Ghia, K. N.; Ghia, U.

    1987-01-01

    A direct, implicit, numerical solution algorithm, with second-order accuracy in space and time, is constructed for the three-dimensional unsteady incompressible Navier-Stokes equations formulated in terms of velocity and vorticity, using generalized orthogonal coordinates to achieve the accurate solution of complex viscous flow configurations. A numerically stable, efficient, direct inversion procedure is developed for the computationally intensive divergence-curl elliptic velocity problem. This overdetermined partial differential operator is first formulated as a uniquely determined, nonsingular matrix-vector problem; this aspect of the procedure is a unique feature of the present analysis. The three-dimensional vorticity-transport equation is solved by a modified factorization technique which completely eliminates the need for any block-matrix inversions and only scalar tridiagonal matrices need to be inverted. The method is applied to the test problem of the three-dimensional flow within a shear-driven cubical box. Coherent streamwise vortex structures are observed within the steady-state flow at Re = 100.

  4. A diagonal algorithm for the method of pseudocompressibility. [for steady-state solution to incompressible Navier-Stokes equation

    NASA Technical Reports Server (NTRS)

    Rogers, S. E.; Kwak, D.; Chang, J. L. C.

    1986-01-01

    The method of pseudocompressibility has been shown to be an efficient method for obtaining a steady-state solution to the incompressible Navier-Stokes equations. Recent improvements to this method include the use of a diagonal scheme for the inversion of the equations at each iteration. The necessary transformations have been derived for the pseudocompressibility equations in generalized coordinates. The diagonal algorithm reduces the computing time necessary to obtain a steady-state solution by a factor of nearly three. Implicit viscous terms are maintained in the equations, and it has become possible to use fourth-order implicit dissipation. The steady-state solution is unchanged by the approximations resulting from the diagonalization of the equations. Computed results for flow over a two-dimensional backward-facing step and a three-dimensional cylinder mounted normal to a flat plate are presented for both the old and new algorithms. The accuracy and computing efficiency of these algorithms are compared.

  5. Second-order p-iterative solution of the Lambert/Gauss problem. [algorithm for efficient orbit determination

    NASA Technical Reports Server (NTRS)

    Boltz, F. W.

    1984-01-01

    An algorithm is presented for efficient p-iterative solution of the Lambert/Gauss orbit-determination problem using second-order Newton iteration. The algorithm is based on a universal transformation of Kepler's time-of-flight equation and approximate inverse solutions of this equation for short-way and long-way flight paths. The approximate solutions provide both good starting values for iteration and simplified computation of the second-order term in the iteration formula. Numerical results are presented which indicate that in many cases of practical significance (except those having collinear position vectors) the algorithm produces at least eight significant digits of accuracy with just two or three steps of iteration.

  6. On the utility of the multi-level algorithm for the solution of nearly completely decomposable Markov chains

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Horton, Graham

    1994-01-01

    Recently the Multi-Level algorithm was introduced as a general purpose solver for the solution of steady state Markov chains. In this paper, we consider the performance of the Multi-Level algorithm for solving Nearly Completely Decomposable (NCD) Markov chains, for which special-purpose iterative aggregation/disaggregation algorithms such as the Koury-McAllister-Stewart (KMS) method have been developed that can exploit the decomposability of the Markov chain. We present experimental results indicating that the general-purpose Multi-Level algorithm is competitive, and can be significantly faster than the special-purpose KMS algorithm when Gauss-Seidel and Gaussian Elimination are used for solving the individual blocks.
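
    For context, the baseline problem both solvers target is the stationary distribution π satisfying πP = π; the simple power iteration below is only a reference method, and it converges slowly on NCD chains, which is exactly what the Multi-Level and KMS schemes are designed to overcome. The small example chain is an illustrative assumption.

        import numpy as np

        def stationary_distribution(P, tol=1e-10, max_iter=200000):
            """Power iteration for pi P = pi (reference method only)."""
            pi = np.full(P.shape[0], 1.0 / P.shape[0])
            for _ in range(max_iter):
                new = pi @ P
                if np.max(np.abs(new - pi)) < tol:
                    return new
                pi = new
            return pi

        # A small NCD chain: two tightly coupled 2-state blocks, weakly coupled by eps.
        eps = 1e-3
        P = np.array([[0.7 - eps, 0.3,       eps,       0.0],
                      [0.4,       0.6 - eps, 0.0,       eps],
                      [eps,       0.0,       0.5 - eps, 0.5],
                      [0.0,       eps,       0.2,       0.8 - eps]])
        print(stationary_distribution(P))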

  7. Aerodynamic design optimization by using a continuous adjoint method

    NASA Astrophysics Data System (ADS)

    Luo, JiaQi; Xiong, JunTao; Liu, Feng

    2014-07-01

    This paper presents the fundamentals of a continuous adjoint method and the applications of this method to the aerodynamic design optimization of both external and internal flows. General formulation of the continuous adjoint equations and the corresponding boundary conditions are derived. With the adjoint method, the complete gradient information needed in the design optimization can be obtained by solving the governing flow equations and the corresponding adjoint equations only once for each cost function, regardless of the number of design parameters. An inverse design of airfoil is firstly performed to study the accuracy of the adjoint gradient and the effectiveness of the adjoint method as an inverse design method. Then the method is used to perform a series of single and multiple point design optimization problems involving the drag reduction of airfoil, wing, and wing-body configuration, and the aerodynamic performance improvement of turbine and compressor blade rows. The results demonstrate that the continuous adjoint method can efficiently and significantly improve the aerodynamic performance of the design in a shape optimization problem.

  8. Dynamic simulation of concentrated macromolecular solutions with screened long-range hydrodynamic interactions: Algorithm and limitations

    PubMed Central

    Ando, Tadashi; Chow, Edmond; Skolnick, Jeffrey

    2013-01-01

    Hydrodynamic interactions exert a critical effect on the dynamics of macromolecules. As the concentration of macromolecules increases, by analogy to the behavior of semidilute polymer solutions or the flow in porous media, one might expect hydrodynamic screening to occur. Hydrodynamic screening would have implications both for the understanding of macromolecular dynamics as well as practical implications for the simulation of concentrated macromolecular solutions, e.g., in cells. Stokesian dynamics (SD) is one of the most accurate methods for simulating the motions of N particles suspended in a viscous fluid at low Reynolds number, in that it considers both far-field and near-field hydrodynamic interactions. This algorithm traditionally involves an O(N³) operation to compute Brownian forces at each time step, although asymptotically faster but more complex SD methods are now available. Motivated by the idea of hydrodynamic screening, the far-field part of the hydrodynamic matrix in SD may be approximated by a diagonal matrix, which is equivalent to assuming that long range hydrodynamic interactions are completely screened. This approximation allows sparse matrix methods to be used, which can reduce the apparent computational scaling to O(N). Previously there were several simulation studies using this approximation for monodisperse suspensions. Here, we employ newly designed preconditioned iterative methods for both the computation of Brownian forces and the solution of linear systems, and consider the validity of this approximation in polydisperse suspensions. We evaluate the accuracy of the diagonal approximation method using an intracellular-like suspension. The diffusivities of particles obtained with this approximation are close to those with the original method. However, this approximation underestimates intermolecular correlated motions, which is a trade-off between accuracy and computing efficiency. The new method makes it possible to perform large-scale and

  9. Dynamic simulation of concentrated macromolecular solutions with screened long-range hydrodynamic interactions: Algorithm and limitations

    NASA Astrophysics Data System (ADS)

    Ando, Tadashi; Chow, Edmond; Skolnick, Jeffrey

    2013-09-01

    Hydrodynamic interactions exert a critical effect on the dynamics of macromolecules. As the concentration of macromolecules increases, by analogy to the behavior of semidilute polymer solutions or the flow in porous media, one might expect hydrodynamic screening to occur. Hydrodynamic screening would have implications both for the understanding of macromolecular dynamics as well as practical implications for the simulation of concentrated macromolecular solutions, e.g., in cells. Stokesian dynamics (SD) is one of the most accurate methods for simulating the motions of N particles suspended in a viscous fluid at low Reynolds number, in that it considers both far-field and near-field hydrodynamic interactions. This algorithm traditionally involves an O(N³) operation to compute Brownian forces at each time step, although asymptotically faster but more complex SD methods are now available. Motivated by the idea of hydrodynamic screening, the far-field part of the hydrodynamic matrix in SD may be approximated by a diagonal matrix, which is equivalent to assuming that long range hydrodynamic interactions are completely screened. This approximation allows sparse matrix methods to be used, which can reduce the apparent computational scaling to O(N). Previously there were several simulation studies using this approximation for monodisperse suspensions. Here, we employ newly designed preconditioned iterative methods for both the computation of Brownian forces and the solution of linear systems, and consider the validity of this approximation in polydisperse suspensions. We evaluate the accuracy of the diagonal approximation method using an intracellular-like suspension. The diffusivities of particles obtained with this approximation are close to those with the original method. However, this approximation underestimates intermolecular correlated motions, which is a trade-off between accuracy and computing efficiency. The new method makes it possible to perform large-scale and

  10. Adjoint active surfaces for localization and imaging.

    PubMed

    Cook, Daniel A; Mueller, Martin Fritz; Fedele, Francesco; Yezzi, Anthony J

    2015-01-01

    This paper addresses the problem of localizing and segmenting regions embedded within a surrounding medium by characterizing their boundaries, as opposed to imaging the entirety of the volume. Active surfaces are used to directly reconstruct the shape of the region of interest. We describe the procedure for finding the optimal surface, which is computed iteratively via gradient descent that exploits the sensitivity of an error minimization functional to changes of the active surface. In doing so, we introduce the adjoint model to compute the sensitivity, and in this respect, the method shares common ground with several other disciplines, such as optimal control. Finally, we illustrate the proposed active surface technique in the framework of wave propagation governed by the scalar Helmholtz equation. Potential applications include electromagnetics, acoustics, geophysics, nondestructive testing, and medical imaging. PMID:25438311

  11. Adjoint tomography of the southern California crust.

    PubMed

    Tape, Carl; Liu, Qinya; Maggi, Alessia; Tromp, Jeroen

    2009-08-21

    Using an inversion strategy based on adjoint methods, we developed a three-dimensional seismological model of the southern California crust. The resulting model involved 16 tomographic iterations, which required 6800 wavefield simulations and a total of 0.8 million central processing unit hours. The new crustal model reveals strong heterogeneity, including local changes of +/-30% with respect to the initial three-dimensional model provided by the Southern California Earthquake Center. The model illuminates shallow features such as sedimentary basins and compositional contrasts across faults. It also reveals crustal features at depth that aid in the tectonic reconstruction of southern California, such as subduction-captured oceanic crustal fragments. The new model enables more realistic and accurate assessments of seismic hazard. PMID:19696349

  12. Adjoints and Low-rank Covariance Representation

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.

    2000-01-01

    Quantitative measures of the uncertainty of Earth System estimates can be as important as the estimates themselves. Second moments of estimation errors are described by the covariance matrix, whose direct calculation is impractical when the number of degrees of freedom of the system state is large. Ensemble and reduced-state approaches to prediction and data assimilation replace full estimation error covariance matrices by low-rank approximations. The appropriateness of such approximations depends on the spectrum of the full error covariance matrix, whose calculation is also often impractical. Here we examine the situation where the error covariance is a linear transformation of a forcing error covariance. We use operator norms and adjoints to relate the appropriateness of low-rank representations to the conditioning of this transformation. The analysis is used to investigate low-rank representations of the steady-state response to random forcing of an idealized discrete-time dynamical system.
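
    The setting can be sketched generically in Python (an illustration, not the authors' model): the error covariance P = L Q Lᵀ is a linear transformation of a forcing covariance Q, and a low-rank representation keeps only the leading eigenpairs of P; how well the truncation works depends on how quickly the spectrum of P decays, i.e. on the conditioning of the map L.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        L = rng.standard_normal((n, n)) / np.sqrt(n)   # stand-in linear (propagator) map
        Q = np.eye(n)                                  # white forcing covariance
        P = L @ Q @ L.T                                # full error covariance

        w, V = np.linalg.eigh(P)                       # keep the k leading eigenpairs
        order = np.argsort(w)[::-1]
        k = 20
        Vk, wk = V[:, order[:k]], w[order[:k]]
        P_k = (Vk * wk) @ Vk.T                         # rank-k representation

        rel_err = np.linalg.norm(P - P_k, 2) / np.linalg.norm(P, 2)
        print(f"rank-{k} relative spectral-norm error: {rel_err:.3f}")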

  13. Self-adjointness and conservation laws of difference equations

    NASA Astrophysics Data System (ADS)

    Peng, Linyu

    2015-06-01

    A general theorem on conservation laws for arbitrary difference equations is proved. The theorem is based on the introduction of an adjoint system related to a given difference system, and it does not require the existence of a difference Lagrangian. It is proved that the combined system, consisting of the original system and its adjoint system, is governed by a variational principle, which inherits all symmetries of the original system. Noether's theorem can then be applied. With some special techniques, e.g. self-adjointness properties, this allows us to obtain conservation laws for difference equations that are not necessarily governed by Lagrangian formalisms.

  14. A forward operator and its adjoint for GPS slant total delays

    NASA Astrophysics Data System (ADS)

    Zus, Florian; Dick, Galina; Heise, Stefan; Wickert, Jens

    2015-05-01

    In a recent study we developed a fast and accurate algorithm to compute Global Positioning System (GPS) Slant Total Delays (STDs) utilizing numerical weather model data. Having developed a forward operator, we construct in this study the tangent linear (adjoint) operator by applying the chain rule of differential calculus in forward (reverse) mode. Armed with these operators, we show in a simulation study the potential benefit of GPS STDs in inverse modeling. We conclude that the developed operators are tailored for three- (four-)dimensional variational data assimilation and/or travel-time tomography.
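
    A standard way to validate such operator pairs is the dot-product test, sketched below for a generic linear operator (a stand-in, not the GPS slant-total-delay operator itself): for a tangent-linear map H and its adjoint Hᵀ, the inner products ⟨H dx, dy⟩ and ⟨dx, Hᵀ dy⟩ must agree to machine precision.

        import numpy as np

        rng = np.random.default_rng(0)
        m, n = 40, 60
        H = rng.standard_normal((m, n))      # stand-in tangent-linear operator

        def forward(dx):                     # tangent-linear model
            return H @ dx

        def adjoint(dy):                     # adjoint model
            return H.T @ dy

        dx = rng.standard_normal(n)
        dy = rng.standard_normal(m)
        lhs = forward(dx) @ dy               # <H dx, dy>
        rhs = dx @ adjoint(dy)               # <dx, H^T dy>
        print(abs(lhs - rhs) / abs(lhs))     # should be near machine epsilon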

  15. Adjoint-based error estimation and mesh adaptation for the correction procedure via reconstruction method

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Wang, Z. J.

    2015-08-01

    Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction formulation (CPR) to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. Super-convergent functional and error estimates for the output are obtained with the CPR method. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost compared with uniform grid refinement.
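
    The mechanism behind adjoint-based output-error estimation can be shown on a plain linear algebraic problem; the sketch below is only the generic adjoint-weighted-residual identity, not the dual-consistent CPR formulation of the paper. For A u = f and an output J(u) = g^T u, the adjoint solve A^T psi = g turns the residual of an approximate solution into the output error.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 50
        A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stand-in "discretization" matrix
        f = rng.standard_normal(n)                           # right-hand side
        g = rng.standard_normal(n)                           # output functional J(u) = g^T u

        u_exact = np.linalg.solve(A, f)
        u_h = u_exact + 1e-3 * rng.standard_normal(n)        # surrogate for a coarse/inexact solution

        psi = np.linalg.solve(A.T, g)                        # adjoint problem A^T psi = g

        true_err = g @ u_exact - g @ u_h                     # exact output error
        est_err = psi @ (f - A @ u_h)                        # adjoint-weighted residual estimate
        print(true_err, est_err)                             # agree to round-off for a linear problem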

  16. A comparison of three-dimensional nonequilibrium solution algorithms applied to hypersonic flows with stiff chemical source terms

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Venkatapathy, Ethiraj

    1993-01-01

    Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel (LUSGS), are used to compute nonequilibrium flow around the Apollo 4 return capsule at 62 km altitude. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15, 23, and 30, the LUSGS method produces an eight-order-of-magnitude drop in the L2 norm of the energy residual in one-third to one-half of the Cray C-90 computer time required by the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 23 and above. At Mach 40 the performance of the LUSGS algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.

  17. Sensitivity of Lumped Constraints Using the Adjoint Method

    NASA Technical Reports Server (NTRS)

    Akgun, Mehmet A.; Haftka, Raphael T.; Wu, K. Chauncey; Walsh, Joanne L.

    1999-01-01

    Adjoint sensitivity calculation of stress, buckling and displacement constraints may be much less expensive than direct sensitivity calculation when the number of load cases is large. Adjoint stress and displacement sensitivities are available in the literature. Expressions for local buckling sensitivity of isotropic plate elements are derived in this study. Computational efficiency of the adjoint method is sensitive to the number of constraints and, therefore, the method benefits from constraint lumping. A continuum version of the Kreisselmeier-Steinhauser (KS) function is chosen to lump constraints. The adjoint and direct methods are compared for three examples: a truss structure, a simple HSCT wing model, and a large HSCT model. These sensitivity derivatives are then used in optimization.
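
    For reference, a discrete Kreisselmeier-Steinhauser aggregate (a hedged sketch of the lumping idea, not the continuum KS functional used in the paper) can be written in a few lines; the aggregation parameter rho and the sample constraint values below are arbitrary.

        import numpy as np

        def ks(g, rho=50.0):
            """Kreisselmeier-Steinhauser aggregate of constraint values g_i (feasible when <= 0).

            Uses the numerically safe shifted form
                KS(g) = max(g) + (1/rho) * log(sum(exp(rho * (g - max(g))))),
            which satisfies max(g) <= KS(g) <= max(g) + log(len(g)) / rho.
            """
            g = np.asarray(g, dtype=float)
            gmax = g.max()
            return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

        g = np.array([-0.30, -0.05, -0.02, -0.50])          # e.g. stress margins at four points
        print(g.max(), ks(g, rho=50.0), ks(g, rho=200.0))    # KS approaches max(g) as rho grows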

  18. A Multilevel Algorithm for the Solution of Second Order Elliptic Differential Equations on Sparse Grids

    NASA Technical Reports Server (NTRS)

    Pflaum, Christoph

    1996-01-01

    A multilevel algorithm is presented that solves general second order elliptic partial differential equations on adaptive sparse grids. The multilevel algorithm consists of several V-cycles. Suitable discretizations ensure that the discrete equation system can be solved efficiently. Numerical experiments show a convergence rate of order O(1) for the multilevel algorithm.

  19. Efficient solution of the Euler and Navier-Stokes equations with a vectorized multiple-grid algorithm

    NASA Technical Reports Server (NTRS)

    Chima, R. V.; Johnson, G. M.

    1983-01-01

    A multiple-grid algorithm for use in efficiently obtaining steady solutions to the Euler and Navier-Stokes equations is presented. The convergence of the explicit MacCormack algorithm on a fine grid is accelerated by propagating transients from the domain using a sequence of successively coarser grids. Both the fine and coarse grid schemes are readily vectorizable. The combination of multiple-gridding and vectorization results in substantially reduced computational times for the numerical solution of a wide range of flow problems. Results are presented for subsonic, transonic, and supersonic inviscid flows and for subsonic attached and separated laminar viscous flows. Work reduction factors over a scalar, single-grid algorithm range as high as 76.8. Previously announced in STAR as N83-24467

  20. A solution algorithm for the fluid dynamic equations based on a stochastic model for molecular motion

    SciTech Connect

    Jenny, Patrick Torrilhon, Manuel; Heinz, Stefan

    2010-02-20

    In this paper, a stochastic model is presented to simulate the flow of gases which are not in thermodynamic equilibrium, as in rarefied or microscale flows. For the interaction of a particle with others, statistical moments of the local ensemble have to be evaluated, but unlike in molecular dynamics simulations or DSMC, no collisions between computational particles are considered. In addition, a novel integration technique allows for time steps independent of the stochastic time scale. The stochastic model represents a Fokker-Planck equation in the kinetic description, which can be viewed as an approximation to the Boltzmann equation. This allows for a rigorous investigation of the relation between the new model and classical fluid and kinetic equations. The fluid dynamic equations of Navier-Stokes and Fourier are fully recovered for small relaxation times, while for larger values the new model extends into the kinetic regime. Numerical studies demonstrate that the stochastic model is consistent with Navier-Stokes in that limit, but also that the results become significantly different if the conditions for equilibrium are invalid. The application to the Knudsen paradox demonstrates the correctness and relevance of this development, and comparisons with existing kinetic equations and standard solution algorithms reveal its advantages. Moreover, results of a test case with geometrically complex boundaries are presented.

  1. Surface wave sensitivity: mode summation versus adjoint SEM

    NASA Astrophysics Data System (ADS)

    Zhou, Ying; Liu, Qinya; Tromp, Jeroen

    2011-12-01

    We compare finite-frequency phase and amplitude sensitivity kernels calculated based on frequency-domain surface wave mode summation and a time-domain adjoint method. The adjoint calculations involve a forward wavefield generated by an earthquake and an adjoint wavefield generated at a seismic receiver. We determine adjoint sources corresponding to frequency-dependent phase and amplitude measurements made using a multitaper technique, which may be applied to any single-taper measurement, including box car windowing. We calculate phase and amplitude sensitivity kernels using an adjoint method based on wave propagation simulations using a spectral element method (SEM). Sensitivity kernels calculated using the adjoint SEM are in good agreement with kernels calculated based on mode summation. In general, the adjoint SEM is more computationally expensive than mode summation in global studies. The advantage of the adjoint SEM lies in the calculation of sensitivity kernels in 3-D earth models. We compare surface wave sensitivity kernels computed in 1-D and 3-D reference earth models and show that (1) lateral wave speed heterogeneities may affect the geometry and amplitude of surface wave sensitivity; (2) sensitivity kernels of long-period surface waves calculated in 1-D model PREM and 3-D models S20RTS+CRUST2.0 and FFSW1+CRUST2.0 do not show significant differences, indicating that the use of a 1-D reference model is adequate in global inversions of long-period surface waves (periods of 50 s and longer); and (3) the differences become significant for short-period Love waves when mode coupling is sensitive to large differences in reference crustal structure. Finally, we show that sensitivity kernels in anelastic earth models may be calculated in purely elastic earth models provided physical dispersion is properly accounted for.

  2. Adjoint Function: Physical Basis of Variational & Perturbation Theory in Transport

    2009-07-27

    Version 00. Dr. J.D. Lewins has now released the following legacy book for free distribution: Importance: The Adjoint Function: The Physical Basis of Variational and Perturbation Theory in Transport and Diffusion Problems, North-Holland Publishing Company, Amsterdam, 582 pages, 1966. Contents: Introduction: Continuous Systems and the Variational Principle; 1. The Fundamental Variational Principle; 2. The Importance Function; 3. Adjoint Equations; 4. Variational Methods; 5. Perturbation and Iterative Methods; 6. Non-Linear Theory.

  3. A family of solution algorithms for nonlinear structural analysis based on relaxation equations

    NASA Technical Reports Server (NTRS)

    Park, K. C.

    1981-01-01

    A family of hierarchical algorithms for nonlinear structural equations is presented. The algorithms are based on the Davidenko-Branin type homotopy and are shown to yield consistent hierarchical perturbation equations. The algorithms appear to be particularly suitable to problems involving bifurcation and limit point calculations. An important by-product of the algorithms is that they provide a systematic and economical means for computing the step size at each iteration stage when a Newton-like method is employed to solve the systems of equations. Some sample problems are provided to illustrate the characteristics of the algorithms.
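
    A minimal sketch of a Davidenko-type homotopy (our own toy version, not Park's hierarchical formulation): embed F(x) = 0 in H(x, lambda) = F(x) - (1 - lambda) F(x0) = 0 and march lambda from 0 to 1; differentiating H along the path gives the relaxation equation J(x) dx/dlambda = -F(x0), integrated below with an Euler predictor and a Newton corrector.

        import numpy as np

        def F(x):
            """Toy nonlinear residual standing in for a structural equilibrium equation."""
            return np.array([x[0]**3 + x[1] - 1.0,
                             x[1]**3 - x[0] + 1.0])

        def J(x):
            return np.array([[3.0 * x[0]**2, 1.0],
                             [-1.0, 3.0 * x[1]**2]])

        x = np.array([2.0, 2.0])                 # starting guess x0 (solves H at lambda = 0)
        F0 = F(x)
        nsteps = 50
        dlam = 1.0 / nsteps

        for k in range(nsteps):
            # Predictor: Euler step of the Davidenko relaxation equation J(x) dx/dlam = -F(x0).
            x = x + dlam * np.linalg.solve(J(x), -F0)
            # Corrector: one Newton step on H(x, lam) = F(x) - (1 - lam) * F(x0) = 0.
            lam = (k + 1) * dlam
            x = x - np.linalg.solve(J(x), F(x) - (1.0 - lam) * F0)

        for _ in range(5):                       # final Newton clean-up at lambda = 1
            x = x - np.linalg.solve(J(x), F(x))
        print(x, F(x))                           # residual of the original system near zero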

  4. Universal Racah matrices and adjoint knot polynomials: Arborescent knots

    NASA Astrophysics Data System (ADS)

    Mironov, A.; Morozov, A.

    2016-04-01

    By now it is well established that the quantum dimensions of descendants of the adjoint representation can be described in a universal form, independent of a particular family of simple Lie algebras. The Rosso-Jones formula then implies a universal description of the adjoint knot polynomials for torus knots, which in particular unifies the HOMFLY (SU(N)) and Kauffman (SO(N)) polynomials. For E8 the adjoint representation is also fundamental. We suggest extending the universality from the dimensions to the Racah matrices, and this immediately produces a unified description of the adjoint knot polynomials for all arborescent (double-fat) knots, including twist, 2-bridge and pretzel knots. Technically, we develop together the universality and the "eigenvalue conjecture", which expresses the Racah and mixing matrices through the eigenvalues of the quantum R-matrix; for dealing with the adjoint polynomials one has to extend it to the previously unknown 6 × 6 case. The adjoint polynomials do not distinguish between mutants and therefore are not very efficient in knot theory; however, universal polynomials in higher representations can probably be better in this respect.

  5. Comparison of adjoint and nudging methods to initialise ice sheet model basal conditions

    NASA Astrophysics Data System (ADS)

    Mosbeux, Cyrille; Gillet-Chaulet, Fabien; Gagliardini, Olivier

    2016-07-01

    Ice flow models are now routinely used to forecast the ice sheets' contribution to 21st century sea-level rise. For such short term simulations, the model response is greatly affected by the initial conditions. Data assimilation algorithms have been developed to invert for the friction of the ice on its bedrock using observed surface velocities. A drawback of these methods is that remaining uncertainties, especially in the bedrock elevation, lead to non-physical ice flux divergence anomalies resulting in undesirable transient effects. In this study, we compare two different assimilation algorithms based on adjoints and nudging to constrain both bedrock friction and elevation. Using synthetic twin experiments with realistic observation errors, we show that the two algorithms lead to similar performances in reconstructing both variables and allow the flux divergence anomalies to be significantly reduced.

  6. Periodic differential equations with self-adjoint monodromy operator

    NASA Astrophysics Data System (ADS)

    Yudovich, V. I.

    2001-04-01

    A linear differential equation du/dt = A(t)u with a p-periodic (generally speaking, unbounded) operator coefficient in a Euclidean or a Hilbert space H is considered. It is proved under natural constraints that the monodromy operator U_p is self-adjoint and strictly positive if A*(-t) = A(t) for all t ∈ R. It is shown that Hamiltonian systems in the class under consideration are usually unstable and, if they are stable, then the operator U_p reduces to the identity and all solutions are p-periodic. For higher frequencies averaged equations are derived. Remarkably, high-frequency modulation may double the number of critical values. General results are applied to rotational flows with cylindrical velocity components a_r = a_z = 0, a_θ = λ c(t) r^β, β < -1, where c(t) is an even p-periodic function, and also to several problems of free gravitational convection of fluids in periodic fields.

  7. Parallelized Three-Dimensional Resistivity Inversion Using Finite Elements And Adjoint State Methods

    NASA Astrophysics Data System (ADS)

    Schaa, Ralf; Gross, Lutz; Du Plessis, Jaco

    2015-04-01

    The resistivity method is one of the oldest geophysical exploration methods, which employs one pair of electrodes to inject current into the ground and one or more pairs of electrodes to measure the electrical potential difference. The potential difference is a non-linear function of the subsurface resistivity distribution described by an elliptic partial differential equation (PDE) of the Poisson type. Inversion of measured potentials solves for the subsurface resistivity represented by PDE coefficients. With increasing advances in multichannel resistivity acquisition systems (systems with more than 60 channels and full waveform recording are now emerging), inversion software requires efficient storage and solver algorithms. We developed the finite element solver Escript, which provides a user-friendly programming environment in Python to solve large-scale PDE-based problems (see https://launchpad.net/escript-finley). Using finite elements, highly irregular shaped geology and topography can readily be taken into account. For the 3D resistivity problem, we have implemented the secondary potential approach, where the PDE is decomposed into a primary potential caused by the source current and the secondary potential caused by changes in subsurface resistivity. The primary potential is calculated analytically, and the boundary value problem for the secondary potential is solved using nodal finite elements. This approach removes the singularity caused by the source currents and provides more accurate 3D resistivity models. To solve the inversion problem we apply a 'first optimize then discretize' approach using the quasi-Newton scheme in the form of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method (see Gross & Kemp 2013). The evaluation of the cost function requires the solution of the secondary potential PDE for each source current and the solution of the corresponding adjoint-state PDE for the cost function gradients with respect to the subsurface
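
    The 'first optimize then discretize' workflow with L-BFGS can be illustrated on a generic least-squares toy problem (this is not the Escript resistivity code; the operator, data and regularization weight are placeholders). In the real application the gradient handed to the optimizer would be assembled from the adjoint-state solution rather than written in closed form.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        G = rng.standard_normal((80, 20))                 # stand-in linear forward operator
        m_true = rng.standard_normal(20)                  # "true" model
        d = G @ m_true + 0.01 * rng.standard_normal(80)   # synthetic data
        alpha = 1e-2                                      # Tikhonov-style regularization weight

        def cost_and_grad(m):
            r = G @ m - d
            cost = 0.5 * (r @ r) + 0.5 * alpha * (m @ m)
            grad = G.T @ r + alpha * m                    # would come from the adjoint-state solve
            return cost, grad

        res = minimize(cost_and_grad, x0=np.zeros(20), jac=True, method="L-BFGS-B")
        print(res.success, np.linalg.norm(res.x - m_true))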

  8. Adjoint problem in duct acoustics and its reciprocity to forward problem by the Time Domain Wave Packet method

    NASA Astrophysics Data System (ADS)

    Kocaogul, Ibrahim; Hu, Fang; Li, Xiaodong

    2014-03-01

    Radiation of acoustic waves at all frequencies can be obtained by the Time Domain Wave Packet (TDWP) method in a single time domain computation. Another benefit of the TDWP method is that it makes possible the separation of acoustic and instability waves in the shear flow. The TDWP method is also particularly useful for computations in ducted or waveguide environments where incident wave modes can be imposed cleanly without a potentially long transient period. The adjoint equations for the linearized Euler equations are formulated in Cartesian coordinates. Analytical solutions of the adjoint equations are derived using Green's functions in 2D and 3D. The derivation of reciprocal relations is presented for closed and open ducts. The adjoint equations are then solved numerically in reversed time by the TDWP method. The reciprocal relation between the duct mode amplitudes and far-field point sources in the presence of the exhaust shear flow is computed and confirmed numerically. Applications of the adjoint problem to closed and open ducts are also presented.

  9. Baryogenesis via leptogenesis in adjoint SU(5)

    SciTech Connect

    Blanchet, Steve; Fileviez Perez, Pavel E-mail: fileviez@physics.wisc.edu

    2008-08-15

    The possibility of explaining the baryon asymmetry in the Universe through the leptogenesis mechanism in the context of adjoint SU(5) is investigated. In this model neutrino masses are generated through the type I and type III seesaw mechanisms, and the field responsible for the type III seesaw, called ρ_3, generates the B-L asymmetry needed to satisfy the observed value of the baryon asymmetry in the Universe. We find that the CP asymmetry originates only from the vertex correction, since the self-energy contribution is not present. When neutrino masses have a normal hierarchy, successful leptogenesis is possible for 10^11 GeV ~

  10. Hybrid algorithm: A cost efficient solution for ONU placement in Fiber-Wireless (FiWi) network

    NASA Astrophysics Data System (ADS)

    Bhatt, Uma Rathore; Chouhan, Nitin; Upadhyay, Raksha

    2015-03-01

    Fiber-Wireless (FiWi) network is a promising access technology as it integrates the technical merits of optical and wireless access networks. FiWi provides the large bandwidth and high stability of optical networks and the lower cost of wireless networks. Therefore, FiWi allows users to access broadband services in an "anywhere-anytime" way. One of the key issues in FiWi networks is the deployment cost, which depends on the number of ONUs in the network. Therefore optimal placement of ONUs is desirable to design a cost-effective network. In this paper, we propose an algorithm for optimal placement of ONUs. First we place an ONU in the center of each grid, then we form a set of wireless routers associated with each ONU according to wireless hop number. The number of ONUs is minimized in such a way that all the wireless routers can communicate with at least one of the ONUs. The number of ONUs in the network is further reduced by using a genetic algorithm. The effectiveness of the proposed algorithm is tested by considering Internet traffic as well as peer-to-peer (p2p) traffic in the network, which is a current need. Simulation results show that the proposed algorithm is better than existing algorithms in minimizing the number of ONUs in the network for both types of traffic. Hence the proposed algorithm offers a cost-effective solution for designing FiWi networks.

  11. Exact and approximate Fourier rebinning algorithms for the solution of the data truncation problem in 3-D PET.

    PubMed

    Bouallègue, Fayçal Ben; Crouzet, Jean-François; Comtat, Claude; Fourcade, Marjolaine; Mohammadi, Bijan; Mariano-Goulart, Denis

    2007-07-01

    This paper presents an extended 3-D exact rebinning formula in the Fourier space that leads to an iterative reprojection algorithm (iterative FOREPROJ), which enables the estimation of unmeasured oblique projection data on the basis of the whole set of measured data. In first approximation, this analytical formula also leads to an extended Fourier rebinning equation that is the basis for an approximate reprojection algorithm (extended FORE). These algorithms were evaluated on numerically simulated 3-D positron emission tomography (PET) data for the solution of the truncation problem, i.e., the estimation of the missing portions in the oblique projection data, before the application of algorithms that require complete projection data such as some rebinning methods (FOREX) or 3-D reconstruction algorithms (3DRP or direct Fourier methods). By taking advantage of all the 3-D data statistics, the iterative FOREPROJ reprojection provides a reliable alternative to the classical FOREPROJ method, which only exploits the low-statistics nonoblique data. It significantly improves the quality of the external reconstructed slices without loss of spatial resolution. As for the approximate extended FORE algorithm, it clearly exhibits limitations due to axial interpolations, but will require clinical studies with more realistic measured data in order to decide on its pertinence. PMID:17649913

  12. Investigation of the Solution Space of Marine Controlled-Source Electromagnetic Inversion Problems By Using a Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Hunziker, J.; Thorbecke, J.; Slob, E. C.

    2014-12-01

    Commonly, electromagnetic measurements for exploring and monitoring hydrocarbon reservoirs are inverted for the subsurface conductivity distribution by minimizing the difference between the actual data and a forward modeled dataset. The convergence of the inversion process to the correct solution strongly depends on the shape of the solution space. Since this is a non-linear problem, there exist a multitude of minima of which only the global one provides the correct conductivity values. To easily find the global minimum we desire it to have a broad cone of attraction, while it should also feature a very narrow bottom in order to obtain the subsurface conductivity with high resolution. In this study, we aim to determine which combination of input data corresponds to a favorable shape of the solution space. Since the solution space is N-dimensional, with N being the number of unknown subsurface parameters, plotting it is out of the question. In our approach, we use a genetic algorithm (Goldberg, 1989) to probe the solution space. Such algorithms have the advantage that every run of the same problem will end up at a different solution. Most of these solutions are expected to lie close to the global minimum. A situation where only few runs end up in the global minimum indicates that the solution space consists of a lot of local minima or that the cone of attraction of the global minimum is small. If a lot of runs end up with a similar data misfit but with a large spread of the subsurface medium parameters in one or more directions, it can be concluded that the chosen data input is not sensitive with respect to those directions. Compared to the study of Hunziker et al. 2014, we also allow inversion for subsurface boundaries and include more combinations of input datasets. The results so far suggest that it is essential to include the magnetic field in the inversion process in order to find the anisotropic conductivity values. References: Goldberg, D. E., 1989. Genetic
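
    A compact real-coded genetic algorithm (an illustrative toy, not the authors' CSEM setup; the misfit function, population size and mutation settings are arbitrary) shows the usage pattern described above: repeated runs on a multimodal misfit land in different minima, and the spread of the returned models probes the shape of the solution space.

        import numpy as np

        def misfit(m):
            """Multimodal stand-in for a data misfit over two 'medium parameters'."""
            return np.sum(m**2 - 3.0 * np.cos(2.0 * np.pi * m) + 3.0)

        def ga_run(seed, pop=40, gens=120, bounds=(-4.0, 4.0), pmut=0.2):
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            P = rng.uniform(lo, hi, size=(pop, 2))
            for _ in range(gens):
                fit = np.array([misfit(ind) for ind in P])
                # Binary tournament selection.
                pairs = rng.integers(0, pop, size=(pop, 2))
                winners = np.where(fit[pairs[:, 0]] < fit[pairs[:, 1]], pairs[:, 0], pairs[:, 1])
                parents = P[winners]
                # Arithmetic crossover followed by Gaussian mutation.
                w = rng.uniform(size=(pop, 1))
                children = w * parents + (1.0 - w) * parents[::-1]
                mutate = rng.uniform(size=(pop, 2)) < pmut
                children = np.clip(children + mutate * rng.normal(0.0, 0.3, size=(pop, 2)), lo, hi)
                children[0] = P[np.argmin(fit)]          # elitism: keep the current best model
                P = children
            fit = np.array([misfit(ind) for ind in P])
            return P[np.argmin(fit)]

        solutions = np.array([ga_run(seed) for seed in range(10)])
        print(solutions.round(2))        # where the individual runs ended up
        print(solutions.std(axis=0))     # large spread in a direction => weak sensitivity there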

  13. Diffusion Acceleration Schemes for Self-Adjoint Angular Flux Formulation with a Void Treatment

    SciTech Connect

    Yaqi Wang; Hongbin Zhang; Richard C. Martineau

    2014-02-01

    A Galerkin weak form for the monoenergetic neutron transport equation with a continuous finite element method and discrete ordinate method is developed based on self-adjoint angular flux formulation. This weak form is modified for treating void regions. A consistent diffusion scheme is developed with projection. Correction terms of the diffusion scheme are derived to reproduce the transport scalar flux. A source iteration that decouples the solution of all directions with both linear and nonlinear diffusion accelerations is developed and demonstrated. One-dimensional Fourier analysis is conducted to demonstrate the stability of the linear and nonlinear diffusion accelerations. Numerical results of these schemes are presented.

  14. Receptivity in parallel flows: An adjoint approach

    NASA Technical Reports Server (NTRS)

    Hill, D. Christopher

    1993-01-01

    Linear receptivity studies in parallel flows are aimed at understanding how external forcing couples to the natural unstable motions which a flow can support. The vibrating ribbon problem models the original Schubauer and Skramstad boundary layer experiment and represents the classic boundary layer receptivity problem. The process by which disturbances are initiated in convectively-unstable jets and shear layers has also received attention. Gaster was the first to handle the boundary layer analysis with the recognition that spatial modes, rather than temporal modes, were relevant when studying convectively-unstable flows that are driven by a time-harmonic source. The amplitude of the least stable spatial mode, far downstream of the source, is related to the source strength by a coupling coefficient. The determination of this coefficient is at the heart of this type of linear receptivity study. The first objective of the present study was to determine whether the various wave number derivative factors, appearing in the coupling coefficients for linear receptivity problems, could be reexpressed in a simpler form involving adjoint eigensolutions. Secondly, it was hoped that the general nature of this simplification could be shown; indeed, a rather elegant characterization of the receptivity properties of spatial instabilities does emerge. The analysis is quite distinct from the usual Fourier-inversion procedures, although a detailed knowledge of the spectrum of the Orr-Sommerfeld equation is still required. Since the cylinder wake analysis proved very useful in addressing control considerations, the final objective was to provide a foundation upon which boundary layer control theory may be developed.

  15. Adjoint simulation of stream depletion due to aquifer pumping.

    PubMed

    Neupauer, Roseanna M; Griebling, Scott A

    2012-01-01

    If an aquifer is hydraulically connected to an adjacent stream, a pumping well operating in the aquifer will draw some water from aquifer storage and some water from the stream, causing stream depletion. Several analytical, semi-analytical, and numerical approaches have been developed to estimate stream depletion due to pumping. These approaches are effective if the well location is known. If a new well is to be installed, it may be desirable to install the well at a location where stream depletion is minimal. If several possible locations are considered for the location of a new well, stream depletion would have to be estimated for all possible well locations, which can be computationally inefficient. The adjoint approach for estimating stream depletion is a more efficient alternative because with one simulation of the adjoint model, stream depletion can be estimated for pumping at a well at any location. We derive the adjoint equations for a coupled system with a confined aquifer, an overlying unconfined aquifer, and a river that is hydraulically connected to the unconfined aquifer. We assume that the stage in the river is known, and is independent of the stream depletion, consistent with the assumptions of the MODFLOW river package. We describe how the adjoint equations can be solved using MODFLOW. In an illustrative example, we show that for this scenario, the adjoint approach is as accurate as standard forward numerical simulation methods, and requires substantially less computational effort. PMID:22182421

  16. Adjoint-Based Methodology for Time-Dependent Optimization

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2008-01-01

    This paper presents a discrete adjoint method for a broad class of time-dependent optimization problems. The time-dependent adjoint equations are derived in terms of the discrete residual of an arbitrary finite volume scheme which approximates unsteady conservation law equations. Although only the 2-D unsteady Euler equations are considered in the present analysis, this time-dependent adjoint method is applicable to the 3-D unsteady Reynolds-averaged Navier-Stokes equations with minor modifications. The discrete adjoint operators involving the derivatives of the discrete residual and the cost functional with respect to the flow variables are computed using a complex-variable approach, which provides discrete consistency and drastically reduces the implementation and debugging cycle. The implementation of the time-dependent adjoint method is validated by comparing the sensitivity derivative with that obtained by forward mode differentiation. Our numerical results show that O(10) optimization iterations of the steepest descent method are needed to reduce the objective functional by 3-6 orders of magnitude for test problems considered.
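
    The complex-variable ('complex-step') technique mentioned above for building discrete adjoint operators can be demonstrated on any real-analytic function (the function below is a stand-in, not the flow residual); unlike finite differences it suffers no subtractive cancellation, so extremely small steps are safe.

        import numpy as np

        def f(x):
            """Any real-analytic scalar function written with complex-safe operations."""
            return np.exp(x) * np.sin(x) / (1.0 + x**2)

        x0, h = 0.7, 1e-30
        exact = (np.exp(x0) * (np.sin(x0) + np.cos(x0)) * (1.0 + x0**2)
                 - np.exp(x0) * np.sin(x0) * 2.0 * x0) / (1.0 + x0**2)**2

        cs = np.imag(f(x0 + 1j * h)) / h              # complex-step derivative
        fd = (f(x0 + 1e-7) - f(x0 - 1e-7)) / 2e-7     # central finite difference

        print(abs(cs - exact), abs(fd - exact))       # complex step matches to machine precision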

  17. Efficient Algorithm for Locating and Sizing Series Compensation Devices in Large Transmission Grids: Solutions and Applications (PART II)

    SciTech Connect

    Frolov, Vladimir; Backhaus, Scott N.; Chertkov, Michael

    2014-01-14

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~2700 nodes and ~3300 lines). The results on the 30-bus network are used to study the general properties of the solutions including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics that leverages sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  18. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: II. Solutions and applications

    SciTech Connect

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~ 2700 nodes and ~ 3300 lines). The results on the 30-bus network are used to study the general properties of the solutions including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics that leverages sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  19. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: II. Solutions and applications

    NASA Astrophysics Data System (ADS)

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    In a companion manuscript (Frolov et al 2014 New J. Phys. 16 art. no.) , we developed a novel optimization method for the placement, sizing, and operation of flexible alternating current transmission system (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide series compensation (SC) via modification of line inductance. In this sequel manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (˜2700 nodes and ˜3300 lines). The results from the 30-bus network are used to study the general properties of the solutions, including nonlocality and sparsity. The Polish grid is used to demonstrate the computational efficiency of the heuristics that leverage sequential linearization of power flow constraints, and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, we can use the algorithm to solve a Polish transmission grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (i) uniform load growth, (ii) multiple overloaded configurations, and (iii) sequential generator retirements.

  20. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: II. Solutions and applications

    DOE PAGES Beta

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~ 2700 nodes and ~ 3300 lines). The results on the 30-bus network are used to study the general properties of the solutions including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics that leverages sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  1. Marble Algorithm: a solution to estimating ecological niches from presence-only records

    PubMed Central

    Qiao, Huijie; Lin, Congtian; Jiang, Zhigang; Ji, Liqiang

    2015-01-01

    We describe an algorithm that helps to predict potential distributional areas for species using presence-only records. The Marble Algorithm (MA) is a density-based clustering program based on Hutchinson’s concept of ecological niches as multidimensional hypervolumes in environmental space. The algorithm characterizes this niche space using the density-based spatial clustering of applications with noise (DBSCAN) algorithm. When MA is provided with a set of occurrence points in environmental space, the algorithm determines two parameters that allow the points to be grouped into several clusters. These clusters are used as reference sets describing the ecological niche, which can then be mapped onto geographic space and used as the potential distribution of the species. We used both virtual species and ten empirical datasets to compare MA with other distribution-modeling tools, including Bioclimate Analysis and Prediction System, Environmental Niche Factor Analysis, the Genetic Algorithm for Rule-set Production, Maximum Entropy Modeling, Artificial Neural Networks, Climate Space Models, Classification Tree Analysis, Generalised Additive Models, Generalised Boosted Models, Generalised Linear Models, Multivariate Adaptive Regression Splines and Random Forests. Results indicate that MA predicts potential distributional areas with high accuracy, moderate robustness, and above-average transferability on all datasets, particularly when dealing with small numbers of occurrences. PMID:26387771
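
    For readers unfamiliar with the clustering step, the sketch below applies scikit-learn's DBSCAN to points in a standardized environmental space; the synthetic data and the eps and min_samples values are placeholders, and the Marble Algorithm's own parameter-selection logic is not reproduced.

        import numpy as np
        from sklearn.cluster import DBSCAN
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(4)
        # Fake occurrence records in a 3-variable environmental space
        # (say temperature, precipitation, elevation), in two loose groups.
        env = np.vstack([rng.normal([15.0, 1200.0, 300.0], [1.5, 80.0, 40.0], size=(60, 3)),
                         rng.normal([22.0, 600.0, 900.0], [1.5, 60.0, 60.0], size=(40, 3))])

        X = StandardScaler().fit_transform(env)           # put the variables on a common scale
        labels = DBSCAN(eps=0.6, min_samples=5).fit_predict(X)

        print(np.unique(labels))                          # -1 marks points treated as noise
        # Each non-noise cluster is a reference set in niche space; projecting the cluster
        # extents back to geographic space gives a potential-distribution estimate.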

  2. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    PubMed

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components in the vicinity of the perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach. PMID:24978258

  3. Sonic Boom Mitigation Through Aircraft Design and Adjoint Methodology

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Siriam K.; Diskin, Boris; Nielsen, Eric J.

    2012-01-01

    This paper presents a novel approach to design of the supersonic aircraft outer mold line (OML) by optimizing the A-weighted loudness of sonic boom signature predicted on the ground. The optimization process uses the sensitivity information obtained by coupling the discrete adjoint formulations for the augmented Burgers Equation and Computational Fluid Dynamics (CFD) equations. This coupled formulation links the loudness of the ground boom signature to the aircraft geometry thus allowing efficient shape optimization for the purpose of minimizing the impact of loudness. The accuracy of the adjoint-based sensitivities is verified against sensitivities obtained using an independent complex-variable approach. The adjoint based optimization methodology is applied to a configuration previously optimized using alternative state of the art optimization methods and produces additional loudness reduction. The results of the optimizations are reported and discussed.

  4. Design of an FPGA-Based Algorithm for Real-Time Solutions of Statistics-Based Positioning.

    PubMed

    Dewitt, Don; Johnson-Williams, Nathan G; Miyaoka, Robert S; Li, Xiaoli; Lockhart, Cate; Lewellen, Tom K; Hauck, Scott

    2010-02-01

    We report on the implementation of an algorithm and hardware platform to allow real-time processing of the statistics-based positioning (SBP) method for continuous miniature crystal element (cMiCE) detectors. The SBP method allows an intrinsic spatial resolution of ~1.6 mm FWHM to be achieved using our cMiCE design. Previous SBP solutions have required a postprocessing procedure due to the computation and memory intensive nature of SBP. This new implementation takes advantage of a combination of algebraic simplifications, conversion to fixed-point math, and a hierarchical search technique to greatly accelerate the algorithm. For the presented seven-stage, 127 × 127 bin LUT implementation, these algorithm improvements result in a reduction from >7 × 10^6 floating-point operations per event for an exhaustive search to <5 × 10^3 integer operations per event. Simulations show nearly identical FWHM positioning resolution for this accelerated SBP solution, and positioning differences of <0.1 mm from the exhaustive search solution. A pipelined field programmable gate array (FPGA) implementation of this optimized algorithm is able to process events in excess of 250 K events per second, which is greater than the maximum expected coincidence rate for an individual detector. In contrast with all detectors being processed at a centralized host, as in the current system, a separate FPGA is available at each detector, thus dividing the computational load. These methods allow SBP results to be calculated in real-time and to be presented to the image generation components in real-time. A hardware implementation has been developed using a commercially available prototype board. PMID:21197135
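
    The hierarchical search idea can be sketched generically (this is not the cMiCE likelihood nor the fixed-point FPGA pipeline; the score function and branching factor are invented for illustration): instead of exhaustively scoring all 127 × 127 LUT bins, each stage scores a small grid of candidates and recurses on the best one, reducing the work from roughly 16,000 evaluations to a few dozen.

        import numpy as np

        N = 127
        true_pos = (83, 41)

        def score(i, j):
            """Stand-in for the SBP likelihood of an event having occurred at LUT bin (i, j)."""
            return -((i - true_pos[0])**2 + (j - true_pos[1])**2)

        def hierarchical_search(n=N, branch=3):
            """Coarse-to-fine search: halve the window around the best candidate at each stage."""
            ci, cj, half = n // 2, n // 2, n // 2
            evals = 0
            while half >= 1:
                best = None
                for di in np.linspace(-half, half, branch, dtype=int):
                    for dj in np.linspace(-half, half, branch, dtype=int):
                        i = int(np.clip(ci + di, 0, n - 1))
                        j = int(np.clip(cj + dj, 0, n - 1))
                        s = score(i, j)
                        evals += 1
                        if best is None or s > best[0]:
                            best = (s, i, j)
                _, ci, cj = best
                half //= 2
            return (ci, cj), evals

        pos, evals = hierarchical_search()
        print(pos, evals, "evaluations vs", N * N, "for an exhaustive search")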

  5. Design of an FPGA-Based Algorithm for Real-Time Solutions of Statistics-Based Positioning

    PubMed Central

    DeWitt, Don; Johnson-Williams, Nathan G.; Miyaoka, Robert S.; Li, Xiaoli; Lockhart, Cate; Lewellen, Tom K.; Hauck, Scott

    2010-01-01

    We report on the implementation of an algorithm and hardware platform to allow real-time processing of the statistics-based positioning (SBP) method for continuous miniature crystal element (cMiCE) detectors. The SBP method allows an intrinsic spatial resolution of ~1.6 mm FWHM to be achieved using our cMiCE design. Previous SBP solutions have required a postprocessing procedure due to the computation and memory intensive nature of SBP. This new implementation takes advantage of a combination of algebraic simplifications, conversion to fixed-point math, and a hierarchical search technique to greatly accelerate the algorithm. For the presented seven-stage, 127 × 127 bin LUT implementation, these algorithm improvements result in a reduction from >7 × 10^6 floating-point operations per event for an exhaustive search to <5 × 10^3 integer operations per event. Simulations show nearly identical FWHM positioning resolution for this accelerated SBP solution, and positioning differences of <0.1 mm from the exhaustive search solution. A pipelined field programmable gate array (FPGA) implementation of this optimized algorithm is able to process events in excess of 250 K events per second, which is greater than the maximum expected coincidence rate for an individual detector. In contrast with all detectors being processed at a centralized host, as in the current system, a separate FPGA is available at each detector, thus dividing the computational load. These methods allow SBP results to be calculated in real-time and to be presented to the image generation components in real-time. A hardware implementation has been developed using a commercially available prototype board. PMID:21197135

  6. Solution of the hydrodynamic device model using high-order non-oscillatory shock capturing algorithms

    NASA Technical Reports Server (NTRS)

    Fatemi, Emad; Jerome, Joseph; Osher, Stanley

    1989-01-01

    A micron n+ - n - n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially non-oscillatory, were successfully applied in other contexts to model the flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws in fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.

  7. Reconstruction of ocean circulation from sparse data using the adjoint method: LGM and the present

    NASA Astrophysics Data System (ADS)

    Kurahashi-Nakamura, T.; Losch, M. J.; Paul, A.; Mulitza, S.; Schulz, M.

    2010-12-01

    Understanding the behavior of the Earth's climate system under different conditions in the past is the basis for more robust projections of future climate. It is thought that the ocean circulation plays a very important role in the climate system, because it can greatly affect climate by dynamic-thermodynamic (as a medium of heat transport) and biogeochemical processes (by affecting the global carbon cycle). In this context, studying the period of the Last Glacial Maximum (LGM) is particularly promising, as it represents a climate state that is very different from today. Furthermore the LGM, compared to other paleoperiods, is characterized by a relatively good paleo-data coverage. Unfortunately, the ocean circulation during the LGM is still uncertain, with a range of climate models estimating both a stronger and a weaker formation rate of North Atlantic Deep Water (NADW) as compared to the present rate. Here, we present a project aiming at reducing this uncertainty by combining proxy data with a numerical ocean model using variational techniques. Our approach, the so-called adjoint method, employs a quadratic cost function of model-data differences weighted by their prior error estimates. We seek an optimal state estimate at the global minimum of the cost function by varying the independent control variables such as initial conditions (e.g. temperature), boundary conditions (e.g. surface winds, heat flux), or internal parameters (e.g. vertical diffusivity). The adjoint or dual model computes the gradient of the cost function with respect to these control variables and thus provides the information required by gradient descent algorithms. The gradients themselves provide valuable information about the sensitivity of the system to perturbations in the control variables. We use the Massachusetts Institute of Technology ocean general circulation model (MITgcm) with a cubed-sphere grid system that avoids converging grid lines and pole singularities. This model code is

  8. Application of variational principles and adjoint integrating factors for constructing numerical GFD models

    NASA Astrophysics Data System (ADS)

    Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey

    2015-04-01

    The proposed method is demonstrated on the example of hydrothermodynamics and atmospheric chemistry models [1,2]. In the development of the existing methods for constructing numerical schemes possessing the properties of total approximation for operators of multiscale process models, we have developed a new variational technique, which uses the concept of adjoint integrating factors. The technique is as follows. First, a basic functional of the variational principle (the integral identity that unites the model equations, initial and boundary conditions) is transformed using Lagrange's identity and the second Green's formula. As a result, the action of the operators of the main problem in the space of state functions is transferred to the adjoint operators defined in the space of sufficiently smooth adjoint functions. By the choice of adjoint functions the order of the derivatives becomes lower by one than in the original equations. We obtain a set of new balance relationships that take into account the sources and boundary conditions. Next, we introduce the decomposition of the model domain into a set of finite volumes. For multi-dimensional non-stationary problems, this technique is applied in the framework of the variational principle and schemes of decomposition and splitting on the set of physical processes for each coordinate direction successively at each time step. For each direction within the finite volume, the analytical solutions of one-dimensional homogeneous adjoint equations are constructed. In this case, the solutions of adjoint equations serve as integrating factors. The result is a set of hybrid discrete-analytical schemes. They have the properties of stability, approximation and unconditional monotonicity for convection-diffusion operators. These schemes are discrete in time and analytic in the spatial variables. They are exact in the case of piecewise-constant coefficients within the finite volume and along the coordinate lines of the grid area in each

  9. Sensitivity analysis of numerically-simulated convective storms using direct and adjoint methods

    SciTech Connect

    Park, S.K.; Droegemeier, K.K.; Bischof, C.; Knauff, T.

    1994-06-01

    The goal of this project is to evaluate the sensitivity of numerically modeled convective storms to control parameters such as the initial conditions, boundary conditions, environment, and various physical and computational parameters. In other words, the authors seek the gradient of the solution vector with respect to specified parameters. One can use two approaches to accomplish this task. In the first or so-called brute force method, one uses a fully nonlinear model to generate a control forecast starting from a specified initial state. Then, a number of other forecasts are made in which chosen parameters (e.g., initial conditions) are systematically varied. The obvious drawback is that a large number of full model predictions are needed to examine the effects of only a single parameter. The authors describe herein an alternative, essentially automated method (ADIFOR, or Automatic DIfferentiation of FORtran) for obtaining the solution gradient that bypasses the adjoint altogether yet provides even more information about the gradient. (ADIFOR, like the adjoint technique, is constrained by the linearity assumption.) Applied to a 1-D moist cloud model, the authors assess the utility of ADIFOR relative to the brute force approach and evaluate the validity of the tangent linear approximation in the context of deep convection.
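
    A deliberately simple sketch of the 'brute force' sensitivity approach that the authors contrast with automatic differentiation (the stand-in model and parameters are invented): each parameter requires its own perturbed model run, so the cost grows linearly with the number of control parameters, which is the scaling that tools such as ADIFOR avoid by differentiating the source code.

        import numpy as np

        def model(p):
            """Stand-in for a full nonlinear forecast run; p holds the control parameters."""
            return np.array([p[0] * np.exp(-p[1]), np.sin(p[0] * p[2]), p[1]**2 + p[2]])

        def brute_force_jacobian(p, eps=1e-6):
            """One control run plus one perturbed run per parameter (len(p) + 1 runs in total)."""
            base = model(p)
            runs = 1
            J = np.zeros((base.size, p.size))
            for k in range(p.size):
                dp = p.copy()
                dp[k] += eps
                J[:, k] = (model(dp) - base) / eps
                runs += 1
            return J, runs

        p = np.array([1.0, 0.5, 2.0])
        J, runs = brute_force_jacobian(p)
        print(runs, "model runs")
        print(J.round(4))      # finite-difference sensitivities of outputs w.r.t. parameters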

  10. Reduction in CS: A (Mostly) Quantitative Analysis of Reductive Solutions to Algorithmic Problems

    ERIC Educational Resources Information Center

    Armoni, Michal

    2009-01-01

    Reduction is a problem-solving strategy, relevant to various areas of computer science, and strongly connected to abstraction: a reductive solution necessitates establishing a connection among problems that may seem totally disconnected at first sight, and abstracts the solution to the reduced-to problem by encapsulating it as a black box. The…

  11. Fast Time and Space Parallel Algorithms for Solution of Parabolic Partial Differential Equations

    NASA Technical Reports Server (NTRS)

    Fijany, Amir

    1993-01-01

    In this paper, fast time- and space-parallel algorithms for the solution of linear parabolic PDEs are developed. It is shown that the seemingly strictly serial iterations of the time-stepping procedure for the solution of the problem can be completely decoupled.

  12. Self-Adjoint Angular Flux Equation for Coupled Electron-Photon Transport

    SciTech Connect

    Liscum-Powell, J.L.; Lorence, L.J. Jr.; Morel, J.E.; Prinja, A.K.

    1999-07-08

    Recently, Morel and McGhee described an alternate second-order form of the transport equation called the self-adjoint angular flux (SAAF) equation that has the angular flux as its unknown. The SAAF formulation has all the advantages of the traditional even- and odd-parity self-adjoint equations, with the added advantages that it yields the full angular flux when it is numerically solved, it is significantly easier to implement reflective and reflective-like boundary conditions, and in the appropriate form it can be solved in void regions. The SAAF equation has the disadvantage that the angular domain is the full unit sphere and, like the even- and odd-parity form, S_n source iteration cannot be implemented using the standard sweeping algorithm. Also, problems arise in pure scattering media. Morel and McGhee demonstrated the efficacy of the SAAF formulation for neutral particle transport. Here we apply the SAAF formulation to coupled electron-photon transport problems using multigroup cross-sections from the CEPXS code and S_n discretization.

  13. Ocean acoustic tomography from different receiver geometries using the adjoint method.

    PubMed

    Zhao, Xiaofeng; Wang, Dongxiao

    2015-12-01

    In this paper, an ocean acoustic tomography inversion using the adjoint method in a shallow water environment is presented. The propagation model used is an implicit Crank-Nicolson finite difference parabolic equation solver with a non-local boundary condition. Unlike previous matched-field processing works using the complex pressure fields as the observations, here, the observed signals are the transmission losses. Based on the code tests of the tangent linear model, the adjoint model, and the gradient, the optimization problem is solved by a gradient-based minimization algorithm. The inversions are performed in numerical simulations for two geometries: one in which hydrophones are sparsely distributed in the horizontal direction, and another in which the hydrophones are distributed vertically. The spacing in both cases is well beyond the half-wavelength threshold at which beamforming could be used. To deal with the ill-posedness of the inverse problem, a linear differential regularization operator of the sound-speed profile is used to smooth the inversion results. The L-curve criterion is adopted to select the regularization parameter, and the optimal value can be easily determined at the elbow of the logarithms of the residual norm of the measured-predicted fields and the norm of the penalty function. PMID:26723329
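
    The L-curve selection step can be illustrated on a generic linear Tikhonov problem (this is not the parabolic-equation tomography itself; the operator, noise level and regularization form are placeholders): sweep the regularization parameter, record the log residual norm and log penalty norm, and take the corner, located here crudely as the point of maximum curvature.

        import numpy as np

        rng = np.random.default_rng(6)
        G = rng.standard_normal((100, 40))                    # stand-in forward operator
        m_true = np.sin(np.linspace(0.0, 3.0 * np.pi, 40))    # smooth "sound-speed" perturbation
        d = G @ m_true + 0.05 * rng.standard_normal(100)      # noisy synthetic observations

        lams = np.logspace(-4, 2, 60)
        res_norm, pen_norm = [], []
        for lam in lams:
            # Tikhonov solution m = argmin ||G m - d||^2 + lam^2 ||m||^2.
            m = np.linalg.solve(G.T @ G + lam**2 * np.eye(40), G.T @ d)
            res_norm.append(np.linalg.norm(G @ m - d))
            pen_norm.append(np.linalg.norm(m))

        x = np.log10(np.array(res_norm))
        y = np.log10(np.array(pen_norm))
        # Crude corner detector: maximum curvature of the (log residual, log penalty) curve,
        # ignoring the end points where the one-sided differences are least reliable.
        dx, dy = np.gradient(x), np.gradient(y)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2)**1.5
        best = np.argmax(curvature[2:-2]) + 2
        print("selected regularization parameter:", lams[best])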

  14. Monte Carlo solution methods in a moment-based scale-bridging algorithm for thermal radiative transfer problems: Comparison with Fleck and Cummings

    SciTech Connect

    Park, H.; Densmore, J. D.; Wollaber, A. B.; Knoll, D. A.; Rauenzahn, R. M.

    2013-07-01

    We have developed a moment-based scale-bridging algorithm for thermal radiative transfer problems. The algorithm takes the form of well-known nonlinear-diffusion acceleration which utilizes a low-order (LO) continuum problem to accelerate the solution of a high-order (HO) kinetic problem. The coupled nonlinear equations that form the LO problem are efficiently solved using a preconditioned Jacobian-free Newton-Krylov method. This work demonstrates the applicability of the scale-bridging algorithm with a Monte Carlo HO solver and reports the computational efficiency of the algorithm in comparison to the well-known Fleck-Cummings algorithm. (authors)

  15. The efficiency of geophysical adjoint codes generated by automatic differentiation tools

    NASA Astrophysics Data System (ADS)

    Vlasenko, A. V.; Köhl, A.; Stammer, D.

    2016-02-01

    The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the

  16. Assimilating Remote Ammonia Observations with a Refined Aerosol Thermodynamics Adjoint

    EPA Science Inventory

    Ammonia emissions parameters in North America can be refined in order to improve the evaluation of modeled concentrations against observations. Here, we seek to do so by developing and applying the GEOS-Chem adjoint nested over North America to conduct assimilation of observations...

  17. Adjoint Based Data Assimilation for an Ionospheric Model

    NASA Astrophysics Data System (ADS)

    Rosen, I. G.; Hajj, G. A.; Hajj, G. A.; Pi, X.; Pi, X.; Wang, C.; Wilson, B. D.

    2001-05-01

    The success of ionospheric modeling depends primarily on accurate knowledge of the forces (drivers) which enter into the collisional plasma hydrodynamic equations for the ionosphere and control the ionization as well as other dynamical and chemical processes. These include solar EUV and UV radiation, magnetospheric electric fields, particle precipitation, dynamo electric fields, thermospheric winds, neutral densities, and temperature. The determination of these model parameters from observational data is known as data assimilation. The data assimilation problem is formulated as a problem of minimizing a nonlinear functional, J (typically least squares), under a system of constraints consisting primarily of the underlying model equations. The performance index, J, can, in principle, be minimized using standard techniques such as Newton or steepest-descent methods. There are, however, major technical challenges in practice. Since J is highly nonlinear and each evaluation of the functional requires the integration of the ionospheric model equations, computing the gradient vector of J with respect to the unknown parameters is time consuming. This problem is solved by use of the adjoint method. The ionospheric model used in this effort is for mid- and low-latitudes and consists of solving the continuity and momentum partial differential equations in four dimensions (three spatial dimensions and time) to compute the O+ density in the ionosphere and plasmasphere. We have developed codes for solving the forward model on a fixed grid. This makes it relatively straightforward to apply the adjoint method for computing gradients when doing nonlinear least squares based data assimilation. Because of the significant cost (in computational effort and CPU time) involved in performing a forward integration of the underlying 3-D model at any reasonable grid resolution, the use of the adjoint method for computing the gradients is indispensable. The adjoint method provides an elegant
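
    The core mechanics of the adjoint gradient computation described here can be shown on a deliberately tiny example. The sketch below (illustrative only, not the ionospheric model) propagates a linear toy model forward, evaluates a least-squares misfit J against synthetic observations, and obtains dJ/dx0 with a single backward (adjoint) sweep; a one-sided finite difference confirms the gradient.

        import numpy as np

        rng = np.random.default_rng(0)
        n, steps = 4, 5
        M = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))   # toy linear model
        obs = [rng.standard_normal(n) for _ in range(steps)]        # synthetic observations

        def cost_and_adjoint_gradient(x0):
            """J(x0) = 0.5 * sum_k ||x_k - y_k||^2, gradient by a backward (adjoint) sweep."""
            traj, x = [], x0.copy()
            for _ in range(steps):
                x = M @ x
                traj.append(x.copy())
            J = 0.5 * sum(np.sum((xk - yk) ** 2) for xk, yk in zip(traj, obs))
            lam = np.zeros(n)                      # adjoint variable
            for k in reversed(range(steps)):
                lam = M.T @ (lam + (traj[k] - obs[k]))
            return J, lam                          # lam is dJ/dx0

        x0 = rng.standard_normal(n)
        J, g = cost_and_adjoint_gradient(x0)
        eps, e0 = 1e-6, np.eye(n)[0]
        # adjoint gradient component vs. finite-difference check
        print(g[0], (cost_and_adjoint_gradient(x0 + eps * e0)[0] - J) / eps)

    The point of the adjoint sweep is that the full gradient costs roughly one extra model integration, independent of the number of unknown parameters.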

  18. A Nested Genetic Algorithm for the Numerical Solution of Non-Linear Coupled Equations in Water Quality Modeling

    NASA Astrophysics Data System (ADS)

    García, Hermes A.; Guerrero-Bolaño, Francisco J.; Obregón-Neira, Nelson

    2010-05-01

    Due to both mathematical tractability and efficiency in the use of computational resources, it is very common in the realm of numerical modeling in hydro-engineering to apply regular linearization techniques to the nonlinear partial differential equations obtained in environmental flow studies. Sometimes this simplification is made together with the omission of nonlinear terms in such equations, which in turn diminishes the performance of any implemented approach. This is the case, for example, for contaminant transport modeling in streams. Nowadays, QUAL2k, a traditional and one of the most commonly used water quality models, preserves its original algorithm, which omits nonlinear terms through linearization techniques, in spite of continuous algorithmic development and enhancement of computer power. For that reason, the main objective of this research was to generate a flexible tool for non-linear water quality modeling. The solution implemented here was based on two genetic algorithms, used in a nested way in order to find two different types of solution sets: the first set is composed of the concentrations of the physical-chemical variables used in the modeling approach (16 variables), which satisfy the non-linear equation system. The second set is the typical solution of the inverse problem: the parameter and constant values for the model when it is applied to a particular stream. Of the total of sixteen (16) variables, thirteen (13) were modeled using non-linear coupled equation systems and three (3) were modeled independently. The model used here required fifty (50) parameters. The nested genetic algorithm used for the numerical solution of the non-linear equation system proved to be a flexible tool for handling the intrinsic non-linearity that emerges from the interactions occurring between the multiple variables involved in water quality studies. However, because there is a strong data limitation in
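
    The idea of letting a genetic algorithm search directly for concentrations that satisfy a nonlinear coupled system (rather than linearizing it) can be sketched compactly. The example below is not the 16-variable water-quality model; it uses a two-equation stand-in and evolves a real-coded population to minimize the squared residual norm of the system.

        import numpy as np

        rng = np.random.default_rng(1)

        def residual_norm(c):
            # Illustrative nonlinear coupled system (not the QUAL2k-type equations):
            #   c0 + c0*c1 - 3 = 0 ,   c1 + exp(-c0) - 1.5 = 0
            r1 = c[0] + c[0] * c[1] - 3.0
            r2 = c[1] + np.exp(-c[0]) - 1.5
            return r1 * r1 + r2 * r2

        def genetic_minimize(f, dim, lo, hi, pop=60, gens=200, pm=0.2):
            x = rng.uniform(lo, hi, size=(pop, dim))
            for _ in range(gens):
                fit = np.array([f(xi) for xi in x])
                order = np.argsort(fit)
                parents = x[order[: pop // 2]]                  # truncation selection
                idx = rng.integers(0, len(parents), size=(pop, 2))
                w = rng.random((pop, dim))
                children = w * parents[idx[:, 0]] + (1 - w) * parents[idx[:, 1]]  # blend crossover
                mutate = rng.random((pop, dim)) < pm
                children[mutate] += rng.normal(0.0, 0.1 * (hi - lo), size=mutate.sum())
                x = np.clip(children, lo, hi)
                x[0] = parents[0]                               # elitism: keep the current best
            fit = np.array([f(xi) for xi in x])
            return x[np.argmin(fit)], fit.min()

        best, err = genetic_minimize(residual_norm, dim=2, lo=-2.0, hi=4.0)
        print(best, err)   # residual norm close to zero at the root

    In a nested scheme, an outer GA of this kind would additionally evolve the model parameters, with each outer fitness evaluation running an inner search like the one above.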

  19. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    required by human intervention and analysis. Specifying an objective functional that quantifies the misfit between the simulation outcome and known constraints and then minimizing it through numerical optimization can serve as an automated technique for parameter identification. As suggested by the similarity in formulation, the numerical algorithm is closely related to the one used for goal-oriented error estimation. One common point is that the so-called adjoint equation needs to be solved numerically. We will outline the derivation and implementation of these methods and discuss some of their pros and cons, supported by numerical results.

  20. Adjoint sensitivity method for the downward continuation of the Earth's geomagnetic field through an electrically conducting mantle

    NASA Astrophysics Data System (ADS)

    Hagedoorn, J. M.; Martinec, Z.

    2012-12-01

    Recent models of the Earth's geomagnetic field at the core-mantle boundary (CMB) are based on satellite measurements and/or observatory data, which are mostly harmonically downward continued to the CMB. One aim of the upcoming satellite mission Swarm is to determine the three-dimensional distribution of electric conductivity of the Earth's mantle. Against this background, we developed an adjoint sensitivity downward-continuation approach that is capable of considering three-dimensional electric conductivity distributions. Martinec (Geophys. J. Int., 136, 1999) developed a time-domain spectral-finite element approach for the forward modelling of vector electromagnetic induction data as measured at ground-based magnetic observatories or by satellites. We design a new method to compute the sensitivity of the magnetic induction data to a magnetic field prescribed at the core-mantle boundary, which we term the adjoint sensitivity method. The forward and adjoint initial boundary-value problems, both solved in the time domain, are identical, except for the specification of the prescribed boundary conditions. The respective boundary-value data are the measured X magnetic component for the forward method and the difference between the measured and predicted Z magnetic component for the adjoint method. The squares of the differences in the Z magnetic component, summed over the time of observation and all spatial positions of the observations, determine the misfit. Then the sensitivities of the observed data, i.e. the partial derivatives of the misfit with respect to the parameters characterizing the magnetic field at the core-mantle boundary, are obtained by the surface integral over the core-mantle boundary of the product of the adjoint solution with the time-dependent functions describing the time variability of the magnetic field at the core-mantle boundary, integrated over the time of observation. The time variability of boundary data is represented in terms of locally supported B

  1. Further improved algorithm for the solution of the nonlinear Poisson equation in semiconductor devices

    NASA Astrophysics Data System (ADS)

    Ouwerling, G. J. L.

    1989-12-01

    This paper gives a concise overview of some existing methods for the solution of the nonlinear Poisson equation in semiconductors. A method for the solution of this equation was recently proposed in this journal by I. D. Mayergoyz [J. Appl. Phys. 59, 195 (1986)]. Soon afterwards, an improved version was described by W. Keller [J. Appl. Phys. 61, 5189 (1987)]. Both methods are classified within the perspective of the existing methods. Moreover, Keller's method is further improved by the introduction of scaled variables and by using red-black ordering to allow for overrelaxation. All advantages of the two methods are maintained. An illustrative example shows an improvement in solution speed of at least a factor of 5.6.
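
    Red-black ordering splits the grid like a checkerboard so that each color can be updated independently and overrelaxation can be applied. The sketch below shows the structural idea for a linear 2-D Poisson model problem with homogeneous Dirichlet boundaries; the semiconductor Poisson equation is nonlinear, so this is only the pattern, not the improved method itself.

        import numpy as np

        def sor_red_black(f, h, omega=1.8, tol=1e-8, max_iter=10_000):
            """Solve -laplace(u) = f on a square grid, u = 0 on the boundary, by red-black SOR."""
            n = f.shape[0]
            u = np.zeros_like(f)
            i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            interior = (i > 0) & (i < n - 1) & (j > 0) & (j < n - 1)
            for _ in range(max_iter):
                for color in (0, 1):                       # red sweep, then black sweep
                    mask = interior & ((i + j) % 2 == color)
                    gs = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                                 + np.roll(u, 1, 1) + np.roll(u, -1, 1) + h * h * f)
                    u[mask] = (1.0 - omega) * u[mask] + omega * gs[mask]
                res = np.zeros_like(u)
                res[1:-1, 1:-1] = f[1:-1, 1:-1] - (4.0 * u[1:-1, 1:-1] - u[:-2, 1:-1]
                                                   - u[2:, 1:-1] - u[1:-1, :-2] - u[1:-1, 2:]) / (h * h)
                if np.abs(res).max() < tol:
                    break
            return u

        n = 65
        u = sor_red_black(np.ones((n, n)), h=1.0 / (n - 1))
        print(u[n // 2, n // 2])   # roughly 0.074 for a unit source on the unit square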

  2. A local anisotropic adaptive algorithm for the solution of low-Mach transient combustion problems

    NASA Astrophysics Data System (ADS)

    Carpio, Jaime; Prieto, Juan Luis; Vera, Marcos

    2016-02-01

    A novel numerical algorithm for the simulation of transient combustion problems at low Mach and moderately high Reynolds numbers is presented. These problems are often characterized by the existence of a large disparity of length and time scales, resulting in the development of directional flow features, such as slender jets, boundary layers, mixing layers, or flame fronts. This makes local anisotropic adaptive techniques quite advantageous computationally. In this work we propose a local anisotropic refinement algorithm using, for the spatial discretization, unstructured triangular elements in a finite element framework. For the time integration, the problem is formulated in the context of semi-Lagrangian schemes, introducing the semi-Lagrange-Galerkin (SLG) technique as a better alternative to the classical semi-Lagrangian (SL) interpolation. The good performance of the numerical algorithm is illustrated by solving a canonical laminar combustion problem: the flame/vortex interaction. First, a premixed methane-air flame/vortex interaction with simplified transport and chemistry description (Test I) is considered. Results are found to be in excellent agreement with those in the literature, proving the superior performance of the SLG scheme when compared with the classical SL technique, and the advantage of using anisotropic adaptation instead of uniform meshes or isotropic mesh refinement. As a more realistic example, we then conduct simulations of non-premixed hydrogen-air flame/vortex interactions (Test II) using a more complex combustion model which involves state-of-the-art transport and chemical kinetics. In addition to the analysis of the numerical features, this second example allows us to perform a satisfactory comparison with experimental visualizations taken from the literature.

  3. Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques

    NASA Astrophysics Data System (ADS)

    Elliott, Louie C.

    This dissertation reports on the application of numerical optimization techniques to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
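
    One of the derivative techniques mentioned, numerical differentiation with complex variables, is simple enough to show in full. The complex-step approximation below is not subject to the subtractive cancellation that limits finite differences, so the step can be made extremely small; the test function is a generic analytic example, not the SOFC cost function.

        import numpy as np

        def complex_step_derivative(f, x, h=1e-30):
            """df/dx ~ Im(f(x + i*h)) / h; no subtraction, so h can be tiny."""
            return np.imag(f(x + 1j * h)) / h

        def forward_difference(f, x, h=1e-6):
            return (f(x + h) - f(x)) / h

        f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)
        x0 = 1.5
        print(complex_step_derivative(f, x0))   # accurate to roughly machine precision
        print(forward_difference(f, x0))        # accuracy limited by cancellation error

    The same idea extends to the sensitivity of a discretized residual with respect to design variables, which is how complex variables are typically used alongside direct and adjoint differentiation.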

  4. Adjoint Tomography of Taiwan Region: From Travel-Time Toward Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Huang, H. H.; Lee, S. J.; Tromp, J.

    2014-12-01

    A complicated tectonic environment such as the Taiwan region can severely modulate seismic waveforms and hamper the discrimination and utilization of later phases. Restricted to the use of only the first arrivals of P- and S-waves, travel-time tomographic models of Taiwan have to date been able to simulate seismic waveforms only up to a frequency of about 0.2 Hz. While this has been sufficient for long-period studies, e.g. source inversion, this frequency band is still far from what is needed for community applications and high-resolution studies. To achieve higher-frequency simulations, more data and consideration of off-path and finite-frequency effects are necessary. Based on recently developed spectral-element and adjoint methods, we prepared 94 Mw 3.5-6.0 earthquakes with well-defined locations and focal mechanism solutions from the Real-Time Moment Tensor Monitoring System (RMT), and performed an iterative gradient-based inversion employing waveform modeling and finite-frequency measurements of the adjoint method. In this way the 3-D sensitivity kernels are taken into account realistically and the full waveform information is exploited naturally, without the need for any phase picks. A preliminary model m003 using 10-50 sec data is demonstrated and compared with previous travel-time models. The primary difference appears in the mountainous area, where the previous travel-time model may underestimate the S-wave speed in the upper crust but overestimate it in the lower crust.

  5. Adjoint Optimization of Multistage Axial Compressor Blades with Static Pressure Constraint at Blade Row Interface

    NASA Astrophysics Data System (ADS)

    Yu, Jia; Ji, Lucheng; Li, Weiwei; Yi, Weilin

    2016-06-01

    The adjoint method is an important tool for design refinement of multistage compressors. However, the radial static pressure distribution deviates during the optimization procedure and deteriorates the overall performance, producing final designs that are not well suited for realistic engineering applications. In previous development work on multistage turbomachinery blade optimization using the adjoint method and the thin shear-layer N-S equations, the entropy production is selected as the objective function, with given mass flow rate and total pressure ratio imposed as constraints. The radial static pressure distribution at the interfaces between rows is introduced as a new constraint in the present paper. The approach is applied to the redesign of a five-stage axial compressor, and the results obtained with and without the constraint on the radial static pressure distribution at the interfaces between rows are discussed in detail. The results show that the redesign without the radial static pressure distribution constraint (RSPDC) gives an optimal solution that exhibits deviations in the radial static pressure distribution, especially in the rotor exit tip region. On the other hand, the redesign with the RSPDC successfully maintains the radial static pressure distribution at the interfaces between rows and ensures that the optimization results are applicable in a practical engineering design.

  6. Aerodynamic Shape Optimization of Complex Aircraft Configurations via an Adjoint Formulation

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony; Farmer, James; Martinelli, Luigi; Saunders, David

    1996-01-01

    This work describes the implementation of optimization techniques based on control theory for complex aircraft configurations. Here control theory is employed to derive the adjoint differential equations, the solution of which allows for a drastic reduction in computational costs over previous design methods (13, 12, 43, 38). In our earlier studies (19, 20, 22, 23, 39, 25, 40, 41, 42) it was shown that this method could be used to devise effective optimization procedures for airfoils, wings and wing-bodies subject to either analytic or arbitrary meshes. Design formulations for both potential flows and flows governed by the Euler equations have been demonstrated, showing that such methods can be devised for various governing equations (39, 25). In our most recent works (40, 42) the method was extended to treat wing-body configurations with a large number of mesh points, verifying that significant computational savings can be gained for practical design problems. In this paper the method is extended for the Euler equations to treat complete aircraft configurations via a new multiblock implementation. New elements include a multiblock-multigrid flow solver, a multiblock-multigrid adjoint solver, and a multiblock mesh perturbation scheme. Two design examples are presented in which the new method is used for the wing redesign of a transonic business jet.

  7. An improved independent component analysis model for 3D chromatogram separation and its solution by multi-areas genetic algorithm

    PubMed Central

    2014-01-01

    Background The 3D chromatogram generated by High Performance Liquid Chromatography-Diode Array Detector (HPLC-DAD) has been researched widely in the fields of herbal medicine, grape wine, agriculture, petroleum and so on. Currently, most of the methods used for separating a 3D chromatogram need to know the number of compounds in advance, which may be impossible, especially when the compounds are complex or white noise exists. A new method that extracts compounds from the 3D chromatogram directly is needed. Methods In this paper, a new separation model named parallel Independent Component Analysis constrained by Reference Curve (pICARC) was proposed to transform the separation problem into a multi-parameter optimization issue. It was not necessary to know the number of compounds in the optimization. In order to find all the solutions, an algorithm named multi-areas Genetic Algorithm (mGA) was proposed, where multiple areas of candidate solutions were constructed according to the fitness and the distances among the chromosomes. Results Simulations and experiments on a real-life HPLC-DAD data set were used to demonstrate our method and its effectiveness. Through simulations, it can be seen that our method can separate a 3D chromatogram into chromatogram peaks and spectra successfully, even when they severely overlap. The experiments also show that our method is effective on a real HPLC-DAD data set. Conclusions Our method can separate a 3D chromatogram successfully without knowing the number of compounds in advance, and it is fast and effective. PMID:25474487

  8. Solution to automatic generation control problem using firefly algorithm optimized I(λ)D(µ) controller.

    PubMed

    Debbarma, Sanjoy; Saikia, Lalit Chandra; Sinha, Nidul

    2014-03-01

    The present work focuses on automatic generation control (AGC) of a three-area thermal system with unequal areas, considering reheat turbines and appropriate generation rate constraints (GRC). A fractional order (FO) controller named the I(λ)D(µ) controller, based on the CRONE approximation, is proposed for the first time as an appropriate technique to solve the multi-area AGC problem in power systems. A recently developed metaheuristic algorithm known as the firefly algorithm (FA) is used for the simultaneous optimization of the gains and other parameters such as the order of the integrator (λ) and differentiator (μ) of the I(λ)D(µ) controller and the governor speed regulation parameters (R). The dynamic responses corresponding to the optimized I(λ)D(µ) controller gains, λ, μ, and R are compared with those of classical integer order (IO) controllers such as I, PI and PID controllers. Simulation results show that the proposed I(λ)D(µ) controller provides more improved dynamic responses and outperforms the IO based classical controllers. Further, sensitivity analysis confirms the robustness of the optimized I(λ)D(µ) controller to wide changes in system loading conditions and in the size and position of the SLP. The proposed controller is also found to perform well compared to IO based controllers when the SLP takes place simultaneously in any two areas or in all the areas. Robustness of the proposed I(λ)D(µ) controller is also tested against system parameter variations. PMID:24139308
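
    The firefly algorithm referred to above is easy to state: each candidate solution moves toward every brighter (lower-cost) one with an attractiveness that decays with squared distance, plus a small random step. The sketch below applies it to a generic quadratic test objective standing in for the controller-tuning cost; parameter values are illustrative defaults, not those used in the paper.

        import numpy as np

        def firefly_minimize(cost, lo, hi, n_fireflies=25, n_iter=200,
                             beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
            rng = np.random.default_rng(seed)
            dim = len(lo)
            x = lo + rng.random((n_fireflies, dim)) * (hi - lo)
            brightness = np.array([cost(xi) for xi in x])      # lower cost = brighter
            for _ in range(n_iter):
                for i in range(n_fireflies):
                    for j in range(n_fireflies):
                        if brightness[j] < brightness[i]:      # firefly i moves toward brighter j
                            beta = beta0 * np.exp(-gamma * np.sum((x[i] - x[j]) ** 2))
                            x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                            x[i] = np.clip(x[i], lo, hi)
                            brightness[i] = cost(x[i])
                alpha *= 0.98                                  # shrink the random walk over time
            best = np.argmin(brightness)
            return x[best], brightness[best]

        # toy objective standing in for the time-domain cost of the AGC controller tuning
        sphere = lambda p: float(np.sum((p - 0.3) ** 2))
        p_best, c_best = firefly_minimize(sphere, lo=np.zeros(4), hi=np.ones(4))
        print(p_best, c_best)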

  9. Comparison of adjoint and analytical Bayesian inversion methods for constraining Asian sources of carbon monoxide using satellite (MOPITT) measurements of CO columns

    NASA Astrophysics Data System (ADS)

    Kopacz, Monika; Jacob, Daniel J.; Henze, Daven K.; Heald, Colette L.; Streets, David G.; Zhang, Qiang

    2008-04-01

    We apply the adjoint of an atmospheric chemical transport model (GEOS-Chem CTM) to constrain Asian sources of carbon monoxide (CO) with 2° × 2.5° spatial resolution using Measurement of Pollution in the Troposphere (MOPITT) satellite observations of CO columns in February-April 2001. Results are compared to the more common analytical method for solving the same Bayesian inverse problem and applied to the same data set. The analytical method is more exact but because of computational limitations it can only constrain emissions over coarse regions. We find that the correction factors to the a priori CO emission inventory from the adjoint inversion are consistent with those of the analytical inversion when averaged over the large regions of the latter. Unlike the analytical solution, the adjoint correction factors are not subject to compensating errors between adjacent regions (error anticorrelation). The adjoint solution also reveals fine-scale variability that the analytical inversion cannot resolve. For example, India shows both large emissions underestimates in the densely populated Ganges Valley and large overestimates in the eastern part of the country where springtime emissions are dominated by biomass burning. Correction factors to Chinese emissions are largest in central and eastern China, consistent with a recent bottom-up inventory though there are disagreements in the fine structure. Correction factors for biomass burning are consistent with a recent bottom-up inventory based on MODIS satellite fire data.
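
    For reference, the "analytical method" being compared against has a closed form in the linear-Gaussian setting. The sketch below uses toy matrices; in practice the CTM Jacobian K must be built column-by-column, which is what limits the analytical approach to coarse regions. It computes the posterior mean and error covariance for scaling factors x given observations y = K x + eps, prior (xa, Sa) and observation error covariance Se.

        import numpy as np

        def analytical_bayesian_inversion(K, y, xa, Sa, Se):
            """MAP solution of y = K x + eps with Gaussian prior N(xa, Sa) and error N(0, Se)."""
            G = Sa @ K.T @ np.linalg.inv(K @ Sa @ K.T + Se)   # gain matrix
            x_hat = xa + G @ (y - K @ xa)                     # posterior mean (scaling factors)
            S_hat = Sa - G @ K @ Sa                           # posterior error covariance
            return x_hat, S_hat

        rng = np.random.default_rng(0)
        n_state, n_obs = 8, 20
        K = rng.random((n_obs, n_state))
        x_true = 1.0 + 0.5 * rng.standard_normal(n_state)
        y = K @ x_true + 0.05 * rng.standard_normal(n_obs)
        xa = np.ones(n_state)
        Sa = 0.5 ** 2 * np.eye(n_state)
        Se = 0.05 ** 2 * np.eye(n_obs)
        x_hat, S_hat = analytical_bayesian_inversion(K, y, xa, Sa, Se)
        print(np.round(x_hat - x_true, 3))

    The adjoint approach replaces the explicit gain-matrix algebra with iterative minimization of the corresponding cost function, which is why it can afford the much finer 2° x 2.5° state vector.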

  10. Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow

    NASA Astrophysics Data System (ADS)

    Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar

    2014-09-01

    We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially-varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. The adjoint (linearized) Stokes equations, which are characterized by a 4th order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton's method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially-varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. Finally, a model of a subducting plate shows that a localized weak zone at the

  11. Towards Multi-resolution Adjoint Tomography of the European Crust and Upper Mantle

    NASA Astrophysics Data System (ADS)

    Basini, P.; Nissen-Meyer, T.; Boschi, L.; Schenk, O.; Verbeke, J.; Hanasoge, S.; Giardini, D.

    2010-12-01

    Thanks to continuously improved instrument coverage and the growth of high-performance computational infrastructure, it is now possible to enhance the resolution at which seismologists image the Earth's interior. While most algorithms in global seismic tomography today are grounded in the ray-theory approximation, resolution and model complexity can effectively be enhanced only through the application of more advanced techniques accounting for the many complexities of the partial derivatives relating seismic data and Earth structure. These include full-wave forward modelling methods and adjoint algorithms, which together set a framework for iterative, nonlinear inversion upon complex 3D structures. We take advantage of these methodological improvements using a newly developed, flexible spectral-element method (SPECFEM3D) with embedded adjoint capabilities to devise new tomographic models of the European crust and upper mantle. We chose a two-scale strategy, in which we use global surface wave data to first constrain the large-scale structures, and simultaneously invert for high-resolution, regional structures based on measurements of ambient noise in central and southern Europe. By its very nature, and as a result of the dense station coverage over the continent, the ambient-noise method affords us a particularly uniform seismic coverage. To define surface-wave sensitivity kernels, we construct a flexible, global mesh of the upper mantle only (i.e., a spherical shell) honoring all global discontinuities, and include 3D starting models down to periods of 30 seconds. The noise data are cross-correlated to obtain station-to-station Green's functions. We will present examples of sensitivity kernels computed for these noise-based Green's functions and discuss the data-specific validity of the underlying assumptions to extract Green's functions. The local setup, which is constructed using the same software as in the global case, needs to honor internal and

  12. Development of Web-Based Menu Planning Support System and its Solution Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Kashima, Tomoko; Matsumoto, Shimpei; Ishii, Hiroaki

    2009-10-01

    Recently, lifestyle-related diseases have become an object of public concern, while at the same time people are becoming more health conscious. We assume that insufficient circulation of knowledge on dietary habits is an essential factor in causing lifestyle-related diseases. This paper focuses on everyday meals close to our life and proposes a well-balanced menu planning system as a preventive measure against lifestyle-related diseases. The system is developed using a Web-based frontend and it provides multi-user services and menu information sharing capabilities like social networking services (SNS). The system is implemented on a Web server running Apache (HTTP server software), MySQL (database management system), and PHP (scripting language for dynamic Web pages). For the menu planning, a genetic algorithm is applied, with the problem formulated as multidimensional 0-1 integer programming.

  13. Low-resolution structures of proteins in solution retrieved from X-ray scattering with a genetic algorithm.

    PubMed Central

    Chacón, P; Morán, F; Díaz, J F; Pantos, E; Andreu, J M

    1998-01-01

    Small-angle x-ray solution scattering (SAXS) is analyzed with a new method to retrieve convergent model structures that fit the scattering profiles. An arbitrary hexagonal packing of several hundred beads containing the problem object is defined. Instead of attempting to compute the Debye formula for all of the possible mass distributions, a genetic algorithm is employed that efficiently searches the configurational space and evolves best-fit bead models. Models from different runs of the algorithm have similar or identical structures. The modeling resolution is increased by reducing the bead radius together with the search space in successive cycles of refinement. The method has been tested with protein SAXS (0.001 < S < 0.06 Å^-1) calculated from x-ray crystal structures, adding noise to the profiles. The models obtained closely approach the volumes and radii of gyration of the known structures, and faithfully reproduce the dimensions and shape of each of them. This includes finding the active site cavity of lysozyme, the bilobed structure of gamma-crystallin, two domains connected by a stalk in betaB2-crystallin, and the horseshoe shape of pancreatic ribonuclease inhibitor. The low-resolution solution structure of lysozyme has been directly modeled from its experimental SAXS profile (0.003 < S < 0.03 Å^-1). The model describes lysozyme size and shape to the resolution of the measurement. The method may be applied to other proteins, to the analysis of domain movements, to the comparison of solution and crystal structures, as well as to large macromolecular assemblies. PMID:9635731
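
    The scoring step such a genetic algorithm repeats many times, evaluating the Debye formula for a candidate bead arrangement and comparing the resulting profile with the measured one, can be sketched as follows (identical beads with unit form factors, synthetic coordinates and target profile; not the published refinement code).

        import numpy as np

        def debye_intensity(coords, q):
            """I(q) = sum_ij sin(q r_ij)/(q r_ij) for identical beads (form factors = 1)."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            I = np.empty_like(q)
            for k, qk in enumerate(q):
                x = qk * d
                s = np.where(x > 0.0, np.sin(x) / np.where(x > 0.0, x, 1.0), 1.0)  # sinc with sin(0)/0 = 1
                I[k] = s.sum()
            return I

        def misfit(candidate_coords, target_profile, q):
            model = debye_intensity(candidate_coords, q)
            # scale-free comparison: normalize both profiles at the first point
            return float(np.sum((model / model[0] - target_profile / target_profile[0]) ** 2))

        rng = np.random.default_rng(3)
        q = np.linspace(0.005, 0.3, 40)                    # scattering vector, arbitrary units
        true_beads = rng.uniform(-20, 20, size=(60, 3))    # hypothetical "known" structure
        target = debye_intensity(true_beads, q)
        trial = rng.uniform(-20, 20, size=(60, 3))
        print(misfit(true_beads, target, q), misfit(trial, target, q))   # 0.0 vs. a positive value

    In the GA, the chromosome encodes which beads of the hexagonal packing are occupied, and the misfit above (or a chi-square against the noisy experimental curve) serves as the fitness.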

  14. Including the adjoint model of the moist physics in the 4D-Var in NASA's GEOS-5 Global Circulation Model

    NASA Astrophysics Data System (ADS)

    Holdaway, D. R.; Errico, R.

    2011-12-01

    Inherent in the minimization process in the 4D-Var data assimilation system is the need for the model's adjoint. It is straightforward to obtain the exact adjoint by linearizing the code in a line by line sense; however it only provides an accurate overall representation of the physical processes if the model behaviour is linear. Moist processes in the atmosphere, and thus the models that represent them, are intrinsically highly non-linear and can contain discrete switches. The adjoint that is required in the data assimilation system needs to provide an accurate representation of the physical behaviour for perturbation sizes of the order of the analysis error, so an exact adjoint of the moist physics model is likely to be inaccurate. Instead a non-exact adjoint model, which is accurate for large enough perturbations, must be developed. The constraint on the development is that the simplified adjoint be consistent with the actual trajectory of the model. Previous attempts to include the moist physics in the 4D-Var have emphasized the need for redevelopment of the actual moist scheme to a simpler version. These schemes are designed to be linear in the limit of realistic perturbation size but also capture the essence of the physical behaviour, making the adjoint version of the scheme suitable for use in the 4D-Var. A downside to this approach is that it can result in an over simplification of the physics and represent a larger departure from the true model trajectory than necessary. The adjoint is just the transpose of the tangent linear model, which is the differential of the model operator. This differential of the operator can be constructed from Jacobian matrices. Examining the structures of the Jacobians as perturbations of varying size are added to the state vector can help determine whether the adjoint model - be it of actual or simplified physics - will be suitable for use in the assimilation algorithm. If Jacobian structures change considerably when the
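
    Two standard code tests underlie statements like "the adjoint is just the transpose of the tangent linear model": a tangent-linear test, in which the ratio ||M(x + eps*dx) - M(x)|| / ||eps * L dx|| should approach 1 as eps shrinks, and an adjoint dot-product test, in which <L dx, dy> must equal <dx, L^T dy> to machine precision. The sketch below applies both to a toy nonlinear operator standing in for a physics parameterization (illustrative only, not the GEOS-5 moist physics).

        import numpy as np

        def model(x):
            """Toy nonlinear operator standing in for a physics parameterization."""
            return np.array([x[0] ** 2 + np.sin(x[1]), x[0] * x[1], np.exp(0.1 * x[2])])

        def tangent_linear(x, dx):
            """Hand-coded Jacobian-vector product (the TLM) of `model` at x."""
            return np.array([2 * x[0] * dx[0] + np.cos(x[1]) * dx[1],
                             x[1] * dx[0] + x[0] * dx[1],
                             0.1 * np.exp(0.1 * x[2]) * dx[2]])

        def adjoint(x, dy):
            """Hand-coded transpose-Jacobian-vector product (the adjoint) of `model` at x."""
            return np.array([2 * x[0] * dy[0] + x[1] * dy[1],
                             np.cos(x[1]) * dy[0] + x[0] * dy[1],
                             0.1 * np.exp(0.1 * x[2]) * dy[2]])

        rng = np.random.default_rng(7)
        x, dx, dy = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)

        # tangent-linear test: the ratio approaches 1 as eps -> 0 (until roundoff)
        for eps in (1e-2, 1e-4, 1e-6):
            ratio = (np.linalg.norm(model(x + eps * dx) - model(x))
                     / np.linalg.norm(eps * tangent_linear(x, dx)))
            print(eps, ratio)

        # adjoint (dot-product) test: the two inner products agree to machine precision
        print(np.dot(tangent_linear(x, dx), dy), np.dot(dx, adjoint(x, dy)))

    For a simplified moist-physics adjoint of the kind discussed here, the dot-product test is still exact with respect to the simplified operator; the tangent-linear test is what reveals whether that operator remains representative at analysis-error-sized perturbations.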

  15. A deflation based parallel algorithm for spectral element solution of the incompressible Navier-Stokes equations

    SciTech Connect

    Fischer, P.F.

    1996-12-31

    Efficient solution of the Navier-Stokes equations in complex domains is dependent upon the availability of fast solvers for sparse linear systems. For unsteady incompressible flows, the pressure operator is the leading contributor to stiffness, as the characteristic propagation speed is infinite. In the context of operator splitting formulations, it is the pressure solve which is the most computationally challenging, despite its elliptic origins. We seek to improve existing spectral element iterative methods for the pressure solve in order to overcome the slow convergence frequently observed in the presence of highly refined grids or high-aspect ratio elements.

  16. Accurate adjoint design sensitivities for nano metal optics.

    PubMed

    Hansen, Paul; Hesselink, Lambertus

    2015-09-01

    We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature we obtain highly-accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics. PMID:26368483

  17. Examination of Observation Impacts derived from OSEs and Adjoint Models

    NASA Technical Reports Server (NTRS)

    Gelaro, Ronald

    2008-01-01

    With the adjoint of a data assimilation system, the impact of any or all assimilated observations on measures of forecast skill can be estimated accurately and efficiently. The approach allows aggregation of results in terms of individual data types, channels or locations, all computed simultaneously. In this study, adjoint-based estimates of observation impact are compared with results from standard observing system experiments (OSEs) in the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) system. The two approaches are shown to provide unique, but complementary, information. Used together, they reveal both redundancies and dependencies between observing system impacts as observations are added or removed. Understanding these dependencies poses a major challenge for optimizing the use of the current observational network and defining requirements for future observing systems.

  18. Three-Dimensional Turbulent RANS Adjoint-Based Error Correction

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2003-01-01

    Engineering problems commonly require functional outputs of computational fluid dynamics (CFD) simulations with specified accuracy. These simulations are performed with limited computational resources. Computable error estimates offer the possibility of quantifying accuracy on a given mesh and predicting a fine grid functional on a coarser mesh. Such an estimate can be computed by solving the flow equations and the associated adjoint problem for the functional of interest. An adjoint-based error correction procedure is demonstrated for transonic inviscid and subsonic laminar and turbulent flow. A mesh adaptation procedure is formulated to target uncertainty in the corrected functional and terminate when the error remaining in the calculation is less than a user-specified error tolerance. This adaptation scheme is shown to yield anisotropic meshes with corrected functionals that are more accurate for a given number of grid points than isotropically adapted and uniformly refined grids.

  19. On improving storm surge forecasting using an adjoint optimal technique

    NASA Astrophysics Data System (ADS)

    Li, Yineng; Peng, Shiqiu; Yan, Jing; Xie, Lian

    2013-12-01

    A three-dimensional ocean model and its adjoint model are used to simultaneously optimize the initial conditions (IC) and the wind stress drag coefficient (Cd) for improving storm surge forecasting. To demonstrate the effect of this proposed method, a number of identical twin experiments (ITEs) with a prescription of different error sources and two real data assimilation experiments are performed. Results from both the idealized and real data assimilation experiments show that adjusting IC and Cd simultaneously can achieve much more improvements in storm surge forecasting than adjusting IC or Cd only. A diagnosis on the dynamical balance indicates that adjusting IC only may introduce unrealistic oscillations out of the assimilation window, which can be suppressed by the adjustment of the wind stress when simultaneously adjusting IC and Cd. Therefore, it is recommended to simultaneously adjust IC and Cd to improve storm surge forecasting using an adjoint technique.

  20. The fast neutron fluence and the activation detector activity calculations using the effective source method and the adjoint function

    SciTech Connect

    Hep, J.; Konecna, A.; Krysl, V.; Smutny, V.

    2011-07-01

    This paper describes the application of the effective source method in forward calculations and of the adjoint method to the solution of the fast neutron fluence and activation detector activities in the reactor pressure vessel (RPV) and RPV cavity of a VVER-440 reactor. Its objective is the demonstration of both methods on a practical task. The effective source method applies the Boltzmann transport operator to time-integrated source data in order to obtain neutron fluence and detector activities. By weighting the source data by the time-dependent decay of the detector activity, the result of the calculation is the detector activity. Alternatively, if the weighting is uniform with respect to time, the result is the fluence. The approach works because of the inherent linearity of radiation transport in non-multiplying time-invariant media. Integrated in this way, the source data are referred to as the effective source. The effective source in the forward calculations method thereby enables the analyst to replace numerous intensive transport calculations with a single transport calculation in which the time dependence and magnitude of the source are correctly represented. In this work, the effective source method has been expanded slightly in the following way: the neutron source data were computed with a few-group calculation using the active core calculation code MOBY-DICK. The follow-up multigroup neutron transport calculation was performed using the neutron transport code TORT. For comparison, an alternative method of calculation has been used based upon adjoint functions of the Boltzmann transport equation. Calculation of the three-dimensional (3-D) adjoint function for each required computational outcome has been obtained using the deterministic code TORT and the cross section library BGL440. Adjoint functions appropriate to the required fast neutron flux density and neutron reaction rates have been calculated for several significant points within the RPV

  1. Benchmarking algorithms for the solution of Collisional Radiative Model (CRM) equations.

    NASA Astrophysics Data System (ADS)

    Klapisch, Marcel; Busquet, Michel

    2007-11-01

    Elements used in ICF target designs can have many charge states in the same plasma conditions, each charge state having numerous energy levels. When LTE conditions are not met, one has to solve CRM equations for the populations of energy levels, which are necessary for opacities/emissivities, Z*, etc. In the case of sparse spectra, or when configuration interaction is important (open d or f shells), statistical methods[1] are insufficient. For these cases one must resort to a detailed-level CRM rate generator[2]. The equations to be solved may involve tens of thousands of levels. The system is by nature ill-conditioned. We show that some classical methods do not converge. Improvements of the latter will be compared with new algorithms[3] with respect to performance, robustness, and accuracy. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Q. S. R. T. 65, 43 (2000). [2] M. Klapisch, M. Busquet and A. Bar-Shalom, Proceedings of APIP'07, AIP series (to be published). [3] M. Klapisch and M. Busquet, High Energy Density Phys. 3, 143 (2007)

  2. Inverse scattering solutions by a sinc basis, multiple source, moment method--Part III: Fast algorithms.

    PubMed

    Johnson, S A; Zhou, Y; Tracy, M K; Berggren, M J; Stenger, F

    1984-01-01

    Solving the inverse scattering problem for the Helmholtz wave equation without employing the Born or Rytov approximations is a challenging problem, but some slow iterative methods have been proposed. One such method suggested by us is based on solving systems of nonlinear algebraic equations that are derived by applying the method of moments to a sinc basis function expansion of the fields and scattering potential. In the past, we have solved these equations for a 2-D object of n by n pixels in a time proportional to n^5. In the present paper, we demonstrate a new method based on FFT convolution and the concept of backprojection which solves these equations in time proportional to n^3 log(n). Several numerical examples are given for images up to 7 by 7 pixels in size. Analogous algorithms to solve the Riccati wave equation in n^3 log(n) time are also suggested, but not verified. A method is suggested for interpolating measurements from one detector geometry to a new perturbed detector geometry whose measurement points fall on an FFT-accessible, rectangular grid and thereby render many detector geometries compatible for use by our fast methods. PMID:6540908
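
    The computational primitive behind the reported speedup is the replacement of direct convolution sums by FFTs: on an n-by-n grid, one circular convolution drops from O(n^4) operations to O(n^2 log n), which is what makes the overall n^3 log(n) solution time possible. A minimal sketch with a random toy field and kernel (not the sinc-basis scattering operator itself):

        import numpy as np

        def fft_convolve2d(field, kernel):
            """Circular 2-D convolution via the FFT; cost O(n^2 log n) instead of O(n^4)."""
            n = field.shape[0]
            return np.real(np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(kernel, s=(n, n))))

        def direct_convolve2d(field, kernel):
            """Brute-force circular convolution, for verification only."""
            n = field.shape[0]
            out = np.zeros_like(field)
            for i in range(n):
                for j in range(n):
                    for k in range(n):
                        for l in range(n):
                            out[i, j] += kernel[k, l] * field[(i - k) % n, (j - l) % n]
            return out

        rng = np.random.default_rng(0)
        n = 8
        field, kernel = rng.standard_normal((n, n)), rng.standard_normal((n, n))
        print(np.abs(fft_convolve2d(field, kernel) - direct_convolve2d(field, kernel)).max())  # ~1e-13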

  3. Seismic Window Selection and Misfit Measurements for Global Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Lei, W.; Bozdag, E.; Lefebvre, M.; Podhorszki, N.; Smith, J. A.; Tromp, J.

    2013-12-01

    Global Adjoint Tomography requires fast parallel processing of large datasets. After obtaining the preprocessed observed and synthetic seismograms, we use the open source software packages FLEXWIN (Maggi et al. 2007) to select time windows and MEASURE_ADJ to make measurements. These measurements define adjoint sources for data assimilation. Previous versions of these tools work on a pair of SAC files---observed and synthetic seismic data for the same component and station---and loop over all seismic records associated with one earthquake. Given the large number of stations and earthquakes, the frequent read and write operations create severe I/O bottlenecks on modern computing platforms. We present new versions of these tools utilizing a new seismic data format, namely the Adaptive Seismic Data Format (ASDF). This new format shows superior scalability for applications on high-performance computers and accommodates various types of data, including earthquake, industry and seismic interferometry datasets. ASDF also provides user-friendly APIs, which can be easily integrated into the adjoint tomography workflow and combined with other data processing tools. In addition to solving the I/O bottleneck, we are making several improvements to these tools. For example, FLEXWIN is tuned to select windows for different types of earthquakes. To capture their distinct features, we categorize earthquakes by their depths and frequency bands. Moreover, instead of only picking phases between the first P arrival and the surface-wave arrivals, our aim is to select and assimilate many other later prominent phases in adjoint tomography. For example, in the body-wave band (17 s - 60 s) we include SKS, sSKS and their multiples, while in the surface-wave band (60 s - 120 s) we incorporate major-arc surface waves.

  4. A comparison of adjoint and data-centric verification techniques.

    SciTech Connect

    Wildey, Timothy Michael; Cyr, Eric Christopher; Shadid, John Nicolas; Pawlowski, Roger Patrick; Smith, Thomas Michael

    2013-03-01

    This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. We compare the adjoint-based a posteriori error estimation approach with a recent variant of a data-centric verification technique. We provide a brief overview of each technique and then we discuss their relative advantages and disadvantages. We use Drekar::CFD to produce numerical results for steady-state Navier-Stokes and SARANS approximations.

  5. Monopole condensation in two-flavor adjoint QCD

    SciTech Connect

    Cossu, Guido; D'Elia, Massimo; Di Giacomo, Adriano; Lacagnina, Giuseppe; Pica, Claudio

    2008-04-01

    In QCD with adjoint fermions, the deconfining transition takes place at a lower temperature than the chiral transition. We study the two transitions by use of the Polyakov loop, the monopole order parameter, and the chiral condensate. The deconfining transition is first order, while the chiral transition is a crossover. The order parameters for confinement are not affected by the chiral transition. We conclude that the degrees of freedom relevant to confinement are different from those describing chiral symmetry.

  6. Spectral monodromy of non-self-adjoint operators

    NASA Astrophysics Data System (ADS)

    Phan, Quang Sang

    2014-01-01

    In the present paper, we build a combinatorial invariant, called the "spectral monodromy" from the spectrum of a single (non-self-adjoint) h-pseudodifferential operator with two degrees of freedom in the semi-classical limit. Our inspiration comes from the quantum monodromy defined for the joint spectrum of an integrable system of n commuting self-adjoint h-pseudodifferential operators, given by S. Vu Ngoc ["Quantum monodromy in integrable systems," Commun. Math. Phys. 203(2), 465-479 (1999)]. The first simple case that we treat in this work is a normal operator. In this case, the discrete spectrum can be identified with the joint spectrum of an integrable quantum system. The second more complex case we propose is a small perturbation of a self-adjoint operator with a classical integrability property. We show that the discrete spectrum (in a small band around the real axis) also has a combinatorial monodromy. The main difficulty in this case is that we do not know the description of the spectrum everywhere, but only in a Cantor type set. In addition, we also show that the corresponding monodromy can be identified with the classical monodromy, defined by J. Duistermaat ["On global action-angle coordinates," Commun. Pure Appl. Math. 33(6), 687-706 (1980)].

  7. Adjoint-based sensitivity analysis for reactor-safety applications

    SciTech Connect

    Parks, C.V.

    1985-01-01

    The application and usefulness of an adjoint-based methodology for performing sensitivity analysis on reactor safety computer codes is investigated. The adjoint-based methodology, referred to as differential sensitivity theory (DST), provides first-order derivatives of the calculated quantities of interest (responses) with respect to the input parameters. The basic theoretical development of DST is presented along with the needed general extensions for consideration of model discontinuities and a variety of useful response definitions. A simple analytic problem is used to highlight the general DST procedures. Finally, DST procedures presented in this work are applied to two highly nonlinear reactor accident analysis codes: (1) FASTGAS, a relatively small code for analysis of loss-of-decay-heat-removal accident in a gas-cooled fast reactor, and (2) an existing code called VENUS-II which is typically employed for analyzing the core disassembly phase of a hypothetical fast reactor accident. The two codes are different both in terms of complexity and in terms of the facets of DST which can be illustrated. Sensitivity results from the adjoint codes ADJGAS and VENUS-ADJ are verified with direct recalculations using perturbed input parameters. The effectiveness of the DST results for parameter ranking, prediction of response changes, and uncertainty analysis are illustrated. The conclusion drawn from this study is that DST is a viable, cost-effective methodology for accurate sensitivity analysis.
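
    Once DST has produced the first-order derivatives dR/dα of a response with respect to the input parameters, they are typically used for the three purposes mentioned in the abstract: ranking parameters, predicting response changes, and propagating uncertainties. A minimal sketch with made-up numbers (not the FASTGAS or VENUS-II sensitivities):

        import numpy as np

        # hypothetical adjoint-computed sensitivities dR/d(alpha_i) for one response R
        sensitivities = np.array([2.1, -0.4, 0.05])
        alpha_nominal = np.array([1.0, 5.0, 300.0])
        R_nominal = 10.0

        # first-order prediction of the response change for a perturbed parameter set
        delta_alpha = np.array([0.02, -0.10, 6.0])
        print("predicted response change:", sensitivities @ delta_alpha)

        # "sandwich rule": first-order response variance from a parameter covariance C
        C = np.diag((0.05 * alpha_nominal) ** 2)      # assume 5% uncorrelated uncertainties
        print("response std. dev.:", np.sqrt(sensitivities @ C @ sensitivities))

        # relative importance ranking by |(alpha_i / R) * dR/dalpha_i|
        print("importance ranking:", np.argsort(-np.abs(sensitivities * alpha_nominal / R_nominal)))

    The verification step mentioned in the abstract corresponds to re-running the forward code with perturbed parameters and checking the direct response change against the first-order prediction above.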

  8. Self-adjoint time operators and invariant subspaces

    NASA Astrophysics Data System (ADS)

    Gómez, Fernando

    2008-02-01

    The question of existence of self-adjoint time operators for unitary evolutions in classical and quantum mechanics is revisited on the basis of Halmos-Helson theory of invariant subspaces, Sz.-Nagy-Foiaş dilation theory and Misra-Prigogine-Courbage theory of irreversibility. It is shown that the existence of self-adjoint time operators is equivalent to the intertwining property of the evolution plus the existence of simply invariant subspaces or rigid operator-valued functions for its Sz.-Nagy-Foiaş functional model. Similar equivalent conditions are given in terms of intrinsic randomness in the context of statistical mechanics. The rest of the contents are mainly a unifying review of the subject scattered throughout an unconnected literature. A well-known extensive set of equivalent conditions is derived from the above results; such conditions are written in terms of Schrödinger couples, the Weyl commutation relation, incoming and outgoing subspaces, innovation processes, Lax-Phillips scattering, translation and spectral representations, and spectral properties. Also the natural procedure dealing with symmetric time operators in standard quantum mechanics involving their self-adjoint extensions is illustrated by considering the quantum Aharonov-Bohm time-of-arrival operator.

  9. Unsteady Adjoint Approach for Design Optimization of Flapping Airfoils

    NASA Technical Reports Server (NTRS)

    Lee, Byung Joon; Liou, Meng-Sing

    2012-01-01

    This paper describes the work for optimizing the propulsive efficiency of flapping airfoils, i.e., improving the thrust while constraining the aerodynamic work during flapping flight, by changing their shape and trajectory of motion with the unsteady discrete adjoint approach. For unsteady problems, it is essential to properly resolve the time scales of the motion under consideration, and the temporal resolution must be compatible with the objective sought. We include both the instantaneous and time-averaged (periodic) formulations in this study. For design optimization with shape parameters or motion parameters, the time-averaged objective function is found to be more useful, while the instantaneous one is more suitable for flow control. The instantaneous objective function is operationally straightforward. On the other hand, the time-averaged objective function requires additional steps in the adjoint approach; the unsteady discrete adjoint equations for a periodic flow must be reformulated and the corresponding system of equations solved iteratively. We compare the design results from shape and trajectory optimizations and investigate the physical relevance of the design variables to the flapping motion at on- and off-design conditions.

  10. Spectral monodromy of non-self-adjoint operators

    SciTech Connect

    Phan, Quang Sang

    2014-01-15

    In the present paper, we build a combinatorial invariant, called the “spectral monodromy” from the spectrum of a single (non-self-adjoint) h-pseudodifferential operator with two degrees of freedom in the semi-classical limit. Our inspiration comes from the quantum monodromy defined for the joint spectrum of an integrable system of n commuting self-adjoint h-pseudodifferential operators, given by S. Vu Ngoc [“Quantum monodromy in integrable systems,” Commun. Math. Phys. 203(2), 465–479 (1999)]. The first simple case that we treat in this work is a normal operator. In this case, the discrete spectrum can be identified with the joint spectrum of an integrable quantum system. The second more complex case we propose is a small perturbation of a self-adjoint operator with a classical integrability property. We show that the discrete spectrum (in a small band around the real axis) also has a combinatorial monodromy. The main difficulty in this case is that we do not know the description of the spectrum everywhere, but only in a Cantor type set. In addition, we also show that the corresponding monodromy can be identified with the classical monodromy, defined by J. Duistermaat [“On global action-angle coordinates,” Commun. Pure Appl. Math. 33(6), 687–706 (1980)].

  11. Integrated algorithms for RFID-based multi-sensor indoor/outdoor positioning solutions

    NASA Astrophysics Data System (ADS)

    Zhu, Mi.; Retscher, G.; Zhang, K.

    2011-12-01

    Position information is very important, as people need it almost everywhere all the time. However, it is a challenging task to provide precise positions indoors and outdoors seamlessly. Outdoor positioning has been widely studied and accurate positions can usually be achieved by well-developed GPS techniques, but these techniques are difficult to use indoors since GPS signal reception is limited. Alternative techniques that can be used for indoor positioning include Wireless Local Area Network (WLAN), Bluetooth and Ultra Wideband (UWB), among others. However, all of these have limitations. The main objectives of this paper are to investigate and develop algorithms for a low-cost and portable indoor personal positioning system using Radio Frequency Identification (RFID) and its integration with other positioning systems. An RFID system consists of three components, namely a control unit, an interrogator and a transponder that transmits data and communicates with the reader. An RFID tag can be incorporated into a product, animal or person for the purpose of identification and tracking using radio waves. In general, for RFID positioning in urban and indoor environments three different methods can be used: cellular positioning, trilateration and location fingerprinting. In addition, the integration of RFID with other technologies is also discussed in this paper. A typical combination is to integrate RFID with relative positioning technologies such as MEMS INS to bridge the gaps between RFID tags for continuous positioning applications. Experiments are presented to demonstrate the improvements gained by integrating multiple sensors with RFID, which can be employed successfully for personal positioning.
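
    Of the three RFID positioning methods listed, trilateration is the most compact to illustrate: with ranges to known reference tags, subtracting one range equation from the others gives a linear system that can be solved by least squares. The sketch below uses a synthetic 2-D tag layout and noisy ranges (illustrative only).

        import numpy as np

        def trilaterate(anchors, ranges):
            """Least-squares position from ranges to known anchors (2-D, >= 3 anchors)."""
            p1, r1 = anchors[0], ranges[0]
            # from ||x - p_i||^2 = r_i^2 minus the first equation:
            #   2 (p_i - p_1) . x = r_1^2 - r_i^2 + ||p_i||^2 - ||p_1||^2
            A = 2.0 * (anchors[1:] - p1)
            b = (r1 ** 2 - ranges[1:] ** 2
                 + np.sum(anchors[1:] ** 2, axis=1) - np.sum(p1 ** 2))
            pos, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pos

        rng = np.random.default_rng(42)
        tags = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0], [10.0, 8.0]])  # reference tag positions (m)
        true_pos = np.array([3.7, 5.2])
        measured = np.linalg.norm(tags - true_pos, axis=1) + 0.05 * rng.standard_normal(len(tags))
        print(trilaterate(tags, measured))   # close to (3.7, 5.2)

    Fingerprinting, by contrast, skips the range model entirely and matches measured signal strengths against a pre-surveyed database, which is why the two approaches are often combined in practice.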

  12. A generic implementation of replica exchange with solute tempering (REST2) algorithm in NAMD for complex biophysical simulations

    NASA Astrophysics Data System (ADS)

    Jo, Sunhwan; Jiang, Wei

    2015-12-01

    Replica Exchange with Solute Tempering (REST2) is a powerful sampling-enhancement algorithm for molecular dynamics (MD) in that it needs a significantly smaller number of replicas yet achieves higher sampling efficiency than the standard temperature-exchange algorithm. In this paper, we extend the applicability of REST2 to quantitative biophysical simulations through a robust and generic implementation in the highly scalable MD software NAMD. The rescaling procedure for the force field parameters controlling the REST2 "hot region" is implemented in NAMD at the source code level. A user can conveniently select the hot region through VMD and write the selection information into a PDB file. The rescaling keyword/parameter is exposed through the NAMD Tcl scripting interface, which enables on-the-fly changes of simulation parameters. Our implementation of REST2 is within a communication-enabled Tcl script built on top of Charm++, thus the communication overhead of an exchange attempt is vanishingly small. Such a generic implementation facilitates seamless cooperation between REST2 and other modules of NAMD to provide enhanced sampling for complex biomolecular simulations. Three challenging applications, including a native REST2 simulation of a peptide folding-unfolding transition, free energy perturbation/REST2 for the absolute binding affinity of a protein-ligand complex, and umbrella sampling/REST2 Hamiltonian exchange for a free energy landscape calculation, were carried out on an IBM Blue Gene/Q supercomputer to demonstrate the efficacy of REST2 based on the present implementation.

  13. Nonlinear self-adjointness and conservation laws of Klein-Gordon-Fock equation with central symmetry

    NASA Astrophysics Data System (ADS)

    Abdulwahhab, Muhammad Alim

    2015-05-01

    The concept of nonlinear self-adjointness, introduced by Ibragimov, has significantly extended approaches to constructing conservation laws associated with symmetries, since it incorporates strict self-adjointness and quasi self-adjointness as well as the usual linear self-adjointness. Using this concept, the nonlinear self-adjointness condition for the Klein-Gordon-Fock equation was established and subsequently used to construct simplified but infinitely many nontrivial and independent conserved vectors. Noether's theorem was further applied to the Klein-Gordon-Fock equation to explore more distinct first integrals; the results show that the conservation laws constructed through this approach are exactly the same as those obtained under the strict self-adjointness of Ibragimov's method.
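
    For context, the nonlinear self-adjointness condition referred to above can be stated compactly in Ibragimov's general notation. The sketch below is the standard textbook form of the construction, not anything specific to the Klein-Gordon-Fock equation treated in this record.

    ```latex
    % Formal Lagrangian and adjoint equation (Ibragimov's construction)
    \[
      \mathcal{L} = v \, F\bigl(x, u, u_{(1)}, \ldots, u_{(s)}\bigr),
      \qquad
      F^{*} \equiv \frac{\delta \mathcal{L}}{\delta u} = 0 .
    \]
    % Nonlinear self-adjointness: there exists a substitution v = \varphi(x,u),
    % with \varphi \not\equiv 0, and a multiplier \lambda such that
    \[
      F^{*}\Bigl|_{\,v = \varphi(x,u)} = \lambda \, F .
    \]
    ```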

  14. Numerical solution of an optimal control problem governed by three-phase non-isothermal flow equations

    NASA Astrophysics Data System (ADS)

    Temirbekov, Nurlan M.; Baigereyev, Dossan R.

    2016-08-01

    The paper focuses on the numerical implementation of a model optimal control problem governed by the equations of three-phase non-isothermal flow in porous media. The objective is to achieve a preassigned temperature distribution along the reservoir at a given time of development by controlling the mass flow rate of the heat-transfer agent at the injection well. The optimal control problem is formulated, the adjoint problem is presented, and an algorithm for the numerical solution is proposed. Results of computational experiments are presented for a test problem.
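
    The overall workflow of an adjoint-based optimal control algorithm of this kind can be summarized in a short, generic sketch. The forward solver, adjoint solver, gradient assembly and cost function below are hypothetical placeholders (here a toy quadratic problem), not the three-phase flow equations of the paper.

    ```python
    def optimize_control(q0, forward, adjoint, gradient, step=1e-2, tol=1e-6, max_iter=200):
        """Generic adjoint-based gradient descent for an optimal control problem.
        q0       -- initial guess for the control (e.g., injection mass flow rate)
        forward  -- q -> state, solves the governing equations
        adjoint  -- (q, state) -> adjoint state, solves the adjoint problem
        gradient -- (q, state, adjoint_state) -> dJ/dq
        """
        q = list(q0)
        for _ in range(max_iter):
            u = forward(q)                    # forward (state) problem
            lam = adjoint(q, u)               # adjoint problem
            g = gradient(q, u, lam)           # gradient of the objective
            q = [qi - step * gi for qi, gi in zip(q, g)]
            if max(abs(gi) for gi in g) < tol:
                break
        return q

    # Toy demonstration: drive the "state" (identity map of q) to a target profile.
    target = [1.0, 2.0, 3.0]
    fwd = lambda q: q
    adj = lambda q, u: [2.0 * (ui - ti) for ui, ti in zip(u, target)]
    grad = lambda q, u, lam: lam
    print(optimize_control([0.0, 0.0, 0.0], fwd, adj, grad, step=0.1))
    ```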

  15. Using Adjoint Methods to Improve 3-D Velocity Models of Southern California

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Tape, C.; Maggi, A.; Tromp, J.

    2006-12-01

    We use adjoint methods popular in climate and ocean dynamics to calculate Fréchet derivatives for tomographic inversions in southern California. The Fréchet derivative of an objective function χ(m), where m denotes the Earth model, may be written in the generic form δχ = ∫ K_m(x) δln m(x) d³x, where δln m = δm/m denotes the relative model perturbation. For illustrative purposes, we construct the 3-D finite-frequency banana-doughnut kernel K_m, corresponding to the misfit of a single traveltime measurement, by simultaneously computing the 'adjoint' wave field s† forward in time and reconstructing the regular wave field s backward in time. The adjoint wave field is produced by using the time-reversed velocity at the receiver as a fictitious source, while the regular wave field is reconstructed on the fly by propagating the last frame of the wave field saved by a previous forward simulation backward in time. The approach is based upon the spectral-element method, and only two simulations are needed to produce density, shear-wave, and compressional-wave sensitivity kernels. This method is applied to the SCEC southern California velocity model. Various density, shear-wave, and compressional-wave sensitivity kernels are presented for different phases in the seismograms. We also generate 'event' kernels for Pnl, S and surface waves, which are the Fréchet kernels of misfit functions that measure the P, S or surface wave traveltime residuals at all the receivers simultaneously for one particular event. Effectively, an event kernel is a sum of weighted Fréchet kernels, with weights determined by the associated traveltime anomalies. By the nature of the 3-D simulation, every event kernel is also computed based upon just two simulations, i.e., its construction costs the same amount of computation time as an individual banana-doughnut kernel. One can think of the sum of the event kernels for all available earthquakes, called the 'misfit' kernel, as a graphical
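
    The zero-lag interaction of the adjoint and reconstructed regular wavefields that builds such a kernel can be sketched schematically as below. The field arrays, time stepping and interaction term are placeholders standing in for the actual spectral-element implementation.

    ```python
    import numpy as np

    def accumulate_kernel(forward_frames, adjoint_frames, dt):
        """Build a sensitivity kernel K(x) by integrating the interaction of the
        regular wavefield s(x, t) and the adjoint wavefield s_dagger(x, t) over time.
        Both inputs have shape (n_timesteps, n_gridpoints)."""
        K = np.zeros(forward_frames.shape[1])
        for s, s_dag in zip(forward_frames, adjoint_frames):
            K += s_dag * s * dt          # schematic interaction term
        return K

    # Toy fields on 100 grid points over 50 time steps (random stand-ins).
    rng = np.random.default_rng(0)
    fwd = rng.standard_normal((50, 100))
    adj = rng.standard_normal((50, 100))
    print(accumulate_kernel(fwd, adj, dt=0.01).shape)
    ```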

  16. Adjoint-Based Sensitivity Maps for the Nearshore

    NASA Astrophysics Data System (ADS)

    Orzech, Mark; Veeramony, Jay; Ngodock, Hans

    2013-04-01

    The wave model SWAN (Booij et al., 1999) solves the spectral action balance equation to produce nearshore wave forecasts and climatologies. It is widely used by the coastal modeling community and is part of a variety of coupled ocean-wave-atmosphere model systems. A variational data assimilation system (Orzech et al., 2013) has recently been developed for SWAN and is presently being transitioned to operational use by the U.S. Naval Oceanographic Office. This system is built around a numerical adjoint to the fully nonlinear, nonstationary SWAN code. When provided with measured or artificial "observed" spectral wave data at a location of interest on a given nearshore bathymetry, the adjoint can compute the degree to which spectral energy levels at other locations are correlated with - or "sensitive" to - variations in the observed spectrum. Adjoint output may be used to construct a sensitivity map for the entire domain, tracking correlations of spectral energy throughout the grid. When access is denied to the actual locations of interest, sensitivity maps can be used to determine optimal alternate locations for data collection by identifying regions of greatest sensitivity in the mapped domain. The present study investigates the properties of adjoint-generated sensitivity maps for nearshore wave spectra. The adjoint and forward SWAN models are first used in an idealized test case at Duck, NC, USA, to demonstrate the system's effectiveness at optimizing forecasts of shallow water wave spectra for an inaccessible surf-zone location. Then a series of simulations is conducted for a variety of different initializing conditions, to examine the effects of seasonal changes in wave climate, errors in bathymetry, and variations in size and shape of the inaccessible region of interest. Model skill is quantified using two methods: (1) a more traditional correlation of observed and modeled spectral statistics such as significant wave height, and (2) a recently developed RMS

  17. Algorithmic solution for autonomous vision-based off-road navigation

    NASA Astrophysics Data System (ADS)

    Kolesnik, Marina; Paar, Gerhard; Bauer, Arnold; Ulm, Michael

    1998-07-01

    A vision-based navigation system is a basic tool for providing autonomous operation of unmanned vehicles. For off-road navigation this means that a vehicle equipped with a stereo vision system, and perhaps a laser ranging device, should be able to maintain a high level of autonomy under various illumination conditions and with little a priori information about the underlying scene. The task becomes particularly important for unmanned planetary exploration with autonomous rovers. For example, in the LEDA Moon exploration project currently under study by the European Space Agency (ESA), the vehicle (rover) should, during the autonomous mode, perform the following operations: on-board absolute localization, digital elevation model (DEM) generation, obstacle detection and relative localization, and global path planning and execution. The focus of this article is a computational solution for fully autonomous path planning and path execution. An operational DEM generation method based on stereoscopy is introduced. Self-localization on the DEM and robust natural feature tracking are used as basic navigation steps, supported by inertial sensor systems. The following operations are performed on the basis of stereo image sequences: 3D scene reconstruction, risk map generation, local path planning, camera position update during motion on the basis of landmark tracking, and obstacle avoidance. Experimental verification is done with the help of a laboratory terrain mockup and a high-precision camera mounting device. It is shown that standalone tracking using automatically identified landmarks is robust enough to provide navigation data for further stereoscopic reconstruction of the surrounding terrain. Iterative tracking and reconstruction lead to a complete description of the vehicle path and its surroundings with an accuracy high enough to meet the specifications for autonomous outdoor navigation.

  18. Seismic Imaging of VTI, HTI and TTI based on Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Rusmanugroho, H.; Tromp, J.

    2014-12-01

    Recent studies show that isotropic seismic imaging based on the adjoint method reduces the low-frequency artifacts caused by diving waves, which commonly occur in two-way wave-equation migration such as Reverse Time Migration (RTM). Here, we derive new expressions for the sensitivity kernels of Vertical Transverse Isotropy (VTI) using the Thomsen parameters (ɛ, δ, γ) plus the P- and S-wave speeds (α, β), as well as via the Chen & Tromp (GJI 2005) parameters (A, C, N, L, F). For Horizontal Transverse Isotropy (HTI), these parameters depend on an azimuthal angle φ, with the tilt angle θ equal to 90°, and for Tilted Transverse Isotropy (TTI), these parameters depend on both the azimuth and tilt angles. We calculate sensitivity kernels for each of these two parameterizations. Individual kernels ("images") are numerically constructed from the interaction between the regular and adjoint wavefields in smoothed models which are in practice estimated through Full-Waveform Inversion (FWI). The final image is obtained by summing over all shots, which are well distributed to sample the target model properly. The impedance kernel, which is a sum of the sensitivity kernels of density and the Thomsen or Chen & Tromp parameters, looks crisp and promising for seismic imaging. The other kernels suffer from low-frequency artifacts, similar to traditional seismic imaging conditions. However, all sensitivity kernels are important for estimating the gradient of the misfit function, which, in combination with a standard gradient-based inversion algorithm, is used to minimize the objective function in FWI.

  19. Reciprocal Grids: A Hierarchical Algorithm for Computing Solution X-ray Scattering Curves from Supramolecular Complexes at High Resolution.

    PubMed

    Ginsburg, Avi; Ben-Nun, Tal; Asor, Roi; Shemesh, Asaf; Ringel, Israel; Raviv, Uri

    2016-08-22

    In many biochemical processes, large biomolecular assemblies play important roles. X-ray scattering is a label-free bulk method that can probe the structure of large self-assembled complexes in solution. As we demonstrate in this paper, solution X-ray scattering can measure complex supramolecular assemblies at high sensitivity and resolution. At high resolution, however, data analysis of larger complexes is computationally demanding. We present an efficient method to compute the scattering curves from complex structures over a wide range of scattering angles. In our computational method, structures are defined as hierarchical trees in which repeating subunits are docked into their assembly symmetries, describing the manner in which subunits repeat in the structure (in other words, the locations and orientations of the repeating subunits). The amplitude of the assembly is calculated by computing the amplitudes of the basic subunits on 3D reciprocal-space grids, moving up in the hierarchy, calculating the grids of larger structures, and repeating this process for all the leaves and nodes of the tree. For very large structures, we developed a hybrid method that sums grids of smaller subunits in order to avoid numerical artifacts. We developed protocols for obtaining high-resolution solution X-ray scattering data from taxol-free microtubules over a wide range of scattering angles. We then validated our method by adequately modeling these high-resolution data. The higher speed and accuracy of our method, compared with existing methods, are demonstrated for smaller structures: a short microtubule and tobacco mosaic virus. Our algorithm may be integrated into various structure-prediction computational tools, simulations, and theoretical models, and provides a means for testing their predicted structural models by calculating the expected X-ray scattering curve and comparing it with experimental data. PMID:27410762
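
    The hierarchical-tree idea can be illustrated with a toy recursion: each node holds either a precomputed subunit amplitude on a reciprocal-space grid or a list of child subunits with docking transformations, and the assembly amplitude is obtained by summing phase-shifted copies while moving up the tree. The 1-D grid, translation-only transformations and phase factors below are schematic placeholders, not the authors' implementation.

    ```python
    import numpy as np

    class Node:
        """Assembly-tree node: a leaf subunit with a precomputed amplitude on the
        reciprocal-space grid, or an internal node whose children are repeated
        at given positions (translations only, for simplicity)."""
        def __init__(self, amplitude=None, children=None):
            self.amplitude = amplitude          # complex array over q-grid (leaf only)
            self.children = children or []      # list of (Node, translation) pairs

        def assemble(self, q):
            if self.amplitude is not None:
                return self.amplitude
            total = np.zeros_like(q, dtype=complex)
            for child, t in self.children:
                # Translating a subunit multiplies its amplitude by a phase factor.
                total += child.assemble(q) * np.exp(1j * q * t)
            return total

    q = np.linspace(0.01, 1.0, 200)             # toy 1-D reciprocal-space grid
    monomer = Node(amplitude=np.ones_like(q, dtype=complex))
    dimer = Node(children=[(monomer, 0.0), (monomer, 25.0)])
    fiber = Node(children=[(dimer, 80.0 * k) for k in range(10)])
    intensity = np.abs(fiber.assemble(q)) ** 2
    print(intensity[:5])
    ```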

  20. A weighted adjoint-source for weight-window generation by means of a linear tally combination

    SciTech Connect

    Sood, Avneet; Booth, Thomas E; Solomon, Clell J

    2009-01-01

    A new importance estimation technique has been developed that allows weight-window optimization for a linear combination of tallies. This technique has been implemented in a local version of MCNP and effectively weights the adjoint source term for each tally in the combination. Optimizing weight-window parameters for the linear tally combination allows the user to optimize weight windows for multiple regions at once. In this work, we present results for an analytic three-tally-region test problem and for a flux calculation on a 100,000-voxel oil-well logging tool problem.

  1. Nonlinear acceleration of a continuous finite element discretization of the self-adjoint angular flux form of the transport equation

    SciTech Connect

    Sanchez, R.

    2012-07-01

    Nonlinear acceleration of a continuous finite element (CFE) discretization of the transport equation requires a modification of the transport solution in order to achieve local conservation, a condition used in nonlinear acceleration to define the stopping criterion. In this work we implement a coarse-mesh finite difference acceleration for a CFE discretization of the second-order self-adjoint angular flux (SAAF) form of the transport equation and use a post-processing step to enforce local conservation. Numerical results are given for one-group source calculations of one-dimensional slabs. We also give a formal derivation of the boundary conditions for the SAAF. (authors)

  2. Nonlinear Acceleration of a Continuous Finite Element Discretization of the Self-Adjoint Angular Flux Form of the Transport Equation

    SciTech Connect

    Richard Sanchez; Cristian Rabiti; Yaqi Wang

    2013-11-01

    Nonlinear acceleration of a continuous finite element (CFE) discretization of the transport equation requires a modification of the transport solution in order to achieve local conservation, a condition used in nonlinear acceleration to define the stopping criterion. In this work we implement a coarse-mesh finite difference acceleration for a CFE discretization of the second-order self-adjoint angular flux (SAAF) form of the transport equation and use a postprocessing to enforce local conservation. Numerical results are given for one-group source calculations of one-dimensional slabs. We also give a novel formal derivation of the boundary conditions for the SAAF.

  3. Direct Linearization and Adjoint Approaches to Evaluation of Atmospheric Weighting Functions and Surface Partial Derivatives: General Principles, Synergy and Areas of Application

    NASA Technical Reports Server (NTRS)

    Ustinov, Eugene A.

    2006-01-01

    This slide presentation reviews the observable radiances as functions of atmospheric and surface parameters; the mathematics of atmospheric weighting functions (WFs) and surface partial derivatives (PDs) is presented, along with the equation of the forward radiative transfer (RT) problem. For non-scattering atmospheres this can be done analytically, and all WFs and PDs can be computed analytically using the direct linearization approach. For scattering atmospheres, in the general case, the solution of the forward RT problem can be obtained only numerically, but only two numerical solutions are needed, one of the forward RT problem and one of the adjoint RT problem, to compute all the WFs and PDs of interest. In this presentation we discuss applications of both the linearization and adjoint approaches.

  4. Computational solution of spike overlapping using data-based subtraction algorithms to resolve synchronous sympathetic nerve discharge

    PubMed Central

    Su, Chun-Kuei; Chiang, Chia-Hsun; Lee, Chia-Ming; Fan, Yu-Pei; Ho, Chiu-Ming; Shyu, Liang-Yu

    2013-01-01

    Sympathetic nerves conveying central commands to regulate visceral functions often display activities in synchronous bursts. To understand how individual fibers fire synchronously, we established “oligofiber recording techniques” to record “several” nerve fiber activities simultaneously, using in vitro splanchnic sympathetic nerve–thoracic spinal cord preparations of neonatal rats as experimental models. While distinct spike potentials were easily recorded from collagenase-dissociated sympathetic fibers, a problem arising from synchronous nerve discharges is a higher incidence of complex waveforms resulting from spike overlapping. Because commercial software does not provide an explicit solution for spike overlapping, a series of custom-made LabVIEW programs incorporating MATLAB scripts was written for spike sorting. Spikes were represented as data points after waveform feature extraction and automatically grouped by k-means clustering, followed by principal component analysis (PCA) to verify their waveform homogeneity. For dissimilar waveforms with excessive Hotelling's T2 distances from the cluster centroids, a unique data-based subtraction algorithm (SA) was used to determine whether they were complex waveforms resulting from superimposing a spike pattern close to the cluster centroid on other signals observable in the original recordings. In comparison with commercial software, higher accuracy was achieved by analyses using our algorithms on synthetic data containing synchronous spiking and complex waveforms. Moreover, both T2-selected and SA-retrieved spikes were combined as unit activities. Quantitative analyses were performed to evaluate whether unit activities truly originated from single fibers. We conclude that application of our programs can help to resolve synchronous sympathetic nerve discharges (SND). PMID:24198782
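
    The clustering stage described above (feature extraction, k-means grouping, PCA-based verification of cluster homogeneity) can be sketched in a few lines. The synthetic spike waveforms and the use of scikit-learn here are illustrative assumptions, not the authors' LabVIEW/MATLAB pipeline.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)

    # Two synthetic spike templates (32-sample waveforms) plus noise.
    t = np.linspace(0, 1, 32)
    templates = [np.exp(-((t - 0.3) / 0.05) ** 2), -np.exp(-((t - 0.6) / 0.08) ** 2)]
    spikes = np.array([templates[rng.integers(2)] + 0.05 * rng.standard_normal(32)
                       for _ in range(200)])

    # Cluster spike waveforms, then inspect cluster homogeneity in PCA space.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(spikes)
    scores = PCA(n_components=2).fit_transform(spikes)

    for k in range(2):
        spread = scores[labels == k].std(axis=0)
        print(f"cluster {k}: n={np.sum(labels == k)}, PC spread={spread}")
    ```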

  5. The development of three-dimensional adjoint method for flow control with blowing in convergent-divergent nozzle flows

    NASA Astrophysics Data System (ADS)

    Sikarwar, Nidhi

    multiple experiments or numerical simulations. Alternatively, an inverse design method can be used. An adjoint optimization method can be used to determine the optimum blowing rate. It is shown that the method works both for geometry optimization and for active control of the flow in order to deflect the flow in desirable ways. An adjoint optimization method is described. It is used to determine the blowing distribution in the diverging section of a convergent-divergent nozzle that gives a desired pressure distribution in the nozzle. Both the direct and adjoint problems and their associated boundary conditions are developed. The adjoint method is used to determine the blowing distribution required to minimize the shock strength in the nozzle, to achieve a known target pressure, and to achieve a pressure close to that of an ideally expanded flow. A multi-block structured solver is developed to calculate the flow solution and associated adjoint variables. Two- and three-dimensional calculations are performed for the internal and external nozzle domains. A two-step MacCormack scheme based on a predictor-corrector technique was used for some calculations. Four- and five-stage Runge-Kutta schemes are also used to march artificially in time. A modified Runge-Kutta scheme is used to accelerate the convergence to a steady state. Second-order artificial dissipation has been added to stabilize the calculations. The steepest descent method has been used for the optimization of the blowing velocity after the gradients of the cost function with respect to the blowing velocity are calculated using the adjoint method. Several examples are given of the optimization of blowing using the adjoint method.

  6. Adjoint-based sensitivity analysis for reactor safety applications

    SciTech Connect

    Parks, C.V.

    1986-08-01

    The application and usefulness of an adjoint-based methodology for performing sensitivity analysis on reactor safety computer codes is investigated. The adjoint-based methodology, referred to as differential sensitivity theory (DST), provides first-order derivatives of the calculated quantities of interest (responses) with respect to the input parameters. The basic theoretical development of DST is presented along with the needed general extensions for consideration of model discontinuities and a variety of useful response definitions. A simple analytic problem is used to highlight the general DST procedures. Finally, the DST procedures presented in this work are applied to two highly nonlinear reactor accident analysis codes: (1) FASTGAS, a relatively small code for analysis of a loss-of-decay-heat-removal accident in a gas-cooled fast reactor, and (2) an existing code called VENUS-II, which has been employed for analyzing the core disassembly phase of a hypothetical fast reactor accident. The two codes differ both in complexity and in the facets of DST that can be illustrated. Sensitivity results from the adjoint codes ADJGAS and VENUS-ADJ are verified with direct recalculations using perturbed input parameters. The effectiveness of the DST results for parameter ranking, prediction of response changes, and uncertainty analysis is illustrated. The conclusion drawn from this study is that DST is a viable, cost-effective methodology for accurate sensitivity analysis. In addition, a useful sensitivity tool for the fast reactor safety area has been developed in VENUS-ADJ. Future work needs to concentrate on combining the accurate first-order derivatives/results from DST with existing methods (based solely on direct recalculations) for higher-order response surfaces.
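
    First-order adjoint sensitivities of this kind can be used directly for response-change prediction, parameter ranking and first-order uncertainty propagation. The sketch below uses made-up sensitivities, perturbations and uncertainties purely for illustration of that use.

    ```python
    import math

    # Hypothetical first-order sensitivities dR/dp_i from an adjoint (DST) calculation.
    sensitivities = {"inlet_temperature": 0.8, "decay_heat": 2.5, "flow_rate": -1.2}
    # Hypothetical parameter perturbations and 1-sigma uncertainties.
    perturbations = {"inlet_temperature": 2.0, "decay_heat": 0.5, "flow_rate": -0.3}
    uncertainties = {"inlet_temperature": 1.0, "decay_heat": 0.2, "flow_rate": 0.4}

    # Predicted first-order response change: dR = sum_i (dR/dp_i) * dp_i
    delta_R = sum(sensitivities[p] * perturbations[p] for p in sensitivities)

    # First-order uncertainty, assuming uncorrelated parameters.
    sigma_R = math.sqrt(sum((sensitivities[p] * uncertainties[p]) ** 2
                            for p in sensitivities))

    # Parameter ranking by absolute contribution to the response change.
    ranking = sorted(sensitivities,
                     key=lambda p: abs(sensitivities[p] * perturbations[p]),
                     reverse=True)
    print(delta_R, sigma_R, ranking)
    ```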

  7. Advances in Global Adjoint Tomography -- Massive Data Assimilation

    NASA Astrophysics Data System (ADS)

    Ruan, Y.; Lei, W.; Bozdag, E.; Lefebvre, M. P.; Smith, J. A.; Krischer, L.; Tromp, J.

    2015-12-01

    Azimuthal anisotropy and anelasticity are key to understanding a myriad of processes in Earth's interior. Resolving these properties requires accurate simulations of seismic wave propagation in complex 3-D Earth models and an iterative inversion strategy. In the wake of successes in regional studies (e.g., Chen et al., 2007; Tape et al., 2009, 2010; Fichtner et al., 2009, 2010; Chen et al., 2010; Zhu et al., 2012, 2013; Chen et al., 2015), we are employing adjoint tomography based on a spectral-element method (Komatitsch & Tromp 1999, 2002) on a global scale using the supercomputer "Titan" at Oak Ridge National Laboratory. After 15 iterations, we have obtained a high-resolution transversely isotropic Earth model (M15) using traveltime data from 253 earthquakes. To obtain higher-resolution images of the emerging new features, and to prepare the inversion for azimuthal anisotropy and anelasticity, we expanded the original dataset with approximately 4,220 additional global earthquakes (Mw 5.5-7.0) occurring between 1995 and 2014, and downloaded 300-minute-long time series for all available data archived at the IRIS Data Management Center, ORFEUS, and F-net. Ocean-bottom seismograph data from the last decade are also included to maximize data coverage. In order to handle the huge dataset and solve the I/O bottleneck in global adjoint tomography, we implemented a Python-based parallel data-processing workflow built on the newly developed Adaptable Seismic Data Format (ASDF). With the help of the data-selection tool MUSTANG developed by IRIS, we cleaned our dataset and assembled event-based ASDF files for parallel processing. We have started Centroid Moment Tensor (CMT) inversions for all 4,220 earthquakes with the latest model M15, and selected high-quality data for measurement. We will statistically investigate each channel using synthetic seismograms calculated in M15 for updated CMTs and identify problematic channels. In addition to data screening, we also modified

  8. A self-adjoint decomposition of the radial momentum operator

    NASA Astrophysics Data System (ADS)

    Liu, Q. H.; Xiao, S. F.

    2015-12-01

    With acceptance of Dirac's observation that canonical quantization entails using Cartesian coordinates, we examine the operator e_r P_r rather than P_r itself and demonstrate that there is a decomposition of e_r P_r into a difference of two self-adjoint but noncommuting operators, of which one is the total momentum and the other is the transverse one. This study renders the operator P_r indirectly measurable and physically meaningful, offering an explanation of why the mean value of P_r over a quantum mechanical state makes sense, and supporting Dirac's claim that P_r "is real and is the true momentum conjugate to r".

  9. Examining Tropical Cyclone - Kelvin Wave Interactions using Adjoint Diagnostics

    NASA Astrophysics Data System (ADS)

    Reynolds, C. A.; Doyle, J. D.; Hong, X.

    2015-12-01

    Adjoint-based tools can provide valuable insight into the mechanisms that influence the evolution and predictability of atmospheric phenomena, as they allow for the efficient and rigorous computation of forecast sensitivity to changes in the initial state. We apply adjoint-based tools from the non-hydrostatic Coupled Atmosphere/Ocean Mesoscale Prediction System (COAMPS) to explore the initial-state sensitivity and interactions between a tropical cyclone and atmospheric equatorial waves associated with the Madden Julian Oscillation (MJO) in the Indian Ocean during the DYNAMO field campaign. The development of Tropical Cyclone 5 (TC05) coincided with the passage of an equatorial Kelvin wave and westerly wind burst associated with an MJO that developed in the Indian Ocean in late November 2011, but it was unclear if and how one affected the other. COAMPS 24-h and 36-h adjoint sensitivities are analyzed for both TC05 and the equatorial waves to understand how the evolution of each system is sensitive to the other. The sensitivity of equatorial westerlies in the western Indian Ocean on 23 November shares characteristics with the classic Gill (1980) Rossby and Kelvin wave response to symmetric heating about the equator, including symmetric cyclonic circulations to the north and south of the westerlies, and enhanced heating in the area of convergence between the equatorial westerlies and easterlies. In addition, there is sensitivity in the Bay of Bengal associated with the cyclonic circulation that eventually develops into TC05. At the same time, the developing TC05 system shows strongest sensitivity to local wind and heating perturbations, but sensitivity to the equatorial westerlies is also clear. On 24 November, when the Kelvin wave is immediately south of the developing tropical cyclone, both phenomena are sensitive to each other. On 25 November TC05 no longer shows sensitivity to the Kelvin wave, while the Kelvin Wave still exhibits some weak sensitivity to TC05. In

  10. An object-oriented and quadrilateral-mesh based solution adaptive algorithm for compressible multi-fluid flows

    NASA Astrophysics Data System (ADS)

    Zheng, H. W.; Shu, C.; Chew, Y. T.

    2008-07-01

    In this paper, an object-oriented, quadrilateral-mesh based solution-adaptive algorithm for the simulation of compressible multi-fluid flows is presented. The HLLC scheme (Harten, Lax and van Leer approximate Riemann solver with the Contact wave restored) is extended to adaptively solve compressible multi-fluid flows under complex geometry on unstructured meshes, and to second-order accuracy by using MUSCL extrapolation. The node, edge and cell objects are arranged in such an object-oriented manner that each of them inherits from a basic object. A custom doubly linked list is designed to manage these objects, so that inserting new objects and removing existing objects (nodes, edges and cells) is independent of the number of objects, with complexity of only O(1). In addition, cells at different levels are stored in separate lists. This avoids the recursive calculation of the solution on mother (non-leaf) cells. Thus, high efficiency is obtained due to these features. Besides, compared with other cell-edge adaptive methods, the separation of nodes reduces the memory required for redundant nodes, especially in cases where the number of levels is large or the space dimension is three. Five two-dimensional examples are used to examine its performance: a vortex evolution problem, an interface-only problem on structured and unstructured meshes, an underwater bubble explosion, bubble-shock interaction, and shock-interface interaction inside a cylindrical vessel. Numerical results indicate that there is no oscillation of pressure or velocity across the interface, and that it is feasible to apply the method to compressible multi-fluid flows with large density ratios (1000) and strong shock waves (pressure ratio of 10,000) interacting with the interface.
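
    The O(1) insertion/removal behavior attributed to the custom doubly linked list can be illustrated with a minimal sketch; the payloads and class names here are placeholders, not the authors' code.

    ```python
    class ListNode:
        """Doubly linked list node holding a mesh object (cell, edge, or node)."""
        __slots__ = ("payload", "prev", "next")
        def __init__(self, payload):
            self.payload, self.prev, self.next = payload, None, None

    class ObjectList:
        """Insertion and removal cost O(1), independent of the number of objects."""
        def __init__(self):
            self.head = self.tail = None

        def append(self, payload):
            node = ListNode(payload)
            if self.tail is None:
                self.head = self.tail = node
            else:
                node.prev, self.tail.next = self.tail, node
                self.tail = node
            return node                      # handle kept by the owning cell/edge

        def remove(self, node):
            if node.prev: node.prev.next = node.next
            else:         self.head = node.next
            if node.next: node.next.prev = node.prev
            else:         self.tail = node.prev

    cells = ObjectList()
    handle = cells.append("cell_42")
    cells.remove(handle)                     # O(1), no search required
    ```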

  11. Comparison of the Monte Carlo adjoint-weighted and differential operator perturbation methods

    SciTech Connect

    Kiedrowski, Brian C; Brown, Forrest B

    2010-01-01

    Two perturbation theory methodologies are implemented for k-eigenvalue calculations in the continuous-energy Monte Carlo code, MCNP6. A comparison of the accuracy of these techniques, the differential operator and adjoint-weighted methods, is performed numerically and analytically. Typically, the adjoint-weighted method shows better performance over a larger range; however, there are exceptions.

  12. An implicit dispersive transport algorithm for the US Geological Survey MOC3D solute-transport model

    USGS Publications Warehouse

    Kipp, K.L., Jr.; Konikow, L.F.; Hornberger, G.Z.

    1998-01-01

    This report documents an extension to the U.S. Geological Survey MOC3D transport model that incorporates an implicit-in-time difference approximation for the dispersive transport equation, including source/sink terms. The original MOC3D transport model (Version 1) uses the method of characteristics to solve the transport equation on the basis of the velocity field. The original MOC3D solution algorithm incorporates particle tracking to represent advective processes and an explicit finite-difference formulation to calculate dispersive fluxes. The new implicit procedure eliminates several stability criteria required for the previous explicit formulation. This allows much larger transport time increments to be used in dispersion-dominated problems. The decoupling of advective and dispersive transport in MOC3D, however, is unchanged. With the implicit extension, the MOC3D model is upgraded to Version 2. A description of the numerical method of the implicit dispersion calculation, the data-input requirements and output options, and the results of simulator testing and evaluation are presented. Version 2 of MOC3D was evaluated for the same set of problems used for verification of Version 1. These test results indicate that the implicit calculation of Version 2 matches the accuracy of Version 1, yet is more efficient than the explicit calculation for transport problems that are characterized by a grid Peclet number less than about 1.0.
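
    The gain from the implicit formulation can be seen in a one-dimensional sketch: a backward-Euler step for dispersion leads to a linear system that remains stable for arbitrarily large time increments. The grid, coefficients and Dirichlet boundary treatment below are illustrative only, not the MOC3D discretization.

    ```python
    import numpy as np

    def implicit_dispersion_step(c, D, dx, dt):
        """One backward-Euler step of 1-D dispersion dc/dt = D d2c/dx2,
        with fixed-concentration boundaries; stable for any dt."""
        n = len(c)
        r = D * dt / dx**2
        A = np.zeros((n, n))
        A[0, 0] = A[-1, -1] = 1.0                      # Dirichlet boundaries
        for i in range(1, n - 1):
            A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1.0 + 2.0 * r, -r
        return np.linalg.solve(A, c)

    c = np.zeros(51); c[25] = 1.0                      # initial solute pulse
    for _ in range(10):
        # r = 5 here, far above the explicit stability limit of 0.5
        c = implicit_dispersion_step(c, D=1.0e-3, dx=0.1, dt=50.0)
    print(c.max())
    ```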

  13. A complete solution classification and unified algorithmic treatment for the one- and two-step asymmetric S-transverse mass event scale statistic

    NASA Astrophysics Data System (ADS)

    Walker, Joel W.

    2014-08-01

    The M_T2, or "s-transverse mass", statistic was developed to associate a parent mass scale with a missing transverse energy signature, given that escaping particles are generally expected in pairs, while collider experiments are sensitive to just a single transverse momentum vector sum. This document focuses on the generalized extension of that statistic to asymmetric one- and two-step decay chains, with arbitrary child particle masses and upstream missing transverse momentum. It provides a unified theoretical formulation, a complete solution classification, a taxonomy of critical points, and a technical algorithmic prescription for treatment of the event scale. An implementation of the described algorithm is available for download, and is also a deployable component of the author's selection cut software package AEACuS (Algorithmic Event Arbiter and Cut Selector). Appendices address combinatoric event assembly, algorithm validation, and complete pseudocode.

  14. Adjoint design sensitivity analysis of reduced atomic systems using generalized Langevin equation for lattice structures

    SciTech Connect

    Kim, Min-Geun; Jang, Hong-Lae; Cho, Seonho

    2013-05-01

    An efficient adjoint design sensitivity analysis method is developed for reduced atomic systems. A reduced atomic system and the adjoint system are constructed in a locally confined region, utilizing the generalized Langevin equation (GLE) for periodic lattice structures. Due to the translational symmetry of lattice structures, the size of the time-history kernel function that accounts for the boundary effects of the reduced atomic system can be reduced to a single atom's degrees of freedom. For problems with highly nonlinear design variables, the finite difference method is impractical because of its inefficiency and inaccuracy. The adjoint method, however, is very efficient regardless of the number of design variables, since only one additional time integration is required for the adjoint GLE. Through numerical examples, the derived adjoint sensitivity is shown to be accurate and efficient by comparison with finite-difference sensitivities.
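
    Comparisons of adjoint and finite-difference sensitivities of this kind are usually done with a gradient check; the generic sketch below uses a toy objective with a known analytic gradient standing in for the GLE-based system, and also shows why finite differences scale poorly with the number of design variables.

    ```python
    import numpy as np

    def objective(x):
        return float(np.sum((x - 1.0) ** 2) + 0.1 * np.sum(x ** 4))

    def adjoint_gradient(x):
        # For this toy objective the "adjoint" gradient is known analytically.
        return 2.0 * (x - 1.0) + 0.4 * x ** 3

    def fd_gradient(f, x, h=1e-6):
        # Central differences: one pair of function evaluations per design
        # variable, which is what makes FD costly when there are many variables.
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x); e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
        return g

    x = np.array([0.3, -0.7, 1.5])
    print(np.max(np.abs(adjoint_gradient(x) - fd_gradient(objective, x))))
    ```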

  15. Probability density adjoint for sensitivity analysis of the Mean of Chaos

    SciTech Connect

    Blonigan, Patrick J. Wang, Qiqi

    2014-08-01

    Sensitivity analysis, especially adjoint based sensitivity analysis, is a powerful tool for engineering design which allows for the efficient computation of sensitivities with respect to many parameters. However, these methods break down when used to compute sensitivities of long-time averaged quantities in chaotic dynamical systems. This paper presents a new method for sensitivity analysis of ergodic chaotic dynamical systems, the density adjoint method. The method involves solving the governing equations for the system's invariant measure and its adjoint on the system's attractor manifold rather than in phase-space. This new approach is derived for and demonstrated on one-dimensional chaotic maps and the three-dimensional Lorenz system. It is found that the density adjoint computes very finely detailed adjoint distributions and accurate sensitivities, but suffers from large computational costs.

  16. Limitations of Adjoint-Based Optimization for Separated Flows

    NASA Astrophysics Data System (ADS)

    Otero, J. Javier; Sharma, Ati; Sandberg, Richard

    2015-11-01

    Cabin noise is generated by the transmission of turbulent pressure fluctuations through a vibrating panel and can lead to fatigue. In the present study, we model this problem by using DNS to simulate the flow separating off a backward-facing step and interacting with a plate downstream of the step. An adjoint formulation of the full compressible Navier-Stokes equations with varying viscosity is used to calculate the optimal control required to minimize the fluid-structure-acoustic interaction with the plate. To achieve noise reduction, a cost function in wavenumber space is chosen to minimize the excitation of the lower structural modes. To ensure the validity of time-averaged cost functions, it is essential that the time horizon be long enough to be a representative sample of the statistical behaviour of the flow field. The results from the current study show that this scenario is not always feasible for separated flows, because the chaotic behaviour of turbulence surpasses the ability of adjoint-based methods to compute time-dependent sensitivities of the flow.

  17. Application of Parallel Adjoint-Based Error Estimation and Anisotropic Grid Adaptation for Three-Dimensional Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Park, M. A.; Jones, W. T.; Hammond, D. P.; Nielsen, E. J.

    2005-01-01

    This paper demonstrates the extension of error estimation and adaptation methods to parallel computations enabling larger, more realistic aerospace applications and the quantification of discretization errors for complex 3-D solutions. Results were shown for an inviscid sonic-boom prediction about a double-cone configuration and a wing/body segmented leading edge (SLE) configuration where the output function of the adjoint was pressure integrated over a part of the cylinder in the near field. After multiple cycles of error estimation and surface/field adaptation, a significant improvement in the inviscid solution for the sonic boom signature of the double cone was observed. Although the double-cone adaptation was initiated from a very coarse mesh, the near-field pressure signature from the final adapted mesh compared very well with the wind-tunnel data which illustrates that the adjoint-based error estimation and adaptation process requires no a priori refinement of the mesh. Similarly, the near-field pressure signature for the SLE wing/body sonic boom configuration showed a significant improvement from the initial coarse mesh to the final adapted mesh in comparison with the wind tunnel results. Error estimation and field adaptation results were also presented for the viscous transonic drag prediction of the DLR-F6 wing/body configuration, and results were compared to a series of globally refined meshes. Two of these globally refined meshes were used as a starting point for the error estimation and field-adaptation process where the output function for the adjoint was the total drag. The field-adapted results showed an improvement in the prediction of the drag in comparison with the finest globally refined mesh and a reduction in the estimate of the remaining drag error. The adjoint-based adaptation parameter showed a need for increased resolution in the surface of the wing/body as well as a need for wake resolution downstream of the fuselage and wing trailing edge

  18. A new Green's function Monte Carlo algorithm for the estimation of the derivative of the solution of Helmholtz equation subject to Neumann and mixed boundary conditions

    NASA Astrophysics Data System (ADS)

    Chatterjee, Kausik

    2016-06-01

    The objective of this paper is the extension and application of a newly developed Green's function Monte Carlo (GFMC) algorithm to the estimation of the derivative of the solution of the one-dimensional (1D) Helmholtz equation subject to Neumann and mixed boundary conditions. The traditional GFMC approach for the solution of partial differential equations subject to these boundary conditions involves "reflecting boundaries", resulting in relatively large computational times. My work, inspired by the work of K.K. Sabelfeld, is philosophically different in that there is no requirement for reflection at these boundaries. The underlying feature of this algorithm is the elimination of reflecting boundaries through the use of novel Green's functions that mimic the boundary conditions of the problem of interest. My past work has involved the application of this algorithm to the estimation of the solution of the 1D Laplace equation, the Helmholtz equation and the modified Helmholtz equation. In this work, the algorithm has been adapted to the estimation of the derivative of the solution, which is a very important development. In the traditional approach involving reflection, to estimate the derivative at a certain number of points, one has to estimate the solution a priori at a larger number of points. In the case of a one-dimensional problem, for instance, to obtain the derivative of the solution at a point, one has to obtain the solution at two points, one on each side of the point of interest. These points have to be close enough that the first-order approximation of the derivative operator is valid, and at the same time the actual difference between the solutions at these two points has to be at least an order of magnitude larger than the statistical error in the estimation of the solution, thus requiring a significantly larger number of random walks than required for the estimation of the solution. In this new approach

  19. Joint inversion of seismic velocities and source location without rays using the truncated Newton and the adjoint-state method

    NASA Astrophysics Data System (ADS)

    Virieux, J.; Bretaudeau, F.; Metivier, L.; Brossier, R.

    2013-12-01

    Simultaneous inversion of seismic velocities and source parameters has been a long-standing challenge in seismology since the first attempts to mitigate the trade-off between the very different parameters influencing travel times (Spencer and Gubbins 1980, Pavlis and Booker 1980), following the early developments of the 1970s (Aki et al. 1976, Aki and Lee 1976, Crosson 1976). There is a strong trade-off between earthquake source positions, origin times and velocities during the tomographic inversion: mitigating these trade-offs is usually carried out empirically (Lemeur et al. 1997). This procedure is not optimal and may lead to errors in the velocity reconstruction as well as in the source localization. For a better simultaneous estimation in such a multi-parametric reconstruction problem, one may benefit from improved local optimization such as the full Newton method, where the Hessian helps balance the different physical parameter quantities and improves the coverage at the point of reconstruction. Unfortunately, the full Hessian operator is not easily computed for large models and large datasets. Truncated Newton (TCN) is an alternative optimization approach (Métivier et al. 2012) that allows resolution of the normal equation H Δm = -g using a matrix-free conjugate gradient algorithm. It only requires the ability to compute the gradient of the misfit function and Hessian-vector products. Traveltime maps can be computed in the whole domain by numerical modeling (Vidale 1998, Zhao 2004). The gradient and the Hessian-vector products for velocities can be computed without ray tracing using first- and second-order adjoint-state methods, for the cost of one and two additional modeling steps, respectively (Plessix 2006, Métivier et al. 2012). Reciprocity allows accurate computation of the gradient and the full Hessian for each source coordinate and for the origin times. The resolution of the problem is then done through two nested loops. The model update Δm is
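
    The inner step of such a truncated Newton scheme, solving H Δm = -g with a matrix-free conjugate gradient that needs only Hessian-vector products, can be sketched generically as follows. The Hessian-vector product is a toy callable here, standing in for the second-order adjoint-state computation.

    ```python
    import numpy as np

    def truncated_cg(hess_vec, g, max_inner=20, tol=1e-8):
        """Approximately solve H dm = -g with conjugate gradients, using only
        Hessian-vector products (no explicit Hessian)."""
        dm = np.zeros_like(g)
        r = -g.copy()                 # residual of H dm = -g at dm = 0
        p = r.copy()
        rr = r @ r
        for _ in range(max_inner):
            Hp = hess_vec(p)
            alpha = rr / (p @ Hp)
            dm += alpha * p
            r -= alpha * Hp
            rr_new = r @ r
            if np.sqrt(rr_new) < tol:
                break
            p = r + (rr_new / rr) * p
            rr = rr_new
        return dm

    # Toy quadratic misfit with known Hessian H and gradient g.
    H = np.array([[4.0, 1.0], [1.0, 3.0]])
    g = np.array([1.0, 2.0])
    dm = truncated_cg(lambda v: H @ v, g)
    print(dm, np.allclose(H @ dm, -g))
    ```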

  20. Adjoint-state inversion of electric resistivity tomography data of seawater intrusion at the Argentona coastal aquifer (Spain)

    NASA Astrophysics Data System (ADS)

    Fernández-López, Sheila; Carrera, Jesús; Ledo, Juanjo; Queralt, Pilar; Luquot, Linda; Martínez, Laura; Bellmunt, Fabián

    2016-04-01

    Seawater intrusion in aquifers is a complex phenomenon that can be characterized with the help of electrical resistivity tomography (ERT) because of the low resistivity of seawater, which underlies the freshwater floating on top. The problem is complex because of the need for joint inversion of electrical and hydraulic (density-dependent flow) data. Here we present an adjoint-state algorithm to treat the electrical data. This method is a common technique for obtaining derivatives of an objective function that depends on potentials with respect to model parameters. Its main advantages are its simplicity in stationary problems and the reduction of computational cost with respect to other methodologies. The relationship between chloride concentration and the resistivity values of the field is well known. These resistivities are in turn related to the potentials measured using ERT. Taking this into account, it is possible to define the different resistivity zones from field data of the potential distribution on the basis of inverse problem theory. The study area is situated in Argentona (Baix Maresme, Catalonia), where the chloride concentrations measured in some wells are anomalously high. The adjoint-state method is used to invert the measured data using a new finite-element code written in C++ within the open-source framework Kratos. Finally, the information obtained numerically with our code will be checked against results obtained with other codes.

  1. A Solution Method of Job-shop Scheduling Problems by the Idle Time Shortening Type Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Ida, Kenichi; Osawa, Akira

    In this paper, we propose a new idle-time shortening method for job-shop scheduling problems (JSPs) and embed it in a genetic algorithm (GA). The purpose of a JSP is to find a schedule with the minimum makespan. We posit that reducing machine idle time is an effective way to improve the makespan. The left shift is a well-known existing algorithm for shortening idle time, but it cannot always move an operation into an idle interval, so some idle time is not shortened by the left shift. We propose two algorithms that shorten such idle time, and then combine these algorithms with the reversal of a schedule. We apply the GA with these algorithms to benchmark problems and show its effectiveness.
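
    The role of idle time can be illustrated with a minimal single-machine sketch: each operation in a fixed sequence starts as early as its release time and the previous operation allow, and any remaining gaps are the idle intervals a left-shift-style move would try to close. The data and the release-time constraint are illustrative; a full JSP left shift must also respect job precedence across machines.

    ```python
    def schedule_in_sequence(ops):
        """ops: list of (release_time, duration) in the current sequence order.
        Returns start times; gaps between an operation's start and the previous
        finish time are machine idle intervals."""
        starts, machine_free = [], 0.0
        for release, duration in ops:
            start = max(release, machine_free)   # cannot start before release
            starts.append(start)
            machine_free = start + duration
        return starts

    ops = [(0.0, 3.0), (5.0, 2.0), (4.0, 1.0)]   # hypothetical operations
    print(schedule_in_sequence(ops))              # idle gap remains before op 2
    ```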

  2. Mass anomalous dimension in SU(2) with two adjoint fermions

    SciTech Connect

    Bursa, Francis; Del Debbio, Luigi; Keegan, Liam; Pica, Claudio; Pickup, Thomas

    2010-01-01

    We study SU(2) lattice gauge theory with two flavors of Dirac fermions in the adjoint representation. We measure the running of the coupling in the Schroedinger functional scheme and find it is consistent with existing results. We discuss how systematic errors affect the evidence for an infrared fixed point (IRFP). We present the first measurement of the running of the mass in the Schroedinger functional scheme. The anomalous dimension of the chiral condensate, which is relevant for phenomenological applications, can be easily extracted from the running of the mass, under the assumption that the theory has an IRFP. At the current level of accuracy, we can estimate 0.05 < γ < 0.56 at the IRFP.

  3. Infrared regime of SU(2) with one adjoint Dirac flavor

    NASA Astrophysics Data System (ADS)

    Athenodorou, Andreas; Bennett, Ed; Bergner, Georg; Lucini, Biagio

    2015-06-01

    SU(2) gauge theory with one Dirac flavor in the adjoint representation is investigated on a lattice. Initial results for the gluonic and mesonic spectrum, static potential from Wilson and Polyakov loops, and the anomalous dimension of the fermionic condensate from the Dirac mode number are presented. The results found are not consistent with conventional confining behavior, pointing instead tentatively towards a theory lying within or very near the onset of the conformal window, with the anomalous dimension of the fermionic condensate in the range 0.9 ≲ γ* ≲ 0.95. The implications of our work for building a viable theory of strongly interacting dynamics beyond the standard model are discussed.

  4. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    NASA Astrophysics Data System (ADS)

    Shadid, J. N.; Smith, T. M.; Cyr, E. C.; Wildey, T. M.; Pawlowski, R. P.

    2016-09-01

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. In this respect, understanding of the numerical error and of the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In this study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds-averaged Navier-Stokes approximation to turbulent fluid flow and heat transfer, using a spatial discretization based on implicit fully-coupled stabilized FE methods. Initial results are presented that show the promise of these computational techniques in the context of nuclear-reactor-relevant prototype thermal-hydraulics problems.

  5. Optimizing spectral wave estimates with adjoint-based sensitivity maps

    NASA Astrophysics Data System (ADS)

    Orzech, Mark; Veeramony, Jay; Flampouris, Stylianos

    2014-04-01

    A discrete numerical adjoint has recently been developed for the stochastic wave model SWAN. In the present study, this adjoint code is used to construct spectral sensitivity maps for two nearshore domains. The maps display the correlations of spectral energy levels throughout the domain with the observed energy levels at a selected location or region of interest (LOI/ROI), providing a full spectrum of values at all locations in the domain. We investigate the effectiveness of sensitivity maps based on significant wave height (H_s) in determining alternate offshore instrument deployment sites when a chosen nearshore location or region is inaccessible. Wave and bathymetry datasets are employed from one shallower, small-scale domain (Duck, NC) and one deeper, larger-scale domain (San Diego, CA). The effects of seasonal changes in wave climate, errors in bathymetry, and multiple assimilation points on sensitivity map shapes and model performance are investigated. Model accuracy is evaluated by comparing spectral statistics as well as with an RMS skill score, which estimates a mean model-data error across all spectral bins. Results indicate that data assimilation from identified high-sensitivity alternate locations consistently improves model performance at nearshore LOIs, while assimilation from low-sensitivity locations results in lesser or no improvement. Use of sub-sampled or alongshore-averaged bathymetry has a domain-specific effect on model performance when assimilating from a high-sensitivity alternate location. When multiple alternate assimilation locations are used from areas of lower sensitivity, model performance may be worse than with a single, high-sensitivity assimilation point.

  6. Higher representations on the lattice: Numerical simulations, SU(2) with adjoint fermions

    SciTech Connect

    Del Debbio, Luigi; Patella, Agostino; Pica, Claudio

    2010-05-01

    We discuss the lattice formulation of gauge theories with fermions in arbitrary representations of the color group and present in detail the implementation of the hybrid Monte Carlo (HMC)/rational HMC algorithm for simulating dynamical fermions. We discuss the validation of the implementation through an extensive set of tests and the stability of simulations by monitoring the distribution of the lowest eigenvalue of the Wilson-Dirac operator. Working with two flavors of Wilson fermions in the adjoint representation, benchmark results for realistic lattice simulations are presented. Runs are performed on different lattice sizes ranging from 4³×8 to 24³×64 sites. For the two smallest lattices we also report the measured values of benchmark mesonic observables. These results can be used as a baseline for rapid cross-checks of simulations in higher representations. The results presented here are the first steps toward more extensive investigations with controlled systematic errors, aiming at a detailed understanding of the phase structure of these theories, and of their viability as candidates for strong dynamics beyond the standard model.

  7. Admitting the Inadmissible: Adjoint Formulation for Incomplete Cost Functionals in Aerodynamic Optimization

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Salas, Manuel D.

    1997-01-01

    We derive the adjoint equations for problems in aerodynamic optimization which are improperly considered as "inadmissible." For example, a cost functional which depends on the density, rather than on the pressure, is considered "inadmissible" for an optimization problem governed by the Euler equations. We show that for such problems additional terms should be included in the Lagrangian functional when deriving the adjoint equations. These terms are obtained from the restriction of the interior PDE to the control surface. Demonstrations of the explicit derivation of the adjoint equations for "inadmissible" cost functionals are given for the potential, Euler, and Navier-Stokes equations.

  8. Use of adjoint methods in the probabilistic finite element approach to fracture mechanics

    NASA Technical Reports Server (NTRS)

    Liu, Wing Kam; Besterfield, Glen; Lawrence, Mark; Belytschko, Ted

    1988-01-01

    The adjoint method approach to probabilistic finite element methods (PFEM) is presented. When the number of objective functions is small compared to the number of random variables, the adjoint method is far superior to the direct method in evaluating the objective function derivatives with respect to the random variables. The PFEM is extended to probabilistic fracture mechanics (PFM) using an element which has the near crack-tip singular strain field embedded. Since only two objective functions (i.e., mode I and II stress intensity factors) are needed for PFM, the adjoint method is well suited.

  9. Time domain adjoint sensitivity analysis of electromagnetic problems with nonlinear media.

    PubMed

    Bakr, Mohamed H; Ahmed, Osman S; El Sherif, Mohamed H; Nomura, Tsuyoshi

    2014-05-01

    In this paper, we propose a theory for wideband adjoint sensitivity analysis of problems with nonlinear media. We show that the sensitivities of the desired response with respect to all shape and material parameters are obtained through one extra adjoint simulation. Unlike in linear problems, the system matrices of this adjoint simulation are time varying; their values are determined during the original simulation. The proposed theory exploits time-domain transmission line modeling (TLM) and provides an efficient adjoint variable method (AVM) approach for sensitivity analysis of general time-domain objective functions. The theory is illustrated through a number of examples. PMID:24921783

  10. Adjoint-Based Methods for Estimating CO2 Sources and Sinks from Atmospheric Concentration Data

    NASA Technical Reports Server (NTRS)

    Andrews, Arlyn E.

    2003-01-01

    Work to develop adjoint-based methods for estimating CO2 sources and sinks from atmospheric concentration data was initiated in preparation for last year's summer institute on Carbon Data Assimilation (CDAS) at the National Center for Atmospheric Research in Boulder, CO. The workshop exercises used the GSFC Parameterized Chemistry and Transport Model and its adjoint. Since the workshop, a number of simulations have been run to evaluate the performance of the model adjoint. Results from these simulations will be presented, along with an outline of challenges associated with incorporating a variety of disparate data sources, from sparse, but highly precise, surface in situ observations to less accurate, global future satellite observations.

  11. On basic conditions to generate multi-adjoint concept lattices via Galois connections

    NASA Astrophysics Data System (ADS)

    Díaz-Moreno, J. C.; Medina, J.; Ojeda-Aciego, M.

    2014-02-01

    This paper introduces sufficient and necessary conditions, with respect to the fuzzy operators considered in a multi-adjoint frame, under which the standard combinations of multi-adjoint sufficiency, possibility, and necessity operators form (antitone or isotone) Galois connections. The underlying idea is to study the minimal algebraic requirements under which the concept-forming operators (defined using the same syntactical form as the extension and intension operators of multi-adjoint concept lattices) form a Galois connection. As a consequence, given a relational database, there are many more possibilities for constructing concept lattices associated with it, so that one can choose the specific version which best suits the situation.

  12. The use of multigrid techniques in the solution of the Elrod algorithm for a dynamically loaded journal bearing. M.S. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Woods, Claudia M.

    1988-01-01

    A numerical solution to a theoretical model of vapor cavitation in a dynamically loaded journal bearing is developed, utilizing a multigrid iterative technique. The code is compared with a presently existing direct solution in terms of computational time and accuracy. The model is based on the Elrod algorithm, a control volume approach to the Reynolds equation which mimics the Jakobsson-Floberg and Olsson cavitation theory. Besides accounting for a moving cavitation boundary and conservation of mass at the boundary, it also conserves mass within the cavitated region via liquid striations. The mixed nature of the equations (elliptic in the full film zone and nonelliptic in the cavitated zone) coupled with the dynamic aspects of the problem creates interesting difficulties for the present solution approach. Emphasis is placed on the methods found to eliminate solution instabilities. Excellent results are obtained for both accuracy and reduction of computational time.
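
    As a generic illustration of the multigrid machinery applied here (and not of the Elrod-specific solver, whose mixed elliptic/non-elliptic character requires the special treatment discussed above), the sketch below implements a textbook V-cycle for a 1D Poisson problem, with weighted-Jacobi smoothing, full-weighting restriction, and linear-interpolation prolongation.

        # Textbook 1D multigrid V-cycle for -u'' = f with homogeneous Dirichlet
        # boundary conditions.  Generic sketch only; n must be a power of two.
        import numpy as np

        def smooth(u, f, h, sweeps=3, w=2.0/3.0):
            for _ in range(sweeps):                  # weighted Jacobi
                u[1:-1] = (1 - w)*u[1:-1] + w*0.5*(u[:-2] + u[2:] + h*h*f[1:-1])
            return u

        def residual(u, f, h):
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)
            return r

        def v_cycle(u, f, h):
            n = u.size - 1
            if n == 2:                               # coarsest grid: solve exactly
                u[1] = 0.5*h*h*f[1]
                return u
            u = smooth(u, f, h)                      # pre-smoothing
            r = residual(u, f, h)
            rc = np.zeros(n//2 + 1)                  # full-weighting restriction
            rc[1:-1] = 0.25*r[1:-3:2] + 0.5*r[2:-2:2] + 0.25*r[3:-1:2]
            ec = v_cycle(np.zeros_like(rc), rc, 2*h) # coarse-grid correction
            e = np.zeros_like(u)
            e[::2] = ec                              # linear-interpolation prolongation
            e[1::2] = 0.5*(ec[:-1] + ec[1:])
            u += e
            return smooth(u, f, h)                   # post-smoothing

        if __name__ == "__main__":
            n = 64; h = 1.0/n
            x = np.linspace(0.0, 1.0, n + 1)
            f = np.pi**2 * np.sin(np.pi*x)           # exact solution: sin(pi x)
            u = np.zeros(n + 1)
            for _ in range(10):
                u = v_cycle(u, f, h)
            print("max error:", np.max(np.abs(u - np.sin(np.pi*x))))

    Each V-cycle reduces the algebraic error by a factor that is essentially independent of the grid size, which is why multigrid reduces computational time relative to single-grid relaxation.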

  13. An impatient evolutionary algorithm with probabilistic tabu search for unified solution of some NP-hard problems in graph and set theory via clique finding.

    PubMed

    Guturu, Parthasarathy; Dantu, Ram

    2008-06-01

    Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) for the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite, called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite. PMID:18558530
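
    The problem mappings used in this unified approach are classical. As a hypothetical illustration (not the authors' code), the sketch below maps a maximum-independent-set instance onto maximum clique finding via the complement graph, using the maximal-clique enumerator from networkx, which is only practical for small graphs.

        # Illustrative only: a maximum independent set of G is a maximum clique
        # of the complement graph.  Practical solvers replace the exhaustive
        # maximal-clique enumeration below with heuristics such as IEA-PTS.
        import networkx as nx

        def max_independent_set_via_clique(G):
            H = nx.complement(G)                      # edges of H = non-edges of G
            best = max(nx.find_cliques(H), key=len)   # largest maximal clique of H
            return set(best)                          # a maximum independent set of G

        if __name__ == "__main__":
            G = nx.cycle_graph(5)                     # 5-cycle: independence number 2
            print(max_independent_set_via_clique(G))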

  14. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGES Beta

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  15. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
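
    The enhancement and the refinement indicators both rest on the standard adjoint-based error representation for a functional of interest; schematically, and in generic notation rather than the authors',

        J(u) - J(u_h) \approx \langle \phi,\ R(u_h) \rangle,

    where u_h is the discrete (sparse-grid) approximation, R(u_h) its residual in the governing equations, and \phi the solution of the adjoint problem associated with J. Adding the computable right-hand side to J(u_h) gives the enhanced functional value, while localizing the inner product over hierarchical surpluses or cells supplies the refinement indicators.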

  16. New Factorization Techniques and Parallel O(log N) Algorithms for Forward Dynamics Solution of Single Closed-Chain Robot Manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir

    1993-01-01

    In this paper, parallel O(log N) algorithms are developed for the dynamic simulation of a single closed-chain rigid multibody system, specialized to the case of a robot manipulator in contact with the environment.

  17. The Research of Solution to the Problems of Complex Task Scheduling Based on Self-adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Li; He, Yongxiang; Xue, Haidong; Chen, Leichen

    Traditional genetic algorithms (GAs) suffer from premature convergence when dealing with scheduling problems. To adapt the crossover and mutation operators self-adaptively, this paper proposes a self-adaptive GA aimed at multitask scheduling optimization under limited resources. The experimental results show that the proposed algorithm outperforms the traditional GA in its evolutionary ability to deal with complex task scheduling optimization.
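
    A common way to make crossover and mutation probabilities self-adaptive is to tie them to population fitness statistics, so that near-best individuals are protected while poor individuals are perturbed more strongly. The sketch below shows a generic rule of this kind (assuming a maximization problem); it is an illustration, not the scheduling-specific operator design of this paper.

        # Generic self-adaptive rates: individuals close to the current best get
        # low crossover/mutation rates (they are preserved), poor individuals get
        # high rates (diversity is maintained, premature convergence is avoided).
        def adaptive_rates(fitness, f_best, f_avg,
                           pc_max=0.9, pc_min=0.5, pm_max=0.10, pm_min=0.01):
            if f_best == f_avg:                         # degenerate population
                return pc_max, pm_max
            s = (f_best - fitness) / (f_best - f_avg)   # 0 at the best individual
            s = max(0.0, min(1.0, s))                   # clamp below-average members
            pc = pc_min + (pc_max - pc_min) * s
            pm = pm_min + (pm_max - pm_min) * s
            return pc, pm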

  18. Discrete Adjoint-Based Design Optimization of Unsteady Turbulent Flows on Dynamic Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Diskin, Boris; Yamaleev, Nail K.

    2009-01-01

    An adjoint-based methodology for design optimization of unsteady turbulent flows on dynamic unstructured grids is described. The implementation relies on an existing unsteady three-dimensional unstructured grid solver capable of dynamic mesh simulations and discrete adjoint capabilities previously developed for steady flows. The discrete equations for the primal and adjoint systems are presented for the backward-difference family of time-integration schemes on both static and dynamic grids. The consistency of sensitivity derivatives is established via comparisons with complex-variable computations. The current work is believed to be the first verified implementation of an adjoint-based optimization methodology for the true time-dependent formulation of the Navier-Stokes equations in a practical computational code. Large-scale shape optimizations are demonstrated for turbulent flows over a tiltrotor geometry and a simulated aeroelastic motion of a fighter jet.

  19. MS S4.03.002 - Adjoint-Based Design for Configuration Shaping

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2009-01-01

    This slide presentation discusses a method of inverse design for low sonic boom using adjoint-based gradient computations. It outlines a method for shaping a configuration in order to match a prescribed near-field signature.

  20. A numerical adjoint parabolic equation (PE) method for tomography and geoacoustic inversion in shallow water

    NASA Astrophysics Data System (ADS)

    Hermand, Jean-Pierre; Berrada, Mohamed; Meyer, Matthias; Asch, Mark

    2005-09-01

    Recently, an analytic adjoint-based method of optimal nonlocal boundary control has been proposed for inversion of a waveguide acoustic field using the wide-angle parabolic equation [Meyer and Hermand, J. Acoust. Soc. Am. 117, 2937-2948 (2005)]. In this paper a numerical extension of this approach is presented that allows the direct inversion for the geoacoustic parameters which are embedded in a spectral integral representation of the nonlocal boundary condition. The adjoint model is generated numerically and the inversion is carried out jointly across multiple frequencies. The paper further discusses the application of the numerical adjoint PE method for ocean acoustic tomography. To show the effectiveness of the implemented numerical adjoint, preliminary inversion results of water sound-speed profile and bottom acoustic properties will be shown for the YELLOW SHARK '94 experimental conditions.

  1. Comparison of the adjoint and adjoint-free 4dVar assimilation of the hydrographic and velocity observations in the Adriatic Sea

    NASA Astrophysics Data System (ADS)

    Yaremchuk, Max; Martin, Paul; Koch, Andrey; Beattie, Christopher

    2016-01-01

    Performance of the adjoint and adjoint-free 4-dimensional variational (4dVar) data assimilation techniques is compared in application to the hydrographic surveys and velocity observations collected in the Adriatic Sea in 2006. Assimilating the data into the Navy Coastal Ocean Model (NCOM) has shown that both methods deliver similar reduction of the cost function and demonstrate comparable forecast skill at approximately the same computational expense. The obtained optimal states were, however, significantly different in terms of distance from the background state: application of the adjoint method resulted in a 30-40% larger departure, mostly due to the excessive level of ageostrophic motions in the southern basin of the Sea that was not covered by observations.

  2. The Θ-KMS adjoint and time reversed quantum Markov semigroups

    NASA Astrophysics Data System (ADS)

    Bolaños-Servin, Jorge R.; Quezada, Roberto

    2015-08-01

    We introduce the notion of Θ-KMS adjoint of a quantum Markov semigroup, which is identified with the time reversed semigroup. The break of Θ-KMS symmetry, or Θ-standard quantum detailed balance in the sense of Fagnola-Umanità, is measured by means of the von Neumann relative entropy of states associated with the semigroup and its Θ-KMS adjoint.

  3. Preliminary Results from the Application of Automated Adjoint Code Generation to CFL3D

    NASA Technical Reports Server (NTRS)

    Carle, Alan; Fagan, Mike; Green, Lawrence L.

    1998-01-01

    This report describes preliminary results obtained using an automated adjoint code generator for Fortran to augment a widely-used computational fluid dynamics flow solver to compute derivatives. These preliminary results with this augmented code suggest that, even in its infancy, the automated adjoint code generator can accurately and efficiently deliver derivatives for use in transonic Euler-based aerodynamic shape optimization problems with hundreds to thousands of independent design variables.

  4. Global adjoint tomography: Perspectives, initial results and future directions

    NASA Astrophysics Data System (ADS)

    Bozdag, Ebru; Zhu, Hejun; Peter, Daniel; Tromp, Jeroen

    2013-04-01

    Adjoint methods provide an efficient way for incorporating the full nonlinearity of wave propagation and 3D Fréchet kernels in iterative seismic inversions. Our goal is to take adjoint tomography forward to image the entire planet using the opportunities offered by advances in numerical wave propagation solvers and high-performance computing. Using an iterative pre-conditioned conjugate gradient scheme, we initially set the aim to obtain a global crustal and mantle model with confined transverse isotropy in the upper mantle. Our strategy is to invert crustal and mantle structure together to avoid any bias introduced into upper-mantle images due to "crustal corrections", which are commonly used in classical tomography. We have started with 255 global CMT events (5.8 ≤ Mw ≤ 7) and used GSN stations as well as some local networks such as USArray, European stations, etc. We have demonstrated the feasibility of global scale inversions by performing two iterations based on numerical simulations accurate down to ~27 s. To simplify the problem, we primarily focus on elastic structure, and therefore our measurements are based on multitaper traveltime differences between observed and synthetic seismograms. We compute 3D sensitivity kernels for the selected events combining long-period surface waves (initially T > 60 s), where it is easier to handle nonlinearities due to the crust, with shorter-period body waves (initially T > 27 s), which are more sensitive to deeper parts of the mantle. 3D simulations dramatically increase the usable amount of data so that, with the current earthquake-station setup, we perform each iteration with more than two million measurements. Our initial results are promising to improve images from the upper mantle all the way down to the core-mantle boundary. Recent improvements in our 3D solvers (e.g., a GPU version) and access to high-performance computational centers (e.g., ORNL's Cray XK7 "Titan" system) now enable us to perform iterations

  5. A Generalized Adjoint Approach for Quantifying Reflector Assembly Discontinuity Factor Uncertainties

    SciTech Connect

    Yankov, Artem; Collins, Benjamin; Jessee, Matthew Anderson; Downar, Thomas

    2012-01-01

    Sensitivity-based uncertainty analysis of assembly discontinuity factors (ADFs) can be readily performed using adjoint methods for infinite lattice models. However, there is currently no adjoint-based methodology to obtain uncertainties for ADFs along an interface between a fuel and reflector region. To accommodate leakage effects in a reflector region, a 1D approximation is usually made in order to obtain the homogeneous interface flux required to calculate the ADF. Within this 1D framework an adjoint-based method is proposed that is capable of efficiently calculating ADF uncertainties. In the proposed method the sandwich rule is utilized to relate the covariance of the input parameters of 1D diffusion theory in the reflector region to the covariance of the interface ADFs. The input parameters covariance matrix can be readily obtained using sampling-based codes such as XSUSA or adjoint-based codes such as TSUNAMI. The sensitivity matrix is constructed using a fixed-source adjoint approach for inputs characterizing the reflector region. An analytic approach is then used to determine the sensitivity of the ADFs to fuel parameters using the neutron balance equation. A stochastic approach is used to validate the proposed adjoint-based method.
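
    The sandwich rule referred to above is the standard first-order uncertainty-propagation formula; in generic notation,

        \mathbf{C}_{\mathrm{ADF}} = \mathbf{S}\,\mathbf{C}_{\alpha}\,\mathbf{S}^{T},

    where \mathbf{C}_{\alpha} is the covariance matrix of the input parameters (here, the 1D diffusion-theory data of the reflector region), \mathbf{S} is the sensitivity matrix of the interface ADFs with respect to those parameters (assembled from the fixed-source adjoint calculations), and \mathbf{C}_{\mathrm{ADF}} is the resulting ADF covariance.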

  6. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.

  7. Adjoint based data assimilation for phase field model using second order information of a posterior distribution

    NASA Astrophysics Data System (ADS)

    Ito, Shin-Ichi; Nagao, Hiromichi; Yamanaka, Akinori; Tsukada, Yuhki; Koyama, Toshiyuki; Inoue, Junya

    The phase field (PF) method, which phenomenologically describes the dynamics of microstructure evolution during solidification and phase transformation, has progressed in the fields of hydromechanics and materials engineering. How to determine, from observation data, the initial state and model parameters involved in a PF model is an important open issue, since previous estimation methods require too much computation. We propose a data assimilation (DA) approach, which enables us to estimate the parameters and states by integrating the PF model and observation data on the basis of Bayesian statistics. The adjoint method implemented in the DA scheme not only finds an optimal solution by maximizing the posterior distribution but also evaluates the uncertainty in the estimates by utilizing second-order information of the posterior distribution. We carried out an estimation test using synthetic data generated by the two-dimensional Kobayashi PF model. The proposed method is confirmed to reproduce the true initial state and model parameters assumed in advance, and simultaneously estimates their uncertainties, which depend on the quality and quantity of the data. This result indicates that the proposed method is capable of suggesting the experimental design needed to achieve a required accuracy.
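
    The "second order information" used for uncertainty quantification corresponds to the usual Laplace (Gaussian) approximation of the posterior around the maximum a posteriori (MAP) point; in generic notation,

        p(\theta \mid d) \approx \mathcal{N}\big(\theta_{\mathrm{MAP}},\ \mathbf{H}^{-1}\big), \qquad \mathbf{H} = -\nabla_{\theta}^{2}\log p(\theta \mid d)\big|_{\theta=\theta_{\mathrm{MAP}}},

    so the adjoint-based optimization supplies \theta_{\mathrm{MAP}}, while the Hessian of the negative log posterior (or its action on vectors, computed with second-order adjoints) supplies the uncertainty estimate.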

  8. Plumes, Hotspot & Slabs Imaged by Global Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Bozdag, E.; Lefebvre, M. P.; Lei, W.; Peter, D. B.; Smith, J. A.; Komatitsch, D.; Tromp, J.

    2015-12-01

    We present the "first generation" global adjoint tomography model based on 3D wave simulations, which is the result of 15 conjugate-gradient iterations with confined transverse isotropy to the upper mantle. Our starting model is the 3D mantle and crustal models S362ANI (Kustowski et al. 2008) and Crust2.0 (Bassin et al. 2000), respectively. We take into account the full nonlinearity of wave propagation in numerical simulations including attenuation (both in forward and adjoint simulations), topography/bathymetry, etc., using the GPU version of the SPECFEM3D_GLOBE package. We invert for crust and mantle together without crustal corrections to avoid any bias in mantle structure. We started with an initial selection of 253 global CMT events within the magnitude range 5.8 ≤ Mw ≤ 7.0 with numerical simulations having resolution down to 27 s combining 30-s body and 60-s surface waves. After the 12th iteration we increased the resolution to 17 s, including higher-frequency body waves as well as going down to 45 s in surface-wave measurements. We run 180-min seismograms and assimilate all minor- and major-arc body and surface waves. Our 15th iteration model update shows a tantalisingly enhanced image of the Tahiti plume as well as various other plumes and hotspots, such as Caroline, Galapagos, Yellowstone, Erebus, etc. Furthermore, we see clear improvements in slab resolution along the Hellenic and Japan Arcs, as well as subduction along the East of Scotia Plate, which does not exist in the initial model. Point-spread function tests (Fichtner & Trampert 2011) suggest that we are close to the resolution of continental-scale studies in our global inversions and able to confidently map features, for instance, at the scale of the Yellowstone hotspot. This is a clear consequence of our multi-scale smoothing strategy, in which we define our smoothing operator as a function of the approximate Hessian kernel and smooth our gradients less wherever we have good ray coverage

  9. Big Data Challenges in Global Seismic 'Adjoint Tomography' (Invited)

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Bozdag, E.; Krischer, L.; Lefebvre, M.; Lei, W.; Smith, J.

    2013-12-01

    The challenge of imaging Earth's interior on a global scale is closely linked to the challenge of handling large data sets. The related iterative workflow involves five distinct phases, namely, 1) data gathering and culling, 2) synthetic seismogram calculations, 3) pre-processing (time-series analysis and time-window selection), 4) data assimilation and adjoint calculations, 5) post-processing (pre-conditioning, regularization, model update). In order to implement this workflow on modern high-performance computing systems, a new seismic data format is being developed. The Adaptable Seismic Data Format (ASDF) is designed to replace currently used data formats with a more flexible format that allows for fast parallel I/O. The metadata is divided into abstract categories, such as "source" and "receiver", along with provenance information for complete reproducibility. The structure of ASDF is designed keeping in mind three distinct applications: earthquake seismology, seismic interferometry, and exploration seismology. Existing time-series analysis tool kits, such as SAC and ObsPy, can be easily interfaced with ASDF so that seismologists can use robust, previously developed software packages. ASDF accommodates an automated, efficient workflow for global adjoint tomography. Manually managing the large number of simulations associated with the workflow can rapidly become a burden, especially with increasing numbers of earthquakes and stations. Therefore, it is of importance to investigate the possibility of automating the entire workflow. Scientific Workflow Management Software (SWfMS) allows users to execute workflows almost routinely. SWfMS provides additional advantages. In particular, it is possible to group independent simulations in a single job to fit the available computational resources. They also give a basic level of fault resilience as the workflow can be resumed at the correct state preceding a failure. Some of the best candidates for our particular workflow

  10. Iterative algorithm to compute the maximal and stabilising solutions of a general class of discrete-time Riccati-type equations

    NASA Astrophysics Data System (ADS)

    Dragan, Vasile; Morozan, Toader; Stoica, Adrian-Mihail

    2010-04-01

    In this article an iterative method to compute the maximal solution and the stabilising solution, respectively, of a wide class of discrete-time nonlinear equations on the linear space of symmetric matrices is proposed. The class of discrete-time nonlinear equations under consideration contains, as special cases, different types of discrete-time Riccati equations involved in various control problems for discrete-time stochastic systems. This article may be viewed as an addendum of the work of Dragan and Morozan (Dragan, V. and Morozan, T. (2009), 'A Class of Discrete Time Generalized Riccati Equations', Journal of Difference Equations and Applications, first published on 11 December 2009 (iFirst), doi: 10.1080/10236190802389381) where necessary and sufficient conditions for the existence of the maximal solution and stabilising solution of this kind of discrete-time nonlinear equations are given. The aim of this article is to provide a procedure for numerical computation of the maximal solution and the stabilising solution, respectively, simpler than the method based on the Newton-Kantorovich algorithm.
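
    As a concrete, much simpler special case of the class of equations treated in the article, the standard discrete-time algebraic Riccati equation can be computed by the fixed-point (value) iteration sketched below; the matrices and tolerance are hypothetical, and the sketch is not the authors' general procedure.

        # Fixed-point (value) iteration for the standard DARE
        #   X = A^T X A - A^T X B (R + B^T X B)^{-1} B^T X A + Q.
        # Illustrative sketch of a simple special case only.
        import numpy as np

        def dare_fixed_point(A, B, Q, R, tol=1e-10, max_iter=10_000):
            X = Q.copy()
            for _ in range(max_iter):
                G = np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A)   # gain-like term
                X_new = A.T @ X @ A - A.T @ X @ B @ G + Q
                if np.linalg.norm(X_new - X, ord="fro") < tol:
                    return X_new
                X = X_new
            return X

        if __name__ == "__main__":
            A = np.array([[0.9, 0.2], [0.0, 0.8]])
            B = np.array([[0.0], [1.0]])
            Q, R = np.eye(2), np.eye(1)
            print(np.round(dare_fixed_point(A, B, Q, R), 4))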

  11. Adjoint sensitivity analysis of hydrodynamic stability in cyclonic flows

    NASA Astrophysics Data System (ADS)

    Guzman Inigo, Juan; Juniper, Matthew

    2015-11-01

    Cyclonic separators are used in a variety of industries to efficiently separate mixtures of fluid and solid phases by means of centrifugal forces and gravity. In certain circumstances, the vortex core of cyclonic flows is known to precess due to the instability of the flow, which leads to performance reductions. We aim to characterize the unsteadiness using linear stability analysis of the Reynolds Averaged Navier-Stokes (RANS) equations in a global framework. The system of equations, including the turbulence model, is linearised to obtain an eigenvalue problem. Unstable modes corresponding to the dynamics of the large structures of the turbulent flow are extracted. The analysis shows that the most unstable mode is a helical motion which develops around the axis of the flow. This result is in good agreement with LES and experimental analysis, suggesting the validity of the approach. Finally, an adjoint-based sensitivity analysis is performed to determine the regions of the flow that, when altered, have most influence on the frequency and growth-rate of the unstable eigenvalues.

  12. Determining scaling laws from geodynamic simulations using adjoint gradients.

    NASA Astrophysics Data System (ADS)

    Reuber, Georg; Kaus, Boris; Popov, Anton

    2016-04-01

    Whereas significant progress has been made in modelling of lithospheric and crustal scale processes in recent years, it often remains a challenge to understand which of the many model parameters is of key importance for a particular simulation. Determining this is usually done by manually changing the model input parameters and performing new simulations. For a few cases, such as for crustal-scale folding instabilities (with viscous rheologies, e.g. [1]) or for Rayleigh-Taylor instabilities, one can use existing scaling laws to obtain such insights. Yet, for a more general case, it is not straightforward to do this (apart from running many simulations). Here, we test a different approach which computes gradients of the model parameters using adjoint-based methods, which has the advantage that we can test the influence of a number of independent parameters on the system by computing and analysing the covariance matrix and the gradient of the parameter space. This method might give us the chance to gain insight into which parameters affect, for example, subduction processes and how strongly the system depends on them. [1] Fernandez, N., & Kaus, B. J. (2014). Fold interaction and wavelength selection in 3D models of multilayer detachment folding. Tectonophysics, 632, 199-217.

  13. Conformal versus confining scenario in SU(2) with adjoint fermions

    SciTech Connect

    Del Debbio, L.; Pica, C.; Lucini, B.; Patella, A.; Rago, A.

    2009-10-01

    The masses of the lowest-lying states in the meson and in the gluonic sector of an SU(2) gauge theory with two Dirac flavors in the adjoint representation are measured on the lattice at a fixed value of the lattice coupling β = 4/g₀² = 2.25 for values of the bare fermion mass m₀ that span a range between the quenched regime and the massless limit, and for various lattice volumes. Even for light constituent fermions the lightest glueballs are found to be lighter than the lightest mesons. Moreover, the string tension between two static fundamental sources strongly depends on the mass of the dynamical fermions and becomes of the order of the inverse squared lattice linear size before the chiral limit is reached. The implications of these findings for the phase of the theory in the massless limit are discussed and a strategy for discriminating between the (near-)conformal and the confining scenario is outlined.

  14. A pseudo-spectral algorithm and test cases for the numerical solution of the two-dimensional rotating Green-Naghdi shallow water equations

    NASA Astrophysics Data System (ADS)

    Pearce, J. D.; Esler, J. G.

    2010-10-01

    A pseudo-spectral algorithm is presented for the solution of the rotating Green-Naghdi shallow water equations in two spatial dimensions. The equations are first written in vorticity-divergence form, in order to exploit the fact that time-derivatives then appear implicitly in the divergence equation only. A nonlinear equation must then be solved at each time-step in order to determine the divergence tendency. The nonlinear equation is solved by means of a simultaneous iteration in spectral space to determine each Fourier component. The key to the rapid convergence of the iteration is the use of a good initial guess for the divergence tendency, which is obtained from polynomial extrapolation of the solution obtained at previous time-levels. The algorithm is therefore best suited to be used with a standard multi-step time-stepping scheme (e.g. leap-frog). Two test cases are presented to validate the algorithm for initial value problems on a square periodic domain. The first test is to verify cnoidal wave speeds in one dimension against analytical results. The second test is to ensure that the Miles-Salmon potential vorticity is advected as a parcel-wise conserved tracer throughout the nonlinear evolution of a perturbed jet subject to shear instability. The algorithm is demonstrated to perform well in each test. The resulting numerical model is expected to be of use in identifying paradigmatic behavior in mesoscale flows in the atmosphere and ocean in which both vortical, nonlinear and dispersive effects are important.
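
    The convergence-acceleration idea, seeding the per-step nonlinear solve with a polynomial extrapolation of the previous time levels, can be illustrated generically; the quadratic-extrapolation predictor below is standard, and the generic root-finder stands in for the paper's per-mode spectral iteration.

        # Generic predictor idea: use quadratic extrapolation of the last three
        # time levels as the initial guess for the per-step nonlinear solve.
        import numpy as np
        from scipy.optimize import fsolve

        def advance(nonlinear_residual, history):
            d_nm2, d_nm1, d_n = history[-3], history[-2], history[-1]
            guess = 3.0*d_n - 3.0*d_nm1 + d_nm2       # quadratic extrapolation
            return fsolve(nonlinear_residual, guess)  # placeholder nonlinear solver

        if __name__ == "__main__":
            # Toy residual: one backward-Euler step of d' = -d^3 with dt = 0.1.
            dt, hist = 0.1, [np.array([1.0])]*3
            res = lambda d_new: d_new - hist[-1] + dt*d_new**3
            print(advance(res, hist))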

  15. Traveling front solutions to directed diffusion-limited aggregation, digital search trees, and the Lempel-Ziv data compression algorithm

    NASA Astrophysics Data System (ADS)

    Majumdar, Satya N.

    2003-08-01

    We use the traveling front approach to derive exact asymptotic results for the statistics of the number of particles in a class of directed diffusion-limited aggregation models on a Cayley tree. We point out that some aspects of these models are closely connected to two different problems in computer science, namely, the digital search tree problem in data structures and the Lempel-Ziv algorithm for data compression. The statistics of the number of particles studied here is related to the statistics of height in digital search trees which, in turn, is related to the statistics of the length of the longest word formed by the Lempel-Ziv algorithm. Implications of our results to these computer science problems are pointed out.
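
    For readers unfamiliar with the computer-science side of the correspondence, the sketch below performs the incremental (LZ78-style) parse and reports the length of the longest phrase, the quantity whose statistics the traveling-front analysis addresses; it is a generic illustration, not the paper's derivation.

        # LZ78-style incremental parse: each new phrase is the shortest prefix of
        # the remaining input not seen before.  The longest-phrase length is the
        # height-type statistic discussed in the abstract.
        def lz78_phrases(s):
            phrases, current, seen = [], "", set()
            for ch in s:
                current += ch
                if current not in seen:
                    seen.add(current)
                    phrases.append(current)
                    current = ""
            if current:                      # trailing (already-seen) fragment
                phrases.append(current)
            return phrases

        if __name__ == "__main__":
            parse = lz78_phrases("abababbbabbaaabab")
            print(parse, "longest phrase length:", max(map(len, parse)))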

  16. The application of the gradient-based adjoint multi-point optimization of single and double shock control bumps for transonic airfoils

    NASA Astrophysics Data System (ADS)

    Mazaheri, K.; Nejati, A.; Chaharlang Kiani, K.; Taheri, R.

    2015-08-01

    A shock control bump (SCB) is a flow control method which uses local small deformations in a flexible wing surface to considerably reduce the strength of shock waves and the resulting wave drag in transonic flows. Most of the reported research is devoted to optimization in a single flow condition. Here, we have used a multi-point adjoint optimization scheme to optimize the shape and location of the SCB. Practically, this introduces transonic airfoils equipped with an SCB which are simultaneously optimized for different off-design transonic flight conditions. Here, we use this optimization algorithm to enhance and optimize the performance of SCBs in two benchmark airfoils, i.e., RAE-2822 and NACA-64A010, over a wide range of off-design Mach numbers. All results are compared with the usual single-point optimization. We use numerical simulation of the turbulent viscous flow and a gradient-based adjoint algorithm to find the optimum location and shape of the SCB. We show that the application of SCBs may increase the aerodynamic performance by 21.9 % for the RAE-2822 airfoil and by 22.8 % for the NACA-64A010 airfoil, compared to the no-bump design in a particular flight condition. We have also investigated the simultaneous usage of two bumps for the upper and the lower surfaces of the airfoil. This has resulted in a 26.1 % improvement for the RAE-2822 compared to the clean airfoil in one flight condition.

  17. A fast algorithm for parabolic PDE-based inverse problems based on Laplace transforms and flexible Krylov solvers

    SciTech Connect

    Bakhos, Tania; Saibaba, Arvind K.; Kitanidis, Peter K.

    2015-10-15

    We consider the problem of estimating parameters in large-scale weakly nonlinear inverse problems for which the underlying governing equation is a linear, time-dependent, parabolic partial differential equation. A major challenge in solving these inverse problems using Newton-type methods is the computational cost associated with solving the forward problem and with repeated construction of the Jacobian, which represents the sensitivity of the measurements to the unknown parameters. Forming the Jacobian can be prohibitively expensive because it requires repeated solutions of the forward and adjoint time-dependent parabolic partial differential equations corresponding to multiple sources and receivers. We propose an efficient method based on a Laplace transform-based exponential time integrator combined with a flexible Krylov subspace approach to solve the resulting shifted systems of equations efficiently. Our proposed solver speeds up the computation of the forward and adjoint problems, thus yielding significant speedup in total inversion time. We consider an application from Transient Hydraulic Tomography (THT), which is an imaging technique to estimate hydraulic parameters related to the subsurface from pressure measurements obtained by a series of pumping tests. The algorithms discussed are applied to a synthetic example taken from THT to demonstrate the resulting computational gains of this proposed method.
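
    Schematically, and in generic notation not taken from the paper, the Laplace-transform (contour-integral) time integrator replaces the action of the matrix exponential by a quadrature over resolvents,

        u(t) = e^{tA}u_0 = \frac{1}{2\pi i}\oint_{\Gamma} e^{zt}(zI - A)^{-1}u_0\,dz \approx \sum_{k} w_k\, e^{z_k t}(z_k I - A)^{-1}u_0,

    so each forward or adjoint application reduces to a family of shifted linear systems (z_k I - A)x_k = u_0 with a common right-hand side. Because Krylov subspaces are shift-invariant, these systems can share a single (flexible) Krylov subspace, which is the structure the proposed solver exploits.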

  18. Adjoint-based airfoil shape optimization in transonic flow

    NASA Astrophysics Data System (ADS)

    Gramanzini, Joe-Ray

    The primary focus of this work is efficient aerodynamic shape optimization in transonic flow. Adjoint-based optimization techniques are employed on airfoil sections and evaluated in terms of computational accuracy as well as efficiency. This study examines two test cases proposed by the AIAA Aerodynamic Design Optimization Discussion Group. The first is a two-dimensional, transonic, inviscid, non-lifting optimization of a Modified-NACA 0012 airfoil. The second is a two-dimensional, transonic, viscous optimization problem using a RAE 2822 airfoil. The FUN3D CFD code of NASA Langley Research Center is used as the flow solver for the gradient-based optimization cases. Two shape parameterization techniques are employed to study the effect of the parameterization and of the number of design variables on the final optimized shape: Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD) and the BandAids free-form deformation technique. For the two airfoil cases, angle of attack is treated as a global design variable. The thickness and camber distributions are the local design variables for MASSOUD, and selected airfoil surface grid points are the local design variables for BandAids. Using the MASSOUD technique, a drag reduction of 72.14% is achieved for the NACA 0012 case, reducing the total number of drag counts from 473.91 to 130.59. Employing the BandAids technique yields a 78.67% drag reduction, from 473.91 to 99.98. The RAE 2822 case exhibited a drag reduction from 217.79 to 132.79 counts, a 39.05% decrease using BandAids.

  19. Adjoint sensitivity analysis for a three-dimensional photochemical model: implementation and method comparison.

    PubMed

    Martien, Philip T; Harley, Robert A; Cacuci, Dan G

    2006-04-15

    Photochemical air pollution forms when emissions of nitrogen oxides (NOx) and volatile organic compounds (VOC) react in the atmosphere in the presence of sunlight. The goal of applying three-dimensional photochemical air quality models is usually to conduct sensitivity analysis: for example, to predict changes in an ozone response due to changes in NOx and VOC emissions or other model data. Forward sensitivity analysis methods are best suited to investigating sensitivities of many model responses to changes in a few inputs or parameters. Here we develop a continuous adjoint model and demonstrate an adjoint sensitivity analysis procedure that is well-suited to the complementary case of determining sensitivity of a small number of model responses to many parameters. Sensitivities generated using the adjoint method agree with those generated using other methods. Compared to the forward method, the adjoint method had large disk storage requirements but was more efficient in terms of computer processor time for receptor-based investigations focused on a single response at a specified site and time. The adjoint method also generates sensitivity apportionment fields, which reveal when and where model data are important to the target response. PMID:16683606

  20. Generalized adjoint consistent treatment of wall boundary conditions for compressible flows

    NASA Astrophysics Data System (ADS)

    Hartmann, Ralf; Leicht, Tobias

    2015-11-01

    In this article, we revisit the adjoint consistency analysis of Discontinuous Galerkin discretizations of the compressible Euler and Navier-Stokes equations with application to the Reynolds-averaged Navier-Stokes and k-ω turbulence equations. Here, particular emphasis is laid on the discretization of wall boundary conditions. While previously only one specific combination of discretizations of wall boundary conditions and of aerodynamic force coefficients has been shown to give an adjoint consistent discretization, in this article we generalize this analysis and provide a discretization of the force coefficients for any consistent discretization of wall boundary conditions. Furthermore, we demonstrate that a related evaluation of the c_p- and c_f-distributions is required. The freedom gained in choosing the discretization of boundary conditions without losing adjoint consistency is used to devise a new adjoint consistent discretization including numerical fluxes on the wall boundary which is more robust than the adjoint consistent discretization known up to now. While this work is presented in the framework of Discontinuous Galerkin discretizations, the insight gained is also applicable to (and thus valuable for) other discretization schemes. In particular, the discretization of integral quantities, like the drag, lift and moment coefficients, as well as the discretization of local quantities at the wall like surface pressure and skin friction should follow as closely as possible the discretization of the flow equations and boundary conditions at the wall boundary.

  1. Self-adjoint Operators as Functions I. Lattices, Galois Connections, and the Spectral Order

    NASA Astrophysics Data System (ADS)

    Döring, Andreas; Dewitt, Barry

    2014-06-01

    Observables of a quantum system, described by self-adjoint operators in a von Neumann algebra or affiliated with it in the unbounded case, form a conditionally complete lattice when equipped with the spectral order. Using this order-theoretic structure, we develop a new perspective on quantum observables. In this first paper (of two), we show that self-adjoint operators affiliated with a von Neumann algebra can equivalently be described as certain real-valued functions on the projection lattice of the algebra, which we call q-observable functions. Bounded self-adjoint operators correspond to q-observable functions with compact image on non-zero projections. These functions, originally defined in a similar form by de Groote (Observables II: quantum observables, 2005), are most naturally seen as adjoints (in the categorical sense) of spectral families. We show how they relate to the daseinisation mapping from the topos approach to quantum theory (Döring and Isham, New Structures for Physics, Springer, Heidelberg, 2011). Moreover, the q-observable functions form a conditionally complete lattice which is shown to be order-isomorphic to the lattice of self-adjoint operators with respect to the spectral order. In a subsequent paper (Döring and Dewitt, 2012, preprint), we will give an interpretation of q-observable functions in terms of quantum probability theory, and using results from the topos approach to quantum theory, we will provide a joint sample space for all quantum observables.

  2. Assessing the Impact of Observations on Numerical Weather Forecasts Using the Adjoint Method

    NASA Technical Reports Server (NTRS)

    Gelaro, Ronald

    2012-01-01

    The adjoint of a data assimilation system provides a flexible and efficient tool for estimating observation impacts on short-range weather forecasts. The impacts of any or all observations can be estimated simultaneously based on a single execution of the adjoint system. The results can be easily aggregated according to data type, location, channel, etc., making this technique especially attractive for examining the impacts of new hyper-spectral satellite instruments and for conducting regular, even near-real time, monitoring of the entire observing system. This talk provides a general overview of the adjoint method, including the theoretical basis and practical implementation of the technique. Results are presented from the adjoint-based observation impact monitoring tool in NASA's GEOS-5 global atmospheric data assimilation and forecast system. When performed in conjunction with standard observing system experiments (OSEs), the adjoint results reveal both redundancies and dependencies between observing system impacts as observations are added or removed from the assimilation system. Understanding these dependencies may be important for optimizing the use of the current observational network and defining requirements for future observing systems

  3. Modeling design iteration in product design and development and its solution by a novel artificial bee colony algorithm.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally increases product cost and delays development time, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and a tearing approach together with an inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is introduced to find optimal decoupling schemes. Firstly, the tearing approach and the inner iteration method are analyzed for solving coupled task sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, the artificial bee colony, is adopted to solve the resulting problem. Finally, an engineering design of a chemical processing system is given in order to verify the model's reasonability and effectiveness. PMID:25431584

  4. Modeling Design Iteration in Product Design and Development and Its Solution by a Novel Artificial Bee Colony Algorithm

    PubMed Central

    2014-01-01

    Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally increases product cost and delays development time, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and a tearing approach together with an inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is introduced to find optimal decoupling schemes. Firstly, the tearing approach and the inner iteration method are analyzed for solving coupled task sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, the artificial bee colony, is adopted to solve the resulting problem. Finally, an engineering design of a chemical processing system is given in order to verify the model's reasonability and effectiveness. PMID:25431584
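
    For orientation, the sketch below is a minimal artificial bee colony for continuous minimization, showing the employed-bee, onlooker-bee, and scout-bee phases; it is a generic illustration and not the hybrid, decoupling-oriented ABC developed in the paper.

        # Minimal artificial bee colony (ABC) sketch for continuous minimization.
        import numpy as np

        def abc_minimize(f, bounds, n_food=20, limit=50, max_iter=200, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            dim = lo.size
            foods = rng.uniform(lo, hi, size=(n_food, dim))
            fits = np.apply_along_axis(f, 1, foods)
            trials = np.zeros(n_food, dtype=int)

            def try_neighbor(i):
                k = rng.integers(n_food - 1)
                k += k >= i                                 # partner index != i
                j = rng.integers(dim)
                cand = foods[i].copy()
                cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
                cand = np.clip(cand, lo, hi)
                fc = f(cand)
                if fc < fits[i]:                            # greedy replacement
                    foods[i], fits[i], trials[i] = cand, fc, 0
                else:
                    trials[i] += 1

            for _ in range(max_iter):
                for i in range(n_food):                     # employed bees
                    try_neighbor(i)
                probs = fits.max() - fits + 1e-12           # better food, higher prob
                probs /= probs.sum()
                for i in rng.choice(n_food, size=n_food, p=probs):   # onlookers
                    try_neighbor(i)
                worst = int(np.argmax(trials))              # scout bee
                if trials[worst] > limit:
                    foods[worst] = rng.uniform(lo, hi)
                    fits[worst] = f(foods[worst])
                    trials[worst] = 0
            best = int(np.argmin(fits))
            return foods[best], fits[best]

        if __name__ == "__main__":
            sphere = lambda x: float(np.sum(x**2))
            print(abc_minimize(sphere, bounds=[(-5, 5)] * 3))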

  5. Adjoint Methods for Adjusting Three-Dimensional Atmosphere and Surface Properties to Fit Multi-Angle Multi-Pixel Polarimetric Measurements

    NASA Technical Reports Server (NTRS)

    Martin, William G.; Cairns, Brian; Bal, Guillaume

    2014-01-01

    This paper derives an efficient procedure for using the three-dimensional (3D) vector radiative transfer equation (VRTE) to adjust atmosphere and surface properties and improve their fit with multi-angle/multi-pixel radiometric and polarimetric measurements of scattered sunlight. The proposed adjoint method uses the 3D VRTE to compute the measurement misfit function and the adjoint 3D VRTE to compute its gradient with respect to all unknown parameters. In the remote sensing problems of interest, the scalar-valued misfit function quantifies agreement with data as a function of atmosphere and surface properties, and its gradient guides the search through this parameter space. Remote sensing of the atmosphere and surface in a three-dimensional region may require thousands of unknown parameters and millions of data points. Many approaches would require calls to the 3D VRTE solver in proportion to the number of unknown parameters or measurements. To avoid this issue of scale, we focus on computing the gradient of the misfit function as an alternative to the Jacobian of the measurement operator. The resulting adjoint method provides a way to adjust 3D atmosphere and surface properties with only two calls to the 3D VRTE solver for each spectral channel, regardless of the number of retrieval parameters, measurement view angles or pixels. This gives a procedure for adjusting atmosphere and surface parameters that will scale to the large problems of 3D remote sensing. For certain types of multi-angle/multi-pixel polarimetric measurements, this encourages the development of a new class of three-dimensional retrieval algorithms with more flexible parametrizations of spatial heterogeneity, less reliance on data screening procedures, and improved coverage in terms of the resolved physical processes in the Earth's atmosphere.
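
    The scaling argument follows from the generic adjoint identity for a misfit constrained by a forward model; schematically, and in notation that is ours rather than the paper's,

        \text{forward:}\ L(p)\,u = q, \qquad \text{adjoint:}\ L(p)^{*}\lambda = \frac{\partial \Phi}{\partial u}, \qquad \frac{d\Phi}{dp} = \frac{\partial \Phi}{\partial p} - \Big\langle \lambda,\ \frac{\partial L}{\partial p}\,u \Big\rangle,

    so one forward solve (for u) and one adjoint solve (for \lambda) per spectral channel deliver the full gradient with respect to every atmosphere and surface parameter, independent of how many parameters, view angles, or pixels enter the misfit \Phi.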

  6. Instantons and the 5D U(1) gauge theory with extra adjoint

    NASA Astrophysics Data System (ADS)

    Poghossian, Rubik; Samsonyan, Marine

    2009-07-01

    In this paper, we compute the partition function of 5D supersymmetric U(1) gauge theory with extra adjoint matter in general Ω background. It is well known that such partition functions encode very rich topological information. We show in particular that unlike the case with no extra matter, the partition function with extra adjoint at some special values of the parameters directly reproduces the generating function for the Poincare polynomial of the moduli space of instantons. We compare our results with those recently obtained by Iqbal et al (Refined topological vertex, cylindric partitions and the U(1) adjoint theory, arXiv:0803.2260), who used the so-called refined topological vertex method.

  7. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  8. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  9. Tracking influential haze source areas in North China using an adjoint model, GRAPES-CUACE

    NASA Astrophysics Data System (ADS)

    An, X. Q.; Zhai, S. X.; Jin, M.; Gong, S. L.; Wang, Y.

    2015-08-01

    Based upon adjoint theory, the adjoint of the aerosol module in the atmospheric chemical modeling system GRAPES-CUACE (Global/Regional Assimilation and PrEdiction System coupled with the CMA Unified Atmospheric Chemistry Environment) was developed and tested for correctness. Statistical comparison showed that BC (black carbon aerosol) concentrations simulated by GRAPES-CUACE were generally consistent with observations from the Nanjiao (urban) and Shangdianzi (rural) stations. To track the most influential emission-source regions and time intervals for the high BC concentrations during the simulation period, the adjoint model was used to compute the sensitivity of the average BC concentration over Beijing at the time of peak concentration (the objective function) with respect to the BC emission amount over the Beijing-Tianjin-Hebei region. Four types of regions were selected based on administrative divisions and on the distribution of the sensitivity coefficients. The adjoint model quantified the effects of emission-source reductions in different time intervals over different regions with a single independent simulation. The effects of the different emission reduction strategies based on the adjoint sensitivity information show that the more influential regions (those with relatively larger sensitivity coefficients) do not necessarily correspond to the administrative regions, and that controlling the sensitivity-oriented regions was more effective than controlling the administrative divisions. The influence of emissions on the objective function decreases sharply for pollutants emitted roughly 17-18 h earlier in this episode. Therefore, controlling critical emission regions during critical time intervals on the basis of adjoint sensitivity analysis is much more efficient than controlling administratively specified regions over an empirically chosen time period.

  10. CMT Source Inversions for Massive Data Assimilation in Global Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Lei, W.; Ruan, Y.; Bozdag, E.; Lefebvre, M. P.; Smith, J. A.; Modrak, R. T.; Komatitsch, D.; Song, X.; Liu, Q.; Tromp, J.; Peter, D. B.

    2015-12-01

    Full Waveform Inversion (FWI) is a vital tool for probing the Earth's interior and enhancing our knowledge of the underlying dynamical processes [e.g., Liu et al., 2012]. Using the adjoint tomography method, we have successfully obtained a first-generation global FWI model named M15 [Bozdag et al., 2015]. To achieve higher resolution of the emerging new structural features and to accommodate azimuthal anisotropy and anelasticity in the next-generation model, we expanded our database from 256 to 4,224 earthquakes. Previous studies have shown that ray-theory-based Centroid Moment Tensor (CMT) inversion algorithms can produce systematic biases in earthquake source parameters due to tradeoffs with 3D crustal and mantle heterogeneity [e.g., Hjorleifsdottir et al., 2010]. To reduce these well-known tradeoffs, we performed CMT inversions in our current 3D global model before resuming the structural inversion with the expanded database. Initial source parameters are selected from the global CMT database [Ekstrom et al., 2012], with moment magnitudes ranging from 5.5 to 7.0 and occurring between 1994 and 2015. Data from global and regional networks were retrieved from the IRIS DMC. Synthetic seismograms were generated based on the spectral-element-based seismic wave propagation solver (SPECFEM3D GLOBE) in model M15. We used a source inversion algorithm based on a waveform misfit function while allowing time shifts between data and synthetics to accommodate additional unmodeled 3D heterogeneity [Liu et al., 2004]. To accommodate the large number of earthquakes and time series (more than 10,000,000 records), we implemented a source inversion workflow based on the newly developed Adaptive Seismic Data Format (ASDF) [Krischer, Smith, et al., 2015] and ObsPy [Krischer et al., 2015]. In ASDF, each earthquake is associated with a single file, thereby eliminating I/O bottlenecks in the workflow and facilitating fast parallel processing. Our preliminary results indicate that errors

  11. Aerodynamic Shape Optimization of Supersonic Aircraft Configurations via an Adjoint Formulation on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Reuther, James; Alonso, Juan Jose; Rimlinger, Mark J.; Jameson, Antony

    1996-01-01

    This work describes the application of a control theory-based aerodynamic shape optimization method to the problem of supersonic aircraft design. The design process is greatly accelerated through the use of both control theory and a parallel implementation on distributed memory computers. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods (13, 12, 44, 38). The resulting problem is then implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) Standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on higher order computational fluid dynamics methods (CFD). In our earlier studies, the serial implementation of this design method (19, 20, 21, 23, 39, 25, 40, 41, 42, 43, 9) was shown to be effective for the optimization of airfoils, wings, wing-bodies, and complex aircraft configurations using both the potential equation and the Euler equations (39, 25). In our most recent paper, the Euler method was extended to treat complete aircraft configurations via a new multiblock implementation. Furthermore, during the same conference, we also presented preliminary results demonstrating that the basic methodology could be ported to distributed memory parallel computing architectures [24]. In this paper, our concern will be to demonstrate that the combined power of these new technologies can be used routinely in an industrial design environment by applying it to the case study of the design of typical supersonic transport configurations. A particular difficulty of this test case is posed by the propulsion/airframe integration.

  12. Aerodynamic Shape Optimization of Supersonic Aircraft Configurations via an Adjoint Formulation on Parallel Computers

    NASA Technical Reports Server (NTRS)

    Reuther, James; Alonso, Juan Jose; Rimlinger, Mark J.; Jameson, Antony

    1996-01-01

    This work describes the application of a control theory-based aerodynamic shape optimization method to the problem of supersonic aircraft design. The design process is greatly accelerated through the use of both control theory and a parallel implementation on distributed memory computers. Control theory is employed to derive the adjoint differential equations whose solution allows for the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is then implemented on parallel distributed memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) Standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on higher order computational fluid dynamics methods (CFD). In our earlier studies, the serial implementation of this design method was shown to be effective for the optimization of airfoils, wings, wing-bodies, and complex aircraft configurations using both the potential equation and the Euler equations. In our most recent paper, the Euler method was extended to treat complete aircraft configurations via a new multiblock implementation. Furthermore, during the same conference, we also presented preliminary results demonstrating that this basic methodology could be ported to distributed memory parallel computing architectures. In this paper, our concern will be to demonstrate that the combined power of these new technologies can be used routinely in an industrial design environment by applying it to the case study of the design of typical supersonic transport configurations. A particular difficulty of this test case is posed by the propulsion/airframe integration.

  13. Simultaneous analysis of large INTEGRAL/SPI datasets: Optimizing the computation of the solution and its variance using sparse matrix algorithms

    NASA Astrophysics Data System (ADS)

    Bouchet, L.; Amestoy, P.; Buttari, A.; Rouet, F.-H.; Chauvin, M.

    2013-02-01

    Nowadays, analyzing and reducing the ever larger astronomical datasets is becoming a crucial challenge, especially for long cumulated observation times. The INTEGRAL/SPI X/γ-ray spectrometer is an instrument for which it is essential to process many exposures at the same time in order to increase the low signal-to-noise ratio of the weakest sources. In this context, the conventional methods for data reduction are inefficient and sometimes not feasible at all. Processing several years of data simultaneously requires computing not only the solution of a large system of equations, but also the associated uncertainties. We aim at reducing the computation time and the memory usage. Since the SPI transfer function is sparse, we have used some popular methods for the solution of large sparse linear systems; we briefly review these methods. We use the Multifrontal Massively Parallel Solver (MUMPS) to compute the solution of the system of equations. We also need to compute the variance of the solution, which amounts to computing selected entries of the inverse of the sparse matrix corresponding to our linear system. This can be achieved through one of the latest features of the MUMPS software that has been partly motivated by this work. In this paper we provide a brief presentation of this feature and evaluate its effectiveness on astrophysical problems requiring the processing of large datasets simultaneously, such as the study of the entire emission of the Galaxy. We used these algorithms to solve the large sparse systems arising from SPI data processing and to obtain both their solutions and the associated variances. In conclusion, thanks to these newly developed tools, processing large datasets arising from SPI is now feasible with both a reasonable execution time and a low memory usage.
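
    A hedged sketch of the two quantities discussed above, the least-squares solution and the variances given by the diagonal of the inverse of the (sparse) normal matrix. MUMPS computes such selected inverse entries directly; SciPy has no equivalent feature, so the diagonal is recovered here by solving against unit vectors, which is only practical for small systems.

```python
# Hedged sketch of the solution and its variance for a sparse least-squares
# reduction: the normal-equation solution and the diagonal of the inverse of
# the normal matrix.  MUMPS computes selected inverse entries directly; here
# the diagonal is obtained by solving against unit vectors, feasible only for
# small systems.  The data and sizes are illustrative assumptions.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(1)
m, n = 2000, 300
H = sp.random(m, n, density=0.01, random_state=1, format="csr")   # sparse transfer function
x_true = rng.normal(size=n)
y = H @ x_true + 0.01 * rng.normal(size=m)

N = (H.T @ H).tocsc()                # normal matrix (assumed non-singular here)
rhs = H.T @ y
lu = spla.splu(N)                    # sparse LU factorization
x_hat = lu.solve(rhs)                # solution of the system

# variance of each component is proportional to diag(N^{-1})
diag_Ninv = np.array([lu.solve(e)[i] for i, e in enumerate(np.eye(n))])
print(x_hat[:3], diag_Ninv[:3])
```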

  14. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  15. Stratospheric Water Vapor and the Asian Monsoon: An Adjoint Model Investigation

    NASA Technical Reports Server (NTRS)

    Olsen, Mark A.; Andrews, Arlyn E.

    2003-01-01

    A new adjoint model of the Goddard Parameterized Chemistry and Transport Model is used to investigate the role that the Asian monsoon plays in transporting water to the stratosphere. The adjoint model provides a unique perspective compared to non-diffusive and non-mixing Lagrangian trajectory analysis. The quantity of water vapor transported from the monsoon and the pathways into the stratosphere are examined. The emphasis is on the amount of water originating from the monsoon that contributes to the tropical tape recorder signal. The cross-tropopause flux of water from the monsoon to the midlatitude lower stratosphere will also be discussed.

  16. Adjoint sensitivity studies of loop current and eddy shedding in the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Gopalakrishnan, Ganesh; Cornuelle, Bruce D.; Hoteit, Ibrahim

    2013-07-01

    Adjoint model sensitivity analyses were applied for the loop current (LC) and its eddy shedding in the Gulf of Mexico (GoM) using the MIT general circulation model (MITgcm). The circulation in the GoM is mainly driven by the energetic LC and subsequent LC eddy separation. In order to understand which ocean regions and features control the evolution of the LC, including anticyclonic warm-core eddy shedding in the GoM, forward and adjoint sensitivities with respect to previous model state and atmospheric forcing were computed using the MITgcm and its adjoint. Since the validity of the adjoint model sensitivities depends on the capability of the forward model to simulate the real LC system and the eddy shedding processes, a 5 year (2004-2008) forward model simulation was performed for the GoM using realistic atmospheric forcing, initial, and boundary conditions. This forward model simulation was compared to satellite measurements of sea-surface height (SSH) and sea-surface temperature (SST), and observed transport variability. Despite realistic mean state, standard deviations, and LC eddy shedding period, the simulated LC extension shows less variability and more regularity than the observations. However, the model is suitable for studying the LC system and can be utilized for examining the ocean influences leading to a simple, and hopefully generic LC eddy separation in the GoM. The adjoint sensitivities of the LC show influences from the Yucatan Channel (YC) flow and Loop Current Frontal Eddy (LCFE) on both LC extension and eddy separation, as suggested by earlier work. Some of the processes that control LC extension after eddy separation differ from those controlling eddy shedding, but include YC through-flow. The sensitivity remains stable for more than 30 days and moves generally upstream, entering the Caribbean Sea. The sensitivities of the LC for SST generally remain closer to the surface and move at speeds consistent with advection by the high-speed core of

  17. Global Adjoint Tomography: Combining Big Data with HPC Simulations

    NASA Astrophysics Data System (ADS)

    Bozdag, E.; Lefebvre, M. P.; Lei, W.; Peter, D. B.; Smith, J. A.; Komatitsch, D.; Tromp, J.

    2014-12-01

    The steady increase in data quality and the number of global seismographic stations have substantially increased the amount of data available for construction of Earth models. Meanwhile, developments in the theory of wave propagation, numerical methods and HPC systems have enabled unprecedented simulations of seismic wave propagation in realistic 3D Earth models which lead to the extraction of more information from data, ultimately culminating in the use of entire three-component seismograms. Our aim is to take adjoint tomography further to image the entire planet, which is one of the extreme cases in seismology due to its intense computational requirements and vast amount of high-quality seismic data that can potentially be assimilated in inversions. We have started low-resolution (T > 27 s, soon to be > 17 s) global inversions with 253 earthquakes for a transversely isotropic crust and mantle model on Oak Ridge National Laboratory's Cray XK7 "Titan" system. Recent improvements in our 3D solvers, such as the GPU version of the SPECFEM3D_GLOBE package, will allow us to perform higher-resolution (T > 9 s) and longer-duration (~180 m) simulations to take advantage of high-frequency body waves and major-arc surface waves to improve the imbalanced ray coverage resulting from the uneven distribution of sources and receivers on the globe. Our initial results after 10 iterations already indicate several prominent features reported in high-resolution continental studies, such as major slabs (Hellenic, Japan, Bismarck, Sandwich, etc.) and enhancement in plume structures (the Pacific superplume, the Hawaii hot spot, etc.). Our ultimate goal is to assimilate seismic data from more than 6,000 earthquakes within the magnitude range 5.5 ≤ Mw ≤ 7.0. To take full advantage of this data set on ORNL's computational resources, we need a solid framework for managing big data sets during pre-processing (e.g., data requests and quality checks), gradient calculations, and post-processing (e

  18. A two-dimensional, finite-element, flux-corrected transport algorithm for the solution of gas discharge problems

    NASA Astrophysics Data System (ADS)

    Georghiou, G. E.; Morrow, R.; Metaxas, A. C.

    2000-10-01

    An improved finite-element flux-corrected transport (FE-FCT) scheme, which was demonstrated in one dimension by the authors, is now extended to two dimensions and applied to gas discharge problems. The low-order positive ripple-free scheme, required to produce a FCT algorithm, is obtained by introducing diffusion to the high-order scheme (two-step Taylor-Galerkin). A self-adjusting variable diffusion coefficient is introduced, which reduces the high-order scheme to the equivalent of the upwind difference scheme, but without the complexities of an upwind scheme in a finite-element setting. Results are presented which show that the high-order scheme reduces to the equivalent of upwinding when the new diffusion coefficient is used. The proposed FCT scheme is shown to give similar results in comparison to a finite-difference time-split FCT code developed by Boris and Book. Finally, the new method is applied for the first time to a streamer propagation problem in its two-dimensional form.
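
    As a hedged finite-difference analogue of the key idea (the paper works with a two-step Taylor-Galerkin finite-element scheme, which is not reproduced here), the sketch below shows that for 1D linear advection with Courant number c, adding an artificial diffusion coefficient (c - c^2)/2 to the Lax-Wendroff update reproduces the first-order upwind update, i.e. a monotone low-order scheme of the kind needed as the FCT base.

```python
# Hedged finite-difference analogue of the idea above (NOT the authors'
# Taylor-Galerkin FE scheme): for 1D linear advection with Courant number c,
# adding diffusion d = (c - c**2)/2 to the Lax-Wendroff update reproduces the
# first-order upwind update, a monotone low-order scheme suitable as the
# FCT transport base.
import numpy as np

c = 0.4                               # Courant number, 0 < c < 1
u = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False)) + 1.0

def lax_wendroff(u, c):
    return (u - 0.5 * c * (np.roll(u, -1) - np.roll(u, 1))
              + 0.5 * c**2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1)))

def upwind(u, c):
    return u - c * (u - np.roll(u, 1))

d = 0.5 * (c - c**2)                  # "self-adjusting" diffusion for this scheme
low_order = lax_wendroff(u, c) + d * (np.roll(u, -1) - 2 * u + np.roll(u, 1))
print(np.max(np.abs(low_order - upwind(u, c))))   # ~ machine precision
```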

  19. Quantifying the temporal and spatial sensitivity of forecast storm surge to initial and boundary forcing using the novel technique of adjoint modelling

    NASA Astrophysics Data System (ADS)

    Wilson, C.; Horsburgh, K. J.; Williams, J. A.

    2012-04-01

    Adjoint modelling has only been adopted in atmospheric and large-scale ocean modelling within the last few years. For the first time this study applies it to tide-surge modelling in the coastal region to gain new insight into dynamics and predictability. In order to improve the skill of storm surge forecasts, one needs to understand how uncertainty propagates through the dynamical system from its boundary conditions, through physical parameterisations and how it is modified by the system dynamics. Uncertainty can come from many sources. Here, we aim to determine the sensitivity of forecast coastal sea level in a tide-surge model to infinitesimal perturbations of two such sources: wind stress and bottom drag. We aim to answer two key questions: 1. What are the relative roles of uncertainties in wind stress and bottom drag in causing changes in forecast coastal sea level? 2. For such changes, what are the temporal and spatial scales over which the sensitivities are largest? The existing approach to answer these questions is to use forward ensemble experiments to explore the propagation of uncertainty due to small perturbations of each of the parameters at several locations and at several times. However, to cover all parameters, spatial and temporal scales would require an ensemble with many thousands or millions of members in order to produce sensitivity maps and time-series and may still not capture all the sensitivity due to gaps in the choice of perturbation directions. We apply a new technique, adjoint modelling, which can produce sensitivity information with a single model integration. The adjoint of a tide-surge model based on MITgcm is used to examine coastal storm surge sensitivities to wind stress and bottom drag for an extreme event on the northwest European continental shelf on 9th November 2007. The forward model is first validated against the UK operational tide-surge model and observations, and then the adjoint is constructed using algorithmic automatic
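
    The single-integration property emphasized above can be illustrated with a hedged toy model (a small linear time-stepping system, not the MITgcm-based tide-surge model): one backward adjoint sweep yields the sensitivity of a scalar forecast metric to the forcing at every time step and component, whereas forward perturbation runs would be needed one per component and time.

```python
# Hedged toy illustration (not the MITgcm adjoint): for a linear time-stepping
# model x_{k+1} = A x_k + B f_k, the sensitivity of a scalar forecast metric
# J = g^T x_N to the forcing f_k at every time step is obtained from a single
# backward (adjoint) sweep.
import numpy as np

rng = np.random.default_rng(2)
n, nf, N = 6, 2, 40                       # state size, forcing size, steps
A = 0.9 * np.linalg.qr(rng.normal(size=(n, n)))[0]   # stable propagator
B = rng.normal(size=(n, nf))              # how "wind stress" enters the state
g = np.zeros(n); g[0] = 1.0               # "coastal sea level" = first state entry

# adjoint sweep: lam_N = g, lam_k = A^T lam_{k+1};  dJ/df_k = B^T lam_{k+1}
lam = g.copy()
sens = np.zeros((N, nf))
for k in range(N - 1, -1, -1):
    sens[k] = B.T @ lam
    lam = A.T @ lam

# check one entry against a forward finite difference
def run(f):
    x = np.zeros(n)
    for k in range(N):
        x = A @ x + B @ f[k]
    return g @ x

f = np.zeros((N, nf)); eps = 1e-6
fp = f.copy(); fp[10, 1] += eps
print(sens[10, 1], (run(fp) - run(f)) / eps)   # should agree
```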

  20. A time efficient finite differences algorithm for the solution of the meridional flow in turbo compressor impellers

    NASA Astrophysics Data System (ADS)

    Reitman, L.; Wolfshtein, M.; Adler, D.

    1982-11-01

    A finite difference method is developed for solving the non-viscous formulation of a three-dimensional compressible flow problem for turbomachinery impellers. The numerical results and the time efficiency of this method are compared to that provided by a finite element method for this problem. The finite difference method utilizes a numerical, curvilinear, and non-orthogonal coordinate transformation and the ADI scheme. The finite difference method is utilized to solve a test problem of a centrifugal compressor impeller. It is shown that the finite difference method produces results in good agreement with the experimentally determined flow fields and is as accurate as the finite element technique. However, the finite difference method only requires about half the time in order to obtain the solution for this problem as that required by the finite element method.

  1. A spectral tau algorithm based on Jacobi operational matrix for numerical solution of time fractional diffusion-wave equations

    NASA Astrophysics Data System (ADS)

    Bhrawy, A. H.; Doha, E. H.; Baleanu, D.; Ezz-Eldien, S. S.

    2015-07-01

    In this paper, an efficient and accurate spectral numerical method is presented for solving second- and fourth-order fractional diffusion-wave equations and fractional wave equations with damping. The proposed method is based on the Jacobi tau spectral procedure together with the Jacobi operational matrix for fractional integrals, described in the Riemann-Liouville sense. The main characteristic behind this approach is to reduce such problems to those of solving systems of algebraic equations in the unknown expansion coefficients of the sought-for spectral approximations. The validity and effectiveness of the method are demonstrated by solving five numerical examples. Numerical examples are presented in the form of tables and graphs to make comparisons with the results obtained by other methods and with the exact solutions easier.

  2. In Silico Calculation of Infinite Dilution Activity Coefficients of Molecular Solutes in Ionic Liquids: Critical Review of Current Methods and New Models Based on Three Machine Learning Algorithms.

    PubMed

    Paduszyński, Kamil

    2016-08-22

    The aim of the paper is to address all the disadvantages of currently available models for calculating infinite dilution activity coefficients (γ(∞)) of molecular solutes in ionic liquids (ILs), a relevant property from the point of view of many applications of ILs, particularly in separations. Three new models are proposed, each of them based on a distinct machine learning algorithm: stepwise multiple linear regression (SWMLR), feed-forward artificial neural network (FFANN), and least-squares support vector machine (LSSVM). The models were established based on the most comprehensive γ(∞) data bank reported so far (>34 000 data points for 188 ILs and 128 solutes). Following the paper published previously [J. Chem. Inf. Model 2014, 54, 1311-1324], the ILs were treated in terms of group contributions, whereas the Abraham solvation parameters were used to quantify the impact of solute structure. Temperature is also included in the input data of the models so that they can be utilized to obtain temperature-dependent data and thus related thermodynamic functions. Both internal and external validation techniques were applied to assess the statistical significance and explanatory power of the final correlations. A comparative study of the overall performance of the investigated SWMLR/FFANN/LSSVM approaches is presented in terms of root-mean-square error and average absolute relative deviation between calculated and experimental γ(∞), evaluated for different families of ILs and solutes, as well as between calculated and experimental infinite dilution selectivity for the separation problems of benzene from n-hexane and thiophene from n-heptane. LSSVM is shown to be a method with the lowest values of both training and generalization errors. It is finally demonstrated that the established models exhibit an improved accuracy compared to the state-of-the-art model, namely, temperature-dependent group contribution linear solvation energy relationship, published in 2011 [J. Chem
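
    A hedged sketch of the model-comparison workflow on synthetic data follows; the real γ(∞) data bank, the IL group contributions and the Abraham solute descriptors are not reproduced, and scikit-learn's KernelRidge is used only as a rough stand-in for LSSVM.

```python
# Hedged sketch of the model comparison described above, on synthetic data.
# The real descriptors and the >34 000-point data bank are not reproduced;
# KernelRidge is used as a rough stand-in for LSSVM.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 10))                      # stand-in descriptors
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=2000)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = {
    "MLR":   LinearRegression(),
    "FFANN": MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
    "KRR":   KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.1),
}
for name, model in models.items():
    model.fit(Xtr, ytr)
    rmse = mean_squared_error(yte, model.predict(Xte)) ** 0.5
    print(f"{name}: test RMSE = {rmse:.3f}")
```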

  3. GEN-HELS: Improving the efficiency of the CRAFT acoustic holography algorithm via an alternative approach to formulation of the complete general solution to the Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Chapman, Alexander Lloyd

    Recently, a sound source identification technique called CRAFT was developed as an advance in the state of the art in inverse noise problems. It addressed some limitations associated with nearfield acoustic holography and a few of the issues with inverse boundary element method. This work centers on two critical issues associated with the CRAFT algorithm. Although CRAFT employs the complete general solution associated with the Helmholtz equation, the approach taken to derive those equations results in computational inefficiency when implemented numerically. In this work, a mathematical approach to derivation of the basis equations results in a doubling in efficiency. This formulation of CRAFT is termed general Helmholtz equation, least-squares method (GEN-HELS). Additionally, the numerous singular points present in the gradient of the basis functions are shown here to resolve to finite limits. As a realistic test case, a diesel engine surface pressure and velocity are reconstructed to show the increase in efficiency from CRAFT to GEN-HELS. Keywords: Inverse Numerical Acoustics, Acoustic Holography, Helmholtz Equation, HELS Method, CRAFT Algorithm.

  4. Between algorithm and model: different Molecular Surface definitions for the Poisson-Boltzmann based electrostatic characterization of biomolecules in solution.

    PubMed

    Decherchi, Sergio; Colmenares, José; Catalano, Chiara Eva; Spagnuolo, Michela; Alexov, Emil; Rocchia, Walter

    2013-01-01

    The definition of a molecular surface which is physically sound and computationally efficient is a very interesting and long-standing problem in the implicit solvent continuum modeling of biomolecular systems as well as in the molecular graphics field. In this work, two molecular surfaces are evaluated with respect to their suitability for electrostatic computation as alternatives to the widely used Connolly-Richards surface: the blobby surface, an implicit Gaussian atom-centered surface, and the skin surface. As figures of merit, we considered surface differentiability and surface area continuity with respect to atom positions, and the agreement with explicit solvent simulations. Geometric analysis seems to favor the skin over the blobby surface, and points to an unexpected relationship between the non-connectedness of the surface, caused by interstices in the solute volume, and the surface area dependence on atomic centers. In order to assess the ability to reproduce explicit solvent results, specific software tools have been developed to enable the use of the skin surface in Poisson-Boltzmann calculations with the DelPhi solver. Results indicate that the skin and Connolly surfaces have a comparable performance from this last point of view. PMID:23519863

  5. Estimation of ex-core detector responses by adjoint Monte Carlo

    SciTech Connect

    Hoogenboom, J. E.

    2006-07-01

    Ex-core detector responses can be efficiently calculated by combining an adjoint Monte Carlo calculation with the converged source distribution of a forward Monte Carlo calculation. As the fission source distribution from a Monte Carlo calculation is given only as a collection of discrete space positions, the coupling requires a point flux estimator for each collision in the adjoint calculation. To avoid the infinite variance problems of the point flux estimator, a next-event finite-variance point flux estimator has been applied, which is an energy-dependent form for heterogeneous media of a finite-variance estimator known from the literature. To test the effects of this combined adjoint-forward calculation, a simple geometry of a homogeneous core with a reflector was adopted with a small detector in the reflector. To demonstrate the potential of the method, the continuous-energy adjoint Monte Carlo technique with anisotropic scattering was implemented with energy-dependent absorption and fission cross sections and constant scattering cross section. A gain in efficiency over a completely forward calculation of the detector response was obtained, which is strongly dependent on the specific system and especially the size and position of the ex-core detector and the energy range considered. Further improvements are possible. The method works without problems for small detectors, even for a point detector and a small or even zero energy range. (authors)

  6. Eulerian-Lagrangian localized adjoint methods for reactive transport in groundwater

    SciTech Connect

    Ewing, R.E.; Wang, Hong

    1996-12-31

    In this paper, we present Eulerian-Lagrangian localized adjoint methods (ELLAM) to solve convection-diffusion-reaction equations governing contaminant transport in groundwater flowing through an adsorbing porous medium. These ELLAM schemes can treat various combinations of boundary conditions and conserve mass. Numerical results are presented to demonstrate the strong potential of ELLAM schemes.

  7. Sensitivity analysis of a model of CO2 exchange in tundra ecosystems by the adjoint method

    NASA Technical Reports Server (NTRS)

    Waelbroek, C.; Louis, J.-F.

    1995-01-01

    A model of net primary production (NPP), decomposition, and nitrogen cycling in tundra ecosystems has been developed. The adjoint technique is used to study the sensitivity of the computed annual net CO2 flux to perturbations in initial conditions, climatic inputs, and the model's main parameters describing current seasonal CO2 exchange in wet sedge tundra at Barrow, Alaska. The results show that net CO2 flux is most sensitive to parameters characterizing litter chemical composition and more sensitive to decomposition parameters than to NPP parameters. This underlines the fact that in nutrient-limited ecosystems, decomposition drives net CO2 exchange by controlling mineralization of main nutrients. The results also indicate that the short-term (1 year) response of wet sedge tundra to CO2-induced warming is a significant increase in CO2 emission, creating a positive feedback to atmospheric CO2 accumulation. However, a cloudiness increase during the same year can severely alter this response and lead to either a slight decrease or a strong increase in emitted CO2, depending on its exact timing. These results demonstrate that the adjoint method is well suited to study systems encountering regime changes, as a single run of the adjoint model provides sensitivities of the net CO2 flux to perturbations in all parameters and variables at any time of the year. Moreover, it is shown that large errors due to the presence of thresholds can be avoided by first delimiting the range of applicability of the adjoint results.

  8. Analysis of Correlated Coupling of Monte Carlo Forward and Adjoint Histories

    SciTech Connect

    Ueki, Taro; Hoogenboom, J.E.; Kloosterman, J. L.

    2001-02-15

    In Monte Carlo correlated coupling, forward and adjoint particle histories are initiated in exactly opposite directions at an arbitrarily placed surface between a physical source and a physical detector. It is shown that this coupling calculation can become more efficient than standard forward calculations. In many cases, the basic form of correlated coupling is less efficient than standard forward calculations. This inherent inefficiency can be overcome by applying a black absorber perturbation to either the forward or the adjoint problem and by processing the product of batch averages as one statistical entity. The usage of the black absorber is based on the invariance of the response flow integral with a material perturbation in either the physical detector side volume in the forward problem or the physical source side volume in the adjoint problem. The batch-average product processing makes use of a quadratic increase of the nonzero coupled-score probability. All the developments have been done in such a way that improved efficiency schemes available in widely distributed Monte Carlo codes can be applied to both the forward and adjoint simulations. Also, the physical meaning of the black absorber perturbation is interpreted based on surface crossing and is numerically validated. In addition, the immediate reflection at the intermediate surface with a controlled direction change is investigated within the invariance framework. This approach can be advantageous for a void streaming problem.

  9. Canonical quantization of lattice Higgs-Maxwell-Chern-Simons fields: Krein Self-adjointness

    SciTech Connect

    Bowman, Daniel A.; Challifour, John L.

    2006-10-15

    It is shown how techniques from constructive quantum field theory may be applied to indefinite metric gauge theories in Hilbert space for the case of a Higgs-Maxwell-Chern-Simons theory on a lattice. The Hamiltonian operator is shown to be Krein essentially self-adjoint by means of unbounded but Krein unitary transformations relating the Hamiltonian to an essentially maximal accretive operator.

  10. Using adjoint-based optimization to study wing flexibility in flapping flight

    NASA Astrophysics Data System (ADS)

    Wei, Mingjun; Xu, Min; Dong, Haibo

    2014-11-01

    In the study of flapping-wing flight of birds and insects, it is important to understand the impact of wing flexibility/deformation on aerodynamic performance. However, the large control space arising from the complexity of wing deformation and kinematics makes the usual parametric study very difficult or sometimes impossible. Since the cost of the adjoint-based approach to sensitivity analysis and optimization is independent of the number of input parameters, it becomes an attractive approach in our study. Traditionally, the adjoint equation and sensitivity are derived in a fluid domain with fixed solid boundaries. A moving boundary is only allowed when its motion is not part of the control effort. Otherwise, the derivation becomes either problematic or too complex to be feasible. Using non-cylindrical calculus to deal with boundary deformation solves this problem in a very simple and still mathematically rigorous manner. Thus, it allows adjoint-based optimization to be applied in the study of flapping-wing flexibility. We applied the "improved" adjoint-based method to study the flexibility of both two-dimensional and three-dimensional flapping wings, where the flapping trajectory and deformation are described by either model functions or real data from the flight of dragonflies. Supported by AFOSR.

  11. On the proper treatment of grid sensitivities in continuous adjoint methods for shape optimization

    NASA Astrophysics Data System (ADS)

    Kavvadias, I. S.; Papoutsis-Kiachagias, E. M.; Giannakoglou, K. C.

    2015-11-01

    The continuous adjoint method for shape optimization problems, in flows governed by the Navier-Stokes equations, can be formulated in two different ways, each of which leads to a different expression for the sensitivity derivatives of the objective function with respect to the control variables. The first formulation leads to an expression including only boundary integrals; it thus has a low computational cost, but its accuracy becomes questionable when used with coarse grids. The second formulation comprises a sum of boundary and field integrals; due to the field integrals, it has a noticeably higher computational cost, though it attains higher accuracy. In this paper, the equivalence of the two formulations is revisited from the mathematical and, particularly, the numerical point of view. Internal and external aerodynamics cases, in which the objective function is either the total pressure losses or the force exerted on a solid body, are examined and differences in the computed gradients are discussed. After identifying the reason behind these discrepancies, the adjoint formulation is enhanced by the adjoint to a (hypothetical) grid displacement model and the new approach is proved to reproduce the accuracy of the second adjoint formulation while maintaining the low cost of the first one.

  12. Iterative solution of multiple radiation and scattering problems in structural acoustics using the BL-QMR algorithm

    SciTech Connect

    Malhotra, M.

    1996-12-31

    Finite-element discretizations of time-harmonic acoustic wave problems in exterior domains result in large sparse systems of linear equations with complex symmetric coefficient matrices. In many situations, these matrix problems need to be solved repeatedly for different right-hand sides, but with the same coefficient matrix. For instance, multiple right-hand sides arise in radiation problems due to multiple load cases, and also in scattering problems when multiple angles of incidence of an incoming plane wave need to be considered. In this talk, we discuss the iterative solution of multiple linear systems arising in radiation and scattering problems in structural acoustics by means of a complex symmetric variant of the BL-QMR method. First, we summarize the governing partial differential equations for time-harmonic structural acoustics, the finite-element discretization of these equations, and the resulting complex symmetric matrix problem. Next, we sketch the special version of BL-QMR method that exploits complex symmetry, and we describe the preconditioners we have used in conjunction with BL-QMR. Finally, we report some typical results of our extensive numerical tests to illustrate the typical convergence behavior of BL-QMR method for multiple radiation and scattering problems in structural acoustics, to identify appropriate preconditioners for these problems, and to demonstrate the importance of deflation in block Krylov-subspace methods. Our numerical results show that the multiple systems arising in structural acoustics can be solved very efficiently with the preconditioned BL-QMR method. In fact, for multiple systems with up to 40 and more different right-hand sides we get consistent and significant speed-ups over solving the systems individually.
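
    SciPy offers no block QMR, so the hedged sketch below only illustrates the multiple right-hand-side structure of the problem (one complex symmetric matrix, many load cases or incidence angles) by reusing a single sparse LU factorization for all right-hand sides.

```python
# Hedged sketch of the multiple right-hand-side structure described above.
# SciPy provides no block QMR (BL-QMR), so a single sparse LU factorization
# of the complex symmetric coefficient matrix is simply reused for all
# right-hand sides.  Sizes and matrix construction are illustrative.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(4)
n, nrhs = 3000, 40
# complex symmetric (not Hermitian) matrix, as in time-harmonic acoustics
T = sp.random(n, n, density=1e-3, random_state=4, format="csr")
A = (T + T.T) + sp.identity(n, format="csr") * (10.0 + 1.0j)
B = rng.normal(size=(n, nrhs)) + 1j * rng.normal(size=(n, nrhs))   # 40 load cases

lu = spla.splu(A.tocsc())            # factor once
X = np.column_stack([lu.solve(B[:, j]) for j in range(nrhs)])
print(np.linalg.norm(A @ X - B) / np.linalg.norm(B))   # residual check
```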

  13. Adjoint distributed catchment modelling for flood impact of rural land use and management change

    NASA Astrophysics Data System (ADS)

    O'Donnell, G. M.; Ewen, J.; O'Connell, P. E.

    2010-12-01

    Understanding the impact that changes in land use and management (LUM) can have on downstream flooding is a significant research challenge that requires a distributed physically-based modelling approach. A key issue in this regard is how to understand the role of the river channel network in propagating the effects of changes in runoff generation downstream to flood sites. The effects of LUM changes can be analysed as if they are perturbations in properties or rates that cause perturbations in flow to propagate through the network. A novel approach has been developed that computes the sensitivity of an impact (for example the impact on a flood level) to upstream perturbations. This is achieved using an adjoint hydraulic model of the channel network that computes sensitivities using algorithmic differentiation. The hydraulic model provides a detailed representation of the drainage network, based on field surveys of channel cross sections and channel roughness, and is linked to runoff generation elements (grid squares). Various sensitivities can be calculated, including sensitivities to perturbations in runoff generation parameters, thus providing some insight into the link between impact and the parameterisation used for runoff generation, and perturbation in the rate of lateral inflow to the network, as can be calculated using expert knowledge on the local effects of LUM on runoff from agricultural fields and hillslopes. The resulting sensitivities may be decomposed and presented as maps that show the relationship between perturbations and impacts, giving valuable insight into the link between cause and effects. Results are provided for the Hodder catchment, NW England (260 sq. km), which is undergoing large-scale changes in LUM. The application focuses on the role of hydrodynamic and geomorphologic dispersion in attenuating perturbations in network flow that result from perturbations to lateral inflows of the types expected if changes in LUM alter the timing or

  14. Gradient descent learning algorithm overview: a general dynamical systems perspective.

    PubMed

    Baldi, P

    1995-01-01

    Gives a unified treatment of gradient descent learning algorithms for neural networks using a general framework of dynamical systems. This general approach organizes and simplifies all the known algorithms and results which have been originally derived for different problems (fixed point/trajectory learning), for different models (discrete/continuous), for different architectures (forward/recurrent), and using different techniques (backpropagation, variational calculus, adjoint methods, etc.). The general approach can also be applied to derive new algorithms. The author then briefly examines some of the complexity issues and limitations intrinsic to gradient descent learning. Throughout the paper, the author focuses on the problem of trajectory learning. PMID:18263297
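
    A minimal hedged instance of this unified picture is sketched below: trajectory learning for a small discrete dynamical system, with the gradient obtained from the backward (adjoint) recursion, i.e. backpropagation through time, and checked against a finite difference.

```python
# A minimal, hedged instance of the unified picture above: trajectory
# learning for a discrete dynamical system h_{k+1} = tanh(W h_k + b) with a
# terminal loss.  The gradient with respect to W follows from the backward
# (adjoint) recursion, i.e. backpropagation through time.
import numpy as np

rng = np.random.default_rng(5)
n, N = 4, 20
W = 0.5 * rng.normal(size=(n, n)); b = 0.1 * rng.normal(size=n)
h0 = rng.normal(size=n); target = rng.normal(size=n)

def forward(W):
    hs = [h0]
    for _ in range(N):
        hs.append(np.tanh(W @ hs[-1] + b))
    return hs

def loss(W):
    return 0.5 * np.sum((forward(W)[-1] - target) ** 2)

# adjoint (reverse) sweep
hs = forward(W)
lam = hs[-1] - target                 # dL/dh_N
gW = np.zeros_like(W)
for k in range(N - 1, -1, -1):
    pre = W @ hs[k] + b
    dpre = lam * (1.0 - np.tanh(pre) ** 2)
    gW += np.outer(dpre, hs[k])
    lam = W.T @ dpre

# finite-difference check of one entry
eps = 1e-6
Wp = W.copy(); Wp[1, 2] += eps
print(gW[1, 2], (loss(Wp) - loss(W)) / eps)
```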

  15. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop

  16. Skyshine analysis using energy and angular dependent dose-contribution fluxes obtained from air-over-ground adjoint calculation.

    PubMed

    Uematsu, Mikio; Kurosawa, Masahiko

    2005-01-01

    A generalised and convenient skyshine dose analysis method has been developed based on forward-adjoint folding technique. In the method, the air penetration data were prepared by performing an adjoint DOT3.5 calculation with cylindrical air-over-ground geometry having an adjoint point source (importance of unit flux to dose rate at detection point) in the centre. The accuracy of the present method was certified by comparing with DOT3.5 forward calculation. The adjoint flux data can be used as generalised radiation skyshine data for all sorts of nuclear facilities. Moreover, the present method supplies plenty of energy-angular dependent contribution flux data, which will be useful for detailed shielding design of facilities. PMID:16604693
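
    The forward-adjoint folding underlying the method can be written in the standard form below; the notation and phase-space variables are assumptions for illustration, not taken from the paper.

```latex
% Standard forward-adjoint folding relation (illustrative notation): the dose
% rate D at the detection point is obtained by folding the facility source S
% with the adjoint (importance) flux \phi^{\dagger} computed for a unit dose
% response at that point.
D \;=\; \int_{V}\!\mathrm{d}V \int\!\mathrm{d}E \int_{4\pi}\!\mathrm{d}\Omega\;
    S(\mathbf{r}, E, \boldsymbol{\Omega})\;
    \phi^{\dagger}(\mathbf{r}, E, \boldsymbol{\Omega}) .
```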

  17. The continuous adjoint approach to the k-ω SST turbulence model with applications in shape optimization

    NASA Astrophysics Data System (ADS)

    Kavvadias, I. S.; Papoutsis-Kiachagias, E. M.; Dimitrakopoulos, G.; Giannakoglou, K. C.

    2015-11-01

    In this article, the gradient of aerodynamic objective functions with respect to design variables, in problems governed by the incompressible Navier-Stokes equations coupled with the k-ω SST turbulence model, is computed using the continuous adjoint method, for the first time. Shape optimization problems for minimizing drag, in external aerodynamics (flows around isolated airfoils), or viscous losses in internal aerodynamics (duct flows) are considered. Sensitivity derivatives computed with the proposed adjoint method are compared to those computed with finite differences or a continuous adjoint variant based on the frequently used assumption of frozen turbulence; the latter proves the need for differentiating the turbulence model. Geometries produced by optimization runs performed with sensitivities computed by the proposed method and the 'frozen turbulence' assumption are also compared to quantify the gain from formulating and solving the adjoint to the turbulence model equations.

  18. JEF-2 data check by reanalysis of the Rossendorf experiments in reactor configurations with specially designed adjoint spectra

    SciTech Connect

    Dietze, K.; Fort, E.; Rahlfs, S.; Rimpault, G.; Salvatores, M.

    1994-12-31

    The Rossendorf RRRJSEG configurations, characterized by energy-independent or continuously rising adjoint spectra, have been recalculated using the full European scheme JEF2/ECCO/ERANOS. C/E values are given for structural materials and fission product nuclides using the results of sample reactivity measurements at the central position of these configurations. Due to the specially designed adjoint spectra, capture or scattering cross-sections can be checked separately. Recommendations for data corrections are given based on perturbation theory calculations.

  19. On-line monitoring the extract process of Fu-fang Shuanghua oral solution using near infrared spectroscopy and different PLS algorithms

    NASA Astrophysics Data System (ADS)

    Kang, Qian; Ru, Qingguo; Liu, Yan; Xu, Lingyan; Liu, Jia; Wang, Yifei; Zhang, Yewen; Li, Hui; Zhang, Qing; Wu, Qing

    2016-01-01

    An on-line near infrared (NIR) spectroscopy monitoring method with an appropriate multivariate calibration method was developed for the extraction process of Fu-fang Shuanghua oral solution (FSOS). On-line NIR spectra were collected through two fiber optic probes, which were designed to transmit NIR radiation by a 2 mm flange. Partial least squares (PLS), interval PLS (iPLS) and synergy interval PLS (siPLS) algorithms were used comparatively for building the calibration regression models. During the extraction process, NIR spectroscopy was employed to determine the chlorogenic acid (CA) content, total phenolic acids content (TPC), total flavonoids content (TFC) and soluble solids content (SSC). High performance liquid chromatography (HPLC), the ultraviolet spectrophotometric method (UV) and loss-on-drying methods were employed as reference methods. Experimental results showed that the siPLS model performed best compared with PLS and iPLS. The calibration models for CA, TPC, TFC and SSC had high determination coefficients (R2) (0.9948, 0.9992, 0.9950 and 0.9832) and low root mean square errors of cross validation (RMSECV) (0.0113, 0.0341, 0.1787 and 1.2158), which indicates a good correlation between reference values and NIR-predicted values. The overall results show that the on-line detection method could be feasible in real applications and would be of great value for monitoring the mixed decoction process of FSOS and other Chinese patent medicines.
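
    A hedged sketch of the calibration step on synthetic "spectra" follows (the real NIR spectra and HPLC/UV reference values are not reproduced); it reports the RMSECV and R2 figures of merit quoted above, and interval selection (iPLS/siPLS) would simply repeat this over wavelength sub-ranges and keep the best intervals.

```python
# Hedged sketch of the PLS calibration step on synthetic "spectra" (the real
# NIR data and HPLC/UV reference values are not reproduced).  The model is
# scored by RMSECV and R2; iPLS/siPLS would repeat this over wavelength
# intervals and keep the best ones.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(6)
n_samples, n_wavelengths = 120, 200
X = rng.normal(size=(n_samples, n_wavelengths)).cumsum(axis=1)     # smooth fake spectra
y = X[:, 50] - 0.5 * X[:, 120] + 0.05 * rng.normal(size=n_samples) # fake analyte content

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV = {rmsecv:.4f}, R2 = {r2_score(y, y_cv):.4f}")
```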

  20. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    NASA Astrophysics Data System (ADS)

    Goldberg, Daniel N.; Krishna Narayanan, Sri Hari; Hascoet, Laurent; Utke, Jean

    2016-05-01

    We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. The methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.
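
    The reverse-accumulation idea for fixed points can be shown in a hedged scalar example (the land-ice stress balance and the OpenAD machinery are not reproduced): the adjoint variable satisfies its own fixed-point equation built from the converged forward state, so nothing from the forward iteration history needs to be taped.

```python
# Hedged scalar illustration of reverse accumulation for a fixed point: the
# forward problem iterates u = f(u, p) to convergence, the adjoint variable w
# solves its own fixed point w = (df/du) w + dJ/du, and then dJ/dp = w * df/dp.
# Only the converged state is needed, which keeps the memory footprint small.
import numpy as np

p = 0.3
f  = lambda u, p: 0.5 * np.cos(u) + p          # contraction in u
fu = lambda u: -0.5 * np.sin(u)                # df/du at the fixed point
fp = 1.0                                       # df/dp
J  = lambda u: u ** 2

u = 0.0
for _ in range(100):                           # forward fixed-point iteration
    u = f(u, p)

w = 0.0
for _ in range(100):                           # adjoint fixed-point iteration
    w = fu(u) * w + 2.0 * u                    # dJ/du = 2u
dJdp_adjoint = w * fp

# finite-difference check
eps = 1e-6
u_eps = 0.0
for _ in range(100):
    u_eps = f(u_eps, p + eps)
print(dJdp_adjoint, (J(u_eps) - J(u)) / eps)
```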

  1. Active adjoint modeling method in microwave induced thermoacoustic tomography for breast tumor.

    PubMed

    Zhu, Xiaozhang; Zhao, Zhiqin; Wang, Jinguo; Chen, Guoping; Liu, Qing Huo

    2014-07-01

    To improve the model-based inversion performance of microwave induced thermoacoustic tomography for breast tumor imaging, an active adjoint modeling (AAM) method is proposed. It aims to provide a more realistic breast acoustic model used for tumor inversion as the background by actively measuring and reconstructing the structural heterogeneity of the human breast environment. It utilizes the reciprocity of acoustic sensors and adapts the adjoint tomography method from seismic exploration. With the reconstructed acoustic model of the breast environment, the performance of model-based inversion methods such as the time reversal mirror is improved significantly in both contrast and accuracy. To prove the advantage of AAM, a checkerboard pattern model and anatomically realistic breast models have been used in full-wave numerical simulations. PMID:24956614

  2. Adjoint Airfoil Optimization of Darrieus-Type Vertical Axis Wind Turbine

    NASA Astrophysics Data System (ADS)

    Fuchs, Roman; Nordborg, Henrik

    2012-11-01

    We demonstrate the feasibility of using an adjoint solver to optimize the torque of a Darrieus-type vertical axis wind turbine (VAWT). We start with a 2D cross section of a symmetrical airfoil and restrict ourselves to low solidity ratios to minimize blade-vortex interactions. The adjoint solver of the ANSYS FLUENT software package computes the sensitivities of airfoil surface forces based on a steady flow field. Hence, we find the torque of a full revolution using a weighted average of the sensitivities at different wind speeds and angles of attack. The weights are computed analytically, and the range of angles of attack is given by the tip speed ratio. Then the airfoil geometry is evolved, and the proposed methodology is evaluated by transient simulations.

  3. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.
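
    The complex-variable trick that makes the differentiation straightforward is, in its simplest form, the complex-step derivative; the sketch below is a hedged illustration of that idea only, not of the authors' unstructured-grid adjoint machinery.

```python
# Hedged illustration of the complex-variable idea underlying the approach
# above (not the authors' adjoint solver): for a real-analytic function,
# f'(x) = Im[f(x + i*h)] / h + O(h^2), with no subtractive cancellation,
# so h can be taken extremely small.
import numpy as np

def f(x):
    return np.exp(x) * np.sin(x) / (1.0 + x ** 2)

x, h = 0.7, 1e-30
cs = np.imag(f(x + 1j * h)) / h                     # complex-step derivative
fd = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6             # central difference for comparison
print(cs, fd)
```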

  5. Adjoint-based optimal control of an airfoil in gusting flows

    NASA Astrophysics Data System (ADS)

    Choi, Jeesoon; Colonius, Tim; California Institute of Technology Team

    2015-11-01

    In this study, we apply optimal control to an airfoil in gusting flow to investigate the possibility of extracting energy. The gradients of an objective function are obtained via the adjoint method and used to minimize the cost. The immersed boundary projection method is used for our forward solver, and the relevant adjoint equations are derived by the discrete-then-differentiate approach. Translational gusts are generated by a body force in the computational domain upstream of the body, and the method finds the optimal angles of the airfoil that extract the greatest amount of energy. The influence of a vortex traversing an airfoil is also investigated and optimized to reduce the fluctuating lift.

  6. Application of Adjoint Methodology to Supersonic Aircraft Design Using Reversed Equivalent Areas

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Sriram K.

    2013-01-01

    This paper presents an approach to shape an aircraft to equivalent area based objectives using the discrete adjoint approach. Equivalent areas can be obtained either using reversed augmented Burgers equation or direct conversion of off-body pressures into equivalent area. Formal coupling with CFD allows computation of sensitivities of equivalent area objectives with respect to aircraft shape parameters. The exactness of the adjoint sensitivities is verified against derivatives obtained using the complex step approach. This methodology has the benefit of using designer-friendly equivalent areas in the shape design of low-boom aircraft. Shape optimization results with equivalent area cost functionals are discussed and further refined using ground loudness based objectives.

  7. Theoretical analysis and numerical experiments of variational adjoint approach for refractivity estimation

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-Feng; Huang, Si-Xun; Du, Hua-Dong

    2011-02-01

    This paper explores the possibility of refractive index profile retrieval from field measurements at an array of radio receivers using a variational adjoint approach. The derivation of the adjoint model begins with the parabolic wave equation for a smooth, perfectly conducting surface and horizontal polarization conditions. To deal with the ill-posedness of the inversion, regularization ideas are introduced into the construction of the cost function. Based on steepest descent iterations, the optimal refractivity value can be retrieved quickly at each height point. Numerical experiments demonstrate that the method works well for short-distance signals, while it is not accurate enough for long-distance propagation. However, providing a good initial refractivity profile through curve fitting can generally improve the inversions.
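
    A generic form of the regularized cost function and steepest-descent update described above is given below; the symbols (profile m, propagation operator H, observed field d, regularization weight alpha, reference profile m0, step length gamma_k) are illustrative assumptions, not the paper's notation.

```latex
% Generic Tikhonov-regularized variational cost (illustrative symbols):
% m is the refractivity profile, H the parabolic-equation propagation
% operator, d the measured field at the receiver array, m_0 a reference
% profile, and \alpha the regularization weight; steepest descent updates
% m along the negative gradient.
J(m) \;=\; \tfrac{1}{2}\,\bigl\lVert H(m) - d \bigr\rVert^{2}
      \;+\; \tfrac{\alpha}{2}\,\bigl\lVert m - m_{0} \bigr\rVert^{2},
\qquad
m^{(k+1)} \;=\; m^{(k)} \;-\; \gamma_{k}\,\nabla J\!\bigl(m^{(k)}\bigr).
```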

  8. Self-adjoint elliptic operators with boundary conditions on not closed hypersurfaces

    NASA Astrophysics Data System (ADS)

    Mantile, Andrea; Posilicano, Andrea; Sini, Mourad

    2016-07-01

    The theory of self-adjoint extensions of symmetric operators is used to construct self-adjoint realizations of a second-order elliptic differential operator on Rn with linear boundary conditions on (a relatively open part of) a compact hypersurface. Our approach allows us to obtain Kreĭn-like resolvent formulae where the reference operator coincides with the "free" operator with domain H2 (Rn); this provides a useful tool for the scattering problem from a hypersurface. Concrete examples of this construction are developed in connection with the standard boundary conditions, Dirichlet, Neumann, Robin, δ and δ′-type, assigned either on an (n - 1)-dimensional compact boundary Γ = ∂ Ω or on a relatively open part Σ ⊂ Γ. Schatten-von Neumann estimates for the difference of the powers of resolvents of the free and the perturbed operators are also proven; these give existence and completeness of the wave operators of the associated scattering systems.

  9. Realization of Center Symmetry in Two Adjoint Flavor Large-N Yang-Mills

    SciTech Connect

    Catterall, Simon; Galvez, Richard; Unsal, Mithat; /SLAC /Stanford U., Phys. Dept.

    2010-08-26

    We report on the results of numerical simulations of SU(N) lattice Yang-Mills with two flavors of (light) Wilson fermion in the adjoint representation. We analytically and numerically address the question of center symmetry realization on lattices with Λ sites in each direction in the large-N limit. We show, by a weak coupling calculation, that for massless fermions center symmetry realization is independent of Λ and is unbroken. Then, we extend our result by conducting simulations at nonzero mass and finite gauge coupling. Our results indicate that center symmetry is intact for a range of fermion masses in the vicinity of the critical line on lattices of volume 2^4. This observation makes it possible to compute infinite-volume physical observables using small-volume simulations in the limit N → ∞, with possible applications to the determination of the conformal window in gauge theories with adjoint fermions.

  10. Artificial neural network-genetic algorithm based optimization for the adsorption of methylene blue and brilliant green from aqueous solution by graphite oxide nanoparticle.

    PubMed

    Ghaedi, M; Zeinali, N; Ghaedi, A M; Teimuori, M; Tashkhourian, J

    2014-05-01

    In this study, graphite oxide (GO) nanoparticles were synthesized according to the Hummers method and subsequently used for the removal of methylene blue (MB) and brilliant green (BG). Detailed information about the structure and physicochemical properties of GO was obtained by different techniques such as XRD and FTIR analysis. The influence of solution pH, initial dye concentration, contact time and adsorbent dosage was examined in batch mode, and the optimum conditions were set as pH=7.0, 2 mg of GO and 10 min contact time. Fitting equilibrium isotherm models to the adsorption capacities of GO showed that the Langmuir model best represented the experimental data, with maximum adsorption capacities of 476.19 and 416.67 for the MB and BG dyes in single solution. The analysis of the adsorption rate at various stirring times shows that the adsorption of both dyes followed a pseudo-second-order kinetic model in combination with an interparticle diffusion model. Subsequently, the adsorption data were modeled with an artificial neural network to evaluate and obtain the real conditions for fast and efficient removal of the dyes. A three-layer artificial neural network (ANN) model is applicable for accurate prediction of the dye removal percentage from aqueous solution by GO, based on 336 experimental data points. The network was trained using the experimental data obtained at the optimum pH with different GO amounts (0.002-0.008 g) and 5-40 mg/L of both dyes over contact times of 0.5-30 min. The ANN model was able to predict the removal efficiency with the Levenberg-Marquardt algorithm (LMA), a linear transfer function (purelin) at the output layer and a tangent sigmoid transfer function (tansig) at the hidden layer, with 10 and 11 neurons for the MB and BG dyes, respectively. The minimum mean squared error (MSE) of 0.0012 and coefficient of determination (R(2)) of 0.982 were found for prediction and modeling of MB removal, while the respective value for BG was the

  12. Second-order adjoint sensitivity analysis methodology (2nd-ASAM) for computing exactly and efficiently first- and second-order sensitivities in large-scale linear systems: II. Illustrative application to a paradigm particle diffusion problem

    NASA Astrophysics Data System (ADS)

    Cacuci, Dan G.

    2015-03-01

    This work presents an illustrative application of the second-order adjoint sensitivity analysis methodology (2nd-ASAM) to a paradigm neutron diffusion problem, which is sufficiently simple to admit an exact solution, thereby making transparent the underlying mathematical derivations. The general theory underlying the 2nd-ASAM indicates that, for a physical system comprising Nα parameters, the computation of all of the first- and second-order response sensitivities requires (per response) at most (2Nα + 1) "large-scale" computations using the first-level and, respectively, second-level adjoint sensitivity systems (1st-LASS and 2nd-LASS). Very importantly, however, the illustrative application presented in this work shows that the actual number of adjoint computations needed for computing all of the first- and second-order response sensitivities may be significantly less than (2Nα + 1) per response. For this illustrative problem, four "large-scale" adjoint computations sufficed for the complete and exact computation of all 4 first-order and 10 distinct second-order derivatives. Furthermore, the construction and solution of the 2nd-LASS requires very little additional effort beyond the construction of the adjoint sensitivity system needed for computing the first-order sensitivities. Very significantly, only the sources on the right sides of the diffusion (differential) operator needed to be modified; the left side of the differential equations (and hence the "solver" in large-scale practical applications) remained unchanged. All of the first-order relative response sensitivities to the model parameters have significantly large values, of order unity. Also importantly, most of the second-order relative sensitivities are just as large, and some are even up to twice as large as the first-order sensitivities. In the illustrative example presented in this work, the second-order sensitivities contribute little to the response variances and covariances. However, they have the
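
    A quick arithmetic check of the operation counts quoted above, under the assumption that the distinct second-order sensitivities number Nα(Nα + 1)/2: for the paradigm problem with Nα = 4 parameters, the bound of (2Nα + 1) adjoint computations per response compares with 4 first-order and 10 distinct second-order derivatives, and the abstract reports that only four adjoint computations were actually needed.

```python
# Operation counts for the 2nd-ASAM bound quoted above (illustrative check).
N_alpha = 4                                   # parameters in the paradigm problem
first_order = N_alpha                         # distinct dR/da_i
second_order = N_alpha * (N_alpha + 1) // 2   # distinct d2R/(da_i da_j)
adjoint_bound = 2 * N_alpha + 1               # worst-case adjoint solves per response
print(first_order, second_order, adjoint_bound)   # -> 4 10 9
```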

  13. Sensitivity of temporal moments calculated by the adjoint-state method and joint inversing of head and tracer data

    NASA Astrophysics Data System (ADS)

    Cirpka, Olaf A.; Kitanidis, Peter K.

    Including tracer data into geostatistically based methods of inverse modeling is computationally very costly when all concentration measurements are used and the sensitivities of many observations are calculated by the direct differentiation approach. Harvey and Gorelick (Water Resour Res 1995;31(7):1615-26) have suggested the use of the first temporal moment instead of the complete concentration record at a point. We derive a computationally efficient adjoint-state method for the sensitivities of the temporal moments that requires the solution of the steady-state flow equation and two steady-state transport equations for the forward problem and the same number of equations for each first-moment measurement. The efficiency of the method makes it feasible to evaluate the sensitivity matrix many times in large domains. We incorporate our approach for the calculation of sensitivities in the quasi-linear geostatistical method of inversing ("iterative cokriging"). The application to an artificial example of a tracer introduced into an injection well shows good convergence behavior when both head and first-moment data are used for inversing, whereas inversing of arrival times alone is less stable.
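
    As a minimal illustration of the summary statistic used here, the sketch below computes the normalized first temporal moment (mean arrival time) of a synthetic tracer breakthrough curve; the curve and time units are placeholders, not data from the study.

```python
# Minimal sketch: the normalized first temporal moment of a tracer
# breakthrough curve c(t), the summary quantity used in place of the full
# concentration record. The curve below is synthetic.
import numpy as np

t = np.linspace(0.0, 200.0, 2001)                  # time axis (hypothetical units)
c = np.exp(-0.5 * ((t - 60.0) / 15.0) ** 2)        # synthetic breakthrough curve

m0 = np.trapz(c, t)            # zeroth moment: area under the curve
m1 = np.trapz(t * c, t) / m0   # normalized first moment = mean arrival time
print(f"mean arrival time ≈ {m1:.1f}")             # ≈ 60 for this curve
```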

  14. Seismic wave-speed structure beneath the metropolitan area of Japan based on adjoint tomography

    NASA Astrophysics Data System (ADS)

    Miyoshi, T.; Obayashi, M.; Tono, Y.; Tsuboi, S.

    2015-12-01

    We have obtained a three-dimensional (3D) model of the seismic wave-speed structure beneath the metropolitan area of Japan. We applied the spectral-element method (e.g. Komatitsch and Tromp 1999) and the adjoint method (Liu and Tromp 2006) to broadband seismograms in order to infer the 3D model. We used a travel-time tomography result (Matsubara and Obara 2011) as the initial 3D model and used broadband waveforms recorded at the NIED F-net stations. We selected 147 earthquakes with magnitudes larger than 4.5 from the F-net earthquake catalog and used their bandpass-filtered seismograms between 5 and 20 seconds with a high S/N ratio. The 3D model used for the forward and adjoint simulations covers a region of approximately 500 by 450 km horizontally and 120 km in depth. The minimum period of the theoretical waveforms was 4.35 seconds. For the adjoint inversion, we picked windows of the body waves from the observed and theoretical seismograms. We used the SPECFEM3D_Cartesian code (e.g. Peter et al. 2011) for the forward and adjoint simulations, which were run on the K computer at RIKEN. Each iteration required at least about 0.1 million CPU hours. The model parameters Vp and Vs were updated using the steepest descent method. We obtained the fourth iterative model (M04), which reproduced the observed waveforms better than the initial model. The shear wave speed of M04 is significantly smaller than that of the initial model at all depths. The compressional wave-speed model was not improved by the inversion because of small alpha kernel values. Acknowledgements: This research was partly supported by the MEXT Strategic Program for Innovative Research. We thank the NIED for providing seismological data.

  15. Towards Adjoint Finite Source Inversion: Application to the 2011 M9 Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Somala, S.; Galvez, P.; Inbal, A.; Ampuero, J. P.; Lapusta, N.

    2011-12-01

    The recent 2011 M9 Tohoku, Japan, earthquake was recorded by thousands of sensors at near-fault distances, including broadband, strong-motion and continuous GPS sensors. This event provides a unique opportunity to image the earthquake rupture process with high resolution. In order to enable the exploitation of the immense dataset available, orders of magnitude larger than in previous earthquakes, we are developing a scalable source inversion procedure based on time-reversal adjoint inversion. We adopt the linear least squares formulation of the source inversion problem, whose basic unknown is the spatio-temporal distribution of slip rate. We formulate an iterative conjugate gradient procedure to minimize the L2 norm of ground velocity residuals between data and synthetics. Each iteration involves one time-reversal (adjoint) and one forward simulation. Exploiting the time-reversal symmetry and the reciprocity principle of elastodynamics, the adjoint is computed by a wave propagation simulation in which time-reversed seismogram residuals are imposed as point forces at the simulated stations. The resulting fault tractions on a locked fault are the adjoint fields, related to the gradient of the misfit function with respect to the model. The simulations are performed with a recent extension of the SPECFEM3D spectral element code to dynamic and kinematic finite sources on unstructured meshes (Galvez et al., session S24 of this meeting). The non-planar geometry of the megathrust fault is accounted for in the spectral element mesh (generated with CUBIT). The subsurface structure is incorporated, on a coarse scale, using regional 3D velocity models, e.g. from the Japan Seismic Hazard Information Station (J-SHIS) website. We will report on the results of our initial efforts, focused on exploiting the continuous 1 Hz GPS signals recorded in Japan to understand the low frequency aspects of the rupture process of the 2011 Tohoku earthquake.
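
    The iterative scheme described above pairs one forward and one adjoint (time-reversed) simulation per iteration to minimize an L2 residual. The sketch below shows that structure with conjugate gradients on the normal equations (CGLS), using a plain matrix as a stand-in for the wave-propagation operator; the operator, dimensions and variable names are illustrative assumptions, not the SPECFEM3D setup.

```python
# Hedged sketch of adjoint-based iterative least-squares inversion:
# CGLS, where each iteration applies the forward operator once and the
# adjoint operator once. Matrices stand in for the wave simulations.
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((80, 40))        # stand-in forward operator (data x model)
m_true = rng.standard_normal(40)
d_obs = G @ m_true                       # synthetic "ground velocity" data

forward = lambda m: G @ m                # forward simulation (stand-in)
adjoint = lambda r: G.T @ r              # time-reversed (adjoint) simulation (stand-in)

m = np.zeros(40)
r = d_obs - forward(m)                   # data residual
s = adjoint(r)                           # gradient direction of 0.5*||r||^2
p, gamma = s.copy(), s @ s
for it in range(50):
    q = forward(p)
    alpha = gamma / (q @ q)
    m += alpha * p
    r -= alpha * q
    s = adjoint(r)
    gamma_new = s @ s
    if gamma_new < 1e-20:
        break
    p = s + (gamma_new / gamma) * p
    gamma = gamma_new
print("final residual norm:", np.linalg.norm(d_obs - forward(m)))
```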

  16. Some results on the dynamics and transition probabilities for non self-adjoint hamiltonians

    SciTech Connect

    Bagarello, F.

    2015-05-15

    We discuss systematically several possible inequivalent ways to describe the dynamics and the transition probabilities of a quantum system when its hamiltonian is not self-adjoint. In order to simplify the treatment, we mainly restrict our analysis to finite dimensional Hilbert spaces. In particular, we propose some experiments which could discriminate between the various possibilities considered in the paper. An example taken from the literature is discussed in detail.

  17. Inequivalence of unitarity and self-adjointness: An example in quantum cosmology

    SciTech Connect

    Lemos, N.A. )

    1990-02-15

    An example of a quantum cosmological model is presented whose dynamics is unitary although the time-dependent Hamiltonian operator fails to be self-adjoint (because it is not defined) for a particular value of t. The model is shown to be singular, and this disproves a conjecture put forward by Gotay and Demaret to the effect that unitary quantum dynamics in a "slow-time" gauge is always nonsingular.

  18. Toward a Comprehensive Carbon Budget for North America: Potential Applications of Adjoint Methods with Diverse Datasets

    NASA Technical Reports Server (NTRS)

    Andrews, A.

    2002-01-01

    A detailed mechanistic understanding of the sources and sinks of CO2 will be required to reliably predict future CO2 levels and climate. A commonly used technique for deriving information about CO2 exchange with surface reservoirs is to solve an "inverse problem," where CO2 observations are used with an atmospheric transport model to find the optimal distribution of sources and sinks. Synthesis inversion methods are powerful tools for addressing this question, but the results are disturbingly sensitive to the details of the calculation. Studies done using different atmospheric transport models and combinations of surface station data have produced substantially different distributions of surface fluxes. Adjoint methods are now being developed that will more effectively incorporate diverse datasets in estimates of surface fluxes of CO2. In an adjoint framework, it will be possible to combine CO2 concentration data from long-term surface monitoring stations with data from intensive field campaigns and with proposed future satellite observations. A major advantage of the adjoint approach is that meteorological and surface data, as well as data for other atmospheric constituents and pollutants, can be efficiently included in addition to observations of CO2 mixing ratios. This presentation will provide an overview of potentially useful datasets for carbon cycle research in general with an emphasis on planning for the North American Carbon Project. Areas of overlap with ongoing and proposed work on air quality/air pollution issues will be highlighted.

  19. Source attribution of particulate matter pollution over North China with the adjoint method

    NASA Astrophysics Data System (ADS)

    Zhang, Lin; Liu, Licheng; Zhao, Yuanhong; Gong, Sunling; Zhang, Xiaoye; Henze, Daven K.; Capps, Shannon L.; Fu, Tzung-May; Zhang, Qiang; Wang, Yuxuan

    2015-08-01

    We quantify the source contributions to surface PM2.5 (fine particulate matter) pollution over North China from January 2013 to 2015 using the GEOS-Chem chemical transport model and its adjoint with improved model horizontal resolution (1/4° × 5/16°) and aqueous-phase chemistry for sulfate production. The adjoint method attributes the PM2.5 pollution to emissions from different source sectors and chemical species at the model resolution. Wintertime surface PM2.5 over Beijing is contributed by emissions of organic carbon (27% of the total source contribution), anthropogenic fine dust (27%), and SO2 (14%), which are mainly from residential and industrial sources, followed by NH3 (13%) primarily from agricultural activities. About half of the Beijing pollution originates from sources outside of the city municipality. Adjoint analyses for other cities in North China all show significant regional pollution transport, supporting a joint regional control policy for effectively mitigating the PM2.5 air pollution.

  20. Adjoint-based deviational Monte Carlo methods for phonon transport calculations

    NASA Astrophysics Data System (ADS)

    Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.

    2015-06-01

    In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.

  1. Self-adjointness of the Fourier expansion of quantized interaction field Lagrangians

    PubMed Central

    Paneitz, S. M.; Segal, I. E.

    1983-01-01

    Regularity properties significantly stronger than were previously known are developed for four-dimensional non-linear conformally invariant quantized fields. The Fourier coefficients of the interaction Lagrangian in the interaction representation (i.e., evaluated after substitution of the associated quantized free field) are densely defined operators on the associated free field Hilbert space K. These Fourier coefficients are taken with respect to a natural basis in the universal cosmos M̃, to which such fields canonically and maximally extend from Minkowski space-time M₀, which is covariantly a submanifold of M̃. However, conformally invariant free fields over M₀ and M̃ are canonically identifiable. The kth Fourier coefficient of the interaction Lagrangian has a domain that includes all vectors in K to which arbitrary powers of the free hamiltonian in M̃ are applicable. Its adjoint in the rigorous Hilbert space sense is a₋ₖ in the case of a hermitian Lagrangian. In particular (k = 0), the leading term in the perturbative expansion of the S-matrix for a conformally invariant quantized field in M₀ is a self-adjoint operator. Thus, e.g., if ϕ(x) denotes the free massless neutral scalar field in M₀, then ∫M₀ :ϕ(x)⁴: d⁴x is a self-adjoint operator. No coupling constant renormalization is involved here. PMID:16593346

  2. Neural network training by integration of adjoint systems of equations forward in time

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad (Inventor); Barhen, Jacob (Inventor)

    1992-01-01

    A method and apparatus for supervised neural learning of time dependent trajectories exploits the concepts of adjoint operators to enable computation of the gradient of an objective functional with respect to the various parameters of the network architecture in a highly efficient manner. Specifically, it combines the advantage of dramatic reductions in computational complexity inherent in adjoint methods with the ability to solve two adjoint systems of equations together forward in time. Not only is a large amount of computation and storage saved, but the handling of real-time applications also becomes possible. The invention has been applied to two examples of representative complexity which have recently been analyzed in the open literature, demonstrating that a circular trajectory can be learned in approximately 200 iterations compared to the 12000 reported in the literature. A figure eight trajectory was achieved in under 500 iterations compared to 20000 previously required. The trajectories computed using the new method are much closer to the target trajectories than was reported in previous studies.

  3. Neural Network Training by Integration of Adjoint Systems of Equations Forward in Time

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad (Inventor); Barhen, Jacob (Inventor)

    1999-01-01

    A method and apparatus for supervised neural learning of time dependent trajectories exploits the concepts of adjoint operators to enable computation of the gradient of an objective functional with respect to the various parameters of the network architecture in a highly efficient manner. Specifically, it combines the advantage of dramatic reductions in computational complexity inherent in adjoint methods with the ability to solve two adjoint systems of equations together forward in time. Not only is a large amount of computation and storage saved, but the handling of real-time applications also becomes possible. The invention has been applied to two examples of representative complexity which have recently been analyzed in the open literature, demonstrating that a circular trajectory can be learned in approximately 200 iterations compared to the 12000 reported in the literature. A figure eight trajectory was achieved in under 500 iterations compared to 20000 previously required. The trajectories computed using the new method are much closer to the target trajectories than was reported in previous studies.

  4. Strong Adjoint Sensitivities in Tropical Eddy-Permitting Variational Data Assimilation

    NASA Astrophysics Data System (ADS)

    Cornuelle, B.; Hoteit, I.; Koehl, A.; Stammer, D.

    2007-05-01

    A variational data assimilation system has been implemented for the tropical Pacific Ocean for an eddy-permitting regional implementation of the MIT general circulation model (MITgcm). The model uses realistic topography with parameterizations for the surface boundary layer (KPP) and open boundaries at the south and north, as well as in the Indonesian throughflow. The adjoint method is used to adjust the model to observations in the tropical Pacific region using control parameters which include initial temperature and salinity; temperature, salinity and horizontal velocities at the open boundaries; and twice-daily surface fluxes of momentum, heat and freshwater. The model is constrained with most of the available datasets in the tropical Pacific, including climatologies, TAO, ARGO, XBT, and satellite SST and SSH data. The forward model runs exhibit strongly growing flow instabilities in the regions of high kinetic energy and low planetary potential vorticity gradient. The growth of these perturbations is limited by nonlinearities once they reach finite size, meaning that the high linear growth rates do not apply for long time periods. This poses a technical problem for adjoint-based assimilation, which depends on the linearized sensitivities to adjust the controls. Relative to the forward model runs, increased viscosity and diffusivity terms are used in the adjoint model runs to avoid large sensitivities related to the flow instabilities present in the high-resolution model. This talk will discuss some of the technical aspects and show results for a 1-year assimilation period.

  5. On the analytic solution of the Balitsky-Kovchegov evolution equation

    NASA Astrophysics Data System (ADS)

    Bondarenko, Sergey; Prygarin, Alex

    2015-06-01

    The study presents an analytic solution of the Balitsky-Kovchegov (BK) equation in a particular kinematics. The solution is written in the momentum space and based on the eigenfunctions of the truncated Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation in the gauge adjoint representation, which was used for calculation of the Regge (Mandelstam) cut contribution to the planar helicity amplitudes. We introduce an eigenfunction of the singlet BFKL equation constructed of the adjoint eigenfunction multiplied by a factor, which restores the dual conformal symmetry present in the adjoint and broken in the singlet BFKL equations. The proposed analytic BK solution correctly reproduces the initial condition and the high energy asymptotics of the scattering amplitude.

  6. Group Invariant Solutions and Conservation Laws of the Fornberg-Whitham Equation

    NASA Astrophysics Data System (ADS)

    Hashemi, Mir Sajjad; Haji-Badali, Ali; Vafadar, Parisa

    2014-09-01

    In this paper, we utilize the Lie symmetry analysis method to calculate new solutions for the Fornberg-Whitham equation (FWE). Applying a reduction method introduced by M. C. Nucci, exact solutions and first integrals of the reduced ordinary differential equations (ODEs) are considered. Nonlinear self-adjointness of the FWE is proved and conserved vectors are computed.

  7. Investigating Sensitivity to Saharan Dust in Tropical Cyclone Formation Using Nasa's Adjoint Model

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel

    2015-01-01

    As tropical cyclones develop from easterly waves coming off the coast of Africa, they interact with dust from the Sahara desert. There is a long-standing debate over whether this dust inhibits or advances the developing storm and how much influence it has. Dust can surround the storm and absorb incoming solar radiation, cooling the air below. As a result, an energy source for the system is potentially diminished, inhibiting growth of the storm. Alternatively, dust may interact with clouds through microphysical processes, for example by causing more moisture to condense, potentially increasing the strength. As a result of climate change, concentrations and amounts of dust in the atmosphere will likely change, so it is important to properly understand its effect on tropical storm formation. The adjoint of an atmospheric general circulation model provides a very powerful tool for investigating sensitivity to initial conditions. The National Aeronautics and Space Administration (NASA) has recently developed an adjoint version of the Goddard Earth Observing System version 5 (GEOS-5) dynamical core, convection scheme, cloud model and radiation schemes. This is extended so that the interaction between dust and radiation is also accounted for in the adjoint model. This provides a framework for examining the sensitivity to dust in the initial conditions. Specifically, the set-up allows for an investigation into the extent to which dust affects cyclone strength through absorption of radiation. In this work we investigate the validity of using an adjoint model for examining sensitivity to dust in hurricane formation. We present sensitivity results for a number of systems that developed during the Atlantic hurricane season of 2006. During this period there was a significant outbreak of Saharan dust, and it has been argued that this outbreak was responsible for the relatively calm season. This period was also covered by an extensive observation campaign. It is shown that the

  8. On-line monitoring the extract process of Fu-fang Shuanghua oral solution using near infrared spectroscopy and different PLS algorithms.

    PubMed

    Kang, Qian; Ru, Qingguo; Liu, Yan; Xu, Lingyan; Liu, Jia; Wang, Yifei; Zhang, Yewen; Li, Hui; Zhang, Qing; Wu, Qing

    2016-01-01

    An on-line near infrared (NIR) spectroscopy monitoring method with an appropriate multivariate calibration method was developed for the extraction process of Fu-fang Shuanghua oral solution (FSOS). On-line NIR spectra were collected through two fiber optic probes, which were designed to transmit NIR radiation by a 2 mm flange. Partial least squares (PLS), interval PLS (iPLS) and synergy interval PLS (siPLS) algorithms were compared for building the calibration regression models. During the extraction process, the feasibility of NIR spectroscopy was evaluated for determining the concentrations of chlorogenic acid (CA), total phenolic acids contents (TPC), total flavonoids contents (TFC) and soluble solid contents (SSC). High performance liquid chromatography (HPLC), ultraviolet spectrophotometry (UV) and loss-on-drying methods were employed as reference methods. Experimental results showed that the performance of the siPLS model is the best compared with PLS and iPLS. The calibration models for CA, TPC, TFC and SSC had high determination coefficients (R2) (0.9948, 0.9992, 0.9950 and 0.9832) and low root mean square errors of cross validation (RMSECV) (0.0113, 0.0341, 0.1787 and 1.2158), which indicate a good correlation between reference values and NIR predicted values. The overall results show that the on-line detection method could be feasible in real applications and would be of great value for monitoring the mixed decoction process of FSOS and other Chinese patent medicines. PMID:26241829
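
    As a rough illustration of the calibration step, the sketch below builds a PLS regression between spectra and a reference value and reports a cross-validated RMSE analogous to the RMSECV quoted above, using scikit-learn. The spectra, component count and fold count are synthetic placeholders; interval and synergy-interval PLS would additionally restrict the wavelength regions used.

```python
# Minimal sketch of a PLS calibration between NIR spectra and a reference
# assay, with cross-validated error. Spectra and target are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 60, 200
X = rng.standard_normal((n_samples, n_wavelengths))      # stand-in NIR spectra
y = 2.0 * X[:, 50] - 1.0 * X[:, 120] + 0.05 * rng.standard_normal(n_samples)

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()        # cross-validated predictions
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
r2 = 1.0 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"RMSECV = {rmsecv:.4f}, R^2 = {r2:.4f}")
```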

  9. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and

  10. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and
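
    The matrix-free optimizer described above is built around the augmented Lagrangian algorithm. The sketch below shows that algorithm family on a toy equality-constrained quadratic, with plain gradient descent as the inner solver; it is an illustration of the outer multiplier and penalty loop under assumed step sizes, not the thesis implementation.

```python
# Hedged sketch of an augmented-Lagrangian loop on a toy problem:
# minimize ||x - a||^2 subject to sum(x) = 1. The inner minimization is
# plain gradient descent; step sizes and penalty are illustrative.
import numpy as np

a = np.array([0.2, 0.7, 0.4])                 # toy data
f = lambda x: np.sum((x - a) ** 2)            # objective
c = lambda x: np.sum(x) - 1.0                 # equality constraint c(x) = 0

x = np.zeros(3)
lam, rho = 0.0, 10.0
for outer in range(20):
    # inner loop: gradient descent on L_A(x) = f(x) - lam*c(x) + (rho/2)*c(x)^2
    for inner in range(200):
        grad = 2.0 * (x - a) - lam * np.ones(3) + rho * c(x) * np.ones(3)
        x -= 0.02 * grad
    lam -= rho * c(x)                          # multiplier update
print("x =", np.round(x, 4), " constraint violation =", c(x))
```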

  11. New Effective Multithreaded Matching Algorithms

    SciTech Connect

    Manne, Fredrik; Halappanavar, Mahantesh

    2014-05-19

    Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of lesser quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
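
    For orientation, the sketch below implements the classical greedy 1/2-approximation for weighted matching (sort edges by weight and keep an edge whenever both endpoints are still free). It is the standard sequential baseline such algorithms are compared against, not the new algorithm of the abstract.

```python
# Baseline illustration of a 1/2-approximation for maximum weight matching:
# the classical greedy method. NOT the new algorithm described above.
def greedy_matching(edges):
    """edges: iterable of (u, v, weight). Returns a list of matched edges."""
    matched_vertices = set()
    matching = []
    for u, v, w in sorted(edges, key=lambda e: e[2], reverse=True):
        if u not in matched_vertices and v not in matched_vertices:
            matching.append((u, v, w))
            matched_vertices.update((u, v))
    return matching

edges = [("a", "b", 5.0), ("b", "c", 4.0), ("c", "d", 3.0), ("a", "d", 1.0)]
print(greedy_matching(edges))   # [('a', 'b', 5.0), ('c', 'd', 3.0)]
```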

  12. The forward and adjoint sensitivity methods of glacial isostatic adjustment: Existence, uniqueness and time-differencing scheme

    NASA Astrophysics Data System (ADS)

    Martinec, Zdenek; Sasgen, Ingo; Velimsky, Jakub

    2014-05-01

    In this study, two new methods for computing the sensitivity of the glacial isostatic adjustment (GIA) forward solution with respect to the Earth's mantle viscosity are presented: the forward sensitivity method (FSM) and the adjoint sensitivity method (ASM). These advanced formal methods are based on the time-domain, spectral finite-element method for modelling the GIA response of laterally heterogeneous earth models developed by Martinec (2000). There are many similarities between the forward method and the FSM and ASM for a general physical system. However, in the case of GIA, there are also important differences between the forward and sensitivity methods. The analysis carried out in this study results in the following findings. First, the forward method of GIA is unconditionally solvable, regardless of whether or not a combined ice and ocean-water load contains the first-degree spherical harmonics. This is also the case for the FSM; however, the ASM must in addition be supplemented by nine conditions on the misfit between the given GIA-related data and the forward model predictions to guarantee the existence of a solution. This constrains the definition of the data least-squares misfit. Second, the forward method of GIA implements an ocean load as a free boundary-value function over an ocean area with a free geometry. That is, an ocean load and the shape of the ocean, the so-called ocean function, are being sought, in addition to deformation and gravity-increment fields, by solving the forward method. The FSM and ASM also apply the adjoint ocean load as a free boundary-value function, but instead over an ocean area with the fixed geometry given by the ocean function determined by the forward method. In other words, a boundary-value problem for the forward method of GIA is free with respect to determining (i) the boundary-value data over an ocean area and (ii) the ocean function itself, while the boundary-value problems for the FSM and ASM are free only with respect to

  13. An adjoint view on flux consistency and strong wall boundary conditions to the Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Stück, Arthur

    2015-11-01

    Inconsistent discrete expressions in the boundary treatment of Navier-Stokes solvers and in the definition of force objective functionals can lead to discrete-adjoint boundary treatments that are not a valid representation of the boundary conditions to the corresponding adjoint partial differential equations. The underlying problem is studied for an elementary 1D advection-diffusion problem first using a node-centred finite-volume discretisation. The defect of the boundary operators in the inconsistently defined discrete-adjoint problem leads to oscillations and becomes evident with the additional insight of the continuous-adjoint approach. A homogenisation of the discretisations for the primal boundary treatment and the force objective functional yields second-order functional accuracy and eliminates the defect in the discrete-adjoint boundary treatment. Subsequently, the issue is studied for aerodynamic Reynolds-averaged Navier-Stokes problems in conjunction with a standard finite-volume discretisation on median-dual grids and a strong implementation of no-slip walls, found in many unstructured general-purpose flow solvers. Starting from a baseline discretisation of force objective functionals that is independent of the boundary treatment in the flow solver, two improved flux-consistent schemes are presented; based on either body-wall-defined or farfield-defined control volumes, they resolve the dual inconsistency. The behaviour of the schemes is investigated on a sequence of grids in 2D and 3D.

  14. Development and application of the WRFPLUS-Chem online chemistry adjoint and WRFDA-Chem assimilation system

    NASA Astrophysics Data System (ADS)

    Guerrette, J. J.; Henze, D. K.

    2015-02-01

    Here we present the online meteorology and chemistry adjoint and tangent linear model, WRFPLUS-Chem, which incorporates modules to treat boundary layer mixing, emission, aging, dry deposition, and advection of black carbon aerosol. We also develop land surface and surface layer adjoints to account for coupling between radiation and vertical mixing. Model performance is verified against finite difference derivative approximations. A second order checkpointing scheme is created to reduce computational costs and enable simulations longer than six hours. The adjoint is coupled to WRFDA-Chem, in order to conduct a sensitivity study of anthropogenic and biomass burning sources throughout California during the 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) field campaign. A cost function weighting scheme was devised to increase adjoint sensitivity robustness in future inverse modeling studies. Results of the sensitivity study show that, for this domain and time period, anthropogenic emissions are over predicted, while wildfire emissions are under predicted. We consider the diurnal variation in emission sensitivities to determine at what time sources should be scaled up or down. Also, adjoint sensitivities for two choices of land surface model indicate that emission inversion results would be sensitive to forward model configuration. The tools described here are the first step in conducting four-dimensional variational data assimilation in a coupled meteorology-chemistry model, which will potentially provide new constraints on aerosol precursor emissions and their distributions. Such analyses will be invaluable to assessments of particulate matter health and climate impacts.

  15. Technique for Calculating Solution Derivatives With Respect to Geometry Parameters in a CFD Code

    NASA Technical Reports Server (NTRS)

    Mathur, Sanjay

    2011-01-01

    A solution has been developed to the challenges of computation of derivatives with respect to geometry, which is not straightforward because these are not typically direct inputs to the computational fluid dynamics (CFD) solver. To overcome these issues, a procedure has been devised that can be used without having access to the mesh generator, while still being applicable to all types of meshes. The basic approach is inspired by the mesh motion algorithms used to deform the interior mesh nodes in a smooth manner when the surface nodes, for example, are in a fluid structure interaction problem. The general idea is to model the mesh edges and nodes as constituting a spring-mass system. Changes to boundary node locations are propagated to interior nodes by allowing them to assume their new equilibrium positions, for instance, one where the forces on each node are in balance. The main advantage of the technique is that it is independent of the volumetric mesh generator, and can be applied to structured, unstructured, single- and multi-block meshes. It essentially reduces the problem down to defining the surface mesh node derivatives with respect to the geometry parameters of interest. For analytical geometries, this is quite straightforward. In the more general case, one would need to be able to interrogate the underlying parametric CAD (computer aided design) model and to evaluate the derivatives either analytically, or by a finite difference technique. Because the technique is based on a partial differential equation (PDE), it is applicable not only to forward mode problems (where derivatives of all the output quantities are computed with respect to a single input), but it could also be extended to the adjoint problem, either by using an analytical adjoint of the PDE or a discrete analog.
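
    The spring-mass idea above reduces, for unit spring stiffnesses, to a graph-Laplacian solve: prescribe the boundary-node displacements and solve for interior displacements that put every interior node in force equilibrium. The sketch below does this for a small one-dimensional chain mesh; the mesh and stiffness choices are illustrative assumptions.

```python
# Hedged sketch of the spring-analogy mesh deformation idea: unit springs on
# mesh edges, prescribed boundary displacements, interior nodes solved for
# force equilibrium via the graph Laplacian. The chain mesh is illustrative.
import numpy as np

# 1D chain of 6 nodes: 0 - 1 - 2 - 3 - 4 - 5, with nodes 0 and 5 on the boundary.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
n = 6
L = np.zeros((n, n))                       # graph Laplacian (unit spring stiffness)
for i, j in edges:
    L[i, i] += 1.0; L[j, j] += 1.0
    L[i, j] -= 1.0; L[j, i] -= 1.0

boundary = [0, 5]
interior = [1, 2, 3, 4]
u_b = np.array([0.0, 1.0])                 # prescribed boundary displacements

# Equilibrium of the interior nodes: L_ii u_i + L_ib u_b = 0
L_ii = L[np.ix_(interior, interior)]
L_ib = L[np.ix_(interior, boundary)]
u_i = np.linalg.solve(L_ii, -L_ib @ u_b)
print(np.round(u_i, 3))                    # [0.2 0.4 0.6 0.8] -> smooth linear blend
```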

  16. Exact Solutions and Conservation Laws for a New Integrable Equation

    SciTech Connect

    Gandarias, M. L.; Bruzon, M. S.

    2010-09-30

    In this work we study a generalization of an integrable equation proposed by Qiao and Liu from the point of view of the theory of symmetry reductions in partial differential equations. Among the solutions we obtain a travelling wave with decaying velocity and a smooth soliton solution. We determine the subclass of these equations which are quasi-self-adjoint and we get a nontrivial conservation law.

  17. Mapping Emissions that Contribute to Air Pollution Using Adjoint Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Bastien, L. A. J.; Mcdonald, B. C.; Brown, N. J.; Harley, R.

    2014-12-01

    The adjoint of the Community Multiscale Air Quality model (CMAQ) is used to map emissions that contribute to air pollution at receptors of interest. Adjoint tools provide an efficient way to calculate the sensitivity of a model response to a large number of model inputs, a task that would require thousands of simulations using a more traditional forward sensitivity approach. Initial applications of this technique, demonstrated here, are to benzene and directly-emitted diesel particulate matter, for which atmospheric reactions are neglected. Emissions of these pollutants are strongly influenced by light-duty gasoline vehicles and heavy-duty diesel trucks, respectively. We study air quality responses in three receptor areas where populations have been identified as especially susceptible to, and adversely affected by air pollution. Population-weighted air basin-wide responses for each pollutant are also evaluated for the entire San Francisco Bay area. High-resolution (1 km horizontal grid) emission inventories have been developed for on-road motor vehicle emission sources, based on observed traffic count data. Emission estimates represent diurnal, day of week, and seasonal variations of on-road vehicle activity, with separate descriptions for gasoline and diesel sources. Emissions that contribute to air pollution at each receptor have been mapped in space and time using the adjoint method. Effects on air quality of both relative (multiplicative) and absolute (additive) perturbations to underlying emission inventories are analyzed. The contributions of local versus upwind sources to air quality in each receptor area are quantified, and weekday/weekend and seasonal variations in the influence of emissions from upwind areas are investigated. The contribution of local sources to the total air pollution burden within the receptor areas increases from about 40% in the summer to about 50% in the winter due to increased atmospheric stagnation. The effectiveness of control

  18. A user's manual for MASH 1.0: A Monte Carlo Adjoint Shielding Code System

    SciTech Connect

    Johnson, J.O.

    1992-03-01

    The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.

  19. Imaging the slab beneath central Chile using the Spectral Elements Method and adjoint techniques

    NASA Astrophysics Data System (ADS)

    Mercerat, E. D.; Nolet, G.; Marot, M.; Deshayes, P.; Monfret, T.

    2010-12-01

    This work focuses on imaging the subducting slab beneath Central Chile using novel inversion techniques based on the adjoint method and accurate wave propagation simulations using the Spectral Elements Method. The study area comprises the flat-slab portion of the Nazca plate between 29 S and 34 S subducting beneath South America. We will use a database of regional seismicity consisting of both crustal and deep slab earthquakes with magnitudes 3 < Mw < 6 recorded by different temporary and permanent seismological networks. Our main goal is to determine both the kinematics and the geometry of the subducting slab in order to help the geodynamical interpretation of such a particular active margin. The Spectral Elements Method (SPECFEM3D code) is used to generate the synthetic seismograms and will be applied for the iterative minimization based on adjoint techniques. The numerical mesh is 600 km x 600 km in horizontal coordinates and 220 km in depth. As a first step, we are faced with well-known issues concerning mesh generation (resolution, quality, absorbing boundary conditions). In particular, we must evaluate the influence of free surface topography, as well as the MOHO and other geological interfaces, on the synthetic seismograms. The initial velocity model, from a previous travel-time tomography study, is linearly interpolated to the Gauss-Lobatto-Legendre grid. The comparison of the first forward simulations (minimum period of 4 seconds) with observed data validates the initial velocity model of the study area, although many features not reproduced by the initial model have already been identified. The next step will concentrate on the comparison between finite-frequency kernels calculated by travel-time methods and those based on adjoint methods, in order to highlight advantages and disadvantages in terms of resolution and accuracy, but also computational cost.

  20. Approximations of Strongly Continuous Families of Unbounded Self-Adjoint Operators

    NASA Astrophysics Data System (ADS)

    Ben-Artzi, Jonathan; Holding, Thomas

    2016-07-01

    The problem of approximating the discrete spectra of families of self-adjoint operators that are merely strongly continuous is addressed. It is well-known that the spectrum need not vary continuously (as a set) under strong perturbations. However, it is shown that under an additional compactness assumption the spectrum does vary continuously, and a family of symmetric finite-dimensional approximations is constructed. An important feature of these approximations is that they are valid for the entire family uniformly. An application of this result to the study of plasma instabilities is illustrated.

  1. Self-adjointness and the Casimir effect with confined quantized spinor matter

    NASA Astrophysics Data System (ADS)

    Sitenko, Yurii A.

    2016-01-01

    A generalization of the MIT bag boundary condition for spinor matter is proposed, based on the requirement that the Dirac hamiltonian operator be self-adjoint. The influence of a background magnetic field on the vacuum of charged spinor matter confined between two parallel material plates is studied. Employing the most general set of boundary conditions at the plates in the case of a uniform magnetic field directed orthogonally to the plates, we find the pressure from the vacuum onto the plates. In physically plausible situations, the Casimir effect is shown to be repulsive, independently of the choice of boundary conditions and of the distance between the plates.

  2. Approximations of Strongly Continuous Families of Unbounded Self-Adjoint Operators

    NASA Astrophysics Data System (ADS)

    Ben-Artzi, Jonathan; Holding, Thomas

    2016-05-01

    The problem of approximating the discrete spectra of families of self-adjoint operators that are merely strongly continuous is addressed. It is well-known that the spectrum need not vary continuously (as a set) under strong perturbations. However, it is shown that under an additional compactness assumption the spectrum does vary continuously, and a family of symmetric finite-dimensional approximations is constructed. An important feature of these approximations is that they are valid for the entire family uniformly. An application of this result to the study of plasma instabilities is illustrated.

  3. Pricing of American style options with an adjoint process correction method

    NASA Astrophysics Data System (ADS)

    Jaekel, Uwe

    2005-07-01

    Pricing of American options is a more complicated problem than pricing of European options. In this work a formula is derived that allows the computation of the early exercise premium, i.e. the price difference between these two option types, in terms of an adjoint process evolving in the reversed time direction of the original process that determines the evolution of the European price. We show how this equation can be utilised to improve option price estimates from numerical schemes like finite difference or Monte Carlo methods.
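
    For reference, the quantity being approximated, the early exercise premium, can be computed directly as the American-minus-European price difference on a binomial tree. The sketch below does this with a plain CRR tree for a put option; it is a standard reference computation, not the adjoint-process correction method of the paper, and all contract parameters are illustrative.

```python
# Reference illustration of the early exercise premium: American minus
# European put price on a CRR binomial tree. Parameters are illustrative.
import numpy as np

def crr_put(S0, K, r, sigma, T, steps, american):
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)          # risk-neutral up probability
    disc = np.exp(-r * dt)
    S = S0 * u ** np.arange(steps, -1, -1) * d ** np.arange(0, steps + 1)
    V = np.maximum(K - S, 0.0)                  # payoff at maturity
    for n in range(steps - 1, -1, -1):
        S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
        V = disc * (p * V[:-1] + (1.0 - p) * V[1:])   # backward induction
        if american:
            V = np.maximum(V, K - S)            # early exercise check
    return V[0]

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
eu = crr_put(S0, K, r, sigma, T, 500, american=False)
am = crr_put(S0, K, r, sigma, T, 500, american=True)
print(f"European {eu:.4f}, American {am:.4f}, early exercise premium {am - eu:.4f}")
```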

  4. Ginsparg-Wilson relation on a fuzzy 2-sphere for adjoint matter

    SciTech Connect

    Aoki, Hajime

    2010-10-15

    We formulate a Ginsparg-Wilson relation on a fuzzy 2-sphere for matter in the adjoint representation of the gauge group. Because of the Ginsparg-Wilson relation, an index theorem is satisfied. Our formulation is applicable to topologically nontrivial configurations as monopoles. It gives a solid basis for obtaining chiral fermions, which are an important ingredient of the standard model, from matrix model formulations of the superstring theory, such as the IIB matrix model, by considering topological configurations in the extra dimensions. We finally discuss whether this mechanism really works.

  5. Adjoint transport calculations for sensitivity analysis of the Hiroshima air-over-ground environment

    SciTech Connect

    Broadhead, B.L.; Cacuci, D.G.; Pace, J.V. III

    1984-01-01

    A major effort within the US Dose Reassessment Program is aimed at recalculating the transport of initial nuclear radiation in an air-over-ground environment. This paper is the first report of results from adjoint calculations in the Hiroshima air-over-ground environment. The calculations use a Hiroshima/Nagasaki multi-element ground, ENDF/B-V nuclear data, one-dimensional ANISN flux weighting for neutron and gamma cross sections, a source obtained by two-dimensional hydrodynamic and three-dimensional transport calculations, and best-estimate atmospheric conditions from Japanese sources. 7 references, 2 figures.

  6. Infrared fixed point in SU(2) gauge theory with adjoint fermions

    SciTech Connect

    DeGrand, Thomas; Shamir, Yigal; Svetitsky, Benjamin

    2011-04-01

    We apply Schroedinger-functional techniques to the SU(2) lattice gauge theory with N_f = 2 flavors of fermions in the adjoint representation. Our use of hypercubic smearing enables us to work at stronger couplings than did previous studies, before encountering a critical point and a bulk phase boundary. Measurement of the running coupling constant gives evidence of an infrared fixed point g* where 1/g*^2 = 0.20(4)(3). At the fixed point, we find a mass anomalous dimension γ_m(g*) = 0.31(6).

  7. An analytic solution for the orbital perturbations of the Venus Radar Mapper due to gravitational harmonics

    NASA Astrophysics Data System (ADS)

    Vijayaraghavan, A.

    1984-08-01

    Hill's variational equations are solved analytically for the orbital perturbations of a spacecraft nominally in an elliptic orbit around a non-spherical body. The rotation of the central planet about its spin-axis is not considered in the analysis. The perturbations are restricted to the planetary gravitational harmonics only. An extremely simple algorithm is derived to transform the spherical harmonic potentials to the orbital coordinate system, and the resulting accelerations are shown to be simply trigonometric functions of the true anomaly. With the principal matrix solution for the differential equations of the adjoint system given in closed form, the orthogonality of the trigonometric functions makes it possible to obtain an analytic solution for the non-homogeneous problem, at intervals of 2 pi in true anomaly. The solution for orbital perturbations can be extended over several revolutions by applying well-known results from Floquet's theory. The technique is demonstrated with results presented on the spacecraft periapsis altitude for the forthcoming Venus Radar Mapper Mission.

  8. An analytic solution for the orbital perturbations of the Venus Radar Mapper due to gravitational harmonics

    NASA Technical Reports Server (NTRS)

    Vijayaraghavan, A.

    1984-01-01

    Hill's variational equations are solved analytically for the orbital perturbations of a spacecraft nominally in an elliptic orbit around a non-spherical body. The rotation of the central planet about its spin-axis is not considered in the analysis. The perturbations are restricted to the planetary gravitational harmonics only. An extremely simple algorithm is derived to transform the spherical harmonic potentials to the orbital coordinate system, and the resulting accelerations are shown to be simply trigonometric functions of the true anomaly. With the principal matrix solution for the differential equations of the adjoint system given in closed form, the orthogonality of the trigonometric functions makes it possible to obtain an analytic solution for the non-homogeneous problem, at intervals of 2 pi in true anomaly. The solution for orbital perturbations can be extended over several revolutions by applying well-known results from Floquet's theory. The technique is demonstrated with results presented on the spacecraft periapsis altitude for the forthcoming Venus Radar Mapper Mission.

  9. Neutron noise calculations in a hexagonal geometry and comparison with analytical solutions

    SciTech Connect

    Tran, H. N.; Demaziere, C.

    2012-07-01

    This paper presents the development of a neutronic and kinetic solver for hexagonal geometries. The tool is developed based on the diffusion theory with multi-energy groups and multi-groups of delayed neutron precursors allowing the solutions of forward and adjoint problems of static and dynamic states, and is applicable to both thermal and fast systems with hexagonal geometries. In the dynamic problems, the small stationary fluctuations of macroscopic cross sections are considered as noise sources, and then the induced first order noise is calculated fully in the frequency domain. Numerical algorithms for solving the static and noise equations are implemented with a spatial discretization based on finite differences and a power iterative solution. A coarse mesh finite difference method has been adopted for speeding up the convergence. Since no other numerical tool could calculate frequency-dependent noise in hexagonal geometry, validation calculations have been performed and benchmarked to analytical solutions based on a 2-D homogeneous system with two-energy groups and one-group of delayed neutron precursor, in which point-like perturbations of thermal absorption cross section at central and non-central positions are considered as noise sources. (authors)
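
    The static solver above relies on a power iteration. The sketch below shows that iteration generically on a small symmetric matrix standing in for the discretized multi-group operator; the matrix and tolerance are illustrative placeholders.

```python
# Generic sketch of a power iteration for the dominant eigenvalue, with a
# small symmetric matrix standing in for the discretized operator.
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

phi = np.ones(3)                       # initial flux-like guess
k = 1.0
for it in range(200):
    psi = A @ phi
    k_new = np.linalg.norm(psi) / np.linalg.norm(phi)
    phi = psi / np.linalg.norm(psi)    # normalize the iterate
    if abs(k_new - k) < 1e-12:
        break
    k = k_new
print(f"dominant eigenvalue ≈ {k:.6f}")   # compare: np.linalg.eigvalsh(A).max()
```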

  10. An Adjoint-based Method for the Inversion of the Juno and Cassini Gravity Measurements into Wind Fields

    NASA Astrophysics Data System (ADS)

    Galanti, Eli; Kaspi, Yohai

    2016-04-01

    During 2016-17, the Juno and Cassini spacecraft will both perform close eccentric orbits of Jupiter and Saturn, respectively, obtaining high-precision gravity measurements for these planets. These data will be used to estimate the depth of the observed surface flows on these planets. All models to date, relating the winds to the gravity field, have been in the forward direction, thus only allowing the calculation of the gravity field from given wind models. However, there is a need to do the inverse problem since the new observations will be of the gravity field. Here, an inverse dynamical model is developed to relate the expected measurable gravity field, to perturbations of the density and wind fields, and therefore to the observed cloud-level winds. In order to invert the gravity field into the 3D circulation, an adjoint model is constructed for the dynamical model, thus allowing backward integration. This tool is used for the examination of various scenarios, simulating cases in which the depth of the wind depends on latitude. We show that it is possible to use the gravity measurements to derive the depth of the winds, both on Jupiter and Saturn, also taking into account measurement errors. Calculating the solution uncertainties, we show that the wind depth can be determined more precisely in the low-to-mid-latitudes. In addition, the gravitational moments are found to be particularly sensitive to flows at the equatorial intermediate depths. Therefore, we expect that if deep winds exist on these planets they will have a measurable signature by Juno and Cassini.

  11. Calculation of the response of cylindrical targets to collimated beams of particles using one-dimensional adjoint transport techniques. [LMFBR

    SciTech Connect

    Dupree, S. A.

    1980-06-01

    The use of adjoint techniques to determine the interaction of externally incident collimated beams of particles with cylindrical targets is a convenient means of examining a class of problems important in radiation transport studies. The theory relevant to such applications is derived, and a simple example involving a fissioning target is discussed. Results from both discrete ordinates and Monte Carlo transport-code calculations are presented, and comparisons are made with results obtained from forward calculations. The accuracy of the discrete ordinates adjoint results depends on the order of angular quadrature used in the calculation. Reasonable accuracy can be expected using EQN quadratures of order S16 or higher.

  12. Hilbert-Schmidt Inner Product for an Adjoint Representation of the Quantum Algebra Ŭq(su2)

    NASA Astrophysics Data System (ADS)

    Fakhri, Hossein; Nouraddini, Mojtaba

    2015-10-01

    The Jordan-Schwinger realization of the quantum algebra Ŭq(su2) is used to construct the irreducible submodule Tl of the adjoint representation in two different bases. The two bases are known as types of irreducible tensor operators of rank l which are related to each other by the involution map. The bases of the submodules are equipped with q-analogues of the Hilbert-Schmidt inner product and it is also shown that the adjoint representation corresponding to one of those submodules is a *-representation.

  13. Application of adjoint Monte Carlo to accelerate simulations of mono-directional beams in treatment planning for Boron Neutron Capture Therapy

    SciTech Connect

    Nievaart, V. A.; Legrady, D.; Moss, R. L.; Kloosterman, J. L.; Hagen, T. H. J. J. van der; Dam, H. van

    2007-04-15

    This paper deals with the application of adjoint transport theory in order to optimize Monte Carlo based radiotherapy treatment planning. The technique is applied to Boron Neutron Capture Therapy where most often mixed beams of neutrons and gammas are involved. In normal forward Monte Carlo simulations the particles start at a source and lose energy as they travel towards the region of interest, i.e., the designated point of detection. Conversely, with adjoint Monte Carlo simulations, the so-called adjoint particles start at the region of interest and gain energy as they travel towards the source where they are detected. In this respect, the particles travel backwards and the real source and real detector become the adjoint detector and adjoint source, respectively. At the adjoint detector, an adjoint function is obtained with which numerically the same result, e.g., dose or flux in the tumor, can be derived as with forward Monte Carlo. In many cases, the adjoint method is more efficient and therefore much quicker when, for example, the response in the tumor or organ at risk for many locations and orientations of the treatment beam around the patient is required. However, a problem occurs when the treatment beam is mono-directional, as the probability of detecting adjoint Monte Carlo particles traversing the beam exit (detector plane in adjoint mode) in the negative direction of the incident beam is zero. This problem is addressed here and solved first with the use of next event estimators and second with the application of a Legendre expansion technique of the angular adjoint function. In the first approach, adjoint particles are tracked deterministically through a tube to an (adjoint) point detector far away from the geometric model. The adjoint particles will traverse the disk-shaped entrance of this tube (the beam exit in the actual geometry) perpendicularly. This method is slow whenever many events are involved that are not contributing to the point

  14. Improving the Fit of a Land-Surface Model to Data Using its Adjoint

    NASA Astrophysics Data System (ADS)

    Raoult, Nina; Jupp, Tim; Cox, Peter; Luke, Catherine

    2016-04-01

    Land-surface models (LSMs) are crucial components of the Earth System Models (ESMs) which are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. In this study, JULES is automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. We present an introduction to the adJULES system and demonstrate its ability to improve the model-data fit using eddy covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the 5 Plant Functional Types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES over 90% of the FLUXNET sites used in the study. These reductions in error are shown and compared to reductions found due to site-specific optimisations. Finally, we show that calculation of the 2nd derivative of JULES allows us to produce posterior probability density functions of the parameters and how knowledge of parameter values is constrained by observations.
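    The step from a second derivative to parameter uncertainty can be illustrated generically with a Laplace (Gaussian) approximation: at the cost-function minimum, the inverse Hessian is taken as the posterior covariance. The Python sketch below uses a hypothetical two-parameter cost function and a finite-difference Hessian standing in for the differentiated model; it illustrates the idea only and is not part of adJULES.

      import numpy as np

      def cost(p):
          # hypothetical 2-parameter misfit function with a minimum at (1.0, 0.5)
          return 0.5 * (4.0 * (p[0] - 1.0) ** 2
                        + 1.0 * (p[1] - 0.5) ** 2
                        + 1.0 * (p[0] - 1.0) * (p[1] - 0.5))

      def hessian_fd(f, p, h=1e-5):
          # central finite-difference Hessian, standing in for an AD second derivative
          n = len(p)
          H = np.zeros((n, n))
          for i in range(n):
              for j in range(n):
                  pp = p.copy(); pp[i] += h; pp[j] += h; fpp = f(pp)
                  pm = p.copy(); pm[i] += h; pm[j] -= h; fpm = f(pm)
                  mp = p.copy(); mp[i] -= h; mp[j] += h; fmp = f(mp)
                  mm = p.copy(); mm[i] -= h; mm[j] -= h; fmm = f(mm)
                  H[i, j] = (fpp - fpm - fmp + fmm) / (4.0 * h * h)
          return H

      p_opt = np.array([1.0, 0.5])          # assume the optimiser has converged here
      H = hessian_fd(cost, p_opt)
      posterior_cov = np.linalg.inv(H)      # Laplace approximation: covariance = H^-1
      print("approximate posterior covariance:\n", posterior_cov)

    The diagonal of the resulting covariance gives the marginal parameter uncertainties implied by the observations.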

  15. Improving the Fit of a Land-Surface Model to Data Using its Adjoint

    NASA Astrophysics Data System (ADS)

    Raoult, N.; Jupp, T. E.; Cox, P. M.; Luke, C.

    2015-12-01

    Land-surface models (LSMs) are of growing importance in the world of climate prediction. They are crucial components of larger Earth system models that are aimed at understanding the effects of land surface processes on the global carbon cycle. The Joint UK Land Environment Simulator (JULES) is the land-surface model used by the UK Met Office. It has been automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or 'adjoint', of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. adJULES presents an opportunity to confront JULES with many different observations, and make improvements to the model parameterisation. In the newest version of adJULES, multiple sites can be used in the calibration, giving a generic set of parameters that can be generalised over plant functional types. We present an introduction to the adJULES system and its applications to data from a variety of flux tower sites. We show that calculation of the 2nd derivative of JULES allows us to produce posterior probability density functions of the parameters and how knowledge of parameter values is constrained by observations.

  16. Source attribution of PM2.5 pollution over North China using the adjoint method

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Liu, L.; Zhao, Y.; Gong, S.; Henze, D. K.

    2014-12-01

    Conventional methods for source attribution of air pollution are based on measurement statistics (such as Positive Matrix Factorization) or sensitivity simulations with a chemical transport model (CTM). These methods generally ignore the nonlinear chemistry associated with pollution formation or require unaffordable computational time. Here we use the adjoint of the GEOS-Chem CTM at 0.25 x 0.3125 degree resolution to examine the sources contributing to the PM2.5 pollution over North China in winter 2013. We improved the model sulfate simulation by implementing the aqueous-phase oxidation of S(IV) by nitrogen dioxide. The adjoint results provide detailed source information at the model's underlying grid resolution, including source types and sectors. We show that PM2.5 pollution over Beijing and Baoding (Hebei) in winter was driven largely by large-scale residential and industrial burning and by ammonia (NH3) emissions from agricultural activities. Nearly half of the pollution was transported from outside of the city domains and accumulated over 2-3 days. We also show that, under current emission conditions, the PM2.5 concentrations over North China are more sensitive to NH3 emissions than to NOx and SO2 emissions.
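    The computational appeal of the adjoint approach for source attribution can be seen in a linear surrogate: with a steady-state balance A x = e (x concentrations, e emissions) and a receptor metric J = w.x, a single adjoint solve A^T lam = w yields dJ/de_i = lam_i for every source cell at once, whereas the perturbation approach needs one forward solve per source. The sketch below is generic and hypothetical, not GEOS-Chem.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 300                                   # number of model grid cells (toy)
      A = np.eye(n) * 1.2 + rng.standard_normal((n, n)) * 0.01   # transport/loss operator
      e = rng.random(n)                         # emissions in each cell
      w = np.zeros(n); w[:10] = 0.1             # receptor: average over 10 "city" cells

      x = np.linalg.solve(A, e)                 # forward: concentrations
      J = w @ x                                 # receptor PM2.5-like metric

      lam = np.linalg.solve(A.T, w)             # single adjoint solve
      # check one entry against a brute-force perturbation of source cell 42
      de = 1e-6
      e2 = e.copy(); e2[42] += de
      J2 = w @ np.linalg.solve(A, e2)
      print("adjoint dJ/de_42 :", lam[42])
      print("finite difference:", (J2 - J) / de)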

  17. Forward and adjoint simulations of seismic wave propagation on fully unstructured hexahedral meshes

    NASA Astrophysics Data System (ADS)

    Peter, Daniel; Komatitsch, Dimitri; Luo, Yang; Martin, Roland; Le Goff, Nicolas; Casarotti, Emanuele; Le Loher, Pieyre; Magnoni, Federica; Liu, Qinya; Blitz, Céline; Nissen-Meyer, Tarje; Basini, Piero; Tromp, Jeroen

    2011-08-01

    We present forward and adjoint spectral-element simulations of coupled acoustic and (an)elastic seismic wave propagation on fully unstructured hexahedral meshes. Simulations benefit from recent advances in hexahedral meshing, load balancing and software optimization. Meshing may be accomplished using a mesh generation tool kit such as CUBIT, and load balancing is facilitated by graph partitioning based on the SCOTCH library. Coupling between fluid and solid regions is incorporated in a straightforward fashion using domain decomposition. Topography, bathymetry and Moho undulations may be readily included in the mesh, and physical dispersion and attenuation associated with anelasticity are accounted for using a series of standard linear solids. Finite-frequency Fréchet derivatives are calculated using adjoint methods in both fluid and solid domains. The software is benchmarked for a layercake model. We present various examples of fully unstructured meshes, snapshots of wavefields and finite-frequency kernels generated by Version 2.0 'Sesame' of our widely used open source spectral-element package SPECFEM3D.
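    The finite-frequency (Fréchet) kernel computation referred to above can be stated compactly: a sensitivity kernel at each grid point is assembled by time-correlating the forward wavefield with the time-reversed adjoint wavefield. The Python sketch below shows only that assembly step, with random arrays standing in for stored wavefields; it is schematic and is not SPECFEM3D code.

      import numpy as np

      nx, nt, dt = 200, 1000, 1.0e-3
      rng = np.random.default_rng(0)
      u_fwd = rng.standard_normal((nt, nx))   # forward displacement field u(x, t)
      u_adj = rng.standard_normal((nt, nx))   # adjoint field driven by the data misfit

      # one common kernel form correlates time derivatives of the two fields;
      # a simple central difference stands in for the solver's time scheme
      dudt_fwd = np.gradient(u_fwd, dt, axis=0)
      dudt_adj = np.gradient(u_adj, dt, axis=0)

      # K(x) = time integral of (adjoint field, reversed in time) x (forward field)
      kernel = np.sum(dudt_adj[::-1, :] * dudt_fwd, axis=0) * dt
      print("kernel shape:", kernel.shape)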

  18. Sources and processes contributing to nitrogen deposition: an adjoint model analysis applied to biodiversity hotspots worldwide.

    PubMed

    Paulot, Fabien; Jacob, Daniel J; Henze, Daven K

    2013-04-01

    Anthropogenic enrichment of reactive nitrogen (Nr) deposition is an ecological concern. We use the adjoint of a global 3-D chemical transport model (GEOS-Chem) to identify the sources and processes that control Nr deposition to an ensemble of biodiversity hotspots worldwide and two U.S. national parks (Cuyahoga and Rocky Mountain). We find that anthropogenic sources dominate deposition at all continental sites and are mainly regional (less than 1000 km) in origin. In Hawaii, Nr supply is controlled by oceanic emissions of ammonia (50%) and anthropogenic sources (50%), with important contributions from Asia and North America. Nr deposition is also sensitive in complicated ways to emissions of SO2, which affect Nr gas-aerosol partitioning, and of volatile organic compounds (VOCs), which affect oxidant concentrations and produce organic nitrate reservoirs. For example, VOC emissions generally inhibit deposition of locally emitted NOx but significantly increase Nr deposition downwind. However, in polluted boreal regions, anthropogenic VOC emissions can promote Nr deposition in winter. Uncertainties in chemical rate constants for OH + NO2 and NO2 hydrolysis also complicate the determination of source-receptor relationships for polluted sites in winter. Application of our adjoint sensitivities to the representative concentration pathways (RCPs) scenarios for 2010-2050 indicates that future decreases in Nr deposition due to NOx emission controls will be offset by concurrent increases in ammonia emissions from agriculture. PMID:23458244

  19. Adjoint SU(5) GUT model with T7 flavor symmetry

    NASA Astrophysics Data System (ADS)

    Arbeláez, Carolina; Cárcamo Hernández, A. E.; Kovalenko, Sergey; Schmidt, Iván

    2015-12-01

    We propose an adjoint SU(5) GUT model with a T7 family symmetry and an extra Z2⊗Z3⊗Z4⊗Z4'⊗Z12 discrete group that successfully describes the prevailing Standard Model fermion mass and mixing pattern. The observed hierarchy of the charged fermion masses and the quark mixing angles arises from the Z3⊗Z4⊗Z12 symmetry breaking, which occurs near the GUT scale. The light active neutrino masses are generated by type-I and type-III seesaw mechanisms mediated by the fermionic SU(5) singlet and the adjoint 24-plet. The model predicts the effective Majorana neutrino mass parameter of neutrinoless double beta decay to be mββ = 4 and 50 meV for the normal and the inverted neutrino spectra, respectively. We construct several benchmark scenarios, which lead to SU(5) gauge coupling unification and are compatible with the known phenomenological constraints originating from the lightness of neutrinos, proton decay, dark matter, etc. These scenarios contain TeV-scale colored fields, which could give rise to a visible signal or be stringently constrained at the LHC.

  20. Seeking Energy System Pathways to Reduce Ozone Damage to Ecosystems through Adjoint-based Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Capps, S. L.; Pinder, R. W.; Loughlin, D. H.; Bash, J. O.; Turner, M. D.; Henze, D. K.; Percell, P.; Zhao, S.; Russell, M. G.; Hakami, A.

    2014-12-01

    Tropospheric ozone (O3) affects the productivity of ecosystems in addition to degrading human health. Concentrations of this pollutant are significantly influenced by precursor gas emissions, many of which emanate from energy production and use processes. Energy system optimization models could inform policy decisions that are intended to reduce these harmful effects if the contribution of precursor gas emissions to human health and ecosystem degradation could be elucidated. Nevertheless, determining the degree to which precursor gas emissions harm ecosystems and human health is challenging because of the photochemical production of ozone and the distinct mechanisms by which ozone causes harm to different crops, tree species, and humans. Here, the adjoint of a regional chemical transport model is employed to efficiently calculate the relative influences of ozone precursor gas emissions on ecosystem and human health degradation, which informs an energy system optimization. Specifically, for the summer of 2007 the Community Multiscale Air Quality (CMAQ) model adjoint is used to calculate the location- and sector-specific influences of precursor gas emissions on potential productivity losses for the major crops and sensitive tree species as well as human mortality attributable to chronic ozone exposure in the continental U.S. The atmospheric concentrations are evaluated with 12-km horizontal resolution with crop production and timber biomass data gridded similarly. These location-specific factors inform the energy production and use technologies selected in the MARKet ALlocation (MARKAL) model.

  1. Eguchi-Kawai reduction with one flavor of adjoint Möbius fermion

    NASA Astrophysics Data System (ADS)

    Cunningham, William; Giedt, Joel

    2016-02-01

    We study the single site lattice gauge theory of SU(N) coupled to one Dirac flavor of fermion in the adjoint representation. We utilize Möbius fermions for this study, and accelerate the calculation with graphics processing units. Our Monte Carlo simulations indicate that for sufficiently large inverse 't Hooft coupling b = 1/(g²N), and for N ≤ 10 the distribution of traced Polyakov loops has "fingers" that extend from the origin. However, in the massless case the distribution of eigenvalues of the untraced Polyakov loop becomes uniform at large N, indicating preservation of center symmetry in the thermodynamic limit. By contrast, for a large mass and large b, the distribution is highly nonuniform in the same limit, indicating spontaneous center symmetry breaking. These conclusions are confirmed by comparing to the quenched case, as well as by examining another observable based on the average value of the modulus of the traced Polyakov loop. The result of this investigation is that with massless adjoint fermions center symmetry is stabilized and the Eguchi-Kawai reduction should be successful; this is in agreement with most other studies.

  2. A new zonation algorithm with parameter estimation using hydraulic head and subsidence observations.

    PubMed

    Zhang, Meijing; Burbey, Thomas J; Nunes, Vitor Dos Santos; Borggaard, Jeff

    2014-01-01

    Parameter estimation codes such as UCODE_2005 are becoming well-known tools in groundwater modeling investigations. These programs estimate important parameter values such as transmissivity (T) and aquifer storage values (Sa) from known observations of hydraulic head, flow, or other physical quantities. One drawback inherent in these codes is that the parameter zones must be specified by the user. However, such knowledge is often unavailable even if a detailed hydrogeological description is available. To overcome this deficiency, we present a discrete adjoint algorithm for identifying suitable zonations from hydraulic head and subsidence measurements, which are highly sensitive to both elastic (Sske) and inelastic (Sskv) skeletal specific storage coefficients. With the advent of interferometric synthetic aperture radar (InSAR), distributed spatial and temporal subsidence measurements can be obtained. A synthetic conceptual model containing seven transmissivity zones, one aquifer storage zone and three interbed zones for elastic and inelastic storage coefficients was developed to simulate drawdown and subsidence in an aquifer interbedded with clay that exhibits delayed drainage. Simulated delayed land subsidence and groundwater head data are assumed to be the observed measurements, to which the discrete adjoint algorithm is applied to create approximate spatial zonations of T, Sske, and Sskv. UCODE-2005 is then used to obtain the final optimal parameter values. Calibration results indicate that the estimated zonations calculated from the discrete adjoint algorithm closely approximate the true parameter zonations. This automation algorithm reduces the bias established by the initial distribution of zones and provides a robust parameter zonation distribution. PMID:23909919

  3. Coupling of MASH-MORSE Adjoint Leakages with Space- and Time-Dependent Plume Radiation Sources

    SciTech Connect

    Slater, C.O.

    2001-04-20

    In the past, forward-adjoint coupling procedures in air-over-ground geometry have typically involved forward fluences arising from a point source a great distance from a target or vehicle system. Various processing codes were used to create localized forward fluence files that could be used to couple with the MASH-MORSE adjoint leakages. In recent years, radiation plumes that result from reactor accidents or similar incidents have been modeled by others, and the source space and energy distributions as a function of time have been calculated. Additionally, with the point kernel method, they were able to calculate in relatively quick fashion free-field radiation doses for targets moving within the fluence field or for stationary targets within the field, the time dependence for the latter case coming from the changes in position, shape, source strength, and spectra of the plume with time. The work described herein applies the plume source to the MASH-MORSE coupling procedure. The plume source replaces the point source for generating the forward fluences that are folded with MASH-MORSE adjoint leakages. Two types of source calculations are described. The first is a ''rigorous'' calculation using the TORT code and a spatially large air-over-ground geometry. For each time step desired, directional fluences are calculated and are saved over a predetermined region that encompasses a structure within which it is desired to calculate dose rates. Processing codes then create the surface fluences (which may include contributions from radiation sources that deposit on the roof or plateout) that will be coupled with the MASH-MORSE adjoint leakages. Unlike the point kernel calculations of the free-field dose rates, the TORT calculations in practice include the effects of ground scatter on dose rates and directional fluences, although the effects may be underestimated or overestimated because of the use of necessarily coarse mesh and quadrature in order to reduce computational

  4. Analytical solution for the advection-dispersion transport equation in layered media

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The advection-dispersion transport equation with first-order decay was solved analytically for multi-layered media using the classic integral transform technique (CITT). The solution procedure used an associated non-self-adjoint advection-diffusion eigenvalue problem that had the same form and coef...

  5. An improved method for estimation of Jupiter's gravity field using the Juno expected measurements, a trajectory estimation model, and an adjoint based thermal wind model

    NASA Astrophysics Data System (ADS)

    Galanti, E.; Finocchiaro, S.; Kaspi, Y.; Iess, L.

    2013-12-01

    The upcoming high precision measurements of the Juno flybys around Jupiter have the potential of improving the estimation of Jupiter's gravity field. The analysis of the Juno Doppler data will provide a very accurate reconstruction of spatial gravity variations, but these measurements will be over a limited latitudinal and longitudinal range. In order to deduce the full gravity field of Jupiter, additional information needs to be incorporated into the analysis, especially with regard to the Jovian wind structure and its depth at high latitudes. In this work we propose a new iterative method for the estimation of the Jupiter gravity field, using the expected Juno measurements, a trajectory estimation model, and an adjoint-based inverse thermal wind model. Beginning with an artificial gravitational field, the trajectory estimation model together with an optimization procedure is used to obtain an initial solution of the gravitational moments. As upper limit constraints, the model applies the gravity harmonics obtained from a thermal wind model in which the winds are assumed to penetrate barotropically along the direction of the spin axis. The solution from the trajectory model is then used as an initial guess for the thermal wind model, and together with an adjoint optimization method, the optimal penetration depth of the winds is computed. As a final step, the gravity harmonics solution from the thermal wind model is given back to the trajectory model, along with an uncertainty estimate, to be used as constraints for a new calculation of the gravity field. We test this method for several cases, some with zonal harmonics only, and some with the full gravity field including longitudinal variations that include the tesseral harmonics as well. The results show that using this method some of the gravitational moments are fitted better to the 'observed' ones, mainly due to the fact that the thermal wind model takes into consideration the wind structure and depth

  6. New optimality criteria methods - Forcing uniqueness of the adjoint strains by corner-rounding at constraint intersections

    NASA Technical Reports Server (NTRS)

    Rozvany, G. I. N.; Sobieszczanski-Sobieski, J.

    1992-01-01

    In new, iterative continuum-based optimality criteria (COC) methods, the strain in the adjoint structure becomes non-unique if the number of active local constraints is greater than the number of design variables for an element. This brief note discusses the use of smooth envelope functions (SEFs) in economically overcoming the computational problems caused by the above non-uniqueness.
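    One common smooth envelope function is the Kreisselmeier-Steinhauser aggregate, which replaces the non-smooth maximum over active constraints with a differentiable surrogate so that gradients (and hence adjoint strains) stay unique; the specific SEF used in the note may differ, so the Python sketch below is only the generic idea.

      import numpy as np

      def ks_envelope(g, rho=50.0):
          """Smooth approximation of max(g_i); larger rho -> sharper corner."""
          g = np.asarray(g, dtype=float)
          gmax = g.max()                      # shift for numerical stability
          return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

      # two intersecting constraint values as a function of a design variable x
      x = 0.3
      g1 = x - 0.25       # hypothetical constraint 1
      g2 = 0.5 - 2.0 * x  # hypothetical constraint 2
      print("hard max :", max(g1, g2))
      print("smoothed :", ks_envelope([g1, g2]))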

  7. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most
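    For readers unfamiliar with the iteration named above, the classical Laguerre method for a polynomial root is sketched below; the lambert2 solver applies a modified Laguerre step to Sun's transcendental time equation, which is not reproduced here, so the snippet is only a generic illustration.

      import numpy as np

      def laguerre_root(coeffs, x0, tol=1e-12, max_iter=100):
          """Return one root of the polynomial with coefficients in descending order."""
          p = np.polynomial.Polynomial(coeffs[::-1])   # constructor wants ascending order
          dp, d2p = p.deriv(), p.deriv(2)
          n = len(coeffs) - 1
          x = complex(x0)
          for _ in range(max_iter):
              px = p(x)
              if abs(px) < tol:
                  break
              G = dp(x) / px
              H = G * G - d2p(x) / px
              root = np.sqrt((n - 1) * (n * H - G * G))
              denom = G + root if abs(G + root) >= abs(G - root) else G - root
              x = x - n / denom
          return x

      # example: x^3 - 1 = 0 starting from 0.5 converges to the real root 1
      print(laguerre_root([1.0, 0.0, 0.0, -1.0], 0.5))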

  8. Full waveform seismic tomography of the Vrancea region using the adjoint method

    NASA Astrophysics Data System (ADS)

    Baron, J.; Danecek, P.; Morelli, A.; Tondi, R.

    2013-12-01

    The Vrancea region, at the south-eastern bend of the Carpathian Mountains, Romania, represents one of the most distinctive seismically active zones in Europe. Besides some shallow crustal seismicity spread across the whole Romanian territory, Vrancea is the site of intense seismicity, with a cluster of intermediate-depth foci in a narrow NE-SW trending volume below 60 km depth. The occurrence of strong earthquakes in the past has raised questions about the nature of this deep intra-continental seismicity and increased the interest in the geodynamics of this earthquake-prone area. The central issue for seismic risk assessment is whether this singular seismogenic volume is geodynamically coupled to the crust. Large-scale mantle seismic tomographic studies have revealed the presence of a narrow, almost vertical, high-velocity body in the upper mantle. So far, two main geodynamical models have been proposed for the region: (1) a subduction-related process, and (2) more recently, a delamination process. High-resolution seismic tomography could help to reveal more details in the subcrustal structural models and to constrain the properties of the Vrancea Seismogenic Zone. Previous efforts have relied on classical ray-theoretical travel-time tomography to model data from local permanent or temporary instruments. Recent developments in computational seismology as well as the availability of parallel computing now allow modelling of the entire seismogram in a consistent way. This enables us to potentially retrieve more information out of seismic waveforms and to keep the modelling more uniform. In this work we want to assess the information gain that can be obtained using an adjoint-based inversion scheme combined with full 3D waveform modelling, with respect to ray-theory-based tomography for the Vrancea region. The study is done with a dataset of local earthquakes from the broadband data of the CALIXTO 1999 experiment. This dataset is

  9. Full-3D waveform tomography of Southern California crustal structure by using earthquake recordings and ambient noise Green's functions based on adjoint and scattering-integral methods

    NASA Astrophysics Data System (ADS)

    Lee, E.; Chen, P.; Jordan, T. H.; Maechling, P. J.; Denolle, M.; Beroza, G. C.

    2013-12-01

    We apply a unified methodology for seismic waveform analysis and inversions to Southern California. To automate the waveform selection processes, we developed a semi-automatic seismic waveform analysis algorithm for full-wave earthquake source parameters and tomographic inversions. The algorithm is based on continuous wavelet transforms, a topological watershed method, and a set of user-adjustable criteria to select usable waveform windows for full-wave inversions. The algorithm takes advantage of time-frequency representations of seismograms and is able to separate seismic phases in both time and frequency domains. The selected wave packet pairs between observed and synthetic waveforms are then used for extracting frequency-dependent phase and amplitude misfit measurements, which are used in our seismic source and structural inversions. Our full-wave waveform tomography uses the 3D SCEC Community Velocity Model Version 4.0 as the initial model and a staggered-grid finite-difference code to simulate seismic wave propagation. The sensitivity (Fréchet) kernels are calculated based on the scattering-integral and adjoint methods to iteratively improve the model. We use both earthquake recordings and ambient noise Green's functions (stacks of station-to-station correlations of ambient seismic noise) in our full-3D waveform tomographic inversions. To reduce errors of earthquake sources, the epicenters and source parameters of earthquakes used in our tomographic inversion are inverted by our full-wave CMT inversion method. Our current model shows many features that relate to the geological structures at shallow depth and contrasting velocity values across faults. The velocity perturbations can be up to 45% with respect to the initial model in some regions and relate to some structures that do not exist in the initial model, such as the southern Great Valley. The earthquake waveform misfits are reduced by over 70% and the ambient noise Green's function group velocity delay time variance

  10. Broad-search algorithms for the spacecraft trajectory design of Callisto-Ganymede-Io triple flyby sequences from 2024 to 2040, Part II: Lambert pathfinding and trajectory solutions

    NASA Astrophysics Data System (ADS)

    Lynam, Alfred E.

    2014-01-01

    Triple-satellite-aided capture employs gravity-assist flybys of three of the Galilean moons of Jupiter in order to decrease the amount of ΔV required to capture a spacecraft into Jupiter orbit. Similarly, triple flybys can be used within a Jupiter satellite tour to rapidly modify the orbital parameters of a Jovicentric orbit, or to increase the number of science flybys. In order to provide a nearly comprehensive search of the solution space of Callisto-Ganymede-Io triple flybys from 2024 to 2040, a third-order, Chebyshev's method variant of the p-iteration solution to Lambert's problem is paired with a second-order, Newton-Raphson method, time of flight iteration solution to the V∞-matching problem. The iterative solutions of these problems provide the orbital parameters of the Callisto-Ganymede transfer, the Ganymede flyby, and the Ganymede-Io transfer, but the characteristics of the Callisto and Io flybys are unconstrained, so they are permitted to vary in order to produce an even larger number of trajectory solutions. The vast amount of solution data is searched to find the best triple-satellite-aided capture window between 2024 and 2040.

  11. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
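    A minimal sketch of the basic loop described above (selection, crossover, mutation), with a hypothetical bit-counting fitness function; it is not the software tool discussed in the report.

      import random

      def fitness(bits):                       # maximise the number of 1-bits
          return sum(bits)

      def evolve(pop_size=30, n_bits=20, generations=50, p_mut=0.02):
          pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
          for _ in range(generations):
              def pick():                      # tournament selection (fittest survives)
                  a, b = random.sample(pop, 2)
                  return a if fitness(a) >= fitness(b) else b
              children = []
              while len(children) < pop_size:
                  p1, p2 = pick(), pick()
                  cut = random.randint(1, n_bits - 1)          # one-point crossover
                  child = p1[:cut] + p2[cut:]
                  child = [1 - g if random.random() < p_mut else g for g in child]
                  children.append(child)
              pop = children
          return max(pop, key=fitness)

      best = evolve()
      print("best individual fitness:", fitness(best))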

  12. Communication: A reduced-space algorithm for the solution of the complex linear response equations used in coupled cluster damped response theory

    NASA Astrophysics Data System (ADS)

    Kauczor, Joanna; Norman, Patrick; Christiansen, Ove; Coriani, Sonia

    2013-12-01

    We present a reduced-space algorithm for solving the complex (damped) linear response equations required to compute the complex linear response function for the hierarchy of methods: coupled cluster singles, coupled cluster singles and iterative approximate doubles, and coupled cluster singles and doubles. The solver is the keystone element for the development of damped coupled cluster response methods for linear and nonlinear effects in resonant frequency regions.

  13. Adjoint-Based Methodology for Time-Dependent Optimal Control (AMTOC)

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail; Diskin, Boris; Nishikawa, Hiroaki

    2012-01-01

    During the five years of this project, the AMTOC team developed an adjoint-based methodology for design and optimization of complex time-dependent flows, implemented AMTOC in a testbed environment, directly assisted in the implementation of this methodology in NASA's state-of-the-art unstructured CFD code FUN3D, and successfully demonstrated applications of this methodology to large-scale optimization of several supersonic and other aerodynamic systems, such as fighter jet, subsonic aircraft, rotorcraft, high-lift, wind-turbine, and flapping-wing configurations. In the course of this project, the AMTOC team has published 13 refereed journal articles, 21 refereed conference papers, and 2 NIA reports. The AMTOC team presented the results of this research at 36 international and national conferences, meetings and seminars, including the International Conference on CFD and numerous AIAA conferences and meetings. Selected publications that include the major results of the AMTOC project are enclosed in this report.

  14. Discrete Adjoint-Based Design for Unsteady Turbulent Flows On Dynamic Overset Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Diskin, Boris

    2012-01-01

    A discrete adjoint-based design methodology for unsteady turbulent flows on three-dimensional dynamic overset unstructured grids is formulated, implemented, and verified. The methodology supports both compressible and incompressible flows and is amenable to massively parallel computing environments. The approach provides a general framework for performing highly efficient and discretely consistent sensitivity analysis for problems involving arbitrary combinations of overset unstructured grids which may be static, undergoing rigid or deforming motions, or any combination thereof. General parent-child motions are also accommodated, and the accuracy of the implementation is established using an independent verification based on a complex-variable approach. The methodology is used to demonstrate aerodynamic optimizations of a wind turbine geometry, a biologically-inspired flapping wing, and a complex helicopter configuration subject to trimming constraints. The objective function for each problem is successfully reduced and all specified constraints are satisfied.
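    The complex-variable verification mentioned above is commonly realized as the complex-step derivative test, in which an adjoint-computed sensitivity is checked against Im[f(x + ih)]/h, which is accurate to machine precision because it involves no subtractive cancellation. The objective below is a hypothetical stand-in, not the FUN3D-based functional.

      import numpy as np

      def f(x):
          # hypothetical smooth objective written so it also accepts complex input
          return np.sin(x[0]) * np.exp(x[1]) + x[0] * x[1] ** 2

      def complex_step_grad(f, x, h=1e-30):
          g = np.zeros(len(x))
          for i in range(len(x)):
              xc = np.array(x, dtype=complex)
              xc[i] += 1j * h
              g[i] = f(xc).imag / h
          return g

      x0 = np.array([0.7, -0.3])
      analytic = np.array([np.cos(x0[0]) * np.exp(x0[1]) + x0[1] ** 2,          # df/dx0
                           np.sin(x0[0]) * np.exp(x0[1]) + 2 * x0[0] * x0[1]])  # df/dx1
      print("complex-step:", complex_step_grad(f, x0))
      print("analytic    :", analytic)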

  15. Supersonic wing and wing-body shape optimization using an adjoint formulation

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony

    1995-01-01

    This paper describes the implementation of optimization techniques based on control theory for wing and wing-body design of supersonic configurations. The work represents an extension of our earlier research in which control theory is used to devise a design procedure that significantly reduces the computational cost by employing an adjoint equation. In previous studies it was shown that control theory could be used to devise transonic design methods for airfoils and wings in which the shape and the surrounding body-fitted mesh are both generated analytically, and the control is the mapping function. The method has also been implemented for both transonic potential flows and transonic flows governed by the Euler equations using an alternative formulation which employs numerically generated grids, so that it can treat more general configurations. Here results are presented for three-dimensional design cases subject to supersonic flows governed by the Euler equations.

  16. A User's Manual for MASH V1.5 - A Monte Carlo Adjoint Shielding Code System

    SciTech Connect

    C. O. Slater; J. M. Barnes; J. O. Johnson; J.D. Drischler

    1998-10-01

    The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. The current version, MASH v1.5, is the successor to the original MASH v1.0 code system initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem.
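    The folding step described above reduces to a weighted sum of the forward surface fluence against the adjoint dose importance over coupling-surface segments, energy groups, and directions. A schematic Python sketch with hypothetical array shapes (not the MASH coupling code):

      import numpy as np

      n_seg, n_grp, n_dir = 50, 20, 8
      rng = np.random.default_rng(1)
      fluence    = rng.random((n_seg, n_grp, n_dir))   # forward surface fluence
      importance = rng.random((n_seg, n_grp, n_dir))   # adjoint leakage / dose importance
      area       = np.full(n_seg, 0.25)                # segment areas (hypothetical)

      # dose response = sum over segments, groups and directions of
      #                 fluence * importance, weighted by segment area
      dose = np.einsum('s,sgd,sgd->', area, fluence, importance)
      print("coupled dose response:", dose)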

  17. Adjoint-tomography Inversion of the Small-scale Surface Sedimentary Structures: Key Methodological Aspects

    NASA Astrophysics Data System (ADS)

    Kubina, Filip; Moczo, Peter; Kristek, Jozef; Michlik, Filip

    2016-04-01

    Adjoint tomography has proven an invaluable tool in exploring Earth's structure at regional and global scales. It has not been widely applied for improving models of local surface sedimentary structures (LSSS) in numerical predictions of earthquake ground motion (EGM). Anomalous earthquake motions and corresponding damage in earthquakes are often due to site effects in local surface sedimentary basins. Because the majority of the world's population is located atop surface sedimentary basins, it is important to predict EGM at these sites during future earthquakes. A major lesson learned from dedicated international tests focused on numerical prediction of EGM in LSSS is that it is hard to reach better agreement between data and synthetics without an improved structural model. If earthquake records are available for sites atop an LSSS, it is natural to consider them for improving the structural model. Computationally efficient adjoint tomography might be a proper tool. A seismic wavefield in an LSSS is very complex due to diffractions, conversions, interference and often also resonant phenomena. In shallow basins, the first arrivals are not suitable for inversion due to almost vertical incidence and thus insufficient vertical resolution. The later wavefield consists mostly of local surface waves, often without separated wave groups. Consequently, computed kernels are complicated and not suitable for inversion without pre-processing. The spatial complexity of a kernel can be dramatic in a typical situation with a relatively low number of sources (local earthquakes) and surface receivers. This complexity can be simplified by directionally-dependent smoothing and spatially-dependent normalization that condition reasonable convergence. A multiscale approach seems necessary given the usual difference between the available and true models. Interestingly, only a successive inversion of the μ and λ elastic moduli, with different scale sequences, leads to good results.

  18. On the formulation of sea-ice models. Part 2: Lessons from multi-year adjoint sea-ice export sensitivities through the Canadian Arctic Archipelago

    NASA Astrophysics Data System (ADS)

    Heimbach, Patrick; Menemenlis, Dimitris; Losch, Martin; Campin, Jean-Michel; Hill, Chris

    The adjoint of an ocean general circulation model is at the heart of the ocean state estimation system of the Estimating the Circulation and Climate of the Ocean (ECCO) project. As part of an ongoing effort to extend ECCO to a coupled ocean/sea-ice estimation system, a dynamic and thermodynamic sea-ice model has been developed for the Massachusetts Institute of Technology general circulation model (MITgcm). One key requirement is the ability to generate, by means of automatic differentiation (AD), tangent linear (TLM) and adjoint (ADM) model code for the coupled MITgcm ocean/sea-ice system. This second part of a two-part paper describes aspects of the adjoint model. The adjoint ocean and sea-ice model is used to calculate transient sensitivities of solid (ice and snow) freshwater export through Lancaster Sound in the Canadian Arctic Archipelago (CAA). The adjoint state provides a complementary view of the dynamics. In particular, the transient, multi-year sensitivity patterns reflect dominant pathways and propagation timescales through the CAA as resolved by the model, thus shedding light on causal relationships, in the model, across the Archipelago. The computational cost of inferring such causal relationships from forward model diagnostics alone would be prohibitive. The role of the exact model trajectory around which the adjoint is calculated (and therefore of the exactness of the adjoint) is exposed through calculations using free-slip vs no-slip lateral boundary conditions. Effective ice thickness, sea surface temperature, and precipitation sensitivities, are discussed in detail as examples of the coupled sea-ice/ocean and atmospheric forcing control space. To test the reliability of the adjoint, finite-difference perturbation experiments were performed for each of these elements and the cost perturbations were compared to those "predicted" by the adjoint. Overall, remarkable qualitative and quantitative agreement is found. In particular, the adjoint correctly

  19. A hybridizable discontinuous Galerkin method combined to a Schwarz algorithm for the solution of 3d time-harmonic Maxwell's equation

    NASA Astrophysics Data System (ADS)

    Li, Liang; Lanteri, Stéphane; Perrussel, Ronan

    2014-01-01

    A Schwarz-type domain decomposition method is presented for the solution of the system of 3d time-harmonic Maxwell's equations. We introduce a hybridizable discontinuous Galerkin (HDG) scheme for the discretization of the problem based on a tetrahedrization of the computational domain. The discrete system of the HDG method on each subdomain is solved by an optimized sparse direct (LU factorization) solver. The solution of the interface system in the domain decomposition framework is accelerated by a Krylov subspace method. The formulation and the implementation of the resulting DD-HDG (Domain Decomposed-Hybridizable Discontinuous Galerkin) method are detailed. Numerical results show that the resulting DD-HDG solution strategy has an optimal convergence rate and can save both CPU time and memory cost compared to a classical upwind flux-based DD-DG (Domain Decomposed-Discontinuous Galerkin) approach.
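    The Schwarz idea itself can be shown on a much simpler problem: two overlapping subdomains are solved alternately, each using the latest values of its neighbour on the overlap, until the pieces agree. The sketch below does this for a 1D Poisson problem with direct subdomain solves standing in for the per-subdomain HDG/LU solver; the Maxwell and HDG machinery of the paper is not reproduced.

      import numpy as np

      n = 101                      # grid points on [0, 1], u(0) = u(1) = 0
      x = np.linspace(0.0, 1.0, n)
      h = x[1] - x[0]
      f = np.ones(n)               # right-hand side of -u'' = f

      def solve_subdomain(u, lo, hi):
          """Solve -u'' = f on interior points lo+1..hi-1 with Dirichlet data u[lo], u[hi]."""
          m = hi - lo - 1
          A = (np.diag(np.full(m, 2.0)) - np.diag(np.ones(m - 1), 1)
               - np.diag(np.ones(m - 1), -1)) / h**2
          b = f[lo + 1:hi].copy()
          b[0]  += u[lo] / h**2
          b[-1] += u[hi] / h**2
          u[lo + 1:hi] = np.linalg.solve(A, b)

      u = np.zeros(n)
      left, right = (0, 60), (40, n - 1)       # two overlapping subdomains
      for _ in range(30):                      # alternating Schwarz sweeps
          solve_subdomain(u, *left)            # uses the latest value of u at x[60]
          solve_subdomain(u, *right)           # uses the latest value of u at x[40]

      exact = 0.5 * x * (1.0 - x)              # analytic solution of -u'' = 1
      print("max error after Schwarz iterations:", np.abs(u - exact).max())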

  20. Klein-Gordon Solutions on Non-Globally Hyperbolic Standard Static Spacetimes

    NASA Astrophysics Data System (ADS)

    Bullock, David M. A.

    2012-11-01

    We construct a class of solutions to the Cauchy problem of the Klein-Gordon equation on any standard static spacetime. Specifically, we have constructed solutions to the Cauchy problem based on any self-adjoint extension (satisfying a technical condition: "acceptability") of (some variant of) the Laplace-Beltrami operator defined on test functions in an L2-space of the static hypersurface. The proof of the existence of this construction completes and extends work originally done by Wald. Further results include: the uniqueness of these solutions; their support properties; the construction of the space of solutions and the energy and symplectic form on this space; an analysis of certain symmetries on the space of solutions; and various examples of this method, including the construction of a non-bounded below acceptable self-adjoint extension generating the dynamics.

  1. A New Method for Computing Three-Dimensional Capture Fraction in Heterogeneous Regional Systems using the MODFLOW Adjoint Code

    NASA Astrophysics Data System (ADS)

    Clemo, T. M.; Ramarao, B.; Kelly, V. A.; Lavenue, M.

    2011-12-01

    Capture is a measure of the impact of groundwater pumping upon groundwater and surface water systems. The computation of capture through analytical or numerical methods has been the subject of articles in the literature for several decades (Bredehoeft et al., 1982). Most recently, Leake et al. (2010) described a systematic way to produce capture maps in three-dimensional systems using a numerical perturbation approach in which capture from streams was computed using unit rate pumping at many locations within a MODFLOW model. The Leake et al. (2010) method advances the current state of computing capture. A limitation stems from the computational demand required by the perturbation approach, wherein days or weeks of computational time might be required to obtain a robust measure of capture. In this paper, we present an efficient method to compute capture in three-dimensional systems based upon adjoint states. The efficiency of the adjoint method will enable uncertainty analysis to be conducted on capture calculations. The USGS and INTERA have collaborated to extend the MODFLOW Adjoint code (Clemo, 2007) to include stream-aquifer interaction and have applied it to one of the examples used in Leake et al. (2010), the San Pedro Basin MODFLOW model. With five layers and 140,800 grid blocks per layer, the San Pedro Basin model provided an ideal example data set to compare the capture computed from the perturbation and the adjoint methods. The capture fraction map produced from the perturbation method for the San Pedro Basin model required significant computational time to compute, and therefore the locations for the pumping wells were limited to 1530 locations in layer 4. The 1530 direct simulations of capture required approximately 76 CPU hours. Had capture been simulated in each grid block in each layer, as is done in the adjoint method, the CPU time would have been on the order of 4 years. The MODFLOW-Adjoint produced the capture fraction map of the San Pedro Basin model

  2. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.

  3. A Conservative Galerkin-Characteristics Algorithm Combined with a Relaxation Scheme for Two Regions Nonlinear Solute Transport Problem in Porous Media

    SciTech Connect

    Mahmood, Mohammed Shuker

    2007-12-26

    A numerical scheme is presented based on the modified method of characteristics with adjusted advection, combined with a relaxation scheme, for solving the strongly nonlinear degenerate convection-diffusion problem that arises in contaminant transport in porous media with dual porosity. A series of computational experiments and comparisons with other solutions is carried out to illustrate the rate of convergence, the behavior, and the capability of the scheme.

  4. Adjoint Sensitivity Analysis of Radiative Transfer Equation: Temperature and Gas Mixing Ratio Weighting Functions for Remote Sensing of Scattering Atmospheres in Thermal IR

    NASA Technical Reports Server (NTRS)

    Ustinov, E.

    1999-01-01

    Sensitivity analysis based on the adjoint equation of radiative transfer is applied to the case of atmospheric remote sensing in the thermal spectral region with non-negligible atmospheric scattering.

  5. Development of an adjoint model of GRAPES-CUACE and its application in tracking influential haze source areas in north China

    NASA Astrophysics Data System (ADS)

    An, Xing Qin; Xian Zhai, Shi; Jin, Min; Gong, Sunling; Wang, Yu

    2016-06-01

    The aerosol adjoint module of the atmospheric chemical modeling system GRAPES-CUACE (Global-Regional Assimilation and Prediction System coupled with the CMA Unified Atmospheric Chemistry Environment) is constructed based on the adjoint theory. This includes the development and validation of the tangent linear and the adjoint models of the three parts involved in the GRAPES-CUACE aerosol module: CAM (Canadian Aerosol Module), interface programs that connect GRAPES and CUACE, and the aerosol transport processes that are embedded in GRAPES. Meanwhile, strict mathematical validation schemes for the tangent linear and the adjoint models are implemented for all input variables. After each part of the module and the assembled tangent linear and adjoint models is verified, the adjoint model of the GRAPES-CUACE aerosol is developed and used in a black carbon (BC) receptor-source sensitivity analysis to track influential haze source areas in north China. The sensitivity of the average BC concentration over Beijing at the highest concentration time point (referred to as the Objective Function) is calculated with respect to the BC amount emitted over the Beijing-Tianjin-Hebei region. Four types of regions are selected based on the administrative division or the sensitivity coefficient distribution. The adjoint sensitivity results are then used to quantify the effect of reducing the emission sources at different time intervals over different regions. It is indicated that the more influential regions (with relatively larger sensitivity coefficients) do not necessarily correspond to the administrative regions. Instead, the influence per unit area of the sensitivity selected regions is greater. Therefore, controlling the most influential regions during critical time intervals based on the results of the adjoint sensitivity analysis is much more efficient than controlling administrative regions during an experimental time period.
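    A standard form of the validation mentioned above is the adjoint consistency ("dot-product") test: for a tangent linear operator L and its adjoint L*, the identity <L dx, dy> = <dx, L* dy> must hold to machine precision for arbitrary perturbations. The sketch below uses a random matrix as a placeholder operator; it illustrates the test only and is not the GRAPES-CUACE code.

      import numpy as np

      rng = np.random.default_rng(42)
      n_in, n_out = 80, 60
      L = rng.standard_normal((n_out, n_in))     # tangent linear model (placeholder)

      def tlm(dx):                               # forward (tangent linear) sweep
          return L @ dx

      def adm(dy):                               # adjoint sweep
          return L.T @ dy

      dx = rng.standard_normal(n_in)
      dy = rng.standard_normal(n_out)
      lhs = np.dot(tlm(dx), dy)                  # <L dx, dy>
      rhs = np.dot(dx, adm(dy))                  # <dx, L* dy>
      print("relative mismatch:", abs(lhs - rhs) / abs(lhs))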

  6. A new Green's function Monte Carlo algorithm for the solution of the two-dimensional nonlinear Poisson-Boltzmann equation: Application to the modeling of the communication breakdown problem in space vehicles during re-entry

    NASA Astrophysics Data System (ADS)

    Chatterjee, Kausik; Roadcap, John R.; Singh, Surendra

    2014-11-01

    The objective of this paper is the exposition of a recently-developed, novel Green's function Monte Carlo (GFMC) algorithm for the solution of nonlinear partial differential equations and its application to the modeling of the plasma sheath region around a cylindrical conducting object, carrying a potential and moving at low speeds through an otherwise neutral medium. The plasma sheath is modeled in equilibrium through the GFMC solution of the nonlinear Poisson-Boltzmann (NPB) equation. The traditional Monte Carlo based approaches for the solution of nonlinear equations are iterative in nature, involving branching stochastic processes which are used to calculate linear functionals of the solution of nonlinear integral equations. Over the last several years, one of the authors of this paper, K. Chatterjee has been developing a philosophically-different approach, where the linearization of the equation of interest is not required and hence there is no need for iteration and the simulation of branching processes. Instead, an approximate expression for the Green's function is obtained using perturbation theory, which is used to formulate the random walk equations within the problem sub-domains where the random walker makes its walks. However, as a trade-off, the dimensions of these sub-domains have to be restricted by the limitations imposed by perturbation theory. The greatest advantage of this approach is the ease and simplicity of parallelization stemming from the lack of the need for iteration, as a result of which the parallelization procedure is identical to the parallelization procedure for the GFMC solution of a linear problem. The application area of interest is in the modeling of the communication breakdown problem during a space vehicle's re-entry into the atmosphere. However, additional application areas are being explored in the modeling of electromagnetic propagation through the atmosphere/ionosphere in UHF/GPS applications.
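    For orientation, the flavour of a Green's-function-based random walk can be conveyed with the classical walk-on-spheres estimator for the linear Laplace equation, sketched below; the authors' GFMC scheme for the nonlinear Poisson-Boltzmann equation, with perturbative Green's functions on restricted sub-domains, is substantially more involved and is not reproduced.

      import math, random

      R = 1.0                                    # domain: unit disk, Laplace's equation
      def boundary_value(x, y):                  # hypothetical Dirichlet data on the circle
          return x * x - y * y                   # harmonic, so the interior solution is known

      def walk_on_spheres(x, y, eps=1e-4, n_walks=20000):
          total = 0.0
          for _ in range(n_walks):
              px, py = x, y
              while True:
                  d = R - math.hypot(px, py)     # distance to the boundary
                  if d < eps:
                      total += boundary_value(px, py)
                      break
                  theta = random.uniform(0.0, 2.0 * math.pi)
                  px += d * math.cos(theta)      # jump to a uniform point on the
                  py += d * math.sin(theta)      # largest circle inside the domain
          return total / n_walks

      print("WoS estimate at (0.3, 0.2):", walk_on_spheres(0.3, 0.2))
      print("exact harmonic value      :", 0.3**2 - 0.2**2)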

  7. A new Green's function Monte Carlo algorithm for the solution of the two-dimensional nonlinear Poisson–Boltzmann equation: Application to the modeling of the communication breakdown problem in space vehicles during re-entry

    SciTech Connect

    Chatterjee, Kausik; Roadcap, John R.; Singh, Surendra

    2014-11-01

    The objective of this paper is the exposition of a recently-developed, novel Green's function Monte Carlo (GFMC) algorithm for the solution of nonlinear partial differential equations and its application to the modeling of the plasma sheath region around a cylindrical conducting object, carrying a potential and moving at low speeds through an otherwise neutral medium. The plasma sheath is modeled in equilibrium through the GFMC solution of the nonlinear Poisson–Boltzmann (NPB) equation. The traditional Monte Carlo based approaches for the solution of nonlinear equations are iterative in nature, involving branching stochastic processes which are used to calculate linear functionals of the solution of nonlinear integral equations. Over the last several years, one of the authors of this paper, K. Chatterjee has been developing a philosophically-different approach, where the linearization of the equation of interest is not required and hence there is no need for iteration and the simulation of branching processes. Instead, an approximate expression for the Green's function is obtained using perturbation theory, which is used to formulate the random walk equations within the problem sub-domains where the random walker makes its walks. However, as a trade-off, the dimensions of these sub-domains have to be restricted by the limitations imposed by perturbation theory. The greatest advantage of this approach is the ease and simplicity of parallelization stemming from the lack of the need for iteration, as a result of which the parallelization procedure is identical to the parallelization procedure for the GFMC solution of a linear problem. The application area of interest is in the modeling of the communication breakdown problem during a space vehicle's re-entry into the atmosphere. However, additional application areas are being explored in the modeling of electromagnetic propagation through the atmosphere/ionosphere in UHF/GPS applications.

  8. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    SciTech Connect

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.
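    The nested-sampling core loop can be sketched in a few lines: the lowest-likelihood live point is repeatedly discarded, its weight taken from the estimated shrinkage of the prior volume, and it is replaced by a new prior sample subject to a hard likelihood constraint. The Python sketch below uses naive rejection sampling for the constrained step (where the paper uses HMC with SEM gradients) and a hypothetical Gaussian likelihood; the final live-point contribution to the evidence is omitted for brevity.

      import numpy as np

      rng = np.random.default_rng(3)

      def log_like(theta):                   # hypothetical 2-D Gaussian likelihood
          return -0.5 * np.sum((theta - 0.5) ** 2) / 0.2 ** 2

      n_live, n_iter = 200, 1200
      live = rng.random((n_live, 2))         # prior: uniform on the unit square
      live_logl = np.array([log_like(t) for t in live])

      log_z = -np.inf                        # running log-evidence estimate
      log_x = 0.0                            # log of the remaining prior volume
      for _ in range(n_iter):
          i_min = np.argmin(live_logl)
          log_x_new = log_x - 1.0 / n_live   # deterministic shrinkage e^(-1/n_live)
          log_w = np.log(np.exp(log_x) - np.exp(log_x_new))
          log_z = np.logaddexp(log_z, live_logl[i_min] + log_w)
          log_x = log_x_new
          # constrained replacement: naive rejection sampling from the prior
          # under L(theta) > L_min (the paper uses HMC for this step instead)
          while True:
              cand = rng.random(2)
              cl = log_like(cand)
              if cl > live_logl[i_min]:
                  live[i_min], live_logl[i_min] = cand, cl
                  break
      print("estimated log-evidence:", log_z)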

  9. Numerical Asymptotic Solutions Of Differential Equations

    NASA Technical Reports Server (NTRS)

    Thurston, Gaylen A.

    1992-01-01

    Numerical algorithms are derived and compared with classical analytical methods. In the method, asymptotic expansions are replaced with integrals evaluated numerically. The resulting numerical solutions retain linear independence, the main advantage of asymptotic solutions.

  10. Variational data assimilation with a semi-Lagrangian semi-implicit global shallow-water equation model and its adjoint

    NASA Technical Reports Server (NTRS)

    Li, Y.; Navon, I. M.; Courtier, P.; Gauthier, P.

    1993-01-01

    An adjoint model is developed for variational data assimilation using the 2D semi-Lagrangian semi-implicit (SLSI) shallow-water equation global model of Bates et al., with special attention paid to the linearization of the interpolation routines. It is demonstrated that with larger time steps the validity of the tangent linear model is curtailed by the interpolations, especially in regions where sharp gradients in the interpolated variables, coupled with strong advective winds, occur, a synoptic situation common in the high latitudes. This effect is particularly evident near the pole in the Northern Hemisphere during the winter season. Variational data assimilation experiments of the 'identical twin' type, with observations available only at the end of the assimilation period, perform well with this adjoint model. It is confirmed that the computational efficiency of the semi-Lagrangian scheme is preserved during the minimization process associated with the variational data assimilation procedure.

  11. Adjoint free four-dimensional variational data assimilation for a storm surge model of the German North Sea

    NASA Astrophysics Data System (ADS)

    Zheng, Xiangyang; Mayerle, Roberto; Xing, Qianguo; Fernández Jaramillo, José Manuel

    2016-08-01

    In this paper, a data assimilation scheme based on the adjoint-free Four-Dimensional Variational (4DVar) method is applied to an existing storm surge model of the German North Sea. To avoid the need for an adjoint model, an ensemble-like method that explicitly represents the tangent linear equation is adopted. Results of twin experiments have shown that the method is able to recover contaminated low-dimensional model parameters to their true values. The data assimilation scheme was applied to a severe storm surge event which occurred in the North Sea on December 5, 2013. By adjusting the wind drag coefficient, the predictive ability of the model increased significantly. Preliminary experiments have shown that an increase in the predictive ability is attained by narrowing the data assimilation time window.
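
    The adjoint-free idea can be illustrated with a small ensemble-based variational sketch (Python; a static, single-time toy problem with all dimensions and operators invented for the example): the analysis increment is written as a linear combination of ensemble anomalies, so the tangent linear and adjoint operators never have to be coded and only forward applications of the observation operator are needed.

    import numpy as np

    rng = np.random.default_rng(2)
    n, m, n_ens = 50, 40, 10                      # state size, observations, ensemble size

    x_b = np.zeros(n)                             # background state / parameter guess
    X = x_b[:, None] + 0.5 * rng.normal(size=(n, n_ens))
    Xp = (X - X.mean(axis=1, keepdims=True)) / np.sqrt(n_ens - 1)   # ensemble anomalies

    # Twin-experiment setup: the "true" state is a hidden combination of the anomalies.
    w_true = rng.normal(size=n_ens)
    x_true = x_b + Xp @ w_true

    H = rng.normal(size=(m, n)) / np.sqrt(n)      # toy linear observation/forward operator
    r = 1e-4                                      # observation error variance
    y = H @ x_true + np.sqrt(r) * rng.normal(size=m)

    # Adjoint-free analysis: only forward applications of H to the ensemble are used.
    HXp = H @ Xp
    d = y - H @ x_b                               # innovation
    A = np.eye(n_ens) + HXp.T @ HXp / r           # Hessian of the ensemble-space cost J(w)
    w = np.linalg.solve(A, HXp.T @ d / r)         # minimizer of J(w)
    x_a = x_b + Xp @ w

    print(np.linalg.norm(x_true - x_b), np.linalg.norm(x_true - x_a))   # analysis error is much smaller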

  12. Coupled forward-adjoint Monte Carlo simulation of spatial-angular light fields to determine optical sensitivity in turbid media.

    PubMed

    Gardner, Adam R; Hayakawa, Carole K; Venugopalan, Vasan

    2014-06-01

    We present a coupled forward-adjoint Monte Carlo (cFAMC) method to determine the spatially resolved sensitivity distributions produced by optical interrogation of three-dimensional (3-D) tissue volumes. We develop a general computational framework that computes the spatial and angular distributions of the forward-adjoint light fields to provide accurate computations in mesoscopic tissue volumes. We provide full computational details of the cFAMC method and provide results for low- and high-scattering tissues probed using a single pair of optical fibers. We examine the effects of source-detector separation and orientation on the sensitivity distributions and consider how the degree of angular discretization used in the 3-D tissue model impacts the accuracy of the resulting absorption sensitivity profiles. We discuss the value of such computations for optical imaging and the design of optical measurements. PMID:24972356

  13. Adjoint free four-dimensional variational data assimilation for a storm surge model of the German North Sea

    NASA Astrophysics Data System (ADS)

    Zheng, Xiangyang; Mayerle, Roberto; Xing, Qianguo; Fernández Jaramillo, José Manuel

    2016-06-01

    In this paper, a data assimilation scheme based on the adjoint free Four-Dimensional Variational(4DVar) method is applied to an existing storm surge model of the German North Sea. To avoid the need of an adjoint model, an ensemble-like method to explicitly represent the linear tangent equation is adopted. Results of twin experiments have shown that the method is able to recover the contaminated low dimension model parameters to their true values. The data assimilation scheme was applied to a severe storm surge event which occurred in the North Sea in December 5, 2013. By adjusting wind drag coefficient, the predictive ability of the model increased significantly. Preliminary experiments have shown that an increase in the predictive ability is attained by narrowing the data assimilation time window.

  14. Adjoint-Based Design of Rotors Using the Navier-Stokes Equations in a Noninertial Reference Frame

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Jones, William T.

    2010-01-01

    Optimization of rotorcraft flowfields using an adjoint method generally requires a time-dependent implementation of the equations. The current study examines an intermediate approach in which a subset of rotor flowfields are cast as steady problems in a noninertial reference frame. This technique permits the use of an existing steady-state adjoint formulation with minor modifications to perform sensitivity analyses. The formulation is valid for isolated rigid rotors in hover or where the freestream velocity is aligned with the axis of rotation. Discrete consistency of the implementation is demonstrated by using comparisons with a complex-variable technique, and a number of single- and multipoint optimizations for the rotorcraft figure of merit function are shown for varying blade collective angles. Design trends are shown to remain consistent as the grid is refined.

  15. Adjoint-Based Design of Rotors using the Navier-Stokes Equations in a Noninertial Reference Frame

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Jones, William T.

    2009-01-01

    Optimization of rotorcraft flowfields using an adjoint method generally requires a time-dependent implementation of the equations. The current study examines an intermediate approach in which a subset of rotor flowfields are cast as steady problems in a noninertial reference frame. This technique permits the use of an existing steady-state adjoint formulation with minor modifications to perform sensitivity analyses. The formulation is valid for isolated rigid rotors in hover or where the freestream velocity is aligned with the axis of rotation. Discrete consistency of the implementation is demonstrated using comparisons with a complex-variable technique, and a number of single- and multi-point optimizations for the rotorcraft figure of merit function are shown for varying blade collective angles. Design trends are shown to remain consistent as the grid is refined.

  16. Coupled forward-adjoint Monte Carlo simulation of spatial-angular light fields to determine optical sensitivity in turbid media

    PubMed Central

    Gardner, Adam R.; Hayakawa, Carole K.; Venugopalan, Vasan

    2014-01-01

    Abstract. We present a coupled forward-adjoint Monte Carlo (cFAMC) method to determine the spatially resolved sensitivity distributions produced by optical interrogation of three-dimensional (3-D) tissue volumes. We develop a general computational framework that computes the spatial and angular distributions of the forward-adjoint light fields to provide accurate computations in mesoscopic tissue volumes. We provide full computational details of the cFAMC method and provide results for low- and high-scattering tissues probed using a single pair of optical fibers. We examine the effects of source-detector separation and orientation on the sensitivity distributions and consider how the degree of angular discretization used in the 3-D tissue model impacts the accuracy of the resulting absorption sensitivity profiles. We discuss the value of such computations for optical imaging and the design of optical measurements. PMID:24972356

  17. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
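
    A hedged sketch of the shift-and-mask idea described above is given below in Python: for a static set of keys, search over right-shift amounts and contiguous bit masks until the extracted bits are unique across the set, giving a collision-free, constant-time membership index. The search space and scoring are illustrative; the report's actual subalgorithms and selection criteria are not reproduced.

    def synthesize_shift_mask(keys, max_shift=32, max_bits=16):
        best = None
        for shift in range(max_shift):
            for bits in range(1, max_bits + 1):
                mask = (1 << bits) - 1
                mapped = [(k >> shift) & mask for k in keys]
                if len(set(mapped)) == len(keys):            # no collisions
                    span = max(mapped) - min(mapped) + 1     # compactness of the mapping
                    if best is None or span < best[0]:
                        best = (span, shift, mask)
        if best is None:
            raise ValueError("no collision-free shift/mask found in the searched range")
        return best[1], best[2]

    keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081, 0x92A3]          # hypothetical key set
    shift, mask = synthesize_shift_mask(keys)
    table = {(k >> shift) & mask: k for k in keys}
    print(shift, hex(mask), table)

    def is_member(x):
        # constant-time membership test: one shift, one mask, one lookup, one compare
        return table.get((x >> shift) & mask) == x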

  18. Source Attribution of Health Benefits from Air Pollution Abatement in Canada and the United States: An Adjoint Sensitivity Analysis

    PubMed Central

    Pappin, Amanda Joy

    2013-01-01

    Background: Decision making regarding air pollution can be better informed if air quality impacts are traced back to individual emission sources. Adjoint or backward sensitivity analysis is a modeling tool that can achieve this goal by allowing for quantification of how emissions from sources in different locations influence human health metrics. Objectives: We attributed short-term mortality (valuated as an overall “health benefit”) in Canada and the United States to anthropogenic nitrogen oxides (NOx) and volatile organic compound (VOC) emissions across North America. Methods: We integrated epidemiological data derived from Canadian and U.S. time-series studies with the adjoint of an air quality model and also estimated influences of anthropogenic emissions at each location on nationwide health benefits. Results: We found significant spatiotemporal variability in estimated health benefit influences of NOx and VOC emission reductions on Canada and U.S. mortality. The largest estimated influences on Canada (up to $250,000/day) were from emissions originating in the Quebec City–Windsor Corridor, where population centers are concentrated. Estimated influences on the United States tend to be widespread and more substantial owing to both larger emissions and larger populations. The health benefit influences calculated using 24-hr average ozone (O3) concentrations are lower in magnitude than estimates calculated using daily 1-hr maximum O3 concentrations. Conclusions: Source specificity of the adjoint approach provides valuable information for guiding air quality decision making. Adjoint results suggest that the health benefits of reducing NOx and VOC emissions are substantial and highly variable across North America. PMID:23434744

  19. Estimates of black carbon emissions in the western United States using the GEOS-Chem adjoint model

    NASA Astrophysics Data System (ADS)

    Mao, Y. H.; Li, Q. B.; Henze, D. K.; Jiang, Z.; Jones, D. B. A.; Kopacz, M.; He, C.; Qi, L.; Gao, M.; Hao, W.-M.; Liou, K.-N.

    2015-07-01

    We estimate black carbon (BC) emissions in the western United States for July-September 2006 by inverting surface BC concentrations from the Interagency Monitoring of Protected Visual Environments (IMPROVE) network using a global chemical transport model (GEOS-Chem) and its adjoint. Our best estimate of the BC emissions is 49.9 Gg at 2° × 2.5° (a factor of 2.1 increase) and 47.3 Gg at 0.5° × 0.667° (1.9 times increase). Model results now capture the observed major fire episodes with substantial bias reductions (35% at 2° × 2.5° and 15% at 0.5° × 0.667°). The emissions are 20-50% larger than those from our earlier analytical inversions (Mao et al., 2014). The discrepancy is especially drastic in the partitioning of anthropogenic versus biomass burning emissions. The August biomass burning BC emissions are 4.6-6.5 Gg and anthropogenic BC emissions 8.6-12.8 Gg, varying with the model resolution, error specifications, and subsets of observations used. On average both anthropogenic and biomass burning emissions in the adjoint inversions increase 2-fold relative to the respective a priori emissions, in distinct contrast to the halving of the anthropogenic and tripling of the biomass burning emissions in the analytical inversions. We attribute these discrepancies to the inability of the adjoint inversion system, with limited spatiotemporal coverage of the IMPROVE observations, to effectively distinguish collocated anthropogenic and biomass burning emissions on model grid scales. This calls for concurrent measurements of other tracers of biomass burning and fossil fuel combustion (e.g., carbon monoxide and carbon isotopes). We find that the adjoint inversion system as is has sufficient information content to constrain the total emissions of BC on the model grid scales.
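
    For context, top-down source inversions of this kind are typically posed as the minimization of a Bayesian (4D-Var-type) cost function; a generic form, with symbols as commonly defined in the inverse-modeling literature rather than taken from this particular study, is

    \[
    J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_a)^{\mathsf T}\,\mathbf{S}_a^{-1}\,(\mathbf{x}-\mathbf{x}_a)
    + \tfrac{1}{2}\,\bigl(F(\mathbf{x})-\mathbf{y}\bigr)^{\mathsf T}\,\mathbf{S}_o^{-1}\,\bigl(F(\mathbf{x})-\mathbf{y}\bigr),
    \]

    where x is the vector of emission scaling factors, x_a the a priori estimate, y the observations (here, IMPROVE surface concentrations), F the forward chemical transport model, and S_a and S_o the a priori and observational error covariance matrices; the adjoint model supplies the gradient of J with respect to x needed by the iterative minimization.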

  20. Assessing the Impact of Advanced Satellite Observations in the NASA GEOS-5 Forecast System Using the Adjoint Method

    NASA Technical Reports Server (NTRS)

    Gelaro, Ron; Liu, Emily; Sienkiewicz, Meta

    2011-01-01

    The adjoint of a data assimilation system provides a flexible and efficient tool for estimating observation impacts on short-range weather forecasts. The impacts of any or all observations can be estimated simultaneously based on a single execution of the adjoint system. The results can be easily aggregated according to data type, location, channel, etc., making this technique especially attractive for examining the impacts of new hyper-spectral satellite instruments and for conducting regular, even near-real time, monitoring of the entire observing system. In this talk, we present results from the adjoint-based observation impact monitoring tool in NASA's GEOS-5 global atmospheric data assimilation and forecast system. The tool has been running in various off-line configurations for some time, and is scheduled to run as a regular part of the real-time forecast suite beginning in autumn 2010. We focus on the impacts of the newest components of the satellite observing system, including AIRS, IASI and GPS. For AIRS and IASI, it is shown that the vast majority of the channels assimilated have systematic positive impacts (of varying magnitudes), although some channels degrade the forecast. Of the latter, most are moisture-sensitive or near-surface channels. The impact of GPS observations in the southern hemisphere is found to be a considerable overall benefit to the system. In addition, the spatial variability of observation impacts reveals coherent patterns of positive and negative impacts that may point to deficiencies in the use of certain observations over, for example, specific surface types. When performed in conjunction with selected observing system experiments (OSEs), the adjoint results reveal both redundancies and dependencies between observing system impacts as observations are added or removed from the assimilation system. Understanding these dependencies appears to pose a major challenge for optimizing the use of the current observational network and

  1. Development and application of the WRFPLUS-Chem online chemistry adjoint and WRFDA-Chem assimilation system

    NASA Astrophysics Data System (ADS)

    Guerrette, J. J.; Henze, D. K.

    2015-06-01

    Here we present the online meteorology and chemistry adjoint and tangent linear model, WRFPLUS-Chem (Weather Research and Forecasting plus chemistry), which incorporates modules to treat boundary layer mixing, emission, aging, dry deposition, and advection of black carbon aerosol. We also develop land surface and surface layer adjoints to account for coupling between radiation and vertical mixing. Model performance is verified against finite difference derivative approximations. A second-order checkpointing scheme is created to reduce computational costs and enable simulations longer than 6 h. The adjoint is coupled to WRFDA-Chem, in order to conduct a sensitivity study of anthropogenic and biomass burning sources throughout California during the 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) field campaign. A cost-function weighting scheme was devised to reduce the impact of statistically insignificant residual errors in future inverse modeling studies. Results of the sensitivity study show that, for this domain and time period, anthropogenic emissions are overpredicted, while wildfire emission error signs vary spatially. We consider the diurnal variation in emission sensitivities to determine at what time sources should be scaled up or down. Also, adjoint sensitivities for two choices of land surface model (LSM) indicate that emission inversion results would be sensitive to forward model configuration. The tools described here are the first step in conducting four-dimensional variational data assimilation in a coupled meteorology-chemistry model, which will potentially provide new constraints on aerosol precursor emissions and their distributions. Such analyses will be invaluable to assessments of particulate matter health and climate impacts.

  2. Adjoint tomography of crust and upper-mantle structure beneath Continental China

    NASA Astrophysics Data System (ADS)

    Chen, M.; Niu, F.; Liu, Q.; Tromp, J.

    2013-12-01

    Four years of regional earthquake recordings from 1,869 seismic stations are used for high-resolution and high-fidelity seismic imaging of the crust and upper-mantle structure beneath Continental China. This unprecedented high-density dataset comprises seismograms recorded by the China Earthquake Administration Array (CEArray), NorthEast China Extended SeiSmic Array (NECESSArray), INDEPTH-IV Array, F-net and other global and regional seismic networks, and involves 1,326,384 frequency-dependent phase measurements. Adjoint tomography is applied to this unprecedented dataset, aiming to resolve detailed 3D maps of compressional and shear wavespeeds, and radial anisotropy. Contrary to traditional ray-theory based tomography, adjoint tomography takes into account full 3D wave propagation effects and off-ray-path sensitivity. In our implementation, it utilizes a spectral-element method for precise wave propagation simulations. The tomographic method starts with a 3D initial model that combines the smooth radially anisotropic mantle model S362ANI and the 3D crustal model Crust2.0. Traveltime and amplitude misfits are minimized iteratively based on a conjugate gradient method, harnessing 3D finite-frequency kernels computed for each updated 3D model. After 17 iterations, our inversion reveals strong correlations of 3D wavespeed heterogeneities in the crust and upper mantle with surface tectonic units, such as the Himalaya Block, the Tibetan Plateau, the Tarim Basin, the Ordos Block, and the South China Block. Narrow slab features emerge from the smooth initial model above the transition zone beneath the Japan, Ryukyu, Philippine, Izu-Bonin, Mariana and Andaman arcs. 3D wavespeed variations appear comparable to or much sharper than in high-frequency P- and S-wave models from previous studies. Moreover, our results include new information, such as 3D variations of radial anisotropy and the Vp/Vs ratio, which are expected to shed new light on the composition, thermal state, flow

  3. High-resolution array imaging using teleseismic converted waves based on adjoint methods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Chen, C.

    2011-12-01

    Seismic coda waves and converted phases have been used extensively to image detailed subsurface structures underneath seismic arrays, based on methods such as receiver functions, Kirchhoff migration and generalized Radon transform (GRT). Utilizing the same coda and converted waves, we propose to image both discontinuity interfaces and 3D velocity anomalies by combining full numerical simulations of wave propagation with adjoint methods recently adopted in global and regional tomography inversions. The `sensitivities' of these coda/converted waves to density, P and S velocities are calculated based on the interaction of the forward wave field that produces the main P phase, and the adjoint wave field generated by injecting the coda/converted phases at array stations as virtual sources, similar to the computation of isochrons in previous techniques. The density kernels generally highlight discontinuity interfaces and sharp velocity contrasts, while P and S velocity kernels provide hints to the update of volumetric velocity structures. The application of numerical solvers also allows the incorporation of 3D regional tomography models as background velocity models, providing better focusing of velocity anomalies. We show the feasibility of this technique on a synthetic case built based on the imaging geometry for Slave craton in the northwestern Canadian Shield by the POLARIS broadband seismic network. The main challenge of this technique lies in reproducing the forward wave field generated by tele-seismic sources in a limited simulation domain encompassing only local heterogeneous structures underneath array receivers. For simple homogeneous and layer-over-half-space background models, this can be solved by setting the incoming plane waves as initial conditions based on analytical formulae. For more sophisticated background models, a hybrid spectral-element solver is implemented by defining a fictitious boundary encompassing all local heterogeneities within the

  4. Adjoint-based computation of U.S. nationwide ozone exposure isopleths

    NASA Astrophysics Data System (ADS)

    Ashok, Akshay; Barrett, Steven R. H.

    2016-05-01

    Population exposure to daily maximum ozone is associated with an increased risk of premature mortality, and efforts to mitigate these impacts involve reducing emissions of nitrogen oxides (NOx) and volatile organic compounds (VOCs). We quantify the dependence of U.S. national exposure to annually averaged daily maximum ozone on ambient VOC and NOx concentrations through ozone exposure isopleths, developed using emissions sensitivities from the adjoint of the GEOS-Chem air quality model for 2006. We develop exposure isopleths for all locations within the contiguous US and derive metrics based on the isopleths that quantify the impact of emissions on national ozone exposure. This work is the first to create ozone exposure isopleths using adjoint sensitivities and at a large scale. We find that across the US, 29% of locations experience VOC-limited conditions (where increased NOx emissions lower ozone) during 51% of the year on average. VOC-limited conditions are approximately evenly distributed diurnally and occur more frequently during the fall and winter months (67% of the time) than in the spring and summer (37% of the time). The VOC/NOx ratio of the ridge line on the isopleth diagram (denoting a local maximum in ozone exposure with respect to NOx concentrations) is 9.2 ppbC/ppb on average across grid cells that experience VOC-limited conditions and 7.9, 10.1 and 6.7 ppbC/ppb at the three most populous US cities of New York, Los Angeles and Chicago, respectively. Emissions that are ozone exposure-neutral during VOC-limited exposure conditions result in VOC/NOx concentration ratios of 0.63, 1.61 and 0.72 ppbC/ppb at each of the three US cities respectively, and between 0.01 and 1.91 ppbC/ppb at other locations. The sensitivity of national ozone exposure to NOx and VOC emissions is found to be highest near major cities in the US. Together, this information can be used to assess the effectiveness of NOx and VOC emission reductions on mitigating ozone exposure in the

  5. High-resolution mapping of sources contributing to urban air pollution using adjoint sensitivity analysis: benzene and diesel black carbon.

    PubMed

    Bastien, Lucas A J; McDonald, Brian C; Brown, Nancy J; Harley, Robert A

    2015-06-16

    The adjoint of the Community Multiscale Air Quality (CMAQ) model at 1 km horizontal resolution is used to map emissions that contribute to ambient concentrations of benzene and diesel black carbon (BC) in the San Francisco Bay area. Model responses of interest include population-weighted average concentrations for three highly polluted receptor areas and the entire air basin. We consider both summer (July) and winter (December) conditions. We introduce a novel approach to evaluate adjoint sensitivity calculations that complements existing methods. Adjoint sensitivities to emissions are found to be accurate to within a few percent, except at some locations associated with large sensitivities to emissions. Sensitivity of model responses to emissions is larger in winter, reflecting weaker atmospheric transport and mixing. The contribution of sources located within each receptor area to the same receptor's air pollution burden increases from 38-74% in summer to 56-85% in winter. The contribution of local sources is higher for diesel BC (62-85%) than for benzene (38-71%), reflecting the difference in these pollutants' atmospheric lifetimes. Morning (6-9am) and afternoon (4-7 pm) commuting-related emissions dominate region-wide benzene levels in winter (14 and 25% of the total response, respectively). In contrast, afternoon rush hour emissions do not contribute significantly in summer. Similar morning and afternoon peaks in sensitivity to emissions are observed for the BC response; these peaks are shifted toward midday because most diesel truck traffic occurs during off-peak hours. PMID:26001097

  6. On a Time-Space Operator (and other Non-Self-Adjoint Operators) for Observables in QM and QFT

    NASA Astrophysics Data System (ADS)

    Recami, Erasmo; Zamboni-Rached, Michel; Licata, Ignazio

    The aim of this paper is to show the possible significance, and usefulness, of various non-self-adjoint operators for suitable Observables in non-relativistic and relativistic quantum mechanics (QM), and in quantum electrodynamics. More specifically, this work deals with: (i) the Hermitian (but not self-adjoint) Time operator in non-relativistic QM and in quantum electrodynamics; (ii) idem, the introduction of Time and Space operators; and (iii) the problem of the four-position and four-momentum operators, each one with its Hermitian and anti-Hermitian parts, for relativistic spin-zero particles. Afterwards, other physical applications of non-self-adjoint (and even non-Hermitian) operators are briefly discussed. We mention how non-Hermitian operators can indeed be used in physics [as it was done, elsewhere, for describing Unstable States]; and some considerations are added on the cases of the nuclear optical potential, of quantum dissipation, and in particular of an approach to the measurement problem in QM in terms of a chronon. This paper is largely based on work developed, along the years, in collaboration with V.S. Olkhovsky, and, in smaller parts, with P. Smrz, with R.H.A. Farias, and with S.P. Maydanyuk.

  7. Investigating troposphere-stratosphere coupling during the southern hemisphere sudden stratospheric warming using an adjoint model.

    NASA Astrophysics Data System (ADS)

    Holdaway, D.; Coy, L.

    2015-12-01

    In September 2002 a major sudden stratospheric warming (SSW) occurred in the southern hemisphere. Although numerous SSWs have been observed in the northern hemisphere, this remains the only recorded major SSW in the southern hemisphere. Much debate has focused on this unique event and its causes, even resulting in a special issue of the Journal of the Atmospheric Sciences. In this work we use the adjoint of NASA's Goddard Earth Observing System version 5 (GEOS-5) to investigate sensitivity to initial conditions during the onset of the 2002 SSW. The adjoint model provides a framework for propagating gradients with respect to the model state backwards in time. As such it is used to reveal aspects of the model initial conditions that have the biggest impact on the temperature in the stratosphere during the warming. The adjoint model reveals a large sensitivity over the southern Atlantic Ocean and in the troposphere. This reinforces previous studies that attributed the SSW to a blocking ridge in this region. By converting sensitivity to perturbations it is shown that relatively small localized tropospheric perturbations to winds and temperature can be transported to the stratosphere and have a large impact on the SSW.

  8. Development of a High-Order Space-Time Matrix-Free Adjoint Solver

    NASA Technical Reports Server (NTRS)

    Ceze, Marco A.; Diosady, Laslo T.; Murman, Scott M.

    2016-01-01

    The growth in computational power and algorithm development in the past few decades has granted the science and engineering community the ability to simulate flows over complex geometries, thus making Computational Fluid Dynamics (CFD) tools indispensable in analysis and design. Currently, one of the pacing items limiting the utility of CFD for general problems is the prediction of unsteady turbulent flows. Reynolds-averaged Navier-Stokes (RANS) methods, which predict a time-invariant mean flowfield, struggle to provide consistent predictions when encountering even mild separation, such as the side-of-body separation at a wing-body junction. NASA's Transformative Tools and Technologies project is developing both numerical methods and physical modeling approaches to improve the prediction of separated flows. A major focus of this effort is efficient methods for resolving the unsteady fluctuations occurring in these flows to provide valuable engineering data of the time-accurate flow field for buffet analysis, vortex shedding, etc. This approach encompasses unsteady RANS (URANS), large-eddy simulations (LES), and hybrid LES-RANS approaches such as Detached Eddy Simulations (DES). These unsteady approaches are inherently more expensive than traditional engineering RANS approaches, hence every effort to mitigate this cost must be leveraged. Arguably, the most cost-effective approach to improve the efficiency of unsteady methods is the optimal placement of the spatial and temporal degrees of freedom (DOF) using solution-adaptive methods.

  9. Anelastic sensitivity kernels with parsimonious storage for adjoint tomography and full waveform inversion

    NASA Astrophysics Data System (ADS)

    Komatitsch, Dimitri; Xie, Zhinan; Bozdağ, Ebru; Sales de Andrade, Elliott; Peter, Daniel; Liu, Qinya; Tromp, Jeroen

    2016-06-01

    We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.
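
    To make the storage idea concrete, the toy Python sketch below shows the generic checkpointing pattern of keeping a small buffer of forward snapshots and locally recomputing the forward field inside each segment during the reverse sweep, so that the dissipative forward step is never run backwards in time. The scalar update, segment length, and seeding of the adjoint are illustrative assumptions; the paper's actual loop reordering, buffer design, and kernel expressions are not reproduced here.

    import numpy as np

    DT, GAMMA = 0.01, 0.05

    def forward_step(u):
        # toy dissipative update standing in for one time step of an anelastic solver
        return (1.0 - GAMMA * DT) * u + DT * np.sin(u)

    def rerun_segment(u0, n):
        # re-run n forward steps from a stored checkpoint, keeping all intermediate states
        states = [u0]
        for _ in range(n):
            states.append(forward_step(states[-1]))
        return states

    n_steps, seg = 1000, 100
    u = np.array([1.0, 0.5, -0.3])

    # forward sweep: store only one snapshot per segment (the parsimonious buffer)
    checkpoints = []
    for k in range(n_steps):
        if k % seg == 0:
            checkpoints.append(u.copy())
        u = forward_step(u)

    # reverse sweep: walk the segments backwards, reconstructing the forward states
    # locally so each adjoint step can use the forward state it needs without
    # time-reversing the dissipative dynamics.  The adjoint is seeded with ones here
    # purely for illustration.
    lam = np.ones_like(u)
    for s in reversed(range(len(checkpoints))):
        states = rerun_segment(checkpoints[s], seg)
        for u_k in reversed(states[:-1]):
            # exact adjoint of the toy forward step, evaluated at the reconstructed state u_k
            lam = ((1.0 - GAMMA * DT) + DT * np.cos(u_k)) * lam
    print(lam)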

  10. Application to MISR Land Products of an RPV Model Inversion Package Using Adjoint and Hessian Codes

    NASA Astrophysics Data System (ADS)

    Lavergne, T.; Kaminski, T.; Pinty, B.; Taberner, M.; Gobron, N.; Verstraete, M. M.; Vossbeck, M.; Widlowski, J.-L.; Giering, R.

    The capability of the non-linear Rahman-Pinty-Verstraete (RPV) model to (1) accurately fit a large variety of Bidirectional Reflectance Factor (BRF) fields and (2) return parameter values of interest for land surface applications motivates the development of a computer-efficient inversion package. The present paper describes such a package, based on the 3- and 4-parameter versions of the RPV model. This software environment implements the adjoint code of the cost function, generated using automatic differentiation techniques. The cost function itself balances two main contributions reflecting (1) the a priori knowledge on the model parameter values and (2) the BRF uncertainties, together with the requirement to minimize the mismatch between the measurements and the RPV simulations. The individual weights of these contributions are specified notably via covariance matrices of the uncertainties in the a priori knowledge on the model parameters and in the observations. The package also reports the probability density functions of the retrieved model parameter values, which permit the user to evaluate the a posteriori uncertainties on these retrievals; this is achieved by evaluating the Hessian of the cost function at its minimum. Results from a variety of tests are shown in order to document and analyze software performance against complex synthetic BRF fields simulated by radiation transfer models, as well as against actual MISR-derived surface BRF products.

  11. Adjoint-based optimal control for black-box simulators enabled by model calibration

    NASA Astrophysics Data System (ADS)

    Chen, Han; Wang, Qiqi; Klie, Hector

    2013-11-01

    Many simulations are performed using legacy codes that are difficult to modify, or commercial software without available source code. Such a ``black-box'' simulator often solves a partial differential equation involving some unknown parameters, functions or discretization methods. Optimal control for black-box simulators can be performed using gradient-free methods, but these methods can be computationally expensive when the controls are high dimensional. We aim at developing a more efficient optimization methodology for black-box simulations by first inferring and calibrating a ``twin model'' of the black-box simulator. The twin model is an open-box model that mirrors the behavior of the black-box simulation using data assimilation techniques. We then apply adjoint-based optimal control to the calibrated twin model. This method is applied to a 1D Buckley-Leverett equation solver and to the black-box multi-phase porous media flow solver PSIM. Special thanks for the support from the subsurface technology group of ConocoPhillips.
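
    A minimal sketch of the twin-model idea is given below in Python: (1) calibrate an open-box surrogate against a black-box simulator from input/output samples only, then (2) use the surrogate's analytic adjoint/gradient for gradient-based control. The quadratic forms and all numbers below are illustrative assumptions; they are unrelated to the Buckley-Leverett or PSIM solvers mentioned in the abstract.

    import numpy as np

    def black_box(u):
        # stands in for a legacy simulator that cannot be differentiated; here it is
        # simply y = 3 u^2 so the result can be checked by hand
        return 3.0 * u ** 2

    # Step 1: calibrate an open-box "twin" y ~ a * u^2 from sampled black-box runs
    u_samples = np.linspace(0.1, 2.0, 20)
    y_samples = black_box(u_samples)
    a = np.sum(y_samples * u_samples ** 2) / np.sum(u_samples ** 4)   # least-squares fit

    # Step 2: gradient-based control using the twin's analytic adjoint/gradient
    y_target = 5.0

    def objective_and_gradient(u):
        y = a * u ** 2
        dJ_dy = 2.0 * (y - y_target)          # adjoint of the objective w.r.t. the twin output
        dJ_du = dJ_dy * (2.0 * a * u)         # chain rule back through the twin model
        return (y - y_target) ** 2, dJ_du

    u = 0.5                                    # initial control
    for _ in range(200):
        J, g = objective_and_gradient(u)
        u -= 0.01 * g                          # plain gradient descent on the twin
    print(u, black_box(u))                     # u approaches sqrt(5/3), about 1.29; output near 5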

  12. Large-N reduction in QCD-like theories with massive adjoint fermions

    SciTech Connect

    Azeyanagi, Tatsuo; Hanada, Masanori; Unsal, Mithat; Yacoby, Ran; /Weizmann Inst.

    2010-08-26

    Large-N QCD with heavy adjoint fermions emulates pure Yang-Mills theory at long distances. We study this theory on a four- and three-torus, and analytically argue the existence of a large-small volume equivalence. For any finite mass, a center-symmetry-unbroken phase exists at sufficiently small volume, and this phase can be used to study the large-volume limit through the Eguchi-Kawai equivalence. A finite temperature version of volume independence implies that thermodynamics on R3 x S1 can be studied via a unitary matrix quantum mechanics on S1, by varying the temperature. To confirm this non-perturbatively, we numerically study both zero- and one-dimensional theories by using Monte Carlo simulation. The order of finite-N corrections turns out to be 1/N. We introduce various twisted versions of the reduced QCD which systematically suppress finite-N corrections. Using a twisted model, we observe the confinement/deconfinement transition on a 1^3 x 2 lattice. The result agrees with large volume simulations of Yang-Mills theory. We also comment that the twisted model can serve as a non-perturbative formulation of the non-commutative Yang-Mills theory.

  13. Adjoint inverse modeling of CO emissions over Eastern Asia using four-dimensional variational data assimilation

    NASA Astrophysics Data System (ADS)

    Yumimoto, Keiya; Uno, Itsushi

    We developed a four-dimensional variational (4DVAR) data assimilation system for a regional chemical transport model (CTM). In this study, we applied it to inverse modeling of CO emissions in eastern Asia during April 2001 and demonstrated the feasibility of our assimilation system. Three ground-based observations were used for data assimilation. Assimilated results showed better agreement with observations; they reduced the RMS difference by 16-27%. Observations obtained on board the R/V Ronald H. Brown were used for independent validation of the assimilated results. The CO emissions over industrialized east central China between Shanghai and Beijing were increased markedly by the assimilation. The results show that the annual anthropogenic (fossil and biofuel combustion) CO emissions over China are 147 Tg. Sensitivity analyses using the adjoint model indicate that the high CO concentration measured on 17 April at Rishiri, Japan (which the assimilation was unable to reproduce) originated in Russia or had traveled from outside the Asian region (e.g. Europe).

  14. Focus point gauge mediation with incomplete adjoint messengers and gauge coupling unification

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Gautam; Yanagida, Tsutomu T.; Yokozaki, Norimi

    2015-10-01

    As the mass limits on supersymmetric particles are gradually pushed to higher values due to their continuing non-observation at the CERN LHC, looking for focus point regions in the supersymmetric parameter space, which show considerably reduced fine-tuning, is more important than ever. We explore this in the context of gauge mediated supersymmetry breaking with messengers transforming in the adjoint representation of the gauge group, namely, an octet of color SU(3) and a triplet of weak SU(2). A distinctive feature of this scenario is that the focus point is achieved by fixing a single combination of parameters in the messenger sector, which is invariant under the renormalization group evolution. Because of this invariance, the focus point behavior is well under control once the relevant parameters are fixed by a more fundamental theory. The observed Higgs boson mass is explained with a relatively mild fine-tuning Δ = 60-150. Interestingly, even in the presence of incomplete messenger multiplets of the SU(5) GUT group, the gauge couplings still unify perfectly, but at a scale which is one or two orders of magnitude above the conventional GUT scale. Because of this larger unification scale, the colored Higgs multiplets become too heavy to trigger proton decay at a rate larger than the experimentally allowed limit.

  15. One-loop test of free SU( N ) adjoint model holography

    NASA Astrophysics Data System (ADS)

    Bae, Jin-Beom; Joung, Euihun; Lal, Shailesh

    2016-04-01

    We consider the holographic duality where the CFT side is given by SU( N ) adjoint free scalar field theory. Compared to the vector models, the set of single trace operators is immensely extended so that the corresponding AdS theory also contains infinitely many massive higher spin fields on top of the massless ones. We compute the one-loop vacuum energy of these AdS fields to test this duality at the subleading order in the large N expansion. The determination of the bulk vacuum energy requires a proper scheme to sum up the infinitely many contributions. For that, we develop a new method and apply it first to calculate the vacuum energies for the first few `Regge trajectories' in AdS4 and AdS5. In considering the full vacuum energy of the AdS theory dual to a matrix model CFT, we find that there exists more than one available prescription for the one-loop vacuum energy. Taking a particular prescription, we determine the full vacuum energy of the AdS5 theory, whereas the AdS4 calculation still remains technically prohibitive. This result shows that the full vacuum energy of the AdS5 theory coincides with minus the free energy of a single scalar field on the boundary. This is analogous to the O( N ) vector model case, and hence suggests an interpretation of the positive shift of the bulk coupling constant, i.e. from N^2 - 1 to N^2.

  16. Adjoint-based optimization for the understanding of the aerodynamics of a flapping plate

    NASA Astrophysics Data System (ADS)

    Wei, Mingjun; Xu, Min

    2015-11-01

    An adjoint-based optimization is applied to a rigid flapping plate and a flexible flapping plate for drag reduction and propulsive efficiency. Non-cylindrical calculus is introduced to handle the moving boundary. The rigid plate has a combined plunging and pitching motion in an incoming flow; the control parameter is the phase delay, which is considered first as a constant and then as an arbitrary time-varying function. The optimal controls with different cost functions provide different strategies to reach maximum drag reduction or propulsive efficiency. The flexible plate has plunging, pitching, and a deformation defined by the first two natural modes. With the same optimization goals, the controls are instead the amplitude and phase delay of the pitching, the first eigenmode, and the second eigenmode. Similar analyses are performed to understand the conditions for drag reduction and propulsive efficiency when flexibility is involved. It is also shown that the flexibility plays a more important role at lower Reynolds number. Supported by AFOSR.

  17. Improving NO(x) cap-and-trade system with adjoint-based emission exchange rates.

    PubMed

    Mesbah, S Morteza; Hakami, Amir; Schott, Stephan

    2012-11-01

    Cap-and-trade programs have proven to be effective instruments for achieving environmental goals while incurring minimum cost. The nature of the pollutant, however, affects the design of these programs. NO(x), an ozone precursor, is a nonuniformly mixed pollutant with a short atmospheric lifetime. NO(x) cap-and-trade programs in the U.S. are successful in reducing total NO(x) emissions but may result in suboptimal environmental performance because location-specific ozone formation potentials are neglected. In this paper, the current NO(x) cap-and-trade system is contrasted to a hypothetical NO(x) trading policy with sensitivity-based exchange rates. Location-specific exchange rates, calculated through adjoint sensitivity analysis, are combined with constrained optimization for prediction of NO(x) emissions trading behavior and post-trade ozone concentrations. The current and proposed policies are examined in a case study for 218 coal-fired power plants that participated in the NO(x) Budget Trading Program in 2007. We find that better environmental performance at negligibly higher system-wide abatement cost can be achieved through inclusion of emission exchange rates. Exposure-based exchange rates result in better environmental performance than those based on concentrations. PMID:23050674
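
    As an illustration of how exchange rates enter the trading problem, the sketch below (Python with SciPy; plant data, sensitivities, and the cap are invented numbers, not values from the study) poses the least-cost abatement problem when each source's emissions are weighted by an adjoint-derived exchange rate rather than counted one-for-one.

    import numpy as np
    from scipy.optimize import linprog

    # Invented data for three sources: uncontrolled NOx emissions (tons), abatement
    # cost ($/ton), and adjoint-derived marginal-damage "exchange rates" (impact/ton).
    emissions = np.array([100.0, 80.0, 60.0])
    cost = np.array([500.0, 800.0, 300.0])
    rate = np.array([2.0, 0.5, 1.0])

    damage_cap = 150.0                          # cap on exchange-rate-weighted emissions

    # Decision variables: reductions r_i at each source.  Minimize total abatement cost
    # subject to  sum_i rate_i * (emissions_i - r_i) <= damage_cap  and  0 <= r_i <= e_i.
    A_ub = [-rate]                              # -rate.dot(r) <= damage_cap - rate.dot(emissions)
    b_ub = [damage_cap - rate @ emissions]
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=list(zip(np.zeros(3), emissions)))

    print(res.x)                                # cheap, high-impact sources abate first
    print(rate @ (emissions - res.x))           # weighted emissions meet the cap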

  18. Optimal ozone reduction policy design using adjoint-based NOx marginal damage information.

    PubMed

    Mesbah, S Morteza; Hakami, Amir; Schott, Stephan

    2013-01-01

    Despite substantial reductions in nitrogen oxide (NOx) emissions in the United States, the success of emission control programs in optimal ozone reduction is disputable because they do not consider the spatial and temporal differences in health and environmental damages caused by NOx emissions. This shortcoming in the current U.S. NOx control policy is explored, and various methodologies for identifying optimal NOx emission control strategies are evaluated. The proposed approach combines an optimization platform with an adjoint (or backward) sensitivity analysis model and is able to examine the environmental performance of the current cap-and-trade policy and two damage-based emissions-differentiated policies. Using the proposed methodology, a 2007 case study of 218 U.S. electricity generation units participating in the NOx trading program is examined. The results indicate that inclusion of damage information can significantly enhance public health performance of an economic instrument. The net benefit under the policy that minimizes the social cost (i.e., health costs plus abatement costs) is six times larger than that of an exchange rate cap-and-trade policy. PMID:24144173

  19. Adjoint Monte Carlo simulation of fusion product activation probe experiment in ASDEX Upgrade tokamak

    NASA Astrophysics Data System (ADS)

    Äkäslompolo, S.; Bonheure, G.; Tardini, G.; Kurki-Suonio, T.; The ASDEX Upgrade Team

    2015-10-01

    The activation probe is a robust tool to measure the flux of fusion products from a magnetically confined plasma. A carefully chosen solid sample is exposed to the flux, and the impinging ions transmute the material, making it radioactive. Ultra-low level gamma-ray spectroscopy is used post mortem to measure the activity and, thus, the number of fusion products. This contribution presents the numerical analysis of the first measurement in the ASDEX Upgrade tokamak, which was also the first experiment to measure a single discharge. The ASCOT suite of codes was used to perform adjoint/reverse Monte Carlo calculations of the fusion products. The analysis facilitates, for the first time, a comparison of numerical and experimental values for absolutely calibrated flux. The results agree to within a factor of about two, which can be considered quite good given that not all features of the plasma can be accounted for in the simulations. Also, an alternative to the present probe orientation was studied. The results suggest that a better optimized orientation could measure the flux from a significantly larger part of the plasma. A shorter version of this contribution is due to be published in PoS at: 1st EPS conference on Plasma Diagnostics

  20. Automated divertor target design by adjoint shape sensitivity analysis and a one-shot method

    SciTech Connect

    Dekeyser, W.; Reiter, D.; Baelmans, M.

    2014-12-01

    As magnetic confinement fusion progresses towards the development of first reactor-scale devices, computational tokamak divertor design is a topic of high priority. Presently, edge plasma codes are used in a forward approach, where magnetic field and divertor geometry are manually adjusted to meet design requirements. Due to the complex edge plasma flows and large number of design variables, this method is computationally very demanding. On the other hand, efficient optimization-based design strategies have been developed in computational aerodynamics and fluid mechanics. Such an optimization approach to divertor target shape design is elaborated in the present paper. A general formulation of the design problems is given, and conditions characterizing the optimal designs are formulated. Using a continuous adjoint framework, design sensitivities can be computed at a cost of only two edge plasma simulations, independent of the number of design variables. Furthermore, by using a one-shot method the entire optimization problem can be solved at an equivalent cost of only a few forward simulations. The methodology is applied to target shape design for uniform power load, in simplified edge plasma geometry.
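
    The claim that all shape sensitivities come at the cost of roughly two plasma simulations follows from the standard adjoint identity, written here in a generic discrete form (the notation is generic, not the paper's):

    \[
    \frac{\mathrm dJ}{\mathrm d\mathbf q}
    = \frac{\partial J}{\partial \mathbf q}
    - \boldsymbol\lambda^{\mathsf T}\,\frac{\partial \mathbf R}{\partial \mathbf q},
    \qquad\text{with}\qquad
    \Bigl(\frac{\partial \mathbf R}{\partial \mathbf u}\Bigr)^{\mathsf T}\boldsymbol\lambda
    = \Bigl(\frac{\partial J}{\partial \mathbf u}\Bigr)^{\mathsf T},
    \]

    where R(u, q) = 0 is the discretized edge-plasma state equation, u the plasma state, q the vector of target-shape design variables, J the design objective (for example, deviation from a uniform power load), and lambda the adjoint state. One forward solve gives u, one adjoint solve gives lambda, and the full gradient over all design variables then follows from comparatively cheap partial-derivative evaluations.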

  1. Adjoint sensitivity analysis of thermoacoustic instability in a nonlinear Helmholtz solver

    NASA Astrophysics Data System (ADS)

    Juniper, Matthew; Magri, Luca

    2014-11-01

    Thermoacoustic instability is a persistent problem in aircraft and rocket engines. It occurs when heat release in the combustion chamber synchronizes with acoustic oscillations. It is always noisy and can sometimes result in catastrophic failure of the engine. Typically, the heat release from the flame is assumed to equal the acoustic velocity at a reference point multiplied by a spatially-varying function (the flame envelope) subject to a spatially-varying time delay. This models hydrodynamic perturbations convecting down the flame causing subsequent heat release perturbations. This creates an eigenvalue problem that is linear in the acoustic pressure but nonlinear in the complex frequency, omega. This can be solved as a sequence of linear eigenvalue problems in which the operators are updated with a new value of omega after each iteration. Adjoint methods find the sensitivity of each eigenmode to all the parameters simultaneously and are well suited to thermoacoustic problems because there are a few interesting eigenmodes but many influential parameters. The challenge here is to express the sensitivity of the eigenvalue at the final iteration to an arbitrary change in the parameters of the first iteration. This is a promising new technique for the control of thermoacoustics. European Research Council Grant Number 2590620.
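
    The eigenvalue-sensitivity machinery can be illustrated on a small matrix problem (Python; the matrix and parameter direction are invented, and the nonlinear dependence on the frequency is omitted): for a linear eigenproblem A(p) x = lambda x, the adjoint (left) eigenvector gives the sensitivity d(lambda)/dp = (y^H (dA/dp) x) / (y^H x) for all parameters without perturbing each one separately.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 6
    A0 = rng.normal(size=(n, n))
    dA = rng.normal(size=(n, n))                   # direction of a parameter perturbation

    vals, vecs = np.linalg.eig(A0)
    k = np.argmax(vals.real)                       # track the least-stable (rightmost) mode
    lam, x = vals[k], vecs[:, k]

    # adjoint (left) eigenvector: eigenvector of A^H whose eigenvalue is conj(lam)
    vals_h, vecs_h = np.linalg.eig(A0.conj().T)
    y = vecs_h[:, np.argmin(np.abs(vals_h - np.conj(lam)))]

    # adjoint-based sensitivity of lam to the scalar parameter p in A(p) = A0 + p * dA
    dlam_dp = (y.conj() @ dA @ x) / (y.conj() @ x)

    # finite-difference check: perturb A, then follow the eigenvalue closest to lam
    eps = 1e-6
    vals_eps = np.linalg.eigvals(A0 + eps * dA)
    lam_eps = vals_eps[np.argmin(np.abs(vals_eps - lam))]
    print(dlam_dp, (lam_eps - lam) / eps)          # the two estimates should agree closely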

  2. Multi-point Adjoint-Based Design of Tilt-Rotors in a Noninertial Reference Frame

    NASA Technical Reports Server (NTRS)

    Jones, William T.; Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Acree, Cecil W.

    2014-01-01

    Optimization of tilt-rotor systems requires the consideration of performance at multiple design points. In the current study, an adjoint-based optimization of a tilt-rotor blade is considered. The optimization seeks to simultaneously maximize the rotorcraft figure of merit in hover and the propulsive efficiency in airplane-mode for a tilt-rotor system. The design is subject to minimum thrust constraints imposed at each design point. The rotor flowfields at each design point are cast as steady-state problems in a noninertial reference frame. Geometric design variables used in the study to control blade shape include: thickness, camber, twist, and taper represented by as many as 123 separate design variables. Performance weighting of each operational mode is considered in the formulation of the composite objective function, and a build up of increasing geometric degrees of freedom is used to isolate the impact of selected design variables. In all cases considered, the resulting designs successfully increase both the hover figure of merit and the airplane-mode propulsive efficiency for a rotor designed with classical techniques.

  3. Improved forward wave propagation and adjoint-based sensitivity kernel calculations using a numerically stable finite-element PML

    NASA Astrophysics Data System (ADS)

    Xie, Zhinan; Komatitsch, Dimitri; Martin, Roland; Matzen, René

    2014-09-01

    In recent years, the application of time-domain adjoint methods to improve large, complex underground tomographic models at the regional scale has led to new challenges for the numerical simulation of forward or adjoint elastic wave propagation problems. An important challenge is to design an efficient infinite-domain truncation method suitable for accurately truncating an infinite domain governed by the second-order elastic wave equation written in displacement and computed based on a finite-element (FE) method. In this paper, we make several steps towards this goal. First, we make the 2-D convolution formulation of the complex-frequency-shifted unsplit-field perfectly matched layer (CFS-UPML) derived in previous work more flexible by providing a new treatment to analytically remove singular parameters in the formulation. We also extend this new formulation to 3-D. Furthermore, we derive the auxiliary differential equation (ADE) form of CFS-UPML, which allows for extension to higher order time schemes and is easier to implement. Secondly, we rigorously derive the CFS-UPML formulation for time-domain adjoint elastic wave problems, which to our knowledge has never been done before. Thirdly, in the case of classical low-order FE methods, we show numerically that we achieve long-time stability for both forward and adjoint problems both for the convolution and the ADE formulations. In the case of higher order Legendre spectral-element methods, we show that weak numerical instabilities can appear in both formulations, in particular if very small mesh elements are present inside the absorbing layer, but we explain how these instabilities can be delayed as much as needed by using a stretching factor to reach numerical stability in practice for applications. Fourthly, in the case of adjoint problems with perfectly matched absorbing layers we introduce a computationally efficient boundary storage strategy by saving information along the interface between the CFS-UPML and

  4. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near the optimum for many applications; however, in some cases, they can become trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).

  5. Aerosol Health Impact Source Attribution Studies with the CMAQ Adjoint Air Quality Model

    NASA Astrophysics Data System (ADS)

    Turner, M. D.

    Fine particulate matter (PM2.5) is an air pollutant consisting of a mixture of solid and liquid particles suspended in the atmosphere. Knowledge of the sources and distributions of PM2.5 is important for many reasons, two of which are that PM2.5 has an adverse effect on human health and also an effect on climate change. Recent studies have suggested that health benefits resulting from a unit decrease in black carbon (BC) are four to nine times larger than benefits resulting from an equivalent change in PM2.5 mass. The goal of this thesis is to quantify the role of emissions from different sectors and different locations in governing the total health impacts, risk, and maximum individual risk of exposure to BC both nationally and regionally in the US. We develop and use the CMAQ adjoint model to quantify the role of emissions from all modeled sectors, times, and locations on premature deaths attributed to exposure to BC. From a national analysis, we find that damages resulting from anthropogenic emissions of BC are strongly correlated with population and premature death. However, we find little correlation between damages and emission magnitude, suggesting that controls on the largest emissions may not be the most efficient means of reducing damages resulting from BC emissions. Rather, the best proxy for locations with damaging BC emissions is locations where premature deaths occur. Onroad diesel and nonroad vehicle emissions are the largest contributors to premature deaths attributed to exposure to BC, while onroad gasoline emissions cause the highest deaths per amount emitted. Additionally, emissions in fall and winter contribute to more premature deaths (and more per amount emitted) than emissions in spring and summer. From a regional analysis, we find that emissions from outside each of six urban areas account for 7% to 27% of the premature deaths attributed to exposure to BC within the region. Within the region encompassing New York City and Philadelphia

  6. Comparison of Observation Impacts in Two Forecast Systems using Adjoint Methods

    NASA Technical Reports Server (NTRS)

    Gelaro, Ronald; Langland, Rolf; Todling, Ricardo

    2009-01-01

    An experiment is being conducted to compare directly the impact of all assimilated observations on short-range forecast errors in different operational forecast systems. We use the adjoint-based method developed by Langland and Baker (2004), which allows these impacts to be efficiently calculated. This presentation describes preliminary results for a "baseline" set of observations, including both satellite radiances and conventional observations, used by the Navy/NOGAPS and NASA/GEOS-5 forecast systems for the month of January 2007. In each system, about 65% of the total reduction in 24-h forecast error is provided by satellite observations, although the impact of rawinsonde, aircraft, land, and ship-based observations remains significant. Only a small majority (50-55%) of all observations assimilated improves the forecast, while the rest degrade it. It is found that most of the total forecast error reduction comes from observations with moderate-size innovations providing small to moderate impacts, not from outliers with very large positive or negative innovations. In a global context, the relative impacts of the major observation types are fairly similar in each system, although regional differences in observation impact can be significant. Of particular interest is the fact that while satellite radiances have a large positive impact overall, they degrade the forecast in certain locations common to both systems, especially over land and ice surfaces. Ongoing comparisons of this type, with results expected from other operational centers, should lead to more robust conclusions about the impacts of the various components of the observing system as well as about the strengths and weaknesses of the methodologies used to assimilate them.
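
    Schematically, and omitting the higher-order refinements used operationally, the adjoint-based impact measure of Langland and Baker attributes the change in a forecast-error norm e to the assimilated innovations through an inner product:

        \delta e \approx \langle x_a - x_b,\ \nabla_x e \rangle = \langle \mathbf{K}\,\delta y,\ \nabla_x e \rangle = \langle \delta y,\ \mathbf{K}^{T}\nabla_x e \rangle, \qquad \delta y = y - H(x_b),

    where \mathbf{K} is the gain matrix of the assimilation system and \nabla_x e the adjoint-derived sensitivity of the forecast-error measure to the initial conditions; summing components of the last inner product over subsets of observations gives the per-instrument impacts quoted above.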

  7. The adjoint method of data assimilation used operationally for shelf circulation

    NASA Astrophysics Data System (ADS)

    Griffin, David A.; Thompson, Keith R.

    1996-02-01

    A real-time shelf circulation model with data assimilation has been successfully used, possibly for the first time, on the outer Nova Scotian Shelf. The adjoint method was used to infer the time histories of flows across the four open boundaries of a 60 km × 60 km shallow-water equation model of Western Bank. The aim was to hindcast and nowcast currents over the bank so that a patch of water (initially 15 km in diameter) could be resampled over a 3-week period as part of a study of the early life history of Atlantic cod. Observations available in near real time for assimilation were from 14 drifting buoys, 2 telemetering moored current meters, the ship's acoustic Doppler current profiler and the local wind. For the postcruise hindcasts presented here, data from two bottom pressure gauges and two more current meters are also available. The experiment was successful, and the patch was sampled over a 19-day period that included two intense storms. In this paper we (1) document the model and how the data are assimilated, (2) present and discuss the observations, (3) demonstrate that the interpolative skill of the model exceeds that of simpler schemes that use just the current velocity data, and (4) provide examples of how particle tracking with the model enables asynoptically acquired data to be displayed as synoptic maps, greatly facilitating both underway cruise planning and postcruise data analysis. An interesting feature of the circulation on the bank was a nearly stationary eddy atop the bank crest. Larvae within the eddy were retained on the bank in a favorable environment until the onset of the storms. The variable integrity of the eddy may contribute to fluctuations of year-class success.

  8. Source identification of benzene emissions in Texas City using an adjoint neighborhood scale transport model

    NASA Astrophysics Data System (ADS)

    Guven, B.; Olaguer, E. P.; Herndon, S. C.; Kolb, C. E.; Cuclis, A.

    2012-12-01

    During the "Formaldehyde and Olefins from Large Industrial Sources" (FLAIR) study in 2009, the Aerodyne Research Inc. (ARI) mobile laboratory performed real-time in situ measurements of VOCs, NOx and HCHO in Texas City, TX on May 7, 2009 from 11 am to 3 pm. This high resolution dataset collected in a predominantly industrial area provides an ideal test bed for advanced source attribution. Our goal was to identify and quantify emission sources within the largest facility in Texas City most likely responsible for measured benzene concentrations. For this purpose, fine horizontal resolution (200 m x 200 m) 4D variational (4Dvar) inverse modeling was performed by running the HARC air quality transport model in adjoint mode based on ambient concentrations measured by the mobile laboratory. The simulations were conducted with a horizontal domain size of 4 km x 4 km for a four-hour period (11 am to 3 pm). Potential emission unit locations within the facility were specified using a high spatial resolution digital model of the largest industrial complex in the area. The HARC model was used to infer benzene emission rates from all potential source locations that would account for the benzene concentrations measured by the Aerodyne mobile laboratory in the vicinity of the facility. A Positive Matrix Factorization receptor model was also applied to the concentrations of other compounds measured by the mobile lab to support the source attribution by the inverse model. Although previous studies attributed measured benzene concentrations during the same time period to a cooling tower unit at the industrial complex, this study found that some of the flare units in the facility were also associated with the elevated benzene concentrations. The emissions of some of these flare units were found to be greater than reported in emission inventories, by up to two orders of magnitude.
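
    As a generic illustration of this kind of adjoint-based source inversion (the symbols below are placeholders, not the specific HARC configuration), the emission field E is obtained by minimizing a measurement-misfit functional whose gradient with respect to all candidate source locations is supplied by a single adjoint model run per iteration:

        J(E) = \tfrac{1}{2}\sum_{k}\frac{\bigl(c_k(E) - c_k^{\mathrm{obs}}\bigr)^{2}}{\sigma_k^{2}} + \tfrac{\gamma}{2}\,\lVert E - E_0 \rVert^{2},

    where c_k(E) are modelled concentrations at the mobile-laboratory sampling points and times, \sigma_k are observation-error estimates, and E_0 and \gamma are an optional prior inventory and regularization weight.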

  9. Accelerating forward and adjoint simulations of seismic wave propagation on large GPU-clusters

    NASA Astrophysics Data System (ADS)

    Peter, D. B.; Rietmann, M.; Charles, J.; Messmer, P.; Komatitsch, D.; Schenk, O.; Tromp, J.

    2012-12-01

    In seismic tomography, waveform inversions require accurate simulations of seismic wave propagation in complex media. The current versions of our spectral-element method (SEM) packages, the local-scale code SPECFEM3D and the global-scale code SPECFEM3D_GLOBE, are widely used open-source community codes which simulate seismic wave propagation for local-, regional- and global-scale applications. These numerical simulations compute highly accurate seismic wavefields, accounting for fully 3D Earth models. However, code performance often governs whether seismic inversions become feasible or remain elusive. We report here on extending these high-order finite-element packages to further exploit graphics processing units (GPUs) and perform numerical simulations of seismic wave propagation on large GPU clusters. These enhanced packages can be readily run either on multi-core CPUs only or together with many-core GPU acceleration devices. One of the challenges in parallelizing finite-element codes is the potential for race conditions during the assembly phase. We therefore investigated different methods such as mesh coloring or atomic updates on the GPU. In order to achieve strong scaling, we needed to ensure good overlap of data motion at all levels, including internode and host-accelerator transfers. These new MPI/CUDA solvers exhibit excellent scalability and achieve speedup on a node-to-node basis over the carefully tuned equivalent multi-core MPI solver. We present case studies run on a Cray XK6 GPU architecture with up to 896 nodes to demonstrate the performance of both the forward and adjoint functionality of the code packages. Running simulations on such dedicated GPU clusters further reduces computation times and pushes seismic inversions into a new, higher frequency realm.
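
    The race condition mentioned above arises in the gather-scatter assembly step, where neighbouring elements accumulate contributions into shared global nodes. The NumPy sketch below (illustrative only, not SPECFEM source code) contrasts a reference scatter-add with a coloring strategy in which no two elements of the same color share a node, so each color can be processed in parallel without atomic updates.

        import numpy as np

        def assemble_serial(ibool, contrib, n_glob):
            """Reference assembly: accumulate elemental contributions into global nodes."""
            glob = np.zeros(n_glob)
            np.add.at(glob, ibool.ravel(), contrib.ravel())  # unbuffered scatter-add
            return glob

        def assemble_by_color(ibool, contrib, n_glob, colors):
            """Coloring-based assembly: within one color no global node is shared,
            so the scatter is race-free and could be dispatched to a GPU without atomics."""
            glob = np.zeros(n_glob)
            for c in np.unique(colors):
                elems = np.where(colors == c)[0]
                glob[ibool[elems].ravel()] += contrib[elems].ravel()
            return glob

        # Tiny 1-D example: three two-node elements sharing end nodes, colored alternately.
        ibool = np.array([[0, 1], [1, 2], [2, 3]])   # element-to-global node mapping
        contrib = np.ones_like(ibool, dtype=float)   # dummy elemental contributions
        colors = np.array([0, 1, 0])                 # neighbouring elements get different colors
        assert np.allclose(assemble_serial(ibool, contrib, 4),
                           assemble_by_color(ibool, contrib, 4, colors))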

  10. Forward and adjoint spectral-element simulations of seismic wave propagation using hardware accelerators

    NASA Astrophysics Data System (ADS)

    Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri

    2015-04-01

    Improving the resolution of tomographic images is crucial for answering important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, in which seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing together with advances in multi-core central processing units (CPUs) can greatly accelerate scientific applications. There are two main choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards while OpenCL was adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL languages, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware configurations.

  11. Wavelet-based multiscale adjoint waveform-difference tomography using body and surface waves

    NASA Astrophysics Data System (ADS)

    Yuan, Y. O.; Simons, F. J.; Bozdag, E.

    2014-12-01

    We present a multi-scale scheme for full elastic waveform-difference inversion. Using a wavelet transform proves to be key to mitigating cycle-skipping effects. We start with coarse representations of the seismogram to correct a large-scale background model, and subsequently explain the residuals in the fine scales of the seismogram to map more complex heterogeneities. We have previously applied the multi-scale approach successfully to body waves generated in a standard model from the exploration industry: a modified two-dimensional elastic Marmousi model. With this model we explored the optimal choice of wavelet family, number of vanishing moments and decomposition depth. For this presentation we explore the sensitivity of surface waves in waveform-difference tomography. The incorporation of surface waves is rife with cycle-skipping problems compared to inversions that consider body waves only. We implemented an envelope-based objective function probed via a multi-scale wavelet analysis to measure the distance between predicted and target surface-wave waveforms in a synthetic model of heterogeneous near-surface structure. Our proposed method successfully purges the local minima present in the waveform-difference misfit surface. A shallow elastic model 100 m deep is used to test the surface-wave inversion scheme. We also analyzed the sensitivities of surface waves and body waves in full waveform inversions, as well as the effects of incorrect density information on elastic parameter inversions. Based on those numerical experiments, we ultimately formalized a flexible scheme to consider both body and surface waves in adjoint tomography. While our early examples are constructed from exploration-style settings, our procedure will be very valuable for the study of global network data.
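
    The sketch below (using PyWavelets, with illustrative parameter choices rather than the ones tuned in the study) shows the basic idea of a coarse-to-fine waveform-difference measurement: the residual is evaluated on progressively finer wavelet reconstructions of the observed and synthetic seismograms.

        import numpy as np
        import pywt

        def multiscale_misfits(obs, syn, wavelet="db6", max_level=5):
            """L2 waveform-difference misfit at increasingly fine wavelet scales
            (generic sketch of the multiscale idea, not the authors' exact scheme)."""
            def coarse(trace, keep):
                coeffs = pywt.wavedec(trace, wavelet, level=max_level)
                # Keep the approximation and the `keep` coarsest detail bands, zero the rest.
                coeffs = [c if i <= keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
                return pywt.waverec(coeffs, wavelet)[: len(trace)]
            misfits = []
            for keep in range(1, max_level + 1):
                r = coarse(syn, keep) - coarse(obs, keep)
                misfits.append(0.5 * float(np.dot(r, r)))
            return misfits  # coarse-to-fine: minimize these successively

        # Toy usage with two slightly time-shifted wavelets.
        t = np.linspace(0.0, 4.0, 1024)
        obs = np.sin(2 * np.pi * 2.0 * t) * np.exp(-4.0 * (t - 2.0) ** 2)
        syn = np.sin(2 * np.pi * 2.0 * (t - 0.05)) * np.exp(-4.0 * (t - 2.05) ** 2)
        print(multiscale_misfits(obs, syn))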

  12. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
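
    As a generic illustration of the simulated-annealing end of that algorithmic range (not the pedigree likelihood actually optimized in the paper), the sketch below anneals over binary phase assignments with a user-supplied scoring function; the score and the move set are placeholders.

        import math
        import random

        def anneal_phase(n_loci, score, iters=5000, t0=1.0, cooling=0.999, seed=0):
            """Generic simulated annealing over binary phase vectors.
            `score(state)` returns a value to minimize (e.g. an implied recombination count)."""
            rng = random.Random(seed)
            state = [rng.randint(0, 1) for _ in range(n_loci)]   # initial phase assignment
            cur = score(state)
            best, best_score = list(state), cur
            temp = t0
            for _ in range(iters):
                i = rng.randrange(n_loci)
                state[i] ^= 1                                    # propose flipping one locus
                new = score(state)
                # Accept downhill moves always, uphill moves with Boltzmann probability.
                if new <= cur or rng.random() < math.exp(-(new - cur) / max(temp, 1e-12)):
                    cur = new
                    if cur < best_score:
                        best, best_score = list(state), cur
                else:
                    state[i] ^= 1                                # reject: undo the flip
                temp *= cooling
            return best, best_score

        # Toy usage: favour phase vectors with few switches between adjacent loci.
        print(anneal_phase(20, lambda s: sum(a != b for a, b in zip(s, s[1:]))))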

  13. A user's manual for MASH 1.0: A Monte Carlo Adjoint Shielding Code System

    SciTech Connect

    Johnson, J.O.

    1992-03-01

    The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.
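
    Schematically (omitting angular weighting details), the coupling step described above folds the forward fluence on the coupling surface with the adjoint "dose importance" function; in generic notation,

        R = \int_{S}\int_{E}\int_{\Omega} \phi(\mathbf{r}_s, E, \boldsymbol{\Omega})\, \phi^{\dagger}(\mathbf{r}_s, E, \boldsymbol{\Omega})\; \mathrm{d}\Omega\, \mathrm{d}E\, \mathrm{d}S,

    where \phi is the discrete-ordinates fluence on the coupling surface S, \phi^{\dagger} the Monte Carlo adjoint (importance) function for the chosen detector response, and R the resulting dose response; the notation is generic rather than MASH-specific.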

  14. Application of Adjoint Method and Spectral-Element Method to Tomographic Inversion of Regional Seismological Structure Beneath Japanese Islands

    NASA Astrophysics Data System (ADS)

    Tsuboi, S.; Miyoshi, T.; Obayashi, M.; Tono, Y.; Ando, K.

    2014-12-01

    Recent progress in large-scale computing using waveform modeling techniques and high-performance computing facilities has demonstrated the possibility of performing full-waveform inversion of three-dimensional (3D) seismological structure inside the Earth. We apply the adjoint method (Liu and Tromp, 2006) to obtain 3D structure beneath the Japanese Islands. First, we implemented the Spectral-Element Method on the K computer in Kobe, Japan. We optimized SPECFEM3D_GLOBE (Komatitsch and Tromp, 2002) using OpenMP so that the code fits the hybrid architecture of the K computer. We could then use 82,134 nodes of the K computer (657,072 cores) to compute synthetic waveforms with about 1 s accuracy for a realistic 3D Earth model, achieving a performance of 1.2 PFLOPS. We use this optimized SPECFEM3D_GLOBE code, take one chunk around the Japanese Islands from the global mesh, and compute synthetic seismograms with an accuracy of about 10 s. We use the GAP-P2 mantle tomography model (Obayashi et al., 2009) as the initial 3D model and use as many broadband seismic stations available in this region as possible in the inversion. We then use time windows for body waves and surface waves to compute adjoint sources and calculate adjoint kernels for seismic structure. We have performed several iterations and obtained an improved 3D structure beneath the Japanese Islands. The result demonstrates that waveform misfits between observed and theoretical seismograms improve as the iterations proceed. We are now preparing to use much shorter periods in our synthetic waveform computations to obtain seismic structure at the basin scale, for example for the Kanto basin, where there is a dense seismic network and high seismic activity. Acknowledgements: This research was partly supported by the MEXT Strategic Program for Innovative Research. We used F-net seismograms of the National Research Institute for Earth Science and Disaster Prevention.

  15. New Results in Astrodynamics Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.

    1998-01-01

    Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.
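
    For readers unfamiliar with the Pareto variant mentioned above, the core of such a genetic algorithm is the dominance test used during selection; a minimal sketch follows (generic code, not the authors' implementation, and the objective names are hypothetical).

        def dominates(a, b):
            """True if objective vector `a` Pareto-dominates `b` (all objectives minimized)."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def pareto_front(population, objectives):
            """Return the non-dominated members of `population`; `objectives(ind)` maps an
            individual to a tuple such as (flight_time, propellant_mass) for a low-thrust leg."""
            scored = [(ind, objectives(ind)) for ind in population]
            return [ind for ind, f in scored
                    if not any(dominates(g, f) for _, g in scored if g is not f)]

        # Toy usage: three candidate designs scored on two objectives.
        cands = [(1.0, 5.0), (2.0, 3.0), (2.5, 3.5)]
        print(pareto_front(cands, objectives=lambda ind: ind))  # [(1.0, 5.0), (2.0, 3.0)]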

  16. Global Modeling and Data Assimilation. Volume 11; Documentation of the Tangent Linear and Adjoint Models of the Relaxed Arakawa-Schubert Moisture Parameterization of the NASA GEOS-1 GCM; 5.2

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Yang, Wei-Yu; Todling, Ricardo; Navon, I. Michael

    1997-01-01

    A detailed description of the development of the tangent linear model (TLM) and its adjoint model of the Relaxed Arakawa-Schubert moisture parameterization package used in the NASA GEOS-1 C-Grid GCM (Version 5.2) is presented. The notational conventions used in the TLM and its adjoint codes are described in detail.
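
    Documentation of this kind typically rests on two standard correctness checks, which can be stated compactly (generic notation; the report's own notation may differ): the Taylor test for the TLM and the dot-product test for the adjoint,

        \frac{\lVert M(x + \alpha\,\delta x) - M(x) \rVert}{\lVert \alpha\,\mathbf{M}\,\delta x \rVert} \longrightarrow 1 \ \ (\alpha \to 0), \qquad \langle \mathbf{M}\,\delta x,\ y \rangle = \langle \delta x,\ \mathbf{M}^{T} y \rangle \ \ \text{for all } \delta x,\ y,

    where M is the nonlinear parameterization, \mathbf{M} its tangent linear operator and \mathbf{M}^{T} its adjoint; the second identity must hold to near machine precision for a correctly coded adjoint.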

  17. Seismic structure of the European upper mantle based on adjoint tomography

    NASA Astrophysics Data System (ADS)

    Zhu, Hejun; Bozdağ, Ebru; Tromp, Jeroen

    2015-04-01

    We use adjoint tomography to iteratively determine seismic models of the crust and upper mantle beneath the European continent and the North Atlantic Ocean. Three-component seismograms from 190 earthquakes recorded by 745 seismographic stations are employed in the inversion. Crustal model EPcrust combined with mantle model S362ANI comprise the 3-D starting model, EU00. Before the structural inversion, earthquake source parameters, for example, centroid moment tensors and locations, are reinverted based on global 3-D Green's functions and Fréchet derivatives. This study consists of three stages. In stage one, frequency-dependent phase differences between observed and simulated seismograms are used to constrain radially anisotropic wave speed variations. In stage two, frequency-dependent phase and amplitude measurements are combined to simultaneously constrain elastic wave speeds and anelastic attenuation. In these two stages, long-period surface waves and short-period body waves are combined to simultaneously constrain shallow and deep structures. In stage three, frequency-dependent phase and amplitude anomalies of three-component surface waves are used to simultaneously constrain radial and azimuthal anisotropy. After this three-stage inversion, we obtain a new seismic model of the European crust and upper mantle, named EU60. Improvements in misfits and histograms in both phase and amplitude help us to validate this three-stage inversion strategy. Long-wavelength elastic wave speed variations in model EU60 compare favourably with previous body- and surface wave tomographic models. Some hitherto unidentified features, such as the Adria microplate, naturally emerge from the smooth starting model. Subducting slabs, slab detachments, ancient suture zones, continental rifts and backarc basins are well resolved in model EU60. We find an anticorrelation between shear wave speed and anelastic attenuation at depths < 100 km. At greater depths, this anticorrelation becomes

  18. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
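
    For context, the canonical two-stage stochastic linear program targeted by Benders-type (L-shaped) decomposition can be written in the standard textbook form (not specific to the hydroelectric scheduling problems mentioned above):

        \min_{x}\ c^{T}x + \mathbb{E}_{\xi}\bigl[\,Q(x,\xi)\,\bigr] \ \ \text{s.t.}\ \ Ax = b,\ x \ge 0, \qquad Q(x,\xi) = \min_{y}\ \bigl\{\, q(\xi)^{T}y \ :\ W y = h(\xi) - T(\xi)\,x,\ y \ge 0 \,\bigr\},

    where the expectation is over the random scenario \xi; decomposition algorithms build outer (Benders) cuts that successively approximate the recourse function Q, and the sampling-based variants estimate the expectation and its bounds from finite samples, which is where the stopping-rule theory enters.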

  19. Spectral-Element Seismic Wave Propagation Codes for both Forward Modeling in Complex Media and Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.

    2015-12-01

    We present both SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, representing high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale applications. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.

  20. High-resolution Adjoint Tomography of the Eastern Venezuelan Crust using Empirical Green's Function Waveforms from Ambient Noise Interferometry

    NASA Astrophysics Data System (ADS)

    Chen, M.; Masy, J.; Niu, F.; Levander, A.

    2014-12-01

    We present a high-resolution 3D crustal model of Eastern Venezuela from a full waveform inversion adjoint tomography technique, based on the spectral-element method. Empirical Green's functions (EGFs) of Rayleigh waves from ambient noise interferometry serve as the observed waveforms. Rayleigh wave signals in the period range of 10 - 50 s were extracted by cross-correlations of 48 stations from both the Venezuelan national seismic network and the BOLIVAR project array. The synthetic Green's functions (SGFs) are calculated with an initial regional 3D shear wave model determined from ballistic Rayleigh wave tomography from earthquake records with periods longer than 20 s. The frequency-dependent traveltime misfits between the SGFs and EGFs are minimized iteratively using adjoint tomography to refine the 3D crustal structure [Chen et al., 2014]. The final 3D model shows lateral shear wave velocity variations that are well correlated with the geological terranes within the continental interior. In particular, the final model reveals low velocities distributed along the axis of the Espino Graben, indicating that the graben has a substantially different crustal structure than the rest of the Eastern Venezuela Basin. We also observe high shear velocities in the lower crust beneath some of the subterranes of the Proterozoic-Archean Guayana Shield.
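
    To illustrate the first processing step described above (not the full workflow of the study), the sketch below estimates an empirical Green's function between two stations by cross-correlating and stacking windows of ambient-noise records; window length, normalization and the toy data are placeholder choices.

        import numpy as np

        def egf_from_noise(trace_a, trace_b, fs, win_s=600.0, max_lag_s=120.0):
            """Stacked cross-correlation of two ambient-noise traces (illustrative sketch)."""
            win = int(win_s * fs)
            max_lag = int(max_lag_s * fs)
            n_win = min(len(trace_a), len(trace_b)) // win
            stack = np.zeros(2 * max_lag + 1)
            for k in range(n_win):
                a = trace_a[k * win:(k + 1) * win]
                b = trace_b[k * win:(k + 1) * win]
                # One-bit normalization suppresses earthquakes and instrument glitches.
                a, b = np.sign(a - a.mean()), np.sign(b - b.mean())
                cc = np.correlate(a, b, mode="full")      # lags from -(win-1) to +(win-1)
                mid = len(cc) // 2
                stack += cc[mid - max_lag: mid + max_lag + 1]
            lags = np.arange(-max_lag, max_lag + 1) / fs
            return lags, stack / max(n_win, 1)

        # Toy usage: two noisy traces sharing a common component delayed by ~3 s.
        fs, n = 10.0, 36000                               # 10 Hz sampling, 1 hour of noise
        rng = np.random.default_rng(0)
        common = rng.standard_normal(n)
        tr_a = common + 0.5 * rng.standard_normal(n)
        tr_b = np.roll(common, int(3 * fs)) + 0.5 * rng.standard_normal(n)
        lags, egf = egf_from_noise(tr_a, tr_b, fs)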