Sample records for adjoint shielding code

  1. Adjoint acceleration of Monte Carlo simulations using TORT/MCNP coupling approach: a case study on the shielding improvement for the cyclotron room of the Buddhist Tzu Chi General Hospital.

    PubMed

    Sheu, R J; Sheu, R D; Jiang, S H; Kao, C H

    2005-01-01

    Full-scale Monte Carlo simulations of the cyclotron room of the Buddhist Tzu Chi General Hospital were carried out to improve the original inadequate maze design. Variance reduction techniques were indispensable in this study to make simulation of a variety of modified shielding configurations practical. The TORT/MCNP manual coupling approach based on the Consistent Adjoint Driven Importance Sampling (CADIS) methodology has been used throughout this study. CADIS applies source and transport biasing in a consistent manner. With this method, the computational efficiency was increased by more than two orders of magnitude and the statistical convergence was also improved compared with the unbiased Monte Carlo run. This paper describes the shielding problem encountered, the procedure for coupling the TORT and MCNP codes to accelerate the calculations, and the calculation results for the original and improved shielding designs. In order to verify the calculation results and seek additional accelerations, sensitivity studies on the space-dependent and energy-dependent parameters were also conducted.
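
    For reference, the consistency between source and transport biasing that CADIS exploits can be stated in its standard generic form (a sketch of the general methodology, not the specific TORT/MCNP implementation of the paper): with φ†(P) the deterministic adjoint (importance) function for the detector response and q(P) the true source over phase space P,

      R = \int \phi^{\dagger}(P)\, q(P)\, dP, \qquad
      \hat{q}(P) = \frac{\phi^{\dagger}(P)\, q(P)}{R}, \qquad
      \bar{w}(P) = \frac{R}{\phi^{\dagger}(P)},

    so that particles sampled from the biased source q̂ are born with statistical weight w̄ already inside the weight window of their phase-space cell, which is what makes the source and transport biasing consistent.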

  2. A User’s Manual for MASH 1.0 - A Monte Carlo Adjoint Shielding Code System

    DTIC Science & Technology

    1992-03-01

    ORNL/TM-11778, Engineering Physics and Mathematics Division, Oak Ridge National Laboratory, managed by Martin Marietta Energy Systems, Inc., for the U.S. Department of Energy under Contract No. DE-AC05-84OR21400. The report has been reproduced directly from the best available copy and is available to DOE and DOE contractors from the Office of Scientific and Technical Information.

  3. Preliminary Results from the Application of Automated Adjoint Code Generation to CFL3D

    NASA Technical Reports Server (NTRS)

    Carle, Alan; Fagan, Mike; Green, Lawrence L.

    1998-01-01

    This report describes preliminary results obtained using an automated adjoint code generator for Fortran to augment a widely-used computational fluid dynamics flow solver to compute derivatives. These preliminary results with this augmented code suggest that, even in its infancy, the automated adjoint code generator can accurately and efficiently deliver derivatives for use in transonic Euler-based aerodynamic shape optimization problems with hundreds to thousands of independent design variables.

  4. Skyshine analysis using energy and angular dependent dose-contribution fluxes obtained from air-over-ground adjoint calculation.

    PubMed

    Uematsu, Mikio; Kurosawa, Masahiko

    2005-01-01

    A generalised and convenient skyshine dose analysis method has been developed based on a forward-adjoint folding technique. In the method, the air penetration data were prepared by performing an adjoint DOT3.5 calculation with cylindrical air-over-ground geometry having an adjoint point source (importance of unit flux to dose rate at the detection point) in the centre. The accuracy of the present method was verified by comparison with a DOT3.5 forward calculation. The adjoint flux data can be used as generalised radiation skyshine data for all sorts of nuclear facilities. Moreover, the present method supplies a wealth of energy- and angle-dependent contribution flux data, which will be useful for detailed shielding design of facilities.
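
    A minimal sketch of the folding step described above: given a precomputed table of adjoint fluxes (importances) on an energy-angle grid and a facility-specific skyward leakage source binned on the same grid, the skyshine dose is an inner product. The grid sizes and array contents below are placeholders; the real adjoint data would come from the DOT3.5 air-over-ground calculation.

      import numpy as np

      # Hypothetical grid: 20 energy groups x 12 polar-angle bins.
      n_groups, n_angles = 20, 12

      # phi_dag[g, m]: dose rate at the detection point per unit source strength
      # emitted in group g and angle bin m (from the adjoint air-over-ground run).
      phi_dag = np.random.rand(n_groups, n_angles)          # placeholder data

      # S[g, m]: facility leakage source (photons/s) emitted skyward in (g, m).
      S = np.random.rand(n_groups, n_angles) * 1.0e9        # placeholder data

      # Forward-adjoint folding: dose = sum over g, m of phi_dag[g, m] * S[g, m].
      dose_rate = np.sum(phi_dag * S)
      print(f"skyshine dose rate at detector: {dose_rate:.3e} (arbitrary units)")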

  5. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies which each suffer from their own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation process as a sequence of discrete equations which are assembled and solved. It is the coupling of the respective abstractions employed by libadjoint and the FEniCS project which produces the adjoint model automatically, without further intervention from the model developer. This presentation will demonstrate this new technology through linear and non-linear shallow water test cases. The exceptionally simple model syntax will be highlighted and the correctness of the resulting adjoint simulations will be demonstrated using rigorous convergence tests.
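
    To make the "sequence of assembled and solved equations" abstraction concrete, here is a small self-contained sketch (not libadjoint or FEniCS code; all names are illustrative): a forward model that assembles and solves A(m) u = b, a functional J(u), and the corresponding discrete adjoint solve A(m)^T λ = ∂J/∂u from which the gradient dJ/dm follows, checked against a finite difference.

      import numpy as np

      def assemble(m):
          # Toy "assembly": an SPD operator and load depending on a control m.
          A = np.array([[2.0 + m, -1.0], [-1.0, 2.0]])
          b = np.array([1.0, 0.0])
          return A, b

      def dA_dm(m):
          return np.array([[1.0, 0.0], [0.0, 0.0]])   # derivative of A w.r.t. m

      m = 0.5
      A, b = assemble(m)
      u = np.linalg.solve(A, b)                        # forward solve
      J = 0.5 * u @ u                                  # functional of interest

      lam = np.linalg.solve(A.T, u)                    # adjoint solve, rhs = dJ/du
      dJdm = -lam @ (dA_dm(m) @ u)                     # gradient via the adjoint

      eps = 1e-6                                       # finite-difference check
      A2, b2 = assemble(m + eps)
      u2 = np.linalg.solve(A2, b2)
      print(dJdm, (0.5 * u2 @ u2 - J) / eps)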

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haghighat, A.; Sjoden, G.E.; Wagner, J.C.

    In the past 10 yr, the Penn State Transport Theory Group (PSTTG) has concentrated its efforts on developing accurate and efficient particle transport codes to address increasing needs for efficient and accurate simulation of nuclear systems. The PSTTG's efforts have primarily focused on shielding applications that are generally treated using multigroup, multidimensional, discrete ordinates (Sn) deterministic and/or statistical Monte Carlo methods. The difficulty with the existing public codes is that they require significant (impractical) computation time for simulation of complex three-dimensional (3-D) problems. For the Sn codes, the large memory requirements are handled through the use of scratch files (i.e., read-from and write-to-disk), which significantly increases the necessary execution time. Further, the lack of flexible features and/or utilities for preparing input and processing output makes these codes difficult to use. The Monte Carlo method becomes impractical because variance reduction (VR) methods have to be used, and normally determination of the necessary parameters for the VR methods is very difficult and time consuming for a complex 3-D problem. For the deterministic method, the authors have developed the 3-D parallel PENTRAN (Parallel Environment Neutral-particle TRANsport) code system that, in addition to a parallel 3-D Sn solver, includes pre- and postprocessing utilities. PENTRAN provides for full phase-space decomposition, memory partitioning, and parallel input/output to provide the capability of solving large problems in a relatively short time. Besides having a modular parallel structure, PENTRAN has several unique new formulations and features that are necessary for achieving high parallel performance. For the Monte Carlo method, the major difficulty currently facing most users is the selection of an effective VR method and its associated parameters. For complex problems, generally, this process is very time consuming and may be complicated due to the possibility of biasing the results. In an attempt to eliminate this problem, the authors have developed the A3MCNP (automated adjoint-accelerated MCNP) code, which automatically prepares parameters for source and transport biasing within a weight-window VR approach based on the Sn adjoint function. A3MCNP prepares the necessary input files for performing multigroup, 3-D adjoint Sn calculations using TORT.
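
    A sketch of how a space-energy weight-window lower bound can be derived from a deterministic adjoint function, in the spirit of the CADIS-based approach the authors describe (illustrative only; A3MCNP's actual normalisation and file formats are not reproduced, and the arrays below are placeholders):

      import numpy as np

      # Hypothetical adjoint (importance) function on a coarse space-energy grid,
      # e.g. interpolated from a TORT adjoint Sn calculation.
      phi_dag = np.random.rand(30, 47) + 1e-3          # [mesh cell, energy group]

      # Source distribution and estimated detector response R = sum(phi_dag * q).
      q = np.zeros_like(phi_dag)
      q[0, :10] = 1.0                                  # placeholder source region
      R = np.sum(phi_dag * q)

      # CADIS-style target weights: important regions (large phi_dag) get low
      # target weights, so particles are split as they move toward the detector.
      w_target = R / phi_dag
      c_upper = 5.0                                    # window upper/lower ratio
      w_lower = 2.0 * w_target / (c_upper + 1.0)       # weight-window lower bounds
      print(w_lower.min(), w_lower.max())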

  7. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop limit is reached, or no further design improvement is possible due to active design variable bounds and/or constraints. The resulting shape parameters are then used by the grid generation code to define a new wing surface and computational grid. The lift-to-drag ratio and its gradient are computed for the new design by the automatically-generated adjoint codes. Several optimization iterations may be required to find an optimum wing shape. Results from two sample cases will be discussed. The reader should note that this work primarily represents a demonstration of use of automatically-generated adjoint code within an aerodynamic shape optimization. As such, little significance is placed upon the actual optimization results, relative to the method for obtaining the results.
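
    The forward/reverse distinction described above can be illustrated with a tiny hand-written example (not ADIFOR/ADJIFOR output): forward mode propagates one tangent through the chain rule per input, so its cost grows with the number of design variables, while reverse mode propagates adjoints backward once and recovers the whole gradient in a single sweep.

      import numpy as np

      def f(x):
          # Scalar objective of several inputs, standing in for lift/drag.
          return np.sin(x[0]) * x[1] + x[1] * x[2] ** 2

      def grad_forward_mode(x):
          # One sweep per input: propagate a tangent (dotted) value per operation.
          g = np.zeros(3)
          for i in range(3):
              a, b, c = x
              da, db, dc = (float(i == 0), float(i == 1), float(i == 2))
              t1, dt1 = np.sin(a), np.cos(a) * da
              g[i] = dt1 * b + t1 * db + db * c ** 2 + b * 2 * c * dc
          return g

      def grad_reverse_mode(x):
          # One backward sweep recovers the whole gradient (hand-written adjoint).
          a, b, c = x
          t1 = np.sin(a)                   # forward sweep, store intermediates
          ybar = 1.0                       # adjoint (seed) of the output
          t1bar = ybar * b
          bbar = ybar * (t1 + c ** 2)
          cbar = ybar * b * 2 * c
          abar = t1bar * np.cos(a)
          return np.array([abar, bbar, cbar])

      x = np.array([0.3, 1.7, -0.4])
      print(grad_forward_mode(x), grad_reverse_mode(x))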

  8. Adjoint of the global Eulerian-Lagrangian coupled atmospheric transport model (A-GELCA v1.0): development and validation

    NASA Astrophysics Data System (ADS)

    Belikov, Dmitry A.; Maksyutov, Shamil; Yaremchuk, Alexey; Ganshin, Alexander; Kaminski, Thomas; Blessing, Simon; Sasakawa, Motoki; Gomez-Pelaez, Angel J.; Starchenko, Alexander

    2016-02-01

    We present the development of the Adjoint of the Global Eulerian-Lagrangian Coupled Atmospheric (A-GELCA) model that consists of the National Institute for Environmental Studies (NIES) model as an Eulerian three-dimensional transport model (TM), and FLEXPART (FLEXible PARTicle dispersion model) as the Lagrangian Particle Dispersion Model (LPDM). The forward tangent linear and adjoint components of the Eulerian model were constructed directly from the original NIES TM code using an automatic differentiation tool known as TAF (Transformation of Algorithms in Fortran; http://www.FastOpt.com), with additional manual pre- and post-processing aimed at improving the transparency and clarity of the code and optimizing computational performance, including the use of MPI (Message Passing Interface). The Lagrangian component did not require any code modification, as LPDMs are self-adjoint and track a significant number of particles backward in time in order to calculate the sensitivity of the observations to the neighboring emission areas. The constructed Eulerian adjoint was coupled with the Lagrangian component at a time boundary in the global domain. The simulations presented in this work were performed using the A-GELCA model in forward and adjoint modes. The forward simulation shows that the coupled model improves reproduction of the seasonal cycle and short-term variability of CO2. Mean bias and standard deviation for five of the six Siberian sites considered decrease by roughly 1 ppm when using the coupled model. The adjoint of the Eulerian model was shown, through several numerical tests, to be very accurate (mismatches of around ±6e-14, i.e. within machine epsilon) compared to direct forward sensitivity calculations. The developed adjoint of the coupled model combines the flux conservation and stability of an Eulerian discrete adjoint formulation with the flexibility, accuracy, and high resolution of a Lagrangian backward trajectory formulation. A-GELCA will be incorporated into a variational inversion system designed to optimize surface fluxes of greenhouse gases.
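
    The "accurate to within machine epsilon" statement refers to the standard adjoint (dot-product) test, sketched generically below with a random matrix standing in for the tangent linear model; TAF-generated code is checked in the same way, with M replaced by the tangent linear model and M^T by the adjoint model.

      import numpy as np

      rng = np.random.default_rng(0)
      M = rng.standard_normal((40, 25))    # stand-in for the tangent linear model
      tlm = lambda dx: M @ dx              # tangent linear model
      adj = lambda dy: M.T @ dy            # adjoint model

      dx = rng.standard_normal(25)
      dy = rng.standard_normal(40)

      # Dot-product (adjoint) identity: <M dx, dy> == <dx, M^T dy>.
      lhs = np.dot(tlm(dx), dy)
      rhs = np.dot(dx, adj(dy))
      print(lhs, rhs, abs(lhs - rhs) / abs(lhs))   # relative mismatch ~ 1e-16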

  9. Space-time adaptive solution of inverse problems with the discrete adjoint method

    NASA Astrophysics Data System (ADS)

    Alexe, Mihai; Sandu, Adrian

    2014-08-01

    This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time dependent, adaptive grid, adaptive step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The space-time adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in accuracy of the discrete adjoint sensitivities may appear due to the inter-grid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided through a multi-level optimization strategy using discontinuous Galerkin (DG) hp-adaptive discretizations paired with Runge-Kutta (RK) time integration. We extend the concept of dual (adjoint) consistency to space-time RK-DG discretizations, which are then shown to be well suited for the adaptive solution of time-dependent inverse problems. Furthermore, we prove that DG mesh transfer operators on general meshes are also dual consistent. This allows the simultaneous derivation of the discrete adjoint for both the numerical solver and the mesh transfer logic with an automatic code generation mechanism such as algorithmic differentiation (AD), potentially speeding up development of large-scale simulation codes. The theoretical analysis is supported by numerical results reported for a two-dimensional non-stationary inverse problem.
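
    A minimal illustration of a discrete adjoint for a time-dependent model, reduced to forward Euler on a small ODE (no DG, no adaptivity; purely a sketch of the idea): the adjoint sweep runs backward through the stored forward states and returns the gradient of a terminal functional with respect to the initial condition.

      import numpy as np

      def rhs(u):
          return np.array([u[1], -np.sin(u[0])])         # pendulum-like ODE

      def rhs_jac(u):
          return np.array([[0.0, 1.0], [-np.cos(u[0]), 0.0]])

      dt, nsteps = 0.01, 500
      u = np.array([1.0, 0.0])
      states = [u.copy()]
      for _ in range(nsteps):                            # forward sweep, store states
          u = u + dt * rhs(u)
          states.append(u.copy())

      J = 0.5 * np.dot(states[-1], states[-1])           # terminal-time functional

      lam = states[-1].copy()                            # dJ/du_N seeds the adjoint
      for n in range(nsteps - 1, -1, -1):                # backward (adjoint) sweep
          lam = lam + dt * rhs_jac(states[n]).T @ lam

      print(J, lam)                                      # lam = dJ/du_0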

  10. Deterministic Local Sensitivity Analysis of Augmented Systems - II: Applications to the QUENCH-04 Experiment Using the RELAP5/MOD3.2 Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ionescu-Bujor, Mihaela; Jin Xuezhou; Cacuci, Dan G.

    2005-09-15

    The adjoint sensitivity analysis procedure for augmented systems for application to the RELAP5/MOD3.2 code system is illustrated. Specifically, the adjoint sensitivity model corresponding to the heat structure models in RELAP5/MOD3.2 is derived and subsequently augmented to the two-fluid adjoint sensitivity model (ASM-REL/TF). The end product, called ASM-REL/TFH, comprises the complete adjoint sensitivity model for the coupled fluid dynamics/heat structure packages of the large-scale simulation code RELAP5/MOD3.2. The ASM-REL/TFH model is validated by computing sensitivities to the initial conditions for various time-dependent temperatures in the test bundle of the Quench-04 reactor safety experiment. This experiment simulates the reflooding with water of uncovered, degraded fuel rods, clad with material (Zircaloy-4) that has the same composition and size as that used in typical pressurized water reactors. The most important response for the Quench-04 experiment is the time evolution of the cladding temperature of heated fuel rods. The ASM-REL/TFH model is subsequently used to perform an illustrative sensitivity analysis of this and other time-dependent temperatures within the bundle. The results computed by using the augmented adjoint sensitivity system, ASM-REL/TFH, highlight the reliability, efficiency, and usefulness of the adjoint sensitivity analysis procedure for computing time-dependent sensitivities.

  11. Reentry-Vehicle Shape Optimization Using a Cartesian Adjoint Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2006-01-01

    Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and optimization algorithms. A well-known use of the adjoint method is gradient-based shape optimization. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (e.g., geometric parameters that control the shape). Classic aerodynamic applications of gradient-based optimization include the design of cruise configurations for transonic and supersonic flow, as well as the design of high-lift systems. Cartesian mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric computer-aided design (CAD). In previous work on Cartesian adjoint solvers, Melvin et al. developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the two-dimensional Euler equations using a ghost-cell method to enforce the wall boundary conditions. In Refs. 18 and 19, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm were the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The accuracy of the gradient computation was verified using several three-dimensional test cases, which included design variables such as the free stream parameters and the planform shape of an isolated wing. The objective of the present work is to extend our adjoint formulation to problems involving general shape changes. Factors under consideration include the computation of mesh sensitivities that provide a reliable approximation of the objective function gradient, as well as the computation of surface shape sensitivities based on a direct-CAD interface. We present detailed gradient verification studies and then focus on a shape optimization problem for an Apollo-like reentry vehicle. The goal of the optimization is to enhance the lift-to-drag ratio of the capsule by modifying the shape of its heat-shield in conjunction with a center-of-gravity (c.g.) offset. This multipoint and multi-objective optimization problem is used to demonstrate the overall effectiveness of the Cartesian adjoint method for addressing the issues of complex aerodynamic design.

  12. Additional adjoint Monte Carlo studies of the shielding of concrete structures against initial gamma radiation. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.; Cohen, M.O.

    1975-02-01

    The adjoint Monte Carlo method previously developed by MAGI has been applied to the calculation of initial radiation dose due to air secondary gamma rays and fission product gamma rays at detector points within buildings for a wide variety of problems. These provide an in-depth survey of structure shielding effects as well as many new benchmark problems for matching by simplified models. Specifically, elevated ring source results were obtained in the following areas: doses at on- and off-centerline detectors in four concrete blockhouse structures; doses at detector positions along the centerline of a high-rise structure without walls; dose mapping at basement detector positions in the high-rise structure; doses at detector points within a complex concrete structure containing exterior windows and walls and interior partitions; modeling of the complex structure by replacing interior partitions by additional material at exterior walls; effects of elevation angle changes; effects on the dose of changes in fission product ambient spectra; and modeling of mutual shielding due to external structures. In addition, point source results yielding dose extremes about the ring source average were obtained. (auth)

  13. Global Modeling and Data Assimilation. Volume 11; Documentation of the Tangent Linear and Adjoint Models of the Relaxed Arakawa-Schubert Moisture Parameterization of the NASA GEOS-1 GCM; 5.2

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Yang, Wei-Yu; Todling, Ricardo; Navon, I. Michael

    1997-01-01

    A detailed description of the development of the tangent linear model (TLM) and its adjoint model of the Relaxed Arakawa-Schubert moisture parameterization package used in the NASA GEOS-1 C-Grid GCM (Version 5.2) is presented. The notational conventions used in the TLM and its adjoint codes are described in detail.

  14. A shielding application of perturbation theory to determine changes in neutron and gamma doses due to changes in shield layers

    NASA Technical Reports Server (NTRS)

    Fieno, D.

    1972-01-01

    The perturbation theory for fixed sources was applied to radiation shielding problems to determine changes in neutron and gamma ray doses due to changes in various shield layers. For a given source and detector position the perturbation method enables dose derivatives due to all layer changes to be determined from one forward and one inhomogeneous adjoint calculation. The direct approach requires two forward calculations for the derivative due to a single layer change. Hence, the perturbation method for obtaining dose derivatives permits an appreciable savings in computation for a multilayered shield. For an illustrative problem, a comparison was made of the fractional change in the dose per unit change in the thickness of each shield layer as calculated by perturbation theory and by successive direct calculations; excellent agreement was obtained between the two methods.
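
    For reference, the first-order perturbation expression underlying the method (stated generically; the paper's specific layer bookkeeping is not reproduced): with φ the forward flux driven by the fixed source, φ† the adjoint flux driven by the detector response function, L the transport operator, and t_i the thickness (or, equivalently, density) of layer i,

      \frac{\partial D}{\partial t_i} \;\approx\; -\left\langle \phi^{\dagger},\, \frac{\partial \mathbf{L}}{\partial t_i}\, \phi \right\rangle,

    so a single forward calculation and a single adjoint calculation yield the dose derivatives for every layer at once, which is the computational saving the abstract describes.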

  15. Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide

    DOE PAGES

    Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...

    2017-03-01

    The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.
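
    One elementary identity of this kind, included here for orientation (a generic consequence of first-order sensitivity theory, not a result specific to MCNP6 or SCALE): because scaling the total material density ρ scales every isotope number density N_k together, the relative density sensitivity is the sum of the individual isotopic sensitivities,

      S_{\rho} \;=\; \frac{\rho}{R}\frac{\partial R}{\partial \rho}
      \;=\; \sum_{k} \frac{N_k}{R}\frac{\partial R}{\partial N_k}
      \;=\; \sum_{k} S_{N_k}.

    Constrained sensitivities (e.g. when the mass fractions must sum to one) are likewise linear combinations of the individual S_{N_k}; the specific combination rules are those of Perkó et al. cited above.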

  16. Continuous energy adjoint transport for photons in PHITS

    NASA Astrophysics Data System (ADS)

    Malins, Alex; Machida, Masahiko; Niita, Koji

    2017-09-01

    Adjoint Monte Carlo can be an efficient algorithm for solving photon transport problems where the size of the tally is relatively small compared to the source. Such problems are typical in environmental radioactivity calculations, where natural or fallout radionuclides spread over a large area contribute to the air dose rate at a particular location. Moreover, photon transport with continuous energy representation is vital for accurately calculating radiation protection quantities. Here we describe the incorporation of an adjoint Monte Carlo capability for continuous energy photon transport into the Particle and Heavy Ion Transport code System (PHITS). An adjoint cross section library for photon interactions was developed based on the JENDL-4.0 library, by adding cross sections for adjoint incoherent scattering and pair production. PHITS reads in the library and implements the adjoint transport algorithm by Hoogenboom. Adjoint pseudo-photons are spawned within the forward tally volume and transported through space. Currently pseudo-photons can undergo coherent and incoherent scattering within the PHITS adjoint function. Photoelectric absorption is treated implicitly. The calculation result is recovered from the pseudo-photon flux calculated over the true source volume. A new adjoint tally function facilitates this conversion. This paper gives an overview of the new function and discusses potential future developments.

  17. Modeling Sound Propagation Through Non-Axisymmetric Jets

    NASA Technical Reports Server (NTRS)

    Leib, Stewart J.

    2014-01-01

    A method for computing the far-field adjoint Green's function of the generalized acoustic analogy equations under a locally parallel mean flow approximation is presented. The method is based on expanding the mean-flow-dependent coefficients in the governing equation and the scalar Green's function in truncated Fourier series in the azimuthal direction and a finite difference approximation in the radial direction in circular cylindrical coordinates. The combined spectral/finite difference method yields a highly banded system of algebraic equations that can be efficiently solved using a standard sparse system solver. The method is applied to test cases, with mean flow specified by analytical functions, corresponding to two noise reduction concepts of current interest: the offset jet and the fluid shield. Sample results for the Green's function are given for these two test cases and recommendations made as to the use of the method as part of a RANS-based jet noise prediction code.
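
    A sketch of the discretisation pattern described above, reduced to a scalar Helmholtz-like model problem (not the generalized acoustic analogy equations): expand in azimuthal Fourier modes, finite-difference in radius, and solve one banded system per mode with a sparse solver. The wavenumber, grid, and source placement below are placeholders.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      nr, nm = 200, 8                     # radial points, azimuthal modes kept
      r = np.linspace(0.05, 10.0, nr)
      dr = r[1] - r[0]
      k = 2.0                             # placeholder wavenumber

      modes = []
      for m in range(nm):                 # one decoupled radial problem per mode
          # (1/r) d/dr (r dG/dr) - m^2/r^2 G + k^2 G = point-like source
          main = -2.0 / dr**2 - m**2 / r**2 + k**2
          upper = 1.0 / dr**2 + 1.0 / (2.0 * r[:-1] * dr)
          lower = 1.0 / dr**2 - 1.0 / (2.0 * r[1:] * dr)
          A = sp.diags([lower, main, upper], offsets=[-1, 0, 1], format="csc")
          b = np.zeros(nr)
          b[nr // 2] = 1.0 / dr           # point source at mid-radius
          modes.append(spla.spsolve(A, b))

      G = np.array(modes)                 # Fourier coefficients of the Green's function
      print(G.shape)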

  18. Discrete Adjoint-Based Design Optimization of Unsteady Turbulent Flows on Dynamic Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Diskin, Boris; Yamaleev, Nail K.

    2009-01-01

    An adjoint-based methodology for design optimization of unsteady turbulent flows on dynamic unstructured grids is described. The implementation relies on an existing unsteady three-dimensional unstructured grid solver capable of dynamic mesh simulations and discrete adjoint capabilities previously developed for steady flows. The discrete equations for the primal and adjoint systems are presented for the backward-difference family of time-integration schemes on both static and dynamic grids. The consistency of sensitivity derivatives is established via comparisons with complex-variable computations. The current work is believed to be the first verified implementation of an adjoint-based optimization methodology for the true time-dependent formulation of the Navier-Stokes equations in a practical computational code. Large-scale shape optimizations are demonstrated for turbulent flows over a tiltrotor geometry and a simulated aeroelastic motion of a fighter jet.
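
    The complex-variable check mentioned above is straightforward to reproduce for any smooth scalar functional; a generic sketch follows (the objective below is a placeholder, not FUN3D code). Because the method has no subtractive cancellation, the step can be made extremely small and the result compared digit-for-digit against adjoint-derived sensitivities.

      import numpy as np

      def objective(x):
          # Placeholder smooth objective standing in for a discretized functional.
          return np.sum(np.exp(-x) * np.sin(x)) / (1.0 + np.dot(x, x))

      def complex_step_gradient(fun, x, h=1e-200):
          g = np.zeros(x.size)
          for i in range(x.size):
              xc = x.astype(complex)
              xc[i] += 1j * h
              g[i] = fun(xc).imag / h     # exact to machine precision as h -> 0
          return g

      x0 = np.array([0.2, 1.1, -0.7, 3.0])
      print(complex_step_gradient(objective, x0))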

  19. Sensitivities of Greenland ice sheet volume inferred from an ice sheet adjoint model

    NASA Astrophysics Data System (ADS)

    Heimbach, P.; Bugnion, V.

    2009-04-01

    We present a new and original approach to understanding the sensitivity of the Greenland ice sheet to key model parameters and environmental conditions. At the heart of this approach is the use of an adjoint ice sheet model. Since its introduction by MacAyeal (1992), the adjoint method has been widely used to fit ice stream models to the increasing number and diversity of satellite observations, and to estimate uncertain model parameters such as basal conditions. However, no attempt has been made to extend this method to comprehensive ice sheet models. As a first step toward the use of adjoints of comprehensive three-dimensional ice sheet models we have generated an adjoint of the ice sheet model SICOPOLIS of Greve (1997). The adjoint was generated by means of the automatic differentiation (AD) tool TAF. The AD tool generates exact source code representing the tangent linear and adjoint model of the nonlinear parent model provided. Model sensitivities are given by the partial derivatives of a scalar-valued model diagnostic with respect to the controls, and can be efficiently calculated via the adjoint. By way of example, we determine the sensitivity of the total Greenland ice volume to various control variables, such as spatial fields of basal flow parameters, surface and basal forcings, and initial conditions. Reliability of the adjoint was tested through finite-difference perturbation calculations for various control variables and perturbation regions. Besides confirming qualitative aspects of ice sheet sensitivities, such as expected regional variations, we detect regions where model sensitivities are seemingly unexpected or counter-intuitive, albeit "real" in the sense of actual model behavior. An example is inferred regions where sensitivities of ice sheet volume to the basal sliding coefficient are positive, i.e. where a local increase in the basal sliding parameter increases the ice sheet volume. Similarly, positive ice temperature sensitivities are found in certain parts of the ice sheet (in most regions the sensitivity is negative, i.e. an increase in temperature decreases ice sheet volume), the detection of which would have been highly unlikely if only conventional perturbation experiments had been used. An effort to generate an efficient adjoint with the newly developed open-source AD tool OpenAD is also under way. Available adjoint code generation tools now open up a variety of novel model applications, notably with regard to sensitivity and uncertainty analyses and ice sheet state estimation or data assimilation.

  20. Improved Hybrid Modeling of Spent Fuel Storage Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bibber, Karl van

    This work developed a new computational method for improving the ability to calculate the neutron flux in deep-penetration radiation shielding problems that contain areas with strong streaming. The “gold standard” method for radiation transport is Monte Carlo (MC), as it samples the physics exactly and requires few approximations. Historically, however, MC was not useful for shielding problems because of the computational challenge of following particles through dense shields. Instead, deterministic methods, which are superior in terms of computational effort for these problem types but are not as accurate, were used. Hybrid methods, which use deterministic solutions to improve MC calculations through a process called variance reduction, can make it tractable from a computational time and resource use perspective to use MC for deep-penetration shielding. Perhaps the most widespread and accessible of these methods are the Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) methods. For problems containing strong anisotropies, such as power plants with pipes through walls, spent fuel cask arrays, active interrogation, and locations with small air gaps or plates embedded in water or concrete, hybrid methods are still insufficiently accurate. In this work, a new method for generating variance reduction parameters for strongly anisotropic, deep-penetration radiation shielding studies was developed. This method generates an alternate form of the adjoint scalar flux quantity, Φ_Ω, which is used by both CADIS and FW-CADIS to generate variance reduction parameters for local and global response functions, respectively. The new method, called CADIS-Ω, was implemented in the Denovo/ADVANTG software. Results indicate that the flux generated by CADIS-Ω incorporates localized angular anisotropies in the flux more effectively than standard methods. CADIS-Ω outperformed CADIS in several test problems. This initial work indicates that CADIS-Ω may be highly useful for shielding problems with strong angular anisotropies. This benefits the public by increasing accuracy at lower computational effort for many problems that have energy, security, and economic importance.

  1. Novel numerical techniques for magma dynamics

    NASA Astrophysics Data System (ADS)

    Rhebergen, S.; Katz, R. F.; Wathen, A.; Alisic, L.; Rudge, J. F.; Wells, G.

    2013-12-01

    We discuss the development of finite element techniques and solvers for magma dynamics computations. These are implemented within the FEniCS framework. This approach allows for user-friendly, expressive, high-level code development, but also provides access to powerful, scalable numerical solvers and a large family of finite element discretisations. With the recent addition of dolfin-adjoint, FeniCS supports automated adjoint and tangent-linear models, enabling the rapid development of Generalised Stability Analysis. The ability to easily scale codes to three dimensions with large meshes, and/or to apply intricate adjoint calculations means that efficiency of the numerical algorithms is vital. We therefore describe our development and analysis of preconditioners designed specifically for finite element discretizations of equations governing magma dynamics. The preconditioners are based on Elman-Silvester-Wathen methods for the Stokes equation, and we extend these to flows with compaction. Our simulations are validated by comparison of results with laboratory experiments on partially molten aggregates.
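
    For context, the Elman-Silvester-Wathen idea referred to above can be summarised for the plain Stokes saddle-point system (a textbook statement; the extension to compaction developed by the authors is not shown): the system

      \begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
      \begin{pmatrix} \mathbf{u} \\ p \end{pmatrix}
      =
      \begin{pmatrix} \mathbf{f} \\ 0 \end{pmatrix}

    is preconditioned with the block operator

      \mathcal{P} = \begin{pmatrix} A & B^{T} \\ 0 & -\hat{S} \end{pmatrix},
      \qquad \hat{S} \approx \tfrac{1}{\mu}\, M_{p},

    where A is the viscous block and M_p the (viscosity-scaled) pressure mass matrix approximating the Schur complement, giving Krylov convergence that is essentially independent of the mesh size.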

  2. Greenland Regional and Ice Sheet-wide Geometry Sensitivity to Boundary and Initial conditions

    NASA Astrophysics Data System (ADS)

    Logan, L. C.; Narayanan, S. H. K.; Greve, R.; Heimbach, P.

    2017-12-01

    Ice sheet and glacier model outputs require inputs from uncertainly known initial and boundary conditions, and other parameters. Conservation and constitutive equations formalize the relationship between model inputs and outputs, and the sensitivity of model-derived quantities of interest (e.g., ice sheet volume above floatation) to model variables can be obtained via the adjoint model of an ice sheet. We show how one particular ice sheet model, SICOPOLIS (SImulation COde for POLythermal Ice Sheets), depends on these inputs through comprehensive adjoint-based sensitivity analyses. SICOPOLIS discretizes the shallow-ice and shallow-shelf approximations for ice flow, and is well-suited for paleo-studies of Greenland and Antarctica, among other computational domains. The adjoint model of SICOPOLIS was developed via algorithmic differentiation, facilitated by the source transformation tool OpenAD (developed at Argonne National Lab). While model sensitivity to various inputs can be computed by costly methods involving input perturbation simulations, the time-dependent adjoint model of SICOPOLIS delivers model sensitivities to initial and boundary conditions throughout time at lower cost. Here, we explore both the sensitivities of the Greenland Ice Sheet's entire and regional volumes to: initial ice thickness, precipitation, basal sliding, and geothermal flux over the Holocene epoch. Sensitivity studies such as described here are now accessible to the modeling community, based on the latest version of SICOPOLIS that has been adapted for OpenAD to generate correct and efficient adjoint code.

  3. New Methodologies for Generation of Multigroup Cross Sections for Shielding Applications

    NASA Astrophysics Data System (ADS)

    Arzu Alpan, F.; Haghighat, Alireza

    2003-06-01

    Coupled neutron and gamma multigroup (broad-group) libraries used for Light Water Reactor shielding and dosimetry commonly include 47-neutron and 20-gamma groups. These libraries are derived from the 199-neutron, 42-gamma fine-group VITAMIN-B6 library. In this paper, we introduce modifications to the generation procedure of the broad-group libraries. Among these modifications, we show that the fine-group structure and collapsing technique have the largest impact. We demonstrate that a more refined fine-group library and the bi-linear adjoint weighting collapsing technique can improve the accuracy of transport calculation results.
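
    The two collapsing options compared above can be written compactly (standard definitions, included for reference): for a broad group G containing fine groups g, flux weighting and bilinear (flux-adjoint) weighting give

      \sigma_{G} = \frac{\sum_{g\in G}\sigma_{g}\,\phi_{g}}{\sum_{g\in G}\phi_{g}}
      \qquad \text{versus} \qquad
      \sigma_{G} = \frac{\sum_{g\in G}\sigma_{g}\,\phi_{g}\,\phi^{\dagger}_{g}}{\sum_{g\in G}\phi_{g}\,\phi^{\dagger}_{g}},

    the latter preserving each group's contribution to the detector response rather than the reaction rate alone, which is why adjoint weighting improves broad-group transport results for a given response.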

  4. Using MCBEND for neutron or gamma-ray deterministic calculations

    NASA Astrophysics Data System (ADS)

    Dobson, Geoff; Bird, Adam; Tollit, Brendan; Smith, Paul

    2017-09-01

    MCBEND 11 is the latest version of the general radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. MCBEND supports a number of acceleration techniques, for example the use of an importance map in conjunction with Splitting/Russian Roulette. MCBEND has a well established automated tool to generate this importance map, commonly referred to as the MAGIC module using a diffusion adjoint solution. This method is fully integrated with the MCBEND geometry and material specification, and can easily be run as part of a normal MCBEND calculation. An often overlooked feature of MCBEND is the ability to use this method for forward scoping calculations, which can be run as a very quick deterministic method. Additionally, the development of the Visual Workshop environment for results display provides new capabilities for the use of the forward calculation as a productivity tool. In this paper, we illustrate the use of the combination of the old and new in order to provide an enhanced analysis capability. We also explore the use of more advanced deterministic methods for scoping calculations used in conjunction with MCBEND, with a view to providing a suite of methods to accompany the main Monte Carlo solver.

  5. A new approach for developing adjoint models

    NASA Astrophysics Data System (ADS)

    Farrell, P. E.; Funke, S. W.

    2011-12-01

    Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, it requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternate approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and supplies callbacks to compute the action of these operators. The library, called libadjoint, is then capable of symbolically manipulating the forward annotation to automatically assemble the adjoint equations. Libadjoint is open source, and is explicitly designed to be bolted-on to an existing discrete model. It can be applied to any discretisation, steady or time-dependent problems, and both linear and nonlinear systems. Using libadjoint has several advantages. It requires the application of an AD tool only to small pieces of code, making the use of AD far more tractable. As libadjoint derives the adjoint equations, the expertise required to develop an adjoint model is greatly diminished. One major advantage of this approach is that the model developer is freed from implementing complex checkpointing strategies for the adjoint model: libadjoint has sufficient information about the forward model to re-play the entire forward solve when necessary, and thus the checkpointing algorithm can be implemented entirely within the library itself. Examples are shown using the Fluidity/ICOM framework, a complex ocean model under development at Imperial College London.

  6. Spectral-Element Seismic Wave Propagation Codes for both Forward Modeling in Complex Media and Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.

    2015-12-01

    We present both SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, representing high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale application. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.

  7. Comparison of Evolutionary (Genetic) Algorithm and Adjoint Methods for Multi-Objective Viscous Airfoil Optimizations

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) Method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function evaluation code, the steady-state explicit part of the code ARC2D. The parameterization of the design space is a common B-spline approach for an airfoil surface, which, together with a common gridding approach, restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy and design consistency.

  8. Shielding application of perturbation theory to determine changes in neutron and gamma doses due to changes in shield layers

    NASA Technical Reports Server (NTRS)

    Fieno, D.

    1972-01-01

    Perturbation theory formulas were derived and applied to determine changes in neutron and gamma-ray doses due to changes in various radiation shield layers for fixed sources. For a given source and detector position, the perturbation method enables dose derivatives with respect to density, or equivalently thickness, for every layer to be determined from one forward and one inhomogeneous adjoint calculation. A direct determination without the perturbation approach would require two forward calculations to evaluate the dose derivative due to a change in a single layer. Hence, the perturbation method for obtaining dose derivatives requires fewer computations for design studies of multilayer shields. For an illustrative problem, a comparison was made of the fractional change in the dose per unit change in the thickness of each shield layer in a two-layer spherical configuration as calculated by perturbation theory and by successive direct calculations; excellent agreement was obtained between the two methods.

  9. A demonstration of adjoint methods for multi-dimensional remote sensing of the atmosphere and surface

    NASA Astrophysics Data System (ADS)

    Martin, William G. K.; Hasekamp, Otto P.

    2018-01-01

    In previous work, we derived the adjoint method as a computationally efficient path to three-dimensional (3D) retrievals of clouds and aerosols. In this paper we will demonstrate the use of adjoint methods for retrieving two-dimensional (2D) fields of cloud extinction. The demonstration uses a new 2D radiative transfer solver (FSDOM). This radiation code was augmented with adjoint methods to allow efficient derivative calculations needed to retrieve cloud and surface properties from multi-angle reflectance measurements. The code was then used in three synthetic retrieval studies. Our retrieval algorithm adjusts the cloud extinction field and surface albedo to minimize the measurement misfit function with a gradient-based, quasi-Newton approach. At each step we compute the value of the misfit function and its gradient with two calls to the solver FSDOM. First we solve the forward radiative transfer equation to compute the residual misfit with measurements, and second we solve the adjoint radiative transfer equation to compute the gradient of the misfit function with respect to all unknowns. The synthetic retrieval studies verify that adjoint methods are scalable to retrieval problems with many measurements and unknowns. We can retrieve the vertically-integrated optical depth of moderately thick clouds as a function of the horizontal coordinate. It is also possible to retrieve the vertical profile of clouds that are separated by clear regions. The vertical profile retrievals improve for smaller cloud fractions. This leads to the conclusion that cloud edges actually increase the amount of information that is available for retrieving the vertical profile of clouds. However, to exploit this information one must retrieve the horizontally heterogeneous cloud properties with a 2D (or 3D) model. This prototype shows that adjoint methods can efficiently compute the gradient of the misfit function. This work paves the way for the application of similar methods to 3D remote sensing problems.
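
    A schematic of the retrieval loop described above, with the forward and adjoint radiative transfer calls left as placeholders (the function names and the toy mapping are hypothetical, not the FSDOM interface):

      import numpy as np
      from scipy.optimize import minimize

      y_obs = np.zeros(64)                 # placeholder multi-angle reflectances

      def forward_model(x):
          # Placeholder forward solve: simulated reflectances for the state x
          # (cloud extinction field plus surface albedo parameters).
          return np.tanh(x[:64])

      def adjoint_model(x, residual):
          # Placeholder adjoint solve: gradient of the misfit w.r.t. the state.
          return np.concatenate([(1.0 - np.tanh(x[:64]) ** 2) * residual,
                                 np.zeros(x.size - 64)])

      def misfit_and_gradient(x):
          residual = forward_model(x) - y_obs      # one forward solve
          J = 0.5 * np.dot(residual, residual)
          g = adjoint_model(x, residual)           # one adjoint solve
          return J, g

      x0 = 0.1 * np.ones(80)                       # extinction field + albedo
      result = minimize(misfit_and_gradient, x0, jac=True, method="L-BFGS-B")
      print(result.fun, result.nit)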

  10. An adjoint-based framework for maximizing mixing in binary fluids

    NASA Astrophysics Data System (ADS)

    Eggl, Maximilian; Schmid, Peter

    2017-11-01

    Mixing in the inertial, but laminar parameter regime is a common application in a wide range of industries. Enhancing the efficiency of mixing processes thus has a fundamental effect on product quality, material homogeneity and, last but not least, production costs. In this project, we address mixing efficiency in the above-mentioned regime (Reynolds number Re = 1000, Péclet number Pe = 1000) by developing and demonstrating an algorithm based on nonlinear adjoint looping that minimizes the variance of a passive scalar field which models our binary Newtonian fluids. The numerical method is based on the FLUSI code (Engels et al. 2016), a Fourier pseudo-spectral code, which we modified and augmented by scalar transport and adjoint equations. Mixing is accomplished by moving stirrers which are numerically modeled using a penalization approach. In our two-dimensional simulations we consider rotating circular and elliptic stirrers and extract optimal mixing strategies from the iterative scheme. The case of optimizing the shape and rotational speed of the stirrers will be demonstrated.

  11. Optimization of computations for adjoint field and Jacobian needed in 3D CSEM inversion

    NASA Astrophysics Data System (ADS)

    Dehiya, Rahul; Singh, Arun; Gupta, Pravin K.; Israil, M.

    2017-01-01

    We present the features and results of a newly developed code, based on the Gauss-Newton optimization technique, for solving the three-dimensional Controlled-Source Electromagnetic inverse problem. In this code special emphasis has been put on representing the operations by block matrices for the conjugate gradient iteration. We show how, in the computation of the Jacobian, the matrix formed by differentiation of the system matrix can be made independent of frequency to optimize the operations at the conjugate gradient step. Coarse-level parallel computing, using the OpenMP framework, is used primarily due to its simplicity of implementation and the wide accessibility of shared-memory multi-core machines. We demonstrate how the coarseness of the modeling grid in comparison to the source (computational receiver) spacing can be exploited for efficient computing, without compromising the quality of the inverted model, by reducing the number of adjoint calls. It is also demonstrated that the adjoint field can even be computed on a grid coarser than the modeling grid without affecting the inversion outcome. These observations were reconfirmed using an experiment design in which the deviation of the source from a straight tow line is considered. Finally, a real field data inversion experiment is presented to demonstrate the robustness of the code.
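
    To make the structure concrete, here is a generic Gauss-Newton/conjugate-gradient skeleton of the kind the abstract alludes to, with Jacobian-vector products standing in for the forward- and adjoint-based operations (the dense stand-in Jacobian, damping value, and sizes are all illustrative, not the authors' code):

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, cg

      rng = np.random.default_rng(1)
      Jmat = rng.standard_normal((120, 40))       # stand-in sensitivity (Jacobian)
      d_obs = rng.standard_normal(120)            # observed data
      m = np.zeros(40)                            # starting model

      jvp = lambda v: Jmat @ v                    # Jacobian-vector product (forward-like)
      jtvp = lambda w: Jmat.T @ w                 # transpose product (adjoint-like)

      lam = 1e-2                                  # damping / regularization weight
      for it in range(5):
          r = d_obs - jvp(m)                      # data residual
          rhs = jtvp(r) - lam * m                 # Gauss-Newton right-hand side
          H = LinearOperator((40, 40), matvec=lambda v: jtvp(jvp(v)) + lam * v)
          dm, info = cg(H, rhs, maxiter=100)      # normal equations solved by CG
          m = m + dm
          print(it, np.linalg.norm(r))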

  12. 3D unstructured-mesh radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morel, J.

    1997-12-31

    Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations. ATTILA is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: Sn (discrete ordinates), Pn (spherical harmonics), and SPn (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard Sn discretization in angle in conjunction with trilinear-discontinuous spatial differencing, and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.
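
    For readers unfamiliar with the iteration strategy named above, within-group source iteration with diffusion-synthetic acceleration (DSA) can be summarised as (a standard textbook statement, not code from ATTILA or PERICLES):

      \mathbf{L}\,\psi^{(\ell+1/2)} = \mathbf{M}\mathbf{S}\,\phi^{(\ell)} + q,
      \qquad
      \phi^{(\ell+1/2)} = \mathbf{D}\,\psi^{(\ell+1/2)},

    followed by a diffusion solve for the additive correction,

      -\nabla\cdot D\nabla f + \Sigma_{a} f
      = \Sigma_{s}\bigl(\phi^{(\ell+1/2)} - \phi^{(\ell)}\bigr),
      \qquad
      \phi^{(\ell+1)} = \phi^{(\ell+1/2)} + f,

    where L is the streaming-plus-collision operator, M the moment-to-discrete map, D the discrete-to-moment map, and S the within-group scattering operator.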

  13. The fast neutron fluence and the activation detector activity calculations using the effective source method and the adjoint function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hep, J.; Konecna, A.; Krysl, V.

    2011-07-01

    This paper describes the application of the effective source method in forward calculations and the adjoint method to the solution of fast neutron fluence and activation detector activities in the reactor pressure vessel (RPV) and RPV cavity of a VVER-440 reactor. Its objective is the demonstration of both methods on a practical task. The effective source method applies the Boltzmann transport operator to time-integrated source data in order to obtain neutron fluence and detector activities. By weighting the source data by the time-dependent decay of the detector activity, the result of the calculation is the detector activity. Alternatively, if the weighting is uniform with respect to time, the result is the fluence. The approach works because of the inherent linearity of radiation transport in non-multiplying, time-invariant media. Integrated in this way, the source data are referred to as the effective source. The effective source in the forward calculations method thereby enables the analyst to replace numerous intensive transport calculations with a single transport calculation in which the time dependence and magnitude of the source are correctly represented. In this work, the effective source method has been expanded slightly in the following way: the neutron source data were prepared with a few-group calculation using the active core calculation code MOBY-DICK. The follow-up multigroup neutron transport calculation was performed using the neutron transport code TORT. For comparison, an alternative method of calculation has been used based upon adjoint functions of the Boltzmann transport equation. Calculation of the three-dimensional (3-D) adjoint function for each required computational outcome has been obtained using the deterministic code TORT and the cross section library BGL440. Adjoint functions appropriate to the required fast neutron flux density and neutron reaction rates have been calculated for several significant points within the RPV and RPV cavity of the VVER-440 reactor and located axially at the position of maximum power and at the position of the weld. Both of these methods (the effective source and the adjoint function) are briefly described in the present paper. The paper also describes their application to the solution of fast neutron fluence and detector activities for the VVER-440 reactor. (authors)
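
    The effective-source idea described above can be stated compactly (a generic form; the MOBY-DICK/TORT specifics and the exact decay and saturation factors are omitted): for a detector nuclide with decay constant λ evaluated at time T, the time-dependent source S(r, E, t) is collapsed before the transport calculation,

      S_{\mathrm{eff}}(\mathbf{r}, E) \;=\; \int_{0}^{T} S(\mathbf{r}, E, t)\, e^{-\lambda (T - t)}\, dt,

    so that a single transport calculation with S_eff, folded with the activation response (or, equivalently, with the adjoint function φ†), yields the detector activity, while an unweighted time integral of S yields the fluence from the same single calculation.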

  14. A comparison of discrete versus continuous adjoint states to invert groundwater flow in heterogeneous dual porosity systems

    NASA Astrophysics Data System (ADS)

    Delay, Frederick; Badri, Hamid; Fahs, Marwan; Ackerer, Philippe

    2017-12-01

    Dual porosity models are increasingly used for simulating groundwater flow at the large scale in fractured porous media. In this context, model inversions aimed at retrieving the system heterogeneity are frequently faced with huge parameterizations, for which descent methods of inversion assisted by adjoint-state calculations are well suited. We compare the performance of discrete and continuous forms of adjoint states associated with the flow equations in a dual porosity system. The discrete form inherits from previous works by some of the authors, while the continuous form is completely new and is here fully differentiated to handle all types of model parameters. Adjoint states assist descent methods by calculating the gradient components of the objective function, which are a key to good convergence of inverse solutions. Our comparison on the basis of synthetic exercises shows that both discrete and continuous adjoint states can provide very similar solutions close to the reference. For highly heterogeneous systems, the calculation grid of the continuous form cannot be too coarse; otherwise the method may fail to converge. This notwithstanding, the continuous adjoint state is the most versatile form, as its non-intrusive character allows an inversion toolbox to be plugged in quasi-independently of the code employed for solving the forward problem.

  15. Sensitivity Analysis for Steady State Groundwater Flow Using Adjoint Operators

    NASA Astrophysics Data System (ADS)

    Sykes, J. F.; Wilson, J. L.; Andrews, R. W.

    1985-03-01

    Adjoint sensitivity theory is currently being considered as a potential method for calculating the sensitivity of nuclear waste repository performance measures to the parameters of the system. For groundwater flow systems, performance measures of interest include piezometric heads in the vicinity of a waste site, velocities or travel time in aquifers, and mass discharge to biosphere points. The parameters include recharge-discharge rates, prescribed boundary heads or fluxes, formation thicknesses, and hydraulic conductivities. The derivative of a performance measure with respect to the system parameters is usually taken as a measure of sensitivity. To calculate sensitivities, adjoint sensitivity equations are formulated from the equations describing the primary problem. The solution of the primary problem and the adjoint sensitivity problem enables the determination of all of the required derivatives and hence related sensitivity coefficients. In this study, adjoint sensitivity theory is developed for equations of two-dimensional steady state flow in a confined aquifer. Both the primary flow equation and the adjoint sensitivity equation are solved using the Galerkin finite element method. The developed computer code is used to investigate the regional flow parameters of the Leadville Formation of the Paradox Basin in Utah. The results illustrate the sensitivity of calculated local heads to the boundary conditions. Alternatively, local velocity related performance measures are more sensitive to hydraulic conductivities.
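    As a heavily simplified illustration of the adjoint sensitivity workflow described above, the sketch below computes dJ/dp for a performance measure J = c·h defined on the heads h of a discrete linear flow system A(p)h = b: one forward solve plus one adjoint solve yields the sensitivities with respect to all parameters. The matrices and names are invented for the example; this is not the Galerkin finite-element code from the study.

    import numpy as np

    def adjoint_sensitivities(A, b, c, dA_dp_list, db_dp_list=None):
        """Return (h, dJ/dp) for J = c^T h, where A h = b.

        A          : (n, n) system (conductance) matrix
        b          : (n,) right-hand side (sources / boundary fluxes)
        c          : (n,) weights defining the performance measure J = c . h
        dA_dp_list : list of (n, n) derivatives of A w.r.t. each parameter p_k
        db_dp_list : optional list of (n,) derivatives of b w.r.t. each p_k
        """
        h = np.linalg.solve(A, b)         # one forward (primary) solve
        lam = np.linalg.solve(A.T, c)     # one adjoint solve: A^T lambda = c
        sens = []
        for k, dA in enumerate(dA_dp_list):
            db = db_dp_list[k] if db_dp_list is not None else np.zeros_like(b)
            # dJ/dp_k = lambda^T (db/dp_k - dA/dp_k @ h)
            sens.append(lam @ (db - dA @ h))
        return h, np.array(sens)

    # Tiny 3-node example: J is the head at node 1, parameters scale diagonal entries.
    A = np.array([[ 4.0, -1.0,  0.0],
                  [-1.0,  4.0, -1.0],
                  [ 0.0, -1.0,  4.0]])
    b = np.array([1.0, 0.0, 0.5])
    c = np.array([0.0, 1.0, 0.0])                    # J = h[1]
    dA_list = [np.diag([1.0, 0.0, 0.0]), np.diag([0.0, 1.0, 0.0])]
    heads, dJdp = adjoint_sensitivities(A, b, c, dA_list)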

  16. SOC-DS computer code provides tool for design evaluation of homogeneous two-material nuclear shield

    NASA Technical Reports Server (NTRS)

    Disney, R. K.; Ricks, L. O.

    1967-01-01

    SOC-DS Code /Shield Optimization Code - Direct Search/ selects a nuclear shield material of optimum volume, weight, or cost to meet the requirements of a given radiation dose rate or energy transmission constraint. It is applicable to evaluating neutron and gamma ray shields for all nuclear reactors.

  17. Adjoint Algorithm for CAD-Based Shape Optimization Using a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2004-01-01

    Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and optimization algorithms. A well-known use of the adjoint method is gradient-based shape optimization. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (geometric parameters that control the shape). More recently, emerging adjoint applications focus on the analysis problem, where the adjoint solution is used to drive mesh adaptation, as well as to provide estimates of functional error bounds and corrections. The attractive feature of this approach is that the mesh-adaptation procedure targets a specific functional, thereby localizing the mesh refinement and reducing computational cost. Our focus is on the development of adjoint-based optimization techniques for a Cartesian method with embedded boundaries. In contrast to implementations on structured and unstructured grids, Cartesian methods decouple the surface discretization from the volume mesh. This feature makes Cartesian methods well suited for the automated analysis of complex geometry problems, and consequently a promising approach to aerodynamic optimization. Melvin et al. developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the Euler equations. In both approaches, a boundary condition is introduced to approximate the effects of the evolving surface shape, which results in accurate gradient computation. Central to automated shape optimization algorithms is the issue of geometry modeling and control. The need to optimize complex, "real-life" geometry provides a strong incentive for the use of parametric-CAD systems within the optimization procedure. In previous work, we presented an effective optimization framework that incorporates a direct-CAD interface. In this work, we enhance the capabilities of this framework with efficient gradient computations using the discrete adjoint method. We present details of the adjoint numerical implementation, which reuses the domain decomposition, multigrid, and time-marching schemes of the flow solver. Furthermore, we explain and demonstrate the use of CAD in conjunction with the Cartesian adjoint approach. The final paper will contain a number of complex-geometry, industrially relevant examples with many design variables to demonstrate the effectiveness of the adjoint method on Cartesian meshes.

  18. The DOPEX code: An application of the method of steepest descent to laminated-shield-weight optimization with several constraints

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.

    1972-01-01

    A two- or three-constraint, two-dimensional radiation shield weight optimization procedure and a computer program, DOPEX, are described. The DOPEX code uses the steepest descent method to alter a set of initial (input) thicknesses for a shield configuration to achieve a minimum weight while simultaneously satisfying dose constraints. The code assumes an exponential dose-shield thickness relation with parameters specified by the user. The code also assumes that dose rates in each principal direction are dependent only on thicknesses in that direction. Code input instructions, a FORTRAN 4 listing, and a sample problem are given. Typical computer time required to optimize a seven-layer shield is about 0.1 minute on an IBM 7094-2.
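    A minimal steepest-descent sketch under the same assumptions stated in the abstract (layer weight linear in thickness, dose falling off exponentially with thickness) is given below, with the dose constraint enforced through a soft quadratic penalty. The densities, relaxation lengths, penalty weight, and step size are invented for illustration; this is not the DOPEX algorithm itself.

    import numpy as np

    rho   = np.array([7.8, 1.0, 11.3])     # layer "densities": weight per unit thickness
    relax = np.array([8.0, 12.0, 5.0])     # dose relaxation lengths per layer [cm]
    D0, D_max = 1.0e4, 1.0                 # unshielded dose rate and its allowed limit

    # Exponential dose model: D(t) = D0 * exp(-sum_i t_i / relax_i).  The constraint
    # D(t) <= D_max is equivalent to sum_i t_i / relax_i >= ln(D0 / D_max).
    target = np.log(D0 / D_max)

    def gradient(t, mu=50.0):
        slack = target - np.sum(t / relax)    # > 0 means the dose constraint is violated
        g = rho.copy()                        # gradient of the weight term
        if slack > 0.0:
            g -= 2.0 * mu * slack / relax     # gradient of the quadratic penalty term
        return g

    t = np.array([10.0, 10.0, 10.0])          # initial (input) thicknesses
    for _ in range(2000):
        t = np.clip(t - 0.02 * gradient(t), 0.0, None)   # steepest-descent step, t >= 0

    # The soft penalty leaves a small residual constraint violation; a larger mu tightens it.
    print("thicknesses [cm]:", np.round(t, 2),
          " dose:", D0 * np.exp(-np.sum(t / relax)))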

  19. Doppler Temperature Coefficient Calculations Using Adjoint-Weighted Tallies and Continuous Energy Cross Sections in MCNP6

    NASA Astrophysics Data System (ADS)

    Gonzales, Matthew Alejandro

    The calculation of the thermal neutron Doppler temperature reactivity feedback coefficient, a key parameter in the design and safe operation of advanced reactors, using first-order perturbation theory in continuous energy Monte Carlo codes is challenging, as the continuous energy adjoint flux is not readily available. Traditional approaches to obtaining the adjoint flux attempt to invert the random walk process and require data corresponding to all temperatures and their respective temperature derivatives within the system in order to accurately calculate the Doppler temperature feedback. A new method has been developed using adjoint-weighted tallies and On-The-Fly (OTF) generated continuous energy cross sections within the Monte Carlo N-Particle (MCNP6) transport code. The adjoint-weighted tallies are generated during the continuous energy k-eigenvalue Monte Carlo calculation. The weighting is based upon the iterated fission probability interpretation of the adjoint flux, which is the steady-state population in a critical nuclear reactor caused by a neutron introduced at that point in phase space. The adjoint-weighted tallies are produced in a forward calculation and do not require an inversion of the random walk. The OTF cross-section database uses a high-order functional expansion between points on a user-defined energy-temperature mesh, in which the coefficients of a polynomial fit in temperature are stored. The coefficients of the fits are generated before runtime and called upon during the simulation to produce cross sections at any given energy and temperature. The polynomial form of the OTF cross sections allows temperature derivatives of the cross sections to be obtained on the fly. The use of Monte Carlo sampling of adjoint-weighted tallies and the capability of computing derivatives of continuous energy cross sections with respect to temperature are used to calculate the Doppler temperature coefficient in a research version of MCNP6. Temperature feedback results from the cross sections themselves, from changes in the probability density functions, and from changes in the density of the materials. The focus of this work is specifically the Doppler temperature feedback that results from Doppler broadening of cross sections and from changes in the probability density function within the scattering kernel. This method is compared against published results using Mosteller's numerical benchmark, fuel assembly calculations, and a benchmark solution based on the heavy gas model for free-gas elastic scattering to show accurate evaluations of the Doppler temperature coefficient. An infinite medium benchmark for neutron free-gas elastic scattering with large scattering ratios and a constant absorption cross section has been developed using the heavy gas model. An exact closed-form solution for the neutron energy spectrum is obtained in terms of the confluent hypergeometric function and compared against spectra for the free-gas scattering model in MCNP6. Results show a quick increase in the convergence of the analytic energy spectrum to the MCNP6 results with increasing target size, with absolute relative differences of less than 5% for neutrons scattering with carbon. The analytic solution has been generalized to accommodate a piecewise-constant-in-energy absorption cross section in order to produce temperature feedback.
Results reinforce the constraints under which heavy gas theory may be applied, with a significant target size required to accommodate increasing cross-section structure. The energy-dependent, piecewise-constant cross section heavy gas model was used to produce a benchmark calculation of the Doppler temperature coefficient and to show that the adjoint-weighted method gives accurate results. Results show that the Doppler temperature coefficient computed using adjoint weighting and cross-section derivatives obtains the correct solution within statistics, while reducing computer runtimes by a factor of 50.
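    The on-the-fly idea summarized above (store polynomial-in-temperature fit coefficients and evaluate both the cross section and its temperature derivative at run time) can be sketched in a few lines. The coefficients and names below are invented for illustration and do not correspond to the actual MCNP6 OTF database format.

    import numpy as np

    # Pretend pre-computed fit: sigma(E_i, T) ~ sum_k c[i, k] * T**k  (barns, T in K).
    coeffs = np.array([[12.0, -3.0e-3, 5.0e-7],    # energy point 0
                       [ 8.5, -1.1e-3, 2.0e-7]])   # energy point 1

    def sigma_and_dsigma_dT(i_energy, T):
        """Evaluate the cross section and its temperature derivative at run time."""
        c = coeffs[i_energy]
        sigma = c @ T ** np.arange(c.size)                               # c0 + c1*T + c2*T^2
        dsig = c[1:] @ (np.arange(1, c.size) * T ** np.arange(c.size - 1))  # analytic d/dT
        return sigma, dsig

    s, ds = sigma_and_dsigma_dT(0, 900.0)   # cross section and d(sigma)/dT at 900 K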

  20. Recent Improvements in Aerodynamic Design Optimization on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Anderson, W. Kyle

    2000-01-01

    Recent improvements in an unstructured-grid method for large-scale aerodynamic design are presented. Previous work had shown such computations to be prohibitively long in a sequential processing environment. Also, robust adjoint solutions and mesh movement procedures were difficult to realize, particularly for viscous flows. To overcome these limiting factors, a set of design codes based on a discrete adjoint method is extended to a multiprocessor environment using a shared memory approach. A nearly linear speedup is demonstrated, and the consistency of the linearizations is shown to remain valid. The full linearization of the residual is used to precondition the adjoint system, and a significantly improved convergence rate is obtained. A new mesh movement algorithm is implemented and several advantages over an existing technique are presented. Several design cases are shown for turbulent flows in two and three dimensions.

  1. Detecting Shielded Special Nuclear Materials Using Multi-Dimensional Neutron Source and Detector Geometries

    NASA Astrophysics Data System (ADS)

    Santarius, John; Navarro, Marcos; Michalak, Matthew; Fancher, Aaron; Kulcinski, Gerald; Bonomo, Richard

    2016-10-01

    A newly initiated research project will be described that investigates methods for detecting shielded special nuclear materials by combining multi-dimensional neutron sources, forward/adjoint calculations modeling neutron and gamma transport, and sparse data analysis of detector signals. The key tasks for this project are: (1) developing a radiation transport capability for use in optimizing adaptive-geometry, inertial-electrostatic confinement (IEC) neutron source/detector configurations for neutron pulses distributed in space and/or phased in time; (2) creating distributed-geometry, gas-target, IEC fusion neutron sources; (3) applying sparse data and noise reduction algorithms, such as principal component analysis (PCA) and wavelet transform analysis, to enhance detection fidelity; and (4) educating graduate and undergraduate students. Funded by DHS DNDO Project 2015-DN-077-ARI095.
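    Of the signal-processing steps listed above, PCA-based noise reduction is the easiest to sketch. The example below denoises a set of simulated detector traces by keeping only the leading principal components of the mean-centered data matrix; the synthetic data and the retained component count are arbitrary, so this illustrates the kind of processing described rather than the project's actual pipeline.

    import numpy as np

    rng = np.random.default_rng(1)
    n_detectors, n_samples = 16, 500
    # Shared low-rank "signal" seen by all detectors, plus independent noise.
    signal = np.outer(np.sin(np.linspace(0, 4 * np.pi, n_samples)),
                      rng.normal(size=n_detectors)).T
    noisy = signal + 0.5 * rng.normal(size=signal.shape)

    # PCA via SVD of the mean-centered data matrix; keep the k leading components.
    mean = noisy.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(noisy - mean, full_matrices=False)
    k = 2
    denoised = mean + (U[:, :k] * s[:k]) @ Vt[:k, :]
    print("noise reduced by factor",
          np.linalg.norm(noisy - signal) / np.linalg.norm(denoised - signal))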

  2. A New Method for Computing Three-Dimensional Capture Fraction in Heterogeneous Regional Systems using the MODFLOW Adjoint Code

    NASA Astrophysics Data System (ADS)

    Clemo, T. M.; Ramarao, B.; Kelly, V. A.; Lavenue, M.

    2011-12-01

    Capture is a measure of the impact of groundwater pumping upon groundwater and surface water systems. The computation of capture through analytical or numerical methods has been the subject of articles in the literature for several decades (Bredehoeft et al., 1982). Most recently, Leake et al. (2010) described a systematic way to produce capture maps in three-dimensional systems using a numerical perturbation approach in which capture from streams was computed using unit-rate pumping at many locations within a MODFLOW model. The Leake et al. (2010) method advances the current state of computing capture. A limitation stems from the computational demand required by the perturbation approach, wherein days or weeks of computational time might be required to obtain a robust measure of capture. In this paper, we present an efficient method to compute capture in three-dimensional systems based upon adjoint states. The efficiency of the adjoint method will enable uncertainty analysis to be conducted on capture calculations. The USGS and INTERA have collaborated to extend the MODFLOW Adjoint code (Clemo, 2007) to include stream-aquifer interaction and have applied it to one of the examples used in Leake et al. (2010), the San Pedro Basin MODFLOW model. With five layers and 140,800 grid blocks per layer, the San Pedro Basin model provided an ideal example data set to compare the capture computed from the perturbation and the adjoint methods. The capture fraction map produced from the perturbation method for the San Pedro Basin model required significant computational time to compute, and therefore the locations for the pumping wells were limited to 1530 locations in layer 4. The 1530 direct simulations of capture required approximately 76 CPU hours. Had capture been simulated in each grid block in each layer, as is done in the adjoint method, the CPU time would have been on the order of 4 years. The MODFLOW-Adjoint code produced the capture fraction map of the San Pedro Basin model at 704,000 grid blocks (140,800 grid blocks x 5 layers) in just 6 minutes. The capture fraction maps from the perturbation and adjoint methods agree closely. The results of this study indicate that the adjoint capture method and its associated computational efficiency will enable scientists and engineers facing water resource management decisions to evaluate the sensitivity and uncertainty of impacts to regional water resource systems as part of groundwater supply strategies. Bredehoeft, J.D., S.S. Papadopulos, and H.H. Cooper Jr, Groundwater: The water budget myth. In Scientific Basis of Water-Resources Management, ed. National Research Council (U.S.), Geophysical Study Committee, 51-57. Washington D.C.: National Academy Press, 1982. Clemo, Tom, MODFLOW-2005 Ground-Water Model-Users Guide to Adjoint State based Sensitivity Process (ADJ), BSU CGISS 07-01, Center for the Geophysical Investigation of the Shallow Subsurface, Boise State University, 2007. Leake, S.A., H.W. Reeves, and J.E. Dickinson, A New Capture Fraction Method to Map How Pumpage Affects Surface Water Flow, Ground Water, 48(5), 670-700, 2010.

  3. Optimal boundary conditions for ORCA-2 model

    NASA Astrophysics Data System (ADS)

    Kazantsev, Eugene

    2013-08-01

    A 4D-Var data assimilation technique is applied to the ORCA-2 configuration of NEMO in order to identify the optimal parametrization of boundary conditions on the lateral boundaries as well as on the bottom and the surface of the ocean. The influence of the boundary conditions on the solution is analyzed both within and beyond the assimilation window. It is shown that the optimal bottom and surface boundary conditions allow us to better represent jet streams such as the Gulf Stream and the Kuroshio. Analyzing the reasons for the reinforcement of the jets, we note that data assimilation has a major impact on the parametrization of the bottom boundary conditions for u and v. Automatic generation of the tangent and adjoint codes is also discussed. The Tapenade software is shown to be able to produce adjoint code that can be used after a memory-usage optimization.

  4. Adjoint-state inversion of electric resistivity tomography data of seawater intrusion at the Argentona coastal aquifer (Spain)

    NASA Astrophysics Data System (ADS)

    Fernández-López, Sheila; Carrera, Jesús; Ledo, Juanjo; Queralt, Pilar; Luquot, Linda; Martínez, Laura; Bellmunt, Fabián

    2016-04-01

    Seawater intrusion in aquifers is a complex phenomenon that can be characterized with the help of electric resistivity tomography (ERT) because of the low resistivity of seawater, which underlies the freshwater floating on top. The problem is complex because of the need for joint inversion of electrical and hydraulic (density-dependent flow) data. Here we present an adjoint-state algorithm to treat the electrical data. The adjoint-state method is a common technique for obtaining the derivatives of an objective function that depends on potentials with respect to model parameters; its main advantages are its simplicity in stationary problems and its reduced computational cost compared with other methodologies. The relationship between chloride concentration and the resistivity values of the field is well known, and these resistivities are in turn related to the potentials measured using ERT. Taking this into account, it is possible to define the different resistivity zones from the measured potential distributions by solving the inverse problem. The study zone is situated in Argentona (Baix Maresme, Catalonia), where the chloride concentrations measured in some wells are excessively high. The adjoint-state method will be used to invert the measured data using a new finite-element code written in C++ within the open-source framework Kratos. Finally, the numerical results obtained with our code will be checked against results obtained with other codes.

  5. Description of Transport Codes for Space Radiation Shielding

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Wilson, John W.; Cucinotta, Francis A.

    2011-01-01

    This slide presentation describes transport codes and their use for studying and designing space radiation shielding. When combined with risk projection models, radiation transport codes serve as the main tool for studying radiation and designing shielding. There are three criteria for assessing the accuracy of transport codes: (1) ground-based studies with defined beams and material layouts, (2) inter-comparison of transport code results for matched boundary conditions, and (3) comparisons to flight measurements. Against these three criteria, NASA's HZETRN/QMSFRG shows a very high degree of agreement.

  6. Automated Weight-Window Generation for Threat Detection Applications Using ADVANTG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, Scott W; Miller, Thomas Martin; Evans, Thomas M

    2009-01-01

    Deterministic transport codes have been used for some time to generate weight-window parameters that can improve the efficiency of Monte Carlo simulations. As the use of this hybrid computational technique is becoming more widespread, the scope of applications in which it is being applied is expanding. An active source of new applications is the field of homeland security--particularly the detection of nuclear material threats. For these problems, automated hybrid methods offer an efficient alternative to trial-and-error variance reduction techniques (e.g., geometry splitting or the stochastic weight window generator). The ADVANTG code has been developed to automate the generation of weight-window parameters for MCNP using the Consistent Adjoint Driven Importance Sampling method and employs the TORT or Denovo 3-D discrete ordinates codes to generate importance maps. In this paper, we describe the application of ADVANTG to a set of threat-detection simulations. We present numerical results for an 'active-interrogation' problem in which a standard cargo container is irradiated by a deuterium-tritium fusion neutron generator. We also present results for two passive detection problems in which a cargo container holding a shielded neutron or gamma source is placed near a portal monitor. For the passive detection problems, ADVANTG obtains an O(10^4) speedup and, for a detailed gamma spectrum tally, an average O(10^2) speedup relative to implicit-capture-only simulations, including the deterministic calculation time. For the active-interrogation problem, an O(10^4) speedup is obtained when compared to a simulation with angular source biasing and crude geometry splitting.
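    For readers unfamiliar with CADIS, the relations behind the biased source and weight-window parameters are compact enough to sketch. The snippet below applies the standard CADIS formulas on a toy space/energy mesh; it is a schematic illustration with invented array names, not ADVANTG's implementation.

    import numpy as np

    def cadis(source, adjoint_flux):
        """source, adjoint_flux: non-negative arrays on the same (space, energy) mesh."""
        response = np.sum(source * adjoint_flux)            # R = <q, phi+>
        biased_source = source * adjoint_flux / response    # q_hat, sums to 1
        ww_centers = response / np.maximum(adjoint_flux, 1e-30)  # target weights R/phi+
        return biased_source, ww_centers

    q   = np.random.rand(20, 10)     # toy source distribution (space x energy)
    phi = np.random.rand(20, 10)     # toy adjoint (importance) map from a deterministic solve
    q_hat, ww = cadis(q, phi)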

  7. Practical Aerodynamic Design Optimization Based on the Navier-Stokes Equations and a Discrete Adjoint Method

    NASA Technical Reports Server (NTRS)

    Grossman, Bernard

    1999-01-01

    Compressible and incompressible versions of a three-dimensional unstructured mesh Reynolds-averaged Navier-Stokes flow solver have been differentiated, and the resulting derivatives have been verified by comparisons with finite differences and a complex-variable approach. In this implementation, the turbulence model is fully coupled with the flow equations in order to achieve this consistency. The accuracy demonstrated in the current work represents the first time that such an approach has been successfully implemented. The accuracy of a number of simplifying approximations to the linearizations of the residual has been examined. A first-order approximation to the dependent variables in both the adjoint and design equations has been investigated. The effects of a "frozen" eddy viscosity and the ramifications of neglecting some mesh sensitivity terms were also examined. It has been found that none of the approximations yielded derivatives of acceptable accuracy, and the derivatives were often of incorrect sign. However, numerical experiments indicate that incomplete convergence of the adjoint system often yields sufficiently accurate derivatives, thereby significantly lowering the time required for computing sensitivity information. The convergence rate of the adjoint solver relative to the flow solver has been examined. Inviscid adjoint solutions typically require one to four times the cost of a flow solution, while for turbulent adjoint computations, this ratio can reach as high as eight to ten. Numerical experiments have shown that the adjoint solver can stall before converging the solution to machine accuracy, particularly for viscous cases. A possible remedy for this phenomenon would be to include the complete higher-order linearization in the preconditioning step, or to employ a simple form of mesh sequencing to obtain better approximations to the solution through the use of coarser meshes. An efficient surface parameterization based on a free-form deformation technique has been utilized, and the resulting codes have been integrated with an optimization package. Lastly, sample optimizations have been shown for inviscid and turbulent flow over an ONERA M6 wing. Drag reductions have been demonstrated by reducing shock strengths across the span of the wing. In order for large-scale optimization to become routine, the benefits of parallel architectures should be exploited. Although the flow solver has been parallelized using compiler directives, the parallel efficiency is under 50 percent. Clearly, parallel versions of the codes will have an immediate impact on the ability to design realistic configurations on fine meshes, and this effort is currently underway.

  8. Automated variance reduction for MCNP using deterministic methods.

    PubMed

    Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B

    2005-01-01

    In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.
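    The adjoint-based source energy biasing mentioned above amounts to sampling source energies from a distribution proportional to the original spectrum times the adjoint spectrum at the source location, with a compensating statistical weight assigned at birth. The four-group numbers below are invented; this sketches the idea, not the MCNP5/PARTISN implementation.

    import numpy as np

    rng = np.random.default_rng(2)
    p_E   = np.array([0.50, 0.30, 0.15, 0.05])   # original group source pdf
    phi_a = np.array([0.10, 0.40, 1.50, 6.00])   # adjoint spectrum at the source location

    q_hat = p_E * phi_a
    q_hat /= q_hat.sum()                          # biased sampling pdf
    weights = p_E / q_hat                         # birth-weight correction per group

    groups = rng.choice(p_E.size, size=10_000, p=q_hat)
    born_weights = weights[groups]
    # The expected born weight is 1, so the total source strength is preserved while
    # more particles are started in the important (high phi_a) energy groups.
    print(np.round(q_hat, 3), born_weights.mean())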

  9. PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C

    2007-09-01

    The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.

  10. Particle Hydrodynamics with Material Strength for Multi-Layer Orbital Debris Shield Design

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.

    1999-01-01

    Three dimensional simulation of oblique hypervelocity impact on orbital debris shielding places extreme demands on computer resources. Research to date has shown that particle models provide the most accurate and efficient means for computer simulation of shield design problems. In order to employ a particle based modeling approach to the wall plate impact portion of the shield design problem, it is essential that particle codes be augmented to represent strength effects. This report describes augmentation of a Lagrangian particle hydrodynamics code developed by the principal investigator, to include strength effects, allowing for the entire shield impact problem to be represented using a single computer code.

  11. Development of the WRF-CO2 4D-Var assimilation system v1.0

    NASA Astrophysics Data System (ADS)

    Zheng, Tao; French, Nancy H. F.; Baxter, Martin

    2018-05-01

    Regional atmospheric CO2 inversions commonly use Lagrangian particle trajectory model simulations to calculate the required influence function, which quantifies the sensitivity of a receptor to flux sources. In this paper, an adjoint-based four-dimensional variational (4D-Var) assimilation system, WRF-CO2 4D-Var, is developed to provide an alternative approach. This system is developed based on the Weather Research and Forecasting (WRF) modeling system, including the system coupled to chemistry (WRF-Chem), with tangent linear and adjoint codes (WRFPLUS), and with data assimilation (WRFDA), all in version 3.6. In WRF-CO2 4D-Var, CO2 is modeled as a tracer and its feedback to meteorology is ignored. This configuration allows most WRF physical parameterizations to be used in the assimilation system without incurring a large amount of code development. WRF-CO2 4D-Var solves for the optimized CO2 flux scaling factors in a Bayesian framework. Two variational optimization schemes are implemented for the system: the first uses the limited memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) minimization algorithm (L-BFGS-B) and the second uses the Lanczos conjugate gradient (CG) in an incremental approach. WRFPLUS forward, tangent linear, and adjoint models are modified to include the physical and dynamical processes involved in the atmospheric transport of CO2. The system is tested by simulations over a domain covering the continental United States at 48 km × 48 km grid spacing. The accuracy of the tangent linear and adjoint models is assessed by comparing against finite difference sensitivity. The system's effectiveness for CO2 inverse modeling is tested using pseudo-observation data. The results of the sensitivity and inverse modeling tests demonstrate the potential usefulness of WRF-CO2 4D-Var for regional CO2 inversions.
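    A toy version of the first optimization scheme described above (L-BFGS-B driven by an adjoint-supplied gradient of a Bayesian cost function) is sketched below with a linear stand-in for the transport model. The operator H, the error statistics, and the pseudo-observations are invented; this is not the WRF-CO2 4D-Var code.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n_flux, n_obs = 8, 20
    H = rng.normal(size=(n_obs, n_flux))             # linearized transport operator
    x_true = 1.0 + 0.3 * rng.normal(size=n_flux)     # "true" flux scaling factors
    y = H @ x_true + 0.05 * rng.normal(size=n_obs)   # pseudo-observations
    x_b = np.ones(n_flux)                            # prior (background) scaling factors
    B_inv = np.eye(n_flux) / 0.3**2                  # prior precision
    R_inv = np.eye(n_obs) / 0.05**2                  # observation precision

    def cost_and_grad(x):
        dx, dy = x - x_b, H @ x - y
        J = 0.5 * dx @ B_inv @ dx + 0.5 * dy @ R_inv @ dy
        # The adjoint of H (here simply H.T) maps observation-space residuals back
        # to flux space, playing the role of the adjoint model in 4D-Var.
        grad = B_inv @ dx + H.T @ (R_inv @ dy)
        return J, grad

    res = minimize(cost_and_grad, x_b, jac=True, method="L-BFGS-B")
    print("recovered scaling factors:", np.round(res.x, 2))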

  12. Consistent Adjoint Driven Importance Sampling using Space, Energy and Angle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M

    2012-08-01

    For challenging radiation transport problems, hybrid methods combine the accuracy of Monte Carlo methods with the global information present in deterministic methods. One of the most successful hybrid methods is CADIS: Consistent Adjoint Driven Importance Sampling. This method uses a deterministic adjoint solution to construct a biased source distribution and consistent weight windows to optimize a specific tally in a Monte Carlo calculation. The method has been implemented into transport codes using just the spatial and energy information from the deterministic adjoint and has been used in many applications to compute tallies with much higher figures-of-merit than analog calculations. CADIS also outperforms user-supplied importance values, which usually take long periods of user time to develop. This work extends CADIS to develop weight windows that are a function of the position, energy, and direction of the Monte Carlo particle. Two types of consistent source biasing are presented: one method that biases the source in space and energy while preserving the original directional distribution, and one method that biases the source in space, energy, and direction. Seven simple example problems are presented which compare the use of the standard space/energy CADIS with the new space/energy/angle treatments.
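    As a schematic summary of the angle-dependent treatment described above (using the usual CADIS notation; the space/energy-only variant follows by integrating the adjoint over angle), the biased source, the response normalization, and the weight-window centers take the form:

    $$\hat q(\vec r,E,\hat\Omega)=\frac{\phi^{\dagger}(\vec r,E,\hat\Omega)\,q(\vec r,E,\hat\Omega)}{R},\qquad
    R=\int\!\!\int\!\!\int \phi^{\dagger}(\vec r,E,\hat\Omega)\,q(\vec r,E,\hat\Omega)\,d\Omega\,dE\,dV,\qquad
    \bar w(\vec r,E,\hat\Omega)=\frac{R}{\phi^{\dagger}(\vec r,E,\hat\Omega)}.$$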

  13. Advanced Doubling Adding Method for Radiative Transfer in Planetary Atmospheres

    NASA Astrophysics Data System (ADS)

    Liu, Quanhua; Weng, Fuzhong

    2006-12-01

    The doubling adding method (DA) is one of the most accurate tools for detailed multiple-scattering calculations. The principle of the method goes back to the nineteenth century in a problem dealing with reflection and transmission by glass plates. Since then the doubling adding method has been widely used as a reference tool for other radiative transfer models. The method has never been used in operational applications owing to tremendous demand on computational resources from the model. This study derives an analytical expression replacing the most complicated thermal source terms in the doubling adding method. The new development is called the advanced doubling adding (ADA) method. Thanks also to the efficiency of matrix and vector manipulations in FORTRAN 90/95, the advanced doubling adding method is about 60 times faster than the doubling adding method. The radiance (i.e., forward) computation code of ADA is easily translated into tangent linear and adjoint codes for radiance gradient calculations. The simplicity in forward and Jacobian computation codes is very useful for operational applications and for the consistency between the forward and adjoint calculations in satellite data assimilation.
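    For context, the core doubling step referred to above combines the reflection and transmission operators of two identical sublayers; in schematic operator form (omitting the thermal source terms whose analytic treatment is the paper's contribution), the relations are:

    $$R_{2\tau}=R_{\tau}+T_{\tau}R_{\tau}\left(I-R_{\tau}R_{\tau}\right)^{-1}T_{\tau},\qquad
    T_{2\tau}=T_{\tau}\left(I-R_{\tau}R_{\tau}\right)^{-1}T_{\tau}.$$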

  14. Finite-frequency tomography using adjoint methods-Methodology and examples using membrane surface waves

    NASA Astrophysics Data System (ADS)

    Tape, Carl; Liu, Qinya; Tromp, Jeroen

    2007-03-01

    We employ adjoint methods in a series of synthetic seismic tomography experiments to recover surface wave phase-speed models of southern California. Our approach involves computing the Fréchet derivative for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an `adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a 2-D spectral-element method (SEM) and a phase-speed model for southern California. A `target' phase-speed model is used to generate the `data' at the receivers. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the remaining differences between data and synthetics are time-reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. An event kernel may be thought of as a weighted sum of phase-specific (e.g. P) banana-doughnut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, that is, the Fréchet derivative. A non-linear conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. We illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions and joint source-structure inversions. Finally, we draw connections between classical Hessian-based tomography and gradient-based adjoint tomography.
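    To make the construction above concrete for the simplest case of a least-squares waveform misfit (one choice among the measures the authors could use), the misfit and the corresponding time-reversed adjoint source injected at the receivers are:

    $$\chi=\frac{1}{2}\sum_{r}\int_{0}^{T}\left[s(\mathbf{x}_{r},t)-d(\mathbf{x}_{r},t)\right]^{2}\,dt,\qquad
    f^{\dagger}(\mathbf{x},t)=\sum_{r}\left[s(\mathbf{x}_{r},T-t)-d(\mathbf{x}_{r},T-t)\right]\,\delta(\mathbf{x}-\mathbf{x}_{r}).$$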

  15. McSKY: A hybrid Monte-Carlo line-beam code for shielded gamma skyshine calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Stedry, M.H.

    1994-07-01

    McSKY evaluates the skyshine dose from an isotropic, monoenergetic, point photon source collimated into either a vertical cone or a vertical structure with an N-sided polygon cross section. The code assumes an overhead shield of two materials, though the user can specify zero shield thickness for an unshielded calculation. The code uses a Monte Carlo algorithm to evaluate transport through the source shields and the integral line-beam method to describe photon transport through the atmosphere. The source energy must be between 0.02 and 100 MeV. For heavily shielded sources with energies above 20 MeV, McSKY results must be used cautiously, especially at detector locations near the source.

  16. Simulation and Optimization of an Airfoil with Leading Edge Slat

    NASA Astrophysics Data System (ADS)

    Schramm, Matthias; Stoevesandt, Bernhard; Peinke, Joachim

    2016-09-01

    A gradient-based optimization is used in order to improve the shape of a leading-edge slat upstream of a DU 91-W2-250 airfoil. The simulations are performed by solving the Reynolds-averaged Navier-Stokes (RANS) equations using the open-source CFD code OpenFOAM. Gradients are computed via the adjoint approach, which is suitable for dealing with many design parameters while keeping the computational costs low. The implementation is verified by comparing the gradients from the adjoint method with gradients obtained by finite differences for a NACA 0012 airfoil. The simulations of the leading-edge slat are validated against measurements from the acoustic wind tunnel of Oldenburg University at a Reynolds number of Re = 6 × 10^5. The shape of the slat is optimized using the adjoint approach, resulting in a drag reduction of 2%. Although the optimization is done for Re = 6 × 10^5, the improvements also hold for a higher Reynolds number of Re = 7.9 × 10^6, which is more realistic for modern wind turbines.
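    The finite-difference verification mentioned above is a generic and useful pattern. The sketch below cross-checks an analytic gradient (standing in for an adjoint gradient) against central finite differences for an arbitrary smooth objective; the objective and the check itself are invented for illustration and have nothing to do with the OpenFOAM adjoint solver.

    import numpy as np

    def objective(x):
        return np.sum(np.sin(x) * x**2)

    def analytic_gradient(x):
        # stand-in for the gradient an adjoint solver would return
        return np.cos(x) * x**2 + 2.0 * x * np.sin(x)

    def fd_gradient(f, x, eps=1e-6):
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x); e[i] = eps
            g[i] = (f(x + e) - f(x - e)) / (2.0 * eps)   # central difference
        return g

    x0 = np.linspace(0.1, 1.0, 5)
    diff = np.abs(analytic_gradient(x0) - fd_gradient(objective, x0))
    print("max abs difference between adjoint-style and FD gradients:", diff.max())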

  17. The efficiency of geophysical adjoint codes generated by automatic differentiation tools

    NASA Astrophysics Data System (ADS)

    Vlasenko, A. V.; Köhl, A.; Stammer, D.

    2016-02-01

    The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the continuous use of AD tools for solving geophysical problems on modern computer architectures.

  18. Time-domain seismic modeling in viscoelastic media for full waveform inversion on heterogeneous computing platforms with OpenCL

    NASA Astrophysics Data System (ADS)

    Fabien-Ouellet, Gabriel; Gloaguen, Erwan; Giroux, Bernard

    2017-03-01

    Full Waveform Inversion (FWI) aims at recovering the elastic parameters of the Earth by matching recordings of the ground motion with the direct solution of the wave equation. Modeling the wave propagation for realistic scenarios is computationally intensive, which limits the applicability of FWI. The current hardware evolution brings increasing parallel computing power that can speed up the computations in FWI. However, to take advantage of the diversity of parallel architectures presently available, new programming approaches are required. In this work, we explore the use of OpenCL to develop a portable code that can take advantage of the many parallel processor architectures now available. We present a program called SeisCL for 2D and 3D viscoelastic FWI in the time domain. The code computes the forward and adjoint wavefields using finite differences and outputs the gradient of the misfit function given by the adjoint state method. To demonstrate the code's portability on different architectures, the performance of SeisCL is tested on three different devices: Intel CPUs, NVidia GPUs and Intel Xeon PHI. Results show that the use of GPUs with OpenCL can speed up the computations by nearly two orders of magnitude over a single-threaded application on the CPU. Although OpenCL allows code portability, we show that some device-specific optimization is still required to get the best performance out of a specific architecture. Using OpenCL in conjunction with MPI allows the domain decomposition of large models on several devices located on different nodes of a cluster. For large enough models, the speedup of the domain decomposition varies quasi-linearly with the number of devices. Finally, we investigate two different approaches to compute the gradient by the adjoint state method and show the significant advantages of using OpenCL for FWI.

  19. Variational differential equations for engineering type trajectories close to a planet with an atmosphere

    NASA Technical Reports Server (NTRS)

    Dickmanns, E. D.

    1972-01-01

    The differential equations for the adjoint variables are derived and coded in FORTRAN. The program is written in a form to either take into account or neglect thrust, aerodynamic forces, planet rotation and oblateness, and altitude dependent winds.

  20. Towards Seismic Tomography Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Liu, Q.; Tape, C.; Maggi, A.

    2006-12-01

    We outline the theory behind tomographic inversions based on 3D reference models, fully numerical 3D wave propagation, and adjoint methods. Our approach involves computing the Fréchet derivatives for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an `adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a spectral-element method (SEM) and a heterogeneous wave-speed model, and stored as synthetic seismograms at particular receivers for which there is data. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the differences between the data and the synthetics are time reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. These kernels may be thought of as weighted sums of measurement-specific banana-donut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, i.e., the Fréchet derivatives. A conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. Using 2D examples for Rayleigh wave phase-speed maps of southern California, we illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions, and joint source-structure inversions. We also illustrate the characteristics of these 3D finite-frequency kernels based upon adjoint simulations for a variety of global arrivals, e.g., Pdiff, P'P', and SKS, and we illustrate how the approach may be used to investigate body- and surface-wave anisotropy. In adjoint tomography any time segment in which the data and synthetics match reasonably well is suitable for measurement, and this implies a much greater number of phases per seismogram can be used compared to classical tomography, in which the sensitivity of the measurements is determined analytically for specific arrivals, e.g., P. We use an automated picking algorithm based upon short-term/long-term averages and strict phase and amplitude anomaly criteria to determine arrivals and time windows suitable for measurement. For shallow global events the algorithm typically identifies of the order of 1000 windows suitable for measurement, whereas for a deep event the number can reach 4000. For southern California earthquakes the number of phases is of the order of 100 for a magnitude 4.0 event and up to 450 for a magnitude 5.0 event. We will show examples of event kernels for both global and regional earthquakes. These event kernels form the basis of adjoint tomography.

  1. Raw Pressure Data from Boise Hydrogeophysical Research Site (BHRS)

    DOE Data Explorer

    David Lim

    2013-07-17

    Pressure data from a phreatic aquifer were collected in the summer of 2013 during Multi-frequency Oscillatory Hydraulic Tomography pumping tests. All tests were performed at the Boise Hydrogeophysical Research Site. The data will be inverted using a fast steady-periodic adjoint-based inverse code.

  2. Equilibrium sensitivities of the Greenland ice sheet inferred from the adjoint of the three- dimensional thermo-mechanical model SICOPOLIS

    NASA Astrophysics Data System (ADS)

    Heimbach, P.; Bugnion, V.

    2008-12-01

    We present a new and original approach to understanding the sensitivity of the Greenland ice sheet to key model parameters and environmental conditions. At the heart of this approach is the use of an adjoint ice sheet model. MacAyeal (1992) introduced adjoints in the context of applying control theory to estimate basal sliding parameters (basal shear stress, basal friction) of an ice stream model that minimize a least-squares model-versus-observation misfit. Since then, this method has been widely used to fit ice stream models to the increasing number and diversity of satellite observations, and to estimate uncertain model parameters. However, no attempt has been made to extend this method to comprehensive ice sheet models. Here, we present a first step toward moving beyond limiting the use of control theory to ice stream models. We have generated an adjoint of the three-dimensional thermo-mechanical ice sheet model SICOPOLIS of Greve (1997). The adjoint was generated using the automatic differentiation (AD) tool TAF. TAF generates exact source code representing the tangent linear and adjoint model of the parent model provided. Model sensitivities are given by the partial derivatives of a scalar-valued model diagnostic or "cost function" with respect to the controls, and can be efficiently calculated via the adjoint. An effort to generate an efficient adjoint with the newly developed open-source AD tool OpenAD is also under way. To gain insight into the adjoint solutions, we explore various cost functions, such as local and domain-integrated ice temperature, total ice volume, or the velocity of ice at the margins of the ice sheet. Elements of our control space include initial cold ice temperatures and surface mass balance, as well as parameters such as those that appear in Glen's flow law or in the surface degree-day or basal sliding parameterizations. Sensitivity maps provide a comprehensive view and allow a quantification of where and to which variables the ice sheet model is most sensitive. The model used in the present study includes simplifications in the model physics, relies on parameterizations with uncertain empirical constants, and is unable to capture fast ice streams. Nevertheless, as a proof of concept, this method can readily be extended to incorporate higher-order physics or parameterizations (or be applied to other models). It also opens the door to ice sheet state estimation: using the model's physics jointly with field and satellite observations to produce a best estimate of the state of the ice sheets.

  3. A Fast Code for Jupiter Atmospheric Entry Analysis

    NASA Technical Reports Server (NTRS)

    Yauber, Michael E.; Wercinski, Paul; Yang, Lily; Chen, Yih-Kanq

    1999-01-01

    A fast code was developed to calculate the forebody heating environment and heat shielding that is required for Jupiter atmospheric entry probes. A carbon phenolic heat shield material was assumed and, since computational efficiency was a major goal, analytic expressions were used, primarily, to calculate the heating, ablation and the required insulation. The code was verified by comparison with flight measurements from the Galileo probe's entry. The calculation required 3.5 sec of CPU time on a work station, or three to four orders of magnitude less than for previous Jovian entry heat shields. The computed surface recessions from ablation were compared with the flight values at six body stations. The average, absolute, predicted difference in the recession was 13.7% too high. The forebody's mass loss was overpredicted by 5.3% and the heat shield mass was calculated to be 15% less than the probe's actual heat shield. However, the calculated heat shield mass did not include contingencies for the various uncertainties that must be considered in the design of probes. Therefore, the agreement with the Galileo probe's values was satisfactory in view of the code's fast running time and the methods' approximations.

  4. Practical Aerodynamic Design Optimization Based on the Navier-Stokes Equations and a Discrete Adjoint Method

    NASA Technical Reports Server (NTRS)

    Grossman, Bernard

    1999-01-01

    The technical details are summarized below: Compressible and incompressible versions of a three-dimensional unstructured mesh Reynolds-averaged Navier-Stokes flow solver have been differentiated, and the resulting derivatives have been verified by comparisons with finite differences and a complex-variable approach. In this implementation, the turbulence model is fully coupled with the flow equations in order to achieve this consistency. The accuracy demonstrated in the current work represents the first time that such an approach has been successfully implemented. The accuracy of a number of simplifying approximations to the linearizations of the residual has been examined. A first-order approximation to the dependent variables in both the adjoint and design equations has been investigated. The effects of a "frozen" eddy viscosity and the ramifications of neglecting some mesh sensitivity terms were also examined. It has been found that none of the approximations yielded derivatives of acceptable accuracy, and the derivatives were often of incorrect sign. However, numerical experiments indicate that incomplete convergence of the adjoint system often yields sufficiently accurate derivatives, thereby significantly lowering the time required for computing sensitivity information. The convergence rate of the adjoint solver relative to the flow solver has been examined. Inviscid adjoint solutions typically require one to four times the cost of a flow solution, while for turbulent adjoint computations, this ratio can reach as high as eight to ten. Numerical experiments have shown that the adjoint solver can stall before converging the solution to machine accuracy, particularly for viscous cases. A possible remedy for this phenomenon would be to include the complete higher-order linearization in the preconditioning step, or to employ a simple form of mesh sequencing to obtain better approximations to the solution through the use of coarser meshes. An efficient surface parameterization based on a free-form deformation technique has been utilized, and the resulting codes have been integrated with an optimization package. Lastly, sample optimizations have been shown for inviscid and turbulent flow over an ONERA M6 wing. Drag reductions have been demonstrated by reducing shock strengths across the span of the wing.

  5. Design Analysis of SNS Target Station Biological Shielding Monolith with Proton Power Uprate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bekar, Kursat B.; Ibrahim, Ahmad M.

    2017-05-01

    This report documents the analysis of the dose rate in the experiment area outside the Spallation Neutron Source (SNS) target station shielding monolith with a proton beam energy of 1.3 GeV. The analysis implemented a coupled three-dimensional (3D)/two-dimensional (2D) approach that used both the Monte Carlo N-Particle Extended (MCNPX) 3D Monte Carlo code and the Discrete Ordinates Transport (DORT) two-dimensional deterministic code. The analysis with a proton beam energy of 1.3 GeV showed that the dose rate in continuously occupied areas on the lateral surface outside the SNS target station shielding monolith is less than 0.25 mrem/h, which complies with the SNS facility design objective. However, the methods and codes used in this analysis are out of date and unsupported, and the 2D approximation of the target shielding monolith does not accurately represent the geometry. We recommend that this analysis be updated with modern codes and libraries such as ADVANTG or SHIFT. These codes have demonstrated very high efficiency in performing full 3D radiation shielding analyses of similar and even more difficult problems.

  6. Application of Adjoint Method and Spectral-Element Method to Tomographic Inversion of Regional Seismological Structure Beneath Japanese Islands

    NASA Astrophysics Data System (ADS)

    Tsuboi, S.; Miyoshi, T.; Obayashi, M.; Tono, Y.; Ando, K.

    2014-12-01

    Recent progress in large-scale computing using waveform modeling techniques and high performance computing facilities has demonstrated the possibility of performing full-waveform inversion of three-dimensional (3D) seismological structure inside the Earth. We apply the adjoint method (Liu and Tromp, 2006) to obtain the 3D structure beneath the Japanese Islands. First, we implemented the Spectral-Element Method on the K computer in Kobe, Japan. We optimized SPECFEM3D_GLOBE (Komatitsch and Tromp, 2002) using OpenMP so that the code fits the hybrid architecture of the K computer. We can now use 82,134 nodes of the K computer (657,072 cores) to compute synthetic waveforms with about 1 s accuracy for a realistic 3D Earth model, and the performance was 1.2 PFLOPS. We use this optimized SPECFEM3D_GLOBE code, take one chunk around the Japanese Islands from the global mesh, and compute synthetic seismograms with an accuracy of about 10 seconds. We use the GAP-P2 mantle tomography model (Obayashi et al., 2009) as an initial 3D model and use as many broadband seismic stations available in this region as possible to perform the inversion. We then use the time windows for body waves and surface waves to compute adjoint sources and calculate adjoint kernels for the seismic structure. We have performed several iterations and obtained an improved 3D structure beneath the Japanese Islands. The result demonstrates that waveform misfits between observed and theoretical seismograms improve as the iteration proceeds. We are now preparing to use much shorter periods in our synthetic waveform computations and to obtain seismic structure for basin-scale models, such as the Kanto basin, where there is a dense seismic network and high seismic activity. Acknowledgements: This research was partly supported by the MEXT Strategic Program for Innovative Research. We used F-net seismograms of the National Research Institute for Earth Science and Disaster Prevention.

  7. Transport calculations and accelerator experiments needed for radiation risk assessment in space.

    PubMed

    Sihver, Lembit

    2008-01-01

    The major uncertainties in space radiation risk estimates for humans are associated with the poor knowledge of the biological effects of low- and high-LET radiation, with a smaller contribution coming from the characterization of the space radiation field and its primary interactions with the shielding and the human body. However, to decrease the uncertainties in the biological effects and increase the accuracy of the risk coefficients for charged-particle radiation, the initial charged-particle spectra from the Galactic Cosmic Rays (GCRs) and the Solar Particle Events (SPEs), and the radiation transport through the shielding material of the space vehicle and the human body, must be better estimated. Since it is practically impossible to measure all primary and secondary particles from all possible position-projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy ion transport codes must be used. These codes are also needed when estimating the risk for radiation-induced failures in advanced microelectronics, such as single-event effects, and the efficiency of different shielding materials. It is therefore important that the models and transport codes be carefully benchmarked and validated to make sure they fulfill preset accuracy criteria, e.g. the ability to predict particle fluence, dose and energy distributions within a certain accuracy. When validating the accuracy of the transport codes, both space- and ground-based accelerator experiments are needed. The efficiency of passive shielding and protection of electronic devices should also be tested in accelerator experiments and compared to simulations using different transport codes. In this paper different multipurpose particle and heavy ion transport codes are presented, different concepts of shielding and protection are discussed, as well as future accelerator experiments needed for testing and validating codes and shielding materials.

  8. A Radiation Shielding Code for Spacecraft and Its Validation

    NASA Technical Reports Server (NTRS)

    Shinn, J. L.; Cucinotta, F. A.; Singleterry, R. C.; Wilson, J. W.; Badavi, F. F.; Badhwar, G. D.; Miller, J.; Zeitlin, C.; Heilbronn, L.; Tripathi, R. K.

    2000-01-01

    The HZETRN code, which uses a deterministic approach pioneered at NASA Langley Research Center, has been developed over the past decade to evaluate the local radiation fields within sensitive materials (electronic devices and human tissue) on spacecraft in the space environment. The code describes the interactions of shield materials with the incident galactic cosmic rays, trapped protons, or energetic protons from solar particle events in free space and low Earth orbit. The content of incident radiations is modified by atomic and nuclear reactions with the spacecraft and radiation shield materials. High-energy heavy ions are fragmented into less massive reaction products, and reaction products are produced by direct knockout of shield constituents or from de-excitation products. An overview of the computational procedures and database which describe these interactions is given. Validation of the code with recent Monte Carlo benchmarks, and laboratory and flight measurement is also included.

  9. Design of orbital debris shields for oblique hypervelocity impact

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.

    1994-01-01

    A new impact debris propagation code was written to link CTH simulations of space debris shield perforation to the Lagrangian finite element code DYNA3D, for space structure wall impact simulations. This software (DC3D) simulates debris cloud evolution using a nonlinear elastic-plastic deformable particle dynamics model, and renders computationally tractable the supercomputer simulation of oblique impacts on Whipple shield protected structures. Comparison of three dimensional, oblique impact simulations with experimental data shows good agreement over a range of velocities of interest in the design of orbital debris shielding. Source code developed during this research is provided on the enclosed floppy disk. An abstract based on the work described was submitted to the 1994 Hypervelocity Impact Symposium.

  10. A Fast Code for Jupiter Atmospheric Entry

    NASA Technical Reports Server (NTRS)

    Tauber, Michael E.; Wercinski, Paul; Yang, Lily; Chen, Yih-Kanq; Arnold, James (Technical Monitor)

    1998-01-01

    A fast code was developed to calculate the forebody heating environment and the heat shielding required for Jupiter atmospheric entry probes. A carbon phenolic heat shield material was assumed and, since computational efficiency was a major goal, analytic expressions were used, primarily, to calculate the heating, ablation and the required insulation. The code was verified by comparison with flight measurements from the Galileo probe's entry; the calculation required 3.5 s of CPU time on a workstation. The computed surface recessions from ablation were compared with the flight values at six body stations; on average, the predicted recession was 12.5% higher than the measured values. The forebody's mass loss was overpredicted by 5.5%, and the heat shield mass was calculated to be 15% less than the probe's actual heat shield. However, the calculated heat shield mass did not include contingencies for the various uncertainties that must be considered in the design of probes. Therefore, the agreement with the Galileo probe's values was considered satisfactory, especially in view of the code's fast running time and the methods' approximations.

  11. Shielding Analyses for VISION Beam Line at SNS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popova, Irina; Gallmeier, Franz X

    2014-01-01

    Full-scale neutron and gamma transport analyses were performed to design the shielding around the VISION beam line: the instrument shielding enclosure, the beam stop, and the secondary shutter, including a temporary beam stop for the still-closed neighboring beam line, in order to meet the requirement of dose rates below 0.25 mrem/h at 30 cm from the shielding surface. The beam stop and temporary beam stop analyses were performed with the discrete ordinates code DORT in addition to Monte Carlo analyses with the MCNPX code. A comparison of the results is presented.

  12. Application of perturbation theory to lattice calculations based on method of cyclic characteristics

    NASA Astrophysics Data System (ADS)

    Assawaroongruengchot, Monchai

    Perturbation theory is a technique used for the estimation of changes in performance functionals, such as linear reaction rate ratios and the eigenvalue, caused by small variations in reactor core compositions. Here the algorithm of perturbation theory is developed for the multigroup integral neutron transport problems in 2D fuel assemblies with isotropic scattering. The integral transport equation is used in the perturbative formulation because it represents the interconnecting neutronic systems of the lattice assemblies via the tracking lines. When the integral neutron transport equation is used in the formulation, one needs to solve the resulting integral transport equations for the flux importance and generalized flux importance functions. The relationship between the generalized flux importance and generalized source importance functions is defined in order to transform the generalized flux importance transport equations into the integro-differential equations for the generalized adjoints. Next we develop the adjoint and generalized adjoint transport solution algorithms based on the method of cyclic characteristics (MOCC) in the DRAGON code. In the MOCC method, the adjoint characteristics equations associated with a cyclic tracking line are formulated in such a way that a closed form for the adjoint angular function can be obtained. The MOCC method then requires only one cycle of scanning over the cyclic tracking lines in each spatial iteration. We also show that the source importance function obtained by the CP method is mathematically equivalent to the adjoint function obtained by the MOCC method. In order to speed up the MOCC solution algorithm, group-reduction and group-splitting techniques based on the structure of the adjoint scattering matrix are implemented. A combined forward flux/adjoint function iteration scheme, based on the group-splitting technique and the common use of a large number of variables storing tracking-line data and exponential values, is proposed to reduce the computing time when both direct and adjoint solutions are required. A problem that arises for the generalized adjoint problem is that the direct use of the negative external generalized adjoint sources in the adjoint solution algorithm results in negative generalized adjoint functions. A coupled flux biasing/decontamination scheme is applied to make the generalized adjoint functions positive, using the adjoint functions in such a way that the result can be used for the multigroup rebalance technique. Next we consider the application of perturbation theory to reactor problems. Since the coolant void reactivity (CVR) is an important factor in reactor safety analysis, we have selected this parameter for optimization studies. We consider the optimization and adjoint sensitivity techniques for the adjustment of CVR at beginning of burnup cycle (BOC) and keff at end of burnup cycle (EOC) for a 2D Advanced CANDU Reactor (ACR) lattice. The sensitivity coefficients are evaluated using perturbation theory based on the integral transport equations. Three sets of parameters for CVR-BOC and keff-EOC adjustments are studied: (1) Dysprosium density in the central pin with Uranium enrichment in the outer fuel rings, (2) Dysprosium density and Uranium enrichment both in the central pin, and (3) the same parameters as in the first case but with the objective of obtaining a negative checkerboard CVR at beginning of cycle (CBCVR-BOC).
To approximate the sensitivity coefficient at EOC, we perform constant-power burnup/depletion calculations for 600 full power days (FPD) using a slightly perturbed nuclear library and the unperturbed neutron fluxes to estimate the variation of nuclide densities at EOC. Sensitivity analyses of CVR and eigenvalue are included in the study. In addition the optimization and adjoint sensitivity techniques are applied to the CBCVR-BOC and keff-EOC adjustment of the ACR lattices with Gadolinium in the central pin. Finally we apply these techniques to the CVR-BOC, CVR-EOC and keff-EOC adjustment of a CANDU lattice of which the burnup period is extended from 300 to 450 FPDs. The cases with the central pin containing either Dysprosium or Gadolinium in the natural Uranium are considered in our study. (Abstract shortened by UMI.)
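
    The first-order perturbation estimate that underlies these sensitivity coefficients can be sketched on a small matrix eigenvalue problem, with the dominant eigenvalue standing in for keff, the right eigenvector for the flux, and the left eigenvector for the adjoint (importance) function. The 3x3 matrix and the perturbation below are hypothetical and purely illustrative, not multigroup transport operators.

```python
import numpy as np

# Hypothetical 3x3 stand-in for the lattice operator; the dominant eigenvalue
# plays the role of keff, the right eigenvector the flux, the left eigenvector
# the adjoint (importance) function.
A = np.array([[2.0, 0.3, 0.1],
              [0.2, 1.5, 0.4],
              [0.1, 0.2, 1.0]])

w, V = np.linalg.eig(A)
wl, U = np.linalg.eig(A.T)
k = np.argmax(w.real)
x = V[:, k].real                               # "flux"
y = U[:, np.argmin(np.abs(wl - w[k]))].real    # "adjoint"

dA = np.zeros_like(A)
dA[0, 0] = 1e-3                                # small perturbation of one "cross section"

# First-order perturbation theory vs. direct recomputation of the eigenvalue.
dlam_first_order = (y @ dA @ x) / (y @ x)
dlam_direct = np.max(np.linalg.eigvals(A + dA).real) - w[k].real
print(dlam_first_order, dlam_direct)
```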

  13. Adjoint Sensitivity Computations for an Embedded-Boundary Cartesian Mesh Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2006-01-01

    Cartesian-mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric Computer-Aided Design (CAD) tools. Our goal is to combine the automation capabilities of Cartesian methods with an efficient computation of design sensitivities. We address this issue using the adjoint method, where the computational cost of the design sensitivities, or objective function gradients, is essentially independent of the number of design variables. In previous work, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm included the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The objective of the present work is to extend our adjoint formulation to problems involving general shape changes. Central to this development is the computation of volume-mesh sensitivities to obtain a reliable approximation of the objective function gradient. Motivated by the success of mesh-perturbation schemes commonly used in body-fitted unstructured formulations, we propose an approach based on a local linearization of a mesh-perturbation scheme similar to the spring analogy. This approach circumvents most of the difficulties that arise due to non-smooth changes in the cut-cell layer as the boundary shape evolves and provides a consistent approximation to the exact gradient of the discretized objective function. A detailed gradient accuracy study is presented to verify our approach. Thereafter, we focus on a shape optimization problem for an Apollo-like reentry capsule. The optimization seeks to enhance the lift-to-drag ratio of the capsule by modifying the shape of its heat shield in conjunction with a center-of-gravity (c.g.) offset. This multi-point and multi-objective optimization problem is used to demonstrate the overall effectiveness of the Cartesian adjoint method for addressing the issues of complex aerodynamic design. This abstract presents only a brief outline of the numerical method and results; full details will be given in the final paper.

  14. Hybrid Monte Carlo/deterministic methods for radiation shielding problems

    NASA Astrophysics Data System (ADS)

    Becker, Troy L.

    For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations. Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods can be used to achieve user-specified Monte Carlo distributions. Overall, the Transform approach performed more efficiently than the weight window methods, but it performed much more efficiently for source-detector problems than for global problems.
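
    The weight-window idea discussed above can be sketched for a 1-D monoenergetic slab transmission problem: particle weights are kept inside a window defined by an assumed importance function, splitting heavy particles and playing Russian roulette with light ones. The cross sections, importance function, and window ratio below are hypothetical, and the sketch omits everything (energy, angle, full geometry) that a real multigroup code would treat.

```python
import numpy as np

rng = np.random.default_rng(1)
SIGMA_T, SIGMA_A = 1.0, 0.8        # hypothetical total/absorption cross sections (1/cm)
THICKNESS = 10.0                   # slab thickness (cm)

def target_weight(x):
    # Hypothetical weight-window center: deep particles are more "important",
    # so their target weight is lower.
    return np.exp(-0.5 * x)

def transmission(n_histories, window_ratio=3.0):
    # Transmission estimate with survival biasing plus a simple weight window.
    leak = 0.0
    for _ in range(n_histories):
        stack = [(0.0, 1.0)]                         # (position, weight)
        while stack:
            x, w = stack.pop()
            while True:
                x += -np.log(rng.random()) / SIGMA_T     # sample flight distance
                if x >= THICKNESS:
                    leak += w                            # particle leaks out: tally it
                    break
                w *= 1.0 - SIGMA_A / SIGMA_T             # survival biasing (implicit capture)
                wc = target_weight(x)
                if w > window_ratio * wc:                # too heavy: split
                    n = int(w / wc)
                    stack.extend([(x, w / n)] * (n - 1))
                    w /= n
                elif w < wc / window_ratio:              # too light: Russian roulette
                    if rng.random() < w / wc:
                        w = wc
                    else:
                        break
    return leak / n_histories

print("transmission estimate:", transmission(2000))
```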

  15. Development of CO2 inversion system based on the adjoint of the global coupled transport model

    NASA Astrophysics Data System (ADS)

    Belikov, Dmitry; Maksyutov, Shamil; Chevallier, Frederic; Kaminski, Thomas; Ganshin, Alexander; Blessing, Simon

    2014-05-01

    We present the development of an inverse modeling system employing an adjoint of the global coupled transport model consisting of the National Institute for Environmental Studies (NIES) Eulerian transport model (TM) and the Lagrangian plume diffusion model (LPDM) FLEXPART. NIES TM is a three-dimensional atmospheric transport model, which solves the continuity equation for a number of atmospheric tracers on a grid spanning the entire globe. Spatial discretization is based on a reduced latitude-longitude grid and a hybrid sigma-isentropic coordinate in the vertical. NIES TM uses a horizontal resolution of 2.5°×2.5°. However, to resolve synoptic-scale tracer distributions and to have the ability to optimize fluxes at resolutions of 0.5° and higher, we coupled NIES TM with the Lagrangian model FLEXPART. The Lagrangian component of the forward and adjoint models uses precalculated responses of the observed concentration to the surface fluxes and the 3-D concentration field simulated with the FLEXPART model. NIES TM and FLEXPART are driven by the JRA-25/JCDAS reanalysis dataset. Construction of the adjoint of the Lagrangian part is less complicated, as LPDMs calculate the sensitivity of measurements to the surrounding emission field by tracking a large number of "particles" backwards in time. Development of the adjoint of the Eulerian part was performed with the automatic differentiation tool Transformation of Algorithms in Fortran (TAF) (http://www.FastOpt.com). This method leads to the discrete adjoint of NIES TM. The main advantage of the discrete adjoint is that the resulting gradients of the numerical cost function are exact, even for nonlinear algorithms. The overall advantages of our method are that: 1. No code modification of the Lagrangian model is required, making it applicable to a combination of the global NIES TM and any Lagrangian model; 2. Once run, the Lagrangian output can be applied to any chemically neutral gas; 3. High-resolution results can be obtained over limited regions close to the monitoring sites (using the LPDM part) and at coarse resolution for the rest of the globe (using the Eulerian part), minimizing aggregation errors and computation cost. The adjoint of the coupled high-resolution Eulerian-Lagrangian model will be incorporated into the PYVAR CO2 variational inverse system (Chevallier et al., 2005). Chevallier, F., Fisher, M., Peylin, P., Serrar, S., Bousquet, P., Bréon, F.-M., Chédin, A., and Ciais, P.: Inferring CO2 sources and sinks from satellite observations: method and application to TOVS data, J. Geophys. Res., 110, D24309, doi:10.1029/2005JD006390, 2005.
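
    The role of the discrete adjoint in such an inversion can be sketched with a purely linear toy problem: a quadratic cost function measuring the misfit between modeled and observed concentrations, whose gradient with respect to the surface fluxes is obtained by applying the transposed operators in reverse order. The matrices below are random hypothetical stand-ins for the transport and observation operators, not NIES TM or FLEXPART quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
n_flux, n_obs = 12, 4

# Hypothetical linear stand-ins for the transport model (M) and the sampling of
# concentrations at monitoring sites (H); y is a synthetic observation vector.
M = np.eye(n_flux) + 0.1 * rng.normal(size=(n_flux, n_flux))
H = rng.normal(size=(n_obs, n_flux))
y = rng.normal(size=n_obs)

def cost(s):
    # Quadratic misfit between modeled and observed concentrations.
    r = H @ (M @ s) - y
    return 0.5 * r @ r

def adjoint_gradient(s):
    # One forward run, then the misfit is propagated backwards through H^T and M^T.
    r = H @ (M @ s) - y
    return M.T @ (H.T @ r)

s0 = rng.normal(size=n_flux)
g = adjoint_gradient(s0)

# Spot-check one component of the gradient with a central finite difference.
e3 = np.zeros(n_flux); e3[3] = 1.0
h = 1e-6
print(g[3], (cost(s0 + h * e3) - cost(s0 - h * e3)) / (2 * h))
```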

  16. A new zonation algorithm with parameter estimation using hydraulic head and subsidence observations.

    PubMed

    Zhang, Meijing; Burbey, Thomas J; Nunes, Vitor Dos Santos; Borggaard, Jeff

    2014-01-01

    Parameter estimation codes such as UCODE_2005 are becoming well-known tools in groundwater modeling investigations. These programs estimate important parameter values such as transmissivity (T) and aquifer storage values (Sa) from known observations of hydraulic head, flow, or other physical quantities. One drawback inherent in these codes is that the parameter zones must be specified by the user. However, such knowledge is often unavailable even if a detailed hydrogeological description is available. To overcome this deficiency, we present a discrete adjoint algorithm for identifying suitable zonations from hydraulic head and subsidence measurements, which are highly sensitive to both elastic (Sske) and inelastic (Sskv) skeletal specific storage coefficients. With the advent of interferometric synthetic aperture radar (InSAR), distributed spatial and temporal subsidence measurements can be obtained. A synthetic conceptual model containing seven transmissivity zones, one aquifer storage zone and three interbed zones for elastic and inelastic storage coefficients was developed to simulate drawdown and subsidence in an aquifer interbedded with clay that exhibits delayed drainage. Simulated delayed land subsidence and groundwater head data are assumed to be the observed measurements, to which the discrete adjoint algorithm is applied to create approximate spatial zonations of T, Sske, and Sskv. UCODE_2005 is then used to obtain the final optimal parameter values. Calibration results indicate that the estimated zonations calculated from the discrete adjoint algorithm closely approximate the true parameter zonations. This automated algorithm reduces the bias established by the initial distribution of zones and provides a robust parameter zonation distribution. © 2013, National Ground Water Association.

  17. Building A New Kind of Graded-Z Shield for Swift's Burst Alert Telescope

    NASA Technical Reports Server (NTRS)

    Robinson, David W.

    2002-01-01

    The Burst Alert Telescope (BAT) on Swift has a graded-Z shield that closes out the volume between the coded aperture mask and the Cadmium-Zinc-Telluride (CZT) detector array. The purpose of the 37 kilogram shield is to attenuate gamma rays that have not penetrated the coded aperture mask of the BAT instrument and are therefore a major source of noise on the detector array. Unlike previous shields made from plates and panels, this shield consists of multiple layers of thin metal foils (lead, tantalum, tin, and copper) that are stitched together much like standard multi-layer insulation blankets. The shield sections are fastened around BAT, forming a curtain around the instrument aperture. Strength tests were performed to validate and improve the design, and the shield will be vibration tested along with BAT in late 2002. Practical aspects such as the layup design, methods of manufacture, and testing of this new kind of graded-Z shield are presented.

  18. Lessons Learned from Inlet Integration Analysis of NASA's Low Boom Flight Demonstrator

    NASA Technical Reports Server (NTRS)

    Friedlander, David; Heath, Christopher; Castner, Ray

    2017-01-01

    In 2016, NASA's Aeronautics Research Mission Directorate announced the New Aviation Horizons Initiative with a goal of designing and building several X-Planes, including a Low Boom Flight Demonstrator (LBFD). That same year, NASA awarded a contract to Lockheed Martin (LM) to advance the LBFD concept through preliminary design. Several configurations of the LBFD aircraft were analyzed by both LM engineers and NASA researchers. This presentation focuses on some of the CFD simulations that were run by NASA Glenn researchers. NASA's FUN3D V13.1 code was used for all adjoint-based grid refinement studies, and the Spalart-Allmaras turbulence model was used during adaptation. It was found that adjoint-based grid adaptation did not accurately capture inlet performance for high-speed top-aft-mounted propulsion.

  19. Linear energy transfer in water phantom within SHIELD-HIT transport code

    NASA Astrophysics Data System (ADS)

    Ergun, A.; Sobolevsky, N.; Botvina, A. S.; Buyukcizmeci, N.; Latysheva, L.; Ogul, R.

    2017-02-01

    The effect of irradiation on tissue is important in hadron therapy for dose measurement and treatment planning. This biological effect is characterized by an equivalent dose H, which depends on the Linear Energy Transfer (LET). Usually, H can be expressed in terms of the absorbed dose D and the quality factor K of the radiation under consideration. In the literature, various types of transport codes have been used for modeling and simulation of the interaction of beams of protons and heavier ions with tissue-equivalent materials. In this presentation we used the SHIELD-HIT code to simulate the decomposition of the absorbed dose by LET in water for 16O beams. A more detailed description of the capabilities of the SHIELD-HIT code can be found in the literature.

  20. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints in the Spectral-Element Solver Nek5000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schanen, Michel; Marin, Oana; Zhang, Hong

    Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
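
    The storage/recomputation trade-off behind checkpointing can be sketched for a scalar time-stepping problem: keep a checkpoint every few steps during the forward sweep, then recompute each segment from its checkpoint during the reverse (adjoint) sweep. This is a plain single-level, evenly spaced scheme under hypothetical settings, far simpler than the asynchronous two-level disk/memory scheme of the paper, but it shows the mechanism such schemes optimize.

```python
import numpy as np

def step(u, dt=0.01):
    # One explicit time step of a toy nonlinear ODE du/dt = -u^3.
    return u - dt * u**3

def step_adjoint(u, lam, dt=0.01):
    # Adjoint of one step: lam_n = (d step / d u)(u_n) * lam_{n+1}.
    return lam * (1.0 - 3.0 * dt * u**2)

def adjoint_with_checkpoints(u0, n_steps, stride):
    # Gradient of J = 0.5*u_N^2 with respect to u0, storing a checkpoint every `stride` steps.
    checkpoints = {0: u0}
    u = u0
    for n in range(n_steps):                      # forward sweep: keep only checkpoints
        u = step(u)
        if (n + 1) % stride == 0:
            checkpoints[n + 1] = u
    lam = u                                       # dJ/du_N for J = 0.5*u_N^2
    for n in reversed(range(n_steps)):            # reverse sweep: recompute each segment
        base = (n // stride) * stride
        useg = checkpoints[base]
        for _ in range(base, n):
            useg = step(useg)                     # recompute u_n from the checkpoint
        lam = step_adjoint(useg, lam)
    return lam

def J(u0, n_steps=200):
    u = u0
    for _ in range(n_steps):
        u = step(u)
    return 0.5 * u**2

g = adjoint_with_checkpoints(1.0, 200, stride=20)
h = 1e-6
print(g, (J(1.0 + h) - J(1.0 - h)) / (2 * h))     # finite-difference check of dJ/du0
```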

  1. A Deterministic Transport Code for Space Environment Electrons

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.

    2010-01-01

    A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.

  2. Thick Galactic Cosmic Radiation Shielding Using Atmospheric Data

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Nurge, Mark A.; Starr, Stanley O.; Koontz, Steven L.

    2013-01-01

    NASA is concerned with protecting astronauts from the effects of galactic cosmic radiation and has expended substantial effort in the development of computer models to predict the shielding obtained from various materials. However, these models were only developed for shields up to about 120 g/cm2 in thickness and have predicted that shields of this thickness are insufficient to provide adequate protection for extended deep space flights. Consequently, effort is underway to extend the range of these models to thicker shields, and experimental data are required to help confirm the resulting code. In this paper, empirically obtained effective dose measurements from aircraft flights in the atmosphere are used to obtain the radiation shielding function of the Earth's atmosphere, a very thick shield. Obtaining this result required solving an inverse problem, and the method for solving it is presented. The results are shown to be in agreement with current code in the ranges where they overlap. These results are then checked and used to predict the radiation dosage under thick shields such as planetary regolith and the atmosphere of Venus.

  3. Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators

    NASA Astrophysics Data System (ADS)

    Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.

    2015-12-01

    Seismic wave propagation codes are essential tools to investigate a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients for improving the resolution of tomographic images to answer important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited due to high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted by additional hardware accelerators, such as AMD graphics cards, ARM-based processors, and Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST), which allows us to use meta-programming of all computational kernels for forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both the CUDA and OpenCL languages within the source code package. Thus, seismic wave simulations are now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.

  4. 3D tomographic reconstruction using geometrical models

    NASA Astrophysics Data System (ADS)

    Battle, Xavier L.; Cunningham, Gregory S.; Hanson, Kenneth M.

    1997-04-01

    We address the issue of reconstructing an object of constant interior density in the context of 3D tomography where there is prior knowledge about the unknown shape. We explore the direct estimation of the parameters of a chosen geometrical model from a set of radiographic measurements, rather than performing operations (segmentation for example) on a reconstructed volume. The inverse problem is posed in the Bayesian framework. A triangulated surface describes the unknown shape and the reconstruction is computed with a maximum a posteriori (MAP) estimate. The adjoint differentiation technique computes the derivatives needed for the optimization of the model parameters. We demonstrate the usefulness of the approach and emphasize the techniques of designing forward and adjoint codes. We use the system response of the University of Arizona Fast SPECT imager to illustrate this method by reconstructing the shape of a heart phantom.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnack, D.D.; Lottati, I.; Mikic, Z.

    The authors describe TRIM, an MHD code which uses a finite volume discretization of the MHD equations on an unstructured adaptive grid of triangles in the poloidal plane. They apply it to problems related to modeling tokamak toroidal plasmas. The toroidal direction is treated by a pseudospectral method. Care was taken to center variables appropriately on the mesh and to construct a self-adjoint diffusion operator for cell-centered variables.

  6. Computer program optimizes design of nuclear radiation shields

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.

    1971-01-01

    Computer program, OPEX 2, determines minimum weight, volume, or cost for shields. Program incorporates improved coding, simplified data input, spherical geometry, and an expanded output. Method is capable of altering dose-thickness relationship when a shield layer has been removed.

  7. Seismic wave-speed structure beneath the metropolitan area of Japan based on adjoint tomography

    NASA Astrophysics Data System (ADS)

    Miyoshi, T.; Obayashi, M.; Tono, Y.; Tsuboi, S.

    2015-12-01

    We have obtained a three-dimensional (3D) model of the seismic wave-speed structure beneath the metropolitan area of Japan. We applied the spectral-element method (e.g. Komatitsch and Tromp 1999) and the adjoint method (Liu and Tromp 2006) to broadband seismograms in order to infer the 3D model. We used a travel-time tomography result (Matsubara and Obara 2011) as the initial 3D model and used broadband waveforms recorded at the NIED F-net stations. We selected 147 earthquakes with magnitudes larger than 4.5 from the F-net earthquake catalog and used their bandpass-filtered seismograms between 5 and 20 seconds with a high S/N ratio. The 3D model used for the forward and adjoint simulations is represented as a region of approximately 500 by 450 km horizontally and 120 km in depth. The minimum period of the theoretical waveforms was 4.35 seconds. For the adjoint inversion, we picked the windows of the body waves from the observed and theoretical seismograms. We used the SPECFEM3D_Cartesian code (e.g. Peter et al. 2011) for the forward and adjoint simulations, which were run on the K computer at RIKEN. Each iteration required at least about 0.1 million CPU hours. The model parameters Vp and Vs were updated by using the steepest descent method. We obtained the fourth iterative model (M04), which reproduced observed waveforms better than the initial model. The shear wave speed of M04 was significantly smaller than that of the initial model at all depths. The compressional wave-speed model was not improved by the inversion because of small alpha kernel values. Acknowledgements: This research was partly supported by the MEXT Strategic Program for Innovative Research. We thank the NIED for providing seismological data.

  8. LPT. Shield test facility (TAN645 and 646). Calibration lab shield ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LPT. Shield test facility (TAN-645 and -646). Calibration lab shield door. Ralph M. Parsons 1229-17 ANP/GE-6-645-MS-1. April 1957. Approved by INEEL Classification Office for public release. INEEL index code no. 037-0645-40-693-107369 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  9. Numerical Computation of Sensitivities and the Adjoint Approach

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael

    1997-01-01

    We discuss the numerical computation of sensitivities via the adjoint approach in optimization problems governed by differential equations. We focus on the adjoint problem in its weak form. We show how one can avoid some of the problems with the adjoint approach, such as deriving suitable boundary conditions for the adjoint equation. We discuss the convergence of numerical approximations of the costate computed via the weak form of the adjoint problem and show the significance for the discrete adjoint problem.
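
    A minimal sketch of the adjoint approach for a discrete, parameterized linear system: one adjoint solve yields the sensitivity of an output functional to the parameter, verified against a finite difference. The matrix, right-hand side, and functional below are hypothetical placeholders, not quantities from the paper.

```python
import numpy as np

def A_of_p(p):
    # Hypothetical parameter-dependent system matrix (e.g. a small discretized operator).
    return np.array([[2.0 + p, -1.0,      0.0],
                     [-1.0,     2.0 + p, -1.0],
                     [0.0,     -1.0,      2.0]])

dA_dp = np.diag([1.0, 1.0, 0.0])        # derivative of A with respect to p
b = np.array([1.0, 0.0, 0.5])           # fixed right-hand side
c = np.array([0.0, 1.0, 0.0])           # output functional J(u) = c^T u

def J(p):
    return c @ np.linalg.solve(A_of_p(p), b)

p0 = 0.3
u = np.linalg.solve(A_of_p(p0), b)       # one forward solve
lam = np.linalg.solve(A_of_p(p0).T, c)   # one adjoint solve, independent of the number of parameters
dJ_dp_adjoint = -lam @ (dA_dp @ u)       # dJ/dp = -lambda^T (dA/dp) u

h = 1e-6
print(dJ_dp_adjoint, (J(p0 + h) - J(p0 - h)) / (2 * h))
```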

  10. Shielding calculation and criticality safety analysis of spent fuel transportation cask in research reactors.

    PubMed

    Mohammadi, A; Hassanzadeh, M; Gharib, M

    2016-02-01

    In this study, shielding calculations and a criticality safety analysis were carried out for a generic materials testing reactor (MTR) research reactor interim storage and the associated transportation cask. During these processes, three major tasks were considered: source term, shielding, and criticality calculations. The Monte Carlo transport code MCNP5 was used for the shielding calculation and criticality safety analysis, and the ORIGEN2.1 code for the source term calculation. According to the results obtained, a cylindrical cask with body, top, and bottom thicknesses of 18, 13, and 13 cm, respectively, was accepted as the dual-purpose cask. Furthermore, it is shown that the total dose rates are below the normal transport criteria, meeting the specified standards. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Introduction to Adjoint Models

    NASA Technical Reports Server (NTRS)

    Errico, Ronald M.

    2015-01-01

    In this lecture, some fundamentals of adjoint models will be described. This includes a basic derivation of tangent linear and corresponding adjoint models from a parent nonlinear model, the interpretation of adjoint-derived sensitivity fields, a description of methods of automatic differentiation, and the use of adjoint models to solve various optimization problems, including singular vectors. Concluding remarks will attempt to correct common misconceptions about adjoint models and their utilization.
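
    The relationship between a tangent linear model and its adjoint, together with the standard dot-product (adjoint consistency) check, can be sketched for a two-variable toy nonlinear model; the model below is hypothetical and hand-linearized, not one of the lecture's examples.

```python
import numpy as np

def model(x):
    # Toy nonlinear "parent" model M(x) with two inputs and two outputs.
    return np.array([x[0] * x[1], np.sin(x[1]) + x[0]**2])

def tangent_linear(x, dx):
    # Action of the Jacobian M'(x) on a perturbation dx.
    return np.array([x[1] * dx[0] + x[0] * dx[1],
                     2.0 * x[0] * dx[0] + np.cos(x[1]) * dx[1]])

def adjoint(x, dy):
    # Action of the transposed Jacobian M'(x)^T on a sensitivity dy.
    return np.array([x[1] * dy[0] + 2.0 * x[0] * dy[1],
                     x[0] * dy[0] + np.cos(x[1]) * dy[1]])

rng = np.random.default_rng(7)
x, dx, dy = rng.normal(size=(3, 2))
# Dot-product test: <M'(x) dx, dy> must equal <dx, M'(x)^T dy>.
print(tangent_linear(x, dx) @ dy, dx @ adjoint(x, dy))
```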

  12. Adjoint-Based Methodology for Time-Dependent Optimal Control (AMTOC)

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail; Diskin, Boris; Nishikawa, Hiroaki

    2012-01-01

    During the five years of this project, the AMTOC team developed an adjoint-based methodology for design and optimization of complex time-dependent flows, implemented AMTOC in a testbed environment, directly assisted in the implementation of this methodology in NASA's state-of-the-art unstructured CFD code FUN3D, and successfully demonstrated applications of this methodology to large-scale optimization of several supersonic and other aerodynamic systems, such as fighter jet, subsonic aircraft, rotorcraft, high-lift, wind-turbine, and flapping-wing configurations. In the course of this project, the AMTOC team published 13 refereed journal articles, 21 refereed conference papers, and 2 NIA reports. The AMTOC team presented the results of this research at 36 international and national conferences, meetings and seminars, including the International Conference on CFD and numerous AIAA conferences and meetings. Selected publications that include the major results of the AMTOC project are enclosed in this report.

  13. Toward Automatic Verification of Goal-Oriented Flow Simulations

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2014-01-01

    We demonstrate the power of adaptive mesh refinement with adjoint-based error estimates in verification of simulations governed by the steady Euler equations. The flow equations are discretized using a finite volume scheme on a Cartesian mesh with cut cells at the wall boundaries. The discretization error in selected simulation outputs is estimated using the method of adjoint-weighted residuals. Practical aspects of the implementation are emphasized, particularly in the formulation of the refinement criterion and the mesh adaptation strategy. Following a thorough code verification example, we demonstrate simulation verification of two- and three-dimensional problems. These involve an airfoil performance database, a pressure signature of a body in supersonic flow and a launch abort with strong jet interactions. The results show reliable estimates and automatic control of discretization error in all simulations at an affordable computational cost. Moreover, the approach remains effective even when theoretical assumptions, e.g., steady-state and solution smoothness, are relaxed.
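
    The adjoint-weighted residual idea can be sketched for a plain linear algebraic system, where the estimate is exact: the error in an output functional equals the adjoint solution dotted with the residual of the approximate solution. The system and the "under-resolved" solution below are hypothetical stand-ins for a coarse discretization, not the Cartesian cut-cell machinery of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
A = 2.0 * np.eye(n) + 0.05 * rng.normal(size=(n, n))   # hypothetical discrete operator
b = rng.normal(size=n)
c = rng.normal(size=n)                                  # output functional J(u) = c^T u

u_exact = np.linalg.solve(A, b)
u_h = u_exact + 1e-3 * rng.normal(size=n)               # stand-in for an under-resolved solution

# Adjoint-weighted residual: with A^T lam = c, the output error is lam^T (b - A u_h),
# exactly for a linear problem and approximately in the nonlinear case.
lam = np.linalg.solve(A.T, c)
residual = b - A @ u_h
print("true output error      :", c @ (u_exact - u_h))
print("adjoint-weighted resid.:", lam @ residual)
```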

  14. Simulation of Hypervelocity Impact on Aluminum-Nextel-Kevlar Orbital Debris Shields

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.

    2000-01-01

    An improved hybrid particle-finite element method has been developed for hypervelocity impact simulation. The method combines the general contact-impact capabilities of particle codes with the true Lagrangian kinematics of large-strain finite element formulations. Unlike some alternative schemes which couple Lagrangian finite element models with smoothed particle hydrodynamics, the present formulation makes no use of slidelines or penalty forces. The method has been implemented in a parallel, three-dimensional computer code. Simulations of three-dimensional orbital debris impact problems using this parallel hybrid particle-finite element code show good agreement with experiment and good speedup in parallel computation. The simulations included single- and multi-plate shields as well as aluminum and composite shielding materials, at an impact velocity of eleven kilometers per second.

  15. Update on the Code Intercomparison and Benchmark for Muon Fluence and Absorbed Dose Induced by an 18 GeV Electron Beam After Massive Iron Shielding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasso, A.; Ferrari, A.; Ferrari, A.

    In 1974, Nelson, Kase and Svensson published an experimental investigation on muon shielding around SLAC high-energy electron accelerators [1]. They measured the muon fluence and absorbed dose induced by 14 and 18 GeV electron beams hitting a copper/water beam dump and attenuated in a thick steel shield. In their paper, they compared the results with the theoretical models available at that time. In order to compare their experimental results with present model calculations, we use the modern Monte Carlo transport codes MARS15, FLUKA2011 and GEANT4 to model the experimental setup and run simulations. The results are then compared between the codes and with the SLAC data.

  16. Early Results from the Advanced Radiation Protection Thick GCR Shielding Project

    NASA Technical Reports Server (NTRS)

    Norman, Ryan B.; Clowdsley, Martha; Slaba, Tony; Heilbronn, Lawrence; Zeitlin, Cary; Kenny, Sean; Crespo, Luis; Giesy, Daniel; Warner, James; McGirl, Natalie

    2017-01-01

    The Advanced Radiation Protection Thick Galactic Cosmic Ray (GCR) Shielding Project leverages experimental and modeling approaches to validate a predicted minimum in the radiation exposure versus shielding depth curve. Preliminary results of space radiation models indicate that a minimum in the dose equivalent versus aluminum shielding thickness may exist in the 20-30 g/cm2 region. For greater shield thickness, dose equivalent increases due to secondary neutron and light particle production. This result goes against the long held belief in the space radiation shielding community that increasing shielding thickness will decrease risk to crew health. A comprehensive modeling effort was undertaken to verify the preliminary modeling results using multiple Monte Carlo and deterministic space radiation transport codes. These results verified the preliminary findings of a minimum and helped drive the design of the experimental component of the project. In first-of-their-kind experiments performed at the NASA Space Radiation Laboratory, neutrons and light ions were measured between large thicknesses of aluminum shielding. Both an upstream and a downstream shield were incorporated into the experiment to represent the radiation environment inside a spacecraft. These measurements are used to validate the Monte Carlo codes and derive uncertainty distributions for exposure estimates behind thick shielding similar to that provided by spacecraft on a Mars mission. Preliminary results for all aspects of the project will be presented.

  17. Boltzmann Transport Code Update: Parallelization and Integrated Design Updates

    NASA Technical Reports Server (NTRS)

    Heinbockel, J. H.; Nealy, J. E.; DeAngelis, G.; Feldman, G. A.; Chokshi, S.

    2003-01-01

    The ongoing effort to develop a web site for radiation analysis is expected to result in increased usage of the High Charge and Energy Transport Code HZETRN, so it is desirable to perform the requested calculations quickly and efficiently. Therefore the question arose, "Could the implementation of parallel processing speed up the required calculations?" To answer this question, two modifications of the HZETRN computer code were created. The first modification selected the shield materials Al(2219), then polyethylene, and then Al(2219); this modified Fortran code was labeled 1SSTRN.F. The second modification considered the shield materials CO2 and Martian regolith; this modified Fortran code was labeled MARSTRN.F.

  18. Adjoint-Based Sensitivity Maps for the Nearshore

    NASA Astrophysics Data System (ADS)

    Orzech, Mark; Veeramony, Jay; Ngodock, Hans

    2013-04-01

    The wave model SWAN (Booij et al., 1999) solves the spectral action balance equation to produce nearshore wave forecasts and climatologies. It is widely used by the coastal modeling community and is part of a variety of coupled ocean-wave-atmosphere model systems. A variational data assimilation system (Orzech et al., 2013) has recently been developed for SWAN and is presently being transitioned to operational use by the U.S. Naval Oceanographic Office. This system is built around a numerical adjoint to the fully nonlinear, nonstationary SWAN code. When provided with measured or artificial "observed" spectral wave data at a location of interest on a given nearshore bathymetry, the adjoint can compute the degree to which spectral energy levels at other locations are correlated with - or "sensitive" to - variations in the observed spectrum. Adjoint output may be used to construct a sensitivity map for the entire domain, tracking correlations of spectral energy throughout the grid. When access is denied to the actual locations of interest, sensitivity maps can be used to determine optimal alternate locations for data collection by identifying regions of greatest sensitivity in the mapped domain. The present study investigates the properties of adjoint-generated sensitivity maps for nearshore wave spectra. The adjoint and forward SWAN models are first used in an idealized test case at Duck, NC, USA, to demonstrate the system's effectiveness at optimizing forecasts of shallow water wave spectra for an inaccessible surf-zone location. Then a series of simulations is conducted for a variety of different initializing conditions, to examine the effects of seasonal changes in wave climate, errors in bathymetry, and variations in size and shape of the inaccessible region of interest. Model skill is quantified using two methods: (1) a more traditional correlation of observed and modeled spectral statistics such as significant wave height, and (2) a recently developed RMS spectral skill score summed over all frequency-directional bins. The relative advantages and disadvantages of these two methods are considered. References: Booij, N., R.C. Ris, and L.H. Holthuijsen, 1999: A third-generation wave model for coastal regions: 1. Model description and validation. J. Geophys. Res. 104 (C4), 7649-7666. Orzech, M.D., J. Veeramony, and H.E. Ngodock, 2013: A variational assimilation system for nearshore wave modeling. J. Atm. & Oc. Tech., in press.

  19. Adjoint-Based Implicit Uncertainty Analysis for Figures of Merit in a Laser Inertial Fusion Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seifried, J E; Fratoni, M; Kramer, K J

    A primary purpose of computational models is to inform design decisions, and in order to make those decisions reliably, the confidence in the results of such models must be estimated. Monte Carlo neutron transport models are common tools for reactor designers. These types of models contain several sources of uncertainty that propagate onto the model predictions. Two uncertainties worthy of note are (1) experimental and evaluation uncertainties of the nuclear data that inform all neutron transport models and (2) statistical counting precision, which affects all results of a Monte Carlo code. Adjoint-based implicit uncertainty analyses allow for the consideration of any number of uncertain input quantities and their effects upon the confidence of figures of merit with only a handful of forward and adjoint transport calculations. When considering a rich set of uncertain inputs, adjoint-based methods remain hundreds of times more computationally efficient than direct Monte Carlo methods. The LIFE (Laser Inertial Fusion Energy) engine is a concept being developed at Lawrence Livermore National Laboratory. Various options exist for the LIFE blanket, depending on the mission of the design. The depleted uranium hybrid LIFE blanket design strives to close the fission fuel cycle without enrichment or reprocessing, while simultaneously achieving high discharge burnups with reduced proliferation concerns. Neutron transport results that are central to the operation of the design are tritium production for fusion fuel, fission of fissile isotopes for energy multiplication, and production of fissile isotopes for sustained power. In previous work, explicit cross-sectional uncertainty analyses were performed for reaction rates related to the figures of merit for the depleted uranium hybrid LIFE blanket. Counting precision was also quantified for both the figures of merit themselves and the cross-sectional uncertainty estimates to gauge the validity of the analysis. All cross-sectional uncertainties were small (0.1-0.8%), the counting uncertainties were bounded, and the estimates were precise with regard to counting precision. Adjoint/importance distributions were generated for the same reaction rates. The current work leverages those adjoint distributions to transition from explicit sensitivities, in which the neutron flux is constrained, to implicit sensitivities, in which the neutron flux responds to input perturbations. This treatment vastly expands the set of data that contribute to uncertainties and produces larger, more physically accurate uncertainty estimates.
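
    The first-order ("sandwich rule") propagation of cross-section covariance data onto a figure of merit, which such explicit and implicit sensitivity analyses feed, can be sketched in a few lines; the sensitivity vector and covariance matrix below are hypothetical illustrations, not LIFE blanket data.

```python
import numpy as np

# Hypothetical relative sensitivities of one figure of merit (e.g. tritium production)
# to three cross sections, and a hypothetical relative covariance matrix for them.
S = np.array([0.40, -0.15, 0.05])                        # (dR/R) / (dsigma/sigma)
cov = np.array([[0.02**2, 0.0,               0.0],
                [0.0,     0.05**2,           0.5 * 0.05 * 0.03],
                [0.0,     0.5 * 0.05 * 0.03, 0.03**2]])

# First-order ("sandwich rule") propagation of the covariance onto the response.
rel_variance = S @ cov @ S
print("relative standard deviation of the response: %.4f" % np.sqrt(rel_variance))
```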

  20. Optimal shielding thickness for galactic cosmic ray environments

    NASA Astrophysics Data System (ADS)

    Slaba, Tony C.; Bahadori, Amir A.; Reddell, Brandon D.; Singleterry, Robert C.; Clowdsley, Martha S.; Blattnig, Steve R.

    2017-02-01

    Models have been extensively used in the past to evaluate and develop material optimization and shield design strategies for astronauts exposed to galactic cosmic rays (GCR) on long duration missions. A persistent conclusion from many of these studies was that passive shielding strategies are inefficient at reducing astronaut exposure levels and the mass required to significantly reduce the exposure is infeasible, given launch and associated cost constraints. An important assumption of this paradigm is that adding shielding mass does not substantially increase astronaut exposure levels. Recent studies with HZETRN have suggested, however, that dose equivalent values actually increase beyond ∼20 g/cm2 of aluminum shielding, primarily as a result of neutron build-up in the shielding geometry. In this work, various Monte Carlo (MC) codes and 3DHZETRN are evaluated in slab geometry to verify the existence of a local minimum in the dose equivalent versus aluminum thickness curve near 20 g/cm2. The same codes are also evaluated in polyethylene shielding, where no local minimum is observed, to provide a comparison between the two materials. Results are presented so that the physical interactions driving build-up in dose equivalent values can be easily observed and explained. Variation of transport model results for light ions (Z ≤ 2) and neutron-induced target fragments, which contribute significantly to dose equivalent for thick shielding, is also highlighted and indicates that significant uncertainties are still present in the models for some particles. The 3DHZETRN code is then further evaluated over a range of related slab geometries to draw closer connection to more realistic scenarios. Future work will examine these related geometries in more detail.
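
    To make the notion of a local minimum in the dose equivalent versus depth curve concrete, the toy curve below combines a decaying "primary" term with a slowly growing "secondary" term and locates its minimum numerically. The functional form and the numbers are invented for illustration; they are not HZETRN, 3DHZETRN, or Monte Carlo results.

```python
import numpy as np

def dose_equivalent(x):
    # Toy curve: a decaying "primary" term plus a slowly growing "secondary" term.
    # Purely illustrative numbers, not a transport-code result.
    return 45.0 * np.exp(-x / 15.0) + 0.8 * x

x = np.linspace(0.0, 60.0, 601)          # areal density grid (g/cm^2)
H = dose_equivalent(x)
i_min = np.argmin(H)
print("local minimum near %.1f g/cm^2 (H = %.1f, arbitrary units)" % (x[i_min], H[i_min]))
```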

  1. Optimal shielding thickness for galactic cosmic ray environments.

    PubMed

    Slaba, Tony C; Bahadori, Amir A; Reddell, Brandon D; Singleterry, Robert C; Clowdsley, Martha S; Blattnig, Steve R

    2017-02-01

    Models have been extensively used in the past to evaluate and develop material optimization and shield design strategies for astronauts exposed to galactic cosmic rays (GCR) on long duration missions. A persistent conclusion from many of these studies was that passive shielding strategies are inefficient at reducing astronaut exposure levels and the mass required to significantly reduce the exposure is infeasible, given launch and associated cost constraints. An important assumption of this paradigm is that adding shielding mass does not substantially increase astronaut exposure levels. Recent studies with HZETRN have suggested, however, that dose equivalent values actually increase beyond ∼20 g/cm2 of aluminum shielding, primarily as a result of neutron build-up in the shielding geometry. In this work, various Monte Carlo (MC) codes and 3DHZETRN are evaluated in slab geometry to verify the existence of a local minimum in the dose equivalent versus aluminum thickness curve near 20 g/cm2. The same codes are also evaluated in polyethylene shielding, where no local minimum is observed, to provide a comparison between the two materials. Results are presented so that the physical interactions driving build-up in dose equivalent values can be easily observed and explained. Variation of transport model results for light ions (Z ≤ 2) and neutron-induced target fragments, which contribute significantly to dose equivalent for thick shielding, is also highlighted and indicates that significant uncertainties are still present in the models for some particles. The 3DHZETRN code is then further evaluated over a range of related slab geometries to draw closer connection to more realistic scenarios. Future work will examine these related geometries in more detail. Published by Elsevier Ltd.

  2. Shielding properties of 80TeO2-5TiO2-(15-x) WO3-xAnOm glasses using WinXCom and MCNP5 code

    NASA Astrophysics Data System (ADS)

    Dong, M. G.; El-Mallawany, R.; Sayyed, M. I.; Tekin, H. O.

    2017-12-01

    The gamma-ray shielding properties of 80TeO2-5TiO2-(15-x)WO3-xAnOm glasses, where AnOm is Nb2O5 = 0.01 or 5, Nd2O3 = 3 or 5, and Er2O3 = 5 mol%, have been investigated. The shielding parameters (mass attenuation coefficients, half value layers, and the macroscopic effective removal cross section for fast neutrons) have been computed using the WinXCom program and the MCNP5 Monte Carlo code. In addition, exposure buildup factor values were calculated using the Geometric Progression (G-P) method. Variations of the shielding parameters with REO addition to the glasses and with photon energy are discussed.
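
    The relation between a mass attenuation coefficient and the half value and tenth value layers used above follows directly from the exponential attenuation law; the coefficient and density below are hypothetical placeholders, not values for these tellurite glasses, and buildup is neglected.

```python
import numpy as np

# Hypothetical values for illustration only (not the measured data of this study).
mu_over_rho = 0.055     # mass attenuation coefficient (cm^2/g) at some photon energy
rho = 5.8               # glass density (g/cm^3)

mu = mu_over_rho * rho               # linear attenuation coefficient (1/cm)
hvl = np.log(2.0) / mu               # half value layer (cm)
tvl = np.log(10.0) / mu              # tenth value layer (cm)

# Uncollided transmission through a 2 cm slab (buildup neglected).
t = 2.0
print("HVL = %.2f cm, TVL = %.2f cm, I/I0(2 cm) = %.3f" % (hvl, tvl, np.exp(-mu * t)))
```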

  3. GRAYSKY-A new gamma-ray skyshine code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witts, D.J.; Twardowski, T.; Watmough, M.H.

    1993-01-01

    This paper describes a new prototype gamma-ray skyshine code GRAYSKY (Gamma-RAY SKYshine) that has been developed at BNFL, as part of an industrially based master of science course, to overcome the problems encountered with SKYSHINEII and RANKERN. GRAYSKY is a point kernel code based on the use of a skyshine response function. The scattering within source or shield materials is accounted for by the use of buildup factors. This is an approximate method of solution, but one that has been shown to produce results that are acceptable for dose rate predictions on operating plants. The novel features of GRAYSKY are as follows: 1. The code is fully integrated with a semianalytical point kernel shielding code, currently under development at BNFL, which offers powerful solid-body modeling capabilities. 2. The geometry modeling also allows the skyshine response function to be used in a manner that accounts for the shielding of air-scattered radiation. 3. Skyshine buildup factors calculated using the skyshine response function have been used, as well as dose buildup factors.
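
    The point-kernel-with-buildup estimate on which a code like GRAYSKY rests can be sketched in a few lines: the uncollided flux from a point source is attenuated exponentially, divided by 4*pi*r^2, and multiplied by a buildup factor to account for scattered photons. All numbers below are hypothetical, and the flux-to-dose conversion is left as unity.

```python
import numpy as np

def point_kernel_dose_rate(source_strength, mu, r, buildup):
    # Uncollided point-source flux, exponentially attenuated and corrected by a
    # buildup factor for scattered photons; the flux-to-dose conversion factor is
    # taken as unity, so the result is only a relative dose rate.
    return source_strength * buildup * np.exp(-mu * r) / (4.0 * np.pi * r**2)

# Hypothetical numbers: a 1e10 photon/s source, 30 cm of material with mu = 0.06 /cm,
# and an assumed buildup factor of 3 at that depth.
print(point_kernel_dose_rate(1e10, mu=0.06, r=30.0, buildup=3.0))
```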

  4. Lossy Wavefield Compression for Full-Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Boehm, C.; Fichtner, A.; de la Puente, J.; Hanzich, M.

    2015-12-01

    We present lossy compression techniques, tailored to the inexact computation of sensitivity kernels, that significantly reduce the memory requirements of adjoint-based minimization schemes. Adjoint methods are a powerful tool to solve tomography problems in full-waveform inversion (FWI). Yet they face the challenge of massive memory requirements caused by the opposite directions of forward and adjoint simulations and the necessity to access both wavefields simultaneously during the computation of the sensitivity kernel. Thus, storage, I/O operations, and memory bandwidth become key topics in FWI. In this talk, we present strategies for the temporal and spatial compression of the forward wavefield. This comprises re-interpolation with coarse time steps and an adaptive polynomial degree of the spectral-element shape functions. In addition, we predict the projection errors on a hierarchy of grids and re-quantize the residuals with an adaptive floating-point accuracy to improve the approximation. Furthermore, we use the first arrivals of adjoint waves to identify "shadow zones" that do not contribute to the sensitivity kernel at all. Updating and storing the wavefield within these shadow zones is skipped, which reduces memory requirements and computational costs at the same time. Compared to checkpointing, our approach has only a negligible computational overhead, utilizing the fact that a sufficiently accurate sensitivity kernel does not require a fully resolved forward wavefield. Furthermore, we use adaptive compression thresholds during the FWI iterations to ensure convergence. Numerical experiments on the reservoir scale and for the Western Mediterranean prove the high potential of this approach, with an effective compression factor of 500-1000. Furthermore, it is computationally cheap and easy to integrate in both finite-difference and finite-element wave propagation codes.

  5. Neutron Deep Penetration Calculations in Light Water with Monte Carlo TRIPOLI-4® Variance Reduction Techniques

    NASA Astrophysics Data System (ADS)

    Lee, Yi-Kang

    2017-09-01

    Nuclear decommissioning takes place in several stages due to the radioactivity in the reactor structure materials. A good estimation of the neutron activation products distributed in the reactor structure materials has an obvious impact on decommissioning planning and low-level radioactive waste management. The continuous-energy Monte Carlo radiation transport code TRIPOLI-4 has been applied to radiation protection and shielding analyses. To enhance the TRIPOLI-4 application in nuclear decommissioning activities, both experimental and computational benchmarks are being performed. To calculate the neutron activation of the shielding and structure materials of nuclear facilities, the 3D neutron flux map and energy spectra must first be determined. To perform this type of neutron deep penetration calculation with the Monte Carlo transport code, variance reduction techniques are necessary in order to reduce the uncertainty of the neutron activation estimation. In this study, variance reduction options of the TRIPOLI-4 code were used on the NAIADE 1 light water shielding benchmark. This benchmark document is available from the OECD/NEA SINBAD shielding benchmark database. From this benchmark database, a simplified NAIADE 1 water shielding model was first proposed in this work in order to make the code validation easier. Determination of the fission neutron transport was performed in light water for penetration up to 50 cm for fast neutrons and up to about 180 cm for thermal neutrons. Measurement and calculation results were benchmarked. Variance reduction options and their performance were discussed and compared.

  6. The specific purpose Monte Carlo code McENL for simulating the response of epithermal neutron lifetime well logging tools

    NASA Astrophysics Data System (ADS)

    Prettyman, T. H.; Gardner, R. P.; Verghese, K.

    1993-08-01

    A new specific purpose Monte Carlo code called McENL for modeling the time response of epithermal neutron lifetime tools is described. The weight windows technique, employing splitting and Russian roulette, is used with an automated importance function based on the solution of an adjoint diffusion model to improve the code efficiency. Complete composition and density correlated sampling is also included in the code, and can be used to study the effect on tool response of small variations in the formation, borehole, or logging tool composition and density. An illustration of the latter application is given for the density of a thermal neutron filter. McENL was benchmarked against test-pit data for the Mobil pulsed neutron porosity tool and was found to be very accurate. Results of the experimental validation and details of code performance are presented.
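
    The weight-window mechanics mentioned here (splitting heavy particles, Russian roulette for light ones) can be sketched generically as below; this is not the McENL implementation, and in practice the window bounds would come from the adjoint-diffusion importance map:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def apply_weight_window(weight, w_low, w_high):
        """Return surviving particle weights after applying a weight window
        [w_low, w_high]: split above the window, play Russian roulette below it."""
        if weight > w_high:                       # split into n lighter copies
            n = int(np.ceil(weight / w_high))
            return [weight / n] * n
        if weight < w_low:                        # Russian roulette
            w_survive = 0.5 * (w_low + w_high)
            if rng.random() < weight / w_survive:
                return [w_survive]
            return []                             # particle killed
        return [weight]                           # inside the window: unchanged

    print(apply_weight_window(5.0, 0.5, 1.0))     # splitting
    print(apply_weight_window(0.1, 0.5, 1.0))     # roulette (survives or dies)
    ```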

  7. Micrometeoroid and Orbital Debris Threat Assessment: Mars Sample Return Earth Entry Vehicle

    NASA Technical Reports Server (NTRS)

    Christiansen, Eric L.; Hyde, James L.; Bjorkman, Michael D.; Hoffman, Kevin D.; Lear, Dana M.; Prior, Thomas G.

    2011-01-01

    This report provides results of a Micrometeoroid and Orbital Debris (MMOD) risk assessment of the Mars Sample Return Earth Entry Vehicle (MSR EEV). The assessment was performed using standard risk assessment methodology illustrated in Figure 1-1. Central to the process is the Bumper risk assessment code (Figure 1-2), which calculates the critical penetration risk based on geometry, shielding configurations and flight parameters. The assessment process begins by building a finite element model (FEM) of the spacecraft, which defines the size and shape of the spacecraft as well as the locations of the various shielding configurations. This model is built using the NX I-deas software package from Siemens PLM Software. The FEM is constructed using triangular and quadrilateral elements that define the outer shell of the spacecraft. Bumper-II uses the model file to determine the geometry of the spacecraft for the analysis. The next step of the process is to identify the ballistic limit characteristics for the various shield types. These ballistic limits define the critical size particle that will penetrate a shield at a given impact angle and impact velocity. When the finite element model is built, each individual element is assigned a property identifier (PID) to act as an index for its shielding properties. Using the ballistic limit equations (BLEs) built into the Bumper-II code, the shield characteristics are defined for each and every PID in the model. The final stage of the analysis is to determine the probability of no penetration (PNP) on the spacecraft. This is done using the micrometeoroid and orbital debris environment definitions that are built into the Bumper-II code. These engineering models take into account orbit inclination, altitude, attitude and analysis date in order to predict an impacting particle flux on the spacecraft. Using the geometry and shielding characteristics previously defined for the spacecraft and combining that information with the environment model calculations, the Bumper-II code calculates a probability of no penetration for the spacecraft.
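
    The final probability-of-no-penetration step can be summarized with a Poisson model; the sketch below assumes an already-computed penetrating impact flux (i.e., the output of the ballistic-limit and environment calculations) and hypothetical numbers, and is not the Bumper-II code itself:

    ```python
    import numpy as np

    def probability_no_penetration(penetrating_flux, area_m2, duration_yr):
        """PNP = exp(-N), where N is the expected number of penetrating impacts:
        N = flux above the ballistic limit (1 / m^2 / yr) * exposed area * time."""
        return np.exp(-penetrating_flux * area_m2 * duration_yr)

    # Hypothetical values, not from the MSR EEV assessment:
    print(probability_no_penetration(penetrating_flux=2.0e-5, area_m2=4.0, duration_yr=1.5))
    ```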

  8. Two-dimensional over-all neutronics analysis of the ITER device

    NASA Astrophysics Data System (ADS)

    Zimin, S.; Takatsu, Hideyuki; Mori, Seiji; Seki, Yasushi; Satoh, Satoshi; Tada, Eisuke; Maki, Koichi

    1993-07-01

    The present work attempts to carry out a comprehensive neutronics analysis of the International Thermonuclear Experimental Reactor (ITER) developed during the Conceptual Design Activities (CDA). The two-dimensional cylindrical over-all calculational models of the ITER CDA device, including the first wall, blanket, shield, vacuum vessel, magnets, cryostat and support structures, were developed for this purpose with the help of the DOGII code. The two-dimensional DOT 3.5 code with the FUSION-40 nuclear data library was employed for transport calculations of neutron and gamma ray fluxes, tritium breeding ratio (TBR), and nuclear heating in reactor components. The induced activity calculational code CINAC was employed for the calculations of exposure dose rate after reactor shutdown around the ITER CDA device. The two-dimensional over-all calculational model includes design specifics such as the pebble bed Li2O/Be layered blanket, the thin double wall vacuum vessel, the concrete cryostat integrated with the over-all ITER design, the top maintenance shield plug, the additional ring biological shield placed under the top cryostat lid around the above-mentioned top maintenance shield plug, etc. All the above-mentioned design specifics were included in the employed calculational models. Some alternative design options, such as the water-rich shielding blanket instead of the lithium-bearing one, and the additional biological shield plug in the top zone between the poloidal field (PF) coil No. 5 and the maintenance shield plug, were calculated as well. Much effort has been focused on analysis of the obtained results. These analyses aimed to provide recommendations for improving the ITER CDA design.

  9. Solar Proton Transport Within an ICRU Sphere Surrounded by a Complex Shield: Ray-trace Geometry

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Wilson, John W.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2015-01-01

    A computationally efficient 3DHZETRN code with enhanced neutron and light ion (Z is less than or equal to 2) propagation was recently developed for complex, inhomogeneous shield geometry described by combinatorial objects. Comparisons were made between 3DHZETRN results and Monte Carlo (MC) simulations at locations within the combinatorial geometry, and it was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in ray-trace geometry. This latest extension enables the code to be used within current engineering design practices utilizing fully detailed vehicle and habitat geometries. Through convergence testing, it is shown that fidelity in an actual shield geometry can be maintained in the discrete ray-trace description by systematically increasing the number of discrete rays used. It is also shown that this fidelity is carried into transport procedures and resulting exposure quantities without sacrificing computational efficiency.
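
    The convergence idea (exposure quantities stabilizing as the number of discrete rays grows) can be illustrated with a toy average of areal density over sampled directions; note that the random sampling here is only a stand-in, since 3DHZETRN uses systematically chosen discrete ray sets, and the shield description below is hypothetical:

    ```python
    import numpy as np

    def mean_areal_density(thickness_along, n_rays, seed=0):
        """Average areal density (g/cm^2) seen from a dose point, estimated by
        sampling n_rays directions over the unit sphere."""
        rng = np.random.default_rng(seed)
        u = rng.normal(size=(n_rays, 3))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        return np.mean([thickness_along(d) for d in u])

    # Toy shield: 10 g/cm^2 everywhere except a thin 2 g/cm^2 "window" near one pole.
    def thickness_along(d):
        return 2.0 if d[2] > 0.95 else 10.0

    for n in (100, 1000, 10000):        # convergence with the number of rays
        print(n, mean_areal_density(thickness_along, n))
    ```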

  10. Comparison of SPHC Hydrocode Results with Penetration Equations and Results of Other Codes

    NASA Technical Reports Server (NTRS)

    Evans, Steven W.; Stallworth, Roderick; Stellingwerf, Robert F.

    2004-01-01

    The SPHC hydrodynamic code was used to simulate impacts of spherical aluminum projectiles on a single-wall aluminum plate and on a generic Whipple shield. Simulations were carried out in two and three dimensions. Projectile speeds ranged from 2 kilometers per second to 10 kilometers per second for the single-wall runs, and from 3 kilometers per second to 40 kilometers per second for the Whipple shield runs. Spallation limit results of the single-wall simulations are compared with predictions from five standard penetration equations, and are shown to fall comfortably within the envelope of these analytical relations. Ballistic limit results of the Whipple shield simulations are compared with results from the AUTODYN-2D and PAM-SHOCK-3D codes presented in a paper at the Hypervelocity Impact Symposium 2000 and the Christiansen formulation of 2003.

  11. Skyshine at neutron energies less than or equal to 400 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.

    1980-10-01

    The dose equivalent at an air-ground interface as a function of distance from an assumed azimuthally symmetric point source of neutrons can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code, DOT, and the first collision source code, GRTUNCL, in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to 0.8, 0.8 to 0.6, 0.6 to 0.4, 0.4 to 0.2 and 0.2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations a photon importance function is also obtained. This importance function for photon energies less than or equal to 14 MeV and for various source cosine intervals and source-to-field point distances is also presented. These importance functions may be used to obtain skyshine dose equivalent estimates for any known source energy-angle distribution.
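
    In discrete form, the folding of a source energy-angle distribution with the tabulated importance function reduces to a weighted sum; the sketch below uses purely hypothetical 3-group, 2-angle-bin numbers for one source-to-field-point distance:

    ```python
    import numpy as np

    # Source neutrons/s emitted in each (energy, cosine) bin -- hypothetical:
    S = np.array([[1.0e9, 5.0e8],
                  [4.0e8, 2.0e8],
                  [1.0e8, 5.0e7]])
    # Importance: dose equivalent at the field point per source neutron in that
    # bin, for one fixed source-to-field-point distance -- hypothetical:
    I = np.array([[3.0e-18, 2.5e-18],
                  [1.2e-18, 1.0e-18],
                  [4.0e-19, 3.0e-19]])

    dose_equivalent_rate = np.sum(S * I)   # discrete form of the double integral
    print(dose_equivalent_rate)            # Sv/s at this distance
    ```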

  12. Variational estimation of process parameters in a simplified atmospheric general circulation model

    NASA Astrophysics Data System (ADS)

    Lv, Guokun; Koehl, Armin; Stammer, Detlef

    2016-04-01

    Parameterizations are used to simulate effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually manually adjusted to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated using automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about one day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over one year and accurate parameters could be retrieved. Although the nudging terms translate into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.
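
    A one-line caricature of the nudging idea (not the PlaSim/adjoint implementation): each state variable is relaxed toward a reference trajectory with strength gamma, which suppresses growing perturbations during the assimilation window. The function names and toy dynamics below are assumptions:

    ```python
    import numpy as np

    def step_nudged(x, p, x_ref, dt, gamma, f):
        """One explicit Euler step of dx/dt = f(x, p) + gamma * (x_ref - x)."""
        return x + dt * (f(x, p) + gamma * (x_ref - x))

    # Toy scalar dynamics standing in for the full model:
    f = lambda x, p: p * x * (1.0 - x)
    x, x_ref = 0.2, 0.5
    for _ in range(100):
        x = step_nudged(x, p=3.7, x_ref=x_ref, dt=0.01, gamma=2.0, f=f)
    print(x)   # pulled toward x_ref by the nudging term
    ```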

  13. Standardized Radiation Shield Design Methods: 2005 HZETRN

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Tripathi, Ram K.; Badavi, Francis F.; Cucinotta, Francis A.

    2006-01-01

    Research conducted by the Langley Research Center through 1995, resulting in the HZETRN code, provides the current basis for shield design methods according to NASA STD-3000 (2005). With this new prominence, the database, basic numerical procedures, and algorithms are being re-examined, with new methods of verification and validation being implemented to capture a well-defined algorithm for engineering design processes to be used in this early development phase of the Bush initiative. This process provides the methodology to transform the 1995 HZETRN research code into the 2005 HZETRN engineering code to be available for these early design processes. In this paper, we will review the basic derivations including new corrections to the codes to ensure improved numerical stability and provide benchmarks for code verification.

  14. Adjoint affine fusion and tadpoles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urichuk, Andrew, E-mail: andrew.urichuk@uleth.ca; Walton, Mark A., E-mail: walton@uleth.ca; International School for Advanced Studies

    2016-06-15

    We study affine fusion with the adjoint representation. For simple Lie algebras, elementary and universal formulas determine the decomposition of a tensor product of an integrable highest-weight representation with the adjoint representation. Using the (refined) affine depth rule, we prove that equally striking results apply to adjoint affine fusion. For diagonal fusion, a coefficient equals the number of nonzero Dynkin labels of the relevant affine highest weight, minus 1. A nice lattice-polytope interpretation follows and allows the straightforward calculation of the genus-1 1-point adjoint Verlinde dimension, the adjoint affine fusion tadpole. Explicit formulas, (piecewise) polynomial in the level, are written for the adjoint tadpoles of all classical Lie algebras. We show that off-diagonal adjoint affine fusion is obtained from the corresponding tensor product by simply dropping non-dominant representations.
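
    The quoted diagonal result can be restated schematically as follows (the notation is mine, not the paper's): for an integrable affine highest weight with Dynkin labels lambda_0, ..., lambda_r, the diagonal adjoint affine fusion coefficient is

    ```latex
    \[
      N^{\;\hat\lambda}_{\hat\lambda,\;\mathrm{adjoint}}
        \;=\; \#\{\, i : \lambda_i \neq 0 \,\} \;-\; 1 .
    \]
    ```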

  15. Comparison of Radiation Transport Codes, HZETRN, HETC and FLUKA, Using the 1956 Webber SPE Spectrum

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.; Slaba, Tony C.; Blattnig, Steve R.; Tripathi, Ram K.; Townsend, Lawrence W.; Handler, Thomas; Gabriel, Tony A.; Pinsky, Lawrence S.; Reddell, Brandon; Clowdsley, Martha S.

    2009-01-01

    Protection of astronauts and instrumentation from galactic cosmic rays (GCR) and solar particle events (SPE) in the harsh environment of space is of prime importance in the design of personal shielding, spacecraft, and mission planning. Early entry of radiation constraints into the design process enables optimal shielding strategies, but demands efficient and accurate tools that can be used by design engineers in every phase of an evolving space project. The radiation transport code HZETRN is an efficient tool for analyzing the shielding effectiveness of materials exposed to space radiation. In this paper, HZETRN is compared to the Monte Carlo codes HETC-HEDS and FLUKA for a shield/target configuration comprised of a 20 g/cm^2 aluminum slab in front of a 30 g/cm^2 slab of water exposed to the February 1956 SPE, as modeled by the Webber spectrum. Neutron and proton fluence spectra, as well as dose and dose equivalent values, are compared at various depths in the water target. This study shows that there are many regions where HZETRN agrees with both HETC-HEDS and FLUKA for this shield/target configuration and the SPE environment. However, there are also regions where there are appreciable differences between the three computer codes.

  16. Applicability of a Bonner sphere technique for pulsed neutrons in a 120 GeV proton facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanami, T.; Hagiwara, M.; Iwase, H.

    2008-02-01

    The data on neutron spectra and intensity behind shielding are important for radiation safety design of high-energy accelerators since neutrons are capable of penetrating thick shielding and activating materials. Corresponding particle transport codes--that involve physics models of neutron and other particle production, transportation, and interaction--have been developed and used world-wide [1-8]. The accuracy of these codes has been established through numerous comparisons with experimental results taken in simple geometries. For neutron generation and transport, several related experiments have been performed to measure neutron spectra, attenuation length and reaction rates behind shielding walls of various thicknesses and materials in the energy range up to several hundred MeV [9-11]. The data have been used to benchmark--and modify if needed--the simulation models and parameters in the codes, as well as the reference data for radiation safety design. To obtain such data above several hundred MeV, a Japan-Fermi National Accelerator Laboratory (FNAL) collaboration for shielding experiments was started in 2007, based on a suggestion from the specialist meeting on shielding, Shielding Aspects of Accelerators, Targets and Irradiation Facilities (SATIF), because of the very limited data available in the high-energy region (see, for example, [12]). As a part of this shielding experiment, a set of Bonner spheres (BS) was tested at the antiproton production target facility (pbar target station) at FNAL to obtain neutron spectra induced by a 120-GeV proton beam in concrete and iron shielding. Generally, utilization of an active detector around high-energy accelerators requires an improvement of its readout to overcome the burst of secondary radiation, since the accelerator delivers an intense beam to a target in a short period after a relatively long acceleration period. In this paper, we employ BS for a spectrum measurement of neutrons that penetrate the shielding wall of the pbar target station in FNAL.

  17. A new mathematical adjoint for the modified SAAF-SN equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schunert, Sebastian; Wang, Yaqi; Martineau, Richard

    2015-01-01

    We present a new adjoint FEM weak form, which can be directly used for evaluating the mathematical adjoint, suitable for perturbation calculations, of the self-adjoint angular flux SN equations (SAAF-SN) without construction and transposition of the underlying coefficient matrix. Stabilization schemes incorporated in the described SAAF-SN method make the mathematical adjoint distinct from the physical adjoint, i.e. the solution of the continuous adjoint equation with SAAF-SN. This weak form is implemented into RattleSnake, the MOOSE (Multiphysics Object-Oriented Simulation Environment) based transport solver. Numerical results verify the correctness of the implementation and show its utility for both fixed-source and eigenvalue problems.

  18. Extension of the ADjoint Approach to a Laminar Navier-Stokes Solver

    NASA Astrophysics Data System (ADS)

    Paige, Cody

    The use of adjoint methods is common in computational fluid dynamics to reduce the cost of the sensitivity analysis in an optimization cycle. The forward mode ADjoint is a combination of an adjoint sensitivity analysis method with a forward mode automatic differentiation (AD) and is a modification of the reverse mode ADjoint method proposed by Mader et al.[1]. A colouring acceleration technique is presented to reduce the computational cost increase associated with forward mode AD. The forward mode AD facilitates the implementation of the laminar Navier-Stokes (NS) equations. The forward mode ADjoint method is applied to a three-dimensional computational fluid dynamics solver. The resulting Euler and viscous ADjoint sensitivities are compared to the reverse mode Euler ADjoint derivatives and a complex-step method to demonstrate the reduced computational cost and accuracy. Both comparisons demonstrate the benefits of the colouring method and the practicality of using a forward mode AD. [1] Mader, C.A., Martins, J.R.R.A., Alonso, J.J., and van der Weide, E. (2008) ADjoint: An approach for the rapid development of discrete adjoint solvers. AIAA Journal, 46(4):863-873. doi:10.2514/1.29123.
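
    The colouring idea can be sketched independently of the solver: columns of a sparse Jacobian that share no nonzero row are grouped under one colour, so a single forward sweep (here mimicked by a finite-difference directional derivative) recovers all of them at once. The function, sparsity pattern, and colouring below are hypothetical, not taken from the thesis:

    ```python
    import numpy as np

    def colored_jacobian(f, x, sparsity, colors, eps=1e-7):
        """Recover a sparse Jacobian with one directional evaluation per colour.
        Columns of the same colour must not share a nonzero row, so each combined
        directional derivative can be decompressed unambiguously."""
        f0 = f(x)
        J = np.zeros((f0.size, x.size))
        for c in np.unique(colors):
            group = np.where(colors == c)[0]
            seed = np.zeros_like(x)
            seed[group] = 1.0
            df = (f(x + eps * seed) - f0) / eps   # stand-in for one forward AD sweep
            for j in group:
                rows = np.where(sparsity[:, j])[0]
                J[rows, j] = df[rows]
        return J

    def f(x):  # toy residual with a banded Jacobian
        return np.array([x[0]**2 + x[1], x[1]**2 + x[2], x[2]**2 + x[3], x[3]**2])

    sparsity = np.array([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [0, 0, 0, 1]], dtype=bool)
    colors = np.array([0, 1, 0, 1])               # 2 sweeps instead of 4
    print(colored_jacobian(f, np.array([1.0, 2.0, 3.0, 4.0]), sparsity, colors))
    ```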

  19. Discrete adjoint of fractional step Navier-Stokes solver in generalized coordinates

    NASA Astrophysics Data System (ADS)

    Wang, Mengze; Mons, Vincent; Zaki, Tamer

    2017-11-01

    Optimization and control in transitional and turbulent flows require evaluation of gradients of the flow state with respect to the problem parameters. Using adjoint approaches, these high-dimensional gradients can be evaluated with a similar computational cost as the forward Navier-Stokes simulations. The adjoint algorithm can be obtained by discretizing the continuous adjoint Navier-Stokes equations or by deriving the adjoint to the discretized Navier-Stokes equations directly. The latter algorithm is necessary when the forward-adjoint relations must be satisfied to machine precision. In this work, our forward model is the fractional step solution to the Navier-Stokes equations in generalized coordinates, proposed by Rosenfeld, Kwak & Vinokur. We derive the corresponding discrete adjoint equations. We also demonstrate the accuracy of the combined forward-adjoint model, and its application to unsteady wall-bounded flows. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542).
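
    The "forward-adjoint relations satisfied to machine precision" requirement is usually checked with a dot-product (adjoint) test; a minimal linear-algebra sketch, with a random matrix standing in for one discrete forward step, is:

    ```python
    import numpy as np

    # If one forward step is x_next = A x, its discrete adjoint is lam = A^T lam_next,
    # and the pair must satisfy <A x, lam> = <x, A^T lam> to machine precision.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 5))
    x = rng.standard_normal(5)
    lam = rng.standard_normal(5)
    print(np.dot(A @ x, lam) - np.dot(x, A.T @ lam))   # ~1e-16
    ```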

  20. Regularized wave equation migration for imaging and data reconstruction

    NASA Astrophysics Data System (ADS)

    Kaplan, Sam T.

    The reflection seismic experiment results in a measurement (reflection seismic data) of the seismic wavefield. The linear Born approximation to the seismic wavefield leads to a forward modelling operator that we use to approximate reflection seismic data in terms of a scattering potential. We consider approximations to the scattering potential using two methods: the adjoint of the forward modelling operator (migration), and regularized numerical inversion using the forward and adjoint operators. We implement two parameterizations of the forward modelling and migration operators: source-receiver and shot-profile. For both parameterizations, we find the requisite Green's functions using the split-step approximation. We first develop the forward modelling operator, and then find the adjoint (migration) operator by recognizing a Fredholm integral equation of the first kind. The resulting numerical system is generally under-determined, requiring prior information to find a solution. In source-receiver migration, the parameterization of the scattering potential is understood using the migration imaging condition, and this encourages us to apply sparse prior models to the scattering potential. To that end, we use both a Cauchy prior and a mixed Cauchy-Gaussian prior, finding better resolved estimates of the scattering potential than are given by the adjoint. In shot-profile migration, the parameterization of the scattering potential has its redundancy in multiple active energy sources (i.e. shots). We find that a smallest model regularized inverse representation of the scattering potential gives a more resolved picture of the earth, as compared to the simpler adjoint representation. The shot-profile parameterization allows us to introduce a joint inversion to further improve the estimate of the scattering potential. Moreover, it allows us to introduce a novel data reconstruction algorithm so that limited data can be interpolated/extrapolated. The linearized operators are expensive, encouraging their parallel implementation. For the source-receiver parameterization of the scattering potential this parallelization is non-trivial. Seismic data is typically corrupted by various types of noise. Sparse coding can be used to suppress noise prior to migration. It is a method that stems from information theory and that we apply to noise suppression in seismic data.
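
    The contrast between the adjoint (migration) image and a regularized inverse can be sketched with matrix-free conjugate gradients on the damped normal equations; the toy operator below stands in for the Born modelling operator, and the simple damping is an assumption (a Cauchy prior, as used in the thesis, would require an iteratively reweighted variant):

    ```python
    import numpy as np

    def cgls(forward, adjoint, data, n_model, mu=1e-2, n_iter=30):
        """Damped least squares, min_m ||forward(m) - data||^2 + mu ||m||^2,
        using only forward/adjoint applications (CG on the normal equations)."""
        m = np.zeros(n_model)
        r = adjoint(data)                       # normal-equation residual at m = 0
        p, rs_old = r.copy(), r @ r
        for _ in range(n_iter):
            Ap = adjoint(forward(p)) + mu * p
            alpha = rs_old / (p @ Ap)
            m += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if rs_new < 1e-20:
                break
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return m

    L = np.random.default_rng(2).standard_normal((40, 20))   # toy "modelling" operator
    m_true = np.zeros(20); m_true[[3, 11]] = 1.0             # sparse scattering potential
    d = L @ m_true
    m_adj = L.T @ d                                          # migration (adjoint) image
    m_inv = cgls(lambda m: L @ m, lambda y: L.T @ y, d, n_model=20)
    print(np.linalg.norm(m_inv - m_true))                        # inverse: close to m_true
    print(np.linalg.norm(m_adj / np.abs(m_adj).max() - m_true))  # adjoint: blurrier
    ```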

  1. IET. Periscope shielding and installation details. Shows range of scanning ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    IET. Periscope shielding and installation details. Shows range of scanning head, removable concrete cap, concrete shielding. Ralph M. Parsons 902-4-ANP-620-A 324. Date: February 1954. Approved by INEEL Classification Office for public release. INEEL Index code no. 035-0620-00-693-106909 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  2. Solar Proton Transport within an ICRU Sphere Surrounded by a Complex Shield: Combinatorial Geometry

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2015-01-01

    The 3DHZETRN code, with improved neutron and light ion (Z is less than or equal to 2) transport procedures, was recently developed and compared to Monte Carlo (MC) simulations using simplified spherical geometries. It was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in general combinatorial geometry. A more complex shielding structure with internal parts surrounding a tissue sphere is considered and compared against MC simulations. It is shown that even in the more complex geometry, 3DHZETRN agrees well with the MC codes and maintains a high degree of computational efficiency.

  3. Double-layer neutron shield design as neutron shielding application

    NASA Astrophysics Data System (ADS)

    Sariyer, Demet; Küçer, Rahmi

    2018-02-01

    The shield design in particle accelerators and other high energy facilities is mainly determined by high-energy neutrons. The deep penetration of neutrons through a massive shield has become a very serious problem. For shielding to be efficient, most of these neutrons should be confined to the shielding volume. If interior space is limited, a multilayer shield of sufficient thickness must be used. Concrete and iron are widely used as multilayer shield materials. A two-layer shield was selected to guarantee radiation safety outside of the shield against neutrons generated in the interaction of protons of different energies. The first layer was one meter of concrete; the second was an iron-containing material (FeB, Fe2B or stainless steel) whose thickness was to be determined. The FLUKA Monte Carlo code was used to model the shield design geometry and obtain the required neutron dose distributions. The resulting two-layer shields show better performance than concrete used alone, so the shield design leaves more space in the shielded interior areas.

  4. The discrete adjoint method for parameter identification in multibody system dynamics.

    PubMed

    Lauß, Thomas; Oberpeilsteiner, Stefan; Steiner, Wolfgang; Nachbagauer, Karin

    2018-01-01

    The adjoint method is an elegant approach for the computation of the gradient of a cost function to identify a set of parameters. An additional set of differential equations has to be solved to compute the adjoint variables, which are further used for the gradient computation. However, the accuracy of the numerical solution of the adjoint differential equation has a great impact on the gradient. Hence, an alternative approach is the discrete adjoint method, where the adjoint differential equations are replaced by algebraic equations. Therefore, a finite difference scheme is constructed for the adjoint system directly from the numerical time integration method. The method provides the exact gradient of the discretized cost function subjected to the discretized equations of motion.
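
    A minimal scalar illustration of the statement that the discrete adjoint returns the exact gradient of the discretized cost (explicit Euler is assumed here purely for brevity; the paper's integrators and multibody equations are more general):

    ```python
    import numpy as np

    def gradient_discrete_adjoint(x0, p, dt, n_steps, f, dfdx, dfdp, x_meas):
        """Gradient of J = 0.5 * sum_k (x_k - x_meas_k)^2 w.r.t. p for the
        explicit-Euler scheme x_{k+1} = x_k + dt * f(x_k, p), via the adjoint
        recursion built from the same scheme."""
        xs = [float(x0)]
        for _ in range(n_steps):                      # forward sweep (store trajectory)
            xs.append(xs[-1] + dt * f(xs[-1], p))
        mu = xs[n_steps] - x_meas[n_steps]            # dJ/dx_N
        grad = mu * dt * dfdp(xs[n_steps - 1], p)
        for k in range(n_steps - 1, 0, -1):           # backward (adjoint) sweep
            mu = (xs[k] - x_meas[k]) + mu * (1.0 + dt * dfdx(xs[k], p))
            grad += mu * dt * dfdp(xs[k - 1], p)
        return grad

    f, dfdx, dfdp = (lambda x, p: -p * x), (lambda x, p: -p), (lambda x, p: -x)
    x_meas = [1.0] * 21                               # hypothetical "measurements"
    g = gradient_discrete_adjoint(1.0, 0.8, 0.1, 20, f, dfdx, dfdp, x_meas)

    def J(p):                                         # finite-difference check
        x, cost = 1.0, 0.0
        for k in range(20):
            x = x + 0.1 * f(x, p)
            cost += 0.5 * (x - x_meas[k + 1])**2
        return cost

    eps = 1e-6
    print(g, (J(0.8 + eps) - J(0.8 - eps)) / (2 * eps))   # the two agree
    ```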

  5. SHIELD and HZETRN comparisons of pion production cross sections

    NASA Astrophysics Data System (ADS)

    Norbury, John W.; Sobolevsky, Nikolai; Werneth, Charles M.

    2018-03-01

    A program comparing American (NASA) and Russian (ROSCOSMOS) space radiation transport codes has recently begun, and the first paper directly comparing the NASA and ROSCOSMOS space radiation transport codes, HZETRN and SHIELD respectively, has recently appeared. The present work represents the second time that NASA and ROSCOSMOS calculations have been directly compared, and the focus here is on models of pion production cross sections used in the two transport codes mentioned above. It was found that these models are in overall moderate agreement with each other and with experimental data. Disagreements that were found are discussed.

  6. Neutron flux measurements on a mock-up of a storage cask for high-level nuclear waste using 2.5 MeV neutrons.

    PubMed

    Suárez, H Saurí; Becker, F; Klix, A; Pang, B; Döring, T

    2018-06-07

    To store and dispose of spent nuclear fuel, shielding casks are employed to reduce the emitted radiation. To evaluate the exposure of employees handling such casks, Monte Carlo radiation transport codes can be employed. Nevertheless, to assess the reliability of these codes and nuclear data, experimental checks are required. In this study, a neutron generator (NG) producing neutrons of 2.5 MeV was employed to simulate neutrons produced in spent nuclear fuel. Different configurations of shielding layers of steel and polyethylene were positioned between the target of the NG and a NE-213 detector. The results of the measurements of neutron and γ radiation and the corresponding simulations with the code MCNP6 are presented. Details of the experimental set-up as well as neutron and photon flux spectra are provided as reference points for such NG investigations with shielding structures.

  7. CEM2k and LAQGSM Codes as Event-Generators for Space Radiation Shield and Cosmic Rays Propagation Applications

    NASA Technical Reports Server (NTRS)

    Mashnik, S. G.; Gudima, K. K.; Sierk, A. J.; Moskalenko, I. V.

    2002-01-01

    Space radiation shield applications and studies of cosmic ray propagation in the Galaxy require reliable cross sections to calculate spectra of secondary particles and yields of the isotopes produced in nuclear reactions induced both by particles and nuclei at energies from threshold to hundreds of GeV per nucleon. Since the data often exist in a very limited energy range or sometimes not at all, the only way to obtain an estimate of the production cross sections is to use theoretical models and codes. Recently, we have developed improved versions of the Cascade-Exciton Model (CEM) of nuclear reactions: the codes CEM97 and CEM2k for description of particle-nucleus reactions at energies up to about 5 GeV. In addition, we have developed a LANL version of the Quark-Gluon String Model (LAQGSM) to describe reactions induced both by particles and nuclei at energies up to hundreds of GeV/nucleon. We have tested and benchmarked the CEM and LAQGSM codes against a large variety of experimental data and have compared their results with predictions by other currently available models and codes. Our benchmarks show that the CEM and LAQGSM codes have predictive powers no worse than other currently used codes and describe many reactions better than other codes; therefore both our codes can be used as reliable event-generators for space radiation shield and cosmic ray propagation applications. The CEM2k code is being incorporated into the transport code MCNPX (and several other transport codes), and we plan to incorporate LAQGSM into MCNPX in the near future. Here, we present the current status of the CEM2k and LAQGSM codes, and show results and applications to studies of cosmic ray propagation in the Galaxy.

  8. Advanced Variance Reduction Strategies for Optimizing Mesh Tallies in MAVRIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peplow, Douglas E.; Blakeman, Edward D; Wagner, John C

    2007-01-01

    More often than in the past, Monte Carlo methods are being used to compute fluxes or doses over large areas using mesh tallies (a set of region tallies defined on a mesh that overlays the geometry). For problems that demand that the uncertainty in each mesh cell be less than some set maximum, computation time is controlled by the cell with the largest uncertainty. This issue becomes quite troublesome in deep-penetration problems, and advanced variance reduction techniques are required to obtain reasonable uncertainties over large areas. The CADIS (Consistent Adjoint Driven Importance Sampling) methodology has been shown to very efficiently optimize the calculation of a response (flux or dose) for a single point or a small region using weight windows and a biased source based on the adjoint of that response. This has been incorporated into codes such as ADVANTG (based on MCNP) and the new sequence MAVRIC, which will be available in the next release of SCALE. In an effort to compute lower uncertainties everywhere in the problem, Larsen's group has also developed several methods to help distribute particles more evenly, based on forward estimates of flux. This paper focuses on the use of a forward estimate to weight the placement of the source in the adjoint calculation used by CADIS, which we refer to as a forward-weighted CADIS (FW-CADIS).
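
    In its simplest mesh form, CADIS turns an adjoint (importance) flux into a biased source and consistent statistical weights; a small sketch with hypothetical numbers (not the MAVRIC/ADVANTG implementation, and omitting the energy dependence and the FW-CADIS forward weighting) is:

    ```python
    import numpy as np

    def cadis_parameters(source, adjoint_flux):
        """CADIS quantities on a mesh:
          R     = sum_i q_i * phi_adj_i   (estimated response)
          q_hat = q * phi_adj / R         (biased source pdf)
          w     = R / phi_adj             (consistent birth weights / window centers)"""
        R = np.sum(source * adjoint_flux)
        return R, source * adjoint_flux / R, R / adjoint_flux

    q = np.array([0.4, 0.3, 0.2, 0.1])              # hypothetical source distribution
    phi_adj = np.array([1e-6, 5e-6, 2e-5, 8e-5])    # hypothetical adjoint flux
    R, q_hat, w = cadis_parameters(q, phi_adj)
    print(q_hat)            # more particles born where the importance is high
    print(q_hat * w - q)    # ~0: biased source times weight reproduces q (unbiased)
    ```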

  9. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    DTIC Science & Technology

    2013-06-24

    [Abstract not available: the retrieved record contains only fragments of the report documentation page and its reference list, including citations to J. Erway and M. Holst, "Barrier methods for critical exponent problems in geometric analysis and mathematical physics" (submitted for publication), C. Lanczos, Linear Differential Operators (Dover Publications, Mineola, NY, 1997), and G.I. Marchuk, Adjoint Equations and Analysis of ...]

  10. Nuclear thermal propulsion engine system design analysis code development

    NASA Astrophysics Data System (ADS)

    Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.; Ivanenok, Joseph F.

    1992-01-01

    A Nuclear Thermal Propulsion (NTP) Engine System Design Analysis Code has recently been developed to characterize key NTP engine system design features. Such a versatile, standalone NTP system performance and engine design code is required to support ongoing and future engine system and vehicle design efforts associated with proposed Space Exploration Initiative (SEI) missions of interest. Key areas of interest in the engine system modeling effort were the reactor, shielding, and inclusion of an engine multi-redundant propellant pump feed system design option. A solid-core nuclear thermal reactor and internal shielding code model was developed to estimate the reactor's thermal-hydraulic and physical parameters based on a prescribed thermal output, and was integrated into a state-of-the-art engine system design model. The reactor code module has the capability to model graphite, composite, or carbide fuels. Key output from the model consists of reactor parameters such as thermal power, pressure drop, thermal profile, and heat generation in cooled structures (reflector, shield, and core supports), as well as the engine system parameters such as weight, dimensions, pressures, temperatures, mass flows, and performance. The model's overall analysis methodology and its key assumptions and capabilities are summarized in this paper.

  11. Boundary-Layer Stability Analysis of the Mean Flows Obtained Using Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liao, Wei; Malik, Mujeeb R.; Lee-Rausch, Elizabeth M.; Li, Fei; Nielsen, Eric J.; Buning, Pieter G.; Chang, Chau-Lyan; Choudhari, Meelan M.

    2012-01-01

    Boundary-layer stability analyses of mean flows extracted from unstructured-grid Navier-Stokes solutions have been performed. A procedure has been developed to extract mean flow profiles from the FUN3D unstructured-grid solutions. Extensive code-to-code validations have been performed by comparing the extracted mean flows as well as the corresponding stability characteristics to the predictions based on structured-grid solutions. Comparisons are made on a range of problems from a simple flat plate to a full aircraft configuration, a modified Gulfstream-III with a natural laminar flow glove. The future aim of the project is to extend the adjoint-based design capability in FUN3D to include natural laminar flow and laminar flow control by integrating it with boundary-layer stability analysis codes, such as LASTRAC.

  12. FAST TRACK COMMUNICATION Quasi self-adjoint nonlinear wave equations

    NASA Astrophysics Data System (ADS)

    Ibragimov, N. H.; Torrisi, M.; Tracinà, R.

    2010-11-01

    In this paper we generalize the classification of self-adjoint second-order linear partial differential equations to a family of nonlinear wave equations with two independent variables. We find a class of quasi self-adjoint nonlinear equations which includes the self-adjoint linear equations as a particular case. The property of a differential equation to be quasi self-adjoint is important, e.g. for constructing conservation laws associated with symmetries of the differential equation.

  13. Remanent Activation in the Mini-SHINE Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Micklich, Bradley J.

    2015-04-16

    Argonne National Laboratory is assisting SHINE Medical Technologies in developing a domestic source of the medical isotope 99Mo through the fission of low-enrichment uranium in a uranyl sulfate solution. In Phase 2 of these experiments, electrons from a linear accelerator create neutrons by interacting in a depleted uranium target, and these neutrons are used to irradiate the solution. The resulting neutron and photon radiation activates the target, the solution vessels, and a shielded cell that surrounds the experimental apparatus. When the experimental campaign is complete, the target must be removed into a shielding cask, and the experimental components must be disassembled. The radiation transport code MCNPX and the transmutation code CINDER were used to calculate the radionuclide inventories of the solution, the target assembly, and the shielded cell, and to determine the dose rates and shielding requirements for selected removal scenarios for the target assembly and the solution vessels.

  14. Radiation protection for human missions to the Moon and Mars

    NASA Technical Reports Server (NTRS)

    Simonsen, Lisa C.; Nealy, John E.

    1991-01-01

    Radiation protection assessments are performed for advanced Lunar and Mars manned missions. The Langley cosmic ray transport code and the nucleon transport code are used to quantify the transport and attenuation of galactic cosmic rays and solar proton flares through various shielding media. Galactic cosmic radiation at solar maximum and minimum, as well as various flare scenarios are considered. Propagation data for water, aluminum, liquid hydrogen, lithium hydride, lead, and lunar and Martian regolith (soil) are included. Shield thickness and shield mass estimates required to maintain incurred doses below 30 day and annual limits (as set for Space Station Freedom and used as a guide for space exploration) are determined for simple geometry transfer vehicles. On the surface of Mars, dose estimates are presented for crews with their only protection being the carbon dioxide atmosphere and for crews protected by shielding provided by Martian regolith for a candidate habitat.

  15. Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems

    NASA Astrophysics Data System (ADS)

    Watkins, Edward Francis

    1995-01-01

    A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descents optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified in a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found feasible and leads to a very substantial increase in the complexity of the optimization problems that can be handled efficiently.
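
    NPSOL itself is a proprietary SQP library, but the flavour of a "dose minimization at constant cost" problem can be sketched with SciPy's SLSQP (also a sequential quadratic programming method); the attenuation model, coefficients, and budget below are all hypothetical:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    mu = np.array([0.10, 0.06, 0.03])   # hypothetical removal coefficients (1/cm) per layer
    c = np.array([1.0, 2.5, 4.0])       # hypothetical cost per cm of each layer
    budget = 60.0

    dose = lambda t: np.exp(-np.dot(mu, t))   # toy relative dose behind the shield

    res = minimize(dose, x0=np.array([10.0, 10.0, 10.0]), method="SLSQP",
                   bounds=[(0.0, None)] * 3,
                   constraints=[{"type": "eq", "fun": lambda t: np.dot(c, t) - budget}])
    print(res.x, dose(res.x))   # thickness goes to the most effective layer per unit cost
    ```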

  16. Radiological Shielding Design for the Neutron High-Resolution Backscattering Spectrometer EMU at the OPAL Reactor

    NASA Astrophysics Data System (ADS)

    Ersez, Tunay; Esposto, Fernando; Souza, Nicolas R. de

    2017-09-01

    The shielding for the neutron high-resolution backscattering spectrometer (EMU) located at the OPAL reactor (ANSTO) was designed using the Monte Carlo code MCNP 5-1.60. The proposed shielding design has produced compact shielding assemblies, such as the neutron pre-monochromator bunker with sliding cylindrical block shields to accommodate a range of neutron take-off angles, and in the experimental area - shielding of neutron focusing guides, choppers, flight tube, backscattering monochromator, and additional shielding elements inside the Scattering Tank. These shielding assemblies meet safety and engineering requirements and cost constraints. The neutron dose rates around the EMU instrument were reduced to < 0.5 µSv/h and the gamma dose rates to a safe working level of ≤ 3 µSv/h.

  17. Deep Space Test Bed for Radiation Studies

    NASA Technical Reports Server (NTRS)

    Adams, James H.; Christl, Mark; Watts, John; Kuznetsov, Eugene; Lin, Zi-Wei

    2006-01-01

    A key factor affecting the technical feasibility and cost of missions to Mars or the Moon is the need to protect the crew from ionizing radiation in space. Some analyses indicate that large amounts of spacecraft shielding may be necessary for crew safety. The shielding requirements are driven by the need to protect the crew from Galactic cosmic rays (GCR). Recent research activities aimed at enabling manned exploration have included shielding materials studies. A major goal of this research is to develop accurate radiation transport codes to calculate the shielding effectiveness of materials and to develop effective shielding strategies for spacecraft design. Validation of these models and calculations must be addressed in a relevant radiation environment to assure their technical readiness and accuracy. Test data obtained in the deep space radiation environment can provide definitive benchmarks and yield uncertainty estimates of the radiation transport codes. The two approaches presently used for code validation are ground based testing at particle accelerators and flight tests in high-inclination low-earth orbits provided by the shuttle, free-flyer platforms, or polar-orbiting satellites. These approaches have limitations in addressing all the radiation-shielding issues of deep space missions in both technical and practical areas. An approach based on long duration high altitude polar balloon flights provides exposure to the galactic cosmic ray composition and spectra encountered in deep space at a lower cost and with easier and more frequent access than afforded with spaceflight opportunities. This approach also results in shorter development times than spaceflight experiments, which is important for addressing changing program goals and requirements.

  18. Cart3D Simulations for the First AIAA Sonic Boom Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2014-01-01

    Simulation results for the First AIAA Sonic Boom Prediction Workshop (LBW1) are presented using an inviscid, embedded-boundary Cartesian mesh method. The method employs adjoint-based error estimation and adaptive meshing to automatically determine resolution requirements of the computational domain. Results are presented for both mandatory and optional test cases. These include an axisymmetric body of revolution, a 69deg delta wing model and a complete model of the Lockheed N+2 supersonic tri-jet with V-tail and flow through nacelles. In addition to formal mesh refinement studies and examination of the adjoint-based error estimates, mesh convergence is assessed by presenting simulation results for meshes at several resolutions which are comparable in size to the unstructured grids distributed by the workshop organizers. Data provided includes both the pressure signals required by the workshop and information on code performance in both memory and processing time. Various enhanced techniques offering improved simulation efficiency will be demonstrated and discussed.

  19. Electron Cyclotron Current Drive Efficiency in General Tokamak Geometry and Its Application to Advanced Tokamak Plasmas

    NASA Astrophysics Data System (ADS)

    Lin-Liu, Y. R.; Chan, V. S.; Luce, T. C.; Prater, R.

    1998-11-01

    Owing to the relativistic mass shift in the cyclotron resonance condition, a simple and accurate interpolation formula for estimating the current drive efficiency, such as those of S.C. Chiu et al. [Nucl. Fusion 29, 2175 (1989)] and D.A. Ehst and C.F.F. Karney [Nucl. Fusion 31, 1933 (1991)] commonly used in FWCD, is not available in the case of ECCD. In this work, we model ECCD using adjoint techniques. A semi-analytic adjoint function appropriate for general tokamak geometry is obtained using Fisch's relativistic collision model. Predictions of off-axis ECCD qualitatively and semi-quantitatively agree with those of Cohen [R.H. Cohen, Phys. Fluids 30, 2442 (1987)], currently implemented in the ray-tracing code TORAY. The dependences of the current drive efficiency on the wave launch configuration and the plasma parameters will be presented. Strong absorption of the wave away from the resonance layer is shown to be an important factor in optimizing off-axis ECCD for application to advanced tokamak operations.

  20. Magnetic shield for turbomolecular pump of the Magnetized Plasma Linear Experimental device at Saha Institute of Nuclear Physics.

    PubMed

    Biswas, Subir; Chattopadhyay, Monobir; Pal, Rabindranath

    2011-01-01

    The turbomolecular pump of the Magnetized Plasma Linear Experimental device is protected from damage by a magnetic shield. As the pump runs continuously in a magnetic field environment during a plasma physics experiment, it may get damaged owing to eddy-current effects. For design and testing of the shield, we first simulate in detail various aspects of magnetic shield layouts using a readily available field design code. The performance of the shield, made from two half-cylinders of soft iron, is experimentally observed to agree very well with the simulation results.

  1. Poster - 28: Shielding of X-ray Rooms in Ontario in the Absence of Best Practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frimeth, Jeff; Richer, Jeff; Nesbitt, James

    This poster will be strictly based on the Healing Arts Radiation Protection (HARP) Act, Regulation 543 under this Act (X-ray Safety Code), and personal communication the presenting author has had. In Ontario, the process of approval of an X-ray machine installation by the Director of the X-ray Inspection Service (XRIS) follows a certain protocol. Initially, the applicant submits a series of forms, including recommended shielding amounts, in order to satisfy the law. This documentation is then transferred to a third-party vendor (i.e. a professional engineer – P.Eng.) outsourced by the Ministry of Health and Long-term Care (MOHLTC). The P.Eng. then evaluates the submitted documentation for appropriate fulfillment of the HARP Act and Reg. 543 requirements. If the P.Eng.'s evaluation of the documentation is to their satisfaction, the XRIS is then notified. Finally, the Director will then issue a letter of approval to install the equipment at the facility. The methodology required to be used by the P.Eng. in order to determine the required amounts of protective barriers, and recommended to be used by the applicant, is contained within Safety Code 20A. However, Safety Code 35 has replaced the obsolete Safety Code 20A document and employs best practices in shielding design. This talk will focus further on specific intentions and limitations of Safety Code 20A. Furthermore, this talk will discuss the definition of the "practice of professional engineering" in Ontario. COMP members who are involved in shielding design are strongly encouraged to attend.

  2. Adjoint sensitivity analysis of chaotic dynamical systems with non-intrusive least squares shadowing

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick J.

    2017-11-01

    This paper presents a discrete adjoint version of the recently developed non-intrusive least squares shadowing (NILSS) algorithm, which circumvents the instability that conventional adjoint methods encounter for chaotic systems. The NILSS approach involves solving a smaller minimization problem than other shadowing approaches and can be implemented with only minor modifications to preexisting tangent and adjoint solvers. Adjoint NILSS is demonstrated on a small chaotic ODE, a one-dimensional scalar PDE, and a direct numerical simulation (DNS) of the minimal flow unit, a turbulent channel flow on a small spatial domain. This is the first application of an adjoint shadowing-based algorithm to a three-dimensional turbulent flow.

  3. Radiation production and absorption in human spacecraft shielding systems under high charge and energy Galactic Cosmic Rays: Material medium, shielding depth, and byproduct aspects

    NASA Astrophysics Data System (ADS)

    Barthel, Joseph; Sarigul-Klijn, Nesrin

    2018-03-01

    Deep space missions such as the planned 2025 mission to asteroids require spacecraft shields to protect electronics and humans from adverse effects caused by the space radiation environment, primarily Galactic Cosmic Rays. This paper first reviews the theory on how these rays of charged particles interact with matter, and then presents a simulation for a 500 day Mars flyby mission using a deterministic based computer code. High density polyethylene and aluminum shielding materials at a solar minimum are considered. Plots of effective dose with varying shield depth, charged particle flux, and dose in silicon and human tissue behind shielding are presented.

  4. Local-in-Time Adjoint-Based Method for Optimal Control/Design Optimization of Unsteady Compressible Flows

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2009-01-01

    We study local-in-time adjoint-based methods for minimization of flow-matching functionals subject to the 2-D unsteady compressible Euler equations. The key idea of the local-in-time method is to construct a very accurate approximation of the global-in-time adjoint equations and the corresponding sensitivity derivative by using only local information available on each time subinterval. In contrast to conventional time-dependent adjoint-based optimization methods which require backward-in-time integration of the adjoint equations over the entire time interval, the local-in-time method solves local adjoint equations sequentially over each time subinterval. Since each subinterval contains relatively few time steps, the storage cost of the local-in-time method is much lower than that of the global adjoint formulation, thus making the time-dependent optimization feasible for practical applications. The paper presents a detailed comparison of the local- and global-in-time adjoint-based methods for minimization of a tracking functional governed by the Euler equations describing the flow around a circular bump. Our numerical results show that the local-in-time method converges to the same optimal solution obtained with the global counterpart, while drastically reducing the memory cost as compared to the global-in-time adjoint formulation.

  5. Neutron streaming studies along JET shielding penetrations

    NASA Astrophysics Data System (ADS)

    Stamatelatos, Ion E.; Vasilopoulou, Theodora; Batistoni, Paola; Obryk, Barbara; Popovichev, Sergey; Naish, Jonathan

    2017-09-01

    Neutronic benchmark experiments are carried out at JET aiming to assess the neutronic codes and data used in ITER analysis. Among other activities, experiments are performed in order to validate neutron streaming simulations along long penetrations in the JET shielding configuration. In this work, neutron streaming calculations along the JET personnel entrance maze are presented. Simulations were performed using the MCNP code for Deuterium-Deuterium and Deuterium- Tritium plasma sources. The results of the simulations were compared against experimental data obtained using thermoluminescence detectors and activation foils.

  6. Design of the radiation shielding for the time of flight enhanced diagnostics neutron spectrometer at Experimental Advanced Superconducting Tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, T. F.; Chen, Z. J.; Peng, X. Y.

    A radiation shield has been designed to reduce scattered neutrons and background gamma-rays for the new double-ring Time Of Flight Enhanced Diagnostics (TOFED) spectrometer. The shielding was designed based on simulation with the Monte Carlo code MCNP5. A dedicated model of the EAST tokamak has been developed together with the emission neutron source profile and spectrum; the latter were simulated with the Nubeam and GENESIS codes. A significant reduction of the background radiation at the detector can be achieved, which satisfies the requirements of TOFED. The intensities of the scattered and direct neutrons in the line of sight of the TOFED neutron spectrometer at EAST are studied for future data interpretation.

  7. Determination and Fabrication of New Shield Super Alloys Materials for Nuclear Reactor Safety by Experiments and Cern-Fluka Monte Carlo Simulation Code, Geant4 and WinXCom

    NASA Astrophysics Data System (ADS)

    Aygun, Bünyamin; Korkut, Turgay; Karabulut, Abdulhalik

    2016-05-01

    With fossil fuels facing possible depletion and energy needs increasing, the use of radiation tends to increase. The safety-focused debate about planned nuclear power plants still continues. The objective of this work is to prevent radiation from nuclear reactors spreading into the environment. To this end, we produced new, higher-performance shielding materials that retain radiation effectively during reactor operation. Additives used in the new shielding materials include iron (Fe), rhenium (Re), nickel (Ni), chromium (Cr), boron (B), copper (Cu), tungsten (W), tantalum (Ta), and boron carbide (B4C). The experimental results indicate that these materials are good shields against gamma rays and neutrons. The powder metallurgy technique was used to produce the new shielding materials. The CERN FLUKA and Geant4 Monte Carlo simulation codes and WinXCom were used to determine the component percentages of the high-temperature-resistant, fast-neutron and gamma shielding materials. Super alloys were produced, and experimental fast-neutron dose equivalent and gamma-ray absorption measurements of the new shielding materials were then carried out. The produced materials have the qualities to be used safely not only in reactors but also in nuclear medicine treatment rooms, for the storage of nuclear waste, in nuclear research laboratories, and against cosmic radiation in space vehicles.

  8. JASMIN: Japanese-American study of muon interactions and neutron detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakashima, Hiroshi; /JAEA, Ibaraki; Mokhov, N.V.

    Experimental studies of shielding and radiation effects at Fermi National Accelerator Laboratory (FNAL) have been carried out under a collaboration between FNAL and Japan, aiming at benchmarking of simulation codes and the study of irradiation effects for the upgrade and design of new high-energy accelerator facilities. The purposes of this collaboration are (1) acquisition of shielding data in a proton beam energy domain above 100 GeV; (2) further evaluation of the predictive accuracy of the PHITS and MARS codes; (3) modification of physics models and data in these codes if needed; (4) establishment of an irradiation field for radiation effect tests; and (5) development of a code module for improved description of radiation effects. A series of experiments has been performed at the Pbar target station and the NuMI facility, using irradiation of targets with 120 GeV protons for antiproton and neutrino production, as well as the M-test beam line for measuring nuclear data and detector responses. Various nuclear and shielding data have been measured by activation methods with chemical separation techniques as well as by other detectors such as a Bonner ball counter. Analyses of the experimental data are in progress for benchmarking the PHITS and MARS15 codes. In this presentation recent activities and results are reviewed.

  9. Adjoint optimization of natural convection problems: differentially heated cavity

    NASA Astrophysics Data System (ADS)

    Saglietti, Clio; Schlatter, Philipp; Monokrousos, Antonios; Henningson, Dan S.

    2017-12-01

    Optimization of natural convection-driven flows may provide significant improvements to the performance of cooling devices, but a theoretical investigation of such flows has rarely been done. The present paper illustrates an efficient gradient-based optimization method for analyzing such systems. We consider numerically the natural convection-driven flow in a differentially heated cavity for three Prandtl numbers (Pr = 0.15–7) at super-critical conditions. All results and implementations were obtained with the spectral element code Nek5000. The flow is analyzed using linear direct and adjoint computations about a nonlinear base flow, extracting in particular optimal initial conditions using power iteration and the solution of the full adjoint direct eigenproblem. The cost function for both temperature and velocity is based on the kinetic energy and the concept of entransy, which yields a quadratic functional. Results are presented as a function of Prandtl number, time horizon and the weights between kinetic energy and entransy. In particular, it is shown that the maximum transient growth is achieved at time horizons on the order of 5 time units for all cases, whereas for larger time horizons the adjoint mode is recovered as the optimal initial condition. For smaller time horizons, the influence of the weights leads either to a concentric temperature distribution or to an initial condition pattern that opposes the mean shear and grows according to the Orr mechanism. For specific cases, it could also be shown that the computation of optimal initial conditions leads to a degenerate problem, with a potential loss of symmetry. In these situations, it turns out that any initial condition lying in a specific span of the eigenfunctions will yield exactly the same transient amplification. As a consequence, the power iteration converges very slowly and fails to extract all possible optimal initial conditions. To the authors' knowledge, this behavior is illustrated here for the first time.

  10. Spacecraft Solar Particle Event (SPE) Shielding: Shielding Effectiveness as a Function of SPE model as Determined with the FLUKA Radiation Transport Code

    NASA Technical Reports Server (NTRS)

    Koontz, Steve; Atwell, William; Reddell, Brandon; Rojdev, Kristina

    2010-01-01

    Analysis of both satellite and surface neutron monitor data demonstrates that the widely utilized Exponential model of solar particle event (SPE) proton kinetic energy spectra can seriously underestimate SPE proton flux, especially at the highest kinetic energies. The more recently developed Band model produces better agreement with neutron monitor data for ground level events (GLEs) and is believed to be considerably more accurate at high kinetic energies. Here, we report the results of modeling and simulation studies in which the radiation transport code FLUKA (FLUktuierende KAskade) is used to determine the changes in total ionizing dose (TID) and single-event effect (SEE) environments behind aluminum, polyethylene, carbon, and titanium shielding masses when the assumed form (i.e., Band or Exponential) of the solar particle event (SPE) kinetic energy spectrum is changed. The FLUKA simulations are fully three-dimensional, with an isotropic particle flux incident on a concentric spherical-shell shielding mass and detector structure. The effects are reported for both energetic primary protons penetrating the shield mass and secondary particle showers caused by energetic primary protons colliding with shielding mass nuclei. Our results, in agreement with previous studies, show that use of the Exponential form of the event spectrum …

  11. Physical basis of radiation protection in space travel

    NASA Astrophysics Data System (ADS)

    Durante, Marco; Cucinotta, Francis A.

    2011-10-01

    The health risks of space radiation are arguably the most serious challenge to space exploration, possibly preventing these missions due to safety concerns or increasing their costs to amounts beyond what would be acceptable. Radiation in space is substantially different from that on Earth: high-energy (E) and charge (Z) particles (HZE) provide the main contribution to the equivalent dose in deep space, whereas γ rays and low-energy α particles are major contributors on Earth. This difference causes a high uncertainty in the estimated radiation health risk (including cancer and noncancer effects), and makes protection extremely difficult. In fact, shielding is very difficult in space: the very high energy of the cosmic rays and the severe mass constraints in spaceflight represent a serious hindrance to effective shielding. Here the physical basis of space radiation protection is described, including the most recent achievements in space radiation transport codes and shielding approaches. Although deterministic and Monte Carlo transport codes can now describe well the interaction of cosmic rays with matter, more accurate double-differential nuclear cross sections are needed to improve the codes. Models of energy deposition in biological molecules and related effects should also be developed to achieve accurate risk models for long-term exploratory missions. Passive shielding can be effective for solar particle events; however, it is limited for galactic cosmic rays (GCR). Active shielding would have to overcome challenging technical hurdles to protect against GCR. Thus, improved risk assessment and genetic and biomedical approaches are a more likely solution to GCR radiation protection issues.

  12. Meeting Radiation Protection Requirements and Reducing Spacecraft Mass - A Multifunctional Materials Approach

    NASA Technical Reports Server (NTRS)

    Atwell, William; Koontz, Steve; Reddell, Brandon; Rojdev, Kristina; Franklin, Jennifer

    2010-01-01

    Both crew and radiation-sensitive systems, especially electronics, must be protected from the effects of the space radiation environment. One method of mitigating this radiation exposure is to use passive shielding materials. In previous vehicle designs such as the International Space Station (ISS), materials such as aluminum and polyethylene have been used as parasitic shielding to protect crew and electronics from exposure, but these designs add mass and decrease the amount of usable volume inside the vehicle. Thus, it is of interest to understand whether structural materials can also be designed to provide the radiation shielding capability needed for crew and electronics, while still providing weight savings and increased usable volume when compared against previous vehicle shielding designs. In this paper, we present calculations and analysis using the HZETRN (deterministic) and FLUKA (Monte Carlo) codes to investigate the radiation mitigation properties of these structural shielding materials, which include graded-Z and composite materials. This work is also a follow-on to an earlier paper that compared computational results for three radiation transport codes, HZETRN, HETC, and FLUKA, using the February 1956 solar particle event (SPE) spectrum. In the following analysis, we consider the October 1989 Ground Level Enhanced (GLE) SPE as the input source term, based on the Band function fitting method. Using HZETRN and FLUKA, parametric absorbed doses at the center of a hemispherical structure on the lunar surface are calculated for various thicknesses of graded-Z layups and an all-aluminum structure. The HZETRN and FLUKA calculations are compared and are in reasonable (18% to 27%) agreement. Both codes agree on the predicted shielding material performance trends. The results from both HZETRN and FLUKA are analyzed, and the radiation protection properties and potential weight savings of the various materials and material layups are compared.

  13. Optimizing spectral wave estimates with adjoint-based sensitivity maps

    NASA Astrophysics Data System (ADS)

    Orzech, Mark; Veeramony, Jay; Flampouris, Stylianos

    2014-04-01

    A discrete numerical adjoint has recently been developed for the stochastic wave model SWAN. In the present study, this adjoint code is used to construct spectral sensitivity maps for two nearshore domains. The maps display the correlations of spectral energy levels throughout the domain with the observed energy levels at a selected location or region of interest (LOI/ROI), providing a full spectrum of values at all locations in the domain. We investigate the effectiveness of sensitivity maps based on significant wave height (Hs) in determining alternate offshore instrument deployment sites when a chosen nearshore location or region is inaccessible. Wave and bathymetry datasets are employed from one shallower, small-scale domain (Duck, NC) and one deeper, larger-scale domain (San Diego, CA). The effects of seasonal changes in wave climate, errors in bathymetry, and multiple assimilation points on sensitivity map shapes and model performance are investigated. Model accuracy is evaluated by comparing spectral statistics as well as with an RMS skill score, which estimates a mean model-data error across all spectral bins. Results indicate that data assimilation from identified high-sensitivity alternate locations consistently improves model performance at nearshore LOIs, while assimilation from low-sensitivity locations results in lesser or no improvement. Use of sub-sampled or alongshore-averaged bathymetry has a domain-specific effect on model performance when assimilating from a high-sensitivity alternate location. When multiple alternate assimilation locations are used from areas of lower sensitivity, model performance may be worse than with a single, high-sensitivity assimilation point.

  14. An efficient HZETRN (a galactic cosmic ray transport code)

    NASA Technical Reports Server (NTRS)

    Shinn, Judy L.; Wilson, John W.

    1992-01-01

    An accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy ions is needed. HZETRN is a deterministic code developed at Langley Research Center that is constantly under improvement, both in physics and in numerical computation, and is targeted for such use. One problem area connected with the space-marching technique used in this code is the propagation of the local truncation error. By improving the numerical algorithms for interpolation, integration, and the grid distribution formula, the efficiency of the code is increased by a factor of eight as the number of energy grid points is reduced. A numerical accuracy of better than 2 percent for a shield thickness of 150 g/cm^2 is found when a 45-point energy grid is used. The propagation step size, which is related to the perturbation theory, is also reevaluated.

  15. Unsteady adjoint for large eddy simulation of a coupled turbine stator-rotor system

    NASA Astrophysics Data System (ADS)

    Talnikar, Chaitanya; Wang, Qiqi; Laskowski, Gregory

    2016-11-01

    Unsteady fluid flow simulations like large eddy simulation are crucial in capturing key physics in turbomachinery applications like separation and wake formation in flow over a turbine vane with a downstream blade. To determine how sensitive the design objectives of the coupled system are to control parameters, an unsteady adjoint is needed. It enables the computation of the gradient of an objective with respect to a large number of inputs in a computationally efficient manner. In this paper we present unsteady adjoint solutions for a coupled turbine stator-rotor system. As the transonic fluid flows over the stator vane, the boundary layer transitions to turbulence. The turbulent wake then impinges on the rotor blades, causing early separation. This coupled system exhibits chaotic dynamics which causes conventional adjoint solutions to diverge exponentially, resulting in the corruption of the sensitivities obtained from the adjoint solutions for long-time simulations. In this presentation, adjoint solutions for aerothermal objectives are obtained through a localized adjoint viscosity injection method which aims to stabilize the adjoint solution and maintain accurate sensitivities. Preliminary results obtained from the supercomputer Mira will be shown in the presentation.

  16. Adjoint-based optimization of PDEs in moving domains

    NASA Astrophysics Data System (ADS)

    Protas, Bartosz; Liao, Wenyuan

    2008-02-01

    In this investigation we address the problem of adjoint-based optimization of PDE systems in moving domains. As an example we consider the one-dimensional heat equation with prescribed boundary temperatures and heat fluxes. We discuss two methods of deriving an adjoint system necessary to obtain a gradient of a cost functional. In the first approach we derive the adjoint system after mapping the problem to a fixed domain, whereas in the second approach we derive the adjoint directly in the moving domain by employing methods of the noncylindrical calculus. We show that the operations of transforming the system from a variable to a fixed domain and deriving the adjoint do not commute and that, while the gradient information contained in both systems is the same, the second approach results in an adjoint problem with a simpler structure which is therefore easier to implement numerically. This approach is then used to solve a moving boundary optimization problem for our model system.

  17. Forward and adjoint spectral-element simulations of seismic wave propagation using hardware accelerators

    NASA Astrophysics Data System (ADS)

    Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri

    2015-04-01

    Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, in which seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are mainly two choices of language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL was adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and to generate optimized source code for both CUDA and OpenCL, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.

  18. Evaluation of an alternative shielding materials for F-127 transport package

    NASA Astrophysics Data System (ADS)

    Gual, Maritza R.; Mesquita, Amir Z.; Pereira, Cláubia

    2018-03-01

    Lead is used as the radiation shielding material in Nordion's F-127 source shipping container, which is used for the transport and storage of the GammaBeam-127 cobalt-60 source at the Nuclear Technology Development Center (CDTN) located in Belo Horizonte, Brazil. As alternatives, Th, Tl and WC have been evaluated as radiation shielding materials. The goal is to check their behavior regarding shielding and dose. The Monte Carlo code MCNPX is used for the simulations. In the MCNPX calculation, a cylinder was used as the exclusion surface instead of a sphere. Validation of the MCNPX gamma dose calculations was carried out through comparison with experimental measurements. The results show that tungsten carbide (WC) is a better shielding material for γ rays than the lead shielding.

  19. Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrisson, G.; Marleau, G.

    2012-07-01

    The Canadian SCWR has the potential to achieve the goals that the Generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06, and different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file, have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option to use in this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give the most consistent results with those of SERPENT. (authors)

  20. Adjoint Techniques for Topology Optimization of Structures Under Damage Conditions

    NASA Technical Reports Server (NTRS)

    Akgun, Mehmet A.; Haftka, Raphael T.

    2000-01-01

    The objective of this cooperative agreement was to seek computationally efficient ways to optimize aerospace structures subject to damage tolerance criteria. Optimization was to involve sizing as well as topology optimization. The work was done in collaboration with Steve Scotti, Chauncey Wu and Joanne Walsh at the NASA Langley Research Center. Computation of constraint sensitivity is normally the most time-consuming step of an optimization procedure. The cooperative work first focused on this issue and implemented the adjoint method of sensitivity computation (Haftka and Gurdal, 1992) in an optimization code (runstream) written in Engineering Analysis Language (EAL). The method was implemented both for bar and plate elements, including buckling sensitivity for the latter. Lumping of constraints was investigated as a means to reduce the computational cost. Adjoint sensitivity computation was developed and implemented for lumped stress and buckling constraints. The cost of the direct method and the adjoint method was compared for various structures with and without lumping. The results were reported in two papers (Akgun et al., 1998a and 1999). It is desirable to optimize the topology of an aerospace structure subject to a large number of damage scenarios so that a damage tolerant structure is obtained. Including damage scenarios in the design procedure is critical in order to avoid large mass penalties at later stages (Haftka et al., 1983). A common method for topology optimization is that of compliance minimization (Bendsoe, 1995), which has not been used for damage tolerant design. In the present work, topology optimization is treated as a conventional problem aiming to minimize the weight subject to stress constraints. Multiple damage configurations (scenarios) are considered. Each configuration has its own structural stiffness matrix and, normally, requires factoring of the matrix and solution of the system of equations. Damage that is expected to be tolerated is local and represents a small change in the stiffness matrix compared to the baseline (undamaged) structure. The exact solution to a slightly modified set of equations can be obtained from the baseline solution economically without actually solving the modified system. Sherman-Morrison-Woodbury (SMW) formulas are matrix update formulas that allow this (Akgun et al., 1998b). SMW formulas were therefore used here to compute adjoint displacements for sensitivity computation and structural displacements in damaged configurations.
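
    To make the matrix-update idea concrete, the sketch below solves a locally "damaged" system K + U V^T by reusing the baseline solution of K via the Sherman-Morrison-Woodbury identity. It is a minimal numerical illustration with an invented stiffness matrix and damage pattern, not the authors' EAL implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        # Baseline "stiffness" matrix (symmetric positive definite, invented for the example).
        A = rng.standard_normal((n, n))
        K = A @ A.T + n * np.eye(n)
        f = rng.standard_normal(n)

        # Baseline solution (in practice one would store and reuse a factorization of K).
        u0 = np.linalg.solve(K, f)

        # Local "damage" as a low-rank modification K_d = K + U @ V.T on a few DOFs.
        k = 3
        dofs = [10, 11, 12]
        U = np.zeros((n, k))
        V = np.zeros((n, k))
        for j, d in enumerate(dofs):
            U[d, j] = 1.0
            V[d, j] = -0.3 * K[d, d]          # 30% stiffness reduction at these DOFs

        # Sherman-Morrison-Woodbury:
        #   (K + U V^T)^-1 f = u0 - K^-1 U (I + V^T K^-1 U)^-1 V^T u0
        KinvU = np.linalg.solve(K, U)          # uses only the baseline operator
        capacitance = np.eye(k) + V.T @ KinvU  # small k x k system
        u_damaged = u0 - KinvU @ np.linalg.solve(capacitance, V.T @ u0)

        # Check against a direct solve of the fully assembled damaged system.
        u_direct = np.linalg.solve(K + U @ V.T, f)
        print(np.max(np.abs(u_damaged - u_direct)))   # should be near machine precision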

  1. 34. DETAILS AND SECTIONS OF SHIELDING TANK FUEL ELEMENT SUPPORT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    34. DETAILS AND SECTIONS OF SHIELDING TANK FUEL ELEMENT SUPPORT FRAME. F.C. TORKELSON DRAWING NUMBER 842-ARVFS-701-S-4. INEL INDEX CODE NUMBER: 075 0701 60 851 151978. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  2. Source Parameter Estimation using the Second-order Closure Integrated Puff Model

    DTIC Science & Technology

    The sensor measurements are categorized as triggered and non-triggered based on the recorded concentration measurements and a threshold concentration value. Using each measured value, sources of adjoint material are created from the triggered and non-triggered sensors, and the adjoint transport equations are solved to predict the adjoint concentration fields. The adjoint source strength is inversely proportional to the concentration measurement.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richer, Jeff; Frimeth, Jeff; Nesbitt, James

    Purpose: In Ontario, shielding for all X-ray machines, including CT scanners, must be evaluated according to Safety Code 20A (Health Canada, 1983), which is based on NCRP-49 (NCRP, 1976). NCRP-147 (NCRP, 2004) is the international standard for CT scanner shielding calculations and is also referenced in Safety Code 35 (Health Canada, 2008), which was published to supersede SC20A. The goal of this work is to demonstrate the cost effectiveness of NCRP-147 for CT scanner shielding. Methods: CT scanner shielding calculations are performed using SC20A and NCRP-147 for three cases: (1) a room located on the third floor with the nearest building 75 m away; (2) a room with high-occupancy uncontrolled adjacent spaces; (3) two side-by-side rooms on the main floor. Results: Case 1, SC20A: the exterior windows required 0.1 mm of Pb to protect members of the public who may occupy the building at 75 m; NCRP-147: no additional shielding required. Case 2, SC20A: two walls adjacent to high-occupancy uncontrolled space required an additional 1.58 mm Pb; NCRP-147: no additional shielding required. Case 3, SC20A: the entire floor and ceiling slabs in both rooms required an additional 0.79 mm Pb, and 0.79 mm Pb was added to the walls from the ceiling to overlap the existing Pb shielding in the walls; NCRP-147: no additional shielding required. Conclusion: The application of NCRP Report No. 147 affords the required protection to staff and the public, in the true spirit of the ALARA principle, taking into account relevant social and economic factors.

  4. Shielding of relativistic protons.

    PubMed

    Bertucci, A; Durante, M; Gialanella, G; Grossi, G; Manti, L; Pugliese, M; Scampoli, P; Mancusi, D; Sihver, L; Rusek, A

    2007-06-01

    Protons are the most abundant element in the galactic cosmic radiation, and their energy spectrum peaks around 1 GeV. Shielding of relativistic protons is therefore a key problem in the radiation protection strategy for crewmembers involved in long-term missions in deep space. Hydrogen ions were accelerated up to 1 GeV at the NASA Space Radiation Laboratory, Brookhaven National Laboratory, New York. The proton beam was shielded with thick (about 20 g/cm2) blocks of lucite (PMMA) or aluminium (Al). We found that the dose rate was increased 40-60% by the shielding and decreased as a function of the distance along the axis. Simulations using the General-Purpose Particle and Heavy-Ion Transport code System (PHITS) show that the dose increase is mostly caused by secondary protons emitted by the target. The modified radiation field behind the shield was characterized for its biological effectiveness by measuring chromosomal aberrations in human peripheral blood lymphocytes exposed just behind the shield block, or to the direct beam, in the dose range 0.5-3 Gy. Notwithstanding the increased dose per incident proton, the fraction of aberrant cells at the same dose in the sample position was not significantly modified by the shield. The PHITS simulations show that, although secondary protons are slower than the incident nuclei, the LET spectrum is still contained in the low-LET range (<10 keV/μm), which explains the approximately unitary value measured for the relative biological effectiveness.

  5. Analysis and development of adjoint-based h-adaptive direct discontinuous Galerkin method for the compressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Cheng, Jian; Yue, Huiqiang; Yu, Shengjiao; Liu, Tiegang

    2018-06-01

    In this paper, an adjoint-based high-order h-adaptive direct discontinuous Galerkin method is developed and analyzed for the two-dimensional steady-state compressible Navier-Stokes equations. Particular emphasis is devoted to the analysis of adjoint consistency for three different direct discontinuous Galerkin discretizations: the original direct discontinuous Galerkin method (DDG), the direct discontinuous Galerkin method with interface correction (DDG(IC)) and the symmetric direct discontinuous Galerkin method (SDDG). Theoretical analysis shows that the extra interface correction term adopted in the DDG(IC) and SDDG methods plays a key role in preserving adjoint consistency. To be specific, for the model problem considered in this work, we prove that the original DDG method is not adjoint consistent, while the DDG(IC) and SDDG methods can be adjoint consistent with appropriate treatment of boundary conditions and correct modifications of the underlying output functionals. The performance of the three DDG methods is carefully investigated and evaluated through typical test cases. Based on the theoretical analysis, an adjoint-based h-adaptive DDG(IC) method is further developed and evaluated; numerical experiments show its potential in applications of adjoint-based adaptation for simulating compressible flows.

  6. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
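
    As a toy illustration of the cost argument above (and not of the NASA Langley CFD machinery itself), the sketch below uses an invented linear "solver" A u = B a with output J = g·u: a single adjoint solve with A^T yields dJ/da for every design parameter, which matches one-at-a-time finite differences that would each require an extra solve.

        import numpy as np

        rng = np.random.default_rng(1)
        n, m = 50, 20                      # state size, number of design parameters
        A = rng.standard_normal((n, n)) + n * np.eye(n)   # fixed "flow" operator (toy)
        B = rng.standard_normal((n, m))                   # b(a) = B @ a (toy dependence)
        g = rng.standard_normal(n)                        # output weights, J = g . u

        def solve_state(a):
            return np.linalg.solve(A, B @ a)

        a0 = rng.standard_normal(m)
        u0 = solve_state(a0)
        J0 = g @ u0

        # Adjoint: A^T lam = g, then dJ/da_k = lam . (db/da_k) = (B^T lam)_k.
        lam = np.linalg.solve(A.T, g)      # one extra linear solve, independent of m
        dJda_adjoint = B.T @ lam

        # Forward finite differences: m extra solves.
        eps = 1e-6
        dJda_fd = np.array([
            (g @ solve_state(a0 + eps * np.eye(m)[:, k]) - J0) / eps for k in range(m)
        ])
        # The two gradients should agree closely (the toy objective is linear in a).
        print(np.max(np.abs(dJda_adjoint - dJda_fd)))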

  7. Universal Racah matrices and adjoint knot polynomials: Arborescent knots

    NASA Astrophysics Data System (ADS)

    Mironov, A.; Morozov, A.

    2016-04-01

    By now it is well established that the quantum dimensions of descendants of the adjoint representation can be described in a universal form, independent of a particular family of simple Lie algebras. The Rosso-Jones formula then implies a universal description of the adjoint knot polynomials for torus knots, which in particular unifies the HOMFLY (SU(N)) and Kauffman (SO(N)) polynomials. For E8 the adjoint representation is also fundamental. We suggest extending the universality from the dimensions to the Racah matrices, and this immediately produces a unified description of the adjoint knot polynomials for all arborescent (double-fat) knots, including twist, 2-bridge and pretzel knots. Technically, we develop together the universality and the "eigenvalue conjecture", which expresses the Racah and mixing matrices through the eigenvalues of the quantum R-matrix; for dealing with the adjoint polynomials one has to extend it to the previously unknown 6 × 6 case. The adjoint polynomials do not distinguish between mutants and therefore are not very efficient in knot theory; however, universal polynomials in higher representations can probably be better in this respect.

  8. Methodology, status and plans for development and assessment of Cathare code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bestion, D.; Barre, F.; Faydide, B.

    1997-07-01

    This paper presents the methodology, status and plans for the development, assessment and uncertainty evaluation of the Cathare code. Cathare is a thermalhydraulic code developed by CEA (DRN), IPSN, EDF and FRAMATOME for PWR safety analysis. First, the status of the code development and assessment is presented, along with the general strategy used for the development and assessment of the code. Analytical experiments with separate effect tests and component tests are used for the development and validation of closure laws. Successive Revisions of constitutive laws are implemented in successive Versions of the code and assessed. System tests or integral tests are used to validate the general consistency of the Revision. Each delivery of a code Version + Revision is fully assessed and documented. A methodology is being developed to determine the uncertainty on all constitutive laws of the code using calculations of many analytical tests and applying the Discrete Adjoint Sensitivity Method (DASM). Finally, the plans for future development of the code are presented. They concern the optimization of code performance through parallel computing (the code will be used for real-time full-scope plant simulators), the coupling with many other codes (neutronic codes, severe accident codes), and the application of the code to containment thermalhydraulics. Physical improvements are also required in the field of low-pressure transients and in the modeling for the 3-D model.

  9. Development of a new version of the Vehicle Protection Factor Code (VPF3)

    NASA Astrophysics Data System (ADS)

    Jamieson, Terrance J.

    1990-10-01

    The Vehicle Protection Factor (VPF) Code is an engineering tool for estimating radiation protection afforded by armoured vehicles and other structures exposed to neutron and gamma ray radiation from fission, thermonuclear, and fusion sources. A number of suggestions for modifications have been offered by users of early versions of the code. These include: implementing some of the more advanced features of the air transport rating code, ATR5, used to perform the air over ground radiation transport analyses; allowing the ability to study specific vehicle orientations within the free field; implementing an adjoint transport scheme to reduce the number of transport runs required; investigating the possibility of accelerating the transport scheme; and upgrading the computer automated design (CAD) package used by VPF. The generation of radiation free field fluences for infinite air geometries as required for aircraft analysis can be accomplished by using ATR with the air over ground correction factors disabled. Analysis of the effects of fallout bearing debris clouds on aircraft will require additional modelling of VPF.

  10. Self-adjointness of deformed unbounded operators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Much, Albert

    2015-09-15

    We consider deformations of unbounded operators by using the novel construction tool of warped convolutions. By using the Kato-Rellich theorem, we show that unbounded self-adjoint deformed operators are self-adjoint if they satisfy a certain condition. This condition proves to be necessary for the oscillatory integral to be well defined. Moreover, different proofs are given for the self-adjointness of deformed unbounded operators in the context of quantum mechanics and quantum field theory.

  11. Comparison of space radiation calculations for deterministic and Monte Carlo transport codes

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo

    For space radiation protection of astronauts or electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit and HETC-HEDS. The deterministic and Monte Carlo codes complement each other in that deterministic codes are very fast while Monte Carlo codes are more elaborate. Therefore it is important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these different codes in their space radiation applications by comparing their output results in the same given space radiation environments, shielding geometry and material. Typical space radiation environments, such as the 1977 solar minimum galactic cosmic ray environment, are used as the well-defined input, and simple geometries made of aluminum, water and/or polyethylene are used to represent the shielding material. We then compare various outputs of these codes, such as the dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes. At the same time, they help us to learn the qualitative and quantitative features that these transport codes have in common.

  12. HT2DINV: A 2D forward and inverse code for steady-state and transient hydraulic tomography problems

    NASA Astrophysics Data System (ADS)

    Soueid Ahmed, A.; Jardani, A.; Revil, A.; Dupont, J. P.

    2015-12-01

    Hydraulic tomography is a technique used to characterize the spatial heterogeneities of storativity and transmissivity fields. The responses of an aquifer to a source of hydraulic stimulation are used to recover the features of the estimated fields using inverse techniques. We developed a free-source 2D MATLAB package for performing hydraulic tomography analysis in steady-state and transient regimes. The package uses the finite element method to solve the groundwater flow equation for simple or complex geometries, accounting for the anisotropy of the material properties. The inverse problem is based on implementing the geostatistical quasi-linear approach of Kitanidis combined with the adjoint-state method to compute the required sensitivity matrices. For underdetermined inverse problems, the adjoint-state method provides a faster and more accurate approach for the evaluation of sensitivity matrices compared with the finite differences method. Our methodology is organized in a way that permits the end-user to activate parallel computing in order to reduce the computational burden. Three case studies are investigated, demonstrating the robustness and efficiency of our approach for inverting hydraulic parameters.
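
    A minimal sketch of the adjoint-state gradient idea referenced above, on a synthetic 1D leaky-aquifer operator assembled as A(m) = alpha*I + sum_i m_i K_i (all data invented; this is not the HT2DINV implementation): one adjoint solve gives the gradient of the head misfit with respect to every element transmissivity.

        import numpy as np

        rng = np.random.default_rng(2)
        n_el, n_nd = 30, 31                 # 1D synthetic "aquifer": elements and nodes
        ke = np.array([[1.0, -1.0], [-1.0, 1.0]])   # base element matrix
        alpha = 0.05                        # leakage term keeps A(m) invertible

        def assemble(m):
            A = alpha * np.eye(n_nd)
            for i, mi in enumerate(m):
                A[i:i+2, i:i+2] += mi * ke
            return A

        q = np.zeros(n_nd)
        q[3] = 1.0                           # pumping/injection stimulation at node 3
        m_true = np.exp(0.3 * rng.standard_normal(n_el))
        obs_idx = [5, 12, 20, 27]            # four head "gauges"
        C = np.eye(n_nd)[obs_idx]
        d = C @ np.linalg.solve(assemble(m_true), q)   # synthetic observations

        def misfit_and_grad(m):
            A = assemble(m)
            h = np.linalg.solve(A, q)        # forward problem
            r = C @ h - d
            J = 0.5 * r @ r
            lam = np.linalg.solve(A.T, C.T @ r)          # single adjoint solve
            # dJ/dm_i = -lam^T (dA/dm_i) h, and dA/dm_i is the i-th element matrix.
            grad = np.array([-lam[i:i+2] @ (ke @ h[i:i+2]) for i in range(n_el)])
            return J, grad

        m0 = np.ones(n_el)
        J0, g0 = misfit_and_grad(m0)

        # Finite-difference check on one parameter (adjoint gradient vs. perturbation).
        eps = 1e-7
        m1 = m0.copy(); m1[7] += eps
        print(g0[7], (misfit_and_grad(m1)[0] - J0) / eps)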

  13. Constructing the spectral web of rotating plasmas

    NASA Astrophysics Data System (ADS)

    Goedbloed, Hans

    2012-10-01

    Rotating plasmas are ubiquitous in nature. The theory of MHD stability of such plasmas, initiated a long time ago, has severely suffered from the widespread misunderstanding that it necessarily involves non-self-adjoint operators. It has been shown (J.P. Goedbloed, PPCF 16, 074001, 2011; Goedbloed, Keppens and Poedts, Advanced Magnetohydrodynamics, Cambridge, 2010) that, on the contrary, the spectral theory of moving plasmas can be constructed entirely on the basis of energy conservation and self-adjointness of the occurring operators. The spectral web is a further development along this line. It involves the construction of a network of curves in the complex omega-plane associated with the complex complementary energy, which is the energy needed to maintain harmonic time dependence in an open system. Vanishing of that energy, at the intersections of the mentioned curves, yields the eigenvalues of the closed system. This permits the enormous diversity of MHD instabilities of rotating tokamaks, accretion disks about compact objects, and jets emitted from those objects to be considered from a single viewpoint. This will be illustrated with results obtained with a new spectral code (ROC).

  14. Computer aided radiation analysis for manned spacecraft

    NASA Technical Reports Server (NTRS)

    Appleby, Matthew H.; Griffin, Brand N.; Tanner, Ernest R., II; Pogue, William R.; Golightly, Michael J.

    1991-01-01

    In order to assist in the design of radiation shielding an analytical tool is presented that can be employed in combination with CAD facilities and NASA transport codes. The nature of radiation in space is described, and the operational requirements for protection are listed as background information for the use of the technique. The method is based on the Boeing radiation exposure model (BREM) for combining NASA radiation transport codes and CAD facilities, and the output is given as contour maps of the radiation-shield distribution so that dangerous areas can be identified. Computational models are used to solve the 1D Boltzmann transport equation and determine the shielding needs for the worst-case scenario. BREM can be employed directly with the radiation computations to assess radiation protection during all phases of design which saves time and ultimately spacecraft weight.

  15. Acceleration of MCNP calculations for small pipe configurations by using Weight Window importance cards created by the SN-3D code ATTILA

    NASA Astrophysics Data System (ADS)

    Castanier, Eric; Paterne, Loic; Louis, Céline

    2017-09-01

    In nuclear engineering, both time and precision must be managed. In shielding design especially, accuracy and efficiency are needed to reduce cost (shielding thickness optimization), and 3D codes are used for this purpose. In this paper, we examine whether the CADIS method can easily be applied to the shielding design of small pipes that go through large concrete walls. We assess the impact of weight windows (WW) generated by the 3D deterministic code ATTILA versus WW generated directly by MCNP (an iterative and manual process). The comparison is based on the quality of the convergence (estimated relative error (σ), variance of variance (VOV) and figure of merit (FOM)), on time (computing time plus modelling) and on the effort required of the engineer.
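
    For orientation, the sketch below shows the consistent source biasing and weight-window targets that a CADIS-style scheme derives from an adjoint (importance) function; the cell-wise source and importance arrays are invented placeholders rather than output from ATTILA or MCNP.

        import numpy as np

        # Toy 1-D duct: a uniform source in the first cells and an importance function
        # that grows toward the detector end. Both arrays are invented; in practice the
        # adjoint flux would come from a deterministic SN solution.
        n_cells = 10
        q = np.zeros(n_cells)
        q[:3] = 1.0 / 3.0                                 # true source pdf
        phi_adj = np.exp(np.linspace(0.0, 4.0, n_cells))  # importance phi-dagger

        R = float(np.sum(phi_adj * q))   # deterministic estimate of the detector response
        q_biased = phi_adj * q / R       # biased source pdf (still sums to one)
        w_target = R / phi_adj           # target weight (= birth weight q/q_biased in source cells)
        ww_lower = w_target / 2.0        # e.g. centre the weight windows on the target weights

        for i in range(n_cells):
            print(f"cell {i:2d}: q={q[i]:.3f}  q_biased={q_biased[i]:.3f}  "
                  f"w_target={w_target[i]:.3e}  ww_low={ww_lower[i]:.3e}")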

  16. A method to optimize the shield compact and lightweight combining the structure with components together by genetic algorithm and MCNP code.

    PubMed

    Cai, Yao; Hu, Huasi; Pan, Ziheng; Hu, Guang; Zhang, Tao

    2018-05-17

    To obtain a shield for neutrons and gamma rays that is compact and lightweight, a method that optimizes the structure and the material components together was established, employing genetic algorithms and the MCNP code. As a typical case, the fission energy spectrum of 235U, which mixes neutrons and gamma rays, was adopted in this study. Six types of materials were presented and optimized by the method. Spherical geometry was adopted in the optimization after checking the geometry effect. Simulations were made to verify the reliability of the optimization method and the efficiency of the optimized materials. To compare the materials visually and conveniently, the volume and weight needed to build a shield are employed. The results showed that the composite multilayer material has the best performance. Copyright © 2018 Elsevier Ltd. All rights reserved.
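
    A rough sketch of the optimization loop described above, with the MCNP transport calculation replaced by a crude exponential-attenuation-plus-mass penalty so the example stays self-contained; the removal coefficients, densities and GA settings are all made-up placeholders, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(3)

        # Candidate shield: thicknesses (cm) of three layers. In the paper the fitness
        # comes from an MCNP calculation; here a crude analytic surrogate stands in.
        mu_n = np.array([0.10, 0.06, 0.18])   # 1/cm, neutron removal (placeholders)
        mu_g = np.array([0.50, 0.30, 0.05])   # 1/cm, gamma attenuation (placeholders)
        rho  = np.array([2.5, 11.3, 1.0])     # g/cm^3 (placeholders)

        def fitness(t):
            dose = np.exp(-np.sum(mu_n * t)) + np.exp(-np.sum(mu_g * t))
            mass = np.sum(rho * t)            # areal density, g/cm^2
            return dose + 0.002 * mass        # minimize dose with a light mass penalty

        pop = rng.uniform(0.0, 30.0, size=(40, 3))        # initial population
        for gen in range(200):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[:20]]        # truncation selection
            kids = []
            for _ in range(20):                           # one-point crossover + mutation
                a, b = parents[rng.integers(20)], parents[rng.integers(20)]
                cut = rng.integers(1, 3)
                child = np.concatenate([a[:cut], b[cut:]]) + rng.normal(0.0, 0.5, size=3)
                kids.append(np.clip(child, 0.0, 30.0))
            pop = np.vstack([parents, kids])

        best = pop[np.argmin([fitness(ind) for ind in pop])]
        print("best layer thicknesses (cm):", np.round(best, 2), "fitness:", fitness(best))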

  17. Shielding NSLS-II light source: Importance of geometry for calculating radiation levels from beam losses

    NASA Astrophysics Data System (ADS)

    Kramer, S. L.; Ghosh, V. J.; Breitfeller, M.; Wahl, W.

    2016-11-01

    Third-generation high-brightness light sources are designed to have low-emittance and high-current beams, which contribute to higher beam loss rates that will be compensated by Top-Off injection. Shielding for these higher loss rates will be critical to protect the projected higher occupancy factors for the users. Top-Off injection requires a full-energy injector, which will demand greater consideration of the potential abnormal beam mis-steering and localized losses that could occur. The high-energy electron injection beam produces a significantly higher neutron component of the dose to the experimental floor than a lower-energy injection beam and ramped operations. Minimizing this dose will require adequate knowledge of where the mis-steered beam can occur and sufficient EM shielding close to the loss point, in order to attenuate the energy of the particles in the EM shower below the neutron production threshold (<10 MeV), which will spread the incident energy on the bulk shield walls and thereby spread the dose penetrating the shield walls. Designing supplemental shielding near the loss point using the analytic shielding model is shown to be inadequate because of its lack of geometry specification for the EM shower process. Predicting the dose rates outside the tunnel requires a detailed description of the geometry and materials that the beam losses will encounter inside the tunnel. Modern radiation shielding Monte Carlo codes, like FLUKA, can handle this geometric description of the radiation transport process in sufficient detail, allowing accurate predictions of the dose rates expected and the ability to show weaknesses in the design before a high radiation incident occurs. The effort required to adequately define the accelerator geometry for these codes has been greatly reduced with the implementation of the graphical interface FLAIR to FLUKA. This made the effective shielding process for NSLS-II quite accurate and reliable. The principles used to provide supplemental shielding to the NSLS-II accelerators and the lessons learned from this process are presented.

  18. Radiation protection using Martian surface materials in human exploration of Mars

    NASA Technical Reports Server (NTRS)

    Kim, M. H.; Thibeault, S. A.; Wilson, J. W.; Heilbronn, L.; Kiefer, R. L.; Weakley, J. A.; Dueber, J. L.; Fogarty, T.; Wilkins, R.

    2001-01-01

    To develop materials for shielding astronauts from the hazards of galactic cosmic rays (GCR), natural Martian surface materials are considered for their potential as radiation shielding for manned Mars missions. The modified radiation fluences behind various kinds of Martian rocks and regolith are determined by solving the Boltzmann equation using NASA Langley's HZETRN code along with the 1977 solar minimum galactic cosmic ray environmental model. To develop structural shielding composite materials for Martian surface habitats, theoretical predictions of the shielding properties of Martian regolith/polyimide composites have been computed to assess their shielding effectiveness. Adding high-performance polymer binders to Martian regolith to enhance structural properties also enhances the shielding properties of these composites because of the added hydrogenous constituents. Heavy-ion beam testing of regolith simulant/polyimide composites is planned to validate this prediction. Characterization and proton beam tests are performed to measure structural properties and to compare the shielding effects on microelectronic devices, respectively.

  19. Sensitivity of Lumped Constraints Using the Adjoint Method

    NASA Technical Reports Server (NTRS)

    Akgun, Mehmet A.; Haftka, Raphael T.; Wu, K. Chauncey; Walsh, Joanne L.

    1999-01-01

    Adjoint sensitivity calculation of stress, buckling and displacement constraints may be much less expensive than direct sensitivity calculation when the number of load cases is large. Adjoint stress and displacement sensitivities are available in the literature. Expressions for local buckling sensitivity of isotropic plate elements are derived in this study. Computational efficiency of the adjoint method is sensitive to the number of constraints and, therefore, the method benefits from constraint lumping. A continuum version of the Kreisselmeier-Steinhauser (KS) function is chosen to lump constraints. The adjoint and direct methods are compared for three examples: a truss structure, a simple HSCT wing model, and a large HSCT model. These sensitivity derivatives are then used in optimization.
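
    For reference, a standard discrete form of the Kreisselmeier-Steinhauser aggregate used for constraint lumping (the paper applies a continuum analogue; the notation below is generic, with m constraints g_j <= 0 and aggregation parameter rho) is:

        \mathrm{KS}(g_1,\dots,g_m) \;=\; g_{\max} \;+\; \frac{1}{\rho}\,
        \ln\!\left[\sum_{j=1}^{m} \exp\bigl(\rho\,(g_j - g_{\max})\bigr)\right],
        \qquad g_{\max} = \max_j g_j .

    Since g_max <= KS <= g_max + ln(m)/rho, enforcing KS <= 0 conservatively enforces all g_j <= 0, and the envelope tightens as rho increases; lumping many constraints into one such smooth function is what allows a single adjoint solve per lumped constraint.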

  20. Almost commuting self-adjoint matrices: The real and self-dual cases

    NASA Astrophysics Data System (ADS)

    Loring, Terry A.; Sørensen, Adam P. W.

    2016-08-01

    We show that a pair of almost commuting self-adjoint, symmetric matrices is close to a pair of commuting self-adjoint, symmetric matrices (in a uniform way). Moreover, we prove that the same holds with self-dual in place of symmetric and also for paths of self-adjoint matrices. Since a symmetric, self-adjoint matrix is real, we get a real version of Huaxin Lin’s famous theorem on almost commuting matrices. Similarly, the self-dual case gives a version for matrices over the quaternions. To prove these results, we develop a theory of semiprojectivity for real C*-algebras and also examine various definitions of low-rank for real C*-algebras.

  1. Green's function methods in heavy ion shielding

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.

    1993-01-01

    An analytic solution to the heavy ion transport in terms of Green's function is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is accomplished by a nonperturbative technique extending Green's function over the solution domain. The computer code can also be applied to accelerator boundary conditions to allow code validation in laboratory experiments.

  2. Extension of the BRYNTRN code to monoenergetic light ion beams

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.

    1994-01-01

    A monoenergetic version of the BRYNTRN transport code is extended to beam transport of light ions (H-2, H-3, He-3, and He-4) in shielding materials (thick targets). The redistribution of energy in nuclear reactions is included in transport solutions that use nuclear fragmentation models. We also consider an equilibrium target-fragment spectrum for nuclei with mass number greater than four to include target fragmentation effects in the linear energy transfer (LET) spectrum. Illustrative results for water and aluminum shielding, including energy and LET spectra, are discussed for high-energy beams of H-2 and He-4.

  3. Calculation of self–shielding factor for neutron activation experiments using GEANT4 and MCNP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero–Barrientos, Jaime, E-mail: jaromero@ing.uchile.cl; Universidad de Chile, DFI, Facultad de Ciencias Físicas Y Matemáticas, Avenida Blanco Encalada 2008, Santiago; Molina, F.

    2016-07-07

    The neutron self-shielding factor G as a function of the neutron energy was obtained for 14 pure metallic samples in 1000 isolethargic energy bins from 1×10⁻⁵ eV to 2×10⁷ eV using Monte Carlo simulations in GEANT4 and MCNP6. The comparison of these two Monte Carlo codes shows small differences in the final self-shielding factor, mostly due to the different cross section databases that each program uses.
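
    As a rough illustration of what the self-shielding factor represents (not the GEANT4/MCNP6 calculation of the paper), the sketch below evaluates the textbook normal-incidence, absorption-only slab expression G(E) = (1 - exp(-Sigma_t(E) t)) / (Sigma_t(E) t) over the same energy range, with an invented 1/v cross section and foil data.

        import numpy as np

        # Placeholder nuclear data: a toy 1/v microscopic total cross section and a
        # generic metal number density. None of this is from the paper's libraries.
        E = np.logspace(-5, np.log10(2e7), 1000)        # eV, 1e-5 eV to 2e7 eV
        sigma_barns = 5.0 * np.sqrt(0.0253 / E)          # toy 1/v cross section
        N = 8.5e22                                       # atoms/cm^3 (placeholder)
        Sigma_t = N * sigma_barns * 1e-24                # macroscopic XS, 1/cm
        t = 0.1                                          # foil thickness, cm

        tau = Sigma_t * t
        G = (1.0 - np.exp(-tau)) / tau                   # slab self-shielding factor

        for e, g in zip(E[::200], G[::200]):
            print(f"E = {e:10.3e} eV   G = {g:.4f}")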

  4. Development and Applications of the FV3 GEOS-5 Adjoint Modeling System

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel; Kim, Jong G.; Lin, Shian-Jiann; Errico, Ron; Gelaro, Ron; Kent, James; Coy, Larry; Doyle, Jim; Goldstein, Alex

    2017-01-01

    GMAO has developed a highly sophisticated adjoint modeling system based on the most recent version of the finite-volume cubed-sphere (FV3) dynamical core. This provides a mechanism for investigating sensitivity to initial conditions and examining observation impacts. It also allows for the computation of singular vectors and for the implementation of hybrid 4DVAR. In this work we present the scientific assessment of the new adjoint system and show results from a number of research applications of the adjoint system.

  5. 36. DETAILS AND SECTIONS OF SHIELDING TANK, FUEL ELEMENT SUPPORT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    36. DETAILS AND SECTIONS OF SHIELDING TANK, FUEL ELEMENT SUPPORT FRAME AND SUPPORT PLATFORM, AND SAFETY MECHANISM ASSEMBLY (SPRING-LOADED HINGE). F.C. TORKELSON DRAWING NUMBER 842-ARVFS-701-S-1. INEL INDEX CODE NUMBER: 075 0701 60 851 151975. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  6. Radiation protection effectiveness of a proposed magnetic shielding concept for manned Mars missions

    NASA Technical Reports Server (NTRS)

    Townsend, Lawrence W.; Wilson, John W.; Shinn, J. L.; Nealy, John E.; Simonsen, Lisa C.

    1990-01-01

    The effectiveness of a proposed concept for shielding a manned Mars vehicle using a confined magnetic field configuration is evaluated by computing estimated crew radiation exposures resulting from galactic cosmic rays and a large solar flare event. In the study the incident radiation spectra are transported through the spacecraft structure/magnetic shield using the deterministic space radiation transport computer codes developed at Langley Research Center. The calculated exposures unequivocally demonstrate that magnetic shielding could provide an effective barrier against solar flare protons but is virtually transparent to the more energetic galactic cosmic rays. It is then demonstrated that through proper selection of materials and shield configuration, adequate and reliable bulk material shielding can be provided for the same total mass as needed to generate and support the more risky magnetic field configuration.

  7. Tangent Adjoint Methods In a Higher-Order Space-Time Discontinuous-Galerkin Solver For Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo; Murman, Scott; Blonigan, Patrick; Garai, Anirban

    2017-01-01

    A space-time adjoint solver for turbulent compressible flows is presented. The failure of traditional sensitivity methods for chaotic flows is confirmed, the rate of exponential growth of the adjoint is assessed for a practical 3D turbulent simulation, and the failure of short-window sensitivity approximations is demonstrated.

  8. Topology optimization of thermal fluid flows with an adjoint Lattice Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Dugast, Florian; Favennec, Yann; Josset, Christophe; Fan, Yilin; Luo, Lingai

    2018-07-01

    This paper presents an adjoint Lattice Boltzmann Method (LBM) coupled with the Level-Set Method (LSM) for topology optimization of thermal fluid flows. The adjoint-state formulation implies discrete velocity directions in order to take the LBM boundary conditions into account. These boundary conditions are introduced at the beginning of the adjoint-state method as the LBM residuals, so that the adjoint-state boundary conditions appear directly during the formulation of the adjoint-state equations. The proposed method is tested with three numerical examples concerning thermal fluid flows, each with a different objective: minimization of the mean temperature in the domain, maximization of the heat evacuated by the fluid, and maximization of the heat exchange with heated solid parts. This last example, treated in several articles, is used to validate our method. In these optimization problems, a limit on the maximal pressure drop and on the porosity (number of fluid elements) is also applied. The obtained results demonstrate that the method is robust and effective for solving topology optimization of thermal fluid flows.

  9. Adjoint-Based Methodology for Time-Dependent Optimization

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2008-01-01

    This paper presents a discrete adjoint method for a broad class of time-dependent optimization problems. The time-dependent adjoint equations are derived in terms of the discrete residual of an arbitrary finite volume scheme which approximates unsteady conservation law equations. Although only the 2-D unsteady Euler equations are considered in the present analysis, this time-dependent adjoint method is applicable to the 3-D unsteady Reynolds-averaged Navier-Stokes equations with minor modifications. The discrete adjoint operators involving the derivatives of the discrete residual and the cost functional with respect to the flow variables are computed using a complex-variable approach, which provides discrete consistency and drastically reduces the implementation and debugging cycle. The implementation of the time-dependent adjoint method is validated by comparing the sensitivity derivative with that obtained by forward mode differentiation. Our numerical results show that O(10) optimization iterations of the steepest descent method are needed to reduce the objective functional by 3-6 orders of magnitude for test problems considered.
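
    The complex-variable approach mentioned above is commonly realized as the complex-step derivative: perturbing an input by i·h and taking the imaginary part of the output gives a derivative free of subtractive cancellation. A minimal, self-contained illustration of the idea (not the paper's residual linearization itself):

        import numpy as np

        def f(x):
            # Any smooth real function implemented with complex-safe operations.
            return np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)

        x0 = 1.5
        h = 1e-30                                  # step can be tiny: no cancellation error
        d_complex = np.imag(f(x0 + 1j * h)) / h    # complex-step derivative
        d_forward = (f(x0 + 1e-8) - f(x0)) / 1e-8  # forward difference for comparison

        # Analytic derivative for reference.
        s, c = np.sin(x0), np.cos(x0)
        d_exact = f(x0) * (1.0 - 1.5 * (s**2 * c - c**2 * s) / (s**3 + c**3))
        print(d_complex, d_forward, d_exact)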

  10. Solving the transport equation with quadratic finite elements: Theory and applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, J.M.

    1997-12-31

    At the 4th Joint Conference on Computational Mathematics, the author presented a paper introducing a new quadratic finite element scheme (QFEM) for solving the transport equation. In the ensuing year the author has obtained considerable experience in the application of this method, including solution of eigenvalue problems, transmission problems, and solution of the adjoint form of the equation as well as the usual forward solution. He will present detailed results, and will also discuss other refinements of his transport codes, particularly for 3-dimensional problems on rectilinear and non-rectilinear grids.

  11. SAM-CE; A Three Dimensional Monte Carlo Code for the Solution of the Forward Neutron and Forward and Adjoint Gamma Ray Transport Equations. Revision C

    DTIC Science & Technology

    1974-07-31

    Multiple scoring regions are permitted, and these may be either finite-volume regions or point detectors, or both. Other scores of interest (e.g., collisions, heating, count rates, etc.) are calculated as functions of energy, time and position.

  12. Development of a New 47-Group Library for the CASL Neutronics Simulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Williams, Mark L; Wiarda, Dorothea

    The CASL core simulator MPACT is under development for coupled neutronics and thermal-hydraulics simulation of pressurized light water reactors. The key characteristics of the MPACT code include a subgroup method for resonance self-shielding and a whole-core solver with a 1D/2D synthesis method. The ORNL AMPX/SCALE code packages have been significantly improved to support various intermediate resonance self-shielding approximations such as the subgroup and embedded self-shielding methods. New 47-group AMPX and MPACT libraries based on ENDF/B-VII.0 have been generated for the CASL core simulator MPACT, whose group structure comes from the HELIOS library. The new 47-group MPACT library includes all nuclear data required for static and transient core simulations. This study discusses a detailed procedure to generate the 47-group AMPX and MPACT libraries and benchmark results for the VERA progression problems.

  13. A Model Building Code Article on Fallout Shelters with Recommendations for Inclusion of Requirements for Fallout Shelter Construction in Four National Model Building Codes.

    ERIC Educational Resources Information Center

    American Inst. of Architects, Washington, DC.

    A model building code for fallout shelters was drawn up for inclusion in four national model building codes. Discussion is given of fallout shelters with respect to--(1) nuclear radiation, (2) national policies, and (3) community planning. Fallout shelter requirements for shielding, space, ventilation, construction, and services such as electrical…

  14. SU-E-T-556: Monte Carlo Generated Dose Distributions for Orbital Irradiation Using a Single Anterior-Posterior Electron Beam and a Hanging Lens Shield

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duwel, D; Lamba, M; Elson, H

    Purpose: Various cancers of the eye are successfully treated with radiotherapy utilizing one anterior-posterior (A/P) beam that encompasses the entire content of the orbit. In such cases, a hanging lens shield can be used to spare dose to the radiosensitive lens of the eye to prevent cataracts. Methods: This research focused on Monte Carlo characterization of dose distributions resulting from a single A-P field to the orbit with a hanging shield in place. Monte Carlo codes were developed which calculated dose distributions for various electron radiation energies, hanging lens shield radii, shield heights above the eye, and beam spoiler configurations. Film dosimetry was used to benchmark the coding to ensure it was calculating relative dose accurately. Results: The Monte Carlo dose calculations indicated that lateral and depth dose profiles are insensitive to changes in shield height and electron beam energy. Dose deposition was sensitive to shield radius and beam spoiler composition and height above the eye. Conclusion: The use of a single A/P electron beam to treat cancers of the eye while maintaining adequate lens sparing is feasible. Shield radius should be customized to have the same radius as the patient’s lens. A beam spoiler should be used if it is desired to substantially dose the eye tissues lying posterior to the lens in the shadow of the lens shield. The compromise between lens sparing and dose to diseased tissues surrounding the lens can be modulated by varying the beam spoiler thickness, spoiler material composition, and spoiler height above the eye. The sparing ratio is a metric that can be used to evaluate the compromise between lens sparing and dose to surrounding tissues. The higher the ratio, the more dose received by the tissues immediately posterior to the lens relative to the dose received by the lens.

  15. Radiation shielding quality assurance

    NASA Astrophysics Data System (ADS)

    Um, Dallsun

    For radiation shielding quality assurance, the validity and reliability of the neutron transport code MCNP, now one of the most widely used radiation shielding analysis codes, were checked against a large set of benchmark experiments. As a practical example, the following work was performed in this thesis. An integral neutron transport experiment to measure the effect of neutron streaming in iron and void was performed with the Dog-Legged Void Assembly at Knolls Atomic Power Laboratory in 1991. Neutron flux was measured at six different locations with methane detectors and a BF-3 detector. The main purpose of the measurements was to provide a benchmark against which various neutron transport calculation tools could be compared. Those data were used to verify the Monte Carlo Neutron & Photon Transport Code, MCNP, with a model developed for the assembly. Experimental and calculated results were compared in two ways: as the total integrated neutron flux over the energy range from 10 keV to 2 MeV, and as the neutron spectrum as a function of energy. The two agree within the statistical error of +/-20%. MCNP results were also compared with those of TORT, a three-dimensional discrete ordinates code developed by Oak Ridge National Laboratory. MCNP results are superior to the TORT results at all detector locations except one. This shows that MCNP is a powerful tool for the analysis of neutron transport through iron and air, and hence for radiation shielding analysis in general. As one application of the analysis of variance (ANOVA) to neutron and gamma transport problems, uncertainties in the calculated values of critical k were evaluated using ANOVA on the statistical data.

  16. Designing divertor targets for uniform power load

    NASA Astrophysics Data System (ADS)

    Dekeyser, W.; Reiter, D.; Baelmans, M.

    2015-08-01

    Divertor design for next step fusion reactors relies heavily on 2D edge plasma modeling with codes such as B2-EIRENE. While these codes are typically used in a design-by-analysis approach, in previous work we have shown that divertor design can alternatively be posed as a mathematical optimization problem and solved very efficiently using adjoint methods adapted from computational aerodynamics. This approach has been applied successfully to divertor target shape design for more uniform power load. In this paper, the concept is further extended to include all contributions to the target power load, with particular focus on radiation. In a simplified test problem, we show the potential benefits of fully including the radiation load in the design cycle as compared to only assessing this load in a post-processing step.

  17. Improved Adjoint-Operator Learning For A Neural Network

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad; Barhen, Jacob

    1995-01-01

    An improved method of adjoint-operator learning reduces the amount of computation and associated memory needed to make an electronic neural network learn a temporally varying pattern (e.g., to recognize a moving object in an image) in real time. The method is an extension of the one described in "Adjoint-Operator Learning for a Neural Network" (NPO-18352).

  18. Adjoint method and runaway electron avalanche

    DOE PAGES

    Liu, Chang; Brennan, Dylan P.; Boozer, Allen H.; ...

    2016-12-16

    The adjoint method for the study of runaway electron dynamics in momentum space [Liu et al (2016) Phys. Plasmas 23 010702] is rederived using the Green's function method, for both the runaway probability function (RPF) and the expected loss time (ELT). The RPF and ELT obtained using the adjoint method are presented, both with and without the synchrotron radiation reaction force. The adjoint method is then applied to study the runaway electron avalanche. Both the critical electric field and the growth rate of the avalanche are calculated using this fast and novel approach.

  19. Application of Adjoint Methodology in Various Aspects of Sonic Boom Design

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Sriram K.

    2014-01-01

    One of the advances in computational design has been the development of adjoint methods allowing efficient calculation of sensitivities in gradient-based shape optimization. This paper discusses two new applications of adjoint methodology that have been developed to aid in sonic boom mitigation exercises. In the first, equivalent area targets are generated using adjoint sensitivities of selected boom metrics. These targets may then be used to drive the vehicle shape during optimization. The second application is the computation of adjoint sensitivities of boom metrics on the ground with respect to parameters such as flight conditions, propagation sampling rate, and selected inputs to the propagation algorithms. These sensitivities enable the designer to make more informed selections of flight conditions at which the chosen cost functionals are less sensitive.

  20. LPT. Shield test facility (TAN645 and 646). Floor plan and ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LPT. Shield test facility (TAN-645 and -646). Floor plan and room names. Ralph M. Parsons 1229-17 ANP/GE-6-645-A-1. April 1957. Approved by INEEL Classification Office for public release. INEEL index code no. 037-0645/0646-00-693-107347 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  1. LPT. Shield test facility (TAN646). Sections and details of water ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LPT. Shield test facility (TAN-646). Sections and details of water management areas. Ralph M. Parsons 1229-17 ANP/GE-6-646-P-3. April 1957. Approved by INEEL Classification Office for public release. INEEL index code no. 037-0646-51-693-107388 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  2. Optimization of Aerospace Structure Subject to Damage Tolerance Criteria

    NASA Technical Reports Server (NTRS)

    Akgun, Mehmet A.

    1999-01-01

    The objective of this cooperative agreement was to seek computationally efficient ways to optimize aerospace structures subject to damage tolerance criteria. Optimization was to involve sizing as well as topology optimization. The work was done in collaboration with Steve Scotti, Chauncey Wu and Joanne Walsh at the NASA Langley Research Center. Computation of constraint sensitivity is normally the most time-consuming step of an optimization procedure. The cooperative work first focused on this issue and implemented the adjoint method of sensitivity computation in an optimization code (runstream) written in Engineering Analysis Language (EAL). The method was implemented both for bar and plate elements including buckling sensitivity for the latter. Lumping of constraints was investigated as a means to reduce the computational cost. Adjoint sensitivity computation was developed and implemented for lumped stress and buckling constraints. Cost of the direct method and the adjoint method was compared for various structures with and without lumping. The results were reported in two papers. It is desirable to optimize topology of an aerospace structure subject to a large number of damage scenarios so that a damage tolerant structure is obtained. Including damage scenarios in the design procedure is critical in order to avoid large mass penalties at later stages. A common method for topology optimization is that of compliance minimization which has not been used for damage tolerant design. In the present work, topology optimization is treated as a conventional problem aiming to minimize the weight subject to stress constraints. Multiple damage configurations (scenarios) are considered. Each configuration has its own structural stiffness matrix and, normally, requires factoring of the matrix and solution of the system of equations. Damage that is expected to be tolerated is local and represents a small change in the stiffness matrix compared to the baseline (undamaged) structure. The exact solution to a slightly modified set of equations can be obtained from the baseline solution economically without actually solving the modified system. Sherman-Morrison-Woodbury (SMW) formulas are matrix update formulas that allow this. SMW formulas were therefore used here to compute adjoint displacements for sensitivity computation and structural displacements in damaged configurations.
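
    The Sherman-Morrison-Woodbury reuse of the baseline factorization can be sketched as follows; the SPD matrix, load vector, and rank-3 "damage" perturbation below are random stand-ins, not the EAL structural models from the study.

        import numpy as np
        from scipy.linalg import cho_factor, cho_solve

        rng = np.random.default_rng(0)
        n, k = 200, 3                       # degrees of freedom, rank of the local "damage" update

        K = rng.standard_normal((n, n))
        K = K @ K.T + n * np.eye(n)         # SPD stand-in for the baseline stiffness
        f = rng.standard_normal(n)          # load vector

        U = rng.standard_normal((n, k))     # damaged stiffness: K + U C U^T (low rank)
        C = -0.5 * np.eye(k)

        cf = cho_factor(K)                  # baseline factorization, reused for every scenario
        u0 = cho_solve(cf, f)               # baseline displacements
        Z = cho_solve(cf, U)                # K^{-1} U: only k extra solves with the same factor

        # Sherman-Morrison-Woodbury solution of (K + U C U^T) u = f
        small = np.linalg.inv(np.linalg.inv(C) + U.T @ Z)    # only a k-by-k system
        u_smw = u0 - Z @ (small @ (U.T @ u0))

        u_direct = np.linalg.solve(K + U @ C @ U.T, f)       # full re-solve, for comparison
        print(np.max(np.abs(u_smw - u_direct)))              # agreement to round-off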

  3. High-Order Automatic Differentiation of Unmodified Linear Algebra Routines via Nilpotent Matrices

    NASA Astrophysics Data System (ADS)

    Dunham, Benjamin Z.

    This work presents a new automatic differentiation method, Nilpotent Matrix Differentiation (NMD), capable of propagating any order of mixed or univariate derivative through common linear algebra functions--most notably third-party sparse solvers and decomposition routines, in addition to basic matrix arithmetic operations and power series--without changing data-type or modifying code line by line; this allows differentiation across sequences of arbitrarily many such functions with minimal implementation effort. NMD works by enlarging the matrices and vectors passed to the routines, replacing each original scalar with a matrix block augmented by derivative data; these blocks are constructed with special sparsity structures, termed "stencils," each designed to be isomorphic to a particular multidimensional hypercomplex algebra. The algebras are in turn designed such that Taylor expansions of hypercomplex function evaluations are finite in length and thus exactly track derivatives without approximation error. Although this use of the method in the "forward mode" is unique in its own right, it is also possible to apply it to existing implementations of the (first-order) discrete adjoint method to find high-order derivatives with lowered cost complexity; for example, for a problem with N inputs and an adjoint solver whose cost is independent of N--i.e., O(1)--the N x N Hessian can be found in O(N) time, which is comparable to existing second-order adjoint methods that require far more problem-specific implementation effort. Higher derivatives are likewise less expensive--e.g., an N x N x N rank-three tensor can be found in O(N^2). Alternatively, a Hessian-vector product can be found in O(1) time, which may open up many matrix-based simulations to a range of existing optimization or surrogate modeling approaches. As a final corollary in parallel to the NMD-adjoint hybrid method, the existing complex-step differentiation (CD) technique is also shown to be capable of finding the Hessian-vector product. All variants are implemented on a stochastic diffusion problem and compared in-depth with various cost and accuracy metrics.
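
    The "stencil" idea can be illustrated in its simplest form: replace each scalar a carrying a derivative a' by the 2x2 nilpotent block [[a, a'], [0, a]], which is isomorphic to the dual numbers, and push the enlarged matrices through unmodified linear algebra routines. The sketch below is a first-order toy version of that idea under those assumptions, not the NMD implementation itself.

        import numpy as np

        def lift(a, da):
            """Replace a scalar a (with seed derivative da) by the nilpotent block
            [[a, da], [0, a]]; arithmetic on these blocks obeys the product rule
            exactly because the off-diagonal 'epsilon' squares to zero."""
            return np.array([[a, da], [0.0, a]])

        # d/dx of f(x) = x^3 + 2x at x = 1.5, via plain matrix arithmetic.
        x = lift(1.5, 1.0)                       # seed dx/dx = 1
        fx = x @ x @ x + 2.0 * x
        print(fx[0, 0], fx[0, 1])                # value 6.375, derivative 8.75

        # The same blocks pass unchanged through a linear solve: the (0, 1) entries
        # of the block solution carry the derivative of the solution of A(p) x = b.
        A = np.block([[lift(2.0, 0.3), lift(1.0, 0.0)],
                      [lift(0.0, 0.0), lift(3.0, -1.0)]])
        b = np.block([[lift(1.0, 0.0)],
                      [lift(1.0, 0.0)]])
        x_blocks = np.linalg.solve(A, b)
        print(x_blocks[0, 0], x_blocks[0, 1])    # x_1 and its derivative
        print(x_blocks[2, 0], x_blocks[2, 1])    # x_2 and its derivative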

  4. NPR Reactor shield calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peterson, E.G.

    1961-09-27

    At the request of IPD Personnel, calculations on neutron and gamma attenuation were made for the NPR shield. The calculations were made using a new shielding computer code developed for the IBM 7090. The calculations show the thermal neutron flux, total neutron dose rate, and gamma dose rate distribution through the entire shield assembly. The calculations show that the side and top primary shield design is adequate to reduce the radiation level below design tolerances. The radiation leakage through the front shield was higher than the design tolerances. Two alternate biological shield materials were studied for use on the front face. These two materials were iron serpentine concrete mixtures with densities of 245 lb/ft³ and 265 lb/ft³ (designated by I-S-245-P and I-S-265-P, respectively). Both of these concretes reduced the radiation below design tolerances. It is recommended that the present front face biological shield be changed from I-S-220-P to I-S-245-P. With this change the NPR shield is adequate according to these calculations. The calculations reported here do not include leakage through penetration in the shield.

  5. Preliminary analyses of space radiation protection for lunar base surface systems

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Wilson, John W.; Townsend, Lawrence W.

    1989-01-01

    Radiation shielding analyses are performed for candidate lunar base habitation modules. The study primarily addresses potential hazards due to contributions from the galactic cosmic rays. The NASA Langley Research Center's high energy nucleon and heavy ion transport codes are used to compute propagation of radiation through conventional and regolith shield materials. Computed values of linear energy transfer are converted to biological dose-equivalent using quality factors established by the International Commission on Radiological Protection. Special fluxes of heavy charged particles and corresponding dosimetric quantities are computed for a series of thicknesses in various shield media and are used as an input data base for algorithms pertaining to specific shielded geometries. Dosimetric results are presented as isodose contour maps of shielded configuration interiors. The dose predictions indicate that shielding requirements are substantial, and an abbreviated uncertainty analysis shows that better definition of the space radiation environment as well as improvement in nuclear interaction cross-section data can greatly increase the accuracy of shield requirement predictions.

  6. Self-consistent adjoint analysis for topology optimization of electromagnetic waves

    NASA Astrophysics Data System (ADS)

    Deng, Yongbo; Korvink, Jan G.

    2018-05-01

    In topology optimization of electromagnetic waves, the Gâteaux differentiability of the conjugate operator of the complex field variable complicates the adjoint sensitivity and causes the originally real-valued design variable to become complex during the iterative solution procedure; the adjoint sensitivity is therefore self-inconsistent. To enforce self-consistency, the real-part operator has been used to extract the real part of the sensitivity and preserve the real-valued design variable. However, this enforced self-consistency can cause the derived structural topology to depend unreasonably on the phase of the incident wave. To solve this problem, this article focuses on the self-consistent adjoint analysis of topology optimization problems for electromagnetic waves. The self-consistent adjoint analysis is implemented by splitting the complex variables of the wave equations into their real and imaginary parts, substituting the split variables into the wave equations, and deriving coupled equations equivalent to the original wave equations, where the infinite free space is truncated by perfectly matched layers. The topology optimization problems of electromagnetic waves are then transformed into forms defined on real rather than complex functional spaces; the adjoint analysis is carried out on real functional spaces, removing the variation of the conjugate operator; the self-consistent adjoint sensitivity is derived; and the phase-dependence problem of the derived structural topology is avoided. Several numerical examples demonstrate the robustness of the derived self-consistent adjoint analysis.

  7. Shielding analysis of the Microtron MT-25 bunker using the MCNP-4C code and NCRP Report 51.

    PubMed

    Casanova, A O; López, N; Gelen, A; Guevara, M V Manso; Díaz, O; Cimino, L; D'Alessandro, K; Melo, J C

    2004-01-01

    A cyclic electron accelerator, the Microtron MT-25, will be installed in Havana, Cuba. Electrons, neutrons and gamma radiation up to 25 MeV can be produced in the MT-25. A detailed shielding analysis for the bunker is carried out in two ways: with the NCRP Report 51 method and with the Monte Carlo method (MCNP-4C code). The wall and ceiling thicknesses are estimated with dose constraints of 0.5 and 20 mSv y⁻¹, respectively, and an area occupancy factor of 1/16. Both results are compared and a preliminary bunker design is shown. Copyright 2004 Oxford University Press
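
    For orientation, the point-source, tenth-value-layer arithmetic underlying such semi-empirical (NCRP-style) barrier estimates can be sketched as below; every number here is a placeholder chosen for illustration, not a parameter of the MT-25 bunker study.

        import math

        # Placeholder inputs -- illustrative only, not the MT-25 values.
        H0  = 5.0e3        # unshielded dose rate at 1 m per unit workload (uSv/h)
        W   = 40.0         # weekly workload (beam-on hours)
        d   = 4.0          # source-to-occupied-point distance (m)
        T   = 1.0 / 16.0   # area occupancy factor, as quoted in the abstract
        P   = 0.5e3 / 50.0 # weekly design limit from 0.5 mSv/y over ~50 weeks (uSv/week)
        TVL = 0.45         # tenth-value layer of the barrier material (m), placeholder

        B = P * d ** 2 / (H0 * W * T)    # required barrier transmission (inverse-square + occupancy)
        n_tvl = -math.log10(B)           # number of tenth-value layers needed
        print(f"B = {B:.2e}, TVLs = {n_tvl:.1f}, wall thickness ~ {n_tvl * TVL:.2f} m")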

  8. Meso-scale modeling of irradiation in pressurized water reactor concrete biological shields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Pape, Yann; Huang, Hai

    Neutron irradiation exposure causes aggregate expansion, namely radiation-induced volumetric expansion (RIVE). The structural significance of RIVE on a portion of a prototypical pressurized water reactor (PWR) concrete biological shield (CBS) is investigated by using a meso-scale nonlinear concrete model with inputs from an irradiation transport code and a coupled moisture transport-heat transfer code. RIVE-induced severe cracking onset appears to be triggered by the initial shrinkage-induced cracking and propagates to a depth of > 10 cm at extended operation of 80 years. Relaxation of the cement paste stresses results in delaying the crack propagation by about 10 years.

  9. Bose-Fermi degeneracies in large N adjoint QCD

    DOE PAGES

    Basar, Gokce; Cherman, Aleksey; McGady, David

    2015-07-06

    Here, we analyze the large N limit of adjoint QCD, an SU(N) gauge theory with N_f flavors of massless adjoint Majorana fermions, compactified on S^3 × S^1. We focus on the weakly-coupled confining small-S^3 regime. If the fermions are given periodic boundary conditions on S^1, we show that there are large cancellations between bosonic and fermionic contributions to the twisted partition function. These cancellations follow a pattern previously seen in the context of misaligned supersymmetry, and lead to the absence of Hagedorn instabilities for any S^1 size L, even though the bosonic and fermionic densities of states both have Hagedorn growth. Adjoint QCD stays in the confining phase for any L ~ N^0, explaining how it is able to enjoy large N volume independence for any L. The large N boson-fermion cancellations take place in a setting where adjoint QCD is manifestly non-supersymmetric at any finite N, and are consistent with the recent conjecture that adjoint QCD has emergent fermionic symmetries in the large N limit.

  10. Assessing the Impact of Observations on Numerical Weather Forecasts Using the Adjoint Method

    NASA Technical Reports Server (NTRS)

    Gelaro, Ronald

    2012-01-01

    The adjoint of a data assimilation system provides a flexible and efficient tool for estimating observation impacts on short-range weather forecasts. The impacts of any or all observations can be estimated simultaneously based on a single execution of the adjoint system. The results can be easily aggregated according to data type, location, channel, etc., making this technique especially attractive for examining the impacts of new hyper-spectral satellite instruments and for conducting regular, even near-real time, monitoring of the entire observing system. This talk provides a general overview of the adjoint method, including the theoretical basis and practical implementation of the technique. Results are presented from the adjoint-based observation impact monitoring tool in NASA's GEOS-5 global atmospheric data assimilation and forecast system. When performed in conjunction with standard observing system experiments (OSEs), the adjoint results reveal both redundancies and dependencies between observing system impacts as observations are added or removed from the assimilation system. Understanding these dependencies may be important for optimizing the use of the current observational network and defining requirements for future observing systems.

  11. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.

  12. Finite-frequency sensitivity kernels for global seismic wave propagation based upon adjoint methods

    NASA Astrophysics Data System (ADS)

    Liu, Qinya; Tromp, Jeroen

    2008-07-01

    We determine adjoint equations and Fréchet kernels for global seismic wave propagation based upon a Lagrange multiplier method. We start from the equations of motion for a rotating, self-gravitating earth model initially in hydrostatic equilibrium, and derive the corresponding adjoint equations that involve motions on an earth model that rotates in the opposite direction. Variations in the misfit function χ may then be expressed in terms of volumetric and boundary sensitivity kernels (see the expression below), where δln m = δm/m denotes relative model perturbations in the volume V, δln d denotes relative topographic variations on solid-solid or fluid-solid boundaries Σ, and ∇_Σ δln d denotes surface gradients in relative topographic variations on fluid-solid boundaries Σ_FS. The 3-D Fréchet kernel K_m determines the sensitivity to model perturbations δln m, and the 2-D kernels K_d and K_∇d determine the sensitivity to topographic variations δln d. We also demonstrate how anelasticity may be incorporated within the framework of adjoint methods. Finite-frequency sensitivity kernels are calculated by simultaneously computing the adjoint wavefield forward in time and reconstructing the regular wavefield backward in time. Both the forward and adjoint simulations are based upon a spectral-element method. We apply the adjoint technique to generate finite-frequency traveltime kernels for global seismic phases (P, Pdiff, PKP, S, SKS, depth phases, surface-reflected phases, surface waves, etc.) in both 1-D and 3-D earth models. For 1-D models these adjoint-generated kernels generally agree well with results obtained from ray-based methods. However, adjoint methods do not have the same theoretical limitations as ray-based methods, and can produce sensitivity kernels for any given phase in any 3-D earth model. The Fréchet kernels presented in this paper illustrate the sensitivity of seismic observations to structural parameters and topography on internal discontinuities. These kernels form the basis of future 3-D tomographic inversions.
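
    The expression elided above presumably has the following structure, reconstructed here from the quantities the abstract defines (a paraphrase, not a verbatim quotation of the paper):

        \delta\chi
          = \int_{V} K_{m}\,\delta\ln m \,\mathrm{d}^{3}\mathbf{x}
          + \int_{\Sigma} K_{d}\,\delta\ln d \,\mathrm{d}^{2}\mathbf{x}
          + \int_{\Sigma_{\mathrm{FS}}} \mathbf{K}_{\nabla d}\cdot\nabla_{\Sigma}\,\delta\ln d \,\mathrm{d}^{2}\mathbf{x}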

  13. Shielding NSLS-II light source: Importance of geometry for calculating radiation levels from beam losses

    DOE PAGES

    Kramer, S. L.; Ghosh, V. J.; Breitfeller, M.; ...

    2016-08-10

    Third-generation high-brightness light sources are designed to have low emittance and high current beams, which contribute to higher beam loss rates that will be compensated by Top-Off injection. Shielding for these higher loss rates will be critical to protect the projected higher occupancy factors for the users. Top-Off injection requires a full energy injector, which will demand greater consideration of the potential abnormal beam mis-steering and localized losses that could occur. The high energy electron injection beam produces a significantly higher neutron component dose to the experimental floor than a lower energy injection beam and ramped operations. Minimizing this dose will require adequate knowledge of where the mis-steered beam can occur and sufficient EM shielding close to the loss point, in order to attenuate the energy of the particles in the EM shower below the neutron production threshold (<10 MeV), which will spread the incident energy on the bulk shield walls and thereby spread the dose penetrating the shield walls. Designing supplemental shielding near the loss point using the analytic shielding model is shown to be inadequate because of its lack of geometry specification for the EM shower process. Predicting the dose rates outside the tunnel requires a detailed description of the geometry and materials that the beam losses will encounter inside the tunnel. Modern radiation shielding Monte Carlo codes, like FLUKA, can handle this geometric description of the radiation transport process in sufficient detail, allowing accurate predictions of the dose rates expected and the ability to reveal weaknesses in the design before a high radiation incident occurs. The effort required to adequately define the accelerator geometry for these codes has been greatly reduced with the implementation of FLAIR, the graphical interface to FLUKA. This made the shielding process for NSLS-II accurate and reliable. The principles used to provide supplemental shielding to the NSLS-II accelerators and the lessons learned from this process are presented.

  14. Use of adjoint methods in the probabilistic finite element approach to fracture mechanics

    NASA Technical Reports Server (NTRS)

    Liu, Wing Kam; Besterfield, Glen; Lawrence, Mark; Belytschko, Ted

    1988-01-01

    The adjoint method approach to probabilistic finite element methods (PFEM) is presented. When the number of objective functions is small compared to the number of random variables, the adjoint method is far superior to the direct method in evaluating the objective function derivatives with respect to the random variables. The PFEM is extended to probabilistic fracture mechanics (PFM) using an element which has the near crack-tip singular strain field embedded. Since only two objective functions (i.e., mode I and II stress intensity factors) are needed for PFM, the adjoint method is well suited.

  15. The Tangent Linear and Adjoint of the FV3 Dynamical Core: Development and Applications

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel

    2018-01-01

    GMAO (NASA's Global Modeling and Assimilation Office) has developed a highly sophisticated adjoint modeling system based on the most recent version of the finite volume cubed sphere (FV3) dynamical core. This provides a mechanism for investigating sensitivity to initial conditions and examining observation impacts. It also allows for the computation of singular vectors and for the implementation of hybrid 4DVAR (4-Dimensional Variational Assimilation). In this work we will present the scientific assessment of the new adjoint system and show results from a number of research applications of the adjoint system.

  16. A 3DHZETRN Code in a Spherical Uniform Sphere with Monte Carlo Verification

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2014-01-01

    The computationally efficient HZETRN code has been used in recent trade studies for lunar and Martian exploration and is currently being used in the engineering development of the next generation of space vehicles, habitats, and extravehicular activity equipment. A new version (3DHZETRN), capable of transporting high charge (Z) and energy (HZE) ions and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation, is under development. In the present report, new algorithms for light ion and neutron propagation with well-defined convergence criteria in 3D objects are developed and tested against Monte Carlo simulations to verify the solution methodology. The code will be available through the software system OLTARIS for shield design and validation, and provides a basis for personal computer software capable of space shield analysis and optimization.

  17. Four-Dimensional Data Assimilation Using the Adjoint Method

    NASA Astrophysics Data System (ADS)

    Bao, Jian-Wen

    The calculus of variations is used to confirm that variational four-dimensional data assimilation (FDDA) using the adjoint method can be implemented when the numerical model equations have a finite number of first-order discontinuous points. These points represent the on/off switches associated with physical processes, for which the Jacobian matrix of the model equation does not exist. Numerical evidence suggests that, in some situations when the adjoint method is used for FDDA, the temperature field retrieved using horizontal wind data is numerically not unique. A physical interpretation of this type of non-uniqueness of the retrieval is proposed in terms of energetics. The adjoint equations of a numerical model can also be used for model-parameter estimation. A general computational procedure is developed to determine the size and distribution of any internal model parameter. The procedure is then applied to a one-dimensional shallow-fluid model in the context of analysis-nudging FDDA: the weighting coefficients used by the Newtonian nudging technique are determined. The sensitivity of these nudging coefficients to the optimal objectives and constraints is investigated. Experiments of FDDA using the adjoint method are conducted using the dry version of the hydrostatic Penn State/NCAR mesoscale model (MM4) and its adjoint. The minimization procedure converges and the initialization experiment is successful. Temperature-retrieval experiments involving an assimilation of the horizontal wind are also carried out using the adjoint of MM4.

  18. Receptivity of the compressible mixing layer

    NASA Astrophysics Data System (ADS)

    Barone, Matthew F.; Lele, Sanjiva K.

    2005-09-01

    Receptivity of compressible mixing layers to general source distributions is examined by a combined theoretical/computational approach. The properties of solutions to the adjoint Navier-Stokes equations are exploited to derive expressions for receptivity in terms of the local value of the adjoint solution. The result is a description of receptivity for arbitrary small-amplitude mass, momentum, and heat sources in the vicinity of a mixing-layer flow, including the edge-scattering effects due to the presence of a splitter plate of finite width. The adjoint solutions are examined in detail for a Mach 1.2 mixing-layer flow. The near field of the adjoint solution reveals regions of relatively high receptivity to direct forcing within the mixing layer, with receptivity to nearby acoustic sources depending on the source type and position. Receptivity ‘nodes’ are present at certain locations near the splitter plate edge where the flow is not sensitive to forcing. The presence of the nodes is explained by interpretation of the adjoint solution as the superposition of incident and scattered fields. The adjoint solution within the boundary layer upstream of the splitter-plate trailing edge reveals a mechanism for transfer of energy from boundary-layer stability modes to Kelvin-Helmholtz modes. Extension of the adjoint solution to the far field using a Kirchhoff surface gives the receptivity of the mixing layer to incident sound from distant sources.

  19. Methods of Calculation for the Treatment of Shield Heterogeneities in the Prototype Fast Reactor.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broughton, J.; Butler, J.; Brimstone, M.

    1969-10-31

    The radial shield of the sodium-cooled Prototype Fast Reactor is composed of graphite rods enclosed in steel tubes, arranged in a lattice of seven rows round the periphery of the breeder. The outside diameter of these rods increases by about a factor of 2 between the inner and outer rows, and the shield operates at temperatures of about 600 deg C. The dimensions of the steel, graphite and sodium regions are large compared with the mean free paths of the predominant neutrons at intermediate energies, and homogenisation of the shield seriously underestimates the penetration, which is also enhanced by the presence of numerous irregularities associated with nucleonic instrument thimbles, refuelling mechanisms and the primary coolant circuit. Methods of calculation have been developed for the solution of these problems, using both diffusion-theory and Monte Carlo techniques. The diffusion calculations have been accomplished with the COMPRASH and ATTOW codes, and a prototype Monte Carlo code named MOB has been developed, which takes proper account of the radial shield geometry. The theoretical predictions are compared with measurements made in typical shield arrays on LIDO at Harwell and on the zero-energy fast reactor, ZEBRA, at Winfrith. The diffusion-theory and Monte Carlo approaches are also assessed as design tools, taking into consideration accuracy, data preparation and computing time requirements. (auth)

  20. LPT. Shield test facility (TAN645 and 646). Basement and subbasement ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LPT. Shield test facility (TAN-645 and -646). Basement and sub-basement plan. Stairway plans and details. Ralph M. Parsons 1229-17 ANP/GE-6-645-A-2. April 1957. Approved by INEEL Classification Office for public release. INEEL index code no. 037-0645/0646-00-693-107348 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  1. FET. Control and equipment building (TAN630). Sections. Earth cover. Shielded ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    FET. Control and equipment building (TAN-630). Sections. Earth cover. Shielded access entries for personnel and vehicles. Ralph M. Parsons 1229-2 ANP/GE-5-630-A-3. Date: March 1957. Approved by INEEL Classification Office for public release. INEEL index code no. 036-0630-00-693-107082 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  2. Assessment of background hydrogen by the Monte Carlo computer code MCNP-4A during measurements of total body nitrogen.

    PubMed

    Ryde, S J; al-Agel, F A; Evans, C J; Hancock, D A

    2000-05-01

    The use of a hydrogen internal standard to enable the estimation of absolute mass during measurement of total body nitrogen by in vivo neutron activation is an established technique. Central to the technique is a determination of the H prompt gamma ray counts arising from the subject. In practice, interference counts from other sources--e.g., neutron shielding--are included. This study reports use of the Monte Carlo computer code, MCNP-4A, to investigate the interference counts arising from shielding both with and without a phantom containing a urea solution. Over a range of phantom size (depth 5 to 30 cm, width 20 to 40 cm), the counts arising from shielding increased by between 4% and 32% compared with the counts without a phantom. For any given depth, the counts increased approximately linearly with width. For any given width, there was little increase for depths exceeding 15 centimeters. The shielding counts comprised between 15% and 26% of those arising from the urea phantom. These results, although specific to the Swansea apparatus, suggest that extraneous hydrogen counts can be considerable and depend strongly on the subject's size.

  3. Visualising Earth's Mantle based on Global Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Bozdag, E.; Pugmire, D.; Lefebvre, M. P.; Hill, J.; Komatitsch, D.; Peter, D. B.; Podhorszki, N.; Tromp, J.

    2017-12-01

    Recent advances in 3D wave propagation solvers and high-performance computing have enabled regional and global full-waveform inversions. Interpretation of tomographic models is often done visually. Robust and efficient visualization tools are necessary to thoroughly investigate large model files, particularly at the global scale. In collaboration with Oak Ridge National Laboratory (ORNL), we have developed effective visualization tools and used them to visualize our first-generation global model, GLAD-M15 (Bozdag et al. 2016). VisIt (https://wci.llnl.gov/simulation/computer-codes/visit/) is used for initial exploration of the models and for extraction of seismological features. The broad capability of VisIt and its demonstrated scalability proved valuable for experimenting with different visualization techniques and in the creation of timely results. Utilizing VisIt's plugin architecture, a data reader plugin was developed which reads the ADIOS (https://www.olcf.ornl.gov/center-projects/adios/) format of our model files. Blender (https://www.blender.org) is used for the setup of lighting, materials, camera paths and rendering of geometry. Python scripting was used to control the orchestration of different geometries, as well as camera animation for 3D movies. While we continue producing 3D contour plots and movies for various seismic parameters to better visualize plume- and slab-like features as well as anisotropy throughout the mantle, our aim is to make visualization an integral part of our global adjoint tomography workflow to routinely produce various 2D cross-sections to facilitate examination of our models after each iteration. This will ultimately form the basis for use of pattern recognition techniques in our investigations. Simulations for global adjoint tomography are performed on ORNL's Titan system and visualization is done in parallel on ORNL's post-processing cluster Rhea.

  4. Adjoint tomography of the crust and upper mantle structure beneath the Kanto region using broadband seismograms

    NASA Astrophysics Data System (ADS)

    Miyoshi, Takayuki; Obayashi, Masayuki; Peter, Daniel; Tono, Yoko; Tsuboi, Seiji

    2017-12-01

    A three-dimensional seismic wave speed model in the Kanto region of Japan was developed using adjoint tomography for application in the effective reproduction of observed waveforms. Starting with a model based on previous travel time tomographic results, we inverted the waveforms obtained at seismic broadband stations from 140 local earthquakes in the Kanto region to obtain the P- and S-wave speeds V_p and V_s. Additionally, all centroid times of the source solutions were determined before the structural inversion. The synthetic displacements were calculated using the spectral-element method (SEM), in which the Kanto region was parameterized using 16 million grid points. The model parameters V_p and V_s were updated iteratively by Newton's method using the misfit and Hessian kernels until the misfit between the observed and synthetic waveforms was minimized. Computations of the forward and adjoint simulations were conducted on the K computer in Japan. The optimized SEM code required a total of 6720 simulations using approximately 62,000 node hours to obtain the final model after 16 iterations. The proposed model reveals several anomalous areas with extremely low V_s values in comparison with those of the initial model. These anomalies were found to correspond to geological features, earthquake sources, and volcanic regions with good data coverage and resolution. The synthetic waveforms obtained using the newly proposed model for the selected earthquakes showed better fit than the initial model to the observed waveforms in different period ranges within 5-30 s. This result indicates that the model can accurately predict actual waveforms.

  5. Solar proton exposure of an ICRU sphere within a complex structure part II: Ray-trace geometry.

    PubMed

    Slaba, Tony C; Wilson, John W; Badavi, Francis F; Reddell, Brandon D; Bahadori, Amir A

    2016-06-01

    A computationally efficient 3DHZETRN code with enhanced neutron and light ion (Z ≤ 2) propagation was recently developed for complex, inhomogeneous shield geometry described by combinatorial objects. Comparisons were made between 3DHZETRN results and Monte Carlo (MC) simulations at locations within the combinatorial geometry, and it was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in ray-trace geometry. This latest extension enables the code to be used within current engineering design practices utilizing fully detailed vehicle and habitat geometries. Through convergence testing, it is shown that fidelity in an actual shield geometry can be maintained in the discrete ray-trace description by systematically increasing the number of discrete rays used. It is also shown that this fidelity is carried into transport procedures and resulting exposure quantities without sacrificing computational efficiency. Published by Elsevier Ltd.
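
    The ray-count convergence idea can be illustrated with a toy calculation: a detector point inside a spherical-shell shield, isotropic incident flux, and simple exponential attenuation along each discrete ray. The geometry, attenuation coefficient, and response below are stand-ins, not the 3DHZETRN transport physics.

        import numpy as np

        def exit_distance(p, u, R):
            """Distance along unit direction u from interior point p to the sphere of radius R."""
            b = np.dot(p, u)
            return -b + np.sqrt(b * b - np.dot(p, p) + R * R)

        def shielded_response(p, r_in, r_out, mu, n_rays, rng):
            """Average of exp(-mu * path-through-shell) over n_rays isotropic directions."""
            v = rng.standard_normal((n_rays, 3))
            v /= np.linalg.norm(v, axis=1, keepdims=True)
            paths = np.array([exit_distance(p, u, r_out) - exit_distance(p, u, r_in) for u in v])
            return np.exp(-mu * paths).mean()

        rng = np.random.default_rng(1)
        p = np.array([2.0, 0.5, -1.0])        # detector point inside the inner cavity
        r_in, r_out, mu = 4.0, 9.0, 0.25      # cavity radius, outer shield radius, attenuation (1/cm)
        for n in (64, 256, 1024, 4096, 16384):
            print(n, shielded_response(p, r_in, r_out, mu, n, rng))  # estimate settles as rays increase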

  6. Modified Laser and Thermos cell calculations on microcomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.

    1987-01-01

    In the course of designing and operating nuclear reactors, many fuel pin cell calculations are required to obtain homogenized cell cross sections as a function of burnup. In the interest of convenience and cost, it would be very desirable to be able to make such calculations on microcomputers. In addition, such a microcomputer code would be very helpful for educational course work in reactor computations. To establish the feasibility of making detailed cell calculations on a microcomputer, a mainframe cell code was compiled and run on a microcomputer. The computer code Laser, originally written in Fortran IV for the IBM-7090 class of mainframe computers, is a cylindrical, one-dimensional, multigroup lattice cell program that includes burnup. It is based on the MUFT code for epithermal and fast group calculations, and Thermos for the thermal calculations. There are 50 fast and epithermal groups and 35 thermal groups. Resonances are calculated assuming a homogeneous system and then corrected for self-shielding, Dancoff, and Doppler by self-shielding factors. The Laser code was converted to run on a microcomputer. In addition, the Thermos portion of Laser was extracted and compiled separately to have available a stand-alone thermal code.

  7. Comparing mass balance and adjoint methods for inverse modeling of nitrogen dioxide columns for global nitrogen oxide emissions

    NASA Astrophysics Data System (ADS)

    Cooper, Matthew; Martin, Randall V.; Padmanabhan, Akhila; Henze, Daven K.

    2017-04-01

    Satellite observations offer information applicable to top-down constraints on emission inventories through inverse modeling. Here we compare two methods of inverse modeling for emissions of nitrogen oxides (NOx) from nitrogen dioxide (NO2) columns using the GEOS-Chem chemical transport model and its adjoint. We treat the adjoint-based 4D-Var modeling approach for estimating top-down emissions as a benchmark against which to evaluate variations on the mass balance method. We use synthetic NO2 columns generated from known NOx emissions to serve as "truth." We find that error in mass balance inversions can be reduced by up to a factor of 2 with an iterative process that uses finite difference calculations of the local sensitivity of NO2 columns to a change in emissions. In a simplified experiment to recover local emission perturbations, horizontal smearing effects due to NOx transport are better resolved by the adjoint approach than by mass balance. For more complex emission changes, or at finer resolution, the iterative finite difference mass balance and adjoint methods produce similar global top-down inventories when inverting hourly synthetic observations, both reducing the a priori error by factors of 3-4. Inversions of simulated satellite observations from low Earth and geostationary orbits also indicate that both the mass balance and adjoint inversions produce similar results, reducing a priori error by a factor of 3. As the iterative finite difference mass balance method provides similar accuracy as the adjoint method, it offers the prospect of accurately estimating top-down NOx emissions using models that do not have an adjoint.
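
    A toy version of the iterative finite-difference mass-balance update described above is sketched below; the "forward model", its smearing, and the emission field are invented stand-ins for GEOS-Chem and the synthetic NO2 columns, intended only to show the structure of the iteration.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 20                                   # grid cells

        def forward(E):
            """Toy chemistry-transport stand-in: columns respond sublinearly to the
            local emission and are smeared onto neighbouring cells."""
            local = E ** 0.8
            return 0.7 * local + 0.15 * np.roll(local, 1) + 0.15 * np.roll(local, -1)

        E_true = 1.0 + rng.uniform(0.0, 2.0, n)
        obs = forward(E_true)                    # synthetic "observed" columns

        E = np.ones(n)                           # a priori emissions
        for it in range(6):
            model = forward(E)
            dE = 0.1 * E                         # 10% finite-difference perturbation run
            sens = (forward(E + dE) - model) / dE
            E = np.maximum(E + (obs - model) / sens, 0.05)   # mass-balance-style local update
            err = np.linalg.norm(E - E_true) / np.linalg.norm(E_true)
            print(f"iteration {it + 1}: relative emission error {err:.3f}")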

  8. Automated divertor target design by adjoint shape sensitivity analysis and a one-shot method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dekeyser, W., E-mail: Wouter.Dekeyser@kuleuven.be; Reiter, D.; Baelmans, M.

    As magnetic confinement fusion progresses towards the development of first reactor-scale devices, computational tokamak divertor design is a topic of high priority. Presently, edge plasma codes are used in a forward approach, where magnetic field and divertor geometry are manually adjusted to meet design requirements. Due to the complex edge plasma flows and large number of design variables, this method is computationally very demanding. On the other hand, efficient optimization-based design strategies have been developed in computational aerodynamics and fluid mechanics. Such an optimization approach to divertor target shape design is elaborated in the present paper. A general formulation of the design problems is given, and conditions characterizing the optimal designs are formulated. Using a continuous adjoint framework, design sensitivities can be computed at a cost of only two edge plasma simulations, independent of the number of design variables. Furthermore, by using a one-shot method the entire optimization problem can be solved at an equivalent cost of only a few forward simulations. The methodology is applied to target shape design for uniform power load, in simplified edge plasma geometry.

  9. On the role of self-adjointness in the continuum formulation of topological quantum phases

    NASA Astrophysics Data System (ADS)

    Tanhayi Ahari, Mostafa; Ortiz, Gerardo; Seradjeh, Babak

    2016-11-01

    Topological quantum phases of matter are characterized by an intimate relationship between the Hamiltonian dynamics away from the edges and the appearance of bound states localized at the edges of the system. Elucidating this correspondence in the continuum formulation of topological phases, even in the simplest case of a one-dimensional system, touches upon fundamental concepts and methods in quantum mechanics that are not commonly discussed in textbooks, in particular the self-adjoint extensions of a Hermitian operator. We show how such topological bound states can be derived in a prototypical one-dimensional system. Along the way, we provide a pedagogical exposition of the self-adjoint extension method as well as the role of symmetries in correctly formulating the continuum, field-theory description of topological matter with boundaries. Moreover, we show that self-adjoint extensions can be characterized generally in terms of a conserved local current associated with the self-adjoint operator.

  10. Adjoint Sensitivity Analysis of Orbital Mechanics: Application to Computations of Observables' Partials with Respect to Harmonics of the Planetary Gravity Fields

    NASA Technical Reports Server (NTRS)

    Ustinov, Eugene A.; Sunseri, Richard F.

    2005-01-01

    An approach is presented to the inversion of gravity fields based on evaluation of partials of observables with respect to gravity harmonics using the solution of the adjoint problem of orbital dynamics of the spacecraft. The corresponding adjoint operator is derived directly from the linear operator of the linearized forward problem of orbital dynamics. The resulting adjoint problem is similar to the forward problem and can be solved by the same methods. For a given highest degree N of the gravity harmonics desired, this method involves integration of N adjoint solutions, as compared to integration of N^2 partials of the forward solution with respect to gravity harmonics in the conventional approach. Thus, for higher resolution gravity models, this approach becomes increasingly more effective in terms of computer resources as compared to the approach based on the solution of the forward problem of orbital dynamics.
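
    The cost argument (a handful of adjoint integrations replacing many forward partials) can be made concrete with a toy discrete linear system; the matrices below are random stand-ins for linearized dynamics and parameter influence, not the actual spacecraft model or gravity harmonics.

        import numpy as np

        rng = np.random.default_rng(3)
        n_state, n_param, K = 6, 40, 200         # one observable, many parameters

        A = 0.95 * np.linalg.qr(rng.standard_normal((n_state, n_state)))[0]   # stable stand-in dynamics
        B = rng.standard_normal((n_state, n_param))                           # parameter influence
        c = rng.standard_normal(n_state)                                      # observable y = c . x_K
        # state recursion: x_{k+1} = A x_k + B p, with x_0 independent of p

        # Forward (direct) sensitivities: one propagation per parameter.
        grad_fwd = np.zeros(n_param)
        for j in range(n_param):
            s = np.zeros(n_state)
            for _ in range(K):
                s = A @ s + B[:, j]
            grad_fwd[j] = c @ s

        # Adjoint: a single backward propagation yields all n_param sensitivities at once.
        lam = c.copy()
        grad_adj = np.zeros(n_param)
        for _ in range(K):
            grad_adj += B.T @ lam
            lam = A.T @ lam

        print(np.max(np.abs(grad_fwd - grad_adj)))   # the two gradients agree to round-off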

  11. Two- and four-dimensional representations of the PT- and CPT-symmetric fermionic algebras

    NASA Astrophysics Data System (ADS)

    Beygi, Alireza; Klevansky, S. P.; Bender, Carl M.

    2018-03-01

    Fermionic systems differ from their bosonic counterparts, the main difference with regard to symmetry considerations being that T^2 = -1 for fermionic systems. In PT-symmetric quantum mechanics an operator has both PT and CPT adjoints. Fermionic operators η, which are quadratically nilpotent (η^2 = 0), and algebras with PT and CPT adjoints can be constructed. These algebras obey different anticommutation relations: η η^PT + η^PT η = -1, where η^PT is the PT adjoint of η, and η η^CPT + η^CPT η = 1, where η^CPT is the CPT adjoint of η. This paper presents matrix representations for the operator η and its PT and CPT adjoints in two and four dimensions. A PT-symmetric second-quantized Hamiltonian modeled on quantum electrodynamics that describes a system of interacting fermions and bosons is constructed within this framework and is solved exactly.

  12. Criticality Calculations with MCNP6 - Practical Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2016-11-29

    These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The following are the lecture topics: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, criticality accident alarm systems. After completion of this course, you should be able to: Develop an input model for MCNP; Describe how cross section data impact Monte Carlo and deterministic codes; Describe the importance of validation of computer codes and how it is accomplished; Describe the methodology supporting Monte Carlo codes and deterministic codes; Describe pitfalls of Monte Carlo calculations; Discuss the strengths and weaknesses of Monte Carlo and Discrete Ordinates codes; and, noting that the diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present, identify in the context of these limitations a fissile system for which a diffusion theory solution would be adequate.

  13. Ritz method for transient response in systems having unsymmetric stiffness

    NASA Technical Reports Server (NTRS)

    Butler, Thomas G.

    1989-01-01

    The DMAP coding was automated to such an extent by using the device of bubble vectors, that it is useable for analyses in its present form. This feasibility study demonstrates that the Ritz Method is so compelling as to warrant coding its modules in FORTRAN and organizing the resulting coding into a new Rigid Format. Even though this Ritz technique was developed for unsymmetric stiffness matrices, it offers advantages to problems with symmetric stiffnesses. If used for the symmetric case the solution would be simplified to one set of modes, because the adjoint would be the same as the primary. Its advantage in either type of symmetry over a classical eigenvalue modal expansion is that information density per Ritz mode is far richer than per eigenvalue mode; thus far fewer modes would be needed for the same accuracy and every mode would actively participate in the response. Considerable economy can be realized in adapting Ritz vectors for modal solutions. This new Ritz capability now makes NASTRAN even more powerful than before.

  14. Trajectory Optimization Using Adjoint Method and Chebyshev Polynomial Approximation for Minimizing Fuel Consumption During Climb

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hornby, Gregory; Ishihara, Abe

    2013-01-01

    This paper describes two methods of trajectory optimization for obtaining a minimum-fuel-to-climb trajectory for an aircraft. The first is based on the adjoint method; the second is a direct trajectory optimization method using a Chebyshev polynomial approximation and a cubic spline approximation. The approximate optimal trajectory is compared with the adjoint-based optimal trajectory, which is considered the true optimal solution of the trajectory optimization problem. The adjoint-based optimization problem leads to a singular optimal control solution, which results in a bang-singular-bang optimal control.
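
    The Chebyshev ingredient of the direct method can be sketched as follows: a climb profile is represented by a small number of Chebyshev coefficients that an optimizer could then adjust, and its time derivative comes from the same expansion. The altitude profile and polynomial degree below are illustrative, not the paper's aircraft model.

        import numpy as np
        from numpy.polynomial import chebyshev as C

        # Placeholder "climb" altitude history on normalized time t in [0, 1].
        t = np.linspace(0.0, 1.0, 200)
        h = 10000.0 * (1.0 - np.exp(-3.0 * t))

        tau = 2.0 * t - 1.0                       # map [0, 1] onto the Chebyshev interval [-1, 1]
        coeffs = C.chebfit(tau, h, deg=6)         # seven coefficients describe the whole profile
        h_fit = C.chebval(tau, coeffs)
        print("max fit error:", np.max(np.abs(h - h_fit)))

        dh_dt = 2.0 * C.chebval(tau, C.chebder(coeffs))   # chain rule for the time mapping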

  15. Admitting the Inadmissible: Adjoint Formulation for Incomplete Cost Functionals in Aerodynamic Optimization

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Salas, Manuel D.

    1997-01-01

    We derive the adjoint equations for problems in aerodynamic optimization which are improperly considered as "inadmissible." For example, a cost functional which depends on the density, rather than on the pressure, is considered "inadmissible" for an optimization problem governed by the Euler equations. We show that for such problems additional terms should be included in the Lagrangian functional when deriving the adjoint equations. These terms are obtained from the restriction of the interior PDE to the control surface. Demonstrations of the explicit derivation of the adjoint equations for "inadmissible" cost functionals are given for the potential, Euler, and Navier-Stokes equations.

  16. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    PubMed

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components in the vicinity of the perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
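
    As a generic illustration of the "one extra solve" property described in this record (this is not the FDTD formulation of the paper), the sketch below computes adjoint sensitivities of a linear-system response to several parameters with a single adjoint solve; all matrices and vectors are random stand-ins.

```python
# Generic discrete adjoint sketch: for A(p) x = b and response R = c^T x,
# dR/dp_k = -lam^T (dA/dp_k) x, where the single adjoint solve A^T lam = c
# is shared by every parameter p_k.
import numpy as np

n, n_params = 5, 3
rng = np.random.default_rng(0)
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
dA = [0.01 * rng.standard_normal((n, n)) for _ in range(n_params)]  # dA/dp_k (illustrative)
b = rng.standard_normal(n)
c = rng.standard_normal(n)

x = np.linalg.solve(A, b)        # one "forward" solve
lam = np.linalg.solve(A.T, c)    # one "adjoint" solve, independent of n_params

grad = np.array([-lam @ (dAk @ x) for dAk in dA])
print(grad)
```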

  17. Operator pencil passing through a given operator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biggs, A., E-mail: adam.biggs@student.manchester.ac.uk; Khudaverdian, H. M., E-mail: khudian@manchester.ac.uk

    Let Δ be a linear differential operator acting on the space of densities of a given weight λ₀ on a manifold M. One can consider a pencil of operators Π̂(Δ) = (Δ_λ) passing through the operator Δ such that any Δ_λ is a linear differential operator acting on densities of weight λ. This pencil can be identified with a linear differential operator Δ̂ acting on the algebra of densities of all weights. The existence of an invariant scalar product in the algebra of densities implies a natural decomposition of operators, i.e., pencils of self-adjoint and anti-self-adjoint operators. We study lifting maps that are, on one hand, equivariant with respect to divergenceless vector fields and, on the other hand, take values in self-adjoint or anti-self-adjoint operators. In particular, we analyze the relation between these two concepts and apply it to the study of diff(M)-equivariant liftings. Finally, we briefly consider the case of liftings equivariant with respect to the algebra of projective transformations and describe all regular self-adjoint and anti-self-adjoint liftings. Our constructions can be considered as a generalisation of equivariant quantisation.

  18. Radiation Transport Tools for Space Applications: A Review

    NASA Technical Reports Server (NTRS)

    Jun, Insoo; Evans, Robin; Cherng, Michael; Kang, Shawn

    2008-01-01

    This slide presentation contains a brief discussion of nuclear transport codes widely used in the space radiation community for shielding and scientific analyses. Seven radiation transport codes are addressed. The two general methods (i.e., the Monte Carlo method and the deterministic method) are briefly reviewed.

  19. Design sensitivity analysis with Applicon IFAD using the adjoint variable method

    NASA Technical Reports Server (NTRS)

    Frederick, Marjorie C.; Choi, Kyung K.

    1984-01-01

    A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of an existing finite element structural analysis program and the theoretical foundation of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that calculations can be carried out outside existing finite element codes, using postprocessing data only. That is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the uncertainty of numerical accuracy associated with selection of a finite difference perturbation.
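
    A textbook instance of the adjoint variable method operating purely on post-processed results is the compliance sensitivity of a linear structure, where the adjoint solution coincides with the primary displacement. The toy two-spring sketch below illustrates this; it is not the IFAD implementation described in the record.

```python
# Compliance sensitivity sketch for K(p) u = f with C = f^T u.
# Compliance is self-adjoint, so dC/dp = -u^T (dK/dp) u can be evaluated from
# post-processed displacements alone, with no extra adjoint solve.
import numpy as np

def stiffness(k1, k2):
    # Two springs in series, fixed at one end (illustrative toy model).
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

k1, k2 = 100.0, 50.0
f = np.array([0.0, 10.0])
u = np.linalg.solve(stiffness(k1, k2), f)

dK_dk1 = np.array([[1.0, 0.0], [0.0, 0.0]])
dC_dk1 = -u @ dK_dk1 @ u                     # adjoint (self-adjoint) sensitivity

# Finite-difference check of the same sensitivity.
h = 1e-6
C = lambda a, b: f @ np.linalg.solve(stiffness(a, b), f)
print(dC_dk1, (C(k1 + h, k2) - C(k1 - h, k2)) / (2 * h))
```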

  20. Predicting Ice Sheet and Climate Evolution at Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heimbach, Patrick

    2016-02-06

    A main research objective of PISCEES is the development of formal methods for quantifying uncertainties in ice sheet modeling. Uncertainties in simulating and projecting mass loss from the polar ice sheets arise primarily from initial conditions, surface and basal boundary conditions, and model parameters. In general terms, two main chains of uncertainty propagation may be identified: 1. inverse propagation of observation and/or prior onto posterior control variable uncertainties; 2. forward propagation of prior or posterior control variable uncertainties onto those of target output quantities of interest (e.g., climate indices or ice sheet mass loss). A related goal is the development of computationally efficient methods for producing initial conditions for an ice sheet that are close to available present-day observations and essentially free of artificial model drift, which is required in order to be useful for model projections (“initialization problem”). To be of maximum value, such optimal initial states should be accompanied by “useful” uncertainty estimates that account for the different sources of uncertainties, as well as the degree to which the optimum state is constrained by available observations. The PISCEES proposal outlined two approaches for quantifying uncertainties. The first targets the full exploration of the uncertainty in model projections with sampling-based methods and a workflow managed by DAKOTA (the main delivery vehicle for software developed under QUEST). This is feasible for low-dimensional problems, e.g., those with a handful of global parameters to be inferred. This approach can benefit from derivative/adjoint information, but it is not necessary, which is why it is often referred to as “non-intrusive”. The second approach makes heavy use of derivative information from model adjoints to address quantifying uncertainty in high dimensions (e.g., basal boundary conditions in ice sheet models). The use of local gradient, or Hessian information (i.e., second derivatives of the cost function), requires additional code development and implementation, and is thus often referred to as an “intrusive” approach. Within PISCEES, MIT has been tasked to develop methods for derivative-based UQ, the “intrusive” approach discussed above. These methods rely on the availability of first (adjoint) and second (Hessian) derivative code, developed through intrusive methods such as algorithmic differentiation (AD). While representing a significant burden in terms of code development, derivative-based UQ is able to cope with very high-dimensional uncertainty spaces. That is, unlike sampling methods (all variations of Monte Carlo), the computational burden is independent of the dimension of the uncertainty space. This is a significant advantage for spatially distributed uncertainty fields, such as three-dimensional initial conditions, three-dimensional parameter fields, or two-dimensional surface and basal boundary conditions. Importantly, uncertainty fields for ice sheet models generally fall into this category.
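
    A minimal sketch of the "forward propagation" chain mentioned above, in its simplest linearized (first-order second-moment) form: once an adjoint-computed gradient of a scalar quantity of interest is available, prior control-variable uncertainty can be propagated at a cost independent of the dimension of the control space. All values below are synthetic placeholders, not PISCEES model output.

```python
# First-order (linearized) propagation of control-variable uncertainty onto a
# scalar quantity of interest q(m): var(q) ~ g^T Sigma g, where g = dq/dm is an
# adjoint-computed gradient. Cost: one gradient, independent of dim(m).
import numpy as np

ndim = 10_000                                    # high-dimensional control space
rng = np.random.default_rng(1)
g = rng.standard_normal(ndim) * 1e-3             # stands in for an adjoint gradient
sigma = 0.05 * np.ones(ndim)                     # prior std-dev of each control variable

# Diagonal prior covariance for illustration; a full Sigma would use g @ Sigma @ g.
var_q = np.sum((g * sigma) ** 2)
print("approx. std of QoI:", np.sqrt(var_q))
```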

  1. Consideration of the Protection Curtain's Shielding Ability after Identifying the Source of Scattered Radiation in the Angiography.

    PubMed

    Sato, Naoki; Fujibuchi, Toshioh; Toyoda, Takatoshi; Ishida, Takato; Ohura, Hiroki; Miyajima, Ryuichi; Orita, Shinichi; Sueyoshi, Tomonari

    2017-06-15

    To decrease radiation exposure to the medical staff performing angiography, the dose distribution in the angiography room was calculated using the Particle and Heavy Ion Transport code System (PHITS), which is based on the Monte Carlo method, and the source of scattered radiation was confirmed using a tungsten sheet by considering the difference in shielding performance among different sheet placements. Scattered radiation generated from the flat panel detector, X-ray tube and bed was calculated using PHITS. In this experiment, the source of scattered radiation was identified as the phantom or the acrylic window attached to the X-ray tube; thus, a protection curtain was placed on the bed to shield against scattered radiation at low positions. There was an average difference of 20% between the measured and calculated values. The H*(10) value decreased after placing the sheet on the right side of the phantom. Thus, the curtain could decrease scattered radiation. © Crown copyright 2016.

  2. LPT. Shield test facility (TAN645 and 646). Elevations show three ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LPT. Shield test facility (TAN-645 and -646). Elevations show three types of siding: Asbestos cement, pumice block, concrete. Ralph M. Parsons 1229-17 ANP/GE-6-6445-A-3. April 1957. Approved by INEEL Classification Office for public release. INEEL index code no. 037-06445/0646-00-693-107349 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  3. LPT. Shield test facility (TAN646). Floor plan for water treatment ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LPT. Shield test facility (TAN-646). Floor plan for water treatment room on west facade, tank and filter locations in basement along service tunnel and in coupling station. Ralph M. Parsons 1229-17 ANP/GE-6-646-P-2. April 1957. INEEL Index code no. 037-0645/0646-51-693-107387 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  4. Development of Graphical User Interface for ARRBOD (Acute Radiation Risk and BRYNTRN Organ Dose Projection)

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee; Hu, Shaowen; Nounu, Hatem N.; Cucinotta, Francis A.

    2010-01-01

    The space radiation environment, particularly solar particle events (SPEs), poses the risk of acute radiation sickness (ARS) to humans, and organ doses from SPE exposure may reach critical levels during extra-vehicular activities (EVAs) or within lightly shielded spacecraft. NASA has developed an organ dose projection model using the BRYNTRN and SUMDOSE computer codes, and a probabilistic model of Acute Radiation Risk (ARR). The codes BRYNTRN and SUMDOSE, written in FORTRAN, are a baryon transport code and an output data processing code, respectively. The ARR code is written in C. The risk projection models of organ doses and ARR take the output from BRYNTRN as an input to their calculations. BRYNTRN code operation requires extensive input preparation. With a graphical user interface (GUI) to handle input and output for BRYNTRN, the response models can be connected easily and correctly to BRYNTRN in a user-friendly way. A GUI for the Acute Radiation Risk and BRYNTRN Organ Dose (ARRBOD) projection code provides seamless integration of the input and output manipulations required for operation of the ARRBOD modules: BRYNTRN, SUMDOSE, and the ARR probabilistic response model. The ARRBOD GUI is intended for mission planners, radiation shield designers, space operations in the mission operations directorate (MOD), and space biophysics researchers. The ARRBOD GUI will serve as a proof-of-concept example for future integration of other human space applications risk projection models. The current version of the ARRBOD GUI is a new self-contained product and will have follow-on versions as options are added: 1) human geometries of MAX/FAX in addition to CAM/CAF; 2) shielding distributions for spacecraft, Mars surface and atmosphere; 3) various space environmental and biophysical models; and 4) other response models to be connected to BRYNTRN. The major components of the overall system, the subsystem interconnections, and external interfaces are described in this report, and the ARRBOD GUI product is explained step by step in order to serve as a tutorial.

  5. Evaluation of dosimetric properties of shielding disk used in intraoperative electron radiotherapy: A Monte Carlo study.

    PubMed

    Robatjazi, Mostafa; Baghani, Hamid Reza; Mahdavic, Seied Rabi; Felici, Giuseppe

    2018-05-01

    A shielding disk is used in IOERT procedures to absorb radiation behind the target and protect underlying healthy tissues. Setup variations of the shielding disk can affect the corresponding in-vivo dose distribution. In this study, the changes in dosimetric parameters due to disk setup variations are evaluated using the EGSnrc Monte Carlo (MC) code. The results can help the treatment team decide on the level of accuracy required in the setup procedure and the delivered dose to the target volume during IOERT. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Neutron skyshine from intense 14-MeV neutron source facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, T.; Hayashi, K.; Takahashi, A.

    1985-07-01

    The dose distribution and the spectrum variation of neutrons due to the skyshine effect have been measured with the high-efficiency rem counter, the multisphere spectrometer, and the NE-213 scintillator in the environment surrounding an intense 14-MeV neutron source facility. The dose distribution and the energy spectra of neutrons around the facility used as a skyshine source have also been measured to enable the absolute evaluation of the skyshine effect. The skyshine effect was analyzed by two multigroup Monte Carlo codes, NIMSAC and MMCR-2, by two discrete ordinates Sn codes, ANISN and DOT3.5, and by the shield structure design code for skyshine, SKYSHINE-II. The calculated results show good agreement with the measured results in absolute values. These experimental results should be useful as benchmark data for skyshine analysis and for shielding design of fusion facilities.

  7. Ambient noise adjoint tomography for a linear array in North China

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Yao, H.; Liu, Q.; Yuan, Y. O.; Zhang, P.; Feng, J.; Fang, L.

    2017-12-01

    Ambient noise tomography based on dispersion data and ray theory has been widely utilized for imaging crustal structures. In order to improve the inversion accuracy, ambient noise tomography based on the 3D adjoint approach or full waveform inversion has been developed recently; however, the computational cost is tremendous. In this study we present 2D ambient noise adjoint tomography for a linear array in north China, with significant computational savings compared to 3D ambient noise adjoint tomography. During the preprocessing, we first convert the observed data in 3D media, i.e., surface-wave empirical Green's functions (EGFs) from ambient noise cross-correlation, to the reconstructed EGFs in 2D media using a 3D/2D transformation scheme. Different from the conventional steps of measuring phase dispersion, the 2D adjoint tomography refines 2D shear wave speeds along the profile directly from the reconstructed Rayleigh wave EGFs in the period band 6-35 s. With the 2D initial model extracted from the 3D model from traditional ambient noise tomography, adjoint tomography updates the model by minimizing the frequency-dependent Rayleigh wave traveltime misfits between the reconstructed EGFs and synthetic Green's functions (SGFs) in 2D media generated by the spectral-element method (SEM), using a preconditioned conjugate gradient method. The multitaper traveltime difference measurement is applied in four period bands during the inversion: 20-35 s, 15-30 s, 10-20 s and 6-15 s. The recovered model shows more detailed crustal structures, with a pronounced low velocity anomaly in the mid-lower crust beneath the junction of the Taihang Mountains and Yin-Yan Mountains, compared with the initial model. This low velocity structure may imply intense crust-mantle interactions, probably associated with magmatic underplating during the Mesozoic to Cenozoic evolution of the region. To our knowledge, this is the first time that ambient noise adjoint tomography has been implemented in 2D media. Considering the intensive computational cost and storage of 3D adjoint tomography, this 2D ambient noise adjoint tomography has potential advantages for obtaining high-resolution 2D crustal structures with limited computational resources.
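
    The model-update machinery described above is, at its core, a preconditioned nonlinear conjugate-gradient iteration. The sketch below shows that iteration on a runnable toy quadratic misfit standing in for the multitaper traveltime misfit; in the actual workflow the gradient would come from SEM adjoint simulations.

```python
# Preconditioned nonlinear conjugate-gradient (Polak-Ribiere) update loop of the
# kind used for adjoint-tomography model updates. The quadratic "misfit" is a
# runnable stand-in; its exact line search is only possible because it is quadratic.
import numpy as np

rng = np.random.default_rng(2)
n = 50
A = np.diag(np.linspace(1.0, 100.0, n))          # toy, badly scaled misfit Hessian
m_true = rng.standard_normal(n)

def misfit_and_gradient(m):
    r = m - m_true
    return 0.5 * r @ A @ r, A @ r

P_inv = 1.0 / np.diag(A)                         # diagonal preconditioner

m = np.zeros(n)
d_prev = g_prev = pg_prev = None
for it in range(50):
    chi, g = misfit_and_gradient(m)
    pg = P_inv * g                               # preconditioned gradient
    if d_prev is None:
        d = -pg                                  # steepest-descent start
    else:
        beta = max(g @ (pg - pg_prev) / (g_prev @ pg_prev), 0.0)   # Polak-Ribiere(+)
        d = -pg + beta * d_prev
    alpha = -(g @ d) / (d @ A @ d)               # exact line search for the quadratic
    m = m + alpha * d
    d_prev, g_prev, pg_prev = d, g, pg
print("final misfit:", misfit_and_gradient(m)[0])
```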

  8. 3D Space Radiation Transport in a Shielded ICRU Tissue Sphere

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2014-01-01

    A computationally efficient 3DHZETRN code capable of simulating High Charge (Z) and Energy (HZE) and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation was recently developed for a simple homogeneous shield object. Monte Carlo benchmarks were used to verify the methodology in slab and spherical geometry, and the 3D corrections were shown to provide significant improvement over the straight-ahead approximation in some cases. In the present report, the new algorithms with well-defined convergence criteria are extended to inhomogeneous media within a shielded tissue slab and a shielded tissue sphere and tested against Monte Carlo simulation to verify the solution methods. The 3D corrections are again found to more accurately describe the neutron and light ion fluence spectra as compared to the straight-ahead approximation. These computationally efficient methods provide a basis for software capable of space shield analysis and optimization.

  9. Considerations Concerning the Development and Testing of In-situ Materials for Martian Exploration

    NASA Technical Reports Server (NTRS)

    Kim, M.-H. Y.; Heilbronn, L.; Thibeault, S. A.; Simonsen, L. C.; Wilson, J. W.; Chang, K.; Kiefer, R. L.; Maahs, H. G.

    2000-01-01

    Natural Martian surface materials are evaluated for their potential use as radiation shields for manned Mars missions. The modified radiation fluences behind various kinds of Martian rocks and regolith are determined by solving the Boltzmann equation using NASA Langley's HZETRN code along with the 1977 Solar Minimum galactic cosmic ray environmental model. To make structural shielding composite materials from constituents of the Mars atmosphere and from Martian regolith for Martian surface habitats, schemes for synthesizing polyimide from the Mars atmosphere and for processing Martian regolith/polyimide composites are proposed. Theoretical predictions of the shielding properties of these composites are computed to assess their shielding effectiveness. Adding high-performance polymer binders to Martian regolith to enhance structural properties also enhances the shielding properties of these composites because of the added hydrogenous constituents. Laboratory testing of regolith simulant/polyimide composites is planned to validate this prediction.

  10. Nuclear shielding constants by density functional theory with gauge including atomic orbitals

    NASA Astrophysics Data System (ADS)

    Helgaker, Trygve; Wilson, Philip J.; Amos, Roger D.; Handy, Nicholas C.

    2000-08-01

    Recently, we introduced a new density-functional theory (DFT) approach for the calculation of NMR shielding constants. First, a hybrid DFT calculation (using 5% exact exchange) is performed on the molecule to determine Kohn-Sham orbitals and their energies; second, the constants are determined as in nonhybrid DFT theory, that is, the paramagnetic contribution to the constants is calculated from a noniterative, uncoupled sum-over-states expression. The initial results suggested that this semiempirical DFT approach gives shielding constants in good agreement with the best ab initio and experimental data; in this paper, we further validate this procedure, using London orbitals in the theory, having implemented DFT into the ab initio code DALTON. Calculations on a number of small and medium-sized molecules confirm that our approach produces shieldings in excellent agreement with experiment and the best ab initio results available, demonstrating its potential for the study of shielding constants of large systems.

  11. An Analysis of Radiation Penetration through the U-Shaped Cast Concrete Joints of Concrete Shielding in the Multipurpose Gamma Irradiator of BATAN

    NASA Astrophysics Data System (ADS)

    Ardiyati, Tanti; Rozali, Bang; Kasmudin

    2018-02-01

    An analysis of radiation penetration through the U-shaped joints of cast concrete shielding in BATAN’s multipurpose gamma irradiator has been carried out. The analysis was performed by calculating the radiation penetration through the U-shaped joints of the concrete shielding using the MCNP computer code. The U-shaped joint was a new design in massive concrete construction in Indonesia and, in its actual application, it is joined with a bonding agent. In the MCNP simulation model, eight detectors were located close to the observed irradiation room walls of the concrete shielding. The simulation results indicated that the radiation levels outside the concrete shielding were less than the permissible limit of 2.5 μSv/h, so that workers could safely access the electrical room, control room, water treatment facility and the area outside the irradiation room. The radiation penetration decreased as the density of the material increased.

  12. A simple model for molecular hydrogen chemistry coupled to radiation hydrodynamics

    NASA Astrophysics Data System (ADS)

    Nickerson, Sarah; Teyssier, Romain; Rosdahl, Joakim

    2018-06-01

    We introduce non-equilibrium molecular hydrogen chemistry into the radiation-hydrodynamics code RAMSES-RT. This is an adaptive mesh refinement grid code with radiation hydrodynamics that couples the thermal chemistry of hydrogen and helium to moment-based radiative transfer with the Eddington tensor closure model. The H2 physics that we include are formation on dust grains, gas phase formation, formation by three-body collisions, collisional destruction, photodissociation, photoionisation, cosmic ray ionisation and self-shielding. In particular, we implement the first model for H2 self-shielding that is tied locally to moment-based radiative transfer by enhancing photo-destruction. This self-shielding from Lyman-Werner line overlap is critical to H2 formation and gas cooling. We can now track the non-equilibrium evolution of molecular, atomic, and ionised hydrogen species with their corresponding dissociating and ionising photon groups. Over a series of tests we show that our model works well compared to specialised photodissociation region codes. We successfully reproduce the transition depth between molecular and atomic hydrogen, molecular cooling of the gas, and a realistic Strömgren sphere embedded in a molecular medium. In this paper we focus on test cases to demonstrate the validity of our model on small scales. Our ultimate goal is to implement this in large-scale galactic simulations.

  13. 2D Inviscid and Viscous Inverse Design Using Continuous Adjoint and Lax-Wendroff Formulation

    NASA Astrophysics Data System (ADS)

    Proctor, Camron Lisle

    The continuous adjoint (CA) technique for optimization and/or inverse design of aerodynamic components has seen nearly 30 years of documented success in academia. The benefits of using CA versus a direct sensitivity analysis are shown repeatedly in the literature. However, the use of CA in industry is relatively unheard of. The sparseness of industry contributions to the field may be attributed to the tediousness of the derivation and/or to the difficulties in implementation due to the lack of well-documented adjoint numerical methods. The focus of this work has been to thoroughly document the techniques required to build a two-dimensional CA inverse-design tool. To this end, this work begins with a short background on computational fluid dynamics (CFD) and the use of optimization tools in conjunction with CFD tools to solve aerodynamic optimization problems. A thorough derivation of the continuous adjoint equations and the accompanying gradient calculations for inviscid and viscous constraining equations follows the introduction. Next, the numerical techniques used for solving the partial differential equations (PDEs) governing the flow equations and the adjoint equations are described. Numerical techniques for the supplementary equations are discussed briefly. Subsequently, a verification of the efficacy of the inverse-design tool for the inviscid adjoint equations, as well as possible numerical implementation pitfalls, are discussed. The NACA0012 airfoil is used as the initial airfoil and surface pressure distribution with the NACA16009 as the desired pressure, and vice versa. Using a Savitzky-Golay gradient filter, convergence (defined as a cost function < 1E-5) is reached in approximately 220 design iterations using 121 design variables. The inviscid inverse-design results are followed by a discussion of the viscous inverse-design results and the techniques used to further the convergence of the optimizer. The relationship between limiting step size and convergence in a line-search optimization is shown to slightly decrease the final cost function at significant computational cost. A gradient damping technique is presented and shown to increase the convergence rate of the optimization in viscous problems, at a negligible increase in computational cost, but is insufficient to converge the solution. Systematically including adjacent surface vertices in the perturbation of a design variable, itself a surface vertex, is shown to affect the convergence capability of the viscous optimizer. Finally, a comparison of using inviscid adjoint equations, as opposed to viscous adjoint equations, on viscous flow is presented, and the inviscid adjoint paired with viscous flow is found to reduce the cost function further than the viscous adjoint for the presented problem.
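
    The two gradient-conditioning ideas discussed in this record, Savitzky-Golay filtering of the surface sensitivity and damping of the update direction, can be sketched as follows; the "gradient" is synthetic and the damping scheme is a generic illustration rather than the exact formulation used in the thesis.

```python
# Sketch of gradient conditioning: smooth a noisy surface-sensitivity vector
# with a Savitzky-Golay filter, then blend it with the previous search
# direction to damp oscillations between design iterations.
import numpy as np
from scipy.signal import savgol_filter

n = 121                                          # number of surface design variables
x = np.linspace(0.0, 1.0, n)
gradient = np.sin(2 * np.pi * x) + 0.2 * np.random.default_rng(3).standard_normal(n)

smoothed = savgol_filter(gradient, window_length=11, polyorder=3)

# Illustrative damping: mix the new (smoothed) descent direction with the
# previous one; previous_direction would persist across design iterations.
damping = 0.5
previous_direction = np.zeros(n)
direction = (1.0 - damping) * (-smoothed) + damping * previous_direction
print(direction[:5])
```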

  14. Analytical-HZETRN Model for Rapid Assessment of Active Magnetic Radiation Shielding

    NASA Technical Reports Server (NTRS)

    Washburn, S. A.; Blattnig, S. R.; Singleterry, R. C.; Westover, S. C.

    2014-01-01

    The use of active radiation shielding designs has the potential to reduce the radiation exposure received by astronauts on deep-space missions at a significantly lower mass penalty than designs utilizing only passive shielding. Unfortunately, the determination of the radiation exposure inside these shielded environments often involves lengthy and computationally intensive Monte Carlo analysis. In order to evaluate the large trade space of design parameters associated with a magnetic radiation shield design, an analytical model was developed for the determination of flux inside a solenoid magnetic field due to the Galactic Cosmic Radiation (GCR) radiation environment. This analytical model was then coupled with NASA's radiation transport code, HZETRN, to account for the effects of passive/structural shielding mass. The resulting model can rapidly obtain results for a given configuration and can therefore be used to analyze an entire trade space of potential variables in less time than is required for even a single Monte Carlo run. Analyzing this trade space for a solenoid magnetic shield design indicates that active shield bending powers greater than 15 Tm and passive/structural shielding thicknesses greater than 40 g/cm2 have a limited impact on reducing dose equivalent values. Also, it is shown that higher magnetic field strengths are more effective than thicker magnetic fields at reducing dose equivalent.
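
    The kind of trade-space evaluation described above can be organized as a simple parameter sweep. In the sketch below, dose_equivalent() is an invented placeholder with diminishing returns in both variables; it is not the coupled analytical/HZETRN model of the record and its numbers carry no physical meaning.

```python
# Trade-space scan skeleton: evaluate a dose-equivalent model over a grid of
# magnetic bending power and passive shielding areal density. The model below
# is a placeholder chosen only to show how such a sweep is organized.
import numpy as np

def dose_equivalent(bending_power_Tm, areal_density_g_cm2):
    # Placeholder with saturating reductions in both variables (illustrative only).
    return 0.6 * np.exp(-bending_power_Tm / 10.0) + 0.4 * np.exp(-areal_density_g_cm2 / 25.0)

bending_powers = np.linspace(0.0, 30.0, 7)       # T*m
thicknesses = np.linspace(0.0, 80.0, 9)          # g/cm^2

grid = np.array([[dose_equivalent(b, t) for t in thicknesses] for b in bending_powers])
i, j = np.unravel_index(np.argmin(grid), grid.shape)
print("lowest placeholder dose at", bending_powers[i], "Tm,", thicknesses[j], "g/cm^2")
```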

  15. Performance of a Light-Weight Ablative Thermal Protection Material for the Stardust Mission Sample Return Capsule

    NASA Technical Reports Server (NTRS)

    Covington, M. A.

    2005-01-01

    New tests and analyses are reported that were carried out to resolve testing uncertainties in the original development and qualification of a lightweight ablative material used for the Stardust spacecraft forebody heat shield. These additional arcjet tests and analyses confirmed the ablative and thermal performance of the low-density Phenolic Impregnated Carbon Ablator (PICA) material used for the Stardust design. Testing was done under conditions that simulate the peak convective heating conditions (1200 W/cm2 and 0.5 atm) expected during Earth entry of the Stardust Sample Return Capsule. Test data and predictions from an ablative material response computer code for the in-depth temperatures were compared to guide iterative adjustment of the material thermophysical properties used in the code so that the measured and predicted temperatures agreed. The PICA recession rates and maximum internal temperatures were satisfactorily predicted by the computer code with the revised properties. Predicted recession rates were also in acceptable agreement with measured rates for heating conditions 37% greater than the nominal peak heating rate of 1200 W/cm2. The measured in-depth temperature response data show consistent temperature rise deviations that may be caused by an undocumented endothermic process within the PICA material that is not accurately modeled by the computer code. Predictions of the Stardust heat shield performance based on the present evaluation provide evidence that the maximum adhesive bondline temperature will be much lower than both the maximum allowable of 250 C and an earlier design prediction. The re-evaluation also suggests that even with a 25 percent increase in peak heating rates, the total recession of the heat shield would be a small fraction of the as-designed thickness. These results give confidence in the Stardust heat shield design and confirm the potential of PICA material for use in new planetary probe and sample return applications.

  16. Running coupling from gluon and ghost propagators in the Landau gauge: Yang-Mills theories with adjoint fermions

    NASA Astrophysics Data System (ADS)

    Bergner, Georg; Piemonte, Stefano

    2018-04-01

    Non-Abelian gauge theories with fermions transforming in the adjoint representation of the gauge group (AdjQCD) are a fundamental ingredient of many models that describe the physics beyond the Standard Model. Two relevant examples are N =1 supersymmetric Yang-Mills (SYM) theory and minimal walking technicolor, which are gauge theories coupled to one adjoint Majorana and two adjoint Dirac fermions, respectively. While confinement is a property of N =1 SYM, minimal walking technicolor is expected to be infrared conformal. We study the propagators of ghost and gluon fields in the Landau gauge to compute the running coupling in the MiniMom scheme. We analyze several different ensembles of lattice Monte Carlo simulations for the SU(2) adjoint QCD with Nf=1 /2 ,1 ,3 /2 , and 2 Dirac fermions. We show how the running of the coupling changes as the number of interacting fermions is increased towards the conformal window.

  17. The continuous adjoint approach to the k-ε turbulence model for shape optimization and optimal active control of turbulent flows

    NASA Astrophysics Data System (ADS)

    Papoutsis-Kiachagias, E. M.; Zymaris, A. S.; Kavvadias, I. S.; Papadimitriou, D. I.; Giannakoglou, K. C.

    2015-03-01

    The continuous adjoint to the incompressible Reynolds-averaged Navier-Stokes equations coupled with the low Reynolds number Launder-Sharma k-ε turbulence model is presented. Both shape and active flow control optimization problems in fluid mechanics are considered, aiming at minimum viscous losses. In contrast to the frequently used assumption of frozen turbulence, the adjoint to the turbulence model equations together with appropriate boundary conditions are derived, discretized and solved. This is the first time that the adjoint equations to the Launder-Sharma k-ε model have been derived. Compared to the formulation that neglects turbulence variations, the impact of additional terms and equations is evaluated. Sensitivities computed using direct differentiation and/or finite differences are used for comparative purposes. To demonstrate the need for formulating and solving the adjoint to the turbulence model equations, instead of merely relying upon the 'frozen turbulence assumption', the gain in the optimization turnaround time offered by the proposed method is quantified.

  18. Technical Note: Adjoint formulation of the TOMCAT atmospheric transport scheme in the Eulerian backtracking framework (RETRO-TOM)

    NASA Astrophysics Data System (ADS)

    Haines, P. E.; Esler, J. G.; Carver, G. D.

    2014-06-01

    A new methodology for the formulation of an adjoint to the transport component of the chemistry transport model TOMCAT is described and implemented in a new model, RETRO-TOM. The Eulerian backtracking method is used, allowing the forward advection scheme (Prather's second-order moments) to be efficiently exploited in the backward adjoint calculations. Prather's scheme is shown to be time symmetric, suggesting the possibility of high accuracy. To attain this accuracy, however, it is necessary to make a careful treatment of the "density inconsistency" problem inherent to offline transport models. The results are verified using a series of test experiments. These demonstrate the high accuracy of RETRO-TOM when compared with direct forward sensitivity calculations, at least for problems in which flux limiters in the advection scheme are not required. RETRO-TOM therefore combines the flexibility and stability of a "finite difference of adjoint" formulation with the accuracy of an "adjoint of finite difference" formulation.

  19. Technical Note: Adjoint formulation of the TOMCAT atmospheric transport scheme in the Eulerian backtracking framework (RETRO-TOM)

    NASA Astrophysics Data System (ADS)

    Haines, P. E.; Esler, J. G.; Carver, G. D.

    2014-01-01

    A new methodology for the formulation of an adjoint to the transport component of the chemistry transport model TOMCAT is described and implemented in a new model RETRO-TOM. The Eulerian backtracking method is used, allowing the forward advection scheme (Prather's second-order moments), to be efficiently exploited in the backward adjoint calculations. Prather's scheme is shown to be time-symmetric suggesting the possibility of high accuracy. To attain this accuracy, however, it is necessary to make a careful treatment of the "density inconsistency" problem inherent to offline transport models. The results are verified using a series of test experiments. These demonstrate the high accuracy of RETRO-TOM when compared with direct forward sensitivity calculations, at least for problems in which flux-limiters in the advection scheme are not required. RETRO-TOM therefore combines the flexibility and stability of a "finite difference of adjoint" formulation with the accuracy of an "adjoint of finite difference" formulation.

  20. On the theory of self-adjoint extensions of symmetric operators and its applications to quantum physics

    NASA Astrophysics Data System (ADS)

    Ibort, A.; Pérez-Pardo, J. M.

    2015-04-01

    This is a series of five lectures around the common subject of the construction of self-adjoint extensions of symmetric operators and its applications to Quantum Physics. We will try to offer a brief account of some recent ideas in the theory of self-adjoint extensions of symmetric operators on Hilbert spaces and their applications to a few specific problems in Quantum Mechanics.

  1. Optimization of wind plant layouts using an adjoint approach

    DOE PAGES

    King, Ryan N.; Dykes, Katherine; Graf, Peter; ...

    2017-03-10

    Using adjoint optimization and three-dimensional steady-state Reynolds-averaged Navier–Stokes (RANS) simulations, we present a new gradient-based approach for optimally siting wind turbines within utility-scale wind plants. By solving the adjoint equations of the flow model, the gradients needed for optimization are found at a cost that is independent of the number of control variables, thereby permitting optimization of large wind plants with many turbine locations. Moreover, compared to the common approach of superimposing prescribed wake deficits onto linearized flow models, the computational efficiency of the adjoint approach allows the use of higher-fidelity RANS flow models which can capture nonlinear turbulent flow physics within a wind plant. The steady-state RANS flow model is implemented in the Python finite-element package FEniCS and the derivation and solution of the discrete adjoint equations are automated within the dolfin-adjoint framework. Gradient-based optimization of wind turbine locations is demonstrated for idealized test cases that reveal new optimization heuristics such as rotational symmetry, local speedups, and nonlinear wake curvature effects. Layout optimization is also demonstrated on more complex wind rose shapes, including a full annual energy production (AEP) layout optimization over 36 inflow directions and 5 wind speed bins.
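
    For readers unfamiliar with the dolfin-adjoint workflow referenced in this record, the sketch below shows the general pattern (solve a PDE, assemble a functional, wrap it in a ReducedFunctional, and minimize) on a toy Poisson source-inversion problem; it is not the wind-plant RANS model and assumes a FEniCS installation with fenics_adjoint (pyadjoint) available.

```python
# Minimal illustration of the dolfin-adjoint pattern: solve a PDE, define a
# scalar functional, and let pyadjoint derive the discrete adjoint for
# gradient-based optimization. Toy Poisson source inversion, not a RANS model.
from fenics import *
from fenics_adjoint import *

mesh = UnitSquareMesh(16, 16)
V = FunctionSpace(mesh, "CG", 1)

f = Function(V)                       # control: distributed source term
u = Function(V)
v = TestFunction(V)
bc = DirichletBC(V, Constant(0.0), "on_boundary")
F = inner(grad(u), grad(v)) * dx - f * v * dx
solve(F == 0, u, bc)                  # forward solve, recorded on the pyadjoint tape

u_target = Expression("sin(pi*x[0])*sin(pi*x[1])", degree=2)
J = assemble(0.5 * (u - u_target) ** 2 * dx + 1e-6 * f ** 2 * dx)

rf = ReducedFunctional(J, Control(f))               # gradients via the adjoint
f_opt = minimize(rf, method="L-BFGS-B", options={"maxiter": 20})
```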

  2. Optimization of wind plant layouts using an adjoint approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Ryan N.; Dykes, Katherine; Graf, Peter

    Using adjoint optimization and three-dimensional steady-state Reynolds-averaged Navier–Stokes (RANS) simulations, we present a new gradient-based approach for optimally siting wind turbines within utility-scale wind plants. By solving the adjoint equations of the flow model, the gradients needed for optimization are found at a cost that is independent of the number of control variables, thereby permitting optimization of large wind plants with many turbine locations. Moreover, compared to the common approach of superimposing prescribed wake deficits onto linearized flow models, the computational efficiency of the adjoint approach allows the use of higher-fidelity RANS flow models which can capture nonlinear turbulent flow physics within a wind plant. The steady-state RANS flow model is implemented in the Python finite-element package FEniCS and the derivation and solution of the discrete adjoint equations are automated within the dolfin-adjoint framework. Gradient-based optimization of wind turbine locations is demonstrated for idealized test cases that reveal new optimization heuristics such as rotational symmetry, local speedups, and nonlinear wake curvature effects. Layout optimization is also demonstrated on more complex wind rose shapes, including a full annual energy production (AEP) layout optimization over 36 inflow directions and 5 wind speed bins.

  3. IET. Coupling station (TAN620), plans and sections. Concrete shielding walls ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    IET. Coupling station (TAN-620), plans and sections. Concrete shielding walls and boron surface treatment. Elevation shows two floor levels, position of periscopes, and stairways. Ralph M. Parsons 902-4-ANP-602-A 325. Date: February 1954. Approved by INEEL Classification Office for public release. INEEL index code no. 035-0620-00-693-106910 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  4. LPT. Shield test facility (TAN645 and 646). Sections show relationships ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LPT. Shield test facility (TAN-645 and -646). Sections show relationships among control rooms, coupling station, counting rooms, pools, equipment rooms, data room and other areas. Ralph M. Parsons 1229-17 ANP/GE-6-645-A-4. April 1957. Approved by INEEL Classification Office for public release. INEEL index code no. 037-0645/0646-00-693-107350 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  5. 49 CFR 178.276 - Requirements for the design, construction, inspection and testing of portable tanks intended for...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....): (A) Without insulation or sun shield: 60 °C (140 °F); (B) With sun shield: 55 °C (131 °F); and (C) With insulation: 50 °C (122 °F). (3) Filling density means the average mass of liquefied compressed gas... stamped in accordance with the ASME Code, Section VIII. (2) Portable tanks must be postweld heat-treated...

  6. 49 CFR 178.276 - Requirements for the design, construction, inspection and testing of portable tanks intended for...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....): (A) Without insulation or sun shield: 60 °C (140 °F); (B) With sun shield: 55 °C (131 °F); and (C) With insulation: 50 °C (122 °F). (3) Filling density means the average mass of liquefied compressed gas... stamped in accordance with the ASME Code, Section VIII. (2) Portable tanks must be postweld heat-treated...

  7. 49 CFR 178.276 - Requirements for the design, construction, inspection and testing of portable tanks intended for...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....): (A) Without insulation or sun shield: 60 °C (140 °F); (B) With sun shield: 55 °C (131 °F); and (C) With insulation: 50 °C (122 °F). (3) Filling density means the average mass of liquefied compressed gas... stamped in accordance with the ASME Code, Section VIII. (2) Portable tanks must be postweld heat-treated...

  8. 49 CFR 178.276 - Requirements for the design, construction, inspection and testing of portable tanks intended for...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....): (A) Without insulation or sun shield: 60 °C (140 °F); (B) With sun shield: 55 °C (131 °F); and (C) With insulation: 50 °C (122 °F). (3) Filling density means the average mass of liquefied compressed gas... stamped in accordance with the ASME Code, Section VIII. (2) Portable tanks must be postweld heat-treated...

  9. 49 CFR 178.276 - Requirements for the design, construction, inspection and testing of portable tanks intended for...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ....): (A) Without insulation or sun shield: 60 °C (140 °F); (B) With sun shield: 55 °C (131 °F); and (C) With insulation: 50 °C (122 °F). (3) Filling density means the average mass of liquefied compressed gas... stamped in accordance with the ASME Code, Section VIII. (2) Portable tanks must be postweld heat-treated...

  10. Estimates of galactic cosmic ray shielding requirements during solar minimum

    NASA Technical Reports Server (NTRS)

    Townsend, Lawrence W.; Nealy, John E.; Wilson, John W.; Simonsen, Lisa C.

    1990-01-01

    Estimates of radiation risk from galactic cosmic rays are presented for manned interplanetary missions. The calculations use the Naval Research Laboratory cosmic ray spectrum model as input into the Langley Research Center galactic cosmic ray transport code. This transport code, which transports both heavy ions and nucleons, can be used with any number of layers of target material, consisting of up to five different arbitrary constituents per layer. Calculated galactic cosmic ray fluxes, doses and dose equivalents behind various thicknesses of aluminum, water and liquid hydrogen shielding are presented for the solar minimum period. Estimates of risk to the skin and the blood-forming organs (BFO) are made using 0-cm and 5-cm depth dose/dose equivalent values, respectively, for water. These results indicate that at least 3.5 g/sq cm (3.5 cm) of water, or 6.5 g/sq cm (2.4 cm) of aluminum, or 1.0 g/sq cm (14 cm) of liquid hydrogen shielding is required to reduce the annual exposure below the currently recommended BFO limit of 0.5 Sv. Because of large uncertainties in fragmentation parameters and the input cosmic ray spectrum, these exposure estimates may be uncertain by as much as a factor of 2 or more. The effects of these potential exposure uncertainties on shield thickness requirements are analyzed.
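
    The physical thicknesses quoted in parentheses above follow directly from dividing areal density by nominal mass density, as the short check below shows (densities are rounded reference values).

```python
# Areal density [g/cm^2] divided by mass density [g/cm^3] gives thickness [cm].
materials = {
    "water":           (3.5, 1.00),   # (areal density g/cm^2, density g/cm^3)
    "aluminum":        (6.5, 2.70),
    "liquid hydrogen": (1.0, 0.07),
}
for name, (areal, rho) in materials.items():
    print(f"{name:16s}: {areal / rho:5.1f} cm")
```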

  11. BUMPERII - DESIGN ANALYSIS CODE FOR OPTIMIZING SPACECRAFT SHIELDING AND WALL CONFIGURATION FOR ORBITAL DEBRIS AND METEOROID IMPACTS

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1994-01-01

    BUMPERII is a modular program package employing a numerical solution technique to calculate a spacecraft's probability of no penetration (PNP) from man-made orbital debris or meteoroid impacts. The solution equation used to calculate the PNP is based on the Poisson distribution model for similar analysis of smaller craft, but reflects the more rigorous mathematical modeling of spacecraft geometry, orientation, and impact characteristics necessary for treatment of larger structures such as space station components. The technique considers the spacecraft surface in terms of a series of flat plate elements. It divides the threat environment into a number of finite cases, then evaluates each element of each threat. The code allows for impact shielding (shadowing) of one element by another in various configurations over the spacecraft exterior, and also allows for the effects of changing spacecraft flight orientation and attitude. Four main modules comprise the overall BUMPERII package: GEOMETRY, RESPONSE, SHIELD, and CONTOUR. The GEOMETRY module accepts user-generated finite element model (FEM) representations of the spacecraft geometry and creates geometry databases for both meteoroid and debris analysis. The GEOMETRY module expects input to be in either SUPERTAB Universal File Format or PATRAN Neutral File Format. The RESPONSE module creates wall penetration response databases, one for meteoroid analysis and one for debris analysis, for up to 100 unique wall configurations. This module also creates a file containing critical diameter as a function of impact velocity and impact angle for each wall configuration. The SHIELD module calculates the PNP for the modeled structure given exposure time, operating altitude, element ID ranges, and the data from the RESPONSE and GEOMETRY databases. The results appear in a summary file. SHIELD will also determine the effective area of the components and the overall model, and it can produce a data file containing the probability of penetration values per surface area for each element in the model. The SHIELD module writes this data file in either SUPERTAB Universal File Format or PATRAN Neutral File Format so threat contour plots can be generated as a post-processing feature of the FEM programs SUPERTAB and PATRAN. The CONTOUR module combines the functions of the RESPONSE module and most of the SHIELD module functions allowing determination of ranges of PNP's by looping over ranges of shield and/or wall thicknesses. A data file containing the PNP's for the corresponding shield and vessel wall thickness is produced. Users may perform sensitivity studies of two kinds. The effects of simple variations in orbital time, surface area, and flux may be analyzed by making changes to the terms in the equation representing the average number of penetrating particles per unit time in the PNP solution equation. The package analyzes other changes, including model environment, surface area, and configuration, by re-running the solution sequence with new GEOMETRY and RESPONSE data. BUMPERII can be run with no interactive output to the screen during execution. This can be particularly useful during batch runs. BUMPERII is written in FORTRAN 77 for DEC VAX series computers running under VMS, and was written for use with the finite-element model code SUPERTAB or PATRAN as both a pre-processor and a post-processor. Use of an alternate FEM code will require either development of a translator to change data format or modification of the GEOMETRY subroutine in BUMPERII. 
This program is available in DEC VAX BACKUP format on a 9-track 1600 BPI magnetic tape (standard distribution media) or on TK50 tape cartridge. The original BUMPER code was developed in 1988 with the BUMPERII revisions following in 1991 and 1992. SUPERTAB is a former name for I-DEAS. I-DEAS Finite Element Modeling is a trademark of Structural Dynamics Research Corporation. DEC, VAX, VMS and TK50 are trademarks of Digital Equipment Corporation.
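
    A minimal sketch of the Poisson probability-of-no-penetration model referenced in this record: if the expected number of penetrating impacts accumulated over all elements and the exposure time is N, then PNP = exp(-N). The fluxes, areas and exposure below are illustrative placeholders, not BUMPERII data.

```python
# Poisson "probability of no penetration": N_i = flux_i * area_i * time for
# each surface element, and PNP = exp(-sum_i N_i). Values are illustrative.
import math

exposure_years = 10.0
seconds = exposure_years * 365.25 * 24 * 3600.0

# (penetrating flux [1/m^2/s], exposed area [m^2]) per element - illustrative only
elements = [(2.0e-12, 40.0), (5.0e-13, 120.0), (1.0e-12, 75.0)]

expected_hits = sum(flux * area * seconds for flux, area in elements)
pnp = math.exp(-expected_hits)
print(f"expected penetrations: {expected_hits:.3f}, PNP: {pnp:.4f}")
```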

  12. Validation of a multi-layer Green's function code for ion beam transport

    NASA Astrophysics Data System (ADS)

    Walker, Steven; Tweed, John; Tripathi, Ram; Badavi, Francis F.; Miller, Jack; Zeitlin, Cary; Heilbronn, Lawrence

    To meet the challenge of future deep space programs, an accurate and efficient engineering code for analyzing the shielding requirements against high-energy galactic heavy radiations is needed. In consequence, a new version of the HZETRN code capable of simulating high charge and energy (HZE) ions with either laboratory or space boundary conditions is currently under development. The new code, GRNTRN, is based on a Green's function approach to the solution of Boltzmann's transport equation and like its predecessor is deterministic in nature. The computational model consists of the lowest order asymptotic approximation followed by a Neumann series expansion with non-perturbative corrections. The physical description includes energy loss with straggling, nuclear attenuation, nuclear fragmentation with energy dispersion and down shift. Code validation in the laboratory environment is addressed by showing that GRNTRN accurately predicts energy loss spectra as measured by solid-state detectors in ion beam experiments with multi-layer targets. In order to validate the code with space boundary conditions, measured particle fluences are propagated through several thicknesses of shielding using both GRNTRN and the current version of HZETRN. The excellent agreement obtained indicates that GRNTRN accurately models the propagation of HZE ions in the space environment as well as in laboratory settings and also provides verification of the HZETRN propagator.

  13. ITS version 5.0 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user-friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  14. Evaluation Of Shielding Efficacy Of A Ferrite Containing Ceramic Material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verst, C.

    2015-10-12

    The shielding evaluation of the ferrite-based Mitsuishi ceramic material produced comparative dose attenuation measurements and simulated projections for several radiation sources and possible shielding sizes. High resolution gamma spectroscopy provided uncollided and scattered photon spectra at three energies, confirming theoretical estimates of the ceramic’s mass attenuation coefficient, μ/ρ. High level irradiation experiments were performed using Co-60, Cs-137, and Cf-252 sources to measure penetrating dose rates through steel, lead, concrete, and the provided ceramic slabs. The results were used to validate the radiation transport code MCNP6, which was then used to generate dose rate attenuation curves as a function of shielding material, thickness, and mass for photons and neutrons ranging in energy from 200 keV to 2 MeV.

  15. Computational techniques in gamma-ray skyshine analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, D.L.

    1988-12-01

    Two computer codes were developed to analyze gamma-ray skyshine, the scattering of gamma photons by air molecules. A review of previous gamma-ray skyshine studies discusses several Monte Carlo codes, programs using a single-scatter model, and the MicroSkyshine program for microcomputers. A benchmark gamma-ray skyshine experiment performed at Kansas State University is also described. A single-scatter numerical model was presented which traces photons from the source to their first scatter, then applies a buildup factor along a direct path from the scattering point to a detector. The FORTRAN code SKY, developed with this model before the present study, was modified to use Gauss quadrature, recent photon attenuation data and a more accurate buildup approximation. The resulting code, SILOGP, computes response from a point photon source on the axis of a silo, with and without concrete shielding over the opening. Another program, WALLGP, was developed using the same model to compute response from a point gamma source behind a perfectly absorbing wall, with and without shielding overhead. 29 refs., 48 figs., 13 tabs.
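
    The single-scatter-plus-buildup idea described above can be sketched with a one-dimensional Gauss-Legendre quadrature over first-scatter points, as below. The attenuation coefficient is an approximate value for ~0.662 MeV photons in air, while the isotropic scattering kernel and linear buildup factor are crude placeholders rather than the data used in SILOGP or WALLGP.

```python
# Heavily simplified single-scatter skyshine estimate: attenuate photons up a
# vertical source beam, scatter once (isotropically here), then attenuate and
# apply a buildup factor along the direct leg to a ground-level detector.
import numpy as np

mu_air = 9.5e-5                        # 1/cm, approx. total attenuation for ~0.662 MeV in air
sigma_s = 0.7 * mu_air                 # placeholder scattering share of the attenuation
detector = np.array([10000.0, 0.0])    # detector 100 m away at ground level, (x, z) in cm

def buildup(mu_r):
    return 1.0 + mu_r                  # crude linear buildup placeholder

nodes, weights = np.polynomial.legendre.leggauss(32)
z_lo, z_hi = 0.0, 20000.0              # integrate scatter height over 0-200 m
z = 0.5 * (z_hi - z_lo) * nodes + 0.5 * (z_hi + z_lo)
jac = 0.5 * (z_hi - z_lo)

r2 = np.hypot(detector[0], detector[1] - z)                 # scatter point -> detector
term = (np.exp(-mu_air * z) * sigma_s                       # reach and scatter at height z
        * np.exp(-mu_air * r2) * buildup(mu_air * r2)       # attenuated, built-up direct leg
        / (4.0 * np.pi * r2 ** 2))
flux_per_source_photon = np.sum(weights * term) * jac
print(flux_per_source_photon)
```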

  16. Implementing Shared Memory Parallelism in MCBEND

    NASA Astrophysics Data System (ADS)

    Bird, Adam; Long, David; Dobson, Geoff

    2017-09-01

    MCBEND is a general purpose radiation transport Monte Carlo code from AMEC Foster Wheeler's ANSWERS® Software Service. MCBEND is well established in the UK shielding community for radiation shielding and dosimetry assessments. The existing MCBEND parallel capability effectively involves running the same calculation on many processors. This works very well except when the memory requirements of a model restrict the number of instances of a calculation that will fit on a machine. To utilise parallel hardware more effectively, OpenMP has been used to implement shared memory parallelism in MCBEND. This paper describes the reasoning behind the choice of OpenMP, notes some of the challenges of multi-threading an established code such as MCBEND and assesses the performance of the parallel method implemented in MCBEND.

  17. Effect of particle size and percentages of Boron carbide on the thermal neutron radiation shielding properties of HDPE/B4C composite: Experimental and simulation studies

    NASA Astrophysics Data System (ADS)

    Soltani, Zahra; Beigzadeh, Amirmohammad; Ziaie, Farhood; Asadi, Eskandar

    2016-10-01

    In this paper the effects of particle size and weight percentage of the reinforcement phase on the thermal neutron absorption ability of HDPE/B4C composites were investigated by means of the Monte Carlo simulation method, using the MCNP code, and experimental studies. The composite samples were prepared using HDPE filled with different weight percentages of boron carbide powder in the form of micro- and nano-particles. The micro- and nanocomposites were prepared under similar mixing and moulding processes. The samples were subjected to thermal neutron radiation. The neutron shielding efficiency, in terms of the neutron transmission fractions of the composite samples, was investigated and compared with the simulation results. According to the simulation results, the particle size of the radiation shielding material plays an important role in the shielding efficiency. By decreasing the particle size of the shielding material at each weight percentage of the reinforcement phase, better radiation shielding properties were obtained. It seems that decreasing the particle size and homogeneously distributing the nano-sized B4C particles increase the collision probability between the incident thermal neutrons and the shielding material, which consequently improves the radiation shielding properties. These results suggest the feasibility of nanocomposites as shielding materials offering high shielding performance, low weight and reduced thickness, along with economic benefits.

  18. Shielding evaluation for solar particle events using MCNPX, PHITS and OLTARIS codes

    NASA Astrophysics Data System (ADS)

    Aghara, S. K.; Sriprisan, S. I.; Singleterry, R. C.; Sato, T.

    2015-01-01

    Detailed analyses of Solar Particle Events (SPE) were performed to calculate primary and secondary particle spectra behind aluminum, at various thicknesses in water. The simulations were based on Monte Carlo (MC) radiation transport codes, MCNPX 2.7.0 and PHITS 2.64, and the space radiation analysis website called OLTARIS (On-Line Tool for the Assessment of Radiation in Space) version 3.4 (which uses the deterministic code HZETRN for transport). The study investigates the transport of SPE spectra through a 10 or 20 g/cm2 Al shield followed by a 30 g/cm2 water slab. Four historical SPE events were selected and used as input source spectra; particle differential spectra for protons, neutrons, and photons are presented. The total particle fluence as a function of depth is presented. In addition to particle flux, the dose and dose equivalent values are calculated and compared between the codes and with other published results. Overall, the particle fluence spectra from all three codes show good agreement, with the MC codes agreeing more closely with each other than with the OLTARIS results. The neutron particle fluence from OLTARIS is lower than the results from the MC codes at lower energies (E < 100 MeV). Based on a mean-square-difference analysis, the results from MCNPX and PHITS agree with each other better for fluence, dose and dose equivalent than either agrees with the OLTARIS results.

  19. The DANTE Boltzmann transport solver: An unstructured mesh, 3-D, spherical harmonics algorithm compatible with parallel computer architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGhee, J.M.; Roberts, R.M.; Morel, J.E.

    1997-06-01

    A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic, transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion based preconditioner for scattering dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated.
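
    The core preconditioned conjugate gradient iteration referred to above is a standard algorithm and can be sketched generically. In the sketch below the Jacobi (diagonal) preconditioner is only a stand-in for DANTE's diffusion-based preconditioner, and the test matrix, right-hand side and function names are illustrative assumptions.

    import numpy as np

    def pcg(A, b, M_inv, x0=None, tol=1e-8, max_iter=200):
        """Preconditioned conjugate gradient for a symmetric positive-definite
        system A x = b, with M_inv an approximate inverse of A used as the
        preconditioner."""
        x = np.zeros(b.size) if x0 is None else x0.copy()
        r = b - A @ x
        z = M_inv @ r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = M_inv @ r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    rng = np.random.default_rng(0)
    Q = rng.standard_normal((50, 50))
    A = Q @ Q.T + 50 * np.eye(50)          # SPD test matrix
    b = rng.standard_normal(50)
    M_inv = np.diag(1.0 / np.diag(A))      # Jacobi preconditioner stand-in
    x = pcg(A, b, M_inv)
    print(np.linalg.norm(A @ x - b))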

  20. Error-Trellis Construction for Convolutional Codes Using Shifted Error/Syndrome-Subsequences

    NASA Astrophysics Data System (ADS)

    Tajima, Masato; Okino, Koji; Miyagoshi, Takashi

    In this paper, we extend the conventional error-trellis construction for convolutional codes to the case where a given check matrix H(D) has a factor D^l in some column (row). In the first case, there is a possibility that the size of the state space can be reduced using shifted error-subsequences, whereas in the second case, the size of the state space can be reduced using shifted syndrome-subsequences. The construction presented in this paper is based on the adjoint-obvious realization of the corresponding syndrome former H^T(D). In the case where all the columns and rows of H(D) are delay free, the proposed construction reduces to the conventional one of Schalkwijk et al. We also show that the proposed construction can equally realize the state-space reduction shown by Ariel et al. Moreover, we clarify the difference between their construction and ours using examples.

  1. Analytical theory of coherent synchrotron radiation wakefield of short bunches shielded by conducting parallel plates

    NASA Astrophysics Data System (ADS)

    Stupakov, Gennady; Zhou, Demin

    2016-04-01

    We develop a general model of coherent synchrotron radiation (CSR) impedance with shielding provided by two parallel conducting plates. This model allows us to easily reproduce all previously known analytical CSR wakes and to expand the analysis to situations not explored before. It reduces calculations of the impedance to taking integrals along the trajectory of the beam. New analytical results are derived for the radiation impedance with shielding for the following orbits: a kink, a bending magnet, a wiggler of finite length, and an infinitely long wiggler. All our formulas are benchmarked against numerical simulations with the CSRZ computer code.

  2. Analytical theory of coherent synchrotron radiation wakefield of short bunches shielded by conducting parallel plates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stupakov, Gennady; Zhou, Demin

    2016-04-21

    We develop a general model of coherent synchrotron radiation (CSR) impedance with shielding provided by two parallel conducting plates. This model allows us to easily reproduce all previously known analytical CSR wakes and to expand the analysis to situations not explored before. It reduces calculations of the impedance to taking integrals along the trajectory of the beam. New analytical results are derived for the radiation impedance with shielding for the following orbits: a kink, a bending magnet, a wiggler of finite length, and an infinitely long wiggler. All our formulas are benchmarked against numerical simulations with the CSRZ computer code.

  3. Performance Study of Monte Carlo Codes on Xeon Phi Coprocessors — Testing MCNP 6.1 and Profiling ARCHER Geometry Module on the FS7ONNi Problem

    NASA Astrophysics Data System (ADS)

    Liu, Tianyu; Wolfe, Noah; Lin, Hui; Zieb, Kris; Ji, Wei; Caracappa, Peter; Carothers, Christopher; Xu, X. George

    2017-09-01

    This paper contains two parts revolving around Monte Carlo transport simulation on Intel Many Integrated Core coprocessors (MIC, also known as Xeon Phi). (1) MCNP 6.1 was recompiled into multithreading (OpenMP) and multiprocessing (MPI) forms respectively without modification to the source code. The new codes were tested on a 60-core 5110P MIC. The test case was FS7ONNi, a radiation shielding problem used in MCNP's verification and validation suite. It was observed that both codes became slower on the MIC than on a 6-core X5650 CPU, by a factor of 4 for the MPI code and, abnormally, 20 for the OpenMP code, and both exhibited limited capability of strong scaling. (2) We have recently added a Constructive Solid Geometry (CSG) module to our ARCHER code to provide better support for geometry modelling in radiation shielding simulation. The functions of this module are frequently called in the particle random walk process. To identify the performance bottleneck we developed a CSG proxy application and profiled the code using the geometry data from FS7ONNi. The profiling data showed that the code was primarily memory latency bound on the MIC. This study suggests that despite the low initial porting effort, Monte Carlo codes do not naturally lend themselves to the MIC platform, just as with GPUs, and that the memory latency problem needs to be addressed in order to achieve a decent performance gain.
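
    For readers unfamiliar with the strong-scaling terminology used above, the speedup and parallel efficiency quoted in such studies are simple ratios of wall-clock times. The snippet below uses placeholder timings and a placeholder function name, not the measured MIC or CPU numbers from the paper.

    def strong_scaling(timings):
        """Speedup and parallel efficiency relative to the smallest core count
        in `timings`, a dict mapping core count -> wall-clock seconds."""
        n0 = min(timings)
        t0 = timings[n0]
        return {n: {"speedup": t0 / t, "efficiency": (t0 / t) / (n / n0)}
                for n, t in sorted(timings.items())}

    # Placeholder timings only, for illustration of the two ratios
    print(strong_scaling({1: 1000.0, 15: 90.0, 30: 55.0, 60: 42.0}))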

  4. Vapor shielding models and the energy absorbed by divertor targets during transient events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skovorodin, D. I., E-mail: dskovorodin@gmail.com; Arakcheev, A. S.; Pshenov, A. A.

    2016-02-15

    The erosion of divertor targets caused by high heat fluxes during transients is a serious threat to ITER operation, as it is going to be the main factor determining the divertor lifetime. Under the influence of extreme heat fluxes, the surface temperature of plasma facing components can reach a certain threshold, leading to an onset of intense material evaporation. The latter results in the formation of a cold dense vapor and secondary plasma cloud. This layer effectively absorbs the energy of the incident plasma flow, turning it into its own kinetic and internal energy and radiating it. This so-called vapor shielding is a phenomenon that may help mitigate the erosion during transient events. In particular, the vapor shielding results in saturation of the energy (per unit surface area) accumulated by the target during a single pulse of heat load at some level E_max. Matching this value is one of the possible tests to verify complicated numerical codes developed to calculate the erosion rate during abnormal events in tokamaks. The paper presents three very different models of vapor shielding, demonstrating that E_max depends strongly on the heat pulse duration, thermodynamic properties, and evaporation energy of the irradiated target material, while its dependence on other shielding details, such as the radiation capabilities of the material and the dynamics of the vapor cloud, is only logarithmically weak. The reason for this is a strong (exponential) dependence of the target material evaporation rate, and therefore of the “strength” of the vapor shield, on the target surface temperature. As a result, the influence of the details of the vapor shielding phenomenon, such as radiation transport in the vapor cloud and evaporated material dynamics, on E_max is virtually completely masked by the strong dependence of the evaporation rate on the target surface temperature. However, the very same details define the amount of evaporated particles needed to provide effective shielding to the target, and therefore strongly influence the resulting erosion rate. Thus, E_max cannot be used for validation of shielding models and codes aimed at target material erosion calculations.

  5. Inverse Regional Modeling with Adjoint-Free Technique

    NASA Astrophysics Data System (ADS)

    Yaremchuk, M.; Martin, P.; Panteleev, G.; Beattie, C.

    2016-02-01

    The ongoing parallelization trend in computer technologies facilitates the use of ensemble methods in geophysical data assimilation. Of particular interest are ensemble techniques which do not require the development of tangent linear numerical models and their adjoints for optimization. These ``adjoint-free'' methods minimize the cost function within a sequence of subspaces spanned by carefully chosen sets of perturbations of the control variables. In this presentation, an adjoint-free variational technique (a4dVar) is demonstrated in an application estimating initial conditions of two numerical models: the Navy Coastal Ocean Model (NCOM) and the surface wave model (WAM). With the NCOM, the performance of both adjoint and adjoint-free 4dVar data assimilation techniques is compared in application to the hydrographic surveys and velocity observations collected in the Adriatic Sea in 2006. Numerical experiments have shown that a4dVar is capable of providing forecast skill similar to that of conventional 4dVar at comparable computational expense while being less susceptible to excitation of ageostrophic modes that are not supported by observations. The adjoint-free technique constrained by the WAM model is tested in a series of data assimilation experiments with synthetic observations in the southern Chukchi Sea. The types of observations considered are directional spectra estimated from point measurements by stationary buoys, significant wave height (SWH) observations by coastal high-frequency radars and along-track SWH observations by satellite altimeters. The a4dVar forecast skill is shown to be 30-40% better than the skill of the sequential assimilation method based on optimal interpolation which is currently used in operations. Prospects of further development of the a4dVar methods in regional applications are discussed.
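
    A minimal sketch of the subspace idea behind such adjoint-free methods, under stated assumptions: the cost function is restricted to the span of a set of ensemble perturbations and the subspace gradient is built purely from forward evaluations, with no tangent linear or adjoint code. The finite-difference construction, the backtracking line search and all names below are generic illustrations, not the NCOM/WAM implementation.

    import numpy as np

    def adjoint_free_step(J, x0, P, eps=1e-4):
        """One descent step of cost J restricted to x = x0 + P @ a, where the
        columns of P are ensemble perturbations of the control variables.
        Directional derivatives along the perturbations replace an adjoint."""
        J0 = J(x0)
        g = np.array([(J(x0 + eps * P[:, i]) - J0) / eps for i in range(P.shape[1])])
        d = -P @ g                        # descent direction in control space
        step = 1.0
        for _ in range(30):               # simple backtracking line search
            trial = x0 + step * d
            if J(trial) < J0:
                return trial, J(trial)
            step *= 0.5
        return x0, J0                     # no improvement found

    rng = np.random.default_rng(1)
    target = rng.standard_normal(100)
    J = lambda x: 0.5 * np.sum((x - target) ** 2)   # toy quadratic cost
    P = rng.standard_normal((100, 10))              # ensemble perturbations
    x = np.zeros(100)
    for _ in range(50):
        x, val = adjoint_free_step(J, x, P)
    print(val)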

  6. Transient sensitivities of sea ice export through the Canadian Arctic Archipelago inferred from a coupled ocean/sea-ice adjoint model

    NASA Astrophysics Data System (ADS)

    Heimbach, P.; Losch, M.; Menemenlis, D.; Campin, J.; Hill, C.

    2008-12-01

    The sensitivity of sea-ice export through the Canadian Arctic Archipelago (CAA), measured in terms of its solid freshwater export through Lancaster Sound, to changes in various elements of the ocean and sea-ice state, and to elements of the atmospheric forcing fields through time and space is assessed by means of a coupled ocean/sea-ice adjoint model. The adjoint model furnishes full spatial sensitivity maps (also known as Lagrange multipliers) of the export metric to a variety of model variables at any chosen point in time, providing the unique capability to quantify major drivers of sea-ice export variability. The underlying model is the MIT ocean general circulation model (MITgcm), which is coupled to a Hibler-type dynamic/thermodynamic sea-ice model. The configuration is based on the Arctic face of the ECCO3 high-resolution cubed-sphere model, but coarsened to 36-km horizontal grid spacing. The adjoint of the coupled system has been derived by means of automatic differentiation using the software tool TAF. Finite perturbation simulations are performed to check the information provided by the adjoint. The sea-ice model's performance in the presence of narrow straits is assessed with different sea-ice lateral boundary conditions. The adjoint sensitivity clearly exposes the role of the model trajectory and the transient nature of the problem. The complex interplay between forcing, dynamics, and boundary condition is demonstrated in the comparison between the different calculations. The study is a step towards fully coupled adjoint-based ocean/sea-ice state estimation at basin to global scales as part of the ECCO efforts.

  7. Global Seismic Imaging Based on Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Bozdag, E.; Lefebvre, M.; Lei, W.; Peter, D. B.; Smith, J. A.; Zhu, H.; Komatitsch, D.; Tromp, J.

    2013-12-01

    Our aim is to perform adjoint tomography at the global scale to image the entire planet. We have started elastic inversions with a global data set of 253 CMT earthquakes with moment magnitudes in the range 5.8 ≤ Mw ≤ 7 and used GSN stations as well as some local networks such as USArray, European stations, etc. Using an iterative pre-conditioned conjugate gradient scheme, our initial aim is to obtain a global crustal and mantle model with transverse isotropy confined to the upper mantle. Global adjoint tomography has so far remained a challenge mainly due to computational limitations. Recent improvements in our 3D solvers (e.g., a GPU version) and access to high-performance computational centers (e.g., ORNL's Cray XK7 "Titan" system) now enable us to perform iterations with higher-resolution (T > 9 s) and longer-duration (200 min) simulations to accommodate high-frequency body waves and major-arc surface waves, respectively, which help improve data coverage. The remaining challenge is the heavy I/O traffic caused by the numerous files generated during the forward/adjoint simulations and the pre- and post-processing stages of our workflow. We improve the global adjoint tomography workflow by adopting the ADIOS file format for our seismic data as well as models, kernels, etc., to improve efficiency on high-performance clusters. Our ultimate aim is to use data from all available networks and earthquakes within the magnitude range of our interest (5.5 ≤ Mw ≤ 7), which requires a solid framework to manage big data in our global adjoint tomography workflow. We discuss the current status and future of global adjoint tomography based on our initial results as well as practical issues such as handling big data in inversions and on high-performance computing systems.

  8. Journalists’ Privilege to Withhold Information in Judicial and Other Proceedings: State Shield Statutes

    DTIC Science & Technology

    2005-03-08

    Congressional Research Service, The Library of Congress. CRS Report for Congress, Order Code RL32806. Summary: Absent a statutory or constitutional recognition of journalistic privilege, a reporter may be compelled to testify in legal

  9. The Local Tissue Environment During the September 29, 1989 Solar Particle Event

    NASA Technical Reports Server (NTRS)

    Kim, M.-H. Y.; Wilson, J. W.; Cucinotta, F. A.; Simonsen, L. C.; Atwell, W.; Badavi, F. F.; Miller, J.

    2004-01-01

    The solar particle event (SPE) of September 29, 1989, produced an iron-rich spectrum with energies approaching 1 GeV/amu and an energy power index of 2.5. The high charge and energy (HZE) ions of such iron-rich SPEs challenge conventional methods of SPE shield design and assessment of astronaut risks. Shield and risk assessments are evaluated using the HZETRN code with the computerized anatomical man (CAM) model for the astronaut's body tissues. Since the HZE spectra decline rapidly with energy and HZE attenuation in materials is limited by their penetration power, details of the mass distributions about the sensitive tissues (shielding materials and the astronaut's body) are important determining factors of the exposure levels. A typical space suit and lightly shielded structures allow significant contributions from HZE components to some critical body tissues and have important implications for the models used in risk assessment. Only a heavily shielded equipment room of a space vehicle or habitat provides sufficient shielding against the early response at sensitive organs from this event. The February 23, 1956 event, which had similar spectral characteristics and roughly ten times the intensity of this event, could have important medical consequences without a well-shielded region.

  10. Cloud immersion building shielding factors for US residential structures.

    PubMed

    Dickson, E D; Hamby, D M

    2014-12-01

    This paper presents validated building shielding factors designed for contemporary US housing-stock under an idealized, yet realistic, exposure scenario within a semi-infinite cloud of radioactive material. The building shielding factors are intended for use in emergency planning and level three probabilistic risk assessments for a variety of postulated radiological events in which a realistic assessment is necessary to better understand the potential risks for accident mitigation and emergency response planning. Factors are calculated from detailed computational housing-unit models using the general-purpose Monte Carlo N-Particle computational code, MCNP5, and are benchmarked against a series of narrow- and broad-beam measurements analyzing the shielding effectiveness of ten common general-purpose construction materials and ten shielding models representing the primary weather barriers (walls and roofs) of likely US housing-stock. Each model was designed to scale based on common residential construction practices and includes, to the extent practical, all structurally significant components important for shielding against ionizing radiation. Calculations were performed for floor-specific locations as well as for computing a weighted-average representative building shielding factor for single- and multi-story detached homes, both with and without basement, as well as for single-wide manufactured housing-units.

  11. Radiation Exposure Analyses Supporting the Development of Solar Particle Event Shielding Technologies

    NASA Technical Reports Server (NTRS)

    Walker, Steven A.; Clowdsley, Martha S.; Abston, H. Lee; Simon, Hatthew A.; Gallegos, Adam M.

    2013-01-01

    NASA has plans for long duration missions beyond low Earth orbit (LEO). Outside of LEO, large solar particle events (SPEs), which occur sporadically, can deliver a very large dose in a short amount of time. The relatively low proton energies make SPE shielding practical, and the possibility of the occurrence of a large event drives the need for SPE shielding for all deep space missions. The Advanced Exploration Systems (AES) RadWorks Storm Shelter Team was charged with developing minimal mass SPE storm shelter concepts for missions beyond LEO. The concepts developed included "wearable" shields, shelters that could be deployed at the onset of an event, and augmentations to the crew quarters. The radiation transport codes, human body models, and vehicle geometry tools contained in the On-Line Tool for the Assessment of Radiation In Space (OLTARIS) were used to evaluate the protection provided by each concept within a realistic space habitat and provide the concept designers with shield thickness requirements. Several different SPE models were utilized to examine the dependence of the shield requirements on the event spectrum. This paper describes the radiation analysis methods and the results of these analyses for several of the shielding concepts.

  12. An adjoint view on flux consistency and strong wall boundary conditions to the Navier–Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stück, Arthur, E-mail: arthur.stueck@dlr.de

    2015-11-15

    Inconsistent discrete expressions in the boundary treatment of Navier–Stokes solvers and in the definition of force objective functionals can lead to discrete-adjoint boundary treatments that are not a valid representation of the boundary conditions to the corresponding adjoint partial differential equations. The underlying problem is studied for an elementary 1D advection–diffusion problem first using a node-centred finite-volume discretisation. The defect of the boundary operators in the inconsistently defined discrete-adjoint problem leads to oscillations and becomes evident with the additional insight of the continuous-adjoint approach. A homogenisation of the discretisations for the primal boundary treatment and the force objective functional yields second-order functional accuracy and eliminates the defect in the discrete-adjoint boundary treatment. Subsequently, the issue is studied for aerodynamic Reynolds-averaged Navier–Stokes problems in conjunction with a standard finite-volume discretisation on median-dual grids and a strong implementation of no-slip walls, found in many unstructured general-purpose flow solvers. Starting from a baseline discretisation of force objective functionals that is independent of the boundary treatment in the flow solver, two improved flux-consistent schemes are presented; based on either body-wall-defined or farfield-defined control volumes, they resolve the dual inconsistency. The behaviour of the schemes is investigated on a sequence of grids in 2D and 3D.

  13. A Full-Core Resonance Self-Shielding Method Using a Continuous-Energy Quasi–One-Dimensional Slowing-Down Solution that Accounts for Temperature-Dependent Fuel Subregions and Resonance Interference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yuxuan; Martin, William; Williams, Mark

    In this paper, a correction-based resonance self-shielding method is developed that allows annular subdivision of the fuel rod. The method performs the conventional iteration of the embedded self-shielding method (ESSM) without subdivision of the fuel to capture the interpin shielding effect. The resultant self-shielded cross sections are modified by correction factors incorporating the intrapin effects of radial variation of the shielded cross section, radial temperature distribution, and resonance interference. A quasi–one-dimensional slowing-down equation is developed to calculate such correction factors. The method is implemented in the DeCART code and compared with the conventional ESSM and subgroup method with benchmark MCNP results. The new method yields substantially improved results for both spatially dependent reaction rates and eigenvalues for typical pressurized water reactor pin cell cases with uniform and nonuniform fuel temperature profiles. Finally, the new method is also proved effective in treating assembly heterogeneity and complex material composition such as mixed oxide fuel, where resonance interference is much more intense.

  14. Spectral-element simulations of wave propagation in complex exploration-industry models: Imaging and adjoint tomography

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Nissen-Meyer, T.; Morency, C.; Tromp, J.

    2008-12-01

    Seismic imaging in the exploration industry is often based upon ray-theoretical migration techniques (e.g., Kirchhoff) or other ideas which neglect some fraction of the seismic wavefield (e.g., wavefield continuation for acoustic-wave first arrivals) in the inversion process. In a companion paper we discuss the possibility of solving the full physical forward problem (i.e., including visco- and poroelastic, anisotropic media) using the spectral-element method. With such a tool at hand, we can readily apply the adjoint method to tomographic inversions, i.e., iteratively improving an initial 3D background model to fit the data. In the context of this inversion process, we draw connections between kernels in adjoint tomography and basic imaging principles in migration. We show that the images obtained by migration are nothing but particular kinds of adjoint kernels (mainly density kernels). Migration is basically a first step in the iterative inversion process of adjoint tomography. We apply the approach to basic 2D problems involving layered structures, overthrusting faults, topography, salt domes, and poroelastic regions.

  15. The coupling of the neutron transport application RATTLESNAKE to the nuclear fuels performance application BISON under the MOOSE framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gleicher, Frederick N.; Williamson, Richard L.; Ortensi, Javier

    The MOOSE neutron transport application RATTLESNAKE was coupled to the fuels performance application BISON to provide a higher fidelity tool for fuel performance simulation. This project is motivated by the desire to couple a high fidelity core analysis program (based on the self-adjoint angular flux equations) to a high fidelity fuel performance program, both of which can simulate on unstructured meshes. RATTLESNAKE solves the self-adjoint angular flux transport equation and provides a sub-pin level resolution of the multigroup neutron flux with resonance treatment during burnup or a fast transient. BISON solves the coupled thermomechanical equations for the fuel on a sub-millimeter scale. Both applications are able to solve their respective systems on aligned and unaligned unstructured finite element meshes. The power density and local burnup were transferred from RATTLESNAKE to BISON with the MOOSE Multiapp transfer system. Multiple depletion cases were run with one-way data transfer from RATTLESNAKE to BISON. The eigenvalues are shown to agree well with values obtained from the lattice physics code DRAGON. The one-way data transfer of power density is shown to agree with the power density obtained from an internal Lassman-style model in BISON.

  16. Suborbital spaceplane optimization using non-stationary Gaussian processes

    NASA Astrophysics Data System (ADS)

    Dufour, Robin; de Muelenaere, Julien; Elham, Ali

    2014-10-01

    This paper presents the multidisciplinary design optimization of a sub-orbital spaceplane. The optimization includes three disciplines: the aerodynamics, the structure and the trajectory. An adjoint Euler code is used to calculate the aerodynamic lift and drag of the vehicle as well as their derivatives with respect to the design variables. A new surrogate model has been developed based on a non-stationary Gaussian process. That model was used to estimate the aerodynamic characteristics of the vehicle during the trajectory optimization. The trajectory of the vehicle has been optimized together with its geometry in order to maximize the amount of payload that can be carried by the spaceplane.

  17. Thermal hydraulic simulations, error estimation and parameter sensitivity studies in Drekar::CFD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Thomas Michael; Shadid, John N.; Pawlowski, Roger P.

    2014-01-01

    This report describes work directed towards completion of the Thermal Hydraulics Methods (THM) CFD Level 3 Milestone THM.CFD.P7.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL) Nuclear Hub effort. The focus of this milestone was to demonstrate the thermal hydraulics and adjoint-based error estimation and parameter sensitivity capabilities in the CFD code called Drekar::CFD. This milestone builds upon the capabilities demonstrated in three earlier milestones: THM.CFD.P4.02 [12], completed March 31, 2012; THM.CFD.P5.01 [15], completed June 30, 2012; and THM.CFD.P5.01 [11], completed October 31, 2012.

  18. Shape design sensitivity analysis and optimal design of structural systems

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.

    1987-01-01

    The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best utilize the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable using the method. The result of the design sensitivity analysis is used to carry out design optimization of a built-up structure.

  19. BUGJEFF311.BOLIB (JEFF-3.1.1) and BUGENDF70.BOLIB (ENDF/B-VII.0) - Generation Methodology and Preliminary Testing of two ENEA-Bologna Group Cross Section Libraries for LWR Shielding and Pressure Vessel Dosimetry

    NASA Astrophysics Data System (ADS)

    Pescarini, Massimo; Sinitsa, Valentin; Orsi, Roberto; Frisoni, Manuela

    2016-02-01

    Two broad-group coupled neutron/photon working cross section libraries in FIDO-ANISN format, dedicated to LWR shielding and pressure vessel dosimetry applications, were generated following the methodology recommended by the US ANSI/ANS-6.1.2-1999 (R2009) standard. These libraries, named BUGJEFF311.BOLIB and BUGENDF70.BOLIB, are respectively based on JEFF-3.1.1 and ENDF/B-VII.0 nuclear data and adopt the same broad-group energy structure (47 n + 20 γ) as the similar ORNL BUGLE-96 library. They were respectively obtained from the ENEA-Bologna VITJEFF311.BOLIB and VITENDF70.BOLIB libraries in AMPX format for nuclear fission applications through problem-dependent cross section collapsing with the ENEA-Bologna 2007 revision of the ORNL SCAMPI nuclear data processing system. Both parent libraries are based on the Bondarenko self-shielding factor method and have the same AMPX format and fine-group energy structure (199 n + 42 γ) as the similar ORNL VITAMIN-B6 library from which BUGLE-96 was obtained at ORNL. A synthesis of a preliminary validation of the cited BUGLE-type libraries, performed through 3D fixed source transport calculations with the ORNL TORT-3.2 SN code, is included. The calculations were dedicated to the PCA-Replica 12/13 and VENUS-3 engineering neutron shielding benchmark experiments, specifically conceived to test the accuracy of nuclear data and transport codes in LWR shielding and radiation damage analyses.

  20. Monte Carlo Shielding Comparative Analysis Applied to TRIGA HEU and LEU Spent Fuel Transport

    NASA Astrophysics Data System (ADS)

    Margeanu, C. A.; Margeanu, S.; Barbos, D.; Iorgulis, C.

    2010-12-01

    The paper is a comparative study of the effects of LEU and HEU fuel utilization on the shielding analysis during spent fuel transport. A comparison against the measured data for HEU spent fuel, available from the last stage of spent fuel repatriation completed in the summer of 2008, is also presented. All geometrical and material data for the shipping cask were taken from the approved NAC-LWT cask model. The shielding analysis estimates radiation doses at the shipping cask wall surface, and in air at 1 m and 2 m, respectively, from the cask, by means of the 3D Monte Carlo MORSE-SGC code. Before loading into the shipping cask, TRIGA spent fuel source terms and spent fuel parameters were obtained by means of the ORIGEN-S code. Both codes are included in ORNL's SCALE 5 program package. The actinide contribution to the total fuel radioactivity is very low in the HEU spent fuel case, becoming 10 times greater in the LEU spent fuel case. Dose rates for both HEU and LEU fuel contents are below regulatory limits, with the LEU spent fuel photon dose rates being greater than the HEU ones. Comparison between HEU spent fuel theoretical and measured dose rates at selected measuring points shows good agreement, with calculated values greater than the measured ones both at the cask wall surface (about 34% relative difference) and in air at 1 m from the cask surface (about 15% relative difference).

  1. Implementation of radiation shielding calculation methods. Volume 1: Synopsis of methods and summary of results

    NASA Technical Reports Server (NTRS)

    Capo, M. A.; Disney, R. K.

    1971-01-01

    The work performed in the following areas is summarized: (1) A realistic nuclear-propelled vehicle was analyzed using the Marshall Space Flight Center computer code package. This code package includes one- and two-dimensional discrete ordinates transport, point kernel, and single scatter techniques, as well as cross section preparation and data processing codes. (2) Techniques were developed to improve the automated data transfer in the coupled computation method of the computer code package and to improve the utilization of this code package on the Univac-1108 computer system. (3) The MSFC master data libraries were updated.

  2. Ford Motor Company NDE facility shielding design.

    PubMed

    Metzger, Robert L; Van Riper, Kenneth A; Jones, Martin H

    2005-01-01

    Ford Motor Company proposed the construction of a large non-destructive evaluation laboratory for radiography of automotive power train components. The authors were commissioned to design the shielding and to survey the completed facility for compliance with radiation dose limits for occupationally and non-occupationally exposed personnel. The two X-ray sources are Varian Linatron 3000 accelerators operating at 9-11 MV. One performs computed tomography of automotive transmissions, while the other does real-time radiography of operating engines and transmissions. The shield thicknesses for the primary barrier and all secondary barriers were determined by point-kernel techniques. Point-kernel techniques did not work well for skyshine calculations and locations where multiple sources (e.g. tube head leakage and various scatter fields) impacted doses. Shielding for these areas was determined using transport calculations. A number of MCNP [Briesmeister, J. F. MCNP: A general Monte Carlo N-particle transport code, version 4B. Los Alamos National Laboratory Manual (1997)] calculations focused on skyshine estimates and the office areas. Measurements on the operational facility confirmed the shielding calculations.
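
    The point-kernel technique mentioned above reduces to an attenuated inverse-square term scaled by a buildup factor. The sketch below is a hedged illustration: the attenuation coefficient, Berger buildup coefficients and flux-to-dose conversion factor are placeholders, not the 9-11 MV concrete data used in the actual facility design, and the function name is an assumption.

    import numpy as np

    def point_kernel_dose_rate(S, dose_factor, mu, r, a=1.0, b=0.04):
        """Point-kernel photon dose-rate estimate at distance r (cm) through an
        attenuating medium: uncollided term exp(-mu*r)/(4*pi*r^2) scaled by a
        Berger-form buildup factor.  S is the source emission rate (photons/s)
        and dose_factor an assumed flux-to-dose conversion factor."""
        mfp = mu * r                                   # number of mean free paths
        buildup = 1.0 + a * mfp * np.exp(b * mfp)      # Berger buildup form
        flux = S * buildup * np.exp(-mfp) / (4.0 * np.pi * r**2)
        return flux * dose_factor

    # e.g. 1e12 photons/s through ~2 m of material with an assumed mu of 0.05 1/cm
    print(point_kernel_dose_rate(S=1e12, dose_factor=1.0e-9, mu=0.05, r=200.0))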

  3. Shielding design of an underground experimental area at point 5 of the CERN Super Proton Synchrotron (SPS).

    PubMed

    Mueller, Mario J; Stevenson, Graham R

    2005-01-01

    Increasing projected values of the circulating beam intensity in the Super Proton Synchrotron (SPS) and decreasing limits to radiation exposure, taken with the increasing non-acceptance of unjustified and unoptimised radiation exposures, have led to the need to re-assess the shielding between the ECX and ECA5 underground experimental areas of the SPS. Twenty years ago, these experimental areas at SPS-Point 5 housed the UA1 experiment, where Carlo Rubbia and his team verified the existence of W and Z bosons. The study reported here describes such a re-assessment based on simulations using the multi-purpose FLUKA radiation transport code. This study concludes that while the main shield which is made of concrete blocks and is 4.8 m thick satisfactorily meets the current design limits even at the highest intensities presently planned for the SPS, dose rates calculated for liaison areas on both sides of the main shield significantly exceed the design limits. Possible ways of improving the shielding situation are discussed.

  4. Numerical simulation of inductive method for determining spatial distribution of critical current density

    NASA Astrophysics Data System (ADS)

    Kamitani, A.; Takayama, T.; Tanaka, A.; Ikuno, S.

    2010-11-01

    The inductive method for measuring the critical current density jC in a high-temperature superconducting (HTS) thin film has been investigated numerically. In order to simulate the method, a non-axisymmetric numerical code has been developed for analyzing the time evolution of the shielding current density. In the code, the governing equation of the shielding current density is spatially discretized with the finite element method and the resulting first-order ordinary differential system is solved by using the 5th-order Runge-Kutta method with an adaptive step-size control algorithm. By using the code, the threshold current IT is evaluated for various positions of a coil. The results of computations show that, near a film edge, the accuracy of the estimating formula for jC is remarkably degraded. Moreover, even the proportional relationship between jC and IT will be lost there. Hence, the critical current density near a film edge cannot be estimated by using the inductive method.
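
    The time-integration strategy described above (finite-element spatial discretisation followed by a fifth-order adaptive Runge-Kutta solve of the resulting first-order ODE system) can be imitated with standard tooling. The sketch below uses SciPy's RK45 solver, an adaptive embedded Dormand-Prince 5(4) pair, on a stand-in linear system; the matrix, drive term and problem size are placeholders, not the actual shielding-current discretisation.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Stand-in linear system dy/dt = A y + f(t) for a spatially discretised
    # shielding-current equation; A and the drive below are placeholders.
    n = 50
    rng = np.random.default_rng(0)
    A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))

    def rhs(t, y):
        drive = np.sin(2.0 * np.pi * t) * np.ones(n)   # coil-like periodic drive
        return A @ y + drive

    sol = solve_ivp(rhs, t_span=(0.0, 5.0), y0=np.zeros(n),
                    method="RK45", rtol=1e-6, atol=1e-9)  # adaptive embedded RK
    print(sol.status, sol.t.size, "accepted steps")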

  5. SHIELD-HIT12A - a Monte Carlo particle transport program for ion therapy research

    NASA Astrophysics Data System (ADS)

    Bassler, N.; Hansen, D. C.; Lühr, A.; Thomsen, B.; Petersen, J. B.; Sobolevsky, N.

    2014-03-01

    Purpose: The Monte Carlo (MC) code SHIELD-HIT simulates the transport of ions through matter. Since SHIELD-HIT08 we have added numerous features that improve speed, usability and the underlying physics, and thereby the user experience. The "-A" fork of SHIELD-HIT also aims to attach SHIELD-HIT to a heavy ion dose optimization algorithm to provide MC-optimized treatment plans that include radiobiology. Methods: SHIELD-HIT12A is written in FORTRAN and carefully retains platform independence. A powerful scoring engine is implemented, scoring relevant quantities such as dose and track-averaged LET. It supports native formats compatible with the heavy ion treatment planning system TRiP. Stopping power files follow the ICRU standard and are generated using the libdEdx library, which allows the user to choose from a multitude of stopping power tables. Results: SHIELD-HIT12A runs on Linux and Windows platforms. Our experience is that new users quickly learn to use SHIELD-HIT12A and set up new geometries. Contrary to previous versions of SHIELD-HIT, the 12A distribution comes with easy-to-use example files and an English manual. A new implementation of Vavilov straggling resulted in a massive reduction of computation time. Scheduled for later release are CT import and photon-electron transport. Conclusions: SHIELD-HIT12A is an interesting alternative ion transport engine. Apart from being a flexible particle therapy research tool, it can also serve as a back end for an MC ion treatment planning system. More information about SHIELD-HIT12A and a demo version can be found on http://www.shieldhit.org.

  6. LOFT. Containment and service building (TAN650). Section through north/south axis. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LOFT. Containment and service building (TAN-650). Section through north/south axis. Shows basement and four additional levels of pre-amp tower, shielded roadway, chambers below reactor floor, railroad door, sumps, shielding. Section C shows basement sumps and chambers below reactor floor. Kaiser engineers 6413-11-STEP/LOFT-650-A-5. Date: October 1964. INEEL index code no. 036-650-00-486-122217 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  7. LOFT. Containment and service building (TAN650). South elevation, details, section. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LOFT. Containment and service building (TAN-650). South elevation, details, section. Shows part of duct enclosure, railroad door opening, roof ventilators, shielded personnel entrance, and change room. Section F shows view from west looking toward shielding around airlock door on main floor. Kaiser engineers 6413-11-STEP/LOFT-650-A-9. Date: October 1964. INEEL index code no. 036-650-00-486-122221 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  8. Simulation of the hohlraum for a laser facility of Megajoule scale

    NASA Astrophysics Data System (ADS)

    Chizhkov, M. N.; Kozmanov, M. Y. U.; Lebedev, S. N.; Lykov, V. A.; Rykovanova, V. V.; Seleznev, V. N.; Selezneva, K. I.; Stryakhnina, O. V.; Shestakov, A. A.; Vronskiy, A. V.

    2010-08-01

    2D calculations of promising laser hohlraums were performed using the Sinara computer code. These hohlraums are intended for the achievement of indirectly-driven thermonuclear ignition at laser energies above 1 MJ. Two variants of the laser assembly, with a form close to that of a rugby ball, were calculated: with and without laser entrance hole shields. The time-dependent hohlraum radiation temperature and the x-ray flux asymmetry on the target were obtained.

  9. Effect of non-equilibrium flow chemistry and surface catalysis on surface heating to AFE

    NASA Technical Reports Server (NTRS)

    Stewart, David A.; Henline, William D.; Chen, Yih-Kanq

    1991-01-01

    The effect of nonequilibrium flow chemistry on the surface temperature distribution over the forebody heat shield on the Aeroassisted Flight Experiment (AFE) vehicle was investigated using a reacting boundary-layer code. Computations were performed by using boundary-layer-edge properties determined from global iterations between the boundary-layer code and flow field solutions from a viscous shock layer (VSL) and a full Navier-Stokes solution. Surface temperature distribution over the AFE heat shield was calculated for two flight conditions during a nominal AFE trajectory. This study indicates that the surface temperature distribution is sensitive to the nonequilibrium chemistry in the shock layer. Heating distributions over the AFE forebody calculated using nonequilibrium edge properties were similar to values calculated using the VSL program.

  10. Combined experimental and Monte Carlo verification of brachytherapy plans for vaginal applicators

    NASA Astrophysics Data System (ADS)

    Sloboda, Ron S.; Wang, Ruqing

    1998-12-01

    Dose rates in a phantom around a shielded and an unshielded vaginal applicator containing Selectron low-dose-rate sources were determined by experiment and Monte Carlo simulation. Measurements were performed with thermoluminescent dosimeters in a white polystyrene phantom using an experimental protocol geared for precision. Calculations for the same set-up were done using a version of the EGS4 Monte Carlo code system modified for brachytherapy applications into which a new combinatorial geometry package developed by Bielajew was recently incorporated. Measured dose rates agree with Monte Carlo estimates to within 5% (1 SD) for the unshielded applicator, while highlighting some experimental uncertainties for the shielded applicator. Monte Carlo calculations were also done to determine a value for the effective transmission of the shield required for clinical treatment planning, and to estimate the dose rate in water at points in axial and sagittal planes transecting the shielded applicator. Comparison with dose rates generated by the planning system indicates that agreement is better than 5% (1 SD) at most positions. The precision thermoluminescent dosimetry protocol and modified Monte Carlo code are effective complementary tools for brachytherapy applicator dosimetry.

  11. A Generic 1D Forward Modeling and Inversion Algorithm for TEM Sounding with an Arbitrary Horizontal Loop

    NASA Astrophysics Data System (ADS)

    Li, Zhanhui; Huang, Qinghua; Xie, Xingbing; Tang, Xingong; Chang, Liao

    2016-08-01

    We present a generic 1D forward modeling and inversion algorithm for transient electromagnetic (TEM) data with an arbitrary horizontal transmitting loop and receivers at any depth in a layered earth. Both the Hankel and sine transforms required in the forward algorithm are calculated using the filter method. The adjoint-equation method is used to derive the formulation of the data sensitivity at any depth in non-permeable media. The inversion algorithm based on this forward modeling algorithm and sensitivity formulation is developed using the Gauss-Newton iteration method combined with Tikhonov regularization. We propose a new data-weighting method to minimize the initial-model dependence, which enhances the convergence stability. On a laptop with an i7-5700HQ CPU at 3.5 GHz, one inversion iteration for a 200-layer input model with a single receiver takes only 0.34 s, while it increases to only 0.53 s for data from four receivers at the same depth. For the case of four receivers at different depths, the inversion iteration runtime increases to 1.3 s. Modeling the data with an irregular loop and an equal-area square loop indicates that the effect of the loop geometry is significant at early times and vanishes gradually as the TEM field diffuses. For a stratified earth, inversion of data from more than one receiver is useful for reducing noise and obtaining a more credible layered-earth model. However, for a resistive layer shielded below a conductive layer, increasing the number of receivers on the ground does not significantly improve the recovery of the resistive layer. Even with a down-hole TEM sounding, the shielded resistive layer cannot be recovered if all receivers are above it. However, our modeling demonstrates a remarkable improvement in detecting the resistive layer with receivers in or under this layer.
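
    The inversion loop described above (Gauss-Newton iterations with Tikhonov regularization and data weighting) has a compact generic form. In the sketch below the forward model, Jacobian, weighting matrix and all names are stand-ins; the toy linear problem at the end is only there to make the fragment self-contained and runnable, and the paper's specific data-weighting scheme is not reproduced.

    import numpy as np

    def gauss_newton_tikhonov(forward, jacobian, d_obs, m0, lam=1.0, n_iter=10, W=None):
        """Generic Gauss-Newton iteration with Tikhonov regularisation for an
        inversion m -> d = forward(m); `jacobian` stands in for the sensitivity
        matrix (e.g. derived by an adjoint-equation method), W for data weights."""
        m = m0.copy()
        W = np.eye(d_obs.size) if W is None else W
        for _ in range(n_iter):
            r = d_obs - forward(m)
            J = jacobian(m)
            A = J.T @ W @ J + lam * np.eye(m.size)
            m = m + np.linalg.solve(A, J.T @ W @ r)
        return m

    # Toy linear "forward model" so the sketch is self-contained and runnable
    rng = np.random.default_rng(2)
    G = rng.standard_normal((40, 8))
    m_true = rng.standard_normal(8)
    d = G @ m_true + 0.01 * rng.standard_normal(40)
    m_est = gauss_newton_tikhonov(lambda m: G @ m, lambda m: G, d,
                                  m0=np.zeros(8), lam=1e-2)
    print(np.linalg.norm(m_est - m_true))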

  12. Analytical theory of coherent synchrotron radiation wakefield of short bunches shielded by conducting parallel plates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stupakov, Gennady; Zhou, Demin

    2016-04-21

    We develop a general model of coherent synchrotron radiation (CSR) impedance with shielding provided by two parallel conducting plates. This model allows us to easily reproduce all previously known analytical CSR wakes and to expand the analysis to situations not explored before. It reduces calculations of the impedance to taking integrals along the trajectory of the beam. New analytical results are derived for the radiation impedance with shielding for the following orbits: a kink, a bending magnet, a wiggler of finite length, and an infinitely long wiggler. Furthermore, all our formulas are benchmarked against numerical simulations with the CSRZ computer code.

  13. Sonic Boom Mitigation Through Aircraft Design and Adjoint Methodology

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Siriam K.; Diskin, Boris; Nielsen, Eric J.

    2012-01-01

    This paper presents a novel approach to design of the supersonic aircraft outer mold line (OML) by optimizing the A-weighted loudness of sonic boom signature predicted on the ground. The optimization process uses the sensitivity information obtained by coupling the discrete adjoint formulations for the augmented Burgers Equation and Computational Fluid Dynamics (CFD) equations. This coupled formulation links the loudness of the ground boom signature to the aircraft geometry thus allowing efficient shape optimization for the purpose of minimizing the impact of loudness. The accuracy of the adjoint-based sensitivities is verified against sensitivities obtained using an independent complex-variable approach. The adjoint based optimization methodology is applied to a configuration previously optimized using alternative state of the art optimization methods and produces additional loudness reduction. The results of the optimizations are reported and discussed.

  14. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2006-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.
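
    A toy linear problem illustrates why the adjoint route removes the per-variable linearisations: one transposed solve yields the adjoint variable, after which the sensitivity with respect to every design variable is a single matrix-vector product. Everything below (the matrices K and B, the objective and the sizes) is a stand-in, not the flow solver, mesh-movement scheme or objective of the paper.

    import numpy as np

    rng = np.random.default_rng(3)
    n, n_design = 30, 12
    K = rng.standard_normal((n, n)) + 5.0 * np.eye(n)   # stand-in "flow" operator
    B = rng.standard_normal((n, n_design))              # design-to-residual coupling
    c = rng.standard_normal(n)                          # objective weights

    def solve_state(d):
        return np.linalg.solve(K, B @ d)                # state from K u = B d

    def objective(d):
        return c @ solve_state(d)                       # J = c^T u

    # Adjoint approach: one linear solve with K^T, then one matrix-vector
    # product per design variable (collected here into a single product).
    lam = np.linalg.solve(K.T, c)
    grad_adjoint = B.T @ lam

    # Finite-difference check of the adjoint gradient
    eps = 1e-6
    e = np.eye(n_design)
    grad_fd = np.array([(objective(eps * e[i]) - objective(np.zeros(n_design))) / eps
                        for i in range(n_design)])
    print(np.max(np.abs(grad_adjoint - grad_fd)))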

  15. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2005-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.

  16. Shielding evaluation for solar particle events using MCNPX, PHITS and OLTARIS codes.

    PubMed

    Aghara, S K; Sriprisan, S I; Singleterry, R C; Sato, T

    2015-01-01

    Detailed analyses of Solar Particle Events (SPE) were performed to calculate primary and secondary particle spectra behind aluminum, at various thicknesses in water. The simulations were based on Monte Carlo (MC) radiation transport codes, MCNPX 2.7.0 and PHITS 2.64, and the space radiation analysis website called OLTARIS (On-Line Tool for the Assessment of Radiation in Space) version 3.4 (uses deterministic code, HZETRN, for transport). The study investigates the transport of SPE spectra through a 10 or 20 g/cm2 Al shield followed by a 30 g/cm2 water slab. Four historical SPE events were selected and used as input source spectra; particle differential spectra for protons, neutrons, and photons are presented. The total particle fluence as a function of depth is presented. In addition to particle flux, the dose and dose equivalent values are calculated and compared between the codes and with other published results. Overall, the particle fluence spectra from all three codes show good agreement, with the MC codes agreeing more closely with each other than with the OLTARIS results. The neutron particle fluence from OLTARIS is lower than the results from the MC codes at lower energies (E < 100 MeV). Based on a mean-square-difference analysis, the results from MCNPX and PHITS agree with each other better for fluence, dose and dose equivalent than either agrees with the OLTARIS results. Copyright © 2015 The Committee on Space Research (COSPAR). All rights reserved.

  17. Projectile Shape Effects Analysis for Space Debris Impact

    NASA Astrophysics Data System (ADS)

    Shiraki, Kuniaki; Yamamoto, Tetsuya; Kamiya, Takeshi

    2002-01-01

    The Japanese Experiment Module (JEM) has a manned pressurized module used as an on-orbit research laboratory and is planned to be attached to the International Space Station (ISS). Protection from micrometeoroids and orbital debris (MM/OD) is very important for crew safety aboard the ISS. The module must be designed with shields attached to the outside of the pressurized wall so that the JEM is protected when debris of diameter less than 20 mm impacts the JEM wall. The ISS design requirement for the space debris protection system is specified as the Probability of No Penetration (PNP). The PNP allocation for the JEM is 0.9738 for ten years, reallocated as 0.9814 for the Pressurized Module (PM) and 0.9922 for the Experiment Logistics Module-Pressurized Section (ELM-PS). The PNP is calculated with the Bumper code provided by NASA using the following inputs: (1) the JEM structural model, (2) the Ballistic Limit Curve (BLC) of the shields and pressure wall, and (3) environmental conditions (analysis type, debris distribution, debris model, debris density, solar activity). One shield configuration is a single aluminum plate bumper (1.27 mm thickness). The other is a Stuffed Whipple shield whose second bumper is composed of an aluminum mesh, three layers of Nextel AF62 ceramic fabric, and four layers of Kevlar 710 fabric, with multilayer insulation (MLI) thermal isolation material at the bottom. The second bumper of the Stuffed Whipple shield is located midway between the first bumper and the 4.8 mm-thick pressurized wall. The shields have already been assessed with Two-Stage Light Gas Gun (TSLGG) tests and hydrocode simulation. The remaining subject is the verification of the JEM debris protection shields for velocities ranging from 7 to 15 km/s. We conducted Conical Shaped Charge (CSC) tests, which enable hypervelocity impact testing for debris velocities above 10 km/s, as well as hydrocode simulation. Because of the jet generation mechanism, it is necessary to analyze and compensate the results for a solid aluminum sphere, which is the design requirement.

  18. Design optimization using adjoint of Long-time LES for the trailing edge of a transonic turbine vane

    NASA Astrophysics Data System (ADS)

    Talnikar, Chaitanya; Wang, Qiqi

    2017-11-01

    Adjoint-based design optimization methods have been applied to low-fidelity simulation methods like Reynolds-Averaged Navier-Stokes (RANS) and are useful for designing fluid machinery components. But to reliably capture the complex flow phenomena involved in turbomachinery, high-fidelity simulations like large eddy simulation (LES) are required. Unfortunately, due to the chaotic dynamics of turbulence, the unsteady adjoint method for LES diverges and produces incorrect gradients. Using a viscosity-stabilized unsteady adjoint method developed for LES, the gradient can be obtained with reasonable accuracy. In this paper, the design of the trailing edge of a gas turbine inlet guide vane is performed with the objective of reducing the stagnation pressure loss and the heat transfer over the surface of the vane. Slight changes in the shape of the trailing edge can significantly impact these quantities by altering the boundary layer development process and separation points. The trailing edge is parameterized using a linear combination of 5 convex designs. Bayesian optimization is used as a global optimizer with the objective function evaluated from the LES and gradients obtained using the viscosity adjoint method. Results from the optimization, performed on the supercomputer Mira, are presented.
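
    The shape parameterisation quoted above, a linear combination of five convex designs, amounts to searching over a set of non-negative weights that sum to one. The fragment below shows only that blending step with placeholder geometries; the base shapes, sizes and function name are assumptions, and the Bayesian optimizer and LES-adjoint gradient evaluation are not reproduced here.

    import numpy as np

    def blend_designs(weights, base_designs):
        """Trailing-edge shape as a convex combination of base designs (rows of
        `base_designs`); weights are clipped and renormalised onto the simplex."""
        w = np.clip(weights, 0.0, None)
        w = w / w.sum()                    # enforce convex-combination weights
        return w @ base_designs, w

    rng = np.random.default_rng(4)
    base = rng.standard_normal((5, 40))    # 5 placeholder trailing-edge shapes
    shape, w = blend_designs(np.array([0.2, 0.1, 0.4, 0.1, 0.2]), base)
    print(w.sum(), shape.shape)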

  19. Contribution of High Charge and Energy (HZE) Ions During Solar-Particle Event of September 29, 1989

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Wilson, John W.; Cucinotta, Francis A.; Simonsen, Lisa C.; Atwell, William; Badavi, Francis F.; Miller, Jack

    1999-01-01

    The solar-particle event (SPE) of September 29, 1989, produced an iron-rich spectrum with energies approaching 1 A GeV and an approximate spectral slope parameter of 2.5. These high charge and energy (HZE) ions challenge conventional methods of shield design and assessment of astronaut risks. In the past, shield design and risk assessment have relied on proton shielding codes and biological response models derived from X-ray and neutron exposure data. Because the HZE spectra decline rapidly with energy and HZE attenuation in materials is limited by their penetration power, details of the mass distributions about the sensitive tissues (shielding materials and the astronaut's body) are important determining factors of the exposure levels and distributions of linear energy transfer. Local tissue environments during the SPE of September 29, 1989, with its Fe components, are examined to analyze the importance of these ions to human SPE exposure. A typical space suit and lightly shielded structures allow significant contributions from HZE components to certain critical body tissues and have important implications for the models used in risk assessment. A heavily shielded equipment room of a space vehicle or habitat requires knowledge of the breakup of these ions into lighter components, including neutrons, for shield design specifications.

  20. Contaminant deposition building shielding factors for US residential structures.

    PubMed

    Dickson, Elijah; Hamby, David; Eckerman, Keith

    2017-10-10

    This paper presents validated building shielding factors designed for contemporary US housing-stock under an idealized, yet realistic, exposure scenario from contaminant deposition on the roof and surrounding surfaces. The building shielding factors are intended for use in emergency planning and level three probabilistic risk assessments for a variety of postulated radiological events in which a realistic assessment is necessary to better understand the potential risks for accident mitigation and emergency response planning. Factors are calculated from detailed computational housing-unit models using the general-purpose Monte Carlo N-Particle computational code, MCNP5, and are benchmarked against a series of narrow- and broad-beam measurements analyzing the shielding effectiveness of ten common general-purpose construction materials and ten shielding models representing the primary weather barriers (walls and roofs) of likely US housing-stock. Each model was designed to scale based on common residential construction practices and includes, to the extent practical, all structurally significant components important for shielding against ionizing radiation. Calculations were performed for floor-specific locations from contaminant deposition on the roof and surrounding ground, as well as for computing a weighted-average representative building shielding factor, for single- and multi-story detached homes, both with and without basements, as well as for a single-wide manufactured housing unit. © 2017 IOP Publishing Ltd.

  1. Contaminant deposition building shielding factors for US residential structures.

    PubMed

    Dickson, E D; Hamby, D M; Eckerman, K F

    2015-06-01

    This paper presents validated building shielding factors designed for contemporary US housing-stock under an idealized, yet realistic, exposure scenario from contaminant deposition on the roof and surrounding surfaces. The building shielding factors are intended for use in emergency planning and level three probabilistic risk assessments for a variety of postulated radiological events in which a realistic assessment is necessary to better understand the potential risks for accident mitigation and emergency response planning. Factors are calculated from detailed computational housing-unit models using the general-purpose Monte Carlo N-Particle computational code, MCNP5, and are benchmarked against a series of narrow- and broad-beam measurements analyzing the shielding effectiveness of ten common general-purpose construction materials and ten shielding models representing the primary weather barriers (walls and roofs) of likely US housing-stock. Each model was designed to scale based on common residential construction practices and includes, to the extent practical, all structurally significant components important for shielding against ionizing radiation. Calculations were performed for floor-specific locations from contaminant deposition on the roof and surrounding ground, as well as for computing a weighted-average representative building shielding factor, for single- and multi-story detached homes, both with and without basements, as well as for a single-wide manufactured housing unit.

  2. Shielding and activation calculations around the reactor core for the MYRRHA ADS design

    NASA Astrophysics Data System (ADS)

    Ferrari, Anna; Mueller, Stefan; Konheiser, J.; Castelliti, D.; Sarotto, M.; Stankovskiy, A.

    2017-09-01

    In the frame of the FP7 European project MAXSIMA, an extensive simulation study has been performed to assess the main shielding problems in view of the construction of the MYRRHA accelerator-driven system at SCK·CEN in Mol (Belgium). An innovative method based on the combined use of the two state-of-the-art Monte Carlo codes MCNPX and FLUKA has been used, with the goal of characterizing complex, realistic neutron fields around the core barrel, to be used as source terms in detailed analyses of the radiation fields of the system in operation and of the coupled residual radiation. The main results of the shielding analysis are presented, as well as the construction of an activation database of all the key structural materials. The results demonstrate a powerful way to analyse shielding and activation problems, with direct and clear implications for the design solutions.

  3. Assessment of the MPACT Resonance Data Generation Procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Williams, Mark L.

    Currently, heterogeneous models are used to generate resonance self-shielded cross-section tables as a function of background cross section for important nuclides such as 235U and 238U, by performing the CENTRM (Continuous Energy Transport Model) slowing-down calculation with MOC (Method of Characteristics) spatial discretization, together with ESSM (Embedded Self-Shielding Method) calculations to obtain the background cross sections. The resonance self-shielded cross-section tables are then converted into subgroup data, which are used to estimate problem-dependent self-shielded cross sections in MPACT (Michigan Parallel Characteristics Transport Code). Although this procedure has been developed, and the resulting resonance data have been generated and validated by benchmark calculations, no assessment had been performed to confirm that the resonance data are properly generated by the procedure and properly utilized in MPACT. This study focuses on assessing the procedure and its proper use in MPACT.
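
    For context, a sketch of the kind of lookup this procedure ultimately supports: a self-shielded cross section tabulated against background cross section is interpolated (here in the logarithm of the background value) at the problem-dependent background cross section returned by an ESSM-style calculation. The table values and interpolation rule below are illustrative assumptions, not MPACT data.

      import numpy as np

      # Illustrative self-shielded cross-section table for one resonance group:
      # background cross section (barns) versus effective absorption cross section (barns).
      sigma_b_grid = np.array([1e1, 1e2, 1e3, 1e4, 1e5])       # background cross sections
      sigma_eff    = np.array([18.0, 35.0, 60.0, 82.0, 95.0])  # made-up effective XS values

      def self_shielded_xs(sigma_b):
          # Interpolate in log(background XS) to the problem-dependent value.
          return float(np.interp(np.log(sigma_b), np.log(sigma_b_grid), sigma_eff))

      # e.g., a background cross section of ~300 b from an ESSM-type calculation
      print(self_shielded_xs(300.0))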

  4. A space radiation shielding model of the Martian radiation environment experiment (MARIE)

    NASA Technical Reports Server (NTRS)

    Atwell, W.; Saganti, P.; Cucinotta, F. A.; Zeitlin, C. J.

    2004-01-01

    The 2001 Mars Odyssey spacecraft was launched towards Mars on April 7, 2001. Onboard the spacecraft is the Martian radiation environment experiment (MARIE), which is designed to measure the background radiation environment due to galactic cosmic rays (GCR) and solar protons in the 20-500 MeV/n energy range. We present an approach for developing a space radiation-shielding model of the spacecraft that includes the MARIE instrument in the current mapping phase orientation. A discussion is presented describing the development and methodology used to construct the shielding model. For a given GCR model environment, using the current MARIE shielding model and the high-energy particle transport codes, dose rate values are compared with MARIE measurements during the early mapping phase in Mars orbit. The results show good agreement between the model calculations and the MARIE measurements as presented for the March 2002 dataset. © 2003 COSPAR. Published by Elsevier Ltd. All rights reserved.

  5. A space radiation shielding model of the Martian radiationenvironment experiment (MARIE)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atwell, William; Saganti, Premkumar; Cucinotta, Francis A.

    2004-12-01

    The 2001 Mars Odyssey spacecraft was launched towards Mars on April 7, 2001. On board the spacecraft is the Martian radiation environment experiment (MARIE), which is designed to measure the background radiation environment due to galactic cosmic rays (GCR) and solar protons in the 20-500 MeV/n energy range. We present an approach for developing a space radiation-shielding model of the spacecraft that includes the MARIE instrument in the current mapping phase orientation. A discussion is presented describing the development and methodology used to construct the shielding model. For a given GCR model environment, using the current MARIE shielding model and the high-energy particle transport codes, dose rate values are compared with MARIE measurements during the early mapping phase in Mars orbit. The results show good agreement between the model calculations and the MARIE measurements as presented for the March 2002 dataset.

  6. International Space Station (ISS) Meteoroid/Orbital Debris Shielding

    NASA Technical Reports Server (NTRS)

    Christiansen, Eric L.

    1999-01-01

    Design practices to provide protection for International Space Station (ISS) crew and critical equipment from meteoroid and orbital debris (M/OD) impacts have been developed. Damage modes and failure criteria are defined for each spacecraft system. Hypervelocity impact tests and analyses are used to develop ballistic limit equations (BLEs) for each exposed spacecraft system. BLEs define the impact particle sizes that result in threshold failure of a particular spacecraft system as a function of impact velocity, angle, and particle density. The BUMPER computer code is used to determine the probability of no penetration (PNP) of the spacecraft shielding based on NASA standard meteoroid/debris models, a spacecraft geometry model, and the BLEs. BUMPER results are used to verify spacecraft shielding requirements. Low-weight, high-performance shielding alternatives have been developed at the NASA Johnson Space Center (JSC) Hypervelocity Impact Technology Facility (HITF) to meet spacecraft protection requirements.
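
    Underlying the PNP requirement is a Poisson failure model: if the environment models and ballistic limit equations predict an expected number N of impacts exceeding the ballistic limit over the exposure, then PNP = exp(-N). The sketch below applies that relation for a single surface element; the flux, area, and exposure values are illustrative, not ISS numbers.

      import math

      def probability_of_no_penetration(penetrating_flux_per_m2_yr, area_m2, years):
          # Poisson model: PNP = exp(-N), N = expected number of impacts
          # exceeding the ballistic limit over the exposure period.
          expected_failures = penetrating_flux_per_m2_yr * area_m2 * years
          return math.exp(-expected_failures)

      # Illustrative numbers only: 1e-6 penetrating impacts per m^2 per year,
      # 200 m^2 of exposed shield, 10-year exposure -> PNP about 0.998.
      print(probability_of_no_penetration(1e-6, 200.0, 10.0))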

  7. Reusable shielding material for neutron- and gamma-radiation

    NASA Astrophysics Data System (ADS)

    Calzada, Elbio; Grünauer, Florian; Schillinger, Burkhard; Türck, Harald

    2011-09-01

    At neutron research facilities all around the world, radiation shielding is applied to reduce the background of neutron and gamma radiation as far as possible, in order to perform high-quality measurements and to fulfil radiation protection requirements. The current approach with cement-based compounds has a number of shortcomings: "heavy concrete" contains a high proportion of elements that are not desirable for obtaining a high attenuation of neutron and/or gamma radiation (e.g. calcium, carbon, oxygen, silicon and aluminum). A shielding material with a high density of desired nuclei such as iron, hydrogen and boron was developed for the redesign of the neutron radiography facility ANTARES at beam tube 4 (located at a cold neutron source) of FRM-II. The composition of the material was optimized with the help of the Monte Carlo code MCNP5. With this shielding material a considerably higher attenuation of background radiation can be obtained compared to usual heavy concretes.

  8. Gas bremsstrahlung shielding calculation for first optic enclosure of ILSF medical beamline

    NASA Astrophysics Data System (ADS)

    Beigzadeh Jalali, H.; Salimi, E.; Rahighi, J.

    2016-10-01

    Gas bremsstrahlung generated in a high-energy electron storage ring accompanies the synchrotron radiation into the beamlines and strikes the various components of the beamline. In this paper, radiation shielding calculations for secondary gas bremsstrahlung are performed for the first optics enclosure (FOE) of the medical beamline of the Iranian Light Source Facility (ILSF). Dose equivalent rate (DER) calculations are carried out using the FLUKA Monte Carlo code. A comprehensive study of the DER distribution at the back wall, sides and roof is given.

  9. LPT. Plot plan and site layout. Includes shield test pool/EBOR ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    LPT. Plot plan and site layout. Includes shield test pool/EBOR facility. (TAN-645 and -646) low power test building (TAN-640 and -641), water storage tanks, guard house (TAN-642), pump house (TAN-644), driveways, well, chlorination building (TAN-643), septic system. Ralph M. Parsons 1229-12 ANP/GE-7-102. November 1956. Approved by INEEL Classification Office for public release. INEEL index code no. 038-0102-00-693-107261 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  10. A&M. TAN607. Shield wall sections and details around hot shop ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    A&M. TAN-607. Shield wall sections and details around hot shop and special equipment room, showing taper, crane rail elevations, and elevation for biparting door (door no. 301) in wall between hot shop and special equipment room. Ralph M. Parsons 902-3-ANP-607-S 138. Date: December 1952. Approved by INEEL Classification Office for public release. INEEL index code no. 034-0607-62-963-106782 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  11. Full moment tensor and source location inversion based on full waveform adjoint method

    NASA Astrophysics Data System (ADS)

    Morency, C.

    2012-12-01

    The development of high-performance computing and numerical techniques enabled global and regional tomography to reach high levels of precision, and seismic adjoint tomography has become a state-of-the-art tomographic technique. The method was successfully used for crustal tomography of Southern California (Tape et al., 2009) and Europe (Zhu et al., 2012). Here, I will focus on the determination of source parameters (full moment tensor and location) based on the same approach (Kim et al., 2011). The method relies on full wave simulations and takes advantage of the misfit between observed and synthetic seismograms. An adjoint wavefield is calculated by back-propagating the difference between observed and synthetic seismograms from the receivers to the source. The interaction between this adjoint wavefield and the regular forward wavefield helps define Frechet derivatives of the source parameters, that is, the sensitivity of the misfit with respect to the source parameters. Source parameters are then recovered by minimizing the misfit with a conjugate gradient algorithm using the Frechet derivatives. First, I will demonstrate the method on synthetic cases before tackling events recorded at the Geysers. The velocity model used at the Geysers is based on the USGS 3D velocity model. Waveform datasets come from the Northern California Earthquake Data Center. Finally, I will discuss strategies to ultimately use this method to characterize smaller events for microseismic and induced seismicity monitoring. References: - Tape, C., Q. Liu, A. Maggi, and J. Tromp, 2009, Adjoint tomography of the Southern California crust: Science, 325, 988-992. - Zhu, H., Bozdag, E., Peter, D., and Tromp, J., 2012, Structure of the European upper mantle revealed by adjoint method: Nature Geoscience, 5, 493-498. - Kim, Y., Q. Liu, and J. Tromp, 2011, Adjoint centroid-moment tensor inversions: Geophys. J. Int., 186, 264-278. Prepared by LLNL under Contract DE-AC52-07NA27344.
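
    The inversion step described here amounts to repeatedly updating the source parameters along the negative gradient of the waveform misfit, with the Frechet derivatives supplied by the adjoint simulation. The sketch below reproduces that loop for a toy linear forward operator standing in for the wave solver; the matrices, step size, and iteration count are illustrative, and a conjugate-gradient update would replace the plain gradient step in practice.

      import numpy as np

      rng = np.random.default_rng(0)
      G = rng.normal(size=(50, 6))          # toy Frechet derivatives (receivers x source parameters)
      m_true = np.array([1.0, -0.5, 0.2, 0.0, 0.3, -0.1])   # 6 moment-tensor components
      d_obs = G @ m_true                    # "observed" seismograms (toy, noise-free)

      def misfit_and_gradient(m):
          residual = G @ m - d_obs          # synthetics minus observations
          return 0.5 * residual @ residual, G.T @ residual   # adjoint-style gradient

      m = np.zeros(6)
      for _ in range(200):                  # plain gradient descent for simplicity
          misfit, grad = misfit_and_gradient(m)
          m -= 0.01 * grad
      print(np.round(m, 3))                 # approaches m_true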

  12. On the Direct Assimilation of Along-track Sea Surface Height Observations into a Free-surface Ocean Model Using a Weak Constraints Four Dimensional Variational (4dvar) Method

    NASA Astrophysics Data System (ADS)

    Ngodock, H.; Carrier, M.; Smith, S. R.; Souopgui, I.; Martin, P.; Jacobs, G. A.

    2016-02-01

    The representer method is adopted for solving a weak-constraint 4DVar problem for the assimilation of ocean observations, including along-track SSH, using a free-surface ocean model. Direct 4DVar assimilation of SSH observations along the satellite tracks requires that the adjoint model be integrated with Dirac impulses on the right-hand side of the adjoint equation for the surface elevation. The solution of this adjoint model will inevitably include surface gravity waves, and it constitutes the forcing for the tangent linear model (TLM) according to the representer method. This yields an analysis that is contaminated by gravity waves. A method for avoiding the generation of surface gravity waves in the analysis is proposed in this study; it consists of removing the adjoint of the free surface from the right-hand side (rhs) of the free-surface mode in the TLM. The information from the SSH observations still propagates to all other variables via the adjoint of the balance relationship between the barotropic and baroclinic modes, resulting in a correction to the surface elevation. Two assimilation experiments are carried out in the Gulf of Mexico: one with adjoint forcing included on the rhs of the TLM free-surface equation, and the other without. Both analyses are evaluated against the assimilated SSH observations, SSH maps from Aviso and independent surface drifters, showing that the analysis that did not include adjoint forcing in the free surface is more accurate. This study shows that when a weak-constraint 4DVar approach is considered for the assimilation of along-track SSH observations using a free-surface model, with the aim of correcting the mesoscale circulation, an independent model error should not be assigned to the free surface.

  13. Covering Resilience: A Recent Development for Binomial Checkpointing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walther, Andrea; Narayanan, Sri Hari Krishna

    In terms of computing time, adjoint methods offer a very attractive alternative for computing gradient information, required, e.g., for optimization purposes. However, together with this very favorable temporal complexity result comes a memory requirement that is in essence proportional to the operation count of the underlying function, e.g., if algorithmic differentiation is used to provide the adjoints. For this reason, checkpointing approaches in many variants have become popular. This paper analyzes an extension of the so-called binomial approach to also cover possible failures of the computing systems. Such a measure of precaution is of special interest for massively parallel simulations and adjoint calculations where the mean time between failures of the large-scale computing system is smaller than the time needed to complete the calculation of the adjoint information. We describe the extensions of standard checkpointing approaches required for such resilience, provide a corresponding implementation and discuss first numerical results.
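
    A minimal sketch of the binomial result behind these schedules, assuming the Griewank-Walther setting: with s checkpoints and at most r forward repetitions of any step, the longest chain whose adjoint can be reversed has binomial(s+r, s) steps. The toy reversal below uses a simple mid-point split rather than the optimal binomial split of REVOLVE, and the checkpoint budget is only tracked, not enforced; the resilience extension discussed in the paper adds failure handling on top of such a schedule.

      from math import comb

      def max_reversible_steps(checkpoints, repetitions):
          # Griewank/Walther bound: l_max = C(checkpoints + repetitions, checkpoints)
          return comb(checkpoints + repetitions, checkpoints)

      def reverse(begin, end, checkpoints):
          # Toy checkpointed reversal of steps [begin, end): checkpoint a split
          # point, reverse the tail segment, then reverse the head segment.
          if end - begin == 1:
              print(f"adjoint of step {begin}")
              return
          split = (begin + end) // 2        # simplistic split; REVOLVE uses a binomial rule
          print(f"forward sweep {begin}->{split}, checkpoint state at {split}")
          reverse(split, end, checkpoints - 1)
          reverse(begin, split, checkpoints)

      print(max_reversible_steps(3, 4))     # 35 steps with 3 checkpoints and <= 4 repetitions
      reverse(0, 8, checkpoints=3)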

  14. An Adjoint-Based Approach to Study a Flexible Flapping Wing in Pitching-Rolling Motion

    NASA Astrophysics Data System (ADS)

    Jia, Kun; Wei, Mingjun; Xu, Min; Li, Chengyu; Dong, Haibo

    2017-11-01

    Flapping-wing aerodynamics, with advantages in agility, efficiency, and hovering capability, has been the choice of many flyers in nature. However, the study of bio-inspired flapping-wing propulsion is often hindered by the problem's large control space, with different wing kinematics and deformation. The adjoint-based approach largely reduces the computational cost to a feasible level by solving an inverse problem. Facing the complication of moving boundaries, non-cylindrical calculus provides an easy extension of the traditional adjoint-based approach to handle optimization involving moving boundaries. The improved adjoint method with non-cylindrical calculus for boundary treatment is first applied to a rigid pitching-rolling plate, then extended to a flexible one with active deformation to further increase its propulsion efficiency. The comparison of flow dynamics with the initial and optimal kinematics and deformation provides a unique opportunity to understand the flapping-wing mechanism. Supported by AFOSR and ARL.

  15. Learning a trajectory using adjoint functions and teacher forcing

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad B.; Barhen, Jacob

    1992-01-01

    A new methodology for faster supervised temporal learning in nonlinear neural networks is presented which builds upon the concept of adjoint operators to allow fast computation of the gradients of an error functional with respect to all parameters of the neural architecture, and exploits the concept of teacher forcing to incorporate information on the desired output into the activation dynamics. The importance of the initial or final time conditions for the adjoint equations is discussed. A new algorithm is presented in which the adjoint equations are solved simultaneously (i.e., forward in time) with the activation dynamics of the neural network. We also indicate how teacher forcing can be modulated in time as learning proceeds. The results obtained show that the learning time is reduced by one to two orders of magnitude with respect to previously published results, while trajectory tracking is significantly improved. The proposed methodology makes hardware implementation of temporal learning attractive for real-time applications.
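
    The sketch below illustrates only the teacher-forcing ingredient on an ordinary discrete-time recurrent network: the feedback that drives the dynamics is a blend of the desired output and the network's own output, and the blend factor is relaxed as training proceeds. The adjoint machinery of the paper (solved forward in time with the activation dynamics) is not reproduced here; weights, sizes, and the relaxation schedule are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      W_h = rng.normal(scale=0.3, size=(3, 3))      # recurrent weights (3 hidden units)
      W_in = rng.normal(scale=0.3, size=(3, 1))     # weight on the fed-back output
      W_out = rng.normal(scale=0.3, size=(1, 3))    # readout weights

      def run_trajectory(targets, epoch, total_epochs):
          # Teacher forcing: the feedback blends the desired output with the network
          # output; the blend is relaxed toward pure network feedback over training.
          lam = max(0.0, 1.0 - epoch / total_epochs)
          h, feedback, outputs = np.zeros(3), 0.0, []
          for y_star in targets:
              h = np.tanh(W_h @ h + W_in @ np.array([feedback]))
              y = float(W_out @ h)
              outputs.append(y)
              feedback = lam * y_star + (1.0 - lam) * y
          return np.array(outputs)

      targets = np.sin(np.linspace(0.0, 2.0 * np.pi, 40))   # a simple periodic trajectory
      print(run_trajectory(targets, epoch=0, total_epochs=100)[:5])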

  16. Photon mass attenuation coefficients of a silicon resin loaded with WO3, PbO, and Bi2O3 Micro and Nano-particles for radiation shielding

    NASA Astrophysics Data System (ADS)

    Verdipoor, Khatibeh; Alemi, Abdolali; Mesbahi, Asghar

    2018-06-01

    Novel shielding materials for photons, based on silicon resin loaded with WO3, PbO, and Bi2O3 micro- and nanoparticles, were designed and their mass attenuation coefficients were calculated using the Monte Carlo (MC) method. Using lattice cards in the MCNPX code, micro- and nanoparticles with sizes of 100 nm and 1 μm were modeled inside a silicon resin matrix. Narrow-beam geometry was simulated to calculate the attenuation coefficients of the samples against mono-energetic beams of Co60 (1.17 and 1.33 MeV), Cs137 (663.8 keV), and Ba133 (355.9 keV). The shielding samples made of nanoparticles had higher mass attenuation coefficients, up to 17% relative to those made of microparticles. The superiority of nano-shields relative to micro-shields depended on the filler concentration and the photon energy. PbO and Bi2O3 nanoparticles showed higher attenuation than WO3 nanoparticles at the studied energies. Fabrication of novel shielding materials using PbO and Bi2O3 nanoparticles is recommended for application in radiation protection against photon beams.
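
    The tabulated quantity follows from narrow-beam attenuation: for a composite, the mass attenuation coefficient is the mass-fraction-weighted sum of the constituents' coefficients, and the transmitted fraction through thickness t is exp(-(mu/rho)*rho*t). The coefficients, fractions, and density below are placeholders rather than the paper's values; the particle-size effect reported in the paper only appears in the full Monte Carlo treatment.

      import math

      def mixture_mass_attenuation(mass_fractions, mu_over_rho):
          # Mixture rule: (mu/rho)_mix = sum_i w_i * (mu/rho)_i   [cm^2/g]
          return sum(w * m for w, m in zip(mass_fractions, mu_over_rho))

      def transmission(mu_over_rho_mix, density_g_cm3, thickness_cm):
          # Narrow-beam transmission: I/I0 = exp(-(mu/rho) * rho * t)
          return math.exp(-mu_over_rho_mix * density_g_cm3 * thickness_cm)

      # Placeholder values at one photon energy: 30 wt% filler in a resin matrix.
      fractions = [0.3, 0.7]
      mu_rho = [0.12, 0.08]                 # assumed mass attenuation coefficients, cm^2/g
      mix = mixture_mass_attenuation(fractions, mu_rho)
      print(mix, transmission(mix, density_g_cm3=2.0, thickness_cm=1.0))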

  17. [Dosimetric evaluation of eye lense shieldings in computed tomography examination--measurements and Monte Carlo simulations].

    PubMed

    Wulff, Jorg; Keil, Boris; Auvanis, Diyala; Heverhagen, Johannes T; Klose, Klaus Jochen; Zink, Klemens

    2008-01-01

    The present study investigates eye lens shields of different composition for use in computed tomography examinations. Measurements with thermoluminescent dosimeters and a simple cylindrical water-filled phantom were performed, as well as Monte Carlo simulations with an equivalent geometry. Besides conventional shielding made of bismuth-coated latex, a new shielding with a mixture of metallic components was analyzed. This new material leads to an increased dose reduction compared with the bismuth shielding. Measured and Monte Carlo simulated dose reductions are in good agreement and amount to 34% for the bismuth shielding and 46% for the new material. For the simulations the EGSnrc code system was used, and a new application, CTDOSPP, was developed for the simulation of the computed tomography examination. The investigations show that a satisfactory agreement between simulation and measurement with the chosen geometries of this study could only be achieved when the transport of secondary electrons was accounted for in the simulation. The amount of radiation scattered by the protector as fluorescent photons was analyzed and is larger for the new material due to the smaller atomic number of its metallic components.

  18. Hypervelocity Impact Performance of Open Cell Foam Core Sandwich Panel Structures

    NASA Technical Reports Server (NTRS)

    Ryan, S.; Ordonez, E.; Christiansen, E. L.; Lear, D. M.

    2010-01-01

    Open-cell metallic foam core sandwich panel structures are of interest for application in spacecraft micrometeoroid and orbital debris shields due to their novel form and advantageous structural and thermal performance. Repeated shocking as a result of secondary impacts upon individual foam ligaments during the penetration process acts to raise the thermal state of impacting projectiles, resulting in fragmentation, melting, and vaporization at lower velocities than with traditional shielding configurations (e.g. the Whipple shield). In order to characterize the protective capability of these structures, an extensive experimental campaign was performed by the Johnson Space Center Hypervelocity Impact Technology Facility, the results of which are reported in this paper. Although not capable of competing with the protection levels achievable by the leading heavy shields in use on modern high-risk vehicles (i.e. International Space Station modules), metallic foam core sandwich panels are shown to provide a substantial improvement over comparable structural panels and traditional low-weight shielding alternatives such as honeycomb sandwich panels and metallic Whipple shields. A ballistic limit equation, generalized in terms of panel geometry, is derived and presented in a form suitable for application in risk assessment codes.

  19. Measurement And Calculation of High-Energy Neutron Spectra Behind Shielding at the CERF 120-GeV/C Hadron Beam Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakao, N.; /SLAC; Taniguchi, S.

    Neutron energy spectra were measured behind the lateral shield of the CERF (CERN-EU High Energy Reference Field) facility at CERN, with a 120 GeV/c positive hadron beam (a mixture of mainly protons and pions) on a cylindrical copper target (7 cm diameter by 50 cm long). An NE213 organic liquid scintillator (12.7 cm diameter by 12.7 cm long) was located at various longitudinal positions behind shields of 80- and 160-cm-thick concrete and 40-cm-thick iron. The measurement locations cover an angular range with respect to the beam axis between 13° and 133°. Neutron energy spectra in the energy range between 32 MeV and 380 MeV were obtained by unfolding the measured pulse-height spectra with the detector response functions, which have been verified in the neutron energy range up to 380 MeV in separate experiments. Since the source term and experimental geometry in this experiment are well characterized and simple, and the results are given in the form of energy spectra, these experimental results are very useful as benchmark data to check the accuracy of simulation codes and nuclear data. Monte Carlo simulations of the experimental setup were performed with the FLUKA, MARS and PHITS codes. Simulated spectra for the 80-cm-thick concrete often agree within the experimental uncertainties. On the other hand, for the 160-cm-thick concrete and the iron shield, differences are generally larger than the experimental uncertainties, yet within a factor of 2. Based on source-term simulations, the observed discrepancies among simulations of spectra outside the shield can be partially explained by differences in the high-energy hadron production in the copper target.

  20. SU-E-T-569: Neutron Shielding Calculation Using Analytical and Multi-Monte Carlo Method for Proton Therapy Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, S; Shin, E H; Kim, J

    2015-06-15

    Purpose: To evaluate the shielding wall design that protects patients, staff and members of the general public from secondary neutrons, using a simple analytic solution and the MCNPX, ANISN and FLUKA codes. Methods: Analytical and multi-code calculations were performed for the proton facility (Sumitomo Heavy Industries, Ltd.) at Samsung Medical Center in Korea. The NCRP-144 analytical evaluation methods, which produce conservative estimates of the dose-equivalent values for the shielding, were used for the analytical evaluations. The radiation transport was then simulated with the Monte Carlo codes. The neutron dose at each evaluation point is obtained as the product of the simulated value and the neutron dose coefficients introduced in ICRP-74. Results: The evaluation points at the accelerator control room and the control-room entrance are mainly influenced by the location of the proton beam loss. The neutron dose equivalent at the accelerator control room evaluation point is 0.651, 1.530, 0.912 and 0.943 mSv/yr, and at the entrance of the cyclotron room it is 0.465, 0.790, 0.522 and 0.453 mSv/yr, as calculated by the NCRP-144 formalism, ANISN, FLUKA and MCNP, respectively. Most of the MCNPX and FLUKA results, which use the complicated geometry, are smaller than the ANISN results. Conclusion: The neutron shielding for a proton therapy facility has been evaluated by the analytic model and multi-Monte Carlo methods. We confirmed that the shielding design is adequate for the areas accessible to people when the proton facility is operated.
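
    The analytical part of such an evaluation is essentially a point-kernel estimate in the spirit of the NCRP-144 formalism: the unshielded dose rate is scaled by the inverse square of the distance and attenuated by tenth-value layers of the barrier. The sketch below applies that relation with illustrative numbers; it is not the facility's actual source term, geometry, or wall recipe.

      def shielded_dose_rate(H0_at_1m, distance_m, wall_thickness_cm, tvl_cm):
          # Point-kernel estimate: H = H0 / d^2 * 10^(-t / TVL)
          return H0_at_1m / distance_m ** 2 * 10.0 ** (-wall_thickness_cm / tvl_cm)

      # Illustrative numbers: 1e4 mSv/yr of neutron dose at 1 m from the loss point,
      # evaluation point 5 m away, 200 cm of concrete, TVL ~ 50 cm for fast neutrons.
      print(shielded_dose_rate(1e4, 5.0, 200.0, 50.0))    # -> 0.04 mSv/yr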

  1. Comparison of adjoint and analytical Bayesian inversion methods for constraining Asian sources of carbon monoxide using satellite (MOPITT) measurements of CO columns

    NASA Astrophysics Data System (ADS)

    Kopacz, Monika; Jacob, Daniel J.; Henze, Daven K.; Heald, Colette L.; Streets, David G.; Zhang, Qiang

    2009-02-01

    We apply the adjoint of an atmospheric chemical transport model (GEOS-Chem CTM) to constrain Asian sources of carbon monoxide (CO) with 2° × 2.5° spatial resolution using Measurement of Pollution in the Troposphere (MOPITT) satellite observations of CO columns in February-April 2001. Results are compared to the more common analytical method for solving the same Bayesian inverse problem and applied to the same data set. The analytical method is more exact but because of computational limitations it can only constrain emissions over coarse regions. We find that the correction factors to the a priori CO emission inventory from the adjoint inversion are generally consistent with those of the analytical inversion when averaged over the large regions of the latter. The adjoint solution reveals fine-scale variability (cities, political boundaries) that the analytical inversion cannot resolve, for example, in the Indian subcontinent or between Korea and Japan, and some of that variability is of opposite sign which points to large aggregation errors in the analytical solution. Upward correction factors to Chinese emissions from the prior inventory are largest in central and eastern China, consistent with a recent bottom-up revision of that inventory, although the revised inventory also sees the need for upward corrections in southern China where the adjoint and analytical inversions call for downward correction. Correction factors for biomass burning emissions derived from the adjoint and analytical inversions are consistent with a recent bottom-up inventory on the basis of MODIS satellite fire data.
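
    For reference, the analytical counterpart against which the adjoint inversion is compared is the standard linear-Gaussian Bayesian update, computable in closed form when the state vector is small. A minimal sketch with toy matrices (these are not the GEOS-Chem/MOPITT operators or error covariances):

      import numpy as np

      def analytical_bayesian_update(x_a, S_a, K, y, S_o):
          # Posterior mean and covariance for y = K x + noise with Gaussian statistics:
          # x_hat = x_a + S_a K^T (K S_a K^T + S_o)^-1 (y - K x_a)
          gain = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_o)
          x_hat = x_a + gain @ (y - K @ x_a)
          S_hat = (np.eye(len(x_a)) - gain @ K) @ S_a
          return x_hat, S_hat

      # Toy problem: 3 regional scaling factors observed through 5 column measurements.
      rng = np.random.default_rng(2)
      K = rng.normal(size=(5, 3))
      x_a, S_a, S_o = np.ones(3), 0.5 * np.eye(3), 0.1 * np.eye(5)
      y = K @ np.array([1.3, 0.8, 1.1]) + 0.1 * rng.normal(size=5)
      print(analytical_bayesian_update(x_a, S_a, K, y, S_o)[0])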

  2. BRYNTRN: A baryon transport model

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Nealy, John E.; Chun, Sang Y.; Hong, B. S.; Buck, Warren W.; Lamkin, S. L.; Ganapol, Barry D.; Khan, Ferdous; Cucinotta, Francis A.

    1989-01-01

    The development of an interaction data base and a numerical solution to the transport of baryons through an arbitrary shield material based on a straight ahead approximation of the Boltzmann equation are described. The code is most accurate for continuous energy boundary values, but gives reasonable results for discrete spectra at the boundary using even a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O). The resulting computer code is self-contained, efficient and ready to use. The code requires only a very small fraction of the computer resources required for Monte Carlo codes.

  3. A simple code for use in shielding and radiation dosage analyses

    NASA Technical Reports Server (NTRS)

    Wan, C. C.

    1972-01-01

    A simple code for use in analyses of gamma radiation effects in laminated materials is described. Simple, good geometry is assumed, so that all multiple-collision and scattering events are excluded from consideration. The code is capable of handling laminates of up to six layers. However, for laminates of more than six layers, the same code may be used to incorporate two additional layers at a time, making use of punched-tape output from the previous computation on all preceding layers. Spectra of the attenuated radiation are obtained as both printed output and punched-tape output, as desired.
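
    Under the stated good-geometry assumption the calculation reduces, for each energy group, to a product of exponential attenuation factors across the layers. A minimal sketch for a two-layer laminate follows; the attenuation coefficients, thicknesses, and group structure are placeholders, not the code's data.

      import math

      def attenuated_spectrum(incident_spectrum, layers):
          # Good-geometry attenuation of a grouped gamma spectrum through a laminate.
          # incident_spectrum: {energy_MeV: fluence}
          # layers: list of ({energy_MeV: mu_cm^-1}, thickness_cm) per layer
          out = {}
          for energy, fluence in incident_spectrum.items():
              factor = math.prod(math.exp(-mu[energy] * t) for mu, t in layers)
              out[energy] = fluence * factor
          return out

      # Placeholder laminate (aluminum over polyethylene) and a two-group spectrum.
      mu_al = {0.5: 0.227, 1.0: 0.166}      # illustrative linear coefficients, cm^-1
      mu_pe = {0.5: 0.090, 1.0: 0.070}
      spectrum = {0.5: 1.0e6, 1.0: 5.0e5}   # photons per group, arbitrary units
      print(attenuated_spectrum(spectrum, [(mu_al, 2.0), (mu_pe, 5.0)]))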

  4. X-Ray Computed Tomography Inspection of the Stardust Heat Shield

    NASA Technical Reports Server (NTRS)

    McNamara, Karen M.; Schneberk, Daniel J.; Empey, Daniel M.; Koshti, Ajay; Pugel, D. Elizabeth; Cozmuta, Ioana; Stackpoole, Mairead; Ruffino, Norman P.; Pompa, Eddie C.; Oliveras, Ovidio

    2010-01-01

    The "Stardust" heat shield, composed of a PICA (Phenolic Impregnated Carbon Ablator) Thermal Protection System (TPS), bonded to a composite aeroshell, contains important features which chronicle its time in space as well as re-entry. To guide the further study of the Stardust heat shield, NASA reviewed a number of techniques for inspection of the article. The goals of the inspection were: 1) to establish the material characteristics of the shield and shield components, 2) record the dimensions of shield components and assembly as compared with the pre-flight condition, 3) provide flight infonnation for validation and verification of the FIAT ablation code and PICA material property model and 4) through the evaluation of the shield material provide input to future missions which employ similar materials. Industrial X-Ray Computed Tomography (CT) is a 3D inspection technology which can provide infonnation on material integrity, material properties (density) and dimensional measurements of the heat shield components. Computed tomographic volumetric inspections can generate a dimensionally correct, quantitatively accurate volume of the shield assembly. Because of the capabilities offered by X-ray CT, NASA chose to use this method to evaluate the Stardust heat shield. Personnel at NASA Johnson Space Center (JSC) and Lawrence Livermore National Labs (LLNL) recently performed a full scan of the Stardust heat shield using a newly installed X-ray CT system at JSC. This paper briefly discusses the technology used and then presents the following results: 1. CT scans derived dimensions and their comparisons with as-built dimensions anchored with data obtained from samples cut from the heat shield; 2. Measured density variation, char layer thickness, recession and bond line (the adhesive layer between the PICA and the aeroshell) integrity; 3. FIAT predicted recession, density and char layer profiles as well as bondline temperatures Finally suggestions are made as to future uses of this technology as a tool for non-destructively inspecting and verifying both pre and post flight heat shields.

  5. Neutron Energy and Time-of-flight Spectra Behind the Lateral Shield of a High Energy Electron Accelerator Beam Dump, Part II: Monte Carlo Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roesler, Stefan

    2002-09-19

    Energy spectra of high-energy neutrons and neutron time-of-flight spectra were calculated for the setup of experiment T-454, performed with an NE213 liquid scintillator at the Final Focus Test Beam (FFTB) facility at the Stanford Linear Accelerator Center. The neutrons were created by the interaction of a 28.7 GeV electron beam in the aluminum beam dump of the FFTB, which is housed inside a thick steel and concrete shielding. In order to determine the attenuation length of high-energy neutrons, additional concrete shielding of various thicknesses was placed outside the existing shielding. The calculations were performed using the FLUKA interaction and transport code. The energy and time-of-flight were recorded at the location of the detector, allowing a detailed comparison with the experimental data. A generally good description of the data is achieved, adding confidence to the use of FLUKA for the design of shielding for high-energy electron accelerators.

  6. Optimization of radiation shielding material aiming at compactness, lightweight, and low activation for a vehicle-mounted accelerator-driven D-T neutron source.

    PubMed

    Cai, Yao; Hu, Huasi; Lu, Shuangying; Jia, Qinggang

    2018-05-01

    To minimize the size and weight of a vehicle-mounted accelerator-driven D-T neutron source and to protect workers from unnecessary irradiation after equipment shutdown, a method to optimize the radiation shielding material for fast neutrons, aiming at compactness, light weight, and low activation, was developed. The method employs a genetic algorithm, combining the MCNP and ORIGEN codes. A series of composite shielding material samples was obtained by the method step by step. The volume and weight needed to build a shield (assumed to be a coaxial tapered cylinder) were adopted to compare the performance of the materials visually and conveniently. The results showed that the optimized materials have excellent performance in comparison with the conventional materials. The "MCNP6-ACT" method and the "rigorous two steps" (R2S) method were used to verify the activation grade of the shield irradiated by D-T neutrons. The types of radionuclides, the energy spectrum of the corresponding decay gamma source, and the variation in decay gamma dose rate were also computed. Copyright © 2018 Elsevier Ltd. All rights reserved.
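
    A skeleton of the optimization loop described above, with the transport and activation figure of merit replaced by a cheap placeholder (the study evaluates candidates with MCNP and ORIGEN). The ingredient list, fitness weights, and genetic-algorithm operators are assumptions kept deliberately minimal.

      import random

      CONSTITUENTS = ["polyethylene", "B4C", "W", "Fe"]    # assumed ingredient list

      def random_candidate():
          w = [random.random() for _ in CONSTITUENTS]
          total = sum(w)
          return [x / total for x in w]                    # mass fractions summing to 1

      def fitness(fractions):
          # Placeholder for the MCNP dose-rate plus ORIGEN activation objective
          # (lower is better): reward hydrogenous and boronated content.
          return -(2.0 * fractions[0] + 1.5 * fractions[1] - 0.5 * fractions[2])

      def crossover_and_mutate(a, b):
          child = [(x + y) / 2.0 for x, y in zip(a, b)]
          i = random.randrange(len(child))
          child[i] = max(1e-6, child[i] + random.gauss(0.0, 0.05))
          total = sum(child)
          return [x / total for x in child]

      population = [random_candidate() for _ in range(20)]
      for _ in range(50):                                  # generations
          population.sort(key=fitness)                     # best (lowest) first
          parents = population[:10]
          population = parents + [crossover_and_mutate(*random.sample(parents, 2))
                                  for _ in range(10)]
      population.sort(key=fitness)
      print(dict(zip(CONSTITUENTS, population[0])))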

  7. Absorbed Dose and Dose Equivalent Calculations for Modeling Effective Dose

    NASA Technical Reports Server (NTRS)

    Welton, Andrew; Lee, Kerry

    2010-01-01

    While in orbit, astronauts are exposed to a much higher dose of ionizing radiation than on the ground. It is important to model how shielding designs on spacecraft reduce the radiation effective dose pre-flight, and to determine whether or not a danger to humans is presented. However, in order to calculate effective dose, dose-equivalent calculations are needed. Dose equivalent takes into account the absorbed dose of radiation and the biological effectiveness of the ionizing radiation. This is important in preventing long-term, stochastic radiation effects in humans spending time in space. Monte Carlo simulations run with the particle transport code FLUKA give absorbed and equivalent dose data for relevant shielding. The shielding geometry used in the dose calculations is a layered slab design consisting of aluminum, polyethylene, and water. Water is used to simulate the soft tissues that compose the human body. The results obtained will provide information on how the shielding performs with many thicknesses of each material in the slab. This allows them to be directly applicable to modern spacecraft shielding geometries.
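
    The step from absorbed dose to dose equivalent, and onward to effective dose, is the weighted sum that the FLUKA tallies feed into. A minimal sketch with assumed weighting values; real assessments use the full ICRP quality-factor and tissue-weighting prescriptions.

      def dose_equivalent(absorbed_dose_by_component, quality_factors):
          # H = sum_R Q_R * D_R   (Sv, with D in Gy)
          return sum(quality_factors[r] * d for r, d in absorbed_dose_by_component.items())

      def effective_dose(equivalent_dose_by_tissue, tissue_weights):
          # E = sum_T w_T * H_T
          return sum(tissue_weights[t] * h for t, h in equivalent_dose_by_tissue.items())

      # Illustrative numbers only (Gy/yr behind a given slab; assumed mean quality factors).
      D = {"protons": 0.02, "neutrons": 0.005, "heavy_ions": 0.001}
      Q = {"protons": 2.0, "neutrons": 10.0, "heavy_ions": 20.0}
      print(dose_equivalent(D, Q))               # Sv/yr at one tissue location

      H_tissue = {"skin": 0.05, "bfo": 0.02}     # Sv/yr, illustrative
      w_T = {"skin": 0.01, "bfo": 0.12}          # assumed tissue weighting factors
      print(effective_dose(H_tissue, w_T))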

  8. Concepts and strategies for lunar base radiation protection - Prefabricated versus in-situ materials

    NASA Technical Reports Server (NTRS)

    Simonsen, Lisa C.; Nealy, John E.; Townsend, Lawrence W.

    1992-01-01

    The most recently accepted environment data are used as inputs for the Langley nucleon and heavy-ion transport codes, BRYNTRN and HZETRN, to examine the shield effectiveness of lunar regolith in comparison with commercially-used shield materials in nuclear facilities. Several of the fabricated materials categorized as neutron absorbers exhibit favorable characteristics for space radiation protection. In particular, polyethylene with additive boron is analyzed with regard to response to the predicted lunar galactic cosmic ray and solar proton flare environment during the course of a complete solar cycle. Although this effort is not intended to be a definitive trade study for specific shielding recommendations, attention is given to several factors that warrant consideration in such trade studies. For example, the transporting of bulk shield material to the lunar site as opposed to regolith-moving and processing equipment is assessed on the basis of recent scenario studies. The transporting of shield material from Earth may also be a viable alternative to the use of regolith from standpoints of cost-effectiveness, EVA time required, and risk factor.

  9. Depletion optimization of lumped burnable poisons in pressurized water reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kodah, Z.H.

    1982-01-01

    Techniques were developed to construct a set of basic poison depletion curves which deplete in a monotonic manner. These curves were combined to match a required optimized depletion profile by utilizing either linear or non-linear programming methods. Three computer codes, LEOPARD, XSDRN, and EXTERMINATOR-2, were used in the analyses. A depletion routine was developed and incorporated into the XSDRN code to allow the depletion of fuel, fission products, and burnable poisons. The Three Mile Island Unit-1 reactor core was used in this work as a typical PWR core. Two fundamental burnable poison rod designs were studied: a solid cylindrical poison rod and an annular cylindrical poison rod with water filling the central region. These two designs have either a uniform mixture of burnable poisons or lumped spheroids of burnable poisons in the poison region. Boron and gadolinium are the two burnable poisons that were investigated in this project. Thermal self-shielding factor calculations for solid and annular poison rods were conducted. Expressions for overall thermal self-shielding factors for one or more size groups of poison spheroids inside solid and annular poison rods were also derived and studied. Poison spheroids deplete at a slower rate than the uniform poison mixture because each spheroid exhibits some self-shielding effects of its own. The larger the spheroid, the higher the self-shielding effects due to the increase in poison concentration.

  10. FW/CADIS-Ω: An Angle-Informed Hybrid Method for Neutron Transport

    NASA Astrophysics Data System (ADS)

    Munk, Madicken

    The development of methods for deep-penetration radiation transport is of continued importance for radiation shielding, nonproliferation, nuclear threat reduction, and medical applications. As these applications become more ubiquitous, the need for transport methods that can accurately and reliably model the systems' behavior will persist. For these types of systems, hybrid methods are often the best choice to obtain a reliable answer in a short amount of time. Hybrid methods leverage the speed and uniform uncertainty distribution of a deterministic solution to bias Monte Carlo transport and reduce the variance in the solution. At present, the Consistent Adjoint-Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) hybrid methods are the gold standard by which to model systems that have deeply-penetrating radiation. They use an adjoint scalar flux to generate variance reduction parameters for Monte Carlo. However, in problems where there exists strong anisotropy in the flux, CADIS and FW-CADIS are not as effective at reducing the problem variance as they are for isotropic problems. This dissertation covers the theoretical background, implementation, and characterization of a set of angle-informed hybrid methods that can be applied to strongly anisotropic deep-penetration radiation transport problems. These methods use a forward-weighted adjoint angular flux to generate variance reduction parameters for Monte Carlo. As a result, they leverage both adjoint and contributon theory for variance reduction. They have been named CADIS-Ω and FW-CADIS-Ω. To characterize CADIS-Ω, several characterization problems with flux anisotropies were devised. These problems contain different physical mechanisms by which flux anisotropy is induced. Additionally, a series of novel anisotropy metrics by which to quantify flux anisotropy are used to characterize the methods beyond standard Figure of Merit (FOM) and relative error metrics. As a result, a more thorough investigation into the effects of anisotropy and the degree of anisotropy on Monte Carlo convergence is possible. The results from the characterization of CADIS-Ω show that it performs best in strongly anisotropic problems that have preferential particle flowpaths, but only if the flowpaths are not composed of air. Further, the characterization of the method's sensitivity to deterministic angular discretization showed that CADIS-Ω is less sensitive to discretization than CADIS for both quadrature order and PN order. However, more variation in the results was observed in response to changing quadrature order than PN order. Further, as a result of the forward-normalization in the Ω-methods, ray-effect mitigation was observed in many of the characterization problems. The characterization of the CADIS-Ω method in this dissertation serves to outline a path forward for further hybrid methods development. In particular, the responses of the Ω-method to changes in quadrature order and PN order, and its ray-effect mitigation, are strong indicators that the method is more resilient than its predecessors to strong anisotropies in the flux. With further method characterization, the full potential of the Ω-methods can be realized. The method can then be applied to geometrically complex, materially diverse problems and help to advance system modelling in deep-penetration radiation transport problems with strong anisotropies in the flux.
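
    For orientation, the CADIS step that the Ω-variants build on can be stated compactly: given an adjoint (importance) flux, the response estimate R = sum(q * phi_adj) defines a biased source q_hat = q * phi_adj / R and weight-window targets w = R / phi_adj. The mesh arrays below are placeholders; the Ω-methods replace the adjoint scalar flux with a forward-weighted adjoint angular quantity.

      import numpy as np

      def cadis_parameters(source_density, adjoint_flux, cell_volumes):
          # Consistent Adjoint Driven Importance Sampling quantities on a mesh:
          # response R = <q, phi+>, biased source pdf, and weight-window targets.
          response = np.sum(source_density * adjoint_flux * cell_volumes)
          biased_source = source_density * adjoint_flux * cell_volumes / response
          weight_targets = response / adjoint_flux
          return response, biased_source, weight_targets

      # Placeholder 1-D mesh: source on the left, detector importance growing to the right.
      q = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
      phi_adj = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0])
      volumes = np.ones(5)
      print(cadis_parameters(q, phi_adj, volumes))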

  11. A Reduced-Order Successive Linear Estimator for Geostatistical Inversion and its Application in Hydraulic Tomography

    NASA Astrophysics Data System (ADS)

    Zha, Yuanyuan; Yeh, Tian-Chyi J.; Illman, Walter A.; Zeng, Wenzhi; Zhang, Yonggen; Sun, Fangqiang; Shi, Liangsheng

    2018-03-01

    Hydraulic tomography (HT) is a recently developed technology for characterizing high-resolution, site-specific heterogeneity using hydraulic data (nd) from a series of cross-hole pumping tests. To properly account for the subsurface heterogeneity and to flexibly incorporate additional information, geostatistical inverse models, which permit a large number of spatially correlated unknowns (ny), are frequently used to interpret the collected data. However, the memory storage requirements for the covariance of the unknowns (ny × ny) in these models are prodigious for large-scale 3-D problems. Moreover, the sensitivity evaluation is often computationally intensive using the traditional difference method (ny forward runs). Although employment of the adjoint method can reduce the cost to nd forward runs, the adjoint model requires intrusive coding effort. In order to resolve these issues, this paper presents a Reduced-Order Successive Linear Estimator (ROSLE) for analyzing HT data. This new estimator approximates the covariance of the unknowns using a Karhunen-Loeve Expansion (KLE) truncated to nkl order, and it calculates the directional sensitivities (in the directions of the nkl eigenvectors) to form the covariance and cross-covariance used in the Successive Linear Estimator (SLE). In addition, the covariance of the unknowns is updated every iteration by updating the eigenvalues and eigenfunctions. The computational advantages of the proposed algorithm are demonstrated through numerical experiments and a 3-D transient HT analysis of data from a highly heterogeneous field site.
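
    The dimension reduction at the heart of the estimator is a truncated Karhunen-Loeve expansion of the prior covariance: keep the leading eigenpairs and work with nkl coefficients instead of the full ny-by-ny matrix. A small numpy sketch on a toy 1-D field; the exponential covariance, correlation length, and truncation level are assumptions.

      import numpy as np

      n = 200                                              # number of unknowns (toy 1-D field)
      x = np.linspace(0.0, 1.0, n)
      cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2) # assumed exponential covariance

      eigvals, eigvecs = np.linalg.eigh(cov)               # KLE modes of the prior covariance
      order = np.argsort(eigvals)[::-1]
      k = 20                                               # truncation level n_kl << n
      lam = eigvals[order][:k]
      phi = eigvecs[:, order][:, :k]

      xi = np.random.default_rng(3).normal(size=k)         # k independent coefficients
      field = phi @ (np.sqrt(lam) * xi)                    # reduced-order realization
      print(field.shape, np.round(lam[:3], 2))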

  12. JAE: A Jupiter Atmospheric Entry Probe Heating Code

    NASA Technical Reports Server (NTRS)

    Wercinski, Paul F.; Tauber, Michael E.; Yang, Lily

    1997-01-01

    The strong gravitational attraction of Jupiter on probes approaching the planet results in very high atmospheric entry velocities. The values relative to the rotating atmosphere can vary from about 47 to 60 km/sec, depending on the latitude of the entry. Therefore, the peak heating rates and heat shield mass fractions exceed those for any other atmospheric entries. For example, the Galileo probe's heat shield mass fraction was 50%, of which 45% was devoted to the forebody. Although the Galileo probe's mission was very successful, many more scientific questions about the Jovian atmosphere remain to be answered and additional probe missions are being planned. Recent developments in microelectronics have raised the possibility of building smaller and less expensive probes than Galileo. Therefore, it was desirable to develop a code that could quickly compute the forebody entry heating environments when performing parametric probe sizing studies. The Jupiter Atmospheric Entry (JAE) code was developed to meet this requirement. The body geometry consists of a blunt-nosed conical shape of arbitrary nose and base radius and cone angles up to about 65 deg at zero angle of attack.

  13. Analysis of space radiation exposure levels at different shielding configurations by ray-tracing dose estimation method

    NASA Astrophysics Data System (ADS)

    Kartashov, Dmitry; Shurshakov, Vyacheslav

    2018-03-01

    A ray-tracing method to calculate the radiation exposure levels of astronauts for different spacecraft shielding configurations has been developed. The method uses simplified shielding geometry models of the spacecraft compartments together with depth-dose curves. The depth-dose curves can be obtained with different space radiation environment models and radiation transport codes. The spacecraft shielding configurations are described by a set of geometry objects. To calculate the shielding probability functions for each object, its surface is decomposed into a set of disjoint adjacent triangles that fully cover the surface. Such a description can be applied to objects of any complex shape. The method is applied to the conditions modeled in the MATROSHKA-R space experiment. The experiment has been carried out on board the ISS from 2004 to 2016. Dose measurements were made in the ISS compartments with anthropomorphic and spherical phantoms, and with the protective curtain facility that provides additional shielding on the crew cabin wall. The space ionizing radiation dose distributions in tissue-equivalent spherical and anthropomorphic phantoms and for an additional shielding installed in the compartment are calculated. The data obtained in the experiment and the calculated data agree to within about 15%. Thus the calculation method used has been successfully verified with the MATROSHKA-R experiment data. The ray-tracing radiation dose calculation method can be recommended for estimating the dose distribution in an astronaut's body in different space station compartments and for estimating the efficiency of additional shielding, especially when the exact compartment shielding geometry and the radiation environment for the planned mission are not known.
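
    The essence of the method is a quadrature over directions: for each ray from the dose point, accumulate the shielding thickness crossed, look the result up on a depth-dose curve, and average over solid angle. The sketch below does this with an equal-solid-angle set of rays and a placeholder depth-dose curve and thickness distribution; none of the numbers are MATROSHKA-R data.

      import numpy as np

      # Placeholder depth-dose curve: dose rate behind a given areal density of shielding.
      depth_g_cm2 = np.array([0.0, 1.0, 5.0, 10.0, 20.0, 50.0])
      dose_rate = np.array([1.00, 0.60, 0.30, 0.20, 0.12, 0.05])   # mGy/day, illustrative

      def dose_at_point(thickness_along_rays):
          # Average the depth-dose curve over rays assumed to carry equal solid angle.
          return float(np.mean(np.interp(thickness_along_rays, depth_g_cm2, dose_rate)))

      # Toy shielding distribution: 1000 isotropic rays through a 2 g/cm^2 wall,
      # with 20% of directions passing through an extra 10 g/cm^2 of equipment.
      rng = np.random.default_rng(4)
      thickness = np.where(rng.random(1000) < 0.2, 12.0, 2.0)
      print(dose_at_point(thickness))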

  14. Lunar Surface Reactor Shielding Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Shawn; McAlpine, William; Lipinski, Ronald

    A nuclear reactor system could provide power to support long-term human exploration of the moon. Such a system would require shielding to protect astronauts from its emitted radiation. Shielding studies have been performed for a Gas Cooled Reactor system because it is considered to be the most suitable nuclear reactor system available for lunar exploration, based on its tolerance of oxidizing lunar regolith and its good conversion efficiency. The goals of the shielding studies were to determine a material shielding configuration that reduces the dose (rem) to the level required to protect astronauts, and to estimate the mass of regolith that would provide an equivalent protective effect if it were used as the shielding material. All calculations were performed using MCNPX, a Monte Carlo transport code. Lithium hydride must be kept between 600 K and 700 K to prevent excessive swelling from large amounts of gamma or neutron irradiation. The issue is that radiation damage causes separation of the lithium and the hydrogen, resulting in lithium metal and hydrogen gas. The proposed design uses a layer of B4C to reduce the combined neutron and gamma dose to below 0.5 Grad before the LiH is introduced. Below 0.5 Grad the swelling in LiH is small (less than about 1%) for all temperatures. This approach makes the shield heavier than if the B4C were replaced by LiH, but it makes the shield much more robust and reliable.

  15. Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.

    2001-01-01

    An efficient incremental-iterative approach for differentiating advanced flow codes is successfully demonstrated on a 2D inviscid model problem. The method employs the reverse-mode capability of the automatic-differentiation software tool ADIFOR 3.0, and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives be calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient non-iterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave-drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.

  16. Assimilating Remote Ammonia Observations with a Refined Aerosol Thermodynamics Adjoint"

    EPA Science Inventory

    Ammonia emission parameters in North America can be refined in order to improve the evaluation of modeled concentrations against observations. Here, we seek to do so by developing and applying the GEOS-Chem adjoint nested over North America to conduct assimilation of observations...

  17. Extending the Binomial Checkpointing Technique for Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walther, Andrea; Narayanan, Sri Hari Krishna

    In terms of computing time, adjoint methods offer a very attractive alternative for computing gradient information, required, e.g., for optimization purposes. However, together with this very favorable temporal complexity result comes a memory requirement that is in essence proportional to the operation count of the underlying function, e.g., if algorithmic differentiation is used to provide the adjoints. For this reason, checkpointing approaches in many variants have become popular. This paper analyzes an extension of the so-called binomial approach to also cover possible failures of the computing systems. Such a measure of precaution is of special interest for massively parallel simulations and adjoint calculations where the mean time between failures of the large-scale computing system is smaller than the time needed to complete the calculation of the adjoint information. We describe the extensions of standard checkpointing approaches required for such resilience, provide a corresponding implementation and discuss numerical results.

  18. Adjoint Sensitivity Analysis for Scale-Resolving Turbulent Flow Solvers

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Garai, Anirban; Diosady, Laslo; Murman, Scott

    2017-11-01

    Adjoint-based sensitivity analysis methods are powerful design tools for engineers who use computational fluid dynamics. In recent years, these engineers have started to use scale-resolving simulations like large-eddy simulations (LES) and direct numerical simulations (DNS), which resolve more scales in complex flows with unsteady separation and jets than the widely-used Reynolds-averaged Navier-Stokes (RANS) methods. However, the conventional adjoint method computes large, unusable sensitivities for scale-resolving simulations, which unlike RANS simulations exhibit the chaotic dynamics inherent in turbulent flows. Sensitivity analysis based on least-squares shadowing (LSS) avoids the issues encountered by conventional adjoint methods, but has a high computational cost even for relatively small simulations. The following talk discusses a more computationally efficient formulation of LSS, "non-intrusive" LSS, and its application to turbulent flows simulated with a discontinuous-Galerkin spectral-element-method LES/DNS solver. Results are presented for the minimal flow unit, a turbulent channel flow with a limited streamwise and spanwise domain.

  19. A hybrid (Monte Carlo/deterministic) approach for multi-dimensional radiation transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bal, Guillaume, E-mail: gb2030@columbia.edu; Davis, Anthony B., E-mail: Anthony.B.Davis@jpl.nasa.gov; Kavli Institute for Theoretical Physics, Kohn Hall, University of California, Santa Barbara, CA 93106-4030

    2011-08-20

    Highlights: We introduce a variance reduction scheme for Monte Carlo (MC) transport. The primary application is atmospheric remote sensing. The technique first solves the adjoint problem using a deterministic solver. Next, the adjoint solution is used as an importance function for the MC solver. The adjoint problem is solved quickly since it ignores the volume. - Abstract: A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently acceleration) is achieved in the presence of atmospheric interactions.
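
    A minimal Python sketch of the underlying idea, source biasing with an adjoint-style importance function and weight correction, on a toy 1-D problem; the geometry, importance function and tallies are purely illustrative:

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy 1-D "scene": a broad source and a small detector region near x = 4.
        x = np.linspace(0.0, 5.0, 501)
        source_pdf = np.exp(-x)
        source_pdf /= source_pdf.sum()                    # discrete source distribution
        contribution = np.exp(-3.0 * np.abs(x - 4.0))     # detector response per source particle

        exact = np.dot(source_pdf, contribution)

        def estimate(biased, n=20000):
            if biased:
                # Adjoint-style importance: bias source sampling toward cells with a
                # high contribution, using the contribution itself as a stand-in for
                # a cheap deterministic adjoint solution.
                q = source_pdf * contribution
                q /= q.sum()
            else:
                q = source_pdf
            idx = rng.choice(x.size, size=n, p=q)
            weights = source_pdf[idx] / q[idx]            # restores an unbiased estimate
            scores = weights * contribution[idx]
            return scores.mean(), scores.std(ddof=1) / np.sqrt(n)

        print("exact          :", exact)
        print("analog MC      :", estimate(False))
        print("adjoint-biased :", estimate(True))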

  20. Neural Network Training by Integration of Adjoint Systems of Equations Forward in Time

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad (Inventor); Barhen, Jacob (Inventor)

    1999-01-01

    A method and apparatus for supervised neural learning of time-dependent trajectories exploits the concepts of adjoint operators to enable computation of the gradient of an objective functional with respect to the various parameters of the network architecture in a highly efficient manner. Specifically, it combines the advantage of dramatic reductions in computational complexity inherent in adjoint methods with the ability to solve two adjoint systems of equations together forward in time. Not only is a large amount of computation and storage saved, but the handling of real-time applications also becomes possible. The invention has been applied to two examples of representative complexity which have recently been analyzed in the open literature, demonstrating that a circular trajectory can be learned in approximately 200 iterations compared to the 12000 reported in the literature. A figure eight trajectory was achieved in under 500 iterations compared to 20000 previously required. The trajectories computed using our new method are much closer to the target trajectories than was reported in previous studies.

  1. Neural network training by integration of adjoint systems of equations forward in time

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad (Inventor); Barhen, Jacob (Inventor)

    1992-01-01

    A method and apparatus for supervised neural learning of time-dependent trajectories exploits the concepts of adjoint operators to enable computation of the gradient of an objective functional with respect to the various parameters of the network architecture in a highly efficient manner. Specifically, it combines the advantage of dramatic reductions in computational complexity inherent in adjoint methods with the ability to solve two adjoint systems of equations together forward in time. Not only is a large amount of computation and storage saved, but the handling of real-time applications also becomes possible. The invention has been applied to two examples of representative complexity which have recently been analyzed in the open literature, demonstrating that a circular trajectory can be learned in approximately 200 iterations compared to the 12000 reported in the literature. A figure eight trajectory was achieved in under 500 iterations compared to 20000 previously required. The trajectories computed using our new method are much closer to the target trajectories than was reported in previous studies.

  2. Classical gluon and graviton radiation from the bi-adjoint scalar double copy

    NASA Astrophysics Data System (ADS)

    Goldberger, Walter D.; Prabhu, Siddharth G.; Thompson, Jedidiah O.

    2017-09-01

    We find double-copy relations between classical radiating solutions in Yang-Mills theory coupled to dynamical color charges and their counterparts in a cubic bi-adjoint scalar field theory which interacts linearly with particles carrying bi-adjoint charge. The particular color-to-kinematics replacements we employ are motivated by the Bern-Carrasco-Johansson double-copy correspondence for on-shell amplitudes in gauge and gravity theories. They are identical to those recently used to establish relations between classical radiating solutions in gauge theory and in dilaton gravity. Our explicit bi-adjoint solutions are constructed to second order in a perturbative expansion, and map under the double copy onto gauge theory solutions which involve at most cubic gluon self-interactions. If the correspondence is found to persist to higher orders in perturbation theory, our results suggest the possibility of calculating gravitational radiation from colliding compact objects, directly from a scalar field with vastly simpler (purely cubic) Feynman vertices.

  3. Reduction by symmetries in singular quantum-mechanical problems: General scheme and application to Aharonov-Bohm model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smirnov, A. G., E-mail: smirnov@lpi.ru

    2015-12-15

    We develop a general technique for finding self-adjoint extensions of a symmetric operator that respects a given set of its symmetries. Problems of this type naturally arise when considering two- and three-dimensional Schrödinger operators with singular potentials. The approach is based on constructing a unitary transformation diagonalizing the symmetries and reducing the initial operator to the direct integral of a suitable family of partial operators. We prove that symmetry preserving self-adjoint extensions of the initial operator are in a one-to-one correspondence with measurable families of self-adjoint extensions of partial operators obtained by reduction. The general scheme is applied to the three-dimensional Aharonov-Bohm Hamiltonian describing the electron in the magnetic field of an infinitely thin solenoid. We construct all self-adjoint extensions of this Hamiltonian, invariant under translations along the solenoid and rotations around it, and explicitly find their eigenfunction expansions.

  4. 3D CSEM inversion based on goal-oriented adaptive finite element method

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Key, K.

    2016-12-01

    We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt the dual-grid approach where the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent from the fine meshing generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multiple scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the map of conductivity and sensitivity kernels between the forward and inverse mesh is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability is obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and the potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled-source EM data generated for a complex 3D offshore model with significant seafloor topography.
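
    A minimal Python sketch of the regularized Gauss-Newton (Occam-style) model update used in such inversions, on a small synthetic problem; here the Jacobian is formed explicitly, whereas MARE3DEM obtains it from adjoint-reciprocity solves that reuse the forward factorization. All operators and data below are illustrative stand-ins:

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy forward model d = G(m) with a mild nonlinearity.
        G_lin = rng.normal(size=(40, 10))

        def forward(m):
            return G_lin @ m + 0.05 * (G_lin @ m) ** 2

        def jacobian(m):
            return G_lin + 0.1 * (G_lin @ m)[:, None] * G_lin

        m_true = rng.normal(size=10)
        d_obs = forward(m_true) + 0.01 * rng.normal(size=40)

        # First-difference regularisation operator (smoothness constraint).
        R = np.eye(10) - np.eye(10, k=1)
        lam = 1.0

        m = np.zeros(10)
        for it in range(10):
            r = d_obs - forward(m)
            J = jacobian(m)
            # Regularised Gauss-Newton step:
            #   (J^T J + lam R^T R) dm = J^T r - lam R^T R m
            A = J.T @ J + lam * R.T @ R
            b = J.T @ r - lam * (R.T @ R) @ m
            m = m + np.linalg.solve(A, b)
            print(it, np.linalg.norm(r))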

  5. Development of WRF-CO2 4DVAR Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Zheng, T.; French, N. H. F.

    2016-12-01

    Four-dimensional variational (4DVar) assimilation systems have been widely used for CO2 inverse modeling at the global scale. At the regional scale, however, 4DVar assimilation systems have been lacking. At present, most regional CO2 inverse models use Lagrangian particle backward trajectory tools to compute the influence function in an analytical/synthesis framework. To provide a 4DVar-based alternative, we developed WRF-CO2 4DVAR based on the Weather Research and Forecasting (WRF) model, its chemistry extension (WRF-Chem), and its data assimilation system (WRFDA/WRFPLUS). Different from WRFDA, WRF-CO2 4DVAR does not optimize the meteorological initial condition; instead it solves for the optimized CO2 surface fluxes (sources/sinks) constrained by atmospheric CO2 observations. Based on WRFPLUS, we developed tangent linear and adjoint code for CO2 emission, advection, vertical mixing in the boundary layer, and convective transport. Furthermore, we implemented an incremental algorithm to solve for optimized CO2 emission scaling factors by iteratively minimizing the cost function in a Bayesian framework. The model sensitivity (of atmospheric CO2 with respect to the emission scaling factors) calculated by the tangent linear and adjoint models agrees well with that calculated by finite differences, indicating the validity of the newly developed code. The effectiveness of WRF-CO2 4DVar for inverse modeling is tested using forward-model-generated pseudo-observation data in two experiments: the first-guess CO2 fluxes have a 50% overestimation in the first case and a 50% underestimation in the second. In both cases, WRF-CO2 4DVar reduces the cost function to less than 10^-4 of its initial value in fewer than 20 iterations and successfully recovers the true values of the emission scaling factors. We expect future applications of WRF-CO2 4DVar with satellite observations to provide insights for CO2 regional inverse modeling, including the impacts of model transport error in vertical mixing.
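
    A minimal Python sketch of the kind of validity check reported above, comparing an adjoint-computed gradient of the cost function with finite differences for a toy linear observation operator; the matrices and pseudo-observations are stand-ins, not the WRF-CO2 operators:

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy "transport + observation" operator: CO2 anomalies y = H s for emission
        # scaling factors s; since H is linear, the tangent-linear model is H itself
        # and the adjoint is H^T.
        H = rng.normal(size=(30, 8))
        s0 = np.ones(8)
        y_obs = H @ (1.5 * np.ones(8))     # pseudo-observations from "true" factors of 1.5
        R_inv = np.eye(30)

        def cost(s):
            r = H @ s - y_obs
            return 0.5 * r @ R_inv @ r

        def grad_adjoint(s):
            # Adjoint of the observation operator applied to the weighted residual.
            return H.T @ (R_inv @ (H @ s - y_obs))

        # Finite-difference check of the adjoint gradient, as in the abstract.
        g = grad_adjoint(s0)
        eps = 1e-6
        g_fd = np.array([(cost(s0 + eps * e) - cost(s0 - eps * e)) / (2 * eps)
                         for e in np.eye(8)])
        print("max |adjoint - finite difference| =", np.abs(g - g_fd).max())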

  6. Use of Existing CAD Models for Radiation Shielding Analysis

    NASA Technical Reports Server (NTRS)

    Lee, K. T.; Barzilla, J. E.; Wilson, P.; Davis, A.; Zachman, J.

    2015-01-01

    The utility of a radiation exposure analysis depends not only on the accuracy of the underlying particle transport code, but also on the accuracy of the geometric representations of both the vehicle used as radiation shielding mass and the phantom representation of the human form. The current NASA/Space Radiation Analysis Group (SRAG) process to determine crew radiation exposure in a vehicle design incorporates both output from an analytic High Z and Energy Particle Transport (HZETRN) code and the properties (i.e., material thicknesses) of a previously processed drawing. This geometry pre-process can be time-consuming, and the results are less accurate than those determined using a Monte Carlo-based particle transport code. The current work aims to improve this process. Although several Monte Carlo programs (FLUKA, Geant4) are readily available, most use an internal geometry engine. The lack of an interface with the standard CAD formats used by the vehicle designers limits the ability of the user to communicate complex geometries. Translation of native CAD drawings into a format readable by these transport programs is time consuming and prone to error. The Direct Accelerated Geometry-United (DAGU) project is intended to provide an interface between the native vehicle or phantom CAD geometry and multiple particle transport codes to minimize problem setup, computing time and analysis error.

  7. Experimental approach to measure thick target neutron yields induced by heavy ions for shielding

    NASA Astrophysics Data System (ADS)

    Trinh, N. D.; Fadil, M.; Lewitowicz, M.; Brouillard, C.; Clerc, T.; Damoy, S.; Desmezières, V.; Dessay, E.; Dupuis, M.; Grinyer, G. F.; Grinyer, J.; Jacquot, B.; Ledoux, X.; Madeline, A.; Menard, N.; Michel, M.; Morel, V.; Porée, F.; Rannou, B.; Savalle, A.

    2017-09-01

    Double differential (angular and energy) neutron distributions were measured using an activation foil technique. Reactions were induced by impinging two low-energy heavy-ion beams, 36S (12 MeV/u) and 208Pb (6.25 MeV/u), accelerated with the GANIL CSS1 cyclotron onto thick natCu targets. Results have been compared to Monte-Carlo calculations from two codes (PHITS and FLUKA) for the purpose of benchmarking radiation protection and shielding requirements. This comparison suggests a disagreement between calculations and experiment, particularly for high-energy neutrons.

  8. Monte Carlo analysis of a time-dependent neutron and secondary gamma-ray integral experiment on a thick concrete and steel shield

    NASA Astrophysics Data System (ADS)

    Cramer, S. N.; Roussin, R. W.

    1981-11-01

    A Monte Carlo analysis of a time-dependent neutron and secondary gamma-ray integral experiment on a thick concrete and steel shield is presented. The energy range covered in the analysis is 15-2 MeV for neutron source energies. The multigroup MORSE code was used with the VITAMIN C 171-36 neutron-gamma-ray cross-section data set. Both neutron and gamma-ray count rates and unfolded energy spectra are presented and compared with experimental results, showing good general agreement.

  9. Recent Developments in Three Dimensional Radiation Transport Using the Green's Function Technique

    NASA Technical Reports Server (NTRS)

    Rockell, Candice; Tweed, John; Blattnig, Steve R.; Mertens, Christopher J.

    2010-01-01

    In the future, astronauts will be sent into space for longer durations of time compared to previous missions. The increased risk of exposure to dangerous radiation, such as Galactic Cosmic Rays and Solar Particle Events, is of great concern. Consequently, steps must be taken to ensure astronaut safety by providing adequate shielding. In order to better determine and verify shielding requirements, an accurate and efficient radiation transport code based on a fully three-dimensional radiation transport model using the Green's function technique is being developed.

  10. Space Debris Surfaces - Probability of no penetration versus impact velocity and obliquity

    NASA Technical Reports Server (NTRS)

    Elfer, N.; Meibaum, R.; Olsen, G.

    1992-01-01

    A collection of computer codes called Space Debris Surfaces (SD-SURF) has been developed to assist in the design and analysis of space debris protection systems. An SD-SURF analysis will show which obliquities and velocities are most likely to cause a penetration, helping the analyst select a shield design best suited to the predominant penetration mechanism. Examples of the interaction between space vehicle geometry, the space debris environment, and the penetration and critical damage ballistic limit surfaces of the shield under consideration are presented.

  11. Galactic cosmic ray transport methods and radiation quality issues

    NASA Technical Reports Server (NTRS)

    Townsend, L. W.; Wilson, J. W.; Cucinotta, F. A.; Shinn, J. L.

    1992-01-01

    An overview of galactic cosmic ray (GCR) interaction and transport methods, as implemented in the Langley Research Center GCR transport code, is presented. Representative results for solar minimum, exo-magnetospheric GCR dose equivalents in water are presented on a component-by-component basis for various thicknesses of aluminum shielding. The impact of proposed changes to the currently used quality factors on exposure estimates and shielding requirements is quantified. Using the cellular track model of Katz, estimates of relative biological effectiveness (RBE) for the mixed GCR radiation fields are also made.

  12. Suitability of point kernel dose calculation techniques in brachytherapy treatment planning

    PubMed Central

    Lakshminarayanan, Thilagam; Subbaiah, K. V.; Thayalan, K.; Kannan, S. E.

    2010-01-01

    A brachytherapy treatment planning system (TPS) is necessary to estimate the dose to the target volume and organs at risk (OAR). A TPS is always recommended to account for the effects of the tissue, applicator and shielding material heterogeneities that exist in applicators. However, most brachytherapy TPS software packages estimate the absorbed dose at a point, taking care of only the contributions of the individual sources and the source distribution, neglecting the dose perturbations arising from the applicator design and construction. There are some degrees of uncertainty in dose rate estimation under realistic clinical conditions. In this regard, an attempt is made to explore the suitability of point kernels for brachytherapy dose rate calculations and to develop a new interactive brachytherapy package, named BrachyTPS, to suit clinical conditions. BrachyTPS is an interactive point kernel code package developed to perform independent dose rate calculations by taking into account the effect of these heterogeneities, using the two-region build-up factors proposed by Kalos. The primary aim of this study is to validate the developed point kernel code package, integrated with treatment planning computational systems, against Monte Carlo (MC) results. In the present work, three brachytherapy applicators commonly used in the treatment of uterine cervical carcinoma, namely (i) the Board of Radiation Isotope and Technology (BRIT) low dose rate (LDR) applicator, (ii) the Fletcher Green type LDR applicator and (iii) the Fletcher Williamson high dose rate (HDR) applicator, are studied to test the accuracy of the software. Dose rates computed using the developed code are compared with the relevant results of the MC simulations. Further, attempts are also made to study the dose rate distribution around a commercially available shielded vaginal applicator set (Nucletron). The percentage deviations of BrachyTPS-computed dose rate values from the MC results are observed to be within ±5.5% for the BRIT LDR applicator, vary from 2.6 to 5.1% for the Fletcher Green type LDR applicator, and are up to -4.7% for the Fletcher-Williamson HDR applicator. The isodose distribution plots also show good agreement with previously published results. The isodose distributions around the shielded vaginal cylinder computed using the BrachyTPS code show better agreement (less than two per cent deviation) with MC results in the unshielded region compared to the shielded region, where the deviations are observed to be up to five per cent. The present study implies that accurate and fast validation of complicated treatment planning calculations is possible with the point kernel code package. PMID:20589118
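
    A minimal Python sketch of a point-kernel estimate with a build-up correction, the basic ingredient behind such packages; the attenuation coefficient and the linear build-up model are illustrative and are not the two-region Kalos factors used by BrachyTPS:

        import numpy as np

        def point_kernel_flux(activity_bq, mu_cm, r_cm, gamma_per_decay=1.0, buildup=None):
            """Point-kernel estimate of the photon flux at distance r from an
            isotropic point source behind an attenuating medium:
                phi = S * B(mu*r) * exp(-mu*r) / (4*pi*r^2).
            `buildup` is a callable B(mu_r); defaults to 1 (uncollided flux only)."""
            S = activity_bq * gamma_per_decay
            mur = mu_cm * r_cm
            B = 1.0 if buildup is None else buildup(mur)
            return S * B * np.exp(-mur) / (4.0 * np.pi * r_cm ** 2)

        # Example with a simple linear build-up model B = 1 + a*mu*r
        # (illustrative coefficients only).
        phi = point_kernel_flux(3.7e10, mu_cm=0.09, r_cm=30.0,
                                buildup=lambda mur: 1.0 + 0.9 * mur)
        print(phi, "photons / cm^2 / s")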

  13. The Low-Noise Potential of Distributed Propulsion on a Catamaran Aircraft

    NASA Technical Reports Server (NTRS)

    Posey, Joe W.; Tinetti, A. F.; Dunn, M. H.

    2006-01-01

    The noise shielding potential of an inboard-wing catamaran aircraft when coupled with distributed propulsion is examined. Here, only low-frequency jet noise from mid-wing-mounted engines is considered. Because low frequencies are the most difficult to shield, these calculations put a lower bound on the potential shielding benefit. In this proof-of-concept study, simple physical models are used to describe the 3-D scattering of jet noise by conceptualized catamaran aircraft. The Fast Scattering Code (FSC) is used to predict noise levels on and about the aircraft. Shielding results are presented for several catamaran-type geometries and simple noise source configurations representative of distributed propulsion radiation. Computational analyses are presented that demonstrate the shielding benefits of distributed propulsion and of increasing the width of the inboard wing. Also, sample calculations using the FSC are presented that demonstrate additional noise reduction on the aircraft fuselage by the use of acoustic liners on the inboard wing trailing edge. A full conceptual aircraft design would have to be analyzed over a complete mission to more accurately quantify community noise levels and aircraft performance, but the present shielding calculations show that a large acoustic benefit could be achieved by combining distributed propulsion and liner technology with a twin-fuselage planform.

  14. Mitigating the Effects of the Space Radiation Environment: A Novel Approach of Using Graded-Z Materials

    NASA Technical Reports Server (NTRS)

    Atwell, William; Rojdev, Kristina; Aghara, Sukesh; Sriprisan, Sirikul

    2013-01-01

    In this paper we present a novel space radiation shielding approach using various material lay-ups, called "Graded-Z" shielding, which could optimize cost, weight, and safety while mitigating the radiation exposures from the trapped radiation and solar proton environments, as well as the galactic cosmic radiation (GCR) environment, to humans and electronics. In addition, a validation and verification (V&V) was performed using two different high energy particle transport/dose codes (MCNPX & HZETRN). Inherently, we know that materials having high hydrogen content are very good space radiation shielding materials. Graded-Z material lay-ups are very good trapped electron mitigators for medium earth orbit (MEO) and geostationary earth orbit (GEO). In addition, secondary particles, namely neutrons, are produced as the primary particles penetrate a spacecraft, which can have deleterious effects on both humans and electronics. The use of "dopants," such as beryllium, boron, and lithium, impregnated in other shielding materials provides a means of absorbing the secondary neutrons. Several examples of optimized Graded-Z shielding lay-ups that include the use of composite materials are presented and discussed in detail. This parametric shielding study is an extension of earlier pioneering work we (William Atwell and Kristina Rojdev) performed in 2004 and 2009.

  15. Adjoint-Based Uncertainty Quantification with MCNP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seifried, Jeffrey E.

    2011-09-01

    This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
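
    A minimal Python sketch of the sandwich-rule propagation that adjoint-based sensitivities enable, var(k) = S^T C S, where S holds the relative sensitivities and C the relative covariance of the nuclear data; the numbers below are illustrative, not the LIFE-blanket values:

        import numpy as np

        S = np.array([0.30, -0.12, 0.05, 0.02])          # dk/k per dsigma/sigma
        C = np.diag([0.02, 0.03, 0.05, 0.10]) ** 2       # relative covariance matrix
        C[0, 1] = C[1, 0] = 0.5 * 0.02 * 0.03            # one correlated pair (corr = 0.5)

        rel_var = S @ C @ S                              # sandwich rule
        print("relative uncertainty in k: %.3f%%" % (100 * np.sqrt(rel_var)))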

  16. Adjoint shape optimization for fluid-structure interaction of ducted flows

    NASA Astrophysics Data System (ADS)

    Heners, J. P.; Radtke, L.; Hinze, M.; Düster, A.

    2018-03-01

    Based on the coupled problem of time-dependent fluid-structure interaction, equations for an appropriate adjoint problem are derived by consistent use of the formal Lagrange calculus. Solutions of both the primal and adjoint equations are computed in a partitioned fashion and enable the formulation of a surface sensitivity. This sensitivity is used in the context of a steepest descent algorithm for the computation of the required gradient of an appropriate cost functional. The efficiency of the developed optimization approach is demonstrated by minimization of the pressure drop in a simple two-dimensional channel flow and in a three-dimensional ducted flow surrounded by a thin-walled structure.
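
    A minimal Python sketch of the optimization loop described above, with stand-in primal and adjoint solves replacing the coupled fluid-structure computations; the objective, gradient and step size are illustrative only:

        import numpy as np

        def primal_solve(shape):
            # Stand-in for the coupled flow/structure solve; returns a surrogate
            # "pressure drop" that is minimal for an (unknown) target shape.
            target = np.linspace(0.0, 1.0, shape.size)
            return np.sum((shape - target) ** 2)

        def adjoint_sensitivity(shape):
            # Stand-in for the adjoint solve; here simply the analytic gradient
            # of the surrogate objective with respect to the shape parameters.
            target = np.linspace(0.0, 1.0, shape.size)
            return 2.0 * (shape - target)

        shape = np.zeros(20)        # design variables parameterising the duct surface
        step = 0.2
        for it in range(50):
            J = primal_solve(shape)             # primal (cost functional)
            dJ = adjoint_sensitivity(shape)     # surface sensitivity from the adjoint
            shape -= step * dJ                  # steepest descent update
        print("final pressure-drop surrogate:", primal_solve(shape))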

  17. Stratospheric Water Vapor and the Asian Monsoon: An Adjoint Model Investigation

    NASA Technical Reports Server (NTRS)

    Olsen, Mark A.; Andrews, Arlyn E.

    2003-01-01

    A new adjoint model of the Goddard Parameterized Chemistry and Transport Model is used to investigate the role that the Asian monsoon plays in transporting water to the stratosphere. The adjoint model provides a unique perspective compared to non-diffusive and non-mixing Lagrangian trajectory analysis. The quantity of water vapor transported from the monsoon and the pathways into the stratosphere are examined. The emphasis is on the amount of water originating from the monsoon that contributes to the tropical tape recorder signal. The cross-tropopause flux of water from the monsoon to the midlatitude lower stratosphere will also be discussed.

  18. NEMOTAM: tangent and adjoint models for the ocean modelling platform NEMO

    NASA Astrophysics Data System (ADS)

    Vidard, A.; Bouttier, P.-A.; Vigilant, F.

    2015-04-01

    Tangent linear and adjoint models (TAMs) are efficient tools to analyse and to control dynamical systems such as NEMO. They can be involved in a large range of applications such as sensitivity analysis, parameter estimation or the computation of characteristic vectors. A TAM is also required by the 4D-Var algorithm, which is one of the major methods in data assimilation. This paper describes the development and the validation of the tangent linear and adjoint model for the NEMO ocean modelling platform (NEMOTAM). The diagnostic tools that are available alongside NEMOTAM are detailed and discussed, and several applications are also presented.

  19. NEMOTAM: tangent and adjoint models for the ocean modelling platform NEMO

    NASA Astrophysics Data System (ADS)

    Vidard, A.; Bouttier, P.-A.; Vigilant, F.

    2014-10-01

    Tangent linear and adjoint models (TAMs) are efficient tools to analyse and to control dynamical systems such as NEMO. They can be involved in a large range of applications such as sensitivity analysis, parameter estimation or the computation of characteristic vectors. A TAM is also required by the 4D-Var algorithm, which is one of the major methods in data assimilation. This paper describes the development and the validation of the tangent linear and adjoint model for the NEMO ocean modelling platform (NEMOTAM). The diagnostic tools that are available alongside NEMOTAM are detailed and discussed, and several applications are also presented.
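
    A minimal Python sketch of one standard TAM diagnostic of the kind referred to above, the adjoint (dot-product) identity <M dx, y> = <dx, M^T y>; a small random matrix stands in for the NEMO tangent-linear propagator:

        import numpy as np

        rng = np.random.default_rng(3)

        n = 50
        M = rng.normal(size=(n, n))            # tangent-linear operator (toy stand-in)
        M_adj = M.T                            # its adjoint

        dx = rng.normal(size=n)                # arbitrary perturbation
        y = rng.normal(size=n)                 # arbitrary adjoint-space vector

        lhs = np.dot(M @ dx, y)
        rhs = np.dot(dx, M_adj @ y)
        # The identity should hold to machine precision for any dx, y.
        print("relative mismatch:", abs(lhs - rhs) / abs(lhs))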

  20. On universal knot polynomials

    NASA Astrophysics Data System (ADS)

    Mironov, A.; Mkrtchyan, R.; Morozov, A.

    2016-02-01

    We present universal knot polynomials for 2- and 3-strand torus knots in the adjoint representation, by universalization of the appropriate Rosso-Jones formula. According to universality, these polynomials coincide with the adjoint-colored HOMFLY and Kauffman polynomials at the SL and SO/Sp lines on Vogel's plane, respectively, and give their exceptional-group counterparts on the exceptional line. We demonstrate that the [m,n]=[n,m] topological invariance, when applicable, takes place on the entire Vogel's plane. We also suggest the universal form of the invariant of the figure-eight knot in the adjoint representation, and suggest the existence of such a universalization for any knot in the adjoint and its descendant representations. Properties of the universal polynomials and applications of these results are discussed.

  1. Prompt radiation, shielding and induced radioactivity in a high-power 160 MeV proton linac

    NASA Astrophysics Data System (ADS)

    Magistris, Matteo; Silari, Marco

    2006-06-01

    CERN is designing a 160 MeV proton linear accelerator, both for a future intensity upgrade of the LHC and as a possible first stage of a 2.2 GeV superconducting proton linac. A first estimate of the required shielding was obtained by means of a simple analytical model. The source terms and the attenuation lengths used in the present study were calculated with the Monte Carlo cascade code FLUKA. Detailed FLUKA simulations were performed to investigate the contribution of neutron skyshine and backscattering to the expected dose rate in the areas around the linac tunnel. An estimate of the induced radioactivity in the magnets, vacuum chamber, the cooling system and the concrete shield was performed. A preliminary thermal study of the beam dump is also discussed.

  2. Metal Hydrides, MOFs, and Carbon Composites as Space Radiation Shielding Mitigators

    NASA Technical Reports Server (NTRS)

    Atwell, William; Rojdev, Kristina; Liang, Daniel; Hill, Matthew

    2014-01-01

    Recently, metal hydrides and MOFs (Metal-Organic Framework/microporous organic polymer composites - for their hydrogen and methane storage capabilities) have been studied with applications in fuel cell technology. We have investigated a dual-use of these materials and carbon composites (CNT-HDPE) to include space radiation shielding mitigation. In this paper we present the results of a detailed study where we have analyzed 64 materials. We used the Band fit spectra for the combined 19-24 October 1989 solar proton events as the input source term radiation environment. These computational analyses were performed with the NASA high energy particle transport/dose code HZETRN. Through this analysis we have identified several of the materials that have excellent radiation shielding properties and the details of this analysis will be discussed further in the paper.

  3. Shielding design for the front end of the CERN SPL.

    PubMed

    Magistris, Matteo; Silari, Marco; Vincke, Helmut

    2005-01-01

    CERN is designing a 2.2-GeV Superconducting Proton Linac (SPL) with a beam power of 4 MW, to be used for the production of a neutrino superbeam. The SPL front end will initially accelerate 2 x 10^14 negative hydrogen ions per second up to an energy of 120 MeV. The FLUKA Monte Carlo code was employed for shielding design. The proposed shielding is a combined iron-concrete structure, which also takes into consideration the required RF wave-guide ducts and access labyrinths to the machine. Two beam-loss scenarios were investigated: (1) constant beam loss of 1 W m^-1 over the whole accelerator length and (2) full beam loss occurring at various locations. A comparison with results based on simplified approaches is also presented.

  4. Adjoint Sensitivity Analysis of Radiative Transfer Equation: Temperature and Gas Mixing Ratio Weighting Functions for Remote Sensing of Scattering Atmospheres in Thermal IR

    NASA Technical Reports Server (NTRS)

    Ustinov, E.

    1999-01-01

    Sensitivity analysis based on the use of the adjoint equation of radiative transfer is applied to the case of atmospheric remote sensing in the thermal spectral region with non-negligible atmospheric scattering.

  5. Toward regional-scale adjoint tomography in the deep earth

    NASA Astrophysics Data System (ADS)

    Masson, Y.; Romanowicz, B. A.

    2013-12-01

    Thanks to the development of efficient numerical computation methods, such as the Spectral Element Method (SEM), and to the increasing power of computer clusters, it is now possible to obtain regional-scale images of the Earth's interior using adjoint tomography (e.g. Tape, C., et al., 2009). For now, these tomographic models are limited to the upper layers of the Earth, i.e., they provide us with high-resolution images of the crust and the upper part of the mantle. Given the gigantic amount of computation it represents, obtaining similar models at the global scale (i.e. images of the entire Earth) seems out of reach at the moment. Furthermore, it is likely that the first generation of such global adjoint tomographic models will have a resolution significantly smaller than the current regional models. In order to image regions of interest in the deep Earth, such as plumes, slabs or large low shear velocity provinces (LLSVPs), while keeping the computation tractable, we are developing new tools that will allow us to perform regional-scale adjoint tomography at arbitrary depths. In a recent study (Masson et al., 2013), we showed that a numerical equivalent of the time-reversal mirrors used in experimental acoustics makes it possible to confine the wave propagation computations (i.e. using SEM simulations) inside the region to be imaged. With this ability to limit wave propagation modeling to a region of interest, obtaining the adjoint sensitivity kernels needed for tomographic imaging is only two steps further. First, the local wavefield modeling needs to be coupled with field extrapolation techniques in order to obtain synthetic seismograms at the surface of the Earth. These seismograms will account for the 3D structure inside the region of interest in a quasi-exact manner. We will present preliminary results where the field extrapolation is performed using Green's functions computed in a 1D Earth model thanks to the Direct Solution Method (DSM). Once synthetic seismograms can be obtained, it is possible to evaluate the misfit between observed and computed seismograms. The second step will then be to extrapolate the misfit function back into the SEM region in order to compute local adjoint sensitivity kernels. When available, these kernels will allow us to perform regional-scale adjoint tomography at arbitrary locations inside the Earth. Masson Y., Cupillard P., Capdeville Y., & Romanowicz B., 2013. On the numerical implementation of time-reversal mirrors for tomographic imaging, Journal of Geophysical Research (under review). Tape, C., et al. (2009). "Adjoint tomography of the southern California crust." Science 325(5943): 988-992.

  6. Heavy ion contributions to organ dose equivalent for the 1977 galactic cosmic ray spectrum

    NASA Astrophysics Data System (ADS)

    Walker, Steven A.; Townsend, Lawrence W.; Norbury, John W.

    2013-05-01

    Estimates of organ dose equivalents for the skin, eye lens, blood forming organs, central nervous system, and heart of female astronauts from exposures to the 1977 solar minimum galactic cosmic radiation spectrum for various shielding geometries involving simple spheres and locations within the Space Transportation System (space shuttle) and the International Space Station (ISS) are made using the HZETRN 2010 space radiation transport code. The dose equivalent contributions are broken down by charge groups in order to better understand the sources of the exposures to these organs. For thin shields, contributions from ions heavier than alpha particles comprise at least half of the organ dose equivalent. For thick shields, such as the ISS locations, heavy ions contribute less than 30% and in some cases less than 10% of the organ dose equivalent. Secondary neutron production contributions in thick shields also tend to be as large, or larger, than the heavy ion contributions to the organ dose equivalents.

  7. [Shielding effect of clinical X-ray protector and lead glass against annihilation radiation and gamma rays of 99mTc].

    PubMed

    Fukuda, Atsushi; Koshida, Kichiro; Yamaguchi, Ichiro; Takahashi, Masaaki; Kitabayashi, Keitarou; Matsubara, Kousuke; Noto, Kimiya; Kawabata, Chikako; Nakagawa, Hiroto

    2004-12-01

    Various pharmaceutical companies in Japan are making radioactive drugs for positron emission tomography (PET) available to hospitals without a cyclotron. With the distribution of these drugs to hospitals, medical check-ups and examinations using PET are expected to increase. However, the radiation safety guidelines for the new deployment of PET have not been adequately improved. Therefore, we measured the shielding effect of a clinical X-ray protector and lead glass against annihilation radiation and the gamma rays of 99mTc. We then calculated the shielding effect of a 0.25 mm lead protector, 1 mm lead, and lead glass using the EGS4 (Electron Gamma Shower Version 4) code. The shielding effects of 22-mm lead glass against annihilation radiation and the gamma rays of 99mTc were approximately 31.5% and 93.3%, respectively. Against annihilation radiation, the clinical X-ray protector approximately doubled the skin-absorbed dose.

  8. The development of global GRAPES 4DVAR

    NASA Astrophysics Data System (ADS)

    Liu, Yongzhu

    2017-04-01

    Four-dimensional variational data assimilation (4DVAR) has contributed greatly to the improvement of NWP systems over the past twenty years. Our strategy is therefore to develop an operational global 4D-Var system from the outset. The aim of this paper is to introduce the development of the global GRAPES four-dimensional variational data assimilation (4DVAR) system using an incremental analysis scheme, and to present results of a comparison between 4DVAR, using a 6-hour assimilation window and simplified physics during the minimization, and three-dimensional variational data assimilation (3DVAR). The dynamical cores of the tangent-linear and adjoint models are developed directly from the non-hydrostatic forecast model. In addition, the standard correctness checks have been performed. As well as developing the adjoint code, most of our work has focused on improving computational efficiency, since the bulk of the computational cost of 4D-Var is in the integration of the tangent-linear and adjoint models. For the tangent-linear model, the wall-clock time is reduced to about 1.2 times that of the nonlinear model through optimization of the software framework. The significant computational cost savings in the adjoint model result from removing redundant recomputations of model trajectories. It is encouraging that the wall-clock time of the adjoint model is less than 1.5 times that of the nonlinear model. The current difficulty is that the numerical scheme used within the linear model is based strategically on the numerics of the corresponding nonlinear model. Further computational acceleration should be expected from improvements to the nonlinear numerical algorithm. A series of linearized physical parameterization schemes has been developed to improve the representation of perturbed fields in the linear model. It consists of horizontal and vertical diffusion, sub-grid-scale orographic gravity wave drag, large-scale condensation and cumulus convection schemes. We also found that a straightforward linearization based on the nonlinear physical scheme might lead to significant growth of spurious unstable perturbations. It is essential to simplify the linear physics with respect to the non-linear schemes. The improvement in the perturbed fields in the tangent-linear model is visible with the linear physics included, especially at low levels. The GRAPES variational data assimilation system adopts the incremental approach. Work is ongoing to develop a pre-operational 4DVAR suite with 0.25° outer-loop resolution and multiple outer-loop configurations. One 4DVAR analysis using a 6-hour assimilation window can be finished within 40 minutes when using the available conventional and satellite data. In summary, it was found that the analysis over the northern and southern hemispheres, the tropical region and the East Asian area with GRAPES 4DVAR performed better than with GRAPES 3DVAR in one-month experiments. Moreover, the forecast results show that northern and southern extra-tropical scores for GRAPES 4DVAR are already better than for GRAPES 3DVAR, but the tropical performance needs further investigation. The subsequent main improvements will therefore aim to enhance its computational efficiency and accuracy in 2017. The global GRAPES 4DVAR is planned for operation in 2018.
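
    A minimal Python sketch of the incremental (inner-loop) quadratic minimization at the heart of such a system; the background, observation and tangent-linear operators are small random stand-ins, and an explicit solve replaces the iterative minimization used operationally:

        import numpy as np

        rng = np.random.default_rng(4)

        # Inner loop of incremental 4D-Var: minimise a quadratic cost in the
        # increment dx around the current trajectory,
        #   J(dx) = 0.5 dx^T B^-1 dx + 0.5 (G dx - d)^T R^-1 (G dx - d),
        # where G combines the tangent-linear model and the observation operator
        # and d is the innovation vector.
        n, m = 20, 40
        B_inv = np.eye(n)                     # background-error precision (toy)
        R_inv = np.eye(m)                     # observation-error precision (toy)
        G = rng.normal(size=(m, n))           # tangent-linear + observation operator
        d = rng.normal(size=m)                # innovations

        # Minimiser of J: (B^-1 + G^T R^-1 G) dx = G^T R^-1 d. Operationally this
        # system is solved iteratively (e.g. conjugate gradients) with the
        # tangent-linear and adjoint models instead of explicit matrices.
        A = B_inv + G.T @ R_inv @ G
        b = G.T @ R_inv @ d
        dx = np.linalg.solve(A, b)

        cost = 0.5 * dx @ B_inv @ dx + 0.5 * (G @ dx - d) @ R_inv @ (G @ dx - d)
        print("analysis increment norm:", np.linalg.norm(dx), " inner-loop cost:", cost)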

  9. An Adjoint-Based Analysis of the Sampling Footprints of Tall Tower, Aircraft and Potential Future Lidar Observations of CO2

    NASA Technical Reports Server (NTRS)

    Andrews, Arlyn; Kawa, Randy; Zhu, Zhengxin; Burris, John; Abshire, Jim

    2004-01-01

    A detailed mechanistic understanding of the sources and sinks of CO2 will be required to reliably predict future CO2 levels and climate. A commonly used technique for deriving information about CO2 exchange with surface reservoirs is to solve an 'inverse problem', where CO2 observations are used with an atmospheric transport model to find the optimal distribution of sources and sinks. Synthesis inversion methods are powerful tools for addressing this question, but the results are disturbingly sensitive to the details of the calculation. Studies done using different atmospheric transport models and combinations of surface station data have produced substantially different distributions of surface fluxes. Adjoint methods are now being developed that will more effectively incorporate diverse datasets in estimates of surface fluxes of CO2. In an adjoint framework, it will be possible to combine CO2 concentration data from long-term surface and aircraft monitoring stations with data from intensive field campaigns and with proposed future satellite observations. We have recently developed an adjoint for the GSFC 3-D Parameterized Chemistry and Transport Model (PCTM). Here, we will present results from a PCTM adjoint study comparing the sampling footprints of tall tower, aircraft and potential future lidar observations of CO2. The vertical resolution and extent of the profiles and the observation frequency will be considered for several sites in North America.

  10. Double-Difference Global Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Orsvuran, R.; Bozdag, E.; Lei, W.; Tromp, J.

    2017-12-01

    The adjoint method allows us to incorporate full waveform simulations in inverse problems. Misfit functions play an important role in extracting the relevant information from seismic waveforms. In this study, our goal is to apply the Double-Difference (DD) methodology proposed by Yuan et al. (2016) to global adjoint tomography. Dense seismic networks, such as USArray, lead to higher-resolution seismic images underneath continents. However, the imbalanced distribution of stations and sources poses challenges for global ray coverage. We adapt double-difference multitaper measurements to global adjoint tomography. We normalize each DD measurement by its number of pairs, and if a measurement has no pair, as may frequently happen for data recorded at oceanic stations, classical multitaper measurements are used. As a result, the differential measurements and the pair-wise weighting strategy help balance the uneven global kernel coverage. Our initial experiments with minor- and major-arc surface waves show promising results, revealing more pronounced structure near dense networks while reducing the prominence of paths towards clusters of stations. We have started using this new measurement in global adjoint inversions addressing azimuthal anisotropy in the upper mantle. Meanwhile, we are working on combining the double-difference approach with instantaneous phase measurements to emphasize contributions of scattered waves in global inversions and on extending it to body waves. We will present our results and discuss challenges and future directions in the context of global tomographic inversions.
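
    A minimal Python sketch of double-difference measurements with a pair-count normalization, under one plausible reading of the weighting described above; the station times and pairing are illustrative only:

        import numpy as np

        def dd_measurements(t_obs, t_syn, pairs):
            """Double-difference measurements for a set of station pairs:
                dd_ij = (t_obs_i - t_obs_j) - (t_syn_i - t_syn_j),
            each normalised here by the number of pairs the first station
            participates in (one plausible reading of the pair-count weighting)."""
            counts = np.zeros(len(t_obs))
            for i, j in pairs:
                counts[i] += 1
                counts[j] += 1
            dd = []
            for i, j in pairs:
                raw = (t_obs[i] - t_obs[j]) - (t_syn[i] - t_syn[j])
                dd.append(raw / max(counts[i], 1))
            return np.array(dd)

        # Toy example: stations 0 and 1 form a pair; station 2 has no pair and
        # would instead receive a classical single-station measurement (not shown).
        t_obs = np.array([10.2, 10.9, 14.0])
        t_syn = np.array([10.0, 11.0, 13.5])
        print(dd_measurements(t_obs, t_syn, pairs=[(0, 1)]))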

  11. A Comparison of Monte Carlo and Deterministic Solvers for keff and Sensitivity Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haeck, Wim; Parsons, Donald Kent; White, Morgan Curtis

    Verification and validation of our solutions for calculating the neutron reactivity for nuclear materials is a key issue to address for many applications, including criticality safety, research reactors, power reactors, and nuclear security. Neutronics codes solve variations of the Boltzmann transport equation. The two main variants are Monte Carlo versus deterministic solutions, e.g. the MCNP [1] versus PARTISN [2] codes, respectively. There have been many studies over the decades that examined the accuracy of such solvers and the general conclusion is that when the problems are well-posed, either solver can produce accurate results. However, the devil is always in the details. The current study examines the issue of self-shielding and the stress it puts on deterministic solvers. Most Monte Carlo neutronics codes use continuous-energy descriptions of the neutron interaction data that are not subject to this effect. The issue of self-shielding occurs because of the discretisation of data used by the deterministic solutions. Multigroup data used in these solvers are the average cross section and scattering parameters over an energy range. Resonances in cross sections can occur that change the likelihood of interaction by one to three orders of magnitude over a small energy range. Self-shielding is the numerical effect that the average cross section in groups with strong resonances can be strongly affected as neutrons within that material are preferentially absorbed or scattered out of the resonance energies. This affects both the average cross section and the scattering matrix.
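
    A minimal Python sketch of the self-shielding effect itself: collapsing a resonance cross section with an infinite-dilution flux versus a narrow-resonance flux weighting; the resonance shape, grid and background cross section are illustrative only:

        import numpy as np

        E = np.linspace(6.0, 8.0, 2001)                  # eV, a single energy group
        sigma_bg = 10.0                                   # background (other nuclides)
        # A single Lorentzian resonance near 6.67 eV (shape illustrative only).
        sigma_res = 2.0e4 / (1.0 + ((E - 6.67) / 0.03) ** 2)
        sigma_t = sigma_res + sigma_bg

        phi_flat = 1.0 / E                                # infinite-dilution weighting
        phi_nr = 1.0 / (sigma_t * E)                      # narrow-resonance weighting

        def collapse(sigma, phi):
            # Flux-weighted group-average cross section on a uniform grid.
            return np.sum(sigma * phi) / np.sum(phi)

        # The flux dip inside the resonance suppresses the group constant.
        print("infinite dilution:", collapse(sigma_res, phi_flat))
        print("self-shielded    :", collapse(sigma_res, phi_nr))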

  12. Spherical Harmonic Solutions to the 3D Kobayashi Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, P.N.; Chang, B.; Hanebutte, U.R.

    1999-12-29

    Spherical harmonic solutions of order 5, 9 and 21 on spatial grids containing up to 3.3 million cells are presented for the Kobayashi benchmark suite. This suite of three problems with simple geometry of a pure absorber with a large void region was proposed by Professor Kobayashi at an OECD/NEA meeting in 1996. Each of the three problems contains a source, a void and a shield region. Problem 1 can best be described as a box-in-a-box problem, where a source region is surrounded by a square void region which itself is embedded in a square shield region. Problems 2 and 3 represent a shield with a void duct: Problem 2 has a straight duct and Problem 3 a dog-leg-shaped duct. A pure absorber and a 50% scattering case are considered for each of the three problems. The solutions have been obtained with Ardra, a scalable, parallel neutron transport code developed at Lawrence Livermore National Laboratory (LLNL). The Ardra code takes advantage of a two-level parallelization strategy, which combines message passing between processing nodes and thread-based parallelism amongst processors on each node. All calculations were performed on the IBM ASCI Blue-Pacific computer at LLNL.

  13. Role of shielding in modulating the effects of solar particle events: Monte Carlo calculation of absorbed dose and DNA complex lesions in different organs

    NASA Technical Reports Server (NTRS)

    Ballarini, F.; Biaggi, M.; De Biaggi, L.; Ferrari, A.; Ottolenghi, A.; Panzarasa, A.; Paretzke, H. G.; Pelliccioni, M.; Sala, P.; Scannicchio, D.; Zankl, M.

    2004-01-01

    Distributions of absorbed dose and DNA clustered damage yields in various organs and tissues following the October 1989 solar particle event (SPE) were calculated by coupling the FLUKA Monte Carlo transport code with two anthropomorphic phantoms (a mathematical model and a voxel model), with the main aim of quantifying the role of the shielding features in modulating organ doses. The phantoms, which were assumed to be in deep space, were inserted into a shielding box of variable thickness and material and were irradiated with the proton spectra of the October 1989 event. Average numbers of DNA lesions per cell in different organs were calculated by adopting a technique already tested in previous works, consisting of integrating into "condensed-history" Monte Carlo transport codes--such as FLUKA--yields of radiobiological damage, either calculated with "event-by-event" track structure simulations, or taken from experimental works available in the literature. More specifically, the yields of "Complex Lesions" (or "CL", defined and calculated as a clustered DNA damage in a previous work) per unit dose and DNA mass (CL Gy-1 Da-1) due to the various beam components, including those derived from nuclear interactions with the shielding and the human body, were integrated in FLUKA. This provided spatial distributions of CL/cell yields in different organs, as well as distributions of absorbed doses. The contributions of primary protons and secondary hadrons were calculated separately, and the simulations were repeated for values of Al shielding thickness ranging between 1 and 20 g/cm2. Slight differences were found between the two phantom types. Skin and eye lenses were found to receive larger doses with respect to internal organs; however, shielding was more effective for skin and lenses. Secondary particles arising from nuclear interactions were found to have a minor role, although their relative contribution was found to be larger for the Complex Lesions than for the absorbed dose, due to their higher LET and thus higher biological effectiveness.

  14. Role of shielding in modulating the effects of solar particle events: Monte Carlo calculation of absorbed dose and DNA complex lesions in different organs

    NASA Astrophysics Data System (ADS)

    Ballarini, F.; Biaggi, M.; De Biaggi, L.; Ferrari, A.; Ottolenghi, A.; Panzarasa, A.; Paretzke, H. G.; Pelliccioni, M.; Sala, P.; Scannicchio, D.; Zankl, M.

    2004-01-01

    Distributions of absorbed dose and DNA clustered damage yields in various organs and tissues following the October 1989 solar particle event (SPE) were calculated by coupling the FLUKA Monte Carlo transport code with two anthropomorphic phantoms (a mathematical model and a voxel model), with the main aim of quantifying the role of the shielding features in modulating organ doses. The phantoms, which were assumed to be in deep space, were inserted into a shielding box of variable thickness and material and were irradiated with the proton spectra of the October 1989 event. Average numbers of DNA lesions per cell in different organs were calculated by adopting a technique already tested in previous works, consisting of integrating into "condensed-history" Monte Carlo transport codes - such as FLUKA - yields of radiobiological damage, either calculated with "event-by-event" track structure simulations, or taken from experimental works available in the literature. More specifically, the yields of "Complex Lesions" (or "CL", defined and calculated as a clustered DNA damage in a previous work) per unit dose and DNA mass (CL Gy^-1 Da^-1) due to the various beam components, including those derived from nuclear interactions with the shielding and the human body, were integrated in FLUKA. This provided spatial distributions of CL/cell yields in different organs, as well as distributions of absorbed doses. The contributions of primary protons and secondary hadrons were calculated separately, and the simulations were repeated for values of Al shielding thickness ranging between 1 and 20 g/cm^2. Slight differences were found between the two phantom types. Skin and eye lenses were found to receive larger doses with respect to internal organs; however, shielding was more effective for skin and lenses. Secondary particles arising from nuclear interactions were found to have a minor role, although their relative contribution was found to be larger for the Complex Lesions than for the absorbed dose, due to their higher LET and thus higher biological effectiveness.

  15. BRYNTRN: A baryon transport computer code, computation procedures and data base

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Chun, Sang Y.; Buck, Warren W.; Khan, Ferdous; Cucinotta, Frank

    1988-01-01

    The development of an interaction database and a numerical solution to the transport of baryons through arbitrary shield materials, based on a straight-ahead approximation of the Boltzmann equation, is described. The code is most accurate for continuous energy boundary values but gives reasonable results for discrete spectra at the boundary, even with a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O).
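
    A minimal Python sketch of a straight-ahead marching scheme of the general kind BRYNTRN implements far more completely; the cross sections, production kernel and boundary spectrum below are illustrative only:

        import numpy as np

        n_E = 30
        E = np.linspace(10.0, 500.0, n_E)                 # MeV energy grid
        sigma = 0.01 + 0.02 * (100.0 / E)                 # macroscopic removal, 1/cm (toy)

        # production[j, i]: a particle removed at energy E_i reappears at a lower
        # energy E_j (j < i); columns are normalised so removed flux is redistributed.
        production = np.triu(np.ones((n_E, n_E)), k=1)
        production = production / np.maximum(production.sum(axis=0), 1.0)

        phi0 = np.exp(-((E - 400.0) / 60.0) ** 2)         # boundary fluence spectrum at x = 0
        phi = phi0.copy()
        dx, n_steps = 0.5, 200                            # 0.5 cm steps, 100 cm of shield
        for _ in range(n_steps):
            removal = sigma * phi * dx                    # fluence removed in this step
            phi = phi - removal + production @ removal    # straight-ahead march in depth

        print("transmitted / incident total fluence:", phi.sum() / phi0.sum())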

  16. Shielding Calculations on Waste Packages - The Limits and Possibilities of different Calculation Methods by the example of homogeneous and inhomogeneous Waste Packages

    NASA Astrophysics Data System (ADS)

    Adams, Mike; Smalian, Silva

    2017-09-01

    For nuclear waste packages, the expected dose rates and nuclide inventory are calculated in advance. Depending on the packaging of the nuclear waste, deterministic programs like MicroShield® provide a range of results for each type of package. Stochastic programs like the "Monte-Carlo N-Particle Transport Code System" (MCNP®), on the other hand, provide reliable results for complex geometries. However, this type of program requires a fully trained operator and the calculations are time consuming. The problem here is to choose an appropriate program for a specific geometry. We therefore compared the results of deterministic programs like MicroShield® and stochastic programs like MCNP®. These comparisons enable us to make a statement about the applicability of the various programs for chosen types of containers. We found that for thin-walled geometries deterministic programs like MicroShield® are well suited to calculate the dose rate. For cylindrical containers with inner shielding, however, deterministic programs reach their limits. Furthermore, we investigate the effect of an inhomogeneous material and activity distribution on the results. The calculations are still ongoing; results will be presented in the final abstract.

  17. Experimental Shielding Evaluation of the Radiation Protection Provided by Residential Structures

    NASA Astrophysics Data System (ADS)

    Dickson, Elijah D.

    The human health and environmental effects following a postulated accidental release of radioactive material to the environment have been a public and regulatory concern since the early development of nuclear technology and have been researched extensively to better understand the potential risks for accident mitigation and emergency planning purposes. The objective of this investigation is to research and develop the technical basis for contemporary building shielding factors for the U.S. housing stock. Building shielding factors quantify the protection a certain building type provides from ionizing radiation. Much of the current data used to determine the quality of shielding around nuclear facilities and urban environments is based on simplistic point-kernel calculations for 1950s-era suburbia and is no longer applicable to the densely populated urban environments seen today. To analyze a building's radiation shielding properties, the ideal approach would be to subject a variety of building types to various radioactive materials and measure the radiation levels in and around the building. While this is not entirely practicable, this research uniquely analyzes the shielding effectiveness of a variety of likely U.S. residential buildings against a realistic source term in a laboratory setting. Results produced in the investigation provide a comparison between the theory and experiment behind building shielding factor methodology by applying laboratory measurements to detailed computational models. These models are used to develop a series of validated building shielding factors for generic residential housing units using the computational code MCNP5. For these building shielding factors to be useful in radiologic consequence assessments and emergency response planning, two types of shielding factors have been developed: (1) the shielding effectiveness of each structure within a semi-infinite cloud of radioactive material, and (2) the shielding effectiveness of each structure against contaminant deposition on the roof and surrounding surfaces. For example, results from this investigation estimate the building shielding factors from a semi-infinite plume for comparable two-story models with a basement, constructed with either brick-and-mortar or vinyl siding exterior walls, and for a typical single-wide manufactured home with vinyl siding, to be 0.36, 0.65, and 0.82, respectively.

  18. HZETRN: A heavy ion/nucleon transport code for space radiations

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Chun, Sang Y.; Badavi, Forooz F.; Townsend, Lawrence W.; Lamkin, Stanley L.

    1991-01-01

    The galactic heavy ion transport code (GCRTRN) and the nucleon transport code (BRYNTRN) are integrated into a code package (HZETRN). The code package is computationally efficient and capable of operating in an engineering design environment for manned deep space mission studies. The nuclear data set used by the code is discussed, including current limitations. Although the heavy ion nuclear cross sections are assumed constant, the nucleon-nuclear cross sections of BRYNTRN with full energy dependence are used. The relation of the final code to the Boltzmann equation is discussed in the context of simplifying assumptions. Error generation and propagation are discussed, and comparison is made with simplified analytic solutions to test the numerical accuracy of the final results. A brief discussion of biological issues and their impact on fundamental developments in shielding technology is given.
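
    For orientation, the transport equation solved by codes in the HZETRN family is commonly written (up to notation and scaling conventions, which vary between publications) in the straight-ahead, continuous-slowing-down approximation as

    \[
    \left[\frac{\partial}{\partial x}
          - \frac{\partial}{\partial E}\,\tilde S_j(E)
          + \sigma_j(E)\right]\phi_j(x,E)
      \;=\; \sum_k \int_E^{\infty} \sigma_{jk}(E,E')\,\phi_k(x,E')\,dE',
    \]

    where φ_j is the flux of particle type j at depth x and energy E, S̃_j is the (scaled) stopping power, σ_j the total macroscopic cross section, and σ_jk the cross section for producing type j from type k.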

  19. Morse Monte Carlo Radiation Transport Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emmett, M.B.

    1975-02-01

    The report contains sections describing the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations, and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used, with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine whether the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)

  20. Self-adjoint realisations of the Dirac-Coulomb Hamiltonian for heavy nuclei

    NASA Astrophysics Data System (ADS)

    Gallone, Matteo; Michelangeli, Alessandro

    2018-02-01

    We derive a classification of the self-adjoint extensions of the three-dimensional Dirac-Coulomb operator in the critical regime of the Coulomb coupling. Our approach is based solely on the Kreĭn-Višik-Birman extension scheme, or equivalently on Grubb's universal classification theory, as opposed to previous works within the standard von Neumann framework. This lets the boundary condition of self-adjointness emerge, neatly and intrinsically, as a multiplicative constraint between the regular and singular parts of the functions in the domain of the extension, with the multiplicative constant also giving immediate information on the invertibility property and on the resolvent and spectral gap of the extension.
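
    For orientation (standard background, not taken from the abstract): in the conventions common in this literature, the operator in question is, schematically,

    \[
    H_\nu \;=\; -\,i\,\alpha\cdot\nabla \;+\; m\beta \;-\; \frac{\nu}{|x|},
    \qquad 0<\nu<1,
    \]

    and H_ν is essentially self-adjoint on smooth, compactly supported four-spinors away from the origin when ν ≤ √3/2, while in the critical regime √3/2 < ν < 1 a one-parameter family of self-adjoint extensions arises in the lowest angular-momentum channel.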

  1. Application of Adjoint Methodology to Supersonic Aircraft Design Using Reversed Equivalent Areas

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Sriram K.

    2013-01-01

    This paper presents an approach to shaping an aircraft to meet equivalent-area-based objectives using the discrete adjoint approach. Equivalent areas can be obtained either by using the reversed augmented Burgers equation or by direct conversion of off-body pressures into equivalent area. Formal coupling with CFD allows computation of sensitivities of equivalent-area objectives with respect to aircraft shape parameters. The exactness of the adjoint sensitivities is verified against derivatives obtained using the complex-step approach. This methodology has the benefit of using designer-friendly equivalent areas in the shape design of low-boom aircraft. Shape optimization results with equivalent-area cost functionals are discussed and further refined using ground-loudness-based objectives.
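
    The complex-step verification mentioned above is easy to reproduce in isolation. The sketch below is a generic illustration of the technique, not the paper's implementation; the test function is arbitrary.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """Complex-step approximation of df/dx at x. Because there is no
    subtractive cancellation, h can be taken extremely small and the result
    is accurate to roughly machine precision for real-analytic f implemented
    with complex-safe operations."""
    return np.imag(f(x + 1j * h)) / h

# Check against the analytic derivative of an arbitrary smooth function.
f = lambda x: np.exp(x) * np.sin(x)
x0 = 0.7
print(complex_step_derivative(f, x0), np.exp(x0) * (np.sin(x0) + np.cos(x0)))
```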

  2. Comparison of Organ Dosimetry for Astronaut Phantoms: Earth-Based vs. Microgravity-Based Anthropometry and Body Positioning

    NASA Technical Reports Server (NTRS)

    VanBaalen, Mary; Bahadon, Amir; Shavers, Mark; Semones, Edward

    2011-01-01

    The purpose of this study is to use NASA radiation transport codes to compare astronaut organ dose equivalents resulting from solar particle events (SPE), geomagnetically trapped protons, and free-space galactic cosmic rays (GCR) using phantom models representing Earth-based and microgravity-based anthropometry and positioning. Methods: The University of Florida hybrid adult phantoms were scaled to represent male and female astronauts with 5th, 50th, and 95th percentile heights and weights as measured on Earth. Another set of scaled phantoms, incorporating microgravity-induced changes, such as spinal lengthening, leg volume loss, and the assumption of the neutral body position, was also created. A ray-tracer was created and used to generate body self-shielding distributions for dose points within a voxelized phantom under isotropic irradiation conditions, which closely approximates the free-space radiation environment. Simplified external shielding consisting of an aluminum spherical shell was used to consider the influence of a spacesuit or shielding of a hull. These distributions were combined with depth dose distributions generated from the NASA radiation transport codes BRYNTRN (SPE and trapped protons) and HZETRN (GCR) to yield dose equivalent. Many points were sampled per organ. Results: The organ dose equivalent rates were on the order of 1.5-2.5 mSv per day for GCR (1977 solar minimum) and 0.4-0.8 mSv per day for trapped proton irradiation with shielding of 2 g cm-2 aluminum equivalent. The organ dose equivalents for SPE irradiation varied considerably, with the skin and eye lens having the highest organ dose equivalents and deep-seated organs, such as the bladder, liver, and stomach, having the lowest. Conclusions: The greatest differences between the Earth-based and microgravity-based phantoms are observed for smaller ray thicknesses, since the most drastic changes involved limb repositioning and not overall phantom size. Improved self-shielding models reduce the overall uncertainty in organ dosimetry for mission-risk projections and assessments for astronauts.
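
    A minimal sketch of the folding step described above (combining a body self-shielding ray distribution with a depth-dose curve) is given below. The depth-dose function and ray depths are placeholders, not BRYNTRN/HZETRN output.

```python
import numpy as np

def organ_point_dose(ray_depths_gcm2, depth_dose, extra_shield_gcm2=2.0):
    """Fold an isotropic set of body self-shielding ray depths with a
    depth-dose(-equivalent) curve to estimate the dose at one organ point.

    ray_depths_gcm2   areal density of tissue along each ray [g/cm^2]
    depth_dose        callable giving dose rate vs. total areal density,
                      e.g. interpolated transport-code output (placeholder here)
    extra_shield_gcm2 aluminum-equivalent suit/hull shielding added to each ray
    """
    depths = np.asarray(ray_depths_gcm2) + extra_shield_gcm2
    return float(np.mean(depth_dose(depths)))  # equal solid-angle weight per ray

# Illustrative placeholder curve and ray set (not evaluated data).
fake_depth_dose = lambda d: 2.0 * np.exp(-d / 30.0)            # mSv/day
rays = np.random.default_rng(0).uniform(1.0, 40.0, size=1000)  # g/cm^2
print(organ_point_dose(rays, fake_depth_dose))
```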

  3. The Gauss-Bonnet operator of an infinite graph

    NASA Astrophysics Data System (ADS)

    Anné, Colette; Torki-Hamza, Nabila

    2015-06-01

    We propose a general condition to ensure essential self-adjointness for the Gauss-Bonnet operator, based on a notion of completeness in the sense of Chernoff. This gives essential self-adjointness of the Laplace operator both for functions and for 1-forms on infinite graphs. This is used to extend Flanders' result concerning solutions of Kirchhoff's laws.
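
    As a reminder of the objects involved (in the notation commonly used for infinite weighted graphs, not quoted from the paper), the Gauss-Bonnet operator acts on the direct sum of finitely supported functions and 1-forms as

    \[
    D = d + \delta, \qquad
    D\begin{pmatrix} f \\ \omega \end{pmatrix}
      = \begin{pmatrix} \delta\omega \\ df \end{pmatrix},
    \qquad
    D^2 = \Delta_0 \oplus \Delta_1,
    \]

    so essential self-adjointness of D yields essential self-adjointness of both the function Laplacian Δ₀ and the 1-form Laplacian Δ₁.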

  4. Parameter Optimization for Turbulent Reacting Flows Using Adjoints

    NASA Astrophysics Data System (ADS)

    Lapointe, Caelan; Hamlington, Peter E.

    2017-11-01

    The formulation of a new adjoint solver for topology optimization of turbulent reacting flows is presented. This solver provides novel configurations (e.g., geometries and operating conditions) based on desired system outcomes (i.e., objective functions) for complex reacting flow problems of practical interest. For many such problems, it would be desirable to know optimal values of design parameters (e.g., physical dimensions, fuel-oxidizer ratios, and inflow-outflow conditions) prior to real-world manufacture and testing, which can be expensive, time-consuming, and dangerous. However, computational optimization of these problems is made difficult by the complexity of most reacting flows, necessitating the use of gradient-based optimization techniques in order to explore a wide design space at manageable computational cost. The adjoint method is an attractive way to obtain the required gradients, because the cost of the method is determined by the dimension of the objective function rather than the size of the design space. Here, the formulation of a novel solver is outlined that enables gradient-based parameter optimization of turbulent reacting flows using the discrete adjoint method. Initial results and an outlook for future research directions are provided.
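
    The cost argument above is the standard one for discrete adjoints: one extra linear solve per objective yields the gradient with respect to arbitrarily many parameters. A minimal sketch for a generic steady residual equation (a linear toy problem, not the reacting-flow solver itself) is shown below.

```python
import numpy as np

def adjoint_gradient(A, b_of_p, dbdp, dJdu, dJdp, p):
    """Gradient of J(u(p), p) where the state solves A u = b(p).

    One adjoint solve, A^T lam = (dJ/du)^T, gives the full gradient
    dJ/dp = dJ/dp|_explicit + (db/dp)^T lam, regardless of how many
    design parameters there are."""
    u = np.linalg.solve(A, b_of_p(p))
    lam = np.linalg.solve(A.T, dJdu(u))
    return dJdp(u, p) + dbdp(p).T @ lam

# Tiny illustration: J = 0.5*||u||^2 with A u = p and two design variables.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
print(adjoint_gradient(A,
                       b_of_p=lambda p: p,
                       dbdp=lambda p: np.eye(2),
                       dJdu=lambda u: u,
                       dJdp=lambda u, p: np.zeros(2),
                       p=np.array([1.0, 2.0])))
```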

  5. Unsteady Adjoint Approach for Design Optimization of Flapping Airfoils

    NASA Technical Reports Server (NTRS)

    Lee, Byung Joon; Liou, Meng-Sing

    2012-01-01

    This paper describes work on optimizing the propulsive efficiency of flapping airfoils, i.e., improving the thrust while constraining the aerodynamic work during flapping flight, by changing their shape and trajectory of motion with the unsteady discrete adjoint approach. For unsteady problems, it is essential to properly resolve the time scales of the motion under consideration, and the temporal treatment must be compatible with the objective sought. We include both the instantaneous and time-averaged (periodic) formulations in this study. For design optimization with shape or motion parameters, the time-averaged objective function is found to be more useful, while the instantaneous one is more suitable for flow control. The instantaneous objective function is operationally straightforward. On the other hand, the time-averaged objective function requires additional steps in the adjoint approach; the unsteady discrete adjoint equations for a periodic flow must be reformulated and the corresponding system of equations solved iteratively. We compare the design results from shape and trajectory optimizations and investigate the physical relevance of design variables to the flapping motion at on- and off-design conditions.

  6. [Purifying process of gynostemma pentaphyllum saponins based on "adjoint marker" online control technology and identification of their compositions by UPLC-QTOF-MS].

    PubMed

    Fan, Dong-Dong; Kuang, Yan-Hui; Dong, Li-Hua; Ye, Xiao; Chen, Liang-Mian; Zhang, Dong; Ma, Zhen-Shan; Wang, Jin-Yu; Zhu, Jing-Jing; Wang, Zhi-Min; Wang, De-Qin; Li, Chu-Yuan

    2017-04-01

    The aim was to optimize the purification process of gynostemma pentaphyllum saponins (GPS) based on "adjoint marker" online control technology, with GPS as the testing index. UPLC-QTOF-MS technology was used for qualitative analysis. The "adjoint marker" online control results showed that the end point of sample loading was reached when the UV absorbance of the effluent was equal to half that of the loading solution, and the absorbance was essentially stable at that end point. In the UPLC-QTOF-MS qualitative analysis, 16 saponins were identified from GPS, including 13 known gynostemma saponins and 3 new saponins. The optimized method proved to be simple, scientific, and reasonable, allowing online determination and real-time recording, and it can readily be applied to mass production and production automation. The results of the qualitative analysis indicated that the "adjoint marker" online control technology retains the main efficacy components of the medicinal materials well and provides an analytical tool for process control and quality traceability. Copyright© by the Chinese Pharmaceutical Association.

  7. An adjoint-based simultaneous estimation method of the asthenosphere's viscosity and afterslip using a fast and scalable finite-element adjoint solver

    NASA Astrophysics Data System (ADS)

    Agata, Ryoichiro; Ichimura, Tsuyoshi; Hori, Takane; Hirahara, Kazuro; Hashimoto, Chihiro; Hori, Muneo

    2018-04-01

    The simultaneous estimation of the asthenosphere's viscosity and coseismic slip/afterslip is expected to largely improve the consistency of the estimation results with crustal deformation data collected at widely distributed observation points, compared to estimating the slips alone. Such an estimate can be formulated as a non-linear inverse problem for the viscosity and for an input force equivalent to the fault slips, based on large-scale finite-element (FE) modeling of crustal deformation in which the number of degrees of freedom is on the order of 10^9. We formulated and developed a computationally efficient adjoint-based estimation method for this inverse problem, together with a fast and scalable FE solver for the associated forward and adjoint problems. In a numerical experiment that imitates the 2011 Tohoku-Oki earthquake, the advantage of the proposed method is confirmed by comparing the estimated results with those obtained using simplified estimation methods. The computational cost required for the optimization shows that the proposed method enabled the targeted estimation to be completed with a moderate amount of computational resources.

  8. An optimized treatment for algorithmic differentiation of an important glaciological fixed-point problem

    DOE PAGES

    Goldberg, Daniel N.; Narayanan, Sri Hari Krishna; Hascoet, Laurent; ...

    2016-05-20

    We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. Finally, the methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.
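
    The reverse-accumulation idea can be illustrated on a toy problem: once the forward fixed point u = F(u, p) has converged, the adjoint variable satisfies its own linear fixed-point equation with the converged state held fixed, so no forward iterates need to be stored. The sketch below follows that structure in the spirit of Christianson (1994); it is a generic illustration, not the OpenAD-generated code.

```python
import numpy as np

def fixed_point_adjoint_gradient(F, dFdu, dFdp, dJdu, dJdp, u0, p,
                                 tol=1e-12, max_iter=500):
    """Gradient of J(u(p), p) where u(p) solves the fixed point u = F(u, p)."""
    # Forward fixed-point solve.
    u = u0
    for _ in range(max_iter):
        u_new = F(u, p)
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    # Adjoint fixed-point iteration: lam = dF/du(u,p)^T lam + dJ/du(u)^T.
    lam = np.zeros_like(u)
    for _ in range(max_iter):
        lam_new = dFdu(u, p).T @ lam + dJdu(u)
        if np.linalg.norm(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    # Assemble the total derivative.
    return dJdp(u, p) + dFdp(u, p).T @ lam

# Toy problem: u = 0.5*u + p (so u* = 2p), J = 0.5*u^2 -> dJ/dp = 4p.
g = fixed_point_adjoint_gradient(
    F=lambda u, p: 0.5 * u + p,
    dFdu=lambda u, p: np.array([[0.5]]),
    dFdp=lambda u, p: np.array([[1.0]]),
    dJdu=lambda u: u,
    dJdp=lambda u, p: np.zeros(1),
    u0=np.zeros(1), p=np.array([1.5]))
print(g)  # expect approximately [6.0]
```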

  9. Adjoint-based sensitivity analysis of low-order thermoacoustic networks using a wave-based approach

    NASA Astrophysics Data System (ADS)

    Aguilar, José G.; Magri, Luca; Juniper, Matthew P.

    2017-07-01

    Strict pollutant emission regulations are pushing gas turbine manufacturers to develop devices that operate in lean conditions, with the downside that combustion instabilities are more likely to occur. Methods to predict and control unstable modes inside combustion chambers have been developed in the last decades but, in some cases, they are computationally expensive. Sensitivity analysis aided by adjoint methods provides valuable sensitivity information at a low computational cost. This paper introduces adjoint methods and their application in wave-based low order network models, which are used as industrial tools, to predict and control thermoacoustic oscillations. Two thermoacoustic models of interest are analyzed. First, in the zero Mach number limit, a nonlinear eigenvalue problem is derived, and continuous and discrete adjoint methods are used to obtain the sensitivities of the system to small modifications. Sensitivities to base-state modification and feedback devices are presented. Second, a more general case with non-zero Mach number, a moving flame front and choked outlet, is presented. The influence of the entropy waves on the computed sensitivities is shown.
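
    The base-state sensitivities discussed above follow from the standard adjoint (left/right eigenvector) perturbation formula for eigenvalue problems. The sketch below applies it to a generic matrix operator; the 2x2 operator and its parameter dependence are invented for illustration and are not the wave-based network model itself.

```python
import numpy as np

def eigenvalue_sensitivity(A, dAdp, which=0):
    """First-order sensitivity of the 'which'-th eigenvalue of A(p) to a
    scalar parameter p, using left (adjoint) and right eigenvectors:
        domega/dp = (psi^H dA/dp phi) / (psi^H phi).
    For a nonlinear eigenproblem L(omega, p) phi = 0 the analogous formula
    is domega/dp = -(psi^H dL/dp phi) / (psi^H dL/domega phi)."""
    w, V = np.linalg.eig(A)
    wl, W = np.linalg.eig(A.conj().T)            # A^H psi = conj(omega) psi
    phi = V[:, which]
    psi = W[:, np.argmin(np.abs(wl - np.conj(w[which])))]
    return (psi.conj() @ dAdp @ phi) / (psi.conj() @ phi)

# Hypothetical 2x2 operator depending linearly on a parameter (illustrative).
A0 = np.array([[0.0, 1.0], [-4.0, -0.2]])
A1 = np.array([[0.0, 0.0], [0.0, -1.0]])         # e.g., added damping
print(eigenvalue_sensitivity(A0 + 0.1 * A1, A1, which=0))
```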

  10. Reduced discretization error in HZETRN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaba, Tony C., E-mail: Tony.C.Slaba@nasa.gov; Blattnig, Steve R., E-mail: Steve.R.Blattnig@nasa.gov; Tweed, John, E-mail: jtweed@odu.edu

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm² exposed to both solar particle event and galactic cosmic ray environments.

  11. Radiation transport calculations for cosmic radiation.

    PubMed

    Endo, A; Sato, T

    2012-01-01

    The radiation environment inside and near spacecraft consists of various components of primary radiation in space and secondary radiation produced by the interaction of the primary radiation with the walls and equipment of the spacecraft. Radiation fields inside astronauts are different from those outside them, because of the body's self-shielding as well as the nuclear fragmentation reactions occurring in the human body. Several computer codes have been developed to simulate the physical processes of the coupled transport of protons, high-charge and high-energy nuclei, and the secondary radiation produced in atomic and nuclear collision processes in matter. These computer codes have been used in various space radiation protection applications: shielding design for spacecraft and planetary habitats, simulation of instrument and detector responses, analysis of absorbed doses and quality factors in organs and tissues, and study of biological effects. This paper focuses on the methods and computer codes used for radiation transport calculations on cosmic radiation, and their application to the analysis of radiation fields inside spacecraft, evaluation of organ doses in the human body, and calculation of dose conversion coefficients using the reference phantoms defined in ICRP Publication 110. Copyright © 2012. Published by Elsevier Ltd.

  12. Measured and calculated fast neutron spectra in a depleted uranium and lithium hydride shielded reactor

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.; Mueller, R. A.

    1973-01-01

    Measurements of MeV neutron spectra were made at the surface of a lithium hydride and depleted uranium shielded reactor. Four shield configurations were considered; these were assembled progressively with cylindrical shells of 5-centimeter-thick depleted uranium, 13-centimeter-thick lithium hydride, 5-centimeter-thick depleted uranium, 13-centimeter-thick lithium hydride, 5-centimeter-thick depleted uranium, and 3-centimeter-thick depleted uranium. Measurements were made with an NE-218 scintillation spectrometer; proton pulse height distributions were differentiated to obtain neutron spectra. Calculations were made using the two-dimensional discrete ordinates code DOT and ENDF/B (version 3) cross sections. Good agreement between measured and calculated spectral shape was observed. Absolute measured and calculated fluxes were within 50 percent of one another; the observed discrepancies in absolute flux may be due to cross section errors.

  13. Acute Radiation Risk and BRYNTRN Organ Dose Projection Graphical User Interface

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Hu, Shaowen; Nounu, Hateni N.; Kim, Myung-Hee

    2011-01-01

    The integration of human space applications risk projection models of organ dose and acute radiation risk has been a key problem. NASA has developed an organ dose projection model using the BRYNTRN and SUM DOSE computer codes, and a probabilistic model of Acute Radiation Risk (ARR). BRYNTRN is a baryon transport code and SUM DOSE is an output data processing code; the risk projection models of organ doses and ARR take the output from BRYNTRN as input to their calculations. Because BRYNTRN operation requires extensive input preparation, a graphical user interface (GUI) is needed to handle input and output for BRYNTRN so that the response models can be connected easily and correctly. The GUI for the ARR and BRYNTRN Organ Dose (ARRBOD) projection code provides the seamless integration of input and output manipulations required for operation of the ARRBOD modules (BRYNTRN, SLMDOSE, and the ARR probabilistic response model) in assessing the acute risk and the organ doses from significant Solar Particle Events (SPEs). The ARRBOD GUI is intended for mission planners, radiation shield designers, space operations in the mission operations directorate (MOD), and space biophysics researchers. The assessment of astronauts' radiation risk from SPEs supports mission design and operational planning to manage radiation risks in future space missions. The ARRBOD GUI can identify proper shielding solutions using gender-specific organ dose assessments in order to avoid ARR symptoms and to stay within the current NASA short-term dose limits. A quantified evaluation of ARR severities for any given shielding configuration and a specified EVA or other mission scenario can be made to guide alternative solutions for attaining the objectives set by mission planners. The ARRBOD GUI estimates the whole-body effective dose, organ doses, and acute radiation sickness symptoms for astronauts, from which operational strategies and capabilities can be developed for the protection of astronauts from SPEs in the planning of future lunar surface scenarios, exploration of near-Earth objects, and missions to Mars.

  14. GEANT4 benchmark with MCNPX and PHITS for activation of concrete

    NASA Astrophysics Data System (ADS)

    Tesse, Robin; Stichelbaut, Frédéric; Pauly, Nicolas; Dubus, Alain; Derrien, Jonathan

    2018-02-01

    The activation of concrete is a real problem from the point of view of waste management. Because of the complexity of the issue, Monte Carlo (MC) codes have become an essential tool for its study, but various codes, and even various nuclear models within each code, are available. MCNPX and PHITS have already been validated for shielding studies, and GEANT4 is also a suitable solution. In these codes, different models can be considered for a concrete activation study. The Bertini model is not the best model for spallation, while the BIC and INCL models agree well with previous results in the literature.

  15. SKYDOSE: A code for gamma skyshine calculations using the integral line-beam method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Brockhoff, R.C.

    1994-07-01

    SKYDOSE evaluates the skyshine dose from an isotropic, monoenergetic, point photon source collimated by three simple geometries: (1) a source in a silo; (2) a source behind an infinitely long, vertical, black wall; and (3) a source in a rectangular building. In all three geometries, an optional overhead shield may be specified. The source energy must be between 0.02 and 100 MeV (10 MeV for sources with an overhead shield). This is a user's manual; other references give more detail on the integral line-beam method used by SKYDOSE.
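
    As a rough illustration of the integral line-beam idea (the dose is obtained by integrating a line-beam response function over the emission directions allowed by the collimation), a heavily simplified sketch for the silo geometry is given below. The response function is a made-up placeholder, not a fitted SKYDOSE kernel, and azimuthal symmetry is assumed.

```python
import numpy as np

def skyshine_dose(d, theta_max, response, n=2000):
    """Integral line-beam estimate of the skyshine dose at ground distance d
    from an isotropic point source in a silo that emits only within a cone
    of half-angle theta_max about the vertical.

    response(theta, d) is the line-beam response function: dose per source
    photon at distance d for a beam emitted at polar angle theta."""
    thetas = np.linspace(0.0, theta_max, n)
    # isotropic source: photon fraction per unit polar angle = sin(theta)/2
    integrand = 0.5 * np.sin(thetas) * response(thetas, d)
    return float(np.sum(integrand) * (thetas[1] - thetas[0]))

# Placeholder response function (illustrative shape only, not fitted data).
placeholder_response = lambda theta, d: np.exp(-d / 500.0) * np.cos(theta) / d**2
print(skyshine_dose(d=200.0, theta_max=np.deg2rad(30.0),
                    response=placeholder_response))
```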

  16. Variational data assimilation for the initial-value dynamo problem.

    PubMed

    Li, Kuan; Jackson, Andrew; Livermore, Philip W

    2011-11-01

    The secular variation of the geomagnetic field as observed at the Earth's surface results from the complex magnetohydrodynamics taking place in the fluid core of the Earth. One way to analyze this system is to use the data in concert with an underlying dynamical model of the system through the technique of variational data assimilation, in much the same way as is employed in meteorology and oceanography. The aim is to discover an optimal initial condition that leads to a trajectory of the system in agreement with observations. Taking the Earth's core to be an electrically conducting fluid sphere in which convection takes place, we develop the continuous adjoint forms of the magnetohydrodynamic equations that govern the dynamical system together with the corresponding numerical algorithms appropriate for a fully spectral method. These adjoint equations enable a computationally fast iterative improvement of the initial condition that determines the system evolution. The initial condition depends on the three dimensional form of quantities such as the magnetic field in the entire sphere. For the magnetic field, conservation of the divergence-free condition for the adjoint magnetic field requires the introduction of an adjoint pressure term satisfying a zero boundary condition. We thus find that solving the forward and adjoint dynamo system requires different numerical algorithms. In this paper, an efficient algorithm for numerically solving this problem is developed and tested for two illustrative problems in a whole sphere: one is a kinematic problem with prescribed velocity field, and the second is associated with the Hall-effect dynamo, exhibiting considerable nonlinearity. The algorithm exhibits reliable numerical accuracy and stability. Using both the analytical and the numerical techniques of this paper, the adjoint dynamo system can be solved directly with the same order of computational complexity as that required to solve the forward problem. These numerical techniques form a foundation for ultimate application to observations of the geomagnetic field over the time scale of centuries.
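
    The assimilation loop described above (forward run, misfit, adjoint run, update of the initial condition) can be sketched generically as follows. The linear toy dynamics and its transpose-based adjoint stand in for the spectral MHD forward and adjoint solvers; everything here is illustrative.

```python
import numpy as np

def assimilate_initial_condition(x0, forward, adjoint_grad, observations,
                                 step=0.1, n_iter=500):
    """Steepest-descent sketch of variational data assimilation: run the
    forward model from the current initial condition, form the misfit
    against observations, obtain the gradient with respect to the initial
    condition from the adjoint model, and update."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        residuals = forward(x) - observations
        x -= step * adjoint_grad(residuals)   # gradient of 0.5*sum(res^2)
    return x

# Toy linear dynamics x_{k+1} = M x_k observed at every step: the adjoint of
# the tangent linear model is simply M^T applied the matching number of times.
M = np.array([[0.9, 0.2], [-0.1, 0.95]])
n_obs = 10
forward = lambda x0: np.array([np.linalg.matrix_power(M, k + 1) @ x0
                               for k in range(n_obs)])
adjoint_grad = lambda res: sum(np.linalg.matrix_power(M, k + 1).T @ res[k]
                               for k in range(n_obs))
true_x0 = np.array([1.0, -0.5])
obs = forward(true_x0)
print(assimilate_initial_condition(np.zeros(2), forward, adjoint_grad, obs))
```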

  17. A lumped parameter method of characteristics approach and multigroup kernels applied to the subgroup self-shielding calculation in MPACT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stimpson, Shane G.; Liu, Yuxuan; Collins, Benjamin S.

    An essential component of the neutron transport solver is the resonance self-shielding calculation used to determine equivalence cross sections. The neutron transport code, MPACT, is currently using the subgroup self-shielding method, in which the method of characteristics (MOC) is used to solve purely absorbing fixed-source problems. Recent efforts incorporating multigroup kernels to the MOC solvers in MPACT have reduced runtime by roughly 2×. Applying the same concepts for self-shielding and developing a novel lumped parameter approach to MOC, substantial improvements have also been made to the self-shielding computational efficiency without sacrificing any accuracy. These new multigroup and lumped parameter capabilities have been demonstrated on two test cases: (1) a single lattice with quarter symmetry known as VERA (Virtual Environment for Reactor Applications) Progression Problem 2a and (2) a two-dimensional quarter-core slice known as Problem 5a-2D. From these cases, self-shielding computational time was reduced by roughly 3–4×, with a corresponding 15–20% increase in overall memory burden. An azimuthal angle sensitivity study also shows that only half as many angles are needed, yielding an additional speedup of 2×. In total, the improvements yield roughly a 7–8× speedup. Furthermore, given these performance benefits, these approaches have been adopted as the default in MPACT.
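
    For readers unfamiliar with MOC, the basic sweep relation for a purely absorbing, flat-source region (the kind of fixed-source problem the subgroup method solves) is easy to state; the sketch below shows one segment of such a sweep with placeholder numbers.

```python
import math

def moc_segment(psi_in, sigma_t, q, s):
    """One segment of a method-of-characteristics sweep across a flat-source
    region of a purely absorbing medium.

    psi_out = psi_in*exp(-sigma_t*s) + (q/sigma_t)*(1 - exp(-sigma_t*s))
    psi_avg = q/sigma_t + (psi_in - psi_out)/(sigma_t*s)
    where psi_avg is the track-averaged angular flux used to accumulate
    the region scalar flux."""
    att = math.exp(-sigma_t * s)
    psi_out = psi_in * att + (q / sigma_t) * (1.0 - att)
    psi_avg = q / sigma_t + (psi_in - psi_out) / (sigma_t * s)
    return psi_out, psi_avg

# Illustrative single segment (placeholder cross section, source, and length).
print(moc_segment(psi_in=0.0, sigma_t=0.8, q=1.0, s=0.5))
```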

  18. Naphthalene Planar Laser-Induced Fluorescence Imaging of Orion Multi-Purpose Crew Vehicle Heat Shield Ablation Products

    NASA Astrophysics Data System (ADS)

    Combs, Christopher S.; Clemens, Noel T.; Danehy, Paul M.

    2013-11-01

    The Orion Multi-Purpose Crew Vehicle (MPCV) calls for an ablative heat shield. In order to better design this heat shield and others that will undergo planetary entry, an improved understanding of the ablation process is required. Given that ablation is a multi-physics process involving heat and mass transfer, codes aiming to predict heat shield ablation are in need of experimental data pertaining to the turbulent transport of ablation products for validation. At The University of Texas at Austin, a technique is being developed that uses planar laser-induced fluorescence (PLIF) of a low-temperature sublimating ablator (naphthalene) to visualize the transport of ablation products in a supersonic flow. Since ablation at reentry temperatures can be difficult to recreate in a laboratory setting it is desirable to create a limited physics problem and simulate the ablation process at relatively low temperature conditions using naphthalene. A scaled Orion MPCV model with a solid naphthalene heat shield has been tested in a Mach 5 wind tunnel at various angles of attack in the current work. PLIF images have shown high concentrations of scalar in the capsule wake region, intermittent turbulent structures on the heat shield surface, and interesting details of the capsule shear layer structure. This work was supported by a NASA Office of the Chief Technologist's Space Technology Research Fellowship (NNX11AN55H).

  19. Determining optical and radiation characteristics of cathode ray tubes' glass to be reused as radiation shielding glass

    NASA Astrophysics Data System (ADS)

    Zughbi, A.; Kharita, M. H.; Shehada, A. M.

    2017-07-01

    A new method of recycling the glass of cathode ray tubes (CRTs) is presented in this paper. The glass from CRTs is suggested for use as a raw material for the production of radiation shielding glass. Cathode ray tube glass contains considerable amounts of environmentally hazardous toxic waste, namely heavy metal oxides such as lead oxide (PbO). These heavy metal oxides increase its density, which makes this type of glass nearly equivalent to commercially available shielding glass and therefore a favorable raw material for radiation shielding glass and concrete. The CRT glass was characterized to determine its heavy oxide content, density, refractive index, and radiation shielding properties at different gamma-ray energies, using measurements with a cobalt-60 gamma-ray source and calculations with the XCOM code. Measured and calculated values were in good agreement. The effect of irradiation by cobalt-60 gamma rays on the optical transparency of each part of the CRT glass was also studied; the results showed that some parts of the CRT glass are more resistant to gamma radiation than others. The study showed that the glass of cathode ray tubes can be recycled for use as radiation shielding glass. This proposed use of CRT glass is limited only by the quantity of CRT glass available world-wide.

  20. A lumped parameter method of characteristics approach and multigroup kernels applied to the subgroup self-shielding calculation in MPACT

    DOE PAGES

    Stimpson, Shane G.; Liu, Yuxuan; Collins, Benjamin S.; ...

    2017-07-17

    An essential component of the neutron transport solver is the resonance self-shielding calculation used to determine equivalence cross sections. The neutron transport code, MPACT, is currently using the subgroup self-shielding method, in which the method of characteristics (MOC) is used to solve purely absorbing fixed-source problems. Recent efforts incorporating multigroup kernels to the MOC solvers in MPACT have reduced runtime by roughly 2×. Applying the same concepts for self-shielding and developing a novel lumped parameter approach to MOC, substantial improvements have also been made to the self-shielding computational efficiency without sacrificing any accuracy. These new multigroup and lumped parameter capabilities have been demonstrated on two test cases: (1) a single lattice with quarter symmetry known as VERA (Virtual Environment for Reactor Applications) Progression Problem 2a and (2) a two-dimensional quarter-core slice known as Problem 5a-2D. From these cases, self-shielding computational time was reduced by roughly 3–4×, with a corresponding 15–20% increase in overall memory burden. An azimuthal angle sensitivity study also shows that only half as many angles are needed, yielding an additional speedup of 2×. In total, the improvements yield roughly a 7–8× speedup. Furthermore, given these performance benefits, these approaches have been adopted as the default in MPACT.

  1. A Multigrid NLS-4DVar Data Assimilation Scheme with Advanced Research WRF (ARW)

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Tian, X.

    2017-12-01

    The motions of the atmosphere have multiscale properties in space and/or time, and the background error covariance matrix (B) should thus contain error information at different correlation scales. To obtain an optimal analysis, the multigrid three-dimensional variational data assimilation scheme is widely used to sequentially correct errors from large to small scales. However, introducing the multigrid technique into four-dimensional variational data assimilation is not easy, because of its strong dependence on the adjoint model, which has extremely high costs in code development, maintenance, and updating. In this study, the multigrid technique was introduced into the nonlinear least-squares four-dimensional variational assimilation (NLS-4DVar) method, which is an advanced four-dimensional ensemble-variational method that can be applied without invoking the adjoint models. The multigrid NLS-4DVar (MG-NLS-4DVar) scheme uses the number of grid points to control the scale, with this number doubling when moving from a coarse to a finer grid. Furthermore, the MG-NLS-4DVar scheme not only retains the advantages of NLS-4DVar but also sufficiently corrects multiscale errors to achieve a highly accurate analysis. The effectiveness and efficiency of the proposed MG-NLS-4DVar scheme were evaluated by several groups of observing system simulation experiments using the Advanced Research Weather Research and Forecasting Model. MG-NLS-4DVar outperformed NLS-4DVar, with a lower computational cost.

  2. Modeling Finite Faults Using the Adjoint Wave Field

    NASA Astrophysics Data System (ADS)

    Hjörleifsdóttir, V.; Liu, Q.; Tromp, J.

    2004-12-01

    Time-reversal acoustics, a technique in which an acoustic signal is recorded by an array of transducers, time-reversed, and retransmitted, is used, e.g., in medical therapy to locate and destroy gallstones (for a review see Fink, 1997). As discussed by Tromp et al. (2004), time-reversal techniques for locating sources are closely linked to so-called `adjoint methods' (Talagrand and Courtier, 1987), which may be used to evaluate the gradient of a misfit function. Tromp et al. (2004) illustrate how a (finite) source inversion may be implemented based upon the adjoint wave field by writing the change in the misfit function, δχ, due to a change in the moment-density tensor, δm, as an integral of the adjoint strain field ε†(x, t) over the fault plane Σ: δχ = ∫₀ᵀ ∫_Σ ε†(x, T−t) : δm(x, t) d²x dt. We find that if the real fault plane is located at a distance δh in the direction of the fault normal n̂, then to first order an additional term ∫₀ᵀ ∫_Σ δh(x) ∂_n [ε†(x, T−t) : m(x, t)] d²x dt is added to the change in the misfit function. The adjoint strain is computed by using the time-reversed difference between data and synthetics recorded at all receivers as simultaneous sources and recording the resulting strain on the fault plane. In accordance with time-reversal acoustics, all the resulting waves will constructively interfere at the position of the original source in space and time. The level of convergence will be determined by factors such as the source-receiver geometry, the frequency content of the recorded data and synthetics, and the accuracy of the velocity structure used when back-propagating the wave field. The terms ε†(x, T−t) and ∂_n [ε†(x, T−t) : m(x, t)] can be viewed as sensitivity kernels for the moment density and the fault-plane location, respectively. By looking at these quantities we can make an educated choice of fault parametrization given the data in hand. The process can then be repeated to invert for the best source model, as demonstrated by Tromp et al. (2004) for the magnitude of a point force. In this presentation we explore the applicability of adjoint methods to estimating finite source parameters. Fink, M. (1997), Time reversed acoustics, Physics Today, 50(3), 34-40. Talagrand, O., and P. Courtier (1987), Variational assimilation of meteorological observations with the adjoint vorticity equation. I: Theory, Q. J. R. Meteorol. Soc., 113, 1311-1328. Tromp, J., C. Tape, and Q. Liu (2004), Waveform tomography, adjoint methods, time reversal, and banana-doughnut kernels, Geophys. Jour. Int., in press.

  3. Improved Spacecraft Materials for Radiation Protection

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Cucinotta, Francis A.; Tripathi, Ram K.; Clowdsley, M. S.; Shinn, J. L.; Singleterry, Robert C., Jr.; Thibeault, Sheila Ann; Kim, M.-H. Y.; Heinbockel, John H.; Badhwar, Gautam D.

    2001-01-01

    Methods by which radiation shielding is optimized need to be developed and materials of improved shielding characteristics identified and validated. The galactic cosmic rays (GCR) are very penetrating and the energy absorbed by the astronaut behind the shield is nearly independent of shield composition and even the shield thickness. However, the mix of particles in the transmitted beam changes rapidly with shield material composition and thickness. This results in part from the breakup of the high-energy heavy ions of the GCR which make contributions to biological effects out of proportion to their deposited energy. So the mixture of particles in the radiation field changes with shielding and the control of risk contributions from dominant particle types is critical to reducing the hazard to the astronaut. The risk of biological injury for a given particle type depends on the type of biological effect and is specific to cell or tissue type. Thus, one is faced with choosing materials which may protect a given tissue against a given effect but leave unchanged or even increase the risk of other effects in the same tissue or increase the risks to other adjacent tissues of a different type in the same individual. The optimization of shield composition will then be tied to a specific tissue and risk to that tissue. Such peculiarities arise from the complicated mixture of particles, the nature of their biological response, and the details of their interaction with material constituents. Aside from the understanding of the biological response to specific components, one also needs an accurate understanding of the radiation emerging from the shield material. This latter subject has been a principal element of this project. In the past ten years our understanding of space radiation interactions with materials has changed radically, with a large impact on shield design. For example, the NCRP estimated that only 2 g/sq cm. of aluminum would be required to meet the annual 500 mSv limit for the exposure of the blood forming organs (this limit is strictly for LEO but can be used as a guideline for the Mars mission analysis). The current estimates require aluminum shield thicknesses above 50 g/sq cm., which is impractical. In such a heavily shielded vehicle, the neutrons produced throughout the vehicle also contribute significantly to the exposure and this demands greater care in describing the angular dependence of secondary particle production processes. As such the continued testing of databases and transport procedures in laboratory and spaceflight experiments has continued. This has been the focus of much of the last year's activity and has resulted in improved neutron prediction capability. These new methods have also improved our understanding of the surface environment of Mars. The Mars 2003 NRA HEDS related surface science requirements were driven by the need to validate predictions on the upward flux of neutrons produced in the Martian regolith and bedrock made by the codes developed under this project. The codes used in the surface environment definition are also being used to look at in situ resources for the development of construction material for Martian surface facilities. For example, synthesis of polyimides and polyethylene as binders of regolith for developing basic structural elements has been studied and targets built for accelerator beam testing of radiation shielding properties. Preliminary mechanical tests have also been promising. 
Improved spacecraft materials have been identified (using the criteria reported by this project at the last conference) as potentially important for future shielding materials. These are liquid hydrogen, hydrogenated nanofibers, liquid methane, LiH, Polyethylene, Polysulfone, and Polyetherimide (in order of decreasing shield performance). Some of the materials are multifunctional and are required for other onboard systems. We are currently preparing software for trade studies with these materials relative to the Mars Reference Mission as required in the project's final year.

  4. Comparison of Model Calculations of Biological Damage from Exposure to Heavy Ions with Measurements

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee Y.; Hada, Megumi; Cucinotta, Francis A.; Wu, Honglu

    2014-01-01

    The space environment consists of a varying field of radiation particles including high-energy ions, with spacecraft shielding material providing the major protection to astronauts from harmful exposure. Unlike low-LET gamma or X rays, the presence of shielding does not always reduce the radiation risks for energetic charged-particle exposure. The dose delivered by a charged particle increases sharply at the Bragg peak. However, the Bragg curve does not necessarily represent the biological damage along the particle path, since biological effects are influenced by the track structures of both primary and secondary particles. Therefore, the "biological Bragg curve" is dependent on the energy and the type of the primary particle and may vary for different biological end points. Measurements of the induction of micronuclei (MN) have been made across the Bragg curve in human fibroblasts exposed to energetic silicon and iron ions in vitro at two different energies, 300 MeV/nucleon and 1 GeV/nucleon. Although the data did not reveal an increased yield of MN at the location of the Bragg peak, increased inhibition of cell progression, which is related to cell death, was found at the Bragg peak location. These results are compared to calculations of biological damage using a stochastic Monte-Carlo track structure model, the Galactic Cosmic Ray Event-based Risk Model (GERM) code (Cucinotta, et al., 2011). The GERM code estimates the basic physical properties along the passage of heavy ions in tissue and shielding materials, by which the experimental set-up can be interpreted. The code can also be used to describe the biophysical events of interest in radiobiology, cancer therapy, and space exploration. The calculation has shown that the severely damaged cells at the Bragg peak are more likely to go through reproductive death, the so-called "overkill".

  5. Analyses of risks associated with radiation exposure from past major solar particle events

    NASA Technical Reports Server (NTRS)

    Weyland, Mark D.; Atwell, William; Cucinotta, Francis A.; Wilson, John W.; Hardy, Alva C.

    1991-01-01

    Radiation exposures and cancer induction/mortality risks were investigated for several major solar particle events (SPEs). The SPEs included are: February 1956, November 1960, August 1972, October 1989, and the September, August, and October 1989 events combined. The three 1989 events were treated as one, since all three could affect a single lunar or Mars mission. A baryon transport code was used to propagate particles through aluminum and tissue shield materials. A free-space environment was utilized for all calculations. Results show that the 30-day blood forming organs (BFO) limit of 25 rem was surpassed by all five events using 10 g/sq cm of shielding. The BFO limit is based on a depth dose of 5 cm of tissue, while a more detailed shield distribution of the BFOs was utilized here; a comparison between the 5 cm depth dose and the dose found using the BFO shield distribution shows that the 5 cm depth value is slightly higher than the BFO dose. The annual limit of 50 rem was exceeded by the August 1972, October 1989, and the three combined 1989 events with 5 g/sq cm of shielding. Cancer mortality risks ranged from 1.5 to 17 percent at 1 g/sq cm and 0.5 to 1.1 percent behind 10 g/sq cm of shielding for the five events; these ranges correspond to those for a 45-year-old male. It is shown that secondary particles comprise about 1/3 of the total risk at 10 g/sq cm of shielding. For the August 1972 SPE, utilizing a computerized Space Shuttle shielding model to represent a typical spacecraft configuration in free space, average crew doses exceeded the BFO dose limit.

  6. Investigation of Radiation Protection Methodologies for Radiation Therapy Shielding Using Monte Carlo Simulation and Measurement

    NASA Astrophysics Data System (ADS)

    Tanny, Sean

    The advent of high-energy linear accelerators for dedicated medical use in the 1950s by Henry Kaplan and the Stanford University physics department began a revolution in radiation oncology. Today, linear accelerators are the standard of care for modern radiation therapy and can generate high-energy beams that produce tens of Gy per minute at isocenter. This creates a need for a large amount of shielding material to properly protect members of the public and hospital staff. Standardized vault designs and guidance on the shielding properties of various materials are provided by the National Council on Radiation Protection (NCRP) Report 151. However, physicists are seeking ways to minimize the footprint and volume of shielding material needed, which leads to the use of non-standard vault configurations and less-studied materials, such as high-density concrete. The University of Toledo Dana Cancer Center has utilized both of these methods to minimize the cost and spatial footprint of the requisite radiation shielding. To ensure a safe work environment, computer simulations were performed to verify the attenuation properties and shielding workloads produced by a variety of situations where standard recommendations and guidance documents were insufficient. This project studies two areas of concern that are not addressed by NCRP 151: the radiation shielding workload for a vault door with a non-standard design, and the attenuation properties of high-density concrete for both photon and neutron radiation. Simulations have been performed using the Monte Carlo N-Particle 5 (MCNP5) code produced by Los Alamos National Laboratory (LANL). Measurements have been performed using a shielding test port designed into the maze of the Varian Edge treatment vault.

  7. Mitigation of Engine Inlet Distortion Through Adjoint-Based Design

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian; Rallabhandi, Sriram; Nielsen, Eric J.; Diskin, Boris

    2017-01-01

    The adjoint-based design capability in FUN3D is extended to allow efficient gradient-based optimization and design of concepts with highly integrated aero-propulsive systems. A circumferential distortion calculation, along with the derivatives needed to perform adjoint-based design, has been implemented in FUN3D. This newly implemented distortion calculation can be used not only for design but also to drive the existing mesh adaptation process and reduce the error associated with the fan distortion calculation. The design capability is demonstrated by the shape optimization of an in-house aircraft concept equipped with an aft fuselage propulsor. The optimization objective is the minimization of flow distortion at the aerodynamic interface plane of this aft fuselage propulsor.
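
    For readers unfamiliar with circumferential distortion objectives, the sketch below computes a simple ring-based descriptor (face-average total pressure minus the worst sliding-sector average, normalized by the face average). This is an illustrative, hypothetical definition, not the specific metric implemented in FUN3D.

```python
import numpy as np

def circumferential_distortion(p_total_ring, sector_width=60.0, dtheta=1.0):
    """Illustrative circumferential distortion descriptor for one ring of
    total-pressure samples at the aerodynamic interface plane:
    (ring average - worst sliding-sector average) / ring average.

    p_total_ring  total pressures sampled at equal angular spacing dtheta [deg]
    sector_width  width of the sliding sector [deg]
    """
    p = np.asarray(p_total_ring, dtype=float)
    n_sector = int(round(sector_width / dtheta))
    pp = np.concatenate([p, p[:n_sector]])               # wrap-around
    sector_means = np.array([pp[i:i + n_sector].mean() for i in range(len(p))])
    return (p.mean() - sector_means.min()) / p.mean()

# Fabricated ring with a localized total-pressure deficit (illustration only).
theta = np.arange(0.0, 360.0, 1.0)
ring = 101325.0 * (1.0 - 0.05 * np.exp(-((theta - 180.0) / 30.0) ** 2))
print(circumferential_distortion(ring))
```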

  8. On the symmetry of the boundary conditions of the volume potential

    NASA Astrophysics Data System (ADS)

    Kal'menov, Tynysbek Sh.; Arepova, Gaukhar; Suragan, Durvudkhan

    2017-09-01

    It is well known that the volume potential determines the mass or the charge distributed over a domain with density f. The volume potential is extensively used in function theory and in embedding theorems. It is also well known that the volume potential gives a solution to an inhomogeneous equation, and that it generates a linear self-adjoint operator. It is known that self-adjoint differential operators are generated by boundary conditions. In our previous papers, a boundary condition on the volume potential was given for an arbitrary domain. Previously, it was not possible to prove the self-adjointness of the boundary conditions obtained. In the present paper, we prove the symmetry of the boundary condition for the volume potential.
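
    For reference (standard material, not quoted from the abstract), the three-dimensional Newton volume potential of a density f on a bounded domain Ω is

    \[
    u(x) \;=\; \int_{\Omega} \frac{f(y)}{4\pi\,|x-y|}\, dy,
    \qquad -\Delta u = f \quad \text{in } \Omega,
    \]

    and, roughly speaking, the question addressed by the authors is which boundary condition on ∂Ω singles out exactly this solution operator and whether that condition is symmetric.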

  9. A Study of Neutron Leakage in Finite Objects

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2015-01-01

    A computationally efficient 3DHZETRN code capable of simulating High charge (Z) and Energy (HZE) and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation was recently developed for simple shielded objects. Monte Carlo (MC) benchmarks were used to verify the 3DHZETRN methodology in slab and spherical geometry, and it was shown that 3DHZETRN agrees with MC codes to the degree that various MC codes agree among themselves. One limitation in the verification process is that all of the codes (3DHZETRN and three MC codes) utilize different nuclear models/databases. In the present report, the new algorithm, with well-defined convergence criteria, is used to quantify the neutron leakage from simple geometries to provide means of verifying 3D effects and to provide guidance for further code development.

  10. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frambati, S.; Frignani, M.

    2012-07-01

    We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in the computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)

  11. Shielding analyses for repetitive high energy pulsed power accelerators

    NASA Astrophysics Data System (ADS)

    Jow, H. N.; Rao, D. V.

    Sandia National Laboratories (SNL) designs, tests and operates a variety of accelerators that generate large amounts of high energy Bremsstrahlung radiation over an extended time. Typically, groups of similar accelerators are housed in a large building that is inaccessible to the general public. To facilitate independent operation of each accelerator, test cells are constructed around each accelerator to shield it from the radiation workers occupying surrounding test cells and work-areas. These test cells, about 9 ft. high, are constructed of high density concrete block walls that provide direct radiation shielding. Above the target areas (radiation sources), lead or steel plates are used to minimize skyshine radiation. Space, accessibility and cost considerations impose certain restrictions on the design of these test cells. SNL Health Physics division is tasked to evaluate the adequacy of each test cell design and compare resultant dose rates with the design criteria stated in DOE Order 5480.11. In response, SNL Health Physics has undertaken an intensive effort to assess existing radiation shielding codes and compare their predictions against measured dose rates. This paper provides a summary of the effort and its results.

  12. Monte Carlo simulation of photon buildup factors for shielding materials in diagnostic x-ray facilities.

    PubMed

    Kharrati, Hedi; Agrebi, Amel; Karoui, Mohamed Karim

    2012-10-01

    A simulation of buildup factors for ordinary concrete, steel, lead, plate glass, lead glass, and gypsum wallboard in broad beam geometry for photon energies from 10 keV to 150 keV at 5 keV intervals is presented. The Monte Carlo N-particle radiation transport computer code has been used to determine the buildup factors for the studied shielding materials. An example illustrating the use of the obtained buildup factor data in computing the broad beam transmission for tube potentials of 70, 100, 120, and 140 kVp is given. The half value layer, the tenth value layer, and the equilibrium tenth value layer are calculated from the broad beam transmission for these tube potentials. The obtained values, compared with those calculated from published data, show the ability of these data to predict shielding transmission curves. Therefore, the buildup factor data can be combined with primary, scatter, and leakage x-ray spectra to provide a computationally based solution to broad beam transmission for barriers in shielding x-ray facilities.
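
    The HVL/TVL step described above amounts to inverting the broad-beam transmission curve T(x) = B(mu*x)*exp(-mu*x). A small sketch is shown below; the buildup model and attenuation coefficient are placeholders, not the paper's Monte Carlo data.

```python
import math
from scipy.optimize import brentq

def broad_beam_transmission(x, mu, buildup):
    """Broad-beam transmission through thickness x [cm]:
    T(x) = B(mu*x) * exp(-mu*x), with B the dose buildup factor."""
    return buildup(mu * x) * math.exp(-mu * x)

def value_layer(fraction, mu, buildup, x_max=100.0):
    """Thickness at which the broad-beam transmission equals `fraction`
    (0.5 -> half value layer, 0.1 -> tenth value layer)."""
    return brentq(lambda x: broad_beam_transmission(x, mu, buildup) - fraction,
                  1e-9, x_max)

# Placeholder linear buildup model and nominal attenuation coefficient
# (illustrative only, not fitted to the paper's data).
B = lambda mfp: 1.0 + 0.5 * mfp
mu = 2.0  # 1/cm
print(value_layer(0.5, mu, B), value_layer(0.1, mu, B))
```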

  13. Monte Carlo simulation of photon buildup factors for shielding materials in diagnostic x-ray facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kharrati, Hedi; Agrebi, Amel; Karoui, Mohamed Karim

    2012-10-15

    Purpose: A simulation of buildup factors for ordinary concrete, steel, lead, plate glass, lead glass, and gypsum wallboard in broad beam geometry for photon energies from 10 keV to 150 keV at 5 keV intervals is presented. Methods: The Monte Carlo N-particle radiation transport computer code has been used to determine the buildup factors for the studied shielding materials. Results: An example illustrating the use of the obtained buildup factor data in computing the broad beam transmission for tube potentials of 70, 100, 120, and 140 kVp is given. The half value layer, the tenth value layer, and the equilibrium tenth value layer are calculated from the broad beam transmission for these tube potentials. Conclusions: The obtained values, compared with those calculated from published data, show the ability of these data to predict shielding transmission curves. Therefore, the buildup factor data can be combined with primary, scatter, and leakage x-ray spectra to provide a computationally based solution to broad beam transmission for barriers in shielding x-ray facilities.

  14. Numerical simulation of experiments in the Giant Planet Facility

    NASA Technical Reports Server (NTRS)

    Green, M. J.; Davy, W. C.

    1979-01-01

    Utilizing a series of existing computer codes, ablation experiments in the Giant Planet Facility are numerically simulated. Of primary importance is the simulation of the low Mach number shock layer that envelops the test model. The RASLE shock-layer code, used in the Jupiter entry probe heat-shield design, is adapted to the experimental conditions. RASLE predictions for radiative and convective heat fluxes are in good agreement with calorimeter measurements. In simulating carbonaceous ablation experiments, the RASLE code is coupled directly with the CMA material response code. For the graphite models, predicted and measured recessions agree very well. Predicted recession for the carbon phenolic models is 50% higher than that measured. This is the first time codes used for the Jupiter probe design have been compared with experiments.

  15. Shielding from space radiations

    NASA Technical Reports Server (NTRS)

    Chang, C. Ken; Badavi, Forooz F.; Tripathi, Ram K.

    1993-01-01

    This Progress Report, covering the period of December 1, 1992 to June 1, 1993, presents the development of an analytical solution to the heavy ion transport equation in terms of a Green's function formalism. The results of the mathematical development are recast into a highly efficient computer code for space applications. The efficiency of this algorithm is accomplished by a nonperturbative technique of extending the Green's function over the solution domain. The code may also be applied with accelerator boundary conditions to allow code validation in laboratory experiments. Results from the isotopic version of the code, with 59 isotopes present, are presented for a single-layer target material for the case of an iron beam projectile at 600 MeV/nucleon in water. A listing of the single-layer isotopic version of the code is included.

  16. Physics of the Isotopic Dependence of Galactic Cosmic Ray Fluence Behind Shielding

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Saganti, Premkumar B.; Hu, Xiao-Dong; Kim, Myung-Hee Y.; Cleghorn, Timothy F.; Wilson, John W.; Tripathi, Ram K.; Zeitlin, Cary J.

    2003-01-01

    For over 25 years, NASA has supported the development of space radiation transport models for shielding applications. The NASA space radiation transport model now predicts dose and dose equivalent in Earth and Mars orbit to an accuracy of plus or minus 20%. However, because larger errors may occur in particle fluence predictions, there is interest in further assessments and improvements in NASA's space radiation transport model. In this paper, we consider the effects of the isotopic composition of the primary galactic cosmic rays (GCR) and the isotopic dependence of nuclear fragmentation cross-sections on the solution to transport models used for shielding studies. Satellite measurements are used to describe the isotopic composition of the GCR. Using NASA's quantum multiple-scattering theory of nuclear fragmentation (QMSFRG) and high-charge and energy (HZETRN) transport code, we study the effect of the isotopic dependence of the primary GCR composition and secondary nuclei on shielding calculations. The QMSFRG is shown to accurately describe the iso-spin dependence of nuclear fragmentation. The principal finding of this study is that large errors (plus or minus 100%) will occur in the mass-fluence spectra when comparing transport models that use a complete isotope grid (approximately 170 ions) to ones that use a reduced isotope grid, for example the 59-ion grid used in the HZETRN code in the past; however, less significant errors (less than 20%) occur in the elemental-fluence spectra. Because a complete isotope grid is readily handled on small computer workstations and is needed for several applications studying GCR propagation and scattering, it is recommended that complete isotope grids be used for future GCR studies.

  17. Space Debris Surfaces (Computer Code): Probability of No Penetration Versus Impact Velocity and Obliquity

    NASA Technical Reports Server (NTRS)

    Elfer, N.; Meibaum, R.; Olsen, G.

    1995-01-01

    A unique collection of computer codes, Space Debris Surfaces (SD_SURF), has been developed to assist in the design and analysis of space debris protection systems. SD_SURF calculates and summarizes a vehicle's vulnerability to space debris as a function of impact velocity and obliquity. An SD_SURF analysis will show which velocities and obliquities are the most probable to cause a penetration. This determination can help the analyst select a shield design that is best suited to the predominant penetration mechanism. The analysis also suggests the most suitable parameters for development or verification testing. The SD_SURF programs offer the option of either FORTRAN programs or Microsoft EXCEL spreadsheets and macros. The FORTRAN programs work with BUMPERII. The EXCEL spreadsheets and macros can be used independently or with selected output from the SD_SURF FORTRAN programs. Examples are presented of the interaction between space vehicle geometry, the space debris environment, and the penetration and critical damage ballistic limit surfaces of the shield under consideration.

  18. Trajectory-based heating analysis for the European Space Agency/Rosetta Earth Return Vehicle

    NASA Technical Reports Server (NTRS)

    Henline, William D.; Tauber, Michael E.

    1994-01-01

    A coupled, trajectory-based flowfield and material thermal-response analysis is presented for the European Space Agency proposed Rosetta comet nucleus sample return vehicle. The probe returns to earth along a hyperbolic trajectory with an entry velocity of 16.5 km/s and requires an ablative heat shield on the forebody. Combined radiative and convective ablating flowfield analyses were performed for the significant heating portion of the shallow ballistic entry trajectory. Both quasisteady ablation and fully transient analyses were performed for a heat shield composed of carbon-phenolic ablative material. Quasisteady analysis was performed using the two-dimensional axisymmetric codes RASLE and BLIMPK. Transient computational results were obtained from the one-dimensional ablation/conduction code CMA. Results are presented for heating, temperature, and ablation rate distributions over the probe forebody for various trajectory points. Comparison of transient and quasisteady results indicates that, for the heating pulse encountered by this probe, the quasisteady approach is conservative from the standpoint of predicted surface recession.

  19. Novel Scalable 3-D MT Inverse Solver

    NASA Astrophysics Data System (ADS)

    Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.

    2016-12-01

    We present a new, robust and fast, three-dimensional (3-D) magnetotelluric (MT) inverse solver. As a forward modelling engine, the highly scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits an adjoint-source approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT (single-site and/or inter-site) responses, and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of available computational resources for a given problem setup. To parameterize an inverse domain, a mask approach is implemented, which means that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on platforms ranging from modern laptops to high-performance clusters, demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.
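
    The inversion loop described above (quasi-Newton optimization driven by adjoint-computed gradients of a regularized misfit) can be sketched as follows. A dense linear operator G stands in for the 3-D MT forward solver extrEMe, and the misfit is a plain least-squares functional with Tikhonov regularization; both are simplifying assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of a regularized, gradient-based (quasi-Newton / L-BFGS)
# inversion in which the gradient is obtained by an adjoint-type computation:
# applying the transpose (adjoint) of the forward operator to the residual.

rng = np.random.default_rng(0)
n_model, n_data = 50, 30
G = rng.standard_normal((n_data, n_model))        # stand-in forward operator
m_true = rng.standard_normal(n_model)
d_obs = G @ m_true + 0.01 * rng.standard_normal(n_data)
lam = 1e-2                                        # regularization weight

def misfit_and_gradient(m):
    r = G @ m - d_obs                             # data residual
    phi = 0.5 * r @ r + 0.5 * lam * m @ m         # misfit + Tikhonov term
    grad = G.T @ r + lam * m                      # "adjoint" gradient
    return phi, grad

result = minimize(misfit_and_gradient, np.zeros(n_model),
                  jac=True, method="L-BFGS-B")
print("converged:", result.success, " final misfit:", result.fun)
```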

  20. Integral nuclear data validation using experimental spent nuclear fuel compositions

    DOE PAGES

    Gauld, Ian C.; Williams, Mark L.; Michel-Sendis, Franco; ...

    2017-07-19

    Measurements of the isotopic contents of spent nuclear fuel provide experimental data that are a prerequisite for validating computer codes and nuclear data for many spent fuel applications. Under the auspices of the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) and guidance of the Expert Group on Assay Data of Spent Nuclear Fuel of the NEA Working Party on Nuclear Criticality Safety, a new database of expanded spent fuel isotopic compositions has been compiled. The database, Spent Fuel Compositions (SFCOMPO) 2.0, includes measured data for more than 750 fuel samples acquired from 44 different reactors and representing eight different reactor technologies. Measurements for more than 90 isotopes are included. This new database provides data essential for establishing the reliability of code systems for inventory predictions, but it also has broader potential application to nuclear data evaluation. Furthermore, the database is described together with adjoint-based sensitivity and uncertainty tools for transmutation systems, developed to quantify the importance of nuclear data to nuclide concentrations.

  1. Integral nuclear data validation using experimental spent nuclear fuel compositions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauld, Ian C.; Williams, Mark L.; Michel-Sendis, Franco

    Measurements of the isotopic contents of spent nuclear fuel provide experimental data that are a prerequisite for validating computer codes and nuclear data for many spent fuel applications. Under the auspices of the Organisation for Economic Co-operation and Development (OECD) Nuclear Energy Agency (NEA) and guidance of the Expert Group on Assay Data of Spent Nuclear Fuel of the NEA Working Party on Nuclear Criticality Safety, a new database of expanded spent fuel isotopic compositions has been compiled. The database, Spent Fuel Compositions (SFCOMPO) 2.0, includes measured data for more than 750 fuel samples acquired from 44 different reactors and representing eight different reactor technologies. Measurements for more than 90 isotopes are included. This new database provides data essential for establishing the reliability of code systems for inventory predictions, but it also has broader potential application to nuclear data evaluation. Furthermore, the database is described together with adjoint-based sensitivity and uncertainty tools for transmutation systems, developed to quantify the importance of nuclear data to nuclide concentrations.

  2. Naturally induced secondary radiation in interplanetary space: Preliminary analyses for gamma radiation and radioisotope production from thermal neutron activation

    NASA Technical Reports Server (NTRS)

    Plaza-Rosado, Heriberto

    1991-01-01

    Thermal neutron activation analyses were carried out for various space systems components to determine gamma radiation dose rates and food radiation contamination levels. The space systems components selected were those for which previous radiation studies existed. These include manned space vehicle radiation shielding, liquid hydrogen propellant tanks for a Mars mission, and a food supply used as space vehicle radiation shielding. The computational method used is based on the fast neutron distribution generated by the BRYNTRN and HZETRN transport codes for Galactic Cosmic Rays (GCR) at solar minimum conditions and intense solar flares in space systems components. The gamma dose rates for soft tissue are calculated for water and aluminum space vehicle slab shields considering volumetric source self-attenuation and exponential buildup factors. In the case of the lunar habitat with regolith shielding, a completely exposed spherical habitat was assumed for mathematical convenience and conservative calculations. Activation analysis of the food supply used as radiation shielding is presented for four selected nutrients: potassium, calcium, sodium, and phosphorus. Radioactive isotopes that could represent a health hazard if ingested are identified, and their concentrations are determined. For nutrients soluble in water, it was found that all induced radioactivity was below the accepted maximum permissible concentrations.

  3. Naturally induced secondary radiation in interplanetary space: Preliminary analyses for gamma radiation and radioisotope production from thermal neutron activation

    NASA Astrophysics Data System (ADS)

    Plaza-Rosado, Heriberto

    1991-09-01

    Thermal neutron activation analyses were carried out for various space systems components to determine gamma radiation dose rates and food radiation contamination levels. The space systems components selected were those for which previous radiation studies existed. These include manned space vehicle radiation shielding, liquid hydrogen propellant tanks for a Mars mission, and a food supply used as space vehicle radiation shielding. The computational method used is based on the fast neutron distribution generated by the BRYNTRN and HZETRN transport codes for Galactic Cosmic Rays (GCR) at solar minimum conditions and intense solar flares in space systems components. The gamma dose rates for soft tissue are calculated for water and aluminum space vehicle slab shields considering volumetric source self-attenuation and exponential buildup factors. In the case of the lunar habitat with regolith shielding, a completely exposed spherical habitat was assumed for mathematical convenience and conservative calculations. Activation analysis of the food supply used as radiation shielding is presented for four selected nutrients: potassium, calcium, sodium, and phosphorus. Radioactive isotopes that could represent a health hazard if ingested are identified, and their concentrations are determined. For nutrients soluble in water, it was found that all induced radioactivity was below the accepted maximum permissible concentrations.

  4. Coupled Ablation, Heat Conduction, Pyrolysis, Shape Change and Spallation of the Galileo Probe

    NASA Technical Reports Server (NTRS)

    Milos, Frank S.; Chen, Y.-K.; Rasky, Daniel J. (Technical Monitor)

    1995-01-01

    The Galileo probe enters the atmosphere of Jupiter in December 1995. This paper presents numerical methodology and detailed results of our final pre-impact calculations for the heat shield response. The calculations are performed using a highly modified version of a viscous shock layer code with massive radiation coupled with a surface thermochemical ablation and spallation model and with the transient in-depth thermal response of the charring and ablating heat shield. The flowfield is quasi-steady along the trajectory, but the heat shield thermal response is dynamic. Each surface node of the VSL grid is coupled with a one-dimensional thermal response calculation. The thermal solver includes heat conduction, pyrolysis, and grid movement owing to surface recession. Initial conditions for the heat shield temperature and density were obtained from the high altitude rarefied-flow calculations of Haas and Milos. Galileo probe surface temperature, shape, mass flux, and element flux are all determined as functions of time along the trajectory with spallation varied parametrically. The calculations also estimate the in-depth density and temperature profiles for the heat shield. All this information is required to determine the time-dependent vehicle mass and drag coefficient which are necessary inputs for the atmospheric reconstruction experiment on board the probe.

  5. Transmutation approximations for the application of hybrid Monte Carlo/deterministic neutron transport to shutdown dose rate analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D.; Wilson, Paul P. H.

    In fusion energy systems (FES) neutrons born from burning plasma activate system components. The photon dose rate after shutdown from resulting radionuclides must be quantified. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for coupled multiphysics problems, including SDR analysis, using deterministic estimates of adjoint flux distributions. When used for SDR analysis, MS-CADIS requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work, transmutation approximations are used to derive a solution for this adjoint neutron source. It is shown that these approximations are reasonably met for typical FES neutron spectra and materials over a range of irradiation scenarios. When these approximations are met, the Groupwise Transmutation (GT)-CADIS method, proposed here, can be used effectively. GT-CADIS is an implementation of the MS-CADIS method for SDR analysis that uses a series of single-energy-group irradiations to calculate the adjoint neutron source. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward-Weighted (FW)-CADIS method and (9 ± 5) × 10^4 relative to analog. As a result, this work shows that GT-CADIS is broadly applicable to FES problems and will significantly reduce the computational resources necessary for SDR analysis.
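
    A hedged sketch of the GT-CADIS bookkeeping as described in the abstract: single-energy-group irradiations yield, for each material region and neutron group, the delayed-photon emission per unit neutron flux, and folding that operator with the adjoint photon flux (whose source is the dose-rate response) gives the adjoint neutron source. The array shapes, data, and the operator name T below are placeholders, not the paper's implementation.

```python
import numpy as np

# Hedged sketch of building an adjoint neutron source for SDR analysis.
# T[c, g, h]: delayed photons emitted in photon group h per unit neutron flux
#             in neutron group g in cell c (from single-group irradiations and
#             activation calculations); placeholder numbers here.
# adj_photon_flux[c, h]: adjoint photon flux (importance to the shutdown dose).

n_cells, n_ngroups, n_ggroups = 4, 6, 5
rng = np.random.default_rng(1)

T = rng.random((n_cells, n_ngroups, n_ggroups))
adj_photon_flux = rng.random((n_cells, n_ggroups))

# Adjoint neutron source: q_dag[c, g] = sum_h T[c, g, h] * adj_photon_flux[c, h]
q_dag_neutron = np.einsum("cgh,ch->cg", T, adj_photon_flux)

print("adjoint neutron source per (cell, neutron group):")
print(q_dag_neutron)
```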

  6. Sensitivity analysis of a model of CO2 exchange in tundra ecosystems by the adjoint method

    NASA Technical Reports Server (NTRS)

    Waelbroek, C.; Louis, J.-F.

    1995-01-01

    A model of net primary production (NPP), decomposition, and nitrogen cycling in tundra ecosystems has been developed. The adjoint technique is used to study the sensitivity of the computed annual net CO2 flux to perturbations in initial conditions, climatic inputs, and the model's main parameters describing current seasonal CO2 exchange in wet sedge tundra at Barrow, Alaska. The results show that net CO2 flux is most sensitive to parameters characterizing litter chemical composition and more sensitive to decomposition parameters than to NPP parameters. This underlines the fact that in nutrient-limited ecosystems, decomposition drives net CO2 exchange by controlling mineralization of main nutrients. The results also indicate that the short-term (1 year) response of wet sedge tundra to CO2-induced warming is a significant increase in CO2 emission, creating a positive feedback to atmospheric CO2 accumulation. However, a cloudiness increase during the same year can severely alter this response and lead to either a slight decrease or a strong increase in emitted CO2, depending on its exact timing. These results demonstrate that the adjoint method is well suited to study systems encountering regime changes, as a single run of the adjoint model provides sensitivities of the net CO2 flux to perturbations in all parameters and variables at any time of the year. Moreover, it is shown that large errors due to the presence of thresholds can be avoided by first delimiting the range of applicability of the adjoint results.
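
    The property exploited above, that a single adjoint run yields the sensitivity of the net CO2 flux to all parameters at once, can be demonstrated on a toy one-box model. The model, parameters, and time stepping below are invented for illustration and are unrelated to the tundra model itself; the hand-written backward sweep is checked against finite differences.

```python
import numpy as np

# Toy one-box model with parameters p = (npp, k).  A single backward (adjoint)
# sweep returns the sensitivity of the annual net flux J to all parameters.

dt, nsteps, C0 = 1.0 / 365.0, 365, 10.0

def forward(npp, k):
    C, J, traj = C0, 0.0, [C0]
    for _ in range(nsteps):
        F = npp - k * C               # instantaneous net flux
        J += F * dt                   # annual net flux (the response)
        C += F * dt                   # box update
        traj.append(C)
    return J, traj

def adjoint_gradient(npp, k, traj):
    dJ_dnpp, dJ_dk, lam = 0.0, 0.0, 0.0   # lam = dJ/dC_{t+1}, zero at the final step
    for t in reversed(range(nsteps)):
        C = traj[t]
        # J and C_{t+1} both receive F_t * dt; collect parameter sensitivities
        dJ_dnpp += (1.0 + lam) * dt
        dJ_dk   += (1.0 + lam) * (-C) * dt
        # propagate the adjoint variable back to dJ/dC_t
        lam = -k * dt + lam * (1.0 - k * dt)
    return dJ_dnpp, dJ_dk

npp, k = 5.0, 0.3
J, traj = forward(npp, k)
g = adjoint_gradient(npp, k, traj)
eps = 1e-6
fd = ((forward(npp + eps, k)[0] - J) / eps, (forward(npp, k + eps)[0] - J) / eps)
print("adjoint:", g, " finite-difference:", fd)
```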

  7. Transmutation approximations for the application of hybrid Monte Carlo/deterministic neutron transport to shutdown dose rate analysis

    DOE PAGES

    Biondo, Elliott D.; Wilson, Paul P. H.

    2017-05-08

    In fusion energy systems (FES) neutrons born from burning plasma activate system components. The photon dose rate after shutdown from resulting radionuclides must be quantified. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for coupled multiphysics problems, including SDR analysis, using deterministic estimates of adjoint flux distributions. When used for SDR analysis, MS-CADIS requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work, transmutation approximations are used to derive a solution for this adjoint neutron source. It is shown that these approximations are reasonably met for typical FES neutron spectra and materials over a range of irradiation scenarios. When these approximations are met, the Groupwise Transmutation (GT)-CADIS method, proposed here, can be used effectively. GT-CADIS is an implementation of the MS-CADIS method for SDR analysis that uses a series of single-energy-group irradiations to calculate the adjoint neutron source. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward-Weighted (FW)-CADIS method and (9 ± 5) × 10^4 relative to analog. As a result, this work shows that GT-CADIS is broadly applicable to FES problems and will significantly reduce the computational resources necessary for SDR analysis.

  8. Linear Array Ambient Noise Adjoint Tomography Reveals Intense Crust-Mantle Interactions in North China Craton

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Yao, Huajian; Liu, Qinya; Zhang, Ping; Yuan, Yanhua O.; Feng, Jikun; Fang, Lihua

    2018-01-01

    We present a 2-D ambient noise adjoint tomography technique for a linear array with a significant reduction in computational cost and show its application to an array in North China. We first convert the observed data for 3-D media, i.e., surface-wave empirical Green's functions (EGFs), to reconstructed EGFs (REGFs) for 2-D media using a 3-D/2-D transformation scheme. Different from the conventional steps of measuring phase dispersion, this technique inverts directly for 2-D shear wave speeds along the profile from the REGFs. Starting from an initial model based on traditional ambient noise tomography, adjoint tomography updates the model by minimizing the frequency-dependent Rayleigh wave traveltime delays between the REGFs and synthetic Green's functions calculated by the spectral-element method. The multitaper traveltime difference measurement is applied in four period bands: 20-35 s, 15-30 s, 10-20 s, and 6-15 s. The recovered model shows detailed crustal structures including pronounced low-velocity anomalies in the lower crust and a gradual crust-mantle transition zone beneath the northern Trans-North China Orogen, which suggest possible intense thermo-chemical interactions between mantle-derived upwelling melts and the lower crust, probably associated with magmatic underplating during the Mesozoic to Cenozoic evolution of this region. To our knowledge, this is the first time that ambient noise adjoint tomography has been implemented for a 2-D medium. Compared with the intensive computational cost and storage requirement of 3-D adjoint tomography, this method offers a computationally efficient and inexpensive alternative for imaging fine-scale crustal structures beneath linear arrays.
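
    The misfit ingredient above, a frequency-band traveltime delay between reconstructed and synthetic Green's functions, can be illustrated with a plain cross-correlation measurement (the paper uses a multitaper measurement, which is more involved). The traces, band limits, and shift below are synthetic placeholders.

```python
import numpy as np

# Band-limited traveltime delay via plain cross-correlation (a stand-in for
# the multitaper measurement): band-pass both traces, find the lag maximizing
# their cross-correlation, and convert it to seconds.

def bandpass(trace, dt, fmin, fmax):
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(len(trace), dt)
    spec[(freqs < fmin) | (freqs > fmax)] = 0.0
    return np.fft.irfft(spec, n=len(trace))

def traveltime_delay(obs, syn, dt, fmin, fmax):
    o, s = bandpass(obs, dt, fmin, fmax), bandpass(syn, dt, fmin, fmax)
    xc = np.correlate(o, s, mode="full")
    lag = np.argmax(xc) - (len(s) - 1)
    return lag * dt                          # positive: observed arrives later

# Toy traces: the "observed" trace is the synthetic delayed by 0.8 s
dt, nt = 0.05, 2000
t = np.arange(nt) * dt
syn = np.sin(2 * np.pi * 0.08 * (t - 50.0)) * np.exp(-((t - 50.0) / 10.0) ** 2)
obs = np.sin(2 * np.pi * 0.08 * (t - 50.8)) * np.exp(-((t - 50.8) / 10.0) ** 2)

delay = traveltime_delay(obs, syn, dt, fmin=1.0 / 15.0, fmax=1.0 / 6.0)
print(f"measured delay in the 6-15 s period band: {delay:.2f} s")
```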

  9. Primary and secondary particle contributions to the depth dose distribution in a phantom shielded from solar flare and Van Allen protons

    NASA Technical Reports Server (NTRS)

    Santoro, R. T.; Claiborne, H. C.; Alsmiller, R. G., Jr.

    1972-01-01

    Calculations have been made using the nucleon-meson transport code NMTC to estimate the absorbed dose and dose equivalent distributions in astronauts inside space vehicles bombarded by solar flare and Van Allen protons. A spherical shell shield of specific radius and thickness with a 30-cm-diam. tissue ball at the geometric center was used to simulate the spacecraft-astronaut configuration. The absorbed dose and the dose equivalent from primary protons, secondary protons, heavy nuclei, charged pions, muons, photons, and positrons and electrons are given as a function of depth in the tissue phantom. Results are given for solar flare protons with a characteristic rigidity of 100 MV and for Van Allen protons in a 240-nautical-mile circular orbit at 30 degree inclination angle incident on both 20-g/sq cm-thick aluminum and polyethylene spherical shell shields.

  10. Indoor Fast Neutron Generator for Biophysical and Electronic Applications

    NASA Astrophysics Data System (ADS)

    Cannuli, A.; Caccamo, M. T.; Marchese, N.; Tomarchio, E. A.; Pace, C.; Magazù, S.

    2018-05-01

    This study focuses on an indoor fast neutron generator for biophysical and electronic applications. More specifically, the findings obtained from several simulations with the MCNP Monte Carlo code, necessary for the realization of a shield for indoor measurements, are presented. Furthermore, an evaluation of the neutron spectrum modification caused by the shielding is reported. Fast neutron generators are a valid and interesting source of neutrons, increasingly employed in a wide range of research fields in science and engineering. The employed portable pulsed neutron source is an MP320 Thermo Scientific neutron generator, able to generate 2.5 MeV neutrons with a neutron yield of 2.0 × 10^6 n/s, a pulse rate of 250 Hz to 20 kHz and a duty factor varying from 5% to 100%. The neutron generator, based on Deuterium-Deuterium nuclear fusion reactions, is employed in conjunction with a solid-state photon detector, made of n-type high-purity germanium (PINS-GMX by ORTEC), and is mainly addressed to biophysical and electronic studies. The present study proposes the shield required for indoor use of the MP320 neutron generator, analyses the neutron transport simulated with the Monte Carlo code, and describes the two main lines of research in which the source will be used.

  11. Assessment and Requirements of Nuclear Reaction Databases for GCR Transport in the Atmosphere and Structures

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Wilson, J. W.; Shinn, J. L.; Tripathi, R. K.

    1998-01-01

    The transport properties of galactic cosmic rays (GCR) in the atmosphere, material structures, and human body (self-shielding) are of interest in risk assessment for supersonic and subsonic aircraft and for space travel in low-Earth orbit and on interplanetary missions. Nuclear reactions, such as knockout and fragmentation, present large modifications of particle type and energies of the galactic cosmic rays in penetrating materials. We make an assessment of the current nuclear reaction models and improvements in these models for developing the required transport code databases. A new fragmentation database (QMSFRG) based on microscopic models is compared to the NUCFRG2 model, and implications for shield assessment are made using the HZETRN radiation transport code. For deep penetration problems, the build-up of light particles, such as nucleons, light clusters and mesons from nuclear reactions, in conjunction with the absorption of the heavy ions, leads to the dominance of the charge Z = 0, 1, and 2 hadrons in the exposures at large penetration depths. Light particles are produced through nuclear or cluster knockout and in evaporation events with characteristically distinct spectra which play unique roles in the build-up of secondary radiation in shielding. We describe models of light particle production in nucleon and heavy ion induced reactions and make an assessment of the importance of light particle multiplicity and spectral parameters in these exposures.

  12. Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions

    DOE PAGES

    Burke, Timothy P.; Kiedrowski, Brian C.

    2017-12-11

    Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface, which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivities of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision history-based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with iterated fission probability used for adjoint-weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.
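
    The kernel-density idea mentioned above can be illustrated schematically: collisions exactly on an interface form a set of zero probability, so the source at the interface is estimated by smoothing nearby collision sites with a kernel. The geometry, sampling, and bandwidths below are invented for illustration and are much simpler than the estimators in the paper.

```python
import numpy as np

# Schematic 1-D illustration: estimate the collision-site density at an
# interface position x_interface by smoothing nearby collision sites with a
# Gaussian kernel.  Collision sites are faked with an exponential distribution.

rng = np.random.default_rng(2)
x_interface = 1.0
collision_x = rng.exponential(scale=0.7, size=100_000)
collision_x = collision_x[collision_x < 2.0]          # collisions inside the slab

def kde_density_at(x0, samples, bandwidth):
    """Gaussian kernel density estimate (per retained collision) at x0."""
    u = (samples - x0) / bandwidth
    return np.mean(np.exp(-0.5 * u**2)) / (bandwidth * np.sqrt(2.0 * np.pi))

for h in (0.05, 0.02, 0.01):
    print(f"bandwidth {h:5.2f}: estimated density at the interface ~ "
          f"{kde_density_at(x_interface, collision_x, h):.4f}")
```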

  13. Spectral monodromy of non-self-adjoint operators

    NASA Astrophysics Data System (ADS)

    Phan, Quang Sang

    2014-01-01

    In the present paper, we build a combinatorial invariant, called the "spectral monodromy" from the spectrum of a single (non-self-adjoint) h-pseudodifferential operator with two degrees of freedom in the semi-classical limit. Our inspiration comes from the quantum monodromy defined for the joint spectrum of an integrable system of n commuting self-adjoint h-pseudodifferential operators, given by S. Vu Ngoc ["Quantum monodromy in integrable systems," Commun. Math. Phys. 203(2), 465-479 (1999)]. The first simple case that we treat in this work is a normal operator. In this case, the discrete spectrum can be identified with the joint spectrum of an integrable quantum system. The second more complex case we propose is a small perturbation of a self-adjoint operator with a classical integrability property. We show that the discrete spectrum (in a small band around the real axis) also has a combinatorial monodromy. The main difficulty in this case is that we do not know the description of the spectrum everywhere, but only in a Cantor type set. In addition, we also show that the corresponding monodromy can be identified with the classical monodromy, defined by J. Duistermaat ["On global action-angle coordinates," Commun. Pure Appl. Math. 33(6), 687-706 (1980)].

  14. Thermodynamics of Gas Turbine Cycles with Analytic Derivatives in OpenMDAO

    NASA Technical Reports Server (NTRS)

    Gray, Justin; Chin, Jeffrey; Hearn, Tristan; Hendricks, Eric; Lavelle, Thomas; Martins, Joaquim R. R. A.

    2016-01-01

    A new equilibrium thermodynamics analysis tool was built based on the CEA method using the OpenMDAO framework. The new tool provides forward and adjoint analytic derivatives for use with gradient-based optimization algorithms. The new tool was validated against the original CEA code to ensure an accurate analysis, and the analytic derivatives were validated against finite-difference approximations. Performance comparisons between analytic and finite-difference methods showed a significant speed advantage for the analytic methods. To further test the new analysis tool, a sample optimization was performed to find the optimal air-fuel equivalence ratio maximizing combustion temperature for a range of different pressures. Collectively, the results demonstrate the viability of the new tool to serve as the thermodynamic backbone for future work on a full propulsion modeling tool.
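
    Two of the steps mentioned above, verifying analytic derivatives against finite differences and using them in a gradient-based search for the temperature-maximizing equivalence ratio, can be sketched with a toy response. The quadratic temperature model below is a placeholder, not the CEA equilibrium solution.

```python
import numpy as np
from scipy.optimize import minimize

# Toy surrogate for combustion temperature vs. equivalence ratio and its
# analytic derivative; used to (1) check the derivative against a central
# finite difference and (2) drive a gradient-based maximization.

def T_and_dT(phi):
    T = 2200.0 - 900.0 * (phi - 1.05) ** 2        # made-up temperature model
    dT = -1800.0 * (phi - 1.05)                   # its analytic derivative
    return T, dT

# 1) derivative verification
phi0, h = 0.9, 1e-6
analytic = T_and_dT(phi0)[1]
fd = (T_and_dT(phi0 + h)[0] - T_and_dT(phi0 - h)[0]) / (2 * h)
print(f"analytic dT/dphi = {analytic:.6f}, finite difference = {fd:.6f}")

# 2) gradient-based maximization (minimize the negative) using the analytic slope
res = minimize(lambda p: (-T_and_dT(p[0])[0], [-T_and_dT(p[0])[1]]),
               x0=[0.8], jac=True, method="L-BFGS-B", bounds=[(0.5, 1.5)])
print(f"optimal equivalence ratio ~ {res.x[0]:.3f}")
```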

  15. Application of quasi-distributions for solving inverse problems of neutron and γ-ray transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pogosbekyan, L.R.; Lysov, D.A.

    The inverse problems considered here deal with the calculation of unknown parameters of nuclear installations from known (goal) functionals of the neutron/γ-ray distributions. Examples of such problems include the calculation of automatic control rod positions as a function of neutron sensor readings, or the calculation of experimentally corrected values of cross sections, isotope concentrations, and fuel enrichment from the measured functionals. The authors have developed a new method to solve the inverse problem. It finds the flux density as a quasi-solution of the particle-conservation linear system adjoined to the equalities for the functionals. The method is more effective than one based on classical perturbation theory. It is suitable for vectorization and can be used successfully in optimization codes.

  16. Validity of scale-modeling for gamma-ray attenuation. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verser, F.A.; Donnert, H.J.

    1973-09-01

    An adjoint Monte Carlo code (GADJET) was used to calculate the exposure rate in full-scale and model structures located in the center of plane fallout fields (1 Ci/square ft of cobalt-60). Problems were run for a standard detector, an open basement, a basement with two thicknesses of covers, and a blockhouse with two thicknesses of walls. For all configurations investigated, the effects of nonscaling of the ground do not cause any problems, and a procedure was developed to minimize the error introduced by nonscaling of the air. If the solid angle subtended by the roof remains unchanged, scaling of roof contamination poses no problems. The lip effect can be significant in structures with the detector below grade.

  17. Spacecraft Solar Particle Event (SPE) Shielding: Shielding Effectiveness as a Function of SPE Model as Determined with the FLUKA Radiation Transport Code

    NASA Astrophysics Data System (ADS)

    Koontz, S. L.; Atwell, W. A.; Reddell, B.; Rojdev, K.

    2010-12-01

    In this paper, we report the results of modeling and simulation studies in which the radiation transport code FLUKA (FLUktuierende KAskade) is used to determine the changes in total ionizing dose (TID) and single-event effect (SEE) environments behind aluminum, polyethylene, carbon, and titanium shielding masses when the assumed form (i.e., Band or Exponential) of the solar particle event (SPE) kinetic energy spectra is changed. FLUKA simulations are fully three dimensional with an isotropic particle flux incident on a concentric spherical shell shielding mass and detector structure. FLUKA is a fully integrated and extensively verified Monte Carlo simulation package for the interaction and transport of high-energy particles and nuclei in matter. The effects are reported of both energetic primary protons penetrating the shield mass and secondary particle showers caused by energetic primary protons colliding with shielding mass nuclei. SPE heavy ion spectra are not addressed. Our results, in agreement with previous studies, show that use of the Exponential form of the event spectra can seriously underestimate spacecraft SPE TID and SEE environments in some, but not all, shielding mass cases. The SPE spectra investigated are taken from four specific SPEs that produced ground-level events (GLEs) during solar cycle 23 (1997-2008). GLEs are produced by highly energetic solar particle events (ESP), i.e., those that contain significant fluences of 700 MeV to 10 GeV protons. Highly energetic SPEs are implicated in increased rates of spacecraft anomalies and spacecraft failures. High-energy protons interact with Earth's atmosphere via nuclear reaction to produce secondary particles, some of which are neutrons that can be detected at the Earth's surface by the global neutron monitor network. GLEs are one part of the overall SPE resulting from a particular solar flare or coronal mass ejection event on the sun. The ESP part of the particle event, detected by spacecraft, is often associated with the arrival of a "shock front" at Earth some hours after the arrival of the GLE. The specific SPEs used in this analysis are those of: 1) November 6, 1997 - GLE only; 2) July 14-15, 2000 - GLE from the 14th plus ESP from the 15th; 3) November 4-6, 2001 - GLE and ESP from the 4th; and 4) October 28-29, 2003 - GLE and ESP from the 28th plus GLE from the 29th. The corresponding Band and Exponential spectra used in this paper are like those previously reported.

  18. Ionizing Radiation Environment on the International Space Station: Performance vs. Expectations for Avionics and Material

    NASA Technical Reports Server (NTRS)

    Koontz, Steven L.; Boeder, Paul A.; Pankop, Courtney; Reddell, Brandon

    2005-01-01

    The role of structural shielding mass in the design, verification, and in-flight performance of the International Space Station (ISS), in both the natural and induced orbital ionizing radiation (IR) environments, is reported. Detailed consideration of the effects of both the natural and induced ionizing radiation environment during ISS design, development, and flight operations has produced a safe, efficient manned space platform that is largely immune to deleterious effects of the LEO ionizing radiation environment. The assumption of a small shielding mass for purposes of design and verification has been shown to be a valid worst-case approximation approach to design for reliability, though the predicted dependences of single event effects (SEEs) on latitude, longitude, SEP events, and spacecraft structural shielding mass are not observed. The Figure of Merit (FOM) method overpredicts the rate for median shielding masses of about 10 g/cm^2 by only a factor of 3, while the Scott Effective Flux Approach (SEFA) method overestimates it by about one order of magnitude, as expected. The Integral Rectangular Parallelepiped (IRPP), SEFA, and FOM methods for estimating on-orbit single event upset (SEU) rates all utilize some version of the CREME-96 treatment of energetic particle interaction with structural shielding, which has been shown to underestimate the production of secondary particles in heavily shielded manned spacecraft. The need for more work directed to the development of a practical understanding of secondary particle production in massive structural shielding for SEE design and verification is indicated. In contrast, total dose estimates using CAD-based shielding mass distribution functions and the Shieldose code provided a reasonably accurate estimate of accumulated dose in grays internal to the ISS pressurized elements, albeit as a result of using worst-on-worst case assumptions (500 km altitude x 2) that compensate for ignoring both GCR and secondary particle production in massive structural shielding.

  19. SU-E-T-132: Assess the Shielding of Secondary Neutrons From Patient Collimator in Proton Therapy Considering Secondary Photons Generated in the Shielding Process with Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamanaka, M; Takashina, M; Kurosu, K

    Purpose: In this study we present a Monte Carlo based evaluation of the shielding effect for secondary neutrons from the patient collimator, and of the secondary photons emitted in the process of neutron shielding, for a combination of moderator and boron-10 placed around the patient collimator. Methods: The PHITS Monte Carlo radiation transport code was used to simulate the proton beam (Ep = 64 to 93 MeV) from a proton therapy facility. In this study, moderators (water, polyethylene and paraffin) and boron (pure ¹⁰B) were placed around the patient collimator in this order. The ratio of moderator to boron thickness was varied while fixing the total thickness at 3 cm. The secondary neutron and photon doses were evaluated as the ambient dose equivalent per absorbed dose [H*(10)/D]. Results: The secondary neutrons are shielded more effectively by the combination of moderator and boron. The most effective combination for shielding neutrons is 2.4 cm of polyethylene with 0.6 cm of boron, giving a maximum reduction rate of 47.3%. The H*(10)/D of secondary photons in the control case is less than that of neutrons by two orders of magnitude, and the maximum increase of secondary photons is 1.0 µSv/Gy, obtained with 2.8 cm of polyethylene and 0.2 cm of boron. Conclusion: The combination of moderator and boron is beneficial for shielding secondary neutrons. Both the secondary photon dose in the control case and that emitted during neutron shielding are far lower than the secondary neutron dose, and photons have a low RBE in comparison with neutrons. Therefore the secondary photons can be neglected when shielding the neutrons. This work was supported by the JSPS Core-to-Core Program (No. 23003).

  20. Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William

    2006-01-01

    The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration, highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms, beginning with 2005 HZETRN, will be issued, correcting some prior limitations and improving control of propagated errors, along with established code verification processes. Code validation processes will use new and improved low Earth orbit (LEO) environmental models with a recently improved International Space Station (ISS) shield model to validate computational models and procedures using measured data aboard the ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.

  1. Solar proton exposure of an ICRU sphere within a complex structure Part I: Combinatorial geometry.

    PubMed

    Wilson, John W; Slaba, Tony C; Badavi, Francis F; Reddell, Brandon D; Bahadori, Amir A

    2016-06-01

    The 3DHZETRN code, with improved neutron and light ion (Z≤2) transport procedures, was recently developed and compared to Monte Carlo (MC) simulations using simplified spherical geometries. It was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in general combinatorial geometry. A more complex shielding structure with internal parts surrounding a tissue sphere is considered and compared against MC simulations. It is shown that even in the more complex geometry, 3DHZETRN agrees well with the MC codes and maintains a high degree of computational efficiency. Published by Elsevier Ltd.

  2. Adjoint eigenfunctions of temporally recurrent single-spiral solutions in a simple model of atrial fibrillation.

    PubMed

    Marcotte, Christopher D; Grigoriev, Roman O

    2016-09-01

    This paper introduces a numerical method for computing the spectrum of adjoint (left) eigenfunctions of spiral wave solutions to reaction-diffusion systems in arbitrary geometries. The method is illustrated by computing over a hundred eigenfunctions associated with an unstable time-periodic single-spiral solution of the Karma model on a square domain. We show that all leading adjoint eigenfunctions are exponentially localized in the vicinity of the spiral tip, although the marginal modes (response functions) demonstrate the strongest localization. We also discuss the implications of the localization for the dynamics and control of unstable spiral waves. In particular, the interaction with no-flux boundaries leads to a drift of spiral waves which can be understood with the help of the response functions.

  3. Adjoint Airfoil Optimization of Darrieus-Type Vertical Axis Wind Turbine

    NASA Astrophysics Data System (ADS)

    Fuchs, Roman; Nordborg, Henrik

    2012-11-01

    We present the feasibility of using an adjoint solver to optimize the torque of a Darrieus-type vertical axis wind turbine (VAWT). We start with a 2D cross section of a symmetrical airfoil and restrict ourselves to low solidity ratios to minimize blade vortex interactions. The adjoint solver of the ANSYS FLUENT software package computes the sensitivities of airfoil surface forces based on a steady flow field. Hence, we find the torque of a full revolution using a weighted average of the sensitivities at different wind speeds and angles of attack. The weights are computed analytically, and the range of angles of attack is given by the tip speed ratio. Then the airfoil geometry is evolved, and the proposed methodology is evaluated by transient simulations.
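
    The averaging step described above can be sketched as follows: one adjoint sensitivity vector per steady condition (blade azimuth) is combined into a revolution-averaged torque sensitivity with analytic weights. The sensitivities, weights, and dimensions below are placeholders; in the paper they come from the ANSYS FLUENT adjoint solver and the turbine kinematics.

```python
import numpy as np

# Per-condition adjoint sensitivities (placeholders) combined into a
# revolution-averaged torque sensitivity.  dFt_dshape[i, j] stands for the
# sensitivity of the blade tangential force to shape parameter j at azimuthal
# station i, as an adjoint solver would provide for each steady condition.

rng = np.random.default_rng(3)
n_shape_params, n_azimuth = 8, 36

dFt_dshape = rng.standard_normal((n_azimuth, n_shape_params))  # placeholder data

# Analytic weights: for a rotor turning at constant speed, each azimuthal
# station contributes equally in time, so the simplest weighting is uniform;
# the blade radius R converts tangential force into torque.
R = 1.0
weights = np.full(n_azimuth, 1.0 / n_azimuth)

dQ_dshape = R * (weights @ dFt_dshape)   # revolution-averaged torque sensitivity

print("torque sensitivity per shape parameter:")
print(dQ_dshape)
```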

  4. Examination of Observation Impacts derived from OSEs and Adjoint Models

    NASA Technical Reports Server (NTRS)

    Gelaro, Ronald

    2008-01-01

    With the adjoint of a data assimilation system, the impact of any or all assimilated observations on measures of forecast skill can be estimated accurately and efficiently. The approach allows aggregation of results in terms of individual data types, channels or locations, all computed simultaneously. In this study, adjoint-based estimates of observation impact are compared with results from standard observing system experiments (OSEs) in the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) system. The two approaches are shown to provide unique, but complementary, information. Used together, they reveal both redundancies and dependencies between observing system impacts as observations are added or removed. Understanding these dependencies poses a major challenge for optimizing the use of the current observational network and defining requirements for future observing systems.
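
    The adjoint observation-impact estimate has a simple algebraic core (a Langland-Baker-type expression): each observation's contribution to a change in the forecast-error measure is its innovation multiplied by the gain matrix mapped through the adjoint-supplied error gradient. The matrices below are tiny placeholders, not GEOS-5 operators, and the expression shown is one common variant rather than the exact formulation used in the study.

```python
import numpy as np

# Toy adjoint-based observation impact: impact_i = innovation_i * (K^T g)_i,
# where g is the adjoint-supplied gradient of the error measure w.r.t. the
# analysis and K is the gain matrix.  All arrays are placeholders.

rng = np.random.default_rng(4)
n_state, n_obs = 6, 4
H = rng.standard_normal((n_obs, n_state))          # observation operator
B = np.eye(n_state)                                # background error covariance
Robs = 0.25 * np.eye(n_obs)                        # observation error covariance
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + Robs)    # Kalman gain

x_b = rng.standard_normal(n_state)                 # background state
y = H @ x_b + 0.5 * rng.standard_normal(n_obs)     # observations
innovation = y - H @ x_b

# Gradient of the forecast-error measure w.r.t. the analysis; in practice this
# comes from running the adjoint model backwards from the error norm.
de_dxa = rng.standard_normal(n_state)

impact = innovation * (K.T @ de_dxa)               # per-observation impact
print("estimated impact of each observation on the error measure:")
print(impact)
print("total impact:", impact.sum())
```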

  5. Adjoint eigenfunctions of temporally recurrent single-spiral solutions in a simple model of atrial fibrillation

    NASA Astrophysics Data System (ADS)

    Marcotte, Christopher D.; Grigoriev, Roman O.

    2016-09-01

    This paper introduces a numerical method for computing the spectrum of adjoint (left) eigenfunctions of spiral wave solutions to reaction-diffusion systems in arbitrary geometries. The method is illustrated by computing over a hundred eigenfunctions associated with an unstable time-periodic single-spiral solution of the Karma model on a square domain. We show that all leading adjoint eigenfunctions are exponentially localized in the vicinity of the spiral tip, although the marginal modes (response functions) demonstrate the strongest localization. We also discuss the implications of the localization for the dynamics and control of unstable spiral waves. In particular, the interaction with no-flux boundaries leads to a drift of spiral waves which can be understood with the help of the response functions.

  6. Adjoint-based constant-mass partial derivatives

    DOE PAGES

    Favorite, Jeffrey A.

    2017-09-01

    In transport theory, adjoint-based partial derivatives with respect to mass density are constant-volume derivatives. Likewise, adjoint-based partial derivatives with respect to surface locations (i.e., internal interface locations and the outer system boundary) are constant-density derivatives. This study derives the constant-mass partial derivative of a response with respect to an internal interface location or the outer system boundary and the constant-mass partial derivative of a response with respect to the mass density of a region. Numerical results are given for a multiregion two-dimensional (r-z) cylinder for three very different responses: the uncollided gamma-ray flux at an external detector point, k_eff of the system, and the total neutron leakage. Finally, results from the derived formulas compare extremely well with direct perturbation calculations.
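
    The chain rule underlying the constant-mass derivative can be made concrete with a toy analytic response (not the paper's r-z model): for a uniform sphere of fixed mass, dR/dr at constant mass equals the constant-density surface derivative plus (dρ/dr at constant mass) times the constant-volume density derivative, with dρ/dr = -3ρ/r. The attenuation coefficient, dimensions, and response below are assumed for illustration.

```python
import numpy as np

# Toy response: uncollided flux at a fixed external point from a point source
# at the centre of a uniform sphere of radius r and density rho.
#   dR/dr|_M = dR/dr|_rho + (drho/dr)|_M * dR/drho|_V,  drho/dr|_M = -3*rho/r

mu_m = 0.05          # mass attenuation coefficient, cm^2/g (assumed)
rho = 7.8            # g/cm^3
r = 5.0              # sphere radius, cm
d = 20.0             # detector distance, cm
S = 1.0              # source strength

phi = S / (4.0 * np.pi * d**2) * np.exp(-mu_m * rho * r)

dphi_dr_const_rho = -mu_m * rho * phi            # constant-density surface term
dphi_drho_const_V = -mu_m * r * phi              # constant-volume density term
drho_dr_const_M = -3.0 * rho / r                 # geometry of a fixed-mass sphere

dphi_dr_const_M = dphi_dr_const_rho + drho_dr_const_M * dphi_drho_const_V

# Direct finite-difference check of the constant-mass derivative
M = rho * 4.0 / 3.0 * np.pi * r**3
def phi_const_mass(radius):
    dens = M / (4.0 / 3.0 * np.pi * radius**3)
    return S / (4.0 * np.pi * d**2) * np.exp(-mu_m * dens * radius)

eps = 1e-6
fd = (phi_const_mass(r + eps) - phi_const_mass(r - eps)) / (2 * eps)
print(f"chain rule: {dphi_dr_const_M:.6e}, finite difference: {fd:.6e}")
```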

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faillace, E.R.; Cheng, J.J.; Yu, C.

    A series of benchmarking runs were conducted so that results obtained with the RESRAD code could be compared against those obtained with six pathway analysis models used to determine the radiation dose to an individual living on a radiologically contaminated site. The RESRAD computer code was benchmarked against five other computer codes - GENII-S, GENII, DECOM, PRESTO-EPA-CPG, and PATHRAE-EPA - and the uncodified methodology presented in the NUREG/CR-5512 report. Estimated doses for the external gamma pathway; the dust inhalation pathway; and the soil, food, and water ingestion pathways were calculated for each methodology by matching, to the extent possible, input parameters such as occupancy, shielding, and consumption factors.

  8. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2011-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
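
    The core CADIS bookkeeping is compact enough to sketch: given an adjoint (importance) flux from the deterministic calculation and the true source, the biased source is proportional to their product, and source particles are born with weights consistent with the weight-window targets. The arrays below are random placeholders standing in for a Denovo adjoint flux and a real source distribution, and the window-width convention is one common choice rather than the tools' exact implementation.

```python
import numpy as np

# Hedged CADIS sketch: biased source q_hat = q * phi_dag / R with
# R = sum(q * phi_dag); source particles are born with weight R / phi_dag so
# that source and transport (weight-window) biasing remain consistent.

rng = np.random.default_rng(5)
n_cells, n_groups = 10, 4

q = rng.random((n_cells, n_groups))               # true source distribution
q /= q.sum()
phi_dag = rng.random((n_cells, n_groups)) + 0.05  # adjoint flux (importance map)

R = np.sum(q * phi_dag)                           # estimated detector response
q_hat = q * phi_dag / R                           # biased source pdf (sums to 1)
birth_weight = R / phi_dag                        # statistical weight at birth

# Weight-window target weights follow the same R / phi_dag rule; lower bounds
# are commonly target / ((c + 1) / 2) for an upper:lower window ratio c.
c = 5.0
ww_lower = birth_weight / ((c + 1.0) / 2.0)

print("biased source sums to:", q_hat.sum())
print("sample birth weights:", birth_weight[0])
print("sample weight-window lower bounds:", ww_lower[0])
```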

  9. Neutron production by cosmic-ray muons in various materials

    NASA Astrophysics Data System (ADS)

    Manukovsky, K. V.; Ryazhskaya, O. G.; Sobolevsky, N. M.; Yudin, A. V.

    2016-07-01

    The results obtained by studying the background of neutrons produced by cosmic-ray muons in underground experimental facilities intended for rare-event searches and in the surrounding rock are presented. The rock types considered include granite, sedimentary rock, gypsum, and rock salt. Neutron production and transfer were simulated using the Geant4 and SHIELD transport codes. These codes were tuned via a comparison of the results of calculations with experimental data, in particular with data of the Artemovsk research station of the Institute for Nuclear Research (INR, Moscow, Russia), as well as via an intercomparison of results of calculations with the Geant4 and SHIELD codes. It turns out that the atomic-number dependence of the production and yield of neutrons has an irregular character and does not allow a description in terms of a universal function of the atomic number. The parameters of this dependence are different for two groups of nuclei: nuclei consisting of alpha particles and all of the remaining nuclei. Moreover, there are manifest exceptions from a power-law dependence, for example argon. This may entail important consequences both for the existing underground experimental facilities and for those under construction. Investigation of cosmic-ray-induced neutron production in various materials is of paramount importance for the interpretation of experiments conducted at large depths under the Earth's surface.

  10. Effects of target fragmentation on evaluation of LET spectra from space radiations: implications for space radiation protection studies

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Wilson, J. W.; Shinn, J. L.; Badavi, F. F.; Badhwar, G. D.

    1996-01-01

    We present calculations of linear energy transfer (LET) spectra in low earth orbit from galactic cosmic rays and trapped protons using the HZETRN/BRYNTRN computer code. The emphasis of our calculations is on the analysis of the effects of secondary nuclei produced through target fragmentation in the spacecraft shield or detectors. Recent improvements in the HZETRN/BRYNTRN radiation transport computer code are described. Calculations show that at large values of LET (> 100 keV/micrometer) the LET spectra seen in free space and low earth orbit (LEO) are dominated by target fragments and not the primary nuclei. Although the evaluation of microdosimetric spectra is not considered here, the calculations of LET spectra indicate that the large lineal energy (y) events are dominated by target fragments. Finally, we discuss the situation for interplanetary exposures to galactic cosmic rays and show that current radiation transport codes predict that in the region of high LET values the LET spectra at significant shield depths (> 10 g/cm^2 of Al) are greatly modified by target fragments. These results suggest that studies of track structure and biological response to space radiation should place emphasis on short tracks of medium-charge fragments produced in the human body by high energy protons and neutrons.

  11. Investigating Sensitivity to Saharan Dust in Tropical Cyclone Formation Using Nasa's Adjoint Model

    NASA Technical Reports Server (NTRS)

    Holdaway, Daniel

    2015-01-01

    As tropical cyclones develop from easterly waves coming off the coast of Africa they interact with dust from the Sahara desert. There is a long standing debate over whether this dust inhibits or advances the developing storm and how much influence it has. Dust can surround the storm and absorb incoming solar radiation, cooling the air below. As a result an energy source for the system is potentially diminished, inhibiting growth of the storm. Alternatively dust may interact with clouds through micro-physical processes, for example by causing more moisture to condense, potentially increasing the strength. As a result of climate change, concentrations and amounts of dust in the atmosphere will likely change. It is important to properly understand its effect on tropical storm formation. The adjoint of an atmospheric general circulation model provides a very powerful tool for investigating sensitivity to initial conditions. The National Aeronautics and Space Administration (NASA) has recently developed an adjoint version of the Goddard Earth Observing System version 5 (GEOS-5) dynamical core, convection scheme, cloud model and radiation schemes. This is extended so that the interaction between dust and radiation is also accounted for in the adjoint model. This provides a framework for examining the sensitivity to dust in the initial conditions. Specifically the setup allows for an investigation into the extent to which dust affects cyclone strength through absorption of radiation. In this work we investigate the validity of using an adjoint model for examining sensitivity to dust in hurricane formation. We present sensitivity results for a number of systems that developed during the Atlantic hurricane season of 2006. During this period there was a significant outbreak of Saharan dust, and it has been argued that this outbreak was responsible for the relatively calm season. This period was also covered by an extensive observation campaign. It is shown that the adjoint can provide insight into the sensitivity and reveals a relatively low sensitivity to dust compared to, for example, the thermodynamic variables. However a secondary sensitivity through moisture is seen. If dust dries the air it can significantly reduce the cyclone intensity through the moisture.

  12. Investigating sensitivity to Saharan dust in tropical cyclone formation using NASA's adjoint model

    NASA Astrophysics Data System (ADS)

    Holdaway, Daniel

    2015-04-01

    As tropical cyclones develop from easterly waves coming off the coast of Africa they interact with dust from the Sahara desert. There is a long standing debate over whether this dust inhibits or advances the developing storm and how much influence it has. Dust can surround the storm and absorb incoming solar radiation, cooling the air below. As a result an energy source for the system is potentially diminished, inhibiting growth of the storm. Alternatively dust may interact with clouds through micro-physical processes, for example by causing more moisture to condense, potentially increasing the strength. As a result of climate change, concentrations and amounts of dust in the atmosphere will likely change. It is important to properly understand its effect on tropical storm formation. The adjoint of an atmospheric general circulation model provides a very powerful tool for investigating sensitivity to initial conditions. The National Aeronautics and Space Administration (NASA) has recently developed an adjoint version of the Goddard Earth Observing System version 5 (GEOS-5) dynamical core, convection scheme, cloud model and radiation schemes. This is extended so that the interaction between dust and radiation is also accounted for in the adjoint model. This provides a framework for examining the sensitivity to dust in the initial conditions. Specifically the setup allows for an investigation into the extent to which dust affects cyclone strength through absorption of radiation. In this work we investigate the validity of using an adjoint model for examining sensitivity to dust in hurricane formation. We present sensitivity results for a number of systems that developed during the Atlantic hurricane season of 2006. During this period there was a significant outbreak of Saharan dust, and it has been argued that this outbreak was responsible for the relatively calm season. This period was also covered by an extensive observation campaign. It is shown that the adjoint can provide insight into the sensitivity and reveals a relatively low sensitivity to dust compared to, for example, the thermodynamic variables. However a secondary sensitivity through moisture is seen. If dust dries the air it can significantly reduce the cyclone intensity through the moisture.

  13. Adjoint assimilation of altimetric, surface drifter, and hydrographic data in a quasi-geostrophic model of the Azores Current

    NASA Astrophysics Data System (ADS)

    Morrow, Rosemary; de Mey, Pierre

    1995-12-01

    The flow characteristics in the region of the Azores Current are investigated by assimilating TOPEX/POSEIDON and ERS 1 altimeter data into the multilevel Harvard quasigeostrophic (QG) model with open boundaries (Miller et al., 1983) using an adjoint variational scheme (Moore, 1991). The study site lies in the path of the Azores Current, where a branch retroflects to the south in the vicinity of the Madeira Rise. The region was the site of an intensive field program in 1993, SEMAPHORE. We had two main aims in this adjoint assimilation project. The first was to see whether the adjoint method could be applied locally to optimize an initial guess field, derived from the continuous assimilation of altimetry data using optimal interpolation (OI). The second aim was to assimilate a variety of different data sets and evaluate their importance in constraining our QG model. The adjoint assimilation of surface data was effective in optimizing the initial conditions from OI. After 20 iterations the cost function was generally reduced by 50-80%, depending on the chosen data constraints. The primary adjustment process was via the barotropic mode. Altimetry proved to be a good constraint on the variable flow field, in particular for constraining the barotropic field. The excellent data quality of the TOPEX/POSEIDON (T/P) altimeter data provided smooth and reliable forcing, but for our mesoscale study in a region of long decorrelation times O(30 days), the spatial coverage from the combined T/P and ERS 1 data sets was more important for constraining the solution and providing stable flow at all levels. Surface drifters provided an excellent constraint on both the barotropic and baroclinic model fields. More importantly, the drifters provided a reliable measure of the mean field. Hydrographic data were also applied as a constraint; in general, hydrography provided a weak but effective constraint on the vertical Rossby modes in the model. Finally, forecasts run over a 2-month period indicate that the initial conditions optimized by the 20-day adjoint assimilation provide more stable, longer-term forecasts.
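    A minimal sketch of the quadratic cost function minimized in such an adjoint variational scheme (the symbols are generic and not taken from the cited implementation): with a first-guess initial state x_b from the OI assimilation, observations y_k (altimetry, drifters, hydrography) at times t_k, observation operators H_k, and error covariances B and R_k,

        \[
        J(x_0) = \tfrac{1}{2}\,(x_0 - x_b)^{\mathsf T} B^{-1} (x_0 - x_b)
               + \tfrac{1}{2}\sum_k \big(H_k\,x(t_k) - y_k\big)^{\mathsf T} R_k^{-1} \big(H_k\,x(t_k) - y_k\big).
        \]

    One backward integration of the adjoint model supplies the gradient of J with respect to x_0 for each descent iteration; the 50-80% figure quoted above refers to the reduction of this cost function over 20 such iterations.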

  14. Source terms, shielding calculations and soil activation for a medical cyclotron.

    PubMed

    Konheiser, J; Naumann, B; Ferrari, A; Brachem, C; Müller, S E

    2016-12-01

    Calculations of the shielding and estimates of soil activation for a medical cyclotron are presented in this work. Based on the neutron source term from the 18O(p,n)18F reaction produced by a 28 MeV proton beam, neutron and gamma dose rates outside the building were estimated with the Monte Carlo code MCNP6 (Goorley et al 2012 Nucl. Technol. 180 298-315). The neutron source term was calculated with the MCNP6 code and the FLUKA code (Ferrari et al 2005 INFN/TC_05/11, SLAC-R-773) as well as with data supplied by the manufacturer. MCNP and FLUKA calculations yielded comparable results, while the neutron yield obtained using the manufacturer-supplied information is about a factor of 5 smaller. The difference is attributed to missing channels in the manufacturer-supplied neutron source term, which considers only the 18O(p,n)18F reaction, whereas the MCNP and FLUKA calculations include additional neutron reaction channels. The soil activation calculation was performed using the FLUKA code. The estimated dose rate based on MCNP6 calculations in the public area is about 0.035 µSv h⁻¹ and thus significantly below the reference value of 0.5 µSv h⁻¹ (2011 Strahlenschutzverordnung, 9. Auflage vom 01.11.2011, Bundesanzeiger Verlag). After 5 years of continuous beam operation and a subsequent decay time of 30 d, the activity concentration of the soil is about 0.34 Bq g⁻¹.

  15. Implementation of radiation shielding calculation methods. Volume 2: Seminar/Workshop notes

    NASA Technical Reports Server (NTRS)

    Capo, M. A.; Disney, R. K.

    1971-01-01

    Detailed descriptions are presented of the input data for each of the MSFC computer codes applied to the analysis of a realistic nuclear-propelled vehicle. The analytical techniques employed include cross section data preparation, one- and two-dimensional discrete ordinates transport, point kernel, and single scatter methods.

  16. 29. PLAN OF THE ARVFS FIELD TEST FACILITY SHOWING BUNKER, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    29. PLAN OF THE ARVFS FIELD TEST FACILITY SHOWING BUNKER, CABLE CHASE, SHIELDING TANK AND FRAME ASSEMBLY. F.C. TORKELSON DRAWING NUMBER 842-ARVFS-701-1. INEL INDEX CODE NUMBER: 075 0701 851 151970. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  17. 32. ISOMETRIC VIEW OF PIPING PLAN, SHOWING PATH OF CONDUIT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    32. ISOMETRIC VIEW OF PIPING PLAN, SHOWING PATH OF CONDUIT FROM CONTROL BUNKER TO SHIELDING TANK. F.C. TORKELSON DRAWING NUMBER 842-ARVFS-701-P-1. INEL INDEX CODE NUMBER: 075 0701 60 851 151977. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  18. IET. Control and equipment building (TAN620) sections. Depth and profile ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    IET. Control and equipment building (TAN-620) sections. Depth and profile of earthen shield tunnels. Ralph M. Parsons 902-4-ANP-620-A-321. Date: February 1954. INEEL index code no. 035-0620-00-693-106906 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  19. Magellan Perspective View of Ovda Regio, 15° N, 77° E

    NASA Image and Video Library

    1998-06-04

    This perspective view of Venus, generated by computer from NASA Magellan data and color-coded with emissivity, shows part of the lowlands to the north of Ovda Regio. The prominent topographic feature is a shield volcano. http://photojournal.jpl.nasa.gov/catalog/PIA00308

  20. Seismic Window Selection and Misfit Measurements for Global Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Lei, W.; Bozdag, E.; Lefebvre, M.; Podhorszki, N.; Smith, J. A.; Tromp, J.

    2013-12-01

    Global Adjoint Tomography requires fast parallel processing of large datasets. After obtaining the preprocessed observed and synthetic seismograms, we use the open-source software packages FLEXWIN (Maggi et al. 2007) to select time windows and MEASURE_ADJ to make measurements. These measurements define adjoint sources for data assimilation. Previous versions of these tools work on a pair of SAC files (observed and synthetic seismic data for the same component and station) and loop over all seismic records associated with one earthquake. Given the large number of stations and earthquakes, the frequent read and write operations create severe I/O bottlenecks on modern computing platforms. We present new versions of these tools utilizing a new seismic data format, namely the Adaptive Seismic Data Format (ASDF). This new format shows superior scalability for applications on high-performance computers and accommodates various types of data, including earthquake, industry and seismic interferometry datasets. ASDF also provides user-friendly APIs, which can be easily integrated into the adjoint tomography workflow and combined with other data processing tools. In addition to solving the I/O bottleneck, we are making several improvements to these tools. For example, FLEXWIN is tuned to select windows for different types of earthquakes. To capture their distinct features, we categorize earthquakes by their depths and frequency bands. Moreover, instead of only picking phases between the first P arrival and the surface-wave arrivals, our aim is to select and assimilate many other later prominent phases in adjoint tomography. For example, in the body-wave band (17 s - 60 s), we include SKS, sSKS and their multiples, while in the surface-wave band (60 s - 120 s) we incorporate major-arc surface waves.
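    For intuition about the kind of measurement these windows feed into, here is a minimal sketch of a cross-correlation traveltime shift between an observed and a synthetic window (a generic calculation, not the MEASURE_ADJ implementation; the signals and sample interval below are invented):

        import numpy as np

        def cc_time_shift(obs, syn, dt):
            """Cross-correlation traveltime shift (s) of an observed window
            relative to a synthetic window; positive means the observed
            arrival is later than the synthetic one."""
            cc = np.correlate(obs, syn, mode="full")      # full cross-correlation
            lag = np.argmax(cc) - (len(syn) - 1)          # lag in samples at the peak
            return lag * dt

        # toy example: a synthetic Gaussian pulse and a 2 s delayed copy as "observed"
        dt = 0.1
        t = np.arange(0.0, 60.0, dt)
        syn = np.exp(-((t - 25.0) / 2.0) ** 2)
        obs = np.exp(-((t - 27.0) / 2.0) ** 2)
        print(cc_time_shift(obs, syn, dt))                # ~ +2.0 s

    A time shift of this type, measured window by window, is what defines a traveltime adjoint source for the corresponding station and component.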

  1. An approach to computing discrete adjoints for MPI-parallelized models applied to Ice Sheet System Model 4.11

    NASA Astrophysics Data System (ADS)

    Larour, Eric; Utke, Jean; Bovin, Anton; Morlighem, Mathieu; Perez, Gilberto

    2016-11-01

    Within the framework of sea-level rise projections, there is a strong need for hindcast validation of the evolution of polar ice sheets in a way that tightly matches observational records (from radar, gravity, and altimetry observations mainly). However, the computational requirements for making hindcast reconstructions possible are severe and rely mainly on the evaluation of the adjoint state of transient ice-flow models. Here, we look at the computation of adjoints in the context of the NASA/JPL/UCI Ice Sheet System Model (ISSM), written in C++ and designed for parallel execution with MPI. We present the adaptations required in the way the software is designed and written, but also generic adaptations in the tools facilitating the adjoint computations. We concentrate on the use of operator overloading coupled with the AdjoinableMPI library to achieve the adjoint computation of the ISSM. We present a comprehensive approach to (1) carry out type changing through the ISSM, hence facilitating operator overloading, (2) bind to external solvers such as MUMPS and GSL-LU, and (3) handle MPI-based parallelism to scale the capability. We demonstrate the success of the approach by computing sensitivities of hindcast metrics such as the misfit to observed records of surface altimetry on the northeastern Greenland Ice Stream, or the misfit to observed records of surface velocities on Upernavik Glacier, central West Greenland. We also provide metrics for the scalability of the approach, and the expected performance. This approach has the potential to enable a new generation of hindcast-validated projections that make full use of the wealth of datasets currently being collected, or already collected, in Greenland and Antarctica.
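    As a minimal illustration of the operator-overloading idea that underlies such adjoint computations, here is a toy scalar reverse-mode tape in Python (ISSM itself relies on a C++ overloading tool together with the AdjoinableMPI library; none of the names below come from that code base):

        class Var:
            """Scalar variable that records the operations applied to it."""
            def __init__(self, value, parents=()):
                self.value = value        # primal value
                self.parents = parents    # sequence of (parent Var, local partial)
                self.adjoint = 0.0        # accumulated sensitivity dJ/dself

            def __add__(self, other):
                other = other if isinstance(other, Var) else Var(other)
                return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])
            __radd__ = __add__

            def __mul__(self, other):
                other = other if isinstance(other, Var) else Var(other)
                return Var(self.value * other.value,
                           [(self, other.value), (other, self.value)])
            __rmul__ = __mul__

            def backward(self):
                # topological order of the recorded graph, inputs first
                order, seen = [], set()
                def visit(node):
                    if id(node) not in seen:
                        seen.add(id(node))
                        for parent, _ in node.parents:
                            visit(parent)
                        order.append(node)
                visit(self)
                # reverse sweep: push adjoints from the output back to the inputs
                self.adjoint = 1.0
                for node in reversed(order):
                    for parent, local in node.parents:
                        parent.adjoint += local * node.adjoint

        # Example: J = x*y + y, so dJ/dx = y = 2 and dJ/dy = x + 1 = 4
        x, y = Var(3.0), Var(2.0)
        J = x * y + y
        J.backward()
        print(J.value, x.adjoint, y.adjoint)   # 8.0 2.0 4.0

    The reverse sweep visits each recorded operation once, backwards, which is why the full gradient costs only a small multiple of one primal evaluation regardless of the number of inputs; the engineering challenge addressed in the record above is making this machinery work through external solvers and MPI communication.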

  2. An Approach to Computing Discrete Adjoints for MPI-Parallelized Models Applied to the Ice Sheet System Model

    NASA Astrophysics Data System (ADS)

    Perez, G. L.; Larour, E. Y.; Morlighem, M.

    2016-12-01

    Within the framework of sea-level rise projections, there is a strong need for hindcast validation of the evolution of polar ice sheets in a way that tightly matches observational records (from radar and altimetry observations mainly). However, the computational requirements for making hindcast reconstructions possible are severe and rely mainly on the evaluation of the adjoint state of transient ice-flow models. Here, we look at the computation of adjoints in the context of the NASA/JPL/UCI Ice Sheet System Model, written in C++ and designed for parallel execution with MPI. We present the adaptations required in the way the software is designed and written, but also generic adaptations in the tools facilitating the adjoint computations. We concentrate on the use of operator overloading coupled with the AdjoinableMPI library to achieve the adjoint computation of ISSM. We present a comprehensive approach to 1) carry out type changing through ISSM, hence facilitating operator overloading, 2) bind to external solvers such as MUMPS and GSL-LU, and 3) handle MPI-based parallelism to scale the capability. We demonstrate the success of the approach by computing sensitivities of hindcast metrics such as the misfit to observed records of surface altimetry on the North-East Greenland Ice Stream, or the misfit to observed records of surface velocities on Upernavik Glacier, Central West Greenland. We also provide metrics for the scalability of the approach, and the expected performance. This approach has the potential to enable a new generation of hindcast-validated projections that make full use of the wealth of datasets currently being collected, or already collected, in Greenland and Antarctica, such as surface altimetry, surface velocities, and/or gravity measurements.

  3. Simultaneous Retrievals of Aerosol Properties Using Airborne Sun Photometer, Solar Flux Radiometer, and Satellite Radiance Data

    NASA Astrophysics Data System (ADS)

    Houben, H.; Bergstrom, R. W.; Russell, P. B.; Pilewskie, P.

    2006-12-01

    Characterization of atmospheric aerosols and their climatic effects frequently requires more information than can be gathered by a single instrument. Considerable effort must be devoted to assembling a suite of complementary instruments to make the required measurements and to the production of computational tools that can fuse the data into a coherent description of the aerosols. The twin turboprop Sky Research Jetstream-31 (J-31) has participated in a number of recent field campaigns (Intex A/ICARTT, Intex B/Milagro) with goals that include column closure studies of atmospheric radiation and satellite validation. Among the instruments on board were the 14-channel NASA Ames Airborne Tracking Sunphotometer (AATS-14, which measures the transmission of the solar beam in 14 narrow spectral channels from 354 nm to 2139 nm with bandwidths between 2 and 6 nm for most channels) and the Solar Spectral Flux Radiometer (SSFR, a moderate resolution flux [irradiance] spectrometer with a hemispheric field of view which makes simultaneous zenith and nadir measurements in the wavelength range from 300 nm to 2200 nm with a spectral resolution of 8-12 nm). To retrieve the data, we have developed a new adjointed radiative transfer model which simultaneously predicts the direct solar beam, upwelling and downwelling fluxes at the J-31 level, and satellite radiances. The code is based on an adding-doubling formulation, with an arbitrary number of streams and azimuths. The matrix form of the model allows for straightforward (though complicated) linearized and adjoint versions. We are thus able to use data assimilation techniques to determine best-fit aerosol properties above and below the J-31 (and ocean surface albedo), based on approximately 25 independent measurements from the aircraft alone. The presence of both flux and extinction data allows the ready identification of absorbing and scattering aerosols. When column closure spirals are flown, or surface or satellite data are available, a more detailed description of the aerosol and its vertical distribution can be obtained. We believe the J-31 platform and the new radiation code constitute an important facility for the validation of satellite aerosol observations.

  4. Dosimetric evaluation of internal shielding in a high dose rate skin applicator

    PubMed Central

    Granero, Domingo; Perez-Calatayud, Jose; Carmona, Vicente; Pujades, M Carmen; Ballester, Facundo

    2011-01-01

    Purpose The Valencia HDR applicators are accessories of the microSelectron HDR afterloading system (Nucletron) shaped as truncated cones. The base of the cone is either 2 or 3 cm in diameter. They are intended to treat skin lesions, with a typical prescription depth of 3 mm. In patients with eyelid lesions, internal shielding is very useful to reduce the dose to the ocular globe. The purpose of this work was to evaluate the dose enhancement from potential backscatter and electron contamination due to the shielding. Material and methods Two methods were used: a) Monte Carlo simulation, performed with the GEANT4 code, in which the 2 cm Valencia applicator was placed on the surface of a water phantom with a 2 mm lead slab located at 3 mm depth; b) radiochromic EBT films, used to verify the Monte Carlo results, positioned at 1.5, 3, 5 and 7 mm depth inside the phantom. Two irradiations, with and without the lead shielding slab, were carried out. Results The Monte Carlo results showed that, due to the backscatter component from the lead, the dose level rose to about 200% within a depth range of 0.5 mm. Under the lead the dose level was enhanced to about 130% within a depth range of 1 mm. Two millimeters of lead reduce the dose under the slab by about 60%. These results agree with film measurements within uncertainties. Conclusions The use of 2 mm internal lead shielding in eyelid skin treatments with the Valencia applicators was evaluated using MC methods and EBT film dosimetry. The minimum bolus thickness needed above and below the shielding was 0.5 mm and 1 mm, respectively, and the shielding reduced the absorbed dose delivered to the ocular globe by about 60%. PMID:27877198

  5. Overview of the Graphical User Interface for the GERM Code (GCR Event-Based Risk Model)

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee; Cucinotta, Francis A.

    2010-01-01

    The descriptions of biophysical events from heavy ions are of interest in radiobiology, cancer therapy, and space exploration. The biophysical description of the passage of heavy ions in tissue and shielding materials is best described by a stochastic approach that includes both ion track structure and nuclear interactions. A new computer model called the GCR Event-based Risk Model (GERM) code was developed for the description of biophysical events from heavy ion beams at the NASA Space Radiation Laboratory (NSRL). The GERM code calculates basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at NSRL for the purpose of simulating space radiobiological effects. For mono-energetic beams, the code evaluates the linear energy transfer (LET), range (R), and absorption in tissue-equivalent material for a given charge (Z), mass number (A) and kinetic energy (E) of an ion. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution of ion or delta-ray hits for a specified cellular area, cell survival curves, and mutation and tumor probabilities. The GERM code also calculates the radiation transport of the beam line either at a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle. The contributions from the primary ion and nuclear secondaries are evaluated. The GERM code accounts for the major nuclear interaction processes of importance for describing heavy ion beams, including nuclear fragmentation, elastic scattering, and knockout-cascade processes, by using the quantum multiple scattering fragmentation (QMSFRG) model. The QMSFRG model has been shown to be in excellent agreement with available experimental data for nuclear fragmentation cross sections, and has been used by the GERM code for application to thick-target experiments. The GERM code provides scientists participating in NSRL experiments with the data needed for the interpretation of their experiments, including the ability to model the beam line, the shielding of samples and sample holders, and the estimates of basic physical and biological outputs of the designed experiments. We present an overview of the GERM code GUI, as well as training applications.
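    The Poisson hit statistics mentioned above depend only on the particle fluence and the size of the sensitive area; a minimal sketch of that textbook relation (this is not the GERM implementation, and the fluence and area values are invented for illustration):

        import math

        def hit_probability(fluence_per_cm2, area_um2, k):
            """Poisson probability of exactly k traversals of a sensitive area.
            fluence_per_cm2: particle fluence in ions/cm^2
            area_um2: sensitive (e.g. nuclear) cross-sectional area in um^2
            """
            area_cm2 = area_um2 * 1.0e-8            # 1 um^2 = 1e-8 cm^2
            mean_hits = fluence_per_cm2 * area_cm2  # expected number of traversals
            return math.exp(-mean_hits) * mean_hits ** k / math.factorial(k)

        # illustrative numbers: 1e6 ions/cm^2 on a 100 um^2 nucleus gives a mean of 1 hit
        for k in range(4):
            print(k, round(hit_probability(1.0e6, 100.0, k), 3))
        # 0 0.368, 1 0.368, 2 0.184, 3 0.061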

  6. Adjoint-Based Design of Rotors Using the Navier-Stokes Equations in a Noninertial Reference Frame

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Jones, William T.

    2010-01-01

    Optimization of rotorcraft flowfields using an adjoint method generally requires a time-dependent implementation of the equations. The current study examines an intermediate approach in which a subset of rotor flowfields are cast as steady problems in a noninertial reference frame. This technique permits the use of an existing steady-state adjoint formulation with minor modifications to perform sensitivity analyses. The formulation is valid for isolated rigid rotors in hover or where the freestream velocity is aligned with the axis of rotation. Discrete consistency of the implementation is demonstrated by using comparisons with a complex-variable technique, and a number of single- and multipoint optimizations for the rotorcraft figure of merit function are shown for varying blade collective angles. Design trends are shown to remain consistent as the grid is refined.
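    The complex-variable technique referred to above for checking discrete consistency is commonly realized as the complex-step derivative trick; a minimal sketch with a made-up scalar function standing in for the objective (this is not the flow solver or its adjoint):

        import cmath

        def f(x):
            # made-up smooth function standing in for a figure-of-merit functional
            return x * cmath.exp(x) / (1.0 + x * x)

        def complex_step_derivative(func, x, h=1.0e-30):
            """df/dx via the complex step: no subtractive cancellation,
            so the step h can be taken extremely small."""
            return func(complex(x, h)).imag / h

        x0 = 0.7
        print(complex_step_derivative(f, x0))
        # central finite difference for comparison (limited by cancellation error)
        print((f(x0 + 1.0e-6).real - f(x0 - 1.0e-6).real) / 2.0e-6)

    Because the derivative comes from the imaginary part rather than a difference of nearly equal numbers, it is accurate to roughly machine precision, which makes it a convenient reference for verifying adjoint-computed sensitivities one design variable at a time.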

  7. Adjoint-Based Design of Rotors using the Navier-Stokes Equations in a Noninertial Reference Frame

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Jones, William T.

    2009-01-01

    Optimization of rotorcraft flowfields using an adjoint method generally requires a time-dependent implementation of the equations. The current study examines an intermediate approach in which a subset of rotor flowfields are cast as steady problems in a noninertial reference frame. This technique permits the use of an existing steady-state adjoint formulation with minor modifications to perform sensitivity analyses. The formulation is valid for isolated rigid rotors in hover or where the freestream velocity is aligned with the axis of rotation. Discrete consistency of the implementation is demonstrated using comparisons with a complex-variable technique, and a number of single- and multi-point optimizations for the rotorcraft figure of merit function are shown for varying blade collective angles. Design trends are shown to remain consistent as the grid is refined.

  8. Direct Linearization and Adjoint Approaches to Evaluation of Atmospheric Weighting Functions and Surface Partial Derivatives: General Principles, Synergy and Areas of Application

    NASA Technical Reports Server (NTRS)

    Ustinov, Eugene A.

    2006-01-01

    This slide presentation reviews the observable radiances as functions of atmospheric parameters and of surface parameters; the mathematics of atmospheric weighting functions (WFs) and surface partial derivatives (PDs) is presented, and the equation of the forward radiative transfer (RT) problem is given. For non-scattering atmospheres this can be done analytically, and all WFs and PDs can be computed analytically using the direct linearization approach. For scattering atmospheres, in the general case, the solution of the forward RT problem can be obtained only numerically, but only two numerical solutions are needed, one of the forward RT problem and one of the adjoint RT problem, to compute all the WFs and PDs of interest. In this presentation we discuss applications of both the linearization and adjoint approaches.
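    A hedged sketch of the adjoint identity behind this economy (generic operator notation, not taken from the presentation itself): write the forward RT problem as \(L I = S\) and an observed radiance as \(R = \langle D, I \rangle\) for an instrument response function \(D\). If \(I^{*}\) solves the adjoint problem \(L^{*} I^{*} = D\), then

        \[
        R = \langle I^{*}, S \rangle, \qquad
        \delta R \approx \langle I^{*}, \delta S \rangle - \langle I^{*}, \delta L\, I \rangle,
        \]

    so one forward solution \(I\) and one adjoint solution \(I^{*}\) yield the weighting functions and partial derivatives with respect to every atmospheric or surface parameter that perturbs \(S\) or \(L\).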

  9. Determination of the self-adjoint matrix Schrödinger operators without the bound state data

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Chuan; Yang, Chuan-Fu

    2018-06-01

    (i) For the matrix Schrödinger operator on the half line, it is shown that the scattering data, which consists of the scattering matrix and the bound state data, uniquely determines the potential and the boundary condition. It is also shown that only the scattering matrix uniquely determines the self-adjoint potential and the boundary condition if either the potential exponentially decreases fast enough or the potential is known a priori on (), where a is any fixed positive number. (ii) For the matrix Schrödinger operator on the full line, it is shown that the left (or right) reflection coefficient uniquely determines the self-adjoint potential if either the potential exponentially decreases fast enough or the potential is known a priori on (or ()), where b is any fixed number.

  10. Adjoint equations and analysis of complex systems: Application to virus infection modelling

    NASA Astrophysics Data System (ADS)

    Marchuk, G. I.; Shutyaev, V.; Bocharov, G.

    2005-12-01

    Recent development of applied mathematics is characterized by ever increasing attempts to apply modelling and computational approaches across various areas of the life sciences. The need for a rigorous analysis of complex system dynamics in immunology has been recognized for more than three decades. The aim of the present paper is to draw attention to the method of adjoint equations. The methodology makes it possible to obtain information about physical processes and to examine the sensitivity of complex dynamical systems. This provides a basis for a better understanding of the causal relationships between the immune system's performance and its parameters, and helps to improve the experimental design in the solution of applied problems. We show how the adjoint equations can be used to explain the changes in hepatitis B virus infection dynamics between individual patients.
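    A minimal sketch of the adjoint machinery for an ODE model (generic notation, not the specific hepatitis B model of the paper): for a state \(x(t)\) obeying \(\dot{x} = f(x, p)\) with parameters \(p\) and a response functional

        \[
        J(p) = \int_0^T g\big(x(t)\big)\,dt,
        \]

    the adjoint variable \(\lambda(t)\) solves the backward problem

        \[
        \dot{\lambda} = -\Big(\frac{\partial f}{\partial x}\Big)^{\!\mathsf T}\lambda
                        - \Big(\frac{\partial g}{\partial x}\Big)^{\!\mathsf T},
        \qquad \lambda(T) = 0,
        \]

    and every parameter sensitivity follows from a single backward integration,

        \[
        \frac{dJ}{dp} = \int_0^T \lambda^{\mathsf T}\,\frac{\partial f}{\partial p}\,dt,
        \]

    which is what makes the approach attractive when the number of parameters is large compared to the number of responses of interest.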

  11. Development of deterministic transport methods for low energy neutrons for shielding in space

    NASA Technical Reports Server (NTRS)

    Ganapol, Barry

    1993-01-01

    Transport of low energy neutrons associated with the galactic cosmic ray cascade is analyzed in this dissertation. A benchmark quality analytical algorithm is demonstrated for use with BRYNTRN, a computer program written by the High Energy Physics Division of NASA Langley Research Center, which is used to design and analyze shielding against the radiation created by the cascade. BRYNTRN uses numerical methods to solve the integral transport equations for baryons with the straight-ahead approximation, and numerical and empirical methods to generate the interaction probabilities. The straight-ahead approximation is adequate for charged particles, but not for neutrons. As NASA Langley improves BRYNTRN to include low energy neutrons, a benchmark quality solution is needed for comparison. The neutron transport algorithm demonstrated in this dissertation uses the closed-form Green's function solution to the galactic cosmic ray cascade transport equations to generate a source of neutrons. A basis function expansion for finite heterogeneous and semi-infinite homogeneous slabs with multiple energy groups and isotropic scattering is used to generate neutron fluxes resulting from the cascade. This method, called the FN method, is used to solve the neutral particle linear Boltzmann transport equation. As a demonstration of the algorithm coded in the programs MGSLAB and MGSEMI, neutron and ion fluxes are shown for a beam of fluorine ions at 1000 MeV per nucleon incident on semi-infinite and finite aluminum slabs. Also, to demonstrate that the shielding effectiveness against the radiation from the galactic cosmic ray cascade is not directly proportional to shield thickness, a graph of transmitted total neutron scalar flux versus slab thickness is shown. A simple model based on the nuclear liquid drop assumption is used to generate cross sections for the galactic cosmic ray cascade. The ENDF/B V database is used to generate the total and scattering cross sections for neutrons in aluminum. As an external verification, the results from MGSLAB and MGSEMI were compared to ANISN/PC, a routinely used neutron transport code, showing excellent agreement. In an application to an aluminum shield, the FN method seems to generate reasonable results.

  12. First status report on regional ground-water flow modeling for the Paradox Basin, Utah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, R.W.

    1984-05-01

    Regional ground-water flow within the principal hydrogeologic units of the Paradox Basin is evaluated by developing a conceptual model of the flow regime in the shallow aquifers and the deep-basin brine aquifers and testing these models using a three-dimensional, finite-difference flow code. Semiquantitative sensitivity analysis (a limited parametric study) is conducted to define the system response to changes in hydrologic properties or boundary conditions. A direct method for sensitivity analysis using an adjoint form of the flow equation is applied to the conceptualized flow regime in the Leadville limestone aquifer. All steps leading to the final results and conclusions are incorporated in this report. The available data utilized in this study are summarized. The specific conceptual models, defining the areal and vertical averaging of lithologic units, aquifer properties, fluid properties, and hydrologic boundary conditions, are described in detail. Two models were evaluated in this study: a regional model encompassing the hydrogeologic units above and below the Paradox Formation/Hermosa Group and a refined scale model which incorporated only the post-Paradox strata. The results are delineated by the simulated potentiometric surfaces and tables summarizing areal and vertical boundary fluxes, Darcy velocities at specific points, and ground-water travel paths. Results from the adjoint sensitivity analysis include importance functions and sensitivity coefficients, using heads or the average Darcy velocities to represent system response. The reported work is the first stage of an ongoing evaluation of the Gibson Dome area within the Paradox Basin as a potential repository for high-level radioactive wastes.

  13. SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2015-01-01

    The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients, such as flux responses or reaction rate ratios, in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.

  14. Estimation of Turbulent Heat Fluxes by Assimilation of Land Surface Temperature Observations From GOES Satellites Into an Ensemble Kalman Smoother Framework

    NASA Astrophysics Data System (ADS)

    Xu, Tongren; Bateni, S. M.; Neale, C. M. U.; Auligne, T.; Liu, Shaomin

    2018-03-01

    In different studies, land surface temperature (LST) observations have been assimilated into variational data assimilation (VDA) approaches to estimate turbulent heat fluxes. The VDA methods yield accurate turbulent heat fluxes, but they need an adjoint model, which is difficult to derive and code. They also cannot directly calculate the uncertainty of their estimates. To overcome the abovementioned drawbacks, this study assimilates LST data from the Geostationary Operational Environmental Satellite into an ensemble Kalman smoother (EnKS) data assimilation system to estimate turbulent heat fluxes. EnKS does not need to derive the adjoint term and directly generates statistical information on the accuracy of its predictions. It uses the heat diffusion equation to simulate LST. EnKS with the state augmentation approach finds the optimal values for the unknown parameters (i.e., evaporative fraction and neutral bulk heat transfer coefficient, CHN) by minimizing the misfit between LST observations from the Geostationary Operational Environmental Satellite and LST estimates from the heat diffusion equation. The augmented EnKS scheme is tested over six Ameriflux sites with a wide range of hydrological and vegetative conditions. The results show that EnKS can predict not only the model parameters and turbulent heat fluxes but also their uncertainties over a variety of land surface conditions. Compared to the variational method, EnKS yields suboptimal turbulent heat fluxes. However, the suboptimality of EnKS is small, and its results are comparable to those of the VDA method. Overall, EnKS is a feasible and reliable method for estimation of turbulent heat fluxes.
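    A minimal numpy sketch of the ensemble analysis step with state augmentation (a generic stochastic Kalman update, not the specific EnKS configuration of the paper; the ensemble and observation values are synthetic):

        import numpy as np

        rng = np.random.default_rng(0)

        def enkf_analysis(ensemble, observations, obs_operator, obs_error_var):
            """Stochastic ensemble Kalman analysis of an augmented state.
            ensemble: (n_members, n_state) forecast ensemble; the state vector may
                      hold both prognostic variables (e.g. LST) and unknown
                      parameters (e.g. evaporative fraction, CHN).
            """
            n_members = ensemble.shape[0]
            pred = np.array([obs_operator(m) for m in ensemble])   # predicted observations
            Xp = ensemble - ensemble.mean(axis=0)                  # state anomalies
            Yp = pred - pred.mean(axis=0)                          # predicted-obs anomalies
            cov_xy = Xp.T @ Yp / (n_members - 1)
            cov_yy = Yp.T @ Yp / (n_members - 1) + obs_error_var * np.eye(len(observations))
            gain = cov_xy @ np.linalg.inv(cov_yy)                  # Kalman gain
            # perturbed observations keep the analysis spread statistically consistent
            perturbed = observations + rng.normal(0.0, obs_error_var ** 0.5,
                                                  size=(n_members, len(observations)))
            return ensemble + (perturbed - pred) @ gain.T

        # toy usage: 50 members, state = [LST, parameter], one LST observation of 303 K
        ens = np.column_stack([rng.normal(300.0, 2.0, 50), rng.normal(0.5, 0.1, 50)])
        analysis = enkf_analysis(ens, np.array([303.0]), lambda m: m[:1], obs_error_var=1.0)
        print(analysis.mean(axis=0))

    The parameter column is updated only through its sampled covariance with the observed variable, which is the essence of the state-augmentation approach described above; no adjoint of the heat diffusion equation is required.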

  15. Crypto-Unitary Forms of Quantum Evolution Operators

    NASA Astrophysics Data System (ADS)

    Znojil, Miloslav

    2013-06-01

    The description of quantum evolution using the unitary operator u(t) = exp(-iht) requires that the underlying self-adjoint quantum Hamiltonian h remains time-independent. In a way extending the so-called PT-symmetric quantum mechanics to the models with manifestly time-dependent "charge" C(t), we propose and describe an extension of such an exponential-operator approach to evolution to the manifestly time-dependent self-adjoint quantum Hamiltonians h(t).

  16. Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems With Switching [Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems]

    DOE PAGES

    Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil; ...

    2017-01-24

    Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on the power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.
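    A minimal sketch of the discrete adjoint recurrence behind such an approach (generic time-stepping notation; the switching jump conditions derived in the paper are omitted): for a time-stepped model \(x_{n+1} = F_n(x_n, p)\) and a scalar response \(\psi(x_N)\),

        \[
        \lambda_N = \Big(\frac{\partial \psi}{\partial x_N}\Big)^{\!\mathsf T}, \qquad
        \lambda_n = \Big(\frac{\partial F_n}{\partial x_n}\Big)^{\!\mathsf T}\lambda_{n+1}, \qquad
        \frac{d\psi}{dp} = \sum_{n=0}^{N-1}\Big(\frac{\partial F_n}{\partial p}\Big)^{\!\mathsf T}\lambda_{n+1},
        \]

    so one backward sweep through the stored trajectory delivers the sensitivity to every parameter at once, which is why the computational effort is essentially independent of the number of sensitivity parameters.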

  17. Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems With Switching [Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil

    Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on the power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.

  18. On the spin-1/2 Aharonov–Bohm problem in conical space: Bound states, scattering and helicity nonconservation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrade, F.M., E-mail: fmandrade@uepg.br; Silva, E.O., E-mail: edilbertoo@gmail.com; Pereira, M., E-mail: marciano@uepg.br

    2013-12-15

    In this work the bound state and scattering problems for a spin-1/2 particle subjected to an Aharonov–Bohm potential in a conical space in the nonrelativistic limit are considered. The presence of a δ-function singularity, which comes from the Zeeman spin interaction with the magnetic flux tube, is addressed by the self-adjoint extension method. One of the advantages of the present approach is the determination of the self-adjoint extension parameter in terms of the physics of the problem. Expressions for the energy bound states, phase shift and S matrix are determined in terms of the self-adjoint extension parameter, which is explicitly determined in terms of the parameters of the problem. The relation between the bound state and zero modes and the failure of helicity conservation in the scattering problem and its relation with the gyromagnetic ratio g are discussed. Also, as an application, we consider the spin-1/2 Aharonov–Bohm problem in conical space plus a two-dimensional isotropic harmonic oscillator. -- Highlights: • Planar dynamics of a spin-1/2 neutral particle. • Bound state for Aharonov–Bohm systems. • Aharonov–Bohm scattering. • Helicity nonconservation. • Determination of the self-adjoint extension parameter.

  19. New recursive-least-squares algorithms for nonlinear active control of sound and vibration using neural networks.

    PubMed

    Bouchard, M

    2001-01-01

    In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration have been published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches are sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to the development of new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce a better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed in the paper is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.
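    For intuition about the filtered-x gradient approach mentioned above, here is a minimal sketch of the classical linear filtered-x LMS update (a deliberately simplified linear stand-in; the paper itself concerns neural-network controllers and recursive-least-squares updates, and the path models below are invented):

        import numpy as np

        rng = np.random.default_rng(1)

        # invented primary (disturbance) and secondary (actuator-to-sensor) paths
        primary_path = np.array([0.0, 0.0, 0.8, 0.4])
        secondary_path = np.array([0.0, 0.6, 0.3, 0.1])
        n_taps, mu, n_samples = 16, 0.01, 20000

        x = rng.standard_normal(n_samples)                         # reference signal
        d = np.convolve(x, primary_path, mode="full")[:n_samples]  # disturbance at sensor

        w = np.zeros(n_taps)                     # controller FIR weights
        x_buf = np.zeros(n_taps)                 # reference history for the controller
        y_hist = np.zeros(len(secondary_path))   # controller output history
        xs_hist = np.zeros(len(secondary_path))  # reference history for filtering
        fx_buf = np.zeros(n_taps)                # filtered-reference history
        errors = []

        for n in range(n_samples):
            x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
            y = w @ x_buf                                    # controller output
            y_hist = np.roll(y_hist, 1); y_hist[0] = y
            e = d[n] + secondary_path @ y_hist               # residual at the error sensor
            # filter the reference through the (assumed known) secondary-path model
            xs_hist = np.roll(xs_hist, 1); xs_hist[0] = x[n]
            fx_buf = np.roll(fx_buf, 1); fx_buf[0] = secondary_path @ xs_hist
            w -= mu * e * fx_buf                             # filtered-x LMS update
            errors.append(e)

        print(np.mean(np.square(errors[:1000])), np.mean(np.square(errors[-1000:])))

    Roughly speaking, the filtered reference plays the role of the gradient of the sensed error with respect to the controller weights; the nonlinear filtered-x algorithms in the paper generalize this idea to the two-network structure, while the adjoint approach obtains the corresponding gradient by backpropagation through the plant model.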

  20. Test report: DOT 7A Type A liquid packaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ketusky, E. T.; Brandjes, C.; Benoit, T. J.

    This test report documents the performance of Savannah River National Laboratory’s (SRNL’s) U.S. Department of Transportation (DOT) Specification 7A, General Packaging, Type A shielded liquid shipping packaging and its compliance with the regulatory requirements of Title 49 of the Code of Federal Regulations (CFR). The primary use of this packaging design is for the transport of radioactive liquids of up to 1.3 liters in an unshielded configuration and up to 113 mL of radioactive liquids in a shielded configuration, with no more than an A2 quantity in either configuration, over public highways and/or commercial aircraft. The contents are liquid radioactive materials sufficiently shielded and within the activity limits specified in 173.435 or 173.433 for A2 (normal form) materials, as well as within the analyzed thermal heat limits. Any contents must be compatibly packaged and must be compatible with the packaging. The basic packaging design is based on the U.S. Department of Energy’s (DOE’s) Model 9979 Type A fissile shipping packaging designed and tested by SRNL. The shielded liquid configuration consists of the outer and inner drums of the 9979 package with additional low density polyethylene (LDPE) dunnage nesting a tungsten shielded cask assembly (WSCA) within the 30-gallon inner drum. The packaging model for the DOT Specification 7A, Type A liquids packaging is HVYTAL.
