Sample records for multi-group diffusion code

  1. CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van der Holst, B.; Toth, G.; Sokolov, I. V.

We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.
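
    The three-substep operator splitting described above can be illustrated on a toy problem. Below is a minimal, self-contained Python sketch of the split-step idea, pairing an explicit CFL-limited step with an implicit solve of the stiff diffusion operator on 1-D advection-diffusion; it is not the CRASH implementation, and all parameter values are arbitrary.

    ```python
    # Minimal sketch of an operator-split update in the spirit of the abstract:
    # an explicit (CFL-limited) substep followed by an implicit solve of the
    # stiff diffusion operator, on toy 1-D advection-diffusion, periodic BCs.
    import numpy as np

    def split_step(u, dt, dx, vel, kappa):
        # substep A: explicit first-order upwind advection (stands in for hydro)
        u = u - vel * dt / dx * (u - np.roll(u, 1))
        # substep B: implicit backward-Euler diffusion (unconditionally stable,
        # the standard choice for a stiff operator): (I - dt*kappa*L) u_new = u
        r = kappa * dt / dx**2
        n = u.size
        A = (np.diag((1 + 2 * r) * np.ones(n))
             + np.diag(-r * np.ones(n - 1), 1)
             + np.diag(-r * np.ones(n - 1), -1))
        A[0, -1] = A[-1, 0] = -r      # periodic boundary closure
        return np.linalg.solve(A, u)

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u = np.exp(-200.0 * (x - 0.3) ** 2)   # initial pulse
    for _ in range(100):
        u = split_step(u, dt=1e-3, dx=x[1] - x[0], vel=1.0, kappa=1e-3)
    ```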

  2. Advanced Multi-Physics (AMP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philip, Bobby

    2012-06-01

    The Advanced Multi-Physics (AMP) code, in its present form, will allow a user to build a multi-physics application code for existing mechanics and diffusion operators and extend them with user-defined material models and new physics operators. There are examples that demonstrate mechanics, thermo-mechanics, coupled diffusion, and mechanical contact. The AMP code is designed to leverage a variety of mathematical solvers (PETSc, Trilinos, SUNDIALS, and AMP solvers) and mesh databases (LibMesh and AMP) in a consistent interchangeable approach.

  3. Progress Towards a Rad-Hydro Code for Modern Computing Architectures LA-UR-10-02825

    NASA Astrophysics Data System (ADS)

    Wohlbier, J. G.; Lowrie, R. B.; Bergen, B.; Calef, M.

    2010-11-01

We are entering an era of high performance computing where data movement is the overwhelming bottleneck to scalable performance, as opposed to the speed of floating-point operations per processor. All multi-core hardware paradigms, whether heterogeneous or homogeneous, be it the Cell processor, GPGPU, or multi-core x86, share this common trait. In multi-physics applications such as inertial confinement fusion or astrophysics, one may be solving multi-material hydrodynamics with tabular equation of state data lookups, radiation transport, nuclear reactions, and charged particle transport in a single time cycle. The algorithms are intensely data dependent (e.g., EOS, opacity, and nuclear data lookups), and multi-core hardware memory restrictions are forcing code developers to rethink code and algorithm design. For the past two years LANL has been funding a small effort referred to as Multi-Physics on Multi-Core to explore ideas for code design as pertaining to inertial confinement fusion and astrophysics applications. The near-term goals of this project are to have a multi-material radiation hydrodynamics capability, with tabular equation of state lookups, on Cartesian and curvilinear block-structured meshes. In the longer term we plan to add fully implicit multi-group radiation diffusion and material heat conduction, and block-structured AMR. We will report on our progress to date.

  4. Reactor Application for Coaching Newbies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-06-17

RACCOON is a MOOSE-based reactor physics application designed to engage undergraduate and first-year graduate students. The code contains capabilities to solve the multi-group neutron diffusion equation in eigenvalue and fixed-source form, and will soon provide simple thermal feedback. These capabilities are sufficient to solve example problems found in Duderstadt & Hamilton (the typical textbook of senior-level reactor physics classes). RACCOON does not contain any advanced capabilities as found in YAK.
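
    For orientation, the eigenvalue form of the multi-group diffusion equation that such a teaching code solves is classically handled by power iteration on the fission source. A minimal two-group, 1-D slab sketch with made-up group constants follows; it illustrates the textbook method only and is not RACCOON's MOOSE-based implementation.

    ```python
    # Two-group diffusion k-eigenvalue by power iteration (textbook sketch).
    # Group constants below are illustrative, not from any evaluated library.
    import numpy as np

    def diffusion_matrix(D, sig_r, dx, n):
        """Finite-difference operator -D d2/dx2 + sig_r, zero-flux boundaries."""
        A = np.diag((2 * D / dx**2 + sig_r) * np.ones(n))
        A += np.diag(-(D / dx**2) * np.ones(n - 1), 1)
        A += np.diag(-(D / dx**2) * np.ones(n - 1), -1)
        return A

    D1, D2 = 1.2, 0.4      # group diffusion coefficients [cm]
    sr1, sa2 = 0.03, 0.08  # group-1 removal (absorption + downscatter), group-2 absorption
    s12 = 0.02             # group 1 -> 2 downscatter cross-section [1/cm]
    nf1, nf2 = 0.005, 0.10 # nu * Sigma_f per group [1/cm]
    n, width = 100, 200.0
    dx = width / n
    A1 = diffusion_matrix(D1, sr1, dx, n)
    A2 = diffusion_matrix(D2, sa2, dx, n)

    phi1, phi2, k = np.ones(n), np.ones(n), 1.0
    for _ in range(200):                        # power iteration on fission source
        S = nf1 * phi1 + nf2 * phi2
        phi1 = np.linalg.solve(A1, S / k)       # fast group, driven by fission
        phi2 = np.linalg.solve(A2, s12 * phi1)  # thermal group, driven by downscatter
        k *= (nf1 * phi1 + nf2 * phi2).sum() / S.sum()
    print(f"k-effective ~ {k:.5f}")
    ```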

  5. Impact of multi-component diffusion in turbulent combustion using direct numerical simulations

    DOE PAGES

    Bruno, Claudio; Sankaran, Vaidyanathan; Kolla, Hemanth; ...

    2015-08-28

This study presents the results of DNS of a partially premixed turbulent syngas/air flame at atmospheric pressure. The objective was to assess the importance and possible effects of molecular transport on flame behavior and structure. To this purpose, DNS were performed with two proprietary DNS codes and with three different molecular diffusion transport models: fully multi-component, mixture-averaged, and setting the Lewis number of all species to unity.
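
    The three transport models being compared differ mainly in how each species' diffusion coefficient is obtained. Below is a hedged sketch of the two simpler options using standard textbook formulas; it is not taken from the proprietary DNS codes.

    ```python
    import numpy as np

    def mixture_averaged_D(X, Y, D_bin):
        """Mixture-averaged coefficients (Hirschfelder/Curtiss formula):
        D_k,mix = (1 - Y_k) / sum_{j != k} (X_j / D_jk),
        with X, Y the mole and mass fractions and D_bin the binary diffusivities."""
        n = len(X)
        return np.array([(1.0 - Y[k])
                         / sum(X[j] / D_bin[j, k] for j in range(n) if j != k)
                         for k in range(n)])

    def unity_lewis_D(alpha, n_species):
        """Unity-Lewis model: Le_k = alpha / D_k = 1 for every species, so all
        species diffuse with the mixture thermal diffusivity alpha."""
        return np.full(n_species, alpha)

    # The fully multi-component option instead solves the coupled Stefan-Maxwell
    # system for all species fluxes rather than assigning one coefficient each.
    ```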

  6. Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexander Pigarov

    2012-06-05

This is the final report for the Research Grant DE-FG02-08ER54989 'Edge Plasma Simulations in NSTX and CTF: Synergy of Lithium Coating, Non-Diffusive Anomalous Transport and Drifts'. The UCSD group, including A.Yu. Pigarov (PI), S.I. Krasheninnikov, and R.D. Smirnov, worked on modeling the impact of lithium coatings on edge plasma parameters in NSTX with the multi-species multi-fluid code UEDGE. The work was conducted in the following main areas: (i) improvements of the UEDGE model for plasma-lithium interactions, (ii) understanding the physics of the low-recycling divertor regime in NSTX caused by lithium pumping, (iii) study of synergistic effects with lithium coatings and non-diffusive ballooning-like cross-field transport, (iv) simulation of experimental multi-diagnostic data on edge plasma with lithium pumping in NSTX via self-consistent modeling of D-Li-C plasma with UEDGE, and (v) working-gas balance analysis. The accomplishments in these areas are given in the corresponding subsections in Section 2. Publications and presentations made under the Grant are listed in Section 3.

  7. Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh.

    PubMed

    Yuan, J; Moses, G A; McKenty, P W

    2005-10-01

A Monte Carlo algorithm for alpha particle tracking and energy deposition on a cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight-line approximation is used to follow the propagation of "Monte Carlo particles," which represent collections of alpha particles generated from thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing down approximation. The scheme addresses various aspects arising in the coupling of Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal, severely distorted mesh cells, particle relocation on the moving mesh, and particle relocation after rezoning. A comparison with the flux-limited multi-group diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show that the Monte Carlo transport method predicts earlier ignition and a higher hot-spot temperature than the diffusion method. Nearly linear speed-up is achieved for multi-processor parallel simulations.
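
    The core of such a scheme is a simple marching loop; the paper's contributions lie in the mesh bookkeeping around it. Below is a toy Python sketch of straight-line tracking with continuous-slowing-down deposition, where the stopping-power model and all constants are placeholder assumptions and the Lagrangian-mesh aspects (cell lookup, relocation after rezoning, parallelism) are omitted.

    ```python
    import numpy as np
    rng = np.random.default_rng(0)

    def track_alpha(pos, energy, stopping_power, ds=1e-4, e_min=1e-3):
        """Straight-line Monte Carlo particle with continuous slowing down.
        stopping_power(pos, E) -> dE/ds along the path; deposition is tallied
        per step (a real code would tally into the host mesh cell)."""
        direction = rng.normal(size=3)
        direction /= np.linalg.norm(direction)   # isotropic birth direction
        deposited = 0.0
        while energy > e_min:
            de = min(energy, stopping_power(pos, energy) * ds)
            energy -= de
            deposited += de
            pos = pos + direction * ds           # straight-line step
        return pos, deposited

    # toy usage: a 3.5 MeV DT alpha with a made-up stopping-power law
    end_pos, e_dep = track_alpha(np.zeros(3), 3.5, lambda p, E: 50.0 * np.sqrt(E))
    ```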

  8. Geometric phase coded metasurface: from polarization dependent directive electromagnetic wave scattering to diffusion-like scattering.

    PubMed

    Chen, Ke; Feng, Yijun; Yang, Zhongjie; Cui, Li; Zhao, Junming; Zhu, Bo; Jiang, Tian

    2016-10-24

Ultrathin metasurfaces comprising various sub-wavelength meta-particles offer promising advantages in controlling electromagnetic waves by spatially manipulating the wavefront characteristics across the interface. The recently proposed digital coding metasurface could even simplify the design and optimization procedures due to the digitalization of the meta-particle geometry. However, current attempts to implement digital metasurfaces still utilize several structural meta-particles to obtain certain electromagnetic responses, and require time-consuming optimization, especially in multi-bit coding designs. In this regard, we present herein the use of a single geometric-phase-based structured meta-particle with various orientations to achieve either 1-bit or multi-bit digital metasurfaces. Particular electromagnetic wave scattering patterns dependent on the incident polarization can be tailored by metasurfaces encoded with regular sequences. In contrast, polarization-insensitive diffusion-like scattering can also be achieved by digital metasurfaces encoded with randomly distributed coding sequences, leading to substantial suppression of backward scattering over a broadband microwave frequency range. The proposed digital metasurfaces provide simple designs and reveal new opportunities for controlling electromagnetic wave scattering with or without polarization dependence.

  9. Geometric phase coded metasurface: from polarization dependent directive electromagnetic wave scattering to diffusion-like scattering

    PubMed Central

    Chen, Ke; Feng, Yijun; Yang, Zhongjie; Cui, Li; Zhao, Junming; Zhu, Bo; Jiang, Tian

    2016-01-01

Ultrathin metasurfaces comprising various sub-wavelength meta-particles offer promising advantages in controlling electromagnetic waves by spatially manipulating the wavefront characteristics across the interface. The recently proposed digital coding metasurface could even simplify the design and optimization procedures due to the digitalization of the meta-particle geometry. However, current attempts to implement digital metasurfaces still utilize several structural meta-particles to obtain certain electromagnetic responses, and require time-consuming optimization, especially in multi-bit coding designs. In this regard, we present herein the use of a single geometric-phase-based structured meta-particle with various orientations to achieve either 1-bit or multi-bit digital metasurfaces. Particular electromagnetic wave scattering patterns dependent on the incident polarization can be tailored by metasurfaces encoded with regular sequences. In contrast, polarization-insensitive diffusion-like scattering can also be achieved by digital metasurfaces encoded with randomly distributed coding sequences, leading to substantial suppression of backward scattering over a broadband microwave frequency range. The proposed digital metasurfaces provide simple designs and reveal new opportunities for controlling electromagnetic wave scattering with or without polarization dependence. PMID:27775064

  10. The Impact of Cognitive Training on Cerebral White Matter in Community-Dwelling Elderly: One-Year Prospective Longitudinal Diffusion Tensor Imaging Study.

    PubMed

    Cao, Xinyi; Yao, Ye; Li, Ting; Cheng, Yan; Feng, Wei; Shen, Yuan; Li, Qingwei; Jiang, Lijuan; Wu, Wenyuan; Wang, Jijun; Sheng, Jianhua; Feng, Jianfeng; Li, Chunbo

    2016-09-15

It has been shown that cognitive training (CogTr) is effective and recuperative for older adults, and can be used to fight against cognitive decline. In this study, we investigated whether behavioural gains from CogTr would extend to white matter (WM) microstructure, and whether training-induced changes in WM integrity would be associated with improvements in cognitive function, using diffusion tensor imaging (DTI). Forty-eight healthy community-dwelling elderly adults were assigned either to multi-domain or single-domain CogTr groups, receiving 24 sessions over 12 weeks, or to a control group. DTI was performed at both baseline and 12-month follow-up. Positive effects of multi-domain CogTr on long-term changes in DTI indices were found in posterior parietal WM. Participants in the multi-domain group showed a trend of long-term decrease in axial diffusivity (AD) without significant change in fractional anisotropy (FA), mean diffusivity (MD), or radial diffusivity (RD), while those in the control group displayed a significant FA decrease and an increase in MD and RD. In addition, significant relationships between an improvement in processing speed and changes in RD, MD, and AD were found in the multi-domain group. These findings support the hypothesis that the plasticity of WM can be modified by CogTr, even in late adulthood.

  11. Validation of DRAGON4/DONJON4 simulation methodology for a typical MNSR by calculating reactivity feedback coefficient and neutron flux

    NASA Astrophysics Data System (ADS)

    Al Zain, Jamal; El Hajjaji, O.; El Bardouni, T.; Boukhal, H.; Jaï, Otman

    2018-06-01

The MNSR is a pool-type research reactor that is difficult to model because of the importance of neutron leakage. The aim of this study is to evaluate a 2-D transport model of the reactor compatible with the latest release of the DRAGON code, together with 3-D diffusion using the DONJON code. The DRAGON code is used to generate the group macroscopic cross sections needed for full-core diffusion calculations; the cross sections of all the reactor components at different temperatures were generated in this way. These group constants were then used in the DONJON code to compute the effective multiplication factor (keff), the feedback reactivity coefficients (accounting for variations in fuel and moderator temperatures as well as the void coefficient), and the neutron spectrum at different water and fuel temperatures using 69 energy groups. Only one parameter was changed at a time, while all other parameters were kept constant. Finally, good agreement between calculated and measured values was obtained for all of the feedback reactivity coefficients and the neutron flux.

  12. Surface tension models for a multi-material ALE code with AMR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wangyi; Koniges, Alice; Gott, Kevin

A number of surface tension models have been implemented in a 3D multi-physics multi-material code, ALE–AMR, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR). ALE–AMR is unique in its ability to model hot radiating plasmas, cold fragmenting solids, and most recently, the deformation of molten material. The surface tension models implemented include a diffuse interface approach with special numerical techniques to remove parasitic flow and a height function approach in conjunction with a volume-fraction interface reconstruction package. These surface tension models are benchmarked with a variety of test problems. In conclusion, based on the results, the height function approach using volume fractions was chosen to simulate droplet dynamics associated with extreme ultraviolet (EUV) lithography.

  13. Surface tension models for a multi-material ALE code with AMR

    DOE PAGES

    Liu, Wangyi; Koniges, Alice; Gott, Kevin; ...

    2017-06-01

A number of surface tension models have been implemented in a 3D multi-physics multi-material code, ALE–AMR, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR). ALE–AMR is unique in its ability to model hot radiating plasmas, cold fragmenting solids, and most recently, the deformation of molten material. The surface tension models implemented include a diffuse interface approach with special numerical techniques to remove parasitic flow and a height function approach in conjunction with a volume-fraction interface reconstruction package. These surface tension models are benchmarked with a variety of test problems. In conclusion, based on the results, the height function approach using volume fractions was chosen to simulate droplet dynamics associated with extreme ultraviolet (EUV) lithography.
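
    For reference, the height-function approach both records above select estimates curvature directly from volume fractions: column sums of the volume-of-fluid field form a discrete "height" whose derivatives give the interface curvature. A minimal 2-D sketch under the assumption of a locally near-horizontal interface (production codes pick the column direction from the interface normal; this is not the ALE–AMR implementation):

    ```python
    import numpy as np

    def height_function_curvature(vof, i, j, dx, dy):
        """Curvature at interface cell (i, j) from a 3-column x 7-cell stencil
        of the volume-of-fluid field, kappa = h'' / (1 + h'^2)^(3/2)."""
        # column heights, in units of length (sum of volume fractions times dy)
        h = np.array([vof[i + di, j - 3:j + 4].sum() * dy for di in (-1, 0, 1)])
        hx = (h[2] - h[0]) / (2.0 * dx)            # dh/dx, central difference
        hxx = (h[2] - 2.0 * h[1] + h[0]) / dx**2   # d2h/dx2
        return hxx / (1.0 + hx**2) ** 1.5
    ```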

  14. TIME-DEPENDENT MULTI-GROUP MULTI-DIMENSIONAL RELATIVISTIC RADIATIVE TRANSFER CODE BASED ON SPHERICAL HARMONIC DISCRETE ORDINATE METHOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I., E-mail: tominaga@konan-u.ac.jp, E-mail: sshibata@post.kek.jp, E-mail: Sergei.Blinnikov@itep.ru

We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids that are involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM), which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering to the publicly available SHDOM code. Our code adopts a mixed-frame approach; the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the first and third moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.

15. Reactivity effects in VVER-1000 of the third unit of the Kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N., E-mail: zizin@adis.vver.kiae.ru

    2010-12-15

The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

16. Reactivity effects in VVER-1000 of the third unit of the Kalinin nuclear power plant at physical start-up. Computations in ShIPR intellectual code system with library of two-group cross sections generated by UNK code

    NASA Astrophysics Data System (ADS)

    Zizin, M. N.; Zimin, V. G.; Zizina, S. N.; Kryakvin, L. V.; Pitilimov, V. A.; Tereshonok, V. A.

    2010-12-01

    The ShIPR intellectual code system for mathematical simulation of nuclear reactors includes a set of computing modules implementing the preparation of macro cross sections on the basis of the two-group library of neutron-physics cross sections obtained for the SKETCH-N nodal code. This library is created by using the UNK code for 3D diffusion computation of first VVER-1000 fuel loadings. Computation of neutron fields in the ShIPR system is performed using the DP3 code in the two-group diffusion approximation in 3D triangular geometry. The efficiency of all groups of control rods for the first fuel loading of the third unit of the Kalinin Nuclear Power Plant is computed. The temperature, barometric, and density effects of reactivity as well as the reactivity coefficient due to the concentration of boric acid in the reactor were computed additionally. Results of computations are compared with the experiment.

  17. Influence of Natural Convection and Thermal Radiation Multi-Component Transport in MOCVD Reactors

    NASA Technical Reports Server (NTRS)

    Lowry, S.; Krishnan, A.; Clark, I.

    1999-01-01

The influence of Grashof and Reynolds numbers in Metal Organic Chemical Vapor Deposition (MOCVD) reactors is being investigated in a combined empirical/numerical study. As part of that research, the deposition of indium phosphide in an MOCVD reactor is modeled using the computational code CFD-ACE. The model includes the effects of convection, conduction, and radiation as well as multi-component diffusion and multi-step surface/gas phase chemistry. The results of the prediction are compared with experimental data for a commercial reactor and analyzed with respect to the model accuracy.

  18. Optimization of small long-life PWR based on thorium fuel

    NASA Astrophysics Data System (ADS)

    Subkhi, Moh Nurul; Suud, Zaki; Waris, Abdul; Permana, Sidik

    2015-09-01

A conceptual design of a small long-life Pressurized Water Reactor (PWR) using thorium fuel has been investigated from the neutronics standpoint. The cell burnup calculations were performed by the PIJ SRAC code using a nuclear data library based on JENDL 3.2, while the multi-energy-group diffusion calculations were optimized in three-dimensional X-Y-Z core geometry by COREBN. The excess reactivity of thorium nitride fuel with ZIRLO cladding is considered during 5 years of burnup without refueling. Optimization of the 350 MWe long-life PWR based on 5% 233U & 2.8% 231Pa, 6% 233U & 2.8% 231Pa, and 7% 233U & 6% 231Pa gives low excess reactivity.

19. Optimization methods for the 2D reflector parameters in a pressurized water reactor

    NASA Astrophysics Data System (ADS)

    Clerc, Thomas

With a third of the reactors in operation, the Pressurized Water Reactor (PWR) is today the most widely used reactor design in the world. This technology equips all 19 EDF power plants. PWRs fit into the category of thermal reactors, because it is mainly the thermal neutrons that contribute to the fission reaction. The pressurized light water is used both as the moderator of the reaction and as the coolant. The active part of the core is composed of uranium, slightly enriched in uranium-235. The reflector is a region surrounding the active core, containing mostly water and stainless steel. The purpose of the reflector is to protect the vessel from radiation, and also to slow down the neutrons and reflect them back into the core. Given that the neutrons sustain the fission reaction, the study of their behavior within the core is essential to understanding how the reactor works. The neutron behavior is governed by the transport equation, which is very complex to solve numerically and requires very long calculations. This is the reason why the core codes used in this study solve simplified equations to approximate the neutron behavior in the core in an acceptable calculation time. In particular, we focus our study on the diffusion equation and approximated transport equations, such as the SPN or SN equations. The physical properties of the reflector are radically different from those of the fissile core, and this structural change causes a significant tilt in the neutron flux at the core/reflector interface. This is why it is very important to accurately design the reflector, in order to precisely recover the neutron behavior over the whole core. Existing reflector calculation techniques are based on the Lefebvre-Lebigot method. This method is only valid if the energy continuum of the neutrons is discretized in two energy groups, and if the diffusion equation is used. The method leads to the calculation of a homogeneous reflector. The aim of this study is to create a computational scheme able to compute the parameters of heterogeneous, multi-group reflectors, with both diffusion and SPN/SN operators. For this purpose, two computational schemes are designed to perform such a reflector calculation. The strategy used in both schemes is to minimize the discrepancies between a power distribution computed with a core code and a reference distribution obtained with an APOLLO2 calculation based on the Method Of Characteristics (MOC). In both computational schemes, the optimization parameters, also called control variables, are the diffusion coefficients in each zone of the reflector for diffusion calculations, and the P-1 corrected macroscopic total cross-sections in each zone of the reflector for SPN/SN calculations (or correction factors on these parameters). After a first validation of our computational schemes, the results are computed, always by optimizing the fast diffusion coefficient for each zone of the reflector. All the tools of data assimilation have been used to reflect the different behavior of the solvers in the different parts of the core. Moreover, the reflector is refined into six separate zones, corresponding to its physical structure. There are then six control variables for the optimization algorithms. Our computational schemes are thus able to compute heterogeneous, 2-group or multi-group reflectors, using diffusion or SPN/SN operators.
The optimization performed reduces the discrepancies between the power distribution computed with the core codes and the reference power. However, there are two main limitations to this study: first, the homogeneous modeling of the reflector assemblies does not allow its physical structure near the core/reflector interface to be properly described. Moreover, the fissile assemblies are modeled in an infinite medium, and this model reaches its limit at the core/reflector interface. These two problems should be tackled in future studies. (Abstract shortened by UMI.)

  20. GIZMO: Multi-method magneto-hydrodynamics+gravity code

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2014-10-01

    GIZMO is a flexible, multi-method magneto-hydrodynamics+gravity code that solves the hydrodynamic equations using a variety of different methods. It introduces new Lagrangian Godunov-type methods that allow solving the fluid equations with a moving particle distribution that is automatically adaptive in resolution and avoids the advection errors, angular momentum conservation errors, and excessive diffusion problems that seriously limit the applicability of “adaptive mesh” (AMR) codes, while simultaneously avoiding the low-order errors inherent to simpler methods like smoothed-particle hydrodynamics (SPH). GIZMO also allows the use of SPH either in “traditional” form or “modern” (more accurate) forms, or use of a mesh. Self-gravity is solved quickly with a BH-Tree (optionally a hybrid PM-Tree for periodic boundaries) and on-the-fly adaptive gravitational softenings. The code is descended from P-GADGET, itself descended from GADGET-2 (ascl:0003.001), and many of the naming conventions remain (for the sake of compatibility with the large library of GADGET work and analysis software).

  1. Optimization of small long-life PWR based on thorium fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subkhi, Moh Nurul, E-mail: nsubkhi@students.itb.ac.id; Physics Dept., Faculty of Science and Technology, State Islamic University of Sunan Gunung Djati Bandung Jalan A.H Nasution 105 Bandung; Suud, Zaki, E-mail: szaki@fi.itb.ac.id

    2015-09-30

A conceptual design of a small long-life Pressurized Water Reactor (PWR) using thorium fuel has been investigated from the neutronics standpoint. The cell burnup calculations were performed by the PIJ SRAC code using a nuclear data library based on JENDL 3.2, while the multi-energy-group diffusion calculations were optimized in three-dimensional X-Y-Z core geometry by COREBN. The excess reactivity of thorium nitride fuel with ZIRLO cladding is considered during 5 years of burnup without refueling. Optimization of the 350 MWe long-life PWR based on 5% 233U & 2.8% 231Pa, 6% 233U & 2.8% 231Pa, and 7% 233U & 6% 231Pa gives low excess reactivity.

  2. Improved Cook-off Modeling of Multi-component Cast Explosives

    NASA Astrophysics Data System (ADS)

    Nichols, Albert

    2017-06-01

In order to understand the hazards associated with energetic materials, it is important to understand their behavior in adverse thermal environments. These processes have been relatively well understood for solid explosives; however, the same cannot be said for multi-component melt-cast explosives. Here we describe the continued development of ALE3D, a coupled thermal/chemical/mechanical code, to improve its description of fluid explosives. The improved physics models include: (1) a chemical-potential-driven species segregation model, which allows us to model the complex flow fields associated with melting and decomposing Comp-B, where the denser RDX tends to settle and the decomposing gases rise; (2) an automatically scaled stream-wise diffusion model for thermal, species, and momentum diffusion, which adds sufficient numerical diffusion in the direction of flow to maintain numerical stability when the system is under-resolved, as occurs for large systems; and (3) a slurry viscosity model, required to properly define the flow characteristics of the multi-component fluidized system. These models will be demonstrated on a simple Comp-B system. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344.

  3. High-fidelity plasma codes for burn physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooley, James; Graziani, Frank; Marinak, Marty

Accurate predictions of equation of state (EOS) and ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes (HFPC), are a relatively recent computational tool that augments both experimental data and the theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes and their future development, and the potential impact they may have on improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.

  4. Conceptual design study of small long-life PWR based on thorium cycle fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subkhi, M. Nurul; Su'ud, Zaki; Waris, Abdul

    2014-09-30

The neutronic performance of a small long-life Pressurized Water Reactor (PWR) using thorium-cycle-based fuel has been investigated. The thorium cycle, which has a higher conversion ratio in the thermal region than the uranium cycle, produces a significant amount of 233U during burnup. The cell burnup calculations were performed by the PIJ SRAC code using a nuclear data library based on JENDL 3.3, while the multi-energy-group diffusion calculations were optimized in whole-core two-dimensional cylindrical R-Z geometry by SRAC-CITATION. This study introduces a thorium nitride fuel system with ZIRLO as the cladding material. Optimization of the 350 MWt small long-life PWR results in small excess reactivity and reduced power peaking during operation.

  5. Implementation of 5-layer thermal diffusion scheme in weather research and forecasting model with Intel Many Integrated Cores

    NASA Astrophysics Data System (ADS)

    Huang, Melin; Huang, Bormin; Huang, Allen H.

    2014-10-01

For weather forecasting and research, the Weather Research and Forecasting (WRF) model has been developed, consisting of several components such as dynamic solvers and physical simulation modules. WRF includes several Land-Surface Models (LSMs). The LSMs use atmospheric information, the radiative and precipitation forcing from the surface layer scheme, the radiation scheme, and the microphysics/convective scheme, all together with the land's state variables and land-surface properties, to provide heat and moisture fluxes over land and sea-ice points. The WRF 5-layer thermal diffusion simulation is an LSM based on the MM5 5-layer soil temperature model with an energy budget that includes radiation, sensible, and latent heat flux. The WRF LSMs are very suitable for massively parallel computation as there are no interactions among horizontal grid points. The efficient parallelization and vectorization essentials of the Intel Many Integrated Core (MIC) architecture allow us to optimize this WRF 5-layer thermal diffusion scheme. In this work, we present the computing performance results for this scheme on the Intel MIC architecture. Our results show that the MIC-based optimization improved the performance of the first version of the multi-threaded code on the Xeon Phi 5110P by a factor of 2.1x. Accordingly, the same CPU-based optimizations improved the performance on the Intel Xeon E5-2603 by a factor of 1.6x as compared to the first version of the multi-threaded code.
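
    The property that makes this scheme attractive for MIC hardware is that columns do not interact horizontally, so the temperature update is independent per grid point and vectorizes trivially. A toy, self-contained illustration of that structure follows (explicit vertical heat diffusion over many independent 5-layer columns; all values are arbitrary and this is not the WRF/MM5 formulation):

    ```python
    import numpy as np

    def soil_diffusion_step(T, dz, kappa, dt, flux_top):
        """One explicit step of vertical heat diffusion in independent columns.
        T: (n_columns, 5) layer temperatures; dz: (5,) layer thicknesses [m];
        kappa: thermal diffusivity [m^2/s]; flux_top: kinematic surface flux.
        No horizontal coupling: every column is an independent 1-D problem."""
        n_cols = T.shape[0]
        F = np.zeros((n_cols, 6))                         # layer-interface fluxes
        F[:, 0] = flux_top                                # surface energy forcing
        zi = 0.5 * (dz[:-1] + dz[1:])                     # interface spacings
        F[:, 1:5] = -kappa * (T[:, 1:] - T[:, :-1]) / zi  # interior fluxes
        # F[:, 5] stays 0: zero-flux lower boundary
        return T + dt * (F[:, :5] - F[:, 1:]) / dz        # dT/dt = -dF/dz

    T = np.full((10000, 5), 285.0)                        # 10^4 independent columns
    dz = np.array([0.01, 0.02, 0.04, 0.08, 0.16])
    T = soil_diffusion_step(T, dz, kappa=5e-7, dt=60.0, flux_top=2e-4)
    ```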

  6. Vectorization, threading, and cache-blocking considerations for hydrocodes on emerging architectures

    DOE PAGES

    Fung, J.; Aulwes, R. T.; Bement, M. T.; ...

    2015-07-14

This work reports on considerations for improving computational performance in preparation for current and expected changes to computer architecture. The algorithms studied include increasingly complex prototypes for radiation hydrodynamics codes, such as gradient routines and diffusion matrix assembly (e.g., in [1-6]). The meshes considered for the algorithms are structured or unstructured meshes. The considerations applied for performance improvements are meant to be general in terms of architecture (not specific to graphical processing units (GPUs) or multi-core machines, for example) and include techniques for vectorization, threading, tiling, and cache blocking. Out of a survey of optimization techniques on applications such as diffusion and hydrodynamics, we make general recommendations with a view toward making these techniques conceptually accessible to the applications code developer. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  7. Zebra: An advanced PWR lattice code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, L.; Wu, H.; Zheng, Y.

    2012-07-01

This paper presents an overview of ZEBRA, an advanced PWR lattice code developed at the NECP laboratory at Xi'an Jiaotong University. The multi-group cross-section library is generated from the ENDF/B-VII library by NJOY, and the 361-group SHEM structure is employed. The resonance calculation module is developed based on the sub-group method. The transport solver is the Auto-MOC code, a self-developed code based on the Method of Characteristics and the customization of AutoCAD software. The whole code is well organized in a modular software structure. Numerical results obtained during the validation of the code demonstrate that it has good precision and high efficiency. (authors)

  8. The numerical methods for the development of the mixture region in the vapor explosion simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y.; Ohashi, H.; Akiyama, M.

An attempt is being made to numerically simulate the process of the vapor explosion with a general multi-component, multi-dimensional code. Because of the rapid change of the flow field and the extremely nonuniform distribution of the components in the system of the vapor explosion, numerical divergence and diffusion occur easily. A dispersed component model and a multi-region scheme, by which these difficulties can be effectively overcome, were proposed. Simulations have been performed for the processes of premixing and fragmentation propagation in the vapor explosion.

  9. GRIZZLY Model of Multi-Reactive Species Diffusion, Moisture/Heat Transfer and Alkali-Silica Reaction for Simulating Concrete Aging and Degradation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Hai; Spencer, Benjamin W.; Cai, Guowei

Concrete is widely used in the construction of nuclear facilities because of its structural strength and its ability to shield radiation. The use of concrete in nuclear power plants for containment and shielding of radiation and radioactive materials has made its performance crucial for the safe operation of the facility. As such, when life extension is considered for nuclear power plants, it is critical to have accurate and reliable predictive tools to address concerns related to various aging processes of concrete structures and the capacity of structures subjected to age-related degradation. The goal of this report is to document the progress of the development and implementation of a fully coupled thermo-hydro-mechanical-chemical model in the GRIZZLY code, with the ultimate goal to reliably simulate and predict the long-term performance and response of aged NPP concrete structures subjected to a number of aging mechanisms, including external chemical attacks and volume-changing chemical reactions within concrete structures induced by alkali-silica reactions and long-term exposure to irradiation. Based on a number of survey reports of concrete aging mechanisms relevant to nuclear power plants and recommendations from researchers in the concrete community, we have implemented three modules during FY15 in the GRIZZLY code: (1) a multi-species reactive diffusion model within cement materials; (2) a coupled moisture and heat transfer model in concrete; and (3) an anisotropic, stress-dependent, alkali-silica-reaction-induced swelling model. The multi-species reactive diffusion model was implemented with the objective to model aging of concrete structures subjected to aggressive external chemical attacks (e.g., chloride attack, sulfate attack, etc.). It considers multiple processes relevant to external chemical attacks such as diffusion of ions in the aqueous phase within pore spaces, equilibrium chemical speciation reactions, and kinetic mineral dissolution/precipitation. The moisture/heat transfer module was implemented to simulate long-term spatial and temporal evolutions of the moisture and temperature fields within concrete structures at both room and elevated temperatures. The ASR swelling model implemented in the GRIZZLY code can simulate anisotropic expansions of ASR gel under uniaxial, biaxial, and triaxial stress states, and can be run simultaneously with the moisture/heat transfer model and coupled with the various elastic/inelastic solid mechanics models that were implemented in GRIZZLY previously. This report provides detailed descriptions of the governing equations, constitutive equations, and numerical algorithms of the three modules implemented in GRIZZLY during FY15, simulation results of example problems, and model validation results comparing simulations with available experimental data reported in the literature. The close match between the experiments and simulations clearly demonstrates the potential of the GRIZZLY code for reliable evaluation and prediction of the long-term performance and response of aged concrete structures in nuclear power plants.

  10. Modeling radiation belt dynamics using a 3-D layer method code

    NASA Astrophysics Data System (ADS)

    Wang, C.; Ma, Q.; Tao, X.; Zhang, Y.; Teng, S.; Albert, J. M.; Chan, A. A.; Li, W.; Ni, B.; Lu, Q.; Wang, S.

    2017-08-01

    A new 3-D diffusion code using a recently published layer method has been developed to analyze radiation belt electron dynamics. The code guarantees the positivity of the solution even when mixed diffusion terms are included. Unlike most of the previous codes, our 3-D code is developed directly in equatorial pitch angle (α0), momentum (p), and L shell coordinates; this eliminates the need to transform back and forth between (α0,p) coordinates and adiabatic invariant coordinates. Using (α0,p,L) is also convenient for direct comparison with satellite data. The new code has been validated by various numerical tests, and we apply the 3-D code to model the rapid electron flux enhancement following the geomagnetic storm on 17 March 2013, which is one of the Geospace Environment Modeling Focus Group challenge events. An event-specific global chorus wave model, an AL-dependent statistical plasmaspheric hiss wave model, and a recently published radial diffusion coefficient formula from Time History of Events and Macroscale Interactions during Substorms (THEMIS) statistics are used. The simulation results show good agreement with satellite observations, in general, supporting the scenario that the rapid enhancement of radiation belt electron flux for this event results from an increased level of the seed population by radial diffusion, with subsequent acceleration by chorus waves. Our results prove that the layer method can be readily used to model global radiation belt dynamics in three dimensions.

  11. Multi-Component Diffusion with Application To Computational Aerothermodynamics

    NASA Technical Reports Server (NTRS)

    Sutton, Kenneth; Gnoffo, Peter A.

    1998-01-01

The accuracy and complexity of solving multicomponent gaseous diffusion using the detailed multicomponent equations, the Stefan-Maxwell equations, and two commonly used approximate equations have been examined in a two-part study. Part I examined the equations in a basic study with specified inputs, in which the results are applicable to many applications. Part II addressed the application of the equations in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) computational code for high-speed entries in Earth's atmosphere. The results showed that the presented iterative scheme for solving the Stefan-Maxwell equations is an accurate and effective method as compared with solutions of the detailed equations. In general, good accuracy with the approximate equations cannot be guaranteed for a species or all species in a multi-component mixture. 'Corrected' forms of the approximate equations, which ensure that the diffusion mass fluxes sum to zero as required, were more accurate than the uncorrected forms. Good accuracy, as compared with the Stefan-Maxwell results, was obtained with the 'corrected' approximate equations in defining the heating rates for the three Earth entries considered in Part II.
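
    Because the Stefan-Maxwell relations couple every species to every other, the diffusion velocities are commonly obtained by iteration. A generic fixed-point sketch is shown below, including the zero-net-mass-flux correction whose importance the study highlights; this is a textbook-style formulation, not the LAURA scheme itself.

    ```python
    import numpy as np

    def stefan_maxwell_velocities(x, y, grad_x, D, n_iter=50):
        """Iteratively solve grad_x_i = sum_j x_i x_j (V_j - V_i) / D_ij for the
        diffusion velocities V_i (1-D, all mole fractions assumed positive), then
        shift so the mass diffusion fluxes sum to zero ('corrected' form).
        x, y: mole and mass fractions (n,); grad_x: mole-fraction gradients (n,);
        D: symmetric binary-diffusivity matrix (n, n)."""
        n = len(x)
        V = np.zeros(n)
        for _ in range(n_iter):
            for i in range(n):
                s = sum(x[j] / D[i, j] for j in range(n) if j != i)
                coupling = sum(x[j] * V[j] / D[i, j] for j in range(n) if j != i)
                V[i] = (coupling - grad_x[i] / x[i]) / s
            V -= np.dot(y, V)      # enforce sum_i y_i V_i = 0
        return V
    ```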

  12. BOXER: Fine-flux Cross Section Condensation, 2D Few Group Diffusion and Transport Burnup Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2010-02-01

    Neutron transport, calculation of multiplication factor and neutron fluxes in 2-D configurations: cell calculations, 2-D diffusion and transport, and burnup. Preparation of a cross section library for the code BOXER from a basic library in ENDF/B format (ETOBOX).

  13. Statistical uncertainty analysis applied to the DRAGONv4 code lattice calculations and based on JENDL-4 covariance data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.

    2012-07-01

In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code in order to perform uncertainty analysis on k-infinity and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for such purposes, where cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest amount of isotopic covariance matrices among the major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 x 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance limits concept, where the sample formed by the code calculations is inferred to cover 95% of the output population with at least 95% confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state-of-the-art resonant self-shielding calculations such as DRAGONv4. (authors)
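
    The sampling step itself is compact. A minimal sketch for independent normal inputs follows (the study's use of full JENDL-4 covariance matrices would additionally correlate the samples, and the numbers here are purely illustrative):

    ```python
    import numpy as np
    from scipy.stats import norm

    def latin_hypercube_normal(n_samples, means, stds, rng=None):
        """LHS for independent normal inputs: each input's probability range is
        split into n_samples strata and each stratum is sampled exactly once,
        covering the distributions more densely than simple random sampling."""
        rng = rng or np.random.default_rng()
        n_var = len(means)
        # one independent permutation of the strata per variable, plus jitter
        strata = rng.permuted(np.tile(np.arange(n_samples), (n_var, 1)), axis=1).T
        u = (strata + rng.uniform(size=(n_samples, n_var))) / n_samples
        return norm.ppf(u) * stds + means

    # e.g. 500 perturbed sets of three hypothetical multi-group cross-sections
    samples = latin_hypercube_normal(500,
                                     means=np.array([1.00, 0.05, 2.40]),
                                     stds=np.array([0.02, 0.001, 0.01]))
    ```

    With 500 runs, the nonparametric tolerance-limit criterion is met with a wide margin; by the usual Wilks formula, 59 runs already suffice for a one-sided 95%/95% statement.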

  14. Flow range enhancement by secondary flow effect in low solidity circular cascade diffusers

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Daisaku; Tun, Min Thaw; Mizokoshi, Kanata; Kishikawa, Daiki

    2014-08-01

High pressure ratio and wide operating range are highly required for compressors and blowers. The technical design issue is to suppress flow separation at small flow rates without deteriorating the efficiency at the design flow rate. Numerical simulation is very effective in the design procedure; however, its cost is generally high during the practical design process, and it is difficult to confirm an optimal design that combines many parameters. A multi-objective optimization technique has been proposed for solving this problem in the practical design process. In this study, a Low Solidity circular cascade Diffuser (LSD) in a centrifugal blower is successfully designed by means of a multi-objective optimization technique. An optimization code with a meta-model-assisted evolutionary algorithm is used with the commercial CFD code ANSYS-CFX. The optimization aims at improving the static pressure coefficient at the design point and at low flow rate while constraining the slope of the lift coefficient curve. Moreover, a small tip clearance of the LSD blade was applied in order to activate and stabilize the secondary flow effect at small flow rates. The optimized LSD blade has an operating range extended by 114% towards smaller flow rates as compared to the baseline design, without deteriorating the diffuser pressure recovery at the design point. The diffuser pressure rise and operating flow range of the optimized LSD blade are experimentally verified by an overall performance test. The detailed flow in the diffuser is also confirmed by means of Particle Image Velocimetry (PIV). Secondary flow is clearly captured by PIV and spreads over the whole LSD blade pitch. It is found that the optimized LSD blade shows good improvement of the blade loading over the whole operating range, while at small flow rates the flow separation on the LSD blade is successfully suppressed by the secondary flow effect.

  15. Turtle 24.0 diffusion depletion code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altomare, S.; Barry, R.F.

    1971-09-01

TURTLE is a two-group, two-dimensional (x-y, x-z, r-z) neutron diffusion code featuring a direct treatment of the nonlinear effects of xenon, enthalpy, and Doppler. Fuel depletion is allowed. TURTLE was written for the study of azimuthal xenon oscillations, but the code is useful for general analysis. The input is simple, fuel management is handled directly, and a boron criticality search is allowed. Ten thousand space points are allowed (over 20,000 with diagonal symmetry). TURTLE is written in FORTRAN IV and is tailored for the present CDC-6600. The program is core-contained. Provision is made to save data on tape for future reference. (auth)

  16. Comments on the Diffusive Behavior of Two Upwind Schemes

    NASA Technical Reports Server (NTRS)

    Wood, William A.; Kleb, William L.

    1998-01-01

The diffusive characteristics of two upwind schemes, multi-dimensional fluctuation splitting and locally one-dimensional finite volume, are compared for scalar advection-diffusion problems. Algorithms for the two schemes are developed for node-based data representation on median-dual meshes associated with unstructured triangulations in two spatial dimensions. Four model equations are considered: linear advection, non-linear advection, diffusion, and advection-diffusion. Modular coding is employed to isolate the effects of the two approaches for upwind flux evaluation, allowing for head-to-head accuracy and efficiency comparisons. Both the stability of compressive limiters and the amount of artificial diffusion generated by the schemes are found to be grid-orientation dependent, with the fluctuation splitting scheme producing less artificial diffusion than the finite volume scheme. Convergence rates are compared for the combined advection-diffusion problem, with a speedup of 2.5 seen for fluctuation splitting versus finite volume when solved on the same mesh. However, accurate solutions to problems with small diffusion coefficients can be achieved on coarser meshes using fluctuation splitting rather than finite volume, so that when comparing convergence rates to reach a given accuracy, fluctuation splitting shows a speedup of 29 over finite volume.
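
    To make the notion of artificial diffusion concrete, the standard modified-equation analysis of first-order upwinding for the model problem u_t + a u_x = 0 (a textbook result, not a calculation from the paper) reads:

    ```latex
    \[
      u_i^{n+1} = u_i^n - \frac{a\,\Delta t}{\Delta x}\left(u_i^n - u_{i-1}^n\right)
      \;\;\Longrightarrow\;\;
      u_t + a\,u_x = \nu_{\mathrm{art}}\,u_{xx} + O(\Delta x^2),
      \qquad
      \nu_{\mathrm{art}} = \frac{a\,\Delta x}{2}\left(1 - \frac{a\,\Delta t}{\Delta x}\right).
    \]
    ```

    In multiple dimensions the effective artificial diffusivity also depends on the angle between the flow and the grid, which is exactly the grid-orientation dependence the comparison above quantifies.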

  17. Diffusion Characteristics of Upwind Schemes on Unstructured Triangulations

    NASA Technical Reports Server (NTRS)

    Wood, William A.; Kleb, William L.

    1998-01-01

The diffusive characteristics of two upwind schemes, multi-dimensional fluctuation splitting and dimensionally-split finite volume, are compared for scalar advection-diffusion problems. Algorithms for the two schemes are developed for node-based data representation on median-dual meshes associated with unstructured triangulations in two spatial dimensions. Four model equations are considered: linear advection, non-linear advection, diffusion, and advection-diffusion. Modular coding is employed to isolate the effects of the two approaches for upwind flux evaluation, allowing for head-to-head accuracy and efficiency comparisons. Both the stability of compressive limiters and the amount of artificial diffusion generated by the schemes are found to be grid-orientation dependent, with the fluctuation splitting scheme producing less artificial diffusion than the dimensionally-split finite volume scheme. Convergence rates are compared for the combined advection-diffusion problem, with a speedup of 2-3 seen for fluctuation splitting versus finite volume when solved on the same mesh. However, accurate solutions to problems with small diffusion coefficients can be achieved on coarser meshes using fluctuation splitting rather than finite volume, so that when comparing convergence rates to reach a given accuracy, fluctuation splitting shows a speedup of 20-25 over finite volume.

  18. An Information Theory-Inspired Strategy for Design of Re-programmable Encrypted Graphene-based Coding Metasurfaces at Terahertz Frequencies.

    PubMed

    Momeni, Ali; Rouhi, Kasra; Rajabalipanah, Hamid; Abdolali, Ali

    2018-04-18

Inspired by information theory, a new concept of re-programmable encrypted graphene-based coding metasurfaces was investigated at terahertz frequencies. A channel-coding function was proposed to convolutionally record an arbitrary information message onto unrecognizable but recoverable parity beams generated by a phase-encrypted coding metasurface. A single graphene-based reflective cell with dual-mode biasing voltages was designed to act as the "0" and "1" meta-atoms, providing broadband opposite reflection phases. By exploiting graphene tunability, the proposed scheme enabled an unprecedented degree of freedom in the real-time mapping of information messages onto multiple parity beams which cannot be damaged, altered, or reverse-engineered. Various encryption types such as mirroring, anomalous reflection, multi-beam generation, and scattering diffusion can be dynamically attained via our multifunctional metasurface. Moreover, contrary to conventional time-consuming, optimization-based methods, this paper offers a fast, straightforward, and efficient design of diffusion metasurfaces of arbitrarily large size. Rigorous full-wave simulations corroborated the results, where the phase-encrypted metasurfaces exhibited a polarization-insensitive reflectivity of less than -10 dB over a broadband frequency range from 1 THz to 1.7 THz. This work reveals new opportunities for the extension of re-programmable THz coding metasurfaces and may be of interest for reflection-type security systems, computational imaging, and camouflage technology.

  19. Generalized Fourier analyses of the advection-diffusion equation - Part I: one-dimensional domains

    NASA Astrophysics Data System (ADS)

    Christon, Mark A.; Martinez, Mario J.; Voth, Thomas E.

    2004-07-01

This paper presents a detailed multi-methods comparison of the spatial errors associated with finite difference, finite element, and finite volume semi-discretizations of the scalar advection-diffusion equation. The errors are reported in terms of non-dimensional phase and group speed, discrete diffusivity, artificial diffusivity, and grid-induced anisotropy. It is demonstrated that Fourier analysis provides an automatic process for separating the discrete advective operator into its symmetric and skew-symmetric components and characterizing the spectral behaviour of each operator. For each of the numerical methods considered, asymptotic truncation error and resolution estimates are presented for the limiting cases of pure advection and pure diffusion. It is demonstrated that streamline upwind Petrov-Galerkin and its control-volume finite element analogue, the streamline upwind control-volume method, produce both an artificial diffusivity and a concomitant phase speed adjustment in addition to the usual semi-discrete artifacts observed in the phase speed, group speed, and diffusivity. The Galerkin finite element method and its streamline upwind derivatives are shown to exhibit super-convergent behaviour in terms of phase and group speed when a consistent mass matrix is used in the formulation. In contrast, the CVFEM method and its streamline upwind derivatives yield strictly second-order behaviour. In Part II of this paper, we consider two-dimensional semi-discretizations of the advection-diffusion equation and also assess the effects of grid-induced anisotropy observed in the non-dimensional phase speed and the discrete and artificial diffusivities. Although this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common analysis framework. Published in 2004 by John Wiley & Sons, Ltd.
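
    The symbol-splitting device is easy to reproduce for simple schemes. The short sketch below computes non-dimensional phase speed and artificial diffusivity for first-order upwind and second-order central semi-discretizations of u_t + a u_x = 0; it is a minimal illustration of the paper's type of analysis, not its code.

    ```python
    import numpy as np

    # Fourier analysis of two semi-discretizations of u_t + a u_x = 0: split
    # each discrete derivative's symbol into an imaginary (advective) part,
    # giving the non-dimensional phase speed, and a real (diffusive) part,
    # giving the artificial diffusivity.
    dx = 1.0
    theta = np.linspace(1e-6, np.pi, 400)          # non-dimensional wavenumber k*dx

    sym_upwind = (1.0 - np.exp(-1j * theta)) / dx  # (u_i - u_{i-1}) / dx
    sym_central = 1j * np.sin(theta) / dx          # (u_{i+1} - u_{i-1}) / (2 dx)

    for name, sym in (("upwind", sym_upwind), ("central", sym_central)):
        k = theta / dx
        phase_speed = sym.imag / k         # c*/a; 1 would be exact propagation
        nu_art = sym.real / k**2           # artificial diffusivity per unit speed
        i = theta.searchsorted(np.pi / 2)  # report values at theta = pi/2
        print(f"{name:8s} c*/a = {phase_speed[i]:.3f}  nu_art/a = {nu_art[i]:.3f}")
    ```

    As theta tends to 0, the upwind artificial diffusivity tends to a*dx/2, matching the semi-discrete modified-equation result.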

  20. FDNS CFD Code Benchmark for RBCC Ejector Mode Operation

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Ruf, Joe

    1999-01-01

    Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.

  1. Predicting X-ray diffuse scattering from translation–libration–screw structural ensembles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Benschoten, Andrew H.; Afonine, Pavel V.; Terwilliger, Thomas C.

    2015-07-28

    A method of simulating X-ray diffuse scattering from multi-model PDB files is presented. Despite similar agreement with Bragg data, different translation–libration–screw refinement strategies produce unique diffuse intensity patterns. Identifying the intramolecular motions of proteins and nucleic acids is a major challenge in macromolecular X-ray crystallography. Because Bragg diffraction describes the average positional distribution of crystalline atoms with imperfect precision, the resulting electron density can be compatible with multiple models of motion. Diffuse X-ray scattering can reduce this degeneracy by reporting on correlated atomic displacements. Although recent technological advances are increasing the potential to accurately measure diffuse scattering, computational modeling and validation tools are still needed to quantify the agreement between experimental data and different parameterizations of crystalline disorder. A new tool, phenix.diffuse, addresses this need by employing Guinier’s equation to calculate diffuse scattering from Protein Data Bank (PDB)-formatted structural ensembles. As an example case, phenix.diffuse is applied to translation–libration–screw (TLS) refinement, which models rigid-body displacement for segments of the macromolecule. To enable the calculation of diffuse scattering from TLS-refined structures, phenix.tls-as-xyz builds multi-model PDB files that sample the underlying T, L and S tensors. In the glycerophosphodiesterase GpdQ, alternative TLS-group partitioning and different motional correlations between groups yield markedly dissimilar diffuse scattering maps with distinct implications for molecular mechanism and allostery. These methods demonstrate how, in principle, X-ray diffuse scattering could extend macromolecular structural refinement, validation and analysis.
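
    Guinier's equation itself is compact: the diffuse intensity is the variance of the structure factor over the ensemble. The sketch below, with invented toy coordinates and unit scattering factors, is only a schematic of that equation, not of phenix.diffuse's implementation:

        import numpy as np

        def structure_factor(coords, f, q):
            """F(q) = sum_j f_j * exp(i q . r_j) for one ensemble member."""
            return np.sum(f * np.exp(1j * (coords @ q)))

        def diffuse_intensity(ensemble, f, q):
            """Guinier: I_diffuse(q) = <|F|^2> - |<F>|^2 over the ensemble."""
            F = np.array([structure_factor(m, f, q) for m in ensemble])
            return np.mean(np.abs(F)**2) - np.abs(np.mean(F))**2

        rng = np.random.default_rng(0)
        mean_pos = rng.uniform(0.0, 10.0, size=(5, 3))    # 5 toy "atoms"
        ensemble = [mean_pos + 0.2 * rng.standard_normal((5, 3)) for _ in range(10)]
        print(diffuse_intensity(ensemble, np.ones(5), np.array([0.5, 0.0, 0.0])))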

  2. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    NASA Astrophysics Data System (ADS)

    Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Folini, D.; Popov, M. V.; Walder, R.; Viallet, M.

    2017-08-01

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ∼50 Myr to ∼4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  3. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baraffe, I.; Pratt, J.; Goffrey, T.

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ∼50 Myr to ∼4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  4. An audit of the nature and impact of clinical coding subjectivity variability and error in otolaryngology.

    PubMed

    Nouraei, S A R; Hudovsky, A; Virk, J S; Chatrath, P; Sandhu, G S

    2013-12-01

    To audit the accuracy of clinical coding in otolaryngology, assess the effectiveness of previously implemented interventions, and determine ways in which it can be further improved. Prospective clinician-auditor multi-disciplinary audit of clinical coding accuracy. Elective and emergency ENT admissions and day-case activity. Concordance between initial coding and the clinician-auditor multi-disciplinary team's (MDT) coding in respect of primary and secondary diagnoses and procedures, health resource groupings (HRGs) and tariffs. The audit of 3131 randomly selected otolaryngology patients between 2010 and 2012 resulted in 420 instances of change to the primary diagnosis (13%) and 417 changes to the primary procedure (13%). In 1420 cases (44%), there was at least one change to the initial coding, and 514 HRGs (16%) changed. There was an income variance of £343,169, or £109.46 per patient. The highest rates of HRG change were observed in head and neck surgery (in particular skull base surgery), laryngology (notably tracheostomy) and emergency admissions (especially epistaxis management). A randomly selected sample of 235 patients from the audit was subjected to a second audit by a second clinician-auditor multi-disciplinary team. There were 12 further HRG changes (5%), and at least one further coding change occurred in 57 patients (24%). These changes were significantly lower than those observed in the pre-audit sample, but were also significantly greater than zero. Asking surgeons to 'code in theatre' and applying these codes to activity without further quality assurance resulted in an HRG error rate of 45%. The full audit sample was regrouped under HRG version 3.5 and compared with a previous audit of 1250 patients performed between 2007 and 2008. This comparison showed a reduction in the baseline rate of HRG change from 16% during the first audit cycle to 9% in the current audit cycle (P < 0.001). Otolaryngology coding is complex and susceptible to subjectivity, variability and error. Coding variability can be improved, but not eliminated, through regular education supported by an audit programme. © 2013 John Wiley & Sons Ltd.

  5. The Impact of Satellite Time Group Delay and Inter-Frequency Differential Code Bias Corrections on Multi-GNSS Combined Positioning

    PubMed Central

    Ge, Yulong; Zhou, Feng; Sun, Baoqi; Wang, Shengli; Shi, Bo

    2017-01-01

    We present quad-constellation (GPS, GLONASS, BeiDou and Galileo) time group delay (TGD) and differential code bias (DCB) correction models to fully exploit the code observations of all four global navigation satellite systems (GNSSs) for navigation and positioning. The relationship between TGDs and DCBs for multi-GNSS is clearly established, and the equivalence of the TGD and DCB correction models is demonstrated both in theory and in practice. The TGD/DCB correction models have been extended to various standard point positioning (SPP) and precise point positioning (PPP) scenarios in a multi-GNSS and multi-frequency context. To evaluate the effectiveness and practicability of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both single-frequency GNSS ionosphere-corrected SPP and dual-frequency GNSS ionosphere-free SPP/PPP tests are carried out with quad-constellation signals. Furthermore, we investigate the influence of differential code biases on GNSS positioning estimates. The experiments show that multi-constellation SPP performs better after DCB/TGD correction: for GPS-only b1-based SPP, the positioning accuracies are improved by 25.0%, 30.6% and 26.7% in the N, E and U components, respectively, after differential code bias correction, while GPS/GLONASS/BDS b1-based SPP is improved by 16.1%, 26.1% and 9.9%. For GPS/BDS/Galileo third-frequency-based SPP, the positioning accuracies are improved by 2.0%, 2.0% and 0.4% in the N, E and U components, respectively, after Galileo satellite DCB correction. The accuracy of Galileo-only b1-based SPP is improved by about 48.6%, 34.7% and 40.6% with DCB correction in the N, E and U components, respectively. The estimates of multi-constellation PPP are subject to different degrees of influence. For multi-constellation SPP, the accuracy of single-frequency positioning is slightly better than that of dual-frequency combinations. Dual-frequency combinations are more sensitive to the differential code biases, especially the second- and third-frequency combination: for GPS/BDS SPP, accuracy improvements of 60.9%, 26.5% and 58.8% in the three coordinate components are achieved after DCB correction. For multi-constellation PPP, the convergence time can be reduced significantly with differential code bias correction, and the positioning accuracy is slightly better with TGD/DCB correction. PMID:28300787
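
    For the familiar GPS case, the broadcast clock refers to the L1/L2 ionosphere-free combination, so a single-frequency user must remove the TGD before forming positions; the sketch below shows only that textbook correction (IS-GPS-200 convention), not the paper's generalized multi-GNSS models:

        C = 299_792_458.0                    # speed of light, m/s
        GAMMA = (1575.42 / 1227.60) ** 2     # (f_L1 / f_L2)^2 for GPS

        def sv_clock_corrected(dt_sv, tgd, freq="L1"):
            """Satellite clock offset as seen by a single-frequency user."""
            if freq == "L1":
                return dt_sv - tgd           # L1-only user subtracts TGD
            if freq == "L2":
                return dt_sv - GAMMA * tgd   # L2-only user scales it by gamma
            return dt_sv                     # ionosphere-free user: no correction

        # Corrected pseudorange (all other error terms omitted):
        # rho = P1 + C * sv_clock_corrected(dt_sv, tgd, "L1")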

  6. The Impact of Satellite Time Group Delay and Inter-Frequency Differential Code Bias Corrections on Multi-GNSS Combined Positioning.

    PubMed

    Ge, Yulong; Zhou, Feng; Sun, Baoqi; Wang, Shengli; Shi, Bo

    2017-03-16

    We present quad-constellation (GPS, GLONASS, BeiDou and Galileo) time group delay (TGD) and differential code bias (DCB) correction models to fully exploit the code observations of all four global navigation satellite systems (GNSSs) for navigation and positioning. The relationship between TGDs and DCBs for multi-GNSS is clearly established, and the equivalence of the TGD and DCB correction models is demonstrated both in theory and in practice. The TGD/DCB correction models have been extended to various standard point positioning (SPP) and precise point positioning (PPP) scenarios in a multi-GNSS and multi-frequency context. To evaluate the effectiveness and practicability of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both single-frequency GNSS ionosphere-corrected SPP and dual-frequency GNSS ionosphere-free SPP/PPP tests are carried out with quad-constellation signals. Furthermore, we investigate the influence of differential code biases on GNSS positioning estimates. The experiments show that multi-constellation SPP performs better after DCB/TGD correction: for GPS-only b1-based SPP, the positioning accuracies are improved by 25.0%, 30.6% and 26.7% in the N, E and U components, respectively, after differential code bias correction, while GPS/GLONASS/BDS b1-based SPP is improved by 16.1%, 26.1% and 9.9%. For GPS/BDS/Galileo third-frequency-based SPP, the positioning accuracies are improved by 2.0%, 2.0% and 0.4% in the N, E and U components, respectively, after Galileo satellite DCB correction. The accuracy of Galileo-only b1-based SPP is improved by about 48.6%, 34.7% and 40.6% with DCB correction in the N, E and U components, respectively. The estimates of multi-constellation PPP are subject to different degrees of influence. For multi-constellation SPP, the accuracy of single-frequency positioning is slightly better than that of dual-frequency combinations. Dual-frequency combinations are more sensitive to the differential code biases, especially the second- and third-frequency combination: for GPS/BDS SPP, accuracy improvements of 60.9%, 26.5% and 58.8% in the three coordinate components are achieved after DCB correction. For multi-constellation PPP, the convergence time can be reduced significantly with differential code bias correction, and the positioning accuracy is slightly better with TGD/DCB correction.

  7. Verification of Three Dimensional Triangular Prismatic Discrete Ordinates Transport Code ENSEMBLE-TRIZ by Comparison with Monte Carlo Code GMVP

    NASA Astrophysics Data System (ADS)

    Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi

    2014-06-01

    This paper deals with the verification of the three-dimensional triangular prismatic discrete ordinates transport code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo code GMVP in a large fast breeder reactor. The reactor is a 750 MWe sodium-cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of the initial core and at the beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.

  8. Multi-modal neuroimaging in premanifest and early Huntington's disease: 18 month longitudinal data from the IMAGE-HD study.

    PubMed

    Domínguez D, Juan F; Egan, Gary F; Gray, Marcus A; Poudel, Govinda R; Churchyard, Andrew; Chua, Phyllis; Stout, Julie C; Georgiou-Karistianis, Nellie

    2013-01-01

    IMAGE-HD is an Australian-based multi-modal longitudinal magnetic resonance imaging (MRI) study in premanifest and early symptomatic Huntington's disease (pre-HD and symp-HD, respectively). In this investigation we sought to determine the sensitivity of imaging methods to detect macrostructural (volume) and microstructural (diffusivity) longitudinal change in HD. We used a 3T MRI scanner to acquire T1 and diffusion weighted images at baseline and 18 months in 31 pre-HD, 31 symp-HD and 29 controls. Volume was measured across the whole brain, and volume and diffusion measures were ascertained for caudate and putamen. We observed a range of significant volumetric and, for the first time, diffusion changes over 18 months in both pre-HD and symp-HD, relative to controls, detectable at the brain-wide level (volume change in grey and white matter) and in caudate and putamen (volume and diffusivity change). Importantly, longitudinal volume change in the caudate was the only measure that discriminated between groups across all stages of disease: far from diagnosis (>15 years), close to diagnosis (<15 years) and after diagnosis. Of the two diffusion metrics (mean diffusivity, MD; fractional anisotropy, FA), only longitudinal FA change was sensitive to group differences, but only after diagnosis. These findings further confirm caudate atrophy as one of the most sensitive and early biomarkers of neurodegeneration in HD. They also highlight that different tissue properties have varying schedules in their ability to discriminate between groups along disease progression and may therefore inform biomarker selection for future therapeutic interventions.

  9. Improved Convergence Rate of Multi-Group Scattering Moment Tallies for Monte Carlo Neutron Transport Codes

    NASA Astrophysics Data System (ADS)

    Nelson, Adam

    Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they are responsible for describing the change in direction and energy of neutrons. These matrices, however, are difficult to calculate correctly from the measured nuclear data with either deterministic or stochastic methods. Calculating these parameters with deterministic methods requires a set of assumptions that do not hold true in all conditions. These quantities can be calculated accurately with stochastic methods; however, doing so is computationally expensive due to the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. The improved method of tallying the scattering moment matrices is based on recognizing that all of the outgoing particle information is known a priori and can be taken advantage of to increase the tallying efficiency (and therefore reduce the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying every element of the scattering moment matrices with its share of data. In addition to reducing the uncertainty, this method allows for the use of a track-length estimation process, potentially offering even further improvement in tallying efficiency. Unfortunately, to produce the needed distributions, the probability functions themselves must undergo an integration over the outgoing energy and scattering angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and must therefore be performed in advance by a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than the currently used techniques. The improved method has been implemented in a code system containing a new pre-processor code, NDPP, and a Monte Carlo neutron transport code, OpenMC. The method is then tested in a pin-cell problem and in a larger problem designed to accentuate the importance of scattering moment matrices. These tests show that accuracy was retained while the figure of merit for generating scattering moment matrices and fission energy spectra was significantly improved.
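
    For contrast with the improved scheme, the conventional analog estimator the thesis sets out to beat can be written in a few lines: each sampled collision contributes its weight times P_l(mu) to a single matrix element, which is exactly why these tallies converge slowly. The event tuples below are invented for illustration:

        import numpy as np
        from numpy.polynomial.legendre import legval

        def tally_scatter_moments(events, n_groups, n_moments):
            """Analog tally of sigma_{l, g->g'} from (weight, g_in, g_out, mu) events."""
            tally = np.zeros((n_moments, n_groups, n_groups))
            for w, g_in, g_out, mu in events:
                for l in range(n_moments):
                    coeffs = np.zeros(l + 1)
                    coeffs[l] = 1.0
                    tally[l, g_in, g_out] += w * legval(mu, coeffs)  # w * P_l(mu)
            return tally

        events = [(1.0, 0, 1, 0.3), (1.0, 0, 0, -0.7), (0.5, 1, 1, 0.9)]
        print(tally_scatter_moments(events, n_groups=2, n_moments=2)[1])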

  10. Development of high-fidelity multiphysics system for light water reactor analysis

    NASA Astrophysics Data System (ADS)

    Magedanz, Jeffrey W.

    There has been a tendency in recent years toward greater heterogeneity in reactor cores, due to the use of mixed-oxide (MOX) fuel, burnable absorbers, and longer cycles with consequently higher fuel burnup. The resulting asymmetry of the neutron flux and energy spectrum between regions with different compositions causes a need to account for the directional dependence of the neutron flux, instead of the traditional diffusion approximation. Furthermore, the presence of both MOX and high-burnup fuel in the core increases the complexity of the heat conduction. The heat transfer properties of the fuel pellet change with irradiation, and the thermal and mechanical expansion of the pellet and cladding strongly affect the size of the gap between them, and its consequent thermal resistance. These operational tendencies require higher fidelity multi-physics modeling capabilities, and this need is addressed by the developments performed within this PhD research. The dissertation describes the development of a High-Fidelity Multi-Physics System for Light Water Reactor Analysis. It consists of three coupled codes -- CTF for Thermal Hydraulics, TORT-TD for Neutron Kinetics, and FRAPTRAN for Fuel Performance. It is meant to address these modeling challenges in three ways: (1) by resolving the state of the system at the level of each fuel pin, rather than homogenizing entire fuel assemblies, (2) by using the multi-group Discrete Ordinates method to account for the directional dependence of the neutron flux, and (3) by using a fuel-performance code, rather than a Thermal Hydraulics code's simplified fuel model, to account for the material behavior of the fuel and its feedback to the hydraulic and neutronic behavior of the system. While the first two are improvements, the third, the use of a fuel-performance code for feedback, constitutes an innovation in this PhD project. Also important to this work is the manner in which such coupling is written. While coupling involves combining codes into a single executable, they are usually still developed and maintained separately. It should thus be a design objective to minimize the changes to those codes, and keep the changes to each code free of dependence on the details of the other codes. This will ease the incorporation of new versions of the code into the coupling, as well as re-use of parts of the coupling to couple with different codes. In order to fulfill this objective, an interface for each code was created in the form of an object-oriented abstract data type. Object-oriented programming is an effective method for enforcing a separation between different parts of a program, and clarifying the communication between them. The interfaces enable the main program to control the codes in terms of high-level functionality. This differs from the established practice of a master/slave relationship, in which the slave code is incorporated into the master code as a set of subroutines. While this PhD research continues previous work with a coupling between CTF and TORT-TD, it makes two major original contributions: (1) using a fuel-performance code, instead of a thermal-hydraulics code's simplified built-in models, to model the feedback from the fuel rods, and (2) the design of an object-oriented interface as an innovative method to interact with a coupled code in a high-level, easily-understandable manner. 
The resulting code system will serve as a tool to study under what conditions, and to what extent, these higher-fidelity methods benefit reactor core analysis. (Abstract shortened by UMI.)
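
    The flavor of such an object-oriented interface can be sketched as follows; the class and method names are hypothetical, not the dissertation's actual API. The point is that the driver touches each code only through this small, code-agnostic surface:

        from abc import ABC, abstractmethod

        class PhysicsCode(ABC):
            """Hypothetical high-level wrapper around one coupled code."""

            @abstractmethod
            def initialize(self, case: str) -> None:
                """Read input and set up the initial state."""

            @abstractmethod
            def advance(self, dt: float) -> None:
                """Advance this physics by one coupled time step."""

            @abstractmethod
            def get_field(self, name: str):
                """Export a pin-resolved field, e.g. 'fuel_temperature'."""

            @abstractmethod
            def set_field(self, name: str, values) -> None:
                """Import feedback from another code, e.g. 'power_density'."""

        # A driver can then couple CTF, TORT-TD and FRAPTRAN uniformly:
        # for code in (thermal_hydraulics, neutron_kinetics, fuel_performance):
        #     code.advance(dt)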

  11. The Effect of Al2O3 Addition on the Thermal Diffusivity of Heat Activated Acrylic Resin.

    PubMed

    Atla, Jyothi; Manne, Prakash; Gopinadh, A; Sampath, Anche; Muvva, Suresh Babu; Kishore, Krishna; Sandeep, Chiramana; Chittamsetty, Harika

    2013-08-01

    This study investigated the effect of adding 5% to 20% by weight of aluminium oxide powder (Al2O3) on the thermal diffusivity of heat-polymerized acrylic resin. Twenty-five cylindrical test specimens with an embedded thermocouple were used to determine thermal diffusivity over a physiologic temperature range (0 to 70°C). The specimens were divided into five groups (5 specimens/group), coded A to E. Group A was the control group (unmodified acrylic resin specimens). The specimens of the remaining four groups were reinforced with 5%, 10%, 15%, and 20% Al2O3 by weight. Results were analysed using one-way analysis of variance (ANOVA). Test specimens in Group E showed the highest mean thermal diffusivity, 10.7 mm²/s, followed by the D (9.09 mm²/s), C (8.49 mm²/s), B (8.28 mm²/s) and A (6.48 mm²/s) groups, respectively. Thermal diffusivities of the reinforced acrylic resins were found to be significantly higher than that of the unmodified acrylic resin, and thermal diffusivity increased in proportion to the weight percentage of alumina filler. Al2O3 fillers thus have the potential to provide increased thermal diffusivity, and increasing the heat transfer characteristics of the acrylic resin base material could lead to greater patient satisfaction.

  12. Exploring the limits of the "SNB" multi-group diffusion nonlocal model

    NASA Astrophysics Data System (ADS)

    Brodrick, Jonathan; Ridgers, Christopher; Kingham, Robert

    2014-10-01

    A correct treatment of nonlocal transport in the presence of the steep temperature gradients found in laser and inertial fusion plasmas has long been highly desirable over the use of an ad-hoc flux limiter. Therefore, an implementation of the "SNB" nonlocal model (G. P. Schurtz, P. D. Nicolaï & M. Busquet, Phys. Plasmas 7, 4238 (2000)) has been benchmarked against a fully implicit kinetic code: IMPACT. A variety of scenarios have been investigated, including relaxation of temperature sinusoids and Gaussians in addition to continuous laser heating. The results highlight the effect of neglecting electron inertia (∂f1/∂t) and question the feasibility of a nonlocal model that does not continuously track the evolution of the distribution function. Deviations from the Spitzer electric fields used in the model across steep gradients are also investigated. Regimes of validity for such a model are identified and discussed, and possible improvements to the model are suggested.
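
    The ad-hoc alternative that motivates the comparison is easy to state. One common flux-limited form is q = q_SH / (1 + |q_SH| / (f * q_fs)); the sketch below uses a placeholder conductivity coefficient rather than the full Spitzer-Harm expression:

        import numpy as np

        def limited_heat_flux(Te, x, ne, f=0.1):
            """Flux-limited Spitzer-Harm-style heat flux on a 1D grid (illustrative)."""
            me, kB = 9.109e-31, 1.381e-23
            kappa = 1.0e-10 * Te**2.5            # placeholder conductivity, not Spitzer's
            q_sh = -kappa * np.gradient(Te, x)   # local conduction flux
            q_fs = ne * kB * Te * np.sqrt(kB * Te / me)   # free-streaming flux scale
            return q_sh / (1.0 + np.abs(q_sh) / (f * q_fs))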

  13. Multi-dimensional Core-Collapse Supernova Simulations with Neutrino Transport

    NASA Astrophysics Data System (ADS)

    Pan, Kuo-Chuan; Liebendörfer, Matthias; Hempel, Matthias; Thielemann, Friedrich-Karl

    We present multi-dimensional core-collapse supernova simulations using the Isotropic Diffusion Source Approximation (IDSA) for the neutrino transport and a modified potential for general relativity in two different supernova codes: FLASH and ELEPHANT. Due to the complexity of the core-collapse supernova explosion mechanism, simulations require not only high-performance computers and the exploitation of GPUs, but also sophisticated approximations to capture the essential microphysics. We demonstrate that the IDSA is an elegant and efficient neutrino radiation transfer scheme, which is portable to multiple hydrodynamics codes and fast enough to investigate long-term evolutions in two and three dimensions. Simulations with a 40 solar mass progenitor are presented in both FLASH (1D and 2D) and ELEPHANT (3D) as an extreme test condition. It is found that the black hole formation time is delayed in multiple dimensions and we argue that the strong standing accretion shock instability before black hole formation will lead to strong gravitational waves.

  14. Chemistry Resolved Kinetic Flow Modeling of TATB Based Explosives

    NASA Astrophysics Data System (ADS)

    Vitello, Peter; Fried, Lawrence; Howard, Mike; Levesque, George; Souers, Clark

    2011-06-01

    Detonation waves in insensitive, TATB-based explosives are believed to have multi-time-scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale, but significant slow late-time energy release is believed to occur due to diffusion-limited growth of carbon. On the intermediate time scale, the concentrations of product species likely change from being in equilibrium to being kinetic-rate controlled. We use the thermo-chemical code CHEETAH linked to ALE hydrodynamics codes to model detonations. We term our model chemistry-resolved kinetic flow, as CHEETAH tracks the time-dependent concentrations of individual species in the detonation wave and calculates EOS values based on those concentrations. A validation suite of model simulations compared to recent high-fidelity metal push experiments at ambient and cold temperatures has been developed. We present here a study of multi-time-scale kinetic rate effects for these experiments. Prepared by LLNL under Contract DE-AC52-07NA27344.

  15. Encoded physics knowledge in checking codes for nuclear cross section libraries at Los Alamos

    NASA Astrophysics Data System (ADS)

    Parsons, D. Kent

    2017-09-01

    Checking procedures for processed nuclear data at Los Alamos are described. Both continuous-energy and multi-group nuclear data are verified by locally developed checking codes, which use basic physics knowledge and common-sense rules. A list of nuclear data problems that have been identified with the help of these checking codes is also given.
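
    The "common-sense rules" such checking codes encode are simple positivity and conservation constraints; what follows is a generic illustration, not the Los Alamos codes themselves:

        import numpy as np

        def check_multigroup_xs(total, partials, tol=1e-6):
            """Flag negative cross sections and partials that fail to sum to total."""
            problems = [f"negative cross section in '{name}'"
                        for name, xs in partials.items() if np.any(xs < 0)]
            residual = total - sum(partials.values())
            if np.any(np.abs(residual) > tol * np.maximum(total, 1.0)):
                problems.append("partial cross sections do not sum to total")
            return problems

        partials = {"elastic": np.array([2.0, 1.5]), "capture": np.array([0.1, 0.4])}
        print(check_multigroup_xs(np.array([2.1, 1.9]), partials))   # -> []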

  16. Comparison of a 3-D multi-group SN particle transport code with Monte Carlo for intracavitary brachytherapy of the cervix uteri.

    PubMed

    Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas

    2009-12-03

    A patient dose distribution was calculated by a 3D multi-group SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi-group SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks-exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose-volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between the Attila and MCNPX dose calculations at the ICRU 38 points was within +/- 3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by no more than +/- 1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations.

  17. Multigroup computation of the temperature-dependent Resonance Scattering Model (RSM) and its implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghrayeb, S. Z.; Ouisloumen, M.; Ougouag, A. M.

    2012-07-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied. (authors)

  18. The Effect of Al2O3 Addition on the Thermal Diffusivity of Heat Activated Acrylic Resin

    PubMed Central

    Atla, Jyothi; Manne, Prakash; Gopinadh, A.; Sampath, Anche; Muvva, Suresh Babu; Kishore, Krishna; Sandeep, Chiramana; Chittamsetty, Harika

    2013-01-01

    Aim: This study aimed at investigating the effect of adding 5% to 20% by weight aluminium oxide powder (Al2O3) on the thermal diffusivity of heat-polymerized acrylic resin. Material and Methods: Twenty-five cylindrical test specimens with an embedded thermocouple were used to determine thermal diffusivity over a physiologic temperature range (0 to 70°C). The specimens were divided into five groups (5 specimens/group), which were coded A to E. Group A was the control group (unmodified acrylic resin specimens). The specimens of the remaining four groups were reinforced with 5%, 10%, 15%, and 20% Al2O3 by weight. Results were analysed by using one-way analysis of variance (ANOVA). Results: Test specimens which belonged to Group E showed the highest mean thermal diffusivity value of 10.7 mm²/s, followed by the D (9.09 mm²/s), C (8.49 mm²/s), B (8.28 mm²/s) and A (6.48 mm²/s) groups, respectively. Thermal diffusivities of the reinforced acrylic resins were found to be significantly higher than that of the unmodified acrylic resin. Thermal diffusivity was found to increase in proportion to the weight percentage of alumina filler. Conclusion: Al2O3 fillers have potential to provide increased thermal diffusivity. Increasing the heat transfer characteristics of the acrylic resin base material could lead to more patient satisfaction. PMID:24086917

  19. CRPropa 3.1—a low energy extension based on stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Merten, Lukas; Becker Tjus, Julia; Fichtner, Horst; Eichmann, Björn; Sigl, Günter

    2017-06-01

    The propagation of charged cosmic rays through the Galactic environment influences all aspects of their observation at Earth. The energy spectrum, composition and arrival directions are changed by deflections in magnetic fields and interactions with the interstellar medium. Today the transport is simulated with different methods, based either on the solution of a transport equation (multi-particle picture) or on the solution of an equation of motion (single-particle picture). We developed a new module for the publicly available propagation software CRPropa 3.1, in which we implemented an algorithm to solve the transport equation using stochastic differential equations. This technique allows us to use a diffusion tensor that is anisotropic with respect to an arbitrary magnetic background field. The source code of CRPropa is written in C++ with Python steering via SWIG, which makes it easy to use and computationally fast. In this paper, we present the new low-energy propagation code together with validation procedures developed to prove the accuracy of the new implementation. Furthermore, we show first examples of the cosmic ray density evolution, which depends strongly on the ratio of the parallel κ∥ and perpendicular κ⊥ diffusion coefficients. This dependence is systematically examined, as is the influence of the particle rigidity on the diffusion process.
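
    The core of the stochastic-differential-equation approach is an Euler-Maruyama update of pseudo-particles with different step sizes along and across the field. The sketch below assumes a uniform background field and pure diffusion; CRPropa's module additionally handles advection and curved field lines:

        import numpy as np

        def propagate(r0, b_hat, kappa_par, kappa_perp, dt, n_steps, rng):
            """Euler-Maruyama transport with an anisotropic diffusion tensor."""
            tmp = np.array([1.0, 0.0, 0.0])
            if abs(tmp @ b_hat) > 0.9:          # avoid a near-parallel helper vector
                tmp = np.array([0.0, 1.0, 0.0])
            e1 = np.cross(b_hat, tmp)
            e1 /= np.linalg.norm(e1)
            e2 = np.cross(b_hat, e1)            # (e1, e2, b_hat): orthonormal triad

            r = np.array(r0, dtype=float)
            for _ in range(n_steps):
                w = rng.standard_normal(3)      # independent Wiener increments
                r += (np.sqrt(2.0 * kappa_par * dt) * w[0] * b_hat
                      + np.sqrt(2.0 * kappa_perp * dt) * (w[1] * e1 + w[2] * e2))
            return r

        rng = np.random.default_rng(1)
        print(propagate(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                        kappa_par=1.0, kappa_perp=0.01, dt=1e-3, n_steps=1000, rng=rng))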

  20. FDNS CFD Code Benchmark for RBCC Ejector Mode Operation: Continuing Toward Dual Rocket Effects

    NASA Technical Reports Server (NTRS)

    West, Jeff; Ruf, Joseph H.; Turner, James E. (Technical Monitor)

    2000-01-01

    Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code [2] was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for the Diffusion and Afterburning (DAB) test conditions at the 200-psia thruster operation point. Results with and without downstream fuel injection are presented.

  1. Multi-level trellis coded modulation and multi-stage decoding

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu

    1990-01-01

    Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.

  2. Dynamical beam manipulation based on 2-bit digitally-controlled coding metasurface.

    PubMed

    Huang, Cheng; Sun, Bo; Pan, Wenbo; Cui, Jianhua; Wu, Xiaoyu; Luo, Xiangang

    2017-02-08

    Recently, a concept of digital metamaterials has been proposed to manipulate field distributions through proper spatial mixtures of digital metamaterial bits. Here, we present a design of a 2-bit digitally-controlled coding metasurface that can effectively modulate the scattered electromagnetic wave and realize different far-field beams. Each meta-atom of this metasurface integrates two pin diodes, and by tuning their operating states, the metasurface has four phase responses of 0, π/2, π, and 3π/2, corresponding to the four basic digital elements "00", "01", "10", and "11", respectively. By designing the coding sequence of the above digital element array, the reflected beam can be arbitrarily controlled. The proposed 2-bit digital metasurface is demonstrated to be capable of beam deflection, multi-beam generation and beam diffusion, and the dynamic switching between these different scattering patterns is accomplished by a programmable electric source.
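
    In the array-factor approximation, the far-field scattering pattern of such a coding layout is essentially the 2-D Fourier transform of the element phase distribution; the sketch below (which ignores the element pattern and mutual coupling) shows why a random 2-bit sequence diffuses the reflection:

        import numpy as np

        rng = np.random.default_rng(2)
        coding = rng.integers(0, 4, size=(16, 16))     # random 2-bit layout
        aperture = np.exp(1j * coding * np.pi / 2.0)   # {0,1,2,3} -> {0, pi/2, pi, 3pi/2}

        pattern = np.fft.fftshift(np.fft.fft2(aperture, s=(256, 256)))
        intensity_db = 20.0 * np.log10(np.abs(pattern) / np.abs(pattern).max() + 1e-12)
        # A random layout spreads energy over many directions (diffusion), whereas
        # a checkerboard of "00"/"10" elements would concentrate it into symmetric lobes.
        print(intensity_db.max(), intensity_db.mean())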

  3. Dynamical beam manipulation based on 2-bit digitally-controlled coding metasurface

    PubMed Central

    Huang, Cheng; Sun, Bo; Pan, Wenbo; Cui, Jianhua; Wu, Xiaoyu; Luo, Xiangang

    2017-01-01

    Recently, a concept of digital metamaterials has been proposed to manipulate field distributions through proper spatial mixtures of digital metamaterial bits. Here, we present a design of a 2-bit digitally-controlled coding metasurface that can effectively modulate the scattered electromagnetic wave and realize different far-field beams. Each meta-atom of this metasurface integrates two pin diodes, and by tuning their operating states, the metasurface has four phase responses of 0, π/2, π, and 3π/2, corresponding to the four basic digital elements “00”, “01”, “10”, and “11”, respectively. By designing the coding sequence of the above digital element array, the reflected beam can be arbitrarily controlled. The proposed 2-bit digital metasurface is demonstrated to be capable of beam deflection, multi-beam generation and beam diffusion, and the dynamic switching between these different scattering patterns is accomplished by a programmable electric source. PMID:28176870

  4. Use of the ETA-1 reactor for the validation of the multi-group APOLLO2-MORET 5 code and the Monte Carlo continuous energy MORET 5 code

    NASA Astrophysics Data System (ADS)

    Leclaire, N.; Cochet, B.; Le Dauphin, F. X.; Haeck, W.; Jacquet, O.

    2014-06-01

    The present paper aims at providing experimental validation for the use of the MORET 5 code for advanced reactor concepts involving thorium and heavy water. It therefore constitutes an opportunity to test and improve the thermal-scattering data of heavy water and also to test the recent implementation of probability tables in the MORET 5 code.

  5. Converting Multi-Shell and Diffusion Spectrum Imaging to High Angular Resolution Diffusion Imaging

    PubMed Central

    Yeh, Fang-Cheng; Verstynen, Timothy D.

    2016-01-01

    Multi-shell and diffusion spectrum imaging (DSI) are becoming increasingly popular methods of acquiring diffusion MRI data in a research context. However, single-shell acquisitions, such as diffusion tensor imaging (DTI) and high angular resolution diffusion imaging (HARDI), still remain the most common acquisition schemes in practice. Here we tested whether multi-shell and DSI data can be interpolated into corresponding HARDI data. We acquired multi-shell and DSI data on both a phantom and in vivo human tissue and converted them to HARDI. The correlation and difference between their diffusion signals, anisotropy values, diffusivity measurements, fiber orientations, connectivity matrices, and network measures were examined. Our analysis showed that the diffusion signals, anisotropy, diffusivity, and connectivity matrix of the HARDI converted from multi-shell and DSI were highly correlated with those of the HARDI acquired on the MR scanner, with correlation coefficients around 0.8-0.9. The average angular error between converted and original HARDI was 20.7° at voxels with signal-to-noise ratios greater than 5. The network topology measures differed by less than 2%, whereas the average nodal measures differed by around 4-7%. In general, multi-shell and DSI acquisitions can be converted to their corresponding single-shell HARDI with high fidelity. This supports multi-shell and DSI acquisitions over HARDI acquisition as the scheme of choice for diffusion acquisitions. PMID:27683539

  6. Entangled time in flocking: Multi-time-scale interaction reveals emergence of inherent noise

    PubMed Central

    Murakami, Hisashi

    2018-01-01

    Collective behaviors that seem highly ordered and result in collective alignment, such as schooling by fish and flocking by birds, arise from seamless shuffling (such as super-diffusion) and bustling inside groups (such as Lévy walks). However, such noisy behavior inside groups appears to preclude the collective behavior: intuitively, we expect that noisy behavior would lead to the group being destabilized and broken into small subgroups, and high alignment seems to preclude shuffling of neighbors. Although statistical modeling approaches with extrinsic noise, such as the maximum entropy approach, have provided some reasonable descriptions, they ignore the cognitive perspective of the individuals. In this paper, we try to explain how the group tendency, that is, high alignment, and highly noisy individual behavior can coexist in a single framework. The key aspect of our approach is a multi-time-scale interaction emerging from the existence of an interaction radius that reflects short-term and long-term predictions. This multi-time-scale interaction is a natural extension of the attraction and alignment concept in many flocking models. When we apply this method in a two-dimensional model, various flocking behaviors, such as swarming, milling, and schooling, emerge. The approach also explains the appearance of super-diffusion, the Lévy walk in groups, and local equilibria. At the end of this paper, we discuss future developments, including extending our model to three dimensions. PMID:29689074

  7. Flexible and polarization-controllable diffusion metasurface with optical transparency

    NASA Astrophysics Data System (ADS)

    Zhuang, Yaqiang; Wang, Guangming; Liang, Jiangang; Cai, Tong; Guo, Wenlong; Zhang, Qingfeng

    2017-11-01

    In this paper, a novel coding metasurface is proposed to realize polarization-controllable diffusion scattering. The anisotropic Jerusalem-cross unit cell is employed as the basic coding element due to its polarization-dependent phase response. An isotropic random coding sequence is first designed to obtain diffusion scattering, and the anisotropic random coding sequence is subsequently realized by adding different periodic coding sequences to the original isotropic one along different directions. For demonstration, we designed and fabricated a flexible polarization-controllable diffusion metasurface (PCDM) with both chessboard diffusion and hedge diffusion under different polarizations. The specular scattering reduction of the anisotropic metasurface is better than that of the isotropic one because the scattered energies are redirected away from the specular reflection direction. For potential applications, the flexible PCDM wrapped around a cylindrical structure is investigated and tested for polarization-controllable diffusion scattering. The numerical and experimental results coincide well, indicating anisotropic low scattering with comparable performance. This paper provides an alternative approach for designing high-performance, flexible, low-scattering platforms.
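
    The construction of the anisotropic layout can be mimicked with 1-bit sequences: modulo-2 addition of a periodic gradient sequence to a random diffusion sequence superposes their scattering behaviors for the polarization that sees the sum. The sizes and periods below are invented:

        import numpy as np

        rng = np.random.default_rng(3)
        random_code = rng.integers(0, 2, size=(16, 16))   # isotropic diffusion part

        stripes_x = (np.arange(16)[None, :] // 2) % 2     # periodic sequence along x
        code_pol_x = (random_code + stripes_x) % 2        # seen by x-polarized waves
        code_pol_y = random_code                          # seen by y-polarized waves
        # Element phase for a 1-bit surface is simply code * pi for each polarization.
        print(code_pol_x[:2], code_pol_y[:2])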

  8. Analysis of transient fission gas behaviour in oxide fuel using BISON and TRANSURANUS

    NASA Astrophysics Data System (ADS)

    Barani, T.; Bruschi, E.; Pizzocri, D.; Pastore, G.; Van Uffelen, P.; Williamson, R. L.; Luzzi, L.

    2017-04-01

    The modelling of fission gas behaviour is a crucial aspect of nuclear fuel performance analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. In particular, experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of the burst release process in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of conventional diffusion-based models to introduce the burst release effect. The concept and governing equations of the model are presented, and the sensitivity of results to the newly introduced parameters is evaluated through an analytic sensitivity analysis. The model is assessed for application to integral fuel rod analysis by implementation in two structurally different fuel performance codes: BISON (multi-dimensional finite element code) and TRANSURANUS (1.5D code). Model assessment is based on the analysis of 19 light water reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point out an improvement in both the quantitative predictions of integral fuel rod FGR and the qualitative representation of the FGR kinetics with the transient model relative to the canonical, purely diffusion-based models of the codes. The overall quantitative improvement of the integral FGR predictions in the two codes is comparable. Moreover, calculated radial profiles of xenon concentration after irradiation are investigated and compared to experimental data, illustrating the underlying representation of the physical mechanisms of burst release.
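
    In skeleton form, such an extension couples a diffusion-type release path with a threshold-triggered burst; the rates, threshold and burst fraction below are invented placeholders, and the real BISON/TRANSURANUS model is considerably more detailed:

        import numpy as np

        def fgr_step(c_grain, c_bound, released, dt, D, a, prod, T,
                     T_burst=1800.0, f_burst=0.3):
            """One step: Booth-like diffusion to grain boundaries plus burst venting."""
            # Booth-type fractional release from a spherical grain of radius a.
            to_boundary = c_grain * min(1.0, 6.0 * np.sqrt(D * dt / np.pi) / a)
            c_grain += prod * dt - to_boundary
            c_bound += to_boundary
            if T > T_burst:                  # micro-cracking vents grain-boundary gas
                released += f_burst * c_bound
                c_bound *= 1.0 - f_burst
            return c_grain, c_bound, released

        state = (1.0, 0.0, 0.0)              # grain gas, boundary gas, released gas
        state = fgr_step(*state, dt=1.0, D=1e-14, a=5e-6, prod=0.0, T=1900.0)
        print(state)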

  9. Analysis of transient fission gas behaviour in oxide fuel using BISON and TRANSURANUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barani, T.; Bruschi, E.; Pizzocri, D.

    The modelling of fission gas behaviour is a crucial aspect of nuclear fuel analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. Experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of burst release in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of diffusion-based models to allow for the burst release effect. The concept and governing equations of the model are presented, and the effect of the newly introduced parameters is evaluated through an analytic sensitivity analysis. Then, the model is assessed for application to integral fuel rod analysis. The approach that we take for model assessment involves implementation in two structurally different fuel performance codes, namely, BISON (multi-dimensional finite element code) and TRANSURANUS (1.5D semi-analytic code). The model is validated against 19 Light Water Reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point out an improvement in both the qualitative representation of the FGR kinetics and the quantitative predictions of integral fuel rod FGR, relative to the canonical, purely diffusion-based models, with both codes. The overall quantitative improvement of the FGR predictions in the two codes is comparable. Furthermore, calculated radial profiles of xenon concentration are investigated and compared to experimental data, demonstrating the representation of the underlying mechanisms of burst release by the new model.

  10. Thermodynamic Modelling of Phase Transformation in a Multi-Component System

    NASA Astrophysics Data System (ADS)

    Vala, J.

    2007-09-01

    Diffusion in multi-component alloys can be characterized by the vacancy mechanism for substitutional components, by the existence of sources and sinks for vacancies, and by the motion of atoms of interstitial components. The description of the diffusive and massive phase transformation of a multi-component system is based on Onsager's thermodynamic extremal principle; the finite thickness of the interface between the two phases is respected. The resulting system of partial differential equations of evolution with integral terms for the unknown mole fractions (and additional variables in the case of non-ideal sources and sinks for vacancies) can be analyzed using the method of lines and the finite difference technique (or, alternatively, the finite element one), together with semi-analytic and numerical integration formulae and a certain iteration procedure making use of the spectral properties of the linear operators. The original software code for the numerical evaluation of solutions of such systems, written in MATLAB, offers a chance to simulate various real processes of diffusional phase transformation. Some results for (nearly) steady-state real processes in substitutional alloys have already been published. The aim of this paper is to demonstrate that the same approach can handle both substitutional and interstitial components even in the case of a general system of evolution.
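
    A method-of-lines treatment of one such equation is easy to exhibit: semi-discretize the conservative diffusion term in space and hand the resulting ODE system to a stiff integrator. The single-component problem and diffusivity below are illustrative stand-ins for the paper's coupled system:

        import numpy as np
        from scipy.integrate import solve_ivp

        N, L = 100, 1e-6                       # grid points, domain length (m)
        z = np.linspace(0.0, L, N)
        dz = z[1] - z[0]

        def D(x):
            return 1e-16 * (1.0 + 4.0 * x)     # invented composition-dependent diffusivity

        def rhs(t, x):
            """d/dz( D(x) dx/dz ) with zero-flux boundaries, conservative form."""
            flux = D(0.5 * (x[1:] + x[:-1])) * np.diff(x) / dz
            dxdt = np.zeros_like(x)
            dxdt[1:-1] = np.diff(flux) / dz
            dxdt[0], dxdt[-1] = flux[0] / dz, -flux[-1] / dz
            return dxdt

        x0 = 0.5 * (1.0 + np.tanh((z - L / 2.0) / (L / 20.0)))  # initial step profile
        sol = solve_ivp(rhs, (0.0, 1e4), x0, method="BDF")      # stiff integrator
        print(sol.y[:, -1].mean())             # mean mole fraction is conserved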

  11. Efficient Multi-Stage Time Marching for Viscous Flows via Local Preconditioning

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Wood, William A.; vanLeer, Bram

    1999-01-01

    A new method has been developed to accelerate the convergence of explicit time-marching laminar Navier-Stokes codes through the combination of local preconditioning and multi-stage time-marching optimization. Local preconditioning is a technique that modifies the time-dependent equations so that all information moves or decays at nearly the same rate, thus relieving the stiffness of a system of equations. Multi-stage time marching can be optimized by modifying its coefficients to account for the presence of viscous terms, allowing larger time steps. We show it is possible to optimize the time-marching scheme for a wide range of cell Reynolds numbers for the scalar advection-diffusion equation, and local preconditioning allows this optimization to be applied to the Navier-Stokes equations. Convergence acceleration of the new method is demonstrated through numerical experiments with circular advection and laminar boundary-layer flow over a flat plate.
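
    The optimization target can be reproduced in a few lines: the amplification factor of an m-stage scheme evaluated on the Fourier symbol of the semi-discrete advection-diffusion operator. The stage coefficients below are the classical RK4 ones, whereas the paper tunes them for viscous footprints:

        import numpy as np

        def amplification(z, alphas):
            """G(z) of a low-storage multi-stage scheme: g <- 1 + alpha*z*g."""
            g = np.ones_like(z)
            for alpha in alphas:
                g = 1.0 + alpha * z * g
            return g

        cfl, r = 0.5, 0.25                    # Courant and diffusion numbers
        kh = np.linspace(0.0, np.pi, 400)
        z = -1j * cfl * np.sin(kh) - 4.0 * r * np.sin(kh / 2.0) ** 2
        G = amplification(z, alphas=[0.25, 1.0 / 3.0, 0.5, 1.0])   # classical RK4
        print("stable for all modes:", bool(np.all(np.abs(G) <= 1.0 + 1e-12)))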

  12. Multi-stage decoding for multi-level block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1991-01-01

    In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. The error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
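
    The mechanics of multi-stage decoding show up even in a toy two-level 4-PAM scheme with set partitioning, where bit b0 selects a subset with doubled intra-set distance and carries a (3,1) repetition code, and b1 is uncoded and decoded only after b0 is fixed. This example is ours, not one of the paper's constructions:

        import numpy as np

        SUBSETS = {0: np.array([-3.0, 1.0]), 1: np.array([-1.0, 3.0])}

        def modulate(b0_info, b1_bits):
            b0 = np.repeat(b0_info, 3)                     # repetition-encode level 0
            return np.array([SUBSETS[c0][c1] for c0, c1 in zip(b0, b1_bits)])

        def decode(r):
            # Stage 1: decode b0 from per-symbol subset metrics summed over the block.
            m = {c: np.min((r[:, None] - SUBSETS[c]) ** 2, axis=1) for c in (0, 1)}
            b0_hat = np.array([int(m[1][i:i + 3].sum() < m[0][i:i + 3].sum())
                               for i in range(0, len(r), 3)])
            # Stage 2: with b0 fixed, slice each symbol inside its chosen subset.
            b1_hat = np.array([int(np.argmin((ri - SUBSETS[c]) ** 2))
                               for ri, c in zip(r, np.repeat(b0_hat, 3))])
            return b0_hat, b1_hat

        rng = np.random.default_rng(4)
        b1 = rng.integers(0, 2, 3)
        rx = modulate(np.array([1]), b1) + 0.5 * rng.standard_normal(3)
        print(decode(rx))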

  13. Multi-stage decoding of multi-level modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.

    1991-01-01

    Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum soft-decision decoding of the code is very small: only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10^-6.

  14. Multi-Material ALE with AMR for Modeling Hot Plasmas and Cold Fragmenting Materials

    NASA Astrophysics Data System (ADS)

    Koniges, Alice; Masters, Nathan; Fisher, Aaron; Eder, David; Liu, Wangyi; Anderson, Robert; Benson, David; Bertozzi, Andrea

    2015-02-01

    We have developed a new 3D multi-physics multi-material code, ALE-AMR, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR) to connect the continuum to the microstructural regimes. The code is unique in its ability to model hot radiating plasmas and cold fragmenting solids. New numerical techniques were developed for many of the physics packages to work efficiently on a dynamically moving and adapting mesh. We use interface reconstruction based on volume fractions of the material components within mixed zones and reconstruct interfaces as needed. This interface reconstruction model is also used for void coalescence and fragmentation. A flexible strength/failure framework allows for pluggable material models, which may require material history arrays to determine the level of accumulated damage or the evolving yield stress in J2 plasticity models. For some applications, laser rays are propagated through a virtual composite mesh consisting of the finest resolution representation of the modeled space. A new second-order accurate diffusion solver has been implemented for the thermal conduction and radiation transport packages. One application area is the modeling of laser/target effects including debris/shrapnel generation. Other application areas include warm dense matter, EUV lithography, and material wall interactions for fusion devices.

  15. Modifying scoping codes to accurately calculate TMI-cores with lifetimes greater than 500 effective full-power days

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, D.; Levine, S.L.; Luoma, J.

    1992-01-01

    The Three Mile Island unit 1 core reloads have been designed using fast but accurate scoping codes, PSUI-LEOPARD and ADMARC. PSUI-LEOPARD has been normalized to EPRI-CPM2 results and used to calculate the two-group constants, whereas ADMARC is a modern two-dimensional, two-group diffusion theory nodal code. Problems in accuracy were encountered for cycles 8 and higher as the core lifetime was increased beyond 500 effective full-power days. This is because the more heavily loaded cores in both U-235 and B-10 have harder neutron spectra, which produces a change in the transport effect in the baffle reflector region, and the burnable poison (BP) simulations were not accurate enough for the cores containing the increased amount of B-10 required in the BP rods. In the authors' study, a technique has been developed to take into account the change in the transport effect in the baffle region by modifying the fast neutron diffusion coefficient as a function of cycle length and core exposure or burnup. A more accurate BP simulation method is also developed, using integral transport theory and CPM2 data, to calculate the BP contribution to the equivalent fuel assembly (supercell) two-group constants. The net result is that the accuracy of the scoping codes is as good as that produced by CASMO/SIMULATE or CPM2/SIMULATE when comparing with measured data.

  16. Quantitative diffusion MRI using reduced field-of-view and multi-shot acquisition techniques: Validation in phantoms and prostate imaging.

    PubMed

    Zhang, Yuxin; Holmes, James; Rabanillo, Iñaki; Guidon, Arnaud; Wells, Shane; Hernando, Diego

    2018-09-01

    To evaluate the reproducibility of quantitative diffusion measurements obtained with reduced field-of-view (rFOV) and multi-shot EPI (msEPI) acquisitions, using single-shot EPI (ssEPI) as a reference. Diffusion phantom experiments, and prostate diffusion-weighted imaging in healthy volunteers and patients with known or suspected prostate cancer, were performed across the three different sequences. Quantitative diffusion measurements of the apparent diffusion coefficient (ADC), and diffusion kurtosis parameters (healthy volunteers), were obtained and compared across diffusion sequences (rFOV, msEPI, and ssEPI). Other possible confounding factors, such as b-value combinations and acquisition parameters, were also investigated. Both msEPI and rFOV showed reproducible quantitative diffusion measurements relative to ssEPI; no significant difference in ADC was observed across pulse sequences in the standard diffusion phantom (p = 0.156), healthy volunteers (p ≥ 0.12) or patients (p ≥ 0.26). The ADC values within the non-cancerous central gland and peripheral zone of patients were 1.29 ± 0.17 × 10⁻³ mm²/s and 1.74 ± 0.23 × 10⁻³ mm²/s, respectively. However, differences in quantitative diffusion parameters were observed across different numbers of averages for rFOV, and across b-value groups and diffusion models for all three sequences. Both rFOV and msEPI have the potential to provide high image quality with reproducible quantitative diffusion measurements in prostate diffusion MRI.
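
    The quantitative metrics compared in this record follow from simple signal models. A minimal sketch (values and the names `adc_two_point` and `dki_fit` are illustrative, not from the paper) of how the ADC is recovered from a mono-exponential fit and how the kurtosis model extends it:

```python
import numpy as np

# Mono-exponential model: S(b) = S0 * exp(-b * ADC). With two b-values the
# apparent diffusion coefficient follows directly from the signal ratio.
def adc_two_point(s_low, s_high, b_low, b_high):
    return np.log(s_low / s_high) / (b_high - b_low)

adc_true = 1.74e-3                      # mm^2/s, peripheral-zone-like value
b = np.array([0.0, 800.0])              # s/mm^2
s = 100.0 * np.exp(-b * adc_true)
print(adc_two_point(s[0], s[1], b[0], b[1]))    # ~1.74e-3

# Kurtosis model: ln S(b) = ln S0 - b*D + (1/6) * b^2 * D^2 * K.
# A quadratic fit of ln S against b recovers both D and K.
def dki_fit(bvals, signals):
    c2, c1, _ = np.polyfit(bvals, np.log(signals), 2)
    d = -c1
    return d, 6.0 * c2 / d**2

bvals = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])
sig = np.exp(-bvals * 1.2e-3 + (bvals * 1.2e-3) ** 2 * 0.9 / 6.0)
print(dki_fit(bvals, sig))              # ~(1.2e-3, 0.9)
```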

  17. Nuclear data uncertainty propagation by the XSUSA method in the HELIOS2 lattice code

    NASA Astrophysics Data System (ADS)

    Wemple, Charles; Zwermann, Winfried

    2017-09-01

    Uncertainty quantification has been extensively applied to nuclear criticality analyses for many years and has recently begun to be applied to depletion calculations. However, regulatory bodies worldwide are trending toward requiring such analyses for reactor fuel cycle calculations, which also requires uncertainty propagation for isotopics and nuclear reaction rates. XSUSA is a proven methodology for cross section uncertainty propagation based on random sampling of the nuclear data according to covariance data in multi-group representation; HELIOS2 is a lattice code widely used for commercial and research reactor fuel cycle calculations. This work describes a technique to automatically propagate the nuclear data uncertainties via the XSUSA approach through fuel lattice calculations in HELIOS2. Application of the XSUSA methodology in HELIOS2 presented some unusual challenges because of the highly-processed multi-group cross section data used in commercial lattice codes. Currently, uncertainties based on the SCALE 6.1 covariance data file are being used, but the implementation can be adapted to other covariance data in multi-group structure. Pin-cell and assembly depletion calculations, based on models described in the UAM-LWR Phase I and II benchmarks, are performed and uncertainties in multiplication factor, reaction rates, isotope concentrations, and delayed-neutron data are calculated. With this extension, it will be possible for HELIOS2 users to propagate nuclear data uncertainties directly from the microscopic cross sections to subsequent core simulations.
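
    The XSUSA approach itself is straightforward to sketch: draw correlated random perturbations of the multi-group data from a covariance matrix and push each sample through the downstream calculation. The toy below substitutes a trivial two-group k-infinity formula for the HELIOS2 lattice solve, and the covariance numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 2-group data: best-estimate absorption cross sections and a
# relative covariance matrix (numbers invented, not from SCALE 6.1).
sig_a_mean = np.array([0.010, 0.100])         # 1/cm
nu_sig_f = np.array([0.008, 0.150])           # 1/cm, held fixed here
rel_cov = np.array([[0.02**2, 0.5 * 0.02 * 0.04],
                    [0.5 * 0.02 * 0.04, 0.04**2]])
cov = rel_cov * np.outer(sig_a_mean, sig_a_mean)

def k_inf(sig_a, flux=np.array([0.6, 0.4])):
    # Stand-in for the lattice calculation: two-group k-infinity with a
    # fixed flux spectrum (a real code would recompute the flux too)
    return (nu_sig_f @ flux) / (sig_a @ flux)

# XSUSA-style propagation: random cross-section sets -> output distribution
samples = rng.multivariate_normal(sig_a_mean, cov, size=1000)
k = np.array([k_inf(s) for s in samples])
print(f"k_inf = {k.mean():.5f} +/- {k.std():.5f}")
```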

  18. Propel: A Discontinuous-Galerkin Finite Element Code for Solving the Reacting Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Johnson, Ryan; Kercher, Andrew; Schwer, Douglas; Corrigan, Andrew; Kailasanath, Kazhikathra

    2017-11-01

    This presentation focuses on the development of a Discontinuous Galerkin (DG) method for application to chemically reacting flows. The in-house code, called Propel, was developed by the Laboratory of Computational Physics and Fluid Dynamics at the Naval Research Laboratory. It was designed specifically for developing advanced multi-dimensional algorithms to run efficiently on new and innovative architectures such as GPUs. For these results, Propel solves for convection and diffusion simultaneously with detailed transport and thermodynamics. Chemistry is currently solved in a time-split approach using Strang-splitting with finite element DG time integration of chemical source terms. Results presented here show canonical unsteady reacting flow cases, such as co-flow and splitter plate, and we report performance for higher order DG on CPU and GPUs.
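
    The Strang-splitting step mentioned above is easy to illustrate on a scalar problem: a half step of chemistry, a full step of diffusion, then another half step of chemistry gives second-order accuracy in time. This is a minimal sketch with invented coefficients, not Propel's DG discretization:

```python
import numpy as np

# 1D reaction-diffusion toy, u_t = D*u_xx + R(u), advanced by Strang
# splitting: half-step reaction, full diffusion step, half-step reaction.
nx, dx, dt, D = 100, 0.01, 1.0e-5, 1.0
x = np.arange(nx) * dx
u = np.exp(-((x - 0.5) / 0.05) ** 2)           # initial bump

def react(u, dt, k=100.0):
    # First-order decay "chemistry", integrated exactly over dt
    return u * np.exp(-k * dt)

def diffuse(u, dt):
    # Explicit central differences with zero-flux (Neumann) boundaries;
    # stable since D*dt/dx^2 = 0.1 < 0.5
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]
    return u + dt * D * lap

for _ in range(100):
    u = react(u, dt / 2)
    u = diffuse(u, dt)
    u = react(u, dt / 2)
print(u.max())
```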

  19. Multi-stage decoding for multi-level block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao

    1991-01-01

    Various types of multi-stage decoding for multi-level block modulation codes are discussed, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum-likelihood or bounded-distance. The error performance of the codes is analyzed for a memoryless additive channel under various types of multi-stage decoding, and upper bounds on the probability of incorrect decoding are derived. It was found that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. It was also found that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and low decoding complexity.
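
    A minimal sketch of the multi-stage idea, assuming a toy two-level 4-PAM code (a repetition code at level 1 and uncoded bits at level 2; these are not the codes analyzed in the paper): stage 1 decodes the level-1 code from soft subset metrics, and stage 2 slices within the chosen subsets.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-level 4-PAM modulation code (toy): set partitioning puts b1=0 on
# subset {-3,+1} and b1=1 on {-1,+3}; b2 selects the point within a subset.
SUBSETS = {0: np.array([-3.0, 1.0]), 1: np.array([-1.0, 3.0])}

def encode(b1, b2):
    # b1: one info bit, repetition-coded over 3 symbols; b2: 3 uncoded bits
    return np.array([SUBSETS[b1][b2[i]] for i in range(3)])

def decode_multistage(y):
    # Stage 1: decode the level-1 repetition code with a soft subset metric
    metric = {a: sum(np.min((yi - SUBSETS[a]) ** 2) for yi in y)
              for a in (0, 1)}
    b1 = min(metric, key=metric.get)
    # Stage 2: given b1, slice each symbol to the nearest subset point
    b2 = np.array([int(np.argmin((yi - SUBSETS[b1]) ** 2)) for yi in y])
    return b1, b2

b1, b2 = 1, np.array([0, 1, 1])
y = encode(b1, b2) + 0.5 * rng.standard_normal(3)   # AWGN channel
print(decode_multistage(y))                          # usually (1, [0, 1, 1])
```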

  20. Scattering of Gaussian Beams by Disordered Particulate Media

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.

    2016-01-01

    A frequently observed characteristic of electromagnetic scattering by a disordered particulate medium is the absence of pronounced speckles in angular patterns of the scattered light. It is known that such diffuse speckle-free scattering patterns can be caused by averaging over randomly changing particle positions and/or over a finite spectral range. To get further insight into the possible physical causes of the absence of speckles, we use the numerically exact superposition T-matrix solver of the Maxwell equations and analyze the scattering of plane-wave and Gaussian beams by representative multi-sphere groups. We show that phase and amplitude variations across an incident Gaussian beam do not serve to extinguish the pronounced speckle pattern typical of plane-wave illumination of a fixed multi-particle group. Averaging over random particle positions and/or over a finite spectral range is still required to generate the classical diffuse speckle-free regime.

  1. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding when solving highly non-linear models at sufficient spatial and temporal resolution. Geological heterogeneity and uncertainties further increase the modeling challenges. Two-phase flow simulations in heterogeneous media usually require much longer computational times than those in homogeneous media. Uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with thousands of processors have become available to the scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolution within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively in general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million-cell grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolution to simulate the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurements confirmed that both simulators exhibit excellent scalability, showing almost linear speedup with the number of processors up to over ten thousand cores. Generally, this allows us to perform coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million-cell grids in a practical time (e.g., less than a second per time step).
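
    The scalability claims reduce to standard strong-scaling bookkeeping. A minimal sketch with invented timings (the paper reports the measurements, not these numbers):

```python
# Strong-scaling bookkeeping of the kind used to report these results:
# speedup S(p) = T(1)/T(p), parallel efficiency E(p) = S(p)/p.
# The per-time-step timings below are invented for illustration.
timings = {1: 10000.0, 128: 82.0, 1024: 11.0, 10240: 1.3}  # seconds/step

t1 = timings[1]
for p, tp in sorted(timings.items()):
    speedup = t1 / tp
    print(f"p = {p:6d}  speedup = {speedup:8.1f}  efficiency = {speedup/p:5.2f}")
```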

  2. Validation of a Three-Dimensional Ablation and Thermal Response Simulation Code

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq; Milos, Frank S.; Gokcen, Tahir

    2010-01-01

    The 3dFIAT code simulates pyrolysis, ablation, and shape change of thermal protection materials and systems in three dimensions. The governing equations, which include energy conservation, a three-component decomposition model, and a surface energy balance, are solved with a moving grid system to simulate the shape change due to surface recession. This work is the first part of a code validation study for new capabilities that were added to 3dFIAT. These expanded capabilities include a multi-block moving grid system and an orthotropic thermal conductivity model. This paper focuses on conditions with minimal shape change, in which fluid/solid coupling is not necessary. Two groups of test cases of 3dFIAT analyses of Phenolic Impregnated Carbon Ablator in an arc-jet are presented. In the first group, axisymmetric iso-q shaped models are studied to check the accuracy of the three-dimensional multi-block grid system. In the second group, similar models with various through-the-thickness conductivity directions are examined. In this group, the material thermal response is three-dimensional because of the carbon fiber orientation. Predictions from 3dFIAT are presented and compared with arc-jet test data. The 3dFIAT predictions agree very well with thermocouple data for both groups of test cases.

  3. Development of a Renormalization Group Approach to Multi-Scale Plasma Physics Computation

    DTIC Science & Technology

    2012-03-28

    Final Technical Report (STTR Phase II), 29-12-2008 to 16-95-2011: Development of a Renormalization Group Approach to Multi-Scale Plasma Physics Computation. Only the standard report documentation page is available for this record; no abstract was recovered.

  4. 3D radiation belt diffusion model results using new empirical models of whistler chorus and hiss

    NASA Astrophysics Data System (ADS)

    Cunningham, G.; Chen, Y.; Henderson, M. G.; Reeves, G. D.; Tu, W.

    2012-12-01

    3D diffusion codes model the energization, radial transport, and pitch angle scattering due to wave-particle interactions. Diffusion codes are powerful but are limited by the lack of knowledge of the spatial and temporal distribution of the waves that drive the interactions for a specific event. We present results from the 3D DREAM model using diffusion coefficients driven by new, activity-dependent, statistical models of chorus and hiss waves. Most 3D codes parameterize the diffusion coefficients or wave amplitudes as functions of magnetic activity indices like Kp, AE, or Dst. These functional representations produce the average value of the wave intensities for a given level of magnetic activity; however, the variability of the wave population at a given activity level is lost with such a representation. Our 3D code makes use of the full sample distributions contained in a set of empirical wave databases (one database for each wave type, including plasmaspheric hiss and lower- and upper-band chorus) that were recently produced by our team using CRRES and THEMIS observations. The wave databases store the full probability distribution of observed wave intensity binned by AE, MLT, MLAT and L*. In this presentation, we show results that make use of the wave intensity sample probability distributions for lower-band and upper-band chorus by sampling the distributions stochastically during a representative CRRES-era storm. The sampling of the wave intensity probability distributions produces a collection of possible evolutions of the phase space density, which quantifies the uncertainty in the model predictions caused by the uncertainty of the chorus wave amplitudes for a specific event. A significant issue is the determination of an appropriate model for the spatio-temporal correlations of the wave intensities, since the diffusion coefficients are computed as spatio-temporal averages of the waves over MLT, MLAT and L*. The spatio-temporal correlations cannot be inferred from the wave databases. In this study we use a temporal correlation of ~1 hour for the sampled wave intensities, informed by the observed autocorrelation of the AE index, a spatial correlation length of ~100 km in the two directions perpendicular to the magnetic field, and a spatial correlation length of 5000 km in the direction parallel to the magnetic field, following the work of Santolik et al. (2003), who used multi-spacecraft measurements from Cluster to quantify the correlation length scales for equatorial chorus. We find that, despite the small correlation length scale for chorus, there remains significant variability in the model outcomes driven by variability in the chorus wave intensities.
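
    The stochastic sampling described here can be sketched with a Gaussian-copula AR(1) process: a latent Gaussian series with the desired ~1 hour autocorrelation is mapped, percentile by percentile, onto an empirical intensity distribution. Everything below (the lognormal stand-in for the wave database, the 5-minute cadence) is illustrative, not the DREAM implementation:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Stand-in for one (AE, MLT, MLAT, L*) bin of the wave database: a
# lognormal sample of chorus intensities (invented, units pT^2)
db_samples = np.sort(rng.lognormal(mean=1.0, sigma=1.5, size=5000))

# Latent AR(1) Gaussian series with ~1 hour autocorrelation time
dt_min, tau_min, n_steps = 5.0, 60.0, 288      # 5-min steps over one day
phi = np.exp(-dt_min / tau_min)
z = np.empty(n_steps)
z[0] = rng.standard_normal()
for i in range(1, n_steps):
    z[i] = phi * z[i - 1] + sqrt(1.0 - phi**2) * rng.standard_normal()

# Gaussian copula: map latent percentiles onto the empirical distribution,
# giving a series with the database marginal and the chosen correlation
u = np.array([0.5 * (1.0 + erf(zi / sqrt(2.0))) for zi in z])
intensity = np.quantile(db_samples, u)
print(intensity[:5])
```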

  5. On the Geometrical Optics Approach in the Theory of Freely-Localized Microwave Gas Breakdown

    NASA Astrophysics Data System (ADS)

    Shapiro, Michael; Schaub, Samuel; Hummelt, Jason; Temkin, Richard; Semenov, Vladimir

    2015-11-01

    Large filamentary arrays of high-pressure gas microwave breakdown have been studied experimentally at MIT using a 110 GHz, 1.5 MW pulsed gyrotron. The experiments have been modeled by other groups using numerical codes. The plasma density distribution in the filaments can also be calculated analytically using the geometrical optics approach, neglecting plasma diffusion. The field outside the filament is the solution of an inverse electromagnetic problem. Solutions are found for cylindrical and spherical filaments and for multi-layered planar filaments with a finite plasma density at the boundaries. We present new results of this theory showing a variety of filaments with complex shapes. The solutions for the plasma density distribution are found with a zero plasma density at the boundary of the filament. Therefore, to solve the inverse problem within the geometrical optics approximation, it can be assumed that there is no reflection from the filament. The results of this research are useful for modeling future MIT experiments.

  6. Incorporating Non-Linear Sorption into High Fidelity Subsurface Reactive Transport Models

    NASA Astrophysics Data System (ADS)

    Matott, L. S.; Rabideau, A. J.; Allen-King, R. M.

    2014-12-01

    A variety of studies, including multiple NRC (National Research Council) reports, have stressed the need for simulation models that can provide realistic predictions of contaminant behavior during the groundwater remediation process, most recently highlighting the specific technical challenges of "back diffusion and desorption in plume models". For a typically-sized remediation site, a minimum of about 70 million grid cells are required to achieve desired cm-level thickness among low-permeability lenses responsible for driving the back-diffusion phenomena. Such discretization is nearly three orders of magnitude more than is typically seen in modeling practice using public domain codes like RT3D (Reactive Transport in Three Dimensions). Consequently, various extensions have been made to the RT3D code to support efficient modeling of recently proposed dual-mode non-linear sorption processes (e.g. Polanyi with linear partitioning) at high-fidelity scales of grid resolution. These extensions have facilitated development of exploratory models in which contaminants are introduced into an aquifer via an extended multi-decade "release period" and allowed to migrate under natural conditions for centuries. These realistic simulations of contaminant loading and migration provide high fidelity representation of the underlying diffusion and sorption processes that control remediation. Coupling such models with decision support processes is expected to facilitate improved long-term management of complex remediation sites that have proven intractable to conventional remediation strategies.

  7. CRPropa 3.1—a low energy extension based on stochastic differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merten, Lukas; Tjus, Julia Becker; Eichmann, Björn

    The propagation of charged cosmic rays through the Galactic environment influences all aspects of the observation at Earth. Energy spectrum, composition and arrival directions are changed due to deflections in magnetic fields and interactions with the interstellar medium. Today the transport is simulated with different simulation methods, based either on the solution of a transport equation (multi-particle picture) or on the solution of an equation of motion (single-particle picture). We developed a new module for the publicly available propagation software CRPropa 3.1, in which we implemented an algorithm to solve the transport equation using stochastic differential equations. This technique allows us to use a diffusion tensor which is anisotropic with respect to an arbitrary magnetic background field. The source code of CRPropa is written in C++ with Python steering via SWIG, which makes it easy to use and computationally fast. In this paper, we present the new low-energy propagation code together with validation procedures that were developed to prove the accuracy of the new implementation. Furthermore, we show first examples of the cosmic ray density evolution, which depends strongly on the ratio of the parallel κ∥ and perpendicular κ⊥ diffusion coefficients. This dependency is systematically examined, as is the influence of the particle rigidity on the diffusion process.
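
    The single-particle SDE picture is compact enough to sketch directly: in a field-aligned coordinate system the Euler-Maruyama update is dx_i = sqrt(2 kappa_i) dW_i, with different parallel and perpendicular diffusion coefficients. This toy omits CRPropa's field geometry, advection, and drift terms, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Pseudo-particle transport as an SDE (Euler-Maruyama): with the background
# field along z, dx_i = sqrt(2 * kappa_i) * dW_i, using a diagonal
# anisotropic diffusion tensor diag(kappa_perp, kappa_perp, kappa_par).
kappa_par, kappa_perp = 1.0e2, 1.0e0     # arbitrary units
kappa = np.array([kappa_perp, kappa_perp, kappa_par])
dt, n_steps, n_particles = 0.01, 1000, 2000

x = np.zeros((n_particles, 3))
for _ in range(n_steps):
    dw = rng.standard_normal((n_particles, 3)) * np.sqrt(dt)
    x += np.sqrt(2.0 * kappa) * dw

# Variances should grow as 2*kappa_i*t in each direction
t = dt * n_steps
print(x.var(axis=0), 2.0 * kappa * t)
```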

  8. Supernova Light Curves and Spectra from Two Different Codes: Supernu and Phoenix

    NASA Astrophysics Data System (ADS)

    Van Rossum, Daniel R; Wollaeger, Ryan T

    2014-08-01

    The observed similarities between light curve shapes from Type Ia supernovae, and in particular the correlation of light curve shape and brightness, have been actively studied for more than two decades. In recent years, hydrodynamic simulations of white dwarf explosions have advanced greatly, and multiple mechanisms that could potentially produce Type Ia supernovae have been explored in detail. The question of which of the proposed mechanisms is (or are) possibly realized in nature remains challenging to answer, but detailed synthetic light curves and spectra from explosion simulations are very helpful and important guidelines towards answering this question. We present results from a newly developed radiation transport code, Supernu. Supernu solves the supernova radiation transfer problem using a novel technique based on a hybrid between Implicit Monte Carlo and Discrete Diffusion Monte Carlo. This technique enhances the efficiency with respect to traditional Implicit Monte Carlo codes and thus lends itself perfectly to multi-dimensional simulations. We show direct comparisons of light curves and spectra from Type Ia simulations with Supernu versus the legacy Phoenix code.

  9. Optimal multi-community network modularity for information diffusion

    NASA Astrophysics Data System (ADS)

    Wu, Jiaocan; Du, Ruping; Zheng, Yingying; Liu, Dong

    2016-02-01

    Recent studies demonstrate that community structure plays an important role in information spreading. In this paper, we investigate the impact of multi-community structure on information diffusion with the linear threshold model. We utilize an extended GN network that contains four communities and analyze the dynamic behavior of information that spreads on it. We discover the optimal multi-community network modularity for information diffusion based on social reinforcement. Results show that, within the appropriate range, multi-community structure will facilitate information diffusion instead of hindering it, which accords with the results derived from two-community networks.
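
    The linear threshold dynamics used in the paper are simple to reproduce: a node adopts once the adopted fraction of its neighbours reaches its threshold. The sketch below uses a hand-built two-community adjacency matrix rather than the extended GN benchmark, so the numbers are only illustrative:

```python
import numpy as np

# Linear threshold spreading: a node adopts once the fraction of its
# neighbours that have adopted reaches its threshold theta.
def lt_spread(adj, seeds, theta=0.4, max_rounds=50):
    active = np.zeros(adj.shape[0], dtype=bool)
    active[seeds] = True
    deg = np.maximum(adj.sum(axis=1), 1)
    for _ in range(max_rounds):
        frac = (adj @ active) / deg
        newly = (~active) & (frac >= theta)
        if not newly.any():
            break
        active |= newly
    return active

# Two dense 10-node communities joined by a few bridges; the number of
# bridges plays the role of the modularity knob in the paper.
n, block = 20, 10
adj = np.zeros((n, n), dtype=int)
adj[:block, :block] = 1
adj[block:, block:] = 1
np.fill_diagonal(adj, 0)
for i, j in [(0, 10), (3, 14), (7, 18)]:
    adj[i, j] = adj[j, i] = 1

# Seeds in one community: spreading saturates there, but the sparse
# inter-community links keep the adopted fraction below threshold elsewhere
print(lt_spread(adj, seeds=[0, 1, 2, 3]).sum(), "of", n, "nodes adopted")
```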

  10. New Multi-group Transport Neutronics (PHISICS) Capabilities for RELAP5-3D and its Application to Phase I of the OECD/NEA MHTGR-350 MW Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Cristian Rabiti; Andrea Alfonsi

    2012-10-01

    PHISICS is a neutronics code system currently under development at the Idaho National Laboratory (INL). Its goal is to provide state-of-the-art simulation capability to reactor designers. The PHISICS modules currently under development are a nodal and semi-structured transport core solver (INSTANT), a depletion module (MRTAU) and a cross section interpolation (MIXER) module. INSTANT is the most developed of the modules mentioned above. Basic functionalities are ready to use, but the code is still in continuous development to extend its capabilities. This paper reports on the effort of coupling the nodal kinetics code package PHISICS (INSTANT/MRTAU/MIXER) to the thermal-hydraulics system code RELAP5-3D, to enable full core and system modeling. This makes it possible to model coupled (thermal-hydraulics and neutronics) problems with more options for 3D neutron kinetics than the existing diffusion theory neutron kinetics module in RELAP5-3D (NESTLE) offers. In the second part of the paper, an overview of the OECD/NEA MHTGR-350 MW benchmark is given. This benchmark has been approved by the OECD and is based on the General Atomics 350 MW Modular High Temperature Gas Reactor (MHTGR) design. The benchmark includes coupled neutronics/thermal-hydraulics exercises that require more capabilities than RELAP5-3D with NESTLE offers. Therefore, the MHTGR benchmark makes extensive use of the new PHISICS/RELAP5-3D coupling capabilities. The paper presents the preliminary results of the three steady-state exercises specified in Phase I of the benchmark using PHISICS/RELAP5-3D.

  11. Three-tier multi-granularity switching system based on PCE

    NASA Astrophysics Data System (ADS)

    Wang, Yubao; Sun, Hao; Liu, Yanfei

    2017-10-01

    With the growing demand for business communications, optical path switching based on electrical signal processing can no longer meet demand. Multi-granularity switching systems, which improve node routing and switching capability, came into being. In a traditional network, each node is responsible for calculating paths and for synchronizing the whole network state, which increases the burden on the network; hence the concept of the path computation element (PCE) was proposed. The PCE is responsible for routing and allocating resources in the network. In a traditional band-switched optical network, the wavelength is used as the basic routing unit, resulting in relatively low wavelength utilization. Due to the wavelength continuity constraint, the routing design of the waveband technology becomes complicated, which directly affects the utilization of the system. In this paper, optical code granularity is adopted. Optical codes are not subject to a continuity constraint, and the number of optical codes is more flexible than the number of wavelengths. For the introduction of optical code switching, we propose a Code Group Routing Entity (CGRE) algorithm. In short, the combination of a three-tier multi-granularity optical switching system and PCE can simplify the network structure, reduce node load, and enhance network scalability and survivability, realizing intelligent optical networking.

  12. Employing Nested OpenMP for the Parallelization of Multi-Zone Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Jost, Gabriele

    2004-01-01

    In this paper we describe the parallelization of the multi-zone versions of the NAS Parallel Benchmarks employing multi-level OpenMP parallelism. For our study we use the NanosCompiler, which supports nesting of OpenMP directives and provides clauses to control the grouping of threads, load balancing, and synchronization. We report the benchmark results, compare the timings with those of different hybrid parallelization paradigms, and discuss OpenMP implementation issues which affect the performance of multi-level parallel applications.

  13. Sensors for measurement of moisture diffusion in power cables with oil-impregnated paper

    NASA Astrophysics Data System (ADS)

    Thomas, Z. M.; Zahn, M.; Yang, W.

    2007-07-01

    Some old power cables use oil-impregnated paper as the insulation material, which is enclosed by a layer of lead sheath. As cracks can form on the sheath of aged cables, the oil-impregnated paper can be exposed to the environmental conditions, and ambient moisture can diffuse into the paper through the cracks, causing a reduced breakdown voltage. To understand this diffusion phenomenon, multi-wavelength dielectrometry sensors have been used to measure permittivity and conductivity, aiming to obtain information on the moisture content. Different electrode-grouping strategies have been suggested to obtain more detailed information. Effectively, an electrode-grouping approach forms a type of electrical capacitance tomography sensor. This paper presents different sensor designs together with a capacitance measuring circuit. Some analytical results are also presented.

  14. Parameterization of Small-Scale Processes

    DTIC Science & Technology

    1989-09-01

    Only fragments of this report's abstract survive in the record: "... detailed sensitivity studies to assess the dependence of results on the eddy viscosities and diffusivities by a direct comparison with certain observations ... [a] better sub-grid scale parameterization is to mount a concerted search for model fits to observations. These would require exhaustive sensitivity studies."

  15. Motion immune diffusion imaging using augmented MUSE (AMUSE) for high-resolution multi-shot EPI

    PubMed Central

    Guhaniyogi, Shayan; Chu, Mei-Lan; Chang, Hing-Chiu; Song, Allen W.; Chen, Nan-kuei

    2015-01-01

    Purpose: To develop new techniques for reducing the effects of microscopic and macroscopic patient motion in diffusion imaging acquired with high-resolution multi-shot EPI. Theory: The previously reported Multiplexed Sensitivity Encoding (MUSE) algorithm is extended to account for macroscopic pixel misregistrations as well as motion-induced phase errors in a technique called Augmented MUSE (AMUSE). Furthermore, to obtain more accurate quantitative DTI measures in the presence of subject motion, we also account for the altered diffusion encoding among shots arising from macroscopic motion. Methods: MUSE and AMUSE were evaluated on simulated and in vivo motion-corrupted multi-shot diffusion data. Evaluations were made both on the resulting image quality and on the estimated diffusion tensor metrics. Results: AMUSE was found to reduce image blurring resulting from macroscopic subject motion compared to MUSE, but yielded inaccurate tensor estimates when neglecting the altered diffusion encoding. Including the altered diffusion encoding in AMUSE produced better estimates of the diffusion tensors. Conclusion: The use of AMUSE allows for improved image quality and diffusion tensor accuracy in the presence of macroscopic subject motion during multi-shot diffusion imaging. These techniques should facilitate future high-resolution diffusion imaging. PMID:25762216

  16. 3D unstructured-mesh radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morel, J.

    1997-12-31

    Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations. ATTILA is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation, including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: Sn (discrete-ordinates), Pn (spherical harmonics), and SPn (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard Sn discretization in angle in conjunction with trilinear-discontinuous spatial differencing and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.
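
    The role of the diffusion-synthetic acceleration mentioned for all three codes is easiest to see in a zero-dimensional caricature of source iteration, where the iteration error contracts by the scattering ratio c each sweep and therefore stalls as c approaches 1 (this sketch is purely illustrative, not any of these codes):

```python
# Plain source iteration converges like c^n, which stalls as c -> 1;
# diffusion-synthetic acceleration exists precisely to fix this.
def source_iteration(c, q=1.0, tol=1e-8, max_it=100000):
    phi, it = 0.0, 0
    exact = q / (1.0 - c)                # analytic infinite-medium solution
    while abs(phi - exact) > tol * exact and it < max_it:
        phi = c * phi + q                # one "transport sweep" in 0-D
        it += 1
    return it

for c in (0.5, 0.9, 0.99, 0.999):
    print(f"c = {c}:  {source_iteration(c)} iterations")
```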

  17. Multisite longitudinal reliability of tract-based spatial statistics in diffusion tensor imaging of healthy elderly subjects.

    PubMed

    Jovicich, Jorge; Marizzoni, Moira; Bosch, Beatriz; Bartrés-Faz, David; Arnold, Jennifer; Benninghoff, Jens; Wiltfang, Jens; Roccatagliata, Luca; Picco, Agnese; Nobili, Flavio; Blin, Oliver; Bombois, Stephanie; Lopes, Renaud; Bordet, Régis; Chanoine, Valérie; Ranjeva, Jean-Philippe; Didic, Mira; Gros-Dagnac, Hélène; Payoux, Pierre; Zoccatelli, Giada; Alessandrini, Franco; Beltramello, Alberto; Bargalló, Núria; Ferretti, Antonio; Caulo, Massimo; Aiello, Marco; Ragucci, Monica; Soricelli, Andrea; Salvadori, Nicola; Tarducci, Roberto; Floridi, Piero; Tsolaki, Magda; Constantinidis, Manos; Drevelegas, Antonios; Rossini, Paolo Maria; Marra, Camillo; Otto, Josephin; Reiss-Zimmermann, Martin; Hoffmann, Karl-Titus; Galluzzi, Samantha; Frisoni, Giovanni B

    2014-11-01

    Large-scale longitudinal neuroimaging studies with diffusion imaging techniques are necessary to test and validate models of white matter neurophysiological processes that change in time, both in healthy and diseased brains. The predictive power of such longitudinal models will always be limited by the reproducibility of repeated measures acquired during different sessions. At present, there is limited quantitative knowledge about the across-session reproducibility of standard diffusion metrics in 3T multi-centric studies on subjects in stable conditions, in particular when using tract-based spatial statistics and with elderly people. In this study we implemented a multi-site brain diffusion protocol in 10 clinical 3T MRI sites distributed across 4 countries in Europe (Italy, Germany, France and Greece) using vendor-provided sequences from Siemens (Allegra, Trio Tim, Verio, Skyra, Biograph mMR), Philips (Achieva) and GE (HDxt) scanners. We acquired DTI data (2 × 2 × 2 mm³, b = 700 s/mm², 5 b0 and 30 diffusion-weighted volumes) of a group of healthy stable elderly subjects (5 subjects per site) in two separate sessions at least a week apart. For each subject and session four scalar diffusion metrics were considered: fractional anisotropy (FA), mean diffusivity (MD), radial diffusivity (RD) and axial diffusivity (AD). The diffusion metrics from multiple subjects and sessions at each site were aligned to their common white matter skeleton using tract-based spatial statistics. The reproducibility at each MRI site was examined by looking at group averages of absolute changes relative to the mean (%) on various parameters: i) reproducibility of the signal-to-noise ratio (SNR) of the b0 images in the centrum semiovale, ii) full-brain test-retest differences of the diffusion metric maps on the white matter skeleton, iii) reproducibility of the diffusion metrics on atlas-based white matter ROIs on the white matter skeleton. Despite the differences in MRI scanner configurations across sites (vendors, models, RF coils and acquisition sequences) we found good and consistent test-retest reproducibility. White matter b0 SNR reproducibility was on average 7 ± 1% with no significant MRI site effects. Whole-brain analysis resulted in no significant test-retest differences at any of the sites with any of the DTI metrics. The atlas-based ROI analysis showed that the mean reproducibility errors largely remained in the 2-4% range for FA and AD and 2-6% for MD and RD, averaged across ROIs. Our results show reproducibility values comparable to those reported in studies using a smaller number of MRI scanners, slightly different DTI protocols and mostly younger populations. We therefore show that the acquisition and analysis protocols used are appropriate for multi-site experimental scenarios.
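
    The four scalar metrics compared across sites are all simple functions of the diffusion-tensor eigenvalues. A minimal sketch using the standard definitions (the eigenvalues below are made-up, white-matter-like values):

```python
import numpy as np

# Scalar DTI metrics from tensor eigenvalues (l1 >= l2 >= l3):
#   MD = (l1+l2+l3)/3, AD = l1, RD = (l2+l3)/2,
#   FA = sqrt(3/2) * ||l - MD|| / ||l||
def dti_metrics(evals):
    l1, l2, l3 = np.sort(evals)[::-1]
    md = evals.mean()
    fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
                 / (l1**2 + l2**2 + l3**2))
    return {"FA": fa, "MD": md, "AD": l1, "RD": 0.5 * (l2 + l3)}

# Made-up eigenvalues in units of 1e-3 mm^2/s
print(dti_metrics(np.array([1.4, 0.35, 0.35])))
```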

  18. Boundary value problems for multi-term fractional differential equations

    NASA Astrophysics Data System (ADS)

    Daftardar-Gejji, Varsha; Bhalekar, Sachin

    2008-09-01

    The multi-term fractional diffusion-wave equation, along with homogeneous/non-homogeneous boundary conditions, has been solved using the method of separation of variables. It is observed that, unlike in the one-term case, the solution of the multi-term fractional diffusion-wave equation is not necessarily non-negative and hence does not represent anomalous diffusion of any kind.

  19. Analysis of transient fission gas behaviour in oxide fuel using BISON and TRANSURANUS

    DOE PAGES

    Barani, T.; Bruschi, E.; Pizzocri, D.; ...

    2017-01-03

    The modelling of fission gas behaviour is a crucial aspect of nuclear fuel analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. Experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of burst release in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of diffusion-based models to allow for the burst release effect. The concept and governing equations of the model are presented, and the effect of the newly introduced parameters is evaluated through an analytic sensitivity analysis. Then, the model is assessed for application to integral fuel rod analysis. The approach that we take for model assessment involves implementation in two structurally different fuel performance codes, namely, BISON (multi-dimensional finite element code) and TRANSURANUS (1.5D semi-analytic code). The model is validated against 19 Light Water Reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point to an improvement in both the qualitative representation of the FGR kinetics and the quantitative predictions of integral fuel rod FGR, relative to the canonical, purely diffusion-based models, with both codes. The overall quantitative improvement of the FGR predictions in the two codes is comparable. Furthermore, calculated radial profiles of xenon concentration are investigated and compared to experimental data, demonstrating the representation of the underlying mechanisms of burst release by the new model.

  20. Measurements of angular flux on surface of Li/sub 2/O slab assemblies and their analysis by a direct integration transport code ''BERMUDA''

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maekawa, H.; Oyama, Y.

    1983-09-01

    Angle-dependent neutron leakage spectra above 0.5 MeV from Li₂O slab assemblies were measured accurately by the time-of-flight method. The measured angles were 0°, 12.2°, 24.9°, 41.8° and 66.8°. The Li₂O assemblies were 31.4 cm in equivalent radius and 5.06, 20.24 and 40.48 cm in thickness. The data were analyzed by a new transport code, BERMUDA-2DN. The time-independent transport equation is solved for two-dimensional, cylindrical, multi-regional geometry using the direct integration method in a multi-group model. The group transfer kernels are obtained accurately from the double-differential cross section data without using a Legendre expansion. The results were compared on an absolute scale. While some local discrepancies exist, the calculated spectra agree well with the experimental ones as a whole. The BERMUDA code was demonstrated to be useful for analyses of fusion neutronics and shielding.

  1. Progress on China nuclear data processing code system

    NASA Astrophysics Data System (ADS)

    Liu, Ping; Wu, Xiaofei; Ge, Zhigang; Li, Songyang; Wu, Haicheng; Wen, Lili; Wang, Wenming; Zhang, Huanyu

    2017-09-01

    China is developing the nuclear data processing code Ruler, which can be used for producing multi-group cross sections and related quantities from evaluated nuclear data in the ENDF format [1]. Ruler includes modules for reconstructing cross sections over the full energy range, generating Doppler-broadened cross sections for a given temperature, producing effective self-shielded cross sections in the unresolved resonance energy range, calculating scattering cross sections in the thermal energy range, generating group cross sections and matrices, and preparing WIMS-D format data files for the reactor physics code WIMS-D [2]. Ruler is written in Fortran-90 and has been tested on 32-bit computers with Windows-XP and Linux operating systems. The verification of Ruler has been performed by comparison with calculation results obtained with the NJOY99 [3] processing code. The validation of Ruler has been performed using the WIMSD5B code.
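
    The central operation of such a processing code, collapsing pointwise cross sections to group constants with a weighting spectrum, is compact enough to sketch. The 1/v cross section, 1/E spectrum, and 3-group structure below are illustrative, not Ruler's algorithms:

```python
import numpy as np

# Flux-weighted group collapse:
#   sigma_g = int_g sigma(E) phi(E) dE / int_g phi(E) dE,
# here with a 1/E weighting spectrum standing in for the true flux.
def collapse(energy, sigma, group_bounds):
    phi = 1.0 / energy
    sig_g = []
    for lo, hi in zip(group_bounds[:-1], group_bounds[1:]):
        m = (energy >= lo) & (energy <= hi)
        sig_g.append(np.trapz(sigma[m] * phi[m], energy[m])
                     / np.trapz(phi[m], energy[m]))
    return np.array(sig_g)

energy = np.logspace(0, 6, 10000)              # 1 eV .. 1 MeV pointwise grid
sigma = 10.0 / np.sqrt(energy)                 # toy 1/v cross section (barns)
bounds = np.array([1.0, 1.0e2, 1.0e4, 1.0e6])  # 3-group structure
print(collapse(energy, sigma, bounds))
```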

  2. Parametric analysis of diffuser requirements for high expansion ratio space engine

    NASA Technical Reports Server (NTRS)

    Wojciechowski, C. J.; Anderson, P. G.

    1981-01-01

    A supersonic diffuser/ejector design computer program was developed. Using empirically modified one-dimensional flow methods, the code specifies the diffuser/ejector geometry. The design code results were verified for calculations up to the end of the diffuser second throat. Diffuser requirements for sea-level testing of high expansion ratio space engines were defined. The feasibility of an ejector system using two commonly available turbojet engines feeding two variable-area-ratio ejectors was demonstrated.
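
    One-dimensional sizing of this kind leans on the isentropic area-Mach relation. A minimal sketch (bisection on the supersonic branch; this illustrates the textbook relation, not the design code itself):

```python
# Isentropic area-Mach relation used in one-dimensional diffuser/ejector
# sizing: A/A* = (1/M) * [(2/(g+1)) * (1 + (g-1)/2 * M^2)]^((g+1)/(2(g-1)))
def area_ratio(mach, gamma=1.4):
    g = gamma
    return (1.0 / mach) * ((2.0 / (g + 1)) * (1 + 0.5 * (g - 1) * mach**2)) \
        ** ((g + 1) / (2 * (g - 1)))

def mach_from_area_ratio(ar, gamma=1.4, supersonic=True):
    # Bisection on the chosen (monotonic) branch of the relation
    lo, hi = (1.0 + 1e-9, 50.0) if supersonic else (1e-6, 1.0 - 1e-9)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        too_big = area_ratio(mid, gamma) > ar
        if supersonic:
            lo, hi = (lo, mid) if too_big else (mid, hi)
        else:
            lo, hi = (mid, hi) if too_big else (lo, mid)
    return 0.5 * (lo + hi)

# High-expansion-ratio nozzle: area ratio 100 gives exit Mach ~6.9
print(mach_from_area_ratio(100.0))
```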

  3. User's manual for Axisymmetric Diffuser Duct (ADD) code. Volume 1: General ADD code description

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Hankins, G. B., Jr.; Edwards, D. E.

    1982-01-01

    This User's Manual contains a complete description of the computer code known as the AXISYMMETRIC DIFFUSER DUCT code, or ADD code. It includes a list of references which describe the formulation of the ADD code and comparisons of calculations with experimental flows. The input/output and general use of the code are described in the first volume. The second volume contains a detailed description of the code, including the global structure of the code, a list of FORTRAN variables, and descriptions of the subroutines. The third volume contains a detailed description of the CODUCT code, which generates coordinate systems for arbitrary axisymmetric ducts.

  5. Radiation Transport for Explosive Outflows: Opacity Regrouping

    NASA Astrophysics Data System (ADS)

    Wollaeger, Ryan T.; van Rossum, Daniel R.

    2014-10-01

    Implicit Monte Carlo (IMC) and Discrete Diffusion Monte Carlo (DDMC) are methods used to stochastically solve the radiative transport and diffusion equations, respectively. These methods combine into a hybrid transport-diffusion method we refer to as IMC-DDMC. We explore a multigroup IMC-DDMC scheme that in DDMC, combines frequency groups with sufficient optical thickness. We term this procedure "opacity regrouping." Opacity regrouping has previously been applied to IMC-DDMC calculations for problems in which the dependence of the opacity on frequency is monotonic. We generalize opacity regrouping to non-contiguous groups and implement this in SuperNu, a code designed to do radiation transport in high-velocity outflows with non-monotonic opacities. We find that regrouping of non-contiguous opacity groups generally improves the speed of IMC-DDMC radiation transport. We present an asymptotic analysis that informs the nature of the Doppler shift in DDMC groups and summarize the derivation of the Gentile-Fleck factor for modified IMC-DDMC. We test SuperNu using numerical experiments including a quasi-manufactured analytic solution, a simple 10 group problem, and the W7 problem for Type Ia supernovae. We find that opacity regrouping is necessary to make our IMC-DDMC implementation feasible for the W7 problem and possibly Type Ia supernova simulations in general. We compare the bolometric light curves and spectra produced by the SuperNu and PHOENIX radiation transport codes for the W7 problem. The overall shape of the bolometric light curves are in good agreement, as are the spectra and their evolution with time. However, for the numerical specifications we considered, we find that the peak luminosity of the light curve calculated using SuperNu is ~10% less than that calculated using PHOENIX.
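
    Opacity regrouping is easy to sketch in miniature: flag frequency groups that are optically thick enough for DDMC, then pool them by similar opacity rather than by adjacency, which is what makes the non-contiguous generalization matter for non-monotonic opacities. The thresholds and opacities below are invented, and this is a caricature of the bookkeeping, not SuperNu's implementation:

```python
import numpy as np

# Frequency groups with optical depth tau_g = sigma_g * dx above a threshold
# are handled with DDMC and pooled by similar log-opacity, regardless of
# contiguity; the rest remain IMC transport groups.
def regroup(sigma, dx, tau_ddmc=5.0, decades_per_pool=0.5):
    tau = sigma * dx
    ddmc = np.where(tau >= tau_ddmc)[0]
    imc = np.where(tau < tau_ddmc)[0]
    key = np.floor(np.log10(sigma[ddmc]) / decades_per_pool)
    pools = [ddmc[key == k] for k in np.unique(key)]
    return pools, imc

# Non-monotonic toy opacities (1/cm) over 8 frequency groups, 1 cm cell
sigma = np.array([0.1, 40.0, 0.5, 60.0, 900.0, 2.0, 55.0, 1200.0])
pools, imc = regroup(sigma, dx=1.0)
print("DDMC pools:", [p.tolist() for p in pools], "IMC groups:", imc.tolist())
# Note the non-contiguous pool [1, 3, 6] of similar-opacity groups
```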

  6. NUMERICAL METHODS FOR SOLVING THE MULTI-TERM TIME-FRACTIONAL WAVE-DIFFUSION EQUATION.

    PubMed

    Liu, F; Meerschaert, M M; McGough, R J; Zhuang, P; Liu, Q

    2013-03-01

    In this paper, the multi-term time-fractional wave-diffusion equations are considered. The multi-term time fractional derivatives are defined in the Caputo sense, whose orders belong to the intervals [0,1], [1,2), [0,2), [0,3), [2,3) and [2,4), respectively. Some computationally effective numerical methods are proposed for simulating the multi-term time-fractional wave-diffusion equations. The numerical results demonstrate the effectiveness of theoretical analysis. These methods and techniques can also be extended to other kinds of the multi-term fractional time-space models with fractional Laplacian.
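
    A representative building block for such methods is the classical L1 discretization of a Caputo derivative of order 0 < alpha < 1 (one term of a multi-term operator). The sketch checks it against a known closed form; it is not the paper's full scheme:

```python
import numpy as np
from math import gamma

# L1 approximation of the Caputo derivative of order 0 < alpha < 1:
#   D^a u(t_n) ~= dt^-a / Gamma(2-a) * sum_k b_k * (u_{n-k} - u_{n-k-1}),
# with weights b_k = (k+1)^(1-a) - k^(1-a). Checked against the exact
# result D^a t^2 = 2 t^(2-a) / Gamma(3-a).
def caputo_l1(u, dt, alpha):
    n = len(u) - 1
    k = np.arange(n)
    b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
    diffs = (u[1:] - u[:-1])[::-1]      # u_{n-k} - u_{n-k-1}, k = 0..n-1
    return dt ** (-alpha) / gamma(2 - alpha) * np.sum(b * diffs)

alpha = 0.5
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
u = t ** 2
print(caputo_l1(u, dt, alpha))                      # numerical, at t = 1
print(2 * t[-1] ** (2 - alpha) / gamma(3 - alpha))  # exact, at t = 1
```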

  8. COMPLETE DETERMINATION OF POLARIZATION FOR A HIGH-ENERGY DEUTERON BEAM (thesis)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Button, J

    1959-05-01

    The P₁ multigroup code was written for the IBM-704 in order to determine the accuracy of the few-group diffusion scheme with various imposed conditions, and also to provide an alternate computational method when this scheme fails to be sufficiently accurate. The code solves for the spatially dependent multigroup flux, taking into account such nuclear phenomena as the slowing down of neutrons resulting from elastic and inelastic scattering, the removal of neutrons resulting from epithermal capture and fission resonances, and the regeneration of fast neutrons resulting from fissioning, which may occur in any of as many as 80 fast multigroups or in the one thermal group. The code will accept as input a physical description of the reactor (that is: slab, cylindrical, or spherical geometry; number of points and regions; composition description; group-dependent boundary conditions; transverse buckling; and mesh sizes) and a prepared library of nuclear properties of all the isotopes in each composition. The code will produce as output multigroup fluxes, currents, and isotopic slowing-down densities, in addition to pointwise and regionwise few-group macroscopic cross sections. (auth)

  9. Parallel Task Management Library for MARTe

    NASA Astrophysics Data System (ADS)

    Valcarcel, Daniel F.; Alves, Diogo; Neto, Andre; Reux, Cedric; Carvalho, Bernardo B.; Felton, Robert; Lomas, Peter J.; Sousa, Jorge; Zabeo, Luca

    2014-06-01

    The Multithreaded Application Real-Time executor (MARTe) is a real-time framework with increasing popularity and support in the thermonuclear fusion community. It allows modular code to run in a multi-threaded environment, leveraging current multi-core processor (CPU) technology. One application that relies on the MARTe framework is the Joint European Torus (JET) tokamak WAll Load Limiter System (WALLS). It calculates and monitors the temperature on metal tiles and plasma facing components (PFCs) that can melt or flake if their temperature gets too high when exposed to power loads. One of the main time-consuming tasks in WALLS is the calculation of thermal diffusion models in real time. These models tend to be described by very large state-space models, making them perfect candidates for parallelisation. MARTe's traditional approach to task parallelisation is to split the problem into several Real-Time Threads, each responsible for a self-contained sequential execution of an input-to-output chain. This is usually possible, but it might not always be practical for algorithmic or technical reasons. Also, it might not scale easily with an increase in the number of available CPU cores. The WorkLibrary introduces a “GPU-like approach” of splitting work among the available cores of modern CPUs that is (i) straightforward to use in an application, (ii) scalable with the number of available cores, and (iii) achieves all of this without rewriting or recompiling the source code. The first part of this article explains the motivation behind the library, its architecture and implementation. The second part presents a real application for WALLS: a parallel version of a large state-space model describing the 2D thermal diffusion on a JET tile.

  10. Multi-modal measurement of the myelin-to-axon diameter g-ratio in preterm-born neonates and adult controls.

    PubMed

    Melbourne, Andrew; Eaton-Rosen, Zach; De Vita, Enrico; Bainbridge, Alan; Cardoso, Manuel Jorge; Price, David; Cady, Ernest; Kendall, Giles S; Robertson, Nicola J; Marlow, Neil; Ourselin, Sébastien

    2014-01-01

    Infants born prematurely are at increased risk of adverse functional outcome. The measurement of white matter tissue composition and structure can help predict functional performance and this motivates the search for new multi-modal imaging biomarkers. In this work we develop a novel combined biomarker from diffusion MRI and multi-component T2 relaxation measurements in a group of infants born very preterm and scanned between 30 and 40 weeks equivalent gestational age. We also investigate this biomarker on a group of seven adult controls, using a multi-modal joint model-fitting strategy. The proposed emergent biomarker is tentatively related to axonal energetic efficiency (in terms of axonal membrane charge storage) and conduction velocity and is thus linked to the tissue electrical properties, giving it a good theoretical justification as a predictive measurement of functional outcome.

  11. The development of a thermal hydraulic feedback mechanism with a quasi-fixed point iteration scheme for control rod position modeling for the TRIGSIMS-TH application

    NASA Astrophysics Data System (ADS)

    Karriem, Veronica V.

    Nuclear reactor design incorporates the study and application of nuclear physics, thermal hydraulics, and nuclear safety. Theoretical models and numerical methods implemented in computer programs are utilized to analyze and design nuclear reactors. The focus of this PhD study is the development of an advanced high-fidelity multi-physics code system to perform reactor core analysis for design and safety evaluations of research TRIGA-type reactors. The fuel management and design code system TRIGSIMS was further developed to fulfill the function of a reactor design and analysis code system for the Pennsylvania State Breazeale Reactor (PSBR). TRIGSIMS, which is currently in use at the PSBR, is a fuel management tool which incorporates the depletion code ORIGEN-S (part of the SCALE system) and the Monte Carlo neutronics solver MCNP. The diffusion theory code ADMARC-H is used within TRIGSIMS to accelerate the MCNP calculations. It manages the data and fuel isotopic content and stores it for future burnup calculations. The contribution of this work is the development of an improved version of TRIGSIMS, named TRIGSIMS-TH. TRIGSIMS-TH incorporates a thermal hydraulic module based on the advanced sub-channel code COBRA-TF (CTF). CTF provides the temperature feedback needed in the multi-physics calculations as well as the thermal hydraulics modeling capability of the reactor core. The temperature feedback model uses the CTF-provided local moderator and fuel temperatures for the cross-section modeling for the ADMARC-H and MCNP calculations. To perform efficient critical control rod calculations, a methodology for applying a control rod position was implemented in TRIGSIMS-TH, making this code system a modeling and design tool for future core loadings. The new TRIGSIMS-TH is a computer program that interlinks various other functional reactor analysis tools. It consists of MCNP5, ADMARC-H, ORIGEN-S, and CTF. CTF was coupled with both MCNP and ADMARC-H to provide the heterogeneous temperature distribution throughout the core. Each of these codes is written in its own programming language, performs its own function, and outputs its own set of data. TRIGSIMS-TH provides effective data manipulation and transfer between the different codes. With the implementation of the feedback and control-rod-position modeling methodologies, the TRIGSIMS-TH calculations are more accurate and in better agreement with measured data. The PSBR is unique in many ways and there are no off-the-shelf codes that can model this design in its entirety. In particular, PSBR has an open core design, which is cooled by natural convection. Combining several codes into a single system brings many challenges. It also requires substantial knowledge of both the operation and the core design of the PSBR. This reactor has been in operation for decades, and there is a fair amount of prior study and development in both PSBR thermal hydraulics and neutronics. Measured data is also available for various core loadings and can be used for validation activities. The previous studies and developments in PSBR modeling also serve as a guide for assessing the findings of the work herein. In order to incorporate new methods and codes into the existing TRIGSIMS, a re-evaluation of various components of the code was performed to ensure the accuracy and efficiency of the existing CTF/MCNP5/ADMARC-H multi-physics coupling. A new set of ADMARC-H diffusion coefficients and cross sections was generated using the SERPENT code.
This was needed because the previous data were not generated with thermal-hydraulic feedback, and the all-rods-out (ARO) position was used as the critical rod position. The B4C data were re-evaluated for this update. The data exchange between ADMARC-H and MCNP5 was modified. The basic core model was given flexibility to allow for various changes, and this feature was implemented in TRIGSIMS-TH. The PSBR core in the new code model can be expanded and changed, which allows the new code to be used as a modeling tool for the design and analysis of future core loadings.
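
    The thermal-hydraulic feedback loop described above is, at its core, a fixed-point iteration between a neutronics solve and a temperature solve. A toy Picard iteration with under-relaxation (surrogate physics and made-up coefficients, not TRIGSIMS-TH's models) looks like this:

```python
import numpy as np

# Quasi-fixed-point (Picard) iteration between a "neutronics solve" and a
# "thermal-hydraulic solve", with under-relaxation on the fuel temperature.
def keff(t_fuel, k0=1.05, alpha=2.0e-3, t_ref=300.0):
    # Doppler-type feedback: reactivity falls as sqrt(T) rises
    return k0 - alpha * (np.sqrt(t_fuel) - np.sqrt(t_ref))

def fuel_temperature(power_mw, t_cool=300.0, r_thermal=40.0):
    # Steady-state conduction surrogate: T_fuel = T_cool + R * P
    return t_cool + r_thermal * power_mw

t_fuel, power, omega = 300.0, 1.0, 0.5    # initial guess, MW, relaxation
for it in range(50):
    k = keff(t_fuel)
    t_new = fuel_temperature(power * k)   # power scales with k in this toy
    if abs(t_new - t_fuel) < 1e-6:
        break
    t_fuel += omega * (t_new - t_fuel)    # under-relaxed Picard update
print(it, t_fuel, keff(t_fuel))
```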

  12. Multi-Touch Tablets, E-Books, and an Emerging Multi-Coding/Multi-Sensory Theory for Reading Science E-Textbooks: Considering the Struggling Reader

    ERIC Educational Resources Information Center

    Rupley, William H.; Paige, David D.; Rasinski, Timothy V.; Slough, Scott W.

    2015-01-01

    Paivio's Dual-Coding Theory (1991) and Mayer's Multimedia Principle (2000) form the foundation for proposing a multi-coding theory centered around Multi-Touch Tablets and the newest generation of e-textbooks to scaffold struggling readers in reading and learning from science textbooks. Using E. O. Wilson's "Life on Earth: An Introduction"…

  13. Multi-D Full Boltzmann Neutrino Hydrodynamic Simulations in Core Collapse Supernovae and their detailed comparison with Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Nagakura, Hiroki; Richers, Sherwood; Ott, Christian; Iwakami, Wakana; Furusawa, Shun; Sumiyoshi, Kohsuke; Yamada, Shoichi

    2017-01-01

    We have developed a multi-D radiation-hydrodynamic code which solves the first-principles Boltzmann equation for neutrino transport. It is currently applicable specifically to core-collapse supernovae (CCSNe), but we will extend its applicability to more extreme phenomena such as black hole formation and the coalescence of double neutron stars. In this meeting, I will discuss two things: (1) a detailed comparison with a Monte Carlo neutrino transport code and (2) axisymmetric CCSNe simulations. Project (1) gives us confidence in our code. The Monte Carlo code was developed by the Caltech group and is specialized to obtain a steady state. This is the first attempt in the CCSNe community to compare two different methods for multi-D neutrino transport, and I will show the results of this comparison. For project (2), I focus in particular on the properties of the neutrino distribution function in the semi-transparent region, where only a first-principles Boltzmann solver can appropriately handle the neutrino transport. In addition to these analyses, I will also discuss the "explodability" by the neutrino heating mechanism.

  14. Chemistry Resolved Kinetic Flow Modeling of TATB Based Explosives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitello, P A; Fried, L E; Howard, W M

    2011-07-21

    Detonation waves in insensitive, TATB-based explosives are believed to have multi-time-scale regimes. The initial burn rate of such explosives has a sub-microsecond time scale. However, significant late-time slow energy release is believed to occur due to diffusion-limited growth of carbon. On the intermediate time scale, concentrations of product species likely change from being in equilibrium to being kinetic-rate controlled. They use the thermo-chemical code CHEETAH linked to an ALE hydrodynamics code to model detonations. They term their model chemistry resolved kinetic flow, as CHEETAH tracks the time-dependent concentrations of individual species in the detonation wave and calculates EOS values based on the concentrations. An HE-validation suite of model simulations compared to experiments at ambient, hot, and cold temperatures has been developed. They present here a new rate model and a comparison with experimental data.
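
    As a cartoon of "chemistry resolved kinetic flow," the sketch below integrates a single time-dependent product-species concentration with an Arrhenius rate toward its equilibrium value, the kind of bookkeeping the abstract describes CHEETAH doing for many species at once. All rate constants are illustrative, not CHEETAH parameters (Python).

      import math

      def arrhenius(T, A=1.0e6, Ea=50.0, R=8.314e-3):
          # illustrative Arrhenius rate constant; Ea in kJ/mol, R in kJ/(mol K)
          return A * math.exp(-Ea / (R * T))

      c, c_eq, T, dt = 0.0, 1.0, 2500.0, 1.0e-9   # concentration, equilibrium,
      for step in range(2000):                    # temperature (K), time step (s)
          c += dt * arrhenius(T) * (c_eq - c)     # relax toward equilibrium
      print(f"concentration after {2000 * dt:.1e} s: {c:.4f}")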

  15. The DANTE Boltzmann transport solver: An unstructured mesh, 3-D, spherical harmonics algorithm compatible with parallel computer architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGhee, J.M.; Roberts, R.M.; Morel, J.E.

    1997-06-01

    A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second-order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion-based preconditioner for scattering-dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated.
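
    The core solver named above is a preconditioned conjugate gradient method. A minimal, self-contained Jacobi-preconditioned CG is sketched below for reference; DANTE's actual preconditioner is the diffusion-based one described in the abstract, not this simple diagonal one (Python).

      import numpy as np

      def pcg(A, b, tol=1e-10, max_iter=200):
          # conjugate gradient with a Jacobi (diagonal) preconditioner
          x = np.zeros_like(b)
          r = b - A @ x
          M_inv = 1.0 / np.diag(A)          # preconditioner: inverse diagonal
          z = M_inv * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = M_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])    # small SPD test matrix
      print(pcg(A, np.array([1.0, 2.0])))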

  16. Anisotropic Azimuthal Power and Temperature Distribution on Fuel Rod: Impact on Hydride Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motta, Arthur; Ivanov, Kostadin; Avramova, Maria

    2015-04-29

    The degradation of the zirconium cladding may limit nuclear fuel performance. In the high-temperature environment of a reactor, the zirconium in the cladding corrodes, releasing hydrogen in the process. Some of this hydrogen is absorbed by the cladding in a highly inhomogeneous manner. The distribution of the absorbed hydrogen is extremely sensitive to temperature and stress concentration gradients; the absorbed hydrogen tends to concentrate near lower temperatures. This hydrogen absorption and hydride formation can cause cladding failure. This project set out to improve the hydrogen distribution prediction capabilities of the BISON fuel performance code. The project was split into two primary sections: the first was the use of a high-fidelity multi-physics coupling to accurately predict temperature gradients as a function of r, θ, and z, and the second was the use of experimental data to create an analytical hydrogen precipitation model. The Penn State version of the thermal hydraulics code COBRA-TF (CTF) was successfully coupled to the DeCART neutronics code. This coupled system was verified by testing and validated by comparison to FRAPCON data. The hydrogen diffusion and precipitation experiments successfully determined the heat of transport and precipitation rate constant values to be used within the hydrogen model in BISON; these values can only be determined experimentally. They were implemented in precipitation, diffusion, and dissolution kernels within the BISON code. The coupled output was fed into BISON models, and the hydrogen and hydride distributions behaved as expected. Simulations were conducted in the radial, axial, and azimuthal directions to showcase the full capabilities of the hydrogen model.
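
    The temperature sensitivity described above comes from the thermo-diffusion (Soret) term in the hydrogen flux, J = -D(dC/dx + Q*·C/(R·T²)·dT/dx), which drives dissolved hydrogen toward the colder side of the cladding. The sketch below integrates that flux on a 1-D cladding slice; D, Q*, and the temperature profile are illustrative values, not BISON inputs (Python).

      import numpy as np

      n, L = 100, 6.0e-4                      # nodes, cladding thickness (m)
      x = np.linspace(0.0, L, n)
      dx = x[1] - x[0]
      T = 600.0 - 5.0e4 * x                   # hotter inner wall, colder outer
      C = np.full(n, 100.0)                   # dissolved hydrogen (wppm)
      D, Qstar, R = 1.0e-11, 25000.0, 8.314   # m^2/s, J/mol, J/(mol K)

      dt = 0.2 * dx * dx / D                  # stable explicit time step
      for _ in range(20000):
          dCdx = np.gradient(C, dx)
          dTdx = np.gradient(T, dx)
          # Fickian term plus Soret term pushing hydrogen down the T gradient
          J = -D * (dCdx + Qstar * C / (R * T**2) * dTdx)
          C -= dt * np.gradient(J, dx)        # dC/dt = -dJ/dx
      print("hot-side vs cold-side hydrogen:", C[0].round(1), C[-1].round(1))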

  17. High-resolution diffusion tensor imaging of the human kidneys using a free-breathing, multi-slice, targeted field of view approach

    PubMed Central

    Chan, Rachel W; Von Deuster, Constantin; Stoeck, Christian T; Harmer, Jack; Punwani, Shonit; Ramachandran, Navin; Kozerke, Sebastian; Atkinson, David

    2014-01-01

    Fractional anisotropy (FA) obtained by diffusion tensor imaging (DTI) can be used to image the kidneys without any contrast media. FA of the medulla has been shown to correlate with kidney function. It is expected that higher spatial resolution would improve the depiction of small structures within the kidney. However, the achievement of high spatial resolution in renal DTI remains challenging as a result of respiratory motion and susceptibility to diffusion imaging artefacts. In this study, a targeted field of view (TFOV) method was used to obtain high-resolution FA maps and colour-coded diffusion tensor orientations, together with measures of the medullary and cortical FA, in 12 healthy subjects. Subjects were scanned with two implementations (dual and single kidney) of a TFOV DTI method. DTI scans were performed during free breathing with a navigator-triggered sequence. Results showed high consistency in the greyscale FA, colour-coded FA and diffusion tensors across subjects and between dual- and single-kidney scans, which have in-plane voxel sizes of 2 × 2 mm² and 1.2 × 1.2 mm², respectively. The ability to acquire multiple contiguous slices allowed the medullary and cortical FA to be quantified over the entire kidney volume. The mean medullary and cortical FA values were 0.38 ± 0.017 and 0.21 ± 0.019, respectively, for the dual-kidney scan, and 0.35 ± 0.032 and 0.20 ± 0.014, respectively, for the single-kidney scan. The mean FA between the medulla and cortex was significantly different (p < 0.001) for both dual- and single-kidney implementations. High-spatial-resolution DTI shows promise for improving the characterization and non-invasive assessment of kidney function. © 2014 The Authors. NMR in Biomedicine published by John Wiley & Sons, Ltd. PMID:25219683
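
    For reference, the FA values quoted above follow from the standard definition in terms of the three diffusion-tensor eigenvalues; a minimal Python version:

      import numpy as np

      def fractional_anisotropy(ev):
          # standard DTI definition from the tensor eigenvalues (l1, l2, l3)
          l1, l2, l3 = ev
          md = (l1 + l2 + l3) / 3.0                     # mean diffusivity
          num = (l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2
          den = l1**2 + l2**2 + l3**2
          return np.sqrt(1.5 * num / den)

      # e.g. a medulla-like, moderately anisotropic tensor (1e-3 mm^2/s units)
      print(round(fractional_anisotropy((2.0, 1.2, 1.0)), 3))   # ~0.36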

  18. High-resolution diffusion tensor imaging of the human kidneys using a free-breathing, multi-slice, targeted field of view approach.

    PubMed

    Chan, Rachel W; Von Deuster, Constantin; Stoeck, Christian T; Harmer, Jack; Punwani, Shonit; Ramachandran, Navin; Kozerke, Sebastian; Atkinson, David

    2014-11-01

    Fractional anisotropy (FA) obtained by diffusion tensor imaging (DTI) can be used to image the kidneys without any contrast media. FA of the medulla has been shown to correlate with kidney function. It is expected that higher spatial resolution would improve the depiction of small structures within the kidney. However, the achievement of high spatial resolution in renal DTI remains challenging as a result of respiratory motion and susceptibility to diffusion imaging artefacts. In this study, a targeted field of view (TFOV) method was used to obtain high-resolution FA maps and colour-coded diffusion tensor orientations, together with measures of the medullary and cortical FA, in 12 healthy subjects. Subjects were scanned with two implementations (dual and single kidney) of a TFOV DTI method. DTI scans were performed during free breathing with a navigator-triggered sequence. Results showed high consistency in the greyscale FA, colour-coded FA and diffusion tensors across subjects and between dual- and single-kidney scans, which have in-plane voxel sizes of 2 × 2 mm² and 1.2 × 1.2 mm², respectively. The ability to acquire multiple contiguous slices allowed the medullary and cortical FA to be quantified over the entire kidney volume. The mean medullary and cortical FA values were 0.38 ± 0.017 and 0.21 ± 0.019, respectively, for the dual-kidney scan, and 0.35 ± 0.032 and 0.20 ± 0.014, respectively, for the single-kidney scan. The mean FA between the medulla and cortex was significantly different (p < 0.001) for both dual- and single-kidney implementations. High-spatial-resolution DTI shows promise for improving the characterization and non-invasive assessment of kidney function. © 2014 The Authors. NMR in Biomedicine published by John Wiley & Sons, Ltd.

  19. Simultaneous Multi-Scale Diffusion Estimation and Tractography Guided by Entropy Spectrum Pathways

    PubMed Central

    Galinsky, Vitaly L.; Frank, Lawrence R.

    2015-01-01

    We have developed a method for the simultaneous estimation of local diffusion and the global fiber tracts based upon the information entropy flow that computes the maximum entropy trajectories between locations and depends upon the global structure of the multi-dimensional and multi-modal diffusion field. Computation of the entropy spectrum pathways requires only solving a simple eigenvector problem for the probability distribution, for which efficient numerical routines exist, and a straightforward integration of the probability conservation through ray tracing of the convective modes, guided by the global structure of the entropy spectrum coupled with small-scale local diffusion. The intervoxel diffusion is sampled by multi-b-shell, multi-q-angle DWI data expanded in spherical waves. This novel approach to fiber tracking incorporates global information about multiple fiber crossings in every individual voxel and ranks it in the most scientifically rigorous way. This method has potential significance for a wide range of applications, including studies of brain connectivity. PMID:25532167
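
    A minimal sketch of the eigenvector step mentioned above: for a toy symmetric coupling matrix, the dominant eigenvector defines row-stochastic maximum-entropy transition probabilities. The nearest-neighbour chain below is illustrative only; in the paper the coupling is built from the measured diffusion field (Python).

      import numpy as np

      n = 6
      Q = np.zeros((n, n))
      for i in range(n - 1):
          Q[i, i + 1] = Q[i + 1, i] = 1.0   # toy nearest-neighbour coupling

      lam, psi = np.linalg.eigh(Q)          # symmetric eigenproblem
      psi_max = np.abs(psi[:, -1])          # dominant (Perron) eigenvector
      # maximum-entropy transition probabilities P_ij = Q_ij psi_j/(lam psi_i)
      P = Q * psi_max[None, :] / (lam[-1] * psi_max[:, None])
      print("row sums (should all be 1):", P.sum(axis=1).round(6))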

  20. Predicting X-ray diffuse scattering from translation–libration–screw structural ensembles

    PubMed Central

    Van Benschoten, Andrew H.; Afonine, Pavel V.; Terwilliger, Thomas C.; Wall, Michael E.; Jackson, Colin J.; Sauter, Nicholas K.; Adams, Paul D.; Urzhumtsev, Alexandre; Fraser, James S.

    2015-01-01

    Identifying the intramolecular motions of proteins and nucleic acids is a major challenge in macromolecular X-ray crystallography. Because Bragg diffraction describes the average positional distribution of crystalline atoms with imperfect precision, the resulting electron density can be compatible with multiple models of motion. Diffuse X-ray scattering can reduce this degeneracy by reporting on correlated atomic displacements. Although recent technological advances are increasing the potential to accurately measure diffuse scattering, computational modeling and validation tools are still needed to quantify the agreement between experimental data and different parameterizations of crystalline disorder. A new tool, phenix.diffuse, addresses this need by employing Guinier’s equation to calculate diffuse scattering from Protein Data Bank (PDB)-formatted structural ensembles. As an example case, phenix.diffuse is applied to translation–libration–screw (TLS) refinement, which models rigid-body displacement for segments of the macromolecule. To enable the calculation of diffuse scattering from TLS-refined structures, phenix.tls_as_xyz builds multi-model PDB files that sample the underlying T, L and S tensors. In the glycerophosphodiesterase GpdQ, alternative TLS-group partitioning and different motional correlations between groups yield markedly dissimilar diffuse scattering maps with distinct implications for molecular mechanism and allostery. These methods demonstrate how, in principle, X-ray diffuse scattering could extend macromolecular structural refinement, validation and analysis. PMID:26249347

  1. Predicting X-ray diffuse scattering from translation–libration–screw structural ensembles

    DOE PAGES

    Van Benschoten, Andrew H.; Afonine, Pavel V.; Terwilliger, Thomas C.; ...

    2015-07-28

    Identifying the intramolecular motions of proteins and nucleic acids is a major challenge in macromolecular X-ray crystallography. Because Bragg diffraction describes the average positional distribution of crystalline atoms with imperfect precision, the resulting electron density can be compatible with multiple models of motion. Diffuse X-ray scattering can reduce this degeneracy by reporting on correlated atomic displacements. Although recent technological advances are increasing the potential to accurately measure diffuse scattering, computational modeling and validation tools are still needed to quantify the agreement between experimental data and different parameterizations of crystalline disorder. A new tool, phenix.diffuse, addresses this need by employing Guinier's equation to calculate diffuse scattering from Protein Data Bank (PDB)-formatted structural ensembles. As an example case, phenix.diffuse is applied to translation–libration–screw (TLS) refinement, which models rigid-body displacement for segments of the macromolecule. To enable the calculation of diffuse scattering from TLS-refined structures, phenix.tls_as_xyz builds multi-model PDB files that sample the underlying T, L and S tensors. In the glycerophosphodiesterase GpdQ, alternative TLS-group partitioning and different motional correlations between groups yield markedly dissimilar diffuse scattering maps with distinct implications for molecular mechanism and allostery. In addition, these methods demonstrate how, in principle, X-ray diffuse scattering could extend macromolecular structural refinement, validation and analysis.

  2. Multiple echo multi-shot diffusion sequence.

    PubMed

    Chabert, Steren; Galindo, César; Tejos, Cristian; Uribe, Sergio A

    2014-04-01

    To measure both the transversal relaxation time (T2) and diffusion coefficients within a single scan using a multi-shot approach. Both measurements have drawn interest in many applications, especially in skeletal muscle studies, which involve short T2 values. Multiple echo single-shot schemes have been proposed to obtain those variables simultaneously within a single scan, resulting in a reduction of the scanning time. However, one problem with those approaches is the associated long echo read-out. Consequently, the minimum achievable echo time tends to be long, limiting the application of these sequences to tissues with relatively long T2. To address this problem, we propose to extend the multi-echo sequences using a multi-shot approach, so as to allow shorter echo times. A multi-shot dual-echo EPI sequence with diffusion gradients and echo navigators was modified to include independent diffusion gradients in either of the two echoes. The multi-shot approach allows us to drastically reduce echo times. Results showed good agreement of the T2 and mean diffusivity measurements with gold standard sequences in phantoms and in vivo data of calf muscles from healthy volunteers. A fast and accurate method is proposed to measure T2 and diffusion coefficients simultaneously, tested in vitro and in healthy volunteers. Copyright © 2013 Wiley Periodicals, Inc.
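
    The separation of T2 and diffusion works because the log-signal is linear in both decay rates: ln S = ln S0 - TE/T2 - b·D. A toy joint fit over echoes with different (TE, b) combinations is sketched below; the timings are illustrative, not the paper's protocol (Python).

      import numpy as np

      TE = np.array([30.0, 60.0, 30.0, 60.0])      # echo times (ms)
      b  = np.array([0.0,  0.0,  500., 500.])      # b-values (s/mm^2)
      T2_true, D_true, S0 = 35.0, 1.5e-3, 1000.0
      S = S0 * np.exp(-TE / T2_true - b * D_true)  # synthetic signals

      # ln S is linear in (ln S0, 1/T2, D), so a single least-squares solve
      # recovers both relaxation and diffusion from one multi-echo data set.
      A = np.column_stack([np.ones_like(TE), -TE, -b])
      coef, *_ = np.linalg.lstsq(A, np.log(S), rcond=None)
      print("T2 =", round(1.0 / coef[1], 2), "ms;  D =", coef[2], "mm^2/s")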

  3. Groups in the radiative transfer theory

    NASA Astrophysics Data System (ADS)

    Nikoghossian, Arthur

    2016-11-01

    The paper presents a group-theoretical description of radiation transfer in inhomogeneous and multi-component atmospheres with plane-parallel geometry. It summarizes and generalizes the results obtained recently by the author for some standard transfer problems of astrophysical interest with allowance for the angle and frequency distributions of the radiation field. We introduce the concept of composition groups for media with different optical and physical properties. Group representations are derived for two possible cases of illumination of a composite finite atmosphere. An algorithm for determining the reflectance and transmittance of inhomogeneous and multi-component atmospheres is described. The group theory is applied also to determining the field of radiation inside an inhomogeneous atmosphere. The concept of a group of optical depth translations is introduced. The developed theory is illustrated with the problem of radiation diffusion with partial frequency redistribution, assuming that the inhomogeneity is due to depth variation of the scattering coefficient. It is shown that once the reflectance and transmittance of a medium are determined, the internal field of radiation in the source-free atmosphere is found without solving any new equations. The transfer problems for a semi-infinite atmosphere and an atmosphere with internal sources of energy are discussed. The developed theory allows one to derive summation laws for the mean number of scattering events undergone by photons in the course of diffusion in the atmosphere.
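
    The composition idea can be made concrete with the classical adding relations for two stacked scattering layers; composing layers this way is associative, which is what underlies the group structure. A minimal sketch for symmetric, homogeneous slabs with illustrative (R, T) values, not the paper's formalism (Python):

      def add_layers(layer1, layer2):
          # adding relations for symmetric layers:
          #   T12 = T1*T2 / (1 - R1*R2)
          #   R12 = R1 + T1^2 * R2 / (1 - R1*R2)
          # the denominator sums the infinite series of inter-layer bounces
          R1, T1 = layer1
          R2, T2 = layer2
          denom = 1.0 - R1 * R2
          return (R1 + T1 * T1 * R2 / denom, T1 * T2 / denom)

      thin = (0.1, 0.9)                    # (R, T) of one conservative slab
      R, T = add_layers(add_layers(thin, thin), thin)   # three slabs composed
      print(f"3-slab stack: R = {R:.4f}, T = {T:.4f}, R+T = {R+T:.4f}")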

  4. Nonlinear viscoplasticity in ASPECT: benchmarking and applications to subduction

    NASA Astrophysics Data System (ADS)

    Glerum, Anne; Thieulot, Cedric; Fraters, Menno; Blom, Constantijn; Spakman, Wim

    2018-03-01

    ASPECT (Advanced Solver for Problems in Earth's ConvecTion) is a massively parallel finite element code originally designed for modeling thermal convection in the mantle with a Newtonian rheology. The code is characterized by modern numerical methods, high-performance parallelism and extensibility. This last characteristic is illustrated in this work: we have extended the use of ASPECT from global thermal convection modeling to upper-mantle-scale applications of subduction.

    Subduction modeling generally requires the tracking of multiple materials with different properties and with nonlinear viscous and viscoplastic rheologies. To this end, we implemented a frictional plasticity criterion that is combined with a viscous diffusion and dislocation creep rheology. Because ASPECT uses compositional fields to represent different materials, all material parameters are made dependent on a user-specified number of fields.

    The goal of this paper is primarily to describe and verify our implementations of complex, multi-material rheology by reproducing the results of four well-known two-dimensional benchmarks: the indentor benchmark, the brick experiment, the sandbox experiment and the slab detachment benchmark. Furthermore, we aim to provide hands-on examples for prospective users by demonstrating the use of multi-material viscoplasticity with three-dimensional, thermomechanical models of oceanic subduction, putting ASPECT on the map as a community code for high-resolution, nonlinear rheology subduction modeling.
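
    For orientation, the viscoplastic constitutive choice described above typically amounts to capping a composite creep viscosity with a plastic one once a Drucker-Prager-style yield stress is reached. The sketch below uses illustrative parameters and is a generic statement of that idea, not ASPECT's exact implementation (Python):

      def effective_viscosity(strain_rate_II, pressure,
                              eta_diff=1.0e21, eta_disl=5.0e20,
                              cohesion=2.0e7, friction=0.5):
          # harmonic average of diffusion and dislocation creep (in series)
          eta_creep = 1.0 / (1.0 / eta_diff + 1.0 / eta_disl)
          # plastic "viscosity" keeping the stress on the yield envelope
          yield_stress = cohesion + friction * pressure
          eta_plastic = yield_stress / (2.0 * strain_rate_II)
          return min(eta_creep, eta_plastic)

      # e.g. second strain-rate invariant 1e-14 1/s at 100 MPa pressure
      print(f"{effective_viscosity(1.0e-14, 1.0e8):.3e} Pa s")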

  5. Potential of pin-by-pin SPN calculations as an industrial reference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fliscounakis, M.; Girardi, E.; Courau, T.

    2012-07-01

    This paper aims at analysing the potential of pin-by-pin SPn calculations to compute the neutronic flux in PWR cores as an alternative to the diffusion approximation. As far as pin-by-pin calculations are concerned, an SPH equivalence is used to preserve the reaction rates. The use of SPH equivalence is a common practice in core diffusion calculations. In this paper, a methodology to generalize the equivalence procedure to the SPn equations context is presented. In order to verify and validate the equivalence procedure, SPn calculations are compared to 2D transport reference results obtained with the APOLLO2 code. The validation cases consist of 3x3 analytical assembly color sets involving burn-up heterogeneities, UOX/MOX interfaces, and control rods. Considering various energy discretizations (up to 26 groups) and flux development orders (up to 7) for the SPn equations, results show that 26-group SP3 calculations are very close to the transport reference (with pin production rate discrepancies < 1%). This proves the high interest of pin-by-pin SPn calculations as an industrial reference when relying on 26 energy groups combined with SP3 flux development order. Additionally, the SPn results are compared to diffusion pin-by-pin calculations, in order to evaluate the potential benefit of using an SPn solver as an alternative to diffusion. Discrepancies on pin production rates are less than 1.6% for 6-group SP3 calculations against 3.2% for 2-group diffusion calculations. This shows that SPn solvers may be considered as an alternative to multigroup diffusion. (authors)
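
    A minimal sketch of the SPH equivalence iteration referred to above: per-cell factors rescale the homogenized cross sections until the low-order flux reproduces the reference reaction rates. The low-order solver below is a toy stand-in, not an SPn solve, and the data are illustrative (Python).

      def sph_iteration(sigma, flux_ref, solve_low_order, tol=1e-10):
          mu = [1.0] * len(sigma)                   # one SPH factor per cell
          for _ in range(200):
              sigma_eq = [m * s for m, s in zip(mu, sigma)]   # corrected XS
              flux = solve_low_order(sigma_eq)                # low-order solve
              # at convergence mu*flux = flux_ref, so the corrected-XS
              # reaction rate mu*sigma*flux equals the reference sigma*flux_ref
              mu_new = [f_ref / f for f_ref, f in zip(flux_ref, flux)]
              if max(abs(a - b) for a, b in zip(mu_new, mu)) < tol:
                  return mu_new
              mu = mu_new
          return mu

      def toy_low_order(sigma_eq):
          # stand-in solver: flux decreases as the corrected removal XS grows
          return [1.0 / (1.0 + s) for s in sigma_eq]

      mu = sph_iteration([0.5, 0.6], [0.6, 0.5], toy_low_order)
      print("SPH factors:", [round(m, 4) for m in mu])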

  6. Efficient CO2 capture by functionalized graphene oxide nanosheets as fillers to fabricate multi-permselective mixed matrix membranes.

    PubMed

    Li, Xueqin; Cheng, Youdong; Zhang, Haiyang; Wang, Shaofei; Jiang, Zhongyi; Guo, Ruili; Wu, Hong

    2015-03-11

    A novel multi-permselective mixed matrix membrane (MP-MMM) is developed by incorporating versatile fillers functionalized with ethylene oxide (EO) groups and an amine carrier into a polymer matrix. The as-prepared MP-MMMs can separate CO2 efficiently because of the simultaneous enhancement of diffusivity selectivity, solubility selectivity, and reactivity selectivity. To be specific, MP-MMMs were fabricated by incorporating polyethylene glycol- and polyethylenimine-functionalized graphene oxide nanosheets (PEG-PEI-GO) into a commercial low-cost Pebax matrix. The PEG-PEI-GO plays multiple roles in enhancing membrane performance. First, the high-aspect-ratio GO nanosheets in a polymer matrix increase the length of the tortuous path of gas diffusion and generate a rigidified interface between the polymer matrix and fillers, enhancing the diffusivity selectivity. Second, PEG consisting of EO groups has excellent affinity for CO2 to enhance the solubility selectivity. Third, PEI with abundant primary, secondary, and tertiary amine groups reacts reversibly with CO2 to enhance reactivity selectivity. Thus, the as-prepared MP-MMMs exhibit excellent CO2 permeability and CO2/gas selectivity. The MP-MMM doped with 10 wt % PEG-PEI-GO displays optimal gas separation performance with a CO2 permeability of 1330 Barrer, a CO2/CH4 selectivity of 45, and a CO2/N2 selectivity of 120, surpassing the 2008 Robeson upper bound lines (1 Barrer = 10⁻¹⁰ cm³(STP) cm cm⁻² s⁻¹ cmHg⁻¹).

  7. Eddy current-nulled convex optimized diffusion encoding (EN-CODE) for distortion-free diffusion tensor imaging with short echo times.

    PubMed

    Aliotta, Eric; Moulin, Kévin; Ennis, Daniel B

    2018-02-01

    To design and evaluate eddy current-nulled convex optimized diffusion encoding (EN-CODE) gradient waveforms for efficient diffusion tensor imaging (DTI) that is free of eddy current-induced image distortions. The EN-CODE framework was used to generate diffusion-encoding waveforms that are eddy current-compensated. The EN-CODE DTI waveform was compared with the existing eddy current-nulled twice refocused spin echo (TRSE) sequence as well as monopolar (MONO) and non-eddy current-compensated CODE in terms of echo time (TE) and image distortions. Comparisons were made in simulations, phantom experiments, and neuroimaging of 10 healthy volunteers. The EN-CODE sequence achieved eddy current compensation with a significantly shorter TE than TRSE (78 versus 96 ms) and a slightly shorter TE than MONO (78 versus 80 ms). Intravoxel signal variance was lower in phantoms with EN-CODE than with MONO (13.6 ± 11.6 versus 37.4 ± 25.8) and not different from TRSE (15.1 ± 11.6), indicating good robustness to eddy current-induced image distortions. Mean fractional anisotropy values in brain edges were also significantly lower with EN-CODE than with MONO (0.16 ± 0.01 versus 0.24 ± 0.02, P < 1 × 10⁻⁵) and not different from TRSE (0.16 ± 0.01 versus 0.16 ± 0.01, P = nonsignificant). The EN-CODE sequence eliminated eddy current-induced image distortions in DTI with a TE comparable to MONO and substantially shorter than TRSE. Magn Reson Med 79:663-672, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
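
    For reference, the b-value delivered by any candidate encoding waveform G(t), the quantity CODE-type optimization maximizes per unit TE subject to nulling constraints, is the time integral of the squared zeroth gradient moment, b = γ² ∫ q(t)² dt with q(t) = ∫ G dt'. A minimal numerical version for a plain monopolar-style waveform (illustrative timings, not EN-CODE output) in Python:

      import numpy as np

      gamma = 2.675e8                      # 1H gyromagnetic ratio (rad/s/T)
      dt = 1.0e-5                          # time step (s)
      t = np.arange(0.0, 0.060, dt)        # 60 ms encoding window
      G = np.zeros_like(t)
      G[(t > 0.005) & (t < 0.025)] = 0.04      # +40 mT/m encoding lobe
      G[(t > 0.035) & (t < 0.055)] = -0.04     # refocusing lobe

      q = np.cumsum(G) * dt                # gradient moment q(t)
      b = gamma**2 * np.sum(q**2) * dt     # b-value in s/m^2
      print("b =", round(b * 1e-6, 1), "s/mm^2")   # ~1069 s/mm^2 here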

  8. Color-coded fluid-attenuated inversion recovery images improve inter-rater reliability of fluid-attenuated inversion recovery signal changes within acute diffusion-weighted image lesions.

    PubMed

    Kim, Bum Joon; Kim, Yong-Hwan; Kim, Yeon-Jung; Ahn, Sung Ho; Lee, Deok Hee; Kwon, Sun U; Kim, Sang Joon; Kim, Jong S; Kang, Dong-Wha

    2014-09-01

    Diffusion-weighted image fluid-attenuated inversion recovery (FLAIR) mismatch has been considered to represent ischemic lesion age. However, the inter-rater agreement of diffusion-weighted image FLAIR mismatch is low. We hypothesized that color-coded images would increase its inter-rater agreement. Patients with ischemic stroke <24 hours from a clear onset were retrospectively studied. FLAIR signal change was rated as negative, subtle, or obvious on conventional and color-coded FLAIR images based on visual inspection. Inter-rater agreement was evaluated using κ and percent agreement. The predictive value of diffusion-weighted image FLAIR mismatch for identification of patients <4.5 hours from symptom onset was evaluated. One hundred and thirteen patients were enrolled. The inter-rater agreement of FLAIR signal change improved from 69.9% (κ=0.538) with conventional images to 85.8% (κ=0.754) with color-coded images (P=0.004). Discrepantly rated patients on conventional, but not on color-coded images, had a higher prevalence of cardioembolic stroke (P=0.02) and cortical infarction (P=0.04). The positive predictive value for patients <4.5 hours from onset was 85.3% and 71.9% with conventional and 95.7% and 82.1% with color-coded images, by each rater. Color-coded FLAIR images increased the inter-rater agreement of diffusion-weighted image FLAIR mismatch and may ultimately help identify unknown-onset stroke patients appropriate for thrombolysis. © 2014 American Heart Association, Inc.
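
    The agreement statistics quoted above (percent agreement and Cohen's κ) are straightforward to reproduce; a minimal sketch on made-up toy ratings, not the study's data (Python):

      from collections import Counter

      r1 = ["neg", "subtle", "obvious", "subtle", "neg", "obvious", "neg", "subtle"]
      r2 = ["neg", "subtle", "obvious", "neg",    "neg", "obvious", "neg", "obvious"]

      n = len(r1)
      p_o = sum(a == b for a, b in zip(r1, r2)) / n        # observed agreement
      c1, c2 = Counter(r1), Counter(r2)
      # chance agreement from each rater's marginal category frequencies
      p_e = sum(c1[c] * c2[c] for c in set(r1) | set(r2)) / n**2
      kappa = (p_o - p_e) / (1 - p_e)
      print(f"agreement = {p_o:.1%}, kappa = {kappa:.3f}")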

  9. Kinetic Effects in Inertial Confinement Fusion

    NASA Astrophysics Data System (ADS)

    Kagan, Grigory

    2014-10-01

    Sharp background gradients, inevitably introduced during an ICF implosion, are likely responsible for the discrepancy between the predictions of standard single-fluid rad-hydro codes and the experimental observations. On the one hand, these gradients drive inter-ion-species transport, so the fuel composition no longer remains constant, unlike what the single-fluid codes assume. On the other hand, once the background scale is comparable to the mean free path, a fluid description becomes invalid. This point takes on special significance in plasmas, where a particle's mean free path scales with the square of its energy. The distribution function of energetic ions may therefore be far from Maxwellian, even if thermal ions are nearly equilibrated. Ironically, it is these energetic, or tail, ions that are supposed to fuse at the onset of ignition. A combination of studies has been conducted to clarify the role of such kinetic effects on ICF performance. First, a transport formalism applicable to multi-component plasmas has been developed. In particular, a novel "electro-diffusion" mechanism of ion species separation has been shown to exist. Equally important, in drastic contrast to the classical case of a neutral gas mixture, thermo-diffusion is predicted to be comparable to, or even much larger than, baro-diffusion. By employing the effective potential theory, this formalism has then been generalized to the case of a moderately coupled plasma with multiple ion species, making it applicable to the problem of mix at the shell/fuel interface in an ICF implosion. Second, the distribution function for the energetic ions has been found from first principles, and the fusion reactivity reduction has been calculated for hot-spot-relevant conditions. A technique for approximate evaluation of the distribution function has been identified. This finding suggests a path to effectively introducing the tail modification effects into mainline rad-hydro codes, while being in good agreement with the first-principles-based solution. This work was partially supported by the Laboratory Directed Research and Development (LDRD) program of LANL.

  10. Differentiation of Low- and High-Grade Pediatric Brain Tumors with High b-Value Diffusion-weighted MR Imaging and a Fractional Order Calculus Model

    PubMed Central

    Sui, Yi; Wang, He; Liu, Guanzhong; Damen, Frederick W.; Wanamaker, Christian; Li, Yuhua

    2015-01-01

    Purpose To demonstrate that a new set of parameters (D, β, and μ) from a fractional order calculus (FROC) diffusion model can be used to improve the accuracy of MR imaging for differentiating among low- and high-grade pediatric brain tumors. Materials and Methods The institutional review board of the performing hospital approved this study, and written informed consent was obtained from the legal guardians of pediatric patients. Multi-b-value diffusion-weighted magnetic resonance (MR) imaging was performed in 67 pediatric patients with brain tumors. Diffusion coefficient D, fractional order parameter β (which correlates with tissue heterogeneity), and a microstructural quantity μ were calculated by fitting the multi-b-value diffusion-weighted images to an FROC model. D, β, and μ values were measured in solid tumor regions, as well as in normal-appearing gray matter as a control. These values were compared between the low- and high-grade tumor groups by using the Mann-Whitney U test. The performance of FROC parameters for differentiating among patient groups was evaluated with receiver operating characteristic (ROC) analysis. Results None of the FROC parameters exhibited significant differences in normal-appearing gray matter (P ≥ .24), but all showed a significant difference (P < .002) between low- (D, 1.53 μm²/msec ± 0.47; β, 0.87 ± 0.06; μ, 8.67 μm ± 0.95) and high-grade (D, 0.86 μm²/msec ± 0.23; β, 0.73 ± 0.06; μ, 7.8 μm ± 0.70) brain tumor groups. The combination of D and β produced the largest area under the ROC curve (0.962) in the ROC analysis compared with individual parameters (β, 0.943; D, 0.910; and μ, 0.763), indicating an improved performance for tumor differentiation. Conclusion The FROC parameters can be used to differentiate between low- and high-grade pediatric brain tumor groups. The combination of FROC parameters or individual parameters may serve as in vivo, noninvasive, and quantitative imaging markers for classifying pediatric brain tumors. © RSNA, 2015 PMID:26035586
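
    For background, the attenuation form usually fitted in the FROC diffusion literature (quoted here from the general FROC formulation, since the abstract does not spell the model out; notation varies across papers) is

        \frac{S}{S_0} = \exp\!\left[ -D\,\mu^{2(\beta-1)}\,(\gamma G_d \delta)^{2\beta} \left( \Delta - \frac{2\beta-1}{2\beta+1}\,\delta \right) \right]

    where G_d, δ and Δ are the diffusion-gradient amplitude, lobe duration and separation. For β = 1 this collapses to the familiar mono-exponential S/S₀ = exp(-bD) with b = (γG_dδ)²(Δ - δ/3), so β < 1 is precisely the heterogeneity signature the study exploits.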

  11. Approximate static condensation algorithm for solving multi-material diffusion problems on meshes non-aligned with material interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kikinzon, Evgeny; Kuznetsov, Yuri; Lipnikov, Konstantin

    In this study, we describe a new algorithm for solving the multi-material diffusion problem when material interfaces are not aligned with the mesh. In this case, interface reconstruction methods are used to construct an approximate representation of the interfaces between materials. They produce so-called multi-material cells, in which materials are represented by material polygons that each contain only one material. The reconstructed interface is not continuous between cells. Finally, we suggest a new method for solving multi-material diffusion problems on such meshes and compare its performance with known homogenization methods.

  12. Approximate static condensation algorithm for solving multi-material diffusion problems on meshes non-aligned with material interfaces

    DOE PAGES

    Kikinzon, Evgeny; Kuznetsov, Yuri; Lipnikov, Konstantin; ...

    2017-07-08

    In this study, we describe a new algorithm for solving the multi-material diffusion problem when material interfaces are not aligned with the mesh. In this case, interface reconstruction methods are used to construct an approximate representation of the interfaces between materials. They produce so-called multi-material cells, in which materials are represented by material polygons that each contain only one material. The reconstructed interface is not continuous between cells. Finally, we suggest a new method for solving multi-material diffusion problems on such meshes and compare its performance with known homogenization methods.

  13. On decoding of multi-level MPSK modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

    The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch metric and path metric, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. The simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered, and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that soft-decision MSD reduces the decoding complexity drastically while being suboptimum. Hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.
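
    A minimal sketch of the soft-decision branch metric underlying such decoders: for each candidate 8-PSK symbol, the metric is the squared Euclidean distance to the received complex sample (the non-uniform floating-point-to-integer mapping mentioned above would then quantize these metrics). Python, with a made-up received sample:

      import cmath, math

      # unit-energy 8-PSK constellation, points at multiples of 45 degrees
      PSK8 = [cmath.exp(1j * 2 * math.pi * k / 8) for k in range(8)]

      def branch_metrics(r):
          # squared Euclidean distance to every constellation point
          return [abs(r - s) ** 2 for s in PSK8]

      r = 0.9 * cmath.exp(1j * 0.4)            # noisy received sample
      m = branch_metrics(r)
      best = min(range(8), key=m.__getitem__)
      print("best symbol:", best, "metric:", round(m[best], 4))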

  14. UFO: A THREE-DIMENSIONAL NEUTRON DIFFUSION CODE FOR THE IBM 704

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auerbach, E.H.; Jewett, J.P.; Ketchum, M.A.

    A description of UFO, a code for the solution of the few-group neutron diffusion equation in three-dimensional Cartesian coordinates on the IBM 704, is given. An accelerated Liebmann flux iteration scheme is used, and optimum parameters can be calculated by the code whenever they are required. The theory and operation of the program are discussed. (auth)
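
    The accelerated Liebmann scheme is what is now usually called successive over-relaxation (SOR). A minimal sketch for a one-group 2-D fixed-source diffusion problem, -D∇²φ + Σₐφ = S with zero-flux boundaries, follows; UFO itself computes the optimum acceleration parameter, whereas here ω is fixed by hand and the problem data are illustrative (Python).

      import numpy as np

      n, h = 32, 1.0 / 33                      # interior nodes, mesh spacing
      D, sigma_a, S = 1.0, 0.5, 1.0            # diffusion, absorption, source
      omega = 1.7                              # over-relaxation parameter
      phi = np.zeros((n + 2, n + 2))           # ghost cells hold phi = 0

      diag = 4.0 * D / h**2 + sigma_a
      for sweep in range(500):
          for i in range(1, n + 1):
              for j in range(1, n + 1):
                  nbr = (phi[i-1, j] + phi[i+1, j] +
                         phi[i, j-1] + phi[i, j+1]) * D / h**2
                  gs = (S + nbr) / diag                 # Gauss-Seidel value
                  phi[i, j] += omega * (gs - phi[i, j]) # SOR acceleration
      print("peak flux:", round(phi[n // 2, n // 2], 4))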

  15. Terahertz wave manipulation based on multi-bit coding artificial electromagnetic surfaces

    NASA Astrophysics Data System (ADS)

    Li, Jiu-Sheng; Zhao, Ze-Jiang; Yao, Jian-Quan

    2018-05-01

    A polarization-insensitive multi-bit coding artificial electromagnetic surface is proposed for terahertz wave manipulation. The coding artificial electromagnetic surfaces, composed of four-arrow-shaped particles with certain coding sequences, can realize multi-bit coding at terahertz frequencies and steer the reflected terahertz waves into numerous directions by using different coding distributions. Furthermore, we demonstrate that our coding artificial electromagnetic surfaces have a strong ability to reduce the radar cross section, with polarization insensitivity, for TE and TM incident terahertz waves as well as linearly polarized and circularly polarized terahertz waves. This work offers an effective strategy to realize more powerful manipulation of terahertz waves.
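
    Why coding sequences redirect reflected beams can be seen from the array factor of the unit-cell reflection phases: a periodic 1-bit "0011" supercell cancels the specular reflection and moves the energy into symmetric lobes at sinθ = ±λ/Λ. A minimal sketch with idealized isotropic cells and illustrative geometry, not the paper's design (Python):

      import numpy as np

      n_cells, d = 16, 0.5                     # cells, spacing (wavelengths)
      code = np.tile([0, 0, 1, 1], 4)          # 1-bit "0011" supercell sequence
      phase = code * np.pi                     # bit -> 0 or 180 deg reflection

      theta = np.radians(np.linspace(-90.0, 90.0, 721))
      pos = np.arange(n_cells) * d             # element positions (wavelengths)
      AF = np.abs(np.exp(1j * (phase +
                               2 * np.pi * pos * np.sin(theta[:, None])))
                  .sum(axis=1))                # far-field array factor
      print("specular (0 deg) level:", round(AF[360], 3))          # ~0
      print("strongest lobe at:",
            round(np.degrees(theta[AF.argmax()]), 1), "deg")       # ~±30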

  16. New schemes for internally contracted multi-reference configuration interaction

    NASA Astrophysics Data System (ADS)

    Wang, Yubin; Han, Huixian; Lei, Yibo; Suo, Bingbing; Zhu, Haiyan; Song, Qi; Wen, Zhenyi

    2014-10-01

    In this work we present a new internally contracted multi-reference configuration interaction (MRCI) scheme by applying the graphical unitary group approach and the hole-particle symmetry. The latter allows a Distinct Row Table (DRT) to split into a number of sub-DRTs in the active space. In the new scheme a contraction is defined as a linear combination of arcs within a sub-DRT, and connected to the head and tail of the DRT through up-steps and down-steps to generate internally contracted configuration functions. The new scheme deals with the closed-shell (hole) orbitals and external orbitals in the same manner and thus greatly simplifies calculations of coupling coefficients and CI matrix elements. As a result, the number of internal orbitals is no longer a bottleneck of MRCI calculations. The validity and efficiency of the new ic-MRCI code are tested by comparing with the corresponding WK code of the MOLPRO package. The energies obtained from the two codes are essentially identical, and the computational efficiencies of the two codes have their own advantages.

  17. Method and apparatus for reading lased bar codes on shiny-finished fuel rod cladding tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldenfield, M.P.; Lambert, D.V.

    1990-10-02

    This patent describes, in a nuclear fuel rod identification system, a method of reading a bar code etched directly on the surface of a nuclear fuel rod. It comprises: defining a pair of light diffuser surfaces adjacent one another but in oppositely inclined relation to a beam of light emitted from a light reader; positioning a fuel rod, having a cylindrical surface portion with a bar code etched directly thereon, relative to the light diffuser surfaces such that the surfaces are disposed adjacent to and in oppositely inclined relation along opposite sides of the fuel rod surface portion and the fuel rod surface portion is aligned with the beam of light emitted from the light reader; directing the beam of light on the bar code on the fuel rod cylindrical surface portion such that the light is reflected therefrom onto one of the light diffuser surfaces; and receiving and reading the reflected light from the bar code via the one of the light diffuser surfaces to the light reader.

  18. A numerical simulation of the flow in the diffuser of the NASA Lewis icing research tunnel

    NASA Technical Reports Server (NTRS)

    Addy, Harold E., Jr.; Keith, Theo G., Jr.

    1990-01-01

    The flow in the diffuser section of the Icing Research Tunnel at the NASA Lewis Research Center is numerically investigated. To accomplish this, an existing computer code is utilized. The code, known as PARC3D, is based on the Beam-Warming algorithm applied to the strong conservation law form of the complete Navier-Stokes equations. The first portion of the paper consists of a brief description of the diffuser and its current flow characteristics. A brief discussion of the code follows. Predicted velocity patterns are then compared with the measured values.

  19. Diffusive deposition of aerosols in Phebus containment during FPT-2 test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kontautas, A.; Urbonavicius, E.

    2012-07-01

    At present, lumped-parameter codes are the main tool to investigate the complex response of the containment of a Nuclear Power Plant in case of an accident. Continuous development and validation of the codes are required to perform realistic investigation of the processes that determine the possible source term of radioactive products to the environment. Validation of the codes is based on the comparison of the calculated results with measurements performed in experimental facilities. The most extensive experimental program to investigate fission product release from molten fuel, transport through the cooling circuit, and deposition in the containment is performed in the PHEBUS test facility. Test FPT-2, performed in this facility, is considered for analysis of the processes taking place in the containment. Earlier investigations using the COCOSYS code showed that the code could be successfully used for analysis of thermal-hydraulic processes and deposition of aerosols, but it was also noticed that diffusive deposition on the vertical walls does not fit well with the measured results. The CPA module of the ASTEC code implements a different model for diffusive deposition; therefore, the PHEBUS containment model was transferred from the COCOSYS code to ASTEC-CPA to investigate the influence of the diffusive deposition modelling. The analysis was performed using a PHEBUS containment model of 16 nodes. The calculated thermal-hydraulic parameters are in good agreement with measured results, which gives a basis for realistic simulation of aerosol transport and deposition processes. The performed investigations showed that the diffusive deposition model influences the aerosol deposition distribution on different surfaces in the test facility. (authors)

  20. Measurement equivalence: A non-technical primer on categorical multi-group confirmatory factor analysis in school psychology.

    PubMed

    Pendergast, Laura L; von der Embse, Nathaniel; Kilgus, Stephen P; Eklund, Katie R

    2017-02-01

    Evidence-based interventions (EBIs) have become a central component of school psychology research and practice, but EBIs are dependent upon the availability and use of evidence-based assessments (EBAs) with diverse student populations. Multi-group confirmatory factor analysis (MG-CFA) is an analytical tool that can be used to examine the validity and measurement equivalence/invariance of scores across diverse groups. The objective of this article is to provide a conceptual and procedural overview of categorical MG-CFA, as well as an illustrated example based on data from the Social and Academic Behavior Risk Screener (SABRS) - a tool designed for use in school-based interventions. This article serves as a non-technical primer on the topic of MG-CFA with ordinal (rating scale) data and does so through the framework of examining equivalence of measures used for EBIs within multi-tiered models - an understudied topic. To go along with the illustrated example, we have provided supplementary files that include sample data, Mplus input code, and an annotated guide for understanding the input code (http://dx.doi.org/10.1016/j.jsp.2016.11.002). Data needed to reproduce analyses in this article are available as supplemental materials (online only) in the Appendix of this article. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  1. A User's Guide to the Zwikker-Kosten Transmission Line Code (ZKTL)

    NASA Technical Reports Server (NTRS)

    Kelly, J. J.; Abu-Khajeel, H.

    1997-01-01

    This user's guide documents updates to the Zwikker-Kosten Transmission Line Code (ZKTL). This code was developed for analyzing new liner concepts developed to provide increased sound absorption. Contiguous arrays of multi-degree-of-freedom (MDOF) liner elements serve as the model for these liner configurations, and Zwikker and Kosten's theory of sound propagation in channels is used to predict the surface impedance. Transmission matrices for the various liner elements incorporate both analytical and semi-empirical methods. This allows standard matrix techniques to be employed in the code to systematically calculate the composite impedance due to the individual liner elements. The ZKTL code consists of four independent subroutines:
    1. Single channel impedance calculation - linear version (SCIC)
    2. Single channel impedance calculation - nonlinear version (SCICNL)
    3. Multi-channel, multi-segment, multi-layer impedance calculation - linear version (MCMSML)
    4. Multi-channel, multi-segment, multi-layer impedance calculation - nonlinear version (MCMSMLNL)
    Detailed examples, comments, and explanations for each liner impedance computation module are included. Also contained in the guide are depictions of the interactive execution, input files and output files.
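
    A minimal sketch of the transmission-matrix bookkeeping such a code performs: series elements multiply as 2x2 matrices acting on the (pressure, volume velocity) state, and a rigid backing turns the product into a surface impedance. The duct-segment matrix below is the lossless textbook one, not Zwikker and Kosten's dissipative version, and the parameters are illustrative (Python).

      import numpy as np

      def channel_matrix(k, Z0, L):
          # lossless duct segment: length L, wavenumber k, char. impedance Z0
          return np.array([[np.cos(k * L), 1j * Z0 * np.sin(k * L)],
                           [1j * np.sin(k * L) / Z0, np.cos(k * L)]])

      def surface_impedance(matrices):
          T = np.eye(2, dtype=complex)
          for M in matrices:                  # series elements multiply
              T = T @ M
          # rigid backing (zero velocity at termination) -> Z = T11 / T21
          return T[0, 0] / T[1, 0]

      segments = [channel_matrix(k=2.0, Z0=1.0, L=0.10),
                  channel_matrix(k=2.0, Z0=2.5, L=0.05)]   # two-layer cavity
      print("normalized surface impedance:", surface_impedance(segments))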

  2. Error diffusion concept for multi-level quantization

    NASA Astrophysics Data System (ADS)

    Broja, Manfred; Michalowski, Kristina; Bryngdahl, Olof

    1990-11-01

    The error diffusion binarization procedure is adapted to multi-level quantization. The threshold parameters then available have a noticeable influence on the process. Characteristic features of the technique are shown together with experimental results.
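
    A minimal multi-level error-diffusion sketch: the binarizer is replaced by a nearest-of-N-levels quantizer, and the residual is still diffused with the classical Floyd-Steinberg weights. Plain rounding is one simple choice of the threshold placement the abstract says matters (Python):

      import numpy as np

      def error_diffuse(img, levels=4):
          img = img.astype(float).copy()
          out = np.zeros_like(img)
          q = levels - 1
          h, w = img.shape
          for y in range(h):
              for x in range(w):
                  out[y, x] = np.round(img[y, x] * q) / q  # N-level quantizer
                  err = img[y, x] - out[y, x]              # residual to diffuse
                  if x + 1 < w:               img[y, x+1]   += err * 7 / 16
                  if y + 1 < h and x > 0:     img[y+1, x-1] += err * 3 / 16
                  if y + 1 < h:               img[y+1, x]   += err * 5 / 16
                  if y + 1 < h and x + 1 < w: img[y+1, x+1] += err * 1 / 16
          return out

      ramp = np.tile(np.linspace(0, 1, 64), (8, 1))    # grey-ramp test image
      print(np.unique(error_diffuse(ramp, levels=4)))  # exactly 4 output levels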

  3. A velocity-dependent anomalous radial transport model for (2-D, 2-V) kinetic transport codes

    NASA Astrophysics Data System (ADS)

    Bodi, Kowsik; Krasheninnikov, Sergei; Cohen, Ron; Rognlien, Tom

    2008-11-01

    Plasma turbulence constitutes a significant part of radial plasma transport in magnetically confined plasmas. This turbulent transport is modeled in the form of anomalous convection and diffusion coefficients in fluid transport codes. There is a need to model the same in continuum kinetic edge codes [such as the (2-D, 2-V) transport version of TEMPEST, NEO, and the code being developed by the Edge Simulation Laboratory] with non-Maxwellian distributions. We present an anomalous transport model with velocity-dependent convection and diffusion coefficients leading to a diagonal transport matrix similar to that used in contemporary fluid transport models (e.g., UEDGE). Also presented are results of simulations corresponding to radial transport due to long-wavelength ExB turbulence using a velocity-independent diffusion coefficient. A BGK collision model is used to enable comparison with fluid transport codes.
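
    In symbols, the diagonal ansatz described above gives a radial flux for each velocity (a generic statement of the model, not the paper's exact notation):

        \Gamma_{\mathrm{anom}}(r, v) = -D_{\mathrm{a}}(v)\,\frac{\partial f(r, v)}{\partial r} + V_{\mathrm{a}}(v)\, f(r, v)

    where f is the (possibly non-Maxwellian) distribution function and D_a(v), V_a(v) are the velocity-dependent anomalous diffusion and convection coefficients; taking the zeroth velocity moment recovers the fluid-code particle flux Γ = -D ∂n/∂r + V n.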

  4. Simulation Study of Structure and Properties of Plasma Liners for the PLX-α Project

    NASA Astrophysics Data System (ADS)

    Samulyak, Roman; Shih, Wen; Hsu, Scott; PLX-Alpha Team

    2017-10-01

    Detailed numerical studies of the propagation and merger of high-Mach-number plasma jets and of the formation and implosion of plasma liners have been performed using the FronTier code in support of the Plasma Liner Experiment-ALPHA (PLX-α) project. Physics models include radiation, physical diffusion, plasma-EOS models, and an anisotropic diffusion model that mimics deviations from fully collisional hydrodynamics in the outer layers of plasma jets. The detailed structure and non-uniformity of plasma liners due to primary and secondary shock waves have been studied, as well as the averaged quantities of ram pressure and Mach number. Synthetic data from simulations have been compared with available experimental data from a multi-chord interferometer and survey and high-resolution spectrometers. Numerical studies of the sensitivity of liner properties to experimental errors in the initial masses of the jets and the synchronization of the plasma gun valves have also been performed. Supported by the ARPA-E ALPHA program.

  5. A multigroup radiation diffusion test problem: Comparison of code results with analytic solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shestakov, A I; Harte, J A; Bolstad, J H

    2006-12-21

    We consider a 1D, slab-symmetric test problem for the multigroup radiation diffusion and matter energy balance equations. The test simulates diffusion of energy from a hot central region. Opacities vary with the cube of the frequency and radiation emission is given by a Wien spectrum. We compare results from two LLNL codes, Raptor and Lasnex, with tabular data that define the analytic solution.

  6. Exact Magnetic Diffusion Solutions for Magnetohydrodynamic Code Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, D S

    In this paper, the authors present several new exact analytic space and time dependent solutions to the problem of magnetic diffusion in R-Z geometry. These problems serve to verify several different elements of an MHD implementation: magnetic diffusion, external circuit time integration, current and voltage energy sources, spatially dependent conductivities, and ohmic heating. The exact solutions are shown in comparison with 2D simulation results from the Ares code.
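
    The verification pattern described above can be illustrated with the simplest exact solution of this kind: in 1-D slab geometry a sinusoidal field obeying ∂B/∂t = η ∂²B/∂z² decays mode-by-mode as B(z,t) = B₀ sin(kz) exp(-ηk²t). The sketch below compares an explicit finite-difference solution against it; this is a generic illustration, not one of the paper's R-Z solutions (Python).

      import numpy as np

      eta, k, B0 = 1.0, np.pi, 1.0            # diffusivity, mode, amplitude
      z = np.linspace(0.0, 1.0, 101)
      dz = z[1] - z[0]
      B = B0 * np.sin(k * z)                  # initial sinusoidal field

      t, dt = 0.0, 0.4 * dz * dz / eta        # stable explicit time step
      while t < 0.05:
          B[1:-1] += dt * eta * (B[2:] - 2 * B[1:-1] + B[:-2]) / dz**2
          t += dt
      exact = B0 * np.sin(k * z) * np.exp(-eta * k * k * t)
      print("max |numerical - exact| =", np.abs(B - exact).max())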

  7. MULTI2D - a computer code for two-dimensional radiation hydrodynamics

    NASA Astrophysics Data System (ADS)

    Ramis, R.; Meyer-ter-Vehn, J.; Ramírez, J.

    2009-06-01

    Simulation of radiation hydrodynamics in two spatial dimensions is developed, having in mind, in particular, target design for indirectly driven inertial fusion energy (IFE) and the interpretation of related experiments. Intense radiation pulses by laser or particle beams heat high-Z target configurations of different geometries and lead to a regime which is optically thick in some regions and optically thin in others. A diffusion description is inadequate in this situation. A new numerical code has been developed which describes hydrodynamics in two spatial dimensions (cylindrical R-Z geometry) and radiation transport along rays in three dimensions with the 4π solid angle discretized in direction. Matter moves on a non-structured mesh composed of trilateral and quadrilateral elements. Radiation flux of a given direction enters on two (one) sides of a triangle and leaves on the opposite side(s) in proportion to the viewing angles depending on the geometry. This scheme allows sharply edged beams to be propagated without ray tracing, though at the price of some lateral diffusion. The algorithm treats correctly both the optically thin and optically thick regimes. A symmetric semi-implicit (SSI) method is used to guarantee numerical stability.
    Program summary:
    Program title: MULTI2D
    Catalogue identifier: AECV_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECV_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 151 098
    No. of bytes in distributed program, including test data, etc.: 889 622
    Distribution format: tar.gz
    Programming language: C
    Computer: PC (32-bit architecture)
    Operating system: Linux/Unix
    RAM: 2 Mbytes
    Word size: 32 bits
    Classification: 19.7
    External routines: X-window standard library (libX11.so) and corresponding heading files (X11/*.h) are required.
    Nature of problem: In inertial confinement fusion and related experiments with lasers and particle beams, energy transport by thermal radiation becomes important. Under these conditions, the radiation field strongly interacts with the hydrodynamic motion through emission and absorption processes.
    Solution method: The equations of radiation transfer coupled with Lagrangian hydrodynamics, heat diffusion and beam tracing (laser or ions) are solved in two-dimensional axially symmetric geometry (R-Z coordinates) using a fractional step scheme. Radiation transfer is solved with angular resolution. Matter properties are either interpolated from tables (equations-of-state and opacities) or computed by user routines (conductivities and beam attenuation).
    Restrictions: The code has been designed for typical conditions prevailing in inertial confinement fusion (ns time scale, matter states close to local thermodynamical equilibrium, negligible radiation pressure, …). Although a wider range of situations can be treated, extrapolations to regions beyond this design range need special care.
    Unusual features: A special computer language, called r94, is used at top levels of the code. These parts have to be converted to standard C by a translation program (supplied as part of the package). Due to the complexity of the code (hydro-code, grid generation, user interface, graphic post-processor, translator program, installation scripts), extensive manuals are supplied as part of the package.
    Running time: 567 seconds for the example supplied.

  8. Simulation studies of chemical erosion on carbon based materials at elevated temperatures

    NASA Astrophysics Data System (ADS)

    Kenmotsu, T.; Kawamura, T.; Li, Zhijie; Ono, T.; Yamamura, Y.

    1999-06-01

    We simulated the fluence dependence of the methane reaction yield in carbon under hydrogen bombardment using the ACAT-DIFFUSE code. The ACAT-DIFFUSE code is a simulation code based on a Monte Carlo method with a binary collision approximation and on solving diffusion equations. The chemical reaction model in carbon was studied by Roth and other researchers. Roth's model is suitable for the steady-state methane reaction, but it cannot estimate the fluence dependence of the methane reaction. Therefore, we derived an empirical formula based on Roth's model for the methane reaction. In this empirical formula, we assumed a reaction region where chemical sputtering due to methane formation takes place; the reaction region corresponds to the peak range of the incident hydrogen distribution in the target material. We incorporated this empirical formula into the ACAT-DIFFUSE code. The simulation results indicate a fluence dependence similar to the experimental result. However, the fluence required to reach the steady state differs between the experimental and simulation results.

  9. Aerolization During Boron Nanoparticle Multi-Component Fuel Group Burning Studies

    DTIC Science & Technology

    2014-02-03

    [List-of-figures residue from the report: figure captions describing a benchtop flat-flame burner group-burning facility with an injector nozzle and an aerosol generator (Figures 2 and 6), including an injector nozzle assembly with a VOAG orifice, a translation stage, variable fuel and gas supply rates, and injector nozzles configurable to investigate diffusion and premixed flames.]

  10. Development of Numerical Tools for the Investigation of Plasma Detachment from Magnetic Nozzles

    NASA Technical Reports Server (NTRS)

    Sankaran, Kamesh; Polzin, Kurt A.

    2007-01-01

    A multidimensional numerical simulation framework aimed at investigating the process of plasma detachment from a magnetic nozzle is introduced. An existing numerical code based on a magnetohydrodynamic formulation of the plasma flow equations that accounts for various dispersive and dissipative processes in plasmas was significantly enhanced to allow for the modeling of axisymmetric domains containing three-dimensional momentum and magnetic flux vectors. A separate magnetostatic solver was used to simulate the applied magnetic field topologies found in various nozzle experiments. Numerical results from a magnetic diffusion test problem in which all three components of the magnetic field were present exhibit excellent quantitative agreement with the analytical solution, and the lack of numerical instabilities due to fluctuations in the value of ∇·B indicates that the conservative MHD framework with dissipative effects is well-suited for multi-dimensional analysis of magnetic nozzles. Further studies will focus on modeling literature experiments, both for the purpose of code validation and to extract physical insight regarding the mechanisms driving detachment.

  11. Spatially-Dependent Modelling of Pulsar Wind Nebula G0.9+0.1

    NASA Astrophysics Data System (ADS)

    van Rensburg, C.; Krüger, P. P.; Venter, C.

    2018-03-01

    We present results from a leptonic emission code that models the spectral energy distribution of a pulsar wind nebula by solving a Fokker-Planck-type transport equation and calculating inverse Compton and synchrotron emissivities. We have created this time-dependent, multi-zone model to investigate changes in the particle spectrum as particles traverse the pulsar wind nebula, by considering a time- and spatially-dependent B-field, a spatially-dependent bulk particle speed implying convection and adiabatic losses, diffusion, as well as radiative losses. Our code predicts the radiation spectrum at different positions in the nebula, yielding the surface brightness versus radius and the nebular size as a function of energy. We compare our new model against more basic models using the observed spectrum of pulsar wind nebula G0.9+0.1, incorporating data from H.E.S.S. as well as radio and X-ray experiments. We show that simultaneously fitting the spectral energy distribution and the energy-dependent source size leads to more stringent constraints on several model parameters.
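
    A generic form of such a Fokker-Planck-type transport equation, consistent with the ingredients listed above though not necessarily the paper's exact notation, is

        \frac{\partial N}{\partial t} = \nabla\cdot\left(\kappa\,\nabla N\right) - \nabla\cdot\left(\mathbf{V} N\right) + \frac{\partial}{\partial E}\left(\dot{E}\,N\right) + Q

    where N(r, E, t) is the lepton spectrum, V the bulk flow (whose divergence gives the adiabatic losses), κ the diffusion coefficient, Ė the synchrotron and inverse Compton loss rate, and Q the injection source.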

  12. Quantum Monte Carlo Endstation for Petascale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lubos Mitas

    2011-01-26

    The NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as a part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; and expanding, explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments build upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules, such as the evaluation of wave functions and orbitals, calculation of pfaffians, and the introduction of backflow coordinates, together with the overall organization of the code and random-walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing, and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc and, partially, three graduate students over the period of the grant; it has resulted in 13 published papers and 15 invited talks and lectures nationally and internationally. My former graduate student and postdoc Dr. Michal Bajdich, who was supported by this grant, is currently a postdoc with ORNL in the group of Dr. F. Reboredo and Dr. P. Kent and is using the developed tools in a number of DOE projects. The QWalk package has become a truly important research tool used by the electronic structure community and has attracted several new developers in other research groups. Our tools use several types of correlated wavefunction approaches (variational, diffusion and reptation methods) and large-scale optimization methods for wavefunctions, and enable calculation of energy differences such as cohesion and electronic gaps, as well as densities and other properties; using multiple runs one can obtain equations of state for given structures and beyond. Our codes use efficient numerical and Monte Carlo strategies (high-accuracy numerical orbitals, multi-reference wave functions, highly accurate correlation factors, pairing orbitals, force-biased and correlated-sampling Monte Carlo), are robustly parallelized, and run very efficiently on tens of thousands of cores. Our demonstration applications focused on challenging research problems in several fields of materials science, such as transition metal solids. We note that our study of FeO solid was the first QMC calculation of transition metal oxides at high pressures.

  13. Overview of NASA Multi-dimensional Stirling Convertor Code Development and Validation Effort

    NASA Technical Reports Server (NTRS)

    Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David

    2002-01-01

    A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code with the goals of improving loss predictions and identifying component areas for improvements. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experiences have demonstrated large performance gains by varying manifolds and heat exchanger designs to improve flow distributions in the heat exchangers. 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, sensitivity of performance to slight changes in internal geometry, and, in general, the understanding of various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors, and moving boundary capability. Preliminary attempts at validation of CFD-ACE models of MIT gas spring and "two space" test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this effort.

  14. Software engineering and automatic continuous verification of scientific software

    NASA Astrophysics Data System (ADS)

    Piggott, M. D.; Hill, J.; Farrell, P. E.; Kramer, S. C.; Wilson, C. R.; Ham, D.; Gorman, G. J.; Bond, T.

    2011-12-01

    Software engineering of scientific code is challenging for a number of reasons, including pressure to publish and a lack of awareness of the pitfalls of software engineering by scientists. The Applied Modelling and Computation Group at Imperial College is a diverse group of researchers that employ best-practice software engineering methods whilst developing open source scientific software. Our main code is Fluidity - a multi-purpose computational fluid dynamics (CFD) code that can be used for a wide range of scientific applications from earth-scale mantle convection, through basin-scale ocean dynamics, to laboratory-scale classic CFD problems, and is coupled to a number of other codes including nuclear radiation and solid modelling. Our software development infrastructure consists of a number of free tools that could be employed by any group developing scientific code; it has been developed over a number of years, with many lessons learnt. A single code base is developed by over 30 people, for which we use bazaar for revision control, making good use of the strong branching and merging capabilities. Using features of Canonical's Launchpad platform, such as code review, blueprints for designing features and bug reporting, gives the group, partners and other Fluidity users an easy-to-use platform to collaborate and allows the induction of new members of the group into an environment where software development forms a central part of their work. The code repository is coupled to an automated test and verification system which performs over 20,000 tests, including unit tests, short regression tests, code verification and large parallel tests. Included in these tests are build tests on HPC systems, including local and UK National HPC services. The testing of code in this manner leads to a continuous verification process; not a discrete event performed once development has ceased. Much of the code verification is done against the "gold standard" of comparison with analytical solutions obtained through the method of manufactured solutions. By developing and verifying code in tandem we avoid a number of pitfalls in scientific software development and advocate similar procedures for other scientific code applications.

  15. Evaluation of Cueing Innovation for Pressure Ulcer Prevention Using Staff Focus Groups.

    PubMed

    Yap, Tracey L; Kennerly, Susan; Corazzini, Kirsten; Porter, Kristie; Toles, Mark; Anderson, Ruth A

    2014-07-25

    The purpose of the manuscript is to describe long-term care (LTC) staff perceptions of a music cueing intervention designed to improve staff integration of pressure ulcer (PrU) prevention guidelines regarding consistent and regular movement of LTC residents a minimum of every two hours. The Diffusion of Innovation (DOI) model guided staff interviews about their perceptions of the intervention's characteristics, outcomes, and sustainability. This was a qualitative, observational study of staff perceptions of the PrU prevention intervention conducted in Midwestern U.S. LTC facilities (N = 45 staff members). One focus group was held in each of eight intervention facilities using a semi-structured interview protocol. Transcripts were analyzed using thematic content analysis, and summaries for each category were compared across groups. The a priori codes (observability, trialability, compatibility, relative advantage and complexity) described the innovation characteristics, and the sixth code, sustainability, was identified in the data. Within each code, two themes emerged as a positive or negative response regarding characteristics of the innovation. Moreover, within the sustainability code, a third theme emerged that was labeled "brainstormed ideas", focusing on strategies for improving the innovation. Cueing LTC staff using music offers a sustainable potential to improve PrU prevention practices, to increase resident movement, which can subsequently lead to a reduction in PrUs.

  16. Chemical ageing and transformation of diffusivity in semi-solid multi-component organic aerosol particles

    NASA Astrophysics Data System (ADS)

    Pfrang, C.; Shiraiwa, M.; Pöschl, U.

    2011-04-01

    Recent experimental evidence underlines the importance of reduced diffusivity in amorphous semi-solid or glassy atmospheric aerosols. This paper investigates the impact of diffusivity on the ageing of multi-component reactive organic particles representative of atmospheric cooking aerosols. We apply and extend the recently developed KM-SUB model in a study of a 12-component mixture containing oleic and palmitoleic acids. We demonstrate that changes in the diffusivity may explain the evolution of chemical loss rates in ageing semi-solid particles, and we resolve surface and bulk processes under transient reaction conditions considering diffusivities altered by oligomerisation. This new model treatment allows prediction of the ageing of mixed organic multi-component aerosols over atmospherically relevant time scales and conditions. We illustrate the impact of changing diffusivity on the chemical half-life of reactive components in semi-solid particles, and we demonstrate how solidification and crust formation at the particle surface can affect the chemical transformation of organic aerosols.

  17. Chemical ageing and transformation of diffusivity in semi-solid multi-component organic aerosol particles

    NASA Astrophysics Data System (ADS)

    Pfrang, C.; Shiraiwa, M.; Pöschl, U.

    2011-07-01

    Recent experimental evidence underlines the importance of reduced diffusivity in amorphous semi-solid or glassy atmospheric aerosols. This paper investigates the impact of diffusivity on the ageing of multi-component reactive organic particles approximating atmospheric cooking aerosols. We apply and extend the recently developed KM-SUB model in a study of a 12-component mixture containing oleic and palmitoleic acids. We demonstrate that changes in the diffusivity may explain the evolution of chemical loss rates in ageing semi-solid particles, and we resolve surface and bulk processes under transient reaction conditions considering diffusivities altered by oligomerisation. This new model treatment allows prediction of the ageing of mixed organic multi-component aerosols over atmospherically relevant timescales and conditions. We illustrate the impact of changing diffusivity on the chemical half-life of reactive components in semi-solid particles, and we demonstrate how solidification and crust formation at the particle surface can affect the chemical transformation of organic aerosols.

  18. Adapting hierarchical bidirectional inter prediction on a GPU-based platform for 2D and 3D H.264 video coding

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sánchez, Rafael; Martínez, José Luis; Cock, Jan De; Fernández-Escribano, Gerardo; Pieters, Bart; Sánchez, José L.; Claver, José M.; de Walle, Rik Van

    2013-12-01

    The H.264/AVC video coding standard introduces some improved tools in order to increase compression efficiency. Moreover, the multi-view extension of H.264/AVC, called H.264/MVC, adopts many of them. Among the new features, variable block-size motion estimation is one which contributes to high coding efficiency. Furthermore, it defines a different prediction structure that includes hierarchical bidirectional pictures, outperforming traditional Group of Pictures patterns in both scenarios: single-view and multi-view. However, these video coding techniques have high computational complexity. Several techniques have been proposed in the literature over the last few years which are aimed at accelerating the inter prediction process, but there are no works focusing on bidirectional prediction or hierarchical prediction. In this article, with the emergence of many-core processors or accelerators, a step forward is taken towards an implementation of an H.264/AVC and H.264/MVC inter prediction algorithm on a graphics processing unit. The results show a negligible rate distortion drop with a time reduction of up to 98% for the complete H.264/AVC encoder.

  19. Multi-Algorithm Particle Simulations with Spatiocyte.

    PubMed

    Arjunan, Satya N V; Takahashi, Koichi

    2017-01-01

    As quantitative biologists get more measurements of spatially regulated systems such as cell division and polarization, simulation of reaction and diffusion of proteins using the data is becoming increasingly relevant to uncover the mechanisms underlying the systems. Spatiocyte is a lattice-based stochastic particle simulator for biochemical reaction and diffusion processes. Simulations can be performed at single molecule and compartment spatial scales simultaneously. Molecules can diffuse and react in 1D (filament), 2D (membrane), and 3D (cytosol) compartments. The implications of crowded regions in the cell can be investigated because each diffusing molecule has spatial dimensions. Spatiocyte adopts multi-algorithm and multi-timescale frameworks to simulate models that simultaneously employ deterministic, stochastic, and particle reaction-diffusion algorithms. Comparison of light microscopy images to simulation snapshots is supported by Spatiocyte microscopy visualization and molecule tagging features. Spatiocyte is open-source software and is freely available at http://spatiocyte.org.
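
    The excluded-volume (crowding) effect that Spatiocyte captures can be sketched with a toy square-lattice walk in which a move is rejected when the target site is occupied; this is a generic illustration, not Spatiocyte's hexagonal close-packed lattice algorithm.

        import random

        def sweep(occupied, width, height):
            """One sweep of a 2D lattice walk with excluded volume (crowding)."""
            moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
            for pos in list(occupied):
                dx, dy = random.choice(moves)
                new = ((pos[0] + dx) % width, (pos[1] + dy) % height)
                if new not in occupied:      # reject moves onto occupied sites
                    occupied.remove(pos)
                    occupied.add(new)

        # Up to 500 particles on a 50 x 50 periodic lattice.
        occupied = {(random.randrange(50), random.randrange(50)) for _ in range(500)}
        for _ in range(100):
            sweep(occupied, 50, 50)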

  20. CoFlame: A refined and validated numerical algorithm for modeling sooting laminar coflow diffusion flames

    NASA Astrophysics Data System (ADS)

    Eaves, Nick A.; Zhang, Qingan; Liu, Fengshan; Guo, Hongsheng; Dworkin, Seth B.; Thomson, Murray J.

    2016-10-01

    Mitigation of soot emissions from combustion devices is a global concern. For example, recent EURO 6 regulations for vehicles have placed stringent limits on soot emissions. In order to allow design engineers to achieve the goal of reduced soot emissions, they must have the tools to do so. Due to the complex nature of soot formation, which includes growth and oxidation, detailed numerical models are required to gain fundamental insights into the mechanisms of soot formation. A detailed description of the CoFlame FORTRAN code, which models sooting laminar coflow diffusion flames, is given. The code solves axial and radial velocity, temperature, species conservation, and soot aggregate and primary particle number density equations. The sectional particle dynamics model includes nucleation, PAH condensation and HACA surface growth, surface oxidation, coagulation, fragmentation, particle diffusion, and thermophoresis. The code utilizes a distributed memory parallelization scheme with strip-domain decomposition. The public release of the CoFlame code, which has been refined in terms of coding structure, to the research community accompanies this paper. CoFlame is validated against experimental data for reattachment length in an axi-symmetric pipe with a sudden expansion, and against ethylene-air and methane-air diffusion flames for multiple soot morphological parameters and gas-phase species. Finally, the parallel performance and computational costs of the code are investigated.

  1. Influences on day-to-day self-management of type 2 diabetes among African-American women: spirituality, the multi-caregiver role, and other social context factors.

    PubMed

    Samuel-Hodge, C D; Headen, S W; Skelly, A H; Ingram, A F; Keyserling, T C; Jackson, E J; Ammerman, A S; Elasy, T A

    2000-07-01

    Many African-American women are affected by diabetes and its complications, and culturally appropriate lifestyle interventions that lead to improvements in glycemic control are urgently needed. The aim of this qualitative study was to identify culturally relevant psychosocial issues and social context variables influencing lifestyle behaviors--specifically diet and physical activity--of southern African-American women with diabetes. We conducted 10 focus group interviews with 70 southern African-American women with type 2 diabetes. Group interviews were audiotaped and transcripts were coded using qualitative data analysis software. A panel of reviewers analyzed the coded responses for emerging themes and trends. The dominant and most consistent themes that emerged from these focus groups were 1) spirituality as an important factor in general health, disease adjustment, and coping; 2) general life stress and multi-caregiving responsibilities interfering with daily disease management; and 3) the impact of diabetes manifested in feelings of dietary deprivation, physical and emotional "tiredness," "worry," and fear of diabetes complications. Our findings suggest that influences on diabetes self-management behaviors of African-American women may be best understood from a sociocultural and family context. Interventions to improve self-management for this population should recognize the influences of spirituality, general life stress, multi-caregiving responsibilities, and the psychological impact of diabetes. These findings suggest that family-centered and church-based approaches to diabetes care interventions are appropriate.

  2. Design of convolutional tornado code

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and to reduce computational complexity by abrogating the multi-level structure. The simulation results show that the cTN code can provide better packet loss protection performance with lower computational complexity than the tTN code.

  3. The three-zone composite productivity model for a multi-fractured horizontal shale gas well

    NASA Astrophysics Data System (ADS)

    Qi, Qian; Zhu, Weiyao

    2018-02-01

    Due to the nano-micro pore structures and the massive multi-stage multi-cluster hydraulic fracturing in shale gas reservoirs, the multi-scale seepage flows are much more complicated than in most other conventional reservoirs, and are crucial for the economic development of shale gas. In this study, a new multi-scale non-linear flow model was established and simplified, based on different diffusion and slip correction coefficients. Because different flow laws govern the fracture network and the matrix zone, a three-zone composite model was proposed. Then, according to the conformal transformation combined with the law of equivalent percolation resistance, the productivity equation of a horizontal fractured well, with consideration given to diffusion, slip, desorption, and absorption, was built. An analytic solution was derived, and the interference of the multi-cluster fractures was analyzed. The results indicated that the diffusion of the shale gas was mainly in the transition and Fick diffusion regions. The matrix permeability was found to be influenced by slippage and diffusion, which were determined by the pore pressure and diameter according to the Knudsen number. With increased half-lengths of the fracture clusters, flow conductivity of the fractures, and permeability of the fracture network, the productivity of the fractured well also increased. Meanwhile, as the number of fractures increased, the distance between the fractures decreased, and the productivity increased only slowly due to the mutual interference of the fractures.
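
    The role of the Knudsen number mentioned above can be made concrete with the standard kinetic-theory mean free path and the usual flow-regime boundaries; the molecular diameter, temperature, and regime cut-offs below are generic textbook values, not the paper's coefficients.

        import math

        KB = 1.380649e-23  # Boltzmann constant, J/K

        def knudsen(p, d_pore, t=350.0, d_mol=0.38e-9):
            """Knudsen number for gas at pressure p (Pa) in a pore of diameter d_pore (m)."""
            mfp = KB * t / (math.sqrt(2.0) * math.pi * d_mol ** 2 * p)  # mean free path
            return mfp / d_pore

        def regime(kn):
            if kn < 1e-3: return "continuum (Darcy)"
            if kn < 0.1:  return "slip flow"
            if kn < 10.0: return "transition"
            return "free-molecular (Fick diffusion)"

        kn = knudsen(p=10e6, d_pore=5e-9)   # 10 MPa in a 5 nm nano-pore
        print(kn, regime(kn))               # falls in the transition region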

  4. Incidence of Pulmonary Disease in Inflammatory Bowel Disease

    DTIC Science & Technology

    2017-03-30

    [Fragment: ...were conducted according to the principles set forth in National Institutes of Health Publication No. 80-23, Guide for the Care and Use of... Multi-Market. This was used as the study group and was cross-referenced by the SAMMC HCO for the ICD-9 codes of pulmonary diagnoses (see Table 1).]

  5. A Sparse Bayesian Learning Algorithm for White Matter Parameter Estimation from Compressed Multi-shell Diffusion MRI.

    PubMed

    Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe

    2017-09-01

    We propose a sparse Bayesian learning algorithm for improved estimation of white matter fiber parameters from compressed (under-sampled q-space) multi-shell diffusion MRI data. The multi-shell data is represented in a dictionary form using a non-monoexponential decay model of diffusion, based on a continuous gamma distribution of diffusivities. The fiber volume fractions with predefined orientations, which are the unknown parameters, form the dictionary weights. These unknown parameters are estimated with a linear un-mixing framework, using a sparse Bayesian learning algorithm. A localized learning of hyperparameters at each voxel and for each possible fiber orientation improves the parameter estimation. Our experiments using synthetic data from the ISBI 2012 HARDI reconstruction challenge and in-vivo data from the Human Connectome Project demonstrate the improvements.
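
    The gamma-distribution decay model has a closed form: integrating exp(-bD) over a gamma density of diffusivities gives S(b)/S0 = (1 + b*theta)^(-k). A toy version of the dictionary-based linear un-mixing can then be set up with non-negative least squares standing in for the paper's sparse Bayesian learning; the (k, theta) atoms below are hypothetical.

        import numpy as np
        from scipy.optimize import nnls

        bvals = np.array([0.0, 1000.0, 2000.0, 3000.0])    # s/mm^2, multi-shell
        atoms = [(1.5, 4e-4), (2.0, 6e-4), (3.0, 8e-4)]    # hypothetical (k, theta)

        # Dictionary: column j is the decay (1 + b*theta_j)^(-k_j) at each b-value.
        A = np.column_stack([(1.0 + bvals * th) ** (-k) for k, th in atoms])

        # Synthetic measurement from known weights, then non-negative un-mixing.
        w_true = np.array([0.7, 0.0, 0.3])
        w_est, _ = nnls(A, A @ w_true)
        print(w_est)   # recovers approximately w_true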

  6. Quenching star formation with quasar outflows launched by trapped IR radiation

    NASA Astrophysics Data System (ADS)

    Costa, Tiago; Rosdahl, Joakim; Sijacki, Debora; Haehnelt, Martin G.

    2018-06-01

    We present cosmological radiation-hydrodynamic simulations, performed with the code RAMSES-RT, of radiatively-driven outflows in a massive quasar host halo at z = 6. Our simulations include both single- and multi-scattered radiation pressure on dust from a quasar and are compared against simulations performed with thermal feedback. For radiation pressure-driving, we show that there is a critical quasar luminosity above which a galactic outflow is launched, set by the equilibrium of gravitational and radiation forces. While this critical luminosity is unrealistically high in the single-scattering limit for plausible black hole masses, it is in line with a ≈ 3 × 10^9 M_⊙ black hole accreting at its Eddington limit, if infrared (IR) multi-scattering radiation pressure is included. The outflows are fast (v ≳ 1000 km s^{-1}) and strongly mass-loaded with peak mass outflow rates ≈ 10^3 - 10^4 M_⊙ yr^{-1}, but short-lived (< 10 Myr). Outflowing material is multi-phase, though predominantly composed of cool gas, forming via a thermal instability in the shocked swept-up component. Radiation pressure- and thermally-driven outflows both affect their host galaxies significantly, but in different, complementary ways. Thermally-driven outflows couple more efficiently to diffuse halo gas, generating more powerful, hotter and more volume-filling outflows. IR radiation, through its ability to penetrate dense gas via diffusion, is more efficient at ejecting gas from the bulge. The combination of gas ejection through outflows with internal pressurisation by trapped IR radiation leads to a complete shut down of star formation in the bulge. We hence argue that radiation pressure-driven feedback may be an important ingredient in regulating star formation in compact starbursts, especially during the quasar's `obscured' phase.
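
    The quoted black-hole mass can be sanity-checked against the single-scattering (Eddington) limit, L_Edd = 4*pi*G*M*m_p*c/sigma_T; for M ≈ 3 × 10^9 M_⊙ this gives roughly 4 × 10^47 erg/s. This is a back-of-envelope check, not the simulations' critical luminosity, which also depends on the IR optical depth.

        import math

        G, C = 6.674e-11, 2.998e8           # SI units
        M_P, SIGMA_T = 1.673e-27, 6.652e-29
        M_SUN = 1.989e30

        def l_eddington(m_bh_solar):
            """Eddington luminosity in erg/s for a black hole of m_bh_solar solar masses."""
            l_watts = 4.0 * math.pi * G * (m_bh_solar * M_SUN) * M_P * C / SIGMA_T
            return l_watts * 1e7            # W -> erg/s

        print(f"{l_eddington(3e9):.2e}")    # ~3.8e47 erg/s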

  7. White matter microstructure and volitional motor activity in schizophrenia: A diffusion kurtosis imaging study.

    PubMed

    Docx, Lise; Emsell, Louise; Van Hecke, Wim; De Bondt, Timo; Parizel, Paul M; Sabbe, Bernard; Morrens, Manuel

    2017-02-28

    Avolition is a core feature of schizophrenia and may arise from altered brain connectivity. Here we used diffusion kurtosis imaging (DKI) to investigate the association between white matter (WM) microstructure and volitional motor activity. Multi-shell diffusion MRI and 24-h actigraphy data were obtained from 20 right-handed patients with schizophrenia and 16 right-handed age and gender matched healthy controls. We examined correlations between fractional anisotropy (FA), mean diffusivity (MD), mean kurtosis (MK), and motor activity level, as well as group differences in these measures. In the patient group, increasing motor activity level was positively correlated with MK in the inferior, medial and superior longitudinal fasciculus, the corpus callosum, the posterior fronto-occipital fasciculus and the posterior cingulum. This association was not found in control subjects or in DTI measures. These results show that a lack of volitional motor activity in schizophrenia is associated with potentially altered WM microstructure in posterior brain regions associated with cognitive function and motivation. This could reflect both illness related dysconnectivity which through altered cognition, manifests as reduced volitional motor activity, and/or the effects of reduced physical activity on brain WM. Copyright © 2016. Published by Elsevier B.V.

  8. Multi-Group Formulation of the Temperature-Dependent Resonance Scattering Model and its Impact on Reactor Core Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghrayeb, Shadi Z.; Ougouag, Abderrafi M.; Ouisloumen, Mohamed

    2014-01-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. It incorporates the neutron up-scattering effects stemming from the thermal motion of lattice atoms and accounts for them within the resulting effective nuclear cross-section data. The effects pertain essentially to resonant scattering off of heavy nuclei. The formulation, implemented into a standalone code, produces effective nuclear scattering data that are then supplied directly into the DRAGON lattice physics code, where the effects on Doppler reactivity and neutron flux are demonstrated. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. The results show an increase in the values of the Doppler temperature feedback coefficients of up to -10% for UOX and MOX LWR fuels compared to the corresponding values derived using the traditional asymptotic elastic scattering kernel. This paper also summarizes the work done on this topic to date.

  9. Multi-site genetic analysis of diffusion images and voxelwise heritability analysis: A pilot project of the ENIGMA–DTI working group

    PubMed Central

    Jahanshad, Neda; Kochunov, Peter; Sprooten, Emma; Mandl, René C.; Nichols, Thomas E.; Almassy, Laura; Blangero, John; Brouwer, Rachel M.; Curran, Joanne E.; de Zubicaray, Greig I.; Duggirala, Ravi; Fox, Peter T.; Hong, L. Elliot; Landman, Bennett A.; Martin, Nicholas G.; McMahon, Katie L.; Medland, Sarah E.; Mitchell, Braxton D.; Olvera, Rene L.; Peterson, Charles P.; Starr, John M.; Sussmann, Jessika E.; Toga, Arthur W.; Wardlaw, Joanna M.; Wright, Margaret J.; Hulshoff Pol, Hilleke E.; Bastin, Mark E.; McIntosh, Andrew M.; Deary, Ian J.; Thompson, Paul M.; Glahn, David C.

    2013-01-01

    The ENIGMA (Enhancing NeuroImaging Genetics through Meta-Analysis) Consortium was set up to analyze brain measures and genotypes from multiple sites across the world to improve the power to detect genetic variants that influence the brain. Diffusion tensor imaging (DTI) yields quantitative measures sensitive to brain development and degeneration, and some common genetic variants may be associated with white matter integrity or connectivity. DTI measures, such as the fractional anisotropy (FA) of water diffusion, may be useful for identifying genetic variants that influence brain microstructure. However, genome-wide association studies (GWAS) require large populations to obtain sufficient power to detect and replicate significant effects, motivating a multi-site consortium effort. As part of an ENIGMA–DTI working group, we analyzed high-resolution FA images from multiple imaging sites across North America, Australia, and Europe, to address the challenge of harmonizing imaging data collected at multiple sites. Four hundred images of healthy adults aged 18–85 from four sites were used to create a template and corresponding skeletonized FA image as a common reference space. Using twin and pedigree samples of different ethnicities, we used our common template to evaluate the heritability of tract-derived FA measures. We show that our template is reliable for integrating multiple datasets by combining results through meta-analysis and unifying the data through exploratory mega-analyses. Our results may help prioritize regions of the FA map that are consistently influenced by additive genetic factors for future genetic discovery studies. Protocols and templates are publicly available at (http://enigma.loni.ucla.edu/ongoing/dti-working-group/). PMID:23629049

  10. A new Monte Carlo code for light transport in biological tissue.

    PubMed

    Torres-García, Eugenio; Oros-Pantoja, Rigoberto; Aranda-Lara, Liliana; Vieyra-Reyes, Patricia

    2018-04-01

    The aim of this work was to develop an event-by-event Monte Carlo code for light transport (called MCLTmx) to identify and quantify ballistic, diffuse, and absorbed photons, as well as their interaction coordinates inside biological tissue. The mean free path length was computed between two interactions for scattering or absorption processes, and scatter angles were calculated when necessary, until the photon disappeared or left the region of interest. A three-layer array (air-tissue-air) was used, forming a semi-infinite sandwich. The light source was placed at (0,0,0), emitting towards (0,0,1). The input data were: refractive indices, target thickness (0.02, 0.05, 0.1, 0.5, and 1 cm), number of particle histories, and λ, from which the code calculated anisotropy, scattering, and absorption coefficients. Validation showed differences of less than 0.1% compared with results reported in the literature. The MCLTmx code discriminates between ballistic and diffuse photons, and inside biological tissue it calculates specular reflection, diffuse reflection, ballistic transmission, diffuse transmission and absorption, all as functions of wavelength and thickness. The MCLTmx code can be useful for light transport inside any medium by changing the parameters that describe the new medium: anisotropy, scattering and attenuation coefficients, and refractive indices for a specific wavelength.
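
    The event-by-event scheme described (sample a free path, choose scattering or absorption, sample a deflection) can be sketched for a single slab; the Henyey-Greenstein phase function used here is the common choice in tissue optics, though the paper does not state its phase function, so treat this as a generic illustration.

        import math, random

        def simulate(n=100000, mu_s=10.0, mu_a=1.0, g=0.9, thickness=0.1):
            """Toy slab Monte Carlo: tally ballistic, diffuse, and absorbed photons."""
            mu_t = mu_s + mu_a
            tally = {"ballistic": 0, "diffuse": 0, "absorbed": 0}
            for _ in range(n):
                z, uz, scattered = 0.0, 1.0, False
                while True:
                    z += uz * (-math.log(random.random()) / mu_t)  # sampled free path
                    if z >= thickness:                             # transmitted
                        tally["diffuse" if scattered else "ballistic"] += 1
                        break
                    if z <= 0.0:                                   # back-scattered out
                        tally["diffuse"] += 1
                        break
                    if random.random() < mu_a / mu_t:              # absorbed
                        tally["absorbed"] += 1
                        break
                    # Henyey-Greenstein polar angle, uniform azimuth; update z-cosine.
                    scattered = True
                    f = (1 - g * g) / (1 - g + 2 * g * random.random())
                    ct = (1 + g * g - f * f) / (2 * g)
                    st = math.sqrt(max(0.0, 1 - ct * ct))
                    uz = (uz * ct + math.sqrt(max(0.0, 1 - uz * uz)) * st
                          * math.cos(2 * math.pi * random.random()))
            return tally

        print(simulate())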

  11. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leal, L.C.; Deen, J.R.; Woodruff, W.L.

    1995-02-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure to generate cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly-enriched heavy-water moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  12. On codes with multi-level error-correction capabilities

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1987-01-01

    In conventional coding for error control, all the information symbols of a message are regarded as equally significant, and hence codes are devised to provide equal protection for each information symbol against channel errors. However, on some occasions, some information symbols in a message are more significant than others. As a result, it is desirable to devise codes with multi-level error-correcting capabilities. Another situation where codes with multi-level error-correcting capabilities are desired is in broadcast communication systems. An m-user broadcast channel has one input and m outputs. The single input and each output form a component channel. The component channels may have different noise levels, and hence the messages transmitted over the component channels require different levels of protection against errors. Block codes with multi-level error-correcting capabilities are also known as unequal error protection (UEP) codes. Structural properties of these codes are derived. Based on these structural properties, two classes of UEP codes are constructed.

  13. Network Coding on Heterogeneous Multi-Core Processors for Wireless Sensor Networks

    PubMed Central

    Kim, Deokho; Park, Karam; Ro, Won W.

    2011-01-01

    While network coding is well known for its efficiency and usefulness in wireless sensor networks, the excessive costs associated with decoding computation and complexity still hinder its adoption into practical use. On the other hand, high-performance microprocessors with heterogeneous multi-cores would be used as processing nodes of the wireless sensor networks in the near future. To this end, this paper introduces an efficient network coding algorithm developed for heterogeneous multi-core processors. The proposed idea is fully tested on one of the currently available heterogeneous multi-core processors, the Cell Broadband Engine. PMID:22164053
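
    The encoding side of random linear network coding is easy to sketch over GF(2), where field addition is bitwise XOR; decoding requires Gaussian elimination over the same field, which is the computational burden such work distributes across cores. This is a generic sketch, not the paper's Cell implementation.

        import random

        def encode(packets, n_coded):
            """Random linear combinations over GF(2): XOR packets chosen by random bits."""
            coded = []
            for _ in range(n_coded):
                coeffs = [random.randint(0, 1) for _ in packets]
                payload = bytes(len(packets[0]))
                for c, p in zip(coeffs, packets):
                    if c:
                        payload = bytes(a ^ b for a, b in zip(payload, p))
                coded.append((coeffs, payload))   # coefficients travel with the packet
            return coded

        source = [b"abcd", b"efgh", b"ijkl"]
        for coeffs, payload in encode(source, 5):
            print(coeffs, payload)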

  14. Approximate series solution of multi-dimensional, time fractional-order (heat-like) diffusion equations using FRDTM.

    PubMed

    Singh, Brajesh K; Srivastava, Vineet K

    2015-04-01

    The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations.
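
    For the one-dimensional member of this family, D_t^alpha u = u_xx, the FRDTM recurrence is U_{k+1}(x) = Gamma(alpha*k + 1)/Gamma(alpha*(k+1) + 1) * d^2 U_k/dx^2, with u(x,t) approximated by the sum of U_k(x) t^(alpha*k). A few terms can be generated symbolically; this sketch assumes that test-problem form and is not the paper's own code.

        import sympy as sp

        x, t, alpha = sp.symbols("x t alpha", positive=True)

        def frdtm_series(u0, n_terms=4):
            """Truncated FRDTM series for D_t^alpha u = u_xx with u(x,0) = u0."""
            u_k, series = u0, u0
            for k in range(n_terms - 1):
                u_k = (sp.gamma(alpha * k + 1) / sp.gamma(alpha * (k + 1) + 1)
                       * sp.diff(u_k, x, 2))
                series += u_k * t ** (alpha * (k + 1))
            return series

        # With u(x,0) = sin(x) the terms build sin(x) * E_alpha(-t^alpha), the
        # Mittag-Leffler solution; alpha = 1 recovers sin(x) * exp(-t).
        print(frdtm_series(sp.sin(x)))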

  15. Approximate series solution of multi-dimensional, time fractional-order (heat-like) diffusion equations using FRDTM

    PubMed Central

    Singh, Brajesh K.; Srivastava, Vineet K.

    2015-01-01

    The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations. PMID:26064639

  16. Results of the Simulation of the HTR-Proteus Core 4.2 Using PEBBED-COMBINE: FY10 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hans Gougar

    2010-07-01

    The Idaho National Laboratory’s deterministic neutronics analysis codes and methods were applied to the computation of the core multiplication factor of the HTR-Proteus pebble bed reactor critical facility. This report is a follow-on to INL/EXT-09-16620, in which the same calculation was performed but using earlier versions of the codes and less developed methods. In that report, results indicated that the cross sections generated using COMBINE-7.0 did not yield satisfactory estimates of keff. It was concluded in the report that the modeling of control rods was not satisfactory. In the past year, improvements to the homogenization capability in COMBINE have enabled the explicit modeling of TRISO particles, pebbles, and heterogeneous core zones including control rod regions, using a new multi-scale version of COMBINE in which the 1-dimensional discrete-ordinates transport code ANISN has been integrated. The new COMBINE is shown to yield benchmark-quality results for pebble unit cell models, the first step in preparing few-group diffusion parameters for core simulations. In this report, the full critical core is modeled once again but with cross sections generated using the capabilities and physics of the improved COMBINE code. The new PEBBED-COMBINE model enables the exact modeling of the pebbles and control rod region along with a better approximation to structures in the reflector. Initial results for the core multiplication factor indicate significant improvement in the INL's tools for modeling the neutronic properties of a pebble bed reactor. Errors on the order of 1.6-2.5% in keff are obtained, a significant improvement over the 5-6% error observed in the earlier report. This is acceptable for a code system and model in the early stages of development, but still too high for a production code. Analysis of a simpler core model indicates an over-prediction of the flux in the low end of the thermal spectrum. Causes of this discrepancy are under investigation. New homogenization techniques and assumptions were used in this analysis and, as such, they require further confirmation and validation. Further refinement and review of the complex Proteus core model are likely to reduce the errors even further.

  17. Isotopic dependence of fusion enhancement of various heavy ion systems using energy dependent Woods-Saxon potential

    NASA Astrophysics Data System (ADS)

    Gautam, Manjeet Singh

    2015-01-01

    In the present work, the fusion of symmetric and asymmetric projectile-target combinations is analyzed in depth within the framework of the energy-dependent Woods-Saxon potential model (EDWSP model), in conjunction with the one-dimensional Wong formula and the coupled-channels code CCFULL. The neutron transfer channels and the inelastic surface excitations of the collision partners are the dominant coupling modes, and the coupling of the relative motion of the colliding nuclei to such internal degrees of freedom produces a significant fusion enhancement at sub-barrier energies. It is quite interesting that the effects of dominant intrinsic degrees of freedom, such as multi-phonon vibrational states, neutron transfer channels and proton transfer channels, can be simulated by introducing an energy dependence in the nucleus-nucleus potential (EDWSP model). In the EDWSP model calculations, a wide range of the diffuseness parameter, from a = 0.85 fm to a = 0.97 fm, much larger than the value (a = 0.65 fm) extracted from elastic scattering data, is needed to reproduce the sub-barrier fusion data. However, such a diffuseness anomaly, which might be an artifact of dynamical effects, has been resolved by the trajectory fluctuation dissipation (TFD) model, wherein the resulting nucleus-nucleus potential possesses a normal diffuseness parameter.
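
    The static form underlying the model is the Woods-Saxon potential V(r) = -V0 / (1 + exp((r - R0)/a)); the EDWSP model makes the diffuseness a energy dependent. The interpolation below is a hypothetical smooth switch between the quoted limits, purely to illustrate the idea; the actual parametrization is given in the EDWSP papers.

        import math

        def woods_saxon(r, v0=100.0, r0=10.0, a=0.65):
            """Woods-Saxon potential (MeV); v0 in MeV, r and r0 in fm are illustrative."""
            return -v0 / (1.0 + math.exp((r - r0) / a))

        def diffuseness(e_cm, vb, a_low=0.85, a_high=0.97):
            """Hypothetical energy-dependent diffuseness spanning the quoted range."""
            switch = 1.0 / (1.0 + math.exp((e_cm - vb) / 2.0))  # assumed form
            return a_low + (a_high - a_low) * switch

        for e in (80.0, 100.0, 120.0):  # MeV, around an assumed barrier vb = 100 MeV
            a_e = diffuseness(e, vb=100.0)
            print(e, round(a_e, 3), round(woods_saxon(11.0, a=a_e), 2))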

  18. Computational Study of Poloidal Angular Momentum Transport in DIII-D

    NASA Astrophysics Data System (ADS)

    Pankin, Alexei; Kruger, Scott; Kritz, Arnold; Rafiq, Tariq; Weiland, Jan

    2013-10-01

    The new Multi-Mode Model, MMM8.1, includes the capability to predict the anomalous poloidal momentum diffusivity [T. Rafiq et al., Phys. Plasmas 20, 032506 (2013)]. It is important to consider the effect of this diffusivity on the poloidal rotation of tokamak plasmas since some experimental observations suggest that neoclassical effects are not always sufficient to explain the observed poloidal rotation [B.A. Grierson et al., Phys. Plasmas 19, 056107 (2012)]. One of the objectives of this research is to determine if the anomalous contribution to the poloidal rotation can be significant in the regions of internal transport barriers (ITBs). In this study, the MMM8.1 model is used to compute the poloidal momentum diffusivity for a range of plasma parameters that correspond to the parameters that occur in DIII-D discharges. The parameters that are considered include the temperature and density gradients, and magnetic shear. The role of anomalous poloidal transport in the possible poloidal spin-up in the ITB regions is discussed. Progress in the implementation of poloidal transport equations in the ASTRA transport code is reported and initial predictive simulation results for the poloidal rotation profiles are presented. This research is partially supported by the DOE Grants DE-SC0006629 and DE-FG02-92ER54141.

  19. Mega-Scale Simulation of Multi-Layer Devices-- Formulation, Kinetics, and Visualization

    DTIC Science & Technology

    1994-07-28

    [Fragment: ...prototype code STRIDE, also initially developed under ARO support. The focus of the ARO-supported research activities has been in the areas of multi... FORTRAN-77. During its fifteen-year life-span several generations of researchers have modified the code. Due to this continual development, the... behavior. The replacement of the linear solver had no effect on the remainder of the code. We replaced the existing solver with a distributed multi-frontal...]

  20. Multi-channel photon counting DOT system based on digital lock-in detection technique

    NASA Astrophysics Data System (ADS)

    Wang, Tingting; Zhao, Huijuan; Wang, Zhichao; Hou, Shaohua; Gao, Feng

    2011-02-01

    Relying on the deep penetration of light into tissue, Diffuse Optical Tomography (DOT) achieves organ-level tomographic diagnosis and can provide information on anatomical and physiological features. DOT has been widely used in imaging of the breast, neonatal cerebral oxygen status, and blood oxygen kinetics, owing to its non-invasiveness, safety, and other advantages. Continuous-wave DOT image reconstruction algorithms require measurement of the surface distribution of the output photon flow excited by more than one source, which means that source coding is necessary. The source coding most commonly used in DOT is time-division multiplexing (TDM), which uses an optical switch to direct light into optical fibers at different locations. However, when there are many source locations, or when multiple wavelengths are used, the measurement time with TDM and the measurement interval between different locations within the same measurement period become too long to capture dynamic changes in real time. In this paper, a frequency-division multiplexing source coding technique is developed, in which light sources modulated by sine waves of different frequencies illuminate the imaging chamber simultaneously. The signal corresponding to an individual source is extracted from the mixed output light using digital lock-in detection at the detection end. A digital lock-in detection circuit for the photon counting measurement system is implemented on an FPGA development platform. A preliminary dual-channel DOT photon counting experimental system is established, including two continuous lasers, photon counting detectors, the digital lock-in detection control circuit, and code to control the hardware and display the results. A series of experimental measurements validates the feasibility of the system. The method developed in this paper greatly accelerates DOT system measurement, and can also obtain multiple measurements at different source-detector locations.
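
    The frequency-division source coding with digital lock-in detection can be sketched numerically: each source is modulated at its own frequency, and multiplying the summed detector signal by in-phase and quadrature references, then averaging, recovers each source's contribution. Frequencies and amplitudes below are illustrative.

        import numpy as np

        fs, dur = 100e3, 0.1                          # sample rate (Hz), duration (s)
        t = np.arange(int(fs * dur)) / fs
        f_src = {"source1": 1000.0, "source2": 1300.0}
        amp = {"source1": 2.0, "source2": 0.5}        # photon-flow amplitudes

        # Mixed detector signal: both modulated sources plus noise.
        sig = sum(a * np.sin(2 * np.pi * f_src[k] * t) for k, a in amp.items())
        sig += 0.1 * np.random.randn(t.size)

        for name, f in f_src.items():
            i = 2 * np.mean(sig * np.sin(2 * np.pi * f * t))   # in-phase
            q = 2 * np.mean(sig * np.cos(2 * np.pi * f * t))   # quadrature
            print(name, np.hypot(i, q))                        # recovered amplitude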

  1. A Non Local Electron Heat Transport Model for Multi-Dimensional Fluid Codes

    NASA Astrophysics Data System (ADS)

    Schurtz, Guy

    2000-10-01

    Apparent inhibition of thermal heat flow is one of the most ancient problems in computational Inertial Fusion, and flux-limited Spitzer-Harm conduction has been a mainstay in multi-dimensional hydrodynamic codes for more than 25 years. Theoretical investigation of the problem indicates that heat transport in laser-produced plasmas has to be considered as a non local process. Various authors contributed to the non local theory and proposed convolution formulas designed for practical implementation in one-dimensional fluid codes. Though the theory, confirmed by kinetic calculations, actually predicts a reduced heat flux, it fails to explain the very small limiters required in two-dimensional simulations. Fokker-Planck simulations by Epperlein, Rickard and Bell [PRL 61, 2453 (1988)] demonstrated that non local effects could lead to a strong reduction of heat flow in two dimensions, even in situations where a one-dimensional analysis suggests that the heat flow is nearly classical. We developed at CEA/DAM a non local electron heat transport model suitable for implementation in our two-dimensional radiation hydrodynamic code FCI2. This model may be envisioned as the first step of an iterative solution of the Fokker-Planck equations; it takes the mathematical form of multigroup diffusion equations, the solution of which yields both the heat flux and the departure of the electron distribution function from the Maxwellian. Although direct implementation of the model is straightforward, formal solutions of it can be expressed in convolution form, exhibiting a three-dimensional tensor propagator. Reduction to one dimension retrieves the original formula of Luciani, Mora and Virmont [PRL 51, 1664 (1983)]. Intense magnetic fields may be generated by thermal effects in laser targets; these fields, as well as non local effects, will inhibit electron conduction. We present simulations where both effects are taken into account and briefly discuss the coupling strategy between them.
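
    In one dimension the convolution form reduces to the Luciani-Mora-Virmont expression q(x) = ∫ w(x, x') q_SH(x') dx', an exponential-kernel smoothing of the Spitzer-Harm flux over a scale of many electron mean free paths. The discrete sketch below uses a fixed kernel width; in the LMV formula the width follows the local mean free path, so treat the value as an assumption.

        import numpy as np

        def nonlocal_flux(q_sh, x, lam):
            """Smooth the Spitzer-Harm flux with a normalized exponential kernel."""
            q = np.empty_like(q_sh)
            for i, xi in enumerate(x):
                w = np.exp(-np.abs(x - xi) / lam)
                q[i] = np.sum(w * q_sh) / np.sum(w)   # discrete normalization
            return q

        x = np.linspace(0.0, 1.0, 200)
        q_sh = np.where(np.abs(x - 0.5) < 0.05, 1.0, 0.0)  # sharply peaked SH flux
        print(nonlocal_flux(q_sh, x, lam=0.1).max())       # peak reduced, wings spread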

  2. OpenCMISS: a multi-physics & multi-scale computational infrastructure for the VPH/Physiome project.

    PubMed

    Bradley, Chris; Bowery, Andy; Britten, Randall; Budelmann, Vincent; Camara, Oscar; Christie, Richard; Cookson, Andrew; Frangi, Alejandro F; Gamage, Thiranja Babarenda; Heidlauf, Thomas; Krittian, Sebastian; Ladd, David; Little, Caton; Mithraratne, Kumar; Nash, Martyn; Nickerson, David; Nielsen, Poul; Nordbø, Oyvind; Omholt, Stig; Pashaei, Ali; Paterson, David; Rajagopal, Vijayaraghavan; Reeve, Adam; Röhrle, Oliver; Safaei, Soroush; Sebastián, Rafael; Steghöfer, Martin; Wu, Tim; Yu, Ting; Zhang, Heye; Hunter, Peter

    2011-10-01

    The VPH/Physiome Project is developing the model encoding standards CellML (cellml.org) and FieldML (fieldml.org) as well as web-accessible model repositories based on these standards (models.physiome.org). Freely available open source computational modelling software is also being developed to solve the partial differential equations described by the models and to visualise results. The OpenCMISS code (opencmiss.org), described here, has been developed by the authors over the last six years to replace the CMISS code that has supported a number of organ system Physiome projects. OpenCMISS is designed to encompass multiple sets of physical equations and to link subcellular and tissue-level biophysical processes into organ-level processes. In the Heart Physiome project, for example, the large deformation mechanics of the myocardial wall need to be coupled to both ventricular flow and embedded coronary flow, and the reaction-diffusion equations that govern the propagation of electrical waves through myocardial tissue need to be coupled with equations that describe the ion channel currents that flow through the cardiac cell membranes. In this paper we discuss the design principles and distributed memory architecture behind the OpenCMISS code. We also discuss the design of the interfaces that link the sets of physical equations across common boundaries (such as fluid-structure coupling), or between spatial fields over the same domain (such as coupled electromechanics), and the concepts behind CellML and FieldML that are embodied in the OpenCMISS data structures. We show how all of these provide a flexible infrastructure for combining models developed across the VPH/Physiome community. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Attempt to model laboratory-scale diffusion and retardation data.

    PubMed

    Hölttä, P; Siitari-Kauppi, M; Hakanen, M; Tukiainen, V

    2001-02-01

    Different approaches for measuring the interaction between radionuclides and the rock matrix are needed to test the compatibility of experimental retardation parameters and transport models used in assessing the safety of underground repositories for spent nuclear fuel. In this work, the retardation of sodium, calcium and strontium was studied on mica gneiss and on unaltered, moderately altered and strongly altered tonalite using the dynamic fracture column method. In-diffusion of calcium into rock cubes was determined to predict retardation in the columns. In-diffusion of calcium into moderately and strongly altered tonalite was interpreted using the numerical code FTRANS. The code was able to interpret the in-diffusion of weakly sorbing calcium into the saturated porous matrix. Elution curves of calcium for the moderately and strongly altered tonalite fracture columns were explained adequately using the FTRANS code and parameters obtained from the in-diffusion calculations. In this paper, mass distribution ratio values of sodium, calcium and strontium for intact rock are compared to values previously obtained for crushed rock from batch and crushed-rock column experiments. Kd values obtained from the fracture column experiments were one order of magnitude lower than Kd values from batch experiments.

  4. Multi-site study of diffusion metric variability: effects of site, vendor, field strength, and echo time on regions-of-interest and histogram-bin analyses.

    PubMed

    Helmer, K G; Chou, M-C; Preciado, R I; Gimi, B; Rollins, N K; Song, A; Turner, J; Mori, S

    2016-02-27

    It is now common for magnetic-resonance-imaging (MRI) based multi-site trials to include diffusion-weighted imaging (DWI) as part of the protocol. It is also common for these sites to possess MR scanners of different manufacturers, different software and hardware, and different software licenses. These differences mean that scanners may not be able to acquire data with the same number of gradient amplitude values and number of available gradient directions. Variability can also occur in achievable b-values and minimum echo times. The challenge of a multi-site study, then, is to create a common protocol by understanding and then minimizing the effects of scanner variability and identifying reliable and accurate diffusion metrics. This study describes the effect of site, scanner vendor, field strength, and TE on two diffusion metrics: the first moment of the diffusion tensor field (mean diffusivity, MD), and the fractional anisotropy (FA), using two common analyses (region-of-interest and mean-bin value of whole brain histograms). The goal of the study was to identify sources of variability in diffusion-sensitized imaging and their influence on commonly reported metrics. The results demonstrate that the site, vendor, field strength, and echo time all contribute to variability in FA and MD, though to different extents. We conclude that characterization of the variability of DTI metrics due to site, vendor, field strength, and echo time is a worthwhile step in the construction of multi-center trials.
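
    For reference, the two metrics compared across sites are simple functions of the diffusion-tensor eigenvalues: MD is their mean and FA a normalized standard deviation. A direct implementation:

        import numpy as np

        def md_fa(evals):
            """Mean diffusivity and fractional anisotropy from tensor eigenvalues."""
            lam = np.asarray(evals, dtype=float)
            md = lam.mean()
            fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
            return md, fa

        # A white-matter-like tensor: one large, two small eigenvalues (mm^2/s).
        print(md_fa([1.7e-3, 0.3e-3, 0.3e-3]))   # MD ~ 7.7e-4, FA ~ 0.80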

  5. Short-scan-time multi-slice diffusion MRI of the mouse cervical spinal cord using echo planar imaging.

    PubMed

    Callot, Virginie; Duhamel, Guillaume; Cozzone, Patrick J; Kober, Frank

    2008-10-01

    Mouse spinal cord (SC) diffusion-weighted imaging (DWI) provides important information on tissue morphology and structural changes that may occur during pathologies such as multiple sclerosis or SC injury. The acquisition scheme of the commonly used DWI techniques is based on conventional spin-echo encoding, which is time-consuming. The purpose of this work was to investigate whether the use of echo planar imaging (EPI) would provide good-quality diffusion MR images of mouse SC, as well as accurate measurements of diffusion-derived metrics, and thus enable diffusion tensor imaging (DTI) and highly resolved DWI within reasonable scan times. A four-shot diffusion-weighted spin-echo EPI (SE-EPI) sequence was evaluated at 11.75 T on a group of healthy mice (n = 10). SE-EPI-derived apparent diffusion coefficients of gray and white matter were compared with those obtained using a conventional spin-echo sequence (c-SE) to validate the accuracy of the method. To take advantage of the reduction in acquisition time offered by the EPI sequence, multi-slice DTI acquisitions were performed covering the cervical segments (six slices, six diffusion-encoding directions, three b values) within 30 min (vs 2 h for c-SE). From these measurements, fractional anisotropy and mean diffusivities were calculated, and fiber tracking along the C1 to C6 cervical segments was performed. In addition, high-resolution images (74 x 94 microm(2)) were acquired within 5 min per direction. Clear delineation of gray and white matter and identical apparent diffusion coefficient values were obtained, with a threefold reduction in acquisition time compared with c-SE. While overcoming the difficulties associated with high spatially and temporally resolved DTI measurements, the present SE-EPI approach permitted identification of reliable quantitative parameters with a reproducibility compatible with the detection of pathologies. The SE-EPI method may be particularly valuable when multiple sets of images from the SC are needed, in cases of rapidly evolving conditions, to decrease the duration of anesthesia or to improve MR exploration by including additional MR measurements. Copyright (c) 2008 John Wiley & Sons, Ltd.
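
    The apparent diffusion coefficients compared between the two sequences follow from the monoexponential model S(b) = S0 * exp(-b * ADC); with two or more b-values the fit is log-linear. The b-values and signals below are synthetic.

        import numpy as np

        def adc_fit(bvals, signals):
            """Least-squares ADC (mm^2/s) from ln S = ln S0 - b * ADC."""
            slope, _ = np.polyfit(np.asarray(bvals), np.log(signals), 1)
            return -slope

        b = [0.0, 500.0, 1000.0]     # s/mm^2, three b-values as in the protocol
        s = [1.0, 0.70, 0.49]        # synthetic signal decay
        print(adc_fit(b, s))         # ~7.1e-4 mm^2/s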

  6. Multi-layer light-weight protective coating and method for application

    NASA Technical Reports Server (NTRS)

    Wiedemann, Karl E. (Inventor); Clark, Ronald K. (Inventor); Taylor, Patrick J. (Inventor)

    1992-01-01

    A thin, light-weight, multi-layer coating is provided for protecting metals and their alloys from environmental attack at high temperatures. A reaction barrier is applied to the metal substrate and a diffusion barrier is then applied to the reaction barrier. A sealant layer may also be applied to the diffusion barrier if desired. The reaction barrier is either non-reactive or passivating with respect to the metal substrate and the diffusion barrier. The diffusion barrier is either non-reactive or passivating with respect to the reaction barrier and the sealant layer. The sealant layer is immiscible with the diffusion barrier and has a softening point below the expected use temperature of the metal.

  7. Fick's second law transformed: one path to cloaking in mass diffusion.

    PubMed

    Guenneau, S; Puvirajesinghe, T M

    2013-06-06

    Here, we adapt the concept of transformational thermodynamics, whereby the flux of temperature is controlled via anisotropic heterogeneous diffusivity, for the diffusion and transport of mass concentration. The n-dimensional, time-dependent, anisotropic heterogeneous Fick's equation is considered, which is a parabolic partial differential equation also applicable to heat diffusion, when convection occurs, for example, in fluids. This theory is illustrated with finite-element computations for a liposome particle surrounded by a cylindrical multi-layered cloak in a water-based environment, and for a spherical multi-layered cloak consisting of layers of fluid with an isotropic homogeneous diffusivity, deduced from an effective medium approach. Initial potential applications could be sought in bioengineering.

  8. Implicit SPH v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyungjoo; Parks, Michael L.; Perego, Mauro

    2016-11-09

    The ISPH code was developed to solve multi-physics, meso-scale flow problems using an implicit SPH method. In particular, the code provides solutions for incompressible flow, multi-phase flow, and electro-kinetic flows.

  9. Global solutions to a class of multi-species reaction-diffusion systems with cross-diffusions arising in population dynamics

    NASA Astrophysics Data System (ADS)

    Wen, Zijuan; Fu, Shengmao

    2009-08-01

    In this paper, an n-species strongly coupled cooperating diffusive system is considered in a bounded smooth domain, subject to homogeneous Neumann boundary conditions. Employing the method of energy estimates, we obtain conditions on the diffusion matrix and inter-species cooperation coefficients that ensure the global existence and uniform boundedness of a nonnegative solution. The global asymptotic stability of the constant positive steady state is also discussed. As a consequence, all the results hold true for the multi-species Lotka-Volterra type competition model and the prey-predator model.

  10. Metasurfaced Reverberation Chamber.

    PubMed

    Sun, Hengyi; Li, Zhuo; Gu, Changqing; Xu, Qian; Chen, Xinlei; Sun, Yunhe; Lu, Shengchen; Martin, Ferran

    2018-01-25

    The concept of the metasurfaced reverberation chamber (RC) is introduced in this paper. It is shown that by coating the chamber wall with a rotating 1-bit random coding metasurface, it is possible to enlarge the test zone of the RC while maintaining field uniformity as good as that of a traditional RC with mechanical stirrers. A 1-bit random coding diffusion metasurface is designed to obtain all-direction backscattering under normal incidence. Three specific cases are studied for comparison: a (traditional) mechanical-stirrer RC, a mechanical-stirrer RC with a fixed diffusion metasurface, and an RC with a rotating diffusion metasurface. Simulation results show that the compact rotating diffusion metasurface can act as a stirrer with good stirring efficiency. By using such a rotating diffusion metasurface, the test region of the RC can be greatly extended.

  11. Dependency graph for code analysis on emerging architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shashkov, Mikhail Jurievich; Lipnikov, Konstantin

    The directed acyclic dependency graph (DAG) is becoming the standard for modern multi-physics codes. The ideal DAG is the true block scheme of a multi-physics code. It is therefore a convenient object for in situ analysis of the cost of computations and of algorithmic bottlenecks related to statistically frequent data motion and the dynamic state of the machine.

  12. CFD Simulation on the J-2X Engine Exhaust in the Center-Body Diffuser and Spray Chamber at the B-2 Facility

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen; Wey, Thomas; Buehrle, Robert

    2009-01-01

    A computational fluid dynamics (CFD) code is used to simulate the J-2X engine exhaust in the center-body diffuser and spray chamber at the Spacecraft Propulsion Facility (B-2). The code, the space-time conservation element and solution element (CESE) Euler solver, is very robust at shock capturing. The CESE results are compared with independent analysis results obtained using the National Combustion Code (NCC) and show excellent agreement.

  13. Least Reliable Bits Coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Wagner, Paul; Budinger, James

    1992-01-01

    An analysis and discussion of a bandwidth-efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off against code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
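
    The quoted spectral efficiency is simply bits per symbol times code rate; a quick check in Python (illustrative arithmetic only):

        import math

        bits_per_symbol = math.log2(8)      # 8PSK carries 3 bits per symbol
        code_rate = 8 / 9                   # ensemble rate quoted above
        print(bits_per_symbol * code_rate)  # -> 2.666... information bps/Hz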

  14. Review and Implementation of the Emerging CCSDS Recommended Standard for Multispectral and Hyperspectral Lossless Image Coding

    NASA Technical Reports Server (NTRS)

    Sanchez, Jose Enrique; Auge, Estanislau; Santalo, Josep; Blanes, Ian; Serra-Sagrista, Joan; Kiely, Aaron

    2011-01-01

    A new standard for image coding is being developed by the MHDC working group of the CCSDS, targeting onboard compression of multi- and hyper-spectral imagery captured by aircraft and satellites. The proposed standard is based on the "Fast Lossless" adaptive linear predictive compressor, and is adapted to better overcome issues of onboard scenarios. In this paper, we present a review of the state of the art in this field, and provide an experimental comparison of the coding performance of the emerging standard in relation to other state-of-the-art coding techniques. Our own independent implementation of the MHDC Recommended Standard, as well as of some of the other techniques, has been used to provide extensive results over the vast corpus of test images from the CCSDS-MHDC.

  15. Atomic-scale Modeling of the Structure and Dynamics of Dislocations in Complex Alloys at High Temperatures

    NASA Technical Reports Server (NTRS)

    Daw, Murray S.; Mills, Michael J.

    2003-01-01

    We report on the progress made during the first year of the project. Most of the progress at this point has been on the theoretical and computational side. Here are the highlights: (1) A new code, tailored for high-end desktop computing, now combines modern Accelerated Dynamics (AD) with the well-tested Embedded Atom Method (EAM); (2) The new Accelerated Dynamics allows the study of relatively slow, thermally-activated processes, such as diffusion, which are much too slow for traditional Molecular Dynamics; (3) We have benchmarked the new AD code on a rather simple and well-known process: vacancy diffusion in copper; and (4) We have begun application of the AD code to the diffusion of vacancies in ordered intermetallics.

  16. Numerical applications of the advective-diffusive codes for the inner magnetosphere

    NASA Astrophysics Data System (ADS)

    Aseev, N. A.; Shprits, Y. Y.; Drozdov, A. Y.; Kellerman, A. C.

    2016-11-01

    In this study we present analytical solutions for convection and diffusion equations. We gather here the analytical solutions for the one-dimensional convection equation, the two-dimensional convection problem, and the one- and two-dimensional diffusion equations. Using the obtained analytical solutions, we test the four-dimensional Versatile Electron Radiation Belt code (the VERB-4D code), which solves the modified Fokker-Planck equation with additional convection terms. The ninth-order upwind numerical scheme for the one-dimensional convection equation shows much more accurate results than the third-order scheme. The universal limiter eliminates unphysical oscillations generated by high-order linear upwind schemes. Decreasing the space step leads to convergence of the numerical solution of the two-dimensional diffusion equation with mixed terms to the analytical solution. We compare the results of the third- and ninth-order schemes applied to magnetospheric convection modeling. The results show significant differences in electron fluxes near geostationary orbit when different numerical schemes are used.
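
    For orientation, the simplest member of the upwind family being compared is the first-order scheme for the 1-D convection equation u_t + a*u_x = 0; a minimal sketch (far lower order than the schemes tested here, shown only to illustrate the upwind idea):

        import numpy as np

        def upwind_step(u, a, dx, dt):
            """One first-order upwind step for u_t + a*u_x = 0 on a periodic domain."""
            if a >= 0:
                return u - a * dt / dx * (u - np.roll(u, 1))   # backward difference
            return u - a * dt / dx * (np.roll(u, -1) - u)      # forward difference

        # Advect a Gaussian pulse; the CFL number a*dt/dx must stay <= 1
        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u = np.exp(-200.0 * (x - 0.3) ** 2)
        a, dx = 1.0, x[1] - x[0]
        dt = 0.8 * dx / a
        for _ in range(100):
            u = upwind_step(u, a, dx, dt)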

  17. NRA8-21 Cycle 2 RBCC Turbopump Risk Reduction

    NASA Technical Reports Server (NTRS)

    Ferguson, Thomas V.; Williams, Morgan; Marcu, Bogdan

    2004-01-01

    This project was composed of three subtasks. The objective of the first task was to use the CFD code INS3D to generate both on- and off-design predictions for the consortium-optimized impeller flowfield. The results of the flow simulations are given in the first section. The objective of the second task was to construct a turbomachinery testing database comprising measurements made on several different impellers, an inducer, and a diffuser. The data were in the form of static pressure measurements as well as laser velocimeter measurements of velocities and flow angles within the stated components. Several databases with this information were created for these components. The third subtask had two objectives: first, to validate the Enigma CFD code for pump diffuser analysis, and second, to perform steady and unsteady analyses of some wide-flow-range diffuser concepts using Enigma. The code was validated using the consortium-optimized impeller database and then applied to two different concepts for wide-flow diffusers.

  18. Multi-level of Fidelity Multi-Disciplinary Design Optimization of Small, Solid-Propellant Launch Vehicles

    NASA Astrophysics Data System (ADS)

    Roshanian, Jafar; Jodei, Jahangir; Mirshams, Mehran; Ebrahimi, Reza; Mirzaee, Masood

    A new automated multi-level-of-fidelity Multi-Disciplinary Design Optimization (MDO) methodology has been developed at the MDO Laboratory of K.N. Toosi University of Technology. This paper explains a new design approach through the formulation of the developed disciplinary modules. A conceptual design for a small, solid-propellant launch vehicle was considered at two levels of fidelity (LoF). Low- and medium-LoF disciplinary codes were developed and linked. Appropriate design and analysis codes were defined according to their effect on the conceptual design process. Simultaneous optimization of the launch vehicle was performed at the discipline level and the system level, using propulsion, aerodynamics, structure, and trajectory disciplinary codes. To reach the minimum launch weight, the low-LoF code first searches the whole design space to meet the mission requirements. The medium-LoF code then receives the output of the low-LoF code and gives a value near the optimum launch weight with more detail and higher fidelity.

  19. Modeling of tritium transport in a fusion reactor pin-type solid breeder blanket using the diffuse code

    NASA Astrophysics Data System (ADS)

    Martin, Rodger; Ghoniem, Nasr M.

    1986-11-01

    A pin-type fusion reactor blanket is designed using a γ-LiAlO2 solid tritium breeder. Tritium transport and diffusive inventory are modeled using the DIFFUSE code. Two approaches are used to obtain characteristic LiAlO2 grain temperatures. DIFFUSE provides intragranular diffusive inventories which scale up to blanket size. These results compare well with a numerical analysis, giving a steady-state blanket tritium inventory of 13 g. Start-up transient inventories are modeled using DIFFUSE for both full and restricted coolant flow. Full flow gives rapid inventory buildup while restricted flow prevents this buildup. Inventories after shutdown are also modeled: reduced cooling is found to have little effect on removing tritium, but preheating rapidly purges inventory. DIFFUSE provides parametric modeling of solid breeder density, radiation, and surface effects. Fully dense (100%) pins are found to give massive inventory and marginal tritium release. Only large trapping energies and concentrations significantly increase inventory. Diatomic surface recombination is only significant at high temperatures.

  20. Employing multi-GPU power for molecular dynamics simulation: an extension of GALAMOST

    NASA Astrophysics Data System (ADS)

    Zhu, You-Liang; Pan, Deng; Li, Zhan-Wei; Liu, Hong; Qian, Hu-Jun; Zhao, Yang; Lu, Zhong-Yuan; Sun, Zhao-Yan

    2018-04-01

    We describe the algorithm for employing multi-GPU power on the basis of Message Passing Interface (MPI) domain decomposition in a molecular dynamics code, GALAMOST, which is designed for the coarse-grained simulation of soft matter. The multi-GPU version of the code was developed from our previous single-GPU version. In multi-GPU runs, each GPU takes charge of one domain and runs the single-GPU code path. The communication between neighbouring domains follows an algorithm similar to that of the CPU-based code LAMMPS, but is optimised specifically for GPUs. We employ a memory-saving design which enlarges the maximum system size on the same device. An optimisation algorithm is employed to prolong the update period of the neighbour list. We demonstrate good performance of multi-GPU runs on simulations of a Lennard-Jones liquid, a dissipative particle dynamics liquid, a polymer and nanoparticle composite, and two-patch particles on a workstation. Good scaling over many nodes of a cluster is presented for two-patch particles.
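
    The decomposition pattern described, one domain per GPU (or per MPI rank) with boundary data exchanged between neighbours each step, can be sketched in a few lines with mpi4py; a minimal 1-D halo exchange (illustrative only, not GALAMOST code):

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        left, right = (rank - 1) % size, (rank + 1) % size  # periodic neighbours

        # Each rank owns one slab of the domain plus one ghost cell per side
        local = np.full(10, float(rank))
        ghost_left, ghost_right = np.empty(1), np.empty(1)

        # Exchange halos: send my edge cells, receive the neighbours' edges
        comm.Sendrecv(sendbuf=local[:1], dest=left, recvbuf=ghost_right, source=right)
        comm.Sendrecv(sendbuf=local[-1:], dest=right, recvbuf=ghost_left, source=left)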

  1. Diffusion-Based Design of Multi-Layered Ophthalmic Lenses for Controlled Drug Release

    PubMed Central

    Pimenta, Andreia F. R.; Serro, Ana Paula; Paradiso, Patrizia; Saramago, Benilde

    2016-01-01

    The study of ocular drug delivery systems has been one of the most covered topics in drug delivery research. One potential drug carrier solution is the use of materials that are already commercially available in ophthalmic lenses for the correction of refractive errors. In this study, we present a diffusion-based mathematical model in which the parameters can be adjusted based on experimental results obtained under controlled conditions. The model allows for the design of multi-layered therapeutic ophthalmic lenses for controlled drug delivery. We show that the proper combination of materials with adequate drug diffusion coefficients, thicknesses and interfacial transport characteristics allows for the control of the delivery of drugs from multi-layered ophthalmic lenses, such that drug bursts can be minimized, and the release time can be maximized. As far as we know, this combination of a mathematical modelling approach with experimental validation of non-constant activity source lamellar structures, made of layers of different materials, accounting for the interface resistance to the drug diffusion, is a novel approach to the design of drug loaded multi-layered contact lenses. PMID:27936138
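
    The layered release model described can be approximated with an explicit finite-difference scheme for Fickian diffusion through two materials in series; a minimal 1-D sketch (hypothetical parameters, and omitting the interfacial-resistance term the authors account for):

        import numpy as np

        # Two layers with different drug diffusivities (cm^2/s), nx grid cells
        nx = 100
        D = np.where(np.arange(nx) < nx // 2, 1e-7, 1e-8)   # layer 1 | layer 2
        c = np.zeros(nx)
        c[:nx // 2] = 1.0               # drug initially loaded in the inner layer
        dx = 1e-3
        dt = 0.25 * dx ** 2 / D.max()   # time step under the stability limit

        for _ in range(10000):
            # Flux at cell faces, with the diffusivity averaged across each face
            flux = -0.5 * (D[1:] + D[:-1]) * (c[1:] - c[:-1]) / dx
            c[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
            c[0] = c[1]                 # sealed inner boundary (zero flux)
            c[-1] = 0.0                 # perfect sink: release into the tear film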

  2. A Numerical Investigation of the Extinction of Low Strain Rate Diffusion Flames by an Agent in Microgravity

    NASA Technical Reports Server (NTRS)

    Puri, Ishwar K.

    2004-01-01

    Our goal has been to investigate the influence of both dilution and radiation on the extinction process of nonpremixed flames at low strain rates. Simulations have been performed using a counterflow code into which three radiation models have been incorporated: the optically thin, narrowband, and discrete ordinate models. The counterflow flame code OPPDIFF was modified to account for heat transfer losses by radiation from the hot gases. The discrete ordinate method (DOM) approximation was first suggested by Chandrasekhar for solving problems in interstellar atmospheres. Carlson and Lathrop developed the method for solving multi-dimensional problems in neutron transport. Only recently has the method received attention in the field of heat transfer. Given the applicability of the discrete ordinate method to thermal radiation problems involving flames, the narrowband code RADCAL was modified to calculate the radiative properties of the gases. A non-premixed counterflow flame was simulated with the discrete ordinate method for radiative emissions. In comparison with the two other models, the heat losses were found to be comparable to those of the optically thin and simple narrowband models. The optically thin model had the highest heat losses, followed by the DOM model and the narrowband model.

  3. A comparison between implicit and hybrid methods for the calculation of steady and unsteady inlet flows

    NASA Technical Reports Server (NTRS)

    Coakley, T. J.; Hsieh, T.

    1985-01-01

    Numerical simulations of steady and unsteady transonic diffuser flows using two different computer codes are discussed and compared with experimental data. The codes solve the Reynolds-averaged, compressible, Navier-Stokes equations using various turbulence models. One of the codes has been applied extensively to diffuser flows and uses the hybrid method of MacCormack; this code is relatively inefficient numerically. The second code, which was developed more recently, is fully implicit and relatively efficient numerically. Simulations of steady flows using the implicit code are shown to be in good agreement with simulations using the hybrid code, and both are in good agreement with experimental results. Simulations of unsteady flows using the two codes are in good qualitative agreement with each other, although the quantitative agreement is not as good as in the steady flow cases. The implicit code is shown to be eight times faster than the hybrid code for unsteady flow calculations and up to 32 times faster for steady flow calculations. Results of calculations using alternative turbulence models are also discussed.

  4. Development of ENDF/B-IV multigroup neutron cross-section libraries for the LEOPARD and LASER codes. Technical report on Phase 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenquin, U.P.; Stewart, K.B.; Heeb, C.M.

    1975-07-01

    The principal aim of this neutron cross-section research is to provide the utility industry with a 'standard nuclear data base' that will perform satisfactorily when used for analysis of thermal power reactor systems. EPRI is coordinating its activities with those of the Cross Section Evaluation Working Group (CSEWG), responsible for the development of the Evaluated Nuclear Data File-B (ENDF/B) library, in order to improve the performance of the ENDF/B library in thermal reactors and other applications of interest to the utility industry. Battelle-Northwest (BNW) was commissioned to process the ENDF/B Version-4 data files into a group-constant form for use in the LASER and LEOPARD neutronics codes. Performance information on the library should provide the necessary feedback for improving the next version of the library, and a consistent data base is expected to be useful in intercomparing the versions of the LASER and LEOPARD codes presently being used by different utility groups. This report describes the BNW multi-group libraries and the procedures followed in their preparation and testing. (GRA)

  5. Multi-site Study of Diffusion Metric Variability: Characterizing the Effects of Site, Vendor, Field Strength, and Echo Time using the Histogram Distance.

    PubMed

    Helmer, K G; Chou, M-C; Preciado, R I; Gimi, B; Rollins, N K; Song, A; Turner, J; Mori, S

    2016-02-27

    MRI-based multi-site trials now routinely include some form of diffusion-weighted imaging (DWI) in their protocol. These studies can include data originating from scanners built by different vendors, each with their own set of unique protocol restrictions, including restrictions on the number of available gradient directions, on whether an externally generated list of gradient directions can be used, and on the echo time (TE). One challenge of multi-site studies is to create a common imaging protocol that will result in a reliable and accurate set of diffusion metrics. The present study describes the effect of site, scanner vendor, field strength, and TE on two common metrics: the first moment of the diffusion tensor field (mean diffusivity, MD) and the fractional anisotropy (FA). We have shown in earlier work that ROI metrics and the means of MD and FA histograms are not sufficiently sensitive for use in site characterization. Here we use the distance between whole-brain histograms of FA and MD to investigate within- and between-site effects. We conclude that the variability of DTI metrics due to site, vendor, field strength, and echo time could influence the results of multi-center trials, and that the histogram distance is a sensitive metric for each of these variables.
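
    A whole-brain histogram distance of this kind takes only a few lines to compute; a minimal sketch using the L1 distance between normalized FA histograms (the study's exact distance measure may differ):

        import numpy as np

        def histogram_distance(fa_a, fa_b, bins=128, rng=(0.0, 1.0)):
            """L1 distance between normalized whole-brain FA histograms."""
            h_a, _ = np.histogram(fa_a, bins=bins, range=rng, density=True)
            h_b, _ = np.histogram(fa_b, bins=bins, range=rng, density=True)
            bin_width = (rng[1] - rng[0]) / bins
            return np.abs(h_a - h_b).sum() * bin_width

        # Synthetic FA samples standing in for two sites' whole-brain voxels
        site_a = np.clip(np.random.normal(0.45, 0.15, 100000), 0.0, 1.0)
        site_b = np.clip(np.random.normal(0.47, 0.15, 100000), 0.0, 1.0)
        print(histogram_distance(site_a, site_b))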

  6. Status of LANL Efforts to Effectively Use Sequoia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nystrom, William David

    2015-05-14

    Los Alamos National Laboratory (LANL) is currently working on three production applications: VPIC, xRage, and Pagosa. VPIC was designed as a 3D relativistic, electromagnetic particle-in-cell code for plasma simulation. xRage is a 3D AMR-mesh, multi-physics hydro code. Pagosa is a 3D structured-mesh, multi-physics hydro code.

  7. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    PubMed

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring because of its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, the reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.

  8. A cross-diffusion system derived from a Fokker-Planck equation with partial averaging

    NASA Astrophysics Data System (ADS)

    Jüngel, Ansgar; Zamponi, Nicola

    2017-02-01

    A cross-diffusion system for two components with a Laplacian structure is analyzed on the multi-dimensional torus. This system, which was recently suggested by P.-L. Lions, is formally derived from a Fokker-Planck equation for the probability density associated with a multi-dimensional Itō process, assuming that the diffusion coefficients depend on partial averages of the probability density with exponential weights. A main feature is that the diffusion matrix of the limiting cross-diffusion system is generally neither symmetric nor positive definite, but its structure allows for the use of entropy methods. The global-in-time existence of positive weak solutions is proved and, under a simplifying assumption, the large-time asymptotics is investigated.

  9. Modeling of photon migration in the human lung using a finite volume solver

    NASA Astrophysics Data System (ADS)

    Sikorski, Zbigniew; Furmanczyk, Michal; Przekwas, Andrzej J.

    2006-02-01

    The application of the frequency domain and steady-state diffusive optical spectroscopy (DOS) and steady-state near infrared spectroscopy (NIRS) to diagnosis of the human lung injury challenges many elements of these techniques. These include the DOS/NIRS instrument performance and accurate models of light transport in heterogeneous thorax tissue. The thorax tissue not only consists of different media (e.g. chest wall with ribs, lungs) but its optical properties also vary with time due to respiration and changes in thorax geometry with contusion (e.g. pneumothorax or hemothorax). This paper presents a finite volume solver developed to model photon migration in the diffusion approximation in heterogeneous complex 3D tissues. The code applies boundary conditions that account for Fresnel reflections. We propose an effective diffusion coefficient for the void volumes (pneumothorax) based on the assumption of the Lambertian diffusion of photons entering the pleural cavity and accounting for the local pleural cavity thickness. The code has been validated using the MCML Monte Carlo code as a benchmark. The code environment enables a semi-automatic preparation of 3D computational geometry from medical images and its rapid automatic meshing. We present the application of the code to analysis/optimization of the hybrid DOS/NIRS/ultrasound technique in which ultrasound provides data on the localization of thorax tissue boundaries. The code effectiveness (3D complex case computation takes 1 second) enables its use to quantitatively relate detected light signal to absorption and reduced scattering coefficients that are indicators of the pulmonary physiologic state (hemoglobin concentration and oxygenation).

  10. Tractography from HARDI using an Intrinsic Unscented Kalman Filter

    PubMed Central

    Cheng, Guang; Salehian, Hesamoddin; Forder, John R.; Vemuri, Baba C.

    2014-01-01

    A novel adaptation of the unscented Kalman filter (UKF) was recently introduced in the literature for simultaneous multi-tensor estimation and fiber tractography from diffusion MRI. This technique has an advantage over other tractography methods in terms of computational efficiency, due to the fact that the UKF simultaneously estimates the diffusion tensors and propagates the most consistent direction to track along. This UKF and its variants reported later in the literature, however, are not intrinsic to the space of diffusion tensors. Lack of this key property can lead to inaccuracies in the multi-tensor estimation as well as in the tractography. In this paper, we propose a novel intrinsic unscented Kalman filter (IUKF) in the space of diffusion tensors, which are symmetric positive definite matrices, that can be used for simultaneous recursive estimation of multi-tensors and propagation of directional information for use in fiber tractography from diffusion-weighted MR data. In addition to being more accurate, the IUKF retains all the advantages of the UKF mentioned above. We demonstrate the accuracy and effectiveness of the proposed method via experiments on publicly available phantom data from the fiber cup challenge (MICCAI 2009) and on diffusion-weighted MR scans acquired from human brains and rat spinal cords. PMID:25203986

  11. Extended phase graphs with anisotropic diffusion

    NASA Astrophysics Data System (ADS)

    Weigel, M.; Schwenk, S.; Kiselev, V. G.; Scheffler, K.; Hennig, J.

    2010-08-01

    The extended phase graph (EPG) calculus gives an elegant pictorial description of magnetization response in multi-pulse MR sequences. The use of the EPG calculus enables a high computational efficiency for the quantitation of echo intensities even for complex sequences with multiple refocusing pulses with arbitrary flip angles. In this work, the EPG concept dealing with RF pulses with arbitrary flip angles and phases is extended to account for anisotropic diffusion in the presence of arbitrary varying gradients. The diffusion effect can be expressed by specific diffusion weightings of individual magnetization pathways. This can be represented as an action of a linear operator on the magnetization state. The algorithm allows easy integration of diffusion anisotropy effects. The formalism is validated on known examples from literature and used to calculate the effective diffusion weighting in multi-echo sequences with arbitrary refocusing flip angles.

  12. Building 1D resonance broadened quasilinear (RBQ) code for fast ions Alfvénic relaxations

    NASA Astrophysics Data System (ADS)

    Gorelenkov, Nikolai; Duarte, Vinicius; Berk, Herbert

    2016-10-01

    The performance of a burning plasma is limited by the confinement of super-Alfvénic fusion products, e.g. alpha particles, which are capable of resonating with the Alfvénic eigenmodes (AEs). The effect of AEs on fast ions is evaluated using a resonance-line-broadened diffusion coefficient. The interaction of fast ions and AEs is captured for cases where the modes are either isolated or overlapping. A new code, RBQ1D, is being built that constructs diffusion coefficients based on realistic eigenfunctions determined by the ideal MHD code NOVA. The wave-particle interaction can be reduced to one-dimensional dynamics, where for the Alfvénic modes the particle kinetic energy is typically nearly constant. Hence, to a good approximation, the quasi-linear (QL) diffusion equation contains derivatives only in the angular momentum. The diffusion equation is then one-dimensional and is solved efficiently and simultaneously for all particles together with the equation for the evolution of the wave angular momentum. The evolution of the fast-ion constants of motion is governed by the QL diffusion equations, which are adapted to find the ion distribution function.

  13. Modeling the Role of Incisures in Vertebrate Phototransduction

    PubMed Central

    Caruso, Giovanni; Bisegna, Paolo; Shen, Lixin; Andreucci, Daniele; Hamm, Heidi E.; DiBenedetto, Emmanuele

    2006-01-01

    Phototransduction is mediated by a G-protein-coupled receptor-mediated cascade, activated by light and localized to rod outer segment (ROS) disk membranes, which, in turn, drives a diffusion process of the second messengers cGMP and Ca2+ in the ROS cytosol. This process is hindered by disks—which, however, bear physical cracks, known as incisures, believed to favor the longitudinal diffusion of cGMP and Ca2+. This article is aimed at highlighting the biophysical functional role and significance of incisures, and their effect on the local and global response of the photocurrent. Previous work on this topic regarded the ROS as well stirred in the radial variables, lumped the diffusion mechanism on the longitudinal axis of the ROS, and replaced the cytosolic diffusion coefficients by effective ones, accounting for incisures through their total patent area only. The fully spatially resolved model recently published by our group is a natural tool to take into account other significant details of incisures, including their geometry and distribution. Using mathematical theories of homogenization and concentrated capacity, it is shown here that the complex diffusion process undergone by the second messengers cGMP and Ca2+ in the ROS bearing incisures can be modeled by a family of two-dimensional diffusion processes on the ROS cross sections, glued together by other two-dimensional diffusion processes, accounting for diffusion in the ROS outer shell and in the bladelike regions comprised by the stack of incisures. Based on this mathematical model, a code has been written, capable of incorporating an arbitrary number of incisures and activation sites, with any given arbitrary distribution within the ROS. The code is aimed at being an operational tool to perform numerical experiments of phototransduction, in rods with incisures of different geometry and structure, under a wide spectrum of operating conditions. The simulation results show that incisures have a dual biophysical function. On the one hand, since incisures line up from disk to disk, they create vertical cytoplasmic channels crossing the disks, thus facilitating diffusion of second messengers; on the other hand, at least in those species bearing multiple incisures, they divide the disks into lobes like the petals of a flower, thus confining the diffusion of activated phosphodiesterase and localizing the photon response. Accordingly, not only the total area of incisures, but their geometrical shape and distribution as well, significantly influence the global photoresponse. PMID:16714347

  14. Simulations of pattern dynamics for reaction-diffusion systems via SIMULINK

    PubMed Central

    2014-01-01

    Background: Investigation of the nonlinear pattern dynamics of a reaction-diffusion system almost always requires numerical solution of the system’s set of defining differential equations. Traditionally, this would be done by selecting an appropriate differential equation solver from a library of such solvers, then writing computer codes (in a programming language such as C or Matlab) to access the selected solver and display the integrated results as a function of space and time. This “code-based” approach is flexible and powerful, but requires a certain level of programming sophistication. A modern alternative is to use a graphical programming interface such as Simulink to construct a data-flow diagram by assembling and linking appropriate code blocks drawn from a library. The result is a visual representation of the inter-relationships between the state variables whose output can be made completely equivalent to the code-based solution. Results: As a tutorial introduction, we first demonstrate application of the Simulink data-flow technique to the classical van der Pol nonlinear oscillator, and compare Matlab and Simulink coding approaches to solving the van der Pol ordinary differential equations. We then show how to introduce space (in one and two dimensions) by solving numerically the partial differential equations for two different reaction-diffusion systems: the well-known Brusselator chemical reactor, and a continuum model for a two-dimensional sheet of human cortex whose neurons are linked by both chemical and electrical (diffusive) synapses. We compare the relative performances of the Matlab and Simulink implementations. Conclusions: The pattern simulations by Simulink are in good agreement with theoretical predictions. Compared with traditional coding approaches, the Simulink block-diagram paradigm reduces the time and programming burden required to implement a solution for reaction-diffusion systems of equations. Construction of the block-diagram does not require high-level programming skills, and the graphical interface lends itself to easy modification and use by non-experts. PMID:24725437

  15. Simulations of pattern dynamics for reaction-diffusion systems via SIMULINK.

    PubMed

    Wang, Kaier; Steyn-Ross, Moira L; Steyn-Ross, D Alistair; Wilson, Marcus T; Sleigh, Jamie W; Shiraishi, Yoichi

    2014-04-11

    Investigation of the nonlinear pattern dynamics of a reaction-diffusion system almost always requires numerical solution of the system's set of defining differential equations. Traditionally, this would be done by selecting an appropriate differential equation solver from a library of such solvers, then writing computer codes (in a programming language such as C or Matlab) to access the selected solver and display the integrated results as a function of space and time. This "code-based" approach is flexible and powerful, but requires a certain level of programming sophistication. A modern alternative is to use a graphical programming interface such as Simulink to construct a data-flow diagram by assembling and linking appropriate code blocks drawn from a library. The result is a visual representation of the inter-relationships between the state variables whose output can be made completely equivalent to the code-based solution. As a tutorial introduction, we first demonstrate application of the Simulink data-flow technique to the classical van der Pol nonlinear oscillator, and compare Matlab and Simulink coding approaches to solving the van der Pol ordinary differential equations. We then show how to introduce space (in one and two dimensions) by solving numerically the partial differential equations for two different reaction-diffusion systems: the well-known Brusselator chemical reactor, and a continuum model for a two-dimensional sheet of human cortex whose neurons are linked by both chemical and electrical (diffusive) synapses. We compare the relative performances of the Matlab and Simulink implementations. The pattern simulations by Simulink are in good agreement with theoretical predictions. Compared with traditional coding approaches, the Simulink block-diagram paradigm reduces the time and programming burden required to implement a solution for reaction-diffusion systems of equations. Construction of the block-diagram does not require high-level programming skills, and the graphical interface lends itself to easy modification and use by non-experts.
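
    The van der Pol benchmark used in this tutorial is easy to reproduce outside Matlab or Simulink; a minimal Python equivalent with SciPy (the damping parameter mu = 1.0 is an assumed value for illustration):

        import numpy as np
        from scipy.integrate import solve_ivp

        MU = 1.0  # damping parameter (assumed value for this sketch)

        def van_der_pol(t, y):
            """State y = [x, x']; the ODE is x'' - MU*(1 - x**2)*x' + x = 0."""
            return [y[1], MU * (1.0 - y[0] ** 2) * y[1] - y[0]]

        sol = solve_ivp(van_der_pol, (0.0, 30.0), [2.0, 0.0], max_step=0.01)
        # sol.t and sol.y[0] trace the characteristic relaxation-oscillation limit cycle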

  16. Detecting recurrent gene mutation in interaction network context using multi-scale graph diffusion.

    PubMed

    Babaei, Sepideh; Hulsman, Marc; Reinders, Marcel; de Ridder, Jeroen

    2013-01-23

    Delineating the molecular drivers of cancer, i.e. determining cancer genes and the pathways which they deregulate, is an important challenge in cancer research. In this study, we aim to identify pathways of frequently mutated genes by exploiting their network neighborhood encoded in the protein-protein interaction network. To this end, we introduce a multi-scale diffusion kernel and apply it to a large collection of murine retroviral insertional mutagenesis data. The diffusion strength plays the role of scale parameter, determining the size of the network neighborhood that is taken into account. As a result, in addition to detecting genes with frequent mutations in their genomic vicinity, we find genes that harbor frequent mutations in their interaction network context. We identify densely connected components of known and putatively novel cancer genes and demonstrate that they are strongly enriched for cancer-related pathways across the diffusion scales. Moreover, the mutations in the clusters exhibit a significant pattern of mutual exclusion, supporting the conjecture that such genes are functionally linked. Using the multi-scale diffusion kernel, various infrequently mutated genes are found to harbor significant numbers of mutations in their interaction network neighborhood. Many of them are well-known cancer genes. The results demonstrate the importance of defining recurrent mutations while taking the interaction network context into account. Importantly, the putative cancer genes and networks detected in this study are found to be significant at different diffusion scales, confirming the necessity of a multi-scale analysis.
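
    A diffusion kernel of this type is commonly written as the matrix exponential of the negative graph Laplacian, K = exp(-beta*L), with the diffusion strength beta acting as the scale parameter; a minimal sketch on a toy network (not the protein-protein network used in the study):

        import numpy as np
        from scipy.linalg import expm

        # Toy undirected interaction network as an adjacency matrix
        A = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

        for beta in (0.1, 1.0, 10.0):           # small to large neighborhoods
            K = expm(-beta * L)                 # diffusion kernel at this scale
            # K[i, j] measures how much signal diffuses from node j to node i
            print(beta, K[0, 3].round(4))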

  17. Mechanic: The MPI/HDF code framework for dynamical astronomy

    NASA Astrophysics Data System (ADS)

    Słonina, Mariusz; Goździewski, Krzysztof; Migaszewski, Cezary

    2015-01-01

    We introduce Mechanic, a new open-source code framework. It is designed to reduce the development effort of scientific applications by providing a unified API (Application Programming Interface) for configuration, data storage, and task management. The communication layer is based on the well-established Message Passing Interface (MPI) standard, which is widely used on a variety of parallel computers and CPU clusters. Data storage is performed within the Hierarchical Data Format (HDF5). The design of the code follows a core-module approach which reduces the user's codebase and makes it portable across single- and multi-CPU environments. The framework may be used in a local user environment, without administrative access to the cluster, under the PBS or Slurm job schedulers. It may become a helper tool for a wide range of astronomical applications, particularly those focused on processing large data sets, such as dynamical studies of the long-term orbital evolution of planetary systems with Monte Carlo methods, dynamical maps, or evolutionary algorithms. It has already been applied in numerical experiments conducted for the Kepler-11 (Migaszewski et al., 2012) and ν Octantis (Goździewski et al., 2013) planetary systems. In this paper we describe the basics of the framework, including code listings for the implementation of a sample user module. The code is illustrated on a model Hamiltonian introduced by Froeschlé et al. (2000) exhibiting Arnold diffusion. The Arnold web is shown with the help of the MEGNO (Mean Exponential Growth of Nearby Orbits) fast indicator (Goździewski et al., 2008a) applied to the symplectic SABAn family of integrators (Laskar and Robutel, 2001).

  18. Performance improvement of a centrifugal compressor stage by using different vaned diffusers

    NASA Astrophysics Data System (ADS)

    Zhang, Y. C.; Kong, X. Z.; Li, F.; Sun, W.; Chen, Q. G.

    2013-12-01

    The vaneless diffuser (VLD) is usually adopted in the traditional design of the multi-stage centrifugal compressor because of the stage-matching problem. The drawback of a stage with vaneless diffusers is low efficiency. In order to increase the efficiency while inducing no significant decline in the operating range of the stage, three different types of vaned diffusers are designed and numerically investigated: the traditional vaned diffuser (TVD), the low-solidity cascade diffuser (LSD), and the partial-height vane diffuser (PVD). These three types of vaned diffusers have different influences on the performance of the centrifugal compressor. The first part of the present investigation examines the performance of a centrifugal compressor stage with the three different vaned diffusers. The second part studies the influence of the height and position of the partial-height vanes on stage performance, and discusses the matching problem between the PVD and the downstream return channel. The stage investigated in this paper includes the impeller, the diffuser, the bend, and the return channel. The flow is assumed to be steady, and the 3-D turbulent flow in the stage is calculated using the commercial CFD code NUMECA together with the Spalart-Allmaras turbulence model. The computational region includes the impeller passages, the diffuser passages, and the return channel passages. The structure and surrounding region are assumed to have perfect cyclic symmetry, so a single-channel model with periodic boundary conditions applied at mid-passage is used, reducing the calculation to a single passage. The investigation showed that the low-solidity cascade diffuser would be the better compromise for the first stage of a multistage centrifugal compressor. In addition, the influence of the height and position of the partial-height vanes on stage performance was investigated in detail: at the design point, the isentropic efficiency and static pressure ratio of the stage improve as the partial-vane height increases, and installing half-height vanes on the shroud side gives a more uniform diffuser outflow and better aerodynamic performance.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Temporal, Mauro; Canaud, Benoit; Cayzac, Witold

    The alpha-particle energy deposition mechanism modifies the ignition conditions of the thermonuclear Deuterium-Tritium fusion reactions, and constitutes a key issue in achieving high gain in Inertial Confinement Fusion implosions. One-dimensional hydrodynamic calculations have been performed with the code Multi-IFE to simulate the implosion of a capsule directly irradiated by a laser beam. The diffusion approximation for the alpha energy deposition has been used to optimize three laser profiles corresponding to different implosion velocities. A Monte-Carlo package has been included in Multi-IFE to calculate the alpha energy transport, and in this case the energy deposition uses both the LP and the BPS stopping power models. Homothetic transformations that maintain a constant implosion velocity have been used to map out the transition region between marginally-igniting and high-gain configurations. Furthermore, the results provided by the two models have been compared, and it is found that, close to the ignition threshold, the calculations performed with the BPS model require about 10% more invested energy than the LP model to produce the same fusion energy.

  20. Theory-based transport simulations of TFTR L-mode temperature profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bateman, G.

    1992-03-01

    The temperature profiles from a selection of Tokamak Fusion Test Reactor (TFTR) L-mode discharges (17th European Conference on Controlled Fusion and Plasma Heating, Amsterdam, 1990 (EPS, Petit-Lancy, Switzerland, 1990), p. 114) are simulated with the 1-1/2-D BALDUR transport code (Comput. Phys. Commun. 49, 275 (1988)) using a combination of theoretically derived transport models, called the Multi-Mode Model (Comments Plasma Phys. Controlled Fusion 11, 165 (1988)). The present version of the Multi-Mode Model consists of effective thermal diffusivities resulting from trapped electron modes and ion temperature gradient (η_i) modes, which dominate in the core of the plasma, together with resistive ballooning modes, which dominate in the periphery. Within the context of this transport model and the TFTR simulations reported here, the scaling of confinement with heating power comes from the temperature dependence of the η_i and trapped electron modes, while the scaling with current comes mostly from resistive ballooning modes.

  1. An upwind multigrid method for solving viscous flows on unstructured triangular meshes. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl Lawrence

    1993-01-01

    A multigrid algorithm is combined with an upwind scheme for solving the two dimensional Reynolds averaged Navier-Stokes equations on triangular meshes resulting in an efficient, accurate code for solving complex flows around multiple bodies. The relaxation scheme uses a backward-Euler time difference and relaxes the resulting linear system using a red-black procedure. Roe's flux-splitting scheme is used to discretize convective and pressure terms, while a central difference is used for the diffusive terms. The multigrid scheme is demonstrated for several flows around single and multi-element airfoils, including inviscid, laminar, and turbulent flows. The results show an appreciable speed up of the scheme for inviscid and laminar flows, and dramatic increases in efficiency for turbulent cases, especially those on increasingly refined grids.

  2. electromagnetics, eddy current, computer codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gartling, David

    TORO Version 4 is designed for finite element analysis of steady, transient and time-harmonic, multi-dimensional, quasi-static problems in electromagnetics. The code allows simulation of electrostatic fields, steady current flows, magnetostatics and eddy current problems in plane or axisymmetric, two-dimensional geometries. TORO is easily coupled to heat conduction and solid mechanics codes to allow multi-physics simulations to be performed.

  3. Importance of hydrophobic traps for proton diffusion in lyotropic liquid crystals

    DOE PAGES

    McDaniel, Jesse G.; Yethiraj, Arun

    2016-03-04

    The diffusion of protons in self-assembled systems is potentially important for the design of efficient proton exchange membranes. In this work, we study proton dynamics in a low-water-content, lamellar phase of a sodium-carboxylate gemini surfactant/water system using computer simulations. The hopping of protons via the Grotthuss mechanism is explicitly allowed through the multi-state empirical valence bond (MS-EVB) method. We find that the hydronium ion is trapped on the hydrophobic side of the surfactant-water interface, and proton diffusion then proceeds by hopping between surface sites. The importance of hydrophobic traps is surprising, because one would expect the hydronium ions to be trapped at the charged head-groups. Finally, the physics illustrated in this system should be relevant to the proton dynamics in other amphiphilic membrane systems whenever there exist exposed hydrophobic surface regions.
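
    Diffusion coefficients in simulations like these are usually extracted from the mean-squared displacement via the Einstein relation, D = MSD/(2*d*t) in d dimensions; a minimal sketch (synthetic trajectory, not MS-EVB output):

        import numpy as np

        def diffusion_coefficient(positions, dt, dim=3):
            """Estimate D from the long-time slope of the mean-squared displacement.

            positions: (n_frames, n_particles, dim) unwrapped coordinates.
            """
            disp = positions - positions[0]               # displacement from t = 0
            msd = (disp ** 2).sum(axis=2).mean(axis=1)    # average over particles
            t = np.arange(len(msd)) * dt
            # Fit the linear regime (second half of the trajectory)
            slope = np.polyfit(t[len(t) // 2:], msd[len(t) // 2:], 1)[0]
            return slope / (2 * dim)

        # Example: random-walk "protons" with a known step size
        traj = np.cumsum(np.random.normal(0.0, 0.1, (5000, 50, 3)), axis=0)
        print(diffusion_coefficient(traj, dt=0.001))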

  4. Development of authentication code for multi-access optical code division multiplexing based quantum key distribution

    NASA Astrophysics Data System (ADS)

    Taiwo, Ambali; Alnassar, Ghusoon; Bakar, M. H. Abu; Khir, M. F. Abdul; Mahdi, Mohd Adzir; Mokhtar, M.

    2018-05-01

    A one-weight authentication code for multi-user quantum key distribution (QKD) is proposed. The code is developed for an Optical Code Division Multiplexing (OCDMA) based QKD network. A unique address assigned to each individual user, coupled with the degrading probability of predicting the source of a qubit transmitted in the channel, offers an excellent security mechanism against any form of channel attack on an OCDMA-based QKD network. Flexibility in design, as well as ease of modifying the number of users, are equally exceptional qualities of the code, in contrast to the Optical Orthogonal Code (OOC) earlier implemented for the same purpose. The code was successfully applied to eight simultaneous users at an effective key rate of 32 bps over a 27 km transmission distance.

  5. Unified Models of Turbulence and Nonlinear Wave Evolution in the Extended Solar Corona and Solar Wind

    NASA Technical Reports Server (NTRS)

    Cranmer, Steven R.; Wagner, William (Technical Monitor)

    2003-01-01

    The PI (Cranmer) and Co-I (A. van Ballegooijen) made significant progress toward the goal of building a "unified model" of the dominant physical processes responsible for the acceleration of the solar wind. The approach outlined in the original proposal comprised two complementary pieces: (1) to further investigate individual physical processes under realistic coronal and solar wind conditions, and (2) to extract the dominant physical effects from simulations and apply them to a one-dimensional and time-independent model of plasma heating and acceleration. The accomplishments in the report period are thus divided into these two categories: 1a. Focused Study of Kinetic MHD Turbulence. We have developed a model of magnetohydrodynamic (MHD) turbulence in the extended solar corona that contains the effects of collisionless dissipation and anisotropic particle heating. A turbulent cascade is one possible way of generating small-scale fluctuations (easy to dissipate/heat) from a pre-existing population of low-frequency Alfven waves (difficult to dissipate/heat). We modeled the cascade as a combination of advection and diffusion in wavenumber space. The dominant spectral transfer occurs in the direction perpendicular to the background magnetic field. As expected from earlier models, this leads to a highly anisotropic fluctuation spectrum with a rapidly decaying tail in the parallel wavenumber direction. The wave power that decays to high enough frequencies to become ion cyclotron resonant depends on the relative strengths of advection and diffusion in the cascade. For the most realistic values of these parameters, though, there is insufficient power to heat protons and heavy ions. The dominant oblique waves undergo Landau damping, which implies strong parallel electron heating. We thus investigated the nonlinear evolution of the electron velocity distributions (VDFs) into parallel beams and discrete phase-space holes (similar to those seen in the terrestrial magnetosphere) which are an alternate means of heating protons via stochastic interactions similar to particle-particle collisions. 1b. Focused Study of the Multi-Mode Detailed Balance Formalism. The PI began to explore the feasibility of using the "weak turbulence," or detailed-balance theory of Tsytovich, Melrose, and others to encompass the relevant physics of the solar wind. This study did not go far, however, because if the "strong" MHD turbulence discussed above is a dominant player in the wind's acceleration region, this formalism is inherently not applicable to the corona. We will continue to study the various published approaches to the weak turbulence formalism, especially with an eye on ways to parameterize nonlinear wave reflection rates. 2. Building the Unified Model Code Architecture. We have begun developing the computational model of a time-steady open flux tube in the extended corona. The model will be "unified" in the sense that it will include (simultaneously for the first time) as many of the various proposed physical processes as possible, all on equal footing. To retain this generality, we have formulated the problem in two interconnected parts: a completely kinetic model for the particles, using the Monte Carlo approach, and a finite-difference approach for the self-consistent fluctuation spectra. The two codes are run sequentially and iteratively until complete consistency is achieved. 
The current version of the Monte Carlo code incorporates gravity, the zero-current electric field, magnetic mirroring, and collisions. The fluctuation code incorporates WKB wave action conservation and the cascade/dissipation processes discussed above. The codes are being run for various test problems with known solutions. Planned additions to the codes include prescriptions for nonlinear wave steepening, kinetic velocity-space diffusion, and multi-mode coupling (including reflection and refraction).

  6. The Multitheoretical List of Therapeutic Interventions - 30 items (MULTI-30).

    PubMed

    Solomonov, Nili; McCarthy, Kevin S; Gorman, Bernard S; Barber, Jacques P

    2018-01-16

    To develop a brief version of the Multitheoretical List of Therapeutic Interventions (MULTI-60) in order to decrease the completion-time burden by approximately half while maintaining content coverage. Study 1 aimed to select 30 items. Study 2 aimed to examine the reliability and internal consistency of the MULTI-30. Study 3 aimed to validate the MULTI-30 and ensure content coverage. In Study 1, the sample included 186 therapist and 255 patient MULTI ratings, and 164 ratings of sessions coded by trained observers. Internal consistency (Cronbach's alpha and McDonald's omega) was calculated and confirmatory factor analysis was conducted. Psychotherapy experts rated content relevance. Study 2 included a sample of 644 patient and 522 therapist ratings, and 793 codings of psychotherapy sessions. In Study 3, the sample included 33 codings of sessions. A series of regression analyses was conducted to examine replication of previously published findings using the MULTI-30. The MULTI-30 was found to be valid, reliable, and internally consistent across the 2564 ratings examined in the three studies presented. The MULTI-30 is a brief and reliable process measure. Future studies are required for further validation.
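
    Internal-consistency statistics such as Cronbach's alpha are straightforward to compute from a respondents-by-items rating matrix; a minimal sketch with synthetic ratings (not the MULTI data):

        import numpy as np

        def cronbach_alpha(scores):
            """Cronbach's alpha for a (respondents x items) score matrix."""
            k = scores.shape[1]                      # number of items
            item_vars = scores.var(axis=0, ddof=1)   # variance of each item
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

        # Synthetic example: 200 respondents, 30 items driven by one latent trait
        latent = np.random.normal(size=(200, 1))
        ratings = latent + np.random.normal(scale=1.0, size=(200, 30))
        print(cronbach_alpha(ratings))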

  7. Neural decoding of collective wisdom with multi-brain computing.

    PubMed

    Eckstein, Miguel P; Das, Koel; Pham, Binh T; Peterson, Matthew F; Abbey, Craig K; Sy, Jocelyn L; Giesbrecht, Barry

    2012-01-02

    Group decisions and even aggregation of multiple opinions lead to greater decision accuracy, a phenomenon known as collective wisdom. Little is known about the neural basis of collective wisdom and whether its benefits arise in late decision stages or in early sensory coding. Here, we use electroencephalography and multi-brain computing with twenty humans making perceptual decisions to show that combining neural activity across brains increases decision accuracy, paralleling the improvements shown by aggregating the observers' opinions. Although the largest gains result from an optimal linear combination of neural decision variables across brains, a simpler neural majority decision rule, ubiquitous in human behavior, results in substantial benefits. In contrast, an extreme neural response rule, akin to a group following the most extreme opinion, results in the least improvement with group size. Analyses controlling for the number of electrodes and time-points while increasing the number of brains demonstrate unique benefits arising from integrating neural activity across different brains. The benefits of multi-brain integration are present in neural activity as early as 200 ms after stimulus presentation in lateral occipital sites, and no additional benefits arise in decision-related neural activity. Sensory-related neural activity can predict collective choices reached by aggregating individual opinions, voting results, and decision confidence as accurately as neural activity related to decision components. Estimation of the potential for the collective to execute fast decisions by combining information across numerous brains, a strategy prevalent in many animals, shows large time savings. Together, the findings suggest that for perceptual decisions the neural activity supporting collective wisdom and decisions arises in early sensory stages and that many properties of collective cognition are explainable by the neural coding of information across multiple brains. Finally, our methods highlight the potential of multi-brain computing as a technique to rapidly and in parallel gather increased information about the environment, as well as to access collective perceptual/cognitive choices and mental states.
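
    The three combination rules contrasted above are easy to illustrate with simulated per-brain decision variables (signal and noise levels below are arbitrary, not fitted to the EEG data):

        import numpy as np

        rng = np.random.default_rng(1)
        n_trials, n_brains = 2000, 20
        truth = rng.choice([-1, 1], size=n_trials)            # stimulus class
        # Per-brain neural decision variables: weak common signal plus noise
        dv = truth[:, None] + 2.5 * rng.standard_normal((n_trials, n_brains))

        votes = np.sign(dv).sum(axis=1)                       # neural majority rule
        majority = np.where(votes == 0, np.sign(dv.mean(axis=1)), np.sign(votes))
        linear = np.sign(dv.mean(axis=1))                     # equal-weight linear rule
        extreme = np.sign(dv[np.arange(n_trials),
                             np.abs(dv).argmax(axis=1)])      # most extreme response

        for name, pred in [("single brain", np.sign(dv[:, 0])), ("majority", majority),
                           ("linear", linear), ("extreme", extreme)]:
            print(name, (pred == truth).mean())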

  8. Comparison of analytical and experimental performance of a wind-tunnel diffuser section

    NASA Technical Reports Server (NTRS)

    Shyne, R. J.; Moore, R. D.; Boldman, D. R.

    1986-01-01

    Wind tunnel diffuser performance is evaluated by comparing experimental data with analytical results predicted by a one-dimensional integration procedure with skin friction coefficient, a two-dimensional interactive boundary layer procedure for analyzing conical diffusers, and a two-dimensional, integral, compressible laminar and turbulent boundary layer code. Pressure, temperature, and velocity data for a 3.25 deg equivalent cone half-angle diffuser (37.3 in., 94.742 cm outlet diameter) were obtained from the one-tenth scale Altitude Wind Tunnel modeling program at the NASA Lewis Research Center. The comparison is performed at Mach numbers of 0.162 (Re = 3.097x10^6), 0.326 (Re = 6.2737x10^6), and 0.363 (Re = 7.0129x10^6). The Reynolds numbers are all based on an inlet diffuser diameter of 32.4 in. (82.296 cm). Reasonable quantitative agreement was obtained between the experimental data and the computational codes.

  9. The concerted calculation of the BN-600 reactor for the deterministic and stochastic codes

    NASA Astrophysics Data System (ADS)

    Bogdanova, E. V.; Kuznetsov, A. N.

    2017-01-01

    The solution of the problem of increasing the safety of nuclear power plants implies the existence of complete and reliable information about the processes occurring in the core of a working reactor. Nowadays the Monte Carlo method is the most general-purpose method used to calculate the neutron-physical characteristics of a reactor, but it requires long computation times. Therefore, it may be useful to carry out coupled calculations with stochastic and deterministic codes. This article presents the results of research into the possibility of combining stochastic and deterministic algorithms in calculations of the BN-600 reactor. This is only one part of the work, which was carried out in the framework of a graduation project at the NRC "Kurchatov Institute" in cooperation with S. S. Gorodkov and M. A. Kalugin. It considers a 2-D layer of the BN-600 reactor core from the international benchmark test published in report IAEA-TECDOC-1623. Calculations of the reactor were performed with the MCU code and then with a standard operative diffusion algorithm with constants taken from the Monte Carlo computation. Macro cross-sections, diffusion coefficients, the effective multiplication factor and the distributions of neutron flux and power were obtained in 15 energy groups. Reasonable agreement between the stochastic and deterministic calculations of the BN-600 is observed.

  10. Interframe vector wavelet coding technique

    NASA Astrophysics Data System (ADS)

    Wus, John P.; Li, Weiping

    1997-01-01

    Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC coding method in conjunction with the FSVQ system and lattice VQ, the formulation of a high quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ, in which the current state is determined only by the previous channel symbol, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings are done in this tree-like structure from the lower subbands to the higher subbands in order to exploit the nature of subband analysis in terms of the parent-child relationship. Class A and Class B video sequences from the MPEG-IV testing evaluations are used in the evaluation of this coding method.

  11. Extended phase graphs with anisotropic diffusion.

    PubMed

    Weigel, M; Schwenk, S; Kiselev, V G; Scheffler, K; Hennig, J

    2010-08-01

    The extended phase graph (EPG) calculus gives an elegant pictorial description of magnetization response in multi-pulse MR sequences. The use of the EPG calculus enables a high computational efficiency for the quantitation of echo intensities, even for complex sequences with multiple refocusing pulses with arbitrary flip angles. In this work, the EPG concept dealing with RF pulses with arbitrary flip angles and phases is extended to account for anisotropic diffusion in the presence of arbitrarily varying gradients. The diffusion effect can be expressed by specific diffusion weightings of individual magnetization pathways. This can be represented as the action of a linear operator on the magnetization state. The algorithm allows easy integration of diffusion anisotropy effects. The formalism is validated on known examples from the literature and used to calculate the effective diffusion weighting in multi-echo sequences with arbitrary refocusing flip angles.
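
    The pathway-specific diffusion weighting has a compact closed form: a transverse configuration whose dephasing moment evolves linearly from k1 to k2 over an interval tau accumulates b = integral of k(t)^2 dt = tau*(k1^2 + k1*k2 + k2^2)/3, while a longitudinal configuration keeps k constant, giving b = tau*k^2. A minimal sketch for an isotropic diffusion coefficient (the paper's operator generalizes this to a tensor contraction for anisotropic diffusion):

        import numpy as np

        def b_transverse(k1, k2, tau):
            """b-factor for a transverse state whose dephasing moment goes
            linearly from k1 to k2 (rad/m) during tau (s)."""
            return tau * (k1**2 + k1 * k2 + k2**2) / 3.0

        def b_longitudinal(k, tau):
            """Longitudinal states keep k constant, so b = tau * k^2."""
            return tau * k**2

        def attenuate(amplitude, b, D):
            return amplitude * np.exp(-b * D)   # weighting applied per pathway

        # Example: a first-order state advanced by one more gradient unit
        dk = 2 * np.pi * 42.58e6 * 30e-3 * 1e-3   # gamma*G*tau, 30 mT/m for 1 ms
        b = b_transverse(dk, 2 * dk, 1e-3)
        print(attenuate(1.0, b, 2e-9))            # free-water-like D = 2e-9 m^2/s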

  12. A Multi-Scale, Multi-Physics Optimization Framework for Additively Manufactured Structural Components

    NASA Astrophysics Data System (ADS)

    El-Wardany, Tahany; Lynch, Mathew; Gu, Wenjiong; Hsu, Arthur; Klecka, Michael; Nardi, Aaron; Viens, Daniel

    This paper proposes an optimization framework enabling the integration of multi-scale / multi-physics simulation codes to perform structural optimization design for additively manufactured components. Cold spray was selected as the additive manufacturing (AM) process and its constraints were identified and included in the optimization scheme. The developed framework first utilizes topology optimization to maximize stiffness for conceptual design. The subsequent step applies shape optimization to refine the design for stress-life fatigue. The component weight was reduced by 20% while stresses were reduced by 75% and the rigidity was improved by 37%. The framework and analysis codes were implemented using Altair software as well as an in-house loading code. The optimized design was subsequently produced by the cold spray process.

  13. FANS-3D Users Guide (ESTEP Project ER 201031)

    DTIC Science & Technology

    2016-08-01

    governing laminar and turbulent flows in body-fitted curvilinear grids. The code employs multi-block overset (chimera) grids, including fully matched... governing incompressible flow in body-fitted grids. The code allows for multi-block overset (chimera) grids, which can be fully matched, arbitrarily... interested reader may consult the Chimera Overset Structured Mesh-Interpolation Code (COSMIC) Users' Manual (Chen, 2009). The input file used for

  14. Optimization and parallelization of the thermal–hydraulic subchannel code CTF for high-fidelity multi-physics applications

    DOE PAGES

    Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.

    2014-11-23

    This study describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis.

  15. Multi-site Study of Diffusion Metric Variability: Characterizing the Effects of Site, Vendor, Field Strength, and Echo Time using the Histogram Distance

    PubMed Central

    Helmer, K. G.; Chou, M-C.; Preciado, R. I.; Gimi, B.; Rollins, N. K.; Song, A.; Turner, J.; Mori, S.

    2016-01-01

    MRI-based multi-site trials now routinely include some form of diffusion-weighted imaging (DWI) in their protocol. These studies can include data originating from scanners built by different vendors, each with their own set of unique protocol restrictions, including restrictions on the number of available gradient directions, whether an externally-generated list of gradient directions can be used, and restrictions on the echo time (TE). One challenge of multi-site studies is to create a common imaging protocol that will result in a reliable and accurate set of diffusion metrics. The present study describes the effect of site, scanner vendor, field strength, and TE on two common metrics: the first moment of the diffusion tensor field (mean diffusivity, MD) and the fractional anisotropy (FA). We have shown in earlier work that ROI metrics and the mean of MD and FA histograms are not sufficiently sensitive for use in site characterization. Here we use the distance between whole-brain histograms of FA and MD to investigate within- and between-site effects. We concluded that the variability of DTI metrics due to site, vendor, field strength, and echo time could influence the results in multi-center trials and that histogram distance is a sensitive metric for each of these variables. PMID:27350723
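
    The abstract does not fix a particular histogram distance; one plausible implementation, the Jensen-Shannon distance between whole-brain FA histograms, looks like this (synthetic site data for illustration):

        import numpy as np

        def histogram_distance(fa_a, fa_b, bins=100, value_range=(0.0, 1.0)):
            """Jensen-Shannon distance between two whole-brain FA histograms."""
            pa, _ = np.histogram(fa_a, bins=bins, range=value_range)
            pb, _ = np.histogram(fa_b, bins=bins, range=value_range)
            pa = pa / pa.sum()
            pb = pb / pb.sum()
            m = 0.5 * (pa + pb)

            def kl(p, q):                       # Kullback-Leibler divergence
                mask = p > 0
                return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

            return np.sqrt(0.5 * kl(pa, m) + 0.5 * kl(pb, m))

        # Two synthetic "sites" with slightly shifted FA distributions:
        rng = np.random.default_rng(2)
        site1 = np.clip(rng.normal(0.45, 0.12, 200000), 0, 1)
        site2 = np.clip(rng.normal(0.47, 0.12, 200000), 0, 1)
        print(histogram_distance(site1, site2))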

  16. High resolution human diffusion tensor imaging using 2-D navigated multi-shot SENSE EPI at 7 Tesla

    PubMed Central

    Jeong, Ha-Kyu; Gore, John C.; Anderson, Adam W.

    2012-01-01

    The combination of parallel imaging with partial Fourier acquisition has greatly improved the performance of diffusion-weighted single-shot EPI and is the preferred method for acquisitions at low to medium magnetic field strength such as 1.5 or 3 Tesla. Increased off-resonance effects and reduced transverse relaxation times at 7 Tesla, however, generate more significant artifacts than at lower magnetic field strength and limit data acquisition. Additional acceleration of k-space traversal using a multi-shot approach, which acquires a subset of k-space data after each excitation, reduces these artifacts relative to conventional single-shot acquisitions. However, corrections for motion-induced phase errors are not straightforward in accelerated, diffusion-weighted multi-shot EPI because of phase aliasing. In this study, we introduce a simple acquisition and corresponding reconstruction method for diffusion-weighted multi-shot EPI with parallel imaging suitable for use at high field. The reconstruction uses a simple modification of the standard SENSE algorithm to account for shot-to-shot phase errors; the method is called Image Reconstruction using Image-space Sampling functions (IRIS). Using this approach, reconstruction from highly aliased in vivo image data using 2-D navigator phase information is demonstrated for human diffusion-weighted imaging studies at 7 Tesla. The final reconstructed images show submillimeter in-plane resolution with no ghosts and much reduced blurring and off-resonance artifacts. PMID:22592941

  17. Anisotropic diffusion in mesh-free numerical magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2017-04-01

    We extend recently developed mesh-free Lagrangian methods for numerical magnetohydrodynamics (MHD) to arbitrary anisotropic diffusion equations, including: passive scalar diffusion, Spitzer-Braginskii conduction and viscosity, cosmic ray diffusion/streaming, anisotropic radiation transport, non-ideal MHD (Ohmic resistivity, ambipolar diffusion, the Hall effect) and turbulent 'eddy diffusion'. We study these as implemented in the code GIZMO for both new meshless finite-volume Godunov schemes (MFM/MFV). We show that the MFM/MFV methods are accurate and stable even with noisy fields and irregular particle arrangements, and recover the correct behaviour even in arbitrarily anisotropic cases. They are competitive with state-of-the-art AMR/moving-mesh methods, and can correctly treat anisotropic diffusion-driven instabilities (e.g. the MTI and HBI, Hall MRI). We also develop a new scheme for stabilizing anisotropic tensor-valued fluxes with high-order gradient estimators and non-linear flux limiters, which is trivially generalized to AMR/moving-mesh codes. We also present applications of some of these improvements for SPH, in the form of a new integral-Godunov SPH formulation that adopts a moving-least squares gradient estimator and introduces a flux-limited Riemann problem between particles.
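
    For intuition about what an anisotropic diffusion solver must reproduce, here is a minimal explicit grid update for dU/dt = div(K grad U) with a constant tensor K (the paper's mesh-free MFM/MFV schemes replace this stencil with particle-based gradient estimators and flux limiters):

        import numpy as np

        def anisotropic_diffusion_step(U, K, dx, dt):
            Ux, Uy = np.gradient(U, dx)           # grad U
            Fx = K[0, 0] * Ux + K[0, 1] * Uy      # flux F = K grad U
            Fy = K[1, 0] * Ux + K[1, 1] * Uy
            divF = np.gradient(Fx, dx)[0] + np.gradient(Fy, dx)[1]
            return U + dt * divF

        N, dx = 64, 1.0
        U = np.zeros((N, N))
        U[N // 2, N // 2] = 1.0                            # point release
        K = np.array([[1.0, 0.8], [0.8, 1.0]])             # fastest along x = y
        dt = 0.2 * dx**2 / np.linalg.eigvalsh(K).max()     # conservative CFL bound
        for _ in range(200):
            U = anisotropic_diffusion_step(U, K, dx, dt)
        print(U.sum())   # total is conserved up to boundary losses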

  18. Improvements in the MGA Code Provide Flexibility and Better Error Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruhter, W D; Kerr, J

    2005-05-26

    The Multi-Group Analysis (MGA) code is widely used to determine nondestructively the relative isotopic abundances of plutonium by gamma-ray spectrometry. MGA users have expressed concern about the lack of flexibility and transparency in the code. Users often have to ask the code developers for modifications to the code to accommodate new measurement situations, such as additional peaks being present in the plutonium spectrum or expected peaks being absent. We are testing several new improvements to a prototype, general gamma-ray isotopic analysis tool with the intent of either revising or replacing the MGA code. These improvements will give the user the ability to modify, add, or delete the gamma- and x-ray energies and branching intensities used by the code in determining a more precise gain and in the determination of the relative detection efficiency. We have also fully integrated the determination of the relative isotopic abundances with the determination of the relative detection efficiency to provide a more accurate determination of the errors in the relative isotopic abundances. We provide details in this paper on these improvements and a comparison of results obtained with current versions of the MGA code.

  19. Nonequilibrium radiation and chemistry models for aerocapture vehicle flowfields

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1994-01-01

    The primary accomplishments of the project were as follows: (1) From an overall standpoint, the primary accomplishment of this research was the development of a complete gasdynamic-radiatively coupled nonequilibrium viscous shock layer solution method for axisymmetric blunt bodies. This method can be used for rapid engineering modeling of nonequilibrium re-entry flowfields over a wide range of conditions. (2) Another significant accomplishment was the development of an air radiation model that included local thermodynamic nonequilibrium (LTNE) phenomena. (3) As part of this research, three electron-electronic energy models were developed. The first was a quasi-equilibrium electron (QEE) model which determined an effective free electron temperature and assumed that the electronic states were in equilibrium with the free electrons. The second was a quasi-equilibrium electron-electronic (QEEE) model which computed an effective electron-electronic temperature. The third model was a full electron-electronic (FEE) differential equation model which included convective, collisional, viscous, conductive, vibrational coupling, and chemical effects on electron-electronic energy. (4) Since vibration-dissociation coupling phenomena as well as vibrational thermal nonequilibrium phenomena are important in the nonequilibrium zone behind a shock front, a vibrational energy and vibration-dissociation coupling model was developed and included in the flowfield model. This model was a modified coupled vibrational dissociation vibrational (MCVDV) model and also included electron-vibrational coupling. (5) Another accomplishment of the project was the use of the developed models to investigate radiative heating. (6) A multi-component diffusion model, which properly models the multi-component nature of diffusion in complex gas mixtures such as air, was developed and incorporated into the blunt body model. (7) A model was developed to predict the magnitude and characteristics of the shock wave precursor ahead of vehicles entering the Earth's atmosphere. (8) Since considerable data exist for radiating nonequilibrium flow behind normal shock waves, a normal shock wave version of the blunt body code was developed. (9) By comparing predictions from the models and codes with available normal shock data and the flight data of Fire II, it is believed that the developed flowfield and nonequilibrium radiation models have been essentially validated for engineering applications.

  20. Cosmic-ray propagation with DRAGON2: I. numerical solver and astrophysical ingredients

    NASA Astrophysics Data System (ADS)

    Evoli, Carmelo; Gaggero, Daniele; Vittino, Andrea; Di Bernardo, Giuseppe; Di Mauro, Mattia; Ligorini, Arianna; Ullio, Piero; Grasso, Dario

    2017-02-01

    We present version 2 of the DRAGON code, designed for computing realistic predictions of the CR densities in the Galaxy. The code numerically solves the interstellar CR transport equation (including inhomogeneous and anisotropic diffusion, both in space and momentum, advective transport, and energy losses) under realistic conditions. The new version includes an updated numerical solver and several models for the astrophysical ingredients involved in the transport equation. Improvements in the accuracy of the numerical solution are proved against analytical solutions and in reference diffusion scenarios. The novel features implemented in the code make it possible to simulate the diverse scenarios proposed to reproduce the most recent measurements of local and diffuse CR fluxes, going beyond the limitations of the homogeneous galactic transport paradigm. To this end, several applications using DRAGON2 are presented as well. This new version allows users to include their own physical models by means of a modular C++ structure.
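
    The structure of such a solver can be seen in a drastically reduced 1-D version with inhomogeneous diffusion, catastrophic losses, and a source term (a sketch only; DRAGON2 itself is 3-D, includes momentum-space diffusion and advection, and is written in C++):

        import numpy as np

        N, R = 200, 20.0                        # grid cells, halo radius (kpc)
        r = np.linspace(0.0, R, N)
        dr = r[1] - r[0]
        D = 0.1 * (1.0 + r / R)                 # inhomogeneous diffusion coefficient
        tau = 5.0                               # catastrophic-loss timescale
        Q = np.exp(-((r - 8.0) / 2.0) ** 2)     # source peaked near 8 kpc

        f = np.zeros(N)
        dt = 0.4 * dr**2 / D.max()              # explicit stability limit
        for _ in range(20000):                  # relax toward the steady state
            flux = -0.5 * (D[1:] + D[:-1]) * np.diff(f) / dr   # -D df/dr at faces
            div = np.zeros(N)
            div[1:-1] = -(flux[1:] - flux[:-1]) / dr
            f = f + dt * (div - f / tau + Q)
            f[-1] = 0.0                         # free-escape outer boundary
        print(f.max())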

  1. Coding Theory Information Theory and Radar

    DTIC Science & Technology

    2005-01-01

    the design and synthesis of artificial multiagent systems and for the understanding of human decision-making processes. This... altruism that may exist in a complex society. SGT derives its ability to account simultaneously for both group and individual interests from the structure of satisficing decision theory as a model of human decision making. 2 Multi-Attribute Decision Making Many decision problems involve the consideration of

  2. Evaluation of multi-modal, multi-site neuroimaging measures in Huntington's disease: Baseline results from the PADDINGTON study☆

    PubMed Central

    Hobbs, Nicola Z.; Cole, James H.; Farmer, Ruth E.; Rees, Elin M.; Crawford, Helen E.; Malone, Ian B.; Roos, Raymund A.C.; Sprengelmeyer, Reiner; Durr, Alexandra; Landwehrmeyer, Bernhard; Scahill, Rachael I.; Tabrizi, Sarah J.; Frost, Chris

    2012-01-01

    Background Macro- and micro-structural neuroimaging measures provide valuable information on the pathophysiology of Huntington's disease (HD) and are proposed as biomarkers. Despite theoretical advantages of microstructural measures in terms of sensitivity to pathology, there is little evidence directly comparing the two. Methods 40 controls and 61 early HD subjects underwent 3 T MRI (T1- and diffusion-weighted), as part of the PADDINGTON study. Macrostructural volumetrics were obtained for the whole brain, caudate, putamen, corpus callosum (CC) and ventricles. Microstructural diffusion metrics of fractional anisotropy (FA), mean-, radial- and axial-diffusivity (MD, RD, AD) were computed for white matter (WM), CC, caudate and putamen. Group differences were examined adjusting for age, gender and site. A formal comparison of effect sizes determined which modality and metrics provided a statistically significant advantage over others. Results Macrostructural measures showed decreased regional and global volume in HD (p < 0.001), except for the ventricles, which were enlarged (p < 0.01). In HD, FA was increased in the deep grey-matter structures (p < 0.001), and decreased in the WM (CC, p = 0.035; WM, p = 0.053); diffusivity metrics (MD, RD, AD) were increased for all brain regions (p < 0.001). The largest effect sizes were for putamen volume, caudate volume and putamen diffusivity (AD, RD and MD); each was significantly larger than those for all other metrics (p < 0.05). Conclusion The highest performing macro- and micro-structural metrics had similar sensitivity to HD pathology as quantified via effect sizes. The choice of region-of-interest may be more important than imaging modality, with deep grey-matter regions outperforming the CC and global measures for both volume and diffusivity. FA appears to be relatively insensitive to disease effects. PMID:24179770

  3. Diagnosis of breast masses from dynamic contrast-enhanced and diffusion-weighted MR: a machine learning approach.

    PubMed

    Cai, Hongmin; Peng, Yanxia; Ou, Caiwen; Chen, Minsheng; Li, Li

    2014-01-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is increasingly used for breast cancer diagnosis as a supplement to conventional imaging techniques. Combining diffusion-weighted imaging (DWI) with morphology and kinetic features from DCE-MRI to improve the discrimination power of malignant from benign breast masses is rarely reported. The study comprised 234 female patients with 85 benign and 149 malignant lesions. Four distinct groups of features, coupled with pathological tests, were estimated to comprehensively characterize the pictorial properties of each lesion, which was obtained by a semi-automated segmentation method. A classical machine learning scheme including feature subset selection and various classification schemes was employed to build a prognostic model, which served as a foundation for evaluating the combined effects of the multi-sided features in predicting lesion type. Various measurements including cross validation and receiver operating characteristics were used to quantify the diagnostic performance of each feature as well as their combination. Seven features were all found to be statistically different between the malignant and the benign groups, and their combination achieved the highest classification accuracy. The seven features include one pathological variable of age, one morphological variable of slope, three texture features of entropy, inverse difference and information correlation, one kinetic feature of SER, and one DWI feature of apparent diffusion coefficient (ADC). Together with the selected diagnostic features, various classical classification schemes were used to test their discrimination power through a cross validation scheme. The averaged measurements of sensitivity, specificity, AUC, and accuracy are 0.85, 0.89, 0.909, and 0.93, respectively. Multi-sided variables which characterize the morphological, kinetic, pathological properties, and the DWI measurement of ADC can dramatically improve the discriminatory power for breast lesions.
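
    A hedged reconstruction of the classical scheme described above (feature selection feeding a classifier, scored by cross-validation) on synthetic data, with columns standing in for age, slope, entropy, ADC, and the other measured features:

        import numpy as np
        from sklearn.pipeline import Pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        X = rng.standard_normal((234, 20))         # 234 lesions, 20 candidate features
        y = ((X[:, :7].sum(axis=1) + 0.5 * rng.standard_normal(234)) > 0).astype(int)

        model = Pipeline([
            ("scale", StandardScaler()),
            ("select", SelectKBest(f_classif, k=7)),   # keep 7 discriminative features
            ("clf", SVC(kernel="rbf", C=1.0)),
        ])
        print(cross_val_score(model, X, y, cv=10, scoring="accuracy").mean().round(3))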

  4. Learning to See Differently: Viewing Technology Diffusion in Teacher Education through the Lens of Organizational Change

    ERIC Educational Resources Information Center

    Wang, Yu-Mei; Patterson, Jerry

    2006-01-01

    While the discussion on the topic of technology diffusion in teacher education primarily centers on course design, program development, and faculty technology training, this article explores technology diffusion from the perspective of organizational change. Technology diffusion in teacher education is a multi-faceted task and, therefore, requires…

  5. PIV measurements in a compact return diffuser under multi-conditions

    NASA Astrophysics Data System (ADS)

    Zhou, L.; Lu, W. G.; Shi, W. D.

    2013-12-01

    Due to the complex three-dimensional geometries of impellers and diffusers, their design is a delicate and difficult task. Slight changes can lead to significant changes in hydraulic performance and internal flow structure. Conversely, pump design improvement can benefit from a grasp of the pump's internal flow pattern. The internal flow fields in a compact return diffuser have been investigated experimentally under multiple conditions. A special Particle Image Velocimetry (PIV) test rig was designed, and two-dimensional PIV measurements were successfully conducted in the diffuser mid-plane to capture the complex flow patterns. The analysis of the obtained results focuses on the flow structure in the diffuser, especially under part-load conditions. The vortex and recirculation flow patterns in the diffuser are captured and analysed accordingly. Under the design and over-load conditions the flow fields in the diffuser are uniform, whereas strong flow separation and back flow appear at the part-load flow rates, with strong back flow captured in one diffuser passage at 0.2Qdes.

  6. Modeling anomalous radial transport in kinetic transport codes

    NASA Astrophysics Data System (ADS)

    Bodi, K.; Krasheninnikov, S. I.; Cohen, R. H.; Rognlien, T. D.

    2009-11-01

    Anomalous transport is typically the dominant component of the radial transport in magnetically confined plasmas, where the physical origin of this transport is believed to be plasma turbulence. A model is presented for anomalous transport that can be used in continuum kinetic edge codes like TEMPEST, NEO and the next-generation code being developed by the Edge Simulation Laboratory. The model can also be adapted to particle-based codes. It is demonstrated that the model, with velocity-dependent diffusion and convection terms, can match a diagonal gradient-driven transport matrix as found in contemporary fluid codes, but can also include off-diagonal effects. The anomalous transport model is also combined with particle drifts and a particle/energy-conserving Krook collision operator to study possible synergistic effects with neoclassical transport. For the latter study, a velocity-independent anomalous diffusion coefficient is used to mimic the effect of long-wavelength ExB turbulence.
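
    The model form can be sketched directly: a kinetic radial flux Gamma(v) = -D(v) df/dr + V(v) f whose velocity moments give the particle and energy fluxes that a fluid transport matrix would specify (the coefficients below are arbitrary examples, not from the paper):

        import numpy as np

        v = np.linspace(0.1, 5.0, 400)            # speed in thermal units
        fM = v**2 * np.exp(-v**2 / 2.0)           # Maxwellian shell weight
        dfdr = -0.3 * fM                          # imposed radial gradient of f
        D_v = 1.0 / (1.0 + v**2)                  # velocity-dependent diffusivity
        V_v = -0.1 * v / (1.0 + v**2)             # velocity-dependent convection

        Gamma_v = -D_v * dfdr + V_v * fM          # kinetic radial flux density
        particle_flux = np.trapz(Gamma_v, v)              # 0th moment
        energy_flux = np.trapz(0.5 * v**2 * Gamma_v, v)   # 2nd moment
        print(particle_flux, energy_flux)
        # Matching these moments to a fluid code's coefficients fixes D(v), V(v);
        # off-diagonal transport enters through the velocity dependence.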

  7. DREAM-3D and the importance of model inputs and boundary conditions

    NASA Astrophysics Data System (ADS)

    Friedel, Reiner; Tu, Weichao; Cunningham, Gregory; Jorgensen, Anders; Chen, Yue

    2015-04-01

    Recent work on radiation belt 3D diffusion codes such as the Los Alamos "DREAM-3D" code has demonstrated the ability of such codes to reproduce the relativistic electron dynamics of realistic magnetospheric storm events - as long as sufficient "event-oriented" boundary conditions and code inputs such as wave powers, low-energy boundary conditions, background plasma densities, and the last closed drift shell (outer boundary) are available. In this talk we will argue that the main limiting factor in our modeling ability is no longer our inability to represent key physical processes that govern the dynamics of the radiation belts (radial, pitch-angle and energy diffusion) but rather our limitations in specifying accurate boundary conditions and code inputs. We use DREAM-3D runs to show the sensitivity of the modeled outcomes to these boundary conditions and inputs, and also discuss alternate "proxy" approaches to obtain the required inputs from other (ground-based) sources.

  8. Structural and Thermodynamic Factors of Suppressed Interdiffusion Kinetics in Multi-component High-entropy Materials

    PubMed Central

    Chang, Shou-Yi; Li, Chen-En; Huang, Yi-Chung; Hsu, Hsun-Feng; Yeh, Jien-Wei; Lin, Su-Jien

    2014-01-01

    We report multi-component high-entropy materials as extraordinarily robust diffusion barriers and clarify the highly suppressed interdiffusion kinetics in the multi-component materials from structural and thermodynamic perspectives. The failures of six alloy barriers with different numbers of elements, from unitary Ti to senary TiTaCrZrAlRu, against the interdiffusion of Cu and Si were characterized, and experimental results indicated that, with more elements incorporated, the failure temperature of the barriers increased from 550 to 900°C. The activation energy of Cu diffusion through the alloy barriers was determined to increase from 110 to 163 kJ/mole. Mechanistic analyses suggest that, structurally, severe lattice distortion strains and a high packing density caused by different atom sizes, and, thermodynamically, a strengthened cohesion provide a total increase of 55 kJ/mole in the activation energy of substitutional Cu diffusion, and are believed to be the dominant factors of suppressed interdiffusion kinetics through the multi-component barrier materials. PMID:24561911
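
    The quoted activation energies come from Arrhenius analysis, D = D0 exp(-Ea/RT); a minimal sketch of extracting Ea from diffusivity-temperature data (numbers synthetic, not from the paper):

        import numpy as np

        R = 8.314                                      # J/(mol K)
        T = np.array([700.0, 750.0, 800.0, 850.0])     # K
        Ea_true, D0 = 163e3, 1e-6
        rng = np.random.default_rng(4)
        D = D0 * np.exp(-Ea_true / (R * T)) * (1 + 0.02 * rng.standard_normal(T.size))

        # ln D = ln D0 - (Ea/R) * (1/T): the slope of ln D vs 1/T gives Ea
        slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
        print("Ea =", round(-slope * R / 1e3, 1), "kJ/mol")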

  9. Diffusion tensor imaging of early changes in corpus callosum after acute cerebral hemisphere lesions in newborns.

    PubMed

    Righini, Andrea; Doneda, Chiara; Parazzini, Cecilia; Arrigoni, Filippo; Matta, Ursula; Triulzi, Fabio

    2010-11-01

    The main purpose was to investigate any early diffusion tensor imaging (DTI) changes in the corpus callosum (CC) associated with acute cerebral hemisphere lesions in term newborns. We retrospectively analysed 19 cases of term newborns acutely affected by focal or multi-focal lesions: hypoxic-ischemic encephalopathy, hypoglycaemic encephalopathy, focal ischemic stroke and deep medullary vein associated lesions. DTI was acquired at 1.5 Tesla with a dedicated neonatal coil. DTI metrics (apparent diffusion coefficient (ADC), fractional anisotropy (FA), axial (λ∥) and radial (λ⊥) diffusivity) were measured in the hemisphere lesions and in the CC. The control group included seven normal newborns. The following significant differences were found between patients and normal controls in the CC: mean ADC was lower in patients (0.88 SD 0.23 versus 1.18 SD 0.07 μm²/ms), as was mean FA (0.50 SD 0.1 versus 0.67 SD 0.05) and mean λ∥ (1.61 SD 0.52 versus 2.36 SD 0.14 μm²/ms). In the CC, the percentage change of ADC was always negative independently of lesion age (with one exception), whereas in hemisphere lesions it was negative in earlier lesions but exceeded normal values in older lesions. The CC may undergo early DTI changes in newborns with acute focal or multi-focal hemisphere lesions of different aetiology. Although a direct insult to the CC cannot be totally ruled out, DTI changes in the CC (in particular λ∥) may also be compatible with very early Wallerian degeneration or pre-Wallerian degeneration.

  10. VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.; Cho, K.W.

    1991-12-01

    VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.
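
    The "neutron diffusion theory solutions" such codes produce are k-eigenvalue problems solved by power iteration over the fission source; a minimal 1-D, two-group version (all cross sections illustrative, not from any evaluated library):

        import numpy as np

        N, a = 100, 100.0                    # mesh points, slab width (cm)
        dx = a / (N - 1)
        D1, D2 = 1.4, 0.4                    # group diffusion coefficients (cm)
        Sr1, Sa2, Ss12 = 0.025, 0.10, 0.016  # removal, absorption, downscatter (1/cm)
        nSf1, nSf2 = 0.007, 0.16             # nu * Sigma_f (1/cm)

        def operator(D, S):
            """Tridiagonal -D d2/dx2 + S with zero-flux (phi = 0) boundary rows."""
            A = np.zeros((N, N))
            A[0, 0] = A[-1, -1] = 1.0
            for i in range(1, N - 1):
                A[i, i - 1] = A[i, i + 1] = -D / dx**2
                A[i, i] = 2.0 * D / dx**2 + S
            return A

        A1, A2 = operator(D1, Sr1), operator(D2, Sa2)
        phi1 = phi2 = np.ones(N)
        k = 1.0
        for it in range(500):
            src = nSf1 * phi1 + nSf2 * phi2          # fission source (born fast)
            b1 = src / k
            b1[[0, -1]] = 0.0
            phi1 = np.linalg.solve(A1, b1)
            b2 = Ss12 * phi1                         # thermal group fed by downscatter
            b2[[0, -1]] = 0.0
            phi2 = np.linalg.solve(A2, b2)
            k_new = k * (nSf1 * phi1 + nSf2 * phi2).sum() / src.sum()
            if abs(k_new - k) < 1e-8:
                k = k_new
                break
            k = k_new
        print(f"k-effective = {k:.5f} after {it + 1} iterations")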

  11. MHD code using multi graphical processing units: SMAUG+

    NASA Astrophysics Data System (ADS)

    Gyenge, N.; Griffiths, M. K.; Erdélyi, R.

    2018-01-01

    This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques, and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.

  12. Application of advanced computational codes in the design of an experiment for a supersonic throughflow fan rotor

    NASA Technical Reports Server (NTRS)

    Wood, Jerry R.; Schmidt, James F.; Steinke, Ronald J.; Chima, Rodrick V.; Kunik, William G.

    1987-01-01

    Increased emphasis on sustained supersonic or hypersonic cruise has revived interest in the supersonic throughflow fan as a possible component in advanced propulsion systems. Use of a fan that can operate with a supersonic inlet axial Mach number is attractive from the standpoint of reducing the inlet losses incurred in diffusing the flow from a supersonic flight Mach number to a subsonic one at the fan face. The design of the experiment using advanced computational codes to calculate the components required is described. The rotor was designed using existing turbomachinery design and analysis codes modified to handle fully supersonic axial flow through the rotor. A two-dimensional axisymmetric throughflow design code plus a blade element code were used to generate fan rotor velocity diagrams and blade shapes. A quasi-three-dimensional, thin shear layer Navier-Stokes code was used to assess the performance of the fan rotor blade shapes. The final design was stacked and checked for three-dimensional effects using a three-dimensional Euler code interactively coupled with a two-dimensional boundary layer code. The nozzle design in the expansion region was analyzed with a three-dimensional parabolized viscous code which corroborated the results from the Euler code. A translating supersonic diffuser was designed using these same codes.

  13. Simultaneously extracting multiple parameters via multi-distance and multi-exposure diffuse speckle contrast analysis

    PubMed Central

    Liu, Jialin; Zhang, Hongchao; Lu, Jian; Ni, Xiaowu; Shen, Zhonghua

    2017-01-01

    Recent advancements in diffuse speckle contrast analysis (DSCA) have opened the path for noninvasive acquisition of deep tissue microvasculature blood flow. In fact, in addition to blood flow index αDB, the variations of tissue optical absorption μa, reduced scattering coefficients μs′, as well as coherence factor β can modulate temporal fluctuations of speckle patterns. In this study, we use multi-distance and multi-exposure DSCA (MDME-DSCA) to simultaneously extract multiple parameters such as μa, μs′, αDB, and β. The validity of MDME-DSCA has been validated by the simulated data and phantoms experiments. Moreover, as a comparison, the results also show that it is impractical to simultaneously obtain multiple parameters by multi-exposure DSCA (ME-DSCA). PMID:29082083

  14. Parcellation of the Healthy Neonatal Brain into 107 Regions Using Atlas Propagation through Intermediate Time Points in Childhood.

    PubMed

    Blesa, Manuel; Serag, Ahmed; Wilkinson, Alastair G; Anblagan, Devasuda; Telford, Emma J; Pataky, Rozalia; Sparrow, Sarah A; Macnaught, Gillian; Semple, Scott I; Bastin, Mark E; Boardman, James P

    2016-01-01

    Neuroimage analysis pipelines rely on parcellated atlases generated from healthy individuals to provide anatomic context to structural and diffusion MRI data. Atlases constructed using adult data introduce bias into studies of early brain development. We aimed to create a neonatal brain atlas of healthy subjects that can be applied to multi-modal MRI data. Structural and diffusion 3T MRI scans were acquired soon after birth from 33 typically developing neonates born at term (mean postmenstrual age at birth 39+5 weeks, range 37+2 to 41+6). An adult brain atlas (SRI24/TZO) was propagated to the neonatal data using temporal registration via childhood templates with dense temporal samples (NIH Pediatric Database), with the final atlas (Edinburgh Neonatal Atlas, ENA33) constructed using the Symmetric Group Normalization (SyGN) method. After this step, the computed final transformations were applied to T2-weighted data, and fractional anisotropy, mean diffusivity, and tissue segmentations to provide a multi-modal atlas with 107 anatomical regions; a symmetric version was also created to facilitate studies of laterality. Volumes of each region of interest were measured to provide reference data from normal subjects. Because this atlas is generated from step-wise propagation of adult labels through intermediate time points in childhood, it may serve as a useful starting point for modeling brain growth during development.

  15. Numerical Methods for Analysis of Charged Vacancy Diffusion in Dielectric Solids

    DTIC Science & Technology

    2006-12-01

    theory for charged vacancy diffusion in elastic dielectric materials is formulated and implemented numerically in a finite difference code. The... one of the co-authors on neutral vacancy kinetics (Grinfeld and Hazzledine, 1997). The theory is implemented numerically in a finite difference code... accuracy of order (Δx)², using a finite difference approximation (Hoffman, 1992) for the second spatial derivative of φ: (φ_{i+1} - 2φ_i + φ_{i-1}) / (Δx)²
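
    A runnable version of that discretization, checked against a function with a known second derivative:

        import numpy as np

        # Central difference: phi''(x_i) ~ (phi[i+1] - 2*phi[i] + phi[i-1]) / dx**2,
        # with truncation error of order dx**2.
        x = np.linspace(0.0, np.pi, 101)
        dx = x[1] - x[0]
        phi = np.sin(x)
        d2phi = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
        print(np.max(np.abs(d2phi + np.sin(x[1:-1]))))   # error ~ dx**2 / 12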

  16. Surface acoustic wave coding for orthogonal frequency coded devices

    NASA Technical Reports Server (NTRS)

    Malocha, Donald (Inventor); Kozlovski, Nikolai (Inventor)

    2011-01-01

    Methods and systems for coding SAW OFC devices to mitigate code collisions in a wireless multi-tag system. Each device produces plural stepped frequencies as an OFC signal with a chip offset delay to increase code diversity. A method for assigning a different OFC to each device includes using a matrix based on the number of OFCs needed and the number of chips per code, populating each matrix cell with an OFC chip, and assigning the codes from the matrix to the devices. The asynchronous passive multi-tag system includes plural surface acoustic wave devices each producing a different OFC signal having the same number of chips and including a chip offset time delay, an algorithm for assigning OFCs to each device, and a transceiver to transmit an interrogation signal and receive OFC signals in response with minimal code collisions during transmission.
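
    A hypothetical sketch of the matrix-based assignment (chip offset delays omitted; the point is only that each tag receives a distinct time ordering of the shared chip frequencies):

        import itertools

        n_chips = 4                          # chips per code
        chips = list(range(n_chips))         # stepped chip frequencies f0..f3
        n_devices = 6

        # One matrix row per device, each a distinct time ordering of the chips:
        code_matrix = list(itertools.islice(itertools.permutations(chips), n_devices))
        for tag_id, code in enumerate(code_matrix):
            print(f"tag {tag_id}: chip order {code}")
        # Any two rows differ in at least one time slot, which is the collision-
        # mitigation property the assignment matrix is built to guarantee.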

  17. Multi-phase SPH modelling of violent hydrodynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Mokos, Athanasios; Rogers, Benedict D.; Stansby, Peter K.; Domínguez, José M.

    2015-11-01

    This paper presents the acceleration of multi-phase smoothed particle hydrodynamics (SPH) using a graphics processing unit (GPU) enabling large numbers of particles (10-20 million) to be simulated on just a single GPU card. With novel hardware architectures such as a GPU, the optimum approach to implement a multi-phase scheme presents some new challenges. Many more particles must be included in the calculation and there are very different speeds of sound in each phase with the largest speed of sound determining the time step. This requires efficient computation. To take full advantage of the hardware acceleration provided by a single GPU for a multi-phase simulation, four different algorithms are investigated: conditional statements, binary operators, separate particle lists and an intermediate global function. Runtime results show that the optimum approach needs to employ separate cell and neighbour lists for each phase. The profiler shows that this approach leads to a reduction in both memory transactions and arithmetic operations giving significant runtime gains. The four different algorithms are compared to the efficiency of the optimised single-phase GPU code, DualSPHysics, for 2-D and 3-D simulations which indicate that the multi-phase functionality has a significant computational overhead. A comparison with an optimised CPU code shows a speed up of an order of magnitude over an OpenMP simulation with 8 threads and two orders of magnitude over a single thread simulation. A demonstration of the multi-phase SPH GPU code is provided by a 3-D dam break case impacting an obstacle. This shows better agreement with experimental results than an equivalent single-phase code. The multi-phase GPU code enables a convergence study to be undertaken on a single GPU with a large number of particles that otherwise would have required large high performance computing resources.

  18. Multi-model Analysis of Diffusion-weighted Imaging of Normal Testes at 3.0 T: Preliminary Findings.

    PubMed

    Min, Xiangde; Feng, Zhaoyan; Wang, Liang; Cai, Jie; Li, Basen; Ke, Zan; Zhang, Peipei; You, Huijuan; Yan, Xu

    2018-04-01

    This study aimed to establish diffusion quantitative parameters (apparent diffusion coefficient [ADC], DDC, α, D_app, and K_app) in normal testes at 3.0 T. Sixty-four healthy volunteers in two age groups (A: 10-39 years; B: ≥ 40 years) underwent diffusion-weighted imaging scanning at 3.0 T. ADC_1000, ADC_2000, ADC_3000, DDC, α, D_app, and K_app were calculated using the mono-exponential, stretched-exponential, and kurtosis models. The correlations between the parameters and age were analyzed. The parameters were compared between the age groups and between the right and the left testes. The average ADC_1000, ADC_2000, ADC_3000, DDC, α, D_app, and K_app values did not significantly differ between the right and the left testes (P > .05 for all). The following significant correlations were found: positive correlations between age and testicular ADC_1000, ADC_2000, ADC_3000, DDC, and D_app (r = 0.516, 0.518, 0.518, 0.521, and 0.516, respectively; P < .01 for all) and negative correlations between age and testicular α and K_app (r = -0.363, -0.427, respectively; P < .01 for both). Compared to group B, in group A, ADC_1000, ADC_2000, ADC_3000, DDC, and D_app were significantly lower (P < .05 for all), but α and K_app were significantly higher (P < .05 for both). Our study demonstrated the applicability of the testicular mono-exponential, stretched-exponential, and kurtosis models. Our results can help establish a baseline for the normal testicular parameters in these diffusion models. The contralateral normal testis can serve as a suitable reference for evaluating abnormalities of the other side. The effect of age on these parameters requires further attention.
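
    The three named signal models, fit to synthetic multi-b-value data (b-values, parameters, and noise level are illustrative; D is in um^2/ms so that b*D is dimensionless):

        import numpy as np
        from scipy.optimize import curve_fit

        b = np.array([0, 250, 500, 750, 1000, 1500, 2000, 2500, 3000]) / 1e3  # ms/um^2

        def mono(b, S0, ADC):
            return S0 * np.exp(-b * ADC)

        def stretched(b, S0, DDC, alpha):
            return S0 * np.exp(-(b * DDC) ** alpha)

        def kurtosis(b, S0, D, K):
            return S0 * np.exp(-b * D + (b * D) ** 2 * K / 6.0)

        rng = np.random.default_rng(5)
        S = kurtosis(b, 1.0, 0.9, 0.8) + 0.005 * rng.standard_normal(b.size)

        for name, model, p0 in [("mono", mono, (1.0, 1.0)),
                                ("stretched", stretched, (1.0, 1.0, 0.9)),
                                ("kurtosis", kurtosis, (1.0, 1.0, 0.5))]:
            popt, _ = curve_fit(model, b, S, p0=p0, maxfev=10000)
            print(name, np.round(popt, 3))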

  19. Background Error Correlation Modeling with Diffusion Operators

    DTIC Science & Technology

    2013-01-01

    Book chapter (07-10-2013), Chapter 8: Background error correlation modeling with diffusion operators. ...normalization... ...field, then a structure like this simulates enhanced diffusive transport of model errors in the regions of strong currents on the background of

  20. Development of the 3DHZETRN code for space radiation protection

    NASA Astrophysics Data System (ADS)

    Wilson, John; Badavi, Francis; Slaba, Tony; Reddell, Brandon; Bahadori, Amir; Singleterry, Robert

    Space radiation protection requires computationally efficient shield assessment methods that have been verified and validated. The HZETRN code is the engineering design code used for low Earth orbit dosimetric analysis and astronaut record keeping, with end-to-end validation to twenty percent in Space Shuttle and International Space Station operations. HZETRN treated diffusive leakage only at the distal surface, limiting its application to systems with a large radius of curvature. A revision of HZETRN that included forward and backward diffusion allowed neutron leakage to be evaluated at both the near and distal surfaces. That revision provided a deterministic code of high computational efficiency that was in substantial agreement with Monte Carlo (MC) codes in flat plates (at least to the degree that MC codes agree among themselves). In the present paper, the 3DHZETRN formalism, capable of evaluation in general geometry, is described. Benchmarking against MC codes (Geant4, FLUKA, MCNP6, and PHITS) in simple shapes such as spheres within spherical shells and boxes will help quantify uncertainty. Connection of 3DHZETRN to general geometry will be discussed.

  1. An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Allison, E-mail: lewis.allison10@gmail.com; Smith, Ralph; Williams, Brian

    For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
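
    A toy version of the sequential selection step (a discrete parameter grid, Gaussian noise, and an exponential stand-in for the low-fidelity model; the paper's framework is far more general):

        import numpy as np

        rng = np.random.default_rng(6)
        theta = np.linspace(0.5, 2.0, 301)        # parameter grid
        prior = np.full(theta.size, 1.0 / theta.size)
        sigma = 0.05                              # noise / model discrepancy

        def low_fidelity(x, th):
            return np.exp(-th * x)                # stand-in response model

        def expected_information_gain(x, n_mc=200):
            """Average KL(posterior || prior) over simulated high-fidelity data."""
            gains = []
            for _ in range(n_mc):
                th_true = rng.choice(theta, p=prior)
                y = low_fidelity(x, th_true) + sigma * rng.standard_normal()
                like = np.exp(-0.5 * ((y - low_fidelity(x, theta)) / sigma) ** 2)
                post = prior * like
                post /= post.sum()
                mask = post > 0
                gains.append(np.sum(post[mask] * np.log(post[mask] / prior[mask])))
            return np.mean(gains)

        candidates = np.linspace(0.1, 5.0, 15)
        eig = [expected_information_gain(x) for x in candidates]
        print("most informative design condition:", candidates[int(np.argmax(eig))])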

  2. Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network

    PubMed Central

    Lin, Kai; Wang, Di; Hu, Long

    2016-01-01

    With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses data fusion, multi-channel transmission, and network coding to improve data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC). The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S) evidence theory to classify the sensor nodes into different classes according to the data content. Using the result of the classification, the CMNC algorithm also provides a channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared with other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods. PMID:27376302
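
    The fusion-driven classification rests on Dempster's rule of combination; the rule itself is standard and compact (the mass assignments below are illustrative, not from the paper):

        from itertools import product

        def dempster_combine(m1, m2):
            """Combine two mass functions whose focal elements are frozensets."""
            combined, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb            # mass falling on the empty set
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        # Two sensors' evidence about a node's data class:
        A, B = frozenset({"classA"}), frozenset({"classB"})
        AB = A | B                                 # ignorance
        m1 = {A: 0.6, AB: 0.4}
        m2 = {A: 0.5, B: 0.3, AB: 0.2}
        print(dempster_combine(m1, m2))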

  3. A conservative MHD scheme on unstructured Lagrangian grids for Z-pinch hydrodynamic simulations

    NASA Astrophysics Data System (ADS)

    Wu, Fuyuan; Ramis, Rafael; Li, Zhenghong

    2018-03-01

    A new algorithm to model resistive magnetohydrodynamics (MHD) in Z-pinches has been developed. Two-dimensional axisymmetric geometry with azimuthal magnetic field Bθ is considered. Discretization is carried out using unstructured meshes made up of arbitrarily connected polygons. The algorithm is fully conservative for mass, momentum, and energy. Matter energy and magnetic energy are managed separately. The diffusion of magnetic field is solved using a derivative of the Symmetric-Semi-Implicit scheme, Livne et al. (1985) [23], where unconditional stability is obtained without needing to solve large sparse systems of equations. This MHD package has been integrated into the radiation-hydrodynamics code MULTI-2D, Ramis et al. (2009) [20], that includes hydrodynamics, laser energy deposition, heat conduction, and radiation transport. This setup allows to simulate Z-pinch configurations relevant for Inertial Confinement Fusion.

  4. Towards energy-efficient nonoscillatory forward-in-time integrations on lat-lon grids

    NASA Astrophysics Data System (ADS)

    Polkowski, Marcin; Piotrowski, Zbigniew; Ryczkowski, Adam

    2017-04-01

    The design of the next-generation weather prediction models calls for new algorithmic approaches allowing for robust integrations of atmospheric flow over complex orography at sub-km resolutions. These need to be accompanied by efficient implementations exposing multi-level parallelism, capable of running on modern supercomputing architectures. Here we present the recent advances in the energy-efficient implementation of the consistent soundproof/implicit compressible EULAG dynamical core of the COSMO weather prediction framework. Based on the experiences of the atmospheric dwarfs developed within the H2020 ESCAPE project, we develop efficient, architecture-agnostic implementations of fully three-dimensional MPDATA advection schemes and a generalized diffusion operator in curvilinear coordinates and spherical geometry. We compare the optimized Fortran implementation with a preliminary C++ implementation employing the GridTools library, allowing for integrations on CPU and GPU while maintaining a single source code.

  5. THRSTER: A THRee-STream Ejector Ramjet Analysis and Design Tool

    NASA Technical Reports Server (NTRS)

    Chue, R. S.; Sabean, J.; Tyll, J.; Bakos, R. J.

    2000-01-01

    An engineering tool for analyzing ejectors in rocket based combined cycle (RBCC) engines has been developed. A key technology for multi-cycle RBCC propulsion systems is the ejector, which functions as the compression stage of the ejector ramjet cycle. The THRee STream Ejector Ramjet analysis tool was developed to analyze the complex aerothermodynamic and combustion processes that occur in this device. The formulated model consists of three quasi-one-dimensional streams, one each for the ejector primary flow, the secondary flow, and the mixed region. The model space marches through the mixer, combustor, and nozzle to evaluate the solution along the engine. In its present form, the model is intended for an analysis mode in which the diffusion rates of the primary and secondary into the mixed stream are stipulated. The model offers the ability to analyze the highly two-dimensional ejector flowfield while still benefiting from the simplicity and speed of an engineering tool. To validate the developed code, wall static pressure measurements from the Penn-State and NASA-ART RBCC experiments were used for comparison with the results generated by the code. The calculated solutions were generally found to be in satisfactory agreement with the pressure measurements along the engines, although further modeling effort may be required when a strong shock train is formed at the rocket exhaust. The range of parameters in which the code generates valid results is presented and discussed.

  6. THRSTER: A Three-Stream Ejector Ramjet Analysis and Design Tool

    NASA Technical Reports Server (NTRS)

    Chue, R. S.; Sabean, J.; Tyll, J.; Bakos, R. J.; Komar, D. R. (Technical Monitor)

    2000-01-01

    An engineering tool for analyzing ejectors in rocket based combined cycle (RBCC) engines has been developed. A key technology for multi-cycle RBCC propulsion systems is the ejector, which functions as the compression stage of the ejector ramjet cycle. The THRee STream Ejector Ramjet analysis tool was developed to analyze the complex aerothermodynamic and combustion processes that occur in this device. The formulated model consists of three quasi-one-dimensional streams, one each for the ejector primary flow, the secondary flow, and the mixed region. The model space marches through the mixer, combustor, and nozzle to evaluate the solution along the engine. In its present form, the model is intended for an analysis mode in which the diffusion rates of the primary and secondary into the mixed stream are stipulated. The model offers the ability to analyze the highly two-dimensional ejector flowfield while still benefiting from the simplicity and speed of an engineering tool. To validate the developed code, wall static pressure measurements from the Penn-State and NASA-ART RBCC experiments were used for comparison with the results generated by the code. The calculated solutions were generally found to be in satisfactory agreement with the pressure measurements along the engines, although further modeling effort may be required when a strong shock train is formed at the rocket exhaust. The range of parameters in which the code generates valid results is presented and discussed.

  7. Combined inverse-forward artificial neural networks for fast and accurate estimation of the diffusion coefficients of cartilage based on multi-physics models.

    PubMed

    Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A

    2016-09-06

    Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, minimize the need for simplifying assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) and enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variation during the training process. The required level of stochastic variation is determined by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain fast and accurate estimates of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach, and both approaches are shown to yield comparable results. Copyright © 2016 Elsevier Ltd. All rights reserved.
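
    The inverse-forward pairing can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: a toy exponential-saturation curve stands in for the biphasic-solute finite-bath model, scikit-learn MLPRegressor networks stand in for the ANNs, and all parameter values are invented.

```python
# Minimal sketch (not the authors' code): a toy uptake curve stands in
# for the biphasic-solute finite-bath model of diffusion in cartilage.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)          # time grid (arbitrary units)

def concentration_curve(D):
    """Hypothetical finite-bath uptake curve for diffusivity D."""
    return 1.0 - np.exp(-D * t)

D_train = rng.uniform(0.1, 2.0, 500)
curves = np.array([concentration_curve(D) for D in D_train])
noisy = curves + rng.normal(0.0, 0.01, curves.shape)  # stochastic variation

# Inverse ANN: concentration-time curve -> diffusion coefficient.
inverse = MLPRegressor((32, 32), max_iter=3000, random_state=0)
inverse.fit(noisy, D_train)
# Forward ANN: diffusion coefficient -> concentration-time curve.
forward = MLPRegressor((32, 32), max_iter=3000, random_state=0)
forward.fit(D_train.reshape(-1, 1), curves)

D_est = inverse.predict(concentration_curve(0.8).reshape(1, -1))[0]
resid = np.abs(forward.predict([[D_est]])[0] - concentration_curve(0.8)).max()
print(f"estimated D = {D_est:.3f} (true 0.8), forward residual = {resid:.3f}")
```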

  8. Medical image classification based on multi-scale non-negative sparse coding.

    PubMed

    Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar

    2017-11-01

    With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. Firstly, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Secondly, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain the discriminative sparse representation of medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is applied to conduct medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Multi-threading performance of Geant4, MCNP6, and PHITS Monte Carlo codes for tetrahedral-mesh geometry.

    PubMed

    Han, Min Cheol; Yeom, Yeon Soo; Lee, Hyun Su; Shin, Bangho; Kim, Chan Hyeong; Furuta, Takuya

    2018-05-04

    In this study, the multi-threading performance of the Geant4, MCNP6, and PHITS codes was evaluated as a function of the number of threads (N) and the complexity of the tetrahedral-mesh phantom. For this, three tetrahedral-mesh phantoms of varying complexity (simple, moderately complex, and highly complex) were prepared and implemented in the three Monte Carlo codes, in photon and neutron transport simulations. Subsequently, for each case, the initialization time, calculation time, and memory usage were measured as a function of the number of threads used in the simulation. It was found that for all codes the initialization time significantly increased with the complexity of the phantom, but not with the number of threads. Geant4 exhibited a much longer initialization time than the other codes, especially for the most complex phantom (MRCP). The improvement in computation speed due to multi-threading was quantified by the speed-up factor, the ratio of the computation speed of a multi-threaded run to that of a single-threaded run. Geant4 showed the best multi-threading performance among the codes considered in this study, with the speed-up factor increasing almost linearly with the number of threads, reaching ~30 when N = 40. PHITS and MCNP6 showed a much smaller increase of the speed-up factor with the number of threads. For PHITS, the speed-up factors were low when N = 40. For MCNP6, the increase of the speed-up factors was better, but they were still less than ~10 when N = 40. As for memory usage, Geant4 was found to use more memory than the other codes. In addition, the memory usage of Geant4 increased more rapidly with the number of threads than that of the other codes, reaching as high as ~74 GB when N = 40 for the complex phantom (MRCP). Notably, the memory usage of PHITS was much lower than that of the other codes, regardless of both the complexity of the phantom and the number of threads, and it hardly increased with the number of threads even for the MRCP.
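
    The speed-up factor defined above is a simple ratio. A small worked example with invented timings (not values from the paper):

```python
# Invented timings illustrating the speed-up factor: the ratio of
# multi-threaded to single-threaded computation speed.
t_single = 1200.0                          # run time at N = 1, seconds
t_multi = {8: 170.0, 16: 95.0, 40: 42.0}   # run times at N threads
for n, t in t_multi.items():
    print(f"N = {n:2d}: speed-up factor = {t_single / t:.1f}")
```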

  10. Bayesian Atmospheric Radiative Transfer (BART): Model, Statistics Driver, and Application to HD 209458b

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Harrington, Joseph; Blecic, Jasmina; Stemm, Madison M.; Lust, Nate B.; Foster, Andrew S.; Rojo, Patricio M.; Loredo, Thomas J.

    2014-11-01

    Multi-wavelength secondary-eclipse and transit depths probe the thermo-chemical properties of exoplanets. In recent years, several research groups have developed retrieval codes to analyze the existing data and study the prospects of future facilities. However, the scientific community has limited access to these packages. Here we premiere the open-source Bayesian Atmospheric Radiative Transfer (BART) code. We discuss the key aspects of the radiative-transfer algorithm and the statistical package. The radiation code includes line databases for all HITRAN molecules, high-temperature H2O, TiO, and VO, and includes a preprocessor for adding additional line databases without recompiling the radiation code. Collision-induced absorption lines are available for H2-H2 and H2-He. The parameterized thermal and molecular abundance profiles can be modified arbitrarily without recompilation. The generated spectra are integrated over arbitrary bandpasses for comparison to data. BART's statistical package, Multi-core Markov-chain Monte Carlo (MC3), is a general-purpose MCMC module. MC3 implements the Differential-Evolution Markov Chain Monte Carlo algorithm (ter Braak 2006, 2009). MC3 converges 20-400 times faster than the usual Metropolis-Hastings MCMC algorithm, and in addition uses the Message Passing Interface (MPI) to parallelize the MCMC chains. We apply the BART retrieval code to the HD 209458b data set to estimate the planet's temperature profile and molecular abundances. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
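
    The differential-evolution proposal at the heart of such a sampler is compact enough to sketch. The following is a hedged illustration of the ter Braak (2006) DE-MC move on a toy posterior, not the MC3 source; the chain count, scaling factor, and jitter level are conventional choices.

```python
# Differential-evolution MCMC proposal (ter Braak 2006) on a toy
# posterior; a hedged sketch, not the MC3 implementation.
import numpy as np

rng = np.random.default_rng(1)
log_post = lambda x: -0.5 * np.sum(x**2)     # standard 2-D Gaussian

n_chains, n_dim, n_steps = 8, 2, 3000
gamma = 2.38 / np.sqrt(2 * n_dim)            # conventional DE-MC scaling
chains = rng.normal(size=(n_chains, n_dim))
samples = []
for _ in range(n_steps):
    for i in range(n_chains):
        # Jump along the difference of two other randomly chosen chains.
        a, b = rng.choice([j for j in range(n_chains) if j != i],
                          size=2, replace=False)
        prop = chains[i] + gamma * (chains[a] - chains[b]) \
                         + rng.normal(0.0, 1e-4, n_dim)   # small jitter
        if np.log(rng.random()) < log_post(prop) - log_post(chains[i]):
            chains[i] = prop
    samples.append(chains.copy())
print("posterior mean ~", np.mean(samples, axis=(0, 1)))  # near [0, 0]
```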

  11. Diffuse malignant peritoneal mesothelioma: Evaluation of systemic chemotherapy with comprehensive treatment through the RENAPE Database: Multi-Institutional Retrospective Study.

    PubMed

    Kepenekian, V; Elias, D; Passot, G; Mery, E; Goere, D; Delroeux, D; Quenet, F; Ferron, G; Pezet, D; Guilloit, J M; Meeus, P; Pocard, M; Bereder, J M; Abboud, K; Arvieux, C; Brigand, C; Marchal, F; Classe, J M; Lorimier, G; De Chaisemartin, C; Guyon, F; Mariani, P; Ortega-Deballon, P; Isaac, S; Maurice, C; Gilly, F N; Glehen, O

    2016-09-01

    Diffuse malignant peritoneal mesothelioma (DMPM) is a severe disease with mainly locoregional evolution. Cytoreductive surgery and hyperthermic intraperitoneal chemotherapy (CRS-HIPEC) is the reported treatment with the longest survival. The aim of this study was to evaluate the impact of perioperative systemic chemotherapy strategies on survival and postoperative outcomes in patients with DMPM treated with curative intent with CRS-HIPEC, using a multi-institutional database: the French RENAPE network. From 1991 to 2014, 126 DMPM patients underwent CRS-HIPEC at 20 tertiary centres. The population was divided into four groups according to perioperative treatment: neoadjuvant chemotherapy only (NA), adjuvant chemotherapy only (ADJ), perioperative chemotherapy (PO) and no chemotherapy before or after CRS-HIPEC (NoC). All groups (NA: n = 42; ADJ: n = 16; PO: n = 16; NoC: n = 48) were comparable regarding clinicopathological data and the main DMPM prognostic factors. After a median follow-up of 61 months, the 5-year overall survival (OS) was 40%, 67%, 62% and 56% in the NA, ADJ, PO and NoC groups, respectively (P = 0.049). Major complications occurred in 41%, 45%, 35% and 41% of patients from the NA, ADJ, PO and NoC groups, respectively (P = 0.299). In multivariate analysis, NA was independently associated with worse OS (hazard ratio, 2.30; 95% confidence interval, 1.07-4.94; P = 0.033). This retrospective study suggests that adjuvant chemotherapy may delay recurrence and improve survival, and that NA may negatively impact survival for patients with DMPM who undergo CRS-HIPEC with curative intent. Upfront CRS and HIPEC should be considered when achievable, pending a stronger level of scientific evidence. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Numerical stability of the error diffusion concept

    NASA Astrophysics Data System (ADS)

    Weissbach, Severin; Wyrowski, Frank

    1992-10-01

    The error diffusion algorithm is an easily implementable means to handle nonlinearities in signal processing, e.g. in picture binarization and in the coding of diffractive elements. The numerical stability of the algorithm depends on the choice of the diffusion weights. A criterion for the stability of the algorithm is presented and evaluated for some examples.
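
    As a concrete instance of the algorithm under discussion, the sketch below applies the classic Floyd-Steinberg weights (one common, stable choice; the paper's criterion addresses weight choices in general) to binarize an image. This is an illustration, not the paper's code.

```python
# Floyd-Steinberg error diffusion for binarization; the weights
# 7/16, 3/16, 5/16, 1/16 sum to one, a conventional stable choice.
import numpy as np

def binarize(img):
    """img: 2-D float array in [0, 1]; returns a 0/1 array."""
    work, out = img.astype(float).copy(), np.zeros(img.shape)
    h, w = work.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if work[y, x] >= 0.5 else 0.0
            err = work[y, x] - out[y, x]    # quantization error
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1, x + 1] += err * 1 / 16
    return out

print(binarize(np.full((8, 8), 0.5)).mean())   # mid-gray roughly preserved
```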

  13. Experimental and computational data from a small rocket exhaust diffuser

    NASA Astrophysics Data System (ADS)

    Stephens, Samuel E.

    1993-06-01

    The Diagnostics Testbed Facility (DTF) at the NASA Stennis Space Center in Mississippi is a versatile facility that is used primarily to aid in the development of nonintrusive diagnostics for liquid rocket engine testing. The DTF consists of a fixed, 1200 lbf thrust, pressure fed, liquid oxygen/gaseous hydrogen rocket engine, and associated support systems. An exhaust diffuser has been fabricated and installed to provide subatmospheric pressures at the exit of the engine. The diffuser aerodynamic design was calculated prior to fabrication using the PARC Navier-Stokes computational fluid dynamics code. The diffuser was then fabricated and tested at the DTF. Experimental data from these tests were acquired to determine the operational characteristics of the system and to correlate the actual and predicted flow fields. The results show that a good engineering approximation of overall diffuser performance can be made using the PARC Navier-Stokes code and a simplified geometry. Correlations between actual and predicted cell pressure and initial plume expansion in the diffuser are good; however, the wall pressure profiles do not correlate as well with the experimental data.

  14. Derivation of effective fission gas diffusivities in UO2 from lower length scale simulations and implementation of fission gas diffusion models in BISON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersson, Anders David Ragnar; Pastore, Giovanni; Liu, Xiang-Yang

    2014-11-07

    This report summarizes the development of new fission gas diffusion models from lower-length-scale simulations and the assessment of these models in terms of annealing experiments and fission gas release simulations using the BISON fuel performance code. Based on the mechanisms established from density functional theory (DFT) and empirical potential calculations, continuum models for diffusion of xenon (Xe) in UO2 were derived for both intrinsic conditions and under irradiation. The importance of the large XeU3O cluster (a Xe atom in a uranium + oxygen vacancy trap site with two bound uranium vacancies) is emphasized, which is a consequence of its high mobility and stability. These models were implemented in the MARMOT phase field code, which is used to calculate effective Xe diffusivities for various irradiation conditions. The effective diffusivities were used in BISON to calculate fission gas release for a number of test cases. The results are assessed against experimental data, and future directions for research are outlined based on the conclusions.

  15. [The application of multi-slice CT dynamic enhancement scan in the diagnosis and treatment of colonic lymphomas].

    PubMed

    Wang, Xi-ming; Wu, Le-bin; Zhang, Yun-ting; Li, Zhen-jia; Liu, Chen

    2006-11-01

    To discuss the value of multi-slice CT dynamic enhancement scanning in the diagnosis and treatment of colonic lymphomas. Sixteen patients with colonic lymphomas underwent multi-slice CT dynamic enhancement scans; axial images and reconstructed VR, MPR and CTVE images were analyzed, and each patient was diagnosed. Appearances of primary colorectal lymphomas were categorized into focal and diffuse lesions, found in 6 and 10 patients, respectively. The accuracy rate of diagnosis was 87.5%. MSCT dynamic scanning has distinctive superiority in the diagnosis and treatment of colonic lymphomas.

  16. A fractional motion diffusion model for grading pediatric brain tumors.

    PubMed

    Karaman, M Muge; Wang, He; Sui, Yi; Engelhard, Herbert H; Li, Yuhua; Zhou, Xiaohong Joe

    2016-01-01

    To demonstrate the feasibility of a novel fractional motion (FM) diffusion model for distinguishing low- versus high-grade pediatric brain tumors; and to investigate its possible advantage over apparent diffusion coefficient (ADC) and/or a previously reported continuous-time random-walk (CTRW) diffusion model. With approval from the institutional review board and written informed consents from the legal guardians of all participating patients, this study involved 70 children with histopathologically-proven brain tumors (30 low-grade and 40 high-grade). Multi-b-value diffusion images were acquired and analyzed using the FM, CTRW, and mono-exponential diffusion models. The FM parameters, D_fm, φ, ψ (non-Gaussian diffusion statistical measures), and the CTRW parameters, D_m, α, β (non-Gaussian temporal and spatial diffusion heterogeneity measures), were compared between the low- and high-grade tumor groups by using a Mann-Whitney-Wilcoxon U test. The performance of the FM model for differentiating between low- and high-grade tumors was evaluated and compared with that of the CTRW and the mono-exponential models using a receiver operating characteristic (ROC) analysis. The FM parameters were significantly lower (p < 0.0001) in the high-grade (D_fm: 0.81 ± 0.26, φ: 1.40 ± 0.10, ψ: 0.42 ± 0.11) than in the low-grade (D_fm: 1.52 ± 0.52, φ: 1.64 ± 0.13, ψ: 0.67 ± 0.13) tumor groups. The ROC analysis showed that the FM parameters offered better specificity (88% versus 73%), sensitivity (90% versus 82%), accuracy (88% versus 78%), and area under the curve (AUC, 93% versus 80%) in discriminating tumor malignancy compared to the conventional ADC. The performance of the FM model was similar to that of the CTRW model. Similar to the CTRW model, the FM model can improve differentiation between low- and high-grade pediatric brain tumors over ADC.

  17. THERMOS. 30-Group ENDF/B Scattered Kernels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCrosson, F.J.; Finch, D.R.

    1973-12-01

    These data are 30-group THERMOS thermal scattering kernels for P0 to P5 Legendre orders for every temperature of every material from s(alpha,beta) data stored in the ENDF/B library. These scattering kernels were generated using the FLANGE2 computer code. To test the kernels, the integral properties of each set of kernels were determined by a precision integration of the diffusion length equation and compared to experimental measurements of these properties. In general, the agreement was very good. Details of the methods used and results obtained are contained in the reference. The scattering kernels are organized into a two-volume magnetic tape library from which they may be retrieved easily for use in any 30-group THERMOS library.

  18. One-dimensional energetic particle quasilinear diffusion for realistic TAE instabilities

    NASA Astrophysics Data System (ADS)

    Duarte, Vinicius; Ghantous, Katy; Berk, Herbert; Gorelenkov, Nikolai

    2014-10-01

    Owing to the proximity of the characteristic phase (Alfvén) velocity and typical energetic particle (EP) superthermal velocities, toroidicity-induced Alfvén eigenmodes (TAEs) can be resonantly destabilized, endangering plasma performance. It is therefore of utmost importance to understand the deleterious effects on confinement resulting from the fast-ion-driven instabilities expected in fusion-grade plasmas. We propose to study the interaction of EPs and TAEs using a line-broadened quasilinear model, which captures the interaction in both regimes of isolated and overlapping modes. The resonant particles diffuse in phase space, where the problem essentially reduces to one dimension, with constant kinetic energy and diffusion mainly along the canonical toroidal angular momentum. Mode structures and wave-particle resonances are computed by the NOVA code and are used in a quasilinear diffusion code, now being written, to study the evolution of the distribution function, under the assumption that the mode structures can be considered virtually unaltered during the diffusion. A new scheme for the resonant particle diffusion is proposed that builds on the 1-D nature of the diffusion from a single mode, leading to a momentum-conserving difference scheme even when modes overlap.
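
    A flux-conservative discretization of such a 1-D quasilinear equation is easy to exhibit. The sketch below, which is not the scheme from the NOVA-coupled code, advances df/dt = d/dP ( D(P) df/dP ) with zero-flux boundaries; the diffusion coefficient profile is invented. Conservation holds because interior face fluxes cancel in pairs.

```python
# Flux-conservative explicit update for 1-D momentum-space diffusion;
# an illustrative sketch with a made-up D(P) profile.
import numpy as np

n, dP, dt = 100, 1.0, 0.1
P = np.arange(n) * dP
D_face = 0.5 + 0.4 * np.sin(P[:-1] / 10.0) ** 2   # D at interior faces
f = np.exp(-0.5 * ((P - 50.0) / 5.0) ** 2)        # initial distribution
total0 = f.sum()

for _ in range(2000):
    interior = D_face * np.diff(f) / dP               # interior face fluxes
    flux = np.concatenate(([0.0], interior, [0.0]))   # zero-flux boundaries
    f += dt / dP * np.diff(flux)                      # conservative update

print("content conserved:", np.isclose(f.sum(), total0))
```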

  19. Viscous diffusion of vorticity in unsteady wall layers using the diffusion velocity concept

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strickland, J.H.; Kempka, S.N.; Wolfe, W.P.

    1995-03-01

    The primary purpose of this paper is to provide a careful evaluation of the diffusion velocity concept with regard to its ability to predict the diffusion of vorticity near a moving wall. A computer code BDIF has been written which simulates the evolution of the vorticity field near a wall of infinite length which is moving in an arbitrary fashion. The simulations generated by this code are found to give excellent results when compared to several exact solutions. We also outline a two-dimensional unsteady viscous boundary layer model which utilizes the diffusion velocity concept and is compatible with vortex methods. A primary goal of this boundary layer model is to minimize the number of vortices generated on the surface at each time step while achieving good resolution of the vorticity field near the wall. Preliminary results have been obtained for simulating a simple two-dimensional laminar boundary layer.
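
    The diffusion velocity concept itself is simple to demonstrate in 1-D: viscous diffusion of vorticity is recast as advection with u_d = -ν(∂ω/∂x)/ω. The particle sketch below is illustrative only (it is not BDIF); the kernel width, particle count, and time step are arbitrary choices.

```python
# 1-D diffusion-velocity demonstration: particles carrying equal
# circulation are advected with u_d = -nu * (d omega/dx) / omega,
# which reproduces viscous spreading of the vorticity field.
import numpy as np

nu, dt, sigma, n_steps = 0.1, 0.02, 0.3, 100
x = np.random.default_rng(2).normal(0.0, 1.0, 500)   # particle positions
g = np.ones_like(x) / x.size                         # circulation weights

def omega_and_grad(xp, gp):
    """Gaussian-kernel estimates of vorticity and its gradient."""
    dxs = xp[:, None] - xp[None, :]
    k = np.exp(-0.5 * (dxs / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return (k * gp).sum(axis=1), (-dxs / sigma**2 * k * gp).sum(axis=1)

for _ in range(n_steps):
    w, dw = omega_and_grad(x, g)
    x = x - nu * dw / np.maximum(w, 1e-12) * dt      # advect with u_d

# An initially unit-variance Gaussian vortex spreads as 1 + 2*nu*t.
print("variance:", x.var(), "expected ~", 1.0 + 2 * nu * n_steps * dt)
```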

  20. Uneven-Layered Coding Metamaterial Tile for Ultra-wideband RCS Reduction and Diffuse Scattering.

    PubMed

    Su, Jianxun; He, Huan; Li, Zengrui; Yang, Yaoqing Lamar; Yin, Hongcheng; Wang, Junhong

    2018-05-25

    In this paper, a novel uneven-layered coding metamaterial tile is proposed for ultra-wideband radar cross section (RCS) reduction and diffuse scattering. The metamaterial tile is composed of two kinds of square ring unit cells with different layer thicknesses. The reflection phase difference of 180° (±37°) between the two unit cells covers an ultra-wide frequency range. Due to the phase cancellation between the two unit cells, the metamaterial tile has a scattering pattern of four strong lobes deviating from the normal direction. The metamaterial tile and its 90-degree rotation can be encoded as the '0' and '1' elements to cover an object, and a diffuse scattering pattern can be realized by optimizing the phase distribution, leading to simultaneous reductions of the monostatic and bi-static RCSs. The metamaterial tile can achieve -10 dB RCS reduction from 6.2 GHz to 25.7 GHz with a ratio bandwidth of 4.15:1 at normal incidence. The measured and simulated results are in good agreement and validate that the proposed uneven-layered coding metamaterial tile can greatly expand the bandwidth for RCS reduction and diffuse scattering.

  1. Field-Integrated Studies of Long-Term Sustainability of Chromium Bioreduction at Hanford 100H Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Philip E.

    2006-06-01

    The objectives of the project are to investigate coupled hydraulic, geochemical, and microbial conditions, and to determine the critical biogeochemical parameters necessary to maximize the extent of Cr(VI) bioreduction and minimize Cr(III) reoxidation in groundwater. Specific goals of the project are as follows: (1) Field testing and monitoring of Cr(VI) bioreduction in ground water and its transformation into insoluble species of Cr(III) at the Hanford 100H site, to develop the optimal strategy of water sampling for chemical, microbial, and stable isotope analyses, and noninvasive geophysical monitoring; (2) Bench-scale flow and transport investigations using columns of undisturbed sediments to obtain diffusion and kinetic parameters needed for the development of a numerical model, predictions of Cr(VI) bioreduction, and the potential of Cr(III) reoxidation; and (3) Development of a multiphase, multi-component 3D reactive transport model and a code, TOUGHREACT-BIO, to predict coupled biogeochemical-hydrological processes associated with bioremediation, and to calibrate and validate the developed code based on the results of bench-scale and field-scale Cr(VI) biostimulation experiments in ground water at the Hanford Site.

  2. Lessons Learned from Numerical Simulations of Interfacial Instabilities

    NASA Astrophysics Data System (ADS)

    Cook, Andrew

    2015-11-01

    Rayleigh-Taylor (RT), Richtmyer-Meshkov (RM) and Kelvin-Helmholtz (KH) instabilities serve as efficient mixing mechanisms in a wide variety of flows, from supernovae to jet engines. Over the past decade, we have used the Miranda code to temporally integrate the multi-component Navier-Stokes equations at spatial resolutions up to 29 billion grid points. The code employs 10th-order compact schemes for spatial derivatives, combined with 4th-order Runge-Kutta time advancement. Some of our major findings are as follows: The rate of growth of a mixing layer is equivalent to the net mass flux through the equi-molar plane. RT growth rates can be significantly reduced by adding shear. RT instability can produce shock waves. The growth rate of RM instability can be predicted from known interfacial perturbations. RM vortex projectiles can far outrun the mixing region. Thermal fluctuations in molecular dynamics simulations can seed instabilities along the braids in KH instability. And finally, enthalpy diffusion is essential in preserving the second law of thermodynamics. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  3. Trellis phase codes for power-bandwidth efficient satellite communications

    NASA Technical Reports Server (NTRS)

    Wilson, S. G.; Highfill, J. H.; Hsu, C. D.; Harkness, R.

    1981-01-01

    Support work on improved power and spectrum utilization on digital satellite channels was performed. Specific attention is given to the class of signalling schemes known as continuous phase modulation (CPM). The specific work described in this report addresses: analytical bounds on error probability for multi-h phase codes, power and bandwidth characterization of 4-ary multi-h codes, and initial results of channel simulation to assess the impact of band limiting filters and nonlinear amplifiers on CPM performance.

  4. Study of SOL in DIII-D tokamak with SOLPS suite of codes.

    NASA Astrophysics Data System (ADS)

    Pankin, Alexei; Bateman, Glenn; Brennan, Dylan; Coster, David; Hogan, John; Kritz, Arnold; Kukushkin, Andrey; Schnack, Dalton; Snyder, Phil

    2005-10-01

    The scrape-off layer (SOL) region of the DIII-D tokamak is studied with the SOLPS integrated suite of codes. The SOLPS package includes the 3D multi-species Monte Carlo neutral code EIRENE and the 2D multi-fluid code B2, cross-coupled through the B2-EIRENE interface. The results of SOLPS simulations are used in the integrated modeling of the plasma edge in the DIII-D tokamak with the ASTRA transport code. Parameterized dependences for the neutral particle fluxes computed with the SOLPS code are implemented in a model for the H-mode pedestal and ELMs [1] in the ASTRA code. The effects of neutrals on the H-mode pedestal and ELMs are studied in this report. [1] A. Y. Pankin, I. Voitsekhovitch, G. Bateman, et al., Plasma Phys. Control. Fusion 47, 483 (2005).

  5. Cosmic-ray propagation with DRAGON2: I. numerical solver and astrophysical ingredients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evoli, Carmelo; Gaggero, Daniele; Vittino, Andrea

    2017-02-01

    We present version 2 of the DRAGON code, designed for computing realistic predictions of CR densities in the Galaxy. The code numerically solves the interstellar CR transport equation (including inhomogeneous and anisotropic diffusion, in both space and momentum, advective transport, and energy losses) under realistic conditions. The new version includes an updated numerical solver and several models for the astrophysical ingredients involved in the transport equation. Improvements in the accuracy of the numerical solution are verified against analytical solutions and in reference diffusion scenarios. The novel features implemented in the code make it possible to simulate the diverse scenarios proposed to reproduce the most recent measurements of local and diffuse CR fluxes, going beyond the limitations of the homogeneous galactic transport paradigm. To this end, several applications using DRAGON2 are presented as well. This new version allows users to include their own physical models by means of a modular C++ structure.

  6. Improvements to Busquet's Non LTE algorithm in NRL's Hydro code

    NASA Astrophysics Data System (ADS)

    Klapisch, M.; Colombant, D.

    1996-11-01

    Implementation of the Non-LTE model RADIOM (M. Busquet, Phys. Fluids B, 5, 4191 (1993)) in NRL's RAD2D hydro code in conservative form was reported previously (M. Klapisch et al., Bull. Am. Phys. Soc., 40, 1806 (1995)). While the results were satisfactory, the algorithm was slow and did not always converge. We describe here modifications that address the latter two shortcomings. The new method is quicker and more stable than the original. It also gives information about the validity of the fitting. It turns out that the number and distribution of groups in the multigroup diffusion opacity tables - a basis for the computation of radiation effects on the ionization balance in RADIOM - have a large influence on the robustness of the algorithm. These modifications give insight into the algorithm and allow one to check that the obtained average charge state is the true average. In addition, code optimization greatly reduced the computing time: the ratio of Non-LTE to LTE computing times is now between 1.5 and 2.

  7. Diffuse low-grade glioma: a review on the new molecular classification, natural history and current management strategies.

    PubMed

    Delgado-López, P D; Corrales-García, E M; Martino, J; Lastra-Aras, E; Dueñas-Polo, M T

    2017-08-01

    The management of diffuse supratentorial WHO grade II glioma remains a challenge because of the infiltrative nature of the tumor, which precludes curative therapy after total or even supratotal resection. When possible, functional-guided resection is the preferred initial treatment. Total and subtotal resections correlate with increased overall survival. High-risk patients (age >40, partial resection), especially IDH-mutated and 1p19q-codeleted oligodendroglial lesions, benefit from surgery plus adjuvant chemoradiation. Under the new 2016 WHO brain tumor classification, which now incorporates molecular parameters, all diffusely infiltrating gliomas are grouped together since they share specific genetic mutations and prognostic factors. Although low-grade gliomas cannot be regarded as benign tumors, large observational studies have shown that median survival can actually be doubled if an early, aggressive, multi-stage and personalized therapy is applied, as compared to prior wait-and-see policy series. Patients need an honest long-term therapeutic strategy that should ideally anticipate neurological, cognitive and histopathologic worsening.

  8. VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system. Version 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.; Cho, K.W.

    1991-12-01

    VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.
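
    The class of problems VENTURE/PC solves can be illustrated with a minimal two-group, one-dimensional diffusion eigenvalue calculation by power iteration. The sketch below is not VENTURE code, and all cross sections are invented.

```python
# Minimal two-group, 1-D slab diffusion eigenvalue solver via power
# iteration; illustrative only, with made-up group constants.
import numpy as np

n, h = 50, 1.0                        # mesh cells, cell width (cm)
D = np.array([1.4, 0.4])              # group diffusion coefficients (cm)
sig_a = np.array([0.03, 0.10])        # absorption cross sections (1/cm)
sig_s12 = 0.02                        # group 1 -> 2 scattering (1/cm)
nu_sig_f = np.array([0.005, 0.25])    # nu * fission cross sections (1/cm)

def diffusion_matrix(Dg, removal):
    """Finite-difference -D d2/dx2 + removal, zero flux at boundaries."""
    A = np.diag(np.full(n, 2 * Dg / h**2 + removal))
    A += np.diag(np.full(n - 1, -Dg / h**2), 1)
    A += np.diag(np.full(n - 1, -Dg / h**2), -1)
    return A

A1 = diffusion_matrix(D[0], sig_a[0] + sig_s12)
A2 = diffusion_matrix(D[1], sig_a[1])

phi1, phi2, k = np.ones(n), np.ones(n), 1.0
for _ in range(200):                  # power iteration on fission source
    src = (nu_sig_f[0] * phi1 + nu_sig_f[1] * phi2) / k
    phi1 = np.linalg.solve(A1, src)                # fast-group flux
    phi2 = np.linalg.solve(A2, sig_s12 * phi1)     # thermal-group flux
    k = (nu_sig_f[0] * phi1 + nu_sig_f[1] * phi2).sum() / src.sum()
print(f"k-effective = {k:.4f}")
```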

  9. Enhanced modeling features within TREETOPS

    NASA Technical Reports Server (NTRS)

    Vandervoort, R. J.; Kumar, Manoj N.

    1989-01-01

    The original motivation for TREETOPS was to build a generic multi-body simulation and remove the burden of writing multi-body equations from the engineers. The motivation of the enhancement was twofold: (1) to extend the menu of built-in features (sensors, actuators, constraints, etc.) that did not require user code; and (2) to extend the control system design capabilities by linking with other government funded software (NASTRAN and MATLAB). These enhancements also serve to bridge the gap between structures and control groups. It is common on large space programs for the structures groups to build hi-fidelity models of the structure using NASTRAN and for the controls group to build lower order models because they lack the tools to incorporate the former into their analysis. Now the controls engineers can accept the hi-fidelity NASTRAN models into TREETOPS, add sensors and actuators, perform model reduction and couple the result directly into MATLAB to perform their design. The controller can then be imported directly into TREETOPS for non-linear, time-history simulation.

  10. An experimental investigation of compressible three-dimensional boundary layer flow in annular diffusers

    NASA Technical Reports Server (NTRS)

    Om, Deepak; Childs, Morris E.

    1987-01-01

    An experimental study is described in which detailed wall pressure measurements have been obtained for compressible three-dimensional unseparated boundary layer flow in annular diffusers with and without normal shock waves. Detailed mean flow-field data were also obtained for the diffuser flow without a shock wave. Two diffuser flows with shock waves were investigated. In one case, the normal shock existed over the complete annulus whereas in the second case, the shock existed over a part of the annulus. The data obtained can be used to validate computational codes for predicting such flow fields. The details of the flow field without the shock wave show flow reversal in the circumferential direction on both inner and outer surfaces. However, there is a lag in the flow reversal between the inner and the outer surfaces. This is an interesting feature of this flow and should be a good test for the computational codes.

  11. Simulation of gaseous diffusion in partially saturated porous media under variable gravity with lattice Boltzmann methods

    NASA Technical Reports Server (NTRS)

    Chau, Jessica Furrer; Or, Dani; Sukop, Michael C.; Steinberg, S. L. (Principal Investigator)

    2005-01-01

    Liquid distributions in unsaturated porous media under different gravitational accelerations and corresponding macroscopic gaseous diffusion coefficients were investigated to enhance understanding of plant growth conditions in microgravity. We used a single-component, multiphase lattice Boltzmann code to simulate liquid configurations in two-dimensional porous media at varying water contents for different gravity conditions and measured gas diffusion through the media using a multicomponent lattice Boltzmann code. The relative diffusion coefficients (D_rel) for simulations with and without gravity as functions of air-filled porosity were in good agreement with measured data and established models. We found significant differences in liquid configuration in porous media, leading to reductions in D_rel of up to 25% under zero gravity. The study highlights potential applications of the lattice Boltzmann method for rapid and cost-effective evaluation of alternative plant growth media designs under variable gravity.

  12. Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems

    NASA Astrophysics Data System (ADS)

    Watkins, Edward Francis

    1995-01-01

    A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descent optimization algorithm incorporated in the code SWAN by the significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified in a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found to be feasible and leads to a very substantial improvement in the complexity of the optimization problems that can be efficiently handled.
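
    A toy analogue of the dose-minimization-at-constant-cost problem can be posed with any sequential quadratic programming solver. The sketch below uses SciPy's SLSQP method standing in for NPSOL; the dose and cost functions are invented for illustration.

```python
# Constrained shield-optimization toy problem solved with a sequential
# quadratic programming method (SciPy SLSQP); all functions invented.
import numpy as np
from scipy.optimize import minimize

def dose(x):
    """Fictitious transmitted dose through a two-layer shield."""
    return np.exp(-0.5 * x[0] - 0.8 * x[1])

cost = lambda x: 3.0 * x[0] + 5.0 * x[1]   # fictitious material cost

res = minimize(dose, x0=[1.0, 1.0], method="SLSQP",
               bounds=[(0.0, 10.0), (0.0, 10.0)],
               constraints=[{"type": "eq",
                             "fun": lambda x: cost(x) - 20.0}])
print("thicknesses:", res.x, "minimum dose:", dose(res.x))
```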

  13. CFD-ACE+: a CAD system for simulation and modeling of MEMS

    NASA Astrophysics Data System (ADS)

    Stout, Phillip J.; Yang, H. Q.; Dionne, Paul; Leonard, Andy; Tan, Zhiqiang; Przekwas, Andrzej J.; Krishnan, Anantha

    1999-03-01

    Computer aided design (CAD) systems are a key to designing and manufacturing MEMS with higher performance/reliability, reduced costs, shorter prototyping cycles and improved time-to-market. One such system is CFD-ACE+MEMS, a modeling and simulation environment for MEMS which includes grid generation, data visualization, graphical problem setup, and coupled fluidic, thermal, mechanical, electrostatic, and magnetic physical models. The fluid model is a 3D multi-block, structured/unstructured/hybrid, pressure-based, implicit Navier-Stokes code with capabilities for multi-component diffusion, multi-species transport, multi-step gas phase chemical reactions, surface reactions, and multi-media conjugate heat transfer. The thermal model solves the total enthalpy form of the energy equation. The energy equation includes unsteady, convective, conductive, species energy, viscous dissipation, work, and radiation terms. The electrostatic model solves Poisson's equation. Both the finite volume method and the boundary element method (BEM) are available for solving Poisson's equation. The BEM method is useful for unbounded problems. The magnetic model solves for the vector magnetic potential from Maxwell's equations, including eddy currents but neglecting displacement currents. The mechanical model is a finite element stress/deformation solver which has been coupled to the flow, heat, electrostatic, and magnetic calculations to study flow-induced, thermally induced, electrostatically induced, and magnetically induced deformations of structures. The mechanical or structural model can accommodate elastic and plastic materials, can handle large non-linear displacements, and can model isotropic and anisotropic materials. The thermal-mechanical coupling involves the solution of the steady state Navier equation with thermoelastic deformation. The electrostatic-mechanical coupling is a calculation of the pressure force due to surface charge on the mechanical structure. Results of CFD-ACE+MEMS modeling of MEMS such as cantilever beams, accelerometers, and comb drives are discussed.

  14. Enforcing realizability in explicit multi-component species transport

    PubMed Central

    McDermott, Randall J.; Floyd, Jason E.

    2015-01-01

    We propose a strategy to guarantee realizability of species mass fractions in explicit time integration of the partial differential equations governing fire dynamics, which is a multi-component transport problem (realizability requires that all mass fractions be greater than or equal to zero and that they sum to unity). For a mixture of n species, the conventional strategy is to solve for n − 1 species mass fractions and to obtain the nth (or “background”) species mass fraction from one minus the sum of the others. The numerical difficulties inherent in the background species approach are discussed and the potential for realizability violations is illustrated. The new strategy solves all n species transport equations and obtains the density from the sum of the species mass densities. To guarantee realizability the species mass densities must remain nonnegative. A scalar boundedness correction is proposed that is based on a minimal diffusion operator. The overall scheme is implemented in a publicly available large-eddy simulation code called the Fire Dynamics Simulator. A set of test cases is presented to verify that the new strategy enforces realizability, does not generate spurious mass, and maintains second-order accuracy for transport. PMID:26692634
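
    The core realizability idea is easy to sketch: transport all n species mass densities, recover density from their sum, and keep each density nonnegative. The snippet below illustrates this with simple clipping; the published scheme instead uses a minimal diffusion operator for the boundedness correction.

```python
# Clip-and-renormalize sketch of realizability: densities stay
# nonnegative and mass fractions sum to one by construction.
import numpy as np

rho_Y = np.array([[1.00, 0.20, 0.05],      # species mass densities (kg/m3)
                  [0.80, -0.01, 0.21]])    # one small negative overshoot

rho_Y = np.maximum(rho_Y, 0.0)             # enforce positivity
rho = rho_Y.sum(axis=1, keepdims=True)     # density = sum over species
Y = rho_Y / rho                            # realizable mass fractions
assert (Y >= 0).all() and np.allclose(Y.sum(axis=1), 1.0)
print(Y)
```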

  15. Effects of alpha stopping power modelling on the ignition threshold in a directly-driven inertial confinement fusion capsule

    DOE PAGES

    Temporal, Mauro; Canaud, Benoit; Cayzac, Witold; ...

    2017-05-25

    The alpha-particle energy deposition mechanism modifies the ignition conditions of the thermonuclear Deuterium-Tritium fusion reactions, and constitutes a key issue in achieving high gain in Inertial Confinement Fusion implosions. One-dimensional hydrodynamic calculations have been performed with the code Multi-IFE to simulate the implosion of a capsule directly irradiated by a laser beam. The diffusion approximation for the alpha energy deposition has been used to optimize three laser profiles corresponding to different implosion velocities. A Monte-Carlo package has been included in Multi-IFE to calculate the alpha energy transport, and in this case the energy deposition uses both the LP and the BPS stopping power models. Homothetic transformations that maintain a constant implosion velocity have been used to map out the transition region between marginally-igniting and high-gain configurations. Furthermore, the results provided by the two models have been compared and it is found that – close to the ignition threshold – in order to produce the same fusion energy, the calculations performed with the BPS model require about 10% more invested energy with respect to the LP model.

  16. Study of premixing phase of steam explosion with JASMINE code in ALPHA program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriyama, Kiyofumi; Yamano, Norihiro; Maruyama, Yu

    The premixing phase of steam explosion has been studied in the ALPHA Program at the Japan Atomic Energy Research Institute (JAERI). An analytical model to simulate the premixing phase, JASMINE (JAERI Simulator for Multiphase Interaction and Explosion), has been developed based on a multi-dimensional multi-phase thermal hydraulics code, MISTRAL (by Fuji Research Institute Co.). The original code was extended to simulate the physics of the premixing phenomena. The first stage of the code validation was performed by analyzing two mixing experiments with solid particles and water: the isothermal experiment by Gilbertson et al. (1992) and the hot particle experiment by Angelini et al. (1993) (MAGICO). The code reproduced the experiments reasonably well. The effectiveness of the TVD scheme employed in the code was also demonstrated.

  17. Should One Use the Ray-by-Ray Approximation in Core-Collapse Supernova Simulations?

    DOE PAGES

    Skinner, M. Aaron; Burrows, Adam; Dolence, Joshua C.

    2016-10-28

    We perform the first self-consistent, time-dependent, multi-group calculations in two dimensions (2D) to address the consequences of using the ray-by-ray+ transport simplification in core-collapse supernova simulations. Such a dimensional reduction is employed by many researchers to facilitate their resource-intensive calculations. Our new code (Fornax) implements multi-D transport, and can, by zeroing out transverse flux terms, emulate the ray-by-ray+ scheme. Using the same microphysics, initial models, resolution, and code, we compare the results of simulating 12-, 15-, 20-, and 25-M⊙ progenitor models using these two transport methods. Our findings call into question the wisdom of the pervasive use of the ray-by-ray+ approach. Employing it leads to maximum post-bounce/preexplosion shock radii that are almost universally larger by tens of kilometers than those derived using the more accurate scheme, typically leaving the post-bounce matter less bound and artificially more “explodable.” In fact, for our 25-M⊙ progenitor, the ray-by-ray+ model explodes, while the corresponding multi-D transport model does not. Therefore, in two dimensions the combination of ray-by-ray+ with the axial sloshing hydrodynamics that is a feature of 2D supernova dynamics can result in quantitatively, and perhaps qualitatively, incorrect results.

  18. Should One Use the Ray-by-Ray Approximation in Core-collapse Supernova Simulations?

    NASA Astrophysics Data System (ADS)

    Skinner, M. Aaron; Burrows, Adam; Dolence, Joshua C.

    2016-11-01

    We perform the first self-consistent, time-dependent, multi-group calculations in two dimensions (2D) to address the consequences of using the ray-by-ray+ transport simplification in core-collapse supernova simulations. Such a dimensional reduction is employed by many researchers to facilitate their resource-intensive calculations. Our new code (Fornax) implements multi-D transport, and can, by zeroing out transverse flux terms, emulate the ray-by-ray+ scheme. Using the same microphysics, initial models, resolution, and code, we compare the results of simulating 12, 15, 20, and 25 M ⊙ progenitor models using these two transport methods. Our findings call into question the wisdom of the pervasive use of the ray-by-ray+ approach. Employing it leads to maximum post-bounce/pre-explosion shock radii that are almost universally larger by tens of kilometers than those derived using the more accurate scheme, typically leaving the post-bounce matter less bound and artificially more “explodable.” In fact, for our 25 M ⊙ progenitor, the ray-by-ray+ model explodes, while the corresponding multi-D transport model does not. Therefore, in two dimensions, the combination of ray-by-ray+ with the axial sloshing hydrodynamics that is a feature of 2D supernova dynamics can result in quantitatively, and perhaps qualitatively, incorrect results.

  19. Multi-Dimensional, Mesoscopic Monte Carlo Simulations of Inhomogeneous Reaction-Drift-Diffusion Systems on Graphics-Processing Units

    PubMed Central

    Vigelius, Matthias; Meyer, Bernd

    2012-01-01

    For many biological applications, a macroscopic (deterministic) treatment of reaction-drift-diffusion systems is insufficient. Instead, one has to properly handle the stochastic nature of the problem and generate true sample paths of the underlying probability distribution. Unfortunately, stochastic algorithms are computationally expensive and, in most cases, the large number of participating particles renders the relevant parameter regimes inaccessible. In an attempt to address this problem we present a genuinely stochastic, multi-dimensional algorithm that solves the inhomogeneous, non-linear, drift-diffusion problem on a mesoscopic level. Our method improves on existing implementations in being multi-dimensional and in handling inhomogeneous drift and diffusion. The algorithm is well suited for an implementation on data-parallel hardware architectures such as general-purpose graphics processing units (GPUs). We integrate the method into an operator-splitting approach that decouples chemical reactions from the spatial evolution. We demonstrate the validity and applicability of our algorithm with a comprehensive suite of standard test problems that also serve to quantify the numerical accuracy of the method. We provide a freely available, fully functional GPU implementation. Integration into Inchman, a user-friendly web service that allows researchers to perform parallel simulations of reaction-drift-diffusion systems on GPU clusters, is underway. PMID:22506001
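
    The operator-splitting structure can be caricatured deterministically: advance the reactions pointwise, then diffuse on the grid (Lie splitting). The GPU code evolves stochastic sample paths instead; all rates and sizes below are invented.

```python
# Deterministic caricature of operator splitting for a reaction-
# diffusion system: a pointwise decay step, then a grid diffusion step.
import numpy as np

nx, dx, dt, D, k = 64, 1.0, 0.1, 0.5, 0.05
u = np.zeros(nx)
u[nx // 2] = 10.0                            # initial concentration spike

for _ in range(500):
    u = u - dt * k * u                       # reaction step: decay A -> 0
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)   # periodic Laplacian
    u = u + dt * D / dx**2 * lap             # diffusion step
print("mass after decay:", u.sum(), "~", 10.0 * np.exp(-k * 500 * dt))
```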

  20. Coding for parallel execution of hardware-in-the-loop millimeter-wave scene generation models on multicore SIMD processor architectures

    NASA Astrophysics Data System (ADS)

    Olson, Richard F.

    2013-05-01

    Rendering of point scatterer based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real-world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity, emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g. OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.

  1. The Thermal Diffusivity Measurement of the Two-layer Ceramics Using the Laser Flash Method

    NASA Astrophysics Data System (ADS)

    Akoshima, Megumi; Ogwa, Mitsue; Baba, Tetsuya; Mizuno, Mineo

    Ceramics-based thermal barrier coatings are used as heat and wear shields in gas turbines. There is a strong need to evaluate the thermophysical properties of such coatings, including their thermal conductivity, thermal diffusivity, and heat capacity. Since the coatings are attached to substrates, it is not easy to measure these properties separately. The laser flash method is one of the most popular thermal diffusivity measurement methods above room temperature for solid materials. The surface of a plate-shaped specimen is heated by a pulsed laser beam, and the time variation of the temperature of the rear surface is then observed by an infrared radiometer. The laser flash method is a non-contact, short-duration measurement. In general, the thermal diffusivity of solids that are dense, homogeneous and stable is measured by this method. It is easy to measure the thermal diffusivity of a specimen whose heat diffusion time is about 1 ms to 1 s, corresponding to a specimen thickness of about 1 mm to 5 mm. The method can also be applied to measure the specific heat capacity of solids, and it can be used to estimate the thermal diffusivity of an unknown layer in layered materials. In order to evaluate the thermal diffusivity of a coating attached to a substrate, we have developed a measurement procedure using the laser flash method. A multi-layer model based on the response function method was applied to calculate the thermal diffusivity of the coating from the temperature history curve observed for the two-layer sample. We have verified the applicability of the laser flash measurement with the multi-layer model using measured results and simulation. It was found that the laser flash measurement for the layered sample using the multi-layer model is effective for estimating the thermal diffusivity of an unknown layer in the sample. We have also developed two-layer ceramic samples as reference materials for this procedure.
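
    For the single-layer case that flash methods build on, the standard evaluation is Parker's half-rise-time formula, alpha = 0.1388 L^2 / t_half; the multi-layer response-function analysis in the paper generalizes this. The numbers below are illustrative only.

```python
# Parker's formula for the laser flash method: thermal diffusivity
# from the rear-face half-rise time; sample values are invented.
L = 2.0e-3            # specimen thickness, m
t_half = 0.25         # time for rear face to reach half its rise, s
alpha = 0.1388 * L**2 / t_half
print(f"thermal diffusivity = {alpha:.2e} m^2/s")   # ~2.2e-6 m^2/s
```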

  2. ALE3D: An Arbitrary Lagrangian-Eulerian Multi-Physics Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noble, Charles R.; Anderson, Andrew T.; Barton, Nathan R.

    ALE3D is a multi-physics numerical simulation software tool utilizing arbitrary Lagrangian-Eulerian (ALE) techniques. The code is written to address both two-dimensional (2D plane and axisymmetric) and three-dimensional (3D) physics and engineering problems using a hybrid finite element and finite volume formulation to model fluid and elastic-plastic response of materials on an unstructured grid. As shown in Figure 1, ALE3D is a single code that integrates many physical phenomena.

  3. 3D scene reconstruction based on multi-view distributed video coding in the Zernike domain for mobile applications

    NASA Astrophysics Data System (ADS)

    Palma, V.; Carli, M.; Neri, A.

    2011-02-01

    In this paper a multi-view distributed video coding scheme for mobile applications is presented. Specifically, a new fusion technique between temporal and spatial side information in the Zernike moment domain is proposed. Distributed video coding introduces a flexible architecture that enables the design of very low-complexity video encoders compared to their traditional counterparts. The main goal of our work is to generate at the decoder the side information that optimally blends temporal and inter-view data. Multi-view distributed coding performance strongly depends on the quality of the side information built at the decoder. To improve this quality, spatial view compensation/prediction in the Zernike moment domain is applied. Spatial and temporal motion activity are fused together to obtain the overall side information. The proposed method has been evaluated in terms of rate-distortion performance for different inter-view and temporal estimation quality conditions.

  4. CRUNCH_PARALLEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumaker, Dana E.; Steefel, Carl I.

    The code CRUNCH_PARALLEL is a parallel version of the CRUNCH code. CRUNCH code version 2.0 was previously released by LLNL (UCRL-CODE-200063). CRUNCH is a general-purpose reactive transport code developed by Carl Steefel and Yabusake (Steefel and Yabusake, 1996). The code handles non-isothermal transport and reaction in one, two, and three dimensions. The reaction algorithm is generic in form, handling an arbitrary number of aqueous and surface complexation reactions as well as mineral dissolution/precipitation. A standardized database is used containing thermodynamic and kinetic data. The code includes advective, dispersive, and diffusive transport.

  5. A new code for modelling the near field diffusion releases from the final disposal of nuclear waste

    NASA Astrophysics Data System (ADS)

    Vopálka, D.; Vokál, A.

    2003-01-01

    As in other countries, the canisters with spent nuclear fuel produced during the operation of the WWER reactors at the Czech power plants are planned to be disposed of in an underground repository. The canisters will be surrounded by compacted bentonite that will retard the migration of safety-relevant radionuclides into the host rock. A new code was developed that models the transport of the critical radionuclides from the canister through the bentonite layer in cylindrical geometry. The code solves the diffusion equation for various types of initial and boundary conditions by means of the finite difference method and takes into account the non-linear shape of the sorption isotherm. A comparison of the code reported here with the code PAGODA, which is based on an analytical solution of the transport equation, was made for the 4N+3 actinide chain, which includes 239Pu. A simple parametric study of the releases of 239Pu, 129I, and 14C into the geosphere is discussed.
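
    The kind of calculation such a code performs can be sketched with an explicit finite-difference step for radial diffusion through the bentonite annulus, dC/dt = (1/r) d/dr ( r D dC/dr ), with a fixed concentration at the canister surface. This is an illustration, not the reported code; the geometry and diffusivity are invented, and sorption is omitted.

```python
# Explicit finite-difference solver for radial diffusion through a
# bentonite annulus; invented geometry and diffusivity, no sorption.
import numpy as np

D = 1e-11                              # apparent diffusivity, m^2/s
r = np.linspace(0.4, 1.0, 61)          # canister surface to host rock, m
dr = r[1] - r[0]
C = np.zeros_like(r)
C[0] = 1.0                             # unit concentration at canister
dt = 0.2 * dr**2 / D                   # stable explicit time step

for _ in range(20000):
    # r * flux evaluated at cell faces, from the conservative form.
    flux = -D * np.diff(C) / dr * 0.5 * (r[:-1] + r[1:])
    C[1:-1] -= dt / (r[1:-1] * dr) * np.diff(flux)
    C[0], C[-1] = 1.0, 0.0             # fixed boundary concentrations

print(C[::10])   # approaches the steady logarithmic profile
```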

  6. Comparison of ENDF/B-VII.1 and JEFF-3.2 in VVER-1000 operational data calculation

    NASA Astrophysics Data System (ADS)

    Frybort, Jan

    2017-09-01

    Safe operation of a nuclear reactor requires extensive calculational support. Operational data are determined by full-core calculations during the design phase of a fuel loading. The loading pattern and the design of the fuel assemblies are adjusted to meet safety requirements and optimize reactor operation. The nodal diffusion code ANDREA is used for this task in the case of the Czech VVER-1000 reactors. Nuclear data for this diffusion code are prepared regularly by the lattice code HELIOS. These calculations are conducted in 2D at the fuel-assembly level. It is also possible to calculate these macroscopic data with the Monte Carlo code Serpent, which can make use of alternative evaluated libraries. All calculations are affected by inherent uncertainties in nuclear data. It is therefore useful to compare the results of full-core calculations based on two sets of diffusion data obtained by Serpent calculations with the ENDF/B-VII.1 and JEFF-3.2 nuclear data, including the decay data library and fission yields data. The comparison is based both on fuel-assembly-level macroscopic data and on the resulting operational data. This study illustrates the effect of the evaluated nuclear data library on full-core calculations of a large PWR core. The level of difference that results exclusively from nuclear data selection can help in understanding the level of inherent uncertainty of such full-core calculations.

  7. Effect of the diffusion parameters on the observed γ-ray spectrum of sources and their contribution to the local all-electron spectrum: The EDGE code

    NASA Astrophysics Data System (ADS)

    López-Coto, R.; Hahn, J.; BenZvi, S.; Dingus, B.; Hinton, J.; Nisa, M. U.; Parsons, R. D.; Greus, F. Salesa; Zhang, H.; Zhou, H.

    2018-11-01

    The positron excess measured by PAMELA and AMS can only be explained if one or several sources are injecting positrons. Moreover, at the highest energies it requires the presence of nearby (∼hundreds of parsecs) and middle-aged (up to ∼hundreds of kyr) sources. Pulsars, as factories of electrons and positrons, are among the candidates proposed to explain the origin of this excess. To calculate the contribution of these sources to the electron and positron flux at the Earth, we developed EDGE (Electron Diffusion and Gamma rays to the Earth), a code that treats the propagation of electrons and computes their diffusion from a central source with a flexible injection spectrum. Using this code, we can derive the source's gamma-ray spectrum and spatial extension, the all-electron density in space, the electron and positron flux reaching the Earth, and the positron fraction measured at the Earth. We present in this paper the foundations of the code and study how different parameters affect the gamma-ray spectrum of a source and the electron flux measured at the Earth. We also study the effect of several approximations commonly made in such studies. This code has been used to derive the results on the positron flux measured at the Earth in [1].
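
    The core of any such propagation code is the Green's function of the diffusion equation. The sketch below evaluates the burst-like point-source solution with an energy-dependent diffusion coefficient; it deliberately omits the energy losses, continuous injection and gamma-ray spectra that EDGE itself treats, and the source distance, age and normalisation are illustrative stand-ins.

        import numpy as np

        # Green's function of the 3-D diffusion equation for a burst-like
        # point source: a loss-free simplification of what EDGE computes.
        # All values are illustrative.
        D0, delta = 4e28, 0.4   # diffusion normalisation (cm^2/s) at 1 GeV, index

        def diffusion_coeff(E_GeV):
            return D0 * E_GeV ** delta

        def electron_density(E_GeV, r_cm, t_s, N0=1.0):
            """Density at distance r for N0 electrons injected at t = 0."""
            r_diff = np.sqrt(4.0 * diffusion_coeff(E_GeV) * t_s)  # diffusion radius
            return N0 * np.exp(-(r_cm / r_diff) ** 2) / (np.pi ** 1.5 * r_diff ** 3)

        kpc = 3.086e21   # cm
        kyr = 3.156e10   # s
        # e.g. a Geminga-like source: ~0.25 kpc away, ~340 kyr old, 100 GeV electrons
        print(electron_density(100.0, 0.25 * kpc, 340.0 * kyr))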

  8. Timing group delay and differential code bias corrections for BeiDou positioning

    NASA Astrophysics Data System (ADS)

    Guo, Fei; Zhang, Xiaohong; Wang, Jinling

    2015-05-01

    This article first clarifies the relationship between the timing group delay (TGD) and differential code bias (DCB) parameters for BDS, and demonstrates the equivalence of TGD and DCB correction models by combining theory with practice. The TGD/DCB correction models have been extended to various scenarios for BDS positioning, and such models have been evaluated with real triple-frequency datasets. To test the effectiveness of the broadcast TGDs in the navigation message and of the DCBs provided by the Multi-GNSS Experiment (MGEX), both standard point positioning (SPP) and precise point positioning (PPP) tests are carried out for BDS signals with different schemes. Furthermore, the influence of differential code biases on BDS positioning estimates such as coordinates, receiver clock biases, tropospheric delays and carrier-phase ambiguities is investigated comprehensively. Comparative analysis shows that unmodeled differential code biases degrade the performance of BDS SPP by a factor of two or more, whereas the PPP estimates are subject to varying degrees of influence. For SPP, the accuracy of dual-frequency combinations is slightly worse than that of single-frequency positioning, and the combinations are much more sensitive to the differential code biases, particularly the B2B3 combination. For PPP, uncorrected differential code biases are mostly absorbed into the receiver clock bias and carrier-phase ambiguities, resulting in a much longer convergence time. Even though the influence of the differential code biases can be mitigated over time and comparable positioning accuracy can be achieved after convergence, it is suggested that differential code biases be handled properly, since they are vital for PPP convergence and integer ambiguity resolution.
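
    To make the TGD correction concrete, the sketch below applies a broadcast TGD1 to the satellite clock for a B1I single-frequency user. The sign convention used (dt_SV(B1I) = dt_SV(B3I) - TGD1) is my reading of the BDS interface control document and should be checked against it; the function names and numbers are illustrative, not from the paper.

        C_LIGHT = 299792458.0  # m/s

        def bds_b1i_clock(dt_sv_b3i, tgd1):
            """Satellite clock offset seen by a B1I single-frequency user.

            The broadcast BDS clock refers to B3I; under the sign convention
            assumed here (verify against the BDS ICD), B1I users subtract
            TGD1: dt_SV(B1I) = dt_SV(B3I) - TGD1. All quantities in seconds.
            """
            return dt_sv_b3i - tgd1

        def corrected_pseudorange(p_obs_m, dt_sv_b1i):
            # pseudorange with the satellite clock (TGD included) applied, metres
            return p_obs_m + C_LIGHT * dt_sv_b1i

        # A few nanoseconds of uncorrected TGD already map to metre-level range
        # error, consistent with the SPP degradation the article reports:
        print(C_LIGHT * 3.2e-9)   # ~0.96 m for a 3.2 ns TGD1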

  9. Sexual dimorphism of volume reduction but not cognitive deficit in fetal alcohol spectrum disorders: A combined diffusion tensor imaging, cortical thickness and brain volume study.

    PubMed

    Treit, Sarah; Chen, Zhang; Zhou, Dongming; Baugh, Lauren; Rasmussen, Carmen; Andrew, Gail; Pei, Jacqueline; Beaulieu, Christian

    2017-01-01

    Quantitative magnetic resonance imaging (MRI) has revealed abnormalities in brain volumes, cortical thickness and white matter microstructure in fetal alcohol spectrum disorders (FASD); however, no study has reported all three measures within the same cohort to assess the relative magnitude of deficits, and few studies have examined sex differences. Participants with FASD (n = 70; 30 females; 5-32 years) and healthy controls (n = 74; 35 females; 5-32 years) underwent cognitive testing and MRI to assess cortical thickness, regional brain volumes and fractional anisotropy (FA)/mean diffusivity (MD) of white matter tracts. A significant effect of group, age-by-group, or sex-by-group was found for 9/9 volumes, 7/39 cortical thickness regions, 3/9 white matter tracts, and 9/10 cognitive tests, indicating group differences that in some cases differ by age or sex. Volume reductions for several structures were larger in males than in females, despite similar cognitive deficits in both sexes. Correlations between brain structure and cognitive scores were found in females of both groups, but were notably absent in males. Correlations within a given MRI modality (e.g. total brain volume and caudate volume) were prevalent in both the control and FASD groups, and were more numerous than correlations between measurement types (e.g. volumes and diffusion tensor imaging) in either cohort. This multi-modal MRI study finds widespread differences of brain structure in participants with prenatal alcohol exposure, to a greater extent in males than in females, which may suggest attenuation of the expected process of sexual dimorphism of brain structure during typical development.

  10. Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.

    PubMed

    Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian

    2014-01-01

    In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
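
    The "fast greedy incremental solution" the abstract mentions can be pictured with the following sketch, which selects directions from a dense random candidate set by repeatedly maximising the minimal (antipodally symmetric) angle to those already chosen. It illustrates the general DSC idea only; the authors' actual formulations are solved by MILP and gradient descent, and the candidate and scheme sizes here are arbitrary.

        import numpy as np

        # Greedy incremental spherical-code selection: from a dense candidate
        # set, repeatedly pick the direction maximising the minimal angle to
        # those already chosen. |dot| enforces antipodal symmetry (+g and -g
        # are equivalent dMRI gradients).
        rng = np.random.default_rng(0)
        cands = rng.normal(size=(5000, 3))
        cands /= np.linalg.norm(cands, axis=1, keepdims=True)

        def greedy_scheme(cands, k):
            chosen = [cands[0]]
            for _ in range(k - 1):
                dots = np.abs(cands @ np.array(chosen).T).max(axis=1)
                # smallest max-|dot| = largest minimal angle to the chosen set
                chosen.append(cands[np.argmin(dots)])
            return np.array(chosen)

        dirs = greedy_scheme(cands, 30)
        g = np.abs(dirs @ dirs.T)
        np.fill_diagonal(g, 0.0)
        print(f"minimal angular separation: {np.degrees(np.arccos(g.max())):.1f} deg")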

  11. Investigation of upwind, multigrid, multiblock numerical schemes for three dimensional flows. Volume 1: Runge-Kutta methods for a thin layer Navier-Stokes solver

    NASA Technical Reports Server (NTRS)

    Cannizzaro, Frank E.; Ash, Robert L.

    1992-01-01

    A state-of-the-art computer code has been developed that incorporates a modified Runge-Kutta time integration scheme, upwind numerical techniques, multigrid acceleration, and multi-block capabilities (RUMM). A three-dimensional thin-layer formulation of the Navier-Stokes equations is employed. For turbulent flow cases, the Baldwin-Lomax algebraic turbulence model is used. Two different upwind techniques are available: van Leer's flux-vector splitting and Roe's flux-difference splitting. Full approximation multi-grid plus implicit residual and corrector smoothing were implemented to enhance the rate of convergence. Multi-block capabilities were developed to provide geometric flexibility. This feature allows the developed computer code to accommodate any grid topology or grid configuration with multiple topologies. The results shown in this dissertation were chosen to validate the computer code and display its geometric flexibility, which is provided by the multi-block structure.

  12. Hierarchical parallelisation of functional renormalisation group calculations - hp-fRG

    NASA Astrophysics Data System (ADS)

    Rohe, Daniel

    2016-10-01

    The functional renormalisation group (fRG) has evolved into a versatile tool in condensed matter theory for studying important aspects of correlated electron systems. Practical applications of the method often involve a high numerical effort, motivating the question of to what extent High Performance Computing (HPC) can accelerate the approach. In this work we report on a multi-level parallelisation of the underlying computational machinery and show that this can speed up the code by several orders of magnitude. This in turn can extend the applicability of the method to otherwise inaccessible cases. We exploit three levels of parallelisation: distributed computing by means of Message Passing (MPI), shared-memory computing using OpenMP, and vectorisation by means of SIMD units (single instruction, multiple data). Results are provided for two distinct HPC platforms, namely the IBM-based BlueGene/Q system JUQUEEN and an Intel Sandy-Bridge-based development cluster. We discuss how certain issues and obstacles were overcome in the course of adapting the code. Most importantly, we conclude that this vast improvement can be accomplished by introducing only moderate changes to the code, so that this strategy may serve as a guideline for other researchers seeking to likewise improve the efficiency of their codes.
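
    A schematic analogue of the three levels, assuming mpi4py is available (run with, e.g., mpiexec -n 4 python script.py): MPI ranks split an outer momentum loop, while NumPy's vectorised kernels stand in for the OpenMP-thread and SIMD levels of the actual compiled code. The integrand and array sizes are placeholders, not the fRG flow equations.

        import numpy as np
        from mpi4py import MPI   # level 1: distributed memory

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_momenta = 4096
        work = np.array_split(np.arange(n_momenta), size)[rank]   # level-1 split

        def flow_rhs(k_indices):
            # levels 2/3: a vectorised (thread/SIMD-like) loop over frequencies
            freqs = np.linspace(-10.0, 10.0, 512)
            k = k_indices[:, None] * 2.0 * np.pi / n_momenta
            return np.trapz(1.0 / (freqs ** 2 + np.cos(k) ** 2 + 1.0), freqs, axis=1)

        partial = np.array([flow_rhs(work).sum()])
        total = np.zeros(1)
        comm.Reduce(partial, total, op=MPI.SUM, root=0)            # combine ranks
        if rank == 0:
            print("integrated RHS:", total[0])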

  13. Transport Imaging of Multi-Junction and CIGS Solar Cell Materials

    DTIC Science & Technology

    2011-12-01

    solar cells start with the material charge transport parameters, namely the charge mobility, lifetime and diffusion length. It is the goal of ... every solar cell manufacturer to maintain high carrier lifetime so as to realize long diffusion lengths. Long diffusion lengths ensure that the charges ... Thus, being able to accurately determine the diffusion length of any solar cell material proves advantageous by providing insights

  14. Simulation of spacecraft attitude dynamics using TREETOPS and model-specific computer Codes

    NASA Technical Reports Server (NTRS)

    Cochran, John E.; No, T. S.; Fitz-Coy, Norman G.

    1989-01-01

    The simulation of spacecraft attitude dynamics and control using the generic, multi-body code called TREETOPS and other codes written especially to simulate particular systems is discussed. Differences in the methods used to derive equations of motion--Kane's method for TREETOPS and the Lagrangian and Newton-Euler methods, respectively, for the other two codes--are considered. Simulation results from the TREETOPS code are compared with those from the other two codes for two example systems. One system is a chain of rigid bodies; the other consists of two rigid bodies attached to a flexible base body. Since the computer codes were developed independently, consistent results serve as a verification of the correctness of all the programs. Differences in the results are discussed. Results for the two-rigid-body, one-flexible-body system are useful also as information on multi-body, flexible, pointing payload dynamics.

  15. Transformation Systems at NASA Ames

    NASA Technical Reports Server (NTRS)

    Buntine, Wray; Fischer, Bernd; Havelund, Klaus; Lowry, Michael; Pressburger, Tom; Roach, Steve; Robinson, Peter; VanBaalen, Jeffrey

    1999-01-01

    In this paper, we describe the experiences of the Automated Software Engineering Group at the NASA Ames Research Center in the development and application of three different transformation systems. The systems span the entire technology range, from deductive synthesis, to logic-based transformation, to almost compiler-like source-to-source transformation. These systems also span a range of NASA applications, including solving solar system geometry problems, generating data analysis software, and analyzing multi-threaded Java code.

  16. In vivo tumor characterization using both MR and optical contrast agents with a hybrid MRI-DOT system

    NASA Astrophysics Data System (ADS)

    Lin, Yuting; Ghijsen, Michael; Thayer, David; Nalcioglu, Orhan; Gulsen, Gultekin

    2011-03-01

    Dynamic contrast-enhanced MRI (DCE-MRI) has been proven to be the most sensitive modality for detecting breast lesions. The currently available MR contrast agent, Gd-DTPA, is a low-molecular-weight extracellular agent that can diffuse freely from the vascular space into the interstitial space. For this reason, DCE-MRI has limited specificity in differentiating benign from malignant tumors. Meanwhile, diffuse optical tomography (DOT) can be used to provide the enhancement kinetics of an FDA-approved optical contrast agent, ICG, which behaves like a large-molecular-weight optical agent owing to its binding to albumin. The enhancement kinetics of ICG may have the potential to distinguish between malignant and benign tumors and hence improve specificity. Our group has developed a high-speed hybrid MRI-DOT system. The DOT is a fully automated, MR-compatible, multi-frequency and multi-spectral imaging system. Fischer-344 rats bearing subcutaneous R3230 tumors are injected simultaneously with Gd-DTPA (0.1 nmol/kg) and IC-Green (2.5 mg/kg). The enhancement kinetics of both contrast agents are recorded simultaneously with this hybrid MRI-DOT system and evaluated for different tumors.

  17. Code Verification Results of an LLNL ASC Code on Some Tri-Lab Verification Test Suite Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, S R; Bihari, B L; Salari, K

    As scientific codes become more complex and involve larger numbers of developers and algorithms, the chances of algorithmic implementation mistakes increase. In this environment, code verification becomes essential to building confidence in the code implementation. This paper presents the first results of a new code verification effort within LLNL's B Division. In particular, we show results of code verification of the LLNL ASC ARES code on the following test problems: Su-Olson non-equilibrium radiation diffusion, the Sod shock tube, the Sedov point blast modeled with shock hydrodynamics, and the Noh implosion.
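
    A standard ingredient of such verification studies, shown here for orientation, is the observed order of accuracy computed from errors on two grid resolutions against an exact solution. This is generic verification practice, not ARES-specific code, and the error values below are made up.

        import math

        def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
            """Observed order of accuracy from errors on two grids.

            For a scheme of order p, err ~ C * h**p, so
            p = ln(err_coarse / err_fine) / ln(r).
            """
            return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

        # made-up errors from a grid-doubling study of a problem with an
        # exact solution (e.g. Sod or Su-Olson):
        print(observed_order(4.0e-3, 1.1e-3))   # ~1.9, i.e. near 2nd order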

  18. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    PubMed

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy for estimating its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges for the comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well-performing optimization approach exists that could be applied to many models and would balance run time against fit quality. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run-time constraints, with which we achieve whole-brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with different acquisition protocols. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade that initializes or fixes parameter values in a later optimization step from simpler models in an earlier step further improved run time, fit, accuracy and precision compared to a single-step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times, which is especially relevant for the use of diffusion microstructure modeling in large group or population studies and for combining microstructure parameter maps with tractography results.
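
    As a toy illustration of the winning strategy, the sketch below fits a two-parameter mono-exponential decay with SciPy's gradient-free Powell method, seeded by a simpler log-linear fit in the spirit of the cascaded initialization the note describes. Real microstructure models such as NODDI or CHARMED have many more parameters and are fitted per voxel on the GPU; everything here is illustrative.

        import numpy as np
        from scipy.optimize import minimize

        bvals = np.array([0.0, 500.0, 1000.0, 2000.0, 3000.0])  # s/mm^2
        rng = np.random.default_rng(1)
        # synthetic signal: S0 = 1.0, diffusivity d = 0.8e-3 mm^2/s, plus noise
        signal = 1.0 * np.exp(-bvals * 0.8e-3) + 0.01 * rng.normal(size=bvals.size)

        def sse(params):
            S0, d = params
            return np.sum((S0 * np.exp(-bvals * d) - signal) ** 2)

        # cascaded initialisation: a log-linear fit of the simpler model seeds Powell
        init_d = -np.polyfit(bvals[1:], np.log(np.maximum(signal[1:], 1e-6)), 1)[0]
        res = minimize(sse, x0=[signal[0], init_d], method="Powell")
        print(res.x)   # ~ [1.0, 8e-4]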

  19. Analytical performance bounds for multi-tensor diffusion-MRI.

    PubMed

    Ahmed Sid, Farid; Abed-Meraim, Karim; Harba, Rachid; Oulebsir-Boumghar, Fatima

    2017-02-01

    To examine the effects of MR acquisition parameters on the estimation of brain white matter fiber orientations and parameters of clinical interest in crossing-fiber areas, based on the Multi-Tensor Model (MTM). We compute the Cramér-Rao Bound (CRB) for the MTM and for parameters of clinical interest such as the Fractional Anisotropy (FA) and the dominant fiber orientations, assuming that the diffusion MRI data are recorded by a multi-coil, multi-shell acquisition system. Considering the sum-of-squares method for the reconstructed magnitude image, we introduce an approximate closed-form formula for the Fisher Information Matrix that has the advantages of simplicity and easy interpretation. In addition, we propose to generalize the FA and the mean diffusivity to the multi-tensor model. We show how the CRB can be applied to reduce scan time while preserving good estimation precision, and we provide results showing how increasing the number of acquisition coils compensates for decreasing the number of diffusion gradient directions. We analyze the impact of the b-value and the Signal-to-Noise Ratio (SNR). The analysis shows that the estimation error variance decreases quadratically with the SNR, and that the optimum b-values are not unique but depend on the target parameter, the context, and possibly the target cost function. In this study we highlight the importance of choosing appropriate acquisition parameters, especially when dealing with crossing-fiber areas, and we provide a methodology for the optimal tuning of these parameters using the CRB.
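
    For orientation, the bound referred to throughout is the standard Cramér-Rao inequality; the notation below is generic rather than the paper's.

        % Cramér-Rao lower bound: the covariance of any unbiased estimator
        % \hat{\theta} of the parameters \theta is bounded by the inverse
        % Fisher Information Matrix,
        \mathrm{cov}(\hat{\theta}) \succeq F^{-1}(\theta),
        \qquad
        F_{ij}(\theta) = \mathbb{E}\left[
          \frac{\partial \ln p(y;\theta)}{\partial \theta_i}\,
          \frac{\partial \ln p(y;\theta)}{\partial \theta_j}
        \right].
        % For additive noise of standard deviation \sigma, F scales as
        % 1/\sigma^2, so the bound on the error variance decreases
        % quadratically with the SNR, as the abstract states.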

  20. On-lattice agent-based simulation of populations of cells within the open-source Chaste framework.

    PubMed

    Figueredo, Grazziela P; Joshi, Tanvi V; Osborne, James M; Byrne, Helen M; Owen, Markus R

    2013-04-06

    Over the years, agent-based models have been developed that combine cell division and reinforced random walks of cells on a regular lattice, reaction-diffusion equations for nutrients and growth factors, and ordinary differential equations for the subcellular networks regulating the cell cycle. When linked to a vascular layer, this multiple-scale model framework has been applied to tumour growth and therapy. Here, we report on the creation of an agent-based multi-scale environment amalgamating the characteristics of these models within a Virtual Physiological Human (VPH) Exemplar Project, which enables reuse, integration, expansion and sharing of the model and relevant data. The agent-based and reaction-diffusion parts of the multi-scale model have been implemented and are available for download as part of the latest public release of Chaste (Cancer, Heart and Soft Tissue Environment; http://www.cs.ox.ac.uk/chaste/), part of the VPH Toolkit (http://toolkit.vph-noe.eu/). The environment's functionality is verified against the original models, in addition to extra validation of all aspects of the code. In this work, we present the details of the implementation of the agent-based environment, including the system description, the conceptual model, the development of the simulation model and the processes of verification and validation of the simulation results. We explore the potential use of the environment by presenting exemplar applications of the 'what if' scenarios that can easily be studied in the environment. These examples relate to tumour growth, cellular competition for resources and tumour responses to hypoxia (low oxygen levels). We conclude by summarizing the future steps for the expansion of the current system.
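
    A minimal sketch of the model class being amalgamated, assuming nothing about Chaste's actual API: cells on a periodic lattice divide with a nutrient-dependent probability while the nutrient obeys a discrete reaction-diffusion update. All rates and sizes are arbitrary.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 64
        cells = np.zeros((n, n), dtype=bool)
        cells[n // 2, n // 2] = True
        nutrient = np.ones((n, n))
        D, uptake, dt = 0.2, 0.1, 1.0

        for step in range(200):
            # reaction-diffusion: 5-point Laplacian with consumption by cells
            lap = (np.roll(nutrient, 1, 0) + np.roll(nutrient, -1, 0)
                   + np.roll(nutrient, 1, 1) + np.roll(nutrient, -1, 1)
                   - 4 * nutrient)
            nutrient = np.clip(nutrient + dt * (D * lap - uptake * cells * nutrient),
                               0.0, None)
            # agents: divide into a random neighbour, more likely where nutrient is high
            for x, y in np.argwhere(cells):
                if rng.random() < 0.2 * nutrient[x, y]:
                    dx, dy = ((-1, 0), (1, 0), (0, -1), (0, 1))[rng.integers(4)]
                    cells[(x + dx) % n, (y + dy) % n] = True
        print("cells after 200 steps:", cells.sum())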

  1. Halftoning Algorithms and Systems.

    DTIC Science & Technology

    1996-08-01

    Keywords: halftoning algorithms; error diffusion; color printing; topographic maps. ... graylevels for each screen level. In the case of error diffusion algorithms, the calibration procedure using the new centering concept manifests itself as a ... Novel Centering Concept for Overlapping Correction, Paper/Transparency (Patent Applied 5/94). Applications: to error diffusion; to dithering (IS&T
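
    For context, the classic member of the error-diffusion family this report studies is Floyd-Steinberg halftoning; a generic single-channel sketch (not code from the report) follows.

        import numpy as np

        # Floyd-Steinberg error diffusion: the quantisation error at each pixel
        # is pushed onto unprocessed neighbours with weights 7/16, 3/16, 5/16, 1/16.
        def error_diffuse(gray):
            """Binarise a float image in [0, 1], diffusing the error."""
            img = gray.astype(float).copy()
            h, w = img.shape
            out = np.zeros_like(img)
            for y in range(h):
                for x in range(w):
                    out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
                    err = img[y, x] - out[y, x]
                    if x + 1 < w:
                        img[y, x + 1] += err * 7 / 16
                    if y + 1 < h and x > 0:
                        img[y + 1, x - 1] += err * 3 / 16
                    if y + 1 < h:
                        img[y + 1, x] += err * 5 / 16
                    if y + 1 < h and x + 1 < w:
                        img[y + 1, x + 1] += err * 1 / 16
            return out

        ramp = np.tile(np.linspace(0, 1, 256), (32, 1))
        print(error_diffuse(ramp).mean())   # ~0.5: dot density tracks gray level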

  2. Numerical solution of a multi-ion one-potential model for electroosmotic flow in two-dimensional rectangular microchannels.

    PubMed

    Van Theemsche, Achim; Deconinck, Johan; Van den Bossche, Bart; Bortels, Leslie

    2002-10-01

    A new, more general numerical model for the simulation of electrokinetic flow in rectangular microchannels is presented. The model is based on the dilute-solution model and the Navier-Stokes equations and has been implemented in a finite-element-based C++ code. The model includes the ion distribution in the Helmholtz double layer and considers only a single electrical potential field variable throughout the domain. On charged surfaces the surface charge density, which is proportional to the local electrical field, is imposed. The zeta potential then results from this boundary condition and depends on concentrations, temperature, ion valence, molecular diffusion coefficients, and geometric conditions. Validation cases show that the model accurately reproduces known analytical results, including for geometries with dimensions comparable to the Debye length. As a final study, the electroosmotic flow in a controlled cross channel is investigated.

  3. Simulation of radiation damping in rings, using stepwise ray-tracing methods

    DOE PAGES

    Meot, F.

    2015-06-26

    The ray-tracing code Zgoubi computes particle trajectories in arbitrary magnetic and/or electric field maps or analytical field models. It includes a built-in fitting procedure, spin tracking, and many Monte Carlo processes. The accuracy of the integration method makes it an efficient tool for multi-turn tracking in periodic machines. Energy loss by synchrotron radiation, based on Monte Carlo techniques, was introduced in Zgoubi in the early 2000s for studies regarding the linear collider beam delivery system. However, only recently has this Monte Carlo tool been used for systematic beam dynamics and spin diffusion studies in rings, including the eRHIC electron-ion collider project at the Brookhaven National Laboratory. Some beam dynamics aspects of this recent use of Zgoubi capabilities, including considerations of accuracy as well as further benchmarking in the presence of synchrotron radiation in rings, are reported here.

  4. Simulation of the main physical processes in remote laser penetration with large laser spot size

    DOE PAGES

    Khairallah, S. A.; Anderson, A.; Rubenchik, A. M.; ...

    2015-04-10

    A 3D model is developed to simulate remote laser penetration of a 1 mm aluminum metal sheet with a large laser spot size (~3×3 cm²), using the ALE3D multi-physics code. The model deals with the laser-induced melting of the plate and the mechanical interaction between the solid and the melted part through the plate's elastic-plastic response. The effect of plate oscillations and other forces on plate rupture, the droplet formation mechanism, and the influence of gravity and high laser power in further breaking the single melt droplet into many more fragments are analyzed. In the limit of low laser power, the numerical results match the available experiments. The numerical approach couples mechanical and thermal diffusion to hydrodynamic melt flow and accounts for temperature-dependent material properties, surface tension, gravity and vapor recoil pressure.

  5. Variability in interhospital trauma data coding and scoring: A challenge to the accuracy of aggregated trauma registries.

    PubMed

    Arabian, Sandra S; Marcus, Michael; Captain, Kevin; Pomphrey, Michelle; Breeze, Janis; Wolfe, Jennefer; Bugaev, Nikolay; Rabinovici, Reuven

    2015-09-01

    Analyses of data aggregated in state and national trauma registries provide the platform for clinical, research, development, and quality improvement efforts in trauma systems. However, the interhospital variability and accuracy in data abstraction and coding have not yet been directly evaluated. This multi-institutional, Web-based, anonymous study examines interhospital variability and accuracy in data coding and scoring by registrars. Eighty-two American College of Surgeons (ACS)/state-verified Level I and II trauma centers were invited to code different data elements, including diagnostic, procedure, and Abbreviated Injury Scale (AIS) coding, as well as selected National Trauma Data Bank definitions, for the same fictitious case. Variability and accuracy in data entries were assessed by the maximal percent agreement among the registrars for the tested data elements, and 95% confidence intervals were computed to compare this level of agreement to the ideal value of 100%. Variability and accuracy in all elements were compared (χ² testing) based on Trauma Quality Improvement Program (TQIP) membership, level of trauma center, ACS verification, and registrar's certifications. Fifty registrars (61%) completed the survey. The overall accuracy for all tested elements was 64%. Variability was noted in all examined parameters except for the place of occurrence code in all groups and the lower extremity AIS code in Level II trauma centers and in the Certified Specialist in Trauma Registry- and Certified Abbreviated Injury Scale Specialist-certified registrar groups. No differences in variability were noted when groups were compared based on TQIP membership, level of center, ACS verification, and registrar's certifications, except for prehospital Glasgow Coma Scale (GCS), where TQIP respondents agreed more than non-TQIP centers (p = 0.004). There is variability and inaccuracy in interhospital data coding and scoring of injury information. This finding casts doubt on the validity of registry data used in all aspects of trauma care and injury surveillance.

  6. Unstructured Polyhedral Mesh Thermal Radiation Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmer, T.S.; Zika, M.R.; Madsen, N.K.

    2000-07-27

    Unstructured mesh particle transport and diffusion methods are gaining wider acceptance as mesh generation, scientific visualization and linear solvers improve. This paper describes an algorithm that is currently being used in the KULL code at Lawrence Livermore National Laboratory to solve the radiative transfer equations. The algorithm employs a point-centered diffusion discretization on arbitrary polyhedral meshes in 3D. We present the results of a few test problems to illustrate the capabilities of the radiation diffusion module.

  7. Performance and Application of Parallel OVERFLOW Codes on Distributed and Shared Memory Platforms

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Rizk, Yehia M.

    1999-01-01

    The presentation discusses recent studies on the performance of the two parallel versions of the aerodynamics CFD code, OVERFLOW_MPI and _MLP. Developed at NASA Ames, the serial version, OVERFLOW, is a multidimensional Navier-Stokes flow solver based on overset (Chimera) grid technology. The code has recently been parallelized in two ways. One is based on the explicit message-passing interface (MPI) across processors and uses the _MPI communication package. This approach is primarily suited for distributed memory systems and workstation clusters. The second, termed the multi-level parallel (MLP) method, is simple and uses shared memory for all communications. The _MLP code is suitable on distributed-shared memory systems. For both methods, the message passing takes place across the processors or processes at the advancement of each time step. This procedure is, in effect, the Chimera boundary conditions update, which is done in an explicit "Jacobi" style. In contrast, the update in the serial code is done in more of a "Gauss-Seidel" fashion. The programming effort for the _MPI code is greater than for the _MLP code; the former requires modification of the outer and some inner shells of the serial code, whereas the latter focuses only on the outer shell of the code. The _MPI version offers a great deal of flexibility in distributing grid zones across a specified number of processors in order to achieve load balancing. The approach is capable of partitioning zones across multiple processors or sending each zone and/or cluster of several zones into a single processor. The message passing across the processors consists of Chimera boundary and/or an overlap of "halo" boundary points for each partitioned zone. The MLP version is a new coarse-grain parallel concept at the zonal and intra-zonal levels. A grouping strategy is used to distribute zones into several groups forming sub-processes which will run in parallel. The total volume of grid points in each group is approximately balanced. A proper number of threads are initially allocated to each group, and in subsequent iterations during the run-time, the number of threads is adjusted to achieve load balancing across the processes. Each process exploits the multitasking directives already established in Overflow.

  8. Direct-to-diffuse UV Solar Irradiance Ratio for a UV rotating Shadowband Spectroradiometer and a UV Multi-filter Rotating Shadowband Radiometer

    NASA Astrophysics Data System (ADS)

    Lantz, K.; Kiedron, P.; Petropavlovskikh, I.; Michalsky, J.; Slusser, J.

    2008-12-01

    Two spectroradiometers that measure direct and diffuse UV solar irradiance are located at the Table Mountain Test Facility, 8 km north of Boulder, CO. The UV Rotating Shadowband Spectroradiometer (UV-RSS) measures diffuse and direct solar irradiance from 290-400 nm. The UV Multi-Filter Rotating Shadowband Radiometer (UV-MFRSR) measures diffuse and direct solar irradiance in seven 2-nm-wide bands (including 300, 305, 311, 317, 325, and 368 nm). The purpose of this work is to compare radiative transfer model calculations (TUV) with the direct-to-diffuse solar irradiance ratios (DDR) estimated from the UV-RSS and the UV-MFRSR, in order to evaluate the possibility of retrieving aerosol single scattering albedo (SSA) under a variety of atmospheric conditions: large and small aerosol loading, and large and small surface albedo. For the radiative transfer calculations, total ozone measurements are obtained from a collocated Brewer spectrophotometer.

  9. Performance optimization of spectral amplitude coding OCDMA system using new enhanced multi diagonal code

    NASA Astrophysics Data System (ADS)

    Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf

    2016-11-01

    This paper proposes a new code to optimize the performance of spectral amplitude coding optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi-diagonal (EMD) code and its effective correlation properties between intended and interfering subscribers significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). The performance of the SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques, through analytic and simulation analysis, by referring to the bit error rate (BER), signal-to-noise ratio (SNR) and eye patterns at the receiving end. It is shown that the EMD code with the SDD technique provides high transmission capacity, reduces receiver complexity, and performs better than the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10⁻⁹, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both up- and downlink transmission.

  10. Conceptual model analysis of interaction at a concrete-Boom Clay interface

    NASA Astrophysics Data System (ADS)

    Liu, Sanheng; Jacques, Diederik; Govaerts, Joan; Wang, Lian

    In many concepts for deep disposal of high-level radioactive waste, cementitious materials are used in the engineered barriers. For example, in Belgium the engineered barrier system is based on a considerable amount of cementitious material as buffer and backfill in the so-called supercontainer embedded in the hosting geological formation. A potential hosting formation is Boom Clay. Insight into the interaction between the high-pH pore water of the cementitious materials and the neutral-pH Boom Clay pore water is required. Two problems are common when modelling such a system. The first is the computational cost, due to the long timescales envisaged in model assessments of the deep disposal system; in addition, a very fine (sub-millimeter) grid, especially at interfaces, has to be used in order to accurately predict the evolution of the system. The second is whether to use equilibrium or kinetic reaction models. The objectives of this paper are twofold. First, we develop an efficient coupled reactive transport code for this diffusion-dominated system by making full use of multi-processor/multi-core computers. Second, we investigate how sensitive the system is to the choice of chemical reaction model, especially when pore clogging due to mineral precipitation is considered within the cementitious system. To do this, we selected two portlandite dissolution models: an equilibrium model (fastest) and a diffusion-controlled model in which a calcite layer precipitates around the portlandite particles (shrinking-core dissolution). The results show that with the shrinking-core model, portlandite dissolution and calcite precipitation are much slower than with the equilibrium model. Diffusion-controlled dissolution also smooths out dissolution fronts compared to the equilibrium model. However, only a slight difference with respect to the clogging time is found, even when a very small diffusion coefficient (10-20 m2/s) is used in the precipitated calcite layer.
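
    The diffusion-controlled (shrinking-core) alternative to the equilibrium model can be summarised by the standard product-layer rate law below; the notation is mine and this is the textbook form, not necessarily the exact expression implemented in the paper.

        % Shrinking-core, diffusion-controlled dissolution: a spherical
        % portlandite particle of initial radius r_0 dissolves behind a
        % growing calcite layer through which ions diffuse (quasi-steady state):
        \frac{dr_c}{dt}
          = -\,\frac{D_{\mathrm{eff}}\,\bigl(c_{\mathrm{eq}} - c_{\infty}\bigr)}
                    {\rho\, r_c \bigl(1 - r_c / r_0\bigr)},
        % with r_c the unreacted-core radius, D_eff the diffusivity in the
        % calcite layer (the paper's sensitivity case uses ~1e-20 m^2/s),
        % c_eq the solubility at the core surface and \rho the molar density
        % of portlandite. As D_eff grows large, the equilibrium model is recovered.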

  11. A social ecology of rectal microbicide acceptability among young men who have sex with men and transgender women in Thailand

    PubMed Central

    Newman, Peter A; Roungprakhon, Surachet; Tepjan, Suchon

    2013-01-01

    Introduction With HIV-incidence among men who have sex with men (MSM) in Bangkok among the highest in the world, a topical rectal microbicide would be a tremendous asset to prevention. Nevertheless, ubiquitous gaps between clinical trial efficacy and real-world effectiveness of existing HIV preventive interventions highlight the need to address multi-level factors that may impact on rectal microbicide implementation. We explored the social ecology of rectal microbicide acceptability among MSM and transgender women in Chiang Mai and Pattaya, Thailand. Methods We used a qualitative approach guided by a social ecological model. Five focus groups were conducted in Thai using a semi-structured interview guide. All interviews were digitally recorded, transcribed verbatim in Thai and translated into English. We conducted thematic analysis using line-by-line and axial coding and a constant comparative method. Transcripts and codes were uploaded into a customized database programmed in Microsoft Access. We then used content analysis to calculate theme frequencies by group, and Chi-square tests and Fisher's exact test to compare themes by sexual orientation/gender expression and age. Results Participants' (n=37) mean age was 24.8 years (SD=4.2). The majority (70.3%) self-identified as gay, 24.3% transgender women. Product-level themes (side effects, formulation, efficacy, scent, etc.) accounted for 42%, individual (increased sexual risk, packaging/portability, timing/duration of protection) 29%, interpersonal (trust/communication, power/negotiation, stealth) 8% and social–structural (cost, access, community influence, stigma) 21% of total codes, with significant differences by sexual orientation/gender identity. The intersections of multi-level influences included product formulation and timing of use preferences contingent on interpersonal communication and partner type, in the context of constraints posed by stigma, venues for access and cost. Discussion The intersecting influence of multi-level factors on rectal microbicide acceptability suggests that social–structural interventions to ensure widespread access, low cost and to mitigate stigma and discrimination against gay and other MSM and transgender women in the Thai health care system and broader society will support the effectiveness of rectal microbicides, in combination with other prevention technologies, in reducing HIV transmission. Education, outreach and small-group interventions that acknowledge differences between MSM and transgender women may support rectal microbicide implementation among most-at-risk populations in Thailand. PMID:23911116

  13. Multi-Dimensional Asymptotically Stable 4th Order Accurate Schemes for the Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Abarbanel, Saul; Ditkowski, Adi

    1996-01-01

    An algorithm is presented which solves the multi-dimensional diffusion equation on complex shapes to 4th-order accuracy and is asymptotically stable in time. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail.
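
    The bounded-error argument can be summarised in one energy estimate; the notation below is generic and compresses the authors' construction.

        % Writing the semi-discrete scheme for u_t = \nabla^2 u (with the
        % penalty terms enforcing the Dirichlet data folded into M and b) as
        \frac{d\mathbf{u}}{dt} = M\,\mathbf{u} + \mathbf{b}(t),
        \qquad M + M^{T} \le 0,
        % the energy method gives
        % (d/dt)\,\|\mathbf{u}\|^2 = \mathbf{u}^{T}(M + M^{T})\,\mathbf{u}
        %   + 2\,\mathbf{u}^{T}\mathbf{b} \le 2\,\|\mathbf{u}\|\,\|\mathbf{b}\|,
        % so the solution norm is controlled by the data for all times: the
        % bounded-error ("asymptotically stable") property the abstract claims.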

  14. Benchmarking GPU and CPU codes for Heisenberg spin glass over-relaxation

    NASA Astrophysics Data System (ADS)

    Bernaschi, M.; Parisi, G.; Parisi, L.

    2011-06-01

    We present a set of possible implementations for Graphics Processing Units (GPU) of the Over-relaxation technique applied to the 3D Heisenberg spin glass model. The results show that a carefully tuned code can achieve more than 100 GFlops/s of sustained performance and update a single spin in about 0.6 nanoseconds. A multi-hit technique that exploits the GPU shared memory further reduces this time. Such results are compared with those obtained by means of a highly-tuned vector-parallel code on latest generation multi-core CPUs.
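
    The over-relaxation move itself is simple: each spin is reflected about its local molecular field, which conserves the energy exactly when non-interacting sublattices are updated in turn, and this independence is what both the GPU and the vectorised CPU codes exploit. A NumPy sketch for a ferromagnetic nearest-neighbour lattice follows (a spin glass would draw random couplings per bond); the lattice size and checkerboard decomposition are illustrative.

        import numpy as np

        # Microcanonical over-relaxation: reflect each spin about its local
        # field h, i.e. s' = 2 (s.h / h.h) h - s.
        rng = np.random.default_rng(3)
        L = 16
        s = rng.normal(size=(L, L, L, 3))
        s /= np.linalg.norm(s, axis=-1, keepdims=True)
        x, y, z = np.indices((L, L, L))
        even = (x + y + z) % 2 == 0   # checkerboard: even sites do not interact

        def local_field(s):
            # nearest-neighbour sum on a periodic cubic lattice (J = 1)
            return sum(np.roll(s, sh, axis=ax) for ax in range(3) for sh in (1, -1))

        def overrelax(s, mask):
            h = local_field(s)
            proj = (np.sum(s * h, axis=-1, keepdims=True)
                    / np.sum(h * h, axis=-1, keepdims=True))
            s_ref = 2.0 * proj * h - s
            s[mask] = s_ref[mask]

        energy = lambda s: -0.5 * np.sum(s * local_field(s))
        E0 = energy(s)
        overrelax(s, even)
        overrelax(s, ~even)
        print(E0, energy(s))   # identical up to roundoff: the move preserves energy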

  15. High-throughput ab-initio dilute solute diffusion database.

    PubMed

    Wu, Henry; Mayeshiba, Tam; Morgan, Dane

    2016-07-19

    We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world.
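
    For orientation, the tabulated quantity per host/solute pair follows an Arrhenius form, so the activation barrier quoted in the RMS-error benchmark enters the diffusivity exponentially; the notation below is generic.

        % Dilute-solute diffusivity in Arrhenius form: the activation barrier
        % Q (the quantity behind the 0.176 eV RMS-error benchmark) enters
        % exponentially,
        D(T) = D_0 \exp\left(-\frac{Q}{k_B T}\right),
        % with D_0 and Q assembled from DFT migration barriers and attempt
        % frequencies via multi-frequency models (e.g. the five-frequency
        % model for fcc hosts).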

  16. A Passive Wireless Multi-Sensor SAW Technology Device and System Perspectives

    PubMed Central

    Malocha, Donald C.; Gallagher, Mark; Fisher, Brian; Humphries, James; Gallagher, Daniel; Kozlovski, Nikolai

    2013-01-01

    This paper discusses a passive, wireless SAW multi-sensor system under development by our group for the past several years. The device focus is on orthogonal frequency coded (OFC) SAW sensors, which use both frequency diversity and pulse-position reflectors to encode the device ID; these are briefly contrasted with other embodiments. A synchronous correlator transceiver is used for the hardware, and the post-processing and correlation techniques applied to the received signal to extract the sensor information are presented. Critical device and system parameters addressed include encoding, operational range, SAW device parameters, post-processing, and antenna-SAW device integration. A fully developed 915 MHz OFC SAW multi-sensor system is used to show experimental results. The system is based on a software-radio approach that provides great flexibility for future enhancements and diverse sensor applications. Several different sensor types using the OFC SAW platform are shown. PMID:23666124

  17. Joint Patch and Multi-label Learning for Facial Action Unit Detection

    PubMed Central

    Zhao, Kaili; Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.; Zhang, Honggang

    2016-01-01

    The face is one of the most powerful channels of nonverbal communication. The most commonly used taxonomy to describe facial behaviour is the Facial Action Coding System (FACS). FACS segments the visible effects of facial muscle activation into 30+ action units (AUs). AUs, which may occur alone and in thousands of combinations, can describe nearly all possible facial expressions. Most existing methods for automatic AU detection treat the problem using one-vs-all classifiers and fail to exploit dependencies among AUs and facial features. We introduce joint patch and multi-label learning (JPML) to address these issues. JPML leverages group sparsity by selecting a sparse subset of facial patches while learning a multi-label classifier. In four of five comparisons on three diverse datasets, CK+, GFT, and BP4D, JPML produced the highest average F1 scores in comparison with the state of the art. PMID:27382243

  18. Automated diagnosis of interstitial lung diseases and emphysema in MDCT imaging

    NASA Astrophysics Data System (ADS)

    Fetita, Catalin; Chang Chien, Kuang-Che; Brillet, Pierre-Yves; Prêteux, Françoise

    2007-09-01

    Diffuse lung diseases (DLD) comprise a heterogeneous group of non-neoplastic diseases resulting from damage to the lung parenchyma by varying patterns of inflammation. Characterization and quantification of DLD severity using MDCT, mainly in interstitial lung diseases and emphysema, is an important issue in clinical research for the evaluation of new therapies. This paper develops a 3D automated approach for the detection and diagnosis of diffuse lung diseases such as fibrosis/honeycombing, ground glass and emphysema. The proposed methodology combines multi-resolution 3D morphological filtering (exploiting the sup-constrained connection cost operator) and graph-based classification for a full characterization of the parenchymal tissue. The morphological filtering performs a multi-level segmentation of the low- and medium-attenuated lung regions as well as their classification with respect to a granularity criterion (multi-resolution analysis). The original intensity range of the CT data volume is thus reduced in the segmented data to a number of levels equal to the resolution depth used (generally ten levels). The specificity of such morphological filtering is to extract tissue patterns that locally contrast with their neighborhood and are of size inferior to the resolution depth, while preserving their original shape. A multi-valued hierarchical graph describing the segmentation result is built up according to the resolution level and the adjacency of the different segmented components. The graph nodes are then enriched with the textural information carried by their associated components. A graph analysis-reorganization based on the node attributes delivers the final classification of the lung parenchyma into normal and ILD/emphysematous regions. It also makes it possible to discriminate between different types, or development stages, among the same class of diseases.

  19. Long residence times of rapidly decomposable soil organic matter: application of a multi-phase, multi-component, and vertically-resolved model (TOUGHREACTv1) to soil carbon dynamics

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Maggi, F. M.; Kleber, M.; Torn, M. S.; Tang, J. Y.; Dwivedi, D.; Guerry, N.

    2014-01-01

    Accurate representation of soil organic matter (SOM) dynamics in Earth System Models is critical for future climate prediction, yet large uncertainties exist regarding how, and to what extent, the suite of proposed relevant mechanisms should be included. To investigate how various mechanisms interact to influence SOM storage and dynamics, we developed a SOM reaction network integrated in a one-dimensional, multi-phase, and multi-component reactive transport solver. The model includes representations of bacterial and fungal activity, multiple archetypal polymeric and monomeric carbon substrate groups, aqueous chemistry, aqueous advection and diffusion, gaseous diffusion, and adsorption (and protection) and desorption from the soil mineral phase. The model predictions reasonably matched observed depth-resolved SOM and dissolved organic carbon (DOC) stocks in grassland ecosystems as well as lignin content and fungi to aerobic bacteria ratios. We performed a suite of sensitivity analyses under equilibrium and dynamic conditions to examine the role of dynamic sorption, microbial assimilation rates, and carbon inputs. To our knowledge, observations do not exist to fully test such a complicated model structure or to test the hypotheses used to explain observations of substantial storage of very old SOM below the rooting depth. Nevertheless, we demonstrated that a reasonable combination of sorption parameters, microbial biomass and necromass dynamics, and advective transport can match observations without resorting to an arbitrary depth-dependent decline in SOM turnover rates, as is often done. We conclude that, contrary to assertions derived from existing turnover time based model formulations, observed carbon content and δ14C vertical profiles are consistent with a representation of SOM dynamics consisting of (1) carbon compounds without designated intrinsic turnover times, (2) vertical aqueous transport, and (3) dynamic protection on mineral surfaces.

  20. Long residence times of rapidly decomposable soil organic matter: application of a multi-phase, multi-component, and vertically resolved model (BAMS1) to soil carbon dynamics

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Maggi, F.; Kleber, M.; Torn, M. S.; Tang, J. Y.; Dwivedi, D.; Guerry, N.

    2014-07-01

    Accurate representation of soil organic matter (SOM) dynamics in Earth system models is critical for future climate prediction, yet large uncertainties exist regarding how, and to what extent, the suite of proposed relevant mechanisms should be included. To investigate how various mechanisms interact to influence SOM storage and dynamics, we developed an SOM reaction network integrated in a one-dimensional, multi-phase, and multi-component reactive transport solver. The model includes representations of bacterial and fungal activity, multiple archetypal polymeric and monomeric carbon substrate groups, aqueous chemistry, aqueous advection and diffusion, gaseous diffusion, and adsorption (and protection) and desorption from the soil mineral phase. The model predictions reasonably matched observed depth-resolved SOM and dissolved organic matter (DOM) stocks and fluxes, lignin content, and fungi to aerobic bacteria ratios. We performed a suite of sensitivity analyses under equilibrium and dynamic conditions to examine the role of dynamic sorption, microbial assimilation rates, and carbon inputs. To our knowledge, observations do not exist to fully test such a complicated model structure or to test the hypotheses used to explain observations of substantial storage of very old SOM below the rooting depth. Nevertheless, we demonstrated that a reasonable combination of sorption parameters, microbial biomass and necromass dynamics, and advective transport can match observations without resorting to an arbitrary depth-dependent decline in SOM turnover rates, as is often done. We conclude that, contrary to assertions derived from existing turnover time based model formulations, observed carbon content and Δ14C vertical profiles are consistent with a representation of SOM consisting of carbon compounds with relatively fast reaction rates, vertical aqueous transport, and dynamic protection on mineral surfaces.

  1. Status and future of MUSE

    NASA Astrophysics Data System (ADS)

    Harfst, S.; Portegies Zwart, S.; McMillan, S.

    2008-12-01

    We present MUSE, a software framework for combining existing computational tools from different astrophysical domains into a single multi-physics, multi-scale application. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly-coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for studying generalized stellar systems. We have now reached a "Noah's Ark" milestone, with (at least) two available numerical solvers for each domain. MUSE can treat multi-scale and multi-physics systems in which the time- and size-scales are well separated, like simulating the evolution of planetary systems, small stellar associations, dense stellar clusters, galaxies and galactic nuclei. In this paper we describe two examples calculated using MUSE: the merger of two galaxies and an N-body simulation with live stellar evolution. In addition, we demonstrate an implementation of MUSE on a distributed computer which may also include special-purpose hardware, such as GRAPEs or GPUs, to accelerate computations. The current MUSE code base is publicly available as open source at http://muse.li.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDaniel, Jesse G.; Yethiraj, Arun

    The diffusion of protons in self-assembled systems is potentially important for the design of efficient proton exchange membranes. In this work, we study proton dynamics in a low-water-content, lamellar phase of a sodium-carboxylate gemini surfactant/water system using computer simulations. The hopping of protons via the Grotthuss mechanism is explicitly allowed through the multi-state empirical valence bond (MS-EVB) method. We find that the hydronium ion is trapped on the hydrophobic side of the surfactant-water interface, and proton diffusion then proceeds by hopping between surface sites. The importance of hydrophobic traps is surprising, because one would expect the hydronium ions to be trapped at the charged head-groups. Finally, the physics illustrated in this system should be relevant to proton dynamics in other amphiphilic membrane systems whenever exposed hydrophobic surface regions exist.
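
    Once the MS-EVB machinery has produced a trajectory of the excess-proton position, the diffusion coefficient is typically extracted with the standard Einstein relation; the sketch below applies it to a synthetic random walk (the MS-EVB step itself, which handles the changing bond topology of Grotthuss hopping, is not reproduced here). All numbers are illustrative.

        import numpy as np

        # Einstein-relation estimate of a diffusion coefficient from a
        # trajectory: MSD(t) -> 6 D t in 3-D.
        rng = np.random.default_rng(4)
        dt_ps, steps, D_true = 0.1, 20000, 0.05   # ps, steps, nm^2/ps (illustrative)
        # synthetic 3-D random walk standing in for the hydronium
        # centre-of-charge trajectory an MS-EVB run would produce
        traj = np.cumsum(rng.normal(scale=np.sqrt(2 * D_true * dt_ps),
                                    size=(steps, 3)), axis=0)

        lags = np.arange(1, 200)
        msd = np.array([np.mean(np.sum((traj[l:] - traj[:-l]) ** 2, axis=1))
                        for l in lags])
        D_est = np.polyfit(lags * dt_ps, msd, 1)[0] / 6.0   # slope of MSD / 6
        print(f"D ~ {D_est:.3f} nm^2/ps (input {D_true})")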

  3. Predicting pain relief: Use of pre-surgical trigeminal nerve diffusion metrics in trigeminal neuralgia.

    PubMed

    Hung, Peter S-P; Chen, David Q; Davis, Karen D; Zhong, Jidan; Hodaie, Mojgan

    2017-01-01

    Trigeminal neuralgia (TN) is a chronic neuropathic facial pain disorder that commonly responds to surgery. A proportion of patients, however, do not benefit and suffer ongoing pain. There are currently no imaging tools that permit prediction of treatment response. To address this paucity, we used diffusion tensor imaging (DTI) to determine whether pre-surgical trigeminal nerve microstructural diffusivities can prognosticate response to TN treatment. In 31 TN patients and 16 healthy controls, multi-tensor tractography was used to extract DTI-derived metrics (axial diffusivity (AD), radial diffusivity (RD), mean diffusivity (MD), and fractional anisotropy (FA)) from the cisternal segment, root entry zone and pontine segment of the trigeminal nerves for false-discovery-rate-corrected Student's t-tests. Ipsilateral diffusivities were bootstrap-resampled to visualize group-level diffusivity thresholds of long-term response. To obtain an individual-level statistical classifier of surgical response, we conducted discriminant function analysis (DFA) with the type of surgery chosen, alongside ipsilateral measurements and ipsilateral/contralateral ratios of AD and RD from all regions of interest, as prediction variables. Abnormal diffusivity in the trigeminal pontine fibers, demonstrated by increased AD, highlighted non-responders (n = 14) compared to controls. Bootstrap resampling revealed three ipsilateral diffusivity thresholds of response (pontine AD, pontine MD, and cisternal FA) separating 85% of non-responders from responders. DFA produced an 83.9% (71.0% using leave-one-out cross-validation) accurate prognosticator of response that successfully identified 12/14 non-responders. Our study demonstrates that pre-surgical DTI metrics can serve as a highly predictive, individualized tool to prognosticate surgical response. We further highlight abnormal pontine segment diffusivities as key features of treatment non-response and confirm the axiom that central pain does not commonly benefit from peripheral treatments.

  4. Sparse and Adaptive Diffusion Dictionary (SADD) for recovering intra-voxel white matter structure.

    PubMed

    Aranda, Ramon; Ramirez-Manzanares, Alonso; Rivera, Mariano

    2015-12-01

    On the analysis of Diffusion-Weighted Magnetic Resonance Images, multi-compartment models overcome the limitations of the well-known Diffusion Tensor model for fitting in vivo brain axonal orientations at voxels with fiber crossings, branching, kissing or bifurcations. Some successful multi-compartment methods are based on diffusion dictionaries. The diffusion dictionary-based methods assume that the observed Magnetic Resonance signal at each voxel is a linear combination of the fixed dictionary elements (dictionary atoms). The atoms are fixed along different orientations and diffusivity profiles. In this work, we present a sparse and adaptive diffusion dictionary method based on the Diffusion Basis Functions Model to estimate in vivo brain axonal fiber populations. Our proposal overcomes the following limitations of the diffusion dictionary-based methods: the limited angular resolution and the fixed shapes of the atom set. We propose to iteratively re-estimate the orientations and the diffusivity profile of the atoms independently at each voxel by using a simplified and easier-to-solve mathematical approach. As a result, we improve the fitting of the Diffusion-Weighted Magnetic Resonance signal. The advantages with respect to the former Diffusion Basis Functions method are demonstrated on the synthetic data-set used in the 2012 HARDI Reconstruction Challenge and on in vivo human data. We demonstrate that the improvements obtained in the intra-voxel fiber structure estimations benefit brain research by allowing better tractography estimates, and hence a more accurate computation of brain connectivity patterns.
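    As background for the dictionary idea this record builds on, the sketch below fits a voxel signal as a non-negative combination of fixed single-tensor atoms, using scipy's nnls. The atom profile, b-value, and diffusivities are illustrative assumptions; SADD's distinguishing step, the per-voxel re-estimation of atom orientations and profiles, is not reproduced here.

        # Generic diffusion-dictionary fit (fixed atoms, non-negative weights).
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(0)
        n_meas, n_atoms = 60, 20
        bvecs = rng.normal(size=(n_meas, 3))
        bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
        b = 1000.0  # s/mm^2, illustrative shell

        # Atoms: single-tensor signals along candidate orientations
        # (assumed diffusivity profile, identical for every atom).
        dirs = rng.normal(size=(n_atoms, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        d_par, d_perp = 1.7e-3, 0.3e-3  # mm^2/s, illustrative
        cos2 = (bvecs @ dirs.T) ** 2
        D = np.exp(-b * (d_perp + (d_par - d_perp) * cos2))  # (n_meas, n_atoms)

        # Synthetic two-fiber voxel and its sparse non-negative fit.
        w_true = np.zeros(n_atoms); w_true[[3, 11]] = 0.6, 0.4
        signal = D @ w_true + 0.01 * rng.normal(size=n_meas)
        w_fit, residual = nnls(D, signal)
        print("largest weights at atoms:", np.argsort(w_fit)[-2:])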

  5. Non-parametric representation and prediction of single- and multi-shell diffusion-weighted MRI data using Gaussian processes

    PubMed Central

    Andersson, Jesper L.R.; Sotiropoulos, Stamatios N.

    2015-01-01

    Diffusion MRI offers great potential in studying the human brain microstructure and connectivity. However, diffusion images are marred by technical problems, such as image distortions and spurious signal loss. Correcting for these problems is non-trivial and relies on having a mechanism that predicts what to expect. In this paper we describe a novel way to represent and make predictions about diffusion MRI data. It is based on a Gaussian process on one or several spheres similar to the Geostatistical method of “Kriging”. We present a choice of covariance function that allows us to accurately predict the signal even from voxels with complex fibre patterns. For multi-shell data (multiple non-zero b-values) the covariance function extends across the shells which means that data from one shell is used when making predictions for another shell. PMID:26236030
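    A minimal numerical sketch of Gaussian-process prediction from gradient directions on the sphere follows. The covariance used here (exponential in the angle between directions, made antipodally symmetric) is a simplified stand-in for the paper's covariance function, and all data are synthetic.

        # GP posterior mean for diffusion signal sampled on the sphere.
        import numpy as np

        def cov(u, v, ell=0.8, s2=1.0):
            # Angle between directions, with the antipodal symmetry of
            # diffusion MRI (signal(q) == signal(-q)).
            c = np.abs(np.clip(u @ v.T, -1.0, 1.0))
            return s2 * np.exp(-np.arccos(c) / ell)

        rng = np.random.default_rng(1)
        train = rng.normal(size=(30, 3)); train /= np.linalg.norm(train, axis=1, keepdims=True)
        test = rng.normal(size=(5, 3));   test /= np.linalg.norm(test, axis=1, keepdims=True)

        y = np.exp(-2.0 * (train @ np.array([0.0, 0.0, 1.0])) ** 2)  # toy signal
        K = cov(train, train) + 1e-4 * np.eye(len(train))            # noise jitter
        y_pred = cov(test, train) @ np.linalg.solve(K, y)            # posterior mean
        print(y_pred)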

  6. Social and diagnostic inequality in health.

    PubMed

    Bringedal, Berit; Tufte, Per Arne

    2012-11-01

    Empirical studies of social inequalities in health commonly take the diagnosing of disease for granted. Social inequalities in health are seen as the result of social processes, yet the diagnosis itself is rarely considered to contribute to such inequality. We argue that the influence of sociocultural and cognitive bias in the diagnosing process follows a social pattern, such that certain diagnoses are disproportionally over- or underrepresented in different socioeconomic groups due to interpretive bias of underlying symptoms. Norwegian data on sick leave for diffuse musculoskeletal and diffuse psychiatric disease in 2006 were analysed to study the distribution of the two diagnoses in different status groups. Socioeconomic status was measured by years of education. Diagnoses and occupational codes were based on national registers; diagnoses in accordance with the International Classification of Primary Care second edition. We compared occupations in technical sectors to occupations in the health sector and the relative number of cases of sick leave controlled for years of education, gender, occupational sector, and diagnosis. Data were analysed by cross-tabulation, ratio of diffuse psychiatric/musculoskeletal diseases, and logistic regression. The ratio of diffuse psychiatric/musculoskeletal diseases increases with education and decreases if the employee works in a technical job. The results challenge the traditional explanation that job features alone can explain the distribution of disease and suggest that a part of the persistent social inequality in health can be caused by the diagnosing process. In order to reach a better understanding of the processes behind the social inequalities in health, the diagnosing process itself should also be studied.

  7. A Benchmarking Initiative for Reactive Transport Modeling Applied to Subsurface Environmental Applications

    NASA Astrophysics Data System (ADS)

    Steefel, C. I.

    2015-12-01

    Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences. As a result, the complexity and detail of subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed form solutions are necessary and useful, but limited to situations that are far simpler than typical applications that combine many physical and chemical processes, in many cases in coupled form. In the absence of closed form and yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable to qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code (the reactive transport codes play a supporting role in this regard), but rather to use the codes to verify that a common solution of the problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally-relevant benchmark problem that tests the conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include 1) microbially-mediated reactions, 2) isotopes, 3) multi-component diffusion, 4) uranium fate and transport, 5) metal mobility in mining affected systems, and 6) waste repositories and related aspects.

  8. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deen, J.R.; Woodruff, W.L.; Leal, L.E.

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  9. Transient Ejector Analysis (TEA) code user's guide

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.

    1993-01-01

    A FORTRAN computer program for the semi-analytic prediction of unsteady thrust augmenting ejector performance has been developed, based on a theoretical analysis for ejectors. That analysis blends classic self-similar turbulent jet descriptions with control-volume mixing region elements. Division of the ejector into an inlet, diffuser, and mixing region allowed flexibility in the modeling of the physics for each region. In particular, the inlet and diffuser analyses are simplified by a quasi-steady analysis, justified by the assumption that pressure is the forcing function in those regions. Only the mixing region is assumed to be dominated by viscous effects. The present work provides an overview of the code structure, a description of the required input and output data file formats, and the results for a test case. Since there are limitations to the code for applications outside the bounds of the test case, the user should consider TEA as a research code (not as a production code), designed specifically as an implementation of the proposed ejector theory. Program error flags are discussed, and some diagnostic routines are presented.

  10. Electrostatic interactions between diffuse soft multi-layered (bio)particles: beyond Debye-Hückel approximation and Deryagin formulation.

    PubMed

    Duval, Jérôme F L; Merlin, Jenny; Narayana, Puranam A L

    2011-01-21

    We report a steady-state theory for the evaluation of electrostatic interactions between identical or dissimilar spherical soft multi-layered (bio)particles, e.g. microgels or microorganisms. These generally consist of a rigid core surrounded by concentric ion-permeable layers that may differ in thickness, soft material density, chemical composition and degree of dissociation of the ionogenic groups. The formalism accounts for diffuse interphases, where the distributions of ionogenic groups from one layer to the other are position-dependent. The model is valid for any number of ion-permeable layers around the core of the interacting soft particles and covers all limiting situations in terms of the nature of the interacting particles, i.e. homo- and hetero-interactions between hard, soft or entirely porous colloids. The theory is based on a rigorous numerical solution of the non-linearized Poisson-Boltzmann equation, including radial and angular distortions of the electric field distribution within and outside the interacting soft particles as they approach each other. The Gibbs energy of electrostatic interaction is obtained from a general expression derived following the method of Verwey and Overbeek, based on appropriate electric double layer charging mechanisms. Original analytical solutions are provided here for cases where the interaction takes place between soft multi-layered particles whose size and charge density are in line with the Deryagin treatment and the Debye-Hückel approximation. These situations include interactions between hard and soft particles, a hard plate and a soft particle, or a soft plate and a soft particle. The flexibility of the formalism is highlighted by the discussion of a few situations which clearly illustrate that the electrostatic interaction between multi-layered particles may be partly or predominantly governed by the potential distribution within the most internal layers. A major consequence is that both the amplitude and the sign of the Gibbs electrostatic interaction energy may dramatically change depending on the interplay between the characteristic Debye length, the thickness of the ion-permeable layers and their respective protolytic features (e.g. location, magnitude and sign of charge density). This formalism extends a recent model by Ohshima which is strictly limited to the interaction between soft mono-shell particles within the Deryagin and Debye-Hückel approximations under conditions where the ionizable sites are completely dissociated.
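    For orientation, the sketch below solves only the linearized (Debye-Hückel) limit that the paper generalizes: the radial DH equation outside a charged hard sphere, checked against the known screened-Coulomb solution. The full model solves the non-linear Poisson-Boltzmann equation for multi-layered particles, which this toy does not attempt; all values are dimensionless placeholders.

        # Radial Debye-Hückel equation: psi'' + (2/r) psi' = kappa^2 psi,
        # with psi(a) = psi_a and psi -> 0 far away. The analytic solution is
        # psi(r) = psi_a * (a/r) * exp(-kappa * (r - a)).
        import numpy as np

        a, kappa, psi_a = 1.0, 2.0, 1.0
        r = np.linspace(a, a + 10.0 / kappa, 801)
        h = r[1] - r[0]
        n = len(r)

        A = np.zeros((n, n)); rhs = np.zeros(n)
        A[0, 0] = 1.0; rhs[0] = psi_a      # surface potential (Dirichlet)
        A[-1, -1] = 1.0                    # far-field psi = 0
        for i in range(1, n - 1):
            A[i, i - 1] = 1.0 / h**2 - 1.0 / (r[i] * h)
            A[i, i]     = -2.0 / h**2 - kappa**2
            A[i, i + 1] = 1.0 / h**2 + 1.0 / (r[i] * h)
        psi = np.linalg.solve(A, rhs)

        exact = psi_a * (a / r) * np.exp(-kappa * (r - a))
        print("max abs error vs analytic DH solution:", np.abs(psi - exact).max())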

  11. An Evaluation of an Intervention Using Sign Language and Multi-Sensory Coding to Support Word Learning and Reading Comprehension of Deaf Signing Children

    ERIC Educational Resources Information Center

    van Staden, Annalene

    2013-01-01

    The reading skills of many deaf children lag several years behind those of hearing children, and there is a need for identifying reading difficulties and implementing effective reading support strategies in this population. This study embraces a balanced reading approach, and investigates the efficacy of applying multi-sensory coding strategies…

  12. Coupled thermo-chemical boundary conditions in double-diffusive geodynamo models at arbitrary Lewis numbers.

    NASA Astrophysics Data System (ADS)

    Bouffard, M.

    2016-12-01

    Convection in the Earth's outer core is driven by the combination of two buoyancy sources: a thermal source directly related to the Earth's secular cooling, the release of latent heat and possibly the heat generated by radioactive decay, and a compositional source due to the crystallization of the growing inner core, which releases light elements into the liquid outer core. Because the dynamics of melting/crystallization depend on the heat flux distribution, the thermochemical boundary conditions are coupled at the inner core boundary, which may affect the dynamo in various ways, particularly if heterogeneous conditions are imposed at one boundary. In addition, the thermal and compositional molecular diffusivities differ by three orders of magnitude. This can produce significant differences in the convective dynamics compared to pure thermal or compositional convection, due to the potential occurrence of double-diffusive phenomena. Traditionally, temperature and composition have been combined into one single variable called codensity, under the assumption that turbulence mixes all physical properties at an "eddy-diffusion" rate. This description does not allow for a proper treatment of the thermochemical coupling and is certainly incorrect within stratified layers in which double-diffusive phenomena can be expected. For a more general and rigorous approach, two distinct transport equations should therefore be solved for temperature and composition. However, the weak compositional diffusivity is technically difficult to handle in current geodynamo codes and requires the use of a semi-Lagrangian description to minimize numerical diffusion. We implemented a "particle-in-cell" method into a geodynamo code to properly describe the compositional field. The code is suitable for highly parallel computing architectures and was successfully tested on two benchmarks. Following the work by Aubert et al. (2008), we use this new tool to perform dynamo simulations including thermochemical coupling at the inner core boundary, as well as an exploration of the infinite Lewis number limit, to study the effect of a heterogeneous core-mantle boundary heat flow on the inner core growth.
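    The particle-in-cell idea mentioned above can be illustrated with a 1D toy: composition is carried by Lagrangian particles (so advection adds no numerical diffusion) and is deposited onto the grid only when a field value is needed. The flow, grid, and particle counts are arbitrary stand-ins for the 3D spherical geodynamo setting.

        # 1D particle-in-cell sketch for a diffusionless compositional field.
        import numpy as np

        L, n_cells, n_part = 1.0, 50, 5000
        rng = np.random.default_rng(2)
        x = rng.uniform(0, L, n_part)            # particle positions
        c = np.where(x < 0.5 * L, 1.0, 0.0)      # composition carried by particles

        def velocity(x, t):
            return 0.3 * np.ones_like(x)         # prescribed toy flow

        dt, t = 0.01, 0.0
        for _ in range(50):
            x = (x + velocity(x, t) * dt) % L    # advect particles (periodic box)
            t += dt

        # Deposit particle composition onto the grid as cell averages.
        idx = np.minimum((x / L * n_cells).astype(int), n_cells - 1)
        mass = np.bincount(idx, weights=c, minlength=n_cells)
        count = np.maximum(np.bincount(idx, minlength=n_cells), 1)
        print((mass / count)[:10])               # sharp front, no numerical diffusion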

  13. SHOULD ONE USE THE RAY-BY-RAY APPROXIMATION IN CORE-COLLAPSE SUPERNOVA SIMULATIONS?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skinner, M. Aaron; Burrows, Adam; Dolence, Joshua C., E-mail: burrows@astro.princeton.edu, E-mail: askinner@astro.princeton.edu, E-mail: jdolence@lanl.gov

    2016-11-01

    We perform the first self-consistent, time-dependent, multi-group calculations in two dimensions (2D) to address the consequences of using the ray-by-ray+ transport simplification in core-collapse supernova simulations. Such a dimensional reduction is employed by many researchers to facilitate their resource-intensive calculations. Our new code (Fornax) implements multi-D transport, and can, by zeroing out transverse flux terms, emulate the ray-by-ray+ scheme. Using the same microphysics, initial models, resolution, and code, we compare the results of simulating 12, 15, 20, and 25 M⊙ progenitor models using these two transport methods. Our findings call into question the wisdom of the pervasive use of the ray-by-ray+ approach. Employing it leads to maximum post-bounce/pre-explosion shock radii that are almost universally larger by tens of kilometers than those derived using the more accurate scheme, typically leaving the post-bounce matter less bound and artificially more “explodable.” In fact, for our 25 M⊙ progenitor, the ray-by-ray+ model explodes, while the corresponding multi-D transport model does not. Therefore, in two dimensions, the combination of ray-by-ray+ with the axial sloshing hydrodynamics that is a feature of 2D supernova dynamics can result in quantitatively, and perhaps qualitatively, incorrect results.

  14. Lung morphometry using hyperpolarized 129Xe multi-b diffusion MRI with compressed sensing in healthy subjects and patients with COPD.

    PubMed

    Zhang, Huiting; Xie, Junshuai; Xiao, Sa; Zhao, Xiuchao; Zhang, Ming; Shi, Lei; Wang, Ke; Wu, Guangyao; Sun, Xianping; Ye, Chaohui; Zhou, Xin

    2018-05-04

    To demonstrate the feasibility of compressed sensing (CS) to accelerate the acquisition of hyperpolarized (HP) 129Xe multi-b diffusion MRI for quantitative assessments of lung microstructural morphometry. Six healthy subjects and six chronic obstructive pulmonary disease (COPD) subjects underwent HP 129Xe multi-b diffusion MRI (b = 0, 10, 20, 30, and 40 s/cm^2). First, a fully sampled (FS) acquisition of HP 129Xe multi-b diffusion MRI was conducted in one healthy subject. The acquired FS dataset was retrospectively undersampled in the phase encoding direction, and an optimal twofold undersampled pattern was then obtained by minimizing the mean absolute error (MAE) between retrospective CS (rCS) and FS MR images. Next, the FS and CS acquisitions during separate breath holds were performed on five healthy subjects (including the above one). Additionally, FS and CS synchronous acquisitions during a single breath hold were performed on the sixth healthy subject and one COPD subject. Only CS acquisitions were conducted in the remaining five COPD subjects. Finally, all the acquired FS, rCS and CS MR images were used to obtain morphometric parameters, including acinar duct radius (R), acinar lumen radius (r), alveolar sleeve depth (h), mean linear intercept (Lm), and surface-to-volume ratio (SVR). The Wilcoxon signed-rank test and the Bland-Altman plot were employed to assess the fidelity of the CS reconstruction. Moreover, the t-test was used to demonstrate the effectiveness of the multi-b diffusion MRI with CS in clinical applications. The retrospective results demonstrated that there was no statistically significant difference between rCS and FS measurements using the Wilcoxon signed-rank test (P > 0.05). Good agreement between measurements obtained with the CS and FS acquisitions during separate breath holds was demonstrated in Bland-Altman plots of slice differences. Specifically, the mean biases of the R, r, h, Lm, and SVR between the CS and FS acquisitions were 1.0%, 2.6%, -0.03%, 1.5%, and -5.5%, respectively. Good agreement between measurements with the CS and FS acquisitions was also observed during the single breath-hold experiments. Furthermore, there were significant differences between the morphometric parameters for the healthy and COPD subjects (P < 0.05). Our study has shown that HP 129Xe multi-b diffusion MRI with CS could be beneficial in lung microstructural assessments by acquiring less data while maintaining results consistent with the FS acquisitions.
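    The compressed-sensing step can be sketched generically: undersample k-space along phase-encode lines and reconstruct by iterative soft-thresholding (ISTA), here assuming the image itself is sparse. Actual lung-MRI CS reconstructions use tailored sparsifying transforms and optimized sampling patterns, so treat this only as the basic mechanism.

        # Toy CS reconstruction of a sparse image from half the k-space lines.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 64
        img = np.zeros((n, n)); img[20:30, 15:40] = 1.0; img[40:50, 30:35] = 0.5

        mask = np.zeros((n, n), dtype=bool)
        mask[rng.choice(n, n // 2, replace=False), :] = True  # keep half the lines
        y = np.fft.fft2(img) * mask                           # undersampled k-space

        def soft(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        x = np.zeros_like(img)
        for _ in range(100):                                  # ISTA iterations
            grad = np.fft.ifft2(mask * (np.fft.fft2(x) - y)).real
            x = soft(x - grad, 0.002)                         # gradient step + shrinkage
        print("relative recon error:", np.linalg.norm(x - img) / np.linalg.norm(img))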

  15. Influence of the Numerical Scheme on the Solution Quality of the SWE for Tsunami Numerical Codes: The Tohoku-Oki 2011 Example.

    NASA Astrophysics Data System (ADS)

    Reis, C.; Clain, S.; Figueiredo, J.; Baptista, M. A.; Miranda, J. M. A.

    2015-12-01

    Numerical tools have become very important for scenario evaluations of hazardous phenomena such as tsunamis. Nevertheless, the predictions depend strongly on the quality of the numerical tool, and the design of efficient numerical schemes still receives considerable attention in the effort to provide robust and accurate solutions. In this study we propose a comparative study of the efficiency of two finite volume numerical codes with second-order discretization, implemented with different methods to solve the non-conservative shallow water equations: the MUSCL (Monotonic Upstream-Centered Scheme for Conservation Laws) and MOOD (Multi-dimensional Optimal Order Detection) methods, which optimize the accuracy of the approximation as a function of the local smoothness of the solution. MUSCL is based on a priori criteria, where the limiting procedure is performed before updating the solution to the next time step, which can lead to unnecessary accuracy reduction. On the contrary, the new MOOD technique uses a posteriori detectors to prevent the solution from oscillating in the vicinity of discontinuities. Indeed, a candidate solution is computed, and corrections are performed only for the cells where non-physical oscillations are detected. Using a simple one-dimensional analytical benchmark, 'Single wave on a sloping beach', we show that the classical 1D shallow-water system can be accurately solved with the finite volume method equipped with the MOOD technique, providing a better approximation with sharper shocks and less numerical diffusion. For the code validation, we also use the Tohoku-Oki 2011 tsunami and reproduce two DART records, demonstrating that the quality of the solution may deeply affect the scenario one can assess. This work is funded by the Portugal-France research agreement, through the research project GEONUM FCT-ANR/MAT-NAN/0122/2012.
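    The a priori limiting that distinguishes MUSCL from MOOD is easy to see in one dimension. The sketch below applies a minmod-limited second-order upwind update to linear advection: slopes are limited before the update, which keeps the solution oscillation-free but can clip extrema; MOOD would instead accept a candidate update and recompute only the cells flagged a posteriori. The setup is a generic toy, not the paper's shallow-water solver.

        # Minmod-limited (MUSCL-type) scheme for u_t + u_x = 0, periodic domain.
        import numpy as np

        def minmod(a, b):
            return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

        n, c = 200, 0.5                      # cells, CFL number (advection speed 1)
        x = (np.arange(n) + 0.5) / n
        u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # square pulse

        for _ in range(100):
            du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))  # limited slope
            flux = u + 0.5 * (1 - c) * du    # upwinded face value (speed > 0)
            u = u - c * (flux - np.roll(flux, 1))
        # TVD property: no new extrema are created by the limited update.
        print("min/max after transport:", u.min().round(3), u.max().round(3))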

  16. High-throughput ab-initio dilute solute diffusion database

    PubMed Central

    Wu, Henry; Mayeshiba, Tam; Morgan, Dane

    2016-01-01

    We demonstrate automated generation of diffusion databases from high-throughput density functional theory (DFT) calculations. A total of more than 230 dilute solute diffusion systems in Mg, Al, Cu, Ni, Pd, and Pt host lattices have been determined using multi-frequency diffusion models. We apply a correction method for solute diffusion in alloys using experimental and simulated values of host self-diffusivity. We find good agreement with experimental solute diffusion data, obtaining a weighted activation barrier RMS error of 0.176 eV when excluding magnetic solutes in non-magnetic alloys. The compiled database is the largest collection of consistently calculated ab-initio solute diffusion data in the world. PMID:27434308
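    Entries in such a database are typically consumed through the Arrhenius form D(T) = D0 exp(-Q/(kB T)), with the activation barrier Q being the quantity whose RMS error is quoted above. The D0 and Q values below are placeholders, not numbers from the database.

        # Arrhenius evaluation of a dilute solute diffusivity.
        import numpy as np

        kB = 8.617e-5                      # Boltzmann constant, eV/K
        D0, Q = 1.0e-5, 1.3                # m^2/s and eV (hypothetical solute/host)

        def diffusivity(T):
            return D0 * np.exp(-Q / (kB * T))

        for T in (600.0, 900.0, 1200.0):
            print(f"T = {T:6.0f} K  ->  D = {diffusivity(T):.3e} m^2/s")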

  17. Wavelength Coded Image Transmission and Holographic Optical Elements.

    DTIC Science & Technology

    1984-08-20

    system has been designed and built for transmitting images of diffusely reflecting objects through optical fibers and displaying those images at a... passive components at the end of a fiber-optic imaging system as described in this paper... in designing a system for viewing diffusely reflecting objects, one must consider that a...

  18. Evaluation of the efficacy of an appeasing pheromone diffuser product vs placebo for management of feline aggression in multi-cat households: a pilot study.

    PubMed

    DePorter, Theresa L; Bledsoe, David L; Beck, Alexandra; Ollivier, Elodie

    2018-05-01

    Objectives Aggression and social tension among housemate cats is common and puts cats at risk of injury or relinquishment. The aim of this study was to evaluate the effectiveness of a new pheromone product in reducing aggression between housemate cats. Methods A new pheromone product (Feliway Friends) containing a proprietary cat-appeasing pheromone was evaluated for efficacy in reducing aggression between housemate cats via a randomized, double-blind, placebo-controlled pilot trial of 45 multi-cat households (pheromone [n = 20], placebo [n = 25]) reporting aggression for at least 2 weeks. Each household had 2-5 cats. Participants attended an educational training meeting on day (D) -7, at which the veterinary behaviorist described the behaviors to be monitored for 7 weeks using the Oakland Feline Social Interaction Scale (OFSIS), which assessed the frequency and intensity of 12 representative aggressive interactions. Participants were also provided with instructions for handling aggressive events, including classical conditioning, redirection by positive reinforcement and not punishing or startling the cat for aggressive displays. Punishment techniques were strongly discouraged. Plug-in diffusers with the pheromone product or placebo were used from D0-D28. Participants completed a daily diary of aggressive events and weekly OFSIS assessments through to D42. Results Evolution of the OFSIS-Aggression score according to treatment group in the full analysis set population revealed a significant effect of time and treatment group. The OFSIS-Aggression score decreased over time from D0-D28 in both groups (time factor P = 0.0001), with a difference in favor of the verum (P = 0.06); similar results were found over the D0-D42 period (time factor P = 0.0001 [D0] and P = 0.04 [D42]). Conclusions and relevance The OFSIS provided a quantifiable measure of the frequency and intensity of 12 inter-cat interactions reflecting conflict between cats. The cat-appeasing pheromone is a promising treatment for the management of aggression between housemate cats in multi-cat households.

  19. Progress in development of HEDP capabilities in FLASH's Unsplit Staggered Mesh MHD solver

    NASA Astrophysics Data System (ADS)

    Lee, D.; Xia, G.; Daley, C.; Dubey, A.; Gopal, S.; Graziani, C.; Lamb, D.; Weide, K.

    2011-11-01

    FLASH is a publicly available astrophysical community code designed to solve highly compressible multi-physics reactive flows. We are adding capabilities to FLASH that will make it an open science code for the academic HEDP community. Among many important numerical requirements, we consider the following features to be important components necessary to meet our goals for FLASH as an HEDP open toolset. First, we are developing computationally efficient time-stepping integration methods that overcome the stiffness that arises in the equations describing a physical problem when there are disparate time scales. To this end, we are adding two different time-stepping schemes to FLASH that relax the time step limit when diffusive effects are present: an explicit super-time-stepping algorithm (Alexiades et al. in Com. Num. Mech. Eng. 12:31-42, 1996) and a Jacobian-Free Newton-Krylov implicit formulation. These two methods will be integrated into a robust, efficient, and high-order accurate Unsplit Staggered Mesh MHD (USM) solver (Lee and Deane in J. Comput. Phys. 227, 2009). Second, we have implemented an anisotropic Spitzer-Braginskii conductivity model to treat thermal heat conduction along magnetic field lines. Finally, we are implementing the Biermann Battery term to account for spontaneous generation of magnetic fields in the presence of non-parallel temperature and density gradients.
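    The super-time-stepping scheme cited above accelerates explicit diffusion by taking N Chebyshev-derived substeps whose sum (the "superstep") exceeds N times the explicit stability limit. The sketch below follows the substep formula of Alexiades et al. (1996) with damping parameter nu; treat the exact coefficients as an assumption to be checked against the reference.

        # Super-time-stepping substep sequence (Alexiades et al. 1996 form).
        import numpy as np

        def sts_substeps(dt_expl, N, nu):
            # tau_j = dt_expl / [ (nu - 1) cos((2j-1) pi / (2N)) + 1 + nu ]
            j = np.arange(1, N + 1)
            return dt_expl / ((nu - 1.0) * np.cos((2 * j - 1) * np.pi / (2 * N)) + 1.0 + nu)

        dt_expl, N, nu = 1.0e-3, 10, 0.05
        tau = sts_substeps(dt_expl, N, nu)
        # The ratio below exceeds 1: N substeps cover more time than N
        # ordinary explicit steps, while remaining stable for diffusion.
        print("superstep / (N * dt_expl) =", tau.sum() / (N * dt_expl))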

  20. Effects of interstage diffuser flow distortion on the performance of a 15.41-centimeter tip diameter axial power turbine

    NASA Technical Reports Server (NTRS)

    Mclallin, K. L.; Kofskey, M. G.; Civinskas, K. C.

    1983-01-01

    The performance of a variable-area stator, axial flow power turbine was determined in a cold-air component research rig for two inlet duct configurations. The two ducts were an interstage diffuser duct and an accelerated-flow inlet duct, which produced stator inlet boundary layer flow blockages of 11 percent and 3 percent, respectively. Turbine blade total efficiency at the design point was measured to be 5.3 percent greater with the accelerated-flow inlet duct installed, due to the reduction in inlet blockage. Blade component measurements show that of this performance improvement, 35 percent occurred in the stator and 65 percent occurred in the rotor. Analyses of the inlet duct internal flow using an Axisymmetric Diffuser Duct Code (ADD Code) were in substantial agreement with the test data.

  1. BigMouth: a multi-institutional dental data repository.

    PubMed

    Walji, Muhammad F; Kalenderian, Elsbeth; Stark, Paul C; White, Joel M; Kookal, Krishna K; Phan, Dat; Tran, Duong; Bernstam, Elmer V; Ramoni, Rachel

    2014-01-01

    Few oral health databases are available for research and the advancement of evidence-based dentistry. In this work we developed a centralized data repository derived from electronic health records (EHRs) at four dental schools participating in the Consortium of Oral Health Research and Informatics. A multi-stakeholder committee developed a data governance framework that encouraged data sharing while allowing control of contributed data. We adopted the i2b2 data warehousing platform and mapped data from each institution to a common reference terminology. We realized that dental EHRs urgently need to adopt common terminologies. While all used the same treatment code set, only three of the four sites used a common diagnostic terminology, and there were wide discrepancies in how medical and dental histories were documented. BigMouth was successfully launched in August 2012 with data on 1.1 million patients, and made available to users at the contributing institutions.

  2. 10 CFR 434.99 - Explanation of numbering system for codes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 (2011-01-01). DEPARTMENT OF ENERGY, ENERGY CONSERVATION, ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS, § 434.99 Explanation of numbering system for codes. (a) For...

  3. 10 CFR 434.99 - Explanation of numbering system for codes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 (2010-01-01). DEPARTMENT OF ENERGY, ENERGY CONSERVATION, ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS, § 434.99 Explanation of numbering system for codes. (a) For...

  4. 1D Resonance line Broadened Quasilinear (RBQ1D) code for fast ion Alfvenic relaxations and its validations

    NASA Astrophysics Data System (ADS)

    Gorelenkov, Nikolai; Duarte, Vinicius; Podesta, Mario

    2017-10-01

    The performance of a burning plasma can be limited by the requirements to confine the super-Alfvénic fusion products, which are capable of resonating with the Alfvénic eigenmodes (AEs). The effect of AEs on fast ions is evaluated using the quasi-linear approach [Berk et al., Ph.Plasmas'96], generalized for this problem recently [Duarte et al., Ph.D.'17]. The generalization involves resonance line broadened interaction regions, with the diffusion coefficient prescribed to find the evolution of the velocity distribution function. The baseline eigenmode structures are found perturbatively using the NOVA-K code [Gorelenkov et al., Ph.Plasmas'99]. An RBQ1D code allowing diffusion in the radial direction is presented here. The wave-particle interaction can be reduced to one-dimensional dynamics, where for the Alfvénic modes the particle kinetic energy is typically nearly constant. Hence, to a good approximation, the quasi-linear (QL) diffusion equation only contains derivatives in the angular momentum. The diffusion equation is then one-dimensional and is efficiently solved simultaneously for all particles, together with the equation for the evolution of the wave angular momentum. RBQ1D is validated against recent DIII-D results [Collins et al., PRL'16]. Supported by the US Department of Energy under DE-AC02-09CH11466.
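    Since the QL equation reduces to one-dimensional diffusion in angular momentum, each time step amounts to a cheap tridiagonal solve. The sketch below advances df/dt = d/dP[D(P) df/dP] with backward Euler and zero-flux boundaries; the diffusion coefficient is an arbitrary stand-in for the resonance-broadened one, and none of this reflects RBQ1D's actual implementation.

        # Implicit 1D diffusion in angular momentum P with conservative,
        # face-centred diffusivities and zero-flux boundaries.
        import numpy as np

        n = 100
        P = np.linspace(0.0, 1.0, n); h = P[1] - P[0]
        f = np.exp(-((P - 0.5) / 0.1) ** 2)            # initial distribution
        D = 0.01 * (1.0 + np.sin(np.pi * P) ** 2)      # illustrative D(P)
        dt = 0.05

        Df = 0.5 * (D[:-1] + D[1:])                    # D at faces i+1/2
        A = np.zeros((n, n))
        for i in range(1, n - 1):
            A[i, i - 1] = -dt * Df[i - 1] / h**2
            A[i, i + 1] = -dt * Df[i] / h**2
            A[i, i] = 1.0 + dt * (Df[i - 1] + Df[i]) / h**2
        A[0, 0], A[0, 1] = 1.0 + dt * Df[0] / h**2, -dt * Df[0] / h**2
        A[-1, -1], A[-1, -2] = 1.0 + dt * Df[-1] / h**2, -dt * Df[-1] / h**2

        f0 = f.copy()
        for _ in range(20):
            f = np.linalg.solve(A, f)                  # backward-Euler step
        # Zero-flux boundaries make the scheme exactly conservative:
        print("particle number conserved:", np.isclose(f.sum(), f0.sum()))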

  5. Modeling and Simulation of Lab-on-a-Chip Systems

    DTIC Science & Technology

    2005-08-12

    complex chip geometries (including multiple turns). Variations of sample concentration profiles in laminar diffusion-based micromixers are also derived... modeling of laminar diffusion-based complex electrokinetic passive micromixers... multi-stream (inter-digital) micromixers

  6. Numerical study of supersonic combustors by multi-block grids with mismatched interfaces

    NASA Technical Reports Server (NTRS)

    Moon, Young J.

    1990-01-01

    A three-dimensional, finite-rate chemistry Navier-Stokes code was extended to a multi-block code with mismatched interfaces for practical calculations of supersonic combustors. To ensure global conservation, a conservative algorithm was used for the treatment of the mismatched interfaces. The extended code was checked against one test case, i.e., a generic supersonic combustor with transverse fuel injection, examining solution accuracy, convergence, and local mass flux error. After testing, the code was used to simulate the chemically reacting flow fields in a scramjet combustor with parallel fuel injectors (unswept and swept ramps). Computational results were compared with experimental shadowgraph and pressure measurements. Fuel-air mixing characteristics of the unswept and swept ramps were compared and investigated.

  7. Mass Conservation in Modeling Moisture Diffusion in Multi-Layer Carbon Composite Structures

    NASA Technical Reports Server (NTRS)

    Nurge, Mark A.; Youngquist, Robert C.; Starr, Stanley O.

    2009-01-01

    Moisture diffusion in multi-layer carbon composite structures is difficult to model using finite difference methods due to the discontinuity in concentrations between adjacent layers of differing materials. Applying a mass conserving approach at these boundaries proved to be effective at accurately predicting moisture uptake for a sample exposed to a fixed temperature and relative humidity. Details of the model developed are presented and compared with actual moisture uptake data gathered over 130 days from a graphite epoxy composite sandwich coupon with a Rohacell foam core.
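    The conservative-flux idea can be sketched in 1D: build each face flux once (here with a harmonic-mean diffusivity at the material interface) and apply it to both neighboring cells, so whatever mass leaves layer 1 enters layer 2 exactly. The paper's scheme additionally handles the concentration discontinuity between dissimilar layers, which this continuous-concentration toy omits; all material values are invented.

        # Explicit, mass-conserving FD for two-layer 1D moisture diffusion.
        import numpy as np

        n1, n2, h = 50, 50, 1e-4            # cells per layer, cell size (m)
        D1, D2 = 5e-12, 1e-12               # diffusivities (m^2/s), hypothetical
        D = np.concatenate([np.full(n1, D1), np.full(n2, D2)])
        c = np.zeros(n1 + n2); c[0] = 1.0   # fixed surface concentration

        Df = 2.0 * D[:-1] * D[1:] / (D[:-1] + D[1:])   # harmonic mean at faces
        dt = 0.4 * h**2 / D.max()                      # explicit stability limit

        for _ in range(20000):
            flux = -Df * (c[1:] - c[:-1]) / h          # one flux per interior face
            c[1:-1] += dt / h * (flux[:-1] - flux[1:]) # each face feeds both cells
            c[-1]  += dt / h * flux[-1]                # zero-flux right boundary
            c[0] = 1.0                                 # surface held at saturation
        print("total moisture uptake:", c[1:].sum() * h)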

  8. The layer boundary effect on multi-layer mesoporous TiO2 film based dye sensitized solar cells

    DOE PAGES

    Xu, Feng; Zhu, Kai; Zhao, Yixin

    2016-10-10

    Multi-layer mesoporous TiO2 prepared by screen printing is widely used for the fabrication of high-efficiency dye-sensitized solar cells (DSSCs). Here, we compare three types of ~10 μm thick mesoporous TiO2 films, which were screen printed as 1-, 2- and 4-layers using the same TiO2 nanocrystal paste. The layer boundaries of the multi-layer mesoporous TiO2 films were observed in cross-section SEM. The existence of a layer boundary could reduce the photoelectron diffusion length as the layer number increases. However, the photoelectron diffusion lengths of the Z907 dye sensitized solar cells based on these different layered mesoporous TiO2 films are all longer than the film thickness. Consequently, the photovoltaic performance shows little dependence on the layer number of the multi-layer TiO2 based DSSCs.

  9. The layer boundary effect on multi-layer mesoporous TiO2 film based dye sensitized solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Feng; Zhu, Kai; Zhao, Yixin

    Multi-layer mesoporous TiO2 prepared by screen printing is widely used for the fabrication of high-efficiency dye-sensitized solar cells (DSSCs). Here, we compare three types of ~10 μm thick mesoporous TiO2 films, which were screen printed as 1-, 2- and 4-layers using the same TiO2 nanocrystal paste. The layer boundaries of the multi-layer mesoporous TiO2 films were observed in cross-section SEM. The existence of a layer boundary could reduce the photoelectron diffusion length as the layer number increases. However, the photoelectron diffusion lengths of the Z907 dye sensitized solar cells based on these different layered mesoporous TiO2 films are all longer than the film thickness. Consequently, the photovoltaic performance shows little dependence on the layer number of the multi-layer TiO2 based DSSCs.

  10. High-Resolution Multi-Shot Spiral Diffusion Tensor Imaging with Inherent Correction of Motion-Induced Phase Errors

    PubMed Central

    Truong, Trong-Kha; Guidon, Arnaud

    2014-01-01

    Purpose To develop and compare three novel reconstruction methods designed to inherently correct for motion-induced phase errors in multi-shot spiral diffusion tensor imaging (DTI) without requiring a variable-density spiral trajectory or a navigator echo. Theory and Methods The first method simply averages magnitude images reconstructed with sensitivity encoding (SENSE) from each shot, whereas the second and third methods rely on SENSE to estimate the motion-induced phase error for each shot, and subsequently use either a direct phase subtraction or an iterative conjugate gradient (CG) algorithm, respectively, to correct for the resulting artifacts. Numerical simulations and in vivo experiments on healthy volunteers were performed to assess the performance of these methods. Results The first two methods suffer from a low signal-to-noise ratio (SNR) or from residual artifacts in the reconstructed diffusion-weighted images and fractional anisotropy maps. In contrast, the third method provides high-quality, high-resolution DTI results, revealing fine anatomical details such as a radial diffusion anisotropy in cortical gray matter. Conclusion The proposed SENSE+CG method can inherently and effectively correct for phase errors, signal loss, and aliasing artifacts caused by both rigid and nonrigid motion in multi-shot spiral DTI, without increasing the scan time or reducing the SNR. PMID:23450457

  11. Multi-compartment microscopic diffusion imaging

    PubMed Central

    Kaden, Enrico; Kelm, Nathaniel D.; Carson, Robert P.; Does, Mark D.; Alexander, Daniel C.

    2017-01-01

    This paper introduces a multi-compartment model for microscopic diffusion anisotropy imaging. The aim is to estimate microscopic features specific to the intra- and extra-neurite compartments in nervous tissue unconfounded by the effects of fibre crossings and orientation dispersion, which are ubiquitous in the brain. The proposed MRI method is based on the Spherical Mean Technique (SMT), which factors out the neurite orientation distribution and thus provides direct estimates of the microscopic tissue structure. This technique can be immediately used in the clinic for the assessment of various neurological conditions, as it requires only a widely available off-the-shelf sequence with two b-shells and high-angular gradient resolution achievable within clinically feasible scan times. To demonstrate the developed method, we use high-quality diffusion data acquired with a bespoke scanner system from the Human Connectome Project. This study establishes the normative values of the new biomarkers for a large cohort of healthy young adults, which may then support clinical diagnostics in patients. Moreover, we show that the microscopic diffusion indices offer direct sensitivity to pathological tissue alterations, exemplified in a preclinical animal model of Tuberous Sclerosis Complex (TSC), a genetic multi-organ disorder which impacts brain microstructure and hence may lead to neurological manifestations such as autism, epilepsy and developmental delay. PMID:27282476
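    The first step of the Spherical Mean Technique is simple to demonstrate: average the signal over all gradient directions of a shell, and the result is (up to sampling error) independent of the fibre orientation distribution. The sketch below uses synthetic single-tensor data with assumed diffusivities.

        # Spherical mean of a diffusion signal is insensitive to fibre rotation.
        import numpy as np

        rng = np.random.default_rng(4)
        g = rng.normal(size=(64, 3)); g /= np.linalg.norm(g, axis=1, keepdims=True)
        b = 3.0                                      # ms/um^2, one shell
        d_par, d_perp = 1.7, 0.3                     # um^2/ms, illustrative

        def shell_signal(fibre):
            c2 = (g @ fibre) ** 2
            return np.exp(-b * (d_perp + (d_par - d_perp) * c2))

        print("spherical mean:", shell_signal(np.array([0.0, 0.0, 1.0])).mean())
        # Rotating the fibre leaves the spherical mean (nearly) unchanged:
        print("rotated mean: ", shell_signal(np.array([1.0, 0.0, 0.0])).mean())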

  12. An assessment of optical and biogeochemical multi-decadal trends in the Sargasso Sea

    NASA Astrophysics Data System (ADS)

    Allen, J. G.; Siegel, D.; Nelson, N. B.

    2016-02-01

    Observations of optical and biogeochemical data, made as part of the Bermuda Bio-Optics Project (BBOP) at the Bermuda Atlantic Time-series Study (BATS) site in the Sargasso Sea, allow for the examination of temporal trends in vertical light attenuation and their potential controls. Trends in both the magnitude and spectral slope of the diffuse attenuation coefficient should reflect changes in chlorophyll and chromophoric dissolved organic matter (CDOM) concentrations in the Sargasso Sea. The length and methodological consistency of this time series provides an excellent opportunity to extend analyses of seasonal cycles of apparent optical properties to interannual and multi-year time scales. Here, we characterize changes in the size and shape of diffuse attenuation coefficient spectra and compare them to temperature, chlorophyll a concentration, and to discrete measurements of phytoplankton and CDOM absorption. The time series analyses reveal up to a 1.2% annual increase of the magnitude of the diffuse attenuation coefficient over the upper 70 m of the water column while showing no significant change in the spectral slope of diffuse attenuation over the course of the study. These observations indicate that increases in phytoplankton pigment concentration rather than changes in CDOM are the primary driver for the attenuation trends on multi-year timescales for this region.

  13. Pediatric Pulmonary Hemorrhage vs. Extrapulmonary Bleeding in the Differential Diagnosis of Hemoptysis.

    PubMed

    Vaiman, Michael; Klin, Baruch; Rosenfeld, Noa; Abu-Kishk, Ibrahim

    2017-01-01

    Hemoptysis is an important symptom which causes major concern and warrants immediate diagnostic attention. The authors compared a group of patients with pediatric pulmonary hemorrhage with pediatric patients diagnosed with extrapulmonary bleeding, focusing on differences in etiology, outcome and the differential diagnosis of hemoptysis. We performed a retrospective analysis of the medical charts of 134 pediatric patients who were admitted to the Emergency Department because of pulmonary or extrapulmonary hemorrhage and were diagnosed with suspected hemoptysis or developed hemoptysis (ICD-10-CM code R04.2). The cases with pulmonary hemorrhage (Group 1) were compared with cases of extrapulmonary bleeding (Group 2) using the Fisher exact test or Pearson's χ2 test for categorical variables. The t-test was used to assess differences between continuous variables of the patients in the two groups. Bloody cough was the presenting symptom in 73.9% of cases. 30 patients had pulmonary hemorrhage (Group 1), while 104 patients had extrapulmonary bleeding (Group 2). The underlying causes of bleeding in Group 2 included epistaxis, inflammatory diseases of the nasopharynx and larynx, foreign bodies, gingivitis, and hypertrophy of the adenoids. The mortality rate was 10% in Group 1, whereas Group 2 did not have any mortality outcomes during the observation period. Etiological factors differed significantly between hemoptysis and extrapulmonary bleeding in children. Our research suggests that pulmonary and extrapulmonary bleeding are two conditions that differ significantly and cannot be unified under one diagnostic code. It is important to differentiate between focal and diffuse cases, and between pulmonary and extrapulmonary hemorrhage, due to the diversity of clinical courses and outcomes.

  14. Design and Simulation of Material-Integrated Distributed Sensor Processing with a Code-Based Agent Platform and Mobile Multi-Agent Systems

    PubMed Central

    Bosse, Stefan

    2015-01-01

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strong heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550

  15. Design and simulation of material-integrated distributed sensor processing with a code-based agent platform and mobile multi-agent systems.

    PubMed

    Bosse, Stefan

    2015-02-16

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strong heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.

  16. Biosorption of metal ions using a low cost modified adsorbent (Mauritia flexuosa): experimental design and mathematical modeling.

    PubMed

    Melo, Diego de Quadros; Vidal, Carla Bastos; Medeiros, Thiago Coutinho; Raulino, Giselle Santiago Cabral; Dervanoski, Adriana; Pinheiro, Márcio do Carmo; Nascimento, Ronaldo Ferreira do

    2016-09-01

    Buriti fibers were subjected to an alkaline pre-treatment and tested as an adsorbent to investigate the adsorption of copper, cadmium, lead and nickel in mono- and multi-element aqueous solutions. The results showed an increase in adsorption capacity compared to the unmodified Buriti fiber. The effects of pH, adsorbent mass, agitation rate and initial metal ion concentration on the efficiency of the adsorption process were studied using a fractional 2^(4-1) factorial design, and the results showed that all four parameters influenced metal adsorption differently. Fourier transform infrared spectrometry and X-ray fluorescence analysis were used to identify the groups that participated in the adsorption process and to suggest its mechanisms; they indicated that the probable mechanism involved in the adsorption process is mainly ion exchange. Kinetic and thermodynamic equilibrium parameters were determined. The adsorption kinetics were fitted to the homogeneous diffusion model. Adsorption equilibrium was reached in 30 min for Cu(2+) and Pb(2+), 20 min for Ni(2+) and instantaneously for Cd(2+). A significant difference was found in the competitiveness for the adsorption sites. A mathematical model was used to simulate the breakthrough curves in multi-element column adsorption, considering the influences of external mass transfer and intraparticle diffusion resistance.

  17. A comparison of acceleration methods for solving the neutron transport k-eigenvalue problem

    NASA Astrophysics Data System (ADS)

    Willert, Jeffrey; Park, H.; Knoll, D. A.

    2014-10-01

    Over the past several years a number of papers have been written describing modern techniques for numerically computing the dominant eigenvalue of the neutron transport criticality problem. These methods fall into two distinct categories. Methods in the first category rewrite the multi-group k-eigenvalue problem as a nonlinear system of equations and solve the resulting system using either a Jacobian-Free Newton-Krylov (JFNK) method or Nonlinear Krylov Acceleration (NKA), a variant of Anderson Acceleration. These methods are generally successful in significantly reducing the number of transport sweeps required to compute the dominant eigenvalue. Methods in the second category utilize Moment-Based Acceleration (or High-Order/Low-Order (HOLO) Acceleration). These methods solve a sequence of modified diffusion eigenvalue problems whose solutions converge to the solution of the original transport eigenvalue problem. This second class of methods is, in our experience, always superior to the first, as most of the computational work is eliminated by the acceleration from the LO diffusion system. In this paper, we review each of these methods. Our computational results support our claim that the choice of which nonlinear solver to use, JFNK or NKA, should be secondary. The primary computational savings result from the implementation of a HOLO algorithm. We display computational results for a series of challenging multi-dimensional test problems.
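    For context, the baseline that both families of methods accelerate is plain power iteration on the k-eigenvalue problem. The sketch below uses small dense matrices in place of transport sweeps; it illustrates the iteration being accelerated, not JFNK, NKA, or HOLO themselves, and the operators are random stand-ins.

        # Power iteration for M*phi = (1/k) F*phi with toy dense operators.
        import numpy as np

        rng = np.random.default_rng(5)
        n = 20
        M = np.diag(2.0 + rng.random(n))          # loss operator (stand-in)
        F = 0.5 * rng.random((n, n))              # fission production (stand-in)

        phi, k = np.ones(n), 1.0
        for it in range(200):
            src = F @ phi / k                     # fission source with current k
            phi_new = np.linalg.solve(M, src)     # the "transport sweep" analogue
            k_new = k * (F @ phi_new).sum() / (F @ phi).sum()
            converged = abs(k_new - k) < 1e-10
            phi, k = phi_new / np.linalg.norm(phi_new), k_new
            if converged:
                break
        print("k =", round(k, 6), "after", it + 1, "iterations")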

  18. Rocketdyne/Westinghouse nuclear thermal rocket engine modeling

    NASA Technical Reports Server (NTRS)

    Glass, James F.

    1993-01-01

    The topics are presented in viewgraph form and include the following: systems approach needed for nuclear thermal rocket (NTR) design optimization; generic NTR engine power balance codes; rocketdyne nuclear thermal system code; software capabilities; steady state model; NTR engine optimizer code-logic; reactor power calculation logic; sample multi-component configuration; NTR design code output; generic NTR code at Rocketdyne; Rocketdyne NTR model; and nuclear thermal rocket modeling directions.

  19. Optimized and secure technique for multiplexing QR code images of single characters: application to noiseless messages retrieval

    NASA Astrophysics Data System (ADS)

    Trejos, Sorayda; Fredy Barrera, John; Torroba, Roberto

    2015-08-01

    We present for the first time an optical encrypting-decrypting protocol for recovering messages without speckle noise. This is a digital holographic technique using a 2f scheme to process QR code entries. In the procedure, the letters used to compose eventual messages are individually converted into QR codes, and each QR code is then divided into portions. Through a holographic technique, we store each processed portion. After filtering and repositioning, we add all processed data to create a single pack, thus simplifying the handling and recovery of multiple QR code images; this represents the first multiplexing procedure applied to processed QR codes. All QR codes are recovered in a single step and in the same plane, showing neither cross-talk nor noise problems as in other methods. Experiments have been conducted using an interferometric configuration, and comparisons between unprocessed and recovered QR codes have been performed, showing differences between them due to the processing involved. Recovered QR codes can be successfully scanned thanks to their noise tolerance. Finally, the appropriate sequence in the scanning of the recovered QR codes yields a noiseless retrieved message. Additionally, to ensure maximum security, the multiplexed pack can be multiplied by a digital diffuser so as to encrypt it. The encrypted pack is easily decoded by multiplying it by the complex conjugate of the diffuser. As this is a digital operation, no noise is added. Therefore, this technique is robust in three ways, involving multiplexing, encryption, and the need for a sequence to retrieve the outcome.

  20. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1989-01-01

    Two aspects of the work for NASA are examined: the construction of multi-dimensional phase modulation trellis codes and a performance analysis of these codes. A complete list of the best trellis codes for use with phase modulation is included. LxMPSK signal constellations are included for M = 4, 8, and 16 and L = 1, 2, 3, and 4. Spectral efficiencies range from 1 bit/channel symbol (equivalent to rate 1/2 coded QPSK) to 3.75 bits/channel symbol (equivalent to 15/16 coded 16-PSK). The parity check polynomials, rotational invariance properties, free distance, path multiplicities, and coding gains are given for all codes. These codes are considered to be the best candidates for implementation of a high speed decoder for satellite transmission. The design of a hardware decoder for one of these codes, viz., the 16-state 3x8-PSK code with free distance 4.0 and coding gain 3.75 dB, is discussed. An exhaustive simulation study of the multi-dimensional phase modulation trellis codes is included. This study was motivated by the fact that the coding gains quoted for almost all codes found in the literature are in fact only asymptotic coding gains, i.e., the coding gain at very high signal-to-noise ratios (SNRs) or very low BER. These asymptotic coding gains can be obtained directly from a knowledge of the free distance of the code. On the other hand, real coding gains at BERs in the range of 10^-2 to 10^-6, where these codes are most likely to operate in a concatenated system, must be determined by simulation.
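    The asymptotic-gain relation referred to above is, for an uncoded reference of the same spectral efficiency and energy, G = 10 log10(d_free,coded^2 / d_min,uncoded^2). The distances in the snippet below are placeholders rather than values taken from the report.

        # Asymptotic coding gain from squared free distance vs. an uncoded
        # reference of equal spectral efficiency (placeholder distances).
        import math

        def asymptotic_gain_db(dfree2_coded, dmin2_uncoded):
            return 10.0 * math.log10(dfree2_coded / dmin2_uncoded)

        # e.g. a hypothetical code with squared free distance 4.0 against an
        # uncoded reference with squared minimum distance 2.0:
        print(round(asymptotic_gain_db(4.0, 2.0), 2), "dB")   # -> 3.01 dB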

  1. Multi-Scale Low-Entropy Method for Optimizing the Processing Parameters during Automated Fiber Placement

    PubMed Central

    Han, Zhenyu; Sun, Shouzheng; Fu, Hongya; Fu, Yunzhong

    2017-01-01

    The automated fiber placement (AFP) process involves a variety of energy forms and multi-scale effects. This contribution proposes a novel multi-scale low-entropy method for optimizing the processing parameters of an AFP process, in which multi-scale effects, energy consumption, energy utilization efficiency and the mechanical properties of the micro-system can be taken into account jointly. Taking a carbon fiber/epoxy prepreg as an example, mechanical properties at the macro and meso scales are obtained by the Finite Element Method (FEM). A multi-scale energy transfer model is then established to pass the macroscopic results to the microscopic system as its boundary condition, allowing communication between the different scales. Furthermore, microscopic characteristics, mainly the micro-scale adsorption energy, diffusion coefficient and entropy-enthalpy values, are calculated under different processing parameters using the molecular dynamics method. A low-entropy region is then obtained from the interrelation among the entropy-enthalpy values, the microscopic mechanical properties (interface adsorbability and matrix fluidity) and the processing parameters, so as to guarantee better fluidity, stronger adsorption, lower energy consumption and higher energy quality simultaneously. Finally, nine groups of experiments are carried out to verify the validity of the simulation results. The results show that the low-entropy optimization method can reduce void content effectively and further improve the mechanical properties of laminates. PMID:28869520

  2. Multi-Scale Low-Entropy Method for Optimizing the Processing Parameters during Automated Fiber Placement.

    PubMed

    Han, Zhenyu; Sun, Shouzheng; Fu, Hongya; Fu, Yunzhong

    2017-09-03

    The automated fiber placement (AFP) process involves a variety of energy forms and multi-scale effects. This contribution proposes a novel multi-scale low-entropy method for optimizing the processing parameters of an AFP process, in which multi-scale effects, energy consumption, energy utilization efficiency and the mechanical properties of the micro-system can be taken into account jointly. Taking a carbon fiber/epoxy prepreg as an example, mechanical properties at the macro and meso scales are obtained by the Finite Element Method (FEM). A multi-scale energy transfer model is then established to pass the macroscopic results to the microscopic system as its boundary condition, allowing communication between the different scales. Furthermore, microscopic characteristics, mainly the micro-scale adsorption energy, diffusion coefficient and entropy-enthalpy values, are calculated under different processing parameters using the molecular dynamics method. A low-entropy region is then obtained from the interrelation among the entropy-enthalpy values, the microscopic mechanical properties (interface adsorbability and matrix fluidity) and the processing parameters, so as to guarantee better fluidity, stronger adsorption, lower energy consumption and higher energy quality simultaneously. Finally, nine groups of experiments are carried out to verify the validity of the simulation results. The results show that the low-entropy optimization method can reduce void content effectively and further improve the mechanical properties of laminates.

  3. Harmonizing DTI measurements across scanners to examine the development of white matter microstructure in 803 adolescents of the NCANDA study.

    PubMed

    Pohl, Kilian M; Sullivan, Edith V; Rohlfing, Torsten; Chu, Weiwei; Kwon, Dongjin; Nichols, B Nolan; Zhang, Yong; Brown, Sandra A; Tapert, Susan F; Cummins, Kevin; Thompson, Wesley K; Brumback, Ty; Colrain, Ian M; Baker, Fiona C; Prouty, Devin; De Bellis, Michael D; Voyvodic, James T; Clark, Duncan B; Schirda, Claudiu; Nagel, Bonnie J; Pfefferbaum, Adolf

    2016-04-15

    Neurodevelopment continues through adolescence, with notable maturation of white matter tracts comprising regional fiber systems progressing at different rates. To identify factors that could contribute to regional differences in white matter microstructure development, large samples of youth spanning adolescence to young adulthood are essential to parse these factors. Recruitment of adequate samples generally relies on multi-site consortia but comes with the challenge of merging data acquired on different platforms. In the current study, diffusion tensor imaging (DTI) data were acquired on GE and Siemens systems through the National Consortium on Alcohol and NeuroDevelopment in Adolescence (NCANDA), a multi-site study designed to track the trajectories of regional brain development during a time of high risk for initiating alcohol consumption. This cross-sectional analysis reports baseline Tract-Based Spatial Statistic (TBSS) of regional fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (L1), and radial diffusivity (LT) from the five consortium sites on 671 adolescents who met no/low alcohol or drug consumption criteria and 132 adolescents with a history of exceeding consumption criteria. Harmonization of DTI metrics across manufacturers entailed the use of human-phantom data, acquired multiple times on each of three non-NCANDA participants at each site's MR system, to determine a manufacturer-specific correction factor. Application of the correction factor derived from human phantom data measured on MR systems from different manufacturers reduced the standard deviation of the DTI metrics for FA by almost a half, enabling harmonization of data that would have otherwise carried systematic error. Permutation testing supported the hypothesis of higher FA and lower diffusivity measures in older adolescents and indicated that, overall, the FA, MD, and L1 of the boys were higher than those of the girls, suggesting continued microstructural development notable in the boys. The contribution of demographic and clinical differences to DTI metrics was assessed with General Additive Models (GAM) testing for age, sex, and ethnicity differences in regional skeleton mean values. The results supported the primary study hypothesis that FA skeleton mean values in the no/low-drinking group were highest at different ages. When differences in intracranial volume were covaried, FA skeleton mean reached a maximum at younger ages in girls than boys and varied in magnitude with ethnicity. Our results, however, did not support the hypothesis that youth who exceeded exposure criteria would have lower FA or higher diffusivity measures than the no/low-drinking group; detecting the effects of excessive alcohol consumption during adolescence on DTI metrics may require longitudinal study. Copyright © 2016 Elsevier Inc. All rights reserved.
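
    The abstract does not give the functional form of the correction factor, so the following is only a plausible multiplicative sketch of the harmonization step: the same human phantoms scanned on a site system and on a reference system yield a scale factor that is then applied to subject FA maps (all names are hypothetical):

        # Sketch of a multiplicative scanner-harmonization step (assumed model;
        # the paper derives a manufacturer-specific factor from human-phantom
        # scans but does not state its exact form here).
        import numpy as np

        def correction_factor(phantom_fa_site: np.ndarray,
                              phantom_fa_reference: np.ndarray) -> float:
            # Same phantoms scanned on both systems -> ratio of means estimates the factor.
            return float(np.mean(phantom_fa_reference) / np.mean(phantom_fa_site))

        def harmonize(fa_subject: np.ndarray, factor: float) -> np.ndarray:
            return fa_subject * factor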

  4. Harmonizing DTI Measurements across Scanners to Examine the Development of White Matter Microstructure in 803 Adolescents of the NCANDA Study

    PubMed Central

    Pohl, Kilian M.; Sullivan, Edith V.; Rohlfing, Torsten; Chu, Weiwei; Kwon, Dongjin; Nichols, B. Nolan; Zhang, Yong; Brown, Sandra A.; Tapert, Susan F.; Cummins, Kevin; Thompson, Wesley K.; Brumback, Ty; Colrain, Ian M.; Baker, Fiona C.; Prouty, Devin; De Bellis, Michael D.; Voyvodic, James T.; Clark, Duncan B.; Schirda, Claudiu; Nagel, Bonnie J.; Pfefferbaum, Adolf

    2016-01-01

    Neurodevelopment continues through adolescence, with notable maturation of white matter tracts comprising regional fiber systems progressing at different rates. To identify factors that could contribute to regional differences in white matter microstructure development, large samples of youth spanning adolescence to young adulthood are essential to parse these factors. Recruitment of adequate samples generally relies on multi-site consortia but comes with the challenge of merging data acquired on different platforms. In the current study, diffusion tensor imaging (DTI) data were acquired on GE and Siemens systems through the National Consortium on Alcohol and NeuroDevelopment in Adolescence (NCANDA), a multi-site study designed to track the trajectories of regional brain development during a time of high risk for initiating alcohol consumption. This cross-sectional analysis reports baseline Tract-Based Spatial Statistic (TBSS) of regional fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (L1), and radial diffusivity (LT) from the five consortium sites on 671 adolescents who met no/low alcohol or drug consumption criteria and 132 adolescents with a history of exceeding consumption criteria. Harmonization of DTI metrics across manufacturers entailed the use of human-phantom data, acquired multiple times on each of three non-NCANDA participants at each site’s MR system, to determine a manufacturer-specific correction factor. Application of the correction factor derived from human phantom data measured on MR systems from different manufacturers reduced the standard deviation of the DTI metrics for FA by almost a half, enabling harmonization of data that would have otherwise carried systematic error. Permutation testing supported the hypothesis of higher FA and lower diffusivity measures in older adolescents and indicated that, overall, the FA, MD, and L1 of the boys were higher than those of the girls, suggesting continued microstructural development notable in the boys. The contribution of demographic and clinical differences to DTI metrics was assessed with General Additive Models (GAM) testing for age, sex, and ethnicity differences in regional skeleton mean values. The results supported the primary study hypothesis that FA skeleton mean values in the no/low-drinking group were highest at different ages. When differences in intracranial volume were covaried, FA skeleton mean reached a maximum at younger ages in girls than boys and varied in magnitude with ethnicity. Our results, however, did not support the hypothesis that youth who exceeded exposure criteria would have lower FA or higher diffusivity measures than the no/low-drinking group; detecting the effects of excessive alcohol consumption during adolescence on DTI metrics may require longitudinal study. PMID:26872408

  5. Predicting the Performance of an Axial-Flow Compressor

    NASA Technical Reports Server (NTRS)

    Steinke, R. J.

    1986-01-01

    Stage-stacking computer code (STGSTK) developed for predicting off-design performance of multi-stage axial-flow compressors. Code uses meanline stage-stacking method. Stage and cumulative compressor performance calculated from representative meanline velocity diagrams located at rotor inlet and outlet meanline radii. Numerous options available within code. Code developed so users can modify correlations to suit their needs.
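
    A minimal stage-stacking loop (a sketch of the general meanline method, not STGSTK itself) accumulates the overall pressure ratio and exit temperature stage by stage from assumed per-stage pressure ratios and adiabatic efficiencies:

        # Minimal stage-stacking sketch (illustrative; not STGSTK).
        GAMMA = 1.4  # ratio of specific heats for air

        def stack_stages(t_in: float, stage_pr: list[float], stage_eta: list[float]):
            pr_total, t = 1.0, t_in
            for pr, eta in zip(stage_pr, stage_eta):
                pr_total *= pr
                # ideal temperature rise divided by stage adiabatic efficiency
                t *= 1.0 + (pr ** ((GAMMA - 1.0) / GAMMA) - 1.0) / eta
            return pr_total, t

        # three assumed stages: overall PR ~2.46 and the cumulative exit temperature
        print(stack_stages(288.15, [1.4, 1.35, 1.3], [0.88, 0.87, 0.86]))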

  6. An electrostatic Particle-In-Cell code on multi-block structured meshes

    NASA Astrophysics Data System (ADS)

    Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca; Vernon, Louis J.; Moulton, J. David

    2017-12-01

    We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. Despite the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma-material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.
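
    A one-dimensional toy analogue may clarify the hybrid mover: the particle keeps its physical-space velocity, while its logical coordinate is advanced through the inverse of the map Jacobian. The map below is an assumed stretched grid, not CPIC's actual multi-block machinery:

        # 1D toy analogue of a logical-space/physical-velocity particle push.
        L = 2.0
        def x_of_xi(xi):  return L * xi ** 2      # assumed stretched map x(xi)
        def dxdxi(xi):    return 2.0 * L * xi     # map Jacobian dx/dxi

        def push(xi, v, dt):
            # physical velocity v, logical coordinate xi: dxi/dt = v / (dx/dxi)
            return xi + v / dxdxi(xi) * dt

        xi, v = 0.5, 1.0
        for _ in range(10):
            xi = push(xi, v, 1e-3)
        print(xi, x_of_xi(xi))  # logical and corresponding physical position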

  7. An electrostatic Particle-In-Cell code on multi-block structured meshes

    DOE PAGES

    Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca; ...

    2017-09-14

    We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. In spite of the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma–material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.

  8. An electrostatic Particle-In-Cell code on multi-block structured meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca

    We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. In spite of the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma–material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.

  9. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  10. Verification of low-Mach number combustion codes using the method of manufactured solutions

    NASA Astrophysics Data System (ADS)

    Shunn, Lee; Ham, Frank; Knupp, Patrick; Moin, Parviz

    2007-11-01

    Many computational combustion models rely on tabulated constitutive relations to close the system of equations. As these reactive state-equations are typically multi-dimensional and highly non-linear, their implications on the convergence and accuracy of simulation codes are not well understood. In this presentation, the effects of tabulated state-relationships on the computational performance of low-Mach number combustion codes are explored using the method of manufactured solutions (MMS). Several MMS examples are developed and applied, progressing from simple one-dimensional configurations to problems involving higher dimensionality and solution-complexity. The manufactured solutions are implemented in two multi-physics hydrodynamics codes: CDP developed at Stanford University and FUEGO developed at Sandia National Laboratories. In addition to verifying the order-of-accuracy of the codes, the MMS problems help highlight certain robustness issues in existing variable-density flow-solvers. Strategies to overcome these issues are briefly discussed.
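
    The core MMS recipe is short: choose a manufactured solution, derive the source term symbolically so the solution satisfies the governing equation exactly, then feed that source to the code under test and measure the observed order of accuracy. A generic 1D diffusion example (not the CDP/FUEGO test problems) using sympy:

        # Method-of-manufactured-solutions sketch: derive the source term S so
        # that u_m satisfies u_t - nu*u_xx = S exactly.
        import sympy as sp

        x, t, nu = sp.symbols("x t nu")
        u_m = sp.exp(-t) * sp.sin(sp.pi * x)              # manufactured solution
        S = sp.diff(u_m, t) - nu * sp.diff(u_m, x, 2)     # implied source term
        print(sp.simplify(S))   # ~ (pi**2*nu - 1)*exp(-t)*sin(pi*x)
        # The code under test is then run with source S and compared against u_m
        # on successively refined grids to verify the order of accuracy.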

  11. Improved olefinic fat suppression in skeletal muscle DTI using a magnitude-based dixon method.

    PubMed

    Burakiewicz, Jedrzej; Hooijmans, Melissa T; Webb, Andrew G; Verschuuren, Jan J G M; Niks, Erik H; Kan, Hermien E

    2018-01-01

    To develop a method of suppressing the multi-resonance fat signal in diffusion-weighted imaging of skeletal muscle. This is particularly important when imaging patients with muscular dystrophies, a group of diseases which cause gradual replacement of muscle tissue by fat. The signal from the olefinic fat peak at 5.3 ppm can significantly confound diffusion-tensor imaging measurements. Dixon olefinic fat suppression (DOFS), a magnitude-based chemical-shift-based method of suppressing the olefinic peak, is proposed. It is verified in vivo by performing diffusion tensor imaging (DTI)-based quantification in the lower leg of seven healthy volunteers, and compared to two previously described fat-suppression techniques in regions with and without fat contamination. In the region without fat contamination, DOFS produces similar results to existing techniques, whereas in muscle contaminated by subcutaneous fat signal moved due to the chemical shift artefact, it consistently showed significantly higher (P = 0.018) mean diffusivity (MD). Because fat presence lowers MD, this suggests improved fat suppression. DOFS offers superior fat suppression and enhances quantitative measurements in the muscle in the presence of fat. DOFS is an alternative to spectral olefinic fat suppression. Magn Reson Med 79:152-159, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
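
    For background, the generic two-point Dixon separation underlying chemical-shift-based methods of this kind (the textbook construction, not the DOFS algorithm itself) recovers water and fat from in-phase and opposed-phase images; magnitude-based variants must additionally resolve the sign of the opposed-phase image per voxel:

        # Generic two-point Dixon separation (illustrative sketch):
        # with in-phase S_ip = W + F and opposed-phase S_op = W - F,
        # water and fat follow by sum and difference.
        import numpy as np

        def dixon_two_point(s_ip: np.ndarray, s_op: np.ndarray):
            water = 0.5 * (s_ip + s_op)
            fat = 0.5 * (s_ip - s_op)
            return water, fat
        # Magnitude-based variants (as in DOFS-like methods) must first decide,
        # voxel by voxel, whether water or fat dominates to fix the sign of s_op.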

  12. Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications

    NASA Technical Reports Server (NTRS)

    OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)

    1998-01-01

    This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).

  13. TOPLHA and ALOHA: comparison between Lower Hybrid wave coupling codes

    NASA Astrophysics Data System (ADS)

    Meneghini, Orso; Hillairet, J.; Goniche, M.; Bilato, R.; Voyer, D.; Parker, R.

    2008-11-01

    TOPLHA and ALOHA are wave coupling simulation tools for LH antennas. Both codes are able to account for realistic 3D antenna geometries and use a 1D plasma model. In the framework of a collaboration between MIT and CEA laboratories, the two codes have been extensively compared. In TOPLHA the EM problem is self consistently formulated by means of a set of multiple coupled integral equations having as domain the triangles of the meshed antenna surface. TOPLHA currently uses the FELHS code for modeling the plasma response. ALOHA instead uses a mode matching approach and its own plasma model. Comparisons have been done for several plasma scenarios on different antenna designs: an array of independent waveguides, a multi-junction antenna and a passive/active multi-junction antenna. When simulating the same geometry and plasma conditions the two codes compare remarkably well both for the reflection coefficients and for the launched spectra. The different approach of the two codes to solve the same problem strengthens the confidence in the final results.

  14. ASTEC—the Aarhus STellar Evolution Code

    NASA Astrophysics Data System (ADS)

    Christensen-Dalsgaard, Jørgen

    2008-08-01

    The Aarhus code is the result of a long development, starting in 1974, and still ongoing. A novel feature is the integration of the computation of adiabatic oscillations for specified models as part of the code. It offers substantial flexibility in terms of microphysics and has been carefully tested for the computation of solar models. However, considerable development is still required in the treatment of nuclear reactions, diffusion and convective mixing.

  15. A Continuum Diffusion Model for Viscoelastic Materials

    DTIC Science & Technology

    1988-11-01

    (Only fragments of this abstract survive OCR of the report documentation page; form-field residue has been removed.) These studies, which involved experimental, analytical, and materials science aspects, were conducted by researchers in the fields of physical [...] thermodynamics, with irreversibility stemming from the foregoing variables through "growth laws" that correspond to viscous resistance. The physical ageing of [...]

  16. Structural and functional studies of a family of Dictyostelium discoideum developmentally regulated, prestalk genes coding for small proteins.

    PubMed

    Vicente, Juan J; Galardi-Castilla, María; Escalante, Ricardo; Sastre, Leandro

    2008-01-03

    The social amoeba Dictyostelium discoideum executes a multicellular development program upon starvation. This morphogenetic process requires the differential regulation of a large number of genes and is coordinated by extracellular signals. The MADS-box transcription factor SrfA is required for several stages of development, including slug migration and spore terminal differentiation. Subtractive hybridization allowed the isolation of a gene, sigN (SrfA-induced gene N), that was dependent on the transcription factor SrfA for expression at the slug stage of development. Homology searches detected the existence of a large family of sigN-related genes in the Dictyostelium discoideum genome. The 13 most similar genes are grouped in two regions of chromosome 2 and have been named Group1 and Group2 sigN genes. The putative encoded proteins are 87-89 amino acids long. All these genes have a similar structure, composed of a first exon containing a 13-nucleotide-long open reading frame and a second exon comprising the remainder of the putative coding region. The expression of these genes is induced at 10 hours of development. Analyses of their promoter regions indicate that these genes are expressed in the prestalk region of developing structures. The addition of antibodies raised against SigN Group 2 proteins induced disintegration of multicellular structures at the mound stage of development. A large family of genes coding for small proteins has been identified in D. discoideum. Two groups of very similar genes from this family have been shown to be specifically expressed in prestalk cells during development. Functional studies using antibodies raised against Group 2 SigN proteins indicate that these genes could play a role during multicellular development.

  17. Emerging from the tragedies in Bangladesh: a challenge to voluntarism in the global economy.

    PubMed

    Claeson, Björn Skorpen

    2015-02-01

    Under the regime of private company or multi-stakeholder voluntary codes of conduct and industry social auditing, workers have absorbed low wages and unsafe and abusive conditions; labor leaders and union members have become the targets of both government and factory harassment and violence; and trade union power has waned. Nowhere have these private systems of codes and audits so clearly failed to protect workers as in Bangladesh's apparel industry. However, international labor groups and Bangladeshi unions have succeeded in mounting a challenge to voluntarism in the global economy, persuading more than 180 companies to make a binding and enforceable commitment to workers' safety in an agreement with 12 unions. The extent to which this Bangladesh Accord will be able to influence the entrenched global regime of voluntary codes and weak trade unions remains an open question. But if the Accord can make progress in Bangladesh, it can help to inspire similar efforts in other countries and in other industries. © 2015 SAGE Publications.

  18. VENTURE: a code block for solving multigroup neutronics problems applying the finite-difference diffusion-theory approximation to neutron transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1975-10-01

    The computer code block VENTURE, designed to solve multigroup neutronics problems with application of the finite-difference diffusion-theory approximation to neutron transport (or, alternatively, simple P1) in up to three-dimensional geometry, is described. A variety of problem types may be solved: the usual eigenvalue problem; a direct criticality search on the buckling, on a reciprocal-velocity absorber (prompt mode), or on nuclide concentrations; or an indirect criticality search on nuclide concentrations or on dimensions. First-order perturbation analysis capability is available at the macroscopic cross-section level.
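
    The eigenvalue problem VENTURE solves can be illustrated by a minimal one-group, one-dimensional analogue (VENTURE itself is multigroup and up to 3D): finite-difference the diffusion operator and apply power iteration to find k and the flux shape.

        # One-group 1D slab diffusion k-eigenvalue by power iteration (sketch):
        #   -D*phi'' + Sig_a*phi = (1/k) * nu*Sig_f * phi,  zero-flux boundaries.
        import numpy as np

        D, sig_a, nu_sig_f, L, n = 1.0, 0.07, 0.08, 100.0, 200
        h = L / (n + 1)
        A = (np.diag(np.full(n, 2 * D / h**2 + sig_a))
             + np.diag(np.full(n - 1, -D / h**2), 1)
             + np.diag(np.full(n - 1, -D / h**2), -1))

        phi, k = np.ones(n), 1.0
        for _ in range(200):
            src = nu_sig_f * phi                       # fission source
            phi_new = np.linalg.solve(A, src / k)      # diffusion solve
            k *= np.sum(nu_sig_f * phi_new) / np.sum(src)
            phi = phi_new / np.max(phi_new)            # normalize flux

        print(k)  # ~ nu_sig_f / (sig_a + D*(pi/L)**2) for this bare slab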

  19. Topological quantum error correction in the Kitaev honeycomb model

    NASA Astrophysics Data System (ADS)

    Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.

    2017-08-01

    The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.

  20. Finite element methods in a simulation code for offshore wind turbines

    NASA Astrophysics Data System (ADS)

    Kurz, Wolfgang

    1994-06-01

    Offshore installation of wind turbines will become important for electricity supply in the future. Wind conditions at sea are more favorable than on land, and suitable onshore locations are limited and restricted. The dynamic behavior of advanced wind turbines is investigated with digital simulations to reduce time and cost in the development and design phase. A wind turbine can be described and simulated as a multi-body system containing rigid and flexible bodies. Simulating the non-linear motion of such a mechanical system with a multi-body system code is much faster than with a finite element code. However, a modal representation of the deformation field has to be incorporated in the multi-body system approach. The equations of motion of flexible bodies due to deformation are generated by finite element calculations. At Delft University of Technology the simulation code DUWECS has been developed, which simulates the non-linear behavior of wind turbines in the time domain. The wind turbine is divided into subcomponents which are represented by modules (e.g. rotor, tower, etc.).

  1. Comprehensive Model of Single Particle Pulverized Coal Combustion Extended to Oxy-Coal Conditions

    DOE PAGES

    Holland, Troy; Fletcher, Thomas H.

    2017-02-22

    Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive CFD simulations are valuable tools in evaluating and deploying oxy-fuel and other carbon capture technologies, either as retrofit technologies or for new construction. However, accurate predictive simulations require physically realistic submodels with low computational requirements. In particular, comprehensive char oxidation and gasification models have been developed that describe multiple reaction and diffusion processes. Our work extends a comprehensive char conversion code (CCK), which treats surface oxidation and gasification reactions as well as processes such as film diffusion, pore diffusion, ash encapsulation, and annealing. In this work several submodels in the CCK code were updated with more realistic physics or otherwise extended to function in oxy-coal conditions. Improved submodels include the annealing model, the swelling model, the mode-of-burning parameter, and the kinetic model, as well as the addition of the chemical percolation devolatilization (CPD) model. We compare the char combustion model results to oxy-coal data, and further to parallel data sets near conventional conditions. A potential method to apply the detailed code in CFD work is given.
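
    The mode-of-burning parameter mentioned above is conventionally defined through the standard relations below (generic char-combustion relations, not the CCK implementation): with burning-mode parameter alpha, density falls as a power of the mass fraction and the diameter follows from mass conservation.

        # Standard burning-mode relations (generic sketch):
        #   rho/rho0 = (m/m0)**alpha,  d/d0 = (m/m0)**((1-alpha)/3)
        # alpha = 1: pure density reduction at constant diameter (Zone I);
        # alpha = 0: shrinking particle of constant density (Zone III).
        def burning_mode(m_frac: float, alpha: float):
            rho_frac = m_frac ** alpha
            d_frac = m_frac ** ((1.0 - alpha) / 3.0)
            return rho_frac, d_frac

        print(burning_mode(0.5, 0.25))  # density and diameter at 50% burnout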

  2. Comprehensive Model of Single Particle Pulverized Coal Combustion Extended to Oxy-Coal Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, Troy; Fletcher, Thomas H.

    Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive CFD simulations are valuable tools in evaluating and deploying oxy-fuel and other carbon capture technologies, either as retrofit technologies or for new construction. However, accurate predictive simulations require physically realistic submodels with low computational requirements. In particular, comprehensive char oxidation and gasification models have been developed that describe multiple reaction and diffusion processes. Our work extends a comprehensive char conversion code (CCK), which treats surface oxidation and gasification reactions as well as processes such as film diffusion, pore diffusion, ash encapsulation, and annealing. In this work several submodels in the CCK code were updated with more realistic physics or otherwise extended to function in oxy-coal conditions. Improved submodels include the annealing model, the swelling model, the mode-of-burning parameter, and the kinetic model, as well as the addition of the chemical percolation devolatilization (CPD) model. We compare the char combustion model results to oxy-coal data, and further to parallel data sets near conventional conditions. A potential method to apply the detailed code in CFD work is given.

  3. Blast and the Consequences on Traumatic Brain Injury-Multiscale Mechanical Modeling of Brain

    DTIC Science & Technology

    2011-02-17

    (Only fragments of this abstract survive extraction; table-of-contents residue has been removed.) The 3-D head [...] formulation is implemented to model the air-blast simulation. LS-DYNA, as an explicit FE code, has been employed to simulate this multi-material fluid-structure interaction problem. Recoverable section titles: "Biomechanics Study of Influencing Parameters for Brain under Impact"; "The Impact of Cerebrospinal Fluid".

  4. System, methods and apparatus for program optimization for multi-threaded processor architectures

    DOEpatents

    Bastoul, Cedric; Lethin, Richard A; Leung, Allen K; Meister, Benoit J; Szilagyi, Peter; Vasilache, Nicolas T; Wohlford, David E

    2015-01-06

    Methods, apparatus and computer software product for source code optimization are provided. In an exemplary embodiment, a first custom computing apparatus is used to optimize the execution of source code on a second computing apparatus. In this embodiment, the first custom computing apparatus contains a memory, a storage medium and at least one processor with at least one multi-stage execution unit. The second computing apparatus contains at least two multi-stage execution units that allow for parallel execution of tasks. The first custom computing apparatus optimizes the code for parallelism, locality of operations and contiguity of memory accesses on the second computing apparatus. This Abstract is provided for the sole purpose of complying with the Abstract requirement rules. This Abstract is submitted with the explicit understanding that it will not be used to interpret or to limit the scope or the meaning of the claims.

  5. Novel approach to multispectral image compression on the Internet

    NASA Astrophysics Data System (ADS)

    Zhu, Yanqiu; Jin, Jesse S.

    2000-10-01

    Still image coding techniques such as JPEG have traditionally been applied to individual intra-plane images, and coding fidelity is the usual measure of intra-plane coding performance. In many imaging applications it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes to further compress the spectral planes. Moreover, a mechanism for introducing the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlation among planes based on the human visual system, and achieves a high degree of compactness in the data representation and compression.
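
    A concrete, widely used instance of "transformation among planes" is an RGB-to-YCbCr decorrelation prior to JPEG-style coding of each plane; the chroma planes can then be subsampled or quantized more coarsely to exploit psychovisual redundancy. This is a generic illustration with BT.601 coefficients, not necessarily the authors' transform:

        # RGB -> YCbCr decorrelation (BT.601 coefficients; generic example).
        import numpy as np

        def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            y = 0.299 * r + 0.587 * g + 0.114 * b   # luma carries most detail
            cb = 0.564 * (b - y)                    # chroma planes can be coded
            cr = 0.713 * (r - y)                    # more coarsely than luma
            return np.stack([y, cb, cr], axis=-1)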

  6. Overview 2003 of NASA Multi-D Stirling Convertor Code Development and DOE and NASA Stirling Regenerator R and D Efforts

    NASA Technical Reports Server (NTRS)

    Tew, Roy; Ibrahim, Mounir; Simon, Terry; Mantell, Susan; Gedeon, David; Qiu, Songgang; Wood, Gary

    2004-01-01

    This paper will report on continuation through the third year of a NASA grant for multi-dimensional Stirling CFD code development and validation; continuation through the third and final year of a Department of Energy, Golden Field Office (DOE), regenerator research effort and a NASA grant for continuation of the effort through two additional years; and a new NASA Research Award for design, microfabrication and testing of a "Next Generation Stirling Engine Regenerator." Cleveland State University (CSU) is the lead organization for all three efforts, with the University of Minnesota (UMN) and Gedeon Associates as subcontractors. The Stirling Technology Company and Sunpower, Inc. acted as unfunded consultants or participants through the third years of both the NASA multi-D code development and DOE regenerator research efforts; they will both be subcontractors on the new regenerator microfabrication contract.

  7. Development of Tokamak Transport Solvers for Stiff Confinement Systems

    NASA Astrophysics Data System (ADS)

    St. John, H. E.; Lao, L. L.; Murakami, M.; Park, J. M.

    2006-10-01

    Leading transport models such as GLF23 [1] and MM95 [2] describe turbulent plasma energy, momentum and particle flows. In order to accommodate existing transport codes and associated solution methods, effective diffusivities have to be derived from these turbulent flow models. This can cause significant problems in predicting unique solutions. We have developed a parallel transport code solver, GCNMP, that can accommodate both flow-based and diffusivity-based confinement models by solving the discretized nonlinear equations using modern Newton, trust region, steepest descent and homotopy methods. We present our latest development efforts, including multiple dynamic grids, application of two-level parallel schemes, and operator splitting techniques that allow us to combine flow-based and diffusivity-based models in tokamak simulations. [1] R.E. Waltz, et al., Phys. Plasmas 4, 7 (1997). [2] G. Bateman, et al., Phys. Plasmas 5, 1793 (1998).

  8. SHARP User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Y. Q.; Shemon, E. R.; Thomas, J. W.

    SHARP is an advanced modeling and simulation toolkit for the analysis of nuclear reactors. It comprises several components, including physical modeling tools, tools to integrate the physics codes for multi-physics analyses, and a set of tools to couple the codes within the MOAB framework. Physics modules currently include the neutronics code PROTEUS, the thermal-hydraulics code Nek5000, and the structural mechanics code Diablo. This manual focuses on performing multi-physics calculations with the SHARP ToolKit. Manuals for the three individual physics modules are available with the SHARP distribution to help the user to either carry out the primary multi-physics calculation with basic knowledge or perform further advanced development with in-depth knowledge of these codes. This manual provides step-by-step instructions on employing SHARP, including how to download and install the code, how to build the drivers for a test case, how to perform a calculation and how to visualize the results. Since SHARP has some specific library and environment dependencies, it is highly recommended that the user read this manual prior to installing SHARP. Verification test cases are included to check proper installation of each module. It is suggested that the new user first follow the step-by-step instructions provided for a test problem in this manual to understand the basic procedure of using SHARP before using SHARP for his/her own analysis. Both reference output and scripts are provided along with the test cases in order to verify correct installation and execution of the SHARP package. At the end of this manual, detailed instructions are provided on how to create a new test case so that the user can perform novel multi-physics calculations with SHARP. Frequently asked questions are listed at the end of this manual to help the user troubleshoot issues.

  9. Resolving the Multi-scale Behavior of Geochemical Weathering in the Critical Zone Using High Resolution Hydro-geochemical Models

    NASA Astrophysics Data System (ADS)

    Pandey, S.; Rajaram, H.

    2015-12-01

    This work investigates hydrologic and geochemical interactions in the Critical Zone (CZ) using high-resolution reactive transport modeling. Reactive transport models can be used to predict the response of geochemical weathering and solute fluxes in the CZ to changes in a dynamic environment, such as those pertaining to human activities and climate change in recent years. The scales of hydrology and geochemistry in the CZ range from days to eons in time and centimeters to kilometers in space. Here, we present results of a multi-dimensional, multi-scale hydro-geochemical model to investigate the role of subsurface heterogeneity on the formation of mineral weathering fronts in the CZ, which requires consideration of many of these spatio-temporal scales. The model is implemented using the reactive transport code PFLOTRAN, an open source subsurface flow and reactive transport code that utilizes parallelization over multiple processing nodes and provides a strong framework for simulating weathering in the CZ. The model is set up to simulate weathering dynamics in mountainous catchments representative of the Colorado Front Range. Model parameters were constrained based on hydrologic, geochemical, and geophysical observations from the Boulder Creek Critical Zone Observatory (BcCZO). Simulations were performed in fractured rock systems and compared with systems of heterogeneous and homogeneous permeability fields. Tracer simulations revealed that the mean residence time of solutes was drastically shortened as fracture density increased. In simulations that include mineral reactions, distinct signatures of transport limitations on weathering arose when discrete flow paths were included. This transport limitation was related to both advective and diffusive processes in the highly heterogeneous systems (i.e. fractured media and correlated random permeability fields with σ_ln k > 3). The well-known time-dependence of mineral weathering rates was found to be most pronounced in the fractured systems, with a departure from the maximum system-averaged dissolution rate occurring after ~100 kyr, followed by a gradual decrease in the reaction rate with time that persists beyond 10^4 kyr.

  10. Reactive multi-particle collision dynamics with reactive boundary conditions

    NASA Astrophysics Data System (ADS)

    Sayyidmousavi, Alireza; Rohlf, Katrin

    2018-07-01

    In the present study, an off-lattice particle-based method called the reactive multi-particle collision (RMPC) dynamics is extended to model reaction-diffusion systems with reactive boundary conditions in which the a priori diffusion coefficient of the particles needs to be maintained throughout the simulation. To this end, the authors have made use of the so-called bath particles whose purpose is only to ensure proper diffusion of the main particles in the system. In order to model partial adsorption by a reactive boundary in the RMPC, the probability of a particle being adsorbed, once it hits the boundary, is calculated by drawing an analogy between the RMPC and Brownian Dynamics. The main advantages of the RMPC compared to other molecular based methods are less computational cost as well as conservation of mass, energy and momentum in the collision and free streaming steps. The proposed approach is tested on three reaction-diffusion systems and very good agreement with the solutions to their corresponding partial differential equations is observed.
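
    For orientation, the basic (non-reactive) multi-particle collision step that RMPC builds on rotates particle velocities about their cell mean by a fixed angle of random sign, conserving mass, momentum and energy within the cell. A 2D stochastic-rotation sketch (not the RMPC code itself):

        # Basic 2D multi-particle (stochastic-rotation) collision step, sketch.
        import numpy as np

        def srd_collide(v: np.ndarray, alpha: float, rng) -> np.ndarray:
            """v: (N, 2) velocities of the particles in one collision cell."""
            vcm = v.mean(axis=0)                       # cell center-of-mass velocity
            a = alpha if rng.random() < 0.5 else -alpha
            c, s = np.cos(a), np.sin(a)
            R = np.array([[c, -s], [s, c]])            # rotation by +/- alpha
            return vcm + (v - vcm) @ R.T               # momentum/energy conserving

        rng = np.random.default_rng(0)
        v = rng.standard_normal((50, 2))
        print(v.mean(axis=0), srd_collide(v, 2.0, rng).mean(axis=0))  # equal means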

  11. Pre-eruptive magmatic processes re-timed using a non-isothermal approach to magma chamber dynamics.

    PubMed

    Petrone, Chiara Maria; Bugatti, Giuseppe; Braschi, Eleonora; Tommasini, Simone

    2016-10-05

    Constraining the timescales of pre-eruptive magmatic processes in active volcanic systems is paramount to understand magma chamber dynamics and the triggers for volcanic eruptions. Temporal information of magmatic processes is locked within the chemical zoning profiles of crystals but can be accessed by means of elemental diffusion chronometry. Mineral compositional zoning testifies to the occurrence of substantial temperature differences within magma chambers, which often bias the estimated timescales in the case of multi-stage zoned minerals. Here we propose a new Non-Isothermal Diffusion Incremental Step model to take into account the non-isothermal nature of pre-eruptive processes, deconstructing the main core-rim diffusion profiles of multi-zoned crystals into different isothermal steps. The Non-Isothermal Diffusion Incremental Step model represents a significant improvement in the reconstruction of crystal lifetime histories. Unravelling stepwise timescales at contrasting temperatures provides a novel approach to constraining pre-eruptive magmatic processes and greatly increases our understanding of magma chamber dynamics.
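
    The isothermal building block that the NIDIS model applies stepwise is classical diffusion chronometry: fit an error-function profile across a compositional zone boundary to recover the product Dt, then divide by D(T) at the assumed temperature to obtain a timescale. A sketch with synthetic data and assumed values (scipy; not the authors' code):

        # Isothermal diffusion-chronometry sketch: recover sqrt(Dt) from a profile.
        import numpy as np
        from scipy.special import erf
        from scipy.optimize import curve_fit

        def profile(x, c_core, c_rim, sqrt_dt):
            # step initial condition relaxed by diffusion for time t
            return 0.5 * (c_core + c_rim) + 0.5 * (c_core - c_rim) * erf(x / (2.0 * sqrt_dt))

        x = np.linspace(-100e-6, 100e-6, 101)        # traverse across the zone boundary (m)
        c_obs = profile(x, 0.80, 0.60, 2.0e-5)       # synthetic "measured" data
        popt, _ = curve_fit(profile, x, c_obs, p0=(0.8, 0.6, 1e-5))

        D = 1e-19                                    # assumed D(T) at the step's T, m^2/s
        print(popt[2] ** 2 / D / 3.15e7, "yr")       # t = (sqrt(Dt))^2 / D, ~1e2 yr here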

  12. Pre-eruptive magmatic processes re-timed using a non-isothermal approach to magma chamber dynamics

    PubMed Central

    Petrone, Chiara Maria; Bugatti, Giuseppe; Braschi, Eleonora; Tommasini, Simone

    2016-01-01

    Constraining the timescales of pre-eruptive magmatic processes in active volcanic systems is paramount to understand magma chamber dynamics and the triggers for volcanic eruptions. Temporal information of magmatic processes is locked within the chemical zoning profiles of crystals but can be accessed by means of elemental diffusion chronometry. Mineral compositional zoning testifies to the occurrence of substantial temperature differences within magma chambers, which often bias the estimated timescales in the case of multi-stage zoned minerals. Here we propose a new Non-Isothermal Diffusion Incremental Step model to take into account the non-isothermal nature of pre-eruptive processes, deconstructing the main core-rim diffusion profiles of multi-zoned crystals into different isothermal steps. The Non-Isothermal Diffusion Incremental Step model represents a significant improvement in the reconstruction of crystal lifetime histories. Unravelling stepwise timescales at contrasting temperatures provides a novel approach to constraining pre-eruptive magmatic processes and greatly increases our understanding of magma chamber dynamics. PMID:27703141

  13. Computer-aided, multi-modal, and compression diffuse optical studies of breast tissue

    NASA Astrophysics Data System (ADS)

    Busch, David Richard, Jr.

    Diffuse Optical Tomography and Spectroscopy permit measurement of important physiological parameters non-invasively through ~10 cm of tissue. I have applied these techniques in measurements of human breast and breast cancer. My thesis integrates three loosely connected themes in this context: multi-modal breast cancer imaging, automated data analysis of breast cancer images, and microvascular hemodynamics of breast under compression. As per the first theme, I describe construction, testing, and the initial clinical usage of two generations of imaging systems for simultaneous diffuse optical and magnetic resonance imaging. The second project develops a statistical analysis of optical breast data from many spatial locations in a population of cancers to derive a novel optical signature of malignancy; I then apply this data-derived signature for localization of cancer in additional subjects. Finally, I construct and deploy diffuse optical instrumentation to measure blood content and blood flow during breast compression; besides optics, this research has implications for any method employing breast compression, e.g., mammography.

  14. Multi-charge-state molecular dynamics and self-diffusion coefficient in the warm dense matter regime

    NASA Astrophysics Data System (ADS)

    Fu, Yongsheng; Hou, Yong; Kang, Dongdong; Gao, Cheng; Jin, Fengtao; Yuan, Jianmin

    2018-01-01

    We present a multi-ion molecular dynamics (MIMD) simulation and apply it to calculating the self-diffusion coefficients of ions with different charge-states in the warm dense matter (WDM) regime. First, the method is used for the self-consistent calculation of electron structures of different charge-state ions in the ion sphere, with the ion-sphere radii being determined by the plasma density and the ion charges. The ionic fraction is then obtained by solving the Saha equation, taking account of interactions among different charge-state ions in the system, and ion-ion pair potentials are computed using the modified Gordon-Kim method in the framework of temperature-dependent density functional theory on the basis of the electron structures. Finally, MIMD is used to calculate ionic self-diffusion coefficients from the velocity correlation function according to the Green-Kubo relation. A comparison with the results of the average-atom model shows that different statistical processes will influence the ionic diffusion coefficient in the WDM regime.
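
    The Green-Kubo step is compact: the self-diffusion coefficient is one third of the time integral of the velocity autocorrelation function. A sketch of the estimator for a single tagged ion (array shapes and units are assumptions):

        # Green-Kubo self-diffusion estimator: D = (1/3) * integral of <v(0).v(t)>.
        import numpy as np

        def self_diffusion(v: np.ndarray, dt: float, n_lags: int) -> float:
            """v: (n_steps, 3) velocity trajectory of one tagged ion."""
            vacf = np.array([np.mean(np.sum(v[:len(v) - lag] * v[lag:], axis=1))
                             for lag in range(n_lags)])   # <v(0).v(t)> over time origins
            return np.trapz(vacf, dx=dt) / 3.0            # Green-Kubo relation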

  15. Modelling mass diffusion for a multi-layer sphere immersed in a semi-infinite medium: application to drug delivery.

    PubMed

    Carr, Elliot J; Pontrelli, Giuseppe

    2018-04-12

    We present a general mechanistic model of mass diffusion for a composite sphere placed in a large ambient medium. The multi-layer problem is described by a system of diffusion equations coupled via interlayer boundary conditions such as those imposing a finite mass resistance at the external surface of the sphere. While the work is applicable to the generic problem of heat or mass transfer in a multi-layer sphere, the analysis and results are presented in the context of drug kinetics for desorbing and absorbing spherical microcapsules. We derive an analytical solution for the concentration in the sphere and in the surrounding medium that avoids any artificial truncation at a finite distance. The closed-form solution in each concentric layer is expressed in terms of a suitably-defined inverse Laplace transform that can be evaluated numerically. Concentration profiles and drug mass curves in the spherical layers and in the external environment are presented, and the dependence of the solution on the mass transfer coefficient at the surface of the sphere is analyzed. Copyright © 2018 Elsevier Inc. All rights reserved.
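
    Numerical inversion of Laplace transforms of this kind is routine; for instance, mpmath provides invertlaplace, checked here on a known transform pair rather than the paper's layered-sphere solution:

        # Numerical inverse Laplace transform sanity check with mpmath:
        # F(s) = 1/(s+1)^2  <->  f(t) = t*exp(-t)
        import mpmath as mp

        F = lambda s: 1 / (s + 1) ** 2
        for t in (0.5, 1.0, 2.0):
            print(t, mp.invertlaplace(F, t, method="talbot"), t * mp.exp(-t))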

  16. A 2D multi-term time and space fractional Bloch-Torrey model based on bilinear rectangular finite elements

    NASA Astrophysics Data System (ADS)

    Qin, Shanlin; Liu, Fawang; Turner, Ian W.

    2018-03-01

    The consideration of diffusion processes in magnetic resonance imaging (MRI) signal attenuation is classically described by the Bloch-Torrey equation. However, many recent works highlight the distinct deviation in MRI signal decay due to anomalous diffusion, which motivates the fractional order generalization of the Bloch-Torrey equation. In this work, we study the two-dimensional multi-term time and space fractional diffusion equation generalized from the time and space fractional Bloch-Torrey equation. By using the Galerkin finite element method with a structured mesh consisting of rectangular elements to discretize in space and the L1 approximation of the Caputo fractional derivative in time, a fully discrete numerical scheme is derived. A rigorous analysis of stability and error estimation is provided. Numerical experiments in the square and L-shaped domains are performed to give an insight into the efficiency and reliability of our method. Then the scheme is applied to solve the multi-term time and space fractional Bloch-Torrey equation, which shows that the extra time derivative terms impact the relaxation process.
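
    The L1 approximation of the Caputo derivative referred to above is standard for 0 < alpha < 1 and easy to state: a weighted sum of backward differences with weights b_j = (j+1)^(1-alpha) - j^(1-alpha). The sketch below checks it against the exact Caputo derivative of t^2:

        # L1 approximation of the Caputo derivative (standard scheme, 0 < alpha < 1).
        import numpy as np
        from math import gamma

        def caputo_l1(u: np.ndarray, dt: float, alpha: float) -> float:
            """L1 value of D^alpha u at the final grid point."""
            n = len(u) - 1
            j = np.arange(n)
            b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)   # L1 weights
            diffs = u[n - j] - u[n - j - 1]                 # u(t_{n-j}) - u(t_{n-j-1})
            return np.sum(b * diffs) / (gamma(2 - alpha) * dt ** alpha)

        alpha, T, n = 0.5, 1.0, 1000
        t = np.linspace(0.0, T, n + 1)
        # exact Caputo derivative of t^2 is 2*t^(2-alpha)/Gamma(3-alpha)
        print(caputo_l1(t ** 2, T / n, alpha), 2 * T ** (2 - alpha) / gamma(3 - alpha))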

  17. Assessment of the MHD capability in the ATHENA code using data from the ALEX facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roth, P.A.

    1989-03-01

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code is a system transient analysis code with multi-loop, multi-fluid capabilities, which is available to the fusion community at the National Magnetic Fusion Energy Computing Center (NMFECC). The work reported here assesses the ATHENA magnetohydrodynamic (MHD) pressure drop model for liquid metals flowing through a strong magnetic field. An ATHENA model was developed for two simple geometry, adiabatic test sections used in the Argonne Liquid Metal Experiment (ALEX) at Argonne National Laboratory (ANL). The pressure drops calculated by ATHENA agreed well with the experimental results from the ALEX facility.
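
    For context, a classic high-Hartmann-number estimate often used in such assessments (quoted here from the general fusion-blanket literature as an approximation; this is not necessarily ATHENA's model) gives the pressure drop of fully developed liquid-metal flow in a thin conducting-wall duct in terms of the wall conductance ratio:

        # Approximate MHD pressure drop, thin conducting-wall duct, high Hartmann
        # number (assumed classic estimate, not ATHENA's implementation):
        #   dp = sigma_f * u * B^2 * L * c / (1 + c),  c = sigma_w * t_w / (sigma_f * a)
        def mhd_pressure_drop(sigma_f, u, B, L, sigma_w, t_w, a):
            c = sigma_w * t_w / (sigma_f * a)   # wall conductance ratio
            return sigma_f * u * B**2 * L * c / (1.0 + c)

        # e.g., lithium-like fluid, 1 m duct, 5 cm half-width, 5 T field:
        print(mhd_pressure_drop(3.0e6, 0.1, 5.0, 1.0, 1.4e6, 2e-3, 0.05))  # Pa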

  18. Algorithm for loading shot noise microbunching in multi-dimensional, free-electron laser simulation codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawley, William M.

    We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser (FEL) simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results including the predicted incoherent, spontaneous emission as tests of the shot noise algorithm's correctness.
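
    The statistic any shot-noise loader must reproduce is that N_e electrons with independent random phases give a mean-square fundamental bunching factor of 1/N_e; the subtlety for macroparticle codes is recovering this when each macroparticle represents many electrons. The toy check below verifies only the target statistic, not GINGER's loading algorithm:

        # Target shot-noise statistic: <|b|^2> = 1/N_e for random electron phases.
        import numpy as np

        rng = np.random.default_rng(1)
        N_e, trials = 10_000, 500
        b2 = [abs(np.mean(np.exp(-1j * 2 * np.pi * rng.random(N_e)))) ** 2
              for _ in range(trials)]
        print(np.mean(b2), 1.0 / N_e)   # both ~1e-4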

  19. Three-dimensional fuel pin model validation by prediction of hydrogen distribution in cladding and comparison with experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aly, A.; Avramova, Maria; Ivanov, Kostadin

    To correctly describe and predict the hydrogen distribution in the cladding, there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. High-fidelity reactor-physics codes coupled with a sub-channel code as well as with a computational fluid dynamics (CFD) tool have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code through a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated utilizing calculations of hydrogen distribution using models informed by data from hydrogen experiments and post-irradiation examination (PIE) data.

  20. Criticality Calculations with MCNP6 - Practical Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2016-11-29

    These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The following are the lecture topics: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, criticality accident alarm systems. After completion of this course, you should be able to: Develop an input model for MCNP; Describe how cross section data impact Monte Carlo and deterministic codes; Describe the importance of validation of computer codes and how it is accomplished; Describe the methodology supporting Monte Carlo codes and deterministic codes; Describe pitfalls of Monte Carlo calculations; Discuss the strengths and weaknesses of Monte Carlo and Discrete Ordinates codes. The diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present; in the context of these limitations, identify a fissile system for which a diffusion theory solution would be adequate.

  1. Compact configurations within small evolving groups of galaxies

    NASA Astrophysics Data System (ADS)

    Mamon, G. A.

    Small virialized groups of galaxies are evolved with a gravitational N-body code, where the galaxies and a diffuse background are treated as single particles, but with mass and luminosity profiles attached, which enables the estimation of parameters such as internal energies, half-mass radii, and the softened potential energies of interaction. The numerical treatment includes mergers, collisional stripping, tidal limitation by the mean field of the background (evaluated using a combination of instantaneous and impulsive formulations), galaxy heating from collisions, and background heating from dynamical friction. The groups start out either as dense as the groups in Hickson's (1982) catalog or as loose as those in Turner and Gott's (1976a) catalog, and they are simulated many times (usually 20) with different initial positions and velocities. Dense groups of galaxies with massive dark haloes coalesce into a single galaxy and lose their compact group appearance in approximately 3 group half-mass crossing times, while dense groups of galaxies without massive haloes survive the merger instability for 15 half-mass crossing times (in a more massive background to keep the same total group mass).

  2. Advanced NDE research in electromagnetic, thermal, and coherent optics

    NASA Technical Reports Server (NTRS)

    Skinner, S. Ballou

    1992-01-01

    A new inspection technology called magneto-optic/eddy current imaging was investigated. The magneto-optic imager makes irregularities and inconsistencies in airframe components readily visible. Other research observed in electromagnetics included (1) disbond detection via resonant modal analysis; (2) AC magnetic field frequency dependence of magnetoacoustic emission; and (3) multi-view magneto-optic imaging. Research observed in the thermal group included (1) thermographic detection and characterization of corrosion in aircraft aluminum; (2) a multipurpose infrared imaging system for thermoelastic stress detection; (3) thermal diffusivity imaging of stress-induced damage in composites; and (4) detection and measurement of ice formation on the space shuttle main fuel tank. Research observed in the optics group included advancements in optical nondestructive evaluation (NDE).

  3. Whither Risk Assessment: New Challenges and Opportunities a Third of a Century After the Red Book.

    PubMed

    Greenberg, Michael; Goldstein, Bernard D; Anderson, Elizabeth; Dourson, Michael; Landis, Wayne; North, D Warner

    2015-11-01

    Six members of SRA, each with multiple decades of involvement, reflect on the 1983 Red Book in order to examine the evolving relationship between risk assessment and risk management; the diffusion of risk assessment practice to risk areas such as homeland security and transportation; the quality of chemical risk databases; challenges from other groups to elements at the core of risk assessment practice; and our collective efforts to communicate risk assessment to a diverse set of critical groups that do not understand risk, risk assessment, or many other risk-related issues. The authors reflect on the 10 recommendations in the Red Book and present several pressing challenges for risk assessment practitioners. © 2015 Society for Risk Analysis.

  4. [Joint correction for motion artifacts and off-resonance artifacts in multi-shot diffusion magnetic resonance imaging].

    PubMed

    Wu, Wenchuan; Fang, Sheng; Guo, Hua

    2014-06-01

    To address motion artifacts and off-resonance artifacts in multi-shot diffusion magnetic resonance imaging (MRI), we propose in this paper a joint correction method that corrects the two kinds of artifacts simultaneously, without additional acquisition of navigation data or a field map. We utilized the proposed method with a multi-shot variable-density spiral sequence to acquire MRI data and used an auto-focusing technique for image deblurring. We also used a direct method or an iterative method to correct motion-induced phase errors in the process of deblurring. In vivo MRI experiments demonstrated that the proposed method can effectively suppress motion artifacts and off-resonance artifacts and achieve images with fine structures. In addition, the scan time was not increased in applying the proposed method.

  5. Coarse mesh and one-cell block inversion based diffusion synthetic acceleration

    NASA Astrophysics Data System (ADS)

    Kim, Kang-Seog

    DSA (Diffusion Synthetic Acceleration) has been developed to accelerate the SN transport iteration. We have developed solution techniques for the diffusion equations of FLBLD (Fully Lumped Bilinear Discontinuous), SCB (Simple Corner Balance) and UCB (Upstream Corner Balance) modified 4-step DSA in x-y geometry. Our first multi-level method includes a block Gauss-Seidel iteration for the discontinuous diffusion equation, uses the continuous diffusion equation derived from the asymptotic analysis, and avoids void cell calculation. We implemented this multi-level procedure and performed model problem calculations. The results showed that the FLBLD, SCB and UCB modified 4-step DSA schemes with this multi-level technique are unconditionally stable and rapidly convergent. We suggested a simplified multi-level technique for FLBLD, SCB and UCB modified 4-step DSA. This new procedure does not include iterations on the diffusion calculation or the residual calculation. Fourier analysis results showed that this new procedure was as rapidly convergent as conventional modified 4-step DSA. We developed new DSA procedures coupled with 1-CI (one-cell block inversion) transport which can be easily parallelized. We showed that 1-CI based DSA schemes preceded by SI (Source Iteration) are efficient and rapidly convergent for LD (Linear Discontinuous) and LLD (Lumped Linear Discontinuous) in slab geometry and for BLD (Bilinear Discontinuous) and FLBLD in x-y geometry. For 1-CI based DSA without SI in slab geometry, the results showed that this procedure is very efficient and effective for all cases. We also showed that 1-CI based DSA in x-y geometry was not effective for thin mesh spacings, but is effective and rapidly convergent for intermediate and thick mesh spacings. We demonstrated that the diffusion equation discretized on a coarse mesh could be employed to accelerate the transport equation. Our results showed that coarse mesh DSA is unconditionally stable and is as rapidly convergent as fine mesh DSA in slab geometry. For x-y geometry our coarse mesh DSA is very effective for thin and intermediate mesh spacings independent of the scattering ratio, but is not effective for purely scattering problems and high aspect ratio zoning. However, if the scattering ratio is less than about 0.95, this procedure is very effective for all mesh spacings.
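
    As a concrete illustration of the interplay between source iteration (SI) and DSA discussed above, the following Python sketch accelerates one-group source iteration in a homogeneous slab with a cell-centered diffusion solve for the additive correction. It is a minimal construction under simplifying assumptions (diamond differencing, vacuum boundaries, correction pinned to zero outside the slab), not the discontinuous discretizations analyzed in the record:

      import numpy as np

      # One-group source iteration (SI) in slab geometry, accelerated by a
      # cell-centered DSA step. Homogeneous slab, vacuum boundaries, diamond
      # differencing; all data are illustrative, scattering ratio c = 0.99.
      L, nx, n_mu = 10.0, 200, 8
      sig_t, sig_s, q = 1.0, 0.99, 1.0
      dx = L / nx
      mu, w = np.polynomial.legendre.leggauss(n_mu)   # weights sum to 2 on [-1, 1]

      def sweep(phi):
          # One discrete-ordinates transport sweep; returns the new scalar flux.
          S = 0.5 * (sig_s * phi + q)                 # isotropic cell source
          phi_new = np.zeros(nx)
          for m in range(n_mu):
              am = abs(mu[m]) / dx
              psi = 0.0                               # vacuum inflow
              cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
              for i in cells:
                  psi_out = (S[i] + (am - 0.5 * sig_t) * psi) / (am + 0.5 * sig_t)
                  phi_new[i] += w[m] * 0.5 * (psi + psi_out)
                  psi = psi_out
          return phi_new

      def dsa(residual):
          # Solve -D f'' + sig_a f = sig_s * residual, f = 0 outside the slab.
          D, sig_a = 1.0 / (3.0 * sig_t), sig_t - sig_s
          A = np.zeros((nx, nx))
          for i in range(nx):
              A[i, i] = 2.0 * D / dx**2 + sig_a
              if i > 0:
                  A[i, i - 1] = -D / dx**2
              if i < nx - 1:
                  A[i, i + 1] = -D / dx**2
          return np.linalg.solve(A, sig_s * residual)

      phi = np.zeros(nx)
      for it in range(200):
          phi_half = sweep(phi)
          phi_next = phi_half + dsa(phi_half - phi)   # synthetic correction
          if np.max(np.abs(phi_next - phi)) < 1e-8:
              phi = phi_next
              break
          phi = phi_next
      print("converged in", it + 1, "iterations; midplane flux =", phi[nx // 2])

    Without the dsa() correction the same loop needs hundreds of iterations at this scattering ratio, which is the behavior DSA is designed to cure.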

  6. High-Fidelity Thermal Radiation Models and Measurements for High-Pressure Reacting Laminar and Turbulent Flows

    DTIC Science & Technology

    2013-06-26

    flow code used (OpenFOAM) to include differential diffusion and cell-based stochastic RTE solvers. The models were validated by simulation of laminar...wavenumber selection is improved by about a factor of 10. (5) OpenFOAM Improvements for Laminar Flames: A laminar-diffusion combustion solver, taking into account the effects of differential diffusion, was developed within the open source CFD package OpenFOAM [18]. In addition, OpenFOAM was augmented to take

  7. Predicting multi-wall structural response to hypervelocity impact using the hull code

    NASA Technical Reports Server (NTRS)

    Schonberg, William P.

    1993-01-01

    Previously, multi-wall structures have been analyzed extensively, primarily through experiment, as a means of increasing the meteoroid/space debris impact protection of spacecraft. As structural configurations become more varied, the number of tests required to characterize their response increases dramatically. As an alternative to experimental testing, numerical modeling of high-speed impact phenomena is often used to predict the response of a variety of structural systems under different impact loading conditions. The results of comparing experimental tests to Hull Hydrodynamic Computer Code predictions are reported. Also, the results of a numerical parametric study of multi-wall structural response to hypervelocity cylindrical projectile impact are presented.

  8. Plasma kinetic effects on atomistic mix in one dimension and at structured interfaces (I)

    NASA Astrophysics Data System (ADS)

    Yin, L.; Albright, B. J.; Vold, E. L.; Taitano, W.; Chacon, L.; Simakov, A.

    2017-10-01

    Kinetic effects on interfacial mix are examined using VPIC simulations. In 1D, comparisons are made to the results of analytic theory in the small Knudsen number limit. While the bulk mixing properties of interfaces are in general agreement, differences arise near the low-concentration fronts during the early evolution of a sharp interface when the species' perpendicular scattering rate dominates over the slowing down rate. In kinetic simulations, the diffusion velocities can be larger than or comparable to the ion thermal speeds, and the Knudsen number can be large. Super-diffusive growth in mix widths (Δx ∝ t^a, where a ≥ 1/2) is seen before the transition to the slow diffusive process predicted from theory (a = 1/2). Mixing at interfaces leads to persistent, bulk, hydrodynamic features in the center of mass flow profiles as a result of diffusion and momentum conservation. These conclusions are drawn from VPIC results together with simulations from the RAGE hydrodynamics code with an implementation of diffusion and viscosity from theory and an implicit Vlasov-Fokker-Planck code iFP. In perturbed 2D and 3D interfaces, it is found that 1D ambipolarity is still valid and that initial perturbations flatten out on a few-ps time scale, implying that finite diffusivity and viscosity can slow instability growth in ICF and HED settings. Work supported by the LANL ASC and Science programs.

  9. A Loader for Executing Multi-Binary Applications on the Thinking Machines CM-5: It's Not Just for SPMD Anymore

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey C.

    1995-01-01

    The Thinking Machines CM-5 platform was designed to run single program, multiple data (SPMD) applications, i.e., to run a single binary across all nodes of a partition, with each node possibly operating on different data. Certain classes of applications, such as multi-disciplinary computational fluid dynamics codes, are facilitated by the ability to have subsets of the partition nodes running different binaries. In order to extend the CM-5 system software to permit such applications, a multi-program loader was developed. This system is based on the dld loader which was originally developed for workstations. This paper provides a high level description of dld, and describes how it was ported to the CM-5 to provide support for multi-binary applications. Finally, it elaborates how the loader has been used to implement the CM-5 version of MPIRUN, a portable facility for running multi-disciplinary/multi-zonal MPI (Message-Passing Interface Standard) codes.

  10. Influence of 4-week multi-strain probiotic administration on resting-state functional connectivity in healthy volunteers.

    PubMed

    Bagga, Deepika; Aigner, Christoph Stefan; Reichert, Johanna Louise; Cecchetto, Cinzia; Fischmeister, Florian Ph S; Holzer, Peter; Moissl-Eichinger, Christine; Schöpf, Veronika

    2018-05-30

    Experimental investigations in rodents have contributed significantly to our current understanding of the potential importance of gut microbiome and brain interactions for neurotransmitter expression, neurodevelopment, and behaviour. However, clinical evidence to support such interactions is still scarce. The present study used a double-blind, randomized, pre- and post-intervention assessment design to investigate the effects of a 4-week multi-strain probiotic administration on whole-brain functional and structural connectivity in healthy volunteers. Forty-five healthy volunteers were recruited for this study and were divided equally into three groups (PRP: probiotic, PLP: placebo, and CON: control). All the participants underwent resting-state functional MRI and diffusion MRI brain scans twice during the course of the study, at the beginning (time point 1) and after 4 weeks (time point 2). MRI data were acquired using a 3T whole-body MR system (Magnetom Skyra, Siemens, Germany). Functional connectivity (FC) changes were observed in the default mode network (DMN), salience network (SN), and middle and superior frontal gyrus network (MFGN) in the PRP group as compared to the PLP and CON groups. The PRP group showed a significant decrease in FC in the MFGN (in the frontal pole and frontal medial cortex) and in the DMN (in the frontal lobe) as compared to the CON and PLP groups, respectively. Further, a significant increase in FC in the SN (in the cingulate gyrus and precuneus cortex) was also observed in the PRP group as compared to the CON group. The significance threshold was set to p < 0.05, FWE corrected. No significant structural differences were observed between the three groups. This work provides new insights into the role of multi-strain probiotic administration in modulating behaviour, which is reflected as changes in FC in healthy volunteers. This study motivates future investigations into the role of probiotics in the context of major depression and stress disorders.

  11. Experiments with a Supersonic Multi-Channel Radial Diffuser.

    DTIC Science & Technology

    1980-09-01


  12. Computations of the three-dimensional flow and heat transfer within a coolant passage of a radial turbine blade

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Roelke, R. J.; Steinthorsson, E.

    1991-01-01

    A numerical code is developed for computing three-dimensional, turbulent, compressible flow within coolant passages of turbine blades. The code is based on a formulation of the compressible Navier-Stokes equations in a rotating frame of reference in which the velocity dependent variable is specified with respect to the rotating frame instead of the inertial frame. The algorithm employed to obtain solutions to the governing equations is a finite-volume LU algorithm that allows convection, source, as well as diffusion terms to be treated implicitly. In this study, all convection terms are upwind differenced by using flux-vector splitting, and all diffusion terms are centrally differenced. This paper describes the formulation and algorithm employed in the code. Some computed solutions for the flow within a coolant passage of a radial turbine are also presented.

  13. Computations of spray, fuel-air mixing, and combustion in a lean-premixed-prevaporized combustor

    NASA Technical Reports Server (NTRS)

    Dasgupta, A.; Li, Z.; Shih, T. I.-P.; Kundu, K.; Deur, J. M.

    1993-01-01

    A code was developed for computing the multidimensional flow, spray, combustion, and pollutant formation inside gas turbine combustors. The code developed is based on a Lagrangian-Eulerian formulation and utilizes an implicit finite-volume method. The focus of this paper is on the spray part of the code (both formulation and algorithm), and a number of issues related to the computation of sprays and fuel-air mixing in a lean-premixed-prevaporized combustor. The issues addressed include: (1) how grid spacings affect the diffusion of evaporated fuel, and (2) how spurious modes can arise through modelling of the spray in the Lagrangian computations. An upwind interpolation scheme is proposed to account for some effects of grid spacing on the artificial diffusion of the evaporated fuel. Also, some guidelines are presented to minimize errors associated with the spurious modes.

  14. NOR-USA Scientific Traverse of East Antarctica: Science and Logistics on a Three-Month Expedition Across Antarctica's Farthest Frontier

    NASA Technical Reports Server (NTRS)

    Albert, Mary R.

    2012-01-01

    Dr. Albert's current research is centered on transfer processes in porous media, including air-snow exchange in the Polar Regions and in soils in temperate areas. Her research includes field measurements, laboratory experiments, and theoretical modeling. Mary conducts field and laboratory measurements of the physical properties of natural terrain surfaces, including permeability, microstructure, and thermal conductivity. Mary uses the measurements to examine the processes of diffusion and advection of heat, mass, and chemical transport through snow and other porous media. She has developed numerical models for investigation of a variety of problems, from interstitial transport to freezing of flowing liquids. These models include a two-dimensional finite element code for air flow with heat, water vapor, and chemical transport in porous media, several multidimensional codes for diffusive transfer, as well as a computational fluid dynamics code for analysis of turbulent water flow in moving-boundary phase change problems.

  15. Multi-dimensional simulations of core-collapse supernova explosions with CHIMERA

    NASA Astrophysics Data System (ADS)

    Messer, O. E. B.; Harris, J. A.; Hix, W. R.; Lentz, E. J.; Bruenn, S. W.; Mezzacappa, A.

    2018-04-01

    Unraveling the core-collapse supernova (CCSN) mechanism is a problem that remains essentially unsolved despite more than four decades of effort. Spherically symmetric models with otherwise high physical fidelity generally fail to produce explosions, and it is widely accepted that CCSNe are inherently multi-dimensional. Progress in realistic modeling has occurred recently through the availability of petascale platforms and the increasing sophistication of supernova codes. We will discuss our most recent work on understanding neutrino-driven CCSN explosions employing multi-dimensional neutrino-radiation hydrodynamics simulations with the Chimera code. We discuss the inputs and resulting outputs from these simulations, the role of neutrino radiation transport, and the importance of multi-dimensional fluid flows in shaping the explosions. We also highlight the production of 48Ca in long-running Chimera simulations.

  16. A geometrical multi-scale numerical method for coupled hygro-thermo-mechanical problems in photovoltaic laminates.

    PubMed

    Lenarda, P; Paggi, M

    A comprehensive computational framework based on the finite element method for the simulation of coupled hygro-thermo-mechanical problems in photovoltaic laminates is herein proposed. While the thermo-mechanical problem takes place in the three-dimensional space of the laminate, moisture diffusion occurs in a two-dimensional domain represented by the polymeric layers and by the vertical channel cracks in the solar cells. Therefore, a geometrical multi-scale solution strategy is pursued by solving the partial differential equations governing heat transfer and thermo-elasticity in the three-dimensional space, and the partial differential equation for moisture diffusion in the two-dimensional domains. By exploiting a staggered scheme, the thermo-mechanical problem is solved first via a fully implicit solution scheme in space and time, with a specific treatment of the polymeric layers as zero-thickness interfaces whose constitutive response is governed by a novel thermo-visco-elastic cohesive zone model based on fractional calculus. Temperature and relative displacements along the domains where moisture diffusion takes place are then projected to the finite element model of diffusion, coupled with the thermo-mechanical problem by the temperature and crack opening dependent diffusion coefficient. The application of the proposed method to photovoltaic modules pinpoints two important physical aspects: (i) moisture diffusion in humidity freeze tests with a temperature dependent diffusivity is a much slower process than in the case of a constant diffusion coefficient; (ii) channel cracks through silicon solar cells significantly enhance moisture diffusion and electric degradation, as confirmed by experimental tests.

  17. Numerical method for angle-of-incidence correction factors for diffuse radiation incident photovoltaic modules

    DOE PAGES

    Marion, Bill

    2017-03-27

    Here, a numerical method is provided for solving the integral equation for the angle-of-incidence (AOI) correction factor for diffuse radiation incident on photovoltaic (PV) modules. The types of diffuse radiation considered include sky, circumsolar, horizon, and ground-reflected. The method permits PV module AOI characteristics to be addressed when calculating AOI losses associated with diffuse radiation. Pseudo code is provided to aid users in the implementation, and results are shown for PV modules with tilt angles from 0° to 90°. Diffuse AOI losses are greatest for small PV module tilt angles. Including AOI losses associated with the diffuse irradiance will improve predictions of PV system performance.
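
    The kind of integral being solved can be sketched compactly. The Python fragment below numerically integrates an assumed ASHRAE-style incidence-angle modifier, K(θ) = 1 - b0(1/cosθ - 1) with illustrative b0 = 0.05, over the sky dome visible to a module tilted β from horizontal; it is a stand-in for the record's pseudo code, not a reproduction of it:

      import numpy as np

      # AOI correction factor for isotropic sky diffuse radiation on a module
      # tilted beta from horizontal, by brute-force quadrature in module-fixed
      # coordinates (theta measured from the module normal).
      def iam(theta, b0=0.05):
          # Assumed ASHRAE-style modifier, clipped to [0, 1] at grazing angles.
          return np.clip(1.0 - b0 * (1.0 / np.cos(theta) - 1.0), 0.0, 1.0)

      def sky_diffuse_aoi_factor(beta_deg, n=400):
          beta = np.radians(beta_deg)
          theta = (np.arange(n) + 0.5) * (0.5 * np.pi / n)
          phi = (np.arange(2 * n) + 0.5) * (np.pi / n)
          TH, PH = np.meshgrid(theta, phi, indexing="ij")
          # Global z-component of each direction; > 0 means it points at the sky.
          dz = np.cos(TH) * np.cos(beta) - np.sin(TH) * np.cos(PH) * np.sin(beta)
          wgt = np.cos(TH) * np.sin(TH) * (dz > 0.0)  # isotropic radiance weight
          return (iam(TH) * wgt).sum() / wgt.sum()

      for tilt in (0, 30, 60, 90):
          print(tilt, "deg tilt ->", round(sky_diffuse_aoi_factor(tilt), 4))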

  18. A Radiation Chemistry Code Based on the Green's Functions of the Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Wu, Honglu

    2014-01-01

    Ionizing radiation produces several radiolytic species such as ·OH, e⁻aq, and H· when interacting with biological matter. Following their creation, radiolytic species diffuse and chemically react with biological molecules such as DNA. Despite years of research, many questions on DNA damage by ionizing radiation remain, notably on the indirect effect, i.e., the damage resulting from the reactions of the radiolytic species with DNA. To simulate DNA damage by ionizing radiation, we are developing a step-by-step radiation chemistry code that is based on the Green's functions of the diffusion equation (GFDE), which is able to follow the trajectories of all particles and their reactions with time. In recent years, simulations based on the GFDE have been used extensively in biochemistry, notably to simulate biochemical networks in time and space, and are often used as the "gold standard" to validate diffusion-reaction theories. The exact GFDE for partially diffusion-controlled reactions is difficult to use because of its complex form. Therefore, the radial Green's function, which is much simpler, is often used. Hence, much effort has been devoted to the sampling of the radial Green's functions, for which we have developed a sampling algorithm. This algorithm only yields the inter-particle distance vector length after a time step; the sampling of the deviation angle of the inter-particle vector is not taken into consideration. In this work, we show that the radial distribution is predicted by the exact radial Green's function. We also use a technique developed by Clifford et al. to generate the inter-particle vector deviation angles, knowing the inter-particle vector length before and after a time step. The results are compared with those predicted by the exact GFDE and by the analytical angular functions for free diffusion. This first step in the creation of the radiation chemistry code should help the understanding of the contribution of the indirect effect in the formation of DNA damage and double-strand breaks.
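
    For the special case of a freely diffusing, non-reacting pair, both the radial Green's function and the deviation-angle statistics can be checked by propagating the inter-particle vector directly in Cartesian coordinates. The Python sketch below does exactly that; the mutual diffusion coefficient, time step, and initial separation are illustrative, and this is a verification aid, not the record's sampling algorithm:

      import numpy as np

      # Free-diffusion check: displace the inter-particle vector of a
      # non-reacting pair, then compare the separation histogram with the
      # analytic radial Green's function. D is the mutual diffusion coefficient.
      rng = np.random.default_rng(1)
      D, dt, r0, n = 1.0, 0.01, 1.0, 200000

      vec = np.array([0.0, 0.0, r0]) + rng.normal(0.0, np.sqrt(2.0 * D * dt), (n, 3))
      r = np.linalg.norm(vec, axis=1)
      cos_dev = np.clip(vec[:, 2] / r, -1.0, 1.0)    # deviation from the z axis

      def radial_green(r, r0, D, t):
          # p(r|r0) = (r/r0)(4 pi D t)^(-1/2) [e^{-(r-r0)^2/4Dt} - e^{-(r+r0)^2/4Dt}]
          a = 4.0 * D * t
          return (r / r0) / np.sqrt(np.pi * a) * (
              np.exp(-((r - r0) ** 2) / a) - np.exp(-((r + r0) ** 2) / a))

      hist, edges = np.histogram(r, bins=50, density=True)
      mid = 0.5 * (edges[:-1] + edges[1:])
      print("max density residual:", np.max(np.abs(hist - radial_green(mid, r0, D, dt))))
      print("mean deviation angle [deg]:", np.degrees(np.arccos(cos_dev)).mean())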

  19. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1988-01-01

    During the period December 1, 1987 through May 31, 1988, progress was made in the following areas: construction of Multi-Dimensional Bandwidth Efficient Trellis Codes with MPSK modulation; performance analysis of Bandwidth Efficient Trellis Coded Modulation schemes; and performance analysis of Bandwidth Efficient Trellis Codes on Fading Channels.

  20. TOUGH+ v1.5 Core Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moridis, George J.

    TOUGH+ v1.5 is a numerical code for the simulation of multi-phase, multi-component flow and transport of mass and heat through porous and fractured media, and represents the third update of the code since its first release [Moridis et al., 2008]. TOUGH+ is a successor to the TOUGH2 [Pruess et al., 1991; 2012] family of codes for multi-component, multiphase fluid and heat flow developed at the Lawrence Berkeley National Laboratory. It is written in standard FORTRAN 95/2003, and can be run on any computational platform (workstations, PC, Macintosh). TOUGH+ v1.5 employs dynamic memory allocation, thus minimizing storage requirements. It has a completely modular structure, follows the tenets of Object-Oriented Programming (OOP), and involves the advanced features of FORTRAN 95/2003, i.e., modules, derived data types, the use of pointers, lists and trees, data encapsulation, defined operators and assignments, operator extension and overloading, use of generic procedures, and maximum use of the powerful intrinsic vector and matrix processing operations. TOUGH+ v1.5 is the core code for its family of applications, i.e., the part of the code that is common to all its applications. It provides a description of the underlying physics and thermodynamics of non-isothermal flow, of the mathematical and numerical approaches, as well as a detailed explanation of the general (common to all applications) input requirements, options, capabilities and output specifications. The core code cannot run by itself: it needs to be coupled with the code for the specific TOUGH+ application option that describes a particular type of problem. The additional input requirements specific to a particular TOUGH+ application option and related illustrative examples can be found in the corresponding User's Manual.

  1. Site Map | USDA Plant Hardiness Zone Map

    Science.gov Websites


  2. Nonnegative definite EAP and ODF estimation via a unified multi-shell HARDI reconstruction.

    PubMed

    Cheng, Jian; Jiang, Tianzi; Deriche, Rachid

    2012-01-01

    In High Angular Resolution Diffusion Imaging (HARDI), the Orientation Distribution Function (ODF) and Ensemble Average Propagator (EAP) are two important Probability Density Functions (PDFs) which reflect the water diffusion and fiber orientations. Spherical Polar Fourier Imaging (SPFI) is a recent model-free multi-shell HARDI method which estimates both EAP and ODF from the diffusion signals with multiple b values. As physical PDFs, ODFs and EAPs are nonnegative definite in their respective domains S2 and R3. However, existing ODF/EAP estimation methods like SPFI seldom consider this natural constraint. Although some works considered the nonnegative constraint on the given discrete samples of the ODF/EAP, the estimated ODF/EAP is not guaranteed to be nonnegative definite in the whole continuous domain. The Riemannian framework for ODFs and EAPs has been proposed via the square root parameterization based on pre-estimated ODFs and EAPs by other methods like SPFI. However, there is no work on how to estimate the square root of the ODF/EAP, called the wavefunction, directly from diffusion signals. In this paper, based on the Riemannian framework for ODFs/EAPs and the Spherical Polar Fourier (SPF) basis representation, we propose a unified model-free multi-shell HARDI method, named Square Root Parameterized Estimation (SRPE), to simultaneously estimate both the wavefunction of EAPs and the nonnegative definite ODFs and EAPs from diffusion signals. The experiments on synthetic data and real data showed that SRPE is more robust to noise and has better EAP reconstruction than SPFI, especially for EAP profiles at large radius.

  3. Automated frequency analysis of synchronous and diffuse sleep spindles.

    PubMed

    Huupponen, Eero; Saastamoinen, Antti; Niemi, Jukka; Virkkala, Jussi; Hasan, Joel; Värri, Alpo; Himanen, Sari-Leena

    2005-01-01

    Sleep spindles have different properties in different localizations in the cortex. The first main objective was to develop an amplitude-independent multi-channel spindle detection method. Secondly, the method was applied to study the anteroposterior frequency differences of pure synchronous (visible bilaterally, either frontopolarly or centrally) and diffuse (visible bilaterally both frontopolarly and centrally) sleep spindles. A previously presented spindle detector based on the fuzzy reasoning principle and a level detector were combined to form a multi-channel spindle detector. The spindle detector had a 76.17% true-positive rate and a 0.93% false-positive rate. Pure central spindles were faster and pure frontal spindles were slower than diffuse spindles measured simultaneously from both locations. The study of frequency relations of spindles might give new information about thalamocortical sleep spindle generating mechanisms. Copyright (c) 2005 S. Karger AG, Basel.

  4. Adapting the coping in deliberation (CODE) framework: a multi-method approach in the context of familial ovarian cancer risk management.

    PubMed

    Witt, Jana; Elwyn, Glyn; Wood, Fiona; Rogers, Mark T; Menon, Usha; Brain, Kate

    2014-11-01

    To test whether the coping in deliberation (CODE) framework can be adapted to a specific preference-sensitive medical decision: risk-reducing bilateral salpingo-oophorectomy (RRSO) in women at increased risk of ovarian cancer. We performed a systematic literature search to identify issues important to women during deliberations about RRSO. Three focus groups with patients (most were pre-menopausal and untested for genetic mutations) and 11 interviews with health professionals were conducted to determine which issues mattered in the UK context. Data were used to adapt the generic CODE framework. The literature search yielded 49 relevant studies, which highlighted various issues and coping options important during deliberations, including mutation status, risks of surgery, family obligations, physician recommendation, peer support and reliable information sources. Consultations with UK stakeholders confirmed most of these factors as pertinent influences on deliberations. Questions in the generic framework were adapted to reflect the issues and coping options identified. The generic CODE framework was readily adapted to a specific preference-sensitive medical decision, showing that coping and deliberation are linked in decisions about RRSO. Adapted versions of the CODE framework may be used to develop tailored decision support methods and materials in order to improve patient-centred care. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. Multi-subject Manifold Alignment of Functional Network Structures via Joint Diagonalization.

    PubMed

    Nenning, Karl-Heinz; Kollndorfer, Kathrin; Schöpf, Veronika; Prayer, Daniela; Langs, Georg

    2015-01-01

    Functional magnetic resonance imaging group studies rely on the ability to establish correspondence across individuals. This enables location specific comparison of functional brain characteristics. Registration is often based on morphology and does not take variability of functional localization into account. This can lead to a loss of specificity, or confounds when studying diseases. In this paper we propose multi-subject functional registration by manifold alignment via coupled joint diagonalization. The functional network structure of each subject is encoded in a diffusion map, where functional relationships are decoupled from spatial position. Two-step manifold alignment estimates initial correspondences between functionally equivalent regions. Then, coupled joint diagonalization establishes common eigenbases across all individuals, and refines the functional correspondences. We evaluate our approach on fMRI data acquired during a language paradigm. Experiments demonstrate the benefits in matching accuracy achieved by coupled joint diagonalization compared to previously proposed functional alignment approaches, or alignment based on structural correspondences.
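
    The diffusion-map encoding of a functional network can be sketched in a few lines: build an affinity matrix from a connectivity matrix, row-normalize it into a Markov operator, and take the leading non-trivial eigenvectors as coordinates. In the Python sketch below the random time series is a stand-in for per-subject fMRI data, and the kernel width and diffusion time are illustrative; the joint-diagonalization alignment step of the record is not reproduced here:

      import numpy as np

      # Diffusion-map embedding of a functional network.
      rng = np.random.default_rng(0)
      ts = rng.standard_normal((120, 50))           # 120 time points, 50 regions
      C = np.corrcoef(ts.T)                         # functional connectivity
      W = np.exp((C - 1.0) / 0.5)                   # affinity kernel, eps = 0.5
      P = W / W.sum(axis=1, keepdims=True)          # row-stochastic diffusion op.

      evals, evecs = np.linalg.eig(P)               # real up to round-off: W symmetric
      order = np.argsort(-evals.real)
      t = 2                                         # diffusion time
      # Skip the trivial constant eigenvector (eigenvalue 1) and embed in 2D.
      embedding = evecs.real[:, order[1:3]] * evals.real[order[1:3]] ** t
      print(embedding.shape)                        # (50, 2): one point per region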

  6. Self-consistent gyrokinetic modeling of neoclassical and turbulent impurity transport

    NASA Astrophysics Data System (ADS)

    Estève, D.; Sarazin, Y.; Garbet, X.; Grandgirard, V.; Breton, S.; Donnel, P.; Asahi, Y.; Bourdelle, C.; Dif-Pradalier, G.; Ehrlacher, C.; Emeriau, C.; Ghendrih, Ph.; Gillot, C.; Latu, G.; Passeron, C.

    2018-03-01

    Trace impurity transport is studied with the flux-driven gyrokinetic GYSELA code (Grandgirard et al 2016 Comput. Phys. Commun. 207 35). A reduced and linearized multi-species collision operator has been recently implemented, so that both neoclassical and turbulent transport channels can be treated self-consistently on an equal footing. In the Pfirsch-Schlüter regime that is probably relevant for tungsten, the standard expression for the neoclassical impurity flux is shown to be recovered from gyrokinetics with the employed collision operator. Purely neoclassical simulations of deuterium plasma with trace impurities of helium, carbon and tungsten lead to impurity diffusion coefficients, inward pinch velocities due to density peaking, and thermo-diffusion terms which quantitatively agree with neoclassical predictions and NEO simulations (Belli et al 2012 Plasma Phys. Control. Fusion 54 015015). The thermal screening factor appears to be less than predicted analytically in the Pfirsch-Schlüter regime, which can be detrimental to fusion performance. Finally, self-consistent nonlinear simulations have revealed that the tungsten impurity flux is not the sum of turbulent and neoclassical fluxes computed separately, as is usually assumed. The synergy partly results from the turbulence-driven in-out poloidal asymmetry of tungsten density. This result suggests the need for self-consistent simulations of impurity transport, i.e. including both turbulence and neoclassical physics, in view of quantitative predictions for ITER.

  7. An Extended Lagrangian Method

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    1995-01-01

    A unique formulation of describing fluid motion is presented. The method, referred to as the 'extended Lagrangian method,' is interesting from both theoretical and numerical points of view. The formulation offers accuracy in numerical solution by avoiding the numerical diffusion resulting from mixing of fluxes in the Eulerian description. The present method and the Arbitrary Lagrangian-Eulerian (ALE) method share a similar spirit: eliminating cross-streamline numerical diffusion. For this purpose, we suggest a simple grid constraint condition and utilize an accurate discretization procedure. This grid constraint is only applied to the transverse cell face parallel to the local stream velocity, and hence our method for steady-state problems naturally reduces to the streamline-curvature method, without explicitly solving the steady stream-coordinate equations formulated a priori. Unlike the Lagrangian method proposed by Loh and Hui, which is valid only for steady supersonic flows, the present method is general and capable of treating subsonic flows and supersonic flows as well as unsteady flows, simply by invoking in the same code an appropriate grid constraint suggested in this paper. The approach is found to be robust and stable. It automatically adapts to flow features without resorting to clustering, thereby maintaining rather uniform grid spacing throughout and a large time step. Moreover, the method is shown to resolve multi-dimensional discontinuities with a high level of accuracy, similar to that found in one-dimensional problems.

  8. Effects of a wavy neutral sheet on cosmic ray anisotropies

    NASA Technical Reports Server (NTRS)

    Kota, J.; Jokipii, J. R.

    1985-01-01

    The first results of a three-dimensional numerical code calculating cosmic ray anisotropies are presented. The code includes diffusion, convection, adiabatic cooling, and drift in an interplanetary magnetic field model containing a wavy neutral sheet. The 3-D model can reproduce all the principal observations for a reasonable set of parameters.

  9. Reaction-diffusion systems in natural sciences and new technology transfer

    NASA Astrophysics Data System (ADS)

    Keller, André A.

    2012-12-01

    Diffusion mechanisms in natural sciences and innovation management involve partial differential equations (PDEs). This is due to their spatio-temporal dimensions. Functional semi-discretized PDEs (with lattice spatial structures or time delays) may be even more adapted to real world problems. In the modeling process, PDEs can also formalize behaviors, such as the logistic growth of populations with migration, and the adopters’ dynamics of new products in innovation models. In biology, these events are related to variations in the environment, population densities and overcrowding, migration and spreading of humans, animals, plants and other cells and organisms. In chemical reactions, molecules of different species interact locally and diffuse. In the management of new technologies, the diffusion processes of innovations in the marketplace (e.g., the mobile phone) are a major subject. These innovation diffusion models refer mainly to epidemic models. This contribution introduces that modeling process by using PDEs and reviews the essential features of the dynamics and control in biological, chemical and new technology transfer. This paper is essentially user-oriented with basic nonlinear evolution equations, delay PDEs, several analytical and numerical methods for solving, different solutions, and with the use of mathematical packages, notebooks and codes. The computations are carried out by using the software Wolfram Mathematica®7, and C++ codes.
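
    A minimal concrete example of the model class discussed here, logistic growth with diffusion (the Fisher-KPP equation u_t = D u_xx + r u(1 - u)), can be integrated with an explicit finite-difference scheme. The Python sketch below uses illustrative parameters and crude no-flux boundaries; it is my illustration, not taken from the paper:

      import numpy as np

      # Explicit finite-difference integration of the Fisher-KPP equation,
      # the classic model of a logistically growing, spatially spreading
      # population (or of innovation adoption with spatial diffusion).
      D, r, L, nx, dt, steps = 1.0, 1.0, 100.0, 500, 0.01, 3000
      dx = L / nx
      assert D * dt / dx**2 < 0.5          # explicit stability limit

      u = np.zeros(nx)
      u[:10] = 1.0                         # population seeded at the left edge
      for _ in range(steps):
          lap = np.empty(nx)
          lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
          lap[0] = 2.0 * (u[1] - u[0]) / dx**2       # no-flux boundaries
          lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
          u += dt * (D * lap + r * u * (1.0 - u))

      # The front should advance at roughly the classical speed c = 2 sqrt(D r).
      print("front near x =", dx * np.argmax(u < 0.5),
            "; c * t =", 2.0 * np.sqrt(D * r) * steps * dt)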

  10. Toward uniform implementation of parametric map Digital Imaging and Communication in Medicine standard in multisite quantitative diffusion imaging studies.

    PubMed

    Malyarenko, Dariya; Fedorov, Andriy; Bell, Laura; Prah, Melissa; Hectors, Stefanie; Arlinghaus, Lori; Muzi, Mark; Solaiyappan, Meiyappan; Jacobs, Michael; Fung, Maggie; Shukla-Dave, Amita; McManus, Kevin; Boss, Michael; Taouli, Bachir; Yankeelov, Thomas E; Quarles, Christopher Chad; Schmainda, Kathleen; Chenevert, Thomas L; Newitt, David C

    2018-01-01

    This paper reports on results of a multisite collaborative project launched by the MRI subgroup of Quantitative Imaging Network to assess current capability and provide future guidelines for generating a standard parametric diffusion map Digital Imaging and Communication in Medicine (DICOM) in clinical trials that utilize quantitative diffusion-weighted imaging (DWI). Participating sites used a multivendor DWI DICOM dataset of a single phantom to generate parametric maps (PMs) of the apparent diffusion coefficient (ADC) based on two models. The results were evaluated for numerical consistency among models and true phantom ADC values, as well as for consistency of metadata with attributes required by the DICOM standards. This analysis identified missing metadata descriptive of the sources for detected numerical discrepancies among ADC models. Instead of the DICOM PM object, all sites stored ADC maps as DICOM MR objects, generally lacking designated attributes and coded terms for quantitative DWI modeling. Source-image reference, model parameters, ADC units and scale, deemed important for numerical consistency, were either missing or stored using nonstandard conventions. Guided by the identified limitations, the DICOM PM standard has been amended to include coded terms for the relevant diffusion models. Open-source software has been developed to support conversion of site-specific formats into the standard representation.

  11. Unstable vicinal crystal growth from cellular automata

    NASA Astrophysics Data System (ADS)

    Krasteva, A.; Popova, H.; KrzyŻewski, F.; Załuska-Kotur, M.; Tonchev, V.

    2016-03-01

    In order to study the unstable step motion on vicinal crystal surfaces we devise vicinal Cellular Automata. Each cell in the colony has a value equal to its height on the vicinal surface; initially the steps are regularly distributed. Another array keeps the adatoms, initially distributed randomly over the surface. The growth rule defines that each adatom at the right nearest-neighbor position of a (multi-)step attaches to it. The whole colony is updated at once and then time increases. This execution of the growth rule is followed by compensation of the consumed particles and by diffusional update(s) of the adatom population. Two principal sources of instability are employed: biased diffusion and an infinite inverse Ehrlich-Schwoebel barrier (iiSE). Since these factors are not opposed by step-step repulsion, the formation of multi-steps is observed, but in general the step bunches preserve a finite width. We monitor the developing surface patterns and quantify the observations by scaling laws, with focus on the eventual transition from a diffusion-limited to a kinetics-limited phenomenon. The time-scaling exponent of the bunch size N is 1/2 for the case of biased diffusion and 1/3 for the case of iiSE. Additional distinction is possible based on the time-scaling exponents of the sizes of multi-steps N_multi; these are 0.36÷0.4 (for biased diffusion) and 1/4 (iiSE).
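
    A one-dimensional toy version of such a vicinal cellular automaton is easy to state in code. The Python sketch below (sizes, coverage, and bias are illustrative assumptions, and the diffusion rule is deliberately simplified, with colliding adatoms merged) implements the attach, compensate, diffuse cycle described above:

      import numpy as np

      # Toy 1D vicinal cellular automaton: integer heights encode the stepped
      # surface, a boolean array holds adatoms, and each update attaches any
      # adatom at the right nearest-neighbor site of a (multi-)step.
      rng = np.random.default_rng(0)
      nsites, terrace, coverage, bias, nsteps = 400, 8, 0.2, 0.7, 2000

      h = -(np.arange(nsites) // terrace)          # regular descending staircase
      adatom = rng.random(nsites) < coverage       # random initial adatoms

      for _ in range(nsteps):
          # Site i sits just right of a step riser if h[i-1] > h[i].
          attach = adatom & (np.roll(h, 1) > h)
          h[attach] += 1                           # adatoms incorporate at steps
          adatom[attach] = False
          # Compensation: re-seed the consumed particles at random empty sites.
          empty = np.flatnonzero(~adatom)
          adatom[rng.choice(empty, size=int(attach.sum()), replace=False)] = True
          # Biased diffusion: each adatom hops right with probability `bias`.
          hops = np.where(rng.random(nsites) < bias, 1, -1)
          new = np.zeros(nsites, dtype=bool)
          for i in np.flatnonzero(adatom):
              new[(i + hops[i]) % nsites] = True   # collisions merge (toy rule)
          adatom = new

      risers = np.flatnonzero(np.roll(h, 1) > h)   # step (riser) positions
      print("risers:", risers.size, "mean terrace width:",
            nsites / max(risers.size, 1))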

  12. Snow Micro-Structure Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Micah Johnson, Andrew Slaughter

    PIKA is a MOOSE-based application for modeling micro-structure evolution of seasonal snow. The model will be useful for environmental, atmospheric, and climate scientists. Possible applications include application to energy balance models, ice sheet modeling, and avalanche forecasting. The model implements physics from published, peer-reviewed articles. The main purpose is to foster university and laboratory collaboration to build a larger multi-scale snow model using MOOSE. The main feature of the code is that it is implemented using the MOOSE framework, thus making features such as multiphysics coupling, adaptive mesh refinement, and parallel scalability native to the application. PIKA implements three equations: the phase-field equation for tracking the evolution of the ice-air interface within seasonal snow at the grain-scale; the heat equation for computing the temperature of both the ice and air within the snow; and the mass transport equation for monitoring the diffusion of water vapor in the pore space of the snow.

  13. DSMC Studies of the Richtmyer-Meshkov Instability

    NASA Astrophysics Data System (ADS)

    Gallis, M. A.; Koehler, T. P.; Torczynski, J. R.

    2014-11-01

    A new exascale-capable Direct Simulation Monte Carlo (DSMC) code, SPARTA, developed to be highly efficient on massively parallel computers, has extended the applicability of DSMC to challenging, transient three-dimensional problems in the continuum regime. Because DSMC inherently accounts for compressibility, viscosity, and diffusivity, it has the potential to improve the understanding of the mechanisms responsible for hydrodynamic instabilities. Here, the Richtmyer-Meshkov instability at the interface between two gases was studied parametrically using SPARTA. Simulations performed on Sequoia, an IBM Blue Gene/Q supercomputer at Lawrence Livermore National Laboratory, are used to investigate various Atwood numbers (0.33-0.94) and Mach numbers (1.2-12.0) for two-dimensional and three-dimensional perturbations. Comparisons with theoretical predictions demonstrate that DSMC accurately predicts the early-time growth of the instability. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  14. GIS Data Downloads | USDA Plant Hardiness Zone Map

    Science.gov Websites


  15. Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach

    PubMed Central

    Danyali, Habibiollah; Mertins, Alfred

    2011-01-01

    In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in the hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications. PMID:22606653

  16. A review of chemical methods for the selective sulfation and desulfation of polysaccharides.

    PubMed

    Bedini, Emiliano; Laezza, Antonio; Parrilli, Michelangelo; Iadonisi, Alfonso

    2017-10-15

    Sulfated polysaccharides are known to possess several biological activities, with their sulfation pattern acting as a code able to transmit functional information. Due to their high biological and biomedical importance, in the last two decades many reports on the chemical modification of their sulfate distribution as well as on the regioselective insertion of sulfate groups on non-sulfated polysaccharides appeared in literature. In this Review we have for the first time collected these reports together, categorizing them into three different classes: i) regioselective sulfation reactions, ii) regioselective desulfation reactions, iii) regioselective insertion of sulfate groups through multi-step strategies, and discussing their scope and limitations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Real-time data compression of broadcast video signals

    NASA Technical Reports Server (NTRS)

    Shalkauser, Mary Jo W. (Inventor); Whyte, Wayne A., Jr. (Inventor); Barnes, Scott P. (Inventor)

    1991-01-01

    A non-adaptive predictor, a nonuniform quantizer, and a multi-level Huffman coder are incorporated into a differential pulse code modulation system for coding and decoding broadcast video signals in real time.
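
    The chain described here, a fixed predictor, a nonuniform quantizer, and a Huffman coder, can be sketched end to end. The level table and toy scan line in the Python fragment below are illustrative assumptions, not the patented system's parameters:

      import heapq
      from collections import Counter

      LEVELS = [-64, -24, -8, -2, 0, 2, 8, 24, 64]     # nonuniform: fine near zero

      def quantize(e):
          # Nearest reconstruction level for prediction error e.
          return min(range(len(LEVELS)), key=lambda i: abs(LEVELS[i] - e))

      def dpcm_encode(samples):
          symbols, pred = [], 128                      # fixed initial prediction
          for s in samples:
              idx = quantize(s - pred)                 # quantize prediction error
              symbols.append(idx)
              pred = max(0, min(255, pred + LEVELS[idx]))  # decoder-tracked state
          return symbols

      def huffman_code(symbols):
          # Build codewords bottom-up; tuples carry (count, tiebreak, codes).
          heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(Counter(symbols).items())]
          heapq.heapify(heap)
          i = len(heap)
          while len(heap) > 1:
              n1, _, c1 = heapq.heappop(heap)
              n2, _, c2 = heapq.heappop(heap)
              merged = {s: "0" + c for s, c in c1.items()}
              merged.update({s: "1" + c for s, c in c2.items()})
              heapq.heappush(heap, (n1 + n2, i, merged))
              i += 1
          return heap[0][2]

      line = [128, 130, 133, 140, 180, 181, 179, 60, 58, 59, 61]  # toy scan line
      syms = dpcm_encode(line)
      bits = "".join(huffman_code(syms)[s] for s in syms)
      print(syms, len(bits), "bits vs", 8 * len(line), "uncoded")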

  18. A multi-commuted flow injection system with a multi-channel propulsion unit placed before detection: Spectrophotometric determination of ammonium.

    PubMed

    Oliveira, Sara M; Lopes, Teresa I M S; Tóth, Ildikó V; Rangel, António O S S

    2007-09-26

    A flow system with a multi-channel peristaltic pump placed before the solenoid valves is proposed to overcome some limitations attributed to multi-commuted flow injection systems: the negative pressure can lead to the formation of unwanted air bubbles and limits the use of devices for separation processes (gas diffusion, dialysis or ion-exchange). The proposed approach was applied to the colorimetric determination of ammonium nitrogen. In alkaline medium, ammonium is converted into ammonia, which diffuses over the membrane, causing a pH change and subsequently a colour change in the acceptor stream (bromothymol blue solution). The system allowed the re-circulation of the acceptor solution and was applied to ammonium determination in surface and tap water, providing relative standard deviations lower than 1.5%. A stopped-flow approach in the acceptor stream was adopted to attain a low quantification limit (42 μg L⁻¹) and a linear dynamic range of 50-1000 μg L⁻¹ with a determination rate of 20 h⁻¹.

  19. Telemetering and telecommunications research

    NASA Technical Reports Server (NTRS)

    Osborne, William P.

    1991-01-01

    The research center activities during the reporting period have been focused in three areas: (1) developing the necessary equipment and test procedures to support the testing of 8PSK-TCM through TDRSS from the WSGT; (2) extending the theoretical decoder work to higher speeds with a design goal of 600 Mbps at 2 bits/Hz; and (3) completing the initial phase of the CPFSK Multi-H research and determining what subsets (if any) of these coding schemes are useful in the TDRSS environment. The equipment for the WSGT TCM testing has been completed and is functioning in the lab at NMSU. Measured results to date indicate that the uncoded system with the modified HRD and NMSU symbol sync operates at 1 to 1.5 dB from theory when processing encoded 8PSK. The NMSU pragmatic decoder when combined with these units produces approximately 2.9 dB of coding gain at a BER of 10^-5. Our study of CPFSK with Multi-H coding has reached a critical stage. The principal conclusions reached in this activity are: (1) no scheme using Multi-H alone investigated by us or found in the literature produces power/bandwidth trades that are as good as TCM with filtered 8PSK; (2) when Multi-H is combined with convolutional coding, one can obtain better coding gain than with Multi-H alone but still no better power/bandwidth performance than TCM, and these gains are available only with complex receivers; (3) the only advantage we can find for the CPFSK schemes over filtered MPSK with TCM is that they are constant envelope (however, constant envelope is of no benefit in a multiple access channel and of questionable benefit in a single access channel since driving the TWT to saturation in this situation is generally acceptable); and (4) based upon these results the center's research program will focus on concluding the existing CPFSK studies.

  20. 10 CFR 434.512 - Internal loads.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE... Proposed Design or for calculation of Design Energy Cost. 512.2 Internal loads for multi-family high-rise residential buildings are prescribed in Tables 512.2.a and b, Multi-Family High Rise Residential Building...

  1. Investigation of the abnormal Zn diffusion phenomenon in III-V compound semiconductors induced by the surface self-diffusion of matrix atoms

    NASA Astrophysics Data System (ADS)

    Tang, Liangliang; Xu, Chang; Liu, Zhuming

    2017-01-01

    Zn diffusion in III-V compound semiconductors is commonly performed under group-V-rich conditions because the vapor pressure of the group-V atoms is relatively high. In this paper, we found that group-V atoms in the diffusion sources do not change the shape of the Zn profiles, while the Zn diffusion changes dramatically under group-III-rich conditions. Zn diffusion was investigated in typical III-V semiconductors: GaAs, GaSb and InAs. We found that under group-V-rich or pure-Zn conditions, double-hump Zn profiles are formed in all materials except InAs, while under group-III-rich conditions, single-hump Zn profiles are formed in all materials. Detailed diffusion models were established to explain the Zn diffusion process; the surface self-diffusion of matrix atoms is the origin of the abnormal Zn diffusion phenomenon.

  2. Testing a one-dimensional prescription of dynamical shear mixing with a two-dimensional hydrodynamic simulation

    NASA Astrophysics Data System (ADS)

    Edelmann, P. V. F.; Röpke, F. K.; Hirschi, R.; Georgy, C.; Jones, S.

    2017-07-01

    Context. The treatment of mixing processes is still one of the major uncertainties in 1D stellar evolution models. This is mostly due to the need to parametrize and approximate aspects of hydrodynamics in hydrostatic codes. In particular, the effect of hydrodynamic instabilities in rotating stars, for example, dynamical shear instability, evades consistent description. Aims: We intend to study the accuracy of the diffusion approximation to dynamical shear in hydrostatic stellar evolution models by comparing 1D models to a first-principle hydrodynamics simulation starting from the same initial conditions. Methods: We chose an initial model calculated with the stellar evolution code GENEC that is just at the onset of a dynamical shear instability but does not show any other instabilities (e.g., convection). This was mapped to the hydrodynamics code SLH to perform a 2D simulation in the equatorial plane. We compare the resulting profiles in the two codes and compute an effective diffusion coefficient for the hydro simulation. Results: Shear instabilities develop in the 2D simulation in the regions predicted by linear theory to become unstable in the 1D stellar evolution model. Angular velocity and chemical composition are redistributed in the unstable region, thereby creating new unstable regions. After a period of time, the system settles in a symmetric, steady state, which is Richardson stable everywhere in the 2D simulation, whereas the instability remains for longer in the 1D model due to the limitations of the current implementation in the 1D code. A spatially resolved diffusion coefficient is extracted by comparing the initial and final profiles of mean atomic mass. Conclusions: The presented simulation gives a first insight into the hydrodynamics of shear instabilities in a real stellar environment and even allows us to directly extract an effective diffusion coefficient. We see evidence for a critical Richardson number of 0.25, as regions above this threshold remain stable for the course of the simulation. The movie of the simulation is available at http://www.aanda.org
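
    The Richardson criterion invoked in the conclusions is simple to apply on a radial grid: flag zones where Ri = N^2 / (r dΩ/dr)^2 falls below 1/4. The profiles in the Python sketch below are synthetic stand-ins, not GENEC or SLH output:

      import numpy as np

      # Flag dynamical-shear unstable zones with the Richardson criterion.
      r = np.linspace(1.0e9, 5.0e9, 200)                      # radius [cm]
      N2 = np.full_like(r, 1.0e-6)                            # Brunt-Vaisala^2 [1/s^2]
      omega = 1.0e-3 * np.exp(-(((r - 3.0e9) / 3.0e8) ** 2))  # angular velocity [1/s]

      shear = r * np.gradient(omega, r)                       # shear rate [1/s]
      Ri = N2 / np.maximum(shear**2, 1.0e-300)
      unstable = Ri < 0.25                                    # critical Ri = 1/4
      print("unstable zones:", unstable.sum(), "of", r.size)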

  3. A Neutronic Program for Critical and Nonequilibrium Study of Mobile Fuel Reactors: The Cinsf1D Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lecarpentier, David; Carpentier, Vincent

    2003-01-15

    Molten salt reactors (MSRs) have the distinction of having a liquid fuel that is also the coolant. The transport of delayed-neutron precursors by the fuel modifies the precursor balance equation. As a consequence, it is necessary to adapt the methods currently used for solid-fuel reactors to perform criticality or kinetics calculations for an MSR. A program is presented for which this adaptation has been carried out within the framework of two-energy-group diffusion theory in one spatial dimension. This program has been called Cinsf1D (Cinetique pour reacteur a sels fondus 1D).
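
    The adaptation described here centers on adding an advection term to the delayed-neutron precursor balance, since the precursors are swept along by the circulating fuel salt. A standard one-dimensional form, written from general MSR kinetics as an illustration rather than quoted from the Cinsf1D paper, is

      \frac{\partial C_i}{\partial t} + \frac{\partial (u\, C_i)}{\partial z} = \beta_i\, \nu \Sigma_f\, \phi - \lambda_i\, C_i,

    where C_i is the concentration of precursor group i, u the fuel-salt velocity, and \lambda_i the decay constant; setting u = 0 recovers the familiar solid-fuel precursor equation.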

  4. Nebo: An efficient, parallel, and portable domain-specific language for numerically solving partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Earl, Christopher; Might, Matthew; Bagusetty, Abhishek

    This study presents Nebo, a declarative domain-specific language embedded in C++ for discretizing partial differential equations for transport phenomena on multiple architectures. Application programmers use Nebo to write code that appears sequential but can be run in parallel, without editing the code. Currently Nebo supports single-thread execution, multi-thread execution, and many-core (GPU-based) execution. With single-thread execution, Nebo performs on par with code written by domain experts. With multi-thread execution, Nebo can linearly scale (with roughly 90% efficiency) up to 12 cores, compared to its single-thread execution. Moreover, Nebo’s many-core execution can be over 140x faster than its single-thread execution.

  5. Nebo: An efficient, parallel, and portable domain-specific language for numerically solving partial differential equations

    DOE PAGES

    Earl, Christopher; Might, Matthew; Bagusetty, Abhishek; ...

    2016-01-26

    This study presents Nebo, a declarative domain-specific language embedded in C++ for discretizing partial differential equations for transport phenomena on multiple architectures. Application programmers use Nebo to write code that appears sequential but can be run in parallel, without editing the code. Currently Nebo supports single-thread execution, multi-thread execution, and many-core (GPU-based) execution. With single-thread execution, Nebo performs on par with code written by domain experts. With multi-thread execution, Nebo can linearly scale (with roughly 90% efficiency) up to 12 cores, compared to its single-thread execution. Moreover, Nebo’s many-core execution can be over 140x faster than its single-thread execution.
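
    Nebo itself is embedded in C++ and its actual operator syntax is not reproduced here; the Python sketch below only illustrates the backend-agnostic idea the abstract describes, namely that a stencil is written once and executed serially or in data-parallel form without editing the stencil definition. All names are hypothetical.

      # Hypothetical illustration of backend-agnostic stencil execution;
      # this is not Nebo's API, only the concept it embodies.
      import numpy as np

      def laplacian_1d(u, dx, backend="serial"):
          """Second-difference stencil written once, run by either backend."""
          out = np.zeros_like(u)
          if backend == "serial":        # explicit single-thread loop
              for i in range(1, len(u) - 1):
                  out[i] = (u[i-1] - 2.0*u[i] + u[i+1]) / dx**2
          elif backend == "vectorized":  # whole-array form; maps to SIMD/threads
              out[1:-1] = (u[:-2] - 2.0*u[1:-1] + u[2:]) / dx**2
          return out

      u = np.sin(np.linspace(0.0, np.pi, 64))
      assert np.allclose(laplacian_1d(u, 0.1), laplacian_1d(u, 0.1, "vectorized"))

    Both backends produce identical results, which is the property that lets application code remain unchanged as execution targets are swapped.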

  6. Selection and Evaluation of a Real Time Monitoring System for the Bigeye Bomb Fill/Close Production Facility. Phase 2

    DTIC Science & Technology

    1989-06-01

    [OCR residue from the DD Form 1473 report documentation page; only fragments of the abstract are recoverable, including "...source of by-products formation" and "Generating Data for Mathematical Modeling of Real Vapor Phase Reaction Systems (tremendously speeds multi-level, multi...".]

  7. Recent advances in multiview distributed video coding

    NASA Astrophysics Data System (ADS)

    Dufaux, Frederic; Ouaret, Mourad; Ebrahimi, Touradj

    2007-04-01

    We consider dense networks of surveillance cameras capturing overlapped images of the same scene from different viewing directions, such a scenario being referred to as multi-view. Data compression is paramount in such a system due to the large amount of captured data. In this paper, we propose a Multi-view Distributed Video Coding approach. It allows for low complexity / low power consumption at the encoder side, and the exploitation of inter-view correlation without communications among the cameras. We introduce a combination of temporal intra-view side information and homography inter-view side information. Simulation results show both the improvement of the side information, as well as a significant gain in terms of coding efficiency.

  8. Indications for spine surgery: validation of an administrative coding algorithm to classify degenerative diagnoses

    PubMed Central

    Lurie, Jon D.; Tosteson, Anna N.A.; Deyo, Richard A.; Tosteson, Tor; Weinstein, James; Mirza, Sohail K.

    2014-01-01

    Study Design Retrospective analysis of Medicare claims linked to a multi-center clinical trial. Objective The Spine Patient Outcomes Research Trial (SPORT) provided a unique opportunity to examine the validity of a claims-based algorithm for grouping patients by surgical indication. SPORT enrolled patients for lumbar disc herniation, spinal stenosis, and degenerative spondylolisthesis. We compared the surgical indication derived from Medicare claims to that provided by SPORT surgeons, the “gold standard”. Summary of Background Data Administrative data are frequently used to report procedure rates, surgical safety outcomes, and costs in the management of spinal surgery. However, the accuracy of using diagnosis codes to classify patients by surgical indication has not been examined. Methods Medicare claims were linked to beneficiaries enrolled in SPORT. The sensitivity and specificity of three claims-based approaches to grouping patients by surgical indication were examined: 1) using the first listed diagnosis; 2) using all diagnoses independently; and 3) using a diagnosis hierarchy based on the support for fusion surgery. Results Medicare claims were obtained from 376 SPORT participants, including 21 with disc herniation, 183 with spinal stenosis, and 172 with degenerative spondylolisthesis. The hierarchical coding algorithm was the most accurate approach for classifying patients by surgical indication, with sensitivities of 76.2%, 88.1%, and 84.3% for the disc herniation, spinal stenosis, and degenerative spondylolisthesis cohorts, respectively. The specificity was 98.3% for disc herniation, 83.2% for spinal stenosis, and 90.7% for degenerative spondylolisthesis. Misclassifications were primarily due to codes attributing more complex pathology to the case. Conclusion Standardized approaches for using claims data to accurately group patients by surgical indication have widespread interest. We found that a hierarchical coding approach correctly classified over 90% of spine patients into their respective SPORT cohorts. Therefore, claims data appear to be a reasonably valid basis for classifying patients by surgical indication. PMID:24525995
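
    A minimal sketch of what a hierarchical coding algorithm of this kind can look like, with entirely hypothetical diagnosis-code groupings (the actual SPORT code lists are defined in the paper): each claim is assigned to the most complex pathology present, so spondylolisthesis outranks stenosis, which outranks herniation.

      # Hypothetical sketch of a hierarchical claims classifier; the code
      # sets below are placeholders, not the algorithm's actual ICD-9 lists.
      SPONDYLOLISTHESIS = {"738.4"}   # placeholder diagnosis codes
      STENOSIS = {"724.02"}
      HERNIATION = {"722.10"}

      def classify(diagnosis_codes):
          """Walk the hierarchy from most to least complex pathology."""
          codes = set(diagnosis_codes)
          if codes & SPONDYLOLISTHESIS:
              return "degenerative spondylolisthesis"
          if codes & STENOSIS:
              return "spinal stenosis"
          if codes & HERNIATION:
              return "disc herniation"
          return "unclassified"

      print(classify(["722.10", "724.02"]))  # stenosis outranks herniation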

  9. Multi-level Expression Design Language: Requirement level (MEDL-R) system evaluation

    NASA Technical Reports Server (NTRS)

    1980-01-01

    An evaluation of the Multi-Level Expression Design Language Requirements Level (MEDL-R) system was conducted to determine whether it would be of use in the Goddard Space Flight Center Code 580 software development environment. The evaluation is based upon a study of the MEDL-R concept of requirement languages, the functions performed by MEDL-R, and the MEDL-R language syntax. Recommendations are made for changes to MEDL-R that would make it useful in the Code 580 environment.

  10. Diversity-optimal power loading for intensity modulated MIMO optical wireless communications.

    PubMed

    Zhang, Yan-Yu; Yu, Hong-Yi; Zhang, Jian-Kang; Zhu, Yi-Jun

    2016-04-18

    In this paper, we consider the design of a space code for an intensity-modulated direct-detection multi-input multi-output optical wireless communication (IM/DD MIMO-OWC) system, in which channel coefficients are independent and non-identically log-normal distributed, with variances and means known at the transmitter and channel state information available at the receiver. Utilizing the existing space code design criterion for IM/DD MIMO-OWC with a maximum likelihood (ML) detector, we design a diversity-optimal space code (DOSC) that maximizes both large-scale and small-scale diversity gains and prove that the spatial repetition code (RC) with a diversity-optimized power allocation is diversity-optimal among all the high-dimensional nonnegative space code schemes under a commonly used optical power constraint. In addition, we show that one of the significant advantages of the DOSC is that it allows low-complexity ML detection. Simulation results indicate that in high signal-to-noise ratio (SNR) regimes, our proposed DOSC significantly outperforms RC, which is the best space code currently available for such a system.

  11. Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations

    NASA Technical Reports Server (NTRS)

    Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.

    2015-01-01

    Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.

  12. A grid generation system for multi-disciplinary design optimization

    NASA Technical Reports Server (NTRS)

    Jones, William T.; Samareh-Abolhassani, Jamshid

    1995-01-01

    A general multi-block three-dimensional volume grid generator is presented which is suitable for Multi-Disciplinary Design Optimization. The code is timely, robust, highly automated, and written in ANSI 'C' for platform independence. Algebraic techniques are used to generate and/or modify block face and volume grids to reflect geometric changes resulting from design optimization. Volume grids are generated/modified in a batch environment and controlled via an ASCII user input deck. This allows the code to be incorporated directly into the design loop. Generated volume grids are presented for a High Speed Civil Transport (HSCT) Wing/Body geometry as well as a complex HSCT configuration including horizontal and vertical tails, engine nacelles and pylons, and canard surfaces.

  13. Soft-information flipping approach in multi-head multi-track BPMR systems

    NASA Astrophysics Data System (ADS)

    Warisarn, C.; Busyatras, W.; Myint, L. M. M.

    2018-05-01

    Inter-track interference is one of the most severe impairments in bit-patterned media recording systems. This impairment can be effectively handled by a modulation code and a multi-head array jointly processing multiple tracks; however, such a modulation constraint has never been utilized to improve the soft information. Therefore, this paper proposes the utilization of modulation codes with an encoded constraint defined by the criteria for soft-information flipping during a three-track data detection process. Moreover, we also investigate the optimal offset position of the read heads that provides the greatest improvement in system performance. The simulation results indicate that the proposed systems, with and without position jitter, are significantly superior to uncoded systems.

  14. Multi-dimensional simulations of core-collapse supernova explosions with CHIMERA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messer, Bronson; Harris, James Austin; Hix, William Raphael

    Unraveling the core-collapse supernova (CCSN) mechanism is a problem that remains essentially unsolved despite more than four decades of effort. Spherically symmetric models with otherwise high physical fidelity generally fail to produce explosions, and it is widely accepted that CCSNe are inherently multi-dimensional. Progress in realistic modeling has occurred recently through the availability of petascale platforms and the increasing sophistication of supernova codes. We will discuss our most recent work on understanding neutrino-driven CCSN explosions employing multi-dimensional neutrino-radiation hydrodynamics simulations with the Chimera code. We discuss the inputs and resulting outputs from these simulations, the role of neutrino radiation transport, and the importance of multi-dimensional fluid flows in shaping the explosions. We also highlight the production of 48Ca in long-running Chimera simulations.

  15. Characterization of a hybrid target multi-keV x-ray source by a multi-parameter statistical analysis of titanium K-shell emission

    DOE PAGES

    Primout, M.; Babonneau, D.; Jacquet, L.; ...

    2015-11-10

    We studied the titanium K-shell emission spectra from multi-keV x-ray source experiments with hybrid targets on the OMEGA laser facility. Using the collisional-radiative TRANSPEC code, dedicated to K-shell spectroscopy, we reproduced the main features of the detailed spectra measured with the time-resolved MSPEC spectrometer. We developed a general method to infer the N_e, T_e, and T_i characteristics of the target plasma from the spectral analysis (ratio of integrated Lyman-α to Helium-α in-band emission and the peak amplitude of individual line ratios) of the multi-keV x-ray emission. Finally, these thermodynamic conditions are compared to those calculated independently by the radiation-hydrodynamics transport code FCI2.

  16. Field-scale multi-phase LNAPL remediation: Validating a new computational framework against sequential field pilot trials.

    PubMed

    Sookhak Lari, Kaveh; Johnston, Colin D; Rayner, John L; Davis, Greg B

    2018-03-05

    Remediation of subsurface systems, including groundwater, soil and soil gas, contaminated with light non-aqueous phase liquids (LNAPLs) is challenging. Field-scale pilot trials of multi-phase remediation were undertaken at a site to determine the effectiveness of recovery options. Sequential LNAPL skimming and vacuum-enhanced skimming, with and without water table drawdown, were trialled over 78 days, in total extracting over 5 m³ of LNAPL. For the first time, a multi-component simulation framework (including the multi-phase multi-component code TMVOC-MP and processing codes) was developed and applied to simulate the broad range of multi-phase remediation and recovery methods used in the field trials. This framework was validated against the sequential pilot trials by comparing predicted and measured LNAPL mass removal rates and compositional changes. The framework was tested on both a Cray supercomputer and a cluster. Simulations mimicked trends in LNAPL recovery rates (from 0.14 to 3 mL/s) across all remediation techniques, each operating over periods of 4-14 days within the 78-day trial. The code also approximated order-of-magnitude compositional changes of hazardous chemical concentrations in extracted gas during vacuum-enhanced recovery. The verified framework enables longer-term prediction of the effectiveness of remediation approaches, allowing better determination of remediation endpoints and long-term risks.

  17. Coding and transmission of subband coded images on the Internet

    NASA Astrophysics Data System (ADS)

    Wah, Benjamin W.; Su, Xiao

    2001-09-01

    Subband-coded images can be transmitted in the Internet using either the TCP or the UDP protocol. Delivery by TCP gives superior decoding quality but with very long delays when the network is unreliable, whereas delivery by UDP has negligible delays but with degraded quality when packets are lost. Although images are delivered currently over the Internet by TCP, we study in this paper the use of UDP to deliver multi-description reconstruction-based subband-coded images. First, in order to facilitate recovery from UDP packet losses, we propose a joint sender-receiver approach for designing optimized reconstruction-based subband transform (ORB-ST) in multi-description coding (MDC). Second, we carefully evaluate the delay-quality trade-offs between the TCP delivery of SDC images and the UDP and combined TCP/UDP delivery of MDC images. Experimental results show that our proposed ORB-ST performs well in real Internet tests, and UDP and combined TCP/UDP delivery of MDC images provide a range of attractive alternatives to TCP delivery.
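
    The ORB-ST design itself is not reproduced here; the sketch below only illustrates the generic multi-description principle the paper builds on: each UDP packet carries one independently decodable description, and a lost description is estimated from the one that arrives. Polyphase (even/odd column) splitting is used purely as a stand-in for the optimized subband transform.

      # Generic two-description coding illustration (not the ORB-ST transform).
      import numpy as np

      def make_descriptions(img):
          """Split an image into two independently decodable descriptions."""
          return img[:, 0::2], img[:, 1::2]

      def reconstruct(d_even, d_odd=None):
          """Merge both descriptions, or fill in a description lost in transit."""
          out = np.zeros((d_even.shape[0], 2 * d_even.shape[1]))
          out[:, 0::2] = d_even
          out[:, 1::2] = d_odd if d_odd is not None else d_even  # neighbor fill
          return out

      img = np.random.rand(8, 8)
      full = reconstruct(*make_descriptions(img))         # both packets arrived
      degraded = reconstruct(make_descriptions(img)[0])   # odd description lost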

  18. Altered white matter microstructure in adolescent substance users.

    PubMed

    Bava, Sunita; Frank, Lawrence R; McQueeny, Tim; Schweinsburg, Brian C; Schweinsburg, Alecia D; Tapert, Susan F

    2009-09-30

    Chronic marijuana use during adolescence is frequently comorbid with heavy alcohol consumption and associated with CNS alterations, yet the influence of early cannabis and alcohol use on microstructural white matter integrity is unclear. Building on evidence that cannabinoid receptors are present in myelin precursors and affect glial cell processing, and that excessive ethanol exposure is associated with persistently impaired myelination, we used diffusion tensor imaging (DTI) to characterize white matter integrity in heavy substance using and non-using adolescents. We evaluated 36 marijuana and alcohol-using (MJ+ALC) adolescents (ages 16-19) and 36 demographically similar non-using controls with DTI. The diffusion parameters fractional anisotropy (FA) and mean diffusivity (MD) were subjected to whole-brain voxelwise group comparisons using tract-based spatial statistics (Smith, S.M., Jenkinson, M., Johansen-Berg, H., Rueckert, D., Nichols, T.E., Mackay, C.E., Watkins, K.E., Ciccarelli, O., Cader, M.Z., Matthews, P.M., Behrens, T.E., 2006. Tract-based spatial statistics: voxelwise analysis of multi-subject diffusion data. Neuroimage 31, 1487-1505). MJ+ALC teens had significantly lower FA than controls in 10 regions, including left superior longitudinal fasciculus (SLF), left postcentral gyrus, bilateral crus cerebri, and inferior frontal and temporal white matter tracts. These diminutions occurred in the context of increased FA in right occipital, internal capsule, and SLF regions. Changes in MD were less distributed, but increased MD was evident in the right occipital lobe, whereas the left inferior longitudinal fasciculus showed lower MD in MJ+ALC users. Findings suggest that fronto-parietal circuitry may be particularly impacted in adolescent users of the most prevalent intoxicants: marijuana and alcohol. Disruptions to white matter in this young group could indicate aberrant axonal and myelin maturation with resultant compromise of fiber integrity. Findings of increased anisotropic diffusion in alternate brain regions suggest possible neuroadaptive processes and can be examined in future studies of connectivity to determine how aberrancies in specific tracts might influence efficient cognitive processing.

  19. Deep Hashing for Scalable Image Search.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2017-05-01

    In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for scalable image search. Unlike most existing binary codes learning methods, which usually seek a single linear projection to map each sample into a binary feature vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the non-linear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the developed deep network: 1) the loss between the compact real-valued code and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) and multi-label SDH by including a discriminative term in the objective function of DH, which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes with the single-label and multi-label settings, respectively. Extensive experimental results on eight widely used image search data sets show that our proposed methods achieve very competitive results with the state of the art.
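
    The three top-layer constraints translate naturally into penalty terms. A minimal numpy sketch of the stated objective (an illustration, not the authors' implementation):

      import numpy as np

      def dh_penalties(H):
          """H: n x k real-valued top-layer codes; returns the three penalties."""
          B = np.sign(H)                              # learned binary codes
          n, k = H.shape
          quantization = np.sum((H - B) ** 2)         # 1) real vs. binary loss
          balance = np.sum(np.mean(B, axis=0) ** 2)   # 2) each bit near 50/50
          decorrelation = np.linalg.norm(B.T @ B / n - np.eye(k)) ** 2  # 3) independence
          return quantization, balance, decorrelation

      print(dh_penalties(np.random.randn(100, 16)))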

  20. Long distance quantum communication with quantum Reed-Solomon codes

    NASA Astrophysics Data System (ADS)

    Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang; Jianggroup Team

    We study the construction of quantum Reed-Solomon codes from classical Reed-Solomon codes and show that they achieve the capacity of the quantum erasure channel for multi-level quantum systems. We extend the application of quantum Reed-Solomon codes to long distance quantum communication, investigate the local resource overhead needed for the functioning of one-way quantum repeaters with these codes, and numerically identify the parameter regime where these codes perform better than the known quantum polynomial codes and quantum parity codes. Finally, we discuss the implementation of these codes into time-bin photonic states of qubits and qudits, respectively, and optimize the performance for one-way quantum repeaters.

  1. Diffusion Behavior of Mn and Si Between Liquid Oxide Inclusions and Solid Iron-Based Alloy at 1473 K

    NASA Astrophysics Data System (ADS)

    Kim, Sun-Joong; Tago, Hanae; Kim, Kyung-Ho; Kitamura, Shin-ya; Shibata, Hiroyuki

    2018-06-01

    In order to clarify the changes in the composition of oxide inclusions in steel, the effect of the metal and oxide compositions on the reaction between solid Fe-based alloys and liquid multi-component oxides was investigated using the diffusion couple method at 1473 K. The measured concentration gradients of Mn and Si in the metal indicated that Mn diffused into the metal from the oxide, while the diffusion of Si occurred in the opposite direction. In addition, the MnO content in the oxide decreased with heat treatment time, while the SiO2 content increased. The compositional changes in both phases indicated that the Mn content in the metal near the interface increased during heat treatment as the MnO content in the oxide decreased. Assuming local equilibrium at the interface, the calculated [Mn]2/[Si] ratio at the interface in equilibrium with the oxide increased with increases in the MnO/SiO2 ratio in the oxide. The difference in the [Mn]2/[Si] ratios between the interface and the metal matrix increased, which drove the diffusion of Mn and Si between the multi-component oxide and the metal. By measuring the diffusion lengths of Mn and Si in the metal, the chemical diffusion coefficients of Mn and Si were obtained and used to calculate the composition changes of Mn and Si in the metal. The calculated changes in Mn and Si in the metal agreed with the experimental results.
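
    The local-equilibrium argument corresponds to the familiar slag-metal exchange reaction (standard steelmaking thermodynamics, given here for context rather than quoted from the paper):

      (SiO_2) + 2\,[Mn] \rightleftharpoons 2\,(MnO) + [Si], \qquad K = \frac{a_{MnO}^{2}\, a_{[Si]}}{a_{SiO_2}\, a_{[Mn]}^{2}},

    so at fixed K the interfacial ratio [Mn]^2/[Si] rises with the MnO/SiO2 ratio of the oxide, in line with the observed direction of Mn and Si diffusion.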

  2. Nonlinear dynamic simulation of single- and multi-spool core engines

    NASA Technical Reports Server (NTRS)

    Schobeiri, T.; Lippke, C.; Abouelkheir, M.

    1993-01-01

    In this paper a new computational method for accurate simulation of the nonlinear dynamic behavior of single- and multi-spool core engines, turbofan engines, and power generation gas turbine engines is presented. In order to perform the simulation, a modularly structured computer code has been developed which includes individual mathematical modules representing various engine components. The generic structure of the code enables the dynamic simulation of arbitrary engine configurations ranging from single-spool thrust generation to multi-spool thrust/power generation engines under adverse dynamic operating conditions. For precise simulation of turbine and compressor components, row-by-row calculation procedures were implemented that account for the specific turbine and compressor cascade and blade geometry and characteristics. The dynamic behavior of the subject engine is calculated by solving a number of systems of partial differential equations, which describe the unsteady behavior of the individual components. In order to ensure the capability, accuracy, robustness, and reliability of the code, comprehensive critical performance assessment and validation tests were performed. As representatives, three different transient cases with single- and multi-spool thrust and power generation engines were simulated. The transient cases range from operating with a prescribed fuel schedule, to extreme load changes, to generator and turbine shut down.

  3. The role of intra-NAPL diffusion on mass transfer from MGP residuals

    NASA Astrophysics Data System (ADS)

    Shafieiyoun, Saeid; Thomson, Neil R.

    2018-06-01

    An experimental and computational study was performed to investigate the role of multi-component intra-NAPL diffusion on NAPL-water mass transfer. Molecular weight and the NAPL component concentrations were determined to be the most important parameters affecting intra-NAPL diffusion coefficients. Four NAPLs with different viscosities but the same quantified mass were simulated. For a spherical NAPL body, a combination of NAPL properties and interphase mass transfer rate can result in internal diffusion limitations. When the main intra-NAPL diffusion coefficients are in the range of self-diffusion coefficients (10^-5 to 10^-6 cm^2/s), dissolution is not limited by internal diffusion except for high mass transfer rate coefficients (>180 cm/day). For a complex and relatively viscous NAPL (>50 g/(cm s)), smaller intra-NAPL diffusion coefficients (<10^-8 cm^2/s) are expected, and even low mass transfer rate coefficients (~6 cm/day) can result in diffusion-limited dissolution.

  4. How thin barrier metal can be used to prevent Co diffusion in the modern integrated circuits?

    NASA Astrophysics Data System (ADS)

    Dixit, Hemant; Konar, Aniruddha; Pandey, Rajan; Ethirajan, Tamilmani

    2017-11-01

    In modern integrated circuits (ICs), billions of transistors are connected to each other via thin metal layers (e.g., copper, cobalt) known as interconnects. At elevated process temperatures, inter-diffusion of atomic species can occur among these metal layers, causing sub-optimal performance of interconnects, which may lead to the failure of an IC. Thus, a thin barrier metal layer is typically used to prevent the inter-diffusion of atomic species within interconnects. For ICs with sub-10 nm transistors (the 10 nm technology node), the design rule (thickness scaling) demands the thinnest possible barrier layer. Therefore, here we investigate the critical thickness of a titanium-nitride (TiN) barrier that can prevent cobalt diffusion, using multi-scale modeling and simulations. First, we compute the Co diffusion barrier in crystalline and amorphous TiN with the nudged elastic band method within first-principles density functional theory simulations. Then, using the calculated activation energy barriers, we quantify the Co diffusion length in the TiN metal layer with the help of kinetic Monte Carlo simulations. Such a multi-scale modeling approach yields the critical thickness of the metal layer sufficient to prevent Co diffusion in IC interconnects. We obtain a maximum diffusion length of 2 nm for a typical thermal anneal at 400 °C for 30 min. Our study thus provides useful physical insights into Co diffusion in the TiN layer and quantifies the critical thickness (~2 nm) to which the barrier metal layer can be thinned down for sub-10 nm ICs.
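
    The diffusion-length estimate follows the standard Arrhenius random-walk relation L = sqrt(D t). A sketch with placeholder kinetic parameters (the D0 and Ea values below are illustrative assumptions, not the DFT-derived barriers from the paper):

      import math

      # Illustrative L = sqrt(D*t) estimate; D0 and Ea are assumed values,
      # not the activation energies computed in the paper.
      kB = 8.617e-5           # Boltzmann constant, eV/K
      D0 = 1.0e-3             # assumed diffusivity prefactor, cm^2/s
      Ea = 1.8                # assumed activation energy, eV
      T = 400.0 + 273.15      # anneal temperature, K
      t = 30 * 60             # anneal time, s

      D = D0 * math.exp(-Ea / (kB * T))   # Arrhenius diffusivity
      L = math.sqrt(D * t)                # characteristic diffusion length
      print(f"D = {D:.3e} cm^2/s, L = {L * 1e7:.2f} nm")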

  5. Secondary central nervous system relapse in diffuse large B cell lymphoma in a resource limited country: result from the Thailand nationwide multi-institutional registry.

    PubMed

    Wudhikarn, Kitsada; Bunworasate, Udomsak; Julamanee, Jakrawadee; Lekhakula, Arnuparp; Chuncharunee, Suporn; Niparuck, Pimjai; Ekwattanakit, Supachai; Khuhapinant, Archrob; Norasetthada, Lalita; Nawarawong, Weerasak; Makruasi, Nisa; Kanitsap, Nonglak; Sirijerachai, Chittima; Chansung, Kanchana; Wong, Peerapon; Numbenjapon, Tontanai; Prayongratana, Kannadit; Suwanban, Tawatchai; Wongkhantee, Somchai; Praditsuktavorn, Pannee; Intragumtornchai, Tanin

    2017-01-01

    Secondary central nervous system (CNS) relapse is a serious and fatal complication of diffuse large B cell lymphoma (DLBCL). Data on secondary CNS (SCNS) relapse have mostly been obtained from western countries, with limited data from developing countries. We analyzed the data of 2034 newly diagnosed DLBCL patients enrolled into the multi-center registry under the Thai Lymphoma Study Group from September 2006 to December 2013 to represent outcomes from a resource-limited setting. The incidence, pattern, management, and outcome of SCNS relapse were described. The 2-year cumulative incidence (CI) of SCNS relapse was 2.7 %. A total of 729, 1024, and 281 patients were classified as low-, intermediate-, and high-risk CNS international prognostic index (CNS-IPI), with corresponding 2-year CIs of SCNS relapse of 1.5, 3.1, and 4.6 %, respectively (p < 0.001). Univariate analysis identified advanced-stage disease, poor performance status, elevated lactate dehydrogenase, presence of B symptoms, more than one extranodal organ involvement, high IPI, and high CNS-IPI group as predictive factors for SCNS relapse. Rituximab exposure and intrathecal chemoprophylaxis offered no protective effect against SCNS relapse. At the time of analysis, six patients were alive. Median OS in SCNS relapsed patients was significantly shorter than in relapsed patients without CNS involvement (13.2 vs 22.6 months) (p < 0.001). Primary causes of death were progressive disease (n = 35, 63.6 %) and infection (n = 9, 16.7 %). In conclusion, although the incidence of SCNS relapse in our cohort was low, the prognosis was dismal. Prophylaxis for SCNS involvement was underused even in high-risk patients. Novel approaches for SCNS relapse prophylaxis and management are warranted.

  6. Radial Diffusion study of the 1 June 2013 CME event using MHD simulations.

    NASA Astrophysics Data System (ADS)

    Patel, M.; Hudson, M.; Wiltberger, M. J.; Li, Z.; Boyd, A. J.

    2016-12-01

    The June 1, 2013 storm was a CME-shock-driven geomagnetic storm (Dst = -119 nT) that caused a dropout affecting all radiation belt electron energies measured by the Energetic Particle, Composition and Thermal Plasma Suite (ECT) instrument on the Van Allen Probes at higher L-shells following a dynamic pressure enhancement in the solar wind. Lower energies (up to about 700 keV) were enhanced by the storm, while MeV electrons were depleted throughout the belt. We focus on depletion through radial diffusion caused by the enhanced ULF wave activity due to the CME shock. This study utilizes the Lyon-Fedder-Mobarry (LFM) model, a 3D global magnetospheric simulation code based on the ideal MHD equations, coupled with the Magnetosphere Ionosphere Coupler (MIX) and the Rice Convection Model (RCM). The MHD electric and magnetic fields, with equations described by Fei et al. [JGR, 2006], are used to calculate radial diffusion coefficients (DLL). These DLL values are input into a radial diffusion code to recreate the dropouts observed by the Van Allen Probes. Using MHD simulations to obtain the diffusion coefficients, the ECT instrument suite to supply the initial phase space density and the outer boundary condition, and a radial diffusion model to reproduce observed fluxes, which compare favorably with Van Allen Probes ECT measurements, helps clarify the complex role that ULF waves play in radial transport and the effects of CME-driven storms on relativistic electrons in the radiation belts.
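
    For context, the radial diffusion step evolves the drift-averaged phase space density f(L, t) with the standard radial diffusion equation (textbook form; the D_LL are constructed from the MHD wave fields following Fei et al. [2006], as stated above):

      \frac{\partial f}{\partial t} = L^{2} \frac{\partial}{\partial L} \left( \frac{D_{LL}}{L^{2}} \frac{\partial f}{\partial L} \right).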

  7. Two-dimensional Radiative Magnetohydrodynamic Simulations of Partial Ionization in the Chromosphere. II. Dynamics and Energetics of the Low Solar Atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martínez-Sykora, Juan; Pontieu, Bart De; Hansteen, Viggo H.

    2017-09-20

    We investigate the effects of interactions between ions and neutrals on the chromosphere and overlying corona using 2.5D radiative MHD simulations with the Bifrost code. We have extended the code capabilities implementing ion–neutral interaction effects using the generalized Ohm’s law, i.e., we include the Hall term and the ambipolar diffusion (Pedersen dissipation) in the induction equation. Our models span from the upper convection zone to the corona, with the photosphere, chromosphere, and transition region partially ionized. Our simulations reveal that the interactions between ionized particles and neutral particles have important consequences for the magnetothermodynamics of these modeled layers: (1) ambipolar diffusion increases the temperature in the chromosphere; (2) sporadically the horizontal magnetic field in the photosphere is diffused into the chromosphere, due to the large ambipolar diffusion; (3) ambipolar diffusion concentrates electrical currents, leading to more violent jets and reconnection processes, resulting in (3a) the formation of longer and faster spicules, (3b) heating of plasma during the spicule evolution, and (3c) decoupling of the plasma and magnetic field in spicules. Our results indicate that ambipolar diffusion is a critical ingredient for understanding the magnetothermodynamic properties in the chromosphere and transition region. The numerical simulations have been made publicly available, similar to previous Bifrost simulations. This will allow the community to study realistic numerical simulations with a wider range of magnetic field configurations and physics modules than previously possible.

  8. Two-dimensional Radiative Magnetohydrodynamic Simulations of Partial Ionization in the Chromosphere. II. Dynamics and Energetics of the Low Solar Atmosphere

    NASA Astrophysics Data System (ADS)

    Martínez-Sykora, Juan; De Pontieu, Bart; Carlsson, Mats; Hansteen, Viggo H.; Nóbrega-Siverio, Daniel; Gudiksen, Boris V.

    2017-09-01

    We investigate the effects of interactions between ions and neutrals on the chromosphere and overlying corona using 2.5D radiative MHD simulations with the Bifrost code. We have extended the code capabilities implementing ion-neutral interaction effects using the generalized Ohm’s law, i.e., we include the Hall term and the ambipolar diffusion (Pedersen dissipation) in the induction equation. Our models span from the upper convection zone to the corona, with the photosphere, chromosphere, and transition region partially ionized. Our simulations reveal that the interactions between ionized particles and neutral particles have important consequences for the magnetothermodynamics of these modeled layers: (1) ambipolar diffusion increases the temperature in the chromosphere; (2) sporadically the horizontal magnetic field in the photosphere is diffused into the chromosphere, due to the large ambipolar diffusion; (3) ambipolar diffusion concentrates electrical currents, leading to more violent jets and reconnection processes, resulting in (3a) the formation of longer and faster spicules, (3b) heating of plasma during the spicule evolution, and (3c) decoupling of the plasma and magnetic field in spicules. Our results indicate that ambipolar diffusion is a critical ingredient for understanding the magnetothermodynamic properties in the chromosphere and transition region. The numerical simulations have been made publicly available, similar to previous Bifrost simulations. This will allow the community to study realistic numerical simulations with a wider range of magnetic field configurations and physics modules than previously possible.
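
    For reference, one common compact form of the generalized Ohm's law with the two added terms (standard single-fluid partially ionized MHD; Bifrost's exact notation may differ) is

      \mathbf{E} = -\mathbf{u} \times \mathbf{B} + \eta\, \mathbf{J} + \frac{\eta_H}{|\mathbf{B}|}\, \mathbf{J} \times \mathbf{B} - \frac{\eta_A}{|\mathbf{B}|^{2}}\, (\mathbf{J} \times \mathbf{B}) \times \mathbf{B},

    where \eta, \eta_H, and \eta_A are the Ohmic, Hall, and ambipolar diffusivities; the Hall term is non-dissipative, while the ambipolar term supplies the Pedersen dissipation mentioned above.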

  9. Adapting HYDRUS-1D to simulate overland flow and reactive transport during sheet flow deviations

    USDA-ARS?s Scientific Manuscript database

    The HYDRUS-1D code is a popular numerical model for solving the Richards equation for variably-saturated water flow and solute transport in porous media. This code was adapted to solve, rather than the Richards equation for subsurface flow, the diffusion wave equation for overland flow at the soil sur...
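
    For reference, the diffusion wave approximation referred to couples the overland-flow continuity equation with a Manning-type flux in which the water-surface gradient corrects the bed slope (a standard form, assumed here rather than taken from the HYDRUS-1D documentation):

      \frac{\partial h}{\partial t} + \frac{\partial q}{\partial x} = r(x, t), \qquad q = \frac{1}{n}\, h^{5/3} \sqrt{S_0 - \frac{\partial h}{\partial x}},

    where h is the flow depth, q the discharge per unit width, r the rainfall excess, n Manning's roughness coefficient, and S_0 the bed slope.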

  10. Modeling and Analysis of Actinide Diffusion Behavior in Irradiated Metal Fuel

    NASA Astrophysics Data System (ADS)

    Edelmann, Paul G.

    There have been numerous attempts to model fast reactor fuel behavior in the last 40 years. The US currently does not have a fully reliable tool to simulate the behavior of metal fuels in fast reactors. The experimental database necessary to validate the codes is also very limited. The DOE-sponsored Advanced Fuels Campaign (AFC) has performed various experiments that are ready for analysis. Current metal fuel performance codes are either not available to the AFC or have limitations and deficiencies in predicting AFC fuel performance. A modified version of a new fuel performance code, FEAST-Metal, was employed in this investigation with useful results. This work explores the modeling and analysis of AFC metallic fuels using FEAST-Metal, particularly in the area of constituent actinide diffusion behavior. The FEAST-Metal code calculations for this work were conducted at Los Alamos National Laboratory (LANL) in support of on-going activities related to sensitivity analysis of fuel performance codes. A sensitivity analysis of FEAST-Metal was completed to identify important macroscopic parameters of interest to modeling and simulation of metallic fuel performance. A modification was made to the FEAST-Metal constituent redistribution model to enable accommodation of newer AFC metal fuel compositions, with verified results. Applicability of this modified model for sodium fast reactor metal fuel design is demonstrated.

  11. Sustaining Open Source Communities through Hackathons - An Example from the ASPECT Community

    NASA Astrophysics Data System (ADS)

    Heister, T.; Hwang, L.; Bangerth, W.; Kellogg, L. H.

    2016-12-01

    The ecosystem surrounding a successful scientific open source software package combines both social and technical aspects. Much thought has been given to the technology side of writing sustainable software for large infrastructure projects and software libraries, but less to building the human capacity to perpetuate scientific software used in computational modeling. One effective format for building capacity is regular multi-day hackathons. Scientific hackathons bring together a group of science domain users and scientific software contributors to make progress on a specific software package. Innovation comes through the chance to work with established and new collaborations. Especially in the domain sciences with small communities, hackathons give geographically distributed scientists an opportunity to connect face-to-face. They foster lively discussions amongst scientists with different expertise, promote new collaborations, and increase transparency in both the technical and scientific aspects of code development. ASPECT is an open source, parallel, extensible finite element code to simulate thermal convection that began development in 2011 under the Computational Infrastructure for Geodynamics. ASPECT hackathons for the past 3 years have grown the number of authors to >50, training new code maintainers in the process. Hackathons begin with leaders establishing project-specific conventions for development, demonstrating the workflow for code contributions, and reviewing relevant technical skills. Each hackathon expands the developer community. Over 20 scientists add >6,000 lines of code during the >1 week event. Participants grow comfortable contributing to the repository and over half continue to contribute afterwards. A high return rate of participants ensures continuity and stability of the group as well as mentoring for novice members. We hope to build other software communities on this model, but anticipate each to bring their own unique challenges.

  12. Dictionary Learning on the Manifold of Square Root Densities and Application to Reconstruction of Diffusion Propagator Fields*

    PubMed Central

    Sun, Jiaqi; Xie, Yuchen; Ye, Wenxing; Ho, Jeffrey; Entezari, Alireza; Blackband, Stephen J.

    2013-01-01

    In this paper, we present a novel dictionary learning framework for data lying on the manifold of square root densities and apply it to the reconstruction of diffusion propagator (DP) fields given a multi-shell diffusion MRI data set. Unlike most of the existing dictionary learning algorithms which rely on the assumption that the data points are vectors in some Euclidean space, our dictionary learning algorithm is designed to incorporate the intrinsic geometric structure of manifolds and performs better than traditional dictionary learning approaches when applied to data lying on the manifold of square root densities. Non-negativity as well as smoothness across the whole field of the reconstructed DPs is guaranteed in our approach. We demonstrate the advantage of our approach by comparing it with an existing dictionary based reconstruction method on synthetic and real multi-shell MRI data. PMID:24684004

  13. Directional Multi-scale Modeling of High-Resolution Computed Tomography (HRCT) Lung Images for Diffuse Lung Disease Classification

    NASA Astrophysics Data System (ADS)

    Vo, Kiet T.; Sowmya, Arcot

    A directional multi-scale modeling scheme based on wavelet and contourlet transforms is employed to describe HRCT lung image textures for classifying four diffuse lung disease patterns: normal, emphysema, ground glass opacity (GGO) and honeycombing. Generalized Gaussian density parameters are used to represent the detail sub-band features obtained by wavelet and contourlet transforms. In addition, support vector machines (SVMs), with excellent performance in a variety of pattern classification problems, are used as the classifier. The method is tested on a collection of 89 slices from 38 patients, each slice of size 512x512, 16 bits/pixel in DICOM format. The dataset contains 70,000 ROIs of those slices marked by experienced radiologists. We employ this technique at different wavelet and contourlet transform scales for diffuse lung disease classification. The technique presented here has a best overall sensitivity of 93.40% and specificity of 98.40%.
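
    A compact sketch of the feature pipeline described, using generic libraries rather than the authors' code: each detail sub-band is summarized by fitted generalized Gaussian parameters, and the concatenated parameters feed an SVM.

      # Sketch of GGD sub-band features for texture classification; a generic
      # illustration of the described pipeline, not the authors' implementation.
      import numpy as np
      import pywt
      from scipy.stats import gennorm
      from sklearn.svm import SVC

      def ggd_features(image, wavelet="db4", levels=3):
          """Fit a generalized Gaussian (shape, scale) to each detail sub-band."""
          coeffs = pywt.wavedec2(image, wavelet, level=levels)
          feats = []
          for detail_level in coeffs[1:]:      # skip the approximation band
              for band in detail_level:        # horizontal/vertical/diagonal
                  shape, _, scale = gennorm.fit(band.ravel(), floc=0.0)
                  feats += [shape, scale]
          return np.array(feats)

      feats = ggd_features(np.random.rand(64, 64))   # demo on a random patch
      # With ROI arrays `rois` and labels `y` from a marked dataset:
      # X = np.stack([ggd_features(r) for r in rois]); SVC(kernel="rbf").fit(X, y)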

  14. Dispersive—diffusive transport of non-sorbed solute in multicomponent solutions

    NASA Astrophysics Data System (ADS)

    Hu, Qinhong; Brusseau, Mark L.

    1995-10-01

    The composition of fuels, mixed-solvent wastes and other contaminants that find their way into the subsurface is frequently chemically complex. The dispersion and diffusion characteristics of multicomponent solutions in soil have rarely been compared to equivalent single-solute systems. The purpose of this work was to examine the diffusive and dispersive transport of single- and multi-component solutions in homogeneous porous media. The miscible displacement technique was used to investigate the transport behavior of 14C-labelled 2,4-dichlorophenoxyacetic acid (2,4-D) in two materials for which sorption of 2,4-D was minimal. Comparison of breakthrough curves collected for 2,4-D in single- and multi-component solutions shows that there is little, if any, difference in transport behavior over a wide range of pore-water velocities (70, 7, 0.66 and 0.06 cm h^-1). Thus, dispersivities measured with a non-sorbing single-solute solution should be applicable to multicomponent systems.
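
    The comparison rests on the standard one-dimensional advection-dispersion equation for a non-sorbing solute (textbook form, for reference):

      \frac{\partial C}{\partial t} = D \frac{\partial^{2} C}{\partial x^{2}} - v \frac{\partial C}{\partial x}, \qquad D = \alpha v + D_m,

    where v is the pore-water velocity, \alpha the dispersivity, and D_m the effective molecular diffusion coefficient; the finding above implies that an \alpha fitted from a single-solute displacement carries over to multicomponent solutions.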

  15. Detecting the Reconnection Electron Diffusion Regions in Magnetospheric MultiScale mission high resolution data

    NASA Astrophysics Data System (ADS)

    Shimoda, E.; Eriksson, S.; Ahmadi, N.; Ergun, R.; Wilder, F. D.; Goodrich, K.

    2017-12-01

    The Magnetospheric Multi-Scale (MMS) mission resolves the small-scale structure of the Reconnection Electron Diffusion Regions (EDRs) using four spacecraft. We have surveyed two years of MMS data to find candidates for the EDRs. We searched all the high-resolution segments when the Fast Plasma Investigation (FPI) instrument was on. The search criteria are based on measuring the dissipation rate, agyrotropy, and a reversal in jet velocity and magnetic field. Once these events were found in MMS1 data, the burst period for the other spacecraft was analyzed. We present our results for the best possible EDR candidates.

  16. Automatic deformable diffusion tensor registration for fiber population analysis.

    PubMed

    Irfanoglu, M O; Machiraju, R; Sammet, S; Pierpaoli, C; Knopp, M V

    2008-01-01

    In this work, we propose a novel method for deformable tensor-to-tensor registration of Diffusion Tensor Images. Our registration method models the distances between the tensors with Geodesic-Loxodromes and employs a version of the Multi-Dimensional Scaling (MDS) algorithm to unfold the manifold described by this metric. The vector images obtained through MDS, which encode the same shape properties as the tensors, are fed into a multi-step vector-image registration scheme, and the resulting deformation fields are used to reorient the tensor fields. Results on brain DTI indicate that the proposed method is very suitable for deformable fiber-to-fiber correspondence and DTI-atlas construction.

  17. The Kirkendall and Frenkel effects during 2D diffusion process

    NASA Astrophysics Data System (ADS)

    Wierzba, Bartek

    2014-11-01

    A two-dimensional approach to inter-diffusion and void generation is presented, and the evolution and growth of voids are discussed. This approach is based on the bi-velocity (Darken) method, which combines the Darken and Brenner concepts that the volume velocity is essential in defining the local material velocity in a multi-component mixture at non-equilibrium. The model is formulated for arbitrary multi-component two-dimensional systems. It is shown that void growth is due to the drift velocity and vacancy migration, and the radius of a void can be easily estimated. The distributions over distance of (1) the components, (2) the vacancies, and (3) the void radius are presented.
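
    For reference, the classical binary Darken relations that underlie the bi-velocity description (textbook forms; the paper generalizes them to arbitrary multi-component two-dimensional systems) are

      \tilde{D} = X_A D_B + X_B D_A, \qquad v = (D_A - D_B)\, \frac{\partial X_A}{\partial x},

    where \tilde{D} is the interdiffusion coefficient and v the Kirkendall marker (drift) velocity; the vacancy flux implied by unequal intrinsic diffusivities is what feeds void nucleation and growth.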

  18. Layered materials with improved magnesium intercalation for rechargeable magnesium ion cells

    DOEpatents

    Doe, Robert Ellis; Downie, Craig Michael; Fischer, Christopher; Lane, George Hamilton; Morgan, Dane; Nevin, Josh; Ceder, Gerbrand; Persson, Kristin Aslaug; Eaglesham, David

    2015-10-27

    Electrochemical devices which incorporate cathode materials that include layered crystalline compounds for which a structural modification has been achieved which increases the diffusion rate of multi-valent ions into and out of the cathode materials. Examples in which the layer spacing of the layered electrode materials is modified to have a specific spacing range such that the spacing is optimal for diffusion of magnesium ions are presented. An electrochemical cell comprised of a positive intercalation electrode, a negative metal electrode, and a separator impregnated with a nonaqueous electrolyte solution containing multi-valent ions and arranged between the positive electrode and the negative electrode active material is described.

  19. A robust multi-shot scan strategy for high-resolution diffusion weighted MRI enabled by multiplexed sensitivity-encoding (MUSE)

    PubMed Central

    Chen, Nan-kuei; Guidon, Arnaud; Chang, Hing-Chiu; Song, Allen W.

    2013-01-01

    Diffusion weighted magnetic resonance imaging (DWI) data have been mostly acquired with single-shot echo-planar imaging (EPI) to minimize motion induced artifacts. The spatial resolution, however, is inherently limited in single-shot EPI, even when the parallel imaging (usually at an acceleration factor of 2) is incorporated. Multi-shot acquisition strategies could potentially achieve higher spatial resolution and fidelity, but they are generally susceptible to motion-induced phase errors among excitations that are exacerbated by diffusion sensitizing gradients, rendering the reconstructed images unusable. It has been shown that shot-to-shot phase variations may be corrected using navigator echoes, but at the cost of imaging throughput. To address these challenges, a novel and robust multi-shot DWI technique, termed multiplexed sensitivity-encoding (MUSE), is developed here to reliably and inherently correct nonlinear shot-to-shot phase variations without the use of navigator echoes. The performance of the MUSE technique is confirmed experimentally in healthy adult volunteers on 3 Tesla MRI systems. This newly developed technique should prove highly valuable for mapping brain structures and connectivities at high spatial resolution for neuroscience studies. PMID:23370063

  20. (U) Ristra Next Generation Code Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hungerford, Aimee L.; Daniel, David John

    LANL’s Weapons Physics management (ADX) and ASC program office have defined a strategy for exascale-class application codes that follows two supportive, and mutually risk-mitigating paths: evolution for established codes (with a strong pedigree within the user community) based upon existing programming paradigms (MPI+X); and Ristra (formerly known as NGC), a high-risk/high-reward push for a next-generation multi-physics, multi-scale simulation toolkit based on emerging advanced programming systems (with an initial focus on data-flow task-based models exemplified by Legion [5]). Development along these paths is supported by the ATDM, IC, and CSSE elements of the ASC program, with the resulting codes forming a common ecosystem, and with algorithm and code exchange between them anticipated. Furthermore, solution of some of the more challenging problems of the future will require a federation of codes working together, using established-pedigree codes in partnership with new capabilities as they come on line. The role of Ristra as the high-risk/high-reward path for LANL’s codes is fully consistent with its role in the Advanced Technology Development and Mitigation (ATDM) sub-program of ASC (see Appendix C), in particular its emphasis on evolving ASC capabilities through novel programming models and data management technologies.
