Sample records for discrete ordinates code

  1. A Numerical Investigation of the Extinction of Low Strain Rate Diffusion Flames by an Agent in Microgravity

    NASA Technical Reports Server (NTRS)

    Puri, Ishwar K.

    2004-01-01

    Our goal has been to investigate the influence of both dilution and radiation on the extinction process of nonpremixed flames at low strain rates. Simulations were performed with a counterflow code into which three radiation models were incorporated: the optically thin, narrowband, and discrete ordinates models. The counterflow flame code OPPDIFF was modified to account for heat losses by radiation from the hot gases. The discrete ordinates method (DOM) approximation was first suggested by Chandrasekhar for solving problems in stellar atmospheres, and Carlson and Lathrop developed the method for solving multi-dimensional problems in neutron transport; only recently has the method received attention in the field of heat transfer. Because the discrete ordinates method is well suited to thermal radiation problems involving flames, the narrowband code RADCAL was modified to calculate the radiative properties of the gases, and a non-premixed counterflow flame was simulated with the discrete ordinates treatment of radiative emissions. The predicted heat losses were comparable across the three models: the optically thin model gave the highest losses, followed by the DOM and then the narrowband model.
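
    As a point of reference for the first of the three models compared above, the optically thin limit reduces the radiative transfer to a local volumetric loss scaled by the Planck-mean absorption coefficient. A minimal sketch (the coefficient value below is a placeholder, not a RADCAL property):

```python
# Optically thin radiative heat loss per unit volume:
#   q_rad = 4 * sigma * kappa_p * (T**4 - T_inf**4)
# where kappa_p is the Planck-mean absorption coefficient of the gas mixture.
# The default kappa_p here is illustrative only, not a RADCAL result.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def optically_thin_loss(T, T_inf=300.0, kappa_p=0.5):
    """Volumetric radiative loss (W/m^3) in the optically thin limit."""
    return 4.0 * SIGMA * kappa_p * (T**4 - T_inf**4)
```

    The loss is purely local: no reabsorption is modeled, which is why this approximation overpredicts heat losses relative to the narrowband and DOM treatments.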

  2. TIME-DEPENDENT MULTI-GROUP MULTI-DIMENSIONAL RELATIVISTIC RADIATIVE TRANSFER CODE BASED ON SPHERICAL HARMONIC DISCRETE ORDINATE METHOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I., E-mail: tominaga@konan-u.ac.jp, E-mail: sshibata@post.kek.jp, E-mail: Sergei.Blinnikov@itep.ru

    We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids that are involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM), which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering in the publicly available SHDOM code. Our code adopts a mixed-frame approach: the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the first and third moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.

  3. A Deep Penetration Problem Calculation Using AETIUS: An Easy Modeling Discrete Ordinates Transport Code UsIng Unstructured Tetrahedral Mesh, Shared Memory Parallel

    NASA Astrophysics Data System (ADS)

    KIM, Jong Woon; LEE, Young-Ouk

    2017-09-01

    As computing power improves, computer codes that use a deterministic method can seem less attractive than those using the Monte Carlo method, and users prefer not to think about space, angle, and energy discretization. However, a deterministic method is still powerful in that it yields the flux throughout the problem domain, particularly when particles can barely penetrate, as in a deep penetration problem with small detection volumes. Recently, a state-of-the-art discrete ordinates code, ATTILA, was developed and has been widely used in several applications. ATTILA provides the capability to solve geometrically complex 3-D transport problems by using an unstructured tetrahedral mesh. Since 2009, we have been developing our own code by benchmarking ATTILA. AETIUS is a discrete ordinates code that, like ATTILA, uses an unstructured tetrahedral mesh. For pre- and post-processing, Gmsh is used to generate the unstructured tetrahedral mesh from an imported CAD file (*.step) and to visualize the calculation results of AETIUS. Using a CAD tool, the geometry can be modeled very easily. In this paper, we give a brief overview of AETIUS and provide numerical results from both AETIUS and a Monte Carlo code, MCNP5, for a deep penetration problem with small detection volumes. The results demonstrate the effectiveness and efficiency of AETIUS for such calculations.

  4. Verification of Three Dimensional Triangular Prismatic Discrete Ordinates Transport Code ENSEMBLE-TRIZ by Comparison with Monte Carlo Code GMVP

    NASA Astrophysics Data System (ADS)

    Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi

    2014-06-01

    This paper deals with verification of the three-dimensional triangular prismatic discrete ordinates transport code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo code GMVP in a large fast breeder reactor. The reactor is a 750 MWe sodium-cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of the initial core and at the beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.

  5. Parallelization of PANDA discrete ordinates code using spatial decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbert, P.

    2006-07-01

    We present the parallel method, based on spatial domain decomposition, implemented in the 2D and 3D versions of the discrete ordinates code PANDA. The spatial mesh is orthogonal and the spatial domain decomposition is Cartesian. For 3D problems, a 3D Cartesian domain topology is created and the parallel method is based on a domain diagonal-plane ordered sweep algorithm. The parallel efficiency of the method is improved by pipelining over directions and octants. The implementation of the algorithm is straightforward using MPI blocking point-to-point communications. The efficiency of the method is illustrated by an application to the 3D-Ext C5G7 benchmark of the OECD/NEA. (authors)
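
    The diagonal-plane ordered sweep exploits the fact that, within one octant, each cell depends only on its upwind neighbors, so all cells on a plane i + j + k = const are independent and can be processed concurrently. A minimal sketch of that ordering (an illustration of the dependency structure, not PANDA's implementation):

```python
def diagonal_wavefronts(nx, ny, nz):
    """Group the cells of an nx*ny*nz Cartesian grid into wavefronts i+j+k = const.
    For the octant sweeping toward +x, +y, +z, every cell's upwind neighbors
    (i-1,j,k), (i,j-1,k), (i,j,k-1) lie on an earlier wavefront, so all cells
    within one wavefront can be swept in parallel."""
    fronts = [[] for _ in range(nx + ny + nz - 2)]
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                fronts[i + j + k].append((i, j, k))
    return fronts
```

    Early and late wavefronts contain few cells, which is the idle time that pipelining over directions and octants helps fill.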

  6. 3D unstructured-mesh radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morel, J.

    1997-12-31

    Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations. ATTILA is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation, including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: Sn (discrete-ordinates), Pn (spherical harmonics), and SPn (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard Sn discretization in angle in conjunction with trilinear-discontinuous spatial differencing and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.

  7. MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons, as either single particles or coupled particles, can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval from the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations. 21 refs.

  8. Discrete ordinates solutions of nongray radiative transfer with diffusely reflecting walls

    NASA Technical Reports Server (NTRS)

    Menart, J. A.; Lee, Haeok S.; Kim, Tae-Kuk

    1993-01-01

    Nongray gas radiation in a plane parallel slab bounded by gray, diffusely reflecting walls is studied using the discrete ordinates method. The spectral equation of transfer is averaged over a narrow wavenumber interval, preserving the spectral correlation effect. The governing equations are derived by considering the history of multiple reflections between the two reflecting walls. A closure approximation is applied so that only a finite number of reflections has to be explicitly included. The closure solutions capture the physics of the problem to a very high degree and show relatively little error. Numerical solutions are obtained by applying a statistical narrow-band model for gas properties and a discrete ordinates code. The net radiative wall heat fluxes and the radiative source distributions are obtained for different temperature profiles. A zeroth-degree formulation, in which no wall reflection is handled explicitly, is sufficient to predict the radiative transfer accurately for most cases considered, when compared with increasingly accurate solutions based on explicitly tracing a larger number of wall reflections without any closure approximation.

  9. Monte Carlo and discrete-ordinate simulations of spectral radiances in a coupled air-tissue system.

    PubMed

    Hestenes, Kjersti; Nielsen, Kristian P; Zhao, Lu; Stamnes, Jakob J; Stamnes, Knut

    2007-04-20

    We perform a detailed comparison study of Monte Carlo (MC) simulations and discrete-ordinate radiative-transfer (DISORT) calculations of spectral radiances in a 1D coupled air-tissue (CAT) system consisting of horizontal plane-parallel layers. The MC and DISORT models have the same physical basis, including coupling between the air and the tissue, and we use the same air and tissue input parameters for both codes. We find excellent agreement between radiances obtained with the two codes, both above and in the tissue. Our tests cover typical optical properties of skin tissue at the 280, 540, and 650 nm wavelengths. The normalized volume scattering function for internal structures in the skin is represented by the one-parameter Henyey-Greenstein function for large particles and the Rayleigh scattering function for small particles. The CAT-DISORT code is found to be approximately 1000 times faster than the CAT-MC code. We also show that the spectral radiance field is strongly dependent on the inherent optical properties of the skin tissue.
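
    The one-parameter Henyey-Greenstein function mentioned above has a closed form in the cosine of the scattering angle and is normalized over solid angle. A minimal sketch with a numerical normalization check (illustrative only, not the CAT-MC or CAT-DISORT codes):

```python
import math

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function with asymmetry parameter g,
    normalized so that its integral over 4*pi steradians equals 1."""
    return (1.0 - g * g) / (4.0 * math.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

def check_normalization(g, n=100000):
    """Integrate over solid angle, 2*pi * int_{-1}^{1} p(mu) dmu, by the midpoint rule."""
    dmu = 2.0 / n
    total = 0.0
    for i in range(n):
        mu = -1.0 + (i + 0.5) * dmu
        total += 2.0 * math.pi * henyey_greenstein(mu, g) * dmu
    return total
```

    At g = 0 the function reduces to isotropic scattering, 1/(4*pi); skin tissue is strongly forward scattering, so g near 0.9 gives a sharp peak at cos_theta = 1.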

  10. GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method

    NASA Astrophysics Data System (ADS)

    Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu

    2011-07-01

    The Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability for scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions, along with a discussion of the relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort, and code portability. The results show that the overall speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 (versus one Intel Xeon X5670 chip) to 8.14 (versus one Intel Core Q6600 chip) with no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
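
    The source-iteration and sweep structure that Sweep3D exercises in 3-D can be illustrated in slab geometry. A minimal one-group 1-D discrete ordinates sketch with diamond differencing, isotropic scattering, and vacuum boundaries (an illustration of the method, not the Sweep3D kernel):

```python
import numpy as np

def slab_sn(sigma_t=1.0, sigma_s=0.5, q_ext=1.0, width=20.0, nx=200, n_ang=8,
            tol=1e-8, max_iters=500):
    """One-group 1-D discrete ordinates (Sn) solve: transport sweeps with
    diamond differencing inside a source iteration loop. Returns the
    converged scalar flux on the nx cells."""
    dx = width / nx
    mu, w = np.polynomial.legendre.leggauss(n_ang)  # ordinates, weights (sum w = 2)
    phi = np.zeros(nx)
    for _ in range(max_iters):
        q = 0.5 * (sigma_s * phi + q_ext)            # isotropic angular source
        phi_new = np.zeros(nx)
        for m in range(n_ang):
            am = abs(mu[m]) / dx
            cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
            psi_in = 0.0                             # vacuum boundary
            for i in cells:
                # diamond difference: psi_center = (psi_in + psi_out) / 2
                psi_out = (q[i] + psi_in * (am - 0.5 * sigma_t)) / (am + 0.5 * sigma_t)
                phi_new[i] += w[m] * 0.5 * (psi_in + psi_out)
                psi_in = psi_out
        if np.max(np.abs(phi_new - phi)) < tol:
            return phi_new
        phi = phi_new
    return phi
```

    Deep inside a thick slab the flux approaches the infinite-medium value q_ext / (sigma_t - sigma_s), which makes a convenient sanity check.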

  11. Application of the first collision source method to CSNS target station shielding calculation

    NASA Astrophysics Data System (ADS)

    Zheng, Ying; Zhang, Bin; Chen, Meng-Teng; Zhang, Liang; Cao, Bo; Chen, Yi-Xue; Yin, Wen; Liang, Tian-Jiao

    2016-04-01

    Ray effects are an inherent problem of the discrete ordinates method. RAY3D, a functional module of the discrete ordinates code system ARES, employs a semi-analytic first collision source method to mitigate ray effects. This method decomposes the flux into uncollided and collided components, then calculates them with an analytical method and the discrete ordinates method, respectively. In this article, RAY3D is validated against the Kobayashi benchmarks and applied to the neutron beamline shielding problem of the China Spallation Neutron Source (CSNS) target station. The numerical results for the Kobayashi benchmarks indicate that the solutions of DONTRAN3D with RAY3D agree well with the Monte Carlo solutions. The dose rate at the end of the neutron beamline is less than 10.83 μSv/h in the CSNS target station neutron beamline shutter model. RAY3D can effectively mitigate ray effects and obtain reasonable results. Supported by the Major National S&T Specific Program of the Large Advanced Pressurized Water Reactor Nuclear Power Plant (2011ZX06004-007), the National Natural Science Foundation of China (11505059, 11575061), and the Fundamental Research Funds for the Central Universities (13QN34).
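
    The first collision source method splits the flux into uncollided and collided parts; for an isotropic point source in a homogeneous medium the uncollided part has a closed form, and its first collisions become a distributed source for the discrete ordinates solve of the collided part. A minimal sketch (a homogeneous-medium illustration, not RAY3D's semi-analytic implementation):

```python
import math

def uncollided_flux(r, sigma_t, source=1.0):
    """Uncollided scalar flux at distance r from an isotropic point source of
    strength `source` in an infinite homogeneous medium:
    S * exp(-sigma_t * r) / (4 * pi * r**2)."""
    return source * math.exp(-sigma_t * r) / (4.0 * math.pi * r * r)

def first_collision_source(r, sigma_t, sigma_s, source=1.0):
    """Isotropic first-collision source density fed to the discrete ordinates
    solve for the collided flux component."""
    return sigma_s * uncollided_flux(r, sigma_t, source)
```

    Because the smooth distributed source replaces the singular point source seen by the Sn solver, the collided component no longer exhibits the sharp angular streaming that produces ray effects.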

  12. APC: A New Code for Atmospheric Polarization Computations

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2014-01-01

    A new polarized radiative transfer code, Atmospheric Polarization Computations (APC), is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically; the smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection, and scattering by spherical particles or spheroids are included. Particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.

  13. Implementation of radiation shielding calculation methods. Volume 2: Seminar/Workshop notes

    NASA Technical Reports Server (NTRS)

    Capo, M. A.; Disney, R. K.

    1971-01-01

    Detailed descriptions are presented of the input data for each of the MSFC computer codes applied to the analysis of a realistic nuclear-propelled vehicle. The analytical techniques employed include cross section data preparation, one- and two-dimensional discrete ordinates transport, point kernel, and single scatter methods.

  14. Implementation of radiation shielding calculation methods. Volume 1: Synopsis of methods and summary of results

    NASA Technical Reports Server (NTRS)

    Capo, M. A.; Disney, R. K.

    1971-01-01

    The work performed in the following areas is summarized: (1) A realistic nuclear-propelled vehicle was analyzed using the Marshall Space Flight Center computer code package, which includes one- and two-dimensional discrete ordinates transport, point kernel, and single scatter techniques, as well as cross section preparation and data processing codes. (2) Techniques were developed to improve the automated data transfer in the coupled computation method of the code package and to improve its utilization on the Univac-1108 computer system. (3) The MSFC master data libraries were updated.

  15. Ex-vessel neutron dosimetry analysis for Westinghouse 4-loop XL pressurized water reactor plant using the RadTrack(TM) Code System with the 3D parallel discrete ordinates code RAPTOR-M3G

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, J.; Alpan, F. A.; Fischer, G.A.

    2011-07-01

    The traditional two-dimensional (2D)/one-dimensional (1D) synthesis methodology has been widely used to calculate fast neutron (>1.0 MeV) fluence exposure to the reactor pressure vessel in the belt-line region. However, this methodology cannot be expected to provide accurate fast neutron fluence calculations at elevations far above or below the active core region. A three-dimensional (3D) parallel discrete ordinates calculation for ex-vessel neutron dosimetry on a Westinghouse 4-Loop XL Pressurized Water Reactor has been performed, and it shows good agreement between the calculated and measured results. Furthermore, the results show very different fast neutron flux values at some of the former plate locations and at elevations above and below the active core than those calculated by the 2D/1D synthesis method. This indicates that for certain irregular reactor internal structures, where the fast neutron flux has a very strong local effect, a 3D transport method is required to calculate accurate fast neutron exposure. (authors)

  16. Shielding Analyses for VISION Beam Line at SNS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popova, Irina; Gallmeier, Franz X

    2014-01-01

    Full-scale neutron and gamma transport analyses were performed to design shielding around the VISION beam line: the instrument shielding enclosure, the beam stop, and the secondary shutter, including a temporary beam stop for the still-closed neighboring beam line, to meet the requirement of dose rates below 0.25 mrem/h at 30 cm from the shielding surface. The beam stop and temporary beam stop analyses were performed with the discrete ordinates code DORT in addition to Monte Carlo analyses with the MCNPX code. A comparison of the results is presented.

  17. Skyshine radiation from a pressurized water reactor containment dome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, W.H.

    1986-06-01

    The radiation dose rates resulting from airborne activity inside a post-accident pressurized water reactor containment are calculated by a combined discrete ordinates/Monte Carlo method. The calculated total dose rates and the skyshine component are presented as a function of distance from the containment at three different elevations for various gamma-ray source energies. The one-dimensional discrete ordinates code ANISN is used to approximate the skyshine dose rates from the hemispherical dome, and the results compare favorably with more rigorous results calculated by a three-dimensional Monte Carlo code.

  18. Quasi-heterogeneous efficient 3-D discrete ordinates CANDU calculations using Attila

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preeti, T.; Rulko, R.

    2012-07-01

    In this paper, 3-D quasi-heterogeneous large-scale parallel Attila calculations of a generic CANDU test problem, consisting of 42 complete fuel channels and a reactivity device perpendicular to the fuel, are presented. The solution method is discrete ordinates (SN) and the computational model is quasi-heterogeneous, i.e., the fuel bundle is partially homogenized into five homogeneous rings, consistent with the DRAGON code model used by the industry for incremental cross-section generation. The calculations used a HELIOS-generated 45-group macroscopic cross-section library. This approach to CANDU calculations has the following advantages: 1) it allows detailed bundle (and eventually channel) power calculations for each fuel ring in a bundle, 2) it allows exact representation of the reactivity device for a precise reactivity worth calculation, and 3) it eliminates the need for incremental cross-sections. Our results are compared to a reference Monte Carlo MCNP solution. In addition, the performance of the Attila SN method in CANDU calculations, which are characterized by significant upscattering, is discussed. (authors)

  19. Multitasking TORT under UNICOS: Parallel performance models and measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, A.; Azmy, Y.Y.

    1999-09-27

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment, and a performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed, with its own parallel overhead model. The predictions of the performance models were compared to measurements from applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, E.W.

    A class of Projected Discrete-Ordinates (PDO) methods is described for obtaining iterative solutions of discrete-ordinates problems with convergence rates comparable to those observed using Diffusion Synthetic Acceleration (DSA). The spatially discretized PDO solutions are generally not equal to the DSA solutions, but unlike DSA, which requires great care in the use of spatial discretizations to preserve stability, the PDO solutions remain stable and rapidly convergent with essentially arbitrary spatial discretizations. Numerical results are presented which illustrate the rapid convergence and the accuracy of solutions obtained using PDO methods with commonplace differencing methods.
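
    The slow convergence that both DSA and the PDO methods are designed to overcome is easiest to see in the infinite-medium model problem, where unaccelerated source iteration contracts the error by exactly the scattering ratio c each iteration. A minimal sketch:

```python
def source_iteration_errors(c, q=1.0, iters=20):
    """Infinite-medium source iteration phi_{k+1} = c*phi_k + q (flux in units
    of 1/sigma_t, c = sigma_s/sigma_t). The exact fixed point is q/(1-c);
    the error shrinks by a factor of exactly c per iteration, so c near 1
    (highly scattering media) means very slow convergence."""
    exact = q / (1.0 - c)
    phi, errors = 0.0, []
    for _ in range(iters):
        phi = c * phi + q
        errors.append(abs(phi - exact))
    return errors
```

    Acceleration schemes such as DSA or PDO replace this contraction factor of c with a much smaller effective spectral radius, which is why they matter for problems with c close to 1.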

  1. Combustion chamber analysis code

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-01-01

    A three-dimensional, time-dependent, Favre-averaged, finite-volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinate (BFC) capability. Higher-order differencing methodologies such as the MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulence models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinates method is used to model the effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  2. Los Alamos radiation transport code system on desktop computing platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briesmeister, J.F.; Brinkley, F.W.; Clark, B.A.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. These codes were originally developed many years ago and have undergone continual improvement. With a large initial effort and continued vigilance, the codes are easily portable from one type of hardware to another. The performance of scientific workstations (SWS) has evolved to the point that such platforms can be used routinely to perform sophisticated radiation transport calculations. As personal computer (PC) performance approaches that of the SWS, the hardware options for desktop radiation transport calculations expand considerably. The current status of the radiation transport codes within the LARTCS is described: MCNP, SABRINA, LAHET, ONEDANT, TWODANT, TWOHEX, and ONELD. Specifically, the authors discuss the hardware systems on which the codes run and present code performance comparisons for various machines.

  3. Discrete ordinates-Monte Carlo coupling: A comparison of techniques in NERVA radiation analysis

    NASA Technical Reports Server (NTRS)

    Lindstrom, D. G.; Normand, E.; Wilcox, A. D.

    1972-01-01

    In the radiation analysis of the NERVA nuclear rocket system, two-dimensional discrete ordinates calculations are sufficient to provide detail in the pressure vessel and reactor assembly. Other parts of the system, however, require three-dimensional Monte Carlo analyses. To use these two methods in a single analysis, a means of coupling was developed whereby the results of a discrete ordinates calculation can be used to produce source data for a Monte Carlo calculation. Several techniques for producing source detail were investigated. Results of calculations on the NERVA system are compared and limitations and advantages of the coupling techniques discussed.

  4. Radiative Transfer Modeling of a Large Pool Fire by Discrete Ordinates, Discrete Transfer, Ray Tracing, Monte Carlo and Moment Methods

    NASA Technical Reports Server (NTRS)

    Jensen, K. A.; Ripoll, J.-F.; Wray, A. A.; Joseph, D.; ElHafi, M.

    2004-01-01

    Five computational methods for solution of the radiative transfer equation in an absorbing-emitting, non-scattering gray medium were compared on a 2 m JP-8 pool fire. The temperature and absorption coefficient fields were taken from a synthetic fire, due to the lack of a complete set of experimental data for fires of this size. These quantities were generated by a code that has been shown to agree well with the limited quantity of relevant data in the literature. Reference solutions to the governing equation were determined using the Monte Carlo method and a ray tracing scheme with high angular resolution. Solutions using the discrete transfer method, the discrete ordinates method (DOM) with both S4 and LC11 quadratures, and a moment model using the M1 closure were compared to the reference solutions in both isotropic and anisotropic regions of the computational domain. DOM LC11 is shown to be more accurate than the commonly used S4 quadrature technique, especially in anisotropic regions of the fire domain. This represents the first study in which the M1 method was applied to a combustion problem in a complex three-dimensional geometry. The M1 results agree well with the other solution techniques, which is encouraging for future applications to similar problems since it is computationally the least expensive technique. Moreover, the M1 results are comparable to those of DOM S4.

  5. Hybrid discrete ordinates and characteristics method for solving the linear Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Yi, Ce

    With the capability of computer hardware and software increasing rapidly, deterministic methods for solving the linear Boltzmann equation (LBE) have attracted attention for computational applications in both the nuclear engineering and medical physics fields. Among the various deterministic methods, the discrete ordinates method (SN) and the method of characteristics (MOC) are two of the most widely used. The SN method is the traditional approach to solving the LBE, valued for its stability and efficiency, while the MOC has advantages in treating complicated geometries. However, in 3-D problems requiring a dense discretization grid in phase space (i.e., a large number of spatial meshes, directions, or energy groups), both methods can suffer from the need for large amounts of memory and computation time. In our study, we developed a new hybrid algorithm by combining the two methods in one code, TITAN. The hybrid approach is specifically designed for application to problems containing low-scattering regions, and a new serial 3-D time-independent transport code has been developed around it. Under the hybrid approach, the preferred method can be applied in different regions (blocks) within the same problem model. Since the characteristics method is numerically more efficient in low-scattering media, the hybrid approach uses a block-oriented characteristics solver in low-scattering regions and a block-oriented SN solver in the remainder of the physical model. In the TITAN code, a physical problem model is divided into a number of coarse meshes (blocks) in Cartesian geometry. Either the characteristics solver or the SN solver can be chosen to solve the LBE within a coarse mesh, and a coarse mesh can be filled with fine meshes or characteristic rays depending on the solver assigned to it.
    Furthermore, with its object-oriented programming paradigm and layered code structure, TITAN allows different individual spatial meshing schemes and angular quadrature sets for each coarse mesh. Two quadrature types (level-symmetric and Legendre-Chebyshev) along with ordinate splitting techniques (rectangular splitting and PN-TN splitting) are implemented. In the SN solver, we apply a memory-efficient 'front-line' style paradigm to handle the fine-mesh interface fluxes. In the characteristics solver, we have developed a novel 'backward' ray-tracing approach, in which a bi-linear interpolation procedure is used on the incoming boundaries of a coarse mesh. A CPU-efficient scattering kernel is shared by both solvers within the source iteration scheme. Angular and spatial projection techniques are developed to transfer the angular fluxes on the interfaces of coarse meshes with different discretization grids. The performance of the hybrid algorithm is tested on a number of benchmark problems in both the nuclear engineering and medical physics fields, among them the Kobayashi benchmark problems and a computed tomography (CT) device model. We also developed an extra sweep procedure with a fictitious quadrature technique to calculate angular fluxes along directions of interest; the technique is applied to a single photon emission computed tomography (SPECT) phantom model to simulate SPECT projection images. The accuracy and efficiency of the TITAN code are demonstrated in these benchmarks, along with its scalability. A modified version of the characteristics solver is integrated in the PENTRAN code and tested within the parallel engine of PENTRAN. The limitations of the hybrid algorithm are also studied.
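
    The basic operation of a characteristics solver integrates the transport equation exactly along each ray segment, assuming a constant cross section and flat source on the segment. A minimal sketch of that step formula (a generic MOC building block, not TITAN's backward ray-tracing code):

```python
import math

def characteristic_step(psi_in, sigma_t, q, length):
    """Exact solution of d(psi)/ds + sigma_t * psi = q along one ray segment
    of the given length (measured along the ray direction): the incoming
    angular flux attenuates exponentially while the flat source builds the
    flux toward its saturation value q / sigma_t."""
    att = math.exp(-sigma_t * length)
    return psi_in * att + (q / sigma_t) * (1.0 - att)
```

    Chaining this step segment by segment along each characteristic is what makes the method cheap in low-scattering regions, where few source iterations are needed to converge q.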

  6. Radiative transfer code SHARM for atmospheric and terrestrial applications

    NASA Astrophysics Data System (ADS)

    Lyapustin, A. I.

    2005-12-01

    An overview of the publicly available radiative transfer Spherical Harmonics code (SHARM) is presented. SHARM is a rigorous code, as accurate as the Discrete Ordinate Radiative Transfer (DISORT) code, yet faster. It performs simultaneous calculations for different solar zenith angles, view zenith angles, and view azimuths and allows the user to make multiwavelength calculations in one run. The Δ-M method is implemented for calculations with highly anisotropic phase functions. Rayleigh scattering is automatically included as a function of wavelength, surface elevation, and the selected vertical profile of one of the standard atmospheric models. The current version of the SHARM code does not explicitly include atmospheric gaseous absorption, which should be provided by the user. The SHARM code has several built-in models of the bidirectional reflectance of land and wind-ruffled water surfaces that are most widely used in research and satellite data processing. A modification of the SHARM code with the built-in Mie algorithm designed for calculations with spherical aerosols is also described.

  7. Radiative transfer code SHARM for atmospheric and terrestrial applications.

    PubMed

    Lyapustin, A I

    2005-12-20

    An overview of the publicly available radiative transfer Spherical Harmonics code (SHARM) is presented. SHARM is a rigorous code, as accurate as the Discrete Ordinate Radiative Transfer (DISORT) code, yet faster. It performs simultaneous calculations for different solar zenith angles, view zenith angles, and view azimuths and allows the user to make multiwavelength calculations in one run. The Delta-M method is implemented for calculations with highly anisotropic phase functions. Rayleigh scattering is automatically included as a function of wavelength, surface elevation, and the selected vertical profile of one of the standard atmospheric models. The current version of the SHARM code does not explicitly include atmospheric gaseous absorption, which should be provided by the user. The SHARM code has several built-in models of the bidirectional reflectance of land and wind-ruffled water surfaces that are most widely used in research and satellite data processing. A modification of the SHARM code with the built-in Mie algorithm designed for calculations with spherical aerosols is also described.

  8. Shielding analyses: the rabbit vs the turtle?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broadhead, B.L.

    1996-12-31

This paper compares solutions using Monte Carlo and discrete-ordinates methods applied to two actual shielding situations in order to make some general observations concerning the efficiency and advantages/disadvantages of the two approaches. The discrete-ordinates solutions are performed using two-dimensional geometries, while the Monte Carlo approaches utilize three-dimensional geometries with both multigroup and point cross-section data.

  9. Tycho 2: A Proxy Application for Kinetic Transport Sweeps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrett, Charles Kristopher; Warsa, James S.

    2016-09-14

    Tycho 2 is a proxy application that implements discrete ordinates (SN) kinetic transport sweeps on unstructured, 3D, tetrahedral meshes. It has been designed to be small and require minimal dependencies to make collaboration and experimentation as easy as possible. Tycho 2 has been released as open source software. The software is currently in a beta release with plans for a stable release (version 1.0) before the end of the year. The code is parallelized via MPI across spatial cells and OpenMP across angles. Currently, several parallelization algorithms are implemented.

  10. Ray Effect Mitigation Through Reference Frame Rotation

    DOE PAGES

    Tencer, John

    2016-05-01

The discrete ordinates method is a popular and versatile technique for solving the radiative transport equation; a major drawback is the presence of ray effects. Mitigation of ray effects can yield significantly more accurate results and enhanced numerical stability for combined-mode codes. Moreover, when ray effects are present, the solution is highly dependent upon the relative orientation of the geometry and the global reference frame, which is an undesirable property. A novel ray effect mitigation technique of averaging the computed solution over various reference frame orientations is proposed.
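    The mitigation idea can be demonstrated with a toy 2-D (azimuthal) quadrature: a fixed set of directions under-resolves a sharply peaked intensity, and the error depends strongly on frame orientation, while averaging the result over rotated frames suppresses that dependence. This is an illustrative sketch, not the paper's code; the function names and the peaked test function are assumptions.

```python
import numpy as np

def angular_quadrature(f, n_dirs, offset=0.0):
    """Equal-weight quadrature of f(theta) over [0, 2*pi) in a rotated frame."""
    theta = offset + 2.0 * np.pi * np.arange(n_dirs) / n_dirs
    return (2.0 * np.pi / n_dirs) * np.sum(f(theta))

def rotation_averaged(f, n_dirs, n_frames=16, seed=0):
    """Average the quadrature result over randomly rotated reference frames."""
    rng = np.random.default_rng(seed)
    offsets = rng.uniform(0.0, 2.0 * np.pi, n_frames)
    return np.mean([angular_quadrature(f, n_dirs, o) for o in offsets])

# A sharply forward-peaked "intensity": the frame-dependent quadrature error
# mimics the ray effects described in the abstract.
peak = lambda th: np.exp(-32.0 * (1.0 - np.cos(th)))
```

    With 8 directions, the single-frame result swings widely with the frame offset; the rotation-averaged result sits much closer to a converged reference value.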

  11. Flow of rarefied gases over two-dimensional bodies

    NASA Technical Reports Server (NTRS)

    Jeng, Duen-Ren; De Witt, Kenneth J.; Keith, Theo G., Jr.; Chung, Chan-Hong

    1989-01-01

    A kinetic-theory analysis is made of the flow of rarefied gases over two-dimensional bodies of arbitrary curvature. The Boltzmann equation simplified by a model collision integral is written in an arbitrary orthogonal curvilinear coordinate system, and solved by means of finite-difference approximation with the discrete ordinate method. A numerical code is developed which can be applied to any two-dimensional submerged body of arbitrary curvature for the flow regimes from free-molecular to slip at transonic Mach numbers. Predictions are made for the case of a right circular cylinder.

  12. MCNP capabilities for nuclear well logging calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. This paper discusses how the general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo neutron photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data.

  13. Modeling Personalized Email Prioritization: Classification-based and Regression-based Approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo S.; Yang, Y.; Carbonell, J.

    2011-10-24

Email overload, even after spam filtering, presents a serious productivity challenge for busy professionals and executives. One solution is automated prioritization of incoming emails to ensure the most important are read and processed quickly, while others are processed later as/if time permits in declining priority levels. This paper presents a study of machine learning approaches to email prioritization into discrete levels, comparing ordinal regression versus classifier cascades. Given the ordinal nature of discrete email priority levels, SVM ordinal regression would be expected to perform well, but surprisingly a cascade of SVM classifiers significantly outperforms ordinal regression for email prioritization. In contrast, SVM regression performs well -- better than classifiers -- on selected UCI data sets. This unexpected performance inversion is analyzed and results are presented, providing core functionality for email prioritization systems.
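    The cascade idea, one binary "priority greater than k" classifier per level boundary with the predicted level given by how many classifiers fire, can be sketched with simple threshold stumps on synthetic 1-D data. The paper uses SVMs on real email features; everything below (data, stump classifier, names) is illustrative.

```python
import numpy as np

def fit_stump(x, b):
    """Brute-force 1-D threshold classifier for binary labels b (predict x > t)."""
    cand = np.sort(x)
    errs = [np.mean((x > t) != b) for t in cand]
    return cand[int(np.argmin(errs))]

def fit_cascade(x, y, n_levels):
    """One binary 'y > k' classifier per boundary between ordinal levels."""
    return [fit_stump(x, y > k) for k in range(n_levels - 1)]

def predict_cascade(x, thresholds):
    """Predicted level = number of 'greater than' classifiers that fire."""
    return sum((x > t).astype(int) for t in thresholds)

rng = np.random.default_rng(1)
y = rng.integers(0, 4, size=400)              # 4 ordinal priority levels
x = y + rng.normal(0.0, 0.3, size=400)        # 1-D score correlated with level
pred = predict_cascade(x, fit_cascade(x, y, 4))
```

    The cascade guarantees predictions in the valid ordinal range by construction, one property that makes it a natural competitor to ordinal regression.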

  14. WWER-1000 core and reflector parameters investigation in the LR-0 reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zaritsky, S. M.; Alekseev, N. I.; Bolshagin, S. N.

    2006-07-01

Measurements and calculations carried out in the core and reflector of a WWER-1000 mock-up are discussed:
    - the determination of the pin-to-pin power distribution in the core by means of gamma-scanning of fuel pins and pin-to-pin calculations with the Monte Carlo code MCU-REA and the diffusion codes MOBY-DICK (with WIMS-D4 cell constants preparation) and RADAR;
    - the fast neutron spectra measurements by the proton recoil method inside the experimental channel in the core and inside the channel in the baffle, and corresponding calculations in the P{sub 3}S{sub 8} approximation of the discrete ordinates method with the code DORT and the BUGLE-96 library;
    - the neutron spectra evaluations (adjustment) in the same channels in the energy region 0.5 eV-18 MeV based on the activation and solid state track detector measurements. (authors)

  15. Neutron skyshine from intense 14-MeV neutron source facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, T.; Hayashi, K.; Takahashi, A.

    1985-07-01

The dose distribution and the spectrum variation of neutrons due to the skyshine effect have been measured with the high-efficiency rem counter, the multisphere spectrometer, and the NE-213 scintillator in the environment surrounding an intense 14-MeV neutron source facility. The dose distribution and the energy spectra of neutrons around the facility used as a skyshine source have also been measured to enable the absolute evaluation of the skyshine effect. The skyshine effect was analyzed by two multigroup Monte Carlo codes, NIMSAC and MMCR-2, by two discrete ordinates S{sub n} codes, ANISN and DOT3.5, and by the shield structure design code for skyshine, SKYSHINE-II. The calculated results show good agreement with the measured results in absolute values. These experimental results should be useful as benchmark data for skyshine analysis and for the shielding design of fusion facilities.

  16. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azmy, Y.Y.; Barnett, D.A.

    1999-09-27

The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The predictions of the parallel performance models were compared to measurements from applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  17. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang

Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered the gold standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating an absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., the low dose region). Simulations with quadrature set 8 and the first order of Legendre polynomial expansions proved to be the most efficient computation method in the authors' study. The single-thread computation time of the deterministic simulation with quadrature set 8 and the first order of Legendre polynomial expansions was 21 min on a personal computer. Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and the primary benefit of the discrete ordinates method lies in its rapid computation speed. It is expected that further optimization of this method for routine clinical CT dose estimation will improve its accuracy and speed.

  18. Reactor Dosimetry Applications Using RAPTOR-M3G:. a New Parallel 3-D Radiation Transport Code

    NASA Astrophysics Data System (ADS)

    Longoni, Gianluca; Anderson, Stanwood L.

    2009-08-01

The numerical solution of the Linearized Boltzmann Equation (LBE) via the Discrete Ordinates method (SN) requires extensive computational resources for large 3-D neutron and gamma transport applications due to the concurrent discretization of the angular, spatial, and energy domains. This paper will discuss the development of RAPTOR-M3G (RApid Parallel Transport Of Radiation - Multiple 3D Geometries), a new 3-D parallel radiation transport code, and its application to the calculation of ex-vessel neutron dosimetry responses in the cavity of a commercial 2-loop Pressurized Water Reactor (PWR). RAPTOR-M3G is based on domain decomposition algorithms, where the spatial and angular domains are allocated and processed on multi-processor computer architectures. As compared to traditional single-processor applications, this approach reduces the computational load as well as the memory requirement per processor, yielding an efficient solution methodology for large 3-D problems. Measured neutron dosimetry responses in the reactor cavity air gap will be compared to the RAPTOR-M3G predictions. This paper is organized as follows: Section 1 discusses the RAPTOR-M3G methodology; Section 2 describes the 2-loop PWR model and the numerical results obtained; Section 3 addresses the parallel performance of the code; and Section 4 concludes this paper with final remarks and future work.

  19. Accelerated solution of discrete ordinates approximation to the Boltzmann transport equation via model reduction

    DOE PAGES

    Tencer, John; Carlberg, Kevin; Larsen, Marvin; ...

    2017-06-17

Radiation heat transfer is an important phenomenon in many physical systems of practical interest. When participating media are important, the radiative transfer equation (RTE) must be solved for the radiative intensity as a function of location, time, direction, and wavelength. In many heat-transfer applications, a quasi-steady assumption is valid, thereby removing time dependence. The dependence on wavelength is often treated through a weighted sum of gray gases (WSGG) approach. The discrete ordinates method (DOM) is one of the most common methods for approximating the angular (i.e., directional) dependence. The DOM exactly solves for the radiative intensity for a finite number of discrete ordinate directions and computes approximations to integrals over the angular space using a quadrature rule; the chosen ordinate directions correspond to the nodes of this quadrature rule. This paper applies a projection-based model-reduction approach to make high-order quadrature computationally feasible for the DOM for purely absorbing applications. First, the proposed approach constructs a reduced basis from (high-fidelity) solutions of the radiative intensity computed at a relatively small number of ordinate directions. Then, the method computes inexpensive approximations of the radiative intensity at the (remaining) quadrature points of a high-order quadrature using a reduced-order model constructed from the reduced basis. Finally, this results in a much more accurate solution than might have been achieved using only the ordinate directions used to compute the reduced basis. One- and three-dimensional test problems highlight the efficiency of the proposed method.
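    The reduced-basis step can be sketched as follows: snapshots of a field that varies smoothly with ordinate direction are compressed with an SVD, and the resulting basis represents the field at a direction not in the snapshot set. Everything here is a toy stand-in (the synthetic intensity, the direction values, the mode count), and, unlike the paper's method, the reduced coordinates below come from projecting the known field rather than from a cheap reduced-order solve.

```python
import numpy as np

# Toy "radiative intensity" field, smooth in the ordinate direction mu.
x = np.linspace(0.0, 1.0, 200)
intensity = lambda mu: np.exp(-x / max(mu, 1e-3)) + 0.1 * mu * x

# Snapshots at a small number of coarse ordinate directions.
coarse_mu = np.linspace(0.1, 1.0, 5)
snapshots = np.stack([intensity(m) for m in coarse_mu], axis=1)

# POD / reduced basis via thin SVD, truncated to r modes.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 3
basis = U[:, :r]

# Reduced-basis representation of a *new* fine-quadrature ordinate.
mu_new = 0.37
true = intensity(mu_new)
approx = basis @ (basis.T @ true)          # best fit in the reduced subspace
rel_err = np.linalg.norm(approx - true) / np.linalg.norm(true)
```

    A handful of modes captures the unseen direction far better than a single mode, which is the property the high-order-quadrature reconstruction relies on.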

  20. Accelerated solution of discrete ordinates approximation to the Boltzmann transport equation via model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tencer, John; Carlberg, Kevin; Larsen, Marvin

Radiation heat transfer is an important phenomenon in many physical systems of practical interest. When participating media are important, the radiative transfer equation (RTE) must be solved for the radiative intensity as a function of location, time, direction, and wavelength. In many heat-transfer applications, a quasi-steady assumption is valid, thereby removing time dependence. The dependence on wavelength is often treated through a weighted sum of gray gases (WSGG) approach. The discrete ordinates method (DOM) is one of the most common methods for approximating the angular (i.e., directional) dependence. The DOM exactly solves for the radiative intensity for a finite number of discrete ordinate directions and computes approximations to integrals over the angular space using a quadrature rule; the chosen ordinate directions correspond to the nodes of this quadrature rule. This paper applies a projection-based model-reduction approach to make high-order quadrature computationally feasible for the DOM for purely absorbing applications. First, the proposed approach constructs a reduced basis from (high-fidelity) solutions of the radiative intensity computed at a relatively small number of ordinate directions. Then, the method computes inexpensive approximations of the radiative intensity at the (remaining) quadrature points of a high-order quadrature using a reduced-order model constructed from the reduced basis. Finally, this results in a much more accurate solution than might have been achieved using only the ordinate directions used to compute the reduced basis. One- and three-dimensional test problems highlight the efficiency of the proposed method.

  1. Gamma-Weighted Discrete Ordinate Two-Stream Approximation for Computation of Domain Averaged Solar Irradiance

    NASA Technical Reports Server (NTRS)

    Kato, S.; Smith, G. L.; Barker, H. W.

    2001-01-01

    An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
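    The gamma-weighting step can be illustrated for the direct beam: averaging the transmittance exp(-tau/mu0) over a gamma distribution of optical depth tau (shape nu, mean tau_mean) has the closed form (1 + tau_mean/(nu*mu0))**(-nu), which direct numerical integration reproduces. The function names and parameter values below are illustrative, not from the paper.

```python
import numpy as np
from math import gamma as gamma_fn

def avg_direct_transmittance(tau_mean, nu, mu0, n=20000, tau_max=200.0):
    """Average exp(-tau/mu0) over a gamma distribution of optical depth tau
    (shape nu, mean tau_mean), by trapezoidal numerical integration."""
    tau = np.linspace(1e-9, tau_max, n)
    theta = tau_mean / nu                       # gamma scale parameter
    pdf = tau**(nu - 1.0) * np.exp(-tau / theta) / (gamma_fn(nu) * theta**nu)
    integrand = pdf * np.exp(-tau / mu0)
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(tau)))

def avg_direct_transmittance_closed(tau_mean, nu, mu0):
    """Closed form from the gamma distribution's Laplace transform."""
    return (1.0 + tau_mean / (nu * mu0)) ** (-nu)
```

    The closed form follows from the Laplace transform of the gamma density; it is this kind of analytic weighting that lets the gamma-weighted two-stream scheme handle unresolved cloud variability cheaply.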

  2. High-order solution methods for grey discrete ordinates thermal radiative transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maginot, Peter G., E-mail: maginot1@llnl.gov; Ragusa, Jean C., E-mail: jean.ragusa@tamu.edu; Morel, Jim E., E-mail: morel@tamu.edu

This work presents a solution methodology for solving the grey radiative transfer equations that is both spatially and temporally more accurate than the canonical radiative transfer solution technique of linear discontinuous finite element discretization in space with implicit Euler integration in time. We solve the grey radiative transfer equations by fully converging the nonlinear temperature dependence of the material specific heat, material opacities, and Planck function. The grey radiative transfer equations are discretized in space using arbitrary-order self-lumping discontinuous finite elements and integrated in time with arbitrary-order diagonally implicit Runge–Kutta time integration techniques. Iterative convergence of the radiation equation is accelerated using a modified interior penalty diffusion operator to precondition the full discrete ordinates transport operator.
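    Diagonally implicit Runge–Kutta integration can be illustrated on a scalar linear problem, where each implicit stage is solvable in closed form. The sketch below uses the common 2-stage, second-order, L-stable SDIRK scheme with gamma = 1 - sqrt(2)/2 as a stand-in for the arbitrary-order integrators of the paper; the decay problem and all names are illustrative.

```python
import numpy as np

GAMMA = 1.0 - np.sqrt(2.0) / 2.0   # 2-stage, 2nd-order, L-stable SDIRK

def sdirk2_decay(lam, y0, t_end, n_steps):
    """Integrate y' = lam*y with a 2-stage SDIRK scheme. Because the problem
    is linear, each implicit stage is solved in closed form."""
    h, y = t_end / n_steps, y0
    for _ in range(n_steps):
        k1 = lam * y / (1.0 - h * lam * GAMMA)
        k2 = lam * (y + h * (1.0 - GAMMA) * k1) / (1.0 - h * lam * GAMMA)
        y = y + h * ((1.0 - GAMMA) * k1 + GAMMA * k2)
    return y

# Second order: the error should fall ~4x when the step count doubles.
lam, y0, T = -4.0, 1.0, 1.0
exact = y0 * np.exp(lam * T)
e1 = abs(sdirk2_decay(lam, y0, T, 20) - exact)
e2 = abs(sdirk2_decay(lam, y0, T, 40) - exact)
```

    The single diagonal coefficient GAMMA is what makes the scheme "diagonally" implicit: each stage requires only one implicit solve with the same shifted operator, which is why such schemes pair naturally with transport sweeps.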

  3. High-order solution methods for grey discrete ordinates thermal radiative transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maginot, Peter G.; Ragusa, Jean C.; Morel, Jim E.

This paper presents a solution methodology for solving the grey radiative transfer equations that is both spatially and temporally more accurate than the canonical radiative transfer solution technique of linear discontinuous finite element discretization in space with implicit Euler integration in time. We solve the grey radiative transfer equations by fully converging the nonlinear temperature dependence of the material specific heat, material opacities, and Planck function. The grey radiative transfer equations are discretized in space using arbitrary-order self-lumping discontinuous finite elements and integrated in time with arbitrary-order diagonally implicit Runge–Kutta time integration techniques. Iterative convergence of the radiation equation is accelerated using a modified interior penalty diffusion operator to precondition the full discrete ordinates transport operator.

  4. High-order solution methods for grey discrete ordinates thermal radiative transfer

    DOE PAGES

    Maginot, Peter G.; Ragusa, Jean C.; Morel, Jim E.

    2016-09-29

This paper presents a solution methodology for solving the grey radiative transfer equations that is both spatially and temporally more accurate than the canonical radiative transfer solution technique of linear discontinuous finite element discretization in space with implicit Euler integration in time. We solve the grey radiative transfer equations by fully converging the nonlinear temperature dependence of the material specific heat, material opacities, and Planck function. The grey radiative transfer equations are discretized in space using arbitrary-order self-lumping discontinuous finite elements and integrated in time with arbitrary-order diagonally implicit Runge–Kutta time integration techniques. Iterative convergence of the radiation equation is accelerated using a modified interior penalty diffusion operator to precondition the full discrete ordinates transport operator.

  5. A multi-layer discrete-ordinate method for vector radiative transfer in a vertically-inhomogeneous, emitting and scattering atmosphere. I - Theory. II - Application

    NASA Technical Reports Server (NTRS)

    Weng, Fuzhong

    1992-01-01

A theory is developed for discretizing the vector integro-differential radiative transfer equation including both solar and thermal radiation. A complete solution and boundary equations are obtained using the discrete-ordinate method. An efficient numerical procedure is presented for calculating the phase matrix and achieving computational stability. With natural light used as a beam source, the Stokes parameters from the model proposed here are compared with the analytical solutions of Chandrasekhar (1960) for a Rayleigh scattering atmosphere. The model is then applied to microwave frequencies with a thermal source, and the brightness temperatures are compared with those from Stamnes' (1988) radiative transfer model.

  6. Comparison of approximate solutions to the phonon Boltzmann transport equation with the relaxation time approximation: Spherical harmonics expansions and the discrete ordinates method

    NASA Astrophysics Data System (ADS)

    Christenson, J. G.; Austin, R. A.; Phillips, R. J.

    2018-05-01

The phonon Boltzmann transport equation is used to analyze model problems in one and two spatial dimensions, under transient and steady-state conditions. New, explicit solutions are obtained by using the P1 and P3 approximations, based on expansions in spherical harmonics, and are compared with solutions from the discrete ordinates method. For steady-state energy transfer, it is shown that analytic expressions derived using the P1 and P3 approximations agree quantitatively with the discrete ordinates method, in some cases for large Knudsen numbers, and always for Knudsen numbers less than unity. However, for time-dependent energy transfer, the PN solutions differ qualitatively from converged solutions obtained by the discrete ordinates method. Although they correctly capture the wave-like behavior of energy transfer at short times, the P1 and P3 approximations rely on one or two wave velocities, respectively, yielding abrupt step-changes in temperature profiles that are absent when the angular dependence of the phonon velocities is captured more completely. It is shown that, with the gray approximation, the P1 approximation is formally equivalent to the so-called "hyperbolic heat equation." Overall, these results support the use of the PN approximation to find solutions to the phonon Boltzmann transport equation for steady-state conditions. Such solutions can be useful in the design and analysis of devices that involve heat transfer at nanometer length scales, where continuum-scale approaches become inaccurate.
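    The single finite propagation speed behind those P1 step-fronts can be checked directly: in the gray P1 system the energy density and flux satisfy a 2x2 first-order system whose characteristic speeds are plus/minus v/sqrt(3), where v is the phonon group velocity. A minimal numerical check (illustrative variable names; the relaxation term does not alter the characteristic speeds):

```python
import numpy as np

v = 1.0                                   # phonon group velocity (gray model)
# Gray P1 system: d/dt [e, q] + A d/dx [e, q] = relaxation terms, with
A = np.array([[0.0, 1.0],
              [v**2 / 3.0, 0.0]])
speeds = np.linalg.eigvals(A)             # characteristic wave speeds
# The eigenvalues +/- v/sqrt(3) are the single wave speed producing the
# abrupt step-changes in temperature noted in the abstract.
```

    The discrete ordinates method, by contrast, propagates energy at the full angular spread of velocities, smearing the front.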

  7. Calculations of the skyshine gamma-ray dose rates from independent spent fuel storage installations (ISFSI) under worst case accident conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pace, J.V. III; Cramer, S.N.; Knight, J.R.

    1980-09-01

Calculations of the skyshine gamma-ray dose rates from three spent fuel storage pools under worst case accident conditions have been made using the discrete ordinates code DOT-IV and the Monte Carlo code MORSE and have been compared to those of two previous methods. The DNA 37N-21G group cross-section library was utilized in the calculations, together with the Claiborne-Trubey gamma-ray dose factors taken from the same library. Plots of all results are presented. It was found that the dose was a strong function of the iron thickness over the fuel assemblies, the initial angular distribution of the emitted radiation, and the photon source near the top of the assemblies. 16 refs., 11 figs., 7 tabs.

  8. Numerical investigations of low-density nozzle flow by solving the Boltzmann equation

    NASA Technical Reports Server (NTRS)

    Deng, Zheng-Tao; Liaw, Goang-Shin; Chou, Lynn Chen

    1995-01-01

A two-dimensional finite-difference code to solve the BGK-Boltzmann equation has been developed. The solution procedure consists of three steps: (1) transforming the BGK-Boltzmann equation into two simultaneous partial differential equations by taking moments of the distribution function with respect to the molecular velocity u(sub z), with weighting factors 1 and u(sub z)(sup 2); (2) solving the transformed equations in physical space based on the time-marching technique and the four-stage Runge-Kutta time integration, for a given discrete ordinate. Roe's second-order upwind difference scheme is used to discretize the convective terms, and the collision terms are treated as source terms; and (3) using the newly calculated distribution functions at each point in physical space to calculate the macroscopic flow parameters by the modified Gaussian quadrature formula. Repeating steps 2 and 3, the time-marching procedure stops when the convergence criterion is reached. A low-density nozzle flow field has been calculated by this newly developed code. The BGK-Boltzmann solution and experimental data show excellent agreement. This demonstrates that numerical solutions of the BGK-Boltzmann equation are ready to be experimentally validated.

  9. Common radiation analysis model for 75,000 pound thrust NERVA engine (1137400E)

    NASA Technical Reports Server (NTRS)

    Warman, E. A.; Lindsey, B. A.

    1972-01-01

The mathematical model and sources of radiation used for the radiation analysis and shielding activities in support of the design of the 1137400E version of the 75,000 lb thrust NERVA engine are presented. The nuclear subsystem (NSS) and non-nuclear components are discussed. The geometrical model for the NSS is two dimensional, as required for the DOT discrete ordinates computer code or for an azimuthally symmetrical three dimensional Point Kernel or Monte Carlo code. The geometrical model for the non-nuclear components is three dimensional in the FASTER geometry format. This geometry routine is inherent in the ANSC versions of the QAD and GGG Point Kernel programs and the COHORT Monte Carlo program. Data are included pertaining to a pressure vessel surface radiation source data tape which has been used as the basis for starting ANSC analyses with the DASH code to bridge into the COHORT Monte Carlo code using the WANL-supplied DOT angular flux leakage data. In addition to the model descriptions and sources of radiation, the methods of analyses are briefly described.

  10. Ordinal preference elicitation methods in health economics and health services research: using discrete choice experiments and ranking methods.

    PubMed

    Ali, Shehzad; Ronaldson, Sarah

    2012-09-01

The predominant method of economic evaluation is cost-utility analysis, which uses cardinal preference elicitation methods, including the standard gamble and time trade-off. However, such an approach is not suitable for understanding trade-offs between process attributes, non-health outcomes and health outcomes to evaluate current practices, develop new programmes and predict demand for services and products. Ordinal preference elicitation methods, including discrete choice experiments and ranking methods, are therefore commonly used in health economics and health services research. Cardinal methods have been criticized on the grounds of cognitive complexity, difficulty of administration, contamination by risk and preference attitudes, and potential violation of underlying assumptions. Ordinal methods have gained popularity because of reduced cognitive burden, a lower degree of abstract reasoning, reduced measurement error, ease of administration and the ability to use both health and non-health outcomes. The underlying assumptions of ordinal methods may be violated when respondents use cognitive shortcuts, cannot comprehend the ordinal task or interpret attributes and levels, use 'irrational' choice behaviour or refuse to trade off certain attributes. CURRENT USE AND GROWING AREAS: Ordinal methods are commonly used to evaluate preferences for attributes of health services, products, practices, interventions, policies and, more recently, to estimate utility weights. AREAS FOR ON-GOING RESEARCH: There is ongoing research on developing optimal designs, evaluating the rationalization process, using qualitative tools for developing ordinal methods, evaluating consistency with utility theory, appropriate statistical methods for analysis, generalizability of results and comparing ordinal methods against each other and with cardinal measures.

  11. Computational analysis of Variable Thrust Engine (VTE) performance

    NASA Technical Reports Server (NTRS)

    Giridharan, M. G.; Krishnan, A.; Przekwas, A. J.

    1993-01-01

    The Variable Thrust Engine (VTE) of the Orbital Maneuvering Vehicle (OMV) uses a hypergolic propellant combination of Monomethyl Hydrazine (MMH) and Nitrogen Tetroxide (NTO) as fuel and oxidizer, respectively. The performance of the VTE depends on a number of complex interacting phenomena such as atomization, spray dynamics, vaporization, turbulent mixing, convective/radiative heat transfer, and hypergolic combustion. This study involved the development of a comprehensive numerical methodology to facilitate detailed analysis of the VTE. An existing Computational Fluid Dynamics (CFD) code was extensively modified to include the following models: a two-liquid, two-phase Eulerian-Lagrangian spray model; a chemical equilibrium model; and a discrete ordinate radiation heat transfer model. The modified code was used to conduct a series of simulations to assess the effects of various physical phenomena and boundary conditions on the VTE performance. The details of the models and the results of the simulations are presented.

  12. The DANTE Boltzmann transport solver: An unstructured mesh, 3-D, spherical harmonics algorithm compatible with parallel computer architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGhee, J.M.; Roberts, R.M.; Morel, J.E.

    1997-06-01

    A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second-order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion-based preconditioner for scattering-dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated.

  13. Automated variance reduction for MCNP using deterministic methods.

    PubMed

    Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B

    2005-01-01

    In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.
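The scheme described above turns a deterministic adjoint solution into Monte Carlo importances. A minimal sketch of that idea, with hypothetical names and normalization (not the actual MCNP5/PARTISN interface), is:

```python
import numpy as np

def weight_window_centers(adjoint_flux, response=1.0):
    """Target weight per mesh cell, w ~ R / phi_adjoint: particles moving
    toward important regions (large adjoint flux) get small target weights,
    so they are split; particles in unimportant regions are rouletted."""
    return response / np.asarray(adjoint_flux)

# toy 1D adjoint flux rising toward a detector at the right edge
phi_adj = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0])
w = weight_window_centers(phi_adj, response=1e-4)

assert np.all(np.diff(w) < 0)  # windows tighten toward the detector
```

The inverse-adjoint relationship is the core of the method; the real implementation also biases the source energy spectrum by the adjoint spectrum at the source, as the abstract notes.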

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zardecki, A.

    The effect of multiple scattering on the validity of the Beer-Lambert law is discussed for a wide range of particle-size parameters and optical depths. To predict the amount of received radiant power, appropriate correction terms are introduced. For particles larger than or comparable to the wavelength of radiation, the small-angle approximation is adequate; whereas for small, densely packed particles, the diffusion theory is advantageously employed. These two approaches are used in the context of the problem of laser-beam propagation in a dense aerosol medium. In addition, preliminary results obtained by using a two-dimensional finite-element discrete-ordinates transport code are described. Multiple-scattering effects for laser propagation in fog, cloud, rain, and aerosol cloud are modeled.
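The correction idea above can be illustrated with a toy model: Beer-Lambert gives only the unscattered transmittance, and a crude forward-scattering term (purely illustrative, not the paper's actual correction) adds back the multiply scattered light that stays in the receiver's field of view:

```python
import numpy as np

def beer_lambert(tau):
    """Direct (unscattered) transmittance through optical depth tau."""
    return np.exp(-tau)

def received_power_fraction(tau, forward_fraction=0.5):
    """Received power with a crude multiple-scattering term: a fixed
    fraction of the scattered light is assumed to remain within the
    receiver field of view (small-angle regime, large particles).
    The value of forward_fraction here is an arbitrary placeholder."""
    direct = beer_lambert(tau)
    scattered = forward_fraction * (1.0 - direct)
    return direct + scattered

tau = 2.0
# multiple scattering adds signal beyond the Beer-Lambert prediction
assert received_power_fraction(tau) > beer_lambert(tau)
```

This is why Beer-Lambert underestimates received power in dense media: the neglected scattered component grows with optical depth.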

  15. Design Analysis of SNS Target Station Biological Shielding Monolith with Proton Power Uprate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bekar, Kursat B.; Ibrahim, Ahmad M.

    2017-05-01

    This report documents the analysis of the dose rate in the experiment area outside the Spallation Neutron Source (SNS) target station shielding monolith with a proton beam energy of 1.3 GeV. The analysis implemented a coupled three-dimensional (3D)/two-dimensional (2D) approach that used both the Monte Carlo N-Particle Extended (MCNPX) 3D Monte Carlo code and the Discrete Ordinates Transport (DORT) 2D deterministic code. The analysis with a proton beam energy of 1.3 GeV showed that the dose rate in continuously occupied areas on the lateral surface outside the SNS target station shielding monolith is less than 0.25 mrem/h, which complies with the SNS facility design objective. However, the methods and codes used in this analysis are out of date and unsupported, and the 2D approximation of the target shielding monolith does not accurately represent the geometry. We recommend that this analysis be updated with modern codes and libraries such as ADVANTG or SHIFT. These codes have demonstrated very high efficiency in performing full 3D radiation shielding analyses of similar and even more difficult problems.

  16. Criticality Calculations with MCNP6 - Practical Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2016-11-29

    These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The lecture topics are: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, and criticality accident alarm systems. After completion of this course, you should be able to: develop an input model for MCNP; describe how cross section data impact Monte Carlo and deterministic codes; describe the importance of validation of computer codes and how it is accomplished; describe the methodology supporting Monte Carlo codes and deterministic codes; describe pitfalls of Monte Carlo calculations; and discuss the strengths and weaknesses of Monte Carlo and discrete ordinates codes. The diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present; in the context of these limitations, you should be able to identify a fissile system for which a diffusion theory solution would be adequate.

  17. Modifications Of Discrete Ordinate Method For Computations With High Scattering Anisotropy: Comparative Analysis

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2012-01-01

    A numerical accuracy analysis of the radiative transfer equation (RTE) solution based on separation of the diffuse light field into anisotropic and smooth parts is presented. The analysis uses three different algorithms based on the discrete ordinate method (DOM). Two methods, DOMAS and DOM2+, that do not use truncation of the phase function are compared against the TMS method. DOMAS and DOM2+ use the small-angle modification of the RTE and the single scattering term, respectively, as the anisotropic part. The TMS method uses the Delta-M method for truncation of the phase function along with a single scattering correction. For reference, a standard discrete ordinate method, DOM, is also included in the analysis. The results obtained for cases with high scattering anisotropy show that at a low number of streams (16, 32) only DOMAS provides an accurate solution in the aureole area. Outside of the aureole, the convergence and accuracy of DOMAS and TMS are found to be broadly similar: DOMAS was more accurate for the coarse aerosol and liquid water cloud models, except at low optical depth, while TMS showed better results for the ice cloud case.
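The Delta-M truncation used by the TMS method rescales the optical properties after removing a fraction f of the forward-scattering peak. A minimal sketch of the standard similarity scaling (the phase-function truncation itself is omitted for brevity):

```python
def delta_m_scale(tau, omega, f):
    """Delta-M similarity scaling: after truncating a forward-peak
    fraction f of the phase function, the optical depth tau and single
    scattering albedo omega are rescaled so the smooth remainder can be
    handled accurately with few discrete-ordinate streams."""
    tau_scaled = (1.0 - f * omega) * tau
    omega_scaled = (1.0 - f) * omega / (1.0 - f * omega)
    return tau_scaled, omega_scaled

tau_s, omega_s = delta_m_scale(tau=10.0, omega=0.9, f=0.5)
# the scaled medium is optically thinner and less scattering
assert tau_s < 10.0 and omega_s < 0.9
```

Truncating the peak is what makes low-stream quadratures viable, at the price of the aureole-region errors discussed in the record above.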

  18. Fast and Accurate Hybrid Stream PCRTM-SOLAR Radiative Transfer Model for Reflected Solar Spectrum Simulation in the Cloudy Atmosphere

    NASA Technical Reports Server (NTRS)

    Yang, Qiguang; Liu, Xu; Wu, Wan; Kizer, Susan; Baize, Rosemary R.

    2016-01-01

    A hybrid stream PCRTM-SOLAR model has been proposed for fast and accurate radiative transfer simulation. It calculates the reflected solar (RS) radiances in a fast, coarse way and then, with the help of a pre-saved matrix, transforms the results to obtain the desired highly accurate RS spectrum. The methodology has been demonstrated with the hybrid stream discrete ordinate (HSDO) radiative transfer (RT) model. The HSDO method calculates the monochromatic radiances using a 4-stream discrete ordinate method, where only a small number of monochromatic radiances are simulated with both the 4-stream and a larger N-stream (N = 16) discrete ordinate RT algorithm. The accuracy of the obtained channel radiance is comparable to the result from the N-stream moderate resolution atmospheric transmission version 5 (MODTRAN5) model. The root-mean-square errors are usually less than 5×10⁻⁴ mW/(cm² sr cm⁻¹). The computational speed is three to four orders of magnitude faster than the medium-speed correlated-k option of MODTRAN5. This method is very efficient for simulating thousands of RS spectra under multi-layer cloud/aerosol and solar radiation conditions for climate change studies and numerical weather prediction applications.
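The hybrid-stream idea — run a cheap low-stream model everywhere and map its output toward high-stream accuracy with a pre-saved linear transform trained on a few paired runs — can be sketched with toy surrogate models (the functions below stand in for the actual 4- and 16-stream RT solvers):

```python
import numpy as np

rng = np.random.default_rng(0)

def low_stream(x):
    """Toy stand-in for the cheap 4-stream radiance calculation."""
    return 0.8 * x + 0.05

def high_stream(x):
    """Toy stand-in for the expensive 16-stream radiance calculation."""
    return x

# "Pre-saved matrix": least-squares map from low- to high-stream radiances,
# fit once on a small set of paired low/high-stream runs
x_train = rng.uniform(0.0, 1.0, 50)
A = np.vstack([low_stream(x_train), np.ones_like(x_train)]).T
coef, *_ = np.linalg.lstsq(A, high_stream(x_train), rcond=None)

def hybrid(x):
    """Cheap low-stream run, corrected by the pre-saved transform."""
    return coef[0] * low_stream(x) + coef[1]

# in this linear toy case the transform recovers high-stream output exactly
assert abs(hybrid(0.3) - high_stream(0.3)) < 1e-6
```

In the real model the relationship is only approximately linear, which is why the residual errors quoted above are small but nonzero.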

  19. Functional data analysis: An approach for environmental ordination and matching discrete with continuous observations

    EPA Science Inventory

    Investigators are frequently confronted with data sets that include both discrete observations and extended time series of environmental data that had been collected by autonomous recorders. Evaluating the relationships between these two kinds of data is challenging. A common a...

  20. Two-dimensional HID light source radiative transfer using discrete ordinates method

    NASA Astrophysics Data System (ADS)

    Ghrib, Basma; Bouaoun, Mohamed; Elloumi, Hatem

    2016-08-01

    This paper presents an implementation of the Discrete Ordinates Method for handling radiation problems in High Intensity Discharge (HID) lamps. We begin by presenting this rigorous method for the treatment of radiation transfer in a two-dimensional, axisymmetric HID lamp. The finite volume method is used for the spatial discretization of the Radiative Transfer Equation. The atom and electron densities were calculated using temperature profiles established by a 2D semi-implicit finite-element scheme for the solution of the conservation equations for energy, momentum, and mass. Spectral intensities as a function of position and direction are first calculated, and then the axial and radial radiative fluxes are evaluated, as well as the net emission coefficient. The results are given for an HID mercury lamp on a line-by-line basis. Particular attention is paid to the 253.7 nm resonance line and the 546.1 nm green line.

  1. Reformation of Regulatory Technical Standards for Nuclear Power Generation Equipments in Japan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikio Kurihara; Masahiro Aoki; Yu Maruyama

    2006-07-01

    Comprehensive reformation of the regulatory system has been introduced in Japan in order to apply recent technical progress in a timely manner. 'The Technical Standards for Nuclear Power Generation Equipments', known as Ordinance No. 62 of the Ministry of International Trade and Industry and used during the detailed design, construction and operating stages of nuclear power plants, was modified into performance specifications, with consensus codes and standards used as prescriptive specifications, in order to facilitate prompt review of the Ordinance in response to technological innovation. The modification was performed by the Nuclear and Industrial Safety Agency (NISA), the regulatory body in Japan, with the support of the Japan Nuclear Energy Safety Organization (JNES), a technical support organization. The revised Ordinance No. 62 was issued on July 1, 2005 and enforced from January 1, 2006. During the period from issuance to enforcement, JNES prepared an enforceable regulatory guide complying with each provision of Ordinance No. 62, and also made technical assessments to endorse the applicability of consensus codes and standards, in response to NISA's request. Some consensus codes and standards were re-assessed, since they had already been used in regulatory review of construction plans submitted by licensees; others were newly assessed for endorsement. Where no suitable consensus code or standard existed, details of the regulatory requirements were described in the regulatory guide as an immediate measure, and the appropriate standards-developing bodies were requested to prepare such codes or standards. A supplementary note providing background information on the modification, applicable examples, etc. was prepared for the convenience of users of Ordinance No. 62.
This paper describes the modification activities and their results, following NISA's presentation at ICONE-13, which introduced the framework of the performance specifications and the modification process of Ordinance No. 62. (authors)

  2. Blanket activation and afterheat for the Compact Reversed-Field Pinch Reactor

    NASA Astrophysics Data System (ADS)

    Davidson, J. W.; Battat, M. E.

    A detailed assessment has been made of the activation and afterheat for a Compact Reversed-Field Pinch Reactor (CRFPR) blanket using a two-dimensional model that included the limiter, the vacuum ducts, and the manifolds and headers for cooling the limiter and the first and second walls. Region-averaged, multigroup fluxes and prompt gamma-ray/neutron heating rates were calculated using the two-dimensional, discrete-ordinates code TRISM. Activation and depletion calculations were performed with the code FORIG using one-group cross sections generated with the TRISM region-averaged fluxes. Afterheat calculations were performed for regions near the plasma (the limiter, first wall, etc.), assuming a 10-day irradiation. Decay heats were computed for decay periods up to 100 minutes. For the activation calculations, the irradiation period was taken to be one year, and blanket activity inventories were computed for decay times to 4 x 10 years. These activities were also expressed as the toxicity-weighted biological hazard potential (BHP).

  3. Multidimensional Modeling of Atmospheric Effects and Surface Heterogeneities on Remote Sensing

    NASA Technical Reports Server (NTRS)

    Gerstl, S. A. W.; Simmer, C.; Zardecki, A. (Principal Investigator)

    1985-01-01

    The overall goal of this project is to establish a modeling capability that allows a quantitative determination of atmospheric effects on remote sensing including the effects of surface heterogeneities. This includes an improved understanding of aerosol and haze effects in connection with structural, angular, and spatial surface heterogeneities. One important objective of the research is the possible identification of intrinsic surface or canopy characteristics that might be invariant to atmospheric perturbations so that they could be used for scene identification. Conversely, an equally important objective is to find a correction algorithm for atmospheric effects in satellite-sensed surface reflectances. The technical approach is centered around a systematic model and code development effort based on existing, highly advanced computer codes that were originally developed for nuclear radiation shielding applications. Computational techniques for the numerical solution of the radiative transfer equation are adapted on the basis of the discrete-ordinates finite-element method which proved highly successful for one and two-dimensional radiative transfer problems with fully resolved angular representation of the radiation field.

  4. Verification of ARES transport code system with TAKEDA benchmarks

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Zhang, Bin; Zhang, Penghe; Chen, Mengteng; Zhao, Jingchang; Zhang, Shun; Chen, Yixue

    2015-10-01

    Neutron transport modeling and simulation are central to many areas of nuclear technology, including reactor core analysis, radiation shielding and radiation detection. In this paper the series of TAKEDA benchmarks is modeled to verify the criticality calculation capability of ARES, a discrete ordinates neutral particle transport code system. The SALOME platform is coupled with ARES to provide geometry modeling and mesh generation functions. The Koch-Baker-Alcouffe parallel sweep algorithm is applied to accelerate the traditional transport calculation process. The results show that the eigenvalues calculated by ARES are in excellent agreement with the reference values presented in NEACRP-L-330, with differences of less than 30 pcm except for the first case of model 3. Additionally, ARES provides accurate flux distributions compared to the reference values, with deviations of less than 2% for region-averaged fluxes in all cases. All of these results confirm the feasibility of the ARES-SALOME coupling and demonstrate that ARES performs well in criticality calculations.

  5. Specular reflection treatment for the 3D radiative transfer equation solved with the discrete ordinates method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Hardy, D.; Favennec, Y., E-mail: yann.favennec@univ-nantes.fr; Rousseau, B.

    The contribution of this paper lies in the development of numerical algorithms for the mathematical treatment of specular reflection on borders when dealing with the numerical solution of radiative transfer problems. Since the radiative transfer equation is integro-differential, the discrete ordinates method allows one to write down a set of semi-discrete equations in which weights are to be calculated. The calculation of these weights is well known to be based on either a quadrature or an angular discretization, making the use of such a method straightforward for the state equation. The diffuse contribution of reflection on borders is also usually well taken into account. However, the calculation of accurate partition ratio coefficients is much more tricky for the specular condition applied on arbitrary geometrical borders. This paper presents algorithms that analytically calculate the partition ratio coefficients needed in the numerical treatment. The developed algorithms, combined with a decentered finite element scheme, are validated by comparison with analytical solutions before being applied to complex geometries.

  6. The finite element model for the propagation of light in scattering media: a direct method for domains with nonscattering regions.

    PubMed

    Arridge, S R; Dehghani, H; Schweiger, M; Okada, E

    2000-01-01

    We present a method for handling nonscattering regions within diffusing domains. The method develops from an iterative radiosity-diffusion approach using Green's functions that was computationally slow. Here we present an improved implementation using a finite element method (FEM) that is direct. The fundamental idea is to introduce extra equations into the standard diffusion FEM to represent nondiffusive light propagation across a nonscattering region. By appropriate mesh node ordering the computational time is not much greater than for diffusion alone. We compare results from this method with those from a discrete ordinate transport code, and with Monte Carlo calculations. The agreement is very good, and, in addition, our scheme allows us to easily model time-dependent and frequency domain problems.

  7. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courau, T.; Plagne, L.; Ponicot, A.

    2012-07-01

    When dealing with nuclear reactor calculation schemes, the need for three-dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the k{sub eff}, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)

  8. 25 CFR 11.108 - How are tribal ordinances affected by this part?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 1 2014-04-01 2014-04-01 false How are tribal ordinances affected by this part? 11.108 Section 11.108 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Application; Jurisdiction § 11.108 How are tribal ordinances affected by...

  9. 25 CFR 11.108 - How are tribal ordinances affected by this part?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 1 2013-04-01 2013-04-01 false How are tribal ordinances affected by this part? 11.108 Section 11.108 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Application; Jurisdiction § 11.108 How are tribal ordinances affected by...

  10. 25 CFR 11.108 - How are tribal ordinances affected by this part?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 1 2012-04-01 2011-04-01 true How are tribal ordinances affected by this part? 11.108 Section 11.108 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Application; Jurisdiction § 11.108 How are tribal ordinances affected by...

  11. 25 CFR 11.108 - How are tribal ordinances affected by this part?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false How are tribal ordinances affected by this part? 11.108 Section 11.108 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Application; Jurisdiction § 11.108 How are tribal ordinances affected by...

  12. PRIM versus CART in subgroup discovery: when patience is harmful.

    PubMed

    Abu-Hanna, Ameen; Nannings, Barry; Dongelmans, Dave; Hasman, Arie

    2010-10-01

    We systematically compare the established algorithms CART (Classification and Regression Trees) and PRIM (Patient Rule Induction Method) in a subgroup discovery task on a large real-world high-dimensional clinical database. Contrary to current conjectures, PRIM's performance was generally inferior to CART's. PRIM often considered "peeling off" a large chunk of data at a value of a relevant discrete ordinal variable unattractive, ultimately missing an important subgroup. This finding has considerable significance in clinical medicine, where ordinal scores are ubiquitous. PRIM's utility in clinical databases would increase if global information about (ordinal) variables were better put to use and if the search algorithm kept track of alternative solutions.
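The failure mode above — PRIM's "patience" backfiring on discrete ordinal variables — can be shown with a toy peeling step (this is an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def one_peel(x, y):
    """Peel off the lowest ordinal level of x; return the fraction of
    data removed and the mean outcome among the remaining records."""
    keep = x > x.min()
    return 1.0 - keep.mean(), y[keep].mean()

# ordinal score where the lowest level holds 40% of the data
x = np.array([0] * 40 + [1] * 30 + [2] * 30)
y = (x >= 1).astype(float)  # outcome concentrated at higher levels

removed, mean_y = one_peel(x, y)

# PRIM's "patient" strategy targets small peels (e.g. 5% per step), but the
# only peel available on this ordinal variable removes 40% at once, so PRIM
# may decline it and miss the pure subgroup {x >= 1}
assert removed > 0.05 and mean_y == 1.0
```

On a continuous variable PRIM could peel 5% at a time; on a discrete ordinal one, the smallest available peel is a whole level, which is exactly the situation the study identifies as harmful.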

  13. Graphical Models for Ordinal Data

    PubMed Central

    Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji

    2014-01-01

    A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey. PMID:26120267
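The generative model described above — ordinal data obtained by thresholding a latent multivariate Gaussian — is easy to sketch; threshold values and the latent correlation below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# latent bivariate Gaussian with correlation 0.6
latent_corr = np.array([[1.0, 0.6],
                        [0.6, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=latent_corr, size=5000)

# discretize each latent variable at fixed thresholds -> 3 ordinal levels
thresholds = [-0.5, 0.5]
ordinal = np.digitize(z, thresholds)  # values in {0, 1, 2}

# correlation among the latents induces (attenuated) association
# between the observed ordinal variables
emp = np.corrcoef(ordinal[:, 0], ordinal[:, 1])[0, 1]
assert emp > 0.3
```

Estimating the concentration matrix of the latent Gaussian from such discretized observations is the inverse problem the paper's EM-like algorithm addresses.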

  14. Outdoor Lighting Ordinances

    NASA Astrophysics Data System (ADS)

    Davis, S.

    2004-05-01

    A principal means to prevent poor exterior lighting practices is a lighting control ordinance: an enforceable legal restriction on specific lighting practices that are deemed unacceptable by the government body having jurisdiction. Outdoor lighting codes have proven effective at reducing polluting and trespassing light. A well-written exterior lighting code will permit all forms of necessary illumination at reasonable intensities, but will demand shielding and other measures to prevent trespass and light pollution. A good code will also apply to all forms of outdoor lighting, including streets, highways, and exterior signs, as well as the lighting on dwellings, commercial and industrial buildings, and building sites. A good code can make exceptions for special uses, provided each complies with an effective standard. The IDA Model Lighting Ordinance is a response to requests for such codes. It is intended as an aid to communities that are seeking to take control of their outdoor lighting, to "take back the night" that is being lost to careless and excessive use of night lighting.

  15. A review of the matrix-exponential formalism in radiative transfer

    NASA Astrophysics Data System (ADS)

    Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian

    2017-07-01

    This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
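Two of the ingredients reviewed above — the eigendecomposition route to the matrix exponential and the Taylor-series approximation for optically thin layers — can be sketched for a toy symmetric layer matrix (the matrix and optical depth below are illustrative, not from the paper):

```python
import numpy as np

def expm_eig(A):
    """Matrix exponential of a symmetric matrix via eigendecomposition,
    as in the discrete-ordinate setting: exp(A) = V diag(exp(w)) V^T."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(w)) @ V.T

def expm_taylor(A, order=4):
    """Truncated Taylor series exp(A) ~ sum_{k<=order} A^k / k!,
    adequate only when the layer is optically thin (||A|| small)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, order + 1):
        term = term @ A / k
        result = result + term
    return result

A = np.array([[-1.0, 0.3],
              [0.3, -1.0]])  # toy layer transfer matrix
dtau = 0.01                  # small optical depth

err = np.max(np.abs(expm_eig(dtau * A) - expm_taylor(dtau * A)))
assert err < 1e-10  # Taylor matches because dtau*A is small
```

For optically thick layers the truncated series breaks down, which is why the review turns to asymptotic theory in that regime.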

  16. Population Fisher information matrix and optimal design of discrete data responses in population pharmacodynamic experiments.

    PubMed

    Ogungbenro, Kayode; Aarons, Leon

    2011-08-01

    In recent years, interest in the application of experimental design theory to population pharmacokinetic (PK) and pharmacodynamic (PD) experiments has increased. The aim is to improve the efficiency and the precision with which parameters are estimated during data analysis and, sometimes, to increase the power and reduce the sample size required for hypothesis testing. The population Fisher information matrix (PFIM) has been described for uniresponse and multiresponse population PK experiments for design evaluation and optimisation. Despite these developments and the availability of tools for optimal design of population PK and PD experiments, much of the effort has been focused on repeated continuous-variable measurements, with less work being done on repeated discrete-type measurements. Discrete data arise mainly in PD, e.g. ordinal, nominal, dichotomous or count measurements. This paper implements expressions for the PFIM for repeated ordinal, dichotomous and count measurements based on analysis by a mixed-effects modelling technique. Three simulation studies were used to investigate the performance of the expressions: Example 1 is based on repeated dichotomous measurements, Example 2 on repeated count measurements and Example 3 on repeated ordinal measurements. Data simulated in MATLAB were analysed using NONMEM (Laplace method) and the glmmML package in R (Laplace and adaptive Gauss-Hermite quadrature methods). The results obtained for Examples 1 and 2 showed good agreement between the relative standard errors obtained using the PFIM and the simulations. The results obtained for Example 3 showed the importance of sampling at the most informative time points. Implementation of these expressions will provide the opportunity for efficient design of population PD experiments involving discrete-type data through design evaluation and optimisation.
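The "informative time points" finding can be illustrated with the simplest fixed-effects case of a Fisher information matrix for repeated dichotomous responses under a logistic model (random effects, and hence the full population FIM, are omitted; all parameter values are illustrative):

```python
import numpy as np

def logistic_fim(times, b0, b1):
    """Expected Fisher information for Bernoulli responses with
    logit(p_j) = b0 + b1 * t_j: FIM = sum_j w_j x_j x_j^T, where
    w_j = p_j (1 - p_j) is the Bernoulli variance weight."""
    t = np.asarray(times, dtype=float)
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * t)))
    w = p * (1.0 - p)
    X = np.column_stack([np.ones_like(t), t])
    return (X * w[:, None]).T @ X

# sampling where p is near 0.5 is far more informative than sampling
# in the tail, where responses are almost deterministic
fim_mid = logistic_fim([0.0, 0.5, 1.0], b0=0.0, b1=1.0)
fim_far = logistic_fim([4.0, 4.5, 5.0], b0=0.0, b1=1.0)
assert np.linalg.det(fim_mid) > np.linalg.det(fim_far)
```

Design optimisation then amounts to choosing the sampling times (and other controls) that maximise a scalar function of this matrix, typically its determinant.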

  17. Measured and calculated fast neutron spectra in a depleted uranium and lithium hydride shielded reactor

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.; Mueller, R. A.

    1973-01-01

    Measurements of MeV neutrons were made at the surface of a lithium hydride and depleted uranium shielded reactor. Four shield configurations were considered; these were assembled progressively with cylindrical shells of 5-centimeter-thick depleted uranium, 13-centimeter-thick lithium hydride, 5-centimeter-thick depleted uranium, 13-centimeter-thick lithium hydride, 5-centimeter-thick depleted uranium, and 3-centimeter-thick depleted uranium. Measurements were made with an NE-218 scintillation spectrometer; proton pulse-height distributions were differentiated to obtain neutron spectra. Calculations were made using the two-dimensional discrete ordinates code DOT and ENDF/B (version 3) cross sections. Good agreement between measured and calculated spectral shape was observed. Absolute measured and calculated fluxes were within 50 percent of one another; the observed discrepancies in absolute flux may be due to cross section errors.

  18. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leal, L.C.; Deen, J.R.; Woodruff, W.L.

    1995-02-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test Reactors (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure for generating cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code are compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly-enriched heavy-water-moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  19. Visualization of nuclear particle trajectories in nuclear oil-well logging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Case, C.R.; Chiaramonte, J.M.

    Nuclear oil-well logging measures specific properties of subsurface geological formations as a function of depth in the well. The knowledge gained is used to evaluate the hydrocarbon potential of the surrounding oil field. The measurements are made by lowering an instrument package into an oil well and slowly extracting it at a constant speed. During the extraction phase, neutrons or gamma rays are emitted from the tool, interact with the formation, and scatter back to the detectors located within the tool. Even though only a small percentage of the emitted particles ever reach the detectors, mathematical modeling has been very successful in the accurate prediction of these detector responses. The two dominant methods used to model these devices have been the two-dimensional discrete ordinates method and the three-dimensional Monte Carlo method; the Monte Carlo method has routinely been used to investigate the response characteristics of nuclear tools. A special Los Alamos National Laboratory version of their standard MCNP Monte Carlo code retains the details of each particle history for later viewing within SABRINA, a companion three-dimensional geometry modeling and debugging code.

  20. 24 CFR 941.203 - Design and construction standards.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... national building code, such as Uniform Building Code, Council of American Building Officials Code, or Building Officials Conference of America Code; (2) Applicable State and local laws, codes, ordinances, and... intended to serve. Building design and construction shall strive to encourage in residents a proprietary...

  1. 24 CFR 941.203 - Design and construction standards.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... national building code, such as Uniform Building Code, Council of American Building Officials Code, or Building Officials Conference of America Code; (2) Applicable State and local laws, codes, ordinances, and... intended to serve. Building design and construction shall strive to encourage in residents a proprietary...

  2. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deen, J.R.; Woodruff, W.L.; Leal, L.E.

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D₂O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  3. Quadratic Finite Element Method for 1D Deterministic Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tolar, Jr., D R; Ferguson, J M

    2004-01-06

    In the discrete ordinates, or S_N, numerical solution of the transport equation, both the spatial (r) and angular (Ω) dependences of the angular flux ψ(r, Ω) are modeled discretely. While significant effort has been devoted toward improving the spatial discretization of the angular flux, we focus on improving the angular discretization of ψ(r, Ω). Specifically, we employ a Petrov-Galerkin quadratic finite element approximation for the differencing of the angular variable (μ) in developing the one-dimensional (1D) spherical geometry S_N equations. We develop an algorithm that shows faster convergence with angular resolution than conventional S_N algorithms.
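
    For contrast with the quadratic finite element scheme above, a minimal sketch of a conventional S_N solve: slab (not spherical) geometry, Gauss-Legendre quadrature in angle, diamond differencing in space, and source iteration on an isotropic scattering source. All parameter values are illustrative assumptions; this is the kind of baseline algorithm the paper improves upon, not the paper's method.

```python
import numpy as np

def sn_slab(n_ang=8, n_cells=50, width=10.0, sig_t=1.0, sig_s=0.5, q=1.0):
    """1D slab S_N solve: Gauss-Legendre angles, diamond differencing,
    source iteration on the isotropic scattering source."""
    mu, w = np.polynomial.legendre.leggauss(n_ang)  # S_N angles/weights
    dx = width / n_cells
    phi = np.zeros(n_cells)                         # scalar flux
    for _ in range(500):
        src = 0.5 * (sig_s * phi + q)               # isotropic source
        phi_new = np.zeros(n_cells)
        for m in range(n_ang):                      # sweep each angle
            psi_in = 0.0                            # vacuum boundaries
            cells = range(n_cells) if mu[m] > 0 else range(n_cells - 1, -1, -1)
            for i in cells:
                # cell balance + diamond closure: psi_out = 2*psi_avg - psi_in
                psi_avg = (src[i] * dx + 2.0 * abs(mu[m]) * psi_in) \
                    / (2.0 * abs(mu[m]) + sig_t * dx)
                psi_in = 2.0 * psi_avg - psi_in
                phi_new[i] += w[m] * psi_avg
        phi, phi_old = phi_new, phi
        if np.max(np.abs(phi - phi_old)) < 1e-8:
            break
    return phi

phi = sn_slab()
# the infinite-medium limit q/(sig_t - sig_s) = 2 bounds the interior flux
print(round(float(phi[25]), 3), round(float(phi[0]), 3))
```

    The slow part is the fixed-point (source) iteration, whose convergence rate degrades as the scattering ratio approaches one; that is the behavior angular and acceleration schemes target.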

  4. 47 CFR 15.214 - Cordless telephones.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... discrete digital codes. Factory-set codes must be continuously varied over at least 256 possible codes as... readily select from among at least 256 possible discrete digital codes. The cordless telephone shall be... fixed code that is continuously varied among at least 256 discrete digital codes as each telephone is...

  5. Building code compliance and enforcement: The experience of San Francisco's residential energy conservation ordinance and California's building standards for new construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vine, E.

    1990-11-01

    As part of Lawrence Berkeley Laboratory's (LBL) technical assistance to the Sustainable City Project, compliance and enforcement activities related to local and state building codes for existing and new construction were evaluated in two case studies. The analysis of the City of San Francisco's Residential Energy Conservation Ordinance (RECO) showed that a limited, prescriptive energy conservation ordinance for existing residential construction can be enforced relatively easily at little administrative cost, and that compliance with such ordinances can be quite high. Compliance with the code was facilitated by extensive publicity, an informed public concerned with the cost of energy and knowledgeable about energy efficiency, the threat of punishment (Order of Abatement), the use of private inspectors, and training workshops for City and private inspectors. The analysis of California's Title 24 Standards for new residential and commercial construction showed that enforcement of this type of code for many climate zones is more complex and requires extensive administrative support for education and training of inspectors, architects, engineers, and builders. Under this code, prescriptive and performance approaches for compliance are permitted, resulting in the demand for alternative methods of enforcement: technical assistance, plan review, field inspection, and computer analysis. In contrast to existing construction, building design and new materials and construction practices are of critical importance in new construction, creating a need for extensive technical assistance and extensive interaction between enforcement personnel and the building community. Compliance problems associated with building design and installation did occur in both residential and nonresidential buildings. 12 refs., 5 tabs.

  6. Forward Monte Carlo Computations of Polarized Microwave Radiation

    NASA Technical Reports Server (NTRS)

    Battaglia, A.; Kummerow, C.

    2000-01-01

    Microwave radiative transfer computations continue to acquire greater importance as the emphasis in remote sensing shifts towards the understanding of microphysical properties of clouds and, with these, towards a better understanding of the nonlinear relation between rainfall rates and satellite-observed radiance. A first step toward realistic radiative simulations has been the introduction of techniques capable of treating the 3-dimensional geometry generated by ever more sophisticated cloud-resolving models. To date, a series of numerical codes have been developed to treat spherical and randomly oriented axisymmetric particles. Backward and backward-forward Monte Carlo methods are, indeed, efficient in this field. These methods, however, cannot deal properly with oriented particles, which seem to play an important role in polarization signatures over stratiform precipitation. Moreover, beyond the polarization channel, the next generation of fully polarimetric radiometers challenges us to better understand the behavior of the last two Stokes parameters as well. In order to solve the vector radiative transfer equation, one-dimensional numerical models have been developed. These codes, unfortunately, consider the atmosphere as horizontally homogeneous, with horizontally infinite plane-parallel layers. The next development step for microwave radiative transfer codes must be fully polarized 3-D methods. Recently a 3-D polarized radiative transfer model based on the discrete ordinate method was presented. A forward MC code was developed that treats oriented nonspherical hydrometeors, but only for plane-parallel situations.

  7. Microdosimetric investigation of the spectra from YAYOI by use of the Monte Carlo code PHITS.

    PubMed

    Nakao, Minoru; Baba, Hiromi; Oishi, Ayumu; Onizuka, Yoshihiko

    2010-07-01

    The purpose of this study was to obtain the neutron energy spectrum on the surface of the moderator of the Tokyo University reactor YAYOI and to investigate the origins of peaks observed in the neutron energy spectrum by use of the Monte Carlo code PHITS, for the evaluation of biological studies. The moderator system was modeled with the use of details from an article that reported a calculation result and a measurement result for a neutron spectrum on the surface of the moderator of the reactor. Our calculation results with PHITS were compared to those obtained with the discrete ordinate code ANISN described in the article. In addition, the changes in the neutron spectrum at the boundaries of materials in the moderator system were examined with PHITS. Also, microdosimetric energy distributions of secondary charged particles from neutron recoil or reaction were calculated by use of PHITS and compared with a microdosimetric experiment. Our calculations of the neutron energy spectrum with PHITS showed good agreement with the results of ANISN in terms of the energy and structure of the peaks. However, the microdosimetric dose distribution spectrum with PHITS showed a remarkable discrepancy with the experimental one. The experimental spectrum could not be explained by PHITS when we used neutron beams of two mono-energies.

  8. Regenerating time series from ordinal networks.

    PubMed

    McCullough, Michael; Sakellariou, Konstantinos; Stemler, Thomas; Small, Michael

    2017-03-01

    Recently proposed ordinal networks not only afford novel methods of nonlinear time series analysis but also constitute stochastic approximations of the deterministic flow time series from which the network models are constructed. In this paper, we construct ordinal networks from discretely sampled continuous chaotic time series and then regenerate new time series by taking random walks on the ordinal network. We then investigate the extent to which the dynamics of the original time series are encoded in the ordinal networks and retained through the process of regenerating new time series, using several distinct quantitative approaches. First, we use recurrence quantification analysis on traditional recurrence plots and order recurrence plots to compare the temporal structure of the original time series with random walk surrogate time series. Second, we estimate the largest Lyapunov exponent from the original time series and investigate the extent to which this invariant measure can be estimated from the surrogate time series. Finally, estimates of correlation dimension are computed to compare the topological properties of the original and surrogate time series dynamics. Our findings show that ordinal networks constructed from univariate time series data constitute stochastic models which approximate important dynamical properties of the original systems.
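
    The pipeline described above can be sketched as follows, under assumed settings (embedding dimension 3, lag 1, a logistic-map input series) that are illustrative rather than the paper's: build an ordinal (permutation) transition network from the series, then regenerate a surrogate symbol sequence by a random walk on it.

```python
import numpy as np
from collections import defaultdict

def ordinal_symbols(x, dim=3, lag=1):
    """Map each embedding vector to its ordinal pattern (a permutation)."""
    n = len(x) - (dim - 1) * lag
    return [tuple(np.argsort(x[i:i + dim * lag:lag])) for i in range(n)]

def transition_network(symbols):
    """Edge weights are empirical transition probabilities between patterns."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(symbols, symbols[1:]):
        counts[a][b] += 1
    return {a: {b: c / sum(nbrs.values()) for b, c in nbrs.items()}
            for a, nbrs in counts.items()}

def random_walk(net, start, steps, rng):
    """Regenerate a surrogate symbol sequence by walking the network."""
    seq, node = [start], start
    for _ in range(steps):
        nbrs = net[node]
        node = list(nbrs)[rng.choice(len(nbrs), p=list(nbrs.values()))]
        seq.append(node)
    return seq

rng = np.random.default_rng(0)
x = np.empty(2000)
x[0] = 0.4
for i in range(1999):                  # chaotic logistic map as input
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

syms = ordinal_symbols(x)
net = transition_network(syms)
surrogate = random_walk(net, syms[0], 500, rng)
print(len(net), len(surrogate))        # observed patterns, 501 symbols
```

    Mapping surrogate symbols back to amplitude values (e.g. by sampling observed values for each pattern) is a further step the paper's analyses require but this sketch omits.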

  9. Regenerating time series from ordinal networks

    NASA Astrophysics Data System (ADS)

    McCullough, Michael; Sakellariou, Konstantinos; Stemler, Thomas; Small, Michael

    2017-03-01

    Recently proposed ordinal networks not only afford novel methods of nonlinear time series analysis but also constitute stochastic approximations of the deterministic flow time series from which the network models are constructed. In this paper, we construct ordinal networks from discretely sampled continuous chaotic time series and then regenerate new time series by taking random walks on the ordinal network. We then investigate the extent to which the dynamics of the original time series are encoded in the ordinal networks and retained through the process of regenerating new time series, using several distinct quantitative approaches. First, we use recurrence quantification analysis on traditional recurrence plots and order recurrence plots to compare the temporal structure of the original time series with random walk surrogate time series. Second, we estimate the largest Lyapunov exponent from the original time series and investigate the extent to which this invariant measure can be estimated from the surrogate time series. Finally, estimates of correlation dimension are computed to compare the topological properties of the original and surrogate time series dynamics. Our findings show that ordinal networks constructed from univariate time series data constitute stochastic models which approximate important dynamical properties of the original systems.

  10. Radiative transfer equation accounting for rotational Raman scattering and its solution by the discrete-ordinates method

    NASA Astrophysics Data System (ADS)

    Rozanov, Vladimir V.; Vountas, Marco

    2014-01-01

    Rotational Raman scattering of solar light in Earth's atmosphere leads to the filling-in of Fraunhofer and telluric lines observed in the reflected spectrum. The phenomenological derivation of the inelastic radiative transfer equation including rotational Raman scattering is presented. The different forms of the approximate radiative transfer equation with first-order rotational Raman scattering terms are obtained employing the Cabannes, Rayleigh, and Cabannes-Rayleigh scattering models. The solution of these equations is considered in the framework of the discrete-ordinates method, using rigorous and approximate approaches to derive particular integrals. An alternative forward-adjoint technique is suggested as well. A detailed description of the model is given, including the exact spectral matching and a binning scheme that significantly speeds up the calculations. The considered solution techniques are implemented in the radiative transfer software package SCIATRAN, and a specified benchmark setup is presented to enable readers to compare their own results transparently.

  11. Goal-based h-adaptivity of the 1-D diamond difference discrete ordinate method

    NASA Astrophysics Data System (ADS)

    Jeffers, R. S.; Kópházi, J.; Eaton, M. D.; Févotte, F.; Hülsemann, F.; Ragusa, J.

    2017-04-01

    The quantity of interest (QoI) associated with a solution of a partial differential equation (PDE) is not, in general, the solution itself, but a functional of the solution. Dual weighted residual (DWR) error estimators are one way of providing an estimate of the error in the QoI resulting from the discretisation of the PDE. This paper aims to provide an estimate of the error in the QoI due to the spatial discretisation, where the discretisation scheme being used is the diamond difference (DD) method in space and the discrete ordinate (SN) method in angle. The QoIs are reaction rates in detectors and the value of the eigenvalue (Keff), for 1-D fixed source and eigenvalue (Keff criticality) neutron transport problems respectively. Local values of the DWR over individual cells are used as error indicators for goal-based mesh refinement, which aims to give an optimal mesh for a given QoI.

  12. Radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements

    NASA Astrophysics Data System (ADS)

    Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego

    2018-07-01

    In this paper we analyze the accuracy and efficiency of several radiative transfer models for inferring cloud parameters from radiances measured by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR). The radiative transfer models are the exact discrete ordinate and matrix operator methods with matrix exponential, and the approximate asymptotic and equivalent Lambertian cloud models. To deal with the computationally expensive radiative transfer calculations, several acceleration techniques are used, such as the telescoping technique, the method of false discrete ordinates, the correlated k-distribution method, and principal component analysis (PCA). We found that, for the EPIC oxygen A-band absorption channel at 764 nm, the exact models using the correlated k-distribution in conjunction with PCA yield an accuracy better than 1.5% and a computation time of 18 s for radiance calculations at 5 viewing zenith angles.

  13. Rarefied gas flow through two-dimensional nozzles

    NASA Technical Reports Server (NTRS)

    De Witt, Kenneth J.; Jeng, Duen-Ren; Keith, Theo G., Jr.; Chung, Chan-Hong

    1989-01-01

    A kinetic theory analysis is made of the flow of a rarefied gas from one reservoir to another through two-dimensional nozzles with arbitrary curvature. The Boltzmann equation, simplified by a model collision integral, is solved by means of finite-difference approximations with the discrete ordinate method. The physical space is transformed by a general grid generation technique, and the velocity space is transformed to a polar coordinate system. A numerical code is developed which can be applied to any two-dimensional passage of complicated geometry for flow regimes from free-molecular to slip. Numerical values of flow quantities can be calculated for the entire physical space, including both inside the nozzle and in the outside plume. Predictions are made for the case of parallel slots and compared with existing literature data. Also, results are presented for the cases of convergent or divergent slots and two-dimensional nozzles with arbitrary curvature at arbitrary Knudsen number.

  14. Some Remarks on GMRES for Transport Theory

    NASA Technical Reports Server (NTRS)

    Patton, Bruce W.; Holloway, James Paul

    2003-01-01

    We review some work on the application of GMRES to the solution of the discrete ordinates transport equation in one dimension. We note that GMRES can be applied directly to the angular flux vector, or it can be applied to only a vector of flux moments as needed to compute the scattering operator of the transport equation. In the former case we illustrate both the delights and defects of ILU right-preconditioners for problems with anisotropic scatter and for problems with upscatter. When working with flux moments we note that GMRES can be used as an accelerator for any existing transport code whose solver is based on a stationary fixed-point iteration, including transport sweeps and DSA transport sweeps. We also provide some numerical illustrations of this idea. Finally, we show how space can be traded for speed by taking multiple transport sweeps per GMRES iteration. Key Words: transport equation, GMRES, Krylov subspace
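
    A hedged sketch of the flux-moments idea: a transport code's stationary scheme has the fixed-point form phi = S(phi) + b, where S applies one sweep to the scattering source, so GMRES can instead solve (I - S) phi = b using the existing sweep as a black box. Here S is a stand-in contraction matrix, not a real transport sweep, and the SciPy-based setup is an assumption of this sketch.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(1)
n = 100
M = rng.random((n, n))
M *= 0.9 / np.linalg.norm(M, 2)        # contraction: spectral norm 0.9
b = rng.random(n)

sweep = lambda phi: M @ phi            # stands in for one transport sweep

# GMRES solves (I - S) phi = b, needing only sweep applications
A = LinearOperator((n, n), matvec=lambda v: v - sweep(v), dtype=float)
phi_gmres, info = gmres(A, b)

# reference: the stationary fixed-point (source) iteration
phi_si = np.zeros(n)
for _ in range(500):
    phi_si = sweep(phi_si) + b

print(info, float(np.max(np.abs(phi_gmres - phi_si))) < 1e-3)
```

    Because GMRES only needs matrix-vector products, wrapping an existing sweep routine this way requires no refactoring of the transport code's interior.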

  15. Spatial coding of ordinal information in short- and long-term memory.

    PubMed

    Ginsburg, Véronique; Gevers, Wim

    2015-01-01

    The processing of numerical information induces a spatial response bias: faster responses to small numbers with the left hand and faster responses to large numbers with the right hand. Most theories agree that long-term representations underlie this so-called SNARC effect (Spatial Numerical Association of Response Codes; Dehaene et al., 1993). However, a spatial response bias was also observed with the activation of temporary position-space associations in working memory (ordinal position effect; van Dijck and Fias, 2011). Items belonging to the beginning of a memorized sequence are responded to faster with the left hand, while items at the end of the sequence are responded to faster with the right hand. The theoretical possibility was put forward that the SNARC effect is an instance of the ordinal position effect, with the empirical consequence that the SNARC effect and the ordinal position effect cannot be observed simultaneously. In two experiments we falsify this claim by demonstrating that the SNARC effect and the ordinal position effect are not mutually exclusive. Consequently, this suggests that the SNARC effect and the ordinal position effect result from the activation of different representations. We conclude that spatial response biases can result from the activation of both pre-existing positions in long-term memory and temporary space associations in working memory at the same time.

  16. Graphical aids for visualizing and interpreting patterns in departures from agreement in ordinal categorical observer agreement data.

    PubMed

    Bangdiwala, Shrikant I

    2017-01-01

    When studying the agreement between two observers rating the same n units into the same k discrete ordinal categories, Bangdiwala (1985) proposed using the "agreement chart" to visually assess agreement. This article proposes that often it is more interesting to focus on the patterns of disagreement and visually understanding the departures from perfect agreement. The article reviews the use of graphical techniques for descriptively assessing agreement and disagreements, and also reviews some of the available summary statistics that quantify such relationships.
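
    One widely used summary statistic from this literature is Bangdiwala's B, which compares the shaded agreement squares of the agreement chart with the rectangles spanned by the marginals; B = 1 indicates perfect agreement. A minimal sketch, using an invented 3x3 rating table:

```python
import numpy as np

def bangdiwala_B(table):
    """B = sum(n_ii^2) / sum(row_i * col_i); 1 means perfect agreement."""
    t = np.asarray(table, dtype=float)
    rows, cols = t.sum(axis=1), t.sum(axis=0)
    return float((np.diag(t) ** 2).sum() / (rows * cols).sum())

perfect = np.diag([10, 20, 30])          # all mass on the diagonal
mixed = [[10, 2, 0],                     # invented 3x3 rating table
         [3, 15, 4],
         [1, 2, 13]]
print(bangdiwala_B(perfect))             # -> 1.0
print(round(bangdiwala_B(mixed), 3))     # -> 0.576
```

    Weighted variants that give partial credit to near-diagonal cells exist for ordinal categories; the unweighted form above is the simplest case.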

  17. Newsracks and the First Amendment.

    ERIC Educational Resources Information Center

    Stevens, George E.

    1989-01-01

    Discusses court cases dealing with whether a community may ban newsracks, how much discretion city officials may exercise in regulating vending machines, and what limitations in display and placement are reasonable. Finds that acceptable city ordinances are narrow and content neutral. (RS)

  18. Numerical Model of Multiple Scattering and Emission from Layering Snowpack for Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    Jin, Y.; Liang, Z.

    2002-12-01

    The vector radiative transfer (VRT) equation is an integro-differential equation describing the multiple scattering, absorption and transmission of the four Stokes parameters in random scattering media. From the formal integral solution of the VRT equation, lower-order solutions, such as the first-order scattering for a layered medium or the second-order scattering for a half space, can be obtained. The lower-order solutions are usually good at low frequency, when high-order scattering is negligible. It is not feasible to continue the iteration to obtain high-order scattering solutions, because too many folds of integration would be involved. In space-borne microwave remote sensing, for example, the DMSP (Defense Meteorological Satellite Program) SSM/I (Special Sensor Microwave/Imager) employs seven channels at 19, 22, 37 and 85 GHz. Multiple scattering from terrain surfaces such as snowpack cannot be neglected at these channels. The discrete ordinate and eigen-analysis method has been studied to account for multiple scattering and applied to remote sensing of atmospheric precipitation, snowpack, etc. Snowpack was modeled as a layer of dense spherical particles, and the VRT for a layer of uniformly dense spherical particles has been numerically studied by the discrete ordinate method. However, due to surface melting and refrozen crusts, the snowpack undergoes stratification, forming inhomogeneous profiles of ice grain size, fractional volume, physical temperature, etc. It becomes necessary to study multiple scattering and emission from stratified snowpack of dense ice grains. But the discrete ordinate and eigen-analysis method cannot simply be applied to a multi-layer model, because numerically solving the resulting set of coupled VRT equations is difficult.
    Stratifying the inhomogeneous medium into multiple slabs and employing the first-order Mueller matrix of each thin slab, this paper develops an iterative method to derive high-order scattering solutions for the whole scattering medium. High-order scattering and emission from inhomogeneous, stratified media of dense spherical particles are numerically obtained. The brightness temperatures at low frequency, such as 5.3 GHz without high-order scattering, and at the SSM/I channels with high-order scattering, are obtained. The approach is also compared with the conventional discrete ordinate method for a uniform layer model. Numerical simulations for inhomogeneous snowpack are also compared with microwave remote sensing measurements.

  19. TH-AB-BRA-09: Stability Analysis of a Novel Dose Calculation Algorithm for MRI Guided Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zelyak, O; Fallone, B; Cross Cancer Institute, Edmonton, AB

    2016-06-15

    Purpose: To determine the iterative deterministic solution stability of the Linear Boltzmann Transport Equation (LBTE) in the presence of magnetic fields. Methods: The LBTE with magnetic fields under investigation is derived using a discrete ordinates approach. The stability analysis is performed using analytical and numerical methods. Analytically, spectral Fourier analysis is used to obtain the convergence rate of the source iteration procedures, based on finding the largest eigenvalue of the iterative operator. This eigenvalue is a function of relevant physical parameters, such as magnetic field strength and material properties, and provides essential information about the domain of applicability required for clinically optimal parameter selection and maximum speed of convergence. The analytical results are reinforced by numerical simulations performed using the same discrete ordinates method in angle, and a discontinuous finite element spatial approach. Results: The spectral radius for the source iteration technique of the time-independent transport equation with isotropic and anisotropic scattering centers inside an infinite 3D medium is equal to the ratio of the differential and total cross sections. The result is confirmed numerically by solving the LBTE and is in full agreement with previously published results. The addition of a magnetic field reveals that the convergence becomes dependent on the strength of the magnetic field, the energy group discretization, and the order of anisotropic expansion. Conclusion: The source iteration technique for solving the LBTE with magnetic fields with the discrete ordinates method leads to divergent solutions in the limiting cases of small energy discretizations and high magnetic field strengths. Future investigations into non-stationary Krylov subspace techniques as an iterative solver will be performed, as this has been shown to produce greater stability than source iteration.
Furthermore, a stability analysis of a discontinuous finite element space-angle approach (which has been shown to provide the greatest stability) will also be investigated. Dr. B Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license the Alberta bi-planar linac MR for commercialization).
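
    The field-free result quoted above (spectral radius equal to the ratio of the scattering and total cross sections) can be checked numerically in the simplest setting, an infinite homogeneous medium with illustrative one-group cross sections:

```python
# Illustrative one-group cross sections (not the paper's data):
sig_t, sig_s, q = 1.0, 0.8, 1.0
c = sig_s / sig_t                       # scattering ratio
phi_exact = q / (sig_t - sig_s)         # infinite-medium solution

phi, errs = 0.0, []
for _ in range(30):
    phi = (sig_s * phi + q) / sig_t     # one source iteration
    errs.append(abs(phi - phi_exact))

# successive error ratios approach the spectral radius c = 0.8
ratios = [e2 / e1 for e1, e2 in zip(errs, errs[1:])]
print(round(ratios[-1], 6))             # -> 0.8
```

    The geometric error decay with ratio c is exactly why source iteration stalls in highly scattering media and why the paper's stability question matters.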

  20. Applying Multivariate Discrete Distributions to Genetically Informative Count Data.

    PubMed

    Kirkpatrick, Robert M; Neale, Michael C

    2016-03-01

    We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of the two discrete models. The new methods are implemented using R and OpenMx and are freely available.

  1. On the use of flux limiters in the discrete ordinates method for 3D radiation calculations in absorbing and scattering media

    NASA Astrophysics Data System (ADS)

    Godoy, William F.; DesJardin, Paul E.

    2010-05-01

    The application of flux limiters to the discrete ordinates method (DOM), SN, for radiative transfer calculations is discussed and analyzed for 3D enclosures, for cases in which the intensities are strongly coupled to each other, such as radiative equilibrium and scattering media. A Newton-Krylov iterative method (GMRES) solves the final systems of linear equations, along with a domain decomposition strategy for parallel computation using message passing libraries in a distributed memory system. Ray effects due to angular discretization and errors due to domain decomposition are minimized until only small variations are introduced by these effects, in order to focus on the influence of flux limiters on errors due to spatial discretization, known as numerical diffusion, smearing or false scattering. Results are presented for the DOM-integrated quantities such as heat flux, irradiation and emission. A variety of flux limiters are compared to "exact" solutions available in the literature, such as the integral solution of the RTE for pure absorbing-emitting media and isotropic scattering cases, and a Monte Carlo solution for a forward scattering case. Additionally, a non-homogeneous 3D enclosure is included to extend the use of flux limiters to more practical cases. The overall balance of convergence, accuracy, speed and stability using flux limiters is shown to be superior compared to step schemes for any test case.
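
    The limiter functions themselves are standard and independent of the transport solver; a minimal sketch of three common choices (minmod, van Leer, superbee), evaluated on the ratio r of consecutive solution gradients, is given below. Their coupling to a 3D S_N sweep is omitted here.

```python
import numpy as np

def minmod(r):
    return np.maximum(0.0, np.minimum(1.0, r))

def van_leer(r):
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def superbee(r):
    return np.maximum.reduce([np.zeros_like(r),
                              np.minimum(2.0 * r, 1.0),
                              np.minimum(r, 2.0)])

# r < 0: all limiters return 0 (fall back to the monotone low-order flux);
# large r: minmod caps at 1 (most diffusive), superbee at 2 (least).
r = np.array([-1.0, 0.5, 1.0, 4.0])
print(minmod(r), van_leer(r), superbee(r))
```

    The limited face value blends a high-order reconstruction (less numerical smearing) with a monotone low-order one, which is the trade-off the paper evaluates inside DOM sweeps.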

  2. 3D-radiative transfer in terrestrial atmosphere: An efficient parallel numerical procedure

    NASA Astrophysics Data System (ADS)

    Bass, L. P.; Germogenova, T. A.; Nikolaeva, O. V.; Kokhanovsky, A. A.; Kuznetsov, V. S.

    2003-04-01

    Light propagation and scattering in the terrestrial atmosphere is usually studied in the framework of the 1D radiative transfer theory [1]. However, in reality particles (e.g., ice crystals, solid and liquid aerosols, cloud droplets) are randomly distributed in 3D space. In particular, their concentrations vary both in vertical and horizontal directions. Therefore, 3D effects influence modern cloud and aerosol retrieval procedures, which are currently based on the 1D radiative transfer theory. It should be pointed out that the standard radiative transfer equation allows these more complex situations to be studied as well [2]. In recent years the parallel version of the 2D and 3D RADUGA code has been developed. This version is successfully used in gamma and neutron transport problems [3]. Applications of this code to radiative transfer problems in the atmosphere are contained in [4]. The capabilities of the RADUGA code are presented in [5]. The RADUGA code system is a universal solver of radiative transfer problems for complicated models, including 2D and 3D aerosol and cloud fields with arbitrary scattering anisotropy, light absorption, inhomogeneous underlying surface and topography. Both delta-type and distributed light sources can be accounted for in the framework of the algorithm developed. The accurate numerical procedure is based on the new discrete ordinate SWDD scheme [6]. The algorithm is specifically designed for parallel supercomputers. The version RADUGA 5.1(P) can run on the MBC1000M [7] (768 processors, with 10 Gb of hard disc memory for each processor); the peak performance is 1 Tflops. The corresponding scalar version, RADUGA 5.1, runs on a PC. As a first example of application of the algorithm developed, we have studied the shadowing effects of clouds on the neighboring cloudless atmosphere, depending on the cloud optical thickness, surface albedo, and illumination conditions. This is of importance for the development of modern satellite aerosol retrieval algorithms.
[1] Sobolev, V. V., 1972: Light Scattering in Planetary Atmospheres, Moscow: Nauka. [2] Evans, K. F., 1998: The spherical harmonic discrete ordinate method for three-dimensional atmospheric radiative transfer, J. Atmos. Sci., 55, 429-446. [3] Bass, L. P., Germogenova, T. A., Kuznetsov, V. S., Nikolaeva, O. V.: RADUGA 5.1 and RADUGA 5.1(P) codes for stationary transport equation solution in 2D and 3D geometries on single- and multiprocessor computers. Report at the seminar "Algorithms and Codes for Neutron-Physical Calculations of Nuclear Reactors" (Neutronica 2001), Obninsk, Russia, 30 October-2 November 2001. [4] Germogenova, T. A., Bass, L. P., Kuznetsov, V. S., Nikolaeva, O. V.: Mathematical modeling on parallel computers of solar and laser radiation transport in a 3D atmosphere. Report at the International Symposium of CIS countries "Atmospheric Radiation", 18-21 June 2002, St. Petersburg, Russia, pp. 15-16. [5] Bass, L. P., Germogenova, T. A., Nikolaeva, O. V., Kuznetsov, V. S.: Radiative Transfer Universal 2D-3D Code RADUGA 5.1(P) for Multiprocessor Computers. Abstract, poster report at this meeting. [6] Bass, L. P., Nikolaeva, O. V.: Correct Calculation of Angular Flux Distribution in Strongly Heterogeneous Media and Voids. Proc. of the Joint International Conference on Mathematical Methods and Supercomputing for Nuclear Applications, Saratoga Springs, New York, 5-9 October 1997, pp. 995-1004. [7] http://www/jscc.ru

  3. MONET: multidimensional radiative cloud scene model

    NASA Astrophysics Data System (ADS)

    Chervet, Patrick

    1999-12-01

    All cloud fields exhibit variable structures (bulges) and heterogeneities in their water distributions. With the development of multidimensional radiative models by the atmospheric community, it is now possible to describe horizontal heterogeneities of the cloud medium and to study their influence on radiative quantities. We have developed a complete radiative cloud scene generator, called MONET (French acronym for MOdelisation des Nuages En Tridim.), to compute radiative cloud scenes from visible to infrared wavelengths for various viewing and solar conditions, different spatial scales, and various locations on the Earth. MONET is composed of two parts: a cloud medium generator (CSSM, the Cloud Scene Simulation Model) developed by the Air Force Research Laboratory, and a multidimensional radiative code (SHDOM, the Spherical Harmonic Discrete Ordinate Method) developed at the University of Colorado by Evans. MONET computes images for scenarios defined by user inputs: date, location, viewing angles, wavelength, spatial resolution, meteorological conditions (atmospheric profiles, cloud types), etc. For the same cloud scene, we can output different viewing conditions and/or various wavelengths. Shadowing effects on clouds or the ground are taken into account. This code is useful for studying heterogeneity effects on satellite data for various cloud types and spatial resolutions, and for determining the specifications of new imaging sensors.

  4. A Kinetics Model for KrF Laser Amplifiers

    NASA Astrophysics Data System (ADS)

    Giuliani, J. L.; Kepple, P.; Lehmberg, R.; Obenschain, S. P.; Petrov, G.

    1999-11-01

    A computer kinetics code has been developed to model the temporal and spatial behavior of an e-beam-pumped KrF laser amplifier. The deposition of the primary beam electrons is assumed to be spatially uniform, and the energy distribution function of the nascent electron population is calculated to be near-Maxwellian below 10 eV. For an initial Kr/Ar/F2 composition, the code calculates the densities of 24 species subject to over 100 reactions, with 1-D spatial resolution (typically 16 zones) along the longitudinal lasing axis. Enthalpy accounting for each process is performed to partition the energy into internal, thermal, and radiative components. The electron and heavy-particle temperatures are followed for energy conservation and excitation rates. Transport of the lasing photons is performed along the axis on a dense subgrid using the method of characteristics. Amplified spontaneous emission is calculated using a discrete ordinates approach and includes contributions to the local intensity from the whole amplifier volume. Specular reflection off the side walls and the rear mirror is included. Results of the model will be compared with data from the NRL NIKE laser and other published results.

  5. 76 FR 77549 - Colorado River Indian Tribes-Amendment to Health & Safety Code, Article 2. Liquor

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-13

    ... Health & Safety Code, Article 2. Liquor AGENCY: Bureau of Indian Affairs, Interior. ACTION: Notice. SUMMARY: This notice publishes the amendment to the Colorado River Tribal Health and Safety Code, Article... Code, Article 2, Liquor by Ordinance No. 10-03 on December 13, 2010. This notice is published in...

  6. 28 CFR 36.601 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Certification of State Laws or Local Building Codes § 36.601... means a State law or local building code or similar ordinance, or part thereof, that establishes... designee. Certification of equivalency means a final certification that a code meets or exceeds the minimum...

  7. 28 CFR 36.601 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Certification of State Laws or Local Building Codes § 36.601... means a State law or local building code or similar ordinance, or part thereof, that establishes... designee. Certification of equivalency means a final certification that a code meets or exceeds the minimum...

  8. 28 CFR 36.601 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Certification of State Laws or Local Building Codes § 36.601... means a State law or local building code or similar ordinance, or part thereof, that establishes... designee. Certification of equivalency means a final certification that a code meets or exceeds the minimum...

  9. Power and sample size evaluation for the Cochran-Mantel-Haenszel mean score (Wilcoxon rank sum) test and the Cochran-Armitage test for trend.

    PubMed

    Lachin, John M

    2011-11-10

    The power of a chi-square test, and thus the required sample size, are functions of the noncentrality parameter, which can be obtained as the limiting expectation of the test statistic under an alternative hypothesis specification. Herein, we apply this principle to derive simple expressions for two tests that are commonly applied to discrete ordinal data. The Wilcoxon rank sum test for the equality of distributions in two groups is algebraically equivalent to the Mann-Whitney test; the Kruskal-Wallis test applies to multiple groups. These tests are equivalent to a Cochran-Mantel-Haenszel mean score test using rank scores for a set of C discrete categories. Although various authors have assessed the power function of the Wilcoxon and Mann-Whitney tests, herein it is shown that the power of these tests with discrete observations, that is, with tied ranks, is readily provided by the power function of the corresponding Cochran-Mantel-Haenszel mean score test for two and R > 2 groups. These expressions yield results virtually identical to those derived previously for rank scores and also apply to other score functions. The Cochran-Armitage test for trend assesses whether there is a monotonically increasing or decreasing trend in the proportions with a positive outcome or response over the C ordered categories of an ordinal independent variable, for example, dose. Herein, it is shown that the power of the test is a function of the slope of the response probabilities over the ordinal scores assigned to the groups, which yields simple expressions for the power of the test. Copyright © 2011 John Wiley & Sons, Ltd.
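    The noncentrality-parameter route to power described in this abstract can be sketched numerically. The block below is a minimal illustration, not Lachin's actual derivation: it evaluates the power of a chi-square test with noncentrality λ via the noncentral chi-square survival function, and inverts it for a required sample size under the common assumption λ = N·Δ, where Δ is a hypothetical per-subject contribution to the noncentrality. SciPy's `chi2` and `ncx2` distributions are assumed available.

```python
from scipy.stats import chi2, ncx2

def chisq_power(ncp, df=1, alpha=0.05):
    """Power = P(X > crit), X ~ noncentral chi-square(df, ncp), where crit
    is the central chi-square critical value at significance level alpha."""
    crit = chi2.ppf(1.0 - alpha, df)
    return ncx2.sf(crit, df, ncp)

def required_n(delta, df=1, alpha=0.05, target=0.90):
    """Smallest N with power >= target, assuming ncp = N * delta
    (delta = hypothetical per-subject contribution to the noncentrality)."""
    n = 1
    while chisq_power(n * delta, df, alpha) < target:
        n += 1
    return n
```

    As a sanity check, with df = 1 this recovers the familiar relation that λ ≈ (z₀.₉₇₅ + z₀.₈)² ≈ 7.85 gives power 0.80 at α = 0.05.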

  10. Implicitly causality enforced solution of multidimensional transient photon transport equation.

    PubMed

    Handapangoda, Chintha C; Premaratne, Malin

    2009-12-21

    A novel method for solving the multidimensional transient photon transport equation for laser pulse propagation in biological tissue is presented. A Laguerre expansion is used to represent the time dependency of the incident short pulse. Owing to the intrinsic causal nature of Laguerre functions, the technique automatically preserves the causality constraints of the transient signal. This expansion of the radiance in a Laguerre basis transforms the transient photon transport equation into its steady-state form. The resulting equations are solved using the discrete ordinates method with a finite volume approach. The method therefore handles general anisotropic, inhomogeneous media in a single formulation, with the added flexibility of invoking higher-order discrete ordinate quadrature sets. Compared with existing strategies, this method thus represents the intensity with high accuracy, minimizing numerical dispersion and false-propagation errors. The application of the method to one-, two-, and three-dimensional geometries is provided.
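    The causal Laguerre machinery can be sketched in a few lines. The basis functions φₙ(t) = e^(−t/2) Lₙ(t) are orthonormal on [0, ∞) and vanish for t < 0, which is the causality property the abstract relies on; a transient signal's coefficients reduce to weighted integrals that Gauss-Laguerre quadrature handles naturally. The sketch below is our own illustration (not the authors' code): it expands a sample decaying pulse and reconstructs it.

```python
import numpy as np
from numpy.polynomial import laguerre

def laguerre_coeffs(f, nterms, nquad=60):
    """Project f onto the causal basis phi_n(t) = exp(-t/2) * L_n(t):
    c_n = integral_0^inf f(t) exp(-t/2) L_n(t) dt, rewritten for the
    Gauss-Laguerre weight exp(-t)."""
    t, w = laguerre.laggauss(nquad)      # nodes/weights for weight exp(-t)
    coeffs = np.empty(nterms)
    for n in range(nterms):
        Ln = laguerre.lagval(t, np.eye(nterms)[n])   # evaluate L_n(t)
        coeffs[n] = np.sum(w * f(t) * np.exp(t / 2.0) * Ln)
    return coeffs

def laguerre_eval(coeffs, t):
    """Reconstruct f(t) = sum_n c_n exp(-t/2) L_n(t) on t >= 0."""
    vals = sum(c * laguerre.lagval(t, np.eye(len(coeffs))[n])
               for n, c in enumerate(coeffs))
    return np.exp(-t / 2.0) * vals
```

    For f(t) = e^(−t) the coefficients decay geometrically (c₀ = 2/3, cₙ = (2/3)(1/3)ⁿ), so a handful of terms reconstructs the pulse accurately.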

  11. Linearized radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements

    NASA Astrophysics Data System (ADS)

    Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego

    2018-07-01

    In this paper, we describe several linearized radiative transfer models which can be used for the retrieval of cloud parameters from EPIC (Earth Polychromatic Imaging Camera) measurements. The approaches under examination are (1) the linearized forward approach, represented in this paper by the linearized discrete ordinate and matrix operator methods with matrix exponential, and (2) the forward-adjoint approach based on the discrete ordinate method with matrix exponential. To enhance the performance of the radiative transfer computations, the correlated k-distribution method and the Principal Component Analysis (PCA) technique are used. We provide a compact description of the proposed methods, as well as a numerical analysis of their accuracy and efficiency when simulating EPIC measurements in the oxygen A-band channel at 764 nm. We found that the computation time of the forward-adjoint approach using the correlated k-distribution method in conjunction with PCA is approximately 13 s for simultaneously computing the derivatives with respect to cloud optical thickness and cloud top height.
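    The PCA speed-up mentioned above rests on a simple observation: across a spectral band, optical-property spectra are highly correlated, so many monochromatic radiative transfer calls can be replaced by calls at a few principal components plus a cheap correction. The toy sketch below is our illustration only (the paper's actual scheme combines PCA with the correlated k-distribution method); it compresses a set of synthetic, hypothetical optical-depth spectra with an SVD:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "optical depth spectra": 500 spectral points, each spectrum a
# smooth combination of 3 underlying profiles (hypothetical stand-in data).
x = np.linspace(0.0, 1.0, 500)
basis = np.stack([np.exp(-x), np.sin(2 * np.pi * x), x ** 2])   # (3, 500)
spectra = rng.uniform(0.5, 2.0, (200, 3)) @ basis               # (200, 500)

# PCA via SVD of the mean-centred data matrix.
mean = spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
k = 3                                  # retain k principal components
scores = U[:, :k] * s[:k]              # low-dimensional representation
reconstructed = scores @ Vt[:k] + mean

rel_err = np.max(np.abs(reconstructed - spectra)) / np.max(spectra)
```

    Radiative transfer is then run only for the k component spectra instead of all 500 spectral points, which is where the quoted factor-of-hundreds savings comes from.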

  12. Numerical simulation of rarefied gas flow through a slit

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Jeng, Duen-Ren; De Witt, Kenneth J.; Chung, Chan-Hong

    1990-01-01

    Two different approaches, the finite-difference method coupled with the discrete-ordinate method (FDDO), and the direct-simulation Monte Carlo (DSMC) method, are used in the analysis of the flow of a rarefied gas from one reservoir to another through a two-dimensional slit. The cases considered are a hard-vacuum downstream pressure, finite pressure ratios, and isobaric pressure with thermal diffusion, which are not well established in spite of the simplicity of the flow field. In the FDDO analysis, by employing the discrete-ordinate method, the Boltzmann equation simplified by a model collision integral is transformed to a set of partial differential equations which are continuous in physical space but are point functions in molecular velocity space. This set of partial differential equations is solved by means of a finite-difference approximation. In the DSMC analysis, three kinds of collision sampling techniques, the time counter (TC) method, the null collision (NC) method, and the no time counter (NTC) method, are used.
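    The core FDDO idea, evaluating the distribution function only at discrete points (ordinates) in molecular velocity space, can be illustrated with a Gauss-Hermite quadrature. The sketch below is illustrative only, not the authors' scheme: it recovers the moments of a 1D Maxwellian from its values at the discrete velocity ordinates.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermgauss

def maxwellian(v, n, u, rt):
    """1D Maxwellian with density n, bulk velocity u, and RT = kT/m."""
    return n / math.sqrt(2.0 * math.pi * rt) * np.exp(-(v - u) ** 2 / (2.0 * rt))

def moments_from_ordinates(f_vals, v, W):
    """Density, bulk velocity, and RT from f evaluated at the ordinates."""
    dens = np.sum(W * f_vals)
    vel = np.sum(W * v * f_vals) / dens
    rt = np.sum(W * (v - vel) ** 2 * f_vals) / dens
    return dens, vel, rt

# Discrete ordinates in velocity space: Gauss-Hermite nodes mapped to v.
n_true, u_true, rt_true = 1.2, 0.3, 0.8
x, w = hermgauss(16)                               # weight exp(-x^2)
v = u_true + math.sqrt(2.0 * rt_true) * x          # velocity ordinates
W = w * np.exp(x ** 2) * math.sqrt(2.0 * rt_true)  # quadrature weights in v
dens, vel, rt = moments_from_ordinates(maxwellian(v, n_true, u_true, rt_true), v, W)
```

    Because the quadrature is exact for polynomial moments of a Gaussian, 16 ordinates already reproduce the density, velocity, and temperature essentially to machine precision.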

  13. FDDO and DSMC analyses of rarefied gas flow through 2D nozzles

    NASA Technical Reports Server (NTRS)

    Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren; Penko, Paul F.

    1992-01-01

    Two different approaches, the finite-difference method coupled with the discrete-ordinate method (FDDO), and the direct-simulation Monte Carlo (DSMC) method, are used in the analysis of the flow of a rarefied gas expanding through a two-dimensional nozzle and into a surrounding low-density environment. In the FDDO analysis, by employing the discrete-ordinate method, the Boltzmann equation simplified by a model collision integral is transformed to a set of partial differential equations which are continuous in physical space but are point functions in molecular velocity space. This set of partial differential equations is solved by means of a finite-difference approximation. In the DSMC analysis, the variable hard sphere model is used as a molecular model and the no time counter method is employed as a collision sampling technique. The results of the FDDO and DSMC methods show good agreement. The FDDO method requires less computational effort than the DSMC method by factors of 10 to 40 in CPU time, depending on the degree of rarefaction.

  14. Error Correcting Codes and Related Designs

    DTIC Science & Technology

    1990-09-30

    Theory, IT-37 (1991), 1222-1224. 6. Codes and designs, existence and uniqueness, Discrete Math ., to appear. 7. (with R. Brualdi and N. Cai), Orphan...structure of the first order Reed-Muller codes, Discrete Math ., to appear. 8. (with J. H. Conway and N.J.A. Sloane), The binary self-dual codes of length up...18, 1988. 4. "Codes and Designs," Mathematics Colloquium, Technion, Haifa, Israel, March 6, 1989. 5. "On the Covering Radius of Codes," Discrete Math . Group

  15. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2011-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward-Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight-window) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site-boundary dose from arrays of commercial spent-fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2) to O(10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
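    The essence of CADIS can be shown in a few lines. Given an adjoint (importance) function φ† and a source q, CADIS biases the source toward important regions and assigns particle birth weights so that the physical source is preserved; the weight-window centers are consistent with those weights by construction. The sketch below is a schematic of the standard formulas on a toy mesh, not code from MAVRIC or ADVANTG, and the numbers are hypothetical:

```python
import numpy as np

def cadis_parameters(q, adj_flux):
    """Biased source, birth weights, and detector response from a source
    distribution q and an adjoint flux phi-dagger on the same mesh."""
    q = np.asarray(q, dtype=float)
    adj = np.asarray(adj_flux, dtype=float)
    response = np.sum(q * adj)       # R = <q, phi-dagger>
    q_biased = q * adj / response    # biased source q_hat (sums to 1)
    weights = response / adj         # birth weight w = q / q_hat = R / phi-dagger
    return q_biased, weights, response

q = np.array([0.7, 0.2, 0.1])        # hypothetical source distribution
adj = np.array([0.05, 0.4, 2.5])     # hypothetical adjoint flux (importance)
q_hat, w, R = cadis_parameters(q, adj)
```

    The defining identity is that the biased source times the birth weight returns the physical source, q̂ᵢ·wᵢ = qᵢ, so the game is fair while particles are preferentially born where they matter.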

  16. Development of an atmospheric infrared radiation model with high clouds for target detection

    NASA Astrophysics Data System (ADS)

    Bellisario, Christophe; Malherbe, Claire; Schweitzer, Caroline; Stein, Karin

    2016-10-01

    In the field of target detection, simulating the background in the camera FOV (field of view) is a significant issue, since the presence of heterogeneous clouds can strongly affect a target detection algorithm. To address this issue, we present the construction of the CERAMIC package (Cloudy Environment for RAdiance and MIcrophysics Computation), which combines cloud microphysical computation and 3D radiance computation to produce a 3D atmospheric infrared radiance in the presence of clouds. The input to CERAMIC is an observer with a spatial position and a defined FOV (specified by a zenithal angle and an azimuthal angle). We introduce a 3D cloud generator, provided by the French LaMP, for a statistical, simplified-physics approach; the generator is driven by atmospheric profiles that include a heterogeneity factor for 3D fluctuations. CERAMIC also includes a cloud database from the French CNRM for a physical approach. We present statistics on the spatial and temporal evolution of the clouds. Molecular optical properties are provided by the MATISSE model (Modélisation Avancée de la Terre pour l'Imagerie et la Simulation des Scènes et de leur Environnement). The 3D radiance is computed with the LUCI model (LUminance de CIrrus). It takes into account 3D microphysics with a resolution of 5 cm-1 over a SWIR bandwidth. To keep computation times low, most radiance contributions are calculated with analytical expressions. Multiple scattering is more difficult to model; here a discrete ordinate method with correlated-k distributions is used to compute the average radiance, and a 3D fluctuation model (based on a behavioral model) accounts for microphysical variations. Finally, the following quantities are calculated: transmission, thermal radiance, single-scattering radiance, radiance observed through the cloud, and multiple-scattering radiance.
    Spatial images are produced with a dimension of 10 km x 10 km and a resolution of 0.1 km, with each radiance contribution separated. We present first results for typical scenarios. A 1D comparison is made with the MATISSE model, separating each calculated radiance component, in order to validate the outputs. The 3D performance of the code is demonstrated by comparing LUCI to SHDOM, a reference code that uses the Spherical Harmonic Discrete Ordinate Method for 3D atmospheric radiative transfer. The results obtained by the different codes agree closely, and the sources of the small differences are discussed. An important gain in computation time is observed for LUCI versus SHDOM. We conclude with various scenarios for case analysis.

  17. Multilevel acceleration of scattering-source iterations with application to electron transport

    DOE PAGES

    Drumm, Clif; Fan, Wesley

    2017-08-18

    Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (SN) or spherical-harmonics (PN) solve to accelerate convergence of a high-order SN source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace. TSA has been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and problems with artificial cross sections with large scattering ratios. These methods were compared and evaluated by considering material discontinuities and scattering anisotropy. The observed accelerations are highly problem-dependent, but speedup factors of around 10 have been observed in typical applications.
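    The contrast between source iteration and a Krylov (GMRES) solve can be seen on a toy fixed-point problem. Writing one scattering-source update as φ ← cSφ + b, source iteration converges geometrically with the scattering ratio c, while GMRES solves the equivalent linear system (I − cS)φ = b. The matrix below is an artificial stand-in for illustration, not a transport sweep from SCEPTRE:

```python
import numpy as np
from scipy.sparse.linalg import gmres

n = 50
S = np.full((n, n), 1.0 / n)    # toy "scattering" operator (row sums = 1)
c = 0.9                          # scattering ratio: SI error decays as c^k
b = np.ones(n)

# Source iteration: phi <- c*S*phi + b
phi_si = np.zeros(n)
for _ in range(400):
    phi_si = c * (S @ phi_si) + b

# Krylov alternative: solve (I - c*S) phi = b with GMRES
phi_gm, info = gmres(np.eye(n) - c * S, b)
```

    For this symmetric toy case both solvers reach the exact answer φᵢ = 1/(1 − c) = 10; the practical difference the abstract discusses is iteration count when c approaches 1, where SI stalls and Krylov methods (optionally TSA-preconditioned) do not.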

  18. Normalization of a collimated 14.7 MeV neutron source in a neutron spectrometry system for benchmark experiments

    NASA Astrophysics Data System (ADS)

    Ofek, R.; Tsechanski, A.; Shani, G.

    1988-05-01

    In the present study, a method for normalizing a collimated 14.7 MeV neutron beam is introduced. It combines a measurement of the fast-neutron scalar flux passing through the collimator, using copper foil activation, with a neutron transport calculation of the foil activation per unit source neutron, carried out with the discrete-ordinates transport code DOT 4.2. The geometry of the collimated neutron beam consists of a D-T neutron source positioned 30 cm in front of a 6-cm-diameter collimator through a 120-cm-thick paraffin wall. The neutron flux emitted from the D-T source was counted by an NE-213 scintillator simultaneously with the irradiation of the copper foil. Thus, the normalization factor determined for the D-T source is used for an absolute flux calibration of the NE-213 scintillator. The major contributions to the uncertainty in the determination of the normalization factor, and their origins, are discussed.
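    The normalization logic amounts to a ratio: the absolute source strength is the measured foil activation rate divided by the DOT-computed activation per source neutron, and that strength then calibrates the simultaneous scintillator count rate. The numbers below are purely hypothetical placeholders, not values from the experiment:

```python
def source_strength(measured_activation, activation_per_source_neutron):
    """Absolute source strength (neutrons/s) = measured foil activation rate
    divided by the transport-calculated activation per unit source neutron."""
    return measured_activation / activation_per_source_neutron

def scintillator_calibration(source_neutrons_per_s, count_rate):
    """Source neutrons emitted per scintillator count (absolute calibration)."""
    return source_neutrons_per_s / count_rate

# Hypothetical numbers for illustration only:
n_s = source_strength(3.0e2, 1.5e-8)        # ~2.0e10 neutrons/s
cal = scintillator_calibration(n_s, 4.0e3)  # neutrons per count
```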

  19. 14 CFR 93.339 - Requirements for operating in the DC SFRA, including the DC FRZ.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... aircraft in the DC SFRA, including the DC FRZ, the pilot obtains and transmits a discrete transponder code... flight plan by obtaining a discrete transponder code. The flight plan is closed upon landing at an... transmitting an Air Traffic Control-assigned discrete transponder code. (c) When operating an aircraft in the...

  20. Skyshine at neutron energies less than or equal to 400 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.

    1980-10-01

    The dose equivalent at an air-ground interface, as a function of distance from an assumed azimuthally symmetric point source of neutrons, can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle, weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code DOT and the first-collision source code GRTUNCL in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to 0.8, 0.8 to 0.6, 0.6 to 0.4, 0.4 to 0.2, and 0.2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations, a photon importance function is also obtained. This importance function, for photon energies less than or equal to 14 MeV and for various source cosine intervals and source-to-field-point distances, is also presented. These importance functions may be used to obtain skyshine dose-equivalent estimates for any known source energy-angle distribution.
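    The double integral described above discretizes naturally over energy groups and the tabulated cosine intervals: the dose equivalent is the source strength in each (energy, cosine) bin weighted by the corresponding importance value at the given source-to-field-point distance. The arrays below are hypothetical placeholders standing in for the tabulated importance function:

```python
import numpy as np

def skyshine_dose(source, importance):
    """Dose equivalent D = sum_g sum_j S[g, j] * I[g, j], where g indexes
    energy groups and j the source cosine intervals, at one distance."""
    return float(np.sum(np.asarray(source) * np.asarray(importance)))

# Hypothetical tables: 4 energy groups x 5 cosine intervals (1-0.8, ..., 0.2-0)
S = np.array([[1.0, 0.8, 0.5, 0.3, 0.1],
              [0.9, 0.7, 0.4, 0.2, 0.1],
              [0.5, 0.4, 0.3, 0.1, 0.05],
              [0.2, 0.1, 0.1, 0.05, 0.0]])
I = np.full_like(S, 1.0)   # unit importance => dose equals the total source
```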

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Favorite, Jeffrey A.

    SENSMG is a tool for computing first-order sensitivities of neutron reaction rates, reaction-rate ratios, leakage, keff, and α using the PARTISN multigroup discrete-ordinates code. SENSMG computes sensitivities to all of the transport cross sections and data (total, fission, nu, chi, and all scattering moments), two edit cross sections (absorption and capture), and the density for every isotope and energy group. It also computes sensitivities to the mass density for every material and derivatives with respect to all interface locations. The tool can be used for one-dimensional spherical (r) and two-dimensional cylindrical (r-z) geometries, and for both fixed-source and eigenvalue problems. It implements Generalized Perturbation Theory (GPT) as discussed by Williams and Stacey. Section II of this report describes the theory behind adjoint-based sensitivities, gives the equations that SENSMG solves, and defines the sensitivities that are output. Section III describes the user interface, including the input file and command-line options. Section IV describes the output. Section V gives some notes about the coding that may be of interest. Section VI discusses verification, which is ongoing. Section VII lists needs and ideas for future work. Appendix A lists all of the input files whose results are presented in Sec. VI.

  2. Multiprocessing MCNP on an IBM RS/6000 cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKinney, G.W.; West, J.T.

    1993-01-01

    The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major beneficiaries of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, the theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's Law: S(f,P) = 1/((1 - f) + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's Law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.
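    Amdahl's Law as quoted above is easy to check numerically; the helper below also exposes the asymptotic ceiling 1/(1 − f) that caps speedup no matter how many processors are added:

```python
def amdahl_speedup(f, p):
    """Amdahl's Law: S(f, P) = 1 / ((1 - f) + f / P), where f is the
    parallelizable fraction of the task and P the number of processors."""
    return 1.0 / ((1.0 - f) + f / p)
```

    For example, with f = 0.95 even a large cluster gives only S ≈ 19.5 regardless of P beyond a few hundred processors, which is why Monte Carlo transport, whose independent particle histories push f close to 1, multiprocesses so well.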

  3. Efficient and Accurate Computation of Non-Negative Anisotropic Group Scattering Cross Sections for Discrete Ordinates and Monte Carlo Radiation Transport

    DTIC Science & Technology

    2002-07-01

    ...Adler-Adler, and Kalbach-Mann representations of the scatter cross sections that are used for some isotopes in ENDF/B-VI are not included. They are not...

  4. Comparison of discrete ordinate and Monte Carlo simulations of polarized radiative transfer in two coupled slabs with different refractive indices.

    PubMed

    Cohen, D; Stamnes, S; Tanikawa, T; Sommersten, E R; Stamnes, J J; Lotsberg, J K; Stamnes, K

    2013-04-22

    A comparison is presented of two different methods for polarized radiative transfer in coupled media consisting of two adjacent slabs with different refractive indices, each slab being a stratified medium with no change in optical properties except in the direction of stratification. One of the methods is based on solving the integro-differential radiative transfer equation for the two coupled slabs using the discrete ordinate approximation. The other method is based on probabilistic and statistical concepts and simulates the propagation of polarized light using the Monte Carlo approach. The emphasis is on non-Rayleigh scattering for particles in the Mie regime. Comparisons with benchmark results available for a slab with constant refractive index show that both methods reproduce these benchmark results when the refractive index is set to be the same in the two slabs. Computed results for test cases with coupling (different refractive indices in the two slabs) show that the two methods produce essentially identical results for identical input in terms of absorption and scattering coefficients and scattering phase matrices.

  5. The underlying number-space mapping among kindergarteners and its relation with early numerical abilities.

    PubMed

    Chan, Winnie Wai Lan; Wong, Terry Tin-Yau

    2016-08-01

    People map numbers onto space. The well-replicated SNARC (spatial-numerical association of response codes) effect indicates that people have a left-sided bias when responding to small numbers and a right-sided bias when responding to large numbers. This study examined whether such spatial codes are tagged to the ordinal or the magnitude information of numbers among kindergarteners, and whether this is related to early numerical abilities. Based on the traditional magnitude judgment task, we developed two variant tasks, the month judgment task and the dot judgment task, to elicit ordinal and magnitude processing of numbers, respectively. Results showed that kindergarteners oriented small numbers toward the left side and large numbers toward the right side when processing the ordinal information of numbers in the month judgment task, but not when processing the magnitude information in the number judgment task and the dot judgment task, suggesting that the left-to-right spatial bias was probably tagged to the ordinal but not the magnitude property of numbers. Moreover, the strength of the SNARC effect was not related to early numerical abilities. These findings have important implications for the early spatial representation of numbers and its role in numerical performance among kindergarteners. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Radiative heat transfer in strongly forward scattering media using the discrete ordinates method

    NASA Astrophysics Data System (ADS)

    Granate, Pedro; Coelho, Pedro J.; Roger, Maxime

    2016-03-01

    The discrete ordinates method (DOM) is widely used to solve the radiative transfer equation, often yielding satisfactory results. However, in the presence of strongly forward scattering media, this method does not generally conserve the scattering energy and the phase function asymmetry factor. Because of this, the normalization of the phase function has been proposed to guarantee that the scattering energy and the asymmetry factor are conserved. Various authors have used different normalization techniques. Three of these are compared in the present work, along with two other methods, one based on the finite volume method (FVM) and another one based on the spherical harmonics discrete ordinates method (SHDOM). In addition, the approximation of the Henyey-Greenstein phase function by a different one is investigated as an alternative to the phase function normalization. The approximate phase function is given by the sum of a Dirac delta function, which accounts for the forward scattering peak, and a smoother scaled phase function. In this study, these techniques are applied to three scalar radiative transfer test cases, namely a three-dimensional cubic domain with a purely scattering medium, an axisymmetric cylindrical enclosure containing an emitting-absorbing-scattering medium, and a three-dimensional transient problem with collimated irradiation. The present results show that accurate predictions are achieved for strongly forward scattering media when the phase function is normalized in such a way that both the scattered energy and the phase function asymmetry factor are conserved. The normalization of the phase function may be avoided using the FVM or the SHDOM to evaluate the in-scattering term of the radiative transfer equation. Both methods yield results whose accuracy is similar to that obtained using the DOM along with normalization of the phase function. 
Very satisfactory predictions were also achieved using the delta-M phase function, while the delta-Eddington phase function and the transport approximation may perform poorly.
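The phase-function normalization discussed in this abstract can be illustrated with a minimal sketch: a Henyey-Greenstein phase function evaluated on a product quadrature over the unit sphere, with each row rescaled so that the discretely scattered energy is conserved. This simple row rescaling conserves energy only; the asymmetry-factor-conserving normalizations compared in the paper adjust the matrix entries further. The quadrature choice and all names here are illustrative, not the authors' code.

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """HG phase function, normalized so (1/4pi) * integral over solid angle = 1."""
    return (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * cos_theta)**1.5

# Product quadrature on the sphere: Gauss-Legendre in cos(theta), uniform in phi.
n_mu, n_phi = 16, 32
mu, w_mu = np.polynomial.legendre.leggauss(n_mu)
phi = 2.0 * np.pi * (np.arange(n_phi) + 0.5) / n_phi
w = np.outer(w_mu, np.full(n_phi, 2.0 * np.pi / n_phi)).ravel()  # sums to 4*pi
s = np.sqrt(1.0 - mu**2)
dirs = np.stack([np.outer(s, np.cos(phi)).ravel(),
                 np.outer(s, np.sin(phi)).ravel(),
                 np.outer(mu, np.ones(n_phi)).ravel()], axis=1)

g = 0.9                                   # strongly forward scattering
cosT = np.clip(dirs @ dirs.T, -1.0, 1.0)  # scattering cosines between ordinates
P = henyey_greenstein(cosT, g)

# Discretely scattered energy per incident ordinate; exactly 1 only in the
# continuous limit, so sharply peaked phase functions leave a defect.
energy = (P @ w) / (4.0 * np.pi)

# Row normalization: rescale so every ordinate scatters exactly unit energy.
P_norm = P / energy[:, None]
```

Any deviation of `energy` from unity is the conservation defect that normalization removes; `P_norm` conserves scattered energy by construction.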

  7. Fast radiative transfer models for retrieval of cloud properties in the back-scattering region: application to DSCOVR-EPIC sensor

    NASA Astrophysics Data System (ADS)

    Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego

    2017-04-01

    In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. 
Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.
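The PCA acceleration mentioned above rests on the strong spectral correlation of optical properties: the expensive radiative transfer solver is run only for a mean profile and a few principal-component perturbations, and the full spectrum is reconstructed from the scores. A toy sketch under stated assumptions (a synthetic low-rank optical-depth matrix and a transmittance-like stand-in "solver"; none of this is the EPIC processor's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for line-by-line optical-depth profiles (wavelengths x layers),
# built from three smooth modes so a low-rank PCA captures them, mimicking
# the correlation that real LBL spectra exhibit.
n_wl, n_lay = 2000, 20
z = np.linspace(0.0, 1.0, n_lay)
basis = np.stack([np.exp(-z / 0.3), z, np.sin(np.pi * z)]) / 30.0
coeff = 1.0 + 0.05 * rng.standard_normal((n_wl, 3))
tau = coeff @ basis                        # optical depth per wavelength/layer

def expensive_model(profile):
    """Placeholder for a full multiple-scattering solver (here: transmittance)."""
    return np.exp(-profile.sum())

# PCA of the optical-depth matrix via SVD on the centered data.
mean = tau.mean(axis=0)
U, S, Vt = np.linalg.svd(tau - mean, full_matrices=False)
k = 3
scores = U[:, :k] * S[:k]                  # projection of each wavelength onto PCs
pcs = Vt[:k]

# Run the expensive model only for the mean and 2*k perturbed profiles, then
# reconstruct all n_wl radiances from a first-order expansion in the scores.
eps = 1e-3
R0 = expensive_model(mean)
grad = np.array([(expensive_model(mean + eps * p) -
                  expensive_model(mean - eps * p)) / (2.0 * eps) for p in pcs])
R_fast = R0 + scores @ grad                # 2*k + 1 = 7 solver calls, not 2000

R_exact = np.array([expensive_model(t) for t in tau])
rel_err = np.max(np.abs(R_fast - R_exact) / R_exact)
```

Here 7 solver calls replace 2000; the real method adds second-order terms and operates on the actual radiance computation.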

  8. 40 CFR 21.10 - Utilization of the statement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 1 2011-07-01 2011-07-01 false Utilization of the statement. 21.10 Section 21.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL SMALL BUSINESS § 21.10... law, statute, ordinance, or code (including building, health, or zoning codes). (g) An amended...

  9. College and University Speech Codes in the Aftermath of R.A.V v. City of St. Paul.

    ERIC Educational Resources Information Center

    Fraleigh, Douglas

    In the case of RAV v. City of St. Paul, a teenager was charged with violating the city's Bias-Motivated Crime Ordinance after being accused of burning a cross inside the fenced yard of a black family. In a 9-0 decision, the Supreme Court struck down the St. Paul ordinance, a decision which raised a question as to whether many college and…

  10. Co-ordinated action between youth-care and sports: facilitators and barriers.

    PubMed

    Hermens, Niels; de Langen, Lisanne; Verkooijen, Kirsten T; Koelen, Maria A

    2017-07-01

    In the Netherlands, youth-care organisations and community sports clubs are collaborating to increase socially vulnerable youths' participation in sport. This is rooted in the idea that sports clubs are settings for youth development. As not much is known about co-ordinated action involving professional care organisations and community sports clubs, this study aims to generate insight into facilitators of and barriers to successful co-ordinated action between these two organisations. A cross-sectional study was conducted using in-depth semi-structured qualitative interview data. In total, 23 interviews were held at five locations where co-ordinated action between youth-care and sports takes place. Interviewees were youth-care workers, representatives from community sports clubs, and Care Sport Connectors who were assigned to encourage and manage the co-ordinated action. Using inductive coding procedures, this study shows that existing and good relationships, a boundary spanner, care workers' attitudes, knowledge and competences of the participants, organisational policies and ambitions, and some elements external to the co-ordinated action were reported to be facilitators or barriers. In addition, the participants reported that the different facilitators and barriers influenced the success of the co-ordinated action at different stages of the co-ordinated action. Future research is recommended to further explore the role of boundary spanners in co-ordinated action involving social care organisations and community sports clubs, and to identify what external elements (e.g. events, processes, national policies) are turning points in the formation, implementation and continuation of such co-ordinated action. © 2017 John Wiley & Sons Ltd.

  11. Spectral collocation method with a flexible angular discretization scheme for radiative transfer in multi-layer graded index medium

    NASA Astrophysics Data System (ADS)

    Wei, Linyang; Qi, Hong; Sun, Jianping; Ren, Yatao; Ruan, Liming

    2017-05-01

The spectral collocation method (SCM) is employed to solve radiative transfer in multi-layer semitransparent media with graded index. A new flexible angular discretization scheme is employed to discretize the solid angle domain freely, overcoming the limit on the number of discrete radiative directions imposed by the traditional SN discrete ordinates scheme. Three radial basis function interpolation approaches, namely multi-quadric (MQ), inverse multi-quadric (IMQ) and inverse quadratic (IQ) interpolation, are employed to couple the radiative intensity at the interface between two adjacent layers, and numerical experiments show that MQ interpolation has the highest accuracy and best stability. Various radiative transfer problems in double-layer semitransparent media with different thermophysical properties are investigated, and the influence of these properties on the radiative transfer is analyzed. All the simulated results show that the present SCM with the new angular discretization scheme can predict radiative transfer in multi-layer semitransparent media with graded index efficiently and accurately.
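The multi-quadric interpolation favored in this abstract can be sketched in one dimension: intensities known at a few angular nodes are interpolated with the MQ kernel sqrt(r^2 + c^2) by solving a small linear system for the kernel weights. The nodes, test profile, and shape parameter below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def mq_kernel(r, c=0.5):
    """Multi-quadric radial basis function sqrt(r^2 + c^2)."""
    return np.sqrt(r**2 + c**2)

# Intensities known at a coarse set of angular nodes (e.g. direction cosines).
x_nodes = np.linspace(-1.0, 1.0, 9)
f_nodes = np.exp(-x_nodes) * (1.0 + 0.3 * x_nodes**2)   # sample smooth profile

# Solve for the RBF weights so the interpolant matches every node exactly.
A = mq_kernel(np.abs(x_nodes[:, None] - x_nodes[None, :]))
weights = np.linalg.solve(A, f_nodes)

def interpolate(x):
    """Evaluate the MQ interpolant at arbitrary points."""
    x = np.atleast_1d(x)
    return mq_kernel(np.abs(x[:, None] - x_nodes[None, :])) @ weights

x_fine = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(interpolate(x_fine)
                    - np.exp(-x_fine) * (1.0 + 0.3 * x_fine**2)))
```

The IMQ and IQ variants differ only in the kernel (1/sqrt(r^2 + c^2) and 1/(r^2 + c^2)); in the paper's experiments MQ proved the most accurate and stable of the three.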

  12. Radiation shielding quality assurance

    NASA Astrophysics Data System (ADS)

    Um, Dallsun

For radiation shielding quality assurance, the validity and reliability of the neutron transport code MCNP, now one of the most widely used radiation shielding analysis codes, were checked against a large set of benchmark experiments. As a practical example, the following work was performed in this thesis. An integral neutron transport experiment to measure the effect of neutron streaming in iron and void was performed with the Dog-Legged Void Assembly at Knolls Atomic Power Laboratory in 1991. Neutron flux was measured at six locations with methane detectors and a BF-3 detector. The main purpose of the measurements was to provide a benchmark against which various neutron transport calculation tools could be compared. Those data were used to verify the Monte Carlo Neutron & Photon Transport Code, MCNP, with a model of the assembly. Experimental and calculated results were compared in two ways: as the total neutron flux integrated over the energy range from 10 keV to 2 MeV, and as the neutron spectrum across that energy range. The two agree within the statistical error of +/-20%. MCNP results were also compared with those of TORT, a three-dimensional discrete ordinates code developed by Oak Ridge National Laboratory. The MCNP results are superior to the TORT results at all detector locations except one. This shows that MCNP is a very powerful tool for the analysis of neutron transport through iron and air, and further that it can serve as a powerful tool for radiation shielding analysis. As one application of the analysis of variance (ANOVA) to neutron and gamma transport problems, uncertainties in the calculated values of the criticality eigenvalue k were evaluated by applying ANOVA to the statistical data.

  13. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2010-01-01

This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2) to O(10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
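The CADIS idea of deriving consistent source-biasing and weight-window parameters from an adjoint (importance) function can be sketched in one dimension. The adjoint shape below is an arbitrary analytic stand-in for a deterministic (e.g. Denovo-style) solution, and the grid and source are illustrative.

```python
import numpy as np

# One-group, 1-D toy problem: forward source near x = 0, detector at x = 10.
x = np.linspace(0.0, 10.0, 50)
dx = x[1] - x[0]
q = np.where(x < 1.0, 1.0, 0.0)          # forward source density
adjoint = np.exp(-0.5 * (10.0 - x))      # stand-in adjoint flux (importance map)

# Estimated detector response R = integral of q * adjoint.
R = (q * adjoint).sum() * dx

# CADIS biased source: sample source particles proportionally to importance.
q_biased = q * adjoint / R               # integrates to 1 by construction

# Weight-window centers consistent with the biased source: a particle born at
# x carries weight q/q_biased = R/adjoint, i.e. it starts inside its window.
w_center = R / adjoint
```

The consistency between `q_biased` and `w_center` is the key CADIS property: source particles are born with weights already at their window centers, so no splitting or rouletting is wasted at birth.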

  14. A Bayesian hierarchical model for discrete choice data in health care.

    PubMed

    Antonio, Anna Liza M; Weiss, Robert E; Saigal, Christopher S; Dahan, Ely; Crespi, Catherine M

    2017-01-01

    In discrete choice experiments, patients are presented with sets of health states described by various attributes and asked to make choices from among them. Discrete choice experiments allow health care researchers to study the preferences of individual patients by eliciting trade-offs between different aspects of health-related quality of life. However, many discrete choice experiments yield data with incomplete ranking information and sparsity due to the limited number of choice sets presented to each patient, making it challenging to estimate patient preferences. Moreover, methods to identify outliers in discrete choice data are lacking. We develop a Bayesian hierarchical random effects rank-ordered multinomial logit model for discrete choice data. Missing ranks are accounted for by marginalizing over all possible permutations of unranked alternatives to estimate individual patient preferences, which are modeled as a function of patient covariates. We provide a Bayesian version of relative attribute importance, and adapt the use of the conditional predictive ordinate to identify outlying choice sets and outlying individuals with unusual preferences compared to the population. The model is applied to data from a study using a discrete choice experiment to estimate individual patient preferences for health states related to prostate cancer treatment.
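The marginalization over permutations of unranked alternatives described above can be sketched for a rank-ordered (exploded) logit with fixed utilities. Under a Plackett-Luce form, summing the complete-ranking probability over all orderings of the unranked tail recovers the partial-ranking likelihood; the utilities and functions below are a toy illustration, not the paper's hierarchical model.

```python
import numpy as np
from itertools import permutations

def plackett_luce(order, u):
    """Probability of a complete ranking `order` under utilities u (exploded logit)."""
    p, remaining = 1.0, list(range(len(u)))
    for item in order:
        p *= np.exp(u[item]) / np.exp(u[remaining]).sum()
        remaining.remove(item)
    return p

def top_k_likelihood(ranked, u):
    """Partial-ranking likelihood when only the top-k choices are observed."""
    p, remaining = 1.0, list(range(len(u)))
    for item in ranked:
        p *= np.exp(u[item]) / np.exp(u[remaining]).sum()
        remaining.remove(item)
    return p

u = np.array([0.8, -0.2, 0.5, 0.1, -0.5])   # illustrative utilities
ranked = [2, 0]                             # only the top two choices observed

# Brute-force marginalization over every ordering of the unranked alternatives.
unranked = [i for i in range(len(u)) if i not in ranked]
marginal = sum(plackett_luce(ranked + list(tail), u)
               for tail in permutations(unranked))
```

In the Bayesian hierarchical model this marginalization sits inside the likelihood, with the utilities themselves modeled as functions of patient covariates.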

  15. 7 CFR 1924.4 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... (available in any FmHA or its successor agency under Public Law 103-354 office). (e) Date of commencement of... accordance with any contract documents and applicable State or local codes and ordinances, and the FmHA or... development. (h) Development standards. Any of the following codes and standards: (1) A standard adopted by Fm...

  16. 7 CFR 1924.4 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... (available in any FmHA or its successor agency under Public Law 103-354 office). (e) Date of commencement of... accordance with any contract documents and applicable State or local codes and ordinances, and the FmHA or... development. (h) Development standards. Any of the following codes and standards: (1) A standard adopted by Fm...

  17. 7 CFR 1924.4 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... (available in any FmHA or its successor agency under Public Law 103-354 office). (e) Date of commencement of... accordance with any contract documents and applicable State or local codes and ordinances, and the FmHA or... development. (h) Development standards. Any of the following codes and standards: (1) A standard adopted by Fm...

  18. 7 CFR 1924.4 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... (available in any FmHA or its successor agency under Public Law 103-354 office). (e) Date of commencement of... accordance with any contract documents and applicable State or local codes and ordinances, and the FmHA or... development. (h) Development standards. Any of the following codes and standards: (1) A standard adopted by Fm...

  19. 75 FR 9434 - Civil Rights Division, Disability Rights Section; Agency Information Collection Activities Under...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-02

    ... Rights (or his or her designee) may certify that a State or local building code or similar ordinance that establishes accessibility requirements (Code) meets or exceeds the minimum requirements of the ADA for..., Policy and Planning Staff, Justice Management Division, Patrick Henry Building, Suite 1600, 601 D Street...

  20. 75 FR 27816 - Civil Rights Division, Disability Rights Section; Agency Information Collection Activities Under...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-18

    ... certify that a State or local building code or similar ordinance that establishes accessibility requirements (Code) meets or exceeds the minimum requirements of the ADA for accessibility and usability of... Management Division, Patrick Henry Building, Suite 1600, 601 D Street, NW., Washington, DC 20530. Dated: May...

  1. Building Code Compliance and Enforcement: The Experience of San Francisco's Residential Energy Conservation Ordinance and California's Building Standards for New Construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vine, E.

    1990-11-01

As part of Lawrence Berkeley Laboratory's (LBL) technical assistance to the Sustainable City Project, compliance and enforcement activities related to local and state building codes for existing and new construction were evaluated in two case studies. The analysis of the City of San Francisco's Residential Energy Conservation Ordinance (RECO) showed that a limited, prescriptive energy conservation ordinance for existing residential construction can be enforced relatively easily with little administrative cost, and that compliance with such ordinances can be quite high. Compliance with the code was facilitated by extensive publicity, an informed public concerned with the cost of energy and knowledgeable about energy efficiency, the threat of punishment (Order of Abatement), the use of private inspectors, and training workshops for City and private inspectors. The analysis of California's Title 24 Standards for new residential and commercial construction showed that enforcement of this type of code for many climate zones is more complex and requires extensive administrative support for education and training of inspectors, architects, engineers, and builders. Under this code, prescriptive and performance approaches for compliance are permitted, resulting in the demand for alternative methods of enforcement: technical assistance, plan review, field inspection, and computer analysis. In contrast to existing construction, building design and new materials and construction practices are of critical importance in new construction, creating a need for extensive technical assistance and extensive interaction between enforcement personnel and the building community. Compliance problems associated with building design and installation did occur in both residential and nonresidential buildings. Because statewide codes are enforced by local officials, these problems may increase over time as energy standards change and become more complex and as other standards (e.g., health and safety codes) remain a higher priority. The California Energy Commission realizes that code enforcement by itself is insufficient and expects that additional educational and technical assistance efforts (e.g., manuals, training programs, and toll-free telephone lines) will ameliorate these problems.

  2. Green Building Standards

    EPA Pesticide Factsheets

    Many organizations have developed model codes or rating systems that communities may use to develop green building programs or revise building ordinances. Some of the major options are listed on this page.

  3. Radiative Transfer Model for Operational Retrieval of Cloud Parameters from DSCOVR-EPIC Measurements

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Molina Garcia, V.; Doicu, A.; Loyola, D. G.

    2016-12-01

The Earth Polychromatic Imaging Camera (EPIC) onboard the Deep Space Climate Observatory (DSCOVR) measures the radiance in the backscattering region. To make sure that all details in the backward glory are covered, a large number of streams is required by a standard radiative transfer model based on the discrete ordinates method. Even the use of the delta-M scaling and the TMS correction does not substantially reduce the number of streams. The aim of this work is to analyze the capability of a fast radiative transfer model to retrieve operationally cloud parameters from EPIC measurements. The radiative transfer model combines the discrete ordinates method with matrix exponential for the computation of radiances and the matrix operator method for the calculation of the reflection and transmission matrices. Standard acceleration techniques, such as the use of normalized right and left eigenvectors, the telescoping technique, the Padé approximation, and the successive-order-of-scattering approximation, are implemented. In addition, the model may compute the reflection matrix of the cloud by means of the asymptotic theory, and may use the equivalent Lambertian cloud model. The various approximations are analyzed from the point of view of efficiency and accuracy.

  4. Computing Radiative Transfer in a 3D Medium

    NASA Technical Reports Server (NTRS)

    Von Allmen, Paul; Lee, Seungwon

    2012-01-01

    A package of software computes the time-dependent propagation of a narrow laser beam in an arbitrary three- dimensional (3D) medium with absorption and scattering, using the transient-discrete-ordinates method and a direct integration method. Unlike prior software that utilizes a Monte Carlo method, this software enables simulation at very small signal-to-noise ratios. The ability to simulate propagation of a narrow laser beam in a 3D medium is an improvement over other discrete-ordinate software. Unlike other direct-integration software, this software is not limited to simulation of propagation of thermal radiation with broad angular spread in three dimensions or of a laser pulse with narrow angular spread in two dimensions. Uses for this software include (1) computing scattering of a pulsed laser beam on a material having given elastic scattering and absorption profiles, and (2) evaluating concepts for laser-based instruments for sensing oceanic turbulence and related measurements of oceanic mixed-layer depths. With suitable augmentation, this software could be used to compute radiative transfer in ultrasound imaging in biological tissues, radiative transfer in the upper Earth crust for oil exploration, and propagation of laser pulses in telecommunication applications.
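The core of a discrete-ordinates solver like the one described can be sketched in its simplest setting: a one-group, steady-state slab with isotropic scattering, solved by source iteration with upwind (step) differencing. This is far simpler than the transient 3-D laser-beam code above; all parameters are illustrative.

```python
import numpy as np

# One-group S_N source iteration in a homogeneous slab, vacuum boundaries.
N, cells = 8, 100
L, sigma_t, sigma_s, q = 10.0, 1.0, 0.5, 1.0   # slab width, cross sections, source
dx = L / cells
mu, w = np.polynomial.legendre.leggauss(N)     # S_N ordinates and weights (sum to 2)

phi = np.zeros(cells)                          # scalar flux
for _ in range(200):                           # source iteration
    S = 0.5 * (sigma_s * phi + q)              # isotropic emission density
    phi_new = np.zeros(cells)
    for m in range(N):
        psi = 0.0                              # vacuum inflow
        sweep = range(cells) if mu[m] > 0 else range(cells - 1, -1, -1)
        for i in sweep:
            # step (implicit upwind) differencing of mu dpsi/dx + sigma_t psi = S
            psi = (S[i] + abs(mu[m]) / dx * psi) / (sigma_t + abs(mu[m]) / dx)
            phi_new[i] += w[m] * psi
    converged = np.max(np.abs(phi_new - phi)) < 1e-8
    phi = phi_new
    if converged:
        break
```

Deep inside the slab the solution approaches the infinite-medium balance phi = q / (sigma_t - sigma_s) = 2, a standard sanity check for such sweeps.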

  5. Radiant heat exchange calculations in radiantly heated and cooled enclosures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, K.S.; Zhang, P.

    1995-08-01

    This paper presents the development of a three-dimensional mathematical model to compute the radiant heat exchange between surfaces separated by a transparent and/or opaque medium. The model formulation accommodates arbitrary arrangements of the interior surfaces, as well as arbitrary placement of obstacles within the enclosure. The discrete ordinates radiation model is applied and has the capability to analyze the effect of irregular geometries and diverse surface temperatures and radiative properties. The model is verified by comparing calculated heat transfer rates to heat transfer rates determined from the exact radiosity method for four different enclosures. The four enclosures were selected tomore » provide a wide range of verification. This three-dimensional model based on the discrete ordinates method can be applied to a building to assist the design engineer in sizing a radiant heating system. By coupling this model with a convective and conductive heat transfer model and a thermal comfort model, the comfort levels throughout the room can be easily and efficiently mapped for a given radiant heater location. In addition, objects such as airplanes, trucks, furniture, and partitions can be easily incorporated to determine their effect on the performance of the radiant heating system.« less
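This insert is intentionally omitted.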

  6. Transport analysis of measured neutron leakage spectra from spheres as tests of evaluated high energy cross sections

    NASA Technical Reports Server (NTRS)

    Bogart, D. D.; Shook, D. F.; Fieno, D.

    1973-01-01

    Integral tests of evaluated ENDF/B high-energy cross sections have been made by comparing measured and calculated neutron leakage flux spectra from spheres of various materials. An Am-Be (alpha,n) source was used to provide fast neutrons at the center of the test spheres of Be, CH2, Pb, Nb, Mo, Ta, and W. The absolute leakage flux spectra were measured in the energy range 0.5 to 12 MeV using a calibrated NE213 liquid scintillator neutron spectrometer. Absolute calculations of the spectra were made using version 3 ENDF/B cross sections and an S sub n discrete ordinates multigroup transport code. Generally excellent agreement was obtained for Be, CH2, Pb, and Mo, and good agreement was observed for Nb although discrepancies were observed for some energy ranges. Poor comparative results, obtained for Ta and W, are attributed to unsatisfactory nonelastic cross sections. The experimental sphere leakage flux spectra are tabulated and serve as possible benchmarks for these elements against which reevaluated cross sections may be tested.

  7. A novel encoding scheme for effective biometric discretization: Linearly Separable Subcode.

    PubMed

    Lim, Meng-Hui; Teoh, Andrew Beng Jin

    2013-02-01

    Separability in a code is crucial in guaranteeing a decent Hamming-distance separation among the codewords. In multibit biometric discretization where a code is used for quantization-intervals labeling, separability is necessary for preserving distance dissimilarity when feature components are mapped from a discrete space to a Hamming space. In this paper, we examine separability of Binary Reflected Gray Code (BRGC) encoding and reveal its inadequacy in tackling interclass variation during the discrete-to-binary mapping, leading to a tradeoff between classification performance and entropy of binary output. To overcome this drawback, we put forward two encoding schemes exhibiting full-ideal and near-ideal separability capabilities, known as Linearly Separable Subcode (LSSC) and Partially Linearly Separable Subcode (PLSSC), respectively. These encoding schemes convert the conventional entropy-performance tradeoff into an entropy-redundancy tradeoff in the increase of code length. Extensive experimental results vindicate the superiority of our schemes over the existing encoding schemes in discretization performance. This opens up possibilities of achieving much greater classification performance with high output entropy.
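The separability issue described above can be made concrete by comparing Binary Reflected Gray Code labels with a thermometer-style (unary) code, which is linearly separable in the spirit of LSSC (the exact LSSC construction may differ; this is an illustration). Adjacent BRGC labels differ by one bit, but so can labels of intervals that are far apart, so Hamming distance does not track index distance.

```python
def brgc(i, n_bits):
    """Binary Reflected Gray Code label of interval index i, MSB first."""
    g = i ^ (i >> 1)
    return [(g >> b) & 1 for b in reversed(range(n_bits))]

def thermometer(i, n_intervals):
    """Unary (thermometer) label: Hamming distance equals index distance."""
    return [1] * i + [0] * (n_intervals - 1 - i)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

n = 8
gray = [brgc(i, 3) for i in range(n)]       # 3-bit labels for 8 intervals
therm = [thermometer(i, n) for i in range(n)]  # 7-bit labels for 8 intervals
```

For example, `gray[0]` and `gray[7]` are at Hamming distance 1 despite being 7 intervals apart, while the thermometer code pays for perfect distance preservation with longer labels (7 bits instead of 3) and hence lower entropy per bit, which is the entropy-redundancy tradeoff the paper analyzes.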

  8. 14 CFR 93.341 - Aircraft operations in the DC FRZ.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...-assigned discrete transponder code. The pilot must monitor VHF frequency 121.5 or UHF frequency 243.0. (d... authorization must file and activate an IFR or a DC FRZ or a DC SFRA flight plan and transmit a discrete transponder code assigned by an Air Traffic Control facility. Aircraft must transmit the discrete transponder...

  9. AMPX: a modular code system for generating coupled multigroup neutron-gamma libraries from ENDF/B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, N.M.; Lucius, J.L.; Petrie, L.M.

    1976-03-01

AMPX is a modular system for producing coupled multigroup neutron-gamma cross section sets. Basic neutron and gamma cross-section data for AMPX are obtained from ENDF/B libraries. Most commonly used operations required to generate and collapse multigroup cross-section sets are provided in the system. AMPX is flexibly dimensioned; neutron group structures, gamma group structures, and expansion orders to represent anisotropic processes are all arbitrary and limited only by available computer core and budget. The basic processes provided will (1) generate multigroup neutron cross sections; (2) generate multigroup gamma cross sections; (3) generate gamma yields for gamma-producing neutron interactions; (4) combine neutron cross sections, gamma cross sections, and gamma yields into final ''coupled sets''; (5) perform one-dimensional discrete ordinates transport or diffusion theory calculations for neutrons and gammas and, on option, collapse the cross sections to a broad-group structure, using the one-dimensional results as weighting functions; (6) plot cross sections, on option, to facilitate the ''evaluation'' of a particular multigroup set of data; (7) update and maintain multigroup cross section libraries in such a manner as to make it not only easy to combine new data with previously processed data but also to do it in a single pass on the computer; and (8) output multigroup cross sections in convenient formats for other codes. (auth)
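Step (5) above, flux-weighted group collapse, can be sketched in a few lines: broad-group cross sections are flux-weighted averages of the fine-group values, chosen so that broad-group reaction rates reproduce the fine-group ones. The numbers and group mapping below are illustrative, not AMPX data.

```python
import numpy as np

# Fine-group cross sections and a weighting flux (e.g. from a 1-D transport run).
sigma_fine = np.array([2.1, 1.8, 1.5, 1.2, 0.9, 0.7])   # barns
phi_fine   = np.array([0.4, 0.9, 1.6, 1.3, 0.6, 0.2])   # weighting flux
broad_map  = [[0, 1, 2], [3, 4, 5]]                      # fine groups per broad group

# Flux-weighted collapse: sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g).
sigma_broad = np.array([(sigma_fine[g] * phi_fine[g]).sum() / phi_fine[g].sum()
                        for g in broad_map])
phi_broad = np.array([phi_fine[g].sum() for g in broad_map])
```

By construction the collapsed set preserves the total reaction rate computed with the weighting flux, which is the sense in which the broad-group library is consistent with the fine-group one.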

  10. Multiprocessing MCNP on an IBM RS/6000 cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKinney, G.W.; West, J.T.

    1993-01-01

The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors P and the fraction f of task time that multiprocesses, can be formulated using Amdahl's law: S(f, P) = 1/(1 - f + f/P). However, for most applications, this theoretical limit cannot be achieved because of additional terms (e.g., multitasking overhead, memory overlap, etc.) that are not included in Amdahl's law. Monte Carlo transport is a natural candidate for multiprocessing because the particle tracks are generally independent, and the precision of the result increases as the square root of the number of particles tracked.
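Amdahl's law as quoted above is a one-liner worth sketching, since its limiting behavior explains why the serial fraction dominates Monte Carlo multiprocessing:

```python
def amdahl_speedup(f, P):
    """Theoretical speedup S(f, P) = 1 / ((1 - f) + f / P)
    for parallel fraction f of the work on P processors."""
    return 1.0 / ((1.0 - f) + f / P)
```

With f = 0.9, no number of processors can push the speedup past 1/(1 - f) = 10, which is why near-independent particle histories (f close to 1) make Monte Carlo transport such a good multiprocessing candidate.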

  11. Multiprocessing MCNP on an IBM RS/6000 cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKinney, G.W.; West, J.T.

    1993-03-01

The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's law: S(f, P) = 1/(1 - f + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.

  12. 34 CFR 395.35 - Terms of permit.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... health, sanitation, and building codes or ordinances. (e) The permit shall further provide that... to the State licensing agency for normal cleaning, maintenance, and repair of the building structure...

  13. 34 CFR 395.35 - Terms of permit.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... health, sanitation, and building codes or ordinances. (e) The permit shall further provide that... to the State licensing agency for normal cleaning, maintenance, and repair of the building structure...

  14. A Detailed Comparison of Multidimensional Boltzmann Neutrino Transport Methods in Core-collapse Supernovae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.

The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. We carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.
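The angular moments and Eddington tensor compared in this work can be sketched directly from discrete-ordinates intensities: the zeroth, first, and second moments are quadrature sums of the intensity weighted by powers of the direction vector. The product quadrature and the isotropic test field below are illustrative, not either code's machinery (units with c = 1 assumed).

```python
import numpy as np

# Product quadrature on the unit sphere: Gauss-Legendre in cos(theta), uniform in phi.
n_mu, n_phi = 16, 32
mu, w_mu = np.polynomial.legendre.leggauss(n_mu)
phi = 2.0 * np.pi * (np.arange(n_phi) + 0.5) / n_phi
w = np.outer(w_mu, np.full(n_phi, 2.0 * np.pi / n_phi)).ravel()
s = np.sqrt(1.0 - mu**2)
n = np.stack([np.outer(s, np.cos(phi)).ravel(),
              np.outer(s, np.sin(phi)).ravel(),
              np.outer(mu, np.ones(n_phi)).ravel()], axis=1)

I = np.ones(len(w))                       # isotropic test radiation field

E = (w * I).sum()                         # zeroth moment: energy density
F = (w * I) @ n                           # first moment: flux
P = ((w * I)[:, None, None] * n[:, :, None] * n[:, None, :]).sum(axis=0)

eddington = P / E                         # Eddington tensor P_ij / E
```

For an isotropic field the Eddington tensor is diag(1/3, 1/3, 1/3) and the flux vanishes; algebraic two-moment closures interpolate between this limit and the free-streaming limit, and the paper measures how far the directly computed second moments stray from such closures.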

  15. A Detailed Comparison of Multidimensional Boltzmann Neutrino Transport Methods in Core-collapse Supernovae

    DOE PAGES

    Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.; ...

    2017-10-03

    The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. In this paper, we carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Finally, included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.

  17. Automated Weight-Window Generation for Threat Detection Applications Using ADVANTG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, Scott W; Miller, Thomas Martin; Evans, Thomas M

    2009-01-01

    Deterministic transport codes have been used for some time to generate weight-window parameters that can improve the efficiency of Monte Carlo simulations. As the use of this hybrid computational technique is becoming more widespread, the scope of applications in which it is being applied is expanding. An active source of new applications is the field of homeland security--particularly the detection of nuclear material threats. For these problems, automated hybrid methods offer an efficient alternative to trial-and-error variance reduction techniques (e.g., geometry splitting or the stochastic weight window generator). The ADVANTG code has been developed to automate the generation of weight-window parameters for MCNP using the Consistent Adjoint Driven Importance Sampling method and employs the TORT or Denovo 3-D discrete ordinates codes to generate importance maps. In this paper, we describe the application of ADVANTG to a set of threat-detection simulations. We present numerical results for an 'active-interrogation' problem in which a standard cargo container is irradiated by a deuterium-tritium fusion neutron generator. We also present results for two passive detection problems in which a cargo container holding a shielded neutron or gamma source is placed near a portal monitor. For the passive detection problems, ADVANTG obtains an O(10^4) speedup and, for a detailed gamma spectrum tally, an average O(10^2) speedup relative to implicit-capture-only simulations, including the deterministic calculation time. For the active-interrogation problem, an O(10^4) speedup is obtained when compared to a simulation with angular source biasing and crude geometry splitting.
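    The weight-window mechanics that hybrid codes such as ADVANTG automate can be sketched in a few lines. The routine below is a minimal illustration (not ADVANTG's or MCNP's actual implementation, and the window bounds are arbitrary): particles above the window are split, particles below play Russian roulette, and the expected total weight is conserved, which is what keeps the biased tallies unbiased.

```python
import random

def apply_weight_window(w, w_low, w_up, rng):
    """Apply a weight window to one particle; return surviving weights.

    Splitting above the window, Russian roulette below it. In both
    branches the expected total weight equals w, so tallies stay unbiased.
    """
    w_survive = 0.5 * (w_low + w_up)        # post-window particle weight
    if w > w_up:                            # split into ~w/w_survive copies
        n_exact = w / w_survive
        n = int(n_exact)
        if rng.random() < n_exact - n:      # round the count up probabilistically
            n += 1
        return [w_survive] * n
    if w < w_low:                           # roulette: survive with prob w/w_survive
        if rng.random() < w / w_survive:
            return [w_survive]
        return []
    return [w]                              # inside the window: untouched
```

Averaging the surviving weight over many applications recovers the input weight for any starting value, above, inside, or below the window.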

  18. A Detailed Comparison of Multidimensional Boltzmann Neutrino Transport Methods in Core-collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.; Dolence, Joshua; Sumiyoshi, Kohsuke; Yamada, Shoichi

    2017-10-01

    The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. We carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.
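    The two families of methods compared above can be contrasted on a toy problem. The sketch below is a deliberate simplification (not either group's code): it estimates the diffuse flux transmission of isotropic radiation through a purely absorbing slab once by discrete-ordinates quadrature over angle and once by Monte Carlo sampling of directions and free paths, so the two estimates should agree to within MC noise.

```python
import math
import random

import numpy as np

def do_transmission(tau, n_angles=32):
    """Discrete ordinates: Gauss-Legendre quadrature of
    T = int_0^1 2*mu*exp(-tau/mu) d(mu), the flux transmission of
    isotropic diffuse radiation through a purely absorbing slab."""
    x, w = np.polynomial.legendre.leggauss(n_angles)
    mu = 0.5 * (x + 1.0)                 # map nodes from [-1, 1] to [0, 1]
    wt = 0.5 * w
    return float(np.sum(wt * 2.0 * mu * np.exp(-tau / mu)))

def mc_transmission(tau, n_hist=200_000, seed=1):
    """Monte Carlo estimate of the same quantity."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_hist):
        mu = math.sqrt(rng.random())     # direction cosine sampled from 2*mu d(mu)
        s = rng.expovariate(1.0)         # free path in mean-free-path units
        if s * mu > tau:                 # particle crosses the slab
            hits += 1
    return hits / n_hist

t_do = do_transmission(1.0)
t_mc = mc_transmission(1.0)
```

The DO answer is smooth and deterministic; the MC answer carries statistical noise that shrinks like 1/sqrt(n_hist), mirroring the trade-off discussed in the abstract.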

  19. Green Infrastructure Barriers and Opportunities in Dallas, Texas

    EPA Pesticide Factsheets

    This report will assist other municipalities with recognizing barriers and inconsistencies in municipal codes and ordinances which may be impeding the implementation of green infrastructure practices in their communities.

  20. Vectorial finite elements for solving the radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Badri, M. A.; Jolivet, P.; Rousseau, B.; Le Corre, S.; Digonnet, H.; Favennec, Y.

    2018-06-01

    The discrete ordinate method coupled with the finite element method is often used for the spatio-angular discretization of the radiative transfer equation. In this paper we attempt to improve upon such a discretization technique. Instead of using standard finite elements, we reformulate the radiative transfer equation using vectorial finite elements. In comparison to standard finite elements, this reformulation yields faster timings for the linear system assemblies, as well as for the solution phase when using scattering media. The proposed vectorial finite element discretization for solving the radiative transfer equation is cross-validated against a benchmark problem available in the literature. In addition, we have used the method of manufactured solutions to verify the order of accuracy of our discretization technique within different absorbing, scattering, and emitting media. For solving large problems of radiation on parallel computers, the vectorial finite element method is parallelized using domain decomposition. The proposed domain decomposition method scales to large numbers of processes, and its performance is unaffected by changes in the optical thickness of the medium. Our parallel solver is used to solve a large-scale radiative transfer problem of Kelvin-cell radiation.
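    The spatio-angular structure behind such discretizations can be illustrated with a much simpler scheme than the paper's vectorial finite elements. The sketch below is a minimal 1D source-iteration solver using diamond-difference sweeps in space and Gauss-Legendre ordinates in angle (a deliberately different, far cruder spatial discretization, shown only to make the angle/space coupling concrete): each outer iteration recomputes the isotropic scattering source from the scalar flux and sweeps every ordinate through the mesh.

```python
import numpy as np

def solve_slab(tau=2.0, c=0.5, n_cells=100, n_mu=8, tol=1e-9, max_iter=1000):
    """Source iteration with diamond-difference discrete-ordinates sweeps.

    1D slab, unit total cross section, scattering ratio c, unit isotropic
    intensity incident on the left face, vacuum on the right.
    """
    h = tau / n_cells
    mu, w = np.polynomial.legendre.leggauss(n_mu)     # weights sum to 2
    phi = np.zeros(n_cells)
    for _ in range(max_iter):
        S = 0.5 * c * phi                             # isotropic scattering source
        phi_new = np.zeros(n_cells)
        for m in range(n_mu):
            if mu[m] > 0:                             # sweep left -> right
                cells, psi_in = range(n_cells), 1.0
            else:                                     # sweep right -> left
                cells, psi_in = range(n_cells - 1, -1, -1), 0.0
            a = abs(mu[m]) / h
            for i in cells:
                # diamond difference: mu*(out-in)/h + (in+out)/2 = S_i
                psi_out = ((a - 0.5) * psi_in + S[i]) / (a + 0.5)
                phi_new[i] += w[m] * 0.5 * (psi_in + psi_out)
                psi_in = psi_out
        done = np.max(np.abs(phi_new - phi)) < tol
        phi = phi_new
        if done:
            break
    return phi

phi = solve_slab()
```

The scalar flux decays monotonically with depth, and raising the scattering ratio lets more radiation reach the far face, as expected physically.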

  1. Discontinuous finite element method for vector radiative transfer

    NASA Astrophysics Data System (ADS)

    Wang, Cun-Hai; Yi, Hong-Liang; Tan, He-Ping

    2017-03-01

    The discontinuous finite element method (DFEM) is applied to solve the vector radiative transfer in participating media. The derivation in a discrete form of the vector radiation governing equations is presented, in which the angular space is discretized by the discrete-ordinates approach with a local refined modification, and the spatial domain is discretized into finite non-overlapped discontinuous elements. The elements in the whole solution domain are connected by modelling the boundary numerical flux between adjacent elements, which makes the DFEM numerically stable for solving radiative transfer equations. Various vector radiative transfer problems are tested to verify the performance of the developed DFEM, including vector radiative transfer in a one-dimensional parallel slab containing a Mie/Rayleigh/strong forward scattering medium and a two-dimensional square medium. The DFEM results agree very well with benchmark solutions in published references, showing that the developed DFEM is accurate and effective for solving vector radiative transfer problems.

  2. Sub-Scale Analysis of New Large Aircraft Pool Fire-Suppression

    DTIC Science & Technology

    2016-01-01

    discrete ordinates radiation and single step Khan and Greeves soot model provided radiation and soot interaction. Agent spray dynamics were... Notable differences observed showed a modeled increase in the mockup surface heat-up rate as well as a modeled decreased rate of soot production... 488 K suppression started. Large deviation between sensors due to sensor alignment challenges and asymmetric fuel surface ignition. Unremarkable

  3. Rarefied gas flow simulations using high-order gas-kinetic unified algorithms for Boltzmann model equations

    NASA Astrophysics Data System (ADS)

    Li, Zhi-Hui; Peng, Ao-Ping; Zhang, Han-Xin; Yang, Jaw-Yen

    2015-04-01

    This article reviews rarefied gas flow computations based on nonlinear model Boltzmann equations using deterministic high-order gas-kinetic unified algorithms (GKUA) in phase space. The nonlinear Boltzmann model equations considered include the BGK model, the Shakhov model, the Ellipsoidal Statistical model and the Morse model. Several high-order gas-kinetic unified algorithms, which combine the discrete velocity ordinate method in velocity space and the compact high-order finite-difference schemes in physical space, are developed. The parallel strategies implemented with the accompanying algorithms are of equal importance. Accurate computations of rarefied gas flow problems using various kinetic models over wide ranges of Mach numbers 1.2-20 and Knudsen numbers 0.0001-5 are reported. The effects of different high resolution schemes on the flow resolution under the same discrete velocity ordinate method are studied. A conservative discrete velocity ordinate method to ensure the kinetic compatibility condition is also implemented. The present algorithms are tested for the one-dimensional unsteady shock-tube problems with various Knudsen numbers, the steady normal shock wave structures for different Mach numbers, the two-dimensional flows past a circular cylinder and a NACA 0012 airfoil to verify the present methodology and to simulate gas transport phenomena covering various flow regimes. Illustrations of large scale parallel computations of three-dimensional hypersonic rarefied flows over the reusable sphere-cone satellite and the re-entry spacecraft using almost the largest computer systems available in China are also reported. The present computed results are compared with the theoretical prediction from gas dynamics, related DSMC results, slip N-S solutions and experimental data, and good agreement can be found. 
The numerical experience indicates that although the direct model Boltzmann equation solver in phase space can be computationally expensive, nevertheless, the present GKUAs for kinetic model Boltzmann equations in conjunction with current available high-performance parallel computer power can provide a vital engineering tool for analyzing rarefied gas flows covering the whole range of flow regimes in aerospace engineering applications.
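    The "conservative discrete velocity ordinate method" mentioned above hinges on a compatibility condition: the quadrature over the discrete velocity grid must reproduce the macroscopic moments (density, velocity, temperature) of the distribution function. A minimal sketch of that check, using a uniform trapezoidal velocity grid and a 1D Maxwellian (grid extent and parameters are arbitrary choices for the illustration):

```python
import numpy as np

def maxwellian(v, rho, u, T):
    """1D Maxwellian velocity distribution."""
    return rho / np.sqrt(2.0 * np.pi * T) * np.exp(-(v - u) ** 2 / (2.0 * T))

# Discrete velocity ordinate grid: uniform nodes with trapezoidal weights.
v = np.linspace(-10.0, 10.0, 201)
w = np.full_like(v, v[1] - v[0])
w[0] *= 0.5
w[-1] *= 0.5

f = maxwellian(v, rho=1.2, u=0.3, T=0.8)

# Moments recovered by quadrature; the discrete compatibility condition
# requires these to match the continuous moments of f.
rho_h = np.sum(w * f)
u_h = np.sum(w * v * f) / rho_h
T_h = np.sum(w * (v - u_h) ** 2 * f) / rho_h
```

Because the Maxwellian decays rapidly and the grid resolves it well, the trapezoidal quadrature recovers the input moments essentially to machine precision; a BGK collision term built from these discrete moments then conserves mass, momentum, and energy on the grid.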

  4. Clouds in the atmospheres of extrasolar planets. IV. On the scattering greenhouse effect of CO2 ice particles: Numerical radiative transfer studies

    NASA Astrophysics Data System (ADS)

    Kitzmann, D.; Patzer, A. B. C.; Rauer, H.

    2013-09-01

    Context. Owing to their wavelength-dependent absorption and scattering properties, clouds have a strong impact on the climate of planetary atmospheres. The potential greenhouse effect of CO2 ice clouds in the atmospheres of terrestrial extrasolar planets is of particular interest because it might influence the position and thus the extension of the outer boundary of the classic habitable zone around main sequence stars. Such a greenhouse effect, however, is a complicated function of the CO2 ice particles' optical properties. Aims: We study the radiative effects of CO2 ice particles obtained by different numerical treatments to solve the radiative transfer equation. To determine the effectiveness of the scattering greenhouse effect caused by CO2 ice clouds, the radiative transfer calculations are performed over the relevant wide range of particle sizes and optical depths, employing different numerical methods. Methods: We used Mie theory to calculate the optical properties of particle polydispersion. The radiative transfer calculations were done with a high-order discrete ordinate method (DISORT). Two-stream radiative transfer methods were used for comparison with previous studies. Results: The comparison between the results of a high-order discrete ordinate method and simpler two-stream approaches reveals large deviations in terms of a potential scattering efficiency of the greenhouse effect. The two-stream methods overestimate the transmitted and reflected radiation, thereby yielding a higher scattering greenhouse effect. For the particular case of a cool M-type dwarf, the CO2 ice particles show no strong effective scattering greenhouse effect by using the high-order discrete ordinate method, whereas a positive net greenhouse effect was found for the two-stream radiative transfer schemes. As a result, previous studies of the effects of CO2 ice clouds using two-stream approximations overrated the atmospheric warming caused by the scattering greenhouse effect. 
Consequently, the scattering greenhouse effect of CO2 ice particles seems to be less effective than previously estimated. In general, higher order radiative transfer methods are needed to describe the effects of CO2 ice clouds accurately as indicated by our numerical radiative transfer studies.
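    The gap between two-stream and high-order angular treatments can be seen even without clouds. The toy below (my construction, not the paper's DISORT setup) computes the diffuse flux transmission of isotropic radiation through a purely absorbing layer, once with a 64-point Gauss-Legendre quadrature over angle and once with a single two-stream ordinate mu1 = 1/sqrt(3); the sign and size of the discrepancy depend on the problem, but at moderate optical depth the two answers differ by well over 10%.

```python
import numpy as np

def transmission_quadrature(tau, n=64):
    """High-order angular quadrature of the diffuse flux transmission
    T = int_0^1 2*mu*exp(-tau/mu) d(mu) through an absorbing layer."""
    x, w = np.polynomial.legendre.leggauss(n)
    mu, wt = 0.5 * (x + 1.0), 0.5 * w
    return float(np.sum(wt * 2.0 * mu * np.exp(-tau / mu)))

def transmission_two_stream(tau, mu1=1.0 / np.sqrt(3.0)):
    """Two-stream: the whole hemisphere collapsed onto one ordinate mu1."""
    return float(np.exp(-tau / mu1))

t_hi = transmission_quadrature(1.0)
t_2s = transmission_two_stream(1.0)
```

In a scattering atmosphere the two-stream errors compound through multiple reflections, which is why the abstract finds qualitatively different greenhouse conclusions between the two treatments.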

  5. A Fast Optimization Method for General Binary Code Learning.

    PubMed

    Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng

    2016-09-22

    Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interest in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely-used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term with a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both supervised and unsupervised hashing losses, together with the bits uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
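    The core move in such discrete proximal methods, a gradient step on the smooth loss followed by an exact projection back onto the binary set, can be sketched on a toy separable objective. This is my illustration of the iteration pattern, not the paper's actual hashing loss (which couples the bits through data and constraints):

```python
import numpy as np

def dplm_sketch(Z, lr=1.0, iters=10, seed=0):
    """Toy discrete proximal iteration: a gradient step on the smooth
    loss L(B) = 0.5 * ||B - Z||^2 followed by projection onto {-1, +1}."""
    rng = np.random.default_rng(seed)
    B = np.sign(rng.standard_normal(Z.shape))   # random binary initialization
    for _ in range(iters):
        B = np.sign(B - lr * (B - Z))           # gradient step + sign projection
        B[B == 0] = 1.0                         # break exact ties toward +1
    return B

Z = np.array([[0.3, -1.2, 0.8],
              [-0.1, 2.0, -0.7]])
B = dplm_sketch(Z)
```

With a unit step this separable quadratic is solved by a single projection, B = sign(Z); on the real, coupled objective the step size is smaller and the iteration genuinely matters, but each iterate stays exactly binary throughout, which is the point of avoiding continuous relaxation.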

  6. Discrete Ramanujan transform for distinguishing the protein coding regions from other regions.

    PubMed

    Hua, Wei; Wang, Jiasong; Zhao, Jian

    2014-01-01

    Based on the study of the Ramanujan sum and Ramanujan coefficient, this paper suggests the concepts of the discrete Ramanujan transform and spectrum. Using the Voss numerical representation, one maps a symbolic DNA strand to a numerical DNA sequence, and deduces the discrete Ramanujan spectrum of the numerical DNA sequence. It is well known that the discrete Fourier power spectrum of a protein coding sequence has an important feature of 3-base periodicity, which is widely used for DNA sequence analysis by the technique of the discrete Fourier transform. It is performed by testing the signal-to-noise ratio at frequency N/3 as a criterion for the analysis, where N is the length of the sequence. The results presented in this paper show that the property of 3-base periodicity can be identified as a prominent spike of the discrete Ramanujan spectrum at period 3 for the protein coding regions. The signal-to-noise ratio for the discrete Ramanujan spectrum is defined for numerical measurement. Therefore, the discrete Ramanujan spectrum and the signal-to-noise ratio of a DNA sequence can be used for distinguishing the protein coding regions from the noncoding regions. All the exon and intron sequences in whole chromosomes 1, 2, 3 and 4 of Caenorhabditis elegans have been tested, and the histograms and tables from the computational results illustrate the reliability of the method. In addition, we have shown theoretically that the algorithm for calculating the discrete Ramanujan spectrum has lower computational complexity and higher computational accuracy. The computational experiments show that the technique of using the discrete Ramanujan spectrum for classifying different DNA sequences is a fast and effective method. Copyright © 2014 Elsevier Ltd. All rights reserved.
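    The DFT baseline that the paper compares against is easy to sketch: map the strand to four Voss indicator sequences, sum their power spectra, and test the signal-to-noise ratio at frequency N/3. A minimal version (the noise normalization, mean non-DC power, is one common convention, assumed here rather than taken from the paper):

```python
import numpy as np

def voss(seq):
    """Voss representation: four binary indicator rows for A, C, G, T."""
    return np.array([[1.0 if ch == b else 0.0 for ch in seq] for b in "ACGT"])

def snr_at_third(seq):
    """Total DFT power at frequency k = N/3, over the mean non-DC power."""
    P = np.abs(np.fft.fft(voss(seq), axis=1)) ** 2
    S = P.sum(axis=0)                  # total power spectrum
    noise = S[1:].mean()               # mean power, DC term excluded
    return S[len(seq) // 3] / noise
```

A perfectly codon-periodic toy sequence such as "ATG" repeated produces a large spike at N/3, while a sequence with a different period produces none; the paper's discrete Ramanujan spectrum detects the same period-3 structure as a spike at period 3 instead of at frequency N/3.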

  7. Learning Discriminative Binary Codes for Large-scale Cross-modal Retrieval.

    PubMed

    Xu, Xing; Shen, Fumin; Yang, Yang; Shen, Heng Tao; Li, Xuelong

    2017-05-01

    Hashing based methods have attracted considerable attention for efficient cross-modal retrieval on large-scale multimedia data. The core problem of cross-modal hashing is how to learn compact binary codes that construct the underlying correlations between heterogeneous features from different modalities. A majority of recent approaches aim at learning hash functions to preserve the pairwise similarities defined by given class labels. However, these methods fail to explicitly explore the discriminative property of class labels during hash function learning. In addition, they usually discard the discrete constraints imposed on the to-be-learned binary codes, and compromise to solve a relaxed problem with quantization to obtain the approximate binary solution. Therefore, the binary codes generated by these methods are suboptimal and less discriminative to different classes. To overcome these drawbacks, we propose a novel cross-modal hashing method, termed discrete cross-modal hashing (DCH), which directly learns discriminative binary codes while retaining the discrete constraints. Specifically, DCH learns modality-specific hash functions for generating unified binary codes, and these binary codes are viewed as representative features for discriminative classification with class labels. An effective discrete optimization algorithm is developed for DCH to jointly learn the modality-specific hash function and the unified binary codes. Extensive experiments on three benchmark data sets highlight the superiority of DCH under various cross-modal scenarios and show its state-of-the-art performance.

  8. Verification of a neutronic code for transient analysis in reactors with Hex-z geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez-Pintor, S.; Verdu, G.; Ginestar, D.

    Due to the geometry of the fuel bundles, simulating reactors such as VVERs requires methods that can deal with hexagonal prisms as the basic elements of the spatial discretization. The main features of a code based on a high-order finite element method for the spatial discretization of the neutron diffusion equation and an implicit difference method for the time discretization of this equation are presented, and the performance of the code is tested by solving the first exercise of the AER transient benchmark. The obtained results are compared with the reference results of the benchmark and with the results provided by the PARCS code. (authors)

  9. Introduction to the Theory of Atmospheric Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Buglia, J. J.

    1986-01-01

    The fundamental physical and mathematical principles governing the transmission of radiation through the atmosphere are presented, with emphasis on the scattering of visible and near-IR radiation. The classical two-stream, thin-atmosphere, and Eddington approximations, along with some of their offspring, are developed in detail, along with the discrete ordinates method of Chandrasekhar. The adding and doubling methods are discussed from basic principles, and references for further reading are suggested.

  10. Discrete Sparse Coding.

    PubMed

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.
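    The generative model described above, sparse discrete latents combined through a dictionary plus Gaussian noise, can be made concrete in a tiny case where the posterior is small enough to enumerate exactly. This is an illustrative sketch only: the paper trains with truncated expectation-maximization precisely because enumeration is infeasible beyond a handful of latents, and the dictionary, prior, and noise level below are made-up values.

```python
import numpy as np
from itertools import product

def map_discrete_sparse(y, D, values=(0.0, 1.0), pi=0.2, sigma=0.1):
    """Exact MAP over all discrete latent vectors (feasible only for a
    handful of latents; real algorithms use truncated EM instead)."""
    best, best_lp = None, -np.inf
    for s in product(values, repeat=D.shape[1]):
        s = np.array(s)
        lp = -np.sum((y - D @ s) ** 2) / (2.0 * sigma ** 2)    # Gaussian likelihood
        lp += np.sum(np.where(s != 0, np.log(pi), np.log(1.0 - pi)))  # sparse prior
        if lp > best_lp:
            best, best_lp = s, lp
    return best

D = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])
s_true = np.array([1.0, 0.0, 1.0])
s_map = map_discrete_sparse(D @ s_true, D)   # noiseless observation
```

With clean data the MAP estimate recovers the generating latent vector, mirroring the paper's first sanity check of recovering generating parameters from artificial data.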

  11. Associations between county and municipality zoning ordinances and access to fruit and vegetable outlets in rural North Carolina, 2012.

    PubMed

    Mayo, Mariel Leah; Pitts, Stephanie B Jilcott; Chriqui, Jamie F

    2013-12-05

    Zoning ordinances and land-use plans may influence the community food environment by determining placement and access to food outlets, which subsequently support or hinder residents' attempts to eat healthfully. The objective of this study was to examine associations between healthful food zoning scores as derived from information on local zoning ordinances, county demographics, and residents' access to fruit and vegetable outlets in rural northeastern North Carolina. From November 2012 through March 2013, county and municipality zoning ordinances were identified and double-coded by using the Bridging the Gap food code/policy audit form. A healthful food zoning score was derived by assigning points for the allowed use of fruit and vegetable outlets. Pearson coefficients were calculated to examine correlations between the healthful food zoning score, county demographics, and the number of fruit and vegetable outlets. In March and April 2013, qualitative interviews were conducted among county and municipal staff members knowledgeable about local zoning and planning to ascertain implementation and enforcement of zoning to support fruit and vegetable outlets. We found a strong positive correlation between healthful food zoning scores and the number of fruit and vegetable outlets in 13 northeastern North Carolina counties (r = 0.66, P = .01). Major themes in implementation and enforcement of zoning to support fruit and vegetable outlets included strict enforcement versus lack of enforcement of zoning regulations. Increasing the range of permitted uses in zoning districts to include fruit and vegetable outlets may increase access to healthful fruit and vegetable outlets in rural communities.
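    The statistic reported above (r = 0.66) is a plain Pearson product-moment correlation between the county-level healthful food zoning score and the outlet count. For reference, the computation is a few lines; the score/outlet numbers below are hypothetical stand-ins, not the study's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical illustration: zoning scores vs. outlet counts per county.
scores = [2, 5, 3, 8, 6, 9, 4]
outlets = [1, 4, 2, 7, 5, 9, 2]
r = pearson_r(scores, outlets)
```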

  12. Associations Between County and Municipality Zoning Ordinances and Access to Fruit And Vegetable Outlets in Rural North Carolina, 2012

    PubMed Central

    Mayo, Mariel Leah; Chriqui, Jamie F.

    2013-01-01

    Introduction Zoning ordinances and land-use plans may influence the community food environment by determining placement and access to food outlets, which subsequently support or hinder residents’ attempts to eat healthfully. The objective of this study was to examine associations between healthful food zoning scores as derived from information on local zoning ordinances, county demographics, and residents’ access to fruit and vegetable outlets in rural northeastern North Carolina. Methods From November 2012 through March 2013, county and municipality zoning ordinances were identified and double-coded by using the Bridging the Gap food code/policy audit form. A healthful food zoning score was derived by assigning points for the allowed use of fruit and vegetable outlets. Pearson coefficients were calculated to examine correlations between the healthful food zoning score, county demographics, and the number of fruit and vegetable outlets. In March and April 2013, qualitative interviews were conducted among county and municipal staff members knowledgeable about local zoning and planning to ascertain implementation and enforcement of zoning to support fruit and vegetable outlets. Results We found a strong positive correlation between healthful food zoning scores and the number of fruit and vegetable outlets in 13 northeastern North Carolina counties (r = 0.66, P = .01). Major themes in implementation and enforcement of zoning to support fruit and vegetable outlets included strict enforcement versus lack of enforcement of zoning regulations. Conclusion Increasing the range of permitted uses in zoning districts to include fruit and vegetable outlets may increase access to healthful fruit and vegetable outlets in rural communities. PMID:24309091

  13. VVER-440 and VVER-1000 reactor dosimetry benchmark - BUGLE-96 versus ALPAN VII.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duo, J. I.

    2011-07-01

    Document available in abstract form only, full text of document follows: Analytical results of the vodo-vodyanoi energetichesky reactor-(VVER-) 440 and VVER-1000 reactor dosimetry benchmarks developed from engineering mockups at the Nuclear Research Inst. Rez LR-0 reactor are discussed. These benchmarks provide accurate determination of radiation field parameters in the vicinity and over the thickness of the reactor pressure vessel. Measurements are compared to calculated results with two sets of tools: the TORT discrete ordinates code and BUGLE-96 cross-section library versus the newly Westinghouse-developed RAPTOR-M3G and ALPAN VII.0. The parallel code RAPTOR-M3G enables detailed neutron distributions in energy and space to be computed in reduced computational time. The ALPAN VII.0 cross-section library is based on ENDF/B-VII.0 and is designed for reactor dosimetry applications. It uses a unique broad group structure to enhance resolution in the thermal-neutron-energy range compared to other analogous libraries. The comparison of fast neutron (E > 0.5 MeV) results shows good agreement (within 10%) between the BUGLE-96 and ALPAN VII.0 libraries. Furthermore, the results compare well with analogous results of participants of the REDOS program (2005). Finally, the analytical results for fast neutrons agree within 15% with the measurements, for most locations in all three mockups. In general, however, the analytical results underestimate the attenuation through the reactor pressure vessel thickness compared to the measurements. (authors)

  14. Order information in verbal working memory shifts the subjective midpoint in both the line bisection and the landmark tasks.

    PubMed

    Antoine, Sophie; Ranzini, Mariagrazia; Gebuis, Titia; van Dijck, Jean-Philippe; Gevers, Wim

    2017-10-01

    A largely substantiated view in the domain of working memory is that the maintenance of serial order is achieved by generating associations of each item with an independent representation of its position, so-called position markers. Recent studies reported that the ordinal position of an item in verbal working memory interacts with spatial processing. This suggests that position markers might be spatial in nature. However, these interactions were so far observed in tasks implying a clear binary categorization of space (i.e., with left and right responses or targets). Such binary categorizations leave room for alternative interpretations, such as congruency between non-spatial categorical codes for ordinal position (e.g., begin and end) and spatial categorical codes for response (e.g., left and right). Here we discard this interpretation by providing evidence that this interaction can also be observed in a task that draws upon a continuous processing of space, the line bisection task. Specifically, bisections are modulated by ordinal position in verbal working memory, with lines bisected more towards the right after retrieving items from the end compared to the beginning of the memorized sequence. This supports the idea that position markers are intrinsically spatial in nature.

  15. An Application of Discrete Mathematics to Coding Theory.

    ERIC Educational Resources Information Center

    Donohoe, L. Joyce

    1992-01-01

    Presents a public-key cryptosystem application to introduce students to several topics in discrete mathematics. A computer algorithm using recursive methods is presented to solve a problem in which one person wants to send a coded message to a second person while keeping the message secret from a third person. (MDH)
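
    The recursive computation this kind of classroom cryptosystem relies on can be sketched as a toy RSA-style exchange. The article's exact system is not specified, so the tiny key values below (p = 61, q = 53, e = 17, d = 2753) are standard textbook numbers chosen purely for illustration.

```python
# Toy public-key sketch: recursive square-and-multiply modular exponentiation.
# Real systems use primes hundreds of digits long; these values are illustrative.

def power_mod(base, exp, mod):
    """Recursively compute base**exp % mod by halving the exponent."""
    if exp == 0:
        return 1
    half = power_mod(base, exp // 2, mod)
    result = (half * half) % mod
    if exp % 2:
        result = (result * base) % mod
    return result

p, q = 61, 53
n = p * q          # public modulus (3233)
e = 17             # public exponent
d = 2753           # private exponent: e*d = 1 (mod (p-1)*(q-1))

message = 42
cipher = power_mod(message, e, n)   # anyone can encrypt with (n, e)
plain = power_mod(cipher, d, n)     # only the private-key holder can decrypt
print(plain)  # recovers the original message, 42
```

    The third person sees only `n`, `e`, and `cipher`; recovering `d` requires factoring `n`, which is what makes the message secret at realistic key sizes.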

  16. PROTEUS-SN User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shemon, Emily R.; Smith, Micheal A.; Lee, Changho

    2016-02-16

    PROTEUS-SN is a three-dimensional, highly scalable, high-fidelity neutron transport code developed at Argonne National Laboratory. The code is applicable to all-spectrum reactor transport calculations, particularly those in which a high degree of fidelity is needed either to represent spatial detail or to resolve solution gradients. PROTEUS-SN solves the second-order formulation of the transport equation using the continuous Galerkin finite element method in space, the discrete ordinates approximation in angle, and the multigroup approximation in energy. PROTEUS-SN’s parallel methodology permits the efficient decomposition of the problem by both space and angle, allowing large problems to run efficiently on hundreds of thousands of cores. PROTEUS-SN can also be used in serial or on smaller compute clusters (tens to hundreds of cores) for smaller homogenized problems, although it is generally more computationally expensive than traditional homogenized-methodology codes. PROTEUS-SN has been used to model partially homogenized systems, where regions of interest are represented explicitly and other regions are homogenized to reduce the problem size and required computational resources. PROTEUS-SN solves forward and adjoint eigenvalue problems and permits both neutron upscattering and downscattering. An adiabatic kinetics option has recently been included for performing simple time-dependent calculations in addition to standard steady-state calculations. PROTEUS-SN handles void and reflective boundary conditions. Multigroup cross sections can be generated externally using the MC2-3 fast reactor multigroup cross section generation code or internally using the cross section application programming interface (API), which can treat subgroup or resonance table libraries. PROTEUS-SN is written in Fortran 90 and also includes C preprocessor definitions. The code links against the PETSc, METIS, HDF5, and MPICH libraries. It optionally links against the MOAB library and is part of the SHARP multi-physics suite for coupled multi-physics analysis of nuclear reactors. This user manual describes how to set up a neutron transport simulation with the PROTEUS-SN code. A companion methodology manual describes the theory and algorithms within PROTEUS-SN.

  17. Heat transfer analysis of a lab scale solar receiver using the discrete ordinates model

    NASA Astrophysics Data System (ADS)

    Dordevich, Milorad C. W.

    This thesis documents the development, implementation and simulation outcomes of the Discrete Ordinates Radiation Model in ANSYS FLUENT simulating the radiative heat transfer occurring in the San Diego State University lab-scale Small Particle Heat Exchange Receiver. In tandem, it also serves to document how well the Discrete Ordinates Radiation Model results compared with those from the in-house developed Monte Carlo Ray Trace Method in a number of simplified geometries. The secondary goal of this study was the inclusion of new physics, specifically buoyancy. Implementation of an additional Monte Carlo Ray Trace Method software package known as VEGAS, which was specifically developed to model lab scale solar simulators and provide directional, flux and beam spread information for the aperture boundary condition, was also a goal of this study. Upon establishment of the model, test cases were run to understand the predictive capabilities of the model. It was shown that agreement within 15% was obtained against laboratory measurements made in the San Diego State University Combustion and Solar Energy Laboratory with the metrics of comparison being the thermal efficiency and outlet, wall and aperture quartz temperatures. Parametric testing additionally showed that the thermal efficiency of the system was very dependent on the mass flow rate and particle loading. It was also shown that the orientation of the small particle heat exchange receiver was important in attaining optimal efficiency due to the fact that buoyancy induced effects could not be neglected. The analyses presented in this work were all performed on the lab-scale small particle heat exchange receiver. The lab-scale small particle heat exchange receiver is 0.38 m in diameter by 0.51 m tall and operated with an input irradiation flux of 3 kWth and a nominal mass flow rate of 2 g/s with a suspended particle mass loading of 2 g/m3. 
    Finally, based on insight gained during the implementation and development of the model, a new and improved design was simulated to predict how the efficiency of the small particle heat exchange receiver could be improved through a few simple internal geometry design modifications. It was shown that the theoretical calculated efficiency of the small particle heat exchange receiver could be improved from 64% to 87% with adjustments to the internal geometry, mass flow rate, and mass loading.

  18. Infant differential behavioral responding to discrete emotions.

    PubMed

    Walle, Eric A; Reschke, Peter J; Camras, Linda A; Campos, Joseph J

    2017-10-01

    Emotional communication regulates the behaviors of social partners. Research on individuals' responding to others' emotions typically compares responses to a single negative emotion with responses to a neutral or positive emotion. Furthermore, coding of such responses routinely measures surface-level features of the behavior (e.g., approach vs. avoidance) rather than its underlying function (e.g., the goal of the approach or avoidant behavior). This investigation examined infants' responding to others' emotional displays across 5 discrete emotions: joy, sadness, fear, anger, and disgust. Specifically, 16-, 19-, and 24-month-old infants observed an adult communicate a discrete emotion toward a stimulus during a naturalistic interaction. Infants' responses were coded to capture the function of their behaviors (e.g., exploration, prosocial behavior, and security seeking). The results revealed a number of instances indicating that infants use different functional behaviors in response to discrete emotions. Differences in behaviors across emotions were clearest in the 24-month-old infants, though younger infants also demonstrated some differential use of behaviors in response to discrete emotions. This is the first comprehensive study to identify differences in how infants respond with goal-directed behaviors to discrete emotions. Additionally, the inclusion of a function-based coding scheme and interpersonal paradigms may be informative for future emotion research with children and adults. Possible developmental accounts for the observed behaviors and the benefits of coding techniques emphasizing the function of social behavior over their form are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Ridit Analysis for Cooper-Harper and Other Ordinal Ratings for Sparse Data - A Distance-based Approach

    DTIC Science & Technology

    2016-09-01

    ...is to fit empirical Beta distributions to observed data, and then to use a randomization approach to make inferences on the difference between... a Ridit analysis on the often sparse data sets in many Flying Qualities applications. The method of this paper is to fit empirical Beta... One such measure is the discrete-probability-distribution version of the (squared) Hellinger distance (Yang & Le Cam, 2000): H²(P, Q) = 1 − Σᵢ √(pᵢ qᵢ)
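
    As a concrete companion to the distance-based approach named in the snippet, here is a minimal sketch of the squared Hellinger distance between two discrete probability distributions; the sample distributions are invented for illustration, not taken from the report.

```python
import math

def hellinger_sq(p, q):
    """Squared Hellinger distance: H^2(P, Q) = 1 - sum_i sqrt(p_i * q_i)."""
    return 1.0 - sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

# e.g., two hypothetical distributions over five ordinal rating categories
flat = [0.2, 0.2, 0.2, 0.2, 0.2]
peaked = [0.6, 0.1, 0.1, 0.1, 0.1]

print(hellinger_sq(flat, flat))             # identical distributions: ~0
print(round(hellinger_sq(flat, peaked), 3))  # strictly between 0 and 1
```

    The measure is symmetric and bounded in [0, 1], which is what makes it usable as a distance between sparse empirical rating distributions.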

  20. Analytic approach to photoelectron transport.

    NASA Technical Reports Server (NTRS)

    Stolarski, R. S.

    1972-01-01

    The equation governing the transport of photoelectrons in the ionosphere is shown to be equivalent to the equation of radiative transfer. In the single-energy approximation this equation is solved in closed form by the method of discrete ordinates for isotropic scattering and for a single-constituent atmosphere. The results include prediction of the angular distribution of photoelectrons at all altitudes and, in particular, the angular distribution of the escape flux. The implications of these solutions in real atmosphere calculations are discussed.
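
    The method of discrete ordinates invoked in the abstract can be illustrated numerically (rather than in closed form) for the same setting: a one-dimensional slab with isotropic scattering, Gauss-Legendre ordinates, and source iteration with simple upwind sweeps. All parameters below are invented for illustration and are not taken from the paper.

```python
import numpy as np

sigma_t, c = 1.0, 0.5            # total cross section, secondaries per collision
width, cells = 2.0, 200          # slab thickness (mean free paths), grid cells
dx = width / cells
mu, w = np.polynomial.legendre.leggauss(8)   # S8 ordinates; weights sum to 2

phi = np.zeros(cells)            # scalar flux, phi = integral of psi over mu
for _ in range(500):             # source iteration
    q = 0.5 * c * sigma_t * phi  # isotropic scattering source (per direction)
    phi_new = np.zeros(cells)
    for m, wt in zip(mu, w):
        psi = np.zeros(cells)
        inflow = 1.0 if m > 0 else 0.0       # unit flux incident on left face
        rng = range(cells) if m > 0 else range(cells - 1, -1, -1)
        for i in rng:                        # upwind sweep along direction m
            psi[i] = (abs(m) / dx * inflow + q[i]) / (abs(m) / dx + sigma_t)
            inflow = psi[i]
        phi_new += wt * psi
    if np.max(np.abs(phi_new - phi)) < 1e-12:
        phi = phi_new
        break
    phi = phi_new

print(round(float(phi[0]), 3), round(float(phi[-1]), 3))  # flux decays across slab
```

    The per-direction `psi` arrays are the discrete analogue of the angular distribution discussed in the abstract; the outgoing ordinates at the far face give the angular distribution of the escape flux.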

  1. 77 FR 15122 - Te-Moak Tribe of Western Shoshone- Ordinance Pursuant to United States Code, Legalizing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-14

    ... alcoholic beverage business is seeking to be licensed. (e) No such license shall be transferred without the..., Chairman, Te-Moak Tribe of Western Shoshone ATTEST: /s/ Vera Johnny, Acting Recording Secretary Te-Moak...

  2. Rescaling quality of life values from discrete choice experiments for use as QALYs: a cautionary tale

    PubMed Central

    Flynn, Terry N; Louviere, Jordan J; Marley, Anthony AJ; Coast, Joanna; Peters, Tim J

    2008-01-01

    Background Researchers are increasingly investigating the potential for ordinal tasks such as ranking and discrete choice experiments to estimate QALY health state values. However, the assumptions of random utility theory, which underpin the statistical models used to provide these estimates, have received insufficient attention. In particular, the assumptions made about the decisions between living states and the death state are not satisfied, at least for some people. Estimated values are likely to be incorrectly anchored with respect to death (zero) in such circumstances. Methods Data from the Investigating Choice Experiments for the preferences of older people CAPability instrument (ICECAP) valuation exercise were analysed. The values (previously anchored to the worst possible state) were rescaled using an ordinal model proposed previously to estimate QALY-like values. Bootstrapping was conducted to vary artificially the proportion of people who conformed to the conventional random utility model underpinning the analyses. Results Only 26% of respondents conformed unequivocally to the assumptions of conventional random utility theory. At least 14% of respondents unequivocally violated the assumptions. Varying the relative proportions of conforming respondents in sensitivity analyses led to large changes in the estimated QALY values, particularly for lower-valued states. As a result these values could be either positive (considered to be better than death) or negative (considered to be worse than death). Conclusion Use of a statistical model such as conditional (multinomial) regression to anchor quality of life values from ordinal data to death is inappropriate in the presence of respondents who do not conform to the assumptions of conventional random utility theory. 
This is clearest when estimating values for that group of respondents observed in valuation samples who refuse to consider any living state to be worse than death: in such circumstances the model cannot be estimated. Only a valuation task requiring respondents to make choices in which both length and quality of life vary can produce estimates that properly reflect the preferences of all respondents. PMID:18945358

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasad, M.K.; Kershaw, D.S.; Shaw, M.J.

    The authors present detailed features of the ICF3D hydrodynamics code used for inertial fusion simulations. This code is intended to be a state-of-the-art upgrade of the well-known fluid code LASNEX. ICF3D employs discontinuous finite elements on a discrete unstructured mesh consisting of a variety of 3D polyhedra, including tetrahedra, prisms, and hexahedra. The authors discuss details of how Roe-averaged second-order convection is applied on the discrete elements, and how the C++ coding interface has helped to simplify implementing the many physics and numerics modules within the code package. The authors emphasize the virtues of object-oriented design in large-scale projects such as ICF3D.

  4. 24 CFR 880.207 - Property standards.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... State and local laws, codes, ordinances and regulations. (g) Smoke detectors—(1) Performance requirement... smoke detector, in proper working condition, on each level of the unit. If the unit is occupied by hearing-impaired persons, smoke detectors must have an alarm system, designed for hearing-impaired persons...

  5. 24 CFR 880.207 - Property standards.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... State and local laws, codes, ordinances and regulations. (g) Smoke detectors—(1) Performance requirement... smoke detector, in proper working condition, on each level of the unit. If the unit is occupied by hearing-impaired persons, smoke detectors must have an alarm system, designed for hearing-impaired persons...

  6. 24 CFR 880.207 - Property standards.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... State and local laws, codes, ordinances and regulations. (g) Smoke detectors—(1) Performance requirement... smoke detector, in proper working condition, on each level of the unit. If the unit is occupied by hearing-impaired persons, smoke detectors must have an alarm system, designed for hearing-impaired persons...

  7. 24 CFR 880.207 - Property standards.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... State and local laws, codes, ordinances and regulations. (g) Smoke detectors—(1) Performance requirement... smoke detector, in proper working condition, on each level of the unit. If the unit is occupied by hearing-impaired persons, smoke detectors must have an alarm system, designed for hearing-impaired persons...

  8. 24 CFR 880.207 - Property standards.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... State and local laws, codes, ordinances and regulations. (g) Smoke detectors—(1) Performance requirement... smoke detector, in proper working condition, on each level of the unit. If the unit is occupied by hearing-impaired persons, smoke detectors must have an alarm system, designed for hearing-impaired persons...

  9. Geometric Nonlinear Computation of Thin Rods and Shells

    NASA Astrophysics Data System (ADS)

    Grinspun, Eitan

    2011-03-01

    We develop simple, fast numerical codes for the dynamics of thin elastic rods and shells, by exploiting the connection between physics, geometry, and computation. By building a discrete mechanical picture from the ground up, mimicking the axioms, structures, and symmetries of the smooth setting, we produce numerical codes that not only are consistent in a classical sense, but also reproduce qualitative, characteristic behavior of a physical system (such as exact preservation of conservation laws) even for very coarse discretizations. As two recent examples, we present discrete computational models of elastic rods and shells, with straightforward extensions to the viscous setting. Even at coarse discretizations, the resulting simulations capture characteristic geometric instabilities. The numerical codes we describe are used in experimental mechanics, cinema, and consumer software products. This is joint work with Miklós Bergou, Basile Audoly, Max Wardetzky, and Etienne Vouga. This research is supported in part by the Sloan Foundation, the NSF, Adobe, Autodesk, Intel, the Walt Disney Company, and Weta Digital.

  10. Snow Microwave Radiative Transfer (SMRT): A new model framework to simulate snow-microwave interactions for active and passive remote sensing applications

    NASA Astrophysics Data System (ADS)

    Loewe, H.; Picard, G.; Sandells, M. J.; Mätzler, C.; Kontu, A.; Dumont, M.; Maslanka, W.; Morin, S.; Essery, R.; Lemmetyinen, J.; Wiesmann, A.; Floury, N.; Kern, M.

    2016-12-01

    Forward modeling of snow-microwave interactions is widely used to interpret microwave remote sensing data from active and passive sensors. Though several models are already available for this purpose, a joint effort has been undertaken in the past two years within the ESA Project "Microstructural origin of electromagnetic signatures in microwave remote sensing of snow". The new Snow Microwave Radiative Transfer (SMRT) model primarily facilitates a flexible treatment of snow microstructure as seen by X-ray tomography and seeks to unite the respective advantages of existing models. In its main setting, SMRT considers radiation transfer in a plane-parallel snowpack consisting of homogeneous layers, with the layer microstructure represented by an autocorrelation function. The electromagnetic model, which underlies the permittivity, absorption, and scattering calculations within a layer, is based on the improved Born approximation. The resulting vector radiative transfer equation in the snowpack is solved using spectral decomposition of the discrete ordinates discretization. SMRT is implemented in Python and employs an object-oriented, modular design which intends to i) provide an intuitive and fail-safe API for basic users, ii) enable efficient community development of extensions (e.g., improvements of sub-models for microstructure, permittivity, soil, or interface reflectivity) by advanced users, and iii) encapsulate the numerical core, which is maintained by the developers. For cross-validation and inter-model comparison, SMRT implements various ingredients of existing models as selectable options (e.g., Rayleigh or DMRT-QCA phase functions) and shallow wrappers to invoke legacy model code directly (MEMLS, DMRT-QMS, HUT). In this paper we give an overview of the model components and show examples and results from different validation schemes.

  11. Year End Progress Report on Rattlesnake Improvements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yaqi; DeHart, Mark David; Gleicher, Frederick Nathan

    Rattlesnake is a MOOSE-based radiation transport application developed at INL to support modern multi-physics simulations. At the beginning of the last year, Rattlesnake was able to perform steady-state, transient, and eigenvalue calculations for the multigroup radiation transport equations. Various discretization schemes have been implemented, including the continuous finite element method (FEM) with the discrete ordinates method (SN) and the spherical harmonics expansion method (PN) for the self-adjoint angular flux (SAAF) formulation, continuous FEM (CFEM) with SN for the least-squares (LS) formulation, and the diffusion approximation with CFEM and discontinuous FEM (DFEM). A separate toolkit, YAKXS, for multigroup cross section management was developed to support Rattlesnake calculations with feedback both from changes in field variables, such as fuel temperature and coolant density, and from changes in isotope inventory. The framework for nonlinear diffusion acceleration (NDA) within Rattlesnake has been set up, and NDA calculations have been performed both with the SAAF-SN-CFEM scheme and with Monte Carlo via OpenMC. It was also used for coupling BISON and RELAP-7 for full-core multiphysics simulations. Within the last fiscal year, significant improvements have been made in Rattlesnake. Rattlesnake development was migrated into our internal GITLAB development environment at the end of year 2014; since then, a total of 369 merge requests have been accepted into Rattlesnake. It is noted that the MOOSE framework that Rattlesnake is based on is under continuous development, and improvements made in MOOSE benefit Rattlesnake directly. It is acknowledged that MOOSE developers spent effort on patching Rattlesnake for the improvements made on the framework side. This report does not cover the code restructuring for better readability and modularity or the documentation improvements, on which we have spent tremendous effort. It only details some of the improvements in the following sections.

  12. Benchmark gamma-ray skyshine experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nason, R.R.; Shultis, J.K.; Faw, R.E.

    1982-01-01

    A benchmark gamma-ray skyshine experiment is described in which ⁶⁰Co sources were either collimated into an upward 150-deg conical beam or shielded vertically by two different thicknesses of concrete. A NaI(Tl) spectrometer and a high-pressure ion chamber were used to measure, respectively, the energy spectrum and the 4π exposure rate of the air-reflected gamma photons up to 700 m from the source. Analyses of the data and comparison to DOT discrete ordinates calculations are presented.

  13. S4 solution of the transport equation for eigenvalues using Legendre polynomials

    NASA Astrophysics Data System (ADS)

    Öztürk, Hakan; Bülbül, Ahmet

    2017-09-01

    Numerical solution of the transport equation for monoenergetic neutrons scattered isotropically through the medium of a finite homogeneous slab is studied for the determination of the eigenvalues. After obtaining the discrete ordinates form of the transport equation, separated homogeneous and particular solutions are formed, and the eigenvalues are then calculated using the Gauss-Legendre quadrature set. The calculated eigenvalues for various values of c0, the mean number of secondary neutrons per collision, are given in the tables.
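
    A hedged sketch of the kind of computation described, assuming the standard discrete-ordinates dispersion relation for isotropic scattering, 1 = (c/2) Σₘ wₘ ν/(ν − μₘ), with the S4 Gauss-Legendre set. The bisection bracket and the sample c values are illustrative; the paper's own tabulated eigenvalues are not reproduced here.

```python
import numpy as np

mu, w = np.polynomial.legendre.leggauss(4)   # S4 ordinates; weights sum to 2

def dispersion(nu, c):
    """Characteristic function whose root is the discrete eigenvalue nu."""
    return 1.0 - 0.5 * c * np.sum(w * nu / (nu - mu))

def eigenvalue(c):
    """Largest eigenvalue for c < 1, found by bisection above max |mu_m|."""
    lo, hi = mu.max() + 1e-12, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if dispersion(mid, c) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for c0 in (0.5, 0.7, 0.9):
    print(c0, round(eigenvalue(c0), 5))   # eigenvalue grows as c0 approaches 1
```

    The root lies just outside the largest quadrature abscissa and moves to infinity as c0 tends to 1, the familiar behavior of the asymptotic relaxation length.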

  14. Extending radiative transfer models by use of Bayes rule. [in atmospheric science

    NASA Technical Reports Server (NTRS)

    Whitney, C.

    1977-01-01

    This paper presents a procedure that extends some existing radiative transfer modeling techniques to problems in atmospheric science where curvature and layering of the medium and dynamic range and angular resolution of the signal are important. Example problems include twilight and limb scan simulations. Techniques that are extended include successive orders of scattering, matrix operator, doubling, Gauss-Seidel iteration, discrete ordinates and spherical harmonics. The procedure for extending them is based on Bayes' rule from probability theory.

  15. Drekar v.2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seefeldt, Ben; Sondak, David; Hensinger, David M.

    Drekar is an application code that solves partial differential equations for fluids that can optionally be coupled to electromagnetics. Drekar solves low-Mach compressible and incompressible computational fluid dynamics (CFD), compressible and incompressible resistive magnetohydrodynamics (MHD), and multiple-species plasmas interacting with electromagnetic fields. Drekar discretization technology includes continuous and discontinuous finite element formulations, stabilized finite element formulations, mixed-integration finite element bases (nodal, edge, face, volume), and an initial arbitrary Lagrangian-Eulerian (ALE) capability. Drekar contains the implementation of the discretized physics and leverages the open source Trilinos project for both parallel solver capabilities and general finite element discretization tools. The code will be released open source under a BSD license. The code is used for fundamental research on the simulation of fluids and plasmas in high performance computing environments.

  16. (U) Second-Order Sensitivity Analysis of Uncollided Particle Contributions to Radiation Detector Responses Using Ray-Tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Favorite, Jeffrey A.

    The Second-Level Adjoint Sensitivity System (2nd-LASS) that yields the second-order sensitivities of a response of uncollided particles with respect to isotope densities, cross sections, and source emission rates is derived in Refs. 1 and 2. In Ref. 2, we solved problems for the uncollided leakage from a homogeneous sphere and a multiregion cylinder using the PARTISN multigroup discrete-ordinates code. In this memo, we derive solutions of the 2nd-LASS for the particular case when the response is a flux or partial current density computed at a single point on the boundary, and the inner products are computed using ray-tracing. Both the PARTISN approach and the ray-tracing approach are implemented in a computer code, SENSPG. The next section of this report presents the equations of the 1st- and 2nd-LASS for uncollided particles and the first- and second-order sensitivities that use the solutions of the 1st- and 2nd-LASS. Section III presents solutions of the 1st- and 2nd-LASS equations for the case of ray-tracing from a detector point. Section IV presents specific solutions of the 2nd-LASS and derives the ray-trace form of the inner products needed for second-order sensitivities. Numerical results for the total leakage from a homogeneous sphere are presented in Sec. V and for the leakage from one side of a two-region slab in Sec. VI. Section VII is a summary and conclusions.

  17. 44 CFR 206.118 - Disposal of housing units.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., DEPARTMENT OF HOMELAND SECURITY DISASTER ASSISTANCE FEDERAL DISASTER ASSISTANCE Federal Assistance to..., has a site that complies with local codes and ordinances and part 9 of this Title. (ii) Adjustment to... providing temporary housing to disaster victims in major disasters and emergencies. As a condition of the...

  18. Reduced discretization error in HZETRN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaba, Tony C., E-mail: Tony.C.Slaba@nasa.gov; Blattnig, Steve R., E-mail: Steve.R.Blattnig@nasa.gov; Tweed, John, E-mail: jtweed@odu.edu

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm² exposed to both solar particle event and galactic cosmic ray environments.
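
    A generic illustration (not HZETRN's actual algorithm) of the step-size effect the abstract discusses: marching simple exponential attenuation with a finite step overestimates the transmitted flux, and refining the grid shrinks the error, mirroring why particles with residual ranges shorter than the step size are hard to resolve. The cross section and depth below are arbitrary.

```python
import math

def march(sigma, depth, steps):
    """Implicit upwind marching of d(phi)/dx = -sigma * phi from phi(0) = 1."""
    dx = depth / steps
    phi = 1.0
    for _ in range(steps):
        phi /= 1.0 + sigma * dx
    return phi

exact = math.exp(-2.0)                      # sigma = 1, depth = 2 mean free paths
for steps in (10, 20, 40):
    err = abs(march(1.0, 2.0, steps) - exact)
    print(steps, round(err, 5))             # error shrinks as the grid refines
```

    The error here falls roughly in half with each halving of the step, the first-order convergence typical of such marching schemes.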

  19. Review of finite fields: Applications to discrete Fourier transforms and Reed-Solomon coding

    NASA Technical Reports Server (NTRS)

    Wong, J. S. L.; Truong, T. K.; Benjauthrit, B.; Mulhall, B. D. L.; Reed, I. S.

    1977-01-01

    An attempt is made to provide a step-by-step approach to the subject of finite fields. Rigorous proofs and highly theoretical materials are avoided. The simple concepts of groups, rings, and fields are discussed and developed more or less heuristically. Examples are used liberally to illustrate the meaning of definitions and theories. Applications include discrete Fourier transforms and Reed-Solomon coding.
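
    The group/ring/field concepts the review develops heuristically can be illustrated with the small prime field GF(7). The generator g = 3 and the Fermat-inverse trick are standard facts, included here as an illustrative companion rather than material from the report.

```python
# Arithmetic in the prime field GF(7): every nonzero element is invertible,
# and a primitive element generates all nonzero elements, the property that
# Reed-Solomon codes exploit when evaluating polynomials at distinct points.
p = 7

def inv(a):
    """Multiplicative inverse via Fermat's little theorem: a**(p-2) mod p."""
    return pow(a, p - 2, p)

# every nonzero element has an inverse, so GF(7) is a field
assert all((a * inv(a)) % p == 1 for a in range(1, p))

g = 3  # a primitive element of GF(7)
powers = sorted({pow(g, k, p) for k in range(1, p)})
print(powers)  # [1, 2, 3, 4, 5, 6]
```

    The same structure, built over GF(2^8) instead of a prime field, underlies the Reed-Solomon codes discussed in the report.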

  20. 77 FR 39731 - Swinomish Indian Tribal Community-Title 15, Chapter 4: Liquor Legalization, Regulation and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-05

    ... DEPARTMENT OF THE INTERIOR Bureau of Indian Affairs Swinomish Indian Tribal Community--Title 15.... ACTION: Notice. SUMMARY: This notice publishes Title 15, Chapter 4: Liquor Legalization, Regulation and... Indian Tribal Community Senate adopted Ordinance No. 296, Enacting Swinomish Tribal Code Title 15...

  1. High performance computation of radiative transfer equation using the finite element method

    NASA Astrophysics Data System (ADS)

    Badri, M. A.; Jolivet, P.; Rousseau, B.; Favennec, Y.

    2018-05-01

    This article deals with an efficient strategy for numerically simulating radiative transfer phenomena using distributed computing. The finite element method alongside the discrete ordinate method is used for spatio-angular discretization of the monochromatic steady-state radiative transfer equation in an anisotropically scattering medium. Two very different methods of parallelization, angular and spatial decomposition, are presented. To do so, the finite element method is used in a vectorial way. A detailed comparison of scalability, performance, and efficiency on thousands of processors is established for two- and three-dimensional heterogeneous test cases. Timings show that both algorithms scale well when using proper preconditioners. It is also observed that our angular decomposition scheme outperforms our domain decomposition method. Overall, we perform numerical simulations at scales that were previously unattainable by standard radiative transfer equation solvers.
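
    The angular-decomposition idea is easy to demonstrate at toy scale: each worker owns a subset of the quadrature directions, integrates its own partial scalar flux, and the parallel sum reproduces the serial quadrature exactly. The 16-direction set and the stand-in angular flux below are illustrative, not taken from the article.

```python
import numpy as np

mu, w = np.polynomial.legendre.leggauss(16)   # quadrature directions and weights
psi = np.exp(-np.abs(mu))                     # stand-in angular flux at one point

def partial_flux(idx):
    """One worker's contribution: quadrature over its own directions only."""
    return float(np.sum(w[idx] * psi[idx]))

workers = np.array_split(np.arange(mu.size), 4)   # 4 angular subdomains
parallel = sum(partial_flux(idx) for idx in workers)
serial = float(np.sum(w * psi))
print(abs(parallel - serial) < 1e-12)  # True: decomposition in angle is exact
```

    Because the angular integral splits into independent per-direction terms, angle decomposition needs communication only when accumulating the scattering source, one plausible reason it can outperform spatial domain decomposition.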

  2. 3D numerical modelling of the propagation of radiative intensity through a X-ray tomographied ligament

    NASA Astrophysics Data System (ADS)

    Le Hardy, David; Badri, Mohd Afeef; Rousseau, Benoit; Chupin, Sylvain; Rochais, Denis; Favennec, Yann

    2017-06-01

    In order to explain the macroscopic radiative behaviour of an open-cell ceramic foam, knowledge of its solid phase distribution in space and of the radiative contributions of this solid phase is required. The solid phase in an open-cell ceramic foam is arranged as a porous skeleton, which is itself composed of an interconnected network of ligaments. Typically, ligaments, which consist of more or less compacted grains, exhibit an anisotropic geometry with a concave cross section having a lateral size of one hundred microns. Ligaments are therefore likely to emit, absorb, and scatter thermal radiation. This framework explains why experimental investigations at this scale must be developed to extract accurate homogenized radiative properties regardless of the shape and size of the ligaments. To support this development, a 3D numerical investigation of radiative intensity propagation through a real-world ligament, beforehand scanned by X-ray micro-tomography, is presented in this paper. The Radiative Transfer Equation (RTE), applied to the resulting meshed volume, is solved by combining the Discrete Ordinate Method (DOM) and a Streamline Upwind Petrov-Galerkin (SUPG) numerical scheme. Particular attention is paid to proposing an improved discretization procedure (spatial and angular) based on ordinate parallelization, with the aim of reaching fast convergence. Towards the end of this article, we present the effects of the local radiative properties of three ceramic materials (silicon carbide, alumina and zirconia), which are often used for designing open-cell refractory ceramic foams.

  3. Results of a Neutronic Simulation of HTR-Proteus Core 4.2 using PEBBED and other INL Reactor Physics Tools: FY-09 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hans D. Gougar

    The Idaho National Laboratory’s deterministic neutronics analysis codes and methods were applied to the computation of the core multiplication factor of the HTR-Proteus pebble bed reactor critical facility. A combination of unit cell calculations (COMBINE-PEBDAN), 1-D discrete ordinates transport (SCAMP), and nodal diffusion calculations (PEBBED) were employed to yield keff and flux profiles. Preliminary results indicate that these tools, as currently configured and used, do not yield satisfactory estimates of keff. If control rods are not modeled, these methods deliver much better agreement with experimental core eigenvalues, which suggests that development efforts should focus on modeling control rod and other absorber regions. Under some assumptions and in 1D subcore analyses, diffusion theory agrees well with transport. This suggests that developments in specific areas can produce a viable core simulation approach. Some corrections have been identified and can be further developed, specifically: treatment of the upper void region, treatment of inter-pebble streaming, and explicit (multiscale) transport modeling of TRISO fuel particles as a first step in cross section generation. Until corrections are made that yield better agreement with experiment, conclusions from core design and burnup analyses should be regarded as qualitative and not of benchmark quality.

  4. Estimation of median human lethal radiation dose computed from data on occupants of reinforced concrete structures in Nagasaki, Japan.

    PubMed

    Levin, S G; Young, R W; Stohler, R L

    1992-11-01

    This paper presents an estimate of the median lethal dose for humans exposed to total-body irradiation and not subsequently treated for radiation sickness. The median lethal dose was estimated from calculated doses to young adults who were inside two reinforced concrete buildings that remained standing in Nagasaki after the atomic detonation. The individuals in this study, none of whom had previously had their doses calculated, were identified from a detailed survey done previously. Radiation dose to the bone marrow, which was taken as the critical radiation site, was calculated for each individual by the Engineering Physics and Mathematics Division of the Oak Ridge National Laboratory using a new three-dimensional discrete-ordinates radiation transport code that was developed and validated for this study using the latest site geometry, radiation yield, and spectra data. The study cohort consisted of 75 individuals who either survived > 60 d or died between the second and 60th d postirradiation due to radiation injury, without burns or other serious injury. Median lethal dose estimates were calculated using both logarithmic (2.9 Gy) and linear (3.4 Gy) dose scales. Both calculations, which met statistical validity tests, support previous estimates of the median lethal dose based solely on human data, which cluster around 3 Gy.

  5. Extension of the Bgl Broad Group Cross Section Library

    NASA Astrophysics Data System (ADS)

    Kirilova, Desislava; Belousov, Sergey; Ilieva, Krassimira

    2009-08-01

    The broad-group cross-section libraries BUGLE and BGL are applied in reactor shielding calculations using the DOORS package, which is based on the discrete ordinates method and a multigroup approximation of the neutron cross sections. The BUGLE and BGL libraries are problem-oriented for PWR and VVER reactor types, respectively. They were generated by collapsing the problem-independent fine-group library VITAMIN-B6, applying one-dimensional PWR and VVER radial models of the reactor middle plane using the SCALE software package. The surveillance assemblies (SA) of the VVER-1000/320 are located on the baffle above the reactor core upper edge, in a region whose geometry and materials differ from those of the middle plane and where the neutron field gradient is very high, resulting in a different neutron spectrum. The application of the aforementioned libraries for neutron fluence calculation in the SA region could therefore introduce additional inaccuracy. This was the main reason to study whether the BGL library should be extended with cross sections appropriate for the SA region. A comparative analysis of the neutron spectra in the SA region, calculated with the VITAMIN-B6 and BGL libraries using the two-dimensional code DORT, has been performed to evaluate the applicability of BGL for SA calculations.
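
    The collapsing step described above amounts to flux-weighted averaging of fine-group cross sections over each broad group. A minimal sketch, with hypothetical group structure, cross sections, and weighting spectrum (not values from VITAMIN-B6 or BGL):

```python
# collapse fine-group cross sections to broad groups by flux weighting:
# sigma_B = sum_g(sigma_g * phi_g) / sum_g(phi_g), over fine groups g in broad group B
def collapse(sigma_fine, phi_fine, broad_edges):
    """broad_edges: list of (start, stop) index ranges into the fine-group arrays."""
    collapsed = []
    for start, stop in broad_edges:
        num = sum(s * p for s, p in zip(sigma_fine[start:stop], phi_fine[start:stop]))
        den = sum(phi_fine[start:stop])
        collapsed.append(num / den)
    return collapsed

sigma = [10.0, 8.0, 2.0, 1.0]   # fine-group cross sections (barns), hypothetical
phi   = [1.0, 3.0, 2.0, 2.0]    # weighting spectrum, hypothetical
print(collapse(sigma, phi, [(0, 2), (2, 4)]))  # -> [8.5, 1.5]
```

    The abstract's point is visible here: change the weighting spectrum phi (e.g. from a midplane spectrum to an SA-region spectrum) and the collapsed values change, which is exactly why one broad-group set may not fit both regions.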

  6. Development of a "Solar Patch" calculator to evaluate heliostat-field irradiance as a boundary condition in CFD models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalsa, Siri Sahib; Ho, Clifford Kuofei

    2010-04-01

    A rigorous computational fluid dynamics (CFD) approach to calculating temperature distributions, radiative and convective losses, and flow fields in a cavity receiver irradiated by a heliostat field is typically limited to the receiver domain alone for computational reasons. A CFD simulation cannot realistically yield a precise solution that includes the details within the vast domain of an entire heliostat field in addition to the detailed processes and features within a cavity receiver. Instead, the incoming field irradiance can be represented as a boundary condition on the receiver domain. This paper describes a program, the Solar Patch Calculator, written in Microsoft Excel VBA to characterize multiple beams emanating from a 'solar patch' located at the aperture of a cavity receiver, in order to represent the incoming irradiance from any field of heliostats as a boundary condition on the receiver domain. This program accounts for cosine losses; receiver location; heliostat reflectivity, areas and locations; field location; time of day and day of year. This paper also describes the implementation of the boundary conditions calculated by this program into a Discrete Ordinates radiation model using Ansys® FLUENT (www.fluent.com), and compares the results to experimental data and to results generated by the code DELSOL.
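
    Of the loss terms the calculator accounts for, the cosine loss is the simplest to illustrate in isolation. A hedged sketch with a hypothetical sun/receiver geometry, ignoring reflectivity, blocking, and shading:

```python
import math

def cosine_efficiency(sun_dir, receiver_dir):
    """Cosine loss factor for a heliostat: cos(theta), where theta is the angle
    between the incoming sun direction and the mirror normal. By the law of
    reflection, the mirror normal bisects the sun and receiver directions."""
    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)
    s, r = unit(sun_dir), unit(receiver_dir)
    normal = unit(tuple(a + b for a, b in zip(s, r)))
    return sum(a * b for a, b in zip(s, normal))

# hypothetical geometry: sun straight overhead, receiver direction horizontal,
# so the mirror normal is tilted 45 degrees away from the sun
eff = cosine_efficiency((0, 0, 1), (0, 1, 0))
```

    The effective mirror area seen by the sun scales with this factor, which is why heliostats far off the sun-receiver axis contribute less power per unit area.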

  7. Development of a "Solar Patch" calculator to evaluate heliostat-field irradiance as a boundary condition in CFD models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalsa, Siri Sahib S.; Ho, Clifford Kuofei

    2010-05-01

    A rigorous computational fluid dynamics (CFD) approach to calculating temperature distributions, radiative and convective losses, and flow fields in a cavity receiver irradiated by a heliostat field is typically limited to the receiver domain alone for computational reasons. A CFD simulation cannot realistically yield a precise solution that includes the details within the vast domain of an entire heliostat field in addition to the detailed processes and features within a cavity receiver. Instead, the incoming field irradiance can be represented as a boundary condition on the receiver domain. This paper describes a program, the Solar Patch Calculator, written in Microsoft Excel VBA to characterize multiple beams emanating from a 'solar patch' located at the aperture of a cavity receiver, in order to represent the incoming irradiance from any field of heliostats as a boundary condition on the receiver domain. This program accounts for cosine losses; receiver location; heliostat reflectivity, areas and locations; field location; time of day and day of year. This paper also describes the implementation of the boundary conditions calculated by this program into a Discrete Ordinates radiation model using Ansys® FLUENT (www.fluent.com), and compares the results to experimental data and to results generated by the code DELSOL.

  8. Correlating Fast Fluence to dpa in Atypical Locations

    NASA Astrophysics Data System (ADS)

    Drury, Thomas H.

    2016-02-01

    Damage to a nuclear reactor's materials by high-energy neutrons causes changes in the ductility and fracture toughness of those materials. The ability of the reactor vessel and its associated piping to withstand stress without brittle fracture is paramount to safety. Theoretically, the material damage is directly related to the displacements per atom (dpa) via the residual defects from induced displacements. In practice, however, the material damage is correlated to the high-energy (E > 1.0 MeV) neutron fluence. While the correlated approach is applicable when the material in question has experienced the same neutron spectrum as the test specimens on which the correlation is based, it is not generically acceptable. Using Monte Carlo and discrete ordinates transport codes, the energy-dependent neutron flux is determined throughout the reactor structures and the reactor vessel. Results from the models provide the dpa response in addition to the high-energy neutron flux. Ratios of dpa to fast fluence are calculated throughout the models. The comparisons show a constant ratio in the areas of historical concern and thus the validity of the correlated approach in these areas. In regions above and below the fuel, however, the flux spectrum changes significantly. The correlated relationship of material damage to fluence is not valid in these regions without adjustment. An adjustment mechanism is proposed.
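
    Why the dpa-to-fast-fluence ratio drifts with spectrum can be shown with a toy group-wise calculation; the 3-group fluxes and dpa cross sections below are illustrative numbers, not evaluated nuclear data.

```python
def dpa_to_fast_fluence_ratio(group_flux, group_dpa_xs, group_is_fast):
    """Ratio of the dpa response to the fast (E > 1 MeV) fluence for one spectrum.
    All inputs are per-group lists; values are illustrative only."""
    dpa = sum(f * s for f, s in zip(group_flux, group_dpa_xs))
    fast = sum(f for f, is_fast in zip(group_flux, group_is_fast) if is_fast)
    return dpa / fast

# hypothetical 3-group dpa cross section; only the first group is "fast"
xs = [2.0, 0.8, 0.1]
fast_mask = [True, False, False]

in_core    = dpa_to_fast_fluence_ratio([5.0, 3.0, 1.0], xs, fast_mask)  # fast-dominated
above_core = dpa_to_fast_fluence_ratio([1.0, 4.0, 3.0], xs, fast_mask)  # softened spectrum
```

    In the softened spectrum, intermediate-energy neutrons still produce displacements but no longer count toward the E > 1 MeV fluence, so the ratio rises, which is the abstract's argument for why the correlation needs adjustment above and below the fuel.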

  9. Advances in Engineering Software for Lift Transportation Systems

    NASA Astrophysics Data System (ADS)

    Kazakoff, Alexander Borisoff

    2012-03-01

    This paper presents an attempt at computer modelling of ropeway ski lift systems. These systems transport passengers between two terminals using high-capacity cabins, chairs, gondolas or draw-bars. The computer codes AUTOCAD, MATLAB and Compaq Visual Fortran 6.6 are used in the modelling, which is organized in two stages. The first stage comprises preparation of the ground relief profile and design of the lift system as a whole, according to the terrain profile and the climatic and atmospheric conditions. The ground profile is prepared by geodesists and presented as an AUTOCAD view; the lift itself is then designed by programs written in MATLAB. The second stage is performed after optimization of the co-ordinates and the lift profile in MATLAB: the co-ordinates and parameters are passed to a program written in Compaq Visual Fortran 6.6, which calculates 171 lift parameters, organized in 42 tables. The objective of the work is the computer modelling, derivation, variation and optimization of the design parameters of ropeway systems.

  10. Contrasting Five Different Theories of Letter Position Coding: Evidence from Orthographic Similarity Effects

    ERIC Educational Resources Information Center

    Davis, Colin J.; Bowers, Jeffrey S.

    2006-01-01

    Five theories of how letter position is coded are contrasted: position-specific slot-coding, Wickelcoding, open-bigram coding (discrete and continuous), and spatial coding. These theories make different predictions regarding the relative similarity of three different types of pairs of letter strings: substitution neighbors,…

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haghighat, A.; Sjoden, G.E.; Wagner, J.C.

    In the past 10 yr, the Penn State Transport Theory Group (PSTTG) has concentrated its efforts on developing accurate and efficient particle transport codes to address increasing needs for efficient and accurate simulation of nuclear systems. The PSTTG's efforts have primarily focused on shielding applications that are generally treated using multigroup, multidimensional, discrete ordinates (S{sub n}) deterministic and/or statistical Monte Carlo methods. The difficulty with the existing public codes is that they require significant (impractical) computation time for simulation of complex three-dimensional (3-D) problems. For the S{sub n} codes, the large memory requirements are handled through the use of scratch files (i.e., read-from and write-to-disk) that significantly increases the necessary execution time. Further, the lack of flexible features and/or utilities for preparing input and processing output makes these codes difficult to use. The Monte Carlo method becomes impractical because variance reduction (VR) methods have to be used, and normally determination of the necessary parameters for the VR methods is very difficult and time consuming for a complex 3-D problem. For the deterministic method, the authors have developed the 3-D parallel PENTRAN (Parallel Environment Neutral-particle TRANsport) code system that, in addition to a parallel 3-D S{sub n} solver, includes pre- and postprocessing utilities. PENTRAN provides for full phase-space decomposition, memory partitioning, and parallel input/output to provide the capability of solving large problems in a relatively short time. Besides having a modular parallel structure, PENTRAN has several unique new formulations and features that are necessary for achieving high parallel performance. For the Monte Carlo method, the major difficulty currently facing most users is the selection of an effective VR method and its associated parameters.
For complex problems, generally, this process is very time consuming and may be complicated due to the possibility of biasing the results. In an attempt to eliminate this problem, the authors have developed the A{sup 3}MCNP (automated adjoint accelerated MCNP) code that automatically prepares parameters for source and transport biasing within a weight-window VR approach based on the S{sub n} adjoint function. A{sup 3}MCNP prepares the necessary input files for performing multigroup, 3-D adjoint S{sub n} calculations using TORT.

  12. Results of the Simulation of the HTR-Proteus Core 4.2 Using PEBBED-COMBINE: FY10 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hans Gougar

    2010-07-01

    The Idaho National Laboratory’s deterministic neutronics analysis codes and methods were applied to the computation of the core multiplication factor of the HTR-Proteus pebble bed reactor critical facility. This report is a follow-on to INL/EXT-09-16620, in which the same calculation was performed using earlier versions of the codes and less developed methods. In that report, results indicated that the cross sections generated using COMBINE-7.0 did not yield satisfactory estimates of keff, and it was concluded that the modeling of control rods was not satisfactory. In the past year, improvements to the homogenization capability in COMBINE have enabled the explicit modeling of TRISO particles, pebbles, and heterogeneous core zones including control rod regions, using a new multi-scale version of COMBINE into which the one-dimensional discrete ordinates transport code ANISN has been integrated. The new COMBINE is shown to yield benchmark-quality results for pebble unit cell models, the first step in preparing few-group diffusion parameters for core simulations. In this report, the full critical core is modeled once again, but with cross sections generated using the capabilities and physics of the improved COMBINE code. The new PEBBED-COMBINE model enables exact modeling of the pebbles and control rod region, along with a better approximation of structures in the reflector. Initial results for the core multiplication factor indicate significant improvement in the INL's tools for modeling the neutronic properties of a pebble bed reactor. Errors on the order of 1.6-2.5% in keff are obtained, a significant improvement over the 5-6% error observed in the earlier report. This is acceptable for a code system and model in the early stages of development but still too high for a production code. Analysis of a simpler core model indicates an over-prediction of the flux in the low end of the thermal spectrum. Causes of this discrepancy are under investigation.
New homogenization techniques and assumptions were used in this analysis and, as such, they require further confirmation and validation. Further refinement and review of the complex Proteus core model are likely to reduce the errors even further.

  13. Inferring network structure in non-normal and mixed discrete-continuous genomic data.

    PubMed

    Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran

    2018-03-01

    Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. © 2017, The International Biometric Society.
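
    One standard way to cope with non-normal marginals before graph estimation is a rank-based normal-scores (nonparanormal-style) transform; the paper itself builds on Gaussian scale mixtures, so the sketch below is a simpler stand-in, assuming continuous data without ties.

```python
from statistics import NormalDist

def normal_scores(x):
    """Map a sample with an arbitrary continuous marginal (heavy tails, skew)
    to approximate standard-normal scores via ranks. This is a nonparanormal-
    style transform, shown only as an illustration; the paper's own machinery
    uses Gaussian scale mixtures instead."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    scores = [0.0] * n
    for rank, i in enumerate(order, start=1):
        # rank / (n + 1) keeps quantiles strictly inside (0, 1)
        scores[i] = NormalDist().inv_cdf(rank / (n + 1))
    return scores

# strongly right-skewed input; the transformed values are symmetric around zero
z = normal_scores([0.1, 1.0, 10.0, 100.0, 1000.0])
```

    After such a transform, Gaussian graphical model machinery applies to the scores, though (as the abstract notes) latent-variable constructions of this kind can obscure conditional independence among the originally observed variables.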

  14. Inferring network structure in non-normal and mixed discrete-continuous genomic data

    PubMed Central

    Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran

    2017-01-01

    Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. PMID:28437848

  15. Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.

    PubMed

    Rongrong Ji; Hong Liu; Liujuan Cao; Di Liu; Yongjian Wu; Feiyue Huang

    2017-11-01

    Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features into binary codes, the original Euclidean distance is approximated via the Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to preserve the manifold structure directly by hashing. In particular, existing methods first build a locally linear embedding in the original feature space and then quantize that embedding to binary codes. Such two-step coding is problematic and suboptimal. Moreover, the off-line learning is extremely time and memory consuming, since it requires calculating the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locally linear embedding hashing (DLLH), which addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationships among data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, show the superior performance of the proposed DLLH over state-of-the-art approaches.
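
    The payoff of mapping features to binary codes, which all hashing methods including DLLH share, is that the search metric reduces to an XOR plus a population count. A minimal sketch (independent of the DLLH algorithm itself), with hypothetical 8-bit codes:

```python
def hamming(code_a, code_b):
    """Hamming distance between two binary codes stored as integers:
    XOR the codes, then count the set bits."""
    return bin(code_a ^ code_b).count("1")

def nearest(query, database):
    """Linear scan for the database code closest to the query in Hamming space."""
    return min(database, key=lambda c: hamming(query, c))

# hypothetical 8-bit codes standing in for learned hash codes
codes = [0b10011100, 0b01100011, 0b11110000]
best = nearest(0b10110100, codes)
```

    Each comparison is a couple of integer operations instead of a float dot product, which is what makes billion-scale search feasible once good codes are learned.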

  16. Coding for Single-Line Transmission

    NASA Technical Reports Server (NTRS)

    Madison, L. G.

    1983-01-01

    Digital transmission code combines data and clock signals into single waveform. MADCODE needs four standard integrated circuits in generator and converter plus five small discrete components. MADCODE allows simple coding and decoding for transmission of digital signals over single line.
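
    The record does not give MADCODE's waveform rules, so as a hypothetical illustration of the general idea (combining data and clock in a single waveform), here is classic Manchester coding, in which every bit cell carries a mid-cell transition the receiver can recover the clock from:

```python
def manchester_encode(bits):
    """Each data bit becomes a pair of half-cell levels with a guaranteed
    mid-cell transition (IEEE 802.3 convention: 0 -> high-low, 1 -> low-high).
    Illustrative only; MADCODE's actual waveform is not specified in the record."""
    half_symbols = []
    for b in bits:
        half_symbols += [0, 1] if b else [1, 0]
    return half_symbols

def manchester_decode(half_symbols):
    """The first half of each cell determines the bit; the transition itself
    gives the receiver its clock reference."""
    return [1 if half_symbols[i] == 0 else 0
            for i in range(0, len(half_symbols), 2)]
```

    The cost of self-clocking schemes like this is bandwidth: two line symbols per data bit.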

  17. Goal Attainment Scaling as an Outcome Measure in Randomized Controlled Trials of Psychosocial Interventions in Autism

    ERIC Educational Resources Information Center

    Ruble, Lisa; McGrew, John H.; Toland, Michael D.

    2012-01-01

    Goal attainment scaling (GAS) holds promise as an idiographic approach for measuring outcomes of psychosocial interventions in community settings. GAS has been criticized for untested assumptions of scaling level (i.e., interval or ordinal), inter-individual equivalence and comparability, and reliability of coding across different behavioral…

  18. 75 FR 39960 - Alcoholic Beverage Control Ordinance, Salt River Pima-Maricopa Indian Community (SRPMIC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-13

    ... beverages at certain restaurants within the community. DATES: Effective Date: This Code is effective as of... Initiative Vote of the People Regarding the Sale of Alcoholic Beverages at Certain Restaurants Within the... Premises, regardless of whether the sales of Alcoholic Beverages are made under a Restaurant License issued...

  19. Project Fever - Fostering Electric Vehicle Expansion in the Rockies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swalnick, Natalia

    2013-06-30

    Project FEVER (Fostering Electric Vehicle Expansion in the Rockies) is a part of the Clean Cities Community Readiness and Planning for Plug-in Electric Vehicles and Charging Infrastructure Funding Opportunity funded by the U.S. Department of Energy (DOE) for the state of Colorado. Tasks undertaken in this project include: Electric Vehicle Grid Impact Assessment; Assessment of Electrical Permitting and Inspection for EV/EVSE (electric vehicle/electric vehicle supply equipment); Assessment of Local Ordinances Pertaining to Installation of Publicly Available EVSE; Assessment of Building Codes for EVSE; EV Demand and Energy/Air Quality Impacts Assessment; State and Local Policy Assessment; EV Grid Impact Minimization Efforts; Unification and Streamlining of Electrical Permitting and Inspection for EV/EVSE; Development of BMP for Local EVSE Ordinances; Development of BMP for Building Codes Pertaining to EVSE; Development of Colorado-Specific Assessment for EV/EVSE Energy/Air Quality Impacts; Development of State and Local Policy Best Practices; Create Final EV/EVSE Readiness Plan; Develop Project Marketing and Communications Elements; Plan and Schedule In-person Education and Outreach Opportunities.

  20. An Efficient Variable Length Coding Scheme for an IID Source

    NASA Technical Reports Server (NTRS)

    Cheung, K. -M.

    1995-01-01

    A scheme is examined for using two alternating Huffman codes to encode a discrete independent and identically distributed source with a dominant symbol. This combined strategy, or alternating runlength Huffman (ARH) coding, was found to be more efficient than ordinary coding in certain circumstances.
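
    A hedged sketch of the ingredients (run-length extraction of the dominant symbol, then Huffman coding of the run lengths) rather than the exact two-code ARH scheme; the source string and the single-table simplification are assumptions for illustration.

```python
import heapq
from collections import Counter

def run_lengths(symbols, dominant):
    """(run length of dominant symbol, following non-dominant symbol) pairs."""
    runs, count = [], 0
    for s in symbols:
        if s == dominant:
            count += 1
        else:
            runs.append((count, s))
            count = 0
    return runs

def huffman_code(freqs):
    """Map symbol -> bitstring via the classic Huffman merge (needs >= 2 symbols)."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

data = "aaabaaaacaab"  # 'a' is the dominant symbol, hypothetical source
runs = run_lengths(data, "a")
code = huffman_code(Counter(length for length, _ in runs))
```

    In a source with one dominant symbol, long runs are common, so coding run lengths instead of individual symbols concentrates the probability mass, which is the effect the ARH scheme exploits with its alternating code pair.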

  1. An S N Algorithm for Modern Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Randal Scott

    2016-08-29

    LANL discrete ordinates transport packages are required to perform large, computationally intensive time-dependent calculations on massively parallel architectures, where even a single such calculation may need many months to complete. While KBA methods scale out well to very large numbers of compute nodes, we are limited by practical constraints on the number of such nodes we can actually apply to any given calculation. Instead, we describe a modified KBA algorithm that allows realization of the reductions in solution time offered by both the current, and future, architectural changes within a compute node.
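
    The dependence structure KBA exploits can be seen even in a serial sweep: with a step (fully upwind) discretization, each cell needs only its west and south neighbors, so all cells on a diagonal wavefront are independent and can be pipelined across processors. A minimal single-ordinate sketch with a hypothetical cross section and mesh (the parallel pipelining itself is omitted):

```python
def sweep(nx, ny, sigma, dx, dy, mu, eta, inflow=1.0):
    """Serial 2-D transport sweep for one ordinate with mu, eta > 0, using a
    step (fully upwind) discretization of mu dpsi/dx + eta dpsi/dy + sigma psi = 0.
    The i+j = const diagonals are the independent wavefronts KBA parallelizes."""
    psi = [[0.0] * ny for _ in range(nx)]
    for i in range(nx):
        for j in range(ny):
            psi_w = psi[i - 1][j] if i > 0 else inflow  # west neighbor or boundary
            psi_s = psi[i][j - 1] if j > 0 else inflow  # south neighbor or boundary
            psi[i][j] = (mu / dx * psi_w + eta / dy * psi_s) / (mu / dx + eta / dy + sigma)
    return psi

field = sweep(nx=20, ny=20, sigma=1.0, dx=0.1, dy=0.1, mu=0.6, eta=0.8)
```

    One sweep per ordinate per scattering iteration is the basic work unit; the node count a calculation can use is bounded by how many wavefront cells exist to pipeline, which is the scaling constraint the abstract refers to.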

  2. Utilization of the Discrete Differential Evolution for Optimization in Multidimensional Point Clouds.

    PubMed

    Uher, Vojtěch; Gajdoš, Petr; Radecký, Michal; Snášel, Václav

    2016-01-01

    The Differential Evolution (DE) is a widely used bioinspired optimization algorithm developed by Storn and Price. It is popular for its simplicity and robustness. This algorithm was primarily designed for real-valued problems and continuous functions, but several modified versions optimizing both integer and discrete-valued problems have been developed. The discrete-coded DE has been mostly used for combinatorial problems in a set of enumerative variants. However, the DE has a great potential in the spatial data analysis and pattern recognition. This paper formulates the problem as a search of a combination of distinct vertices which meet the specified conditions. It proposes a novel approach called the Multidimensional Discrete Differential Evolution (MDDE) applying the principle of the discrete-coded DE in discrete point clouds (PCs). The paper examines the local searching abilities of the MDDE and its convergence to the global optimum in the PCs. The multidimensional discrete vertices cannot be simply ordered to get a convenient course of the discrete data, which is crucial for good convergence of a population. A novel mutation operator utilizing linear ordering of spatial data based on the space filling curves is introduced. The algorithm is tested on several spatial datasets and optimization problems. The experiments show that the MDDE is an efficient and fast method for discrete optimizations in the multidimensional point clouds.
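
    One concrete choice of space-filling-curve linearization is the Morton (Z-order) curve; the abstract does not say which curve family the MDDE mutation operator uses, so the sketch below is an illustrative stand-in for 2-D integer coordinates.

```python
def morton_key(x, y, bits=16):
    """Interleave the bits of non-negative integer coordinates (Z-order /
    Morton curve). Sorting by this key gives a 1-D ordering of 2-D points
    that tends to keep spatial neighbors close together."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b) | ((y >> b) & 1) << (2 * b + 1)
    return key

points = [(3, 1), (0, 0), (1, 3), (2, 2)]
ordered = sorted(points, key=lambda p: morton_key(*p))
```

    Once the point cloud has such a linear order, a discrete mutation can perturb an index along the curve and still land on a spatially nearby vertex, which is what makes the population converge in discrete space.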

  3. Utilization of the Discrete Differential Evolution for Optimization in Multidimensional Point Clouds

    PubMed Central

    Radecký, Michal; Snášel, Václav

    2016-01-01

    The Differential Evolution (DE) is a widely used bioinspired optimization algorithm developed by Storn and Price. It is popular for its simplicity and robustness. This algorithm was primarily designed for real-valued problems and continuous functions, but several modified versions optimizing both integer and discrete-valued problems have been developed. The discrete-coded DE has been mostly used for combinatorial problems in a set of enumerative variants. However, the DE has a great potential in the spatial data analysis and pattern recognition. This paper formulates the problem as a search of a combination of distinct vertices which meet the specified conditions. It proposes a novel approach called the Multidimensional Discrete Differential Evolution (MDDE) applying the principle of the discrete-coded DE in discrete point clouds (PCs). The paper examines the local searching abilities of the MDDE and its convergence to the global optimum in the PCs. The multidimensional discrete vertices cannot be simply ordered to get a convenient course of the discrete data, which is crucial for good convergence of a population. A novel mutation operator utilizing linear ordering of spatial data based on the space filling curves is introduced. The algorithm is tested on several spatial datasets and optimization problems. The experiments show that the MDDE is an efficient and fast method for discrete optimizations in the multidimensional point clouds. PMID:27974884

  4. Discrete Cosine Transform Image Coding With Sliding Block Codes

    NASA Astrophysics Data System (ADS)

    Divakaran, Ajay; Pearlman, William A.

    1989-11-01

    A transform trellis coding scheme for images is presented. A two-dimensional discrete cosine transform is applied to the image, followed by a search on a trellis-structured code. This code is a sliding block code that utilizes a constrained-size reproduction alphabet. The transform coding divides the image into blocks. The non-stationarity of the image is counteracted by grouping these blocks into clusters through a clustering algorithm and then encoding the clusters separately. Mandela-ordered sequences are formed from each cluster, i.e., identically indexed coefficients from each block are grouped together to form one-dimensional sequences. A separate search ensues on each of these Mandela-ordered sequences. Padding sequences are used to improve the trellis search fidelity; they absorb the error caused by the building up of the trellis to full size. The simulations were carried out on a 256x256 image ('LENA'). The results are comparable to those of existing schemes, and the visual quality of the image is enhanced considerably by the padding and clustering.
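
    The Mandela ordering described above (grouping identically indexed coefficients across blocks) can be sketched directly; the 2x2 "blocks" below are hypothetical values standing in for DCT output.

```python
def mandela_order(blocks):
    """Regroup square transform blocks: collect the identically indexed
    coefficient from every block into one 1-D sequence per coefficient index."""
    n = len(blocks[0])  # block side length
    return {(i, j): [blk[i][j] for blk in blocks]
            for i in range(n) for j in range(n)}

# two 2x2 "transform blocks" standing in for 2-D DCT output, hypothetical values
blocks = [[[10, 2], [3, 1]],
          [[12, 1], [4, 0]]]
seqs = mandela_order(blocks)
```

    Each resulting sequence collects coefficients of the same frequency (the (0,0) sequence holds all the DC terms), so its statistics are far more homogeneous than a raster scan of the blocks, which is what makes a separate trellis search per sequence effective.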

  5. Flexible Environments for Grand-Challenge Simulation in Climate Science

    NASA Astrophysics Data System (ADS)

    Pierrehumbert, R.; Tobis, M.; Lin, J.; Dieterich, C.; Caballero, R.

    2004-12-01

    Current climate models are monolithic codes, generally in Fortran, aimed at high-performance simulation of the modern climate. Though they adequately serve their designated purpose, they present major barriers to application in other problems. Tailoring them to paleoclimate or planetary simulations, for instance, takes months of work. Theoretical studies, where one may want to remove selected processes or break feedback loops, are similarly hindered. Further, current climate models are of little value in education, since the implementation of textbook concepts and equations in the code is obscured by technical detail. The Climate Systems Center at the University of Chicago seeks to overcome these limitations by bringing modern object-oriented design into the business of climate modeling. Our ultimate goal is to produce an end-to-end modeling environment capable of configuring anything from a simple single-column radiative-convective model to a full 3-D coupled climate model using a uniform, flexible interface. Technically, the modeling environment is implemented as a Python-based software component toolkit: key number-crunching procedures are implemented as discrete, compiled-language components 'glued' together and co-ordinated by Python, combining the high performance of compiled languages with the flexibility and extensibility of Python. We are incrementally working towards this final objective following a series of distinct, complementary lines.
We will present an overview of these activities, including PyOM, a Python-based finite-difference ocean model allowing run-time selection of different Arakawa grids and physical parameterizations; CliMT, an atmospheric modeling toolkit providing a library of 'legacy' radiative, convective and dynamical modules which can be knitted into dynamical models, and PyCCSM, a version of NCAR's Community Climate System Model in which the coupler and run-control architecture are re-implemented in Python, augmenting its flexibility and adaptability.

  6. Orestes Kinetics Model for the Electra KrF Laser

    NASA Astrophysics Data System (ADS)

    Giuliani, J. L.; Kepple, P.; Lehmberg, R. H.; Myers, M. C.; Sethian, J. D.; Petrov, G.; Wolford, M.; Hegeler, F.

    2003-10-01

    Orestes is a first-principles simulation code for the electron deposition, plasma chemistry, laser transport, and amplified spontaneous emission (ASE) in an e-beam-pumped KrF laser. Orestes has been benchmarked against results from Nike at NRL and the Keio laser facility. The modeling tasks are to support ongoing oscillator experiments on the Electra laser (~500 J), to predict the performance of Electra as an amplifier, and to develop scaling relations for larger systems such as those envisioned for an inertial fusion energy power plant. In Orestes the energy deposition of the primary beam electrons is assumed to be spatially uniform, but the excitation and ionization of the Ar/Kr/F2 target gas by the secondary electrons is determined from the energy distribution function as calculated by a Boltzmann code. The subsequent plasma kinetics of 23 species subject to over 100 reactions is followed with 1-D spatial resolution along the lasing axis. In addition, vibrational relaxation among excited electronic states of the KrF molecule is included in the kinetics, since lasing at 248 nm can occur from several vibrational lines of the B state. Transport of the lasing photons is solved by the method of characteristics. The time-dependent ASE is calculated in 3-D using a ``local look-back'' scheme with discrete ordinates and includes specular reflection off the side walls and rear mirror. Gain narrowing is treated by multi-frequency transport of the ASE. Calculations of the gain, saturation intensity, extraction efficiency, and laser output from the Orestes model will be presented and compared with available data from Electra operated as an oscillator. Potential implications of the difference in optimal F2 concentration will be discussed, along with the effects of window transmissivity at 248 nm.

  7. Numerical uncertainty in computational engineering and physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemez, Francois M

    2009-01-01

    Obtaining a solution that approximates ordinary or partial differential equations on a computational mesh or grid does not necessarily mean that the solution is accurate or even 'correct'. Unfortunately, assessing the quality of discrete solutions by questioning the role played by spatial and temporal discretizations generally comes as a distant third to test-analysis comparison and model calibration. This publication is intended to raise awareness of the fact that discrete solutions introduce numerical uncertainty. This uncertainty may, in some cases, overwhelm in complexity and magnitude other sources of uncertainty that include experimental variability, parametric uncertainty and modeling assumptions. The concepts of consistency, convergence and truncation error are overviewed to explain the articulation between the exact solution of continuous equations, the solution of modified equations and discrete solutions computed by a code. The current state-of-the-practice of code and solution verification activities is discussed. An example in the discipline of hydrodynamics illustrates the significant effect that meshing can have on the quality of code predictions. A simple method is proposed to derive bounds of solution uncertainty in cases where the exact solution of the continuous equations, or its modified equations, is unknown. It is argued that numerical uncertainty originating from mesh discretization should always be quantified and accounted for in the overall uncertainty 'budget' that supports decision-making for applications in computational physics and engineering.
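    The kind of bound described above is commonly derived by Richardson extrapolation from solutions on successively refined grids. The sketch below is a generic illustration of that idea, not necessarily the specific method proposed in the report; the three solution values are invented and chosen to converge at roughly second order.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of convergence p for a constant grid refinement ratio r."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def error_estimate(f_medium, f_fine, r, p):
    """Richardson estimate of the remaining fine-grid discretization error."""
    return (f_fine - f_medium) / (r**p - 1.0)

# invented solution values of some output functional on grids of spacing h, h/2, h/4
f = [0.9700, 0.9925, 0.9981]
p = observed_order(f[0], f[1], f[2], r=2)
err = error_estimate(f[1], f[2], r=2, p=p)
print(f"observed order ~ {p:.2f}, fine-grid error estimate ~ {err:.4f}")
```

The estimated error band (fine-grid value ± err) is one way to fold mesh discretization into an uncertainty budget alongside experimental and parametric sources.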

  8. Collaborative research and action to control the geographic placement of outdoor advertising of alcohol and tobacco products in Chicago.

    PubMed

    Hackbarth, D P; Schnopp-Wyatt, D; Katz, D; Williams, J; Silvestri, B; Pfleger, M

    2001-01-01

    Community activists in Chicago believed their neighborhoods were being targeted by alcohol and tobacco outdoor advertisers, despite the Outdoor Advertising Association of America's voluntary code of principles, which claims to restrict the placement of ads for age-restricted products and prevent billboard saturation of urban neighborhoods. A research and action plan resulted from a 10-year collaborative partnership among Loyola University Chicago, the American Lung Association of Metropolitan Chicago (ALAMC), and community activists from a predominantly African American church, St. Sabina Parish. In 1997 Loyola University and ALAMC researchers conducted a cross-sectional prevalence survey of alcohol and tobacco outdoor advertising. Computer mapping was used to locate all 4,247 licensed billboards in Chicago that were within 500- and 1,000-foot radii of schools, parks, and playlots. A 50% sample of billboards was visually surveyed and coded for advertising content. The percentage of alcohol and tobacco billboards within the 500- and 1,000-foot zones ranged from 0% to 54%. African American and Hispanic neighborhoods were disproportionately targeted for outdoor advertising of alcohol and tobacco. Data were used to convince the Chicago City Council to pass one of the nation's toughest anti-alcohol and tobacco billboard ordinances, based on zoning rather than advertising content. The ordinance was challenged in court by advertisers. Recent Supreme Court rulings made enactment of local billboard ordinances problematic. Nevertheless, the research, which resulted in specific legislative action, demonstrated the importance of linkages among academic, practice, and grassroots community groups in working together to diminish one of the social causes of health disparities.
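    The core geometric operation in the mapping study above is a buffer test: is a billboard within a fixed radius of a sensitive site? A minimal sketch, assuming coordinates in a projected system with units of feet and using invented data points:

```python
import math

def within_radius(billboard, site, radius_ft):
    """True if the billboard lies within radius_ft of the site (planar distance)."""
    dx = billboard[0] - site[0]
    dy = billboard[1] - site[1]
    return math.hypot(dx, dy) <= radius_ft

# hypothetical billboard coordinates (feet) relative to one school at the origin
billboards = [(120, 300), (900, 40), (2000, 2000)]
school = (0, 0)

for r in (500, 1000):
    n = sum(within_radius(b, school, r) for b in billboards)
    print(f"{n} billboard(s) within {r} ft of the school")
```

Repeating this test over all sites and billboards, then cross-tabulating with coded ad content, yields the percentage-by-zone figures reported in the study.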

  9. General Purpose Fortran Program for Discrete-Ordinate-Method Radiative Transfer in Scattering and Emitting Layered Media: An Update of DISORT

    NASA Technical Reports Server (NTRS)

    Tsay, Si-Chee; Stamnes, Knut; Wiscombe, Warren; Laszlo, Istvan; Einaudi, Franco (Technical Monitor)

    2000-01-01

    This update reports a state-of-the-art discrete ordinate algorithm for monochromatic unpolarized radiative transfer in non-isothermal, vertically inhomogeneous, but horizontally homogeneous media. The physical processes included are Planckian thermal emission, scattering with arbitrary phase function, absorption, and surface bidirectional reflection. The system may be driven by parallel or isotropic diffuse radiation incident at the top boundary, as well as by internal thermal sources and thermal emission from the boundaries. Radiances, fluxes, and mean intensities are returned at user-specified angles and levels. DISORT has enjoyed considerable popularity in the atmospheric science and other communities since its introduction in 1988. Several new DISORT features are described in this update: intensity correction algorithms designed to compensate for the δ-M forward-peak scaling and obtain accurate intensities even in low orders of approximation; a more general surface bidirectional reflection option; and an exponential-linear approximation of the Planck function allowing more accurate solutions in the presence of large temperature gradients. DISORT has been designed to be an exemplar of good scientific software as well as a program of intrinsic utility. An extraordinary effort has been made to make it numerically well-conditioned, error-resistant, and user-friendly, and to take advantage of robust existing software tools. A thorough test suite is provided to verify the program both against published results, and for consistency where there are no published results. This careful attention to software design has been just as important in DISORT's popularity as its powerful algorithmic content.
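    The essence of the discrete-ordinate approach is to replace the angular integral of the radiance by a quadrature sum over a finite set of "streams". The toy sketch below checks that a two-stream Gauss-Legendre quadrature recovers the exact hemispheric flux F = πI for an isotropic radiance I; DISORT's actual quadrature, scaling, and boundary treatment are far more elaborate.

```python
import math

# two-point Gauss-Legendre nodes and weights on [-1, 1]
nodes = [-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0)]
weights = [1.0, 1.0]

def hemispheric_flux(intensity_fn):
    """F = 2*pi * integral_0^1 I(mu) * mu dmu, by quadrature over discrete streams."""
    total = 0.0
    for x, w in zip(nodes, weights):
        mu = 0.5 * (x + 1.0)          # map node to mu in (0, 1)
        total += 0.5 * w * mu * intensity_fn(mu)
    return 2.0 * math.pi * total

I0 = 1.0
F = hemispheric_flux(lambda mu: I0)   # isotropic radiance
print(F, math.pi * I0)                # the quadrature is exact here: F = pi * I0
```

Because the integrand I·μ is linear in μ for isotropic radiance, even two streams integrate it exactly; anisotropic phase functions are what drive the need for higher stream counts.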

  10. Proximity Analysis and the Structure of Organization in Free Recall.

    ERIC Educational Resources Information Center

    Friendly, Michael L.

    A method for assessing the structure of organization was developed on the basis of the ordinal separation, or proximity, between pairs of items in recall protocols over a series of trials. The proximity measure is based on the assumption, common to all indices of organization, that items which are coded together in subjective memory units will…

  11. SPAMCART: a code for smoothed particle Monte Carlo radiative transfer

    NASA Astrophysics Data System (ADS)

    Lomax, O.; Whitworth, A. P.

    2016-10-01

    We present a code for generating synthetic spectral energy distributions and intensity maps from smoothed particle hydrodynamics simulation snapshots. The code is based on the Lucy Monte Carlo radiative transfer method, i.e., it follows discrete luminosity packets as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The sources can be extended and/or embedded, and discrete and/or diffuse. The density is not mapped on to a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Secondly, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.
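    A deliberately minimal picture of the packet propagation underlying the Lucy method: luminosity packets traverse a purely absorbing 1-D slab of optical depth τ, with free paths drawn from the exponential distribution, and the transmitted fraction approaches e^(-τ). The real code also handles scattering and uses the packet trajectories to compute dust temperatures; this toy is an illustration only.

```python
import math
import random

def transmitted_fraction(tau, n_packets=200_000, seed=1):
    """Monte Carlo estimate of the fraction of packets escaping a slab of optical depth tau."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_packets):
        # sample an optical depth from the exponential free-path distribution;
        # 1 - random() lies in (0, 1], avoiding log(0)
        if -math.log(1.0 - rng.random()) > tau:
            escaped += 1
    return escaped / n_packets

tau = 1.5
print(transmitted_fraction(tau), math.exp(-tau))   # MC estimate vs analytic value
```

The Monte Carlo estimate converges on the analytic value as 1/sqrt(n_packets), which is why such codes track very large packet counts.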

  12. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  13. Deterministic Modeling of the High Temperature Test Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortensi, J.; Cogliati, J. J.; Pope, M. A.

    2010-06-01

    Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability of the Next Generation Nuclear Power (NGNP) project. In order to examine INL’s current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19 column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green’s Function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and the nodal diffusion solver codes. The results from this study show a consistent bias of 2–3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the control rods were adjusted to maintain criticality, whereas in the model, the rod positions were fixed. In addition, this work includes a brief study of a cross section generation approach that seeks to decouple the domain in order to account for neighbor effects. This spectral interpenetration is a dominant effect in annular HTR physics. This analysis methodology should be further explored in order to reduce the error that is systematically propagated in the traditional generation of cross sections.

  14. What counts as a health service? Weight loss companies through the looking glass of New Zealand's Code of Patients' Rights.

    PubMed

    Neill, Megan J

    2013-03-01

    In New Zealand, the Code of Health and Disability Services Consumers' Rights is a key innovative piece of legislation for the protection of health and disability service users. It provides rights to consumers and imposes duties on the providers of such services, complemented by a cost-free statutory complaints process for the resolution of breakdowns in the relationship between the two. The Code has a potentially liberal application and is theoretically capable of applying to all manner of services through the generalised definitions of the Health and Disability Commissioner Act 1994 (NZ). As the facilitator of the Code, the Health and Disability Commissioner has a correspondingly wide discretion in determining whether to further investigate complaints of Code breaches. This article considers the extent to which the Code's apparent breadth of application could incorporate commercial weight loss companies as providers and the likelihood of the Commissioner using the discretion to investigate complaints against such companies.

  15. A note on the R₀-parameter for discrete memoryless channels

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.

    1980-01-01

    An explicit class of discrete memoryless channels (q-ary erasure channels) is exhibited. Practical and explicit coded systems of rate R with R/R₀ as large as desired can be designed for this class.
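    The claim can be made concrete with the standard cutoff-rate formula. For a q-ary erasure channel with erasure probability eps and uniform inputs, the cutoff rate works out to R₀ = log2(q / (1 + eps·(q-1))) bits per symbol, while the capacity is C = (1-eps)·log2(q); the ratio C/R₀ grows without bound as q increases, so rates far above R₀ are achievable. These are textbook formulas, computed here for illustration, not expressions taken from the note itself.

```python
import math

def cutoff_rate(q, eps):
    """Cutoff rate R0 of a q-ary erasure channel, uniform inputs, bits/symbol."""
    return math.log2(q / (1.0 + eps * (q - 1)))

def capacity(q, eps):
    """Capacity of a q-ary erasure channel, bits/symbol."""
    return (1.0 - eps) * math.log2(q)

eps = 0.5
for q in (2, 16, 256, 65536):
    print(q, capacity(q, eps) / cutoff_rate(q, eps))   # ratio grows with q
```

As q grows with eps fixed, R₀ saturates near -log2(eps) while C keeps growing like (1-eps)·log2(q), which is the mechanism behind the note's result.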

  16. Modulation and coding for a compatible Discrete Address Beacon System.

    DOT National Transportation Integrated Search

    1972-02-01

    One of several possible candidate configurations for the Discrete Address System is described. The configuration presented is compatible with the Air Traffic Control Radar Beacon System, and it provides for gradual transition from one system to the o...

  17. A High-Resolution Capability for Large-Eddy Simulation of Jet Flows

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2011-01-01

    A large-eddy simulation (LES) code that utilizes high-resolution numerical schemes is described and applied to a compressible jet flow. The code is written in a general manner such that the accuracy/resolution of the simulation can be selected by the user. Time discretization is performed using a family of low-dispersion Runge-Kutta schemes, selectable from first- to fourth-order. Spatial discretization is performed using central differencing schemes. Both standard schemes, from second- to twelfth-order (3- to 13-point stencils), and Dispersion Relation Preserving (DRP) schemes, with 7- to 13-point stencils, are available. The code is written in Fortran 90 and uses hybrid MPI/OpenMP parallelization. The code is applied to the simulation of a Mach 0.9 jet flow. Four-stage third-order Runge-Kutta time stepping and the 13-point DRP spatial discretization scheme of Bogey and Bailly are used. The high-resolution numerics used allow for the use of relatively sparse grids. Three levels of grid resolution are examined: 3.5, 6.5, and 9.2 million points. Mean flow, first-order turbulent statistics and turbulent spectra are reported. Good agreement with experimental data for mean flow and first-order turbulent statistics is shown.
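    The selectable-order central differencing described above can be illustrated with the two smallest standard stencils: a 3-point (second-order) and a 5-point (fourth-order) approximation of d/dx sin(x), with the step halved to confirm the expected error reduction. These are the generic textbook stencils, not the DRP coefficients of Bogey and Bailly.

```python
import math

def d1_3pt(f, x, h):
    """Standard 2nd-order central difference for f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d1_5pt(f, x, h):
    """Standard 4th-order central difference for f'(x)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12.0 * h)

f, x, exact = math.sin, 1.0, math.cos(1.0)
for name, stencil in (("3-point", d1_3pt), ("5-point", d1_5pt)):
    e1 = abs(stencil(f, x, 0.10) - exact)
    e2 = abs(stencil(f, x, 0.05) - exact)
    # halving h should reduce the error by 2^p for a scheme of order p
    print(name, "observed order:", round(math.log2(e1 / e2), 2))
```

Higher formal order is what lets codes like this one resolve turbulent spectra on relatively sparse grids; DRP schemes additionally tune the stencil coefficients for low dispersion error at marginally resolved wavenumbers.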

  18. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations and a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10³ cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.

  19. High Productivity Computing Systems Analysis and Performance

    DTIC Science & Technology

    2005-07-01

    cubic grid Discrete Math Global Updates per second (GUP/S) RandomAccess Paper & Pencil Contact Bob Lucas (ISI) Multiple Precision none...can be found at the web site. One of the HPCchallenge codes, RandomAccess, is derived from the HPCS discrete math benchmarks that we released, and...Kernels Discrete Math … Graph Analysis … Linear Solvers … Signal Processing Execution Bounds Execution Indicators 6 Scalable Compact

  20. Fire Suppression M and S Validation (Status and Challenges), Systems Fire Protection Information Exchange

    DTIC Science & Technology

    2015-10-14

    rate Kinetics •14 Species & 12 reactions Combustion Model •Participating Media Discrete Ordinate Method •WSG model for CO2, H2O and Soot Radiation Model...Inhibition of JP-8 Combustion Physical Acting Agents • Dilute heat • Dilute reactants Ex: water, nitrogen Chemical Acting Agents • Reduce flame...Release; distribution is unlimited 5 Overview of Reduced Kinetics Scheme for FM200 • R1: JP-8 + O2 => CO + CO2 + H2O • R2: CO + O2 <=> CO2 • R3: HFP + JP-8

  1. The LBM program at the EPFL/LOTUS Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    File, J.; Jassby, D.L.; Tsang, F.Y.

    1986-11-01

    An experimental program of neutron transport studies of the Lithium Blanket Module (LBM) is being carried out with the LOTUS point-neutron source facility at Ecole Polytechnique Federale de Lausanne (EPFL), Switzerland. Preliminary experiments use passive neutron dosimetry within the fuel rods in the LBM central zone, as well as both thermal extraction and dissolution methods to assay tritium bred in Li₂O diagnostic wafers and LBM pellets. These measurements are being compared and reconciled with each other and with the predictions of two-dimensional discrete-ordinates and continuous-energy Monte-Carlo analyses of the Lotus/LBM system.

  2. Community-level policy responses to state marijuana legalization in Washington State.

    PubMed

    Dilley, Julia A; Hitchcock, Laura; McGroder, Nancy; Greto, Lindsey A; Richardson, Susan M

    2017-04-01

    Washington State (WA) legalized a recreational marijuana market - including growing, processing and retail sales - through voter initiative 502 in November 2012. Legalized recreational marijuana retail sales began in July 2014. In response to state legalization of recreational marijuana, some cities and counties within the state have passed local ordinances that either further regulated marijuana markets, or banned them completely. The purpose of this study is to describe local-level marijuana regulations on recreational retail sales within the context of a state that had legalized a recreational marijuana market. Marijuana-related ordinances were collected from all 142 cities in the state with more than 3000 residents and from all 39 counties. Policies that were in place as of June 30, 2016 - two years after the state's recreational market opening - to regulate recreational marijuana retail sales within communities were systematically coded. A total of 125 cities and 30 counties had passed local ordinances to address recreational marijuana retail sales. Multiple communities implemented retail market bans, including some temporary bans (moratoria) while studying whether to pursue other policy options. As of June 30, 2016, 30% of the state population lived in places that had temporarily or permanently banned retail sales. Communities most frequently enacted zoning policies explicitly regulating where marijuana businesses could be established. Other policies included in ordinances placed limits on business hours and distance requirements (buffers) between marijuana businesses and youth-related land use types or other sensitive areas. State legalization does not necessarily result in uniform community environments that regulate recreational marijuana markets. Local ordinances vary among communities within Washington following statewide legalization. Further study is needed to describe how such local policies affect variation in public health and social outcomes. 

  3. Community-level policy responses to state marijuana legalization in Washington State

    PubMed Central

    Dilley, Julia A.; Hitchcock, Laura; McGroder, Nancy; Greto, Lindsey A.; Richardson, Susan M.

    2017-01-01

    Background Washington State (WA) legalized a recreational marijuana market -- including growing, processing and retail sales -- through voter initiative 502 in November 2012. Legalized recreational marijuana retail sales began in July 2014. In response to state legalization of recreational marijuana, some cities and counties within the state have passed local ordinances that either further regulated marijuana markets, or banned them completely. The purpose of this study is to describe local-level marijuana regulations on recreational retail sales within the context of a state that had legalized a recreational marijuana market. Methods Marijuana-related ordinances were collected from all 142 cities in the state with more than 3,000 residents and from all 39 counties. Policies that were in place as of June 30, 2016 - two years after the state’s recreational market opening - to regulate recreational marijuana retail sales within communities were systematically coded. Results A total of 125 cities and 30 counties had passed local ordinances to address recreational marijuana retail sales. Multiple communities implemented retail market bans, including some temporary bans (moratoria) while studying whether to pursue other policy options. As of June 30, 2016, 30% of the state population lived in places that had temporarily or permanently banned retail sales. Communities most frequently enacted zoning policies explicitly regulating where marijuana businesses could be established. Other policies included in ordinances placed limits on business hours and distance requirements (buffers) between marijuana businesses and youth-related land use types or other sensitive areas. Conclusions State legalization does not necessarily result in uniform community environments that regulate recreational marijuana markets. Local ordinances vary among communities within Washington following statewide legalization. 
Further study is needed to describe how such local policies affect variation in public health and social outcomes. PMID:28365192

  4. Viewing hybrid systems as products of control systems and automata

    NASA Technical Reports Server (NTRS)

    Grossman, R. L.; Larson, R. G.

    1992-01-01

    The purpose of this note is to show how hybrid systems may be modeled as products of nonlinear control systems and finite state automata. By a hybrid system, we mean a network consisting of continuous, nonlinear control systems connected to discrete, finite state automata. Our point of view is that the automaton switches between the control systems, and that this switching is a function of the discrete input symbols or letters that it receives. We show how a nonlinear control system may be viewed as a pair consisting of a bialgebra of operators coding the dynamics, and an algebra of observations coding the state space. We also show that a finite automaton has a similar representation. A hybrid system is then modeled by taking suitable products of the bialgebras coding the dynamics and the observation algebras coding the state spaces.
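    The "automaton switching between control systems" picture can be sketched with the classic thermostat hybrid system: a two-state automaton {ON, OFF} whose continuous dynamics change with the discrete state, and whose transitions fire on threshold-crossing events. This is an invented illustrative example, not a construction from the paper, and it sidesteps the paper's algebraic (bialgebra) formulation entirely.

```python
def simulate(T0=15.0, dt=0.01, steps=4000, low=18.0, high=22.0):
    """Simulate a thermostat hybrid system; returns final temperature and switch count."""
    T, mode, switches = T0, "ON", 0
    for _ in range(steps):
        # continuous flow associated with the current automaton state
        dT = (30.0 - T) if mode == "ON" else (10.0 - T)
        T += dt * dT
        # discrete transitions triggered by guard conditions (the "input letters")
        if mode == "ON" and T >= high:
            mode, switches = "OFF", switches + 1
        elif mode == "OFF" and T <= low:
            mode, switches = "ON", switches + 1
    return T, switches

T, n = simulate()
print(f"final temperature {T:.1f} after {n} mode switches")
```

The trajectory settles into a limit cycle between the two thresholds, with the automaton state and the continuous state evolving jointly, which is exactly the product structure the note formalizes.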

  5. Space-time adaptive solution of inverse problems with the discrete adjoint method

    NASA Astrophysics Data System (ADS)

    Alexe, Mihai; Sandu, Adrian

    2014-08-01

    This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time dependent, adaptive grid, adaptive step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The space-time adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in accuracy of the discrete adjoint sensitivities may appear due to the inter-grid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided through a multi-level optimization strategy using discontinuous Galerkin (DG) hp-adaptive discretizations paired with Runge-Kutta (RK) time integration. We extend the concept of dual (adjoint) consistency to space-time RK-DG discretizations, which are then shown to be well suited for the adaptive solution of time-dependent inverse problems. Furthermore, we prove that DG mesh transfer operators on general meshes are also dual consistent. This allows the simultaneous derivation of the discrete adjoint for both the numerical solver and the mesh transfer logic with an automatic code generation mechanism such as algorithmic differentiation (AD), potentially speeding up development of large-scale simulation codes. The theoretical analysis is supported by numerical results reported for a two-dimensional non-stationary inverse problem.
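    The discrete-adjoint principle itself fits in a few lines: for an explicit time-stepping map x_{k+1} = x_k + h·f(x_k), the adjoint recursion multiplies by the transpose of each step's Jacobian in reverse order, and the resulting gradient matches finite differences to machine-level accuracy. The scalar sketch below, with f(x) = -x² and objective J = x_N²/2, is a generic illustration of this principle, not the paper's DG-RK construction.

```python
def forward(x0, h, N):
    """Explicit-Euler trajectory for x' = -x^2."""
    xs = [x0]
    for _ in range(N):
        x = xs[-1]
        xs.append(x + h * (-x * x))
    return xs

def adjoint_gradient(x0, h, N):
    """dJ/dx0 for J = x_N^2 / 2 via the discrete adjoint recursion."""
    xs = forward(x0, h, N)
    lam = xs[-1]                                 # dJ/dx_N
    for k in range(N - 1, -1, -1):
        lam = lam * (1.0 + h * (-2.0 * xs[k]))   # transpose of the step Jacobian
    return lam

x0, h, N = 1.0, 0.01, 100
g = adjoint_gradient(x0, h, N)

# finite-difference check of the same gradient
eps = 1e-6
fd = (forward(x0 + eps, h, N)[-1]**2 / 2 - forward(x0 - eps, h, N)[-1]**2 / 2) / (2 * eps)
print(g, fd)   # the two gradients agree closely
```

Automatic differentiation tools generate exactly this reverse recursion mechanically, which is why discrete adjoints are cheap to obtain; the paper's concern is whether such adjoints remain consistent under hp-adaptivity and mesh transfer.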

  6. 76 FR 37034 - Certain Employee Remuneration in Excess of $1,000,000 Under Internal Revenue Code Section 162(m)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-24

    ... Certain Employee Remuneration in Excess of $1,000,000 Under Internal Revenue Code Section 162(m) AGENCY... remuneration in excess of $1,000,000 under the Internal Revenue Code (Code). The proposed regulations clarify... stock options, it is intended that the directors may retain discretion as to the exact number of options...

  7. Comparison of alternate scoring of variables on the performance of the frailty index

    PubMed Central

    2014-01-01

    Background The frailty index (FI) is used to measure the health status of ageing individuals. An FI is constructed as the proportion of deficits present in an individual out of the total number of age-related health variables considered. The purpose of this study was to systematically assess whether dichotomizing deficits included in an FI affects the information value of the whole index. Methods Secondary analysis of three population-based longitudinal studies of community dwelling individuals: Nova Scotia Health Survey (NSHS, n = 3227 aged 18+), Survey of Health, Ageing and Retirement in Europe (SHARE, n = 37546 aged 50+), and Yale Precipitating Events Project (Yale-PEP, n = 754 aged 70+). For each dataset, we constructed two FIs from baseline data using the deficit accumulation approach. In each dataset, both FIs included the same variables (23 in NSHS, 70 in SHARE, 33 in Yale-PEP). One FI was constructed with only dichotomous values (marking presence or absence of a deficit); in the other FI, as many variables as possible were coded as ordinal (graded severity of a deficit). Participants in each study were followed for different durations (NSHS: 10 years, SHARE: 5 years, Yale-PEP: 12 years). Results Within each dataset, the difference in mean scores between the ordinal and dichotomous-only FIs ranged from 0 to 1.5 deficits. Their ability to predict mortality was identical; their absolute difference in area under the ROC curve ranged from 0.00 to 0.02, and their absolute difference between Cox Hazard Ratios ranged from 0.001 to 0.009. Conclusions Analyses from three diverse datasets suggest that variables included in an FI can be coded either as dichotomous or ordinal, with negligible impact on the performance of the index in predicting mortality. PMID:24559204
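    The FI construction being compared is simple enough to show directly: the index is the sum of deficit scores divided by the number of variables considered, with deficits coded either dichotomously (0/1) or as graded severities in [0, 1]. The data below are invented for illustration.

```python
def frailty_index(deficits):
    """FI = proportion of (possibly graded) deficits out of variables considered."""
    return sum(deficits) / len(deficits)

# graded severities for one hypothetical person (0 = absent, 1 = fully present)
ordinal = [0.0, 0.5, 1.0, 0.25, 0.0, 0.5, 1.0, 0.0]
# the same deficits dichotomized at "any deficit present"
dichotomous = [1.0 if d > 0 else 0.0 for d in ordinal]

print(round(frailty_index(ordinal), 3))      # 0.406
print(round(frailty_index(dichotomous), 3))  # 0.625
```

As the example shows, dichotomization can shift an individual's score; the study's finding is that, averaged over cohorts, such shifts barely change the index's ability to predict mortality.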

  8. Peridynamics with LAMMPS : a user guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehoucq, Richard B.; Silling, Stewart Andrew; Plimpton, Steven James

    2008-01-01

    Peridynamics is a nonlocal formulation of continuum mechanics. The discrete peridynamic model has the same computational structure as a molecular dynamics model. This document details the implementation of a discrete peridynamic model within the LAMMPS molecular dynamics code. This document provides a brief overview of the peridynamic model of a continuum, then discusses how the peridynamic model is discretized, and overviews the LAMMPS implementation. A nontrivial example problem is also included.
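    The structural similarity to molecular dynamics can be seen in a minimal bond-based peridynamic force loop: each node interacts with every neighbor inside its horizon, via a pairwise force along the bond proportional to the bond stretch. This is a toy 1-D sketch with invented parameter names; the actual LAMMPS implementation is far more general.

```python
def pd_forces(x, x0, horizon, c):
    """Bond-based peridynamic forces on a 1-D chain of nodes.

    x  -- current positions, x0 -- reference positions,
    horizon -- interaction radius, c -- bond micromodulus (toy units).
    """
    n = len(x)
    f = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j or abs(x0[j] - x0[i]) > horizon:
                continue                         # outside the horizon: no bond
            xi = x0[j] - x0[i]                   # reference bond vector
            eta_xi = x[j] - x[i]                 # deformed bond vector
            stretch = (abs(eta_xi) - abs(xi)) / abs(xi)
            direction = 1.0 if eta_xi > 0 else -1.0
            f[i] += c * stretch * direction      # pairwise force along the bond
    return f

x0 = [0.0, 1.0, 2.0]
x = [0.0, 1.1, 2.2]                              # uniform 10% stretch
print(pd_forces(x, x0, horizon=2.5, c=1.0))      # ≈ [0.2, 0.0, -0.2]
```

The double loop over neighbor pairs within a cutoff is exactly the pattern of an MD pair-force kernel, which is why peridynamics slots naturally into an MD code's neighbor-list machinery.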

  9. Multi-Aperture Digital Coherent Combining for Free-Space Optical Communication Receivers

    DTIC Science & Technology

    2016-04-21

    Distribution A: Public Release; unlimited distribution 2016 Optical Society of America OCIS codes: (060.1660) Coherent communications; (070.2025) Discrete ...Coherent combining algorithm Multi-aperture coherent combining enables using many discrete apertures together to create a large effective aperture. A

  10. The Clemson University, University Research Initiative Program in Discrete Mathematics and Computational Analysis

    DTIC Science & Technology

    1990-03-01

    Assmus, E. F., and J. D. Key, "Affine and projective planes", to appear in Discrete Math (Special Coding Theory Issue). 5. Assmus, E. F. and J. D...S. Locke, ’The subchromatic number of a graph", Discrete Math . 74 (1989)33-49. 24. Hedetniemi, S. T., and T. V. Wimer, "K-terminal recursive families...34Designs and geometries with Cayley", submitted to Journal of Symbolic Computation. 34. Key, J. D., "Regular sets in geometries", Annals of Discrete Math . 37

  11. General phase spaces: from discrete variables to rotor and continuum limits

    NASA Astrophysics Data System (ADS)

    Albert, Victor V.; Pascazio, Saverio; Devoret, Michel H.

    2017-12-01

    We provide a basic introduction to discrete-variable, rotor, and continuous-variable quantum phase spaces, explaining how the latter two can be understood as limiting cases of the first. We extend the limit-taking procedures used to travel between phase spaces to a general class of Hamiltonians (including many local stabilizer codes) and provide six examples: the Harper equation, the Baxter parafermionic spin chain, the Rabi model, the Kitaev toric code, the Haah cubic code (which we generalize to qudits), and the Kitaev honeycomb model. We obtain continuous-variable generalizations of all models, some of which are novel. The Baxter model is mapped to a chain of coupled oscillators and the Rabi model to the optomechanical radiation pressure Hamiltonian. The procedures also yield rotor versions of all models, five of which are novel many-body extensions of the almost Mathieu equation. The toric and cubic codes are mapped to lattice models of rotors, with the toric code case related to U(1) lattice gauge theory.

  12. Groundwater flow and heat transport for systems undergoing freeze-thaw: Intercomparison of numerical simulators for 2D test cases

    NASA Astrophysics Data System (ADS)

    Grenier, Christophe; Anbergen, Hauke; Bense, Victor; Chanzy, Quentin; Coon, Ethan; Collier, Nathaniel; Costard, François; Ferry, Michel; Frampton, Andrew; Frederick, Jennifer; Gonçalvès, Julio; Holmén, Johann; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Mouche, Emmanuel; Orgogozo, Laurent; Pannetier, Romain; Rivière, Agnès; Roux, Nicolas; Rühaak, Wolfram; Scheidegger, Johanna; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik; Voss, Clifford

    2018-04-01

    In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. This issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.

  13. PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C

    2007-09-01

    The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.

  14. An efficient code for the simulation of nonhydrostatic stratified flow over obstacles

    NASA Technical Reports Server (NTRS)

    Pihos, G. G.; Wurtele, M. G.

    1981-01-01

    The physical model and computational procedure of the code is described in detail. The code is validated in tests against a variety of known analytical solutions from the literature and is also compared against actual mountain wave observations. The code will receive as initial input either mathematically idealized or discrete observational data. The form of the obstacle or mountain is arbitrary.

  15. Development of high-fidelity multiphysics system for light water reactor analysis

    NASA Astrophysics Data System (ADS)

    Magedanz, Jeffrey W.

    There has been a tendency in recent years toward greater heterogeneity in reactor cores, due to the use of mixed-oxide (MOX) fuel, burnable absorbers, and longer cycles with consequently higher fuel burnup. The resulting asymmetry of the neutron flux and energy spectrum between regions with different compositions causes a need to account for the directional dependence of the neutron flux, instead of the traditional diffusion approximation. Furthermore, the presence of both MOX and high-burnup fuel in the core increases the complexity of the heat conduction. The heat transfer properties of the fuel pellet change with irradiation, and the thermal and mechanical expansion of the pellet and cladding strongly affect the size of the gap between them, and its consequent thermal resistance. These operational tendencies require higher fidelity multi-physics modeling capabilities, and this need is addressed by the developments performed within this PhD research. The dissertation describes the development of a High-Fidelity Multi-Physics System for Light Water Reactor Analysis. It consists of three coupled codes -- CTF for Thermal Hydraulics, TORT-TD for Neutron Kinetics, and FRAPTRAN for Fuel Performance. It is meant to address these modeling challenges in three ways: (1) by resolving the state of the system at the level of each fuel pin, rather than homogenizing entire fuel assemblies, (2) by using the multi-group Discrete Ordinates method to account for the directional dependence of the neutron flux, and (3) by using a fuel-performance code, rather than a Thermal Hydraulics code's simplified fuel model, to account for the material behavior of the fuel and its feedback to the hydraulic and neutronic behavior of the system. While the first two are improvements, the third, the use of a fuel-performance code for feedback, constitutes an innovation in this PhD project. Also important to this work is the manner in which such coupling is written. 
While coupling involves combining codes into a single executable, they are usually still developed and maintained separately. It should thus be a design objective to minimize the changes to those codes, and keep the changes to each code free of dependence on the details of the other codes. This will ease the incorporation of new versions of the code into the coupling, as well as re-use of parts of the coupling to couple with different codes. In order to fulfill this objective, an interface for each code was created in the form of an object-oriented abstract data type. Object-oriented programming is an effective method for enforcing a separation between different parts of a program, and clarifying the communication between them. The interfaces enable the main program to control the codes in terms of high-level functionality. This differs from the established practice of a master/slave relationship, in which the slave code is incorporated into the master code as a set of subroutines. While this PhD research continues previous work with a coupling between CTF and TORT-TD, it makes two major original contributions: (1) using a fuel-performance code, instead of a thermal-hydraulics code's simplified built-in models, to model the feedback from the fuel rods, and (2) the design of an object-oriented interface as an innovative method to interact with a coupled code in a high-level, easily-understandable manner. The resulting code system will serve as a tool to study the question of under what conditions, and to what extent, these higher-fidelity methods will provide benefits to reactor core analysis. (Abstract shortened by UMI.)
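
    The object-oriented interface idea described above can be sketched as a small abstract data type. This is a hypothetical illustration only, not the actual CTF/TORT-TD/FRAPTRAN interface: the class and method names are invented, and the "fuel model" is a stand-in scalar relaxation, not a fuel-performance code.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the coupling interface described in the abstract:
# each physics code is wrapped in an abstract data type so the driver sees
# only high-level operations, never the codes' internals.
class PhysicsCode(ABC):
    @abstractmethod
    def advance(self, dt: float) -> None:
        """Advance this code's state by one coupled time step."""

    @abstractmethod
    def export_fields(self) -> dict:
        """Return the fields other codes need (e.g. fuel temperature)."""

    @abstractmethod
    def import_fields(self, fields: dict) -> None:
        """Receive feedback fields computed by the other codes."""

class ToyFuelModel(PhysicsCode):
    """Stand-in for a fuel-performance code: relaxes fuel temperature
    toward the coolant temperature supplied by thermal hydraulics."""
    def __init__(self, t_fuel=900.0):
        self.t_fuel = t_fuel
        self.t_coolant = 560.0
    def advance(self, dt):
        self.t_fuel += 0.1 * dt * (self.t_coolant - self.t_fuel)
    def export_fields(self):
        return {"fuel_temperature": self.t_fuel}
    def import_fields(self, fields):
        self.t_coolant = fields["coolant_temperature"]

# The driver manipulates every code only through the abstract interface,
# so swapping in a different fuel model requires no driver changes.
codes = [ToyFuelModel()]
for _ in range(10):
    for c in codes:
        c.import_fields({"coolant_temperature": 560.0})
        c.advance(dt=1.0)
```

This is the design point the dissertation argues for: the driver depends only on the abstract type, so new code versions slot in behind the same interface.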

  16. DSSPcont: continuous secondary structure assignments for proteins

    PubMed Central

    Carter, Phil; Andersen, Claus A. F.; Rost, Burkhard

    2003-01-01

    The DSSP program automatically assigns the secondary structure for each residue from the three-dimensional co-ordinates of a protein structure to one of eight states. However, discrete assignments are incomplete in that they cannot capture the continuum of thermal fluctuations. Therefore, DSSPcont (http://cubic.bioc.columbia.edu/services/DSSPcont) introduces a continuous assignment of secondary structure that replaces ‘static’ by ‘dynamic’ states. Technically, the continuum results from calculating weighted averages over 10 discrete DSSP assignments with different hydrogen bond thresholds. A DSSPcont assignment for a particular residue is a percentage likelihood of eight secondary structure states, derived from a weighted average of the ten DSSP assignments. The continuous assignments have two important features: (i) they reflect the structural variations due to thermal fluctuations as detected by NMR spectroscopy; and (ii) they reproduce the structural variation between many NMR models from one single model. Therefore, functionally important variation can be extracted from a single X-ray structure using the continuous assignment procedure. PMID:12824310
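
    The averaging step described above can be illustrated with a toy calculation. The weights and example assignments below are made up for illustration; the real DSSPcont weights are calibrated against NMR ensembles.

```python
# Toy illustration of the DSSPcont idea: per-residue percentages over the
# eight DSSP states from weighted votes across 10 discrete assignments made
# with different hydrogen bond thresholds.
STATES = "HGIEBTSL"  # eight DSSP states (L = loop/other, name assumed here)

def continuous_assignment(assignments, weights):
    """assignments: ten one-letter DSSP states for one residue, one per
    threshold; weights: matching non-negative weights."""
    total = sum(weights)
    pct = {s: 0.0 for s in STATES}
    for state, w in zip(assignments, weights):
        pct[state] += 100.0 * w / total
    return pct

# A residue assigned H at 6 of 10 thresholds is 60% helix under equal weights.
pct = continuous_assignment(list("HHHHHHTTSL"), [1] * 10)
```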

  17. Numerical Computation of Flame Spread over a Thin Solid in Forced Concurrent Flow with Gas-phase Radiation

    NASA Technical Reports Server (NTRS)

    Jiang, Ching-Biau; T'ien, James S.

    1994-01-01

    Excerpts from a paper describing the numerical examination of concurrent-flow flame spread over a thin solid in purely forced flow with gas-phase radiation are presented. The computational model solves the two-dimensional, elliptic, steady, laminar conservation equations for mass, momentum, energy, and chemical species. Gas-phase combustion is modeled via a one-step, second-order finite-rate Arrhenius reaction. Gas-phase radiation, treating the gas as a gray non-scattering medium, is solved by an S-N discrete ordinates method. A simplified solid-phase treatment assumes a zeroth-order pyrolysis relation and includes radiative interaction between the surface and the gas phase.
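
    The S-N discrete ordinates idea recurring through these records can be sketched in one dimension. This is a minimal illustration, not the paper's code: a gray, absorbing-emitting, non-scattering slab with a cold black left wall, a constant blackbody source, and an assumed two-point Gauss ordinate set per half range.

```python
import math

# Minimal 1-D S-N sketch: march the exact per-cell solution of
# dI/ds = kappa * (Ib - I), i.e.
#   I_{i+1} = I_i * exp(-kappa*dx/|mu|) + Ib * (1 - exp(-kappa*dx/|mu|)),
# along each discrete ordinate, then form a mu-weighted quadrature sum.
MU = [0.2113248654, 0.7886751346]   # |mu| values: 2-point Gauss on (0, 1)
W = [0.5, 0.5]                      # matching quadrature weights

def sweep(kappa, ib, n_cells, dx):
    """Return the mu-weighted sum of intensities exiting the right face."""
    flux = 0.0
    for mu, w in zip(MU, W):
        atten = math.exp(-kappa * dx / mu)
        i_cur = 0.0                  # cold black left wall
        for _ in range(n_cells):
            i_cur = i_cur * atten + ib * (1.0 - atten)
        flux += w * mu * i_cur
    return flux

# Optically thick slab: exiting intensity saturates at the blackbody value Ib,
# so the weighted sum tends to sum(w*mu) = 0.5 here. A thin slab emits less.
thick = sweep(kappa=50.0, ib=1.0, n_cells=200, dx=0.1)
thin = sweep(kappa=0.01, ib=1.0, n_cells=200, dx=0.1)
```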

  18. Linear Characteristic Spatial Quadrature for Discrete Ordinates Neutral Particle Transport on Arbitrary Triangles

    DTIC Science & Technology

    1993-06-01

    [OCR-garbled equations (4)-(5) and (92)-(99) from the report. The recoverable content: the scalar flux φ is obtained from the angular flux Ψ by quadrature over discrete ordinate directions; the characteristic form of the transport equation balances streaming and attenuation by the total cross section σ_t against the scattering source σ_s φ plus an external source S_EXT; assuming the area of the triangle is sufficiently small relative to a mean free path, Ψ_IN and Ψ_OUT denote angular flux averages along the input and output edges of each triangle, respectively.]

  19. Heat Transfer Modelling of Glass Media within TPV Systems

    NASA Astrophysics Data System (ADS)

    Bauer, Thomas; Forbes, Ian; Penlington, Roger; Pearsall, Nicola

    2004-11-01

    Understanding and optimisation of heat transfer, and in particular radiative heat transfer in terms of spectral, angular and spatial radiation distributions, is important to achieve high system efficiencies and high electrical power densities for thermophotovoltaics (TPV). This work reviews heat transfer models and uses the Discrete Ordinates method. First, one-dimensional heat transfer in fused silica (quartz glass) shields was examined for the common arrangement radiator-air-glass-air-PV cell. It has been concluded that an alternative arrangement, radiator-glass-air-PV cell, with increased thickness of fused silica should have advantages in terms of improved transmission of convertible radiation and enhanced suppression of non-convertible radiation.

  20. Measurement and modeling of advanced coal conversion processes, Volume II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solomon, P.R.; Serio, M.A.; Hamblen, D.G.

    1993-06-01

    A two-dimensional, steady-state model for describing a variety of reactive and nonreactive flows, including pulverized coal combustion and gasification, is presented. The model, referred to as 93-PCGC-2, is applicable to cylindrical, axisymmetric systems. Turbulence is accounted for in both the fluid mechanics equations and the combustion scheme. Radiation from gases, walls, and particles is taken into account using a discrete ordinates method. The particle phase is modeled in a Lagrangian framework, such that mean paths of particle groups are followed. A new coal-general devolatilization submodel (FG-DVC) with coal swelling and char reactivity submodels has been added.

  1. Institutional Controls and Educational Research.

    ERIC Educational Resources Information Center

    Homan, Roger

    1990-01-01

    Recognizing tendencies toward contract research and possible consequences, advocates creating a conduct code to regulate educational research and protect its integrity. Reports survey responses from 48 British institutions, showing no systematic code. States confidence in supervisory discretion currently guides research. Proposes a specific code…

  2. Evaluation of new techniques for the calculation of internal recirculating flows

    NASA Technical Reports Server (NTRS)

    Van Doormaal, J. P.; Turan, A.; Raithby, G. D.

    1987-01-01

    The performance of discrete methods for the prediction of fluid flows can be enhanced by improving the convergence rate of solvers and by increasing the accuracy of the discrete representation of the equations of motion. This paper evaluates the gains in solver performance that are available when various acceleration methods are applied. Various discretizations are also examined and two are recommended because of their accuracy and robustness. Insertion of the improved discretization and solver accelerator into a TEACH code, that has been widely applied to combustor flows, illustrates the substantial gains that can be achieved.

  3. Flexible Automatic Discretization for Finite Differences: Eliminating the Human Factor

    NASA Astrophysics Data System (ADS)

    Pranger, Casper

    2017-04-01

    In the geophysical numerical modelling community, finite differences are (in part due to their small footprint) a popular spatial discretization method for PDEs in the regular-shaped continuum that is the earth. However, they rapidly become prone to programming mistakes when the physics increases in complexity. To eliminate opportunities for human error, we have designed an automatic discretization algorithm using Wolfram Mathematica, in which the user supplies symbolic PDEs, the number of spatial dimensions, and a choice of symbolic boundary conditions, and the script transforms this information into matrix and right-hand-side rules ready for use in a C++ code that will accept them. The symbolic PDEs are further used to automatically develop and perform manufactured-solution benchmarks, ensuring physical fidelity at all stages while providing pragmatic targets for numerical accuracy. We find that this procedure greatly accelerates code development and provides a great deal of flexibility in one's choice of physics.
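
    The core mechanism behind such automatic discretization, deriving finite-difference stencil weights from symbolic information rather than typing them by hand, can be sketched outside Mathematica. The following hypothetical Python analogue (not the authors' tool) solves the Taylor-series moment conditions in exact rational arithmetic.

```python
from fractions import Fraction
import math

def fd_weights(offsets, deriv_order):
    """Return exact weights w_j such that sum_j w_j * f(x + offsets[j]*h)
    approximates h**deriv_order * f^(deriv_order)(x).
    Moment conditions: sum_j w_j * o_j**k / k! = delta(k, deriv_order)."""
    n = len(offsets)
    a = [[Fraction(o) ** k / math.factorial(k) for o in offsets]
         for k in range(n)]
    b = [Fraction(int(k == deriv_order)) for k in range(n)]
    # Gauss-Jordan elimination in exact arithmetic (no rounding error).
    for col in range(n):
        piv = next(r for r in range(col, n) if a[r][col] != 0)
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        inv = Fraction(1) / a[col][col]
        a[col] = [x * inv for x in a[col]]
        b[col] *= inv
        for r in range(n):
            if r != col and a[r][col] != 0:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
                b[r] -= f * b[col]
    return b

# Classic central second-derivative stencil on offsets (-1, 0, 1): 1, -2, 1.
weights = fd_weights([-1, 0, 1], 2)
```

Generating weights this way, instead of hard-coding them, is precisely what removes the "human factor" the abstract targets: the same routine covers any stencil width or derivative order.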

  4. Continuous-variable quantum network coding for coherent states

    NASA Astrophysics Data System (ADS)

    Shang, Tao; Li, Ke; Liu, Jian-wei

    2017-04-01

    As far as the spectral characteristic of quantum information is concerned, the existing quantum network coding schemes can be regarded as discrete-variable quantum network coding schemes. Considering the practical advantage of continuous variables, in this paper we explore two feasible continuous-variable quantum network coding (CVQNC) schemes. Basic operations and CVQNC schemes are both provided. The first scheme is based on Gaussian cloning and ADD/SUB operators and can transmit two coherent states with a fidelity of 1/2, while the second scheme utilizes continuous-variable quantum teleportation and can transmit two coherent states perfectly. By encoding classical information on quantum states, quantum network coding schemes can be utilized to transmit classical information. Scheme analysis shows that, compared with the discrete-variable paradigms, the proposed CVQNC schemes provide better network throughput from the viewpoint of classical information transmission. By modulating the amplitude and phase quadratures of coherent states with classical characters, the first and second schemes can transmit 4 log2 N and 2 log2 N bits of information in a single network use, respectively.

  5. Groundwater flow and heat transport for systems undergoing freeze-thaw: Intercomparison of numerical simulators for 2D test cases

    DOE PAGES

    Grenier, Christophe; Anbergen, Hauke; Bense, Victor; ...

    2018-02-26

    In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. In this paper, this issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.

  6. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution in which one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  7. Groundwater flow and heat transport for systems undergoing freeze-thaw: Intercomparison of numerical simulators for 2D test cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grenier, Christophe; Anbergen, Hauke; Bense, Victor

    In high-elevation, boreal and arctic regions, hydrological processes and associated water bodies can be strongly influenced by the distribution of permafrost. Recent field and modeling studies indicate that a fully-coupled multidimensional thermo-hydraulic approach is required to accurately model the evolution of these permafrost-impacted landscapes and groundwater systems. However, the relatively new and complex numerical codes being developed for coupled non-linear freeze-thaw systems require verification. In this paper, this issue is addressed by means of an intercomparison of thirteen numerical codes for two-dimensional test cases with several performance metrics (PMs). These codes comprise a wide range of numerical approaches, spatial and temporal discretization strategies, and computational efficiencies. Results suggest that the codes provide robust results for the test cases considered and that minor discrepancies are explained by computational precision. However, larger discrepancies are observed for some PMs resulting from differences in the governing equations, discretization issues, or in the freezing curve used by some codes.

  8. Simulation of 2D Kinetic Effects in Plasmas using the Grid Based Continuum Code LOKI

    NASA Astrophysics Data System (ADS)

    Banks, Jeffrey; Berger, Richard; Chapman, Tom; Brunner, Stephan

    2016-10-01

    Kinetic simulation of multi-dimensional plasma waves through direct discretization of the Vlasov equation is a useful tool to study many physical interactions and is particularly attractive for situations where minimal fluctuation levels are desired, for instance, when measuring growth rates of plasma wave instabilities. However, direct discretization of phase space can be computationally expensive, and as a result there are few examples of published results using Vlasov codes in more than a single configuration space dimension. In an effort to fill this gap we have developed the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The code is designed to reduce the cost of phase-space computation by using fully 4th order accurate conservative finite differencing, while retaining excellent parallel scalability that efficiently uses large scale computing resources. In this poster I will discuss the algorithms used in the code as well as some aspects of their parallel implementation using MPI. I will also overview simulation results of basic plasma wave instabilities relevant to laser plasma interaction, which have been obtained using the code.

  9. The inverse of winnowing: a FORTRAN subroutine and discussion of unwinnowing discrete data

    USGS Publications Warehouse

    Bracken, Robert E.

    2004-01-01

    This report describes an unwinnowing algorithm that utilizes a discrete Fourier transform, and a resulting Fortran subroutine that winnows or unwinnows a 1-dimensional stream of discrete data; the source code is included. The unwinnowing algorithm effectively increases (by integral factors) the number of available data points while maintaining the original frequency spectrum of a data stream. This has utility when an increased data density is required together with an availability of higher order derivatives that honor the original data.
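
    The unwinnowing idea, increasing the number of samples by an integral factor while preserving the original frequency spectrum, amounts to zero-padding the DFT. The report's implementation is a Fortran subroutine; the following is a hedged Python illustration of the same principle, restricted to odd-length input to sidestep the Nyquist-bin bookkeeping.

```python
import cmath

def dft(x, sign):
    """Naive O(n^2) discrete Fourier transform (sign=-1 forward, +1 inverse,
    unnormalized); adequate for a small illustrative example."""
    n = len(x)
    return [sum(x[j] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for j in range(n)) for k in range(n)]

def unwinnow(x, factor):
    """Upsample x (odd length) by an integral factor via spectral zero-padding,
    preserving the original frequency content."""
    n = len(x)
    spec = dft(x, -1)
    m = n * factor
    padded = [0j] * m
    half = n // 2
    for k in range(half + 1):        # copy non-negative frequencies
        padded[k] = spec[k]
    for k in range(1, half + 1):     # copy negative frequencies to the top
        padded[m - k] = spec[n - k]
    y = dft(padded, +1)
    # Inverse DFT normalization (1/m) times amplitude restoration (factor).
    return [v.real * factor / m for v in y]

x = [1.0, 2.0, 0.5, -1.0, 0.0]
y = unwinnow(x, 3)                   # 15 points through the original 5
```

The trigonometric interpolant passes exactly through the original samples, so every factor-th output point reproduces the input, which is the "maintains the original frequency spectrum" property the abstract describes.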

  10. Assessment of polarization effect on aerosol retrievals from MODIS

    NASA Astrophysics Data System (ADS)

    Korkin, S.; Lyapustin, A.

    2010-12-01

    Light polarization affects the total intensity of scattered radiation. In this work, we compare aerosol retrievals performed by the code MAIAC [1] with and without taking polarization into account. The MAIAC retrievals are based on look-up tables (LUTs). For this work, MAIAC was run using two different LUTs, the first generated using the scalar code SHARM [2], and the second generated with the vector code Modified Vector Discrete Ordinates Method (MVDOM). MVDOM is a new code suitable for computations with highly anisotropic phase functions, including cirrus clouds and snow [3]. To this end, the solution of the vector radiative transfer equation (VRTE) is represented as a sum of anisotropic and regular components. The anisotropic component is evaluated in the Small Angle Modification of the Spherical Harmonics Method (MSH) [4]. The MSH is formulated in the frame of reference of the solar beam, where the z-axis lies along the solar beam direction. In this case, the MSH solution for the anisotropic part is nearly symmetric in azimuth and is computed analytically. In the scalar case, this solution coincides with the Goudsmit-Saunderson small-angle approximation [5]. To correct for the analytical separation of the anisotropic part of the signal, the transfer equation for the regular part contains a correction source-function term [6]. Several examples of polarization impact on aerosol retrievals over different surface types will be presented. 1. Lyapustin, A., Wang, Y., Laszlo, I., Kahn, R., Korkin, S., Remer, L., Levy, R., and Reid, J. S. Multi-Angle Implementation of Atmospheric Correction (MAIAC): Part 2. Aerosol Algorithm. J. Geophys. Res., submitted (2010). 2. Lyapustin, A., Muldashev, T., Wang, Y. Code SHARM: fast and accurate radiative transfer over spatially variable anisotropic surfaces. In: Light Scattering Reviews 5. Chichester: Springer, 205-247 (2010). 3. Budak, V.P., Korkin, S.V. On the solution of a vectorial radiative transfer equation in an arbitrary three-dimensional turbid medium with anisotropic scattering. JQSRT, 109, 220-234 (2008). 4. Budak, V.P., Sarmin, S.E. Solution of the radiative transfer equation by the method of spherical harmonics in the small-angle modification. Atmospheric and Oceanic Optics, 3, 898-903 (1990). 5. Goudsmit, S., Saunderson, J.L. Multiple scattering of electrons. Phys. Rev., 57, 24-29 (1940). 6. Budak, V.P., Klyuykov, D.A., Korkin, S.V. Convergence acceleration of radiative transfer equation solution at strongly anisotropic scattering. In: Light Scattering Reviews 5. Chichester: Springer, 147-204 (2010).

  11. Solar Proton Transport Within an ICRU Sphere Surrounded by a Complex Shield: Ray-trace Geometry

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Wilson, John W.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.

    2015-01-01

    A computationally efficient 3DHZETRN code with enhanced neutron and light ion (Z is less than or equal to 2) propagation was recently developed for complex, inhomogeneous shield geometry described by combinatorial objects. Comparisons were made between 3DHZETRN results and Monte Carlo (MC) simulations at locations within the combinatorial geometry, and it was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in ray-trace geometry. This latest extension enables the code to be used within current engineering design practices utilizing fully detailed vehicle and habitat geometries. Through convergence testing, it is shown that fidelity in an actual shield geometry can be maintained in the discrete ray-trace description by systematically increasing the number of discrete rays used. It is also shown that this fidelity is carried into transport procedures and resulting exposure quantities without sacrificing computational efficiency.

  12. Solar proton exposure of an ICRU sphere within a complex structure part II: Ray-trace geometry.

    PubMed

    Slaba, Tony C; Wilson, John W; Badavi, Francis F; Reddell, Brandon D; Bahadori, Amir A

    2016-06-01

    A computationally efficient 3DHZETRN code with enhanced neutron and light ion (Z ≤ 2) propagation was recently developed for complex, inhomogeneous shield geometry described by combinatorial objects. Comparisons were made between 3DHZETRN results and Monte Carlo (MC) simulations at locations within the combinatorial geometry, and it was shown that 3DHZETRN agrees with the MC codes to the extent they agree with each other. In the present report, the 3DHZETRN code is extended to enable analysis in ray-trace geometry. This latest extension enables the code to be used within current engineering design practices utilizing fully detailed vehicle and habitat geometries. Through convergence testing, it is shown that fidelity in an actual shield geometry can be maintained in the discrete ray-trace description by systematically increasing the number of discrete rays used. It is also shown that this fidelity is carried into transport procedures and resulting exposure quantities without sacrificing computational efficiency. Published by Elsevier Ltd.

  13. Modification of the SAS4A Safety Analysis Code for Integration with the ADAPT Discrete Dynamic Event Tree Framework.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jankovsky, Zachary Kyle; Denman, Matthew R.

    It is difficult to assess the consequences of a transient in a sodium-cooled fast reactor (SFR) using traditional probabilistic risk assessment (PRA) methods, as numerous safety-related systems have passive characteristics. Often there is significant dependence on the value of continuous stochastic parameters rather than binary success/failure determinations. One form of dynamic PRA uses a system simulator to represent the progression of a transient, tracking events through time in a discrete dynamic event tree (DDET). In order to function in a DDET environment, a simulator must have characteristics that make it amenable to changing physical parameters midway through the analysis. The SAS4A SFR system analysis code did not have these characteristics as received. This report describes the code modifications made to allow dynamic operation as well as the linking to a Sandia DDET driver code. A test case is briefly described to demonstrate the utility of the changes.
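
    The DDET mechanism described above, forking the simulation at branching points and carrying a probability along each branch, can be sketched with a toy driver. Everything here is illustrative: the "simulator" is a stand-in scalar model, not SAS4A, and the branch probabilities are invented.

```python
import itertools

def simulate(params):
    """Stand-in physics model: peak temperature rises with each
    degraded parameter value. Not SAS4A; purely illustrative."""
    return 500.0 + 100.0 * sum(params)

def ddet(branch_options, probs):
    """Enumerate every path through the discrete dynamic event tree.
    branch_options[i] lists the discretized values at branching point i;
    probs[i] lists the matching branch probabilities.
    Returns (path probability, simulated outcome) pairs."""
    results = []
    for combo in itertools.product(*[range(len(o)) for o in branch_options]):
        p = 1.0
        values = []
        for axis, idx in enumerate(combo):
            p *= probs[axis][idx]
            values.append(branch_options[axis][idx])
        results.append((p, simulate(values)))
    return results

# Two branching points, each with a working (0) or degraded (1) outcome.
tree = ddet([[0, 1], [0, 1]], [[0.9, 0.1], [0.8, 0.2]])
total_prob = sum(p for p, _ in tree)
```

A real driver branches at event times during the transient rather than enumerating up front, which is why the simulator must accept parameter changes mid-run, the very capability the SAS4A modifications add.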

  14. Progress with the COGENT Edge Kinetic Code: Implementing the Fokker-Planck Collision Operator

    DOE PAGES

    Dorf, M. A.; Cohen, R. H.; Dorr, M.; ...

    2014-06-20

    COGENT is a continuum gyrokinetic code for edge plasma simulations being developed by the Edge Simulation Laboratory collaboration. The code is distinguished by application of a fourth-order finite-volume (conservative) discretization, and mapped multiblock grid technology to handle the geometric complexity of the tokamak edge. The distribution function F is discretized in v∥ – μ (parallel velocity – magnetic moment) velocity coordinates, and the code presently solves an axisymmetric full-f gyrokinetic equation coupled to the long-wavelength limit of the gyro-Poisson equation. COGENT capabilities are extended by implementing the fully nonlinear Fokker-Planck operator to model Coulomb collisions in magnetized edge plasmas. The corresponding Rosenbluth potentials are computed by making use of a finite-difference scheme and multipole-expansion boundary conditions. Details of the numerical algorithms and results of the initial verification studies are discussed.

  15. Improving the efficiency of quantum hash function by dense coding of coin operators in discrete-time quantum walk

    NASA Astrophysics Data System (ADS)

    Yang, YuGuang; Zhang, YuChen; Xu, Gang; Chen, XiuBo; Zhou, Yi-Hua; Shi, WeiMin

    2018-03-01

    Li et al. first proposed a quantum hash function (QHF) in a quantum-walk architecture. In their scheme, two two-particle interactions, i.e., the I interaction and the π-phase interaction, are introduced, and the choice of the I or π-phase interaction at each iteration depends on a message bit. In this paper, we propose an efficient QHF by dense coding of coin operators in discrete-time quantum walk. Compared with existing QHFs, our protocol has the following advantages: the efficiency of the QHF can be doubled or better, and only one particle is needed, with no two-particle interactions, so that quantum resources are saved. This suggests applying the dense coding technique to quantum cryptographic protocols, especially applications with restricted quantum resources.

  16. Synchronization Control for a Class of Discrete-Time Dynamical Networks With Packet Dropouts: A Coding-Decoding-Based Approach.

    PubMed

    Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang

    2017-09-06

    The synchronization control problem is investigated for a class of discrete-time dynamical networks with packet dropouts via a coding-decoding-based approach. The data is transmitted through digital communication channels and only the sequence of finite coded signals is sent to the controller. A series of mutually independent Bernoulli distributed random variables is utilized to model the packet dropout phenomenon occurring in the transmissions of coded signals. The purpose of the addressed synchronization control problem is to design a suitable coding-decoding procedure for each node, based on which an efficient decoder-based control protocol is developed to guarantee that the closed-loop network achieves the desired synchronization performance. By applying a modified uniform quantization approach and the Kronecker product technique, criteria for ensuring the detectability of the dynamical network are established by means of the size of the coding alphabet, the coding period and the probability information of packet dropouts. Subsequently, by resorting to the input-to-state stability theory, the desired controller parameter is obtained in terms of the solutions to a certain set of inequality constraints which can be solved effectively via available software packages. Finally, two simulation examples are provided to demonstrate the effectiveness of the obtained results.
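
    The coding-decoding channel model in the abstract can be illustrated with a toy uniform quantizer and Bernoulli packet dropouts. All names and parameters below are hypothetical; this sketches the channel mechanism, not the paper's controller synthesis.

```python
import random

def encode(value, levels, lo, hi):
    """Map a real value to one of `levels` symbols (the coding alphabet)."""
    step = (hi - lo) / levels
    idx = int((value - lo) / step)
    return max(0, min(levels - 1, idx))     # clamp to the alphabet

def decode(symbol, levels, lo, hi):
    """Reconstruct the midpoint of the symbol's quantization cell."""
    step = (hi - lo) / levels
    return lo + (symbol + 0.5) * step

def transmit(values, levels=16, lo=-1.0, hi=1.0, drop_prob=0.3, seed=1):
    """Send coded signals over a channel with Bernoulli packet dropouts;
    on a dropout the receiver holds its last estimate."""
    rng = random.Random(seed)
    estimate, out = 0.0, []
    for v in values:
        if rng.random() >= drop_prob:       # packet delivered
            estimate = decode(encode(v, levels, lo, hi), levels, lo, hi)
        out.append(estimate)
    return out

signal = [0.9, -0.4, 0.1, 0.7, -0.8]
recon = transmit(signal)
recon_lossless = transmit(signal, drop_prob=0.0)
```

With a 16-symbol alphabet on [-1, 1], every delivered sample is reconstructed to within half a quantization step (0.0625); the dropout probability and alphabet size are exactly the quantities the paper's detectability criteria trade off.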

  17. A new approach for modeling composite materials

    NASA Astrophysics Data System (ADS)

    Alcaraz de la Osa, R.; Moreno, F.; Saiz, J. M.

    2013-03-01

    The increasing use of composite materials is due to their ability to be tailored for special purposes, with applications evolving day by day. This is why predicting the properties of these systems from their constituents, or phases, has become so important. However, assigning macroscopic optical properties to these materials from the bulk properties of their constituents is not a straightforward task. In this research, we present a spectral analysis of typical three-dimensional random composite nanostructures using an Extension of the Discrete Dipole Approximation (the E-DDA code), comparing different approaches and emphasizing the influence of the optical properties of the constituents and their concentration. In particular, we propose a new approach that preserves the individual nature of the constituents while introducing a variation in the optical properties of each discrete element driven by the surrounding medium. The results obtained with this new approach compare more favorably with experiment than those of previous approaches. We have also applied it to a non-conventional material composed of a metamaterial embedded in a dielectric matrix. Our version of the Discrete Dipole Approximation code, the E-DDA code, has been formulated specifically to tackle this kind of problem, including materials with either magnetic or tensor properties.

  18. Efficient simulation of pitch angle collisions in a 2+2-D Eulerian Vlasov code

    NASA Astrophysics Data System (ADS)

    Banks, Jeff; Berger, R.; Brunner, S.; Tran, T.

    2014-10-01

    Here we discuss pitch angle scattering collisions in the context of the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The collision operator is discretized using 4th order accurate conservative finite-differencing. The treatment of the Vlasov operator in phase-space uses an approach based on a minimally diffuse, fourth-order-accurate discretization (Banks and Hittinger, IEEE T. Plasma Sci. 39, 2198). The overall scheme is therefore discretely conservative and controls unphysical oscillations. Some details of the numerical scheme will be presented, and the implementation on modern highly concurrent parallel computers will be discussed. We will present results of collisional effects on linear and non-linear Landau damping of electron plasma waves (EPWs). In addition we will present initial results showing the effect of collisions on the evolution of EPWs in two space dimensions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the LDRD program at LLNL under project tracking code 12-ERD-061.
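    The conservative finite-difference idea mentioned above can be made concrete in one dimension. The following is a generic 4th-order centered flux-difference sketch on a periodic grid, not the LOKI discretization itself; because neighboring cells share interface fluxes, the discrete update telescopes and the total is conserved to roundoff:

```python
import numpy as np

def conservative_rhs(f, u, dx):
    """4th-order centered, conservative flux-difference approximation to
    -d(u*f)/dx on a periodic grid. Interface fluxes are shared by adjacent
    cells, so sum(f)*dx is conserved exactly (up to roundoff)."""
    F = u * f
    # 4th-order interpolation of the flux to interface i+1/2
    F_half = (-np.roll(F, 1) + 7.0 * F + 7.0 * np.roll(F, -1) - np.roll(F, -2)) / 12.0
    # Divergence: difference of the two interface fluxes bounding each cell
    return -(F_half - np.roll(F_half, 1)) / dx

n = 64
x = 2 * np.pi * np.arange(n) / n
rhs = conservative_rhs(np.sin(x), 1.0, 2 * np.pi / n)
```

    For f = sin(x) and unit velocity the result approximates -cos(x) with an O(dx^4) error, while the cell-wise sum of the update vanishes identically.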

  19. Numerical developments for short-pulsed Near Infra-Red laser spectroscopy. Part I: direct treatment

    NASA Astrophysics Data System (ADS)

    Boulanger, Joan; Charette, André

    2005-03-01

    This two-part study is devoted to the numerical treatment of short-pulsed laser near infra-red spectroscopy. The overall goal is to address the possibility of numerical inverse treatment based on a recently developed direct model to solve the transient radiative transfer equation. This model has been constructed in order to incorporate the latest improvements in short-pulsed laser interaction with semi-transparent media; it combines a discrete ordinates computation of the implicit source term appearing in the radiative transfer equation with an explicit treatment of the transport of the light intensity using advection schemes, a method encountered in reactive flow dynamics. The incident collimated beam is solved analytically through the Bouguer-Beer-Lambert extinction law. In this first part, the direct model is extended to fully non-homogeneous materials and tested with two different spatial schemes in order to be adapted to the inversion methods presented in the following second part. As a first point, fundamental methods and schemes used in the direct model are presented. Then, tests are conducted by comparison with numerical simulations given as references. In a third and last part, multi-dimensional extensions of the code are provided. This allows presentation of numerical results of short-pulse propagation in 1, 2 and 3D homogeneous and non-homogeneous materials, together with parametric studies of medium properties and pulse shape. For comparison, an integral method adapted to non-homogeneous media irradiated by a pulsed laser beam is also developed for the 3D case.

  20. Benchmark measurements and calculations of a 3-dimensional neutron streaming experiment

    NASA Astrophysics Data System (ADS)

    Barnett, D. A., Jr.

    1991-02-01

    An experimental assembly known as the Dog-Legged Void assembly was constructed to measure the effect of neutron streaming in iron and void regions. The primary purpose of the measurements was to provide benchmark data against which various neutron transport calculation tools could be compared. The measurements included neutron flux spectra at four places and integral measurements at two places in the iron streaming path, as well as integral measurements along several axial traverses. These data have been used in the verification of Oak Ridge National Laboratory's three-dimensional discrete ordinates code, TORT. For a base case calculation using one-half inch mesh spacing, finite difference spatial differencing, an S(sub 16) quadrature and P(sub 1) cross sections in the MUFT multigroup structure, the calculated solution agreed with the spectral measurements to within 18 percent and with the integral measurements to within 24 percent. Variations on the base case using a fewgroup energy structure and P(sub 1) and P(sub 3) cross sections showed similar agreement. Calculations using a linear nodal spatial differencing scheme and fewgroup cross sections also showed similar agreement. For the same mesh size, the nodal method was seen to require 2.2 times as much CPU time as the finite difference method. A nodal calculation using a typical mesh spacing of 2 inches, which had approximately 32 times fewer mesh cells than the base case, agreed with the measurements to within 34 percent and yet required only 8 percent of the CPU time.

  1. SU-F-T-111: Investigation of the Attila Deterministic Solver as a Supplement to Monte Carlo for Calculating Out-Of-Field Radiotherapy Dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mille, M; Lee, C; Failla, G

    Purpose: To use the Attila deterministic solver as a supplement to Monte Carlo for calculating out-of-field organ dose in support of epidemiological studies looking at the risks of second cancers. Supplemental dosimetry tools are needed to speed up dose calculations for studies involving large-scale patient cohorts. Methods: Attila is a multi-group discrete ordinates code which can solve the 3D photon-electron coupled linear Boltzmann radiation transport equation on a finite-element mesh. Dose is computed by multiplying the calculated particle flux in each mesh element by a medium-specific energy deposition cross-section. The out-of-field dosimetry capability of Attila is investigated by comparing average organ dose to that which is calculated by Monte Carlo simulation. The test scenario consists of a 6 MV external beam treatment of a female patient with a tumor in the left breast. The patient is simulated by a whole-body adult reference female computational phantom. Monte Carlo simulations were performed using MCNP6 and XVMC. Attila can export a tetrahedral mesh for MCNP6, allowing for a direct comparison between the two codes. The Attila and Monte Carlo methods were also compared in terms of calculation speed and complexity of simulation setup. A key prerequisite for this work was the modeling of a Varian Clinac 2100 linear accelerator. Results: The solid mesh of the torso part of the adult female phantom for the Attila calculation was prepared using the CAD software SpaceClaim. Preliminary calculations suggest that Attila is a user-friendly software which shows great promise for our intended application. Computational performance is related to the number of tetrahedral elements included in the Attila calculation. Conclusion: Attila is being explored as a supplement to the conventional Monte Carlo radiation transport approach for performing retrospective patient dosimetry. The goal is for the dosimetry to be sufficiently accurate for use in retrospective epidemiological investigations.

  2. Physics Verification Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William

    The purpose of the verification project is to establish, through rigorous convergence analysis, that each ASC computational physics code correctly implements a set of physics models and algorithms (code verification); to evaluate and analyze the uncertainties of code outputs associated with the choice of temporal and spatial discretization (solution or calculation verification); and to develop and maintain the capability to expand and update these analyses on demand. This presentation describes project milestones.

  3. Ozone retrievals from MAGEAQ GEO TIR+VIS for air quality

    NASA Astrophysics Data System (ADS)

    Quesada-Ruiz, Samuel; Attié, Jean-Luc; Lahoz, William A.; Abida, Rachid; El-Amraoui, Laaziz; Ricaud, Philippe; Zbinden, Regina; Spurr, Robert; da Silva, Arlindo M.

    2016-04-01

    Nowadays, air quality monitoring is based on ground-based stations (GBS) or satellite measurements. GBS provide accurate measurements of pollutant concentrations, especially in the planetary boundary layer (PBL), but their spatial coverage is usually sparse. Polar-orbiting satellites provide good spatial resolution but low temporal coverage; this is insufficient for tracking pollutants exhibiting a diurnal cycle (Lahoz et al., 2012). However, pollutant concentrations can be measured by instruments placed on board a geostationary satellite, which can provide sufficiently high temporal and spatial resolution (e.g. Hache et al., 2014). In this work, we investigate the potential of a possible future geostationary instrument, MAGEAQ (Monitoring the Atmosphere from Geostationary orbit for European Air Quality), for retrieving ozone measurements over Europe. In particular, MAGEAQ can provide 1-hour temporal sampling at 10x10 km pixel resolution for measurements in both visible (VIS) and thermal infrared (TIR) bands, so measurements can be made during both day and night. MAGEAQ synthetic radiance observations are obtained through radiative transfer (RT) simulations using the VLIDORT discrete ordinate RT model (Spurr, 2006), based on output from the GEOS-5 Nature Run (Gelaro et al., 2015) providing optical information, plus a suitable instrument model. Ozone is retrieved from these synthetic measurements using a Levenberg-Marquardt optimal estimation inversion scheme. Finally, we examine an application of the air quality concept based on these ozone retrievals during the heatwave event of July 2006 over Europe. REFERENCES: Gelaro, R., Putman, W. M., Pawson, S., Draper, C., Molod, A., Norris, P. M., Ott, L., Privé, N., Reale, O., Achuthavarier, D., Bosilovich, M., Buchard, V., Chao, W., Coy, L., Cullather, R., da Silva, A., Darmenov, A., Errico, R. M., Fuentes, M., Kim, M-J., Koster, R., McCarty, W., Nattala, J., Partyka, G., Schubert, S., Vernieres, G., Vikhliaev, Y., and Wargan, K. Evaluation of the 7-km GEOS-5 Nature Run. NASA/TM-2014-104606, Vol. 36, 2015. Hache, E., Attié, J.L., Tourneur, C., Ricaud, P., Coret, L., Lahoz, W.A., El Amraoui, L., Josse, B., Hamer, P., Warner, J., Liu, X., Chance, K., Höpfner, M., Spurr, R., Natraj, V., Kulawik, S., Eldering, A., and Orphal, J. The added value of a visible channel to a geostationary thermal infrared instrument to monitor ozone for air quality. Atmos. Meas. Tech., 7, 2185-2201, 2014. Lahoz, W. A., Peuch, V. H., Orphal, J., Attie, J.L., Chance, K., Liu, X., Edwards, D., Elbern, H., Flaud, J. M., Claeyman, M., and El Amraoui, L. Monitoring Air Quality from Space: The Case for the Geostationary Platform. Bulletin of the American Meteorological Society, 93, 221-233, 2012. Spurr, R. J. D. VLIDORT: A Linearized Pseudo-Spherical Vector Discrete Ordinate Radiative Transfer Code for Forward Model and Retrieval Studies in Multilayer Multiple Scattering Media. Journal of Quantitative Spectroscopy & Radiative Transfer, 102, 316-342, 2006.

  4. Exponential Characteristic Spatial Quadrature for Discrete Ordinates Neutral Particle Transport in Slab Geometry

    DTIC Science & Technology

    1992-03-01

    [Record excerpt (scanned input-listing fragments): slab-geometry test cases with vacuum boundaries and isotropic surface sources at the left boundary, with per-region parameters cR, SigmaR, SourceR, and nc tabulated for each problem.]
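    This record concerns discrete-ordinates transport in slab geometry. As a concrete, much simpler illustration of that setting, here is a minimal 1D S_N source-iteration solver using the standard diamond-difference spatial scheme with Gauss-Legendre angles and vacuum boundaries; the cross sections and source are illustrative, and this is not the exponential characteristic scheme the report develops:

```python
import numpy as np

def slab_sn(nx=50, n_ang=8, sigma_t=1.0, c=0.5, q=1.0, L=10.0, tol=1e-8):
    """Minimal 1D S_N solver: diamond-difference spatial scheme,
    Gauss-Legendre angular quadrature, source iteration, vacuum boundaries,
    uniform isotropic source q, scattering ratio c (illustrative data)."""
    dx = L / nx
    mu, w = np.polynomial.legendre.leggauss(n_ang)   # angles and weights
    sigma_s = c * sigma_t
    phi = np.zeros(nx)                               # scalar flux
    for _ in range(500):
        S = 0.5 * (sigma_s * phi + q)                # isotropic angular source
        phi_new = np.zeros(nx)
        for m in range(n_ang):
            psi_in = 0.0                             # vacuum inflow boundary
            a = abs(mu[m]) / dx
            cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
            for i in cells:                          # transport sweep
                psi_c = (S[i] + 2 * a * psi_in) / (sigma_t + 2 * a)
                psi_in = 2 * psi_c - psi_in          # diamond-difference closure
                phi_new[i] += w[m] * psi_c
        done = np.max(np.abs(phi_new - phi)) < tol
        phi = phi_new
        if done:
            break
    return phi

phi = slab_sn()
```

    Deep inside a thick slab the flux approaches the infinite-medium value q/(sigma_t - sigma_s), while vacuum boundaries depress it near the edges; production codes such as TORT generalize this sweep to three dimensions and higher-order quadratures.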

  5. Modelling crystal plasticity by 3D dislocation dynamics and the finite element method: The Discrete-Continuous Model revisited

    NASA Astrophysics Data System (ADS)

    Vattré, A.; Devincre, B.; Feyel, F.; Gatti, R.; Groh, S.; Jamond, O.; Roos, A.

    2014-02-01

    A unified model coupling 3D dislocation dynamics (DD) simulations with the finite element (FE) method is revisited. The so-called Discrete-Continuous Model (DCM) aims to predict plastic flow at the (sub-)micron length scale of materials with complex boundary conditions. The evolution of the dislocation microstructure and the short-range dislocation-dislocation interactions are calculated with a DD code. The long-range mechanical fields due to the dislocations are calculated by a FE code, taking into account the boundary conditions. The coupling procedure is based on eigenstrain theory, and the precise manner in which the plastic slip, i.e. the dislocation glide as calculated by the DD code, is transferred to the integration points of the FE mesh is described in full detail. Several test cases are presented, and the DCM is applied to plastic flow in a single-crystal Nickel-based superalloy.

  6. Fast discrete cosine transform structure suitable for implementation with integer computation

    NASA Astrophysics Data System (ADS)

    Jeong, Yeonsik; Lee, Imgeun

    2000-10-01

    The discrete cosine transform (DCT) has wide applications in speech and image coding. We propose a fast DCT scheme with fewer multiplication stages and fewer additions and multiplications overall. The proposed algorithm is structured so that most multiplications are performed at the final stage, which reduces the propagation error that can occur in integer computation.
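    For reference, the transform that such fast structures factorize is the DCT-II. The direct O(N^2) form below only defines the target; the paper's contribution is a factorization that defers most multiplications to a final scaling stage, which this sketch does not attempt to reproduce:

```python
import math

def dct_ii(x):
    """Direct (unnormalized) DCT-II:
    y[k] = sum_n x[n] * cos(pi * (2n + 1) * k / (2N))."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
            for k in range(N)]

# A constant signal has all its energy in the k = 0 coefficient.
y = dct_ii([1.0, 1.0, 1.0, 1.0])
```

    Energy compaction of exactly this kind is why the DCT dominates speech and image coding.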

  7. Characteristic correlation study of UV disinfection performance for ballast water treatment

    NASA Astrophysics Data System (ADS)

    Ba, Te; Li, Hongying; Osman, Hafiiz; Kang, Chang-Wei

    2016-11-01

    Characteristic correlations between ultraviolet disinfection performance and operating parameters, including ultraviolet transmittance (UVT), lamp power and water flow rate, were studied by numerical and experimental methods. A three-stage model was developed to simulate the fluid flow, UV radiation and the trajectories of microorganisms. The Navier-Stokes equations with k-epsilon turbulence were solved to model the fluid flow, while the discrete ordinates (DO) radiation model and the discrete phase model (DPM) were used to introduce UV radiation and microorganism trajectories into the model, respectively. The UV dose distribution over the microorganisms was found to shift toward higher values with increasing UVT and lamp power, and toward lower values with increasing water flow rate. Further investigation shows that the fluence rate increases exponentially with UVT but linearly with the lamp power. The average and minimum residence times decrease linearly with the water flow rate, while the maximum residence time decreases rapidly over a certain range. The current study can be used as a digital design and performance evaluation tool for UV reactors for ballast water treatment.
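    The reported trends can be captured by a toy scaling model: fluence rate scales linearly with lamp power and exponentially with UVT (Beer-Lambert absorption over the water depth), while mean residence time is reactor volume over flow rate. Every coefficient below is a hypothetical placeholder, not a fit to the paper's reactor:

```python
def mean_uv_dose(lamp_power, uvt_per_cm, flow_rate, depth_cm=5.0,
                 volume=0.01, k=1.0):
    """Toy UV-dose scaling: dose = fluence rate x residence time.
    All coefficients are hypothetical placeholders."""
    fluence_rate = k * lamp_power * uvt_per_cm ** depth_cm  # exponential in UVT
    residence_time = volume / flow_rate                     # inverse in flow rate
    return fluence_rate * residence_time
```

    The model reproduces the qualitative behavior above: dose rises with UVT and lamp power and falls with flow rate.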

  8. The compulsory psychiatric regime in Hong Kong: Constitutional and ethical perspectives.

    PubMed

    Cheung, Daisy

    This article examines the compulsory psychiatric regime in Hong Kong. Under section 36 of the Mental Health Ordinance, which authorises long-term detention of psychiatric patients, a District Judge is required to countersign the form filled out by the registered medical practitioners in order for the detention to be valid. Case law, however, has shown that the role of the District Judge is merely administrative. This article suggests that, as it currently stands, the compulsory psychiatric regime in Hong Kong is unconstitutional because it fails the proportionality test. In light of this conclusion, the author proposes two solutions to deal with the issue, by common law or by legislative reform. The former would see an exercise of discretion by the courts read into section 36, while the latter would involve piecemeal reform of the relevant provisions to give the courts an explicit discretion to consider substantive issues when reviewing compulsory detention applications. The author argues that these solutions would introduce effective judicial supervision into the compulsory psychiatric regime and safeguard against abuse of process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Application of Jacobian-free Newton–Krylov method in implicitly solving two-fluid six-equation two-phase flow problems: Implementation, validation and benchmark

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-03-09

    This work represents a first-of-its-kind successful application of advanced numerical methods to solving realistic two-phase flow problems with the two-fluid six-equation two-phase flow model. These advanced numerical methods include a high-resolution spatial discretization scheme with staggered grids, high-order fully implicit time integration schemes, and the Jacobian-free Newton–Krylov (JFNK) method as the nonlinear solver. The computer code developed in this work has been extensively validated with existing experimental flow boiling data in vertical pipes and rod bundles, which cover wide ranges of experimental conditions, such as pressure, inlet mass flux, wall heat flux and exit void fraction. An additional code-to-code benchmark with the RELAP5-3D code further verifies the correct code implementation. The combined methods employed in this work exhibit strong robustness in solving two-phase flow problems even when phase appearance (boiling) and realistic discrete flow regimes are considered. Transitional flow regimes used in existing system analysis codes, normally introduced to overcome numerical difficulty, were completely removed in this work. As a result, this in turn provides the possibility to utilize more sophisticated flow regime maps in the future to further improve simulation accuracy.

  10. Broadband transmission-type coding metamaterial for wavefront manipulation for airborne sound

    NASA Astrophysics Data System (ADS)

    Li, Kun; Liang, Bin; Yang, Jing; Yang, Jun; Cheng, Jian-chun

    2018-07-01

    The recent advent of coding metamaterials, as a new class of acoustic metamaterials, substantially reduces the complexity in the design and fabrication of acoustic functional devices capable of manipulating sound waves in exotic manners by arranging coding elements with discrete phase states in specific sequences. It is therefore intriguing, both physically and practically, to pursue a mechanism for realizing broadband acoustic coding metamaterials that control transmitted waves with a fine resolution of the phase profile. Here, we propose the design of a transmission-type acoustic coding device and demonstrate its metamaterial-based implementation. The mechanism is that, instead of relying on resonant coding elements that are necessarily narrow-band, we build weak-resonant coding elements with a helical-like metamaterial with a continuously varying pitch that effectively expands the working bandwidth while maintaining the sub-wavelength resolution of the phase profile that is vital for the production of complicated wave fields. The effectiveness of our proposed scheme is numerically verified via the demonstration of three distinctive examples of acoustic focusing, anomalous refraction, and vortex beam generation in the prescribed frequency band on the basis of 1- and 2-bit coding sequences. Simulation results agree well with theoretical predictions, showing that the designed coding devices with discrete phase profiles are efficient in engineering the wavefront of outgoing waves to form the desired spatial pattern. We anticipate the realization of coding metamaterials with broadband functionality and design flexibility to open up possibilities for novel acoustic functional devices for the special manipulation of transmitted waves and underpin diverse applications ranging from medical ultrasound imaging to acoustic detections.
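    As an illustration of how discrete coding sequences approximate a continuous phase profile, the sketch below quantizes the linear phase gradient required for anomalous refraction (generalized Snell's law) onto the four 2-bit states; the element pitch, wavelength, and deflection angle are arbitrary assumptions:

```python
import math

def two_bit_profile(n_elements, pitch, wavelength, theta_t_deg):
    """Quantize the ideal linear phase gradient for anomalous refraction
    onto the four 2-bit states {0, pi/2, pi, 3pi/2} (illustrative values)."""
    k0 = 2 * math.pi / wavelength
    grad = k0 * math.sin(math.radians(theta_t_deg))  # required d(phi)/dx
    profile = []
    for n in range(n_elements):
        ideal = (grad * n * pitch) % (2 * math.pi)   # ideal element phase
        idx = round(ideal / (math.pi / 2)) % 4       # nearest 2-bit state
        profile.append(idx * math.pi / 2)
    return profile

profile = two_bit_profile(16, 0.005, 0.02, 30.0)
```

    Finer coding (more bits) shrinks the quantization error of each element's phase, which is why sub-wavelength phase resolution matters for producing complicated wave fields.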

  11. Decisions: "Carltona" and the CUC Code

    ERIC Educational Resources Information Center

    Evans, G. R.

    2006-01-01

    The Committee of University Chairman publishes a code of good practice designed, among other things, to ensure clarity about the authority on which decisions are taken on behalf of universities, subordinate domestic legislation created and the exercise of discretion regulated. In Carltona Ltd.v. Commissioners of Works [1943] 2 All ER 560 AC the…

  12. Error correcting coding-theory for structured light illumination systems

    NASA Astrophysics Data System (ADS)

    Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben

    2017-06-01

    Intensity-discrete structured light illumination systems project a series of patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error-correcting code is advantageous in many ways: it reduces the effect of random intensity noise, it corrects outliers near fringe borders that are commonly present when using intensity-discrete patterns, and it provides robustness in case of severe measurement errors (even for burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in safety-critical applications, e.g., the monitoring of deformations of components in nuclear power plants, where high reliability is ensured even in case of short measurement disruptions. A special form of burst error is so-called salt-and-pepper noise, which can largely be removed with error-correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
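    A minimal concrete instance of pixel-wise error correction is the textbook Hamming(7,4) code, which corrects any single-bit error in a 7-symbol sequence (the mapping to projector patterns is omitted, and this generic code is not necessarily the one used in the paper):

```python
def hamming74_encode(d):
    """Hamming(7,4): protect 4 data bits with 3 parity bits; any single
    bit error in the 7-bit word is correctable at the decoder."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(w):
    """Recompute the parities; the syndrome is the 1-based position of
    the (single) corrupted bit, or 0 if the word is clean."""
    w = list(w)
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        w[syndrome - 1] ^= 1     # flip the corrupted bit back
    return [w[2], w[4], w[5], w[6]]

data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
```

    In the structured-light setting each codeword bit would correspond to one projected frame, so a single corrupted frame at a pixel is recovered from that pixel's own grey-level sequence alone.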

  13. The theta/gamma discrete phase code occurring during the hippocampal phase precession may be a more general brain coding scheme.

    PubMed

    Lisman, John

    2005-01-01

    In the hippocampus, oscillations in the theta and gamma frequency range occur together and interact in several ways, indicating that they are part of a common functional system. It is argued that these oscillations form a coding scheme that is used in the hippocampus to organize the readout from long-term memory of the discrete sequence of upcoming places, as cued by current position. This readout of place cells has been analyzed in several ways. First, plots of the theta phase of spikes vs. position on a track show a systematic progression of phase as rats run through a place field. This is termed the phase precession. Second, two cells with nearby place fields have a systematic difference in phase, as indicated by a cross-correlation having a peak with a temporal offset that is a significant fraction of a theta cycle. Third, several different decoding algorithms demonstrate the information content of theta phase in predicting the animal's position. It appears that small phase differences corresponding to jitter within a gamma cycle do not carry information. This evidence, together with the finding that principal cells fire preferentially at a given gamma phase, supports the concept of theta/gamma coding: a given place is encoded by the spatial pattern of neurons that fire in a given gamma cycle (the exact timing within a gamma cycle being unimportant); sequential places are encoded in sequential gamma subcycles of the theta cycle (i.e., with different discrete theta phase). It appears that this general form of coding is not restricted to readout of information from long-term memory in the hippocampus because similar patterns of theta/gamma oscillations have been observed in multiple brain regions, including regions involved in working memory and sensory integration. It is suggested that dual oscillations serve a general function: the encoding of multiple units of information (items) in a way that preserves their serial order. 
The relationship of such coding to that proposed by Singer and von der Malsburg is discussed; in their scheme, theta is not considered. It is argued that what theta provides is the absolute phase reference needed for encoding order. Theta/gamma coding therefore bears some relationship to the concept of "word" in digital computers, with word length corresponding to the number of gamma cycles within a theta cycle, and discrete phase corresponding to the ordered "place" within a word. Copyright 2005 Wiley-Liss, Inc.
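    The "word" analogy above can be written down schematically: with N gamma subcycles per theta cycle, the k-th item of a remembered sequence is assigned the k-th discrete theta phase. This is purely an illustrative paraphrase of the scheme, not a model from the paper:

```python
import math

def theta_gamma_word(items, gamma_per_theta=7):
    """Assign each sequence item to one gamma subcycle of the theta cycle;
    the discrete theta phase (slot index) encodes serial order."""
    if len(items) > gamma_per_theta:
        raise ValueError("word length exceeds gamma cycles per theta cycle")
    slot_width = 2 * math.pi / gamma_per_theta
    return [(k * slot_width, item) for k, item in enumerate(items)]

word = theta_gamma_word(["A", "B", "C"])
```

    The "word length" here is the number of gamma cycles per theta cycle, and the ordered "place" within the word is the discrete theta phase, mirroring the digital-computer analogy in the abstract.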

  14. Discrete Ordinate Quadrature Selection for Reactor-based Eigenvalue Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarrell, Joshua J; Evans, Thomas M; Davidson, Gregory G

    2013-01-01

    In this paper we analyze the effect of various quadrature sets on the eigenvalues of several reactor-based problems, including a two-dimensional (2D) fuel pin, a 2D lattice of fuel pins, and a three-dimensional (3D) reactor core problem. While many quadrature sets have been applied to neutral particle discrete ordinate transport calculations, the Level Symmetric (LS) and the Gauss-Chebyshev product (GC) sets are the most widely used in production-level reactor simulations. Other quadrature sets, such as Quadruple Range (QR) sets, have been shown to be more accurate in shielding applications. In this paper, we compare the LS, GC, QR, and the recently developed linear-discontinuous finite element (LDFE) sets, as well as give a brief overview of other proposed quadrature sets. We show that, for a given number of angles, the QR sets are more accurate than the LS and GC in all types of reactor problems analyzed (2D and 3D). We also show that the LDFE sets are more accurate than the LS and GC sets for these problems. We conclude that, for problems where tens to hundreds of quadrature points (directions) per octant are appropriate, QR sets should regularly be used because they have similar integration properties as the LS and GC sets, have no noticeable impact on the speed of convergence of the solution when compared with other quadrature sets, and yield more accurate results. We note that, for very high-order scattering problems, the QR sets exactly integrate fewer angular flux moments over the unit sphere than the GC sets. The effects of those inexact integrations have yet to be analyzed. We also note that the LDFE sets only exactly integrate the zeroth and first angular flux moments. Pin power comparisons and analyses are not included in this paper and are left for future work.
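    The Gauss-Chebyshev product construction discussed above can be sketched directly: Gauss-Legendre abscissas in the polar cosine crossed with equally spaced, equally weighted azimuths. Checking which angular moments it integrates exactly is then a one-line test (the set sizes below are arbitrary):

```python
import numpy as np

def gauss_chebyshev_product(n_polar, n_azi):
    """Product quadrature on the unit sphere: Gauss-Legendre nodes in the
    polar cosine mu times equally weighted, equally spaced azimuths
    (the 'Chebyshev' part). Weights sum to 4*pi."""
    mu, w_mu = np.polynomial.legendre.leggauss(n_polar)
    phi = (np.arange(n_azi) + 0.5) * 2.0 * np.pi / n_azi
    w_phi = np.full(n_azi, 2.0 * np.pi / n_azi)
    M = np.repeat(mu, n_azi)                 # polar cosine per direction
    P = np.tile(phi, n_polar)                # azimuth per direction
    w = np.outer(w_mu, w_phi).ravel()        # product weights
    s = np.sqrt(1.0 - M ** 2)
    omega = np.column_stack([s * np.cos(P), s * np.sin(P), M])
    return omega, w

omega, w = gauss_chebyshev_product(4, 8)
```

    The zeroth moment (total weight), the first moments (net current of an isotropic flux), and low-order second moments all come out exactly, which is the "similar integration properties" criterion the comparison above rests on.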

  15. Discrete ordinate quadrature selection for reactor-based eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarrell, J. J.; Evans, T. M.; Davidson, G. G.

    2013-07-01

    In this paper we analyze the effect of various quadrature sets on the eigenvalues of several reactor-based problems, including a two-dimensional (2D) fuel pin, a 2D lattice of fuel pins, and a three-dimensional (3D) reactor core problem. While many quadrature sets have been applied to neutral particle discrete ordinate transport calculations, the Level Symmetric (LS) and the Gauss-Chebyshev product (GC) sets are the most widely used in production-level reactor simulations. Other quadrature sets, such as Quadruple Range (QR) sets, have been shown to be more accurate in shielding applications. In this paper, we compare the LS, GC, QR, and the recently developed linear-discontinuous finite element (LDFE) sets, as well as give a brief overview of other proposed quadrature sets. We show that, for a given number of angles, the QR sets are more accurate than the LS and GC in all types of reactor problems analyzed (2D and 3D). We also show that the LDFE sets are more accurate than the LS and GC sets for these problems. We conclude that, for problems where tens to hundreds of quadrature points (directions) per octant are appropriate, QR sets should regularly be used because they have similar integration properties as the LS and GC sets, have no noticeable impact on the speed of convergence of the solution when compared with other quadrature sets, and yield more accurate results. We note that, for very high-order scattering problems, the QR sets exactly integrate fewer angular flux moments over the unit sphere than the GC sets. The effects of those inexact integrations have yet to be analyzed. We also note that the LDFE sets only exactly integrate the zeroth and first angular flux moments. Pin power comparisons and analyses are not included in this paper and are left for future work. (authors)

  16. Cross-Paradigm Simulation Modeling: Challenges and Successes

    DTIC Science & Technology

    2011-12-01

    is also highlighted. 2.1 Discrete-Event Simulation Discrete-event simulation (DES) is a modeling method for stochastic, dynamic models where...which almost anything can be coded; models can be incredibly detailed. Most commercial DES software has a graphical interface which allows the user to...results. Although the above definition is the commonly accepted definition of DES, there are two different worldviews that dominate DES modeling today: a

  17. CFD Analysis of Spray Combustion and Radiation in OMV Thrust Chamber

    NASA Technical Reports Server (NTRS)

    Giridharan, M. G.; Krishnan, A.; Przekwas, A. J.; Gross, K.

    1993-01-01

    The Variable Thrust Engine (VTE), developed by TRW, for the Orbit Maneuvering Vehicle (OMV) uses a hypergolic propellant combination of Monomethyl Hydrazine (MMH) and Nitrogen Tetroxide (NTO) as fuel and oxidizer, respectively. The propellants are pressure fed into the combustion chamber through a single pintle injection element. The performance of this engine is dependent on the pintle geometry and a number of complex physical phenomena and their mutual interactions. The most important among these are (1) atomization of the liquid jets into fine droplets; (2) the motion of these droplets in the gas field; (3) vaporization of the droplets (4) turbulent mixing of the fuel and oxidizer; and (5) hypergolic reaction between MMH and NTO. Each of the above phenomena by itself poses a considerable challenge to the technical community. In a reactive flow field of the kind occurring inside the VTE, the mutual interactions between these physical processes tend to further complicate the analysis. The objective of this work is to develop a comprehensive mathematical modeling methodology to analyze the flow field within the VTE. Using this model, the effect of flow parameters on various physical processes such as atomization, spray dynamics, combustion, and radiation is studied. This information can then be used to optimize design parameters and thus improve the performance of the engine. The REFLEQS CFD Code is used for solving the fluid dynamic equations. The spray dynamics is modeled using the Eulerian-Lagrangian approach. The discrete ordinate method with 12 ordinate directions is used to predict the radiative heat transfer in the OMV combustion chamber, nozzle, and the heat shield. The hypergolic reaction between MMH and NTO is predicted using an equilibrium chemistry model with 13 species. The results indicate that mixing and combustion is very sensitive to the droplet size. 
Smaller droplets evaporate faster than bigger droplets, leading to a well-mixed zone in the combustion chamber. The radiative heat fluxes at the combustion chamber and nozzle walls are an order of magnitude less than the conductive heat fluxes. Simulations performed with the heat shield show that a negligible amount of fluid is entrained into the heat shield region. However, the heat shield is shown to be effective in protecting the OMV structure surrounding the engine from the radiated heat.
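The discrete ordinate method used in several of the records above replaces the angular integral of the radiative transfer equation with a weighted sum of intensities over a fixed set of directions. A minimal one-dimensional sketch of the idea, assuming a two-point Gauss-Legendre quadrature per hemisphere (illustrative only, not the 12-ordinate set cited above) for an emitting-absorbing slab with vacuum boundaries:

```python
import math

# Two-point Gauss-Legendre ordinates on mu in (0, 1] -- an illustrative
# quadrature set, not the 12-direction set used in the study above.
MUS = [0.2113248654, 0.7886751346]
WTS = [0.5, 0.5]

def sweep(tau_total, n_cells, mu, planck=1.0):
    """March dI/dtau = (B - I)/mu across the slab with upwind differences."""
    dtau = tau_total / n_cells
    intensity = 0.0  # vacuum boundary: no incoming radiation
    for _ in range(n_cells):
        intensity += dtau / mu * (planck - intensity)
    return intensity

def emergent_flux(tau_total, n_cells=4000):
    """Angular quadrature: F = 2*pi * sum_m w_m * mu_m * I(mu_m)."""
    return 2.0 * math.pi * sum(w * mu * sweep(tau_total, n_cells, mu)
                               for mu, w in zip(MUS, WTS))
```

Even with only two ordinates per hemisphere the sum reproduces both limits of the exact solution: an optically thick slab radiates the blackbody flux pi*B, while an optically thin slab radiates roughly 2*pi*B*tau.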

  18. Transition from Target to Gaze Coding in Primate Frontal Eye Field during Memory Delay and Memory-Motor Transformation.

    PubMed

    Sajad, Amirsaman; Sadeh, Morteza; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas

    2016-01-01

    The frontal eye fields (FEFs) participate in both working memory and sensorimotor transformations for saccades, but their role in integrating these functions through time remains unclear. Here, we tracked FEF spatial codes through time using a novel analytic method applied to the classic memory-delay saccade task. Three-dimensional recordings of head-unrestrained gaze shifts were made in two monkeys trained to make gaze shifts toward briefly flashed targets after a variable delay (450-1500 ms). A preliminary analysis of visual and motor response fields in 74 FEF neurons eliminated most potential models for spatial coding at the neuron population level, as in our previous study (Sajad et al., 2015). We then focused on the spatiotemporal transition from an eye-centered target code (T; preferred in the visual response) to an eye-centered intended gaze position code (G; preferred in the movement response) during the memory delay interval. We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T-G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T-G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T-G delay codes to a "pure" G code in movement cells without delay activity. These results demonstrate that FEF activity undergoes a series of sensory-memory-motor transformations, including a dynamically evolving spatial memory signal and an imperfect memory-to-motor transformation.

  19. Recent Local and State Action in Arizona to Maintain Sky Quality

    NASA Astrophysics Data System (ADS)

    Hall, Jeffrey C.; Shankland, P. D.; Green, R. F.; Jannuzi, B.

    2014-01-01

The large number of observatories in Arizona has led to the development of a number of lighting control ordinances around the state, some quite strict. Several factors are now contributing to an increased need for active effort at the local, County, and State levels to maintain the quality of these codes; these factors include an expansion of competing interests in the state, the increasing use of LED lighting, and the potential for major new investments through projects such as the Cherenkov Telescope Array (CTA) and enhancements to the Navy Precision Optical Interferometer. I will review recent strategies Arizona's observatories have used to maintain ordinances and preserve sky quality; cases include (1) a statewide effort in 2012 to curb a proliferation of electronic billboards and (2) engagement of a broad group of local, County, and State officials, as well as individuals from the private sector, in support of projects like CTA, including awareness of and support for dark-sky preservation.

  20. Emergence of spike correlations in periodically forced excitable systems

    NASA Astrophysics Data System (ADS)

    Reinoso, José A.; Torrent, M. C.; Masoller, Cristina

    2016-09-01

    In sensory neurons the presence of noise can facilitate the detection of weak information-carrying signals, which are encoded and transmitted via correlated sequences of spikes. Here we investigate the relative temporal order in spike sequences induced by a subthreshold periodic input in the presence of white Gaussian noise. To simulate the spikes, we use the FitzHugh-Nagumo model and to investigate the output sequence of interspike intervals (ISIs), we use the symbolic method of ordinal analysis. We find different types of relative temporal order in the form of preferred ordinal patterns that depend on both the strength of the noise and the period of the input signal. We also demonstrate a resonancelike behavior, as certain periods and noise levels enhance temporal ordering in the ISI sequence, maximizing the probability of the preferred patterns. Our findings could be relevant for understanding the mechanisms underlying temporal coding, by which single sensory neurons represent in spike sequences the information about weak periodic stimuli.
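The ordinal analysis mentioned above symbolizes a time series by the rank order of consecutive values (the Bandt-Pompe construction), so that preferred patterns can be detected by counting permutation frequencies. A minimal sketch of that counting step applied to an ISI sequence, with illustrative function names:

```python
from collections import Counter
from itertools import permutations

def ordinal_pattern(window):
    """The permutation that sorts the window (Bandt-Pompe symbolization)."""
    return tuple(sorted(range(len(window)), key=lambda i: window[i]))

def pattern_probabilities(series, dim=3):
    """Relative frequency of each length-`dim` ordinal pattern in `series`."""
    counts = Counter(ordinal_pattern(series[i:i + dim])
                     for i in range(len(series) - dim + 1))
    total = sum(counts.values())
    return {p: counts.get(p, 0) / total for p in permutations(range(dim))}
```

A "preferred pattern" in the sense of the abstract is simply a permutation whose probability significantly exceeds the uniform value 1/dim!.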

  1. Improving Our Ability to Evaluate Underlying Mechanisms of Behavioral Onset and Other Event Occurrence Outcomes: A Discrete-Time Survival Mediation Model

    PubMed Central

    Fairchild, Amanda J.; Abara, Winston E.; Gottschall, Amanda C.; Tein, Jenn-Yun; Prinz, Ronald J.

    2015-01-01

    The purpose of this article is to introduce and describe a statistical model that researchers can use to evaluate underlying mechanisms of behavioral onset and other event occurrence outcomes. Specifically, the article develops a framework for estimating mediation effects with outcomes measured in discrete-time epochs by integrating the statistical mediation model with discrete-time survival analysis. The methodology has the potential to help strengthen health research by targeting prevention and intervention work more effectively as well as by improving our understanding of discretized periods of risk. The model is applied to an existing longitudinal data set to demonstrate its use, and programming code is provided to facilitate its implementation. PMID:24296470
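Discrete-time survival analysis of the kind integrated above typically begins by expanding each subject's event history into person-period records, to which a logistic hazard (and, in this framework, mediation) model can then be fit. A schematic sketch of that restructuring step with hypothetical field names; the article's own programming code is not reproduced here:

```python
def person_period(subjects):
    """Expand (id, event_time, censored) records into person-period rows.

    Each subject contributes one row per discrete period at risk; the
    binary outcome marks the period in which the event occurred, and a
    censored subject never receives an event row.
    """
    rows = []
    for sid, time, censored in subjects:
        for period in range(1, time + 1):
            event = int(period == time and not censored)
            rows.append({"id": sid, "period": period, "event": event})
    return rows
```

The resulting table is what standard logistic-regression software consumes to estimate the discrete-time hazard, with mediator and treatment columns added per the mediation model.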

  2. Efficient Polar Coding of Quantum Information

    NASA Astrophysics Data System (ADS)

    Renes, Joseph M.; Dupuis, Frédéric; Renner, Renato

    2012-08-01

Polar coding, introduced in 2008 by Arıkan, is the first (very) efficiently encodable and decodable coding scheme whose information transmission rate provably achieves the Shannon bound for classical discrete memoryless channels in the asymptotic limit of large block sizes. Here, we study the use of polar codes for the transmission of quantum information. Focusing on the case of qubit Pauli channels and qubit erasure channels, we use classical polar codes to construct a coding scheme that asymptotically achieves a net transmission rate equal to the coherent information using efficient encoding and decoding operations and code construction. Our codes generally require preshared entanglement between sender and receiver, but for channels with a sufficiently low noise level we demonstrate that the rate of preshared entanglement required is zero.
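The channel-combining step at the heart of classical polar coding is the linear transform x = u F^{⊗n} over GF(2), with kernel F = [[1,0],[1,1]]. A small recursive sketch of that transform (illustrative only; a real encoder also freezes the bit positions of low reliability, and the reliability ordering depends on the channel):

```python
def polar_transform(u):
    """Apply the Arikan kernel F^{(x)n} (no bit-reversal) to a bit list
    whose length is a power of two: [u1 u2] -> [(u1 xor u2) F, u2 F]."""
    if len(u) == 1:
        return list(u)
    half = len(u) // 2
    u1, u2 = u[:half], u[half:]
    left = polar_transform([a ^ b for a, b in zip(u1, u2)])
    right = polar_transform(u2)
    return left + right
```

Because F is its own inverse mod 2, applying the transform twice recovers the input, which makes round-trip checks of an implementation straightforward.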

  3. 77 FR 12098 - Self-Regulatory Organizations; Financial Industry Regulatory Authority, Inc.; Notice of Filing of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-28

    ... Change FINRA is proposing to amend FINRA Rule 14107 of the Code of Mediation Procedure (``Mediation Code'') to provide the Director of Mediation (``Mediation Director'') with discretion to determine whether parties to a FINRA mediation may select a mediator who is not on FINRA's mediator roster. The text of the...

  4. Introduction of Parallel GPGPU Acceleration Algorithms for the Solution of Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Godoy, William F.; Liu, Xu

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is a recent technique that allows the parallel graphics processing unit (GPU) to accelerate calculations performed sequentially by the central processing unit (CPU). To introduce GPGPU to radiative transfer, the Gauss-Seidel solution of the well-known expressions for 1-D and 3-D homogeneous, isotropic media is selected as a test case. Different algorithms are introduced to balance memory and GPU-CPU communication, critical aspects of GPGPU. Results show that speed-ups of one to two orders of magnitude are obtained when compared to sequential solutions. The underlying value of GPGPU is its potential extension in radiative solvers (e.g., Monte Carlo, discrete ordinates) at a minimal learning curve.
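The Gauss-Seidel iteration selected above as the test case updates each unknown using the freshest values of its neighbors, which is precisely what makes the naive loop sequential and motivates GPGPU reordering strategies. A scalar sketch on a small diagonally dominant system (illustrative, not the radiative transfer operator itself):

```python
def gauss_seidel(A, b, iters=100):
    """Solve A x = b by Gauss-Seidel sweeps. Each x[i] update reads the
    already-updated values of earlier unknowns in the same sweep, so the
    inner loop carries a sequential dependence."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

Parallel variants typically recover concurrency by coloring the unknowns so that same-color updates are independent, at the cost of a slightly different update order.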

  5. Development and validation of P-MODTRAN7 and P-MCScene, 1D and 3D polarimetric radiative transfer models

    NASA Astrophysics Data System (ADS)

    Hawes, Frederick T.; Berk, Alexander; Richtsmeier, Steven C.

    2016-05-01

A validated, polarimetric 3-dimensional simulation capability, P-MCScene, is being developed by generalizing Spectral Sciences' Monte Carlo-based synthetic scene simulation model, MCScene, to include calculation of all 4 Stokes components. P-MCScene polarimetric optical databases will be generated by a new version (MODTRAN7) of the government-standard MODTRAN radiative transfer algorithm. The conversion of MODTRAN6 to a polarimetric model is being accomplished by (1) introducing polarimetric data, (2) vectorizing the MODTRAN radiation calculations, and (3) integrating the newly revised and validated vector discrete ordinate model VDISORT3. Early results, presented here, demonstrate a clear pathway to the long-term goal of fully validated polarimetric models.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitzmann, D., E-mail: daniel.kitzmann@csh.unibe.ch

Carbon dioxide ice clouds are thought to play an important role for cold terrestrial planets with thick CO2-dominated atmospheres. Various previous studies showed that a scattering greenhouse effect by carbon dioxide ice clouds could result in a massive warming of the planetary surface. However, all of these studies only employed simplified two-stream radiative transfer schemes to describe the anisotropic scattering. Using accurate radiative transfer models with a general discrete ordinate method, this study revisits this important effect and shows that the positive climatic impact of carbon dioxide clouds was strongly overestimated in the past. The revised scattering greenhouse effect can have important implications for early Mars, but also for planets like the early Earth, and for the position of the outer boundary of the habitable zone.

  7. Revised users manual, Pulverized Coal Gasification or Combustion: 2-dimensional (87-PCGC-2): Final report, Volume 2. [87-PCGC-2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, P.J.; Smoot, L.D.; Brewster, B.S.

    1987-12-01

A two-dimensional, steady-state model for describing a variety of reactive and non-reactive flows, including pulverized coal combustion and gasification, is presented. Recent code revisions and additions are described. The model, referred to as 87-PCGC-2, is applicable to cylindrical axi-symmetric systems. Turbulence is accounted for in both the fluid mechanics equations and the combustion scheme. Radiation from gases, walls, and particles is taken into account using either a flux method or the discrete ordinates method. The particle phase is modeled in a Lagrangian framework, such that mean paths of particle groups are followed. Several multi-step coal devolatilization schemes are included, along with a heterogeneous reaction scheme that allows for both diffusion and chemical reaction. Major gas-phase reactions are modeled assuming local instantaneous equilibrium, and thus the reaction rates are limited by the turbulent mixing rate. A NOx finite-rate chemistry submodel is included which integrates chemical kinetics and the statistics of the turbulence. The gas phase is described by elliptic partial differential equations that are solved by an iterative line-by-line technique. Under-relaxation is used to achieve numerical stability. The generalized nature of the model allows for calculation of isothermal fluid mechanics, gaseous combustion, droplet combustion, particulate combustion, and various mixtures of the above, including combustion of coal-water and coal-oil slurries. Both combustion and gasification environments are permissible. User information and theory are presented, along with sample problems. 106 refs.

  8. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has first been transformed into a differential image by a differential pulse-code modulation (DPCM) algorithm. LZW compression after the DPCM image transformation performed the best on our example images, almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and the discrete wavelet transformation. In both methods, most of the image information is contained in relatively few of the transformation coefficients. The quantization step reduces many of the lower-order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
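The DPCM transformation described above replaces each sample with its difference from a predictor (here simply the previous sample), which concentrates smooth-image content near zero and makes the subsequent entropy coder far more effective. A sketch using zlib's DEFLATE (an LZ77-plus-Huffman scheme standing in for the LZW and Huffman coders named in the article) on a hypothetical slowly varying scanline:

```python
import math
import zlib

def dpcm_encode(samples):
    """Previous-sample predictor: store each value's difference mod 256."""
    prev, out = 0, bytearray()
    for s in samples:
        out.append((s - prev) % 256)
        prev = s
    return bytes(out)

def dpcm_decode(diffs):
    """Invert the predictor by accumulating the differences mod 256."""
    prev, out = 0, bytearray()
    for d in diffs:
        prev = (prev + d) % 256
        out.append(prev)
    return bytes(out)

# A slowly varying "scanline": raw values wander widely, differences hug zero.
scanline = bytes(int(128 + 100 * math.sin(i / 200.0)) % 256 for i in range(5000))
diffs = dpcm_encode(scanline)
```

Because DPCM is exactly invertible modulo 256, the scheme remains lossless, while the differenced stream compresses noticeably smaller than the raw one.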

  9. CosmosDG: An hp -adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anninos, Peter; Lau, Cheuk; Bryant, Colton

We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge–Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  10. Hybrid parallelization of the XTOR-2F code for the simulation of two-fluid MHD instabilities in tokamaks

    NASA Astrophysics Data System (ADS)

    Marx, Alain; Lütjens, Hinrich

    2017-03-01

A hybrid MPI/OpenMP parallel version of the XTOR-2F code [Lütjens and Luciani, J. Comput. Phys. 229 (2010) 8130] solving the two-fluid MHD equations in full tokamak geometry by means of an iterative Newton-Krylov matrix-free method has been developed. The present work shows that the code has been parallelized significantly despite the numerical profile of the problem solved by XTOR-2F, i.e. a discretization with pseudo-spectral representations in all angular directions, the stiffness of the two-fluid stability problem in tokamaks, and the use of a direct LU decomposition to invert the physical pre-conditioner at every Krylov iteration of the solver. The execution time of the parallelized version is an order of magnitude smaller than that of the sequential one for low-resolution cases, with an increasing speedup when the discretization mesh is refined. Moreover, it allows simulations at higher resolutions, previously impossible because of memory limitations.

  11. CosmosDG: An hp-adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    NASA Astrophysics Data System (ADS)

    Anninos, Peter; Bryant, Colton; Fragile, P. Chris; Holgado, A. Miguel; Lau, Cheuk; Nemergut, Daniel

    2017-08-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge-Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  12. Neoclassical simulation of tokamak plasmas using the continuum gyrokinetic code TEMPEST.

    PubMed

    Xu, X Q

    2008-07-01

We present gyrokinetic neoclassical simulations of tokamak plasmas with a self-consistent electric field using a fully nonlinear (full-f) continuum code TEMPEST in a circular geometry. A set of gyrokinetic equations are discretized on a five-dimensional computational grid in phase space. The present implementation is a method of lines approach where the phase-space derivatives are discretized with finite differences, and implicit backward differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With a four-dimensional (psi, theta, epsilon, mu) version of the TEMPEST code, we compute the radial particle and heat fluxes, the geodesic-acoustic mode, and the development of the neoclassical electric field, which we compare with neoclassical theory using a Lorentz collision model. The present work provides a numerical scheme for self-consistently studying important dynamical aspects of neoclassical transport and electric field in toroidal magnetic fusion devices.

  13. Neoclassical simulation of tokamak plasmas using the continuum gyrokinetic code TEMPEST

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.

    2008-07-01

We present gyrokinetic neoclassical simulations of tokamak plasmas with a self-consistent electric field using a fully nonlinear (full-f) continuum code TEMPEST in a circular geometry. A set of gyrokinetic equations are discretized on a five-dimensional computational grid in phase space. The present implementation is a method of lines approach where the phase-space derivatives are discretized with finite differences, and implicit backward differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With a four-dimensional (ψ,θ,γ,μ) version of the TEMPEST code, we compute the radial particle and heat fluxes, the geodesic-acoustic mode, and the development of the neoclassical electric field, which we compare with neoclassical theory using a Lorentz collision model. The present work provides a numerical scheme for self-consistently studying important dynamical aspects of neoclassical transport and electric field in toroidal magnetic fusion devices.

  14. RINGMesh: A programming library for developing mesh-based geomodeling applications

    NASA Astrophysics Data System (ADS)

    Pellerin, Jeanne; Botella, Arnaud; Bonneau, François; Mazuyer, Antoine; Chauvin, Benjamin; Lévy, Bruno; Caumon, Guillaume

    2017-07-01

RINGMesh is a C++ open-source programming library for manipulating discretized geological models. It is designed to ease the development of applications and workflows that use discretized 3D models. It is neither a geomodeler nor a meshing software. RINGMesh implements functionalities to read discretized surface-based or volumetric structural models and to check their validity. The models can then be exported in various file formats. RINGMesh provides data structures to represent geological structural models, defined either by their discretized boundary surfaces and/or by discretized volumes. A programming interface allows the development of new geomodeling methods and the plugging in of external software. The goal of RINGMesh is to help researchers focus on the implementation of their specific method rather than on tedious tasks common to many applications. The documented code is open-source and distributed under the modified BSD license. It is available at https://www.ring-team.org/index.php/software/ringmesh.

  15. Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters

    NASA Astrophysics Data System (ADS)

    Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi

A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization-step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of different coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization-step sizes. This feature is not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero-bit-planes that can be extracted from the JPEG 2000 codestream by only parsing the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.

  16. Transition from Target to Gaze Coding in Primate Frontal Eye Field during Memory Delay and Memory–Motor Transformation123

    PubMed Central

    Sajad, Amirsaman; Sadeh, Morteza; Yan, Xiaogang; Wang, Hongying

    2016-01-01

    Abstract The frontal eye fields (FEFs) participate in both working memory and sensorimotor transformations for saccades, but their role in integrating these functions through time remains unclear. Here, we tracked FEF spatial codes through time using a novel analytic method applied to the classic memory-delay saccade task. Three-dimensional recordings of head-unrestrained gaze shifts were made in two monkeys trained to make gaze shifts toward briefly flashed targets after a variable delay (450-1500 ms). A preliminary analysis of visual and motor response fields in 74 FEF neurons eliminated most potential models for spatial coding at the neuron population level, as in our previous study (Sajad et al., 2015). We then focused on the spatiotemporal transition from an eye-centered target code (T; preferred in the visual response) to an eye-centered intended gaze position code (G; preferred in the movement response) during the memory delay interval. We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T–G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T–G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T–G delay codes to a “pure” G code in movement cells without delay activity. These results demonstrate that FEF activity undergoes a series of sensory–memory–motor transformations, including a dynamically evolving spatial memory signal and an imperfect memory-to-motor transformation. PMID:27092335

  17. Discrete space charge affected field emission: Flat and hemisphere emitters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Kevin L., E-mail: kevin.jensen@nrl.navy.mil; Shiffler, Donald A.; Tang, Wilkin

Models of space-charge affected thermal-field emission from protrusions, able to incorporate the effects of both surface roughness and elongated field emitter structures in beam optics codes, are desirable but difficult. The models proposed here treat the meso-scale diode region separately from the micro-scale regions characteristic of the emission sites. The consequences of discrete emission events are given for both one-dimensional (sheets of charge) and three-dimensional (rings of charge) models: in the former, results converge to the steady-state conditions found by theory (e.g., Rokhlenko et al. [J. Appl. Phys. 107, 014904 (2010)]) but show oscillatory structure as they do. Surface roughness or geometric features are handled using a ring-of-charge model, from which the image charges are found and used to modify the apex field and emitted current. The roughness model is shown to have additional constraints related to the discrete nature of electron charge. The ability of a unit cell model to treat field emitter structures and incorporate surface roughness effects inside a beam optics code is assessed.

  18. Coupled discrete element and finite volume solution of two classical soil mechanics problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Feng; Drumm, Eric; Guiochon, Georges A

One-dimensional solutions for the classic critical upward seepage gradient/quick condition and the time rate of consolidation problems are obtained using coupled routines for the finite volume method (FVM) and discrete element method (DEM), and the results are compared with the analytical solutions. The two-phase flow in a system composed of fluid and solid is simulated with the fluid phase modeled by solving the averaged Navier-Stokes equation using the FVM and the solid phase modeled using the DEM. A framework is described for the coupling of two open source computer codes: YADE-OpenDEM for the discrete element method and OpenFOAM for the computational fluid dynamics. The particle-fluid interaction is quantified using a semi-empirical relationship proposed by Ergun [12]. The two classical verification problems are used to explore issues encountered when using coupled flow DEM codes, namely, the appropriate time step size for both the fluid and mechanical solution processes, the choice of the viscous damping coefficient, and the number of solid particles per finite fluid volume.
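The Ergun relation used above for the particle-fluid interaction combines a viscous (Blake-Kozeny) and an inertial (Burke-Plummer) contribution to the pressure gradient through a packed bed. A sketch of the standard Ergun (1952) form, with illustrative water-like default properties:

```python
def ergun_pressure_gradient(velocity, porosity, diameter,
                            mu=1.0e-3, rho=1000.0):
    """Ergun packed-bed pressure gradient [Pa/m] for superficial velocity
    [m/s], bed porosity [-], and particle diameter [m]; mu and rho are
    illustrative water-like defaults (Pa*s, kg/m^3)."""
    eps = porosity
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * velocity / (eps ** 3 * diameter ** 2)
    inertial = 1.75 * (1.0 - eps) * rho * velocity ** 2 / (eps ** 3 * diameter)
    return viscous + inertial
```

At the small seepage velocities relevant to the quick-condition problem the linear viscous term dominates; the quadratic inertial term only matters at higher flow rates.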

  19. Computation of Steady and Unsteady Laminar Flames: Theory

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas; Radhakrishnan, Krishnan; Zhou, Ruhai

    1999-01-01

    In this paper we describe the numerical analysis underlying our efforts to develop an accurate and reliable code for simulating flame propagation using complex physical and chemical models. We discuss our spatial and temporal discretization schemes, which in our current implementations range in order from two to six. In space we use staggered meshes to define discrete divergence and gradient operators, allowing us to approximate complex diffusion operators while maintaining ellipticity. Our temporal discretization is based on the use of preconditioning to produce a highly efficient linearly implicit method with good stability properties. High order for time accurate simulations is obtained through the use of extrapolation or deferred correction procedures. We also discuss our techniques for computing stationary flames. The primary issue here is the automatic generation of initial approximations for the application of Newton's method. We use a novel time-stepping procedure, which allows the dynamic updating of the flame speed and forces the flame front towards a specified location. Numerical experiments are presented, primarily for the stationary flame problem. These illustrate the reliability of our techniques, and the dependence of the results on various code parameters.
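The staggered-mesh construction described above defines gradients at cell faces and divergences back at cell centers, so that their composition reproduces an elliptic diffusion operator (in 1D, the classic three-point Laplacian). A minimal one-dimensional sketch of that pairing:

```python
def face_gradient(u, dx):
    """Gradient of a cell-centered field, defined at the interior faces."""
    return [(u[i + 1] - u[i]) / dx for i in range(len(u) - 1)]

def cell_divergence(f, dx):
    """Divergence of a face-centered field, back at the interior cells."""
    return [(f[i + 1] - f[i]) / dx for i in range(len(f) - 1)]

# Composing the two operators recovers the three-point Laplacian stencil:
# for u(x) = x^2 on a unit grid, the discrete Laplacian is exactly 2.
u = [0.0, 1.0, 4.0, 9.0, 16.0]
lap = cell_divergence(face_gradient(u, 1.0), 1.0)
```

The same pairing generalizes to variable-coefficient diffusion by scaling the face gradients before taking the divergence, which is how ellipticity is preserved for the complex diffusion operators mentioned above.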

  20. Matrix-Free Polynomial-Based Nonlinear Least Squares Optimized Preconditioning and its Application to Discontinuous Galerkin Discretizations of the Euler Equations

    DTIC Science & Technology

    2015-06-01

    cient parallel code for applying the operator. Our method constructs a polynomial preconditioner using a nonlinear least squares (NLLS) algorithm. We show...apply the underlying operator. Such a preconditioner can be very attractive in scenarios where one has a highly efficient parallel code for applying...repeatedly solve a large system of linear equations where one has an extremely fast parallel code for applying an underlying fixed linear operator

  1. Radiative transfer analyses of Titan's tropical atmosphere

    NASA Astrophysics Data System (ADS)

    Griffith, Caitlin A.; Doose, Lyn; Tomasko, Martin G.; Penteado, Paulo F.; See, Charles

    2012-04-01

Titan's optical and near-IR spectra result primarily from the scattering of sunlight by haze and its absorption by methane. With a column abundance of 92 km amagat (11 times that of Earth), Titan's atmosphere is optically thick and only ˜10% of the incident solar radiation reaches the surface, compared to 57% on Earth. Such a formidable atmosphere obstructs investigations of the moon's lower troposphere and surface, which are highly sensitive to the radiative transfer treatment of methane absorption and haze scattering. The absorption and scattering characteristics of Titan's atmosphere have been constrained by the Huygens Probe Descent Imager/Spectral Radiometer (DISR) experiment for conditions at the probe landing site (Tomasko, M.G., Bézard, B., Doose, L., Engel, S., Karkoschka, E. [2008a]. Planet. Space Sci. 56, 624-647; Tomasko, M.G. et al. [2008b]. Planet. Space Sci. 56, 669-707). Cassini's Visual and Infrared Mapping Spectrometer (VIMS) data indicate that the rest of the atmosphere (except for the polar regions) can be understood with small perturbations in the high haze structure determined at the landing site (Penteado, P.F., Griffith, C.A., Tomasko, M.G., Engel, S., See, C., Doose, L., Baines, K.H., Brown, R.H., Buratti, B.J., Clark, R., Nicholson, P., Sotin, C. [2010]. Icarus 206, 352-365). However the in situ measurements were analyzed with a doubling and adding radiative transfer calculation that differs considerably from the discrete ordinates codes used to interpret remote data from Cassini and ground-based measurements. In addition, the calibration of the VIMS data with respect to the DISR data has not yet been tested. Here, VIMS data of the probe landing site are analyzed with the DISR radiative transfer method and the faster discrete ordinates radiative transfer calculation; both models are consistent (to within 0.3%) and reproduce the scattering and absorption characteristics derived from in situ measurements.
Constraints on the atmospheric opacity at wavelengths outside those measured by DISR, that is from 1.6 to 5.0 μm, are derived using clouds as diffuse reflectors in order to determine Titan's surface albedo to within a few percent error and cloud altitudes to within 5 km error. VIMS spectra of Titan at 2.6-3.2 μm indicate not only spectral features due to CH4 and CH3D (Rannou, P., Cours, T., Le Mouélic, S., Rodriguez, S., Sotin, C., Drossart, P., Brown, R. [2010]. Icarus 208, 850-867), but also a fairly uniform absorption of unknown source, equivalent to the effects of a darkening of the haze to a single scattering albedo of 0.63 ± 0.05. Titan's 4.8 μm spectrum points to a haze optical depth of 0.2 at that wavelength. Cloud spectra at 2 μm indicate that the far wings of the Voigt profile extend 460 cm⁻¹ from methane line centers. This paper releases the doubling and adding radiative transfer code developed by the DISR team, so that future studies of Titan's atmosphere and surface are consistent with the findings by the Huygens Probe. We derive the surface albedo at eight spectral regions of the 8 × 12 km2 area surrounding the Huygens landing site. Within the 0.4-1.6 μm spectral region our surface albedos match DISR measurements, indicating that DISR and VIMS measurements are consistently calibrated. These values together with albedos at longer 1.9-5.0 μm wavelengths, not sampled by DISR, resemble a dark version of the spectrum of Ganymede's icy leading hemisphere. The eight surface albedos of the landing site are consistent with, but not deterministic of, exposed water ice with dark impurities.
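The doubling-and-adding approach released with this work builds a thick layer by repeatedly combining identical thin layers. A minimal scalar sketch (a two-stream-like toy with hypothetical reflectance `r` and transmittance `t`, not the DISR team's full vector code) illustrates the doubling step and its energy bookkeeping:

```python
# Toy scalar "doubling" step: combine two identical homogeneous layers with
# reflectance r and transmittance t (conservative scattering, so r + t = 1).
def double(r, t):
    denom = 1.0 - r * r              # sums the infinite series of inter-layer bounces
    return r + t * r * t / denom, t * t / denom

r, t = 0.1, 0.9                      # hypothetical thin starting layer
for _ in range(10):                  # 10 doublings -> 1024x the initial thickness
    r, t = double(r, t)
assert abs(r + t - 1.0) < 1e-9       # energy is conserved at every doubling
```

For a conservative layer the combined reflectance grows toward 1 as optical thickness builds up, which is the qualitative behaviour the full code reproduces wavelength by wavelength.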

  2. A discrete Fourier transform for virtual memory machines

    NASA Technical Reports Server (NTRS)

    Galant, David C.

    1992-01-01

An algebraic theory of the Discrete Fourier Transform is developed in great detail. Examination of the details of the theory leads to a computationally efficient fast Fourier transform for use on computers with virtual memory. Such an algorithm is of great use on modern desktop machines. A FORTRAN-coded version of the algorithm is given for the case when the length of the sequence of numbers to be transformed is a power of two.
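Virtual-memory-friendly FFTs typically factor a long transform into short transforms over contiguous blocks plus a transpose. The sketch below implements the classic four-step factorization, which is in the same spirit as, though not necessarily identical to, the algorithm described here; it leans on NumPy for the short FFTs:

```python
import numpy as np

def four_step_fft(x, n1, n2):
    """DFT of length n1*n2 via the four-step factorization: each pass touches
    the data in large contiguous blocks, which is friendly to paged memory."""
    a = np.asarray(x, dtype=complex).reshape(n1, n2)
    b = np.fft.fft(a, axis=0)                           # step 1: n2 FFTs of length n1
    rows = np.arange(n1).reshape(n1, 1)
    cols = np.arange(n2).reshape(1, n2)
    b *= np.exp(-2j * np.pi * rows * cols / (n1 * n2))  # step 2: twiddle factors
    c = np.fft.fft(b, axis=1)                           # step 3: n1 FFTs of length n2
    return c.T.reshape(-1)                              # step 4: transpose to natural order

x = np.random.default_rng(0).standard_normal(64)
assert np.allclose(four_step_fft(x, 8, 8), np.fft.fft(x))
```

The factorization works for any composite length, not just powers of two; the power-of-two case simply makes the short FFTs especially cheap.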

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiley, J.C.

The author describes a general `hp` finite element method with adaptive grids. The code is based on the work of Oden et al. The term `hp` refers to the combination of spatial mesh refinement (h) with the order of the polynomials used in the finite element discretization (p). The code appears to handle well the differing mesh sizes that occur where abutting grids of different resolutions meet.

  4. Visual Tracking via Sparse and Local Linear Coding.

    PubMed

    Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan

    2015-11-01

State search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably among the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The algorithm is formulated as an optimization problem that can be efficiently solved by either convex sparse coding or locality-constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to achieve an efficient search mechanism and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against state-of-the-art methods in dynamic scenes.

  5. Direct Discrete Method for Neutronic Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vosoughi, Naser; Akbar Salehi, Ali; Shahriari, Majid

The objective of this paper is to introduce a new direct method for neutronic calculations. This method, named the Direct Discrete Method, is simpler than solving the neutron transport equation and more compatible with the physical meaning of the problem. It is based on the physics of the problem: by meshing the desired geometry, writing a balance equation for each mesh interval, and accounting for the coupling between adjacent intervals, it produces the final series of discrete equations directly, without deriving the neutron transport differential equation or passing through that differential-equation bridge. We have produced the neutron discrete equations for a cylindrical geometry with two boundary conditions in one energy group. The correctness of the results is verified against the MCNP-4B code. (authors)
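The flavor of the approach, writing a balance over each mesh interval and coupling adjacent intervals, can be illustrated with a much simpler 1-D diffusion analogue (not the paper's one-group cylindrical formulation):

```python
import numpy as np

# Minimal 1-D balance-equation sketch: -D phi'' = S on [0, L] with
# phi(0) = phi(L) = 0. Balancing leakage against the source over each
# interval yields a tridiagonal system; the exact solution is quadratic,
# so the discrete answer matches it to round-off.
D, S, L, n = 1.0, 4.0, 1.0, 50
h = L / n
x = np.linspace(0.0, L, n + 1)
A = np.zeros((n - 1, n - 1))
np.fill_diagonal(A, 2 * D / h**2)        # self term of each interior node
np.fill_diagonal(A[1:], -D / h**2)       # coupling to the left neighbour
np.fill_diagonal(A[:, 1:], -D / h**2)    # coupling to the right neighbour
phi = np.zeros(n + 1)
phi[1:-1] = np.linalg.solve(A, np.full(n - 1, S))
exact = S / (2 * D) * x * (L - x)
assert np.allclose(phi, exact)
```

The point of the method described above is that such coupled balance equations are written down directly from the geometry, with no detour through the differential transport equation.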

  6. A Bell-Curved Based Algorithm for Mixed Continuous and Discrete Structural Optimization

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.; Weber, Michael; Sobieszczanski-Sobieski, Jaroslaw

    2001-01-01

An evolution-based strategy utilizing two normal distributions to generate children is developed to solve mixed integer nonlinear programming problems. This Bell-Curve Based (BCB) evolutionary algorithm is similar in spirit to (mu + mu) evolutionary strategies and evolutionary programs but with fewer parameters to adjust and no mechanism for self-adaptation. First, a new version of BCB to solve purely discrete optimization problems is described and its performance tested against a tabu search code for an actuator placement problem. Next, the performance of a combined discrete and continuous version of BCB is tested on 2-dimensional shape problems and on a minimum weight hub design problem. In the latter case the discrete portion is the choice of the underlying beam shape (I, triangular, circular, rectangular, or U).

  7. BRYNTRN: A baryon transport model

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Nealy, John E.; Chun, Sang Y.; Hong, B. S.; Buck, Warren W.; Lamkin, S. L.; Ganapol, Barry D.; Khan, Ferdous; Cucinotta, Francis A.

    1989-01-01

The development of an interaction database and a numerical solution to the transport of baryons through an arbitrary shield material, based on a straight-ahead approximation of the Boltzmann equation, are described. The code is most accurate for continuous energy boundary values, but gives reasonable results for discrete spectra at the boundary even with a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O). The resulting computer code is self-contained, efficient, and ready to use. The code requires only a very small fraction of the computer resources required by Monte Carlo codes.
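In the straight-ahead approximation, particles travel along a single direction, so the transport of the primary beam reduces to an ordinary differential equation in depth. A toy sketch (hypothetical macroscopic cross-section, primaries only, no secondary-production source term) marches the attenuation and checks it against the exponential solution:

```python
import math

# dphi/dx = -sigma * phi : attenuation of a monoenergetic primary beam
# along its path; sigma and x_max are hypothetical illustrative values.
sigma, x_max, steps = 0.5, 10.0, 100_000
phi, dx = 1.0, x_max / steps
for _ in range(steps):
    phi -= sigma * phi * dx                # forward-Euler marching step
assert abs(phi - math.exp(-sigma * x_max)) < 1e-3
```

BRYNTRN itself also carries collision source terms that feed secondary baryons back into the beam; this sketch keeps only the loss term.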

  8. Code CUGEL: A code to unfold Ge(Li) spectrometer polyenergetic gamma photon experimental distributions

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Born, U.

    1970-01-01

A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma photon experimental distributions from lithium-drifted germanium [Ge(Li)] semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to the monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.
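The central operation, applying the detector response matrix to relate true and measured distributions, can be shown with a toy two-channel example (the response matrix below is hypothetical, not CUGEL's):

```python
import numpy as np

# measured = R @ true, where R mixes full-energy and scattered counts
# between channels; unfolding recovers the true spectrum from the response.
R = np.array([[0.9, 0.2],
              [0.1, 0.8]])               # hypothetical 2x2 response matrix
true = np.array([100.0, 50.0])           # "true" photon counts per channel
measured = R @ true
recovered = np.linalg.solve(R, measured)
assert np.allclose(recovered, true)
```

In practice the monoenergetic components are stripped discretely and the continuum is treated iteratively, as the abstract describes, rather than by one dense solve.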

  9. Is Best-Worst Scaling Suitable for Health State Valuation? A Comparison with Discrete Choice Experiments.

    PubMed

    Krucien, Nicolas; Watson, Verity; Ryan, Mandy

    2017-12-01

Health utility indices (HUIs) are widely used in economic evaluation. The best-worst scaling (BWS) method is being used to value dimensions of HUIs. However, little is known about the properties of this method. This paper investigates the validity of the BWS method for developing HUIs, comparing it to another ordinal valuation method, the discrete choice experiment (DCE). Using a parametric approach, we find a low level of concordance between the two methods, with evidence of preference reversals. BWS responses are subject to decision biases, with significant effects on individuals' preferences. Non-parametric tests indicate that BWS data have lower stability, monotonicity, and continuity compared with DCE data, suggesting that BWS provides lower-quality data. As a consequence, for both theoretical and technical reasons, practitioners should be cautious both about using the BWS method to measure health-related preferences, and about using HUIs based on BWS data. Given existing evidence, the DCE method appears to be the better method, not least because its limitations (and measurement properties) have been extensively researched. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Multidimensional incremental parsing for universal source coding.

    PubMed

    Bae, Soo Hyun; Juang, Biing-Hwang

    2008-10-01

A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes: maximum decimation matching, a hierarchical structure of multidimensional source coding, and dictionary augmentation. As a counterpart of the longest match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. Also, the underlying behavior of the dictionary augmentation scheme for estimating the source statistics is examined. For an m-dimensional source, m augmentative patches are appended to the dictionary at each coding epoch, thus requiring the transmission of a substantial amount of information to the decoder. The hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower dimensional coding procedures in the scheme. In regard to universal lossy source coders, we propose two distortion functions, the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP; one is lossless and the others are lossy. The lossless image compression algorithm does not perform better than Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, whereas the images produced with the local minimax distortion have good perceptual fidelity compared with other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.
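The baseline being generalized is classic 1-D incremental (LZ78-style) parsing, in which each new phrase extends the longest previously seen phrase by one symbol. A minimal sketch of that 1-D baseline (not the paper's multidimensional algorithm):

```python
def lz78_parse(s):
    """Incrementally parse s into (dictionary-index, next-symbol) pairs."""
    table = {"": 0}
    phrases, w = [], ""
    for ch in s:
        if w + ch in table:
            w += ch                      # keep extending the current match
        else:
            phrases.append((table[w], ch))
            table[w + ch] = len(table)   # augment the dictionary
            w = ""
    if w:                                # flush a trailing partial match
        phrases.append((table[w[:-1]], w[-1]))
    return phrases

def lz78_decode(phrases):
    table, out = [""], []
    for idx, ch in phrases:
        out.append(table[idx] + ch)      # phrase = earlier phrase + one symbol
        table.append(out[-1])
    return "".join(out)

assert lz78_decode(lz78_parse("abracadabra")) == "abracadabra"
```

The MDIP generalizes exactly this longest-match-plus-augmentation loop to m-dimensional patches, which is why m patches must be appended per coding epoch.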

  11. Achievable Information Rates for Coded Modulation With Hard Decision Decoding for Coherent Fiber-Optic Systems

    NASA Astrophysics Data System (ADS)

    Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi

    2017-12-01

We analyze the achievable information rates (AIRs) for coded modulation schemes with QAM constellations with both bit-wise and symbol-wise decoders, corresponding to the case where a binary code is used in combination with a higher-order modulation using the bit-interleaved coded modulation (BICM) paradigm and to the case where a nonbinary code over a field matched to the constellation size is used, respectively. In particular, we consider hard decision decoding, which is the preferable option for fiber-optic communication systems where decoding complexity is a concern. Recently, Liga et al. analyzed the AIRs for bit-wise and symbol-wise decoders considering what the authors called a hard decision decoder which, however, exploits soft information from the transition probabilities of the discrete-input discrete-output channel resulting from hard detection. As such, the complexity of the decoder is essentially the same as that of a soft decision decoder. In this paper, we analyze instead the AIRs for the standard hard decision decoder, commonly used in practice, where decoding is based on the Hamming distance metric. We show that if standard hard decision decoding is used, bit-wise decoders yield significantly higher AIRs than symbol-wise decoders. As a result, contrary to the conclusion by Liga et al., binary decoders together with the BICM paradigm are preferable for spectrally efficient fiber-optic systems. We also design binary and nonbinary staircase codes and show that, in agreement with the AIRs, binary codes yield better performance.
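For intuition about the bit-wise hard decision case: each bit level of a BICM scheme sees (approximately) a binary symmetric channel, and the Hamming-metric benchmark rate is the familiar 1 − h₂(p). A small sketch (illustrative only; the paper's AIR computations for QAM constellations are more involved):

```python
import math

def bsc_rate(p):
    """1 - h2(p): achievable rate of a binary symmetric channel with
    crossover probability p, the benchmark for hard decision decoding."""
    h2 = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h2

assert bsc_rate(0.5) == 0.0              # a fully noisy channel carries nothing
assert abs(bsc_rate(0.11) - 0.5) < 0.01  # h2(0.11) is very close to 1/2
```

Summing such per-level rates over the bit levels of the constellation is the spirit of the bit-wise AIR that the paper shows dominating the symbol-wise one under standard hard decision decoding.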

  12. MESHMAKER (MM) V1.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MORIDIS, GEORGE

    2016-05-02

MeshMaker v1.5 is a code that describes the system geometry and discretizes the domain in problems of flow and transport through porous and fractured media that are simulated using the TOUGH+ [Moridis and Pruess, 2014] or TOUGH2 [Pruess et al., 1999; 2012] families of codes. It is a significantly modified and drastically enhanced version of an earlier, simpler facility that was embedded in the TOUGH2 codes [Pruess et al., 1999; 2012], from which it could not be separated. The code (MeshMaker.f90) is a stand-alone product written in FORTRAN 95/2003 according to the tenets of Object-Oriented Programming; it has a modular structure and can perform a number of mesh generation and processing operations. It can generate two-dimensional radially symmetric (r,z) meshes, and one-, two-, and three-dimensional rectilinear (Cartesian) grids in (x,y,z). The code generates the file MESH, which includes all the elements and connections that describe the discretized simulation domain, conforming to the requirements of the TOUGH+ and TOUGH2 codes. Multiple-porosity processing for simulation of flow in naturally fractured reservoirs can be invoked by means of the keyword MINC, which stands for Multiple INteracting Continua. The MINC process operates on the data of the primary (porous medium) mesh as provided on disk file MESH, and generates a secondary mesh containing fracture and matrix elements with identical data formats on file MINC.
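A hypothetical, much-simplified sketch of the basic bookkeeping such a mesh generator performs for a 1-D Cartesian grid: element volumes plus face connections with centre-to-face distances (the record layout below is illustrative and is not the actual MESH file format):

```python
def cartesian_mesh_1d(dx_list, area=1.0):
    """Build element and connection records for a 1-D grid of cell widths."""
    elements = [{"name": f"E{i:03d}", "volume": dx * area}
                for i, dx in enumerate(dx_list)]
    connections = [{"pair": (f"E{i:03d}", f"E{i+1:03d}"),
                    "d1": dx_list[i] / 2,        # centre-to-interface distances,
                    "d2": dx_list[i + 1] / 2,    # as finite-volume codes require
                    "area": area}
                   for i in range(len(dx_list) - 1)]
    return elements, connections

els, cons = cartesian_mesh_1d([1.0, 2.0, 2.0])
assert len(els) == 3 and len(cons) == 2
assert cons[0]["d1"] == 0.5 and cons[0]["d2"] == 1.0
```

The real code additionally handles (r,z) and full 3-D grids, and the MINC step rewrites these primary records into coupled fracture and matrix continua.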

  13. Designing for Compressive Sensing: Compressive Art, Camouflage, Fonts, and Quick Response Codes

    DTIC Science & Technology

    2018-01-01

an example where the signal is non-sparse in the standard basis, but sparse in the discrete cosine basis. The top plot shows the signal from the… previous example, now used as sparse discrete cosine transform (DCT) coefficients. The next plot shows the non-sparse signal in the standard… Romberg JK, Tao T. Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math. 2006;59(8):1207–1223. 3. Donoho DL

  14. Enhanced verification test suite for physics simulation codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, James R.; Brock, Jerry S.; Brandon, Scott T.

    2008-09-01

This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations.

  15. The emergence of temporal language in Nicaraguan Sign Language

    PubMed Central

    Kocab, Annemarie; Senghas, Ann; Snedeker, Jesse

    2016-01-01

    Understanding what uniquely human properties account for the creation and transmission of language has been a central goal of cognitive science. Recently, the study of emerging sign languages, such as Nicaraguan Sign Language (NSL), has offered the opportunity to better understand how languages are created and the roles of the individual learner and the community of users. Here, we examined the emergence of two types of temporal language in NSL, comparing the linguistic devices for conveying temporal information among three sequential age cohorts of signers. Experiment 1 showed that while all three cohorts of signers could communicate about linearly ordered discrete events, only the second and third generations of signers successfully communicated information about events with more complex temporal structure. Experiment 2 showed that signers could discriminate between the types of temporal events in a nonverbal task. Finally, Experiment 3 investigated the ordinal use of numbers (e.g., first, second) in NSL signers, indicating that one strategy younger signers might have for accurately describing events in time might be to use ordinal numbers to mark each event. While the capacity for representing temporal concepts appears to be present in the human mind from the onset of language creation, the linguistic devices to convey temporality do not appear immediately. Evidently, temporal language emerges over generations of language transmission, as a product of individual minds interacting within a community of users. PMID:27591549

  16. The emergence of temporal language in Nicaraguan Sign Language.

    PubMed

    Kocab, Annemarie; Senghas, Ann; Snedeker, Jesse

    2016-11-01

    Understanding what uniquely human properties account for the creation and transmission of language has been a central goal of cognitive science. Recently, the study of emerging sign languages, such as Nicaraguan Sign Language (NSL), has offered the opportunity to better understand how languages are created and the roles of the individual learner and the community of users. Here, we examined the emergence of two types of temporal language in NSL, comparing the linguistic devices for conveying temporal information among three sequential age cohorts of signers. Experiment 1 showed that while all three cohorts of signers could communicate about linearly ordered discrete events, only the second and third generations of signers successfully communicated information about events with more complex temporal structure. Experiment 2 showed that signers could discriminate between the types of temporal events in a nonverbal task. Finally, Experiment 3 investigated the ordinal use of numbers (e.g., first, second) in NSL signers, indicating that one strategy younger signers might have for accurately describing events in time might be to use ordinal numbers to mark each event. While the capacity for representing temporal concepts appears to be present in the human mind from the onset of language creation, the linguistic devices to convey temporality do not appear immediately. Evidently, temporal language emerges over generations of language transmission, as a product of individual minds interacting within a community of users. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Generating Multivariate Ordinal Data via Entropy Principles.

    PubMed

    Lee, Yen; Kaplan, David

    2018-03-01

When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been developed for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified directly but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness focusing not only on the levels of non-normality but also on the variations in distribution shape. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of the estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust [Formula: see text] and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.
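The idea can be sketched in miniature: a maximum-entropy distribution on an ordinal support subject to moment constraints is an exponential family whose parameters are fitted numerically. The toy below imposes only a mean constraint (the paper's procedures constrain skewness and kurtosis as well):

```python
import numpy as np

# Maximum-entropy distribution on the ordinal support {1..5} with a fixed
# mean: p_i is proportional to exp(lam * i), and lam is found by bisection
# (mean(lam) is monotone increasing) so the constraint is met exactly.
support = np.arange(1, 6)
target_mean = 3.7

def mean_for(lam):
    w = np.exp(lam * support)
    p = w / w.sum()
    return float((p * support).sum())

lo, hi = -50.0, 50.0
for _ in range(200):                     # bisection on the monotone mean(lam)
    mid = (lo + hi) / 2
    if mean_for(mid) < target_mean:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2
p = np.exp(lam * support)
p /= p.sum()
assert abs(float((p * support).sum()) - target_mean) < 1e-9
```

Adding skewness and kurtosis constraints turns the single parameter into a vector fitted by a multivariate solver, but the entropy-maximizing exponential form is the same.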

  18. Coordinate control of initiative mating device for autonomous underwater vehicle based on TDES

    NASA Astrophysics Data System (ADS)

    Yan, Zhe-Ping; Hou, Shu-Ping

    2005-06-01

A novel initiative mating device, which has four 2-degree-of-freedom manipulators around the mating skirt, is proposed for mating between the skirt of an AUV (autonomous underwater vehicle) and a disabled submarine. The primary function of the device is to maintain exact mating between the skirt and the disabled submarine in a harsh subsea environment. According to the characteristics of the rescue task, an automaton model is brought forward to describe the mating process between the AUV and the manipulators. Coordinated control is implemented with a TDES (timed discrete event system). Once timing is taken into account, simulation testing shows this to be a useful method for controlling the mating. The results show that the whole mating procedure is shortened by about 70 seconds when intelligent coordinated control based on the TDES is used.

  19. Revisiting the Scattering Greenhouse Effect of CO2 Ice Clouds

    NASA Astrophysics Data System (ADS)

    Kitzmann, D.

    2016-02-01

Carbon dioxide ice clouds are thought to play an important role for cold terrestrial planets with thick CO2-dominated atmospheres. Various previous studies showed that a scattering greenhouse effect by carbon dioxide ice clouds could result in a massive warming of the planetary surface. However, all of these studies employed only simplified two-stream radiative transfer schemes to describe the anisotropic scattering. Using accurate radiative transfer models with a general discrete ordinate method, this study revisits this important effect and shows that the positive climatic impact of carbon dioxide clouds was strongly overestimated in the past. The revised scattering greenhouse effect can have important implications for early Mars, but also for planets like the early Earth or the position of the outer boundary of the habitable zone.
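A general discrete ordinate method of the kind used here replaces angular integrals of the intensity with quadrature sums over a set of ordinates; a minimal sketch with Gauss–Legendre ordinates and weights:

```python
import numpy as np

# Discrete ordinates: pick directions mu_i = cos(theta_i) and weights w_i,
# then approximate angular integrals by sums. Gauss-Legendre on [-1, 1]
# is a standard choice of quadrature.
mu, w = np.polynomial.legendre.leggauss(8)   # 8-stream quadrature

# For an isotropic intensity I = 1: the zeroth moment (energy density
# integral) is 2 and the first moment (net flux) is 0; the quadrature
# reproduces both to round-off.
assert np.isclose(np.sum(w * 1.0), 2.0)
assert np.isclose(np.sum(w * mu), 0.0)
```

A two-stream scheme is the special case of a single ordinate per hemisphere, which is where the accuracy loss for strongly anisotropic scattering discussed above originates.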

  20. A MATLAB based 3D modeling and inversion code for MT data

    NASA Astrophysics Data System (ADS)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

The development of a MATLAB-based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: a grid generator code and a modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs the core computations in modular form: forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems, such as Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for the implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it compact and user-friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities, the results of inversion for two complex models are presented.

  1. Posterior Predictive Bayesian Phylogenetic Model Selection

    PubMed Central

    Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn

    2014-01-01

    We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892

  2. Processing Ordinality and Quantity: The Case of Developmental Dyscalculia

    PubMed Central

    Rubinsten, Orly; Sury, Dana

    2011-01-01

In contrast to quantity processing, to date the nature of ordinality has received little attention from researchers, despite the fact that both quantity and ordinality are embodied in numerical information. Here we ask whether there are two separate core systems lying at the foundations of numerical cognition: (1) the traditional and well-accepted numerical magnitude system, and (2) a core system for representing ordinal information. We report two novel experiments of ordinal processing that explored the relation between ordinal and numerical information processing in typically developing adults and adults with developmental dyscalculia (DD). Participants made “ordered” or “non-ordered” judgments about 3 groups of dots (non-symbolic numerical stimuli; Experiment 1) and 3 numbers (symbolic task; Experiment 2). In contrast to previous findings and arguments about a quantity deficit in DD participants, when quantity and ordinality were dissociated (as in the current tasks), DD participants exhibited a normal ratio effect in the non-symbolic ordinal task. They did not, however, show the ordinality effect. The ordinality effect in DD appeared only when area and density were randomized, and only in the descending direction. In the symbolic task, the ordinality effect was modulated by ratio and direction in both groups. These findings suggest that there might be two separate cognitive representations of ordinal and quantity information and that linguistic knowledge may facilitate estimation of ordinal information. PMID:21935374

  3. Processing ordinality and quantity: the case of developmental dyscalculia.

    PubMed

    Rubinsten, Orly; Sury, Dana

    2011-01-01

In contrast to quantity processing, to date the nature of ordinality has received little attention from researchers, despite the fact that both quantity and ordinality are embodied in numerical information. Here we ask whether there are two separate core systems lying at the foundations of numerical cognition: (1) the traditional and well-accepted numerical magnitude system, and (2) a core system for representing ordinal information. We report two novel experiments of ordinal processing that explored the relation between ordinal and numerical information processing in typically developing adults and adults with developmental dyscalculia (DD). Participants made "ordered" or "non-ordered" judgments about 3 groups of dots (non-symbolic numerical stimuli; Experiment 1) and 3 numbers (symbolic task; Experiment 2). In contrast to previous findings and arguments about a quantity deficit in DD participants, when quantity and ordinality were dissociated (as in the current tasks), DD participants exhibited a normal ratio effect in the non-symbolic ordinal task. They did not, however, show the ordinality effect. The ordinality effect in DD appeared only when area and density were randomized, and only in the descending direction. In the symbolic task, the ordinality effect was modulated by ratio and direction in both groups. These findings suggest that there might be two separate cognitive representations of ordinal and quantity information and that linguistic knowledge may facilitate estimation of ordinal information.

  4. Important features of home-based support services for older Australians and their informal carers.

    PubMed

    McCaffrey, Nikki; Gill, Liz; Kaambwa, Billingsley; Cameron, Ian D; Patterson, Jan; Crotty, Maria; Ratcliffe, Julie

    2015-11-01

    In Australia, newly initiated, publicly subsidised 'Home-Care Packages' designed to assist older people (≥ 65 years of age) living in their own home must now be offered on a 'consumer-directed care' (CDC) basis by service providers. However, CDC models have largely developed in the absence of evidence on users' views and preferences. The aim of this study was to determine what features (attributes) of consumer-directed, home-based support services are important to older people and their informal carers to inform the design of a discrete choice experiment (DCE). Semi-structured, face-to-face interviews were conducted in December 2012-November 2013 with 17 older people receiving home-based support services and 10 informal carers from 5 providers located in South Australia and New South Wales. Salient service characteristics important to participants were determined using thematic and constant comparative analysis and formulated into attributes and attribute levels for presentation within a DCE. Initially, eight broad themes were identified: information and knowledge, choice and control, self-managed continuum, effective co-ordination, effective communication, responsiveness and flexibility, continuity and planning. Attributes were formulated for the DCE by combining overlapping themes such as effective communication and co-ordination, and the self-managed continuum and planning into single attributes. Six salient service features that characterise consumer preferences for the provision of home-based support service models were identified: choice of provider, choice of support worker, flexibility in care activities provided, contact with the service co-ordinator, managing the budget and saving unspent funds. Best practice indicates that qualitative research with individuals who represent the population of interest should guide attribute selection for a DCE and this is the first study to employ such methods in aged care service provision. 
Further development of services could incorporate methods of consumer engagement such as DCEs which facilitate the identification and quantification of users' views and preferences on alternative models of delivery. © 2015 John Wiley & Sons Ltd.

  5. Numerical Modeling of Physical Vapor Transport in Contactless Crystal Growth Geometry

    NASA Technical Reports Server (NTRS)

    Palosz, W.; Lowry, S.; Krishnam, A.; Przekwas, A.; Grasza, K.

    1998-01-01

    Growth from the vapor under conditions of limited contact with the walls of the growth ampoule is beneficial for the quality of the growing crystal due to reduced stress and contamination which may be caused by interactions with the growth container. The technique may be of particular interest for studies on crystal growth under microgravity conditions: elimination of some factors affecting the crystal quality may make interpretation of space-conducted processes more conclusive and meaningful. For that reason, and as a part of our continuing studies on the 'contactless' growth technique, we have developed a computational model of the crystal growth process in such a system. The theoretical model was built, and simulations were performed, using the commercial computational fluid dynamics (CFD) code CFD-ACE. The code uses an implicit finite volume formulation with a gray discrete ordinate method radiation model which accounts for the diffuse absorption and reflection of radiation throughout the furnace. The three-dimensional model computes the heat transfer through the crystal, quartz, and gas both inside and outside the ampoule, and mass transport from the source to the crystal and the sink. The heat transport mechanisms by conduction, natural convection, and radiation, and mass transport by diffusion and convection are modeled simultaneously and include the heat of the phase transition at the solid-vapor interfaces. The temperature profile along the walls of the furnace is used as the thermal boundary condition. For different thermal profiles and furnace and ampoule dimensions, the crystal growth rate and the development of the crystal-vapor and source-vapor interfaces (change of the interface shape and location with time) are obtained. Super/under-saturation in the ampoule is determined, and critical factors determining the 'contactless' growth conditions are identified and discussed. 
The relative importance of the ampoule dimensions and geometry, the furnace dimensions and its temperature, and the properties of the grown material are analyzed. The results of the simulations are compared with related experimental results on growth of CdTe, CdZnTe, ZnTe, PbTe, and PbSnTe crystals by this technique.

  6. Mutual Information between Discrete Variables with Many Categories using Recursive Adaptive Partitioning

    PubMed Central

    Seok, Junhee; Seon Kang, Yeong

    2015-01-01

    Mutual information, a general measure of the relatedness between two random variables, has been actively used in the analysis of biomedical data. The mutual information between two discrete variables is conventionally calculated from their joint probabilities, estimated from the frequency of observed samples in each combination of variable categories. However, this conventional approach is no longer efficient for discrete variables with many categories, which can easily be found in large-scale biomedical data such as diagnosis codes, drug compounds, and genotypes. Here, we propose a method to provide stable estimations for the mutual information between discrete variables with many categories. Simulation studies showed that the proposed method reduced the estimation errors 45-fold and improved the correlation coefficients with true values 99-fold, compared with the conventional calculation of mutual information. The proposed method was also demonstrated through a case study of diagnostic data in electronic health records. This method is expected to be useful in the analysis of various biomedical data with discrete variables. PMID:26046461
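    For reference, the conventional plug-in estimate that the proposed method improves upon can be sketched as follows (a minimal illustration, not the authors' code; `mutual_information` is a hypothetical helper name):

```python
import math

def mutual_information(x, y):
    """Conventional plug-in estimate of mutual information (in nats)
    from the joint frequencies of two discrete samples."""
    n = len(x)
    joint, px, py = {}, {}, {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    # sum over observed category pairs of p(a,b) * log(p(a,b) / (p(a) p(b)))
    return sum(c / n * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in joint.items())
```

    With many categories and few samples per cell, these frequency estimates become unstable, which is the failure mode the abstract addresses.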

  7. Analyzing neuronal networks using discrete-time dynamics

    NASA Astrophysics Data System (ADS)

    Ahn, Sungwoo; Smith, Brian H.; Borisyuk, Alla; Terman, David

    2010-05-01

    We develop mathematical techniques for analyzing detailed Hodgkin-Huxley-like models for excitatory-inhibitory neuronal networks. Our strategy for studying a given network is to first reduce it to a discrete-time dynamical system. The discrete model is considerably easier to analyze, both mathematically and computationally, and parameters in the discrete model correspond directly to parameters in the original system of differential equations. While these networks arise in many important applications, a primary focus of this paper is to better understand mechanisms that underlie temporally dynamic responses in early processing of olfactory sensory information. The models presented here exhibit several properties that have been described for olfactory codes in an insect's Antennal Lobe. These include transient patterns of synchronization and decorrelation of sensory inputs. By reducing the model to a discrete system, we are able to systematically study how properties of the dynamics, including the complex structure of the transients and attractors, depend on factors related to connectivity and the intrinsic and synaptic properties of cells within the network.

  8. Solutions of the Taylor-Green Vortex Problem Using High-Resolution Explicit Finite Difference Methods

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2013-01-01

    A computational fluid dynamics code that solves the compressible Navier-Stokes equations was applied to the Taylor-Green vortex problem to examine the code's ability to accurately simulate the vortex decay and subsequent turbulence. The code, WRLES (Wave Resolving Large-Eddy Simulation), uses explicit central-differencing to compute the spatial derivatives and explicit Low Dispersion Runge-Kutta methods for the temporal discretization. The flow was first studied and characterized using Bogey & Bailly's 13-point dispersion relation preserving (DRP) scheme. The kinetic energy dissipation rate, computed both directly and from the enstrophy field, vorticity contours, and the energy spectra are examined. Results are in excellent agreement with a reference solution obtained using a spectral method and provide insight into computations of turbulent flows. In addition, the following studies were performed: a comparison of 4th-, 8th-, 12th-order and DRP spatial differencing schemes, the effect of solution filtering on the results, the effect of large-eddy simulation sub-grid scale models, and the effect of high-order discretization of the viscous terms.
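    The explicit central differencing underlying such schemes can be illustrated with a generic 4th-order stencil on a periodic grid (an illustrative sketch, not the WRLES source or the 13-point DRP scheme itself):

```python
import numpy as np

def ddx_central4(f, dx):
    """Explicit 4th-order central difference of a periodic 1-D field:
    f'_i ~ (8*(f_{i+1} - f_{i-1}) - (f_{i+2} - f_{i-2})) / (12*dx)."""
    return (8.0 * (np.roll(f, -1) - np.roll(f, 1))
            - (np.roll(f, -2) - np.roll(f, 2))) / (12.0 * dx)

# quick accuracy check on a smooth periodic test function
n = 64
x = 2.0 * np.pi * np.arange(n) / n
err = np.max(np.abs(ddx_central4(np.sin(x), 2.0 * np.pi / n) - np.cos(x)))
```

    Higher-order and DRP variants widen the stencil and tune the coefficients to reduce dispersion error rather than formal truncation order alone.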

  9. Neoclassical Simulation of Tokamak Plasmas using Continuum Gyrokinetic Code TEMPEST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, X Q

    We present gyrokinetic neoclassical simulations of tokamak plasmas with a self-consistent electric field for the first time using a fully nonlinear (full-f) continuum code, TEMPEST, in a circular geometry. A set of gyrokinetic equations is discretized on a five-dimensional computational grid in phase space. The present implementation is a Method of Lines approach where the phase-space derivatives are discretized with finite differences and implicit backward differencing formulas are used to advance the system in time. The fully nonlinear Boltzmann model is used for electrons. The neoclassical electric field is obtained by solving the gyrokinetic Poisson equation with self-consistent poloidal variation. With our 4D ({psi}, {theta}, {epsilon}, {mu}) version of the TEMPEST code we compute radial particle and heat flux, the Geodesic-Acoustic Mode (GAM), and the development of the neoclassical electric field, which we compare with neoclassical theory with a Lorentz collision model. The present work provides a numerical scheme and a new capability for self-consistently studying important aspects of neoclassical transport and rotations in toroidal magnetic fusion devices.
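    The Method of Lines strategy described here (finite differences in space, implicit backward differencing in time) can be sketched on a toy 1-D advection problem; this is a generic sketch, not TEMPEST code:

```python
import numpy as np

def backward_euler_step(u, a, dx, dt):
    """One implicit backward-Euler step for u_t + a*u_x = 0 (a > 0)
    on a periodic grid, after semi-discretizing u_x with an upwind
    finite difference (the method-of-lines pattern)."""
    n = len(u)
    I = np.eye(n)
    # upwind derivative operator: (u_i - u_{i-1}) / dx, periodic
    D = (I - np.roll(I, -1, axis=1)) / dx
    # backward Euler: (I + a*dt*D) u_new = u
    return np.linalg.solve(I + a * dt * D, u)
```

    Backward Euler is the lowest-order backward differencing formula; higher-order BDF methods replace the left-hand operator but keep the same implicit-solve structure.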

  10. A Computer Program for Flow-Log Analysis of Single Holes (FLASH)

    USGS Publications Warehouse

    Day-Lewis, F. D.; Johnson, C.D.; Paillet, Frederick L.; Halford, K.J.

    2011-01-01

    A new computer program, FLASH (Flow-Log Analysis of Single Holes), is presented for the analysis of borehole vertical flow logs. The code is based on an analytical solution for steady-state multilayer radial flow to a borehole. The code includes options for (1) discrete fractures and (2) multilayer aquifers. Given vertical flow profiles collected under both ambient and stressed (pumping or injection) conditions, the user can estimate fracture (or layer) transmissivities and far-field hydraulic heads. FLASH is coded in Microsoft Excel with Visual Basic for Applications routines. The code supports manual and automated model calibration. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
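    A much-simplified version of the underlying idea, apportioning transmissivity among discrete flow zones from the difference between stressed and ambient flow profiles, might look like the following (an illustrative sketch under the strong assumption of uniform far-field heads; this is not the FLASH algorithm itself, and the function name is hypothetical):

```python
def relative_transmissivities(inflow_ambient, inflow_stressed):
    """Apportion total transmissivity among flow zones in proportion to
    the change in each zone's inflow between stressed and ambient
    logging runs (illustrative; assumes uniform far-field heads)."""
    deltas = [s - a for a, s in zip(inflow_ambient, inflow_stressed)]
    total = sum(deltas)
    return [d / total for d in deltas]
```

    In FLASH proper, far-field heads are additional unknowns, which is why the code pairs the analytical flow solution with manual and automated calibration.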

  11. Image coding of SAR imagery

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Kwok, R.; Curlander, J. C.

    1987-01-01

    Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.

  12. Program Code Generator for Cardiac Electrophysiology Simulation with Automatic PDE Boundary Condition Handling

    PubMed Central

    Punzalan, Florencio Rusty; Kunieda, Yoshitoshi; Amano, Akira

    2015-01-01

    Clinical and experimental studies involving human hearts can have certain limitations. Methods such as computer simulations can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors can tailor PDE solutions and their corresponding program code to specific problems. Boundary condition and parameter changes in the customized code are usually error-prone and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication. The program generator can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated to undergo dependency analysis. The result of the dependency analysis is then used to generate the program code. The resulting program code is in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations. One of the simulations is based on a published experiment, and the simulation results are compared with the experimental data. 
We conclude that the proposed program code generator can be used to generate code for physiological simulations and provides a tool for studying cardiac electrophysiology. PMID:26356082
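    The replacement scheme, substituting partial-derivative terms with numerical solution expressions, can be illustrated with a toy string-substitution example (a hypothetical stencil table for illustration, not the generator's actual implementation):

```python
# Toy illustration of a discretization "replacement scheme": each
# partial-derivative term in a model equation string is replaced by a
# finite-difference stencil before program code is generated.
STENCILS = {
    "d2V/dx2": "(V[i+1] - 2*V[i] + V[i-1]) / (dx*dx)",
    "dV/dt":   "(V_next[i] - V[i]) / dt",
}

def discretize(equation):
    """Replace every known derivative term with its stencil string."""
    for term, stencil in STENCILS.items():
        equation = equation.replace(term, stencil)
    return equation

# a cable-equation-like model term, discretized for code generation
cable = discretize("dV/dt = D * d2V/dx2 - I_ion")
```

    The real generator additionally performs dependency analysis on the instantiated equations so that boundary nodes receive modified stencils automatically.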

  13. SmaggIce User Guide. 1.0

    NASA Technical Reports Server (NTRS)

    Baez, Marivell; Vickerman, Mary; Choo, Yung

    2000-01-01

    SmaggIce (Surface Modeling And Grid Generation for Iced Airfoils) is one of NASA's aircraft icing research codes developed at the Glenn Research Center. It is a software toolkit used in the process of aerodynamic performance prediction of iced airfoils. It includes tools which complement the 2D grid-based Computational Fluid Dynamics (CFD) process: geometry probing and surface preparation for gridding (smoothing and re-discretization of geometry). Future releases will also include support for all aspects of gridding: domain decomposition, perimeter discretization, and grid generation and modification.

  14. The effect of ordinances requiring smoke-free restaurants and bars on revenues: a follow-up.

    PubMed Central

    Glantz, S A; Smith, L R

    1997-01-01

    OBJECTIVES: The purpose of this study was to extend an earlier evaluation of the economic effects of ordinances requiring smoke-free restaurants and bars. METHODS: Sales tax data for 15 cities with smoke-free restaurant ordinances, 5 cities and 2 counties with smoke-free bar ordinances, and matched comparison locations were analyzed by multiple regression, including time and a dummy variable for the ordinance. RESULTS: Ordinances had no significant effect on the fraction of total retail sales that went to eating and drinking places or on the ratio between sales in communities with ordinances and sales in comparison communities. Ordinances requiring smoke-free bars had no significant effect on the fraction of revenues going to eating and drinking places that serve all types of liquor. CONCLUSIONS: Smoke-free ordinances do not adversely affect either restaurant or bar sales. PMID:9357356

  15. One-Dimensional Czedli-Type Islands

    ERIC Educational Resources Information Center

    Horvath, Eszter K.; Mader, Attila; Tepavcevic, Andreja

    2011-01-01

    The notion of an island has surfaced in recent algebra and coding theory research. Discrete versions provide interesting combinatorial problems. This paper presents the one-dimensional case with finitely many heights, a topic convenient for student research.

  16. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of coders for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
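    The DCT-based transform coding at the heart of MBC can be sketched for a single image block (a generic illustration, not the paper's coder; the vector quantization and rate-control stages are omitted):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

def code_block(block, keep):
    """Transform-code one square image block: keep the `keep`
    largest-magnitude 2-D DCT coefficients, zero the rest, and
    inverse transform to get the reconstructed block."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T                       # forward 2-D DCT
    cutoff = np.sort(np.abs(coeffs), axis=None)[-keep]
    coeffs = np.where(np.abs(coeffs) >= cutoff, coeffs, 0.0)
    return C.T @ coeffs @ C                        # inverse 2-D DCT
```

    Varying `keep` per region against a distortion threshold is the kind of decision an MBC-style coder makes when mixing coders of different rates.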

  17. Validation of GOSAT XCO2 and XCH4 retrieved by PPDF-S method and evaluation of sensitivity of aerosols to gas concentrations

    NASA Astrophysics Data System (ADS)

    Iwasaki, C.; Imasu, R.; Bril, A.; Yokota, T.; Yoshida, Y.; Morino, I.; Oshchepkov, S.; Rokotyan, N.; Zakharov, V.; Gribanov, K.

    2017-12-01

    The photon path length probability density function-simultaneous (PPDF-S) method is an effective algorithm for retrieving column-averaged concentrations of carbon dioxide (XCO2) and methane (XCH4) from Greenhouse gases Observing SATellite (GOSAT) spectra in the Short Wavelength InfraRed (SWIR) [Oshchepkov et al., 2013]. In this study, we validated XCO2 and XCH4 retrieved by the PPDF-S method through comparison with Total Carbon Column Observing Network (TCCON) data [Wunch et al., 2011] from 26 sites, including the additional site of the Ural Atmospheric Station at Kourovka [57.038°N, 59.545°E], Russia. Validation results using TCCON data show that the bias and its standard deviation for the PPDF-S data are, respectively, 0.48 and 2.10 ppm for XCO2, and -0.73 and 15.77 ppb for XCH4. The results for XCO2 are almost identical to those of Iwasaki et al. [2017], for which the validation data were limited to 11 selected sites. However, the bias of XCH4 shows the opposite sign to that of Iwasaki et al. [2017]. Furthermore, the data at Kourovka showed different features, particularly for XCH4. In order to investigate the causes of these differences, we have carried out simulation studies mainly focusing on the effects of aerosols, which modify the light path length of solar radiation [O'Brien and Rayner, 2002; Aben et al., 2007; Oshchepkov et al., 2008]. Based on simulation studies using the multiple-scattering radiative transfer code Pstar3 (Polarization System for Transfer of Atmospheric Radiation 3), which is based on the Discrete Ordinate Method (DOM) [Ota et al., 2010], the sensitivity of the retrieved gas concentrations to aerosols was examined.

  18. A synthetic data set of high-spectral-resolution infrared spectra for the Arctic atmosphere

    NASA Astrophysics Data System (ADS)

    Cox, Christopher J.; Rowe, Penny M.; Neshyba, Steven P.; Walden, Von P.

    2016-05-01

    Cloud microphysical and macrophysical properties are critical for understanding the role of clouds in climate. These properties are commonly retrieved from ground-based and satellite-based infrared remote sensing instruments. However, retrieval uncertainties are difficult to quantify without a standard for comparison. This is particularly true over the polar regions, where surface-based data for a cloud climatology are sparse, yet clouds represent a major source of uncertainty in weather and climate models. We describe a synthetic high-spectral-resolution infrared data set that is designed to facilitate validation and development of cloud retrieval algorithms for surface- and satellite-based remote sensing instruments. Since the data set is calculated using pre-defined cloudy atmospheres, the properties of the cloud and atmospheric state are known a priori. The atmospheric state used for the simulations is drawn from radiosonde measurements made at the North Slope of Alaska (NSA) Atmospheric Radiation Measurement (ARM) site at Barrow, Alaska (71.325° N, 156.615° W), a location that is generally representative of the western Arctic. The cloud properties for each simulation are selected from statistical distributions derived from past field measurements. Upwelling (at 60 km) and downwelling (at the surface) infrared spectra are simulated for 260 cloudy cases from 50 to 3000 cm-1 (3.3 to 200 µm) at monochromatic (line-by-line) resolution at a spacing of ~0.01 cm-1 using the Line-by-line Radiative Transfer Model (LBLRTM) and the discrete-ordinate-method radiative transfer code (DISORT). These spectra are freely available for interested researchers from the NSF Arctic Data Center data repository (doi:10.5065/D61J97TT).

  19. Second-Order Sensitivity Analysis of Uncollided Particle Contributions to Radiation Detector Responses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cacuci, Dan G.; Favorite, Jeffrey A.

    This work presents an application of Cacuci’s Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) to the simplified Boltzmann equation that models the transport of uncollided particles through a medium to compute efficiently and exactly all of the first- and second-order derivatives (sensitivities) of a detector’s response with respect to the system’s isotopic number densities, microscopic cross sections, source emission rates, and detector response function. The off-the-shelf PARTISN multigroup discrete ordinates code is employed to solve the equations underlying the 2nd-ASAM. The accuracy of the results produced using PARTISN is verified by using the results of three test configurations: (1) a homogeneous sphere, for which the response is the exactly known total uncollided leakage, (2) a multiregion two-dimensional (r-z) cylinder, and (3) a two-region sphere for which the response is a reaction rate. For the homogeneous sphere, results for the total leakage as well as for the respective first- and second-order sensitivities are in excellent agreement with the exact benchmark values. For the nonanalytic problems, the results obtained by applying the 2nd-ASAM to compute sensitivities are in excellent agreement with central-difference estimates. The efficiency of the 2nd-ASAM is underscored by the fact that, for the cylinder, only 12 adjoint PARTISN computations were required by the 2nd-ASAM to compute all of the benchmark’s 18 first-order sensitivities and 224 second-order sensitivities, in contrast to the 877 PARTISN calculations needed to compute the respective sensitivities using central finite differences, and this number does not include the additional calculations that were required to find appropriate values of the perturbations to use for the central differences.

  20. An optimal-estimation-based aerosol retrieval algorithm using OMI near-UV observations

    NASA Astrophysics Data System (ADS)

    Jeong, U.; Kim, J.; Ahn, C.; Torres, O.; Liu, X.; Bhartia, P. K.; Spurr, R. J. D.; Haffner, D.; Chance, K.; Holben, B. N.

    2016-01-01

    An optimal-estimation(OE)-based aerosol retrieval algorithm using the OMI (Ozone Monitoring Instrument) near-ultraviolet observation was developed in this study. The OE-based algorithm has the merit of providing useful estimates of errors simultaneously with the inversion products. Furthermore, instead of using the traditional look-up tables for inversion, it performs online radiative transfer calculations with the VLIDORT (linearized pseudo-spherical vector discrete ordinate radiative transfer code) to eliminate interpolation errors and improve stability. The measurements and inversion products of the Distributed Regional Aerosol Gridded Observation Network campaign in northeast Asia (DRAGON NE-Asia 2012) were used to validate the retrieved aerosol optical thickness (AOT) and single scattering albedo (SSA). The retrieved AOT and SSA at 388 nm have a correlation with the Aerosol Robotic Network (AERONET) products that is comparable to or better than the correlation with the operational product during the campaign. The OE-based estimated error represented the variance of actual biases of AOT at 388 nm between the retrieval and AERONET measurements better than the operational error estimates. The forward model parameter errors were analyzed separately for both AOT and SSA retrievals. The surface reflectance at 388 nm, the imaginary part of the refractive index at 354 nm, and the number fine-mode fraction (FMF) were found to be the most important parameters affecting the retrieval accuracy of AOT, while FMF was the most important parameter for the SSA retrieval. The additional information provided with the retrievals, including the estimated error and degrees of freedom, is expected to be valuable for relevant studies. Detailed advantages of using the OE method were described and discussed in this paper.
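    The OE inversion step can be sketched in its standard Gauss-Newton form (a generic Rodgers-style update, not the operational algorithm's code; all symbols and the function name here are illustrative):

```python
import numpy as np

def oe_update(y, Fx, K, x, xa, Sa_inv, Se_inv):
    """One Gauss-Newton optimal-estimation step.

    y       measurement vector
    Fx      forward model evaluated at the current state x
    K       Jacobian of the forward model at x
    xa      a priori state; Sa_inv, Se_inv inverse prior / noise covariances

    Returns the updated state, the posterior covariance (the "estimated
    error" reported with the retrieval), and the degrees of freedom."""
    S_post = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)
    x_new = xa + S_post @ K.T @ Se_inv @ (y - Fx + K @ (x - xa))
    A = S_post @ K.T @ Se_inv @ K          # averaging kernel
    return x_new, S_post, np.trace(A)      # trace(A): degrees of freedom
```

    This structure is what lets an OE retrieval report error estimates and degrees of freedom alongside the inversion products, unlike a pure look-up-table inversion.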

  1. Second-Order Sensitivity Analysis of Uncollided Particle Contributions to Radiation Detector Responses

    DOE PAGES

    Cacuci, Dan G.; Favorite, Jeffrey A.

    2018-04-06

    This work presents an application of Cacuci’s Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) to the simplified Boltzmann equation that models the transport of uncollided particles through a medium to compute efficiently and exactly all of the first- and second-order derivatives (sensitivities) of a detector’s response with respect to the system’s isotopic number densities, microscopic cross sections, source emission rates, and detector response function. The off-the-shelf PARTISN multigroup discrete ordinates code is employed to solve the equations underlying the 2nd-ASAM. The accuracy of the results produced using PARTISN is verified by using the results of three test configurations: (1) a homogeneous sphere, for which the response is the exactly known total uncollided leakage, (2) a multiregion two-dimensional (r-z) cylinder, and (3) a two-region sphere for which the response is a reaction rate. For the homogeneous sphere, results for the total leakage as well as for the respective first- and second-order sensitivities are in excellent agreement with the exact benchmark values. For the nonanalytic problems, the results obtained by applying the 2nd-ASAM to compute sensitivities are in excellent agreement with central-difference estimates. The efficiency of the 2nd-ASAM is underscored by the fact that, for the cylinder, only 12 adjoint PARTISN computations were required by the 2nd-ASAM to compute all of the benchmark’s 18 first-order sensitivities and 224 second-order sensitivities, in contrast to the 877 PARTISN calculations needed to compute the respective sensitivities using central finite differences, and this number does not include the additional calculations that were required to find appropriate values of the perturbations to use for the central differences.

  2. SMRT: A new, modular snow microwave radiative transfer model

    NASA Astrophysics Data System (ADS)

    Picard, Ghislain; Sandells, Melody; Löwe, Henning; Dumont, Marie; Essery, Richard; Floury, Nicolas; Kontu, Anna; Lemmetyinen, Juha; Maslanka, William; Mätzler, Christian; Morin, Samuel; Wiesmann, Andreas

    2017-04-01

    Forward models of radiative transfer processes are needed to interpret remote sensing data and derive measurements of snow properties such as snow mass. A key requirement and challenge for microwave emission and scattering models is an accurate description of the snow microstructure. The snow microwave radiative transfer model (SMRT) was designed to cater for potential future active and/or passive satellite missions and developed to improve understanding of how to parameterize snow microstructure. SMRT is implemented in Python and is modular to allow easy intercomparison of different theoretical approaches. Separate modules are included for the snow microstructure model, electromagnetic module, radiative transfer solver, substrate, interface reflectivities, atmosphere and permittivities. An object-oriented approach is used with carefully specified exchanges between modules to allow future extensibility, i.e., without constraining the parameter list requirements. This presentation illustrates the capabilities of SMRT. At present, five different snow microstructure models have been implemented, and direct insertion of the autocorrelation function from microtomography data is also foreseen with SMRT. Three electromagnetic modules are currently available. While the DMRT-QCA and Rayleigh models need specific microstructure models, the Improved Born Approximation may be used with any microstructure representation. A discrete ordinates approach with stream connection is used to solve the radiative transfer equations, although future inclusion of 6-flux and 2-flux solvers is envisioned. Wrappers have been included to allow existing microwave emission models (MEMLS, HUT, DMRT-QMS) to be run with the same inputs and minimal extra code (2 lines). Comparisons between theoretical approaches will be shown, along with evaluation against field experiments in the frequency range 5-150 GHz. 
SMRT is simple and elegant to use whilst providing a framework for future development within the community.

  3. An Optimal-Estimation-Based Aerosol Retrieval Algorithm Using OMI Near-UV Observations

    NASA Technical Reports Server (NTRS)

    Jeong, U; Kim, J.; Ahn, C.; Torres, O.; Liu, X.; Bhartia, P. K.; Spurr, R. J. D.; Haffner, D.; Chance, K.; Holben, B. N.

    2016-01-01

    An optimal-estimation(OE)-based aerosol retrieval algorithm using the OMI (Ozone Monitoring Instrument) near-ultraviolet observation was developed in this study. The OE-based algorithm has the merit of providing useful estimates of errors simultaneously with the inversion products. Furthermore, instead of using the traditional look-up tables for inversion, it performs online radiative transfer calculations with the VLIDORT (linearized pseudo-spherical vector discrete ordinate radiative transfer code) to eliminate interpolation errors and improve stability. The measurements and inversion products of the Distributed Regional Aerosol Gridded Observation Network campaign in northeast Asia (DRAGON NE-Asia 2012) were used to validate the retrieved aerosol optical thickness (AOT) and single scattering albedo (SSA). The retrieved AOT and SSA at 388 nm have a correlation with the Aerosol Robotic Network (AERONET) products that is comparable to or better than the correlation with the operational product during the campaign. The OE-based estimated error represented the variance of actual biases of AOT at 388 nm between the retrieval and AERONET measurements better than the operational error estimates. The forward model parameter errors were analyzed separately for both AOT and SSA retrievals. The surface reflectance at 388 nm, the imaginary part of the refractive index at 354 nm, and the number fine-mode fraction (FMF) were found to be the most important parameters affecting the retrieval accuracy of AOT, while FMF was the most important parameter for the SSA retrieval. The additional information provided with the retrievals, including the estimated error and degrees of freedom, is expected to be valuable for relevant studies. Detailed advantages of using the OE method were described and discussed in this paper.

  4. A high-resolution oxygen A-band spectrometer (HABS) and its radiation closure

    NASA Astrophysics Data System (ADS)

    Min, Q.; Yin, B.; Li, S.; Berndt, J.; Harrison, L.; Joseph, E.; Duan, M.; Kiedron, P.

    2014-02-01

    The pressure dependence of oxygen A-band absorption enables the retrieval of the vertical profiles of aerosol and cloud properties from oxygen A-band spectrometry. To improve the understanding of oxygen A-band inversions and their utility, we developed a high-resolution oxygen A-band spectrometer (HABS) and deployed it at the Howard University Beltsville site during the NASA Discover Air-Quality Field Campaign in July 2011. The HABS has the ability to measure solar direct-beam and zenith diffuse radiation through a telescope automatically. It exhibits excellent performance: stable spectral response ratio, high signal-to-noise ratio (SNR), high spectral resolution (0.16 nm), and high out-of-band rejection (10^-5). To evaluate the spectral performance of HABS, a HABS simulator has been developed by combining the discrete ordinates radiative transfer (DISORT) code with the high-resolution transmission molecular absorption database HITRAN2008. The simulator uses a double-k approach to reduce the computational cost. The HABS-measured spectra are consistent with the related simulated spectra. For direct-beam spectra, the confidence intervals (95%) of the relative difference between measurements and simulation are (-0.06, 0.05) and (-0.08, 0.09) for solar zenith angles of 27° and 72°, respectively. The main differences between them occur at or near the strong oxygen absorption line centers. They are mainly caused by the noise/spikes of the HABS-measured spectra, as a result of the combined effects of weak signal, low SNR, and errors in wavelength registration and absorption line parameters. The high-resolution oxygen A-band measurements from HABS can constrain active radar retrievals for more accurate cloud optical properties, particularly for multi-layer clouds and for mixed-phase clouds.

  5. Transport and discrete particle noise in gyrokinetic simulations

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas; Lee, W. W.

    2006-10-01

    We present results from our recent investigations regarding the effects of discrete particle noise on the long-time behavior and transport properties of gyrokinetic particle-in-cell simulations. It is found that the amplitude of nonlinearly saturated drift waves is unaffected by discreteness-induced noise in plasmas whose behavior is dominated by a single mode in the saturated state. We further show that the scaling of this noise amplitude with particle count is correctly predicted by the fluctuation-dissipation theorem, even though the drift waves have driven the plasma from thermal equilibrium. We also find that the long-term behavior of the saturated system is unaffected by discreteness-induced noise even when multiple modes are included. Additional work utilizing a code with both total-f and δf capabilities is also presented, as part of our efforts to better understand the long-time balance between entropy production, collisional dissipation, and particle/heat flux in gyrokinetic plasmas.
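    The 1/√N scaling of discreteness noise with particle count, which the fluctuation-dissipation argument predicts, can be demonstrated with a minimal binned-density experiment (a hypothetical sketch with uniformly distributed particles and no plasma physics):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def density_noise(n_particles, n_cells=64, trials=200):
        """RMS fluctuation of a PIC-style binned density for uniformly
        distributed particles (pure discreteness noise, no dynamics)."""
        amps = []
        for _ in range(trials):
            x = rng.random(n_particles)
            counts, _ = np.histogram(x, bins=n_cells, range=(0.0, 1.0))
            density = counts * n_cells / n_particles   # normalized to mean 1
            amps.append(density.std())
        return float(np.mean(amps))

    a1 = density_noise(1_000)
    a2 = density_noise(16_000)
    # 16x more particles -> roughly 4x less noise (1/sqrt(N) scaling).
    ```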

  6. Faster and more accurate transport procedures for HZETRN

    NASA Astrophysics Data System (ADS)

    Slaba, T. C.; Blattnig, S. R.; Badavi, F. F.

    2010-12-01

    The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.
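    The marching algorithms derived in the paper are far more elaborate, but the role of step-size refinement in a convergence study can be sketched with the attenuation-only core of a straight-ahead transport step (a hypothetical first-order scheme; `sigma` and the depths are illustrative, not HZETRN values):

    ```python
    import numpy as np

    def march_attenuation(sigma, depth, n_steps, phi0=1.0):
        """First-order explicit marching for d(phi)/dx = -sigma * phi, the
        attenuation-only core of a straight-ahead transport sweep."""
        h = depth / n_steps
        phi = phi0
        for _ in range(n_steps):
            phi += -sigma * phi * h          # one spatial marching step
        return phi

    sigma = 0.01                             # illustrative cm^2/g cross section
    exact = np.exp(-sigma * 100.0)           # analytic flux at 100 g/cm^2
    coarse = march_attenuation(sigma, 100.0, 50)
    fine = march_attenuation(sigma, 100.0, 500)
    ```

    Refining the step size tenfold shrinks the discretization error by about a factor of ten, the first-order behavior that a coupled step-size/energy-grid convergence study quantifies.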

  7. Faster and more accurate transport procedures for HZETRN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaba, T.C., E-mail: Tony.C.Slaba@nasa.go; Blattnig, S.R., E-mail: Steve.R.Blattnig@nasa.go; Badavi, F.F., E-mail: Francis.F.Badavi@nasa.go

    The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.

  8. High mobility of large mass movements: a study by means of FEM/DEM simulations

    NASA Astrophysics Data System (ADS)

    Manzella, I.; Lisjak, A.; Grasselli, G.

    2013-12-01

    Large mass movements, such as rock avalanches and large volcanic debris avalanches, are characterized by extremely long propagation, which cannot be modelled with a normal sliding friction law. For this reason, several theories derived from field observations, physical arguments, and laboratory experiments have been proposed to explain their high mobility. To investigate some of the processes invoked by these theories in more depth, simulations have been run with a new numerical tool called Y-GUI, based on the combined finite element-discrete element method (FEM/DEM). FEM/DEM is a numerical technique developed by Munjiza et al. (1995) in which Discrete Element Method (DEM) algorithms model the interaction between different solids, while Finite Element Method (FEM) principles are used to analyze their deformability, with the ability to explicitly simulate sudden loss of cohesion in the material (i.e. brittle failure). In particular, numerical tests inspired by the small-scale experiments of Manzella and Labiouse (2013) have been run. They consist of rectangular blocks released on a slope; each block is a rectangular discrete element made of a mesh of finite elements that is allowed to fragment. These simulations have highlighted the influence on the propagation of block packing, i.e. whether the elements are piled into an ordered geometric structure before failure or chaotically arranged as a loose material, and of the topography, i.e. whether the slope break is smooth and regular or not. In addition, the effect of fracturing, i.e. fragmentation, on the total runout has been studied.

  9. Identification of Linear and Nonlinear Aerodynamic Impulse Responses Using Digital Filter Techniques

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1997-01-01

    This paper discusses the mathematical existence and the numerically-correct identification of linear and nonlinear aerodynamic impulse response functions. Differences between continuous-time and discrete-time system theories, which permit the identification and efficient use of these functions, will be detailed. Important input/output definitions and the concept of linear and nonlinear systems with memory will also be discussed. It will be shown that indicial (step or steady) responses (such as Wagner's function), forced harmonic responses (such as Theodorsen's function or those from doublet lattice theory), and responses to random inputs (such as gusts) can all be obtained from an aerodynamic impulse response function. This paper establishes the aerodynamic impulse response function as the most fundamental, and, therefore, the most computationally efficient, aerodynamic function that can be extracted from any given discrete-time, aerodynamic system. The results presented in this paper help to unify the understanding of classical two-dimensional continuous-time theories with modern three-dimensional, discrete-time theories. First, the method is applied to the nonlinear viscous Burger's equation as an example. Next the method is applied to a three-dimensional aeroelastic model using the CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code and then to a two-dimensional model using the CFL3D Navier-Stokes code. Comparisons of accuracy and computational cost savings are presented. Because of its mathematical generality, an important attribute of this methodology is that it is applicable to a wide range of nonlinear, discrete-time problems.
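    The central claim of the record above — that responses to arbitrary inputs follow from the impulse response alone — is discrete convolution. A minimal sketch with a generic first-order discrete-time system (not one of the paper's aerodynamic models):

    ```python
    import numpy as np

    # Generic first-order discrete-time linear system (not an aerodynamic
    # model): y[n] = a*y[n-1] + (1-a)*x[n], with impulse response
    # h[n] = (1-a)*a**n.
    a = 0.8
    n = 50
    h = (1 - a) * a ** np.arange(n)

    # Step (indicial) response obtained two ways: by running the recursion
    # directly, and by convolving the step input with the impulse response h.
    x = np.ones(n)
    y_conv = np.convolve(h, x)[:n]

    y_rec = np.zeros(n)
    for k in range(n):
        y_rec[k] = a * (y_rec[k - 1] if k else 0.0) + (1 - a) * x[k]
    ```

    The two step responses agree to machine precision: once h[n] is identified, indicial, harmonic, and random-input responses all follow by convolving h with the appropriate input.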

  10. Solving phase appearance/disappearance two-phase flow problems with high resolution staggered grid and fully implicit schemes by the Jacobian-free Newton–Krylov Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-04-01

    The phase appearance/disappearance issue presents serious numerical challenges in two-phase flow simulations. Many existing reactor safety analysis codes use different kinds of treatments for the phase appearance/disappearance problem. However, to the best of our knowledge, there are no fully satisfactory solutions. Additionally, the majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many situations, it is desirable to use high-resolution spatial discretization and fully implicit time integration schemes to reduce numerical errors. In this work, we adapted a high-resolution spatial discretization scheme on a staggered grid mesh and fully implicit time integration methods (such as BDF1 and BDF2) to solve the two-phase flow problems. The discretized nonlinear system was solved by the Jacobian-free Newton-Krylov (JFNK) method, which does not require the derivation and implementation of an analytical Jacobian matrix. These methods were tested on two-phase flow problems with phase appearance/disappearance phenomena, such as a linear advection problem, an oscillating manometer problem, and a sedimentation problem. The JFNK method demonstrated extremely robust and stable behavior in solving the two-phase flow problems with phase appearance/disappearance. No special treatments such as water level tracking or void fraction limiting were used. High-resolution spatial discretization and the second-order fully implicit method also demonstrated their capability to significantly reduce numerical errors.
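    The key trick of the JFNK method is that the Krylov solver only needs Jacobian-vector products, which can be approximated by finite differences of the residual function, so no analytical Jacobian is ever formed. A minimal sketch on a small algebraic system (hand-rolled GMRES via an Arnoldi process; the test problem, tolerances, and iteration counts are illustrative, not the two-phase flow equations):

    ```python
    import numpy as np

    def jfnk_step(F, u, eps=1e-7, m=20):
        """One Newton step, with the linear system J du = -F(u) solved by a
        small hand-rolled GMRES in which the Jacobian-vector product J v is
        approximated by a finite difference of F (no Jacobian is formed)."""
        Fu = F(u)
        beta = np.linalg.norm(Fu)
        if beta == 0.0:
            return u                          # already converged exactly
        Jv = lambda v: (F(u + eps * v) - Fu) / eps
        Q = [-Fu / beta]                      # Krylov basis, seeded with -F(u)
        H = np.zeros((m + 1, m))              # Hessenberg matrix from Arnoldi
        for j in range(m):
            w = Jv(Q[j])
            for i in range(j + 1):            # modified Gram-Schmidt
                H[i, j] = Q[i] @ w
                w -= H[i, j] * Q[i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-12:           # happy breakdown: exact solve
                m = j + 1
                break
            Q.append(w / H[j + 1, j])
        e1 = np.zeros(m + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
        return u + np.column_stack(Q[:m]) @ y

    # Illustrative nonlinear system: u + u**3 = b, solved without any Jacobian.
    b = np.array([1.0, 2.0, 3.0])
    F = lambda u: u + u**3 - b
    u = np.zeros(3)
    for _ in range(20):
        u = jfnk_step(F, u, m=3)
    ```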

  11. Identification of Linear and Nonlinear Aerodynamic Impulse Responses Using Digital Filter Techniques

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1997-01-01

    This paper discusses the mathematical existence and the numerically-correct identification of linear and nonlinear aerodynamic impulse response functions. Differences between continuous-time and discrete-time system theories, which permit the identification and efficient use of these functions, will be detailed. Important input/output definitions and the concept of linear and nonlinear systems with memory will also be discussed. It will be shown that indicial (step or steady) responses (such as Wagner's function), forced harmonic responses (such as Theodorsen's function or those from doublet lattice theory), and responses to random inputs (such as gusts) can all be obtained from an aerodynamic impulse response function. This paper establishes the aerodynamic impulse response function as the most fundamental, and, therefore, the most computationally efficient, aerodynamic function that can be extracted from any given discrete-time, aerodynamic system. The results presented in this paper help to unify the understanding of classical two-dimensional continuous-time theories with modern three-dimensional, discrete-time theories. First, the method is applied to the nonlinear viscous Burger's equation as an example. Next the method is applied to a three-dimensional aeroelastic model using the CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code and then to a two-dimensional model using the CFL3D Navier-Stokes code. Comparisons of accuracy and computational cost savings are presented. Because of its mathematical generality, an important attribute of this methodology is that it is applicable to a wide range of nonlinear, discrete-time problems.

  12. Squeezing Interval Change From Ordinal Panel Data: Latent Growth Curves With Ordinal Outcomes

    ERIC Educational Resources Information Center

    Mehta, Paras D.; Neale, Michael C.; Flay, Brian R.

    2004-01-01

    A didactic on latent growth curve modeling for ordinal outcomes is presented. The conceptual aspects of modeling growth with ordinal variables and the notion of threshold invariance are illustrated graphically using a hypothetical example. The ordinal growth model is described in terms of 3 nested models: (a) multivariate normality of the…

  13. Correlation of ground motion and intensity for the 17 January 1994 Northridge, California, earthquake

    USGS Publications Warehouse

    Boatwright, J.; Thywissen, K.; Seekins, L.C.

    2001-01-01

    We analyze the correlations between intensity and a set of ground-motion parameters obtained from 66 free-field stations in Los Angeles County that recorded the 1994 Northridge earthquake. We use the tagging intensities from Thywissen and Boatwright (1998) because these intensities are determined independently on census tracts, rather than interpolated from zip codes, as are the modified Mercalli isoseismals from Dewey et al. (1995). The ground-motion parameters we consider are the peak ground acceleration (PGA), the peak ground velocity (PGV), the 5% damped pseudovelocity response spectral (PSV) ordinates at 14 periods from 0.1 to 7.5 sec, and the rms average of these spectral ordinates from 0.3 to 3 sec. Visual comparisons of the distribution of tagging intensity with contours of PGA, PGV, and the average PSV suggest that PGV and the average PSV are better correlated with the intensity than PGA. The correlation coefficients between the intensity and the ground-motion parameters bear this out: r = 0.75 for PGA, 0.85 for PGV, and 0.85 for the average PSV. Correlations between the intensity and the PSV ordinates, as a function of period, are strongest at 1.5 sec (r = 0.83) and weakest at 0.2 sec (r = 0.66). Regressing the intensity on the logarithms of these ground-motion parameters yields relations I ∝ m log Y, with 3.0 ≤ m ≤ 5.2 for the parameters analyzed, where m = 4.4 ± 0.7 for PGA, 3.4 ± 0.4 for PGV, and 3.6 ± 0.5 for the average PSV.
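    The regression step described above — fitting intensity against the logarithm of a ground-motion parameter — can be sketched on synthetic data (all numbers here are invented; only the slope value 3.4 echoes the paper's PGV result):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic "stations": generate intensity as I = c + m*log10(PGV) + noise
    # with an invented slope near the paper's PGV value, then recover the
    # slope by least-squares regression of I on log10(PGV).
    m_true, c_true = 3.4, 1.0
    log_pgv = rng.uniform(0.5, 2.0, size=66)           # log10(PGV), invented
    intensity = c_true + m_true * log_pgv + rng.normal(0.0, 0.3, size=66)

    m_fit, c_fit = np.polyfit(log_pgv, intensity, 1)
    r = np.corrcoef(log_pgv, intensity)[0, 1]          # correlation coefficient
    ```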

  14. Entropic Lattice Boltzmann Simulations of Turbulence

    NASA Astrophysics Data System (ADS)

    Keating, Brian; Vahala, George; Vahala, Linda; Soe, Min; Yepez, Jeffrey

    2006-10-01

    Because of its simplicity and nearly perfect parallelization and vectorization on supercomputer platforms, lattice Boltzmann (LB) methods hold great promise for simulations of nonlinear physics. Indeed, our MHD-LB code has the best sustained performance/PE of any code on the Earth Simulator. By projecting into a higher-dimensional kinetic phase space, the solution trajectory is simpler and much easier to compute than in the standard CFD approach. However, simple LB -- with its simple advection and local BGK collisional relaxation -- does not impose positive definiteness of the distribution functions in the time evolution. This leads to numerical instabilities for very low transport coefficients. In entropic LB (ELB), one determines a discrete H-theorem and the equilibrium distribution functions subject to the collisional invariants. The ELB algorithm is unconditionally stable for arbitrarily small transport coefficients. Various choices of velocity discretization are examined: 15-, 19- and 27-bit ELB models. The connection between Tsallis and Boltzmann entropies is clarified.

  15. SIERRA/Aero Theory Manual Version 4.46.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Thermal/Fluid Team

    2017-09-01

    SIERRA/Aero is a two and three dimensional, node-centered, edge-based finite volume code that approximates the compressible Navier-Stokes equations on unstructured meshes. It is applicable to inviscid and high Reynolds number laminar and turbulent flows. Currently, two classes of turbulence models are provided: Reynolds Averaged Navier-Stokes (RANS) and hybrid methods such as Detached Eddy Simulation (DES). Large Eddy Simulation (LES) models are currently under development. The gas may be modeled either as ideal, or as a non-equilibrium, chemically reacting mixture of ideal gases. This document describes the mathematical models contained in the code, as well as certain implementation details. First, the governing equations are presented, followed by a description of the spatial discretization. Next, the time discretization is described, and finally the boundary conditions. Throughout the document, SIERRA/Aero is referred to simply as Aero for brevity.

  16. SIERRA/Aero Theory Manual Version 4.44

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Thermal /Fluid Team

    2017-04-01

    SIERRA/Aero is a two and three dimensional, node-centered, edge-based finite volume code that approximates the compressible Navier-Stokes equations on unstructured meshes. It is applicable to inviscid and high Reynolds number laminar and turbulent flows. Currently, two classes of turbulence models are provided: Reynolds Averaged Navier-Stokes (RANS) and hybrid methods such as Detached Eddy Simulation (DES). Large Eddy Simulation (LES) models are currently under development. The gas may be modeled either as ideal, or as a non-equilibrium, chemically reacting mixture of ideal gases. This document describes the mathematical models contained in the code, as well as certain implementation details. First, the governing equations are presented, followed by a description of the spatial discretization. Next, the time discretization is described, and finally the boundary conditions. Throughout the document, SIERRA/Aero is referred to simply as Aero for brevity.

  17. Computer simulations of phase field drops on super-hydrophobic surfaces

    NASA Astrophysics Data System (ADS)

    Fedeli, Livio

    2017-09-01

    We present a novel quasi-Newton continuation procedure that efficiently solves the system of nonlinear equations arising from the discretization of a phase field model for wetting phenomena. We perform a comparative numerical analysis that shows the improved speed of convergence gained with respect to other numerical schemes, and we discuss the conditions that, on a theoretical level, guarantee the convergence of this method. At each iterative step, a suitable continuation procedure develops and passes to the nonlinear solver an accurate initial guess. Spatial discretization is carried out with cell-centered finite differences, and the resulting system of equations is solved on a composite grid that uses dynamic mesh refinement and multi-grid techniques. The final code achieves realistic three-dimensional computer experiments comparable to those produced in laboratory settings. This code offers not only new insights into the phenomenology of super-hydrophobicity, but also serves as a reliable predictive tool for the study of hydrophobic surfaces.

  18. Rocket engine system reliability analyses using probabilistic and fuzzy logic techniques

    NASA Technical Reports Server (NTRS)

    Hardy, Terry L.; Rapp, Douglas C.

    1994-01-01

    The reliability of rocket engine systems was analyzed by using probabilistic and fuzzy logic techniques. Fault trees were developed for integrated modular engine (IME) and discrete engine systems, and then were used with the two techniques to quantify reliability. The IRRAS (Integrated Reliability and Risk Analysis System) computer code, developed for the U.S. Nuclear Regulatory Commission, was used for the probabilistic analyses, and FUZZYFTA (Fuzzy Fault Tree Analysis), a code developed at NASA Lewis Research Center, was used for the fuzzy logic analyses. Although both techniques provided estimates of the reliability of the IME and discrete systems, probabilistic techniques emphasized uncertainty resulting from randomness in the system whereas fuzzy logic techniques emphasized uncertainty resulting from vagueness in the system. Because uncertainty can have both random and vague components, both techniques were found to be useful tools in the analysis of rocket engine system reliability.
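    The probabilistic side of such a fault-tree analysis reduces to combining independent basic-event probabilities through AND/OR gates. A minimal sketch (the tree structure, the event probabilities, and the interval bounds standing in for the fuzzy treatment are all hypothetical, not from IRRAS or FUZZYFTA):

    ```python
    import numpy as np

    def or_gate(probs):
        """Top event occurs if ANY independent input event occurs."""
        return 1.0 - np.prod(1.0 - np.asarray(probs))

    def and_gate(probs):
        """Top event occurs only if ALL independent input events occur."""
        return float(np.prod(np.asarray(probs)))

    # Hypothetical engine fault tree: failure if either turbopump fails,
    # or if both redundant controllers fail. All probabilities invented.
    p_pump_a, p_pump_b, p_ctrl = 1e-3, 2e-3, 1e-2
    p_top = or_gate([p_pump_a, p_pump_b, and_gate([p_ctrl, p_ctrl])])

    # Crude stand-in for the vagueness analysis: propagate interval bounds
    # (like a fuzzy alpha-cut at its widest) instead of point probabilities.
    p_top_lo = or_gate([5e-4, 1e-3, and_gate([5e-3, 5e-3])])
    p_top_hi = or_gate([2e-3, 4e-3, and_gate([2e-2, 2e-2])])
    ```

    The point estimate captures randomness; the interval captures vagueness in the inputs, the distinction the abstract draws between the two techniques.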

  19. Discrete sensitivity derivatives of the Navier-Stokes equations with a parallel Krylov solver

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Taylor, Arthur C., III

    1994-01-01

    This paper solves an 'incremental' form of the sensitivity equations derived by differentiating the discretized thin-layer Navier Stokes equations with respect to certain design variables of interest. The equations are solved with a parallel, preconditioned Generalized Minimal RESidual (GMRES) solver on a distributed-memory architecture. The 'serial' sensitivity analysis code is parallelized by using the Single Program Multiple Data (SPMD) programming model, domain decomposition techniques, and message-passing tools. Sensitivity derivatives are computed for low and high Reynolds number flows over a NACA 1406 airfoil on a 32-processor Intel Hypercube, and found to be identical to those computed on a single-processor Cray Y-MP. It is estimated that the parallel sensitivity analysis code has to be run on 40-50 processors of the Intel Hypercube in order to match the single-processor processing time of a Cray Y-MP.

  20. Temporal and Rate Coding for Discrete Event Sequences in the Hippocampus.

    PubMed

    Terada, Satoshi; Sakurai, Yoshio; Nakahara, Hiroyuki; Fujisawa, Shigeyoshi

    2017-06-21

    Although the hippocampus is critical to episodic memory, neuronal representations supporting this role, especially relating to nonspatial information, remain elusive. Here, we investigated rate and temporal coding of hippocampal CA1 neurons in rats performing a cue-combination task that requires the integration of sequentially provided sound and odor cues. The majority of CA1 neurons displayed sensory cue-, combination-, or choice-specific (simply, "event"-specific) elevated discharge activities, which were sustained throughout the event period. These event cells underwent transient theta phase precession at event onset, followed by sustained phase locking to the early theta phases. As a result of this unique single neuron behavior, the theta sequences of CA1 cell assemblies of the event sequences had discrete representations. These results help to update the conceptual framework for space encoding toward a more general model of episodic event representations in the hippocampus.

  1. EnvironmentalWaveletTool: Continuous and discrete wavelet analysis and filtering for environmental time series

    NASA Astrophysics Data System (ADS)

    Galiana-Merino, J. J.; Pla, C.; Fernandez-Cortes, A.; Cuezva, S.; Ortiz, J.; Benavente, D.

    2014-10-01

    A MATLAB-based computer code has been developed for the simultaneous wavelet analysis and filtering of several environmental time series, particularly focused on the analysis of cave monitoring data. The continuous wavelet transform, the discrete wavelet transform and the discrete wavelet packet transform have been implemented to provide a fast and precise time-period examination of the time series at different period bands. Moreover, statistical methods for examining the relation between two signals have been included. Finally, entropy-of-curves and spline-based methods have been developed for segmenting and modeling the analyzed time series. Together, these methods provide a user-friendly and fast program for environmental signal analysis, with useful, practical and understandable results.
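    The discrete wavelet transform mentioned above can be illustrated with its simplest instance, the one-level Haar transform, which splits a series into scaled pairwise averages (the coarse band) and differences (the detail band) and reconstructs it exactly (a generic sketch in Python; the toolbox itself is MATLAB-based):

    ```python
    import numpy as np

    def haar_dwt(x):
        """One level of the Haar discrete wavelet transform: orthonormal
        pairwise averages (approximation) and differences (detail)."""
        p = np.asarray(x, dtype=float).reshape(-1, 2)
        return (p[:, 0] + p[:, 1]) / np.sqrt(2), (p[:, 0] - p[:, 1]) / np.sqrt(2)

    def haar_idwt(approx, detail):
        """Inverse one-level Haar transform (perfect reconstruction)."""
        even = (approx + detail) / np.sqrt(2)
        odd = (approx - detail) / np.sqrt(2)
        return np.column_stack([even, odd]).ravel()

    x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
    a, d = haar_dwt(x)          # d is zero wherever neighboring samples agree
    x_rec = haar_idwt(a, d)
    ```

    Because the transform is orthonormal, energy is preserved between the original series and its two bands, which is what makes band-by-band filtering well behaved.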

  2. BRYNTRN: A baryon transport computer code, computation procedures and data base

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Chun, Sang Y.; Buck, Warren W.; Khan, Ferdous; Cucinotta, Frank

    1988-01-01

    The development of an interaction database and a numerical solution for the transport of baryons through arbitrary shield materials, based on a straight-ahead approximation of the Boltzmann equation, is described. The code is most accurate for continuous-energy boundary values but gives reasonable results for discrete spectra at the boundary, even with a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O).

  3. A deterministic partial differential equation model for dose calculation in electron radiotherapy.

    PubMed

    Duclous, R; Dubroca, B; Frank, M

    2010-07-07

    High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. 
If more detailed effects such as coupled electron-photon transport, bremsstrahlung, Compton scattering and the production of delta electrons are added to our model, the computation time will only slightly increase. Its margin of error, on the other hand, will decrease and should be within a few per cent of the actual dose. Therefore, the new model has the potential to become useful for dose calculations in clinical practice.

  4. A deterministic partial differential equation model for dose calculation in electron radiotherapy

    NASA Astrophysics Data System (ADS)

    Duclous, R.; Dubroca, B.; Frank, M.

    2010-07-01

    High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. 
If more detailed effects such as coupled electron-photon transport, bremsstrahlung, Compton scattering and the production of δ electrons are added to our model, the computation time will only slightly increase. Its margin of error, on the other hand, will decrease and should be within a few per cent of the actual dose. Therefore, the new model has the potential to become useful for dose calculations in clinical practice.

  5. EAC: A program for the error analysis of STAGS results for plates

    NASA Technical Reports Server (NTRS)

    Sistla, Rajaram; Thurston, Gaylen A.; Bains, Nancy Jane C.

    1989-01-01

    A computer code is now available for estimating the error in results from the STAGS finite element code for a shell unit consisting of a rectangular orthotropic plate. This memorandum contains basic information about the computer code EAC (Error Analysis and Correction) and describes the connection between the input data for the STAGS shell units and the input data necessary to run the error analysis code. The STAGS code returns a set of nodal displacements and a discrete set of stress resultants; the EAC code returns a continuous solution for displacements and stress resultants. The continuous solution is defined by a set of generalized coordinates computed in EAC. The theory and the assumptions that determine the continuous solution are also outlined in this memorandum. An example of application of the code is presented and instructions on its usage on the Cyber and the VAX machines have been provided.

  6. The next-generation ESL continuum gyrokinetic edge code

    NASA Astrophysics Data System (ADS)

    Cohen, R.; Dorr, M.; Hittinger, J.; Rognlien, T.; Colella, P.; Martin, D.

    2009-05-01

    The Edge Simulation Laboratory (ESL) project is developing continuum-based approaches to kinetic simulation of edge plasmas. A new code is being developed, based on a conservative formulation and fourth-order discretization of full-f gyrokinetic equations in parallel-velocity, magnetic-moment coordinates. The code exploits mapped multiblock grids to deal with the geometric complexities of the edge region, and utilizes a new flux limiter [P. Colella and M.D. Sekora, JCP 227, 7069 (2008)] to suppress unphysical oscillations about discontinuities while maintaining high-order accuracy elsewhere. The code is just becoming operational; we will report initial tests for neoclassical orbit calculations in closed-flux surface and limiter (closed plus open flux surfaces) geometry. It is anticipated that the algorithmic refinements in the new code will address the slow numerical instability that was observed in some long simulations with the existing TEMPEST code. We will also discuss the status and plans for physics enhancements to the new code.

  7. Is a Genome a Codeword of an Error-Correcting Code?

    PubMed Central

    Kleinschmidt, João H.; Silva-Filho, Márcio C.; Bim, Edson; Herai, Roberto H.; Yamagishi, Michel E. B.; Palazzo, Reginaldo

    2012-01-01

    Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction. PMID:22649495
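    The membership test behind such an identification is simple to state for the binary Hamming(7,4) code: a vector is a codeword exactly when its syndrome under the parity-check matrix is zero. A minimal sketch of that test (a textbook construction, not the authors' actual DNA-mapping pipeline):

```python
# Hamming(7,4) parity-check matrix H: its columns are the binary
# representations of 1..7, so the syndrome of a single-bit error
# equals the (1-based) position of the flipped bit.
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def syndrome(word):
    """Syndrome of a length-7 binary vector over GF(2)."""
    return tuple(sum(h * b for h, b in zip(row, word)) % 2 for row in H)

def is_codeword(word):
    return syndrome(word) == (0, 0, 0)

codeword = [1, 1, 1, 1, 1, 1, 1]   # the all-ones word is a Hamming codeword
corrupted = codeword[:]
corrupted[4] ^= 1                  # flip the bit at 1-based position 5

print(is_codeword(codeword))   # True
print(is_codeword(corrupted))  # False
print(syndrome(corrupted))     # (1, 0, 1) -> binary 5, the error position
```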

  8. Parallel deterministic transport sweeps of structured and unstructured meshes with overloaded mesh decompositions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pautz, Shawn D.; Bailey, Teresa S.

    Here, the efficiency of discrete ordinates transport sweeps depends on the scheduling algorithm, the domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10⁵ processor cores.
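    The dependency structure that makes sweep scheduling non-trivial can be seen in a toy wavefront ordering for a single downstream direction on a small structured grid. This is a sketch only; the paper's schedulers and overloaded decompositions are far more involved:

```python
# Wavefront ("sweep") ordering on a 2D structured mesh for one octant:
# a cell (i, j) can be solved once its upstream neighbors (i-1, j) and
# (i, j-1) are done, so all cells on the same diagonal i + j are
# mutually independent and may be processed in parallel.
NX, NY = 4, 3
order = []
for d in range(NX + NY - 1):      # wavefront index
    front = [(i, d - i) for i in range(NX) if 0 <= d - i < NY]
    order.append(front)

for step, front in enumerate(order):
    print(step, front)            # step 0 is the corner cell (0, 0)
```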

  9. Shock-wave structure in a partially ionized gas

    NASA Technical Reports Server (NTRS)

    Lu, C. S.; Huang, A. B.

    1974-01-01

    The structure of a steady plane shock in a partially ionized gas has been investigated using the Boltzmann equation with a kinetic model as the governing equation and the discrete ordinate method as a tool. The effects of the electric field induced by the charge separation on the shock structure have also been studied. Although the three species of an ionized gas travel with approximately the same macroscopic velocity, the individual distribution functions are found to be very different. In a strong shock the atom distribution function may have double peaks, while the ion distribution function has only one peak. Electrons are heated up much earlier than ions and atoms in a partially ionized gas. Because the interactions of electrons with atoms and with ions are different, the ion temperature can be different from the atom temperature.
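    In the discrete ordinate method, the continuous velocity dependence of each species' distribution function is replaced by a fixed set of quadrature points, so macroscopic moments become weighted sums. A toy one-dimensional sketch with a 3-point Gauss-Hermite rule (illustrative only; it is not the kinetic-model solver used in the paper):

```python
import math

# 3-point Gauss-Hermite rule for the weight exp(-v^2):
# nodes 0 and +-sqrt(3/2), weights 2*sqrt(pi)/3 and sqrt(pi)/6.
NODES = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]
WEIGHTS = [math.sqrt(math.pi) / 6, 2 * math.sqrt(math.pi) / 3,
           math.sqrt(math.pi) / 6]

def moment(order):
    """Discrete-ordinate approximation of the order-th velocity moment
    of the normalized Maxwellian f(v) = exp(-v^2)/sqrt(pi)."""
    return sum(w * v**order / math.sqrt(math.pi)
               for w, v in zip(WEIGHTS, NODES))

print(moment(0))  # density  ~ 1.0
print(moment(1))  # momentum ~ 0.0
print(moment(2))  # energy   ~ 0.5 (the rule is exact up to degree 5)
```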

  10. Coexistence of cyclic (CH3OH)2(H2O)8 heterodecamer and acyclic water trimer in the channels of silver-azelate framework

    NASA Astrophysics Data System (ADS)

    Luo, Geng-Geng; Zhu, Rui-Min; He, Wei-Jun; Li, Ming-Zhi; Zhao, Qing-Hua; Li, Dong-Xu; Dai, Jing-Cao

    2012-08-01

    Flexible azelaic acid (H2aze) and 1,3-bis(4-pyridyl)propane (bpp) react ultrasonically with silver(I) oxide, generating a new metal-organic framework [Ag2(bpp)2(aze)·7H2O·CH3OH]n (1) that forms a 3D supramolecular structure through H-bonding interactions between solvent molecules and carboxylate O atoms with void spaces. Two kinds of solvent clusters, discrete cyclic (CH3OH)2(H2O)8 heterodecameric and acyclic water trimeric clusters, occupy the channels in the structure. Furthermore, 1 exhibits strong photoluminescence with a maximum at 500 nm upon 350 nm excitation at room temperature, whose CIE chromaticity coordinates (x = 0.28, y = 0.44) are close to the edge of the green region.

  11. Parallel deterministic transport sweeps of structured and unstructured meshes with overloaded mesh decompositions

    DOE PAGES

    Pautz, Shawn D.; Bailey, Teresa S.

    2016-11-29

    Here, the efficiency of discrete ordinates transport sweeps depends on the scheduling algorithm, the domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10⁵ processor cores.

  12. Calculation and experimental validation of spectral properties of microsize grains surrounded by nanoparticles.

    PubMed

    Yu, Haitong; Liu, Dong; Duan, Yuanyuan; Wang, Xiaodong

    2014-04-07

    Opacified aerogels are particulate thermal insulating materials in which micrometric opacifier mineral grains are surrounded by silica aerogel nanoparticles. A geometric model was developed to characterize the spectral properties of such microsize grains surrounded by much smaller particles. The model represents the material's microstructure with the spherical opacifier's spectral properties calculated using the multi-sphere T-matrix (MSTM) algorithm. The results are validated by comparing the measured reflectance of an opacified aerogel slab against the value predicted using the discrete ordinate method (DOM) based on calculated optical properties. The results suggest that the large particles embedded in the nanoparticle matrices show different scattering and absorption properties from the single scattering condition and that the MSTM and DOM algorithms are both useful for calculating the spectral and radiative properties of this particulate system.

  13. Statistical physics inspired energy-efficient coded-modulation for optical communications.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2012-04-15

    Because Shannon's entropy can be obtained by Stirling's approximation of the thermodynamic entropy, statistical physics energy minimization methods are directly applicable to signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose a discrete-time implementation of the D-dimensional transceiver and the corresponding EE polarization-division multiplexed system. © 2012 Optical Society of America

  14. An efficient direct solver for rarefied gas flows with arbitrary statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diaz, Manuel A., E-mail: f99543083@ntu.edu.tw; Yang, Jaw-Yen, E-mail: yangjy@iam.ntu.edu.tw; Center of Advanced Study in Theoretical Science, National Taiwan University, Taipei 10167, Taiwan

    2016-01-15

    A new numerical methodology associated with a unified treatment is presented to solve the Boltzmann–BGK equation of gas dynamics for classical and quantum gases described by the Bose–Einstein and Fermi–Dirac statistics. Utilizing a class of globally-stiffly-accurate implicit–explicit Runge–Kutta schemes for the temporal evolution, together with the discrete ordinate method for the quadratures in momentum space and the weighted essentially non-oscillatory method for the spatial discretization, the proposed scheme is asymptotic-preserving and requires neither a non-linear solver nor knowledge of the fugacity and temperature to capture the flow structures in the hydrodynamic (Euler) limit. The proposed treatment overcomes the limitations found in the work by Yang and Muljadi (2011) [33] due to the non-linear nature of the quantum relations, and can be applied to studying the dynamics of a gas with internal degrees of freedom with correct values of the ratio of specific heats for flow regimes at all Knudsen numbers and energy wavelengths. The present methodology is numerically validated against the unified treatment on the one-dimensional shock tube problem and the two-dimensional Riemann problems for gases of arbitrary statistics. Descriptions of ideal quantum gases including rotational degrees of freedom have been successfully achieved under the proposed methodology.

  15. Unified implicit kinetic scheme for steady multiscale heat transfer based on the phonon Boltzmann transport equation

    NASA Astrophysics Data System (ADS)

    Zhang, Chuang; Guo, Zhaoli; Chen, Songze

    2017-12-01

    An implicit kinetic scheme is proposed to solve the stationary phonon Boltzmann transport equation (BTE) for multiscale heat transfer problem. Compared to the conventional discrete ordinate method, the present method employs a macroscopic equation to accelerate the convergence in the diffusive regime. The macroscopic equation can be taken as a moment equation for phonon BTE. The heat flux in the macroscopic equation is evaluated from the nonequilibrium distribution function in the BTE, while the equilibrium state in BTE is determined by the macroscopic equation. These two processes exchange information from different scales, such that the method is applicable to the problems with a wide range of Knudsen numbers. Implicit discretization is implemented to solve both the macroscopic equation and the BTE. In addition, a memory reduction technique, which is originally developed for the stationary kinetic equation, is also extended to phonon BTE. Numerical comparisons show that the present scheme can predict reasonable results both in ballistic and diffusive regimes with high efficiency, while the memory requirement is on the same order as solving the Fourier law of heat conduction. The excellent agreement with benchmark and the rapid converging history prove that the proposed macro-micro coupling is a feasible solution to multiscale heat transfer problems.

  16. Deregulation, Distrust, and Democracy: State and Local Action to Ensure Equitable Access to Healthy, Sustainably Produced Food.

    PubMed

    Wiley, Lindsay F

    2015-01-01

    Environmental, public health, alternative food, and food justice advocates are working together to achieve incremental agricultural subsidy and nutrition assistance reforms that increase access to fresh fruits and vegetables. When it comes to targeting food and beverage products for increased regulation and decreased consumption, however, the priorities of various food reform movements diverge. This article argues that foundational legal issues, including preemption of state and local authority to protect the public's health and welfare, increasing First Amendment protection for commercial speech, and eroding judicial deference to legislative policy judgments, present a more promising avenue for collaboration across movements than discrete food reform priorities around issues like sugary drinks, genetic modification, or organics. Using the Vermont Genetically Modified Organism (GMO) Labeling Act litigation, the Kauai GMO Cultivation Ordinance litigation, the New York City Sugary Drinks Portion Rule litigation, and the Cleveland Trans Fat Ban litigation as case studies, I discuss the foundational legal challenges faced by diverse food reformers, even when their discrete reform priorities diverge. I also explore the broader implications of cooperation among groups that respond differently to the "irrationalities" (from the public health perspective) or "values" (from the environmental and alternative food perspective) that permeate public risk perception for democratic governance in the face of scientific uncertainty.

  17. Optimal bit allocation for hybrid scalable/multiple-description video transmission over wireless channels

    NASA Astrophysics Data System (ADS)

    Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.

    2006-01-01

    In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product-code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
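    Allocation over discrete rate sets like this can be done by brute-force search when the sets are small. A toy sketch of unequal error protection for a two-layer stream (all numbers below are invented for illustration, not taken from the paper):

```python
from itertools import product

# Hypothetical per-layer source sizes, RCPC code rates with their
# assumed packet-loss probabilities, and distortion penalties.
SOURCE_BITS = {"base": 4000, "enh": 6000}          # source bits per frame
RCPC = {1/3: 0.001, 1/2: 0.01, 2/3: 0.05}          # code rate -> loss prob.
DISTORTION_IF_LOST = {"base": 100.0, "enh": 20.0}  # MSE penalty per layer
BUDGET = 22000                                      # channel bits per frame

def expected_distortion(rates):
    return sum(RCPC[r] * DISTORTION_IF_LOST[layer]
               for layer, r in rates.items())

def channel_bits(rates):
    # A rate-r code expands k source bits into k / r channel bits.
    return sum(SOURCE_BITS[layer] / r for layer, r in rates.items())

best = None
for rb, rn in product(RCPC, repeat=2):              # exhaustive search
    rates = {"base": rb, "enh": rn}
    if channel_bits(rates) <= BUDGET:
        d = expected_distortion(rates)
        if best is None or d < best[0]:
            best = (d, rates)

print(best)
```

With these numbers the optimum protects the base layer more heavily (rate 1/3) than the enhancement layer (rate 2/3), which is the unequal-error-protection behavior the abstract describes.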

  18. Analysis of Discrete-Source Damage Progression in a Tensile Stiffened Composite Panel

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Lotts, Christine G.; Sleight, David W.

    1999-01-01

    This paper demonstrates the progressive failure analysis capability in NASA Langley s COMET-AR finite element analysis code on a large-scale built-up composite structure. A large-scale five stringer composite panel with a 7-in. long discrete source damage was analyzed from initial loading to final failure including the geometric and material nonlinearities. Predictions using different mesh sizes, different saw cut modeling approaches, and different failure criteria were performed and assessed. All failure predictions have a reasonably good correlation with the test result.

  19. A pipeline design of a fast prime factor DFT on a finite field

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, In-Shek; Shao, H. M.; Reed, Irving S.; Shyu, Hsuen-Chyun

    1988-01-01

    A conventional prime factor discrete Fourier transform (DFT) algorithm is used to realize a discrete Fourier-like transform on the finite field GF(q^n). This algorithm is developed to compute cyclic convolutions of complex numbers and to decode Reed-Solomon codes. Such a pipeline fast prime factor DFT algorithm over GF(q^n) is regular, simple, expandable, and naturally suitable for VLSI implementation. An example illustrating the pipeline aspect of a 30-point transform over GF(q^n) is presented.

  20. An approach to solve group-decision-making problems with ordinal interval numbers.

    PubMed

    Fan, Zhi-Ping; Liu, Yang

    2010-10-01

    The ordinal interval number is a form of uncertain preference information in group decision making (GDM), yet it is seldom discussed in the existing research. This paper investigates how the ranking order of alternatives is determined based on preference information of ordinal interval numbers in GDM problems. When ranking a large quantity of ordinal interval numbers, the efficiency and accuracy of the ranking process are critical. A new approach is proposed to rank alternatives using ordinal interval numbers when every ranking ordinal in an ordinal interval number is assumed to be uniformly and independently distributed in its interval. First, we give the definition of the possibility degree for comparing two ordinal interval numbers and the related theoretical analysis. Then, to rank alternatives by comparing multiple ordinal interval numbers, a collective expectation possibility degree matrix on pairwise comparisons of alternatives is built, and an optimization model based on this matrix is constructed. Furthermore, an algorithm is presented to rank alternatives by solving the model. Finally, two examples illustrate the use of the proposed approach.
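    Under the uniform-independence assumption, a possibility degree for comparing two ordinal intervals can be computed by counting pairs. The half-credit tie convention below is a common choice, not necessarily the paper's exact definition:

```python
from fractions import Fraction

def possibility_degree(a, b):
    """P(A >= B) for ordinal interval numbers A = [a1, a2], B = [b1, b2],
    with each ranking ordinal uniform and independent on its interval.
    Ties count 1/2 (an assumed convention, not necessarily the paper's)."""
    a1, a2 = a
    b1, b2 = b
    wins = ties = 0
    for x in range(a1, a2 + 1):
        for y in range(b1, b2 + 1):
            if x > y:
                wins += 1
            elif x == y:
                ties += 1
    total = (a2 - a1 + 1) * (b2 - b1 + 1)
    return Fraction(2 * wins + ties, 2 * total)

print(possibility_degree((1, 3), (2, 4)))  # Fraction(2, 9)
print(possibility_degree((2, 4), (1, 3)))  # Fraction(7, 9)
```

The two degrees are complementary, P(A >= B) + P(B >= A) = 1, which is what makes a pairwise possibility-degree matrix usable for ranking.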

  1. Clean Indoor Air Ordinance Coverage in the Appalachian Region of the United States

    PubMed Central

    Liber, Alex; Pennell, Michael; Nealy, Darren; Hammer, Jana; Berman, Micah

    2010-01-01

    Objectives. We sought to quantitatively examine the pattern of, and socioeconomic factors associated with, adoption of clean indoor air ordinances in Appalachia. Methods. We collected and reviewed clean indoor air ordinances in Appalachian communities in 6 states and rated the ordinances for completeness of coverage in workplaces, restaurants, and bars. Additionally, we computed a strength score to measure coverage in 7 locations. We fit mixed-effects models to determine whether the presence of a comprehensive ordinance and the ordinance strength were related to community socioeconomic disadvantage. Results. Of the 332 communities included in the analysis, fewer than 20% had adopted a comprehensive workplace, restaurant, or bar ordinance. Most ordinances were weak, achieving on average only 43% of the total possible points. Communities with a higher unemployment rate were less likely and those with a higher education level were more likely to have a strong ordinance. Conclusions. The majority of residents in these communities are not protected from secondhand smoke. Efforts to pass strong statewide clean indoor air laws should take priority over local initiatives in these states. PMID:20466957

  2. FOURTH SEMINAR TO THE MEMORY OF D.N. KLYSHKO: Algebraic solution of the synthesis problem for coded sequences

    NASA Astrophysics Data System (ADS)

    Leukhin, Anatolii N.

    2005-08-01

    The algebraic solution of a 'complex' problem of synthesis of phase-coded (PC) sequences with a zero level of side lobes of the cyclic autocorrelation function (ACF) is proposed. It is shown that the solution of the synthesis problem is connected with the existence of difference sets for a given code dimension. The problem of estimating the number of possible code combinations for a given code dimension is solved. It is pointed out that the problem of synthesis of PC sequences is related to fundamental problems of discrete mathematics and, above all, to a number of combinatorial problems which, like the number factorisation problem, can be solved by algebraic methods using the theory of Galois fields and groups.
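    The zero-sidelobe property itself is easy to verify numerically. The classical Zadoff-Chu construction, a different, number-theoretic family rather than the difference-set construction of this paper, yields phase-coded sequences with exactly this ideal cyclic ACF:

```python
import cmath

def zadoff_chu(n_len, root=1):
    """Zadoff-Chu phase-coded sequence of odd length n_len
    (root must be coprime to n_len for the ideal-ACF property)."""
    return [cmath.exp(-1j * cmath.pi * root * n * (n + 1) / n_len)
            for n in range(n_len)]

def cyclic_acf(seq, shift):
    """Cyclic (periodic) autocorrelation at the given shift."""
    n = len(seq)
    return sum(seq[i] * seq[(i + shift) % n].conjugate() for i in range(n))

x = zadoff_chu(7)
sidelobes = [abs(cyclic_acf(x, k)) for k in range(1, 7)]
print(abs(cyclic_acf(x, 0)))  # peak equals the sequence length (~7)
print(max(sidelobes))         # vanishes to rounding error: zero side lobes
```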

  3. Fast genomic predictions via Bayesian G-BLUP and multilocus models of threshold traits including censored Gaussian data.

    PubMed

    Kärkkäinen, Hanni P; Sillanpää, Mikko J

    2013-09-04

    Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed.
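    The threshold (liability) model underlying the ordinal case maps a latent Gaussian variable to observed categories via fixed cut points. A minimal sketch of that mapping (the thresholds and category count below are invented for illustration; the paper estimates such quantities within the Bayesian model):

```python
import bisect
import random

# Threshold ("liability") model: an ordinal phenotype is observed by
# cutting a latent Gaussian liability at fixed thresholds.
THRESHOLDS = [-0.5, 0.8]   # 3 categories: (-inf,-0.5], (-0.5,0.8], (0.8,inf)

def to_ordinal(liability):
    """Ordinal category (0, 1, or 2) of a latent Gaussian liability."""
    return bisect.bisect_right(THRESHOLDS, liability)

random.seed(1)
liabilities = [random.gauss(0.0, 1.0) for _ in range(10000)]
counts = [0, 0, 0]
for l in liabilities:
    counts[to_ordinal(l)] += 1
# Category frequencies roughly match Phi(-0.5), Phi(0.8)-Phi(-0.5),
# and 1-Phi(0.8) for the standard normal CDF Phi.
print(counts)
```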

  4. Fast Genomic Predictions via Bayesian G-BLUP and Multilocus Models of Threshold Traits Including Censored Gaussian Data

    PubMed Central

    Kärkkäinen, Hanni P.; Sillanpää, Mikko J.

    2013-01-01

    Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed. PMID:23821618

  5. Comparative Study of Advanced Turbulence Models for Turbomachinery

    NASA Technical Reports Server (NTRS)

    Hadid, Ali H.; Sindir, Munir M.

    1996-01-01

    A computational study has been undertaken to assess the performance of advanced phenomenological turbulence models coded in a modular form to describe incompressible turbulent flow behavior in two-dimensional/axisymmetric and three-dimensional complex geometry. The models include a variety of two-equation models (single and multi-scale k-epsilon models with different near-wall treatments) and second-moment algebraic and full Reynolds stress closure models. These models were systematically assessed to evaluate their performance in complex flows with rotation, curvature and separation. The models are coded as self-contained modules that can be interfaced with a number of flow solvers. These modules are stand-alone satellite programs that come with their own formulation, finite-volume discretization scheme, solver and boundary condition implementation. They take as input (from any generic Navier-Stokes solver) the velocity field, grid (structured H-type grid) and computational domain specification (boundary conditions), and deliver, depending on the model used, the turbulent viscosity or the components of the Reynolds stress tensor. There are separate 2D/axisymmetric and/or 3D decks for each module considered. The modules are tested using Rocketdyne's proprietary code REACT. The code utilizes an efficient solution procedure to solve the Navier-Stokes equations in a non-orthogonal body-fitted coordinate system. The differential equations are discretized over a finite-volume grid using a non-staggered variable arrangement, and an efficient solution procedure based on the SIMPLE algorithm is used for the velocity-pressure coupling. The modules developed have been interfaced and tested using finite-volume, pressure-correction CFD solvers which are widely used in the CFD community. Other solvers can also be used to test these modules since they are independently structured with their own discretization scheme and solver methodology. Many of these modules have been independently tested by Professor C.P. Chen and his group at the University of Alabama in Huntsville (UAH) by interfacing them with their own flow solver (MAST).

  6. City curfew ordinances and teenage motor vehicle injury.

    PubMed

    Preusser, D F; Williams, A F; Lund, A K; Zador, P L

    1990-08-01

    Several U.S. cities have curfew ordinances that limit the late night activities of minor teenagers in public places including highways. Detroit, Cleveland, and Columbus, which have curfew ordinances, were compared to Cincinnati, which does not have such an ordinance. The curfew ordinances were associated with a 23% reduction in motor vehicle related injury for 13- to 17-year-olds as passengers, drivers, pedestrians, or bicyclists during the curfew hours. It was concluded that city curfew ordinances, like the statewide driving curfews studied in other states, can reduce motor vehicle injury to teenagers during the particularly hazardous late night hours.

  7. A new DWT/MC/DPCM video compression framework based on EBCOT

    NASA Astrophysics Data System (ADS)

    Mei, L. M.; Wu, H. R.; Tan, D. M.

    2005-07-01

    A novel Discrete Wavelet Transform (DWT)/Motion Compensation (MC)/Differential Pulse Code Modulation (DPCM) video compression framework is proposed in this paper. Although the Discrete Cosine Transform (DCT)/MC/DPCM is the mainstream framework for video coders in industry and international standards, the idea of DWT/MC/DPCM has existed for more than a decade in the literature and its investigation is still ongoing. The contribution of this work is twofold. Firstly, the Embedded Block Coding with Optimal Truncation (EBCOT) is used here as the compression engine for both intra- and inter-frame coding, which provides a good compression ratio and an embedded rate-distortion (R-D) optimization mechanism. This is an extension of the EBCOT application from still images to videos. Secondly, this framework offers a good interface for the Perceptual Distortion Measure (PDM) based on the Human Visual System (HVS), where the Mean Squared Error (MSE) can easily be replaced with the PDM in the R-D optimization. Some preliminary results are reported here. They are also compared with benchmarks such as MPEG-2 and MPEG-4 version 2. The results demonstrate that under the specified conditions the proposed coder outperforms the benchmarks in terms of rate vs. distortion.
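    The DWT front end of such a framework is easy to sketch: a one-level 1D Haar transform with perfect reconstruction (the 2D transform, the MC/DPCM loop, and the EBCOT engine are all omitted here):

```python
import math

def haar_forward(x):
    """One-level Haar DWT of an even-length signal: (approx, detail)."""
    s = math.sqrt(2)
    approx = [(a + b) / s for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / s for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Inverse of haar_forward: interleave reconstructed sample pairs."""
    s = math.sqrt(2)
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) / s)
        x.append((a - d) / s)
    return x

signal = [9.0, 7.0, 3.0, 5.0, 6.0, 10.0, 2.0, 6.0]
approx, detail = haar_forward(signal)
rec = haar_inverse(approx, detail)
print(all(abs(r - v) < 1e-12 for r, v in zip(rec, signal)))  # True
```

Most of the signal's energy ends up in the approximation band, which is what makes the subsequent embedded coding of the subbands effective.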

  8. On a Mathematical Theory of Coded Exposure

    DTIC Science & Technology

    2014-08-01

    formulae that give the MSE and SNR of the final crisp image 1. Assumes the Shannon-Whittaker framework that i) requires band limited (with a fre...represents the ideal crisp image, i.e., the image that one would observed if there were no noise whatsoever, no motion, with a perfect optical system...discrete. In addition, the image obtained by a coded exposure camera requires to undergo a deconvolution to get the final crisp image. Note that the

  9. Improved numerical methods for turbulent viscous recirculating flows

    NASA Technical Reports Server (NTRS)

    Turan, A.

    1985-01-01

    The hybrid-upwind finite difference schemes employed in generally available combustor codes possess excessive numerical diffusion errors which preclude accurate quantitative calculations. The present study has as its primary objective the identification and assessment of an improved solution algorithm as well as discretization schemes applicable to the analysis of turbulent viscous recirculating flows. The assessment is carried out primarily in two-dimensional/axisymmetric geometries with a view to identifying an appropriate technique to be incorporated in a three-dimensional code.
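    The numerical-diffusion error of upwind-type schemes is visible even in the simplest setting: first-order upwind applied to 1D linear advection of a step stays stable and bounded but smears the discontinuity. A generic demonstration, not the combustor codes themselves:

```python
# Advect a step profile with u > 0 using first-order upwind:
#   q_i^{n+1} = q_i - c (q_i - q_{i-1}),   c = u*dt/dx (CFL number).
# For c <= 1 the scheme is stable and monotone (values stay in [0, 1]
# and the total is conserved on a periodic grid), but the sharp step
# spreads over many cells: the numerical diffusion discussed above.
N, c, steps = 50, 0.5, 20
q = [1.0 if i < 10 else 0.0 for i in range(N)]

for _ in range(steps):
    # q[i - 1] with i = 0 wraps to q[-1]: periodic boundary.
    q = [q[i] - c * (q[i] - q[i - 1]) for i in range(N)]

front = [v for v in q if 0.01 < v < 0.99]
print(len(front))  # cells holding intermediate, smeared values
```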

  10. Two centuries of masting data for European beech and Norway spruce across the European continent.

    PubMed

    Ascoli, Davide; Maringer, Janet; Hacket-Pain, Andy; Conedera, Marco; Drobyshev, Igor; Motta, Renzo; Cirolli, Mara; Kantorowicz, Władysław; Zang, Christian; Schueler, Silvio; Croisé, Luc; Piussi, Pietro; Berretti, Roberta; Palaghianu, Ciprian; Westergren, Marjana; Lageard, Jonathan G A; Burkart, Anton; Gehrig Bichsel, Regula; Thomas, Peter A; Beudert, Burkhard; Övergaard, Rolf; Vacchiano, Giorgio

    2017-05-01

    Tree masting is one of the most intensively studied ecological processes. It affects nutrient fluxes of trees, regeneration dynamics in forests, animal population densities, and ultimately influences ecosystem services. Despite a large volume of research focused on masting, its evolutionary ecology, spatial and temporal variability, and environmental drivers are still matter of debate. Understanding the proximate and ultimate causes of masting at broad spatial and temporal scales will enable us to predict tree reproductive strategies and their response to changing environment. Here we provide broad spatial (distribution range-wide) and temporal (century) masting data for the two main masting tree species in Europe, European beech (Fagus sylvatica L.) and Norway spruce (Picea abies (L.) H. Karst.). We collected masting data from a total of 359 sources through an extensive literature review and from unpublished surveys. The data set has a total of 1,747 series and 18,348 yearly observations from 28 countries and covering a time span of years 1677-2016 and 1791-2016 for beech and spruce, respectively. For each record, the following information is available: identification code; species; year of observation; proxy of masting (flower, pollen, fruit, seed, dendrochronological reconstructions); statistical data type (ordinal, continuous); data value; unit of measurement (only in case of continuous data); geographical location (country, Nomenclature of Units for Territorial Statistics NUTS-1 level, municipality, coordinates); first and last record year and related length; type of data source (field survey, peer reviewed scientific literature, gray literature, personal observation); source identification code; date when data were added to the database; comments. To provide a ready-to-use masting index we harmonized ordinal data into five classes. 
    Furthermore, we computed an additional field where continuous series with length >4 yr were converted into a five-class ordinal index. To our knowledge, this is the most comprehensive published database on species-specific masting behavior. It is useful for studying spatial and temporal patterns of masting and its proximate and ultimate causes, for refining studies based on tree-ring chronologies, and for understanding the dynamics of animal species and pests vectored by these animals affecting human health, and it may serve as calibration-validation data for dynamic forest models. © 2017 by the Ecological Society of America.
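    A simple quantile rule shows how a continuous series can be collapsed into a five-class ordinal index. The cut rule and the numbers below are assumptions for illustration; the database documentation defines the actual harmonization procedure:

```python
# Convert a continuous masting series (e.g. seed counts) into a
# five-class ordinal index using quintile cuts (an assumed binning
# rule; the database's exact harmonization may differ).
def to_five_classes(series):
    ranked = sorted(series)
    n = len(ranked)
    # class boundaries at the 20/40/60/80 % quantiles
    cuts = [ranked[int(n * q)] for q in (0.2, 0.4, 0.6, 0.8)]
    return [1 + sum(v > c for c in cuts) for v in series]

seed_counts = [5.0, 51.0, 12.0, 180.0, 30.0, 2.0, 75.0, 22.0, 130.0, 8.0]
print(to_five_classes(seed_counts))  # [1, 3, 2, 5, 3, 1, 4, 2, 4, 1]
```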

  11. Gap-minimal systems of notations and the constructible hierarchy

    NASA Technical Reports Server (NTRS)

    Lucian, M. L.

    1972-01-01

    If a constructibly countable ordinal alpha is a gap ordinal, then the order type of the set of index ordinals smaller than alpha is exactly alpha. The gap ordinals are the only points of discontinuity of a certain ordinal-valued function. The notion of gap minimality for well ordered systems of notations is defined, and the existence of gap-minimal systems of notations of arbitrarily large constructibly countable length is established.

  12. Dynamic simulations of geologic materials using combined FEM/DEM/SPH analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, J P; Johnson, S M

    2008-03-26

    An overview of the Lawrence Discrete Element Code (LDEC) is presented, and results from a study investigating the effect of explosive and impact loading on geologic materials using LDEC are detailed. LDEC was initially developed to simulate tunnels and other structures in jointed rock masses using large numbers of polyhedral blocks. Many geophysical applications, such as projectile penetration into rock, concrete targets, and boulder fields, require a combination of continuum and discrete methods in order to predict the formation and interaction of the fragments produced. In an effort to model this class of problems, LDEC now includes implementations of Cosserat point theory and cohesive elements. This approach directly simulates the transition from continuum to discontinuum behavior, thereby allowing for dynamic fracture within a combined finite element/discrete element framework. In addition, there are many applications involving geologic materials where fluid-structure interaction is important. To facilitate solution of this class of problems, a Smooth Particle Hydrodynamics (SPH) capability has been incorporated into LDEC to simulate fully coupled systems involving geologic materials and a saturating fluid. We present results from a study of a broad range of geomechanical problems that exercise the various components of LDEC in isolation and in tandem.

  13. 78 FR 54670 - Miami Tribe of Oklahoma-Liquor Control Ordinance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-05

    ... Tribe of Oklahoma--Liquor Control Ordinance AGENCY: Bureau of Indian Affairs, Interior. ACTION: Notice. SUMMARY: This notice publishes the Miami Tribe of Oklahoma--Liquor Control Ordinance. This Ordinance... Oklahoma, increases the ability of the tribal government to control the distribution and possession of...

  14. Tax revenue in Mississippi communities following implementation of smoke-free ordinances: an examination of tourism and economic development tax revenues.

    PubMed

    McMillen, Robert; Shackelford, Signe

    2012-10-01

    There is no safe level of exposure to tobacco smoke. More than 60 Mississippi communities have passed smoke-free ordinances in the past six years. Opponents claim that these ordinances harm local businesses. Mississippi law allows municipalities to place a tourism and economic development (TED) tax on local restaurants and hotels/motels. The objective of this study is to examine the impact of these ordinances on TED tax revenues. This study applies a pre/post quasi-experimental design to compare TED tax revenue before and after implementing ordinances. Descriptive analyses indicated that inflation-adjusted tax revenues increased during the 12 months following implementation of smoke-free ordinances while there was no change in aggregated control communities. Multivariate fixed-effects analyses found no statistically significant effect of smoke-free ordinances on hospitality tax revenue. No evidence was found that smoke-free ordinances have an adverse effect on the local hospitality industry.

  15. Recession, debt and mental health: challenges and solutions

    PubMed Central

    2009-01-01

    Background During the economic downturn, the link between recession and health has featured in many countries' media, political, and medical debate. This paper focuses on the previously neglected relationship between personal debt and mental health. Aims Using the UK as a case study, this paper considers the public health challenges presented by debt to mental health. We then propose solutions identified in workshops held during the UK Government's Foresight Review of Mental Capital and Wellbeing. Results Within their respective sectors, health professionals should receive basic ‘debt first aid’ training, whilst all UK financial sector codes of practice should – as a minimum – recognise the existence of customers with mental health problems. Further longitudinal research is also needed to ‘unpack’ the relationship between debt and mental health. Across sectors, a lack of co-ordinated activity across health, money advice, and creditor organisations remains a weakness. A renewed emphasis on co-ordinated ‘debt care pathways’ and better communication between local health and advice services is needed. Discussion The relationship between debt and mental health presents a contemporary public health challenge. Solutions exist, but will require action and investment at a time of competition for funds. PMID:22477896

  16. Stencil computations for PDE-based applications with examples from DUNE and hypre

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engwer, C.; Falgout, R. D.; Yang, U. M.

    Here, stencils are commonly used to implement efficient on–the–fly computations of linear operators arising from partial differential equations. At the same time the term “stencil” is not fully defined and can be interpreted differently depending on the application domain and the background of the software developers. Common features in stencil codes are the preservation of the structure given by the discretization of the partial differential equation and the benefit of minimal data storage. We discuss stencil concepts of different complexity, show how they are used in modern software packages like hypre and DUNE, and discuss recent efforts to extend the software to enable stencil computations of more complex problems and methods such as inf–sup–stable Stokes discretizations and mixed finite element discretizations.

  17. Stencil computations for PDE-based applications with examples from DUNE and hypre

    DOE PAGES

    Engwer, C.; Falgout, R. D.; Yang, U. M.

    2017-02-24

    Here, stencils are commonly used to implement efficient on–the–fly computations of linear operators arising from partial differential equations. At the same time the term “stencil” is not fully defined and can be interpreted differently depending on the application domain and the background of the software developers. Common features in stencil codes are the preservation of the structure given by the discretization of the partial differential equation and the benefit of minimal data storage. We discuss stencil concepts of different complexity, show how they are used in modern software packages like hypre and DUNE, and discuss recent efforts to extend the software to enable stencil computations of more complex problems and methods such as inf–sup–stable Stokes discretizations and mixed finite element discretizations.

  18. Video compression of coronary angiograms based on discrete wavelet transform with block classification.

    PubMed

    Ho, B T; Tsai, M J; Wei, J; Ma, M; Saipetch, P

    1996-01-01

    A new method of video compression for angiographic images has been developed to achieve a high compression ratio (~20:1) while eliminating the block artifacts that lead to loss of diagnostic accuracy. This method adopts the Motion Picture Experts Group's (MPEG's) motion-compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme which labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve the compression efficiency in certain cases. This method can be generalized for any dynamic image sequence application sensitive to block artifacts.

  19. Penal Code (Ordinance No. 12 of 1983), 1 July 1984.

    PubMed

    1987-01-01

    This document contains provisions of the 1984 Penal Code of Montserrat relating to sexual offenses, abortion, offenses relating to marriage, homicide and other offenses against the person, and neglect endangering life or health. Part 8 of the Code holds that a man found guilty of raping a woman is liable to life imprisonment. Rape is deemed to involve unlawful (extramarital) sexual intercourse with a woman without her consent (this is determined if the rape involved force, threats, administration of drugs, or false representation). The Code also defines offenses in cases of incest, child abuse, prostitution, abduction, controlling the actions and finances of a prostitute, and having unlawful sexual intercourse with a mentally defective woman. Part 9 of the Code outlaws abortion unless it is conducted in an approved establishment after two medical practitioners have determined that continuing the pregnancy would risk the life or physical/mental health of the pregnant woman or if a substantial risk exists that the child would have serious abnormalities. Part 10 outlaws bigamy, and part 12 holds that infanticide performed by a mother suffering postpartum imbalances can be prosecuted as manslaughter. This part also outlaws concealment of the body of a newborn, whether that child died before, at, or after birth, and aggravated assault on any child not more than 14 years old. Part 12 makes it an offense to subject any child to neglect endangering its life or health.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bdzil, John Bohdan

    The full level-set function code, DSD3D, is fully described in LA-14336 (2007) [1]. This ASCI-supported, DSD code project was the last such LANL DSD code project that I was involved with before my retirement in 2007. My part in the project was to design and build the core DSD3D solver, which was to include a robust DSD boundary condition treatment. A robust boundary condition treatment was required, since for an important local “customer,” the only description of the explosives’ boundary was through volume fraction data. Given this requirement, the accuracy issues I had encountered with our “fast-tube,” narrowband, DSD2D solver,more » and the difficulty we had building an efficient MPI-parallel version of the narrowband DSD2D, I decided DSD3D should be built as a full level-set function code, using a totally local DSD boundary condition algorithm for the level-­set function, phi, which did not rely on the gradient of the level-­set function being one, |grad(phi)| = 1. The narrowband DSD2D solver was built on the assumption that |grad(phi)| could be driven to one, and near the boundaries of the explosive this condition was not being satisfied. Since the narrowband is typically no more than10*dx wide, narrowband methods are discrete methods with a fixed, non-­resolvable error, where the error is related to the thickness of the band: the narrower the band the larger the errors. Such a solution represents a discrete approximation to the true solution and does not limit to the solution of the underlying PDEs under grid resolution.The full level-­set function code, DSD3D, is fully described in LA-14336 (2007) [1]. This ASCI-­supported, DSD code project was the last such LANL DSD code project that I was involved with before my retirement in 2007. My part in the project was to design and build the core DSD3D solver, which was to include a robust DSD boundary condition treatment. 
A robust boundary condition treatment was required, since for an important local “customer,” the only description of the explosives’ boundary was through volume fraction data. Given this requirement, the accuracy issues I had encountered with our “fast-­tube,” narrowband, DSD2D solver, and the difficulty we had building an efficient MPI-parallel version of the narrowband DSD2D, I decided DSD3D should be built as a full level-­set function code, using a totally local DSD boundary condition algorithm for the level-­set function, phi, which did not rely on the gradient of the level-­set function being one, |grad(phi)| = 1. The narrowband DSD2D solver was built on the assumption that |grad(phi)| could be driven to one, and near the boundaries of the explosive this condition was not being satisfied. Since the narrowband is typically no more than10*dx wide, narrowband methods are discrete methods with a fixed, non-resolvable error, where the error is related to the thickness of the band: the narrower the band the larger the errors. Such a solution represents a discrete approximation to the true solution and does not limit to the solution of the underlying PDEs under grid resolution.« less

  1. 75 FR 65373 - Klamath Tribes Liquor Control Ordinance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-22

    ... DEPARTMENT OF THE INTERIOR Bureau of Indian Affairs Klamath Tribes Liquor Control Ordinance AGENCY... certification of the amendment to the Klamath Tribes Liquor Control Ordinance. The first Ordinance was published... and controls the sale, possession and distribution of liquor within the tribal lands. The tribal lands...

  2. Ordinal probability effect measures for group comparisons in multinomial cumulative link models.

    PubMed

    Agresti, Alan; Kateri, Maria

    2017-03-01

    We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/2)/[1+exp(β/2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
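    The closed forms quoted in the abstract can be evaluated directly. The sketch below is illustrative only (the function name and interface are not from the paper); it computes the ordinal superiority measure implied by a group effect β under each of the three link functions mentioned, using only the standard library.

    ```python
    import math

    def ordinal_superiority(beta, link):
        """Ordinal superiority measure implied by group effect beta,
        using the closed forms quoted in the abstract. The function
        name and interface are illustrative, not from the paper."""
        if link == "probit":
            # Phi(beta / 2), with Phi the standard normal cdf via erf
            return 0.5 * (1.0 + math.erf((beta / 2.0) / math.sqrt(2.0)))
        if link == "loglog":
            # exp(beta) / (1 + exp(beta)) for the log-log link
            return math.exp(beta) / (1.0 + math.exp(beta))
        if link == "logit":
            # approximately exp(beta/2) / (1 + exp(beta/2))
            return math.exp(beta / 2.0) / (1.0 + math.exp(beta / 2.0))
        raise ValueError("unknown link: %s" % link)

    # beta = 0 (no group effect) gives 0.5 under every link
    print(ordinal_superiority(0.0, "probit"))  # -> 0.5
    ```

    A value of 0.5 means an observation from either group is equally likely to fall above one from the other; values above 0.5 favor the group coded 1.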

  3. Ordinality and the nature of symbolic numbers.

    PubMed

    Lyons, Ian M; Beilock, Sian L

    2013-10-23

    The view that representations of symbolic and nonsymbolic numbers are closely tied to one another is widespread. However, the link between symbolic and nonsymbolic numbers is almost always inferred from cardinal processing tasks. In the current work, we show that considering ordinality instead points to striking differences between symbolic and nonsymbolic numbers. Human behavioral and neural data show that ordinal processing of symbolic numbers (Are three Indo-Arabic numerals in numerical order?) is distinct from symbolic cardinal processing (Which of two numerals represents the greater quantity?) and nonsymbolic number processing (ordinal and cardinal judgments of dot-arrays). Behaviorally, distance-effects were reversed when assessing ordinality in symbolic numbers, but canonical distance-effects were observed for cardinal judgments of symbolic numbers and all nonsymbolic judgments. At the neural level, symbolic number-ordering was the only numerical task that did not show number-specific activity (greater than control) in the intraparietal sulcus. Only activity in left premotor cortex was specifically associated with symbolic number-ordering. For nonsymbolic numbers, activation in cognitive-control areas during ordinal processing and a high degree of overlap between ordinal and cardinal processing networks indicate that nonsymbolic ordinality is assessed via iterative cardinality judgments. This contrasts with a striking lack of neural overlap between ordinal and cardinal judgments anywhere in the brain for symbolic numbers, suggesting that symbolic number processing varies substantially with computational context. Ordinal processing sheds light on key differences between symbolic and nonsymbolic number processing both behaviorally and in the brain. Ordinality may prove important for understanding the power of representing numbers symbolically.

  4. Elementary dispersion analysis of some mimetic discretizations on triangular C-grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korn, P., E-mail: peter.korn@mpimet.mpg.de; Danilov, S.; A.M. Obukhov Institute of Atmospheric Physics, Moscow

    2017-02-01

    Spurious modes supported by triangular C-grids limit their application for modeling large-scale atmospheric and oceanic flows. Their behavior can be modified within a mimetic approach that generalizes the scalar product underlying the triangular C-grid discretization. The mimetic approach provides a discrete continuity equation which operates on an averaged combination of normal edge velocities instead of normal edge velocities proper. An elementary analysis of the wave dispersion of the new discretization for Poincaré, Rossby and Kelvin waves shows that, although spurious Poincaré modes are preserved, their frequency tends to zero in the limit of small wavenumbers, which removes the divergence noise in this limit. However, the frequencies of spurious and physical modes become close on shorter scales indicating that spurious modes can be excited unless high-frequency short-scale motions are effectively filtered in numerical codes. We argue that filtering by viscous dissipation is more efficient in the mimetic approach than in the standard C-grid discretization. Lumping of mass matrices appearing with the velocity time derivative in the mimetic discretization only slightly reduces the accuracy of the wave dispersion and can be used in practice. Thus, the mimetic approach cures some difficulties of the traditional triangular C-grid discretization but may still need appropriately tuned viscosity to filter small scales and high frequencies in solutions of full primitive equations when these are excited by nonlinear dynamics.

  5. Social Host Ordinances and Policies. Prevention Update

    ERIC Educational Resources Information Center

    Higher Education Center for Alcohol, Drug Abuse, and Violence Prevention, 2011

    2011-01-01

    Social host liability laws (also known as teen party ordinances, loud or unruly gathering ordinances, or response costs ordinances) target the location in which underage drinking takes place. Social host liability laws hold noncommercial individuals responsible for underage drinking events on property they own, lease, or otherwise control. They…

  6. 25 CFR 522.8 - Publication of class III ordinance and approval.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Section 522.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR APPROVAL OF CLASS II AND CLASS III ORDINANCES AND RESOLUTIONS SUBMISSION OF GAMING ORDINANCE OR RESOLUTION § 522.8 Publication of class III ordinance and approval. The Chairman shall publish a class III tribal gaming...

  7. 27 CFR 478.24 - Compilation of State laws and published ordinances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... and published ordinances. 478.24 Section 478.24 Alcohol, Tobacco Products, and Firearms BUREAU OF... published ordinances. (a) The Director shall annually revise and furnish Federal firearms licensees with a compilation of State laws and published ordinances which are relevant to the enforcement of this part. The...

  8. Parallel Discrete Molecular Dynamics Simulation With Speculation and In-Order Commitment*†

    PubMed Central

    Khan, Md. Ashfaquzzaman; Herbordt, Martin C.

    2011-01-01

    Discrete molecular dynamics simulation (DMD) uses simplified and discretized models enabling simulations to advance by event rather than by timestep. DMD is an instance of discrete event simulation and so is difficult to scale: even in this multi-core era, all reported DMD codes are serial. In this paper we discuss the inherent difficulties of scaling DMD and present our method of parallelizing DMD through event-based decomposition. Our method is microarchitecture inspired: speculative processing of events exposes parallelism, while in-order commitment ensures correctness. We analyze the potential of this parallelization method for shared-memory multiprocessors. Achieving scalability required extensive experimentation with scheduling and synchronization methods to mitigate serialization. The speed-up achieved for a variety of system sizes and complexities is nearly 6× on an 8-core and over 9× on a 12-core processor. We present and verify analytical models that account for the achieved performance as a function of available concurrency and architectural limitations. PMID:21822327

  9. Parallel Discrete Molecular Dynamics Simulation With Speculation and In-Order Commitment.

    PubMed

    Khan, Md Ashfaquzzaman; Herbordt, Martin C

    2011-07-20

    Discrete molecular dynamics simulation (DMD) uses simplified and discretized models enabling simulations to advance by event rather than by timestep. DMD is an instance of discrete event simulation and so is difficult to scale: even in this multi-core era, all reported DMD codes are serial. In this paper we discuss the inherent difficulties of scaling DMD and present our method of parallelizing DMD through event-based decomposition. Our method is microarchitecture inspired: speculative processing of events exposes parallelism, while in-order commitment ensures correctness. We analyze the potential of this parallelization method for shared-memory multiprocessors. Achieving scalability required extensive experimentation with scheduling and synchronization methods to mitigate serialization. The speed-up achieved for a variety of system sizes and complexities is nearly 6× on an 8-core and over 9× on a 12-core processor. We present and verify analytical models that account for the achieved performance as a function of available concurrency and architectural limitations.

  10. Improved Discretization of Grounding Lines and Calving Fronts using an Embedded-Boundary Approach in BISICLES

    NASA Astrophysics Data System (ADS)

    Martin, D. F.; Cornford, S. L.; Schwartz, P.; Bhalla, A.; Johansen, H.; Ng, E.

    2017-12-01

    Correctly representing grounding line and calving-front dynamics is of fundamental importance in modeling marine ice sheets, since the configuration of these interfaces exerts a controlling influence on the dynamics of the ice sheet. Traditional ice sheet models have struggled to correctly represent these regions without very high spatial resolution. We have developed a front-tracking discretization for grounding lines and calving fronts based on the Chombo embedded-boundary cut-cell framework. This promises better representation of these interfaces vs. a traditional stair-step discretization on Cartesian meshes like those currently used in the block-structured AMR BISICLES code. The dynamic adaptivity of the BISICLES model complements the subgrid-scale discretizations of this scheme, producing a robust approach for tracking the evolution of these interfaces. Also, the fundamental discontinuous nature of flow across grounding lines is respected by mathematically treating it as a material phase change. We present examples of this approach to demonstrate its effectiveness.

  11. Detection and Modeling of High-Dimensional Thresholds for Fault Detection and Diagnosis

    NASA Technical Reports Server (NTRS)

    He, Yuning

    2015-01-01

    Many Fault Detection and Diagnosis (FDD) systems use discrete models for detection and reasoning. To obtain categorical values like "oil pressure too high," analog sensor values need to be discretized using a suitable threshold. Time series of analog and discrete sensor readings are processed and discretized as they come in. This task is usually performed by the "wrapper code" of the FDD system, together with signal preprocessing and filtering. In practice, selecting the right threshold is very difficult, because it heavily influences the quality of diagnosis. If a threshold causes the alarm to trigger even in nominal situations, false alarms will be the consequence. On the other hand, if the threshold setting does not trigger in case of an off-nominal condition, important alarms might be missed, potentially causing hazardous situations. In this paper, we describe in detail the underlying statistical modeling techniques and algorithm, as well as the Bayesian method for selecting the most likely shape and its parameters. Our approach is illustrated by several examples from the aerospace domain.
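    The thresholding step performed by such wrapper code can be sketched in a few lines. This is a minimal illustration, not the paper's FDD implementation; the function and label names are hypothetical.

    ```python
    def discretize(value, thresholds, labels):
        """Map an analog sensor reading to an ordered categorical label.

        thresholds must be sorted ascending; labels needs one more entry
        than thresholds. Names here are illustrative, not from the paper.
        """
        for cut, label in zip(thresholds, labels):
            if value <= cut:
                return label
        return labels[-1]

    # e.g. oil pressure with a single hypothetical "too high" cut-off at 80
    print(discretize(95.0, [80.0], ["nominal", "too high"]))  # -> too high
    ```

    The paper's point is that the cut-off values themselves (80.0 above) are the hard part: moving a threshold trades false alarms against missed alarms.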

  12. Some practical universal noiseless coding techniques

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1979-01-01

    Some practical adaptive techniques for the efficient noiseless coding of a broad class of such data sources are developed and analyzed. Algorithms are designed for coding discrete memoryless sources which have a known symbol probability ordering but unknown probability values. A general applicability of these algorithms to solving practical problems is obtained because most real data sources can be simply transformed into this form by appropriate preprocessing. These algorithms have exhibited performance only slightly above all entropy values when applied to real data with stationary characteristics over the measurement span. Performance considerably under a measured average data entropy may be observed when data characteristics are changing over the measurement span.
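    Rice's adaptive family is built from simple subcodes keyed to the symbol probability ordering. As one concrete instance, a sketch of the fixed-parameter Golomb-Rice subcode (unary quotient plus k-bit remainder) is given below; this is a textbook building block under stated assumptions, not a reconstruction of the paper's full adaptive scheme.

    ```python
    def rice_encode(n, k):
        """Golomb-Rice code for nonnegative integer n with parameter k:
        the quotient n >> k in unary (q ones then a zero), followed by
        the low k bits of n as a fixed-width remainder."""
        q = n >> k
        r = n & ((1 << k) - 1)
        bits = "1" * q + "0"
        if k:
            bits += format(r, "0%db" % k)
        return bits

    def rice_decode(bits, k):
        """Inverse of rice_encode for a single codeword."""
        q = bits.index("0")  # unary quotient ends at the first zero
        r = int(bits[q + 1:q + 1 + k], 2) if k else 0
        return (q << k) | r

    print(rice_encode(9, 2))  # -> 11001
    ```

    Small values (the most probable symbols after reordering) get short codewords; the adaptive layer in Rice's techniques amounts to choosing k from recent data.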

  13. The use of interleaving for reducing radio loss in trellis-coded modulation systems

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Simon, M. K.

    1989-01-01

    It is demonstrated how the use of interleaving/deinterleaving in trellis-coded modulation (TCM) systems can reduce the signal-to-noise ratio loss due to imperfect carrier demodulation references. Both the discrete carrier (phase-locked loop) and suppressed carrier (Costas loop) cases are considered and the differences between the two are clearly demonstrated by numerical results. These results are of great importance for future communication links to the Deep Space Network (DSN), especially from high Earth orbiters, which may be bandwidth limited.

  14. Digital visual communications using a Perceptual Components Architecture

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1991-01-01

    The next era of space exploration will generate extraordinary volumes of image data, and management of this image data is beyond current technical capabilities. We propose a strategy for coding visual information that exploits the known properties of early human vision. This Perceptual Components Architecture codes images and image sequences in terms of discrete samples from limited bands of color, spatial frequency, orientation, and temporal frequency. This spatiotemporal pyramid offers efficiency (low bit rate), variable resolution, device independence, error-tolerance, and extensibility.

  15. Curvature and tangential deflection of discrete arcs: a theory based on the commutator of scatter matrix pairs and its application to vertex detection in planar shape data.

    PubMed

    Anderson, I M; Bezdek, J C

    1984-01-01

    This paper introduces a new theory for the tangential deflection and curvature of plane discrete curves. Our theory applies to discrete data in either rectangular boundary coordinate or chain-coded formats: its rationale is drawn from the statistical and geometric properties associated with the eigenvalue-eigenvector structure of sample covariance matrices. Specifically, we prove that the nonzero entry of the commutator of a pair of scatter matrices constructed from discrete arcs is related to the angle between their eigenspaces. And further, we show that this entry is, in certain limiting cases, also proportional to the analytical curvature of the plane curve from which the discrete data are drawn. These results lend a sound theoretical basis to the notions of discrete curvature and tangential deflection; moreover, they provide a means for computationally efficient implementation of algorithms which use these ideas in various image processing contexts. As a concrete example, we develop the commutator vertex detection (CVD) algorithm, which identifies the location of vertices in shape data based on excessive cumulative tangential deflection; and we compare its performance to several well-established corner detectors that utilize the alternative strategy of finding (approximate) curvature extrema.
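    The central quantity can be sketched directly: for 2x2 symmetric scatter matrices A and B, the commutator AB - BA is antisymmetric, so its single independent entry vanishes when the arcs' principal directions agree and grows with the angle between them. A minimal pure-Python sketch of that computation follows; it illustrates the algebra only and is not the paper's CVD algorithm.

    ```python
    def scatter(points):
        """2x2 scatter matrix of a planar arc about its centroid."""
        n = len(points)
        mx = sum(p[0] for p in points) / n
        my = sum(p[1] for p in points) / n
        sxx = sum((p[0] - mx) ** 2 for p in points)
        sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
        syy = sum((p[1] - my) ** 2 for p in points)
        return [[sxx, sxy], [sxy, syy]]

    def commutator_entry(arc1, arc2):
        """Off-diagonal entry of [A, B] = AB - BA for the two arcs'
        scatter matrices; zero for aligned principal directions."""
        A, B = scatter(arc1), scatter(arc2)
        ab = A[0][0] * B[0][1] + A[0][1] * B[1][1]  # (AB)[0][1]
        ba = B[0][0] * A[0][1] + B[0][1] * A[1][1]  # (BA)[0][1]
        return ab - ba

    arc_x = [(0, 0), (1, 0), (2, 0)]   # arc along the x-axis
    arc_d = [(0, 0), (1, 1), (2, 2)]   # arc along the 45-degree diagonal
    print(commutator_entry(arc_x, arc_x))  # -> 0.0
    ```

    A vertex detector in this spirit would accumulate this entry along consecutive arc pairs and flag locations where the deflection becomes excessive.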

  16. The effect of spatial discretization upon traveling wave body forcing of a turbulent wall-bounded flow

    NASA Astrophysics Data System (ADS)

    You, Soyoung; Goldstein, David

    2015-11-01

    DNS is employed to simulate turbulent channel flow subject to a traveling wave body force field near the wall. The regions in which forces are applied are made progressively more discrete in a sequence of simulations to explore the boundaries between the effects of discrete flow actuators and spatially continuum actuation. The continuum body force field is designed to correspond to the ``optimal'' resolvent mode of McKeon and Sharma (2010), which has the L2 norm of σ1. That is, the normalized harmonic forcing that gives the largest disturbance energy is the first singular mode with the gain of σ1. 2D and 3D resolvent modes are examined at a modest Reτ of 180. For code validation, nominal flow simulations without discretized forcing are compared to previous work by Sharma and Goldstein (2014) in which we find that as we increase the forcing amplitude there is a decrease in the mean velocity and an increase in turbulent kinetic energy. The same force field is then sampled into isolated sub-domains to emulate the effect of discrete physical actuators. Several cases will be presented to explore the dependencies between the level of discretization and the turbulent flow behavior.

  17. Three-dimensional forward modeling and inversion of marine CSEM data in anisotropic conductivity structures

    NASA Astrophysics Data System (ADS)

    Han, B.; Li, Y.

    2016-12-01

    We present a three-dimensional (3D) forward and inverse modeling code for marine controlled-source electromagnetic (CSEM) surveys in anisotropic media. The forward solution is based on a primary/secondary field approach, in which secondary fields are solved using a staggered finite-volume (FV) method and primary fields are solved for 1D isotropic background models analytically. It is shown that it is rather straightforward to extend the isotropic 3D FV algorithm to a triaxial anisotropic one, although additional coefficients are required to account for full tensor conductivity. To solve the linear system resulting from FV discretization of Maxwell's equations, both iterative Krylov solvers (e.g. BiCGSTAB) and direct solvers (e.g. MUMPS) have been implemented, making the code flexible for different computing platforms and different problems. For iterative solutions, the linear system in terms of electromagnetic potentials (A-Phi) is used to precondition the original linear system, transforming the discretized Curl-Curl equations into discretized Laplace-like equations, which have much more favorable numerical properties. Numerical experiments suggest that this A-Phi preconditioner can dramatically improve the convergence rate of an iterative solver, and that high accuracy can be achieved without divergence correction even for low frequencies. To efficiently calculate the sensitivities, i.e. the derivatives of CSEM data with respect to tensor conductivity, the adjoint method is employed. For inverse modeling, triaxial anisotropy is taken into account. Since the number of model parameters to be resolved for triaxial anisotropic media is twice or three times that of isotropic media, the data-space version of the Gauss-Newton (GN) minimization method is preferred due to its lower computational cost compared with the traditional model-space GN method. We demonstrate the effectiveness of the code with synthetic examples.

  18. Using ordinal partition transition networks to analyze ECG data

    NASA Astrophysics Data System (ADS)

    Kulp, Christopher W.; Chobot, Jeremy M.; Freitas, Helena R.; Sprechini, Gene D.

    2016-07-01

    Electrocardiogram (ECG) data from patients with a variety of heart conditions are studied using ordinal pattern partition networks. The ordinal pattern partition networks are formed from the ECG time series by symbolizing the data into ordinal patterns. The ordinal patterns form the nodes of the network and edges are defined through the time ordering of the ordinal patterns in the symbolized time series. A network measure, called the mean degree, is computed from each time series-generated network. In addition, the entropy and number of non-occurring ordinal patterns (NFP) are computed for each series. The distributions of mean degrees, entropies, and NFPs are compared across the heart conditions studied. A statistically significant difference between healthy patients and several groups of unhealthy patients with varying heart conditions is found for the distributions of the mean degrees, unlike for any of the distributions of the entropies or NFPs.
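    The construction described above, symbolizing a series into ordinal patterns and linking temporally successive patterns, can be sketched as follows. This is a toy version under stated assumptions: pattern length `d`, unweighted directed edges, and the `2E/N` mean-degree convention are illustrative choices; the paper's exact treatment of self-loops and edge weights may differ.

```python
import numpy as np

def ordinal_patterns(x, d=3):
    """Symbolize a time series into length-d ordinal patterns.

    Each sliding window of d samples is replaced by the permutation
    that sorts it, e.g. a strictly rising window maps to (0, 1, 2).
    """
    return [tuple(np.argsort(x[i:i + d])) for i in range(len(x) - d + 1)]

def mean_degree(x, d=3):
    """Mean degree of the ordinal partition transition network.

    Nodes are the patterns that occur; a directed edge joins each
    pattern to its successor in the symbolized series. Mean degree is
    taken as 2E/N (average in-degree plus out-degree).
    """
    pats = ordinal_patterns(x, d)
    edges = {(a, b) for a, b in zip(pats, pats[1:])}
    return 2 * len(edges) / len(set(pats))

# A strictly increasing series visits one pattern with a single self-loop:
assert mean_degree(np.arange(10.0)) == 2.0
```

A healthy, variable signal visits many patterns and transitions, so its network is denser than that of a monotone or strongly periodic one.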

  19. Confirmatory Factor Analysis of Ordinal Variables with Misspecified Models

    ERIC Educational Resources Information Center

    Yang-Wallentin, Fan; Joreskog, Karl G.; Luo, Hao

    2010-01-01

    Ordinal variables are common in many empirical investigations in the social and behavioral sciences. Researchers often apply the maximum likelihood method to fit structural equation models to ordinal data. This assumes that the observed measures have normal distributions, which is not the case when the variables are ordinal. A better approach is…

  20. 75 FR 51102 - Liquor Ordinance of the Wichita and Affiliated Tribes; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-18

    ... Tribes; Correction AGENCY: Bureau of Indian Affairs, Interior ACTION: Notice; correction SUMMARY: The... Liquor Ordinance of the Wichita and Affiliated Tribes. The notice refers to an amended ordinance of the Wichita and Affiliated Tribes when in fact the Liquor Ordinance adopted by Resolution No. WT-10-31 on May...

  1. Estimating Ordinal Reliability for Likert-Type and Ordinal Item Response Data: A Conceptual, Empirical, and Practical Guide

    ERIC Educational Resources Information Center

    Gadermann, Anne M.; Guhn, Martin; Zumbo, Bruno D.

    2012-01-01

    This paper provides a conceptual, empirical, and practical guide for estimating ordinal reliability coefficients for ordinal item response data (also referred to as Likert, Likert-type, ordered categorical, or rating scale item responses). Conventionally, reliability coefficients, such as Cronbach's alpha, are calculated using a Pearson…

  2. The effect of ordinances requiring smoke-free restaurants on restaurant sales.

    PubMed Central

    Glantz, S A; Smith, L R

    1994-01-01

    OBJECTIVES: The effect on restaurant revenues of local ordinances requiring smoke-free restaurants is an important consideration for restaurateurs themselves and the cities that depend on sales tax revenues to provide services. METHODS: Data were obtained from the California State Board of Equalization and Colorado State Department of Revenue on taxable restaurant sales from 1986 (1982 for Aspen) through 1993 for all 15 cities where ordinances were in force, as well as for 15 similar control communities without smoke-free ordinances during this period. These data were analyzed using multiple regression, including time and a dummy variable for whether an ordinance was in force. Total restaurant sales were analyzed as a fraction of total retail sales and restaurant sales in smoke-free cities vs the comparison cities similar in population, median income, and other factors. RESULTS: Ordinances had no significant effect on the fraction of total retail sales that went to restaurants or on the ratio of restaurant sales in communities with ordinances compared with those in the matched control communities. CONCLUSIONS: Smoke-free restaurant ordinances do not adversely affect restaurant sales. PMID:8017529
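    The regression design described in the METHODS, a time trend plus a 0/1 dummy for whether the ordinance is in force, can be sketched as an ordinary least-squares fit. The data below are fabricated for illustration only (a flat 10% restaurant share), and the function name is hypothetical; the study's actual estimation details are in the paper.

```python
import numpy as np

def ordinance_effect(years, frac_sales, ordinance_active):
    """OLS fit of frac_sales = b0 + b1*year + b2*ordinance.

    b2 estimates the shift in the restaurant fraction of retail sales
    while the ordinance is in force; b1 absorbs the secular time trend.
    """
    X = np.column_stack([np.ones(len(years)),
                         np.asarray(years, float),
                         np.asarray(ordinance_active, float)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(frac_sales, float), rcond=None)
    return beta   # [intercept, time trend, ordinance effect]

# Illustrative data: constant 10% share, ordinance from 1990 onward.
years = np.arange(1986, 1994)
dummy = (years >= 1990).astype(int)
frac = np.full(len(years), 0.10)
b0, b1, b2 = ordinance_effect(years, frac, dummy)
# With a flat series the fitted ordinance effect b2 is (numerically) zero,
# matching the paper's "no significant effect" finding in spirit.
```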

  3. Ordinal measures for iris recognition.

    PubMed

    Sun, Zhenan; Tan, Tieniu

    2009-12-01

    Images of a human iris contain rich texture information useful for identity authentication. A key and still open issue in iris recognition is how best to represent such textural information using a compact set of features (iris features). In this paper, we propose using ordinal measures for iris feature representation with the objective of characterizing qualitative relationships between iris regions rather than precise measurements of iris image structures. Such a representation may lose some image-specific information, but it achieves a good trade-off between distinctiveness and robustness. We show that ordinal measures are intrinsic features of iris patterns and largely invariant to illumination changes. Moreover, compactness and low computational complexity of ordinal measures enable highly efficient iris recognition. Ordinal measures are a general concept useful for image analysis and many variants can be derived for ordinal feature extraction. In this paper, we develop multilobe differential filters to compute ordinal measures with flexible intralobe and interlobe parameters such as location, scale, orientation, and distance. Experimental results on three public iris image databases demonstrate the effectiveness of the proposed ordinal feature models.
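    The central idea above, encoding only the qualitative ordering between region intensities rather than their precise values, can be shown with a toy example. This is not the paper's multilobe differential filter; it is a minimal sketch in which each bit records whether one square patch is brighter than another, and all region positions are made up for illustration.

```python
import numpy as np

def ordinal_code(image, centers_a, centers_b, radius=1):
    """Toy ordinal measure: compare mean intensities of paired regions.

    Each bit encodes region A brighter than region B. Because only the
    ordering is kept, the code survives monotonic illumination changes,
    which is the robustness property ordinal iris features exploit.
    """
    def region_mean(c):
        r, col = c
        return image[r - radius:r + radius + 1,
                     col - radius:col + radius + 1].mean()
    return [int(region_mean(a) > region_mean(b))
            for a, b in zip(centers_a, centers_b)]

img = np.outer(np.arange(9.0), np.ones(9))     # brightness grows downward
code = ordinal_code(img, [(6, 4)], [(2, 4)])
assert code == [1]
# Invariance to a monotonic illumination change (gain and offset):
assert ordinal_code(2.0 * img + 5.0, [(6, 4)], [(2, 4)]) == code
```

Precise intensity values are discarded, trading some distinctiveness for exactly the illumination robustness the abstract describes.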

  4. Food marketing to children through toys: response of restaurants to the first U.S. toy ordinance.

    PubMed

    Otten, Jennifer J; Hekler, Eric B; Krukowski, Rebecca A; Buman, Matthew P; Saelens, Brian E; Gardner, Christopher D; King, Abby C

    2012-01-01

    On August 9, 2010, Santa Clara County CA became the first U.S. jurisdiction to implement an ordinance that prohibits the distribution of toys and other incentives to children in conjunction with meals, foods, or beverages that do not meet minimal nutritional criteria. Restaurants had many different options for complying with this ordinance, such as introducing more healthful menu options, reformulating current menu items, or changing marketing or toy distribution practices. To assess how ordinance-affected restaurants changed their child menus, marketing, and toy distribution practices relative to non-affected restaurants. Children's menu items and child-directed marketing and toy distribution practices were examined before and at two time points after ordinance implementation (from July through November 2010) at ordinance-affected fast-food restaurants compared with demographically matched unaffected same-chain restaurants using the Children's Menu Assessment tool. Affected restaurants showed a 2.8- to 3.4-fold improvement in Children's Menu Assessment scores from pre- to post-ordinance with minimal changes at unaffected restaurants. Response to the ordinance varied by restaurant. Improvements were seen in on-site nutritional guidance; promotion of healthy meals, beverages, and side items; and toy marketing and distribution activities. The ordinance appears to have positively influenced marketing of healthful menu items and toys as well as toy distribution practices at ordinance-affected restaurants, but did not affect the number of healthful food items offered. Copyright © 2012 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  5. Radiative transfer simulations of the two-dimensional ocean glint reflectance and determination of the sea surface roughness.

    PubMed

    Lin, Zhenyi; Li, Wei; Gatebe, Charles; Poudyal, Rajesh; Stamnes, Knut

    2016-02-20

    An optimized discrete-ordinate radiative transfer model (DISORT3) with a pseudo-two-dimensional bidirectional reflectance distribution function (BRDF) is used to simulate and validate ocean glint reflectances at an infrared wavelength (1036 nm) by matching model results with a complete set of BRDF measurements obtained from the NASA cloud absorption radiometer (CAR) deployed on an aircraft. The surface roughness is then obtained through a retrieval algorithm and is used to extend the simulation into the visible spectral range where diffuse reflectance becomes important. In general, the simulated reflectances and surface roughness information are in good agreement with the measurements, and the diffuse reflectance in the visible, ignored in current glint algorithms, is shown to be important. The successful implementation of this new treatment of ocean glint reflectance and surface roughness in DISORT3 will help improve glint correction algorithms in current and future ocean color remote sensing applications.

  6. A rapid radiative transfer model for reflection of solar radiation

    NASA Technical Reports Server (NTRS)

    Xiang, X.; Smith, E. A.; Justus, C. G.

    1994-01-01

    A rapid analytical radiative transfer model for reflection of solar radiation in plane-parallel atmospheres is developed based on the Sobolev approach and the delta function transformation technique. A distinct advantage of this model over alternative two-stream solutions is that in addition to yielding the irradiance components, which turn out to be mathematically equivalent to the delta-Eddington approximation, the radiance field can also be expanded in a mathematically consistent fashion. Tests with the model against a more precise multistream discrete ordinate model over a wide range of input parameters demonstrate that the new approximate method typically produces average radiance differences of less than 5%, with worst average differences of approximately 10%-15%. By the same token, the computational speed of the new model is tens to thousands of times faster than that of the more precise model when its stream resolution is set to generate precise calculations.
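    The delta function transformation mentioned above rescales the optical properties so that the sharp forward-scattering peak is carried by a Dirac delta. A standard delta-Eddington-style sketch (with the common choice f = g^2 for the truncated fraction; the paper's exact scaling may differ in detail):

```python
def delta_scale(tau, omega, g):
    """Delta-function scaling of optical depth, single-scattering
    albedo, and asymmetry factor.

    A fraction f = g**2 of the scattered energy is treated as exactly
    forward-scattered, so a low-order (two-stream) solution of the
    scaled problem better matches multistream discrete-ordinate results.
    """
    f = g * g
    tau_s = (1.0 - omega * f) * tau
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    g_s = (g - f) / (1.0 - f)
    return tau_s, omega_s, g_s

# A strongly forward-peaked cloud layer:
tau_s, omega_s, g_s = delta_scale(1.0, 0.9, 0.8)
# The scaled layer is optically thinner and less forward-peaked.
assert tau_s < 1.0 and g_s < 0.8
```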

  7. Radiative Transfer Simulations of the Two-Dimensional Ocean Glint Reflectance and Determination of the Sea Surface Roughness

    NASA Technical Reports Server (NTRS)

    Lin, Zhenyi; Li, Wei; Gatebe, Charles; Poudyal, Rajesh; Stamnes, Knut

    2016-01-01

    An optimized discrete-ordinate radiative transfer model (DISORT3) with a pseudo-two-dimensional bidirectional reflectance distribution function (BRDF) is used to simulate and validate ocean glint reflectances at an infrared wavelength (1036 nm) by matching model results with a complete set of BRDF measurements obtained from the NASA cloud absorption radiometer (CAR) deployed on an aircraft. The surface roughness is then obtained through a retrieval algorithm and is used to extend the simulation into the visible spectral range where diffuse reflectance becomes important. In general, the simulated reflectances and surface roughness information are in good agreement with the measurements, and the diffuse reflectance in the visible, ignored in current glint algorithms, is shown to be important. The successful implementation of this new treatment of ocean glint reflectance and surface roughness in DISORT3 will help improve glint correction algorithms in current and future ocean color remote sensing applications.

  8. Non-destructive testing of ceramic materials using mid-infrared ultrashort-pulse laser

    NASA Astrophysics Data System (ADS)

    Sun, S. C.; Qi, Hong; An, X. Y.; Ren, Y. T.; Qiao, Y. B.; Ruan, Liming M.

    2018-04-01

    The non-destructive testing (NDT) of ceramic materials using mid-infrared ultrashort-pulse laser is investigated in this study. The discrete ordinate method is applied to solve the transient radiative transfer equation in 2D semitransparent medium and the emerging radiative intensity on boundary serves as input for the inverse analysis. The sequential quadratic programming algorithm is employed as the inverse technique to optimize objective function, in which the gradient of objective function with respect to reconstruction parameters is calculated using the adjoint model. Two reticulated porous ceramics including partially stabilized zirconia and oxide-bonded silicon carbide are tested. The retrieval results show that the main characteristics of defects such as optical properties, geometric shapes and positions can be accurately reconstructed by the present model. The proposed technique is effective and robust in NDT of ceramics even with measurement errors.

  9. Generalized Fokker-Planck theory for electron and photon transport in biological tissues: application to radiotherapy.

    PubMed

    Olbrant, Edgar; Frank, Martin

    2010-12-01

    In this paper, we study a deterministic method for particle transport in biological tissues. The method is specifically developed for dose calculations in cancer therapy and for radiological imaging. Generalized Fokker-Planck (GFP) theory [Leakeas and Larsen, Nucl. Sci. Eng. 137 (2001), pp. 236-250] has been developed to improve the Fokker-Planck (FP) equation in cases where scattering is forward-peaked and where there is a sufficient amount of large-angle scattering. We compare grid-based numerical solutions to FP and GFP in realistic medical applications. First, electron dose calculations in heterogeneous parts of the human body are performed. Therefore, accurate electron scattering cross sections are included and their incorporation into our model is extensively described. Second, we solve GFP approximations of the radiative transport equation to investigate reflectance and transmittance of light in biological tissues. All results are compared with either Monte Carlo or discrete-ordinates transport solutions.

  10. Delta Clipper-Experimental In-Ground Effect on Base-Heating Environment

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    1998-01-01

    A quasitransient in-ground effect method is developed to study the effect of vertical landing on a launch vehicle base-heating environment. This computational methodology is based on a three-dimensional, pressure-based, viscous flow, chemically reacting, computational fluid dynamics formulation. Important in-ground base-flow physics such as the fountain-jet formation, plume growth, air entrainment, and plume afterburning are captured with the present methodology. Convective and radiative base-heat fluxes are computed for comparison with those of a flight test. The influence of the laminar Prandtl number on the convective heat flux is included in this study. A radiative direction-dependency test is conducted using both the discrete ordinate and finite volume methods. Treatment of the plume afterburning is found to be very important for accurate prediction of the base-heat fluxes. Convective and radiative base-heat fluxes predicted by the model using a finite rate chemistry option compared reasonably well with flight-test data.

  11. Assessment and validation of the community radiative transfer model for ice cloud conditions

    NASA Astrophysics Data System (ADS)

    Yi, Bingqi; Yang, Ping; Weng, Fuzhong; Liu, Quanhua

    2014-11-01

    The performance of the Community Radiative Transfer Model (CRTM) under ice cloud conditions is evaluated and improved with the implementation of MODIS collection 6 ice cloud optical property model based on the use of severely roughened solid column aggregates and a modified Gamma particle size distribution. New ice cloud bulk scattering properties (namely, the extinction efficiency, single-scattering albedo, asymmetry factor, and scattering phase function) suitable for application to the CRTM are calculated by using the most up-to-date ice particle optical property library. CRTM-based simulations illustrate reasonable accuracy in comparison with the counterparts derived from a combination of the Discrete Ordinate Radiative Transfer (DISORT) model and the Line-by-line Radiative Transfer Model (LBLRTM). Furthermore, simulations of the top of the atmosphere brightness temperature with CRTM for the Crosstrack Infrared Sounder (CrIS) are carried out to further evaluate the updated CRTM ice cloud optical property look-up table.

  12. Shaping electromagnetic waves using software-automatically-designed metasurfaces.

    PubMed

    Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie

    2017-06-15

    We present a fully digital procedure of designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of the macro coding units is formed by discretely random arrangement of micro coding units. By combining an optimization algorithm and commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess constant phase difference for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units with certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface to generate the required four-beam radiations with specific directions. Two complicated functional metasurfaces with circularly- and elliptically-shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing excellent performance of the automatic designs by software. The proposed method provides a smart tool to realize various functional devices and systems automatically.

  13. Unsteady Propeller Hydrodynamics

    DTIC Science & Technology

    2001-06-01

    coupling routines, making the code more robust while decreasing the computational burden over current methods. Finally, a higher-order quadratic influence function technique was implemented within the wake to more accurately define the induction velocity at the trailing edge, which has suffered in the past due to lack of discretization.

  14. Investigation of Advanced Counterrotation Blade Configuration Concepts for High Speed Turboprop Systems. Task 2: Unsteady Ducted Propfan Analysis

    NASA Technical Reports Server (NTRS)

    Hall, Edward J.; Delaney, Robert A.; Bettner, James L.

    1991-01-01

    The primary objective was the development of a time dependent 3-D Euler/Navier-Stokes aerodynamic analysis to predict unsteady compressible transonic flows about ducted and unducted propfan propulsion systems at angle of attack. The resulting computer codes are referred to as Advanced Ducted Propfan Analysis Codes (ADPAC). A computer program user's manual is presented for the ADPAC. Aerodynamic calculations were based on a four stage Runge-Kutta time marching finite volume solution technique with added numerical dissipation. A time accurate implicit residual smoothing operator was used for unsteady flow predictions. For unducted propfans, a single H-type grid was used to discretize each blade passage of the complete propeller. For ducted propfans, a coupled system of five grid blocks utilizing an embedded C grid about the cowl leading edge was used to discretize each blade passage. Grid systems were generated by a combined algebraic/elliptic algorithm developed specifically for ducted propfans. Numerical calculations were compared with experimental data for both ducted and unducted flows.

  15. 25 CFR 522.7 - Disapproval of a class III ordinance.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Disapproval of a class III ordinance. 522.7 Section 522.7 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR APPROVAL OF CLASS II AND CLASS III ORDINANCES AND RESOLUTIONS SUBMISSION OF GAMING ORDINANCE OR RESOLUTION § 522.7 Disapproval of a class III...

  16. 25 CFR 522.5 - Disapproval of a class II ordinance.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Disapproval of a class II ordinance. 522.5 Section 522.5 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR APPROVAL OF CLASS II AND CLASS III ORDINANCES AND RESOLUTIONS SUBMISSION OF GAMING ORDINANCE OR RESOLUTION § 522.5 Disapproval of a class II...

  17. Ordinary Least Squares Estimation of Parameters in Exploratory Factor Analysis with Ordinal Data

    ERIC Educational Resources Information Center

    Lee, Chun-Ting; Zhang, Guangjian; Edwards, Michael C.

    2012-01-01

    Exploratory factor analysis (EFA) is often conducted with ordinal data (e.g., items with 5-point responses) in the social and behavioral sciences. These ordinal variables are often treated as if they were continuous in practice. An alternative strategy is to assume that a normally distributed continuous variable underlies each ordinal variable.…

  18. Local Area Co-Ordination: Strengthening Support for People with Learning Disabilities in Scotland

    ERIC Educational Resources Information Center

    Stalker, Kirsten Ogilvie; Malloch, Margaret; Barry, Monica Anne; Watson, June Ann

    2008-01-01

    This paper reports the findings of a study commissioned by the Scottish Executive which examined the introduction and implementation of local area co-ordination (LAC) in Scotland. A questionnaire about their posts was completed by 44 local area co-ordinators, interviews were conducted with 35 local area co-ordinators and 14 managers and case…

  19. Multiscale modeling of dislocation-precipitate interactions in Fe: From molecular dynamics to discrete dislocations.

    PubMed

    Lehtinen, Arttu; Granberg, Fredric; Laurson, Lasse; Nordlund, Kai; Alava, Mikko J

    2016-01-01

    The stress-driven motion of dislocations in crystalline solids, and thus the ensuing plastic deformation process, is greatly influenced by the presence or absence of various pointlike defects such as precipitates or solute atoms. These defects act as obstacles for dislocation motion and hence affect the mechanical properties of the material. Here we combine molecular dynamics studies with three-dimensional discrete dislocation dynamics simulations in order to model the interaction between different kinds of precipitates and a 1/2〈111〉{110} edge dislocation in BCC iron. We have implemented immobile spherical precipitates into the ParaDis discrete dislocation dynamics code, with the dislocations interacting with the precipitates via a Gaussian potential, generating a normal force acting on the dislocation segments. The parameters used in the discrete dislocation dynamics simulations for the precipitate potential, the dislocation mobility, shear modulus, and dislocation core energy are obtained from molecular dynamics simulations. We compare the critical stresses needed to unpin the dislocation from the precipitate in molecular dynamics and discrete dislocation dynamics simulations in order to fit the two methods together and discuss the variety of the relevant pinning and depinning mechanisms.
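    The Gaussian precipitate potential described above, generating a force on nearby dislocation segments, can be sketched directly. The amplitude `A` and width `sigma` here are placeholders for the molecular-dynamics-fitted values used in the actual ParaDis implementation; the names and numbers are illustrative only.

```python
import numpy as np

def precipitate_force(seg_pos, prec_pos, A=1.0, sigma=1.0):
    """Force on a dislocation segment from a Gaussian obstacle potential.

    U(r) = A * exp(-|r|^2 / (2 sigma^2)) centered on the precipitate;
    the force is F = -grad U = (U / sigma^2) * r, i.e. radially outward
    from the precipitate centre for A > 0 (a repulsive pinning obstacle).
    """
    r = np.asarray(seg_pos, float) - np.asarray(prec_pos, float)
    U = A * np.exp(-(r @ r) / (2.0 * sigma ** 2))
    return (U / sigma ** 2) * r

# Segment two units from a precipitate at the origin, along x:
F = precipitate_force([2.0, 0.0, 0.0], [0.0, 0.0, 0.0])
```

The critical unpinning stress in such simulations is the applied stress at which the Peach-Koehler driving force first exceeds the maximum restoring force this potential can supply.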

  20. Discrete Adjoint-Based Design Optimization of Unsteady Turbulent Flows on Dynamic Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Diskin, Boris; Yamaleev, Nail K.

    2009-01-01

    An adjoint-based methodology for design optimization of unsteady turbulent flows on dynamic unstructured grids is described. The implementation relies on an existing unsteady three-dimensional unstructured grid solver capable of dynamic mesh simulations and discrete adjoint capabilities previously developed for steady flows. The discrete equations for the primal and adjoint systems are presented for the backward-difference family of time-integration schemes on both static and dynamic grids. The consistency of sensitivity derivatives is established via comparisons with complex-variable computations. The current work is believed to be the first verified implementation of an adjoint-based optimization methodology for the true time-dependent formulation of the Navier-Stokes equations in a practical computational code. Large-scale shape optimizations are demonstrated for turbulent flows over a tiltrotor geometry and a simulated aeroelastic motion of a fighter jet.
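    The complex-variable comparison mentioned above is the standard complex-step derivative: perturbing an input by an imaginary increment i*h gives df/dx = Im f(x + i h) / h with no subtractive cancellation, so it serves as a machine-precision reference for discrete-adjoint gradients. A minimal sketch on a scalar function (the real solver would be the flow code itself):

```python
def complex_step(f, x, h=1e-30):
    """Complex-step derivative of f at x.

    Unlike finite differences, no subtraction of nearly equal values
    occurs, so h can be made tiny (e.g. 1e-30) and the derivative is
    accurate to machine precision.
    """
    return f(complex(x, h)).imag / h

# Verify against the analytic derivative of f(x) = x**3 + 2x at x = 1.5:
d = complex_step(lambda z: z**3 + 2.0 * z, 1.5)
assert abs(d - 8.75) < 1e-12   # 3*1.5**2 + 2 = 8.75
```

In adjoint verification, agreement between this reference and the adjoint-computed sensitivity to many significant digits establishes the consistency of the discrete adjoint implementation.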

  1. Engine structures modeling software system: Computer code. User's manual

    NASA Technical Reports Server (NTRS)

    1992-01-01

    ESMOSS is a specialized software system for the construction of geometric descriptive and discrete analytical models of engine parts, components and substructures which can be transferred to finite element analysis programs such as NASTRAN. The software architecture of ESMOSS is designed in modular form with a central executive module through which the user controls and directs the development of the analytical model. Modules consist of a geometric shape generator, a library of discretization procedures, interfacing modules to join both geometric and discrete models, a deck generator to produce input for NASTRAN and a 'recipe' processor which generates geometric models from parametric definitions. ESMOSS can be executed both in interactive and batch modes. Interactive mode is considered to be the default mode and that mode will be assumed in the discussion in this document unless stated otherwise.

  2. [Status of law-making on animal welfare].

    PubMed

    Polten, B

    2007-03-01

    Since the last report there have been major revisions of laws and ordinances. Deliberations on rules of Community law were also continued. On the national level, the Act on the Shoeing of Horses amending the Animal Welfare Act and amendments of animal welfare provisions as well as the Deregulation Act were prepared, some of which have meanwhile entered into force. At the legislative level, the work on the ratification laws for the Council of Europe conventions (Strasbourg) was concluded in order to enable Germany to adopt the revisions. They include (1) the European Convention for the protection of animals used for experimental purposes and (2) the European Convention for the protection of animals during international transport. At the level of ordinances, the amendment and extension of the Animal Welfare-Farm Animal Husbandry Ordinance are of vital importance for the sections on pig farming and laying hen husbandry. Another section refers to the husbandry of fur animals, on which an ordinance has been submitted to the Bundesrat (German upper house of Parliament). Deliberations on this issue have been adjourned. Drafts of a circus register were prepared to amend the Animal Welfare Act and to adopt a separate ordinance, and they are being discussed with the federal states and associations. Previously, the rules of Community law in the area of animal welfare were adopted as EC directives which the member states had to transpose into national law. This was done by incorporating them into national laws or ordinances, with non-compliance having to be sanctioned. It is the member states' responsibility to establish sanctions. Yet the Commission has introduced directly operative animal welfare legislation by adopting EC Regulation 1/2005 on the protection of animals during transport. This means that a national implementation is not required. Nevertheless, the establishment of sanctions continues to be the responsibility of the member states.
A special authorisation by the legislator is required to be able to impose sanctions based on directly applicable EC law. This is done via the already mentioned Act on the Shoeing of Horses and its amendments. To establish sanctions for this Community legislation, a "Sanctions Ordinance" is currently being discussed by the different departments. This way, a link between directly applicable Community legislation and national sanctions is established. At the EC level, the following are currently under discussion: (1) the "Animal Welfare Action Plan", (2) a draft directive laying down minimum rules for the protection of chickens kept for meat production, and (3) preparations for a revision of the directive on the protection of animals used for experimental purposes, which have become known through the preparation of a related impact assessment. At the level of international law, the Council of Europe has concluded its work on Annex A of the convention for the protection of animals used for experimental purposes. With regard to the European Convention for the protection of animals kept for farming purposes, the deliberations on fish and fattening rabbits are being continued. There is a discussion on the technical details of the Transport Convention. Since the first animal welfare conference of the International Office of Epizootics (OIE) in February 2004 in Paris, two very comprehensive codes on the slaughter of animals and on animal transport were adopted. The inclusion of further animal welfare issues into the OIE work programme will be discussed in the near future.

  3. Tourism and hotel revenues before and after passage of smoke-free restaurant ordinances.

    PubMed

    Glantz, S A; Charlesworth, A

    1999-05-26

    Claims that ordinances requiring smoke-free restaurants will adversely affect tourism have been used to argue against passing such ordinances. Data exist regarding the validity of these claims. To determine the changes in hotel revenues and international tourism after passage of smoke-free restaurant ordinances in locales where the effect has been debated. Comparison of hotel revenues and tourism rates before and after passage of 100% smoke-free restaurant ordinances and comparison with US hotel revenue overall. Three states (California, Utah, and Vermont) and 6 cities (Boulder, Colo; Flagstaff, Ariz; Los Angeles, Calif; Mesa, Ariz; New York, NY; and San Francisco, Calif) in which the effect on tourism of smoke-free restaurant ordinances had been debated. Hotel room revenues and hotel revenues as a fraction of total retail sales compared with preordinance revenues and overall US revenues. In constant 1997 dollars, passage of the smoke-free restaurant ordinance was associated with a statistically significant increase in the rate of change of hotel revenues in 4 localities, no significant change in 4 localities, and a significant slowing in the rate of increase (but not a decrease) in 1 locality. There was no significant change in the rate of change of hotel revenues as a fraction of total retail sales (P=.16) or total US hotel revenues associated with the ordinances when pooled across all localities (P = .93). International tourism was either unaffected or increased following implementation of the smoke-free ordinances. Smoke-free ordinances do not appear to adversely affect, and may increase, tourist business.

  4. Overview of Edge Simulation Laboratory (ESL)

    NASA Astrophysics Data System (ADS)

    Cohen, R. H.; Dorr, M.; Hittinger, J.; Rognlien, T.; Umansky, M.; Xiong, A.; Xu, X.; Belli, E.; Candy, J.; Snyder, P.; Colella, P.; Martin, D.; Sternberg, T.; van Straalen, B.; Bodi, K.; Krasheninnikov, S.

    2006-10-01

    The ESL is a new collaboration to build a full-f electromagnetic gyrokinetic code for tokamak edge plasmas using continuum methods. Target applications are edge turbulence and transport (neoclassical and anomalous), and edge-localized modes. Initially the project has three major threads: (i) verification and validation of TEMPEST, the project's initial (electrostatic) edge code which can be run in 4D (neoclassical and transport-timescale applications) or 5D (turbulence); (ii) design of the next generation code, which will include more complete physics (electromagnetics, fluid equation option, improved collisions) and advanced numerics (fully conservative, high-order discretization, mapped multiblock grids, adaptivity), and (iii) rapid-prototype codes to explore the issues attached to solving fully nonlinear gyrokinetics with steep radial gradients. We present a brief summary of the status of each of these activities.

  5. Some issues and subtleties in numerical simulation of X-ray FEL's

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawley, William M.

    Part of the overall design effort for x-ray FEL's such as the LCLS and TESLA projects has involved extensive use of particle simulation codes to predict their output performance and underlying sensitivity to various input parameters (e.g. electron beam emittance). This paper discusses some of the numerical issues that must be addressed by simulation codes in this regime. We first give a brief overview of the standard approximations and simulation methods adopted by time-dependent (i.e., polychromatic) codes such as GINGER, GENESIS, and FAST3D, including the effects of temporal discretization and the resultant limited spectral bandpass, and then discuss the accuracies and inaccuracies of these codes in predicting incoherent spontaneous emission (i.e., the extremely low gain regime).

  6. Survey of local forestry-related ordinances and regulations in the south

    Treesearch

    Jonathan J. Spink; Karry L. Haney; John L. Greene

    2000-01-01

    A survey of the 13 southern states was conducted in 1999-2000 to obtain a comprehensive list of forestry-related ordinances enacted by various local governments. Each ordinance was examined to determine the date of adoption, regulatory objective, and its regulatory provisions. Based on the regulatory objective, the ordinances were categorized into five general types:...

  7. Knowledge of the ordinal position of list items in pigeons.

    PubMed

    Scarf, Damian; Colombo, Michael

    2011-10-01

    Ordinal knowledge is a fundamental aspect of advanced cognition. It is self-evident that humans represent ordinal knowledge, and over the past 20 years it has become clear that nonhuman primates share this ability. In contrast, evidence that nonprimate species represent ordinal knowledge is missing from the comparative literature. To address this issue, in the present experiment we trained pigeons on three 4-item lists and then tested them with derived lists in which, relative to the training lists, the ordinal position of the items was either maintained or changed. Similar to the findings with human and nonhuman primates, our pigeons performed markedly better on the maintained lists compared to the changed lists, and displayed errors consistent with the view that they used their knowledge of ordinal position to guide responding on the derived lists. These findings demonstrate that the ability to acquire ordinal knowledge is not unique to the primate lineage. (PsycINFO Database Record (c) 2011 APA, all rights reserved).

  8. A numerical simulation of the full two-dimensional electrothermal de-icer pad. Ph.D. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Masiulaniec, Konstanty C.

    1988-01-01

    The ability to predict the time-temperature history of electrothermal de-icer pads is important in the subsequent design of improved and more efficient versions. These de-icer pads are installed near the surface of aircraft components, for the specific purpose of removing accreted ice. The proposed numerical model can incorporate the full 2-D geometry of a section through a region (e.g., a section of an airfoil), which current 1-D numerical codes are unable to do. Thus, the effects of irregular layers, curvature, etc., can now be accounted for in the thermal transients. Each layer in the actual geometry is mapped via a body-fitted coordinate transformation into uniform, rectangular computational grids. The relevant heat transfer equations are transformed and discretized. To model the phase change that might occur in any accreted ice, the phase change equations are cast in an enthalpy formulation and likewise transformed and discretized. The code developed was tested against numerous classical numerical solutions, as well as against experimental de-icing data on a UH1H rotor blade obtained from the NASA Lewis Research Center. The excellent comparisons obtained show that this code can be a useful tool in predicting the performance of current de-icer models, as well as in the designing of future models.
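
    The enthalpy formulation mentioned above avoids tracking the ice/water interface explicitly: the solver advances enthalpy and recovers temperature through the latent-heat plateau. A minimal sketch of that mapping, with illustrative material constants rather than values from the thesis:

```python
# Enthalpy-temperature mapping for melting ice -- an illustrative sketch,
# not the thesis code. H is enthalpy per unit mass, zero for solid at T_m.
L = 334e3      # latent heat of fusion of ice, J/kg (illustrative)
c = 2100.0     # specific heat, J/(kg K) -- one value for brevity
T_m = 0.0      # melting temperature, deg C

def temperature_from_enthalpy(H):
    """Recover temperature from enthalpy across the phase change."""
    if H < 0.0:          # fully solid
        return T_m + H / c
    elif H <= L:         # mushy zone: temperature pinned at T_m
        return T_m
    else:                # fully liquid
        return T_m + (H - L) / c

def liquid_fraction(H):
    """Fraction of the cell that has melted, clamped to [0, 1]."""
    return min(max(H / L, 0.0), 1.0)
```

    The advantage is that one equation for the enthalpy holds in solid, mushy, and liquid regions alike, so no front-tracking logic is needed.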

  9. : A Scalable and Transparent System for Simulating MPI Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S

    2010-01-01

    is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source-code is available. The set of source-code interfaces supported by is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source code form. Low slowdowns are observed, due to its use of purely discrete event style of execution, and due to the scalability and efficiency of the underlying parallel discrete event simulation engine, sik. In the largest runs, has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.
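
    The record credits its low slowdowns to a purely discrete event style of execution. As a generic illustration of that style (not the simulator described above), a minimal event loop pops events from a priority queue ordered by virtual time:

```python
# Minimal discrete event simulation loop: a heap keyed on virtual time.
# Generic illustration only, not the engine from the record.
import heapq

def run(events, horizon):
    """events: iterable of (time, name) pairs; returns processing order."""
    queue = list(events)
    heapq.heapify(queue)
    log = []
    while queue:
        t, name = heapq.heappop(queue)
        if t > horizon:
            break
        log.append((t, name))
        # a real handler would heappush follow-up events here
    return log

trace = run([(3.0, "recv"), (1.0, "send"), (2.0, "compute")], horizon=10.0)
```

    Scalable engines coordinate many such loops across host ranks, but the virtual-time ordering invariant shown here is the same.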

  10. Least squares regression methods for clustered ROC data with discrete covariates.

    PubMed

    Tang, Liansheng Larry; Zhang, Wei; Li, Qizhai; Ye, Xuan; Chan, Leighton

    2016-07-01

    The receiver operating characteristic (ROC) curve is a popular tool to evaluate and compare the accuracy of diagnostic tests to distinguish the diseased group from the nondiseased group when test results are continuous or ordinal. A complicated data setting occurs when multiple tests are measured on abnormal and normal locations from the same subject and the measurements are clustered within the subject. Although least squares regression methods can be used for the estimation of ROC curves from correlated data, how to develop the least squares methods to estimate the ROC curve from clustered data has not been studied. Also, the statistical properties of the least squares methods under the clustering setting are unknown. In this article, we develop the least squares ROC methods to allow the baseline and link functions to differ, and more importantly, to accommodate clustered data with discrete covariates. The methods can generate smooth ROC curves that satisfy the inherent continuous property of the true underlying curve. The least squares methods are shown to be more efficient than the existing nonparametric ROC methods under appropriate model assumptions in simulation studies. We apply the methods to a real example in the detection of glaucomatous deterioration. We also derive the asymptotic properties of the proposed methods. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
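
    The least squares idea can be sketched in its simplest, unclustered binormal form: regress the probit of the true-positive rate on the probit of the false-positive rate over the observed operating points, which yields a smooth ROC curve. This omits the clustering and covariate machinery developed in the article; the operating points below are made up:

```python
# Least squares binormal ROC sketch: fit probit(TPR) = a + b * probit(FPR)
# by ordinary least squares, then evaluate the smooth curve
# TPR = Phi(a + b * Phi^{-1}(FPR)). Illustrative data, no clustering.
from statistics import NormalDist

Phi = NormalDist().cdf
Phi_inv = NormalDist().inv_cdf

def fit_binormal(fpr, tpr):
    """OLS of probit-transformed TPR against probit-transformed FPR."""
    x = [Phi_inv(f) for f in fpr]
    y = [Phi_inv(t) for t in tpr]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def roc(a, b, fpr):
    """Smooth ROC curve value at a given false-positive rate."""
    return Phi(a + b * Phi_inv(fpr))

# hypothetical observed operating points
a, b = fit_binormal([0.1, 0.3, 0.5], [0.4, 0.7, 0.85])
```

    The fitted curve is smooth and monotone by construction, matching the continuity property of the true underlying ROC that the abstract emphasizes.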

  11. Numerical study of the effects of lamp configuration and reactor wall roughness in an open channel water disinfection UV reactor.

    PubMed

    Sultan, Tipu

    2016-07-01

    This article describes the assessment of a numerical procedure used to determine the UV lamp configuration and surface roughness effects on an open channel water disinfection UV reactor. The performance of the open channel water disinfection UV reactor was numerically analyzed on the basis of the performance indicator reduction equivalent dose (RED). The RED values were calculated as a function of the Reynolds number to monitor the performance. The flow through the open channel UV reactor was modelled using a k-ε model with a scalable wall function, a discrete ordinate (DO) model for fluence rate calculation, a volume of fluid (VOF) model to locate the unknown free surface, a discrete phase model (DPM) to track the pathogen transport, and a modified law of the wall to incorporate the reactor wall roughness effects. The performance analysis was carried out using commercial CFD software (ANSYS Fluent 15.0). Four case studies were analyzed based on open channel UV reactor type (horizontal and vertical) and lamp configuration (parallel and staggered). The results show that lamp configuration can play an important role in the performance of an open channel water disinfection UV reactor. The effects of the reactor wall roughness were Reynolds number dependent. The proposed methodology is useful for performance optimization of an open channel water disinfection UV reactor. Copyright © 2016 Elsevier Ltd. All rights reserved.
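
    The RED indicator can be illustrated with a minimal sketch of its final step: assuming first-order (log-linear) inactivation with an illustrative rate constant, the distribution of doses accumulated by tracked particles is collapsed to the single uniform dose that would produce the same mean survival. This is not the article's CFD pipeline, only the dose-averaging idea:

```python
# Reduction equivalent dose (RED) from a set of simulated particle doses,
# assuming first-order inactivation N/N0 = 10^(-k*D). k is illustrative.
import math

def red(doses, k=0.5):
    """doses in mJ/cm^2; k in cm^2/mJ (log10 inactivation per unit dose)."""
    survival = sum(10.0 ** (-k * d) for d in doses) / len(doses)
    return -math.log10(survival) / k

uniform = red([10.0, 10.0, 10.0])   # all particles equally dosed
spread = red([5.0, 10.0, 15.0])     # same mean dose, wider spread
```

    A spread of particle doses gives a RED below the arithmetic mean dose, because under-dosed particles dominate the surviving population; this is why RED, not mean dose, is the meaningful performance indicator.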

  12. Toward performance portability of the Albany finite element analysis code using the Kokkos library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demeshko, Irina; Watkins, Jerry; Tezaur, Irina K.

    Performance portability on heterogeneous high-performance computing (HPC) systems is a major challenge faced today by code developers: parallel code needs to be executed correctly as well as with high performance on machines with different architectures, operating systems, and software libraries. The finite element method (FEM) is a popular and flexible method for discretizing partial differential equations arising in a wide variety of scientific, engineering, and industrial applications that require HPC. This paper presents some preliminary results pertaining to our development of a performance portable implementation of the FEM-based Albany code. Performance portability is achieved using the Kokkos library. We present performance results for the Aeras global atmosphere dynamical core module in Albany. Finally, numerical experiments show that our single code implementation gives reasonable performance across three multicore/many-core architectures: NVIDIA Graphics Processing Units (GPUs), Intel Xeon Phis, and multicore CPUs.

  13. Competitive region orientation code for palmprint verification and identification

    NASA Astrophysics Data System (ADS)

    Tang, Wenliang

    2015-11-01

    Orientation features of the palmprint have been widely investigated in coding-based palmprint-recognition methods. Conventional orientation-based coding methods usually used discrete filters to extract the orientation feature of palmprint. However, in real operations, the orientations of the filter usually are not consistent with the lines of the palmprint. We thus propose a competitive region orientation-based coding method. Furthermore, an effective weighted balance scheme is proposed to improve the accuracy of the extracted region orientation. Compared with conventional methods, the region orientation of the palmprint extracted using the proposed method can precisely and robustly describe the orientation feature of the palmprint. Extensive experiments on the baseline PolyU and multispectral palmprint databases are performed and the results show that the proposed method achieves a promising performance in comparison to conventional state-of-the-art orientation-based coding methods in both palmprint verification and identification.

  14. Wing Weight Optimization Under Aeroelastic Loads Subject to Stress Constraints

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Issac, J.; Macmurdy, D.; Guruswamy, Guru P.

    1997-01-01

    A minimum weight optimization of the wing under aeroelastic loads subject to stress constraints is carried out. The loads for the optimization are based on aeroelastic trim. The design variables are the thickness of the wing skins and planform variables. The composite plate structural model incorporates first-order shear deformation theory, the wing deflections are expressed using Chebyshev polynomials, and a Rayleigh-Ritz procedure is adopted for the structural formulation. The aerodynamic pressures provided by the aerodynamic code at a discrete number of grid points are represented as a bilinear distribution on the composite plate code to solve for the deflections and stresses in the wing. The lifting-surface aerodynamic code FAST is presently being used to generate the pressure distribution over the wing. The envisioned ENSAERO/Plate is an aeroelastic analysis code which combines ENSAERO version 3.0 (for analysis of wing-body configurations) with the composite plate code.
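
    The bilinear distribution mentioned above interpolates pressures known at the four corners of a grid cell to any interior point. A minimal sketch; the unit-cell parameterization and names are illustrative, not from the paper:

```python
# Bilinear interpolation within one grid cell: corner values p_ij at
# (s, t) in {0, 1}^2 are blended linearly in each direction.
def bilinear(p00, p10, p01, p11, s, t):
    """Interpolate inside a unit cell; (s, t) in [0, 1]^2."""
    return ((1 - s) * (1 - t) * p00 + s * (1 - t) * p10
            + (1 - s) * t * p01 + s * t * p11)

center = bilinear(1.0, 2.0, 3.0, 4.0, 0.5, 0.5)   # average of the corners
```

    Piecing such cells together gives a continuous pressure surface from discrete aerodynamic grid-point values, which can then be integrated against the structural trial functions.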

  15. Toward performance portability of the Albany finite element analysis code using the Kokkos library

    DOE PAGES

    Demeshko, Irina; Watkins, Jerry; Tezaur, Irina K.; ...

    2018-02-05

    Performance portability on heterogeneous high-performance computing (HPC) systems is a major challenge faced today by code developers: parallel code needs to be executed correctly as well as with high performance on machines with different architectures, operating systems, and software libraries. The finite element method (FEM) is a popular and flexible method for discretizing partial differential equations arising in a wide variety of scientific, engineering, and industrial applications that require HPC. This paper presents some preliminary results pertaining to our development of a performance portable implementation of the FEM-based Albany code. Performance portability is achieved using the Kokkos library. We present performance results for the Aeras global atmosphere dynamical core module in Albany. Finally, numerical experiments show that our single code implementation gives reasonable performance across three multicore/many-core architectures: NVIDIA Graphics Processing Units (GPUs), Intel Xeon Phis, and multicore CPUs.

  16. Satellite Remote Sensing of Tropical Precipitation and Ice Clouds for GCM Verification

    NASA Technical Reports Server (NTRS)

    Evans, K. Franklin

    2001-01-01

    This project, supported by the NASA New Investigator Program, has primarily been funding a graduate student, Darren McKague. Since August 1999 Darren has been working part time at Raytheon, while continuing his PhD research. Darren is planning to finish his thesis work in May 2001, thus some of the work described here is ongoing. The proposed research was to use GOES visible and infrared imager data and SSM/I microwave data to obtain joint distributions of cirrus cloud ice mass and precipitation for a study region in the Eastern Tropical Pacific. These joint distributions of cirrus cloud and rainfall were to be compared to those from the CSU general circulation model to evaluate the cloud microphysical and cumulus parameterizations in the GCM. Existing algorithms were to be used for the retrieval of cloud ice water path from GOES (Minnis) and rainfall from SSM/I (Wilheit). A theoretical study using radiative transfer models and realistic variations in cloud and precipitation profiles was to be used to estimate the retrieval errors. Due to the unavailability of the GOES satellite cloud retrieval algorithm from Dr. Minnis (a co-PI), there was a change in the approach and emphasis of the project. The new approach was to develop a completely new type of remote sensing algorithm - one to directly retrieve joint probability density functions (pdf's) of cloud properties from multi-dimensional histograms of satellite radiances. The usual approach is to retrieve individual pixels of variables (i.e. cloud optical depth), and then aggregate the information. Only statistical information is actually needed, however, and so a more direct method is desirable. We developed forward radiative transfer models for the SSM/I and GOES channels, originally for testing the retrieval algorithms.
The visible and near infrared ice scattering information is obtained from geometric ray tracing of fractal ice crystals (Andreas Macke), while the mid-infrared and microwave scattering is computed with Mie scattering. The radiative transfer is performed with the Spherical Harmonic Discrete Ordinate Method (developed by the PI), and infrared molecular absorption is included with the correlated k-distribution method. The SHDOM radiances have been validated by comparison to version 2 of DISORT (the community "standard" discrete-ordinates radiative transfer model); however, we use SHDOM because it is computationally more efficient.
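
    The core discrete ordinates idea, common to SHDOM and DISORT, is to replace angular integrals of the radiance with quadrature sums over a fixed set of ordinates. A minimal sketch (not SHDOM itself) using Gauss-Legendre nodes and weights:

```python
# Discrete ordinates in one line: the angular mean of a radiance field
# I(mu) is approximated by a Gauss-Legendre quadrature sum over fixed
# ordinates mu_i with weights w_i. Illustrative sketch only.
import numpy as np

def angular_mean(I, n_ordinates=8):
    """Approximate (1/2) * integral_{-1}^{1} I(mu) d(mu)."""
    mu, w = np.polynomial.legendre.leggauss(n_ordinates)
    return 0.5 * np.sum(w * I(mu))

mean_iso = angular_mean(lambda mu: np.ones_like(mu))   # isotropic field
mean_sq = angular_mean(lambda mu: mu ** 2)             # exact: 1/3
```

    An n-point Gauss-Legendre rule integrates polynomials up to degree 2n-1 exactly, which is why a modest number of ordinates suffices for smooth radiance fields.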

  17. [Ordinance No. 91-240 of 25 February 1991 relating to the Labor Code applicable to the Territory of Mayotte].

    PubMed

    1991-03-06

    This Law sets forth the Labor Code for the French territory of Mayotte. The Code contains the following provisions relating to sex discrimination, maternity leave, night work, and the employment of foreigners: a) employers are prohibited from discriminating against women who are pregnant; b) women are entitled to fully paid maternity leave of 14 weeks, 8 weeks before and 6 weeks after giving birth; c) the employer and the Government will each pay for half of the worker's salary during this leave; d) discrimination on the basis of sex or family situation is prohibited in advertisements, offers of employment, hiring, firing, pay, training, job classification, and promotion; e) retaliation for instituting an action for sex discrimination is prohibited; f) men and women are guaranteed equal pay for equal work or work of an equivalent value; g) women may not perform work at night in factories, mines, building sites, workshops, public or ministerial offices, places of professional work, companies, unions, or associations of any sort, unless they are in management positions; and h) a foreigner may not engage in a professional activity in Mayotte without authorization. The Law prescribes penalties for violations of these provisions.

  18. Urban Runoff: Model Ordinances for Erosion and Sediment Control

    EPA Pesticide Factsheets

    The model ordinance in this section borrows language from existing erosion and sediment control ordinances and highlights features that might help prevent erosion and sedimentation and protect natural resources more fully.

  19. Nebo: An efficient, parallel, and portable domain-specific language for numerically solving partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Earl, Christopher; Might, Matthew; Bagusetty, Abhishek

    This study presents Nebo, a declarative domain-specific language embedded in C++ for discretizing partial differential equations for transport phenomena on multiple architectures. Application programmers use Nebo to write code that appears sequential but can be run in parallel, without editing the code. Currently Nebo supports single-thread execution, multi-thread execution, and many-core (GPU-based) execution. With single-thread execution, Nebo performs on par with code written by domain experts. With multi-thread execution, Nebo can linearly scale (with roughly 90% efficiency) up to 12 cores, compared to its single-thread execution. Moreover, Nebo’s many-core execution can be over 140x faster than its single-thread execution.

  20. Nyx: Adaptive mesh, massively-parallel, cosmological simulation code

    NASA Astrophysics Data System (ADS)

    Almgren, Ann; Beckner, Vince; Friesen, Brian; Lukic, Zarija; Zhang, Weiqun

    2017-12-01

    The Nyx code solves the equations of compressible hydrodynamics on an adaptive grid hierarchy coupled with an N-body treatment of dark matter. The gas dynamics in Nyx use a finite volume methodology on an adaptive set of 3-D Eulerian grids; dark matter is represented as discrete particles moving under the influence of gravity. Particles are evolved via a particle-mesh method, using a Cloud-in-Cell deposition/interpolation scheme. Both baryonic and dark matter contribute to the gravitational field. In addition, Nyx includes physics for accurately modeling the intergalactic medium; in the optically thin limit and assuming ionization equilibrium, the code calculates heating and cooling processes of the primordial-composition gas in an ionizing ultraviolet background radiation field.
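
    The Cloud-in-Cell (CIC) scheme mentioned above shares each particle's mass between its nearest grid points with weights linear in distance. A 1-D periodic sketch for illustration only; Nyx itself works on 3-D adaptive grids:

```python
# Cloud-in-Cell deposition on a periodic 1-D mesh: each particle's mass
# is split between the two nearest cell centers. Illustrative sketch.
import numpy as np

def cic_deposit(positions, masses, n_cells, box=1.0):
    """Return the density field rho from particle positions and masses."""
    rho = np.zeros(n_cells)
    dx = box / n_cells
    for x, m in zip(positions, masses):
        s = x / dx - 0.5             # position in cell-center units
        i = int(np.floor(s))
        f = s - i                    # fractional offset toward cell i+1
        rho[i % n_cells] += m * (1.0 - f) / dx
        rho[(i + 1) % n_cells] += m * f / dx
    return rho

# one unit-mass particle midway between cell centers 1 and 2
rho = cic_deposit([0.5], [1.0], n_cells=4)
```

    By construction the deposited mass is conserved (sum(rho) * dx equals the total particle mass), and interpolating forces back to particles with the same weights keeps the scheme momentum-conserving.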

  1. Nebo: An efficient, parallel, and portable domain-specific language for numerically solving partial differential equations

    DOE PAGES

    Earl, Christopher; Might, Matthew; Bagusetty, Abhishek; ...

    2016-01-26

    This study presents Nebo, a declarative domain-specific language embedded in C++ for discretizing partial differential equations for transport phenomena on multiple architectures. Application programmers use Nebo to write code that appears sequential but can be run in parallel, without editing the code. Currently Nebo supports single-thread execution, multi-thread execution, and many-core (GPU-based) execution. With single-thread execution, Nebo performs on par with code written by domain experts. With multi-thread execution, Nebo can linearly scale (with roughly 90% efficiency) up to 12 cores, compared to its single-thread execution. Moreover, Nebo’s many-core execution can be over 140x faster than its single-thread execution.

  2. Discrete Kinetic Eigenmode Spectra of Electron Plasma Oscillations in Weakly Collisional Plasma: A Numerical Study

    NASA Technical Reports Server (NTRS)

    Black, Carrie; Germaschewski, Kai; Bhattacharjee, Amitava; Ng, C. S.

    2013-01-01

    It has been demonstrated that in the presence of weak collisions, described by the Lenard-Bernstein collision operator, the Landau-damped solutions become true eigenmodes of the system and constitute a complete set. We present numerical results from an Eulerian Vlasov code that incorporates the Lenard-Bernstein collision operator. The effect of the collisions on the numerical recursion phenomenon seen in Vlasov codes is discussed. The code is benchmarked against exact linear eigenmode solutions in the presence of weak collisions, and a spectrum of Landau-damped solutions is determined within the limits of numerical resolution. Tests of the orthogonality and the completeness relation are presented.

  3. Three-dimensional Monte Carlo calculation of atmospheric thermal heating rates

    NASA Astrophysics Data System (ADS)

    Klinger, Carolin; Mayer, Bernhard

    2014-09-01

    We present a fast Monte Carlo method for thermal heating and cooling rates in three-dimensional atmospheres. These heating/cooling rates are relevant particularly in broken cloud fields. We compare forward and backward photon tracing methods and present new variance reduction methods to speed up the calculations. For this application it turns out that backward tracing is in most cases superior to forward tracing. Since heating rates may be either calculated as the difference between emitted and absorbed power per volume or alternatively from the divergence of the net flux, both approaches have been tested. We found that the absorption/emission method is superior (with respect to computational time for a given uncertainty) if the optical thickness of the grid box under consideration is smaller than about 5 while the net flux divergence may be considerably faster for larger optical thickness. In particular, we describe the following three backward tracing methods: the first and most simple method (EMABS) is based on a random emission of photons in the grid box of interest and a simple backward tracing. Since only those photons which cross the grid box boundaries contribute to the heating rate, this approach behaves poorly for large optical thicknesses which are common in the thermal spectral range. For this reason, the second method (EMABS_OPT) uses a variance reduction technique to improve the distribution of the photons in a way that more photons are started close to the grid box edges and thus contribute to the result which reduces the uncertainty. The third method (DENET) uses the flux divergence approach where - in backward Monte Carlo - all photons contribute to the result, but in particular for small optical thickness the noise becomes large. The three methods have been implemented in MYSTIC (Monte Carlo code for the phYSically correct Tracing of photons In Cloudy atmospheres). 
All methods are shown to agree within the photon noise with each other and with a discrete ordinate code for a one-dimensional case. Finally a hybrid method is built using a combination of EMABS_OPT and DENET, and application examples are shown. It should be noted that for this application, only little improvement is gained by EMABS_OPT compared to EMABS.
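
    As a toy illustration of the absorption counting that underlies the absorption/emission approach (not MYSTIC's variance-reduced tracing), a forward Monte Carlo estimate of the absorbed fraction in a purely absorbing slab can be checked against the Beer-Lambert result:

```python
# Forward Monte Carlo for a purely absorbing slab of optical depth tau:
# sample exponential optical path lengths and count photons absorbed
# before escaping. The absorbed fraction should approach 1 - exp(-tau).
# Toy sketch only, not the MYSTIC methods from the record.
import math
import random

def absorbed_fraction(tau, n_photons=200_000, seed=1):
    rng = random.Random(seed)
    absorbed = sum(1 for _ in range(n_photons)
                   if -math.log(1.0 - rng.random()) < tau)
    return absorbed / n_photons

frac = absorbed_fraction(2.0)
exact = 1.0 - math.exp(-2.0)
```

    The photon-noise scaling the abstract mentions is visible here: halving the statistical uncertainty requires four times as many photons, which is what motivates the variance reduction techniques above.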

  4. Utilizing GPUs to Accelerate Turbomachinery CFD Codes

    NASA Technical Reports Server (NTRS)

    MacCalla, Weylin; Kulkarni, Sameer

    2016-01-01

    GPU computing has established itself as a way to accelerate parallel codes in the high performance computing world. This work focuses on speeding up APNASA, a legacy CFD code used at NASA Glenn Research Center, while also drawing conclusions about the nature of GPU computing and the requirements to make GPGPU worthwhile on legacy codes. Rewriting and restructuring of the source code was avoided to limit the introduction of new bugs. The code was profiled and investigated for parallelization potential, then OpenACC directives were used to indicate parallel parts of the code. The use of OpenACC directives was not able to reduce the runtime of APNASA on either the NVIDIA Tesla discrete graphics card, or the AMD accelerated processing unit. Additionally, it was found that in order to justify the use of GPGPU, the amount of parallel work being done within a kernel would have to greatly exceed the work being done by any one portion of the APNASA code. It was determined that in order for an application like APNASA to be accelerated on the GPU, it should not be modular in nature, and the parallel portions of the code must contain a large portion of the code's computation time.
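
    The conclusion that the parallel work within a kernel must dominate total computation time can be made quantitative with Amdahl's law; the fractions below are illustrative, not measurements from APNASA:

```python
# Amdahl's law: overall speedup when a fraction p of the runtime is
# accelerated by a factor s. Illustrative numbers only.
def amdahl_speedup(p, s):
    """p: parallelizable fraction of runtime; s: speedup of that fraction."""
    return 1.0 / ((1.0 - p) + p / s)

modest = amdahl_speedup(p=0.30, s=100.0)   # small hot spot: little gain
strong = amdahl_speedup(p=0.95, s=100.0)   # dominant hot spot: large gain
```

    With only 30% of the runtime accelerated, even a 100x kernel speedup yields less than 1.5x overall, consistent with the report's finding that a modular code whose time is spread across many routines is a poor GPGPU candidate.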

  5. Glycans: bioactive signals decoded by lectins.

    PubMed

    Gabius, Hans-Joachim

    2008-12-01

    The glycan part of cellular glycoconjugates affords a versatile means to build biochemical signals. These oligosaccharides have an exceptional talent in this respect. They surpass any other class of biomolecule in coding capacity within an oligomer (code word). Four structural factors account for this property: the potential for variability of linkage points, anomeric position and ring size as well as the aptitude for branching (first and second dimensions of the sugar code). Specific intermolecular recognition is favoured by abundant potential for hydrogen/co-ordination bonds and for C-H/pi-interactions. Fittingly, an array of protein folds has developed in evolution with the ability to select certain glycans from the natural diversity. The thermodynamics of this reaction profits from the occurrence of these ligands in only a few energetically favoured conformers, comparing favourably with highly flexible peptides (third dimension of the sugar code). Sequence, shape and local aspects of glycan presentation (e.g. multivalency) are key factors to regulate the avidity of lectin binding. At the level of cells, distinct glycan determinants, a result of enzymatic synthesis and dynamic remodelling, are being defined as biomarkers. Their presence gains a functional perspective by co-regulation of the cognate lectin as effector, for example in growth regulation. The way to tie sugar signal and lectin together is illustrated herein for two tumour model systems. In this sense, orchestration of glycan and lectin expression is an efficient means, with far-reaching relevance, to exploit the coding potential of oligosaccharides physiologically and medically.

  6. Deep generative learning of location-invariant visual word recognition.

    PubMed

    Di Bono, Maria Grazia; Zorzi, Marco

    2013-01-01

    It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. 
These results reveal that the efficient coding of written words, which was the model's learning objective, is largely based on letter-level information.
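
    One of the letter-position schemes the abstract contrasts, open-bigram coding, can be sketched in a few lines: a word is represented by the set of ordered letter pairs within a small window, which makes transposed neighbors look similar. This is an illustration of the hypothesis only; the article itself found no evidence for bigram coding in the trained network:

```python
# Open-bigram coding sketch: represent a word by its ordered letter pairs
# within a window, and compare words by Jaccard overlap of bigram sets.
def open_bigrams(word, window=3):
    return {word[i] + word[j]
            for i in range(len(word))
            for j in range(i + 1, min(i + 1 + window, len(word)))}

def overlap(w1, w2):
    a, b = open_bigrams(w1), open_bigrams(w2)
    return len(a & b) / len(a | b)

transposed = overlap("judge", "jugde")    # transposition: high overlap
substituted = overlap("judge", "jupqe")   # substitutions: low overlap
```

    The scheme's tolerance to letter transpositions is exactly what relative-position and transposition priming effects were taken to support.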

  7. On the ordinality of numbers: A review of neural and behavioral studies.

    PubMed

    Lyons, I M; Vogel, S E; Ansari, D

    2016-01-01

    The last several years have seen steady growth in research on the cognitive and neuronal mechanisms underlying how numbers are represented as part of ordered sequences. In the present review, we synthesize what is currently known about numerical ordinality from behavioral and neuroimaging research, point out major gaps in our current knowledge, and propose several hypotheses that may bear further investigation. Evidence suggests that how we process ordinality differs from how we process cardinality, but that this difference depends strongly on context; in particular, whether numbers are presented symbolically or nonsymbolically. Results also reveal many commonalities between numerical and nonnumerical ordinal processing; however, the degree to which numerical ordinality can be reduced to domain-general mechanisms remains unclear. One proposal is that numerical ordinality relies upon more general short-term memory mechanisms as well as more numerically specific long-term memory representations. It is also evident that numerical ordinality is highly multifaceted, with symbolic representations in particular allowing for a wide range of different types of ordinal relations, the complexity of which appears to increase over development. We examine the proposal that these relations may form the basis of a richer set of associations that may prove crucial to the emergence of more complex math abilities and concepts. In sum, ordinality appears to be an important and relatively understudied facet of numerical cognition that presents substantial opportunities for new and ground-breaking research. © 2016 Elsevier B.V. All rights reserved.

  8. Letter report on a straw-man modification of an ATC transponder for discrete address use

    DOT National Transportation Integrated Search

    1974-05-01

    An experimental evaluation has been made of an RCA AVQ-65 airtraffic control transponder modified, in Mode D, so as to reply if and only if interrogated with its own preset reply code. Successful operation of the modified transponder was verified, an...

  9. INDOS: conversational computer codes to implement ICRP-10-10A models for estimation of internal radiation dose to man

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Killough, G.G.; Rohwer, P.S.

    1974-03-01

    INDOS1, INDOS2, and INDOS3 (the INDOS codes) are conversational FORTRAN IV programs, implemented for use in time-sharing mode on the ORNL PDP-10 System. These codes use ICRP-10-10A models to estimate the radiation dose to an organ of the body of Reference Man resulting from the ingestion or inhalation of any one of various radionuclides. Two patterns of intake are simulated: intakes at discrete times and continuous intake at a constant rate. The INDOS codes provide tabular output of dose rate and dose vs time, graphical output of dose vs time, and punched-card output of organ burden and dose vs time. The models of internal dose calculation are discussed and instructions for the use of the INDOS codes are provided. The INDOS codes are available from the Radiation Shielding Information Center, Oak Ridge National Laboratory, P. O. Box X, Oak Ridge, Tennessee 37830.
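The two intake patterns the abstract describes can be illustrated with a generic single-compartment retention model governed by one effective elimination constant. This is a sketch for intuition only, not the actual ICRP-10/10A formulation, and the function names are our own:

```python
import math

def burden_discrete(intakes, lam, t):
    """Organ burden at time t (days) from discrete intakes.

    intakes: list of (time_of_intake, activity) pairs
    lam: effective elimination constant (1/day)
    Single-exponential retention is an illustrative assumption,
    not the ICRP-10/10A model itself.
    """
    return sum(a * math.exp(-lam * (t - t0))
               for t0, a in intakes if t0 <= t)

def burden_continuous(rate, lam, t):
    """Organ burden at time t from a constant intake rate (activity/day)."""
    return rate / lam * (1.0 - math.exp(-lam * t))
```

Under continuous intake the burden approaches the equilibrium value `rate / lam` as `t` grows, while discrete intakes simply superpose decaying exponentials.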

  10. Mixture block coding with progressive transmission in packet video. Appendix 1: Item 2. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, Yun-Chung

    1989-01-01

    Video transmission will become an important part of future multimedia communication because of dramatically increasing user demand for video and the rapid evolution of coding algorithms and VLSI technology. Video transmission will be part of the broadband integrated services digital network (B-ISDN). Asynchronous transfer mode (ATM) is a viable candidate for implementation of B-ISDN due to its inherent flexibility, service independence, and high performance. According to the characteristics of ATM, the information has to be coded into discrete cells which travel independently in the packet switching network. A practical realization of an ATM video codec called Mixture Block Coding with Progressive Transmission (MBCPT) is presented. This variable bit rate coding algorithm shows how a constant quality performance can be obtained according to user demand. Interactions between codec and network are emphasized, including packetization, service synchronization, flow control, and error recovery. Finally, some simulation results based on MBCPT coding with error recovery are presented.

  11. XGC developments for a more efficient XGC-GENE code coupling

    NASA Astrophysics Data System (ADS)

    Dominski, Julien; Hager, Robert; Ku, Seung-Hoe; Chang, Cs

    2017-10-01

    In the Exascale Computing Program, the High-Fidelity Whole Device Modeling project initially aims at delivering a tightly-coupled simulation of plasma neoclassical and turbulence dynamics from the core to the edge of the tokamak. To permit such simulations, the gyrokinetic codes GENE and XGC will be coupled together. Numerical efforts are made to improve the agreement of the numerical schemes in the coupling region. One of the difficulties of coupling these codes together is the incompatibility of their grids: GENE is a continuum grid-based code, while XGC is a Particle-In-Cell code using an unstructured triangular mesh. A field-aligned filter is thus implemented in XGC. Although XGC originally had an approximately field-following mesh, this field-aligned filter yields a perturbation discretization closer to the one solved in the field-aligned code GENE. Additionally, new XGC gyro-averaging matrices are implemented on a velocity grid adapted to the plasma properties, thus ensuring the same accuracy from the core to the edge regions.

  12. The Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, B.; Wood, K.

    2018-04-01

    We present the public Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE, which can be used to simulate the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure that is used to discretize the system, allowing it to be run both as a standard fixed-grid code and as a moving-mesh code.
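The core sampling step of any Monte Carlo radiative transfer code can be sketched for the simplest possible case, a purely absorbing uniform slab: each photon packet draws an interaction optical depth from an exponential distribution and escapes if it exceeds the slab depth. This is a toy illustration of the sampling idea, not the CMACIONIZE algorithm, and `transmitted_fraction` is a hypothetical helper:

```python
import math
import random

def transmitted_fraction(tau_slab, n_packets=100_000, seed=42):
    """Monte Carlo estimate of the fraction of photon packets crossing a
    purely absorbing slab of total optical depth tau_slab.

    Each packet draws an interaction optical depth tau = -ln(xi) with
    xi ~ U(0, 1]; the packet escapes if tau exceeds the slab depth.
    """
    rng = random.Random(seed)
    escaped = sum(1 for _ in range(n_packets)
                  if -math.log(1.0 - rng.random()) > tau_slab)
    return escaped / n_packets
```

For a slab of optical depth tau the estimate converges to the analytic attenuation exp(-tau), which provides a quick sanity check of the sampler.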

  13. Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction

    NASA Technical Reports Server (NTRS)

    Oliver, A. Brandon; Amar, Adam J.

    2016-01-01

    Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.
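Ill-posed inverse problems of the kind described are commonly stabilized with regularization. A minimal Tikhonov-regularized least-squares sketch on a toy linear forward model is shown below; the matrix, data, and function names are our own illustrative assumptions, not the CHAR implementation:

```python
import numpy as np

def tikhonov_inverse(A, d, alpha):
    """Solve the ill-posed linear inverse problem d = A @ q for q by
    minimizing ||A q - d||^2 + alpha ||q||^2 (Tikhonov regularization).
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ d)

# Hypothetical forward model: interior temperature responses to two
# boundary-flux parameters through a smoothing, nearly rank-deficient map.
A = np.array([[1.00, 0.99],
              [0.99, 1.00],
              [1.00, 1.00]])
q_true = np.array([2.0, -1.0])   # "true" boundary fluxes
d = A @ q_true                   # noise-free interior measurements
q_hat = tikhonov_inverse(A, d, alpha=1e-8)
```

With noisy data, the regularization weight `alpha` trades fidelity to the measurements against stability of the recovered boundary condition; hybrid methods of the kind the abstract mentions typically tune this trade-off adaptively.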

  14. Theoretical, Experimental, and Computational Evaluation of Disk-Loaded Circular Wave Guides

    NASA Technical Reports Server (NTRS)

    Wallett, Thomas M.; Qureshi, A. Haq

    1994-01-01

    A disk-loaded circular wave guide structure and test fixture were fabricated. The dispersion characteristics were found by theoretical analysis, experimental testing, and computer simulation using the codes ARGUS and SOS. Interaction impedances were computed based on the corresponding dispersion characteristics. Finally, an equivalent circuit model for one period of the structure was chosen using equivalent circuit models for cylindrical wave guides of different radii. Optimum values for the discrete capacitors and inductors describing discontinuities between cylindrical wave guides were found using the computer code TOUCHSTONE.

  16. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    PubMed

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problem because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare six different approaches for semi-supervised learning in the context of ordinal classification on a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function.

  17. Robust image region descriptor using local derivative ordinal binary pattern

    NASA Astrophysics Data System (ADS)

    Shang, Jun; Chen, Chuanbo; Pei, Xiaobing; Liang, Hu; Tang, He; Sarem, Mudar

    2015-05-01

    Binary image descriptors have received a lot of attention in recent years, since they provide numerous advantages, such as a low memory footprint and an efficient matching strategy. However, they utilize intermediate representations and are generally less discriminative than floating-point descriptors. We propose an image region descriptor, namely the local derivative ordinal binary pattern, for object recognition and image categorization. In order to preserve more local contrast and edge information, we quantize the intensity differences between the central pixels and their neighbors of the detected local affine covariant regions in an adaptive way. These differences are then sorted, mapped into binary codes, and histogrammed, weighted by the sum of the absolute values of the differences. Furthermore, the gray level of the central pixel is quantized to further improve the discriminative ability. Finally, we combine them to form a joint histogram to represent the features of the image. We observe that our descriptor preserves more local brightness and edge information than traditional binary descriptors. Also, our descriptor is robust to rotation, illumination variations, and other geometric transformations. We conduct extensive experiments on the standard ETHZ and Kentucky datasets for object recognition and PASCAL for image classification. The experimental results show that our descriptor outperforms existing state-of-the-art methods.
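The "sort the differences, map to binary, weight by absolute differences" pipeline can be sketched for a single pixel neighbourhood. The quantization details below (keeping the top-ranked differences as set bits) are our own illustrative assumptions, not the published descriptor:

```python
def ordinal_binary_pattern(center, neighbors, keep=4):
    """Toy ordinal binary code for one pixel neighbourhood: the signed
    intensity differences to the neighbours are ranked, and the code sets
    one bit per neighbour falling in the top `keep` ranks.

    Illustrates the 'sort differences, map to binary' idea from the
    abstract; the actual descriptor's adaptive quantization differs.
    """
    diffs = [n - center for n in neighbors]
    order = sorted(range(len(diffs)), key=lambda i: diffs[i], reverse=True)
    code = 0
    for i in order[:keep]:
        code |= 1 << i                      # one bit per top-ranked neighbour
    weight = sum(abs(d) for d in diffs)     # histogram weight for this pixel
    return code, weight
```

Histogramming these codes over a region, with each occurrence weighted by its `weight`, yields a descriptor in which strong local contrasts contribute more than flat areas.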

  18. The use of generalized linear models and generalized estimating equations in bioarchaeological studies.

    PubMed

    Nikita, Efthymia

    2014-03-01

    The current article explores whether the application of generalized linear models (GLM) and generalized estimating equations (GEE) can be used in place of conventional statistical analyses in the study of ordinal data that code an underlying continuous variable, like entheseal changes. The analysis of artificial data and of ordinal data expressing entheseal changes in archaeological North African populations gave the following results. Parametric and nonparametric tests give convergent results, particularly for P values <0.1, irrespective of whether the underlying variable is normally distributed or not, under the condition that the samples involved in the tests exhibit approximately equal sizes. If this prerequisite is valid and provided that the samples are of equal variances, analysis of covariance may be adopted. GLM are not subject to these constraints and give results that converge to those obtained from all nonparametric tests. Therefore, they can be used instead of traditional tests, as they give the same amount of information as those tests but with the advantage of allowing the study of the simultaneous impact of multiple predictors and their interactions, as well as the modeling of the experimental data. However, GLM should be replaced by GEE for the study of bilateral asymmetry and in general when paired samples are tested, because GEE are appropriate for correlated data.

  19. Reduction from cost-sensitive ordinal ranking to weighted binary classification.

    PubMed

    Lin, Hsuan-Tien; Li, Ling

    2012-05-01

    We present a reduction framework from ordinal ranking to binary classification. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranker from the binary classifier. Based on the framework, we show that a weighted 0/1 loss of the binary classifier upper-bounds the mislabeling cost of the ranker, both error-wise and regret-wise. Our framework allows not only the design of good ordinal ranking algorithms based on well-tuned binary classification approaches, but also the derivation of new generalization bounds for ordinal ranking from known bounds for binary classification. In addition, our framework unifies many existing ordinal ranking algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms. In addition, the newly designed algorithms lead to better cost-sensitive ordinal ranking performance, as well as improved listwise ranking performance.
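The three steps of the reduction framework (extract extended examples, train a binary classifier, construct a ranker) can be sketched with a deliberately simple binary learner, a one-dimensional threshold rule standing in for "any binary classification algorithm". All function names are our own:

```python
def extract_extended(data, K):
    """Step 1: each (x, y) with y in {0..K-1} yields K-1 binary examples
    ((x, k), +1 if y > k else -1), one per ordinal cut point k."""
    return [((x, k), 1 if y > k else -1)
            for x, y in data for k in range(K - 1)]

def train_thresholds(extended, K):
    """Step 2: a minimal binary learner for scalar features: for each
    sub-problem k, pick the threshold t_k minimizing training errors of
    the rule 'predict +1 iff x > t_k'."""
    thresholds = []
    for k in range(K - 1):
        sub = [(x, lbl) for (x, kk), lbl in extended if kk == k]
        xs = sorted({x for x, _ in sub})
        cands = [xs[0] - 1.0] + xs          # include an 'always +1' option
        best = min(cands, key=lambda t: sum(lbl != (1 if x > t else -1)
                                            for x, lbl in sub))
        thresholds.append(best)
    return thresholds

def rank(x, thresholds):
    """Step 3: the ranker counts sub-classifiers voting 'above'."""
    return sum(1 for t in thresholds if x > t)
```

On separable data the K-1 learned cut points reproduce the ordinal labels, and the paper's cost-sensitive weighting would enter by weighting the per-example errors inside the binary learner.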

  20. Skeletonization and Partitioning of Digital Images Using Discrete Morse Theory.

    PubMed

    Delgado-Friedrichs, Olaf; Robins, Vanessa; Sheppard, Adrian

    2015-03-01

    We show how discrete Morse theory provides a rigorous and unifying foundation for defining skeletons and partitions of grayscale digital images. We model a grayscale image as a cubical complex with a real-valued function defined on its vertices (the voxel values). This function is extended to a discrete gradient vector field using the algorithm presented in Robins, Wood, Sheppard TPAMI 33:1646 (2011). In the current paper we define basins (the building blocks of a partition) and segments of the skeleton using the stable and unstable sets associated with critical cells. The natural connection between Morse theory and homology allows us to prove the topological validity of these constructions; for example, that the skeleton is homotopic to the initial object. We simplify the basins and skeletons via Morse-theoretic cancellation of critical cells in the discrete gradient vector field using a strategy informed by persistent homology. Simple working Python code for our algorithms for efficient vector field traversal is included. Example data are taken from micro-CT images of porous materials, an application area where accurate topological models of pore connectivity are vital for fluid-flow modelling.
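The lower-star construction is easiest to see in one dimension, where a sampled function induces a path complex of vertices and edges and the gradient pairing reduces to a few lines. This toy sketch assumes distinct vertex values and is not the 3-D algorithm of Robins, Wood, and Sheppard:

```python
def critical_cells_1d(f):
    """Critical cells of a 1-D cubical complex built from vertex values f,
    via the lower-star discrete gradient: a vertex with no lower neighbour
    is a critical vertex (local minimum); at a local maximum the vertex is
    paired with the edge toward its lower neighbour and the other adjacent
    edge is critical. Assumes all values in f are distinct.
    """
    n = len(f)
    crit_v, crit_e = [], []
    for i in range(n):
        lower = [j for j in (i - 1, i + 1) if 0 <= j < n and f[j] < f[i]]
        if not lower:
            crit_v.append(i)                 # local minimum: critical vertex
        elif len(lower) == 2:
            keep = min(lower, key=lambda j: f[j])
            other = lower[0] if lower[1] == keep else lower[1]
            crit_e.append(min(i, other))     # edge j joins vertices j, j+1
    return crit_v, crit_e
```

For a path, the counts satisfy the Euler relation (#critical vertices) - (#critical edges) = 1, mirroring the topological-validity guarantees the abstract describes; persistence-guided cancellation would then merge shallow minimum/saddle pairs.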
