Sample records for discrete ordinate method

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, E.W.

    A class of Projected Discrete-Ordinates (PDO) methods is described for obtaining iterative solutions of discrete-ordinates problems with convergence rates comparable to those observed using Diffusion Synthetic Acceleration (DSA). The spatially discretized PDO solutions are generally not equal to the DSA solutions, but unlike DSA, which requires great care in the use of spatial discretizations to preserve stability, the PDO solutions remain stable and rapidly convergent with essentially arbitrary spatial discretizations. Numerical results are presented which illustrate the rapid convergence and the accuracy of solutions obtained using PDO methods with commonplace differencing methods.

  2. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered the gold standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and Monte Carlo methods. Results: The difference between the simulation results of the discrete ordinates method and those of the Monte Carlo method was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., the low dose region). The simulation with quadrature set 8 and a first-order Legendre polynomial expansion proved to be the most efficient computation in the authors' study. The single-thread computation time of the deterministic simulation with quadrature set 8 and a first-order Legendre polynomial expansion was 21 min on a personal computer. Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and its primary benefit lies in its rapid computation speed. It is expected that further optimization of this method for routine clinical CT dose estimation will improve its accuracy and speed.

  3. A Numerical Investigation of the Extinction of Low Strain Rate Diffusion Flames by an Agent in Microgravity

    NASA Technical Reports Server (NTRS)

    Puri, Ishwar K.

    2004-01-01

    Our goal has been to investigate the influence of both dilution and radiation on the extinction process of nonpremixed flames at low strain rates. Simulations have been performed using a counterflow code into which three radiation models have been incorporated: the optically thin, narrowband, and discrete ordinate models. The counterflow flame code OPPDIFF was modified to account for heat losses by radiation from the hot gases. The discrete ordinate method (DOM) approximation was first suggested by Chandrasekhar for solving problems in stellar atmospheres. Carlson and Lathrop developed the method for solving multi-dimensional problems in neutron transport. Only recently has the method received attention in the field of heat transfer. Given the applicability of the discrete ordinate method to thermal radiation problems involving flames, the narrowband code RADCAL was modified to calculate the radiative properties of the gases. A non-premixed counterflow flame was simulated with the discrete ordinate method for radiative emissions. The heat losses predicted by the DOM were comparable with those of the other two models: the optically thin model had the highest heat losses, followed by the DOM and the narrowband model.

  4. Discrete ordinates-Monte Carlo coupling: A comparison of techniques in NERVA radiation analysis

    NASA Technical Reports Server (NTRS)

    Lindstrom, D. G.; Normand, E.; Wilcox, A. D.

    1972-01-01

    In the radiation analysis of the NERVA nuclear rocket system, two-dimensional discrete ordinates calculations are sufficient to provide detail in the pressure vessel and reactor assembly. Other parts of the system, however, require three-dimensional Monte Carlo analyses. To use these two methods in a single analysis, a means of coupling was developed whereby the results of a discrete ordinates calculation can be used to produce source data for a Monte Carlo calculation. Several techniques for producing source detail were investigated. Results of calculations on the NERVA system are compared and limitations and advantages of the coupling techniques discussed.

  5. Ordinal preference elicitation methods in health economics and health services research: using discrete choice experiments and ranking methods.

    PubMed

    Ali, Shehzad; Ronaldson, Sarah

    2012-09-01

    The predominant method of economic evaluation is cost-utility analysis, which uses cardinal preference elicitation methods, including the standard gamble and time trade-off. However, such an approach is not suitable for understanding trade-offs between process attributes, non-health outcomes and health outcomes in order to evaluate current practices, develop new programmes and predict demand for services and products. Ordinal preference elicitation methods, including discrete choice experiments and ranking methods, are therefore commonly used in health economics and health services research. Cardinal methods have been criticized on the grounds of cognitive complexity, difficulty of administration, contamination by risk and preference attitudes, and potential violation of underlying assumptions. Ordinal methods have gained popularity because of reduced cognitive burden, a lower degree of abstract reasoning, reduced measurement error, ease of administration and the ability to use both health and non-health outcomes. The underlying assumptions of ordinal methods may be violated when respondents use cognitive shortcuts, cannot comprehend the ordinal task or interpret attributes and levels, use 'irrational' choice behaviour, or refuse to trade off certain attributes. CURRENT USE AND GROWING AREAS: Ordinal methods are commonly used to evaluate preferences for attributes of health services, products, practices, interventions and policies and, more recently, to estimate utility weights. AREAS FOR ON-GOING RESEARCH: There is growing research on developing optimal designs, evaluating the rationalization process, using qualitative tools for developing ordinal methods, evaluating consistency with utility theory, appropriate statistical methods for analysis, generalizability of results and comparing ordinal methods against each other and with cardinal measures.

  6. Shielding analyses: the rabbit vs the turtle?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broadhead, B.L.

    1996-12-31

    This paper compares solutions using Monte Carlo and discrete-ordinates methods applied to two actual shielding situations in order to make some general observations concerning the efficiency and advantages/disadvantages of the two approaches. The discrete-ordinates solutions are performed using two-dimensional geometries, while the Monte Carlo approaches utilize three-dimensional geometries with both multigroup and point cross-section data.

  7. Accelerated solution of discrete ordinates approximation to the Boltzmann transport equation via model reduction

    DOE PAGES

    Tencer, John; Carlberg, Kevin; Larsen, Marvin; ...

    2017-06-17

    Radiation heat transfer is an important phenomenon in many physical systems of practical interest. When participating media is important, the radiative transfer equation (RTE) must be solved for the radiative intensity as a function of location, time, direction, and wavelength. In many heat-transfer applications, a quasi-steady assumption is valid, thereby removing time dependence. The dependence on wavelength is often treated through a weighted sum of gray gases (WSGG) approach. The discrete ordinates method (DOM) is one of the most common methods for approximating the angular (i.e., directional) dependence. The DOM exactly solves for the radiative intensity for a finite number of discrete ordinate directions and computes approximations to integrals over the angular space using a quadrature rule; the chosen ordinate directions correspond to the nodes of this quadrature rule. This paper applies a projection-based model-reduction approach to make high-order quadrature computationally feasible for the DOM for purely absorbing applications. First, the proposed approach constructs a reduced basis from (high-fidelity) solutions of the radiative intensity computed at a relatively small number of ordinate directions. Then, the method computes inexpensive approximations of the radiative intensity at the (remaining) quadrature points of a high-order quadrature using a reduced-order model constructed from the reduced basis. Finally, this results in a much more accurate solution than might have been achieved using only the ordinate directions used to compute the reduced basis. One- and three-dimensional test problems highlight the efficiency of the proposed method.
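
    The quadrature idea at the heart of the DOM can be sketched in a few lines. The snippet below is illustrative only (1D slab geometry with Gauss-Legendre ordinates; function names are hypothetical, and this is not the quadrature of the paper): it approximates the angular integral of the intensity by a weighted sum over ordinate directions.

```python
import numpy as np

# S_N angular quadrature in 1D slab geometry:
#   phi = integral_{-1}^{1} psi(mu) d(mu)  ~=  sum_m w_m * psi(mu_m),
# where (mu_m, w_m) are Gauss-Legendre nodes and weights.
def scalar_flux(psi, n_ordinates):
    mu, w = np.polynomial.legendre.leggauss(n_ordinates)
    return float(np.sum(w * psi(mu)))

# A smooth, mildly anisotropic angular intensity (illustrative):
psi = lambda mu: 1.0 + 0.5 * mu + 0.3 * mu**2

# Low- and high-order quadratures agree for smooth intensities;
# strongly peaked intensities need many more ordinates.
print(scalar_flux(psi, 4))   # -> 2.2 (exact for this polynomial)
print(scalar_flux(psi, 32))  # -> 2.2
```

    In these terms, the model-reduction approach of the abstract amounts to computing psi at only a few mu_m with the expensive solver and filling in the remaining quadrature nodes from a reduced-order model.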

  9. Comparison of approximate solutions to the phonon Boltzmann transport equation with the relaxation time approximation: Spherical harmonics expansions and the discrete ordinates method

    NASA Astrophysics Data System (ADS)

    Christenson, J. G.; Austin, R. A.; Phillips, R. J.

    2018-05-01

    The phonon Boltzmann transport equation is used to analyze model problems in one and two spatial dimensions, under transient and steady-state conditions. New, explicit solutions are obtained by using the P1 and P3 approximations, based on expansions in spherical harmonics, and are compared with solutions from the discrete ordinates method. For steady-state energy transfer, it is shown that analytic expressions derived using the P1 and P3 approximations agree quantitatively with the discrete ordinates method, in some cases for large Knudsen numbers, and always for Knudsen numbers less than unity. However, for time-dependent energy transfer, the PN solutions differ qualitatively from converged solutions obtained by the discrete ordinates method. Although they correctly capture the wave-like behavior of energy transfer at short times, the P1 and P3 approximations rely on one or two wave velocities, respectively, yielding abrupt step changes in temperature profiles that are absent when the angular dependence of the phonon velocities is captured more completely. It is shown that, with the gray approximation, the P1 approximation is formally equivalent to the so-called "hyperbolic heat equation." Overall, these results support the use of the PN approximation to find solutions to the phonon Boltzmann transport equation for steady-state conditions. Such solutions can be useful in the design and analysis of devices that involve heat transfer at nanometer length scales, where continuum-scale approaches become inaccurate.
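
    The formal equivalence with the hyperbolic heat equation can be sketched explicitly. Under the gray relaxation-time approximation (a textbook-level sketch, not the paper's exact notation; e: phonon energy density, q: heat flux, v: group velocity, tau: relaxation time), the P1 moment equations are

```latex
\frac{\partial e}{\partial t} + \nabla\cdot\mathbf{q} = 0, \qquad
\frac{\partial \mathbf{q}}{\partial t} + \frac{v^{2}}{3}\,\nabla e = -\frac{\mathbf{q}}{\tau}.
```

    Eliminating q yields the telegraph ("hyperbolic heat") equation, whose single wave speed v/sqrt(3) produces exactly the kind of abrupt step change described above:

```latex
\tau \frac{\partial^{2} e}{\partial t^{2}} + \frac{\partial e}{\partial t}
= \frac{v^{2}\tau}{3}\,\nabla^{2} e .
```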

  10. A multi-layer discrete-ordinate method for vector radiative transfer in a vertically-inhomogeneous, emitting and scattering atmosphere. I - Theory. II - Application

    NASA Technical Reports Server (NTRS)

    Weng, Fuzhong

    1992-01-01

    A theory is developed for discretizing the vector integro-differential radiative transfer equation including both solar and thermal radiation. A complete solution and boundary equations are obtained using the discrete-ordinate method. An efficient numerical procedure is presented for calculating the phase matrix and achieving computational stability. With natural light used as a beam source, the Stokes parameters from the model proposed here are compared with the analytical solutions of Chandrasekhar (1960) for a Rayleigh scattering atmosphere. The model is then applied to microwave frequencies with a thermal source, and the brightness temperatures are compared with those from Stamnes' (1988) radiative transfer model.

  11. Modifications Of Discrete Ordinate Method For Computations With High Scattering Anisotropy: Comparative Analysis

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2012-01-01

    A numerical accuracy analysis of the radiative transfer equation (RTE) solution based on separation of the diffuse light field into anisotropic and smooth parts is presented. The analysis uses three different algorithms based on the discrete ordinate method (DOM). Two methods, DOMAS and DOM2+, which do not use truncation of the phase function, are compared against the TMS method. DOMAS and DOM2+ use the small-angle modification of the RTE and the single scattering term, respectively, as the anisotropic part. The TMS method uses the delta-M method for truncation of the phase function along with the single scattering correction. For reference, a standard discrete ordinate method, DOM, is also included in the analysis. The results for cases with high scattering anisotropy show that at a low number of streams (16, 32) only DOMAS provides an accurate solution in the aureole area. Outside the aureole, the convergence and accuracy of DOMAS and TMS are found to be approximately similar: DOMAS was more accurate for the coarse aerosol and liquid water cloud models, except at low optical depth, while TMS showed better results for the ice cloud model.

  12. Application of the first collision source method to CSNS target station shielding calculation

    NASA Astrophysics Data System (ADS)

    Zheng, Ying; Zhang, Bin; Chen, Meng-Teng; Zhang, Liang; Cao, Bo; Chen, Yi-Xue; Yin, Wen; Liang, Tian-Jiao

    2016-04-01

    Ray effects are an inherent problem of the discrete ordinates method. RAY3D, a functional module of ARES, which is a discrete ordinates code system, employs a semi-analytic first collision source method to mitigate ray effects. This method decomposes the flux into uncollided and collided components, and then calculates them with an analytical method and discrete ordinates method respectively. In this article, RAY3D is validated by the Kobayashi benchmarks and applied to the neutron beamline shielding problem of China Spallation Neutron Source (CSNS) target station. The numerical results of the Kobayashi benchmarks indicate that the solutions of DONTRAN3D with RAY3D agree well with the Monte Carlo solutions. The dose rate at the end of the neutron beamline is less than 10.83 μSv/h in the CSNS target station neutron beamline shutter model. RAY3D can effectively mitigate the ray effects and obtain relatively reasonable results. Supported by Major National S&T Specific Program of Large Advanced Pressurized Water Reactor Nuclear Power Plant (2011ZX06004-007), National Natural Science Foundation of China (11505059, 11575061), and the Fundamental Research Funds for the Central Universities (13QN34).
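
    The flux decomposition used by first collision source methods can be illustrated with the simplest possible case. The sketch below is hypothetical (an isotropic point source in an infinite homogeneous medium, far simpler than the CSNS model, with made-up function names and cross sections): the uncollided flux is computed analytically, and scattering of that flux provides the distributed source handed to the discrete ordinates solver for the collided component.

```python
import math

def uncollided_flux(S, sigma_t, r):
    """Uncollided scalar flux of an isotropic point source of strength S
    (particles/s) in an infinite medium with total cross section sigma_t (1/cm),
    at distance r (cm): analytic ray-traced attenuation, no ray effects."""
    return S * math.exp(-sigma_t * r) / (4.0 * math.pi * r**2)

def first_collision_source(S, sigma_t, sigma_s, r):
    """First collision source density: scattering of the uncollided flux.
    This smooth, distributed source replaces the point source in the
    discrete ordinates calculation of the collided flux component."""
    return sigma_s * uncollided_flux(S, sigma_t, r)

# Example: S = 1 /s, sigma_t = 0.5 /cm, sigma_s = 0.3 /cm, r = 2 cm
print(first_collision_source(1.0, 0.5, 0.3, 2.0))
```

    Because the uncollided component carries the strongly directional, ray-effect-prone part of the transport, only the smoother collided component is left to the discrete ordinates sweep.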

  13. Gamma-Weighted Discrete Ordinate Two-Stream Approximation for Computation of Domain Averaged Solar Irradiance

    NASA Technical Reports Server (NTRS)

    Kato, S.; Smith, G. L.; Barker, H. W.

    2001-01-01

    An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
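
    The gamma weighting itself is easy to sketch. The snippet below is illustrative only (hypothetical names and parameter values, and direct-beam transmittance rather than the full two-stream irradiance profiles of the paper): it averages the plane-parallel transmittance over a gamma distribution of cloud optical depth.

```python
import numpy as np
from math import gamma

def gamma_pdf(tau, mean, nu):
    """Gamma distribution of optical depth with mean `mean` and shape `nu`."""
    return (nu / mean) ** nu * tau ** (nu - 1) * np.exp(-nu * tau / mean) / gamma(nu)

def averaged_transmittance(mean_tau, nu, mu0, n=200001):
    """Domain-averaged direct-beam transmittance, mu0 = cosine of solar zenith:
    weight the plane-parallel result exp(-tau/mu0) by the gamma pdf."""
    tau = np.linspace(1e-8, 20.0 * mean_tau, n)
    p = gamma_pdf(tau, mean_tau, nu)
    return float(np.sum(p * np.exp(-tau / mu0)) * (tau[1] - tau[0]))

# Horizontal inhomogeneity raises the average well above exp(-mean_tau/mu0):
print(averaged_transmittance(10.0, 1.0, 0.5))  # ~0.048; analytic value is (1 + mean/(nu*mu0))**-nu = 1/21
```

    The closed form in the comment is the standard result for gamma-weighted exponentials, which is what makes the gamma assumption attractive for unresolved cloud variability.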

  14. Two-dimensional HID light source radiative transfer using discrete ordinates method

    NASA Astrophysics Data System (ADS)

    Ghrib, Basma; Bouaoun, Mohamed; Elloumi, Hatem

    2016-08-01

    This paper presents the implementation of the discrete ordinates method for handling radiation problems in high intensity discharge (HID) lamps. We start by presenting this rigorous method for the treatment of radiative transfer in a two-dimensional, axisymmetric HID lamp. The finite volume method is used for the spatial discretization of the radiative transfer equation. The atom and electron densities were calculated using temperature profiles established by a 2D semi-implicit finite-element scheme for the solution of the conservation equations for energy, momentum, and mass. Spectral intensities as a function of position and direction are first calculated, and then the axial and radial radiative fluxes are evaluated, as well as the net emission coefficient. The results are given for an HID mercury lamp on a line-by-line basis. Particular attention is paid to the 253.7 nm resonance line and the 546.1 nm green line.

  15. Fast and Accurate Hybrid Stream PCRTMSOLAR Radiative Transfer Model for Reflected Solar Spectrum Simulation in the Cloudy Atmosphere

    NASA Technical Reports Server (NTRS)

    Yang, Qiguang; Liu, Xu; Wu, Wan; Kizer, Susan; Baize, Rosemary R.

    2016-01-01

    A hybrid stream PCRTM-SOLAR model is proposed for fast and accurate radiative transfer simulation. It calculates the reflected solar (RS) radiances in a fast, coarse way and then, with the help of a pre-saved matrix, transforms the results to obtain the desired highly accurate RS spectrum. The methodology is demonstrated with the hybrid stream discrete ordinate (HSDO) radiative transfer (RT) model. The HSDO method calculates the monochromatic radiances using a 4-stream discrete ordinate method, where only a small number of monochromatic radiances are simulated with both the 4-stream and a larger N-stream (N = 16) discrete ordinate RT algorithm. The accuracy of the obtained channel radiance is comparable to the result from the N-stream moderate resolution atmospheric transmission version 5 (MODTRAN5) model. The root-mean-square errors are usually less than 5x10(exp -4) mW/sq cm/sr/cm. The computational speed is three to four orders of magnitude faster than the medium-speed correlated-k option of MODTRAN5. This method is very efficient for simulating thousands of RS spectra under multi-layer cloud/aerosol and solar radiation conditions for climate change studies and numerical weather prediction applications.

  16. Parallelization of PANDA discrete ordinates code using spatial decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbert, P.

    2006-07-01

    We present the parallel method, based on spatial domain decomposition, implemented in the 2D and 3D versions of the discrete ordinates code PANDA. The spatial mesh is orthogonal and the spatial domain decomposition is Cartesian. For 3D problems, a 3D Cartesian domain topology is created and the parallel method is based on a domain diagonal-plane ordered sweep algorithm. The parallel efficiency of the method is improved by pipelining directions and octants. The implementation of the algorithm is straightforward using MPI blocking point-to-point communications. The efficiency of the method is illustrated by an application to the 3D-Ext C5G7 benchmark of the OECD/NEA.

  17. High-order solution methods for grey discrete ordinates thermal radiative transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maginot, Peter G., E-mail: maginot1@llnl.gov; Ragusa, Jean C., E-mail: jean.ragusa@tamu.edu; Morel, Jim E., E-mail: morel@tamu.edu

    This work presents a solution methodology for solving the grey radiative transfer equations that is both spatially and temporally more accurate than the canonical radiative transfer solution technique of linear discontinuous finite element discretization in space with implicit Euler integration in time. We solve the grey radiative transfer equations by fully converging the nonlinear temperature dependence of the material specific heat, material opacities, and Planck function. The grey radiative transfer equations are discretized in space using arbitrary-order self-lumping discontinuous finite elements and integrated in time with arbitrary-order diagonally implicit Runge–Kutta time integration techniques. Iterative convergence of the radiation equation is accelerated using a modified interior penalty diffusion operator to precondition the full discrete ordinates transport operator.

  19. High-order solution methods for grey discrete ordinates thermal radiative transfer

    DOE PAGES

    Maginot, Peter G.; Ragusa, Jean C.; Morel, Jim E.

    2016-09-29

    This paper presents a solution methodology for solving the grey radiative transfer equations that is both spatially and temporally more accurate than the canonical radiative transfer solution technique of linear discontinuous finite element discretization in space with implicit Euler integration in time. We solve the grey radiative transfer equations by fully converging the nonlinear temperature dependence of the material specific heat, material opacities, and Planck function. The grey radiative transfer equations are discretized in space using arbitrary-order self-lumping discontinuous finite elements and integrated in time with arbitrary-order diagonally implicit Runge–Kutta time integration techniques. Iterative convergence of the radiation equation is accelerated using a modified interior penalty diffusion operator to precondition the full discrete ordinates transport operator.

  20. A review of the matrix-exponential formalism in radiative transfer

    NASA Astrophysics Data System (ADS)

    Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian

    2017-07-01

    This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method, which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting, is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
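
    The basic computation can be sketched directly. In a homogeneous layer, the discrete ordinate equations reduce to a constant-coefficient linear system dI/dtau = A I, so I(tau) = exp(A tau) I(0); for optically thin layers a truncated Taylor series of the matrix exponential already suffices. The 2x2 layer matrix below is purely illustrative, not a physical parameterization.

```python
import numpy as np
from scipy.linalg import expm

# Truncated Taylor series approximation to the matrix exponential,
# adequate when the layer optical depth (and hence ||A * tau||) is small.
def expm_taylor(A, order=6):
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, order + 1):
        term = term @ A / k       # A^k / k!
        result = result + term
    return result

# Illustrative 2-stream layer matrix (up- and down-welling coupling):
A = np.array([[-1.0, 0.4],
              [-0.4, 1.0]])
tau = 0.1                          # optically thin layer
exact = expm(A * tau)              # reference (Pade-based)
approx = expm_taylor(A * tau)
print(np.max(np.abs(exact - approx)))  # tiny for thin layers
```

    For thick layers the Taylor series degrades, which is why the paper turns to asymptotic theory in that regime.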

  1. Discrete ordinates solutions of nongray radiative transfer with diffusely reflecting walls

    NASA Technical Reports Server (NTRS)

    Menart, J. A.; Lee, Haeok S.; Kim, Tae-Kuk

    1993-01-01

    Nongray gas radiation in a plane parallel slab bounded by gray, diffusely reflecting walls is studied using the discrete ordinates method. The spectral equation of transfer is averaged over a narrow wavenumber interval, preserving the spectral correlation effect. The governing equations are derived by considering the history of multiple reflections between the two reflecting walls. A closure approximation is applied so that only a finite number of reflections has to be explicitly included. The closure solutions capture the physics of the problem to a very high degree and show relatively little error. Numerical solutions are obtained by applying a statistical narrow-band model for the gas properties and a discrete ordinates code. The net radiative wall heat fluxes and the radiative source distributions are obtained for different temperature profiles. A zeroth-degree formulation, in which no wall reflection is handled explicitly, is sufficient to predict the radiative transfer accurately for most cases considered, when compared with increasingly accurate solutions based on explicitly tracing a larger number of wall reflections without any closure approximation applied.

  2. Specular reflection treatment for the 3D radiative transfer equation solved with the discrete ordinates method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Hardy, D.; Favennec, Y., E-mail: yann.favennec@univ-nantes.fr; Rousseau, B.

    The contribution of this paper lies in the development of numerical algorithms for the mathematical treatment of specular reflection at boundaries in the numerical solution of radiative transfer problems. The radiative transfer equation being integro-differential, the discrete ordinates method makes it possible to write down a set of semi-discrete equations in which weights are to be calculated. The calculation of these weights is well known to be based on either a quadrature or an angular discretization, making the use of such a method straightforward for the state equation. The diffuse contribution of reflection at boundaries is also usually well taken into account. However, the calculation of accurate partition ratio coefficients is much more delicate for the specular condition applied on arbitrary geometrical boundaries. This paper presents algorithms that analytically calculate the partition ratio coefficients needed in numerical treatments. The developed algorithms, combined with a decentered finite element scheme, are validated through comparisons with analytical solutions before being applied to complex geometries.

  3. GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method

    NASA Astrophysics Data System (ADS)

    Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu

    2011-07-01

    The Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability for solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution of the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU-accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with a vacuum boundary condition. The relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort, and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56, compared with one Intel Xeon X5670 chip, to 8.14, compared with one Intel Core Q6600 chip, for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
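
    The source iteration at the core of such solvers is compact enough to sketch. The code below is illustrative only (1D slab geometry with diamond differencing, vacuum boundaries, and made-up parameter values; it is not the Sweep3D kernel): each iteration performs the ordinate-by-ordinate mesh sweeps that GPU implementations parallelize.

```python
import numpy as np

# Source iteration for 1D slab S_N transport with isotropic scattering.
# sigma_t, sigma_s: total and scattering cross sections (1/cm);
# q: uniform isotropic source; width: slab thickness (cm).
def source_iteration(sigma_t, sigma_s, q, width, n_cells, n_ang, tol=1e-8):
    dx = width / n_cells
    mu, w = np.polynomial.legendre.leggauss(n_ang)  # ordinates and weights
    phi = np.zeros(n_cells)                          # scalar flux iterate
    for it in range(1000):
        src = 0.5 * (sigma_s * phi + q)              # isotropic emission density
        phi_new = np.zeros(n_cells)
        for m in range(n_ang):                       # sweep each ordinate
            psi_in = 0.0                             # vacuum boundary
            cells = range(n_cells) if mu[m] > 0 else range(n_cells - 1, -1, -1)
            for i in cells:
                # diamond-difference cell balance
                psi_c = (src[i] + 2 * abs(mu[m]) / dx * psi_in) / \
                        (sigma_t + 2 * abs(mu[m]) / dx)
                psi_in = 2 * psi_c - psi_in          # outgoing face flux
                phi_new[i] += w[m] * psi_c
        if np.max(np.abs(phi_new - phi)) < tol:
            return phi_new, it + 1
        phi = phi_new
    return phi, 1000

phi, iters = source_iteration(sigma_t=1.0, sigma_s=0.5, q=1.0,
                              width=10.0, n_cells=100, n_ang=8)
print(round(phi[50], 2), iters)  # central flux near q/(sigma_t - sigma_s) = 2.0
```

    For this 10 mean-free-path slab with scattering ratio c = 0.5, the central flux approaches the infinite-medium value, and the iteration count reflects the error-reduction factor of roughly c per iteration; highly scattering problems (c near 1) converge slowly, which is what acceleration schemes like DSA address.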

  4. A Deep Penetration Problem Calculation Using AETIUS:An Easy Modeling Discrete Ordinates Transport Code UsIng Unstructured Tetrahedral Mesh, Shared Memory Parallel

    NASA Astrophysics Data System (ADS)

    KIM, Jong Woon; LEE, Young-Ouk

    2017-09-01

    As computing power improves, computer codes that use a deterministic method can seem less useful than those using the Monte Carlo method. In addition, users do not like to think about space, angle, and energy discretization for deterministic codes. However, a deterministic method is still powerful in that we can obtain a solution for the flux throughout the problem domain, particularly when particles can barely penetrate, such as in a deep penetration problem with small detection volumes. Recently, a state-of-the-art discrete ordinates code, ATTILA, was developed and has been widely used in several applications. ATTILA provides the capability to solve geometrically complex 3-D transport problems by using an unstructured tetrahedral mesh. Since 2009, we have been developing our own code by benchmarking ATTILA. AETIUS is a discrete ordinates code that uses an unstructured tetrahedral mesh, like ATTILA. For pre- and post-processing, Gmsh is used to generate the unstructured tetrahedral mesh by importing a CAD file (*.step) and to visualize the calculation results of AETIUS. Using a CAD tool, the geometry can be modeled very easily. In this paper, we give a brief overview of AETIUS and provide numerical results from both AETIUS and a Monte Carlo code, MCNP5, for a deep penetration problem with small detection volumes. The results demonstrate the effectiveness and efficiency of AETIUS for such calculations.

  5. TIME-DEPENDENT MULTI-GROUP MULTI-DIMENSIONAL RELATIVISTIC RADIATIVE TRANSFER CODE BASED ON SPHERICAL HARMONIC DISCRETE ORDINATE METHOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I., E-mail: tominaga@konan-u.ac.jp, E-mail: sshibata@post.kek.jp, E-mail: Sergei.Blinnikov@itep.ru

    We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids that are involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM), which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering in the publicly available SHDOM code. Our code adopts a mixed-frame approach; the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the first and third moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.

  6. FDDO and DSMC analyses of rarefied gas flow through 2D nozzles

    NASA Technical Reports Server (NTRS)

    Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren; Penko, Paul F.

    1992-01-01

    Two different approaches, the finite-difference method coupled with the discrete-ordinate method (FDDO) and the direct-simulation Monte Carlo (DSMC) method, are used in the analysis of the flow of a rarefied gas expanding through a two-dimensional nozzle into a surrounding low-density environment. In the FDDO analysis, by employing the discrete-ordinate method, the Boltzmann equation, simplified by a model collision integral, is transformed to a set of partial differential equations which are continuous in physical space but are point functions in molecular velocity space. The set of partial differential equations is solved by means of a finite-difference approximation. In the DSMC analysis, the variable hard sphere model is used as the molecular model, and the no-time-counter method is employed as the collision sampling technique. The results of the FDDO and DSMC methods show good agreement. The FDDO method requires less computational effort than the DSMC method by factors of 10 to 40 in CPU time, depending on the degree of rarefaction.

  7. Numerical simulation of rarefied gas flow through a slit

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Jeng, Duen-Ren; De Witt, Kenneth J.; Chung, Chan-Hong

    1990-01-01

    Two different approaches, the finite-difference method coupled with the discrete-ordinate method (FDDO) and the direct-simulation Monte Carlo (DSMC) method, are used in the analysis of the flow of a rarefied gas from one reservoir to another through a two-dimensional slit. The cases considered are hard-vacuum downstream pressure, finite pressure ratios, and isobaric pressure with thermal diffusion, which are not well established in spite of the simplicity of the flow field. In the FDDO analysis, by employing the discrete-ordinate method, the Boltzmann equation, simplified by a model collision integral, is transformed to a set of partial differential equations which are continuous in physical space but are point functions in molecular velocity space. The set of partial differential equations is solved by means of a finite-difference approximation. In the DSMC analysis, three kinds of collision sampling techniques are used: the time counter (TC) method, the null collision (NC) method, and the no time counter (NTC) method.

  8. Applying Multivariate Discrete Distributions to Genetically Informative Count Data.

    PubMed

    Kirkpatrick, Robert M; Neale, Michael C

    2016-03-01

    We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based modeling approaches: our multivariate discrete method when its distributional assumptions are correct, the same method when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and of the proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of the two discrete models. The new methods are implemented in R and OpenMx and are freely available.

  9. Goal-based h-adaptivity of the 1-D diamond difference discrete ordinate method

    NASA Astrophysics Data System (ADS)

    Jeffers, R. S.; Kópházi, J.; Eaton, M. D.; Févotte, F.; Hülsemann, F.; Ragusa, J.

    2017-04-01

    The quantity of interest (QoI) associated with the solution of a partial differential equation (PDE) is not, in general, the solution itself, but a functional of the solution. Dual weighted residual (DWR) error estimators are one way of estimating the error in the QoI resulting from the discretisation of the PDE. This paper aims to provide an estimate of the error in the QoI due to the spatial discretisation, where the discretisation scheme is the diamond difference (DD) method in space and the discrete ordinate (SN) method in angle. The QoIs are reaction rates in detectors for 1-D fixed-source neutron transport problems and the value of the eigenvalue (Keff) for 1-D criticality problems, respectively. Local values of the DWR over individual cells are used as error indicators for goal-based mesh refinement, which aims to give an optimal mesh for a given QoI.
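
    The dual-weighted-residual idea can be illustrated on a much simpler model than the SN transport equations treated here: a finite-difference Poisson problem with a point-value QoI. The residual of the coarse solution, weighted by the dual (adjoint) solution on a finer grid, reproduces the QoI error exactly for a linear problem and linear QoI. Everything below is an illustrative sketch, not the paper's diamond-difference estimator.

```python
import numpy as np

def poisson_matrix(n):
    """Standard FD matrix for -u'' on n interior points of (0, 1), u(0)=u(1)=0."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

n_c, n_f = 16, 33                          # coarse grid and its uniform refinement
x_c = np.linspace(0, 1, n_c + 2)[1:-1]
x_f = np.linspace(0, 1, n_f + 2)[1:-1]
A_c, A_f = poisson_matrix(n_c), poisson_matrix(n_f)
f_c, f_f = np.sin(np.pi * x_c), np.sin(np.pi * x_f)   # smooth source term

u_c = np.linalg.solve(A_c, f_c)            # coarse primal solution
u_f = np.linalg.solve(A_f, f_f)            # fine reference solution
u_i = np.interp(x_f, np.r_[0, x_c, 1], np.r_[0, u_c, 0])  # coarse sol. on fine grid

i_det = np.argmin(np.abs(x_f - 0.7))       # QoI: solution value at a "detector"
g = np.zeros(n_f); g[i_det] = 1.0
z = np.linalg.solve(A_f.T, g)              # dual (adjoint) solution

residual = f_f - A_f @ u_i                 # fine-grid residual of the coarse solution
dwr_estimate = z @ residual                # dual-weighted residual
true_error = g @ u_f - g @ u_i             # actual QoI error vs the fine solution
indicators = np.abs(z * residual)          # cell-wise refinement indicators
```

    Here the identity z.r = J(u_f) - J(u_i) holds to machine precision because the QoI is linear; in practical DWR estimators the fine solve is replaced by a cheap local higher-order recovery, and the `indicators` drive h-refinement exactly as in the goal-based strategy described above.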

  10. Radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements

    NASA Astrophysics Data System (ADS)

    Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego

    2018-07-01

    In this paper we analyze the accuracy and efficiency of several radiative transfer models for inferring cloud parameters from radiances measured by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR). The radiative transfer models are the exact discrete ordinate and matrix operator methods with matrix exponential, and the approximate asymptotic and equivalent Lambertian cloud models. To deal with the computationally expensive radiative transfer calculations, several acceleration techniques are used, such as the telescoping technique, the method of false discrete ordinates, the correlated k-distribution method, and principal component analysis (PCA). We found that, for the EPIC oxygen A-band absorption channel at 764 nm, the exact models using the correlated k-distribution in conjunction with PCA yield an accuracy better than 1.5% and a computation time of 18 s for radiance calculations at 5 viewing zenith angles.
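
    The PCA acceleration mentioned here exploits the strong spectral correlation of monochromatic optical properties across a band: most of the variability collapses onto a few principal components, so the expensive radiative transfer model only needs to be evaluated for a few representative profiles. A toy sketch with synthetic, made-up profiles (not EPIC data):

```python
import numpy as np

rng = np.random.default_rng(42)
n_wave, n_layer = 500, 60
# Synthetic stand-in for per-wavelength optical-depth profiles: three latent
# modes plus a little noise, mimicking the strong spectral correlation.
modes = rng.normal(size=(3, n_layer))
weights = rng.normal(size=(n_wave, 3)) * np.array([5.0, 1.0, 0.4])
profiles = weights @ modes + 0.01 * rng.normal(size=(n_wave, n_layer))

mean = profiles.mean(axis=0)
U, s, Vt = np.linalg.svd(profiles - mean, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(explained, 0.999)) + 1   # components for 99.9% variance
approx = mean + (U[:, :k] * s[:k]) @ Vt[:k]      # rank-k reconstruction
max_err = np.abs(approx - profiles).max()
```

    A handful of components reproduces all 500 spectral profiles, so the full model is run only for the mean profile and k perturbations instead of once per wavelength; the PCA-based RT papers cited here apply this idea to the radiance itself.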

  11. Implicitly causality enforced solution of multidimensional transient photon transport equation.

    PubMed

    Handapangoda, Chintha C; Premaratne, Malin

    2009-12-21

    A novel method for solving the multidimensional transient photon transport equation for laser pulse propagation in biological tissue is presented. A Laguerre expansion is used to represent the time dependence of the incident short pulse. Owing to the intrinsic causal nature of Laguerre functions, our technique automatically preserves the causality constraints of the transient signal. This expansion of the radiance in a Laguerre basis transforms the transient photon transport equation to its steady-state version. The resulting equations are solved using the discrete ordinates method with a finite volume approach. Our method thus enables one to handle general anisotropic, inhomogeneous media in a single formulation, with an added degree of flexibility owing to the ability to invoke higher-order discrete ordinate quadrature sets. Compared with existing strategies, this method therefore offers the advantage of representing the intensity with high accuracy, minimizing numerical dispersion and false propagation errors. The application of the method to one-, two- and three-dimensional geometries is provided.
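
    The key step described above, expanding the time dependence in a causal Laguerre basis, can be sketched in isolation: project a causal pulse onto the orthonormal Laguerre functions phi_n(t) = L_n(t) exp(-t/2) and reconstruct it. The example pulse and truncation order are made up for illustration.

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

n_terms = 12
t = np.linspace(0.0, 40.0, 40001)          # time grid covering the pulse support
dt = t[1] - t[0]
pulse = t * np.exp(-t)                     # an example causal short pulse

# Orthonormal Laguerre functions phi_n(t) = L_n(t) * exp(-t/2)
basis = np.array([lagval(t, np.eye(n_terms)[n]) * np.exp(-t / 2.0)
                  for n in range(n_terms)])

coeffs = (pulse * basis).sum(axis=1) * dt  # inner products <pulse, phi_n>
recon = coeffs @ basis                     # truncated Laguerre reconstruction
rel_err = np.linalg.norm(recon - pulse) / np.linalg.norm(pulse)
```

    Because every phi_n vanishes for t < 0, any truncated reconstruction is automatically causal, which is the property the authors rely on; each Laguerre coefficient then obeys a steady-state transport equation solvable by discrete ordinates.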

  12. Linearized radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements

    NASA Astrophysics Data System (ADS)

    Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego

    2018-07-01

    In this paper, we describe several linearized radiative transfer models which can be used for the retrieval of cloud parameters from EPIC (Earth Polychromatic Imaging Camera) measurements. The approaches under examination are (1) the linearized forward approach, represented in this paper by the linearized discrete ordinate and matrix operator methods with matrix exponential, and (2) the forward-adjoint approach based on the discrete ordinate method with matrix exponential. To enhance the performance of the radiative transfer computations, the correlated k-distribution method and the Principal Component Analysis (PCA) technique are used. We provide a compact description of the proposed methods, as well as a numerical analysis of their accuracy and efficiency when simulating EPIC measurements in the oxygen A-band channel at 764 nm. We found that the computation time of the forward-adjoint approach using the correlated k-distribution method in conjunction with PCA is approximately 13 s for simultaneously computing the derivatives with respect to cloud optical thickness and cloud top height.

  13. Numerical Model of Multiple Scattering and Emission from Layering Snowpack for Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    Jin, Y.; Liang, Z.

    2002-12-01

    The vector radiative transfer (VRT) equation is an integro-differential equation that describes multiple scattering, absorption and transmission of the four Stokes parameters in random scattering media. From the formal integral solution of the VRT equation, lower-order solutions, such as the first-order scattering for a layered medium or the second-order scattering for a half space, can be obtained. These lower-order solutions are usually good at low frequency, when high-order scattering is negligible. It is not feasible, however, to continue the iteration to obtain high-order scattering solutions, because too many multifold integrations would be involved. In space-borne microwave remote sensing, for example, the DMSP (Defense Meteorological Satellite Program) SSM/I (Special Sensor Microwave/Imager) employs seven channels at 19, 22, 37 and 85 GHz. Multiple scattering from terrain surfaces such as snowpack cannot be neglected at these channels. The discrete ordinate and eigen-analysis method has been studied to account for multiple scattering and has been applied to remote sensing of atmospheric precipitation, snowpack, etc. Snowpack was modeled as a layer of dense spherical particles, and the VRT for a layer of uniformly dense spherical particles has been studied numerically by the discrete ordinate method. However, due to surface melting and refrozen crusts, the snowpack becomes stratified, forming inhomogeneous profiles of ice grain size, fractional volume, physical temperature, etc. It therefore becomes necessary to study multiple scattering and emission from stratified snowpack of dense ice grains. But the discrete ordinate and eigen-analysis method cannot simply be applied to a multi-layer model, because numerically solving a set of coupled VRT equations is difficult. Stratifying the inhomogeneous media into multiple slabs and employing the first-order Mueller matrix of each thin slab, this paper develops an iterative method to derive high-order scattering solutions for the whole scattering medium. High-order scattering and emission from inhomogeneous, stratified media of dense spherical particles are obtained numerically. The brightness temperature is obtained at a low frequency such as 5.3 GHz without high-order scattering and at the SSM/I channels with high-order scattering. This approach is also compared with the conventional discrete ordinate method for a uniform-layer model. Numerical simulations for inhomogeneous snowpack are also compared with microwave remote sensing measurements.

  14. Radiative Transfer Modeling of a Large Pool Fire by Discrete Ordinates, Discrete Transfer, Ray Tracing, Monte Carlo and Moment Methods

    NASA Technical Reports Server (NTRS)

    Jensen, K. A.; Ripoll, J.-F.; Wray, A. A.; Joseph, D.; ElHafi, M.

    2004-01-01

    Five computational methods for solution of the radiative transfer equation in an absorbing-emitting and non-scattering gray medium were compared on a 2 m JP-8 pool fire. The temperature and absorption coefficient fields were taken from a synthetic fire due to the lack of a complete set of experimental data for fires of this size. These quantities were generated by a code that has been shown to agree well with the limited quantity of relevant data in the literature. Reference solutions to the governing equation were determined using the Monte Carlo method and a ray tracing scheme with high angular resolution. Solutions using the discrete transfer method, the discrete ordinate method (DOM) with both S(sub 4) and LC(sub 11) quadratures, and a moment model using the M(sub 1) closure were compared to the reference solutions in both isotropic and anisotropic regions of the computational domain. DOM LC(sub 11) is shown to be more accurate than the commonly used S(sub 4) quadrature technique, especially in anisotropic regions of the fire domain. This represents the first study in which the M(sub 1) method was applied to a combustion problem occurring in a complex three-dimensional geometry. The M(sub 1) results agree well with the other solution techniques, which is encouraging for future applications to similar problems since it is computationally the least expensive solution technique. Moreover, the M(sub 1) results are comparable to DOM S(sub 4).

  15. PRIM versus CART in subgroup discovery: when patience is harmful.

    PubMed

    Abu-Hanna, Ameen; Nannings, Barry; Dongelmans, Dave; Hasman, Arie

    2010-10-01

    We systematically compare the established algorithms CART (Classification and Regression Trees) and PRIM (Patient Rule Induction Method) in a subgroup discovery task on a large, real-world, high-dimensional clinical database. Contrary to current conjectures, PRIM's performance was generally inferior to CART's. PRIM often considered "peeling off" a large chunk of data at a value of a relevant discrete ordinal variable unattractive, ultimately missing an important subgroup. This finding has considerable significance in clinical medicine, where ordinal scores are ubiquitous. PRIM's utility in clinical databases would increase if global information about (ordinal) variables were better put to use and if the search algorithm kept track of alternative solutions.
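
    The "peeling" step at the heart of PRIM can be sketched in a few lines: repeatedly remove an alpha-fraction slice off one face of the current box, keeping whichever peel leaves the highest mean outcome. This toy version on synthetic data (no pasting or covering stages) is only a sketch of the idea, not the published algorithm.

```python
import numpy as np

def prim_peel(X, y, alpha=0.1, min_support=0.1):
    """Greedy top-down peeling: drop an alpha-quantile slice from one face of
    the box whenever doing so raises the mean target, until support is low."""
    mask = np.ones(len(y), dtype=bool)
    while mask.mean() > min_support:
        best_mean, best_mask = y[mask].mean(), None
        for j in range(X.shape[1]):
            lo, hi = np.quantile(X[mask, j], [alpha, 1.0 - alpha])
            for cand in (mask & (X[:, j] >= lo), mask & (X[:, j] <= hi)):
                if cand.any() and y[cand].mean() > best_mean:
                    best_mean, best_mask = y[cand].mean(), cand
        if best_mask is None:            # no peel improves the box mean: stop
            break
        mask = best_mask
    return mask

# Toy data: outcome is elevated only where the first feature exceeds 0.7.
rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 3))
y = (X[:, 0] > 0.7).astype(float) + rng.normal(scale=0.1, size=2000)
box = prim_peel(X, y)
```

    On continuous features the patient alpha-sized peels home in on the high-outcome region; on a discrete ordinal column a single candidate peel may have to swallow a whole category at once, which is the situation the paper identifies as the failure mode of "patient" peeling.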

  16. Radiative transfer equation accounting for rotational Raman scattering and its solution by the discrete-ordinates method

    NASA Astrophysics Data System (ADS)

    Rozanov, Vladimir V.; Vountas, Marco

    2014-01-01

    Rotational Raman scattering of solar light in Earth's atmosphere leads to the filling-in of Fraunhofer and telluric lines observed in the reflected spectrum. The phenomenological derivation of the inelastic radiative transfer equation including rotational Raman scattering is presented. The different forms of the approximate radiative transfer equation with first-order rotational Raman scattering terms are obtained employing the Cabannes, Rayleigh, and Cabannes-Rayleigh scattering models. The solution of these equations is considered in the framework of the discrete-ordinates method using rigorous and approximate approaches to derive particular integrals. An alternative forward-adjoint technique is suggested as well. A detailed description of the model, including the exact spectral matching and a binning scheme that significantly speeds up the calculations, is given. The considered solution techniques are implemented in the radiative transfer software package SCIATRAN, and a specified benchmark setup is presented to enable readers to compare with their own results transparently.

  17. TH-AB-BRA-09: Stability Analysis of a Novel Dose Calculation Algorithm for MRI Guided Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zelyak, O; Fallone, B; Cross Cancer Institute, Edmonton, AB

    2016-06-15

    Purpose: To determine the stability of iterative deterministic solutions of the Linear Boltzmann Transport Equation (LBTE) in the presence of magnetic fields. Methods: The LBTE with magnetic fields under investigation is derived using a discrete ordinates approach. The stability analysis is performed using analytical and numerical methods. Analytically, spectral Fourier analysis is used to obtain the convergence rate of the source iteration procedure, based on finding the largest eigenvalue of the iteration operator. This eigenvalue is a function of relevant physical parameters, such as magnetic field strength and material properties, and provides essential information about the domain of applicability required for clinically optimal parameter selection and maximum speed of convergence. The analytical results are reinforced by numerical simulations performed using the same discrete ordinates method in angle and a discontinuous finite element approach in space. Results: The spectral radius of the source iteration technique for the time-independent transport equation with isotropic and anisotropic scattering centers inside an infinite 3D medium is equal to the ratio of the scattering and total cross sections. This result is confirmed numerically by solving the LBTE and is in full agreement with previously published results. The addition of a magnetic field reveals that convergence becomes dependent on the strength of the magnetic field, the energy group discretization, and the order of the anisotropic expansion. Conclusion: The source iteration technique for solving the LBTE with magnetic fields using the discrete ordinates method leads to divergent solutions in the limiting cases of small energy discretizations and high magnetic field strengths. Future investigations into non-stationary Krylov subspace techniques as an iterative solver will be performed, as these have been shown to produce greater stability than source iteration. Furthermore, a stability analysis of a discontinuous finite element space-angle approach (which has been shown to provide the greatest stability) will also be performed. Dr. B Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license the Alberta bi-planar linac MR for commercialization).
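
    For the field-free, isotropic-scattering case quoted in the Results, the textbook Fourier analysis of source iteration in an infinite 1D medium gives the iteration eigenvalue omega(lam) = c * (sigma_t/lam) * arctan(lam/sigma_t) with c = sigma_s/sigma_t, whose supremum (approached as lam -> 0) is the spectral radius c. A quick numerical check of that standard result (not the magnetic-field analysis of this abstract):

```python
import numpy as np

def omega(lam, sigma_t, sigma_s):
    """Fourier eigenvalue of source iteration for the 1D, one-group,
    isotropically scattering transport equation in an infinite medium."""
    c = sigma_s / sigma_t
    return c * (sigma_t / lam) * np.arctan(lam / sigma_t)

sigma_t, sigma_s = 1.0, 0.8
lam = np.linspace(1e-6, 100.0, 100000)    # Fourier mode wavenumbers
rho = omega(lam, sigma_t, sigma_s).max()  # numerical spectral radius
```

    The supremum is attained by the flattest (small-wavenumber) error modes, which is why diffusion-type and Krylov accelerations target exactly those modes.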

  18. Population Fisher information matrix and optimal design of discrete data responses in population pharmacodynamic experiments.

    PubMed

    Ogungbenro, Kayode; Aarons, Leon

    2011-08-01

    In recent years, interest in the application of experimental design theory to population pharmacokinetic (PK) and pharmacodynamic (PD) experiments has increased. The aim is to improve the efficiency and the precision with which parameters are estimated during data analysis and, sometimes, to increase the power and reduce the sample size required for hypothesis testing. The population Fisher information matrix (PFIM) has been described for uniresponse and multiresponse population PK experiments for design evaluation and optimisation. Despite these developments and the availability of tools for optimal design of population PK and PD experiments, much of the effort has been focused on repeated continuous variable measurements, with less work on repeated discrete-type measurements. Discrete data arise mainly in PDs, e.g. ordinal, nominal, dichotomous or count measurements. This paper implements expressions for the PFIM for repeated ordinal, dichotomous and count measurements based on analysis by a mixed-effects modelling technique. Three simulation studies were used to investigate the performance of the expressions: Example 1 is based on repeated dichotomous measurements, Example 2 on repeated count measurements and Example 3 on repeated ordinal measurements. Data simulated in MATLAB were analysed using NONMEM (Laplace method) and the glmmML package in R (Laplace and adaptive Gauss-Hermite quadrature methods). The results obtained for Examples 1 and 2 showed good agreement between the relative standard errors obtained using the PFIM and those from simulations. The results obtained for Example 3 showed the importance of sampling at the most informative time points. Implementation of these expressions will provide the opportunity for efficient design of population PD experiments that involve discrete-type data, through design evaluation and optimisation.

  19. On the use of flux limiters in the discrete ordinates method for 3D radiation calculations in absorbing and scattering media

    NASA Astrophysics Data System (ADS)

    Godoy, William F.; DesJardin, Paul E.

    2010-05-01

    The application of flux limiters to the discrete ordinates method (DOM), SN, for radiative transfer calculations is discussed and analyzed in 3D enclosures for cases in which the intensities are strongly coupled to each other, such as radiative equilibrium and scattering media. A Newton-Krylov iterative method (GMRES) solves the final systems of linear equations, along with a domain decomposition strategy for parallel computation using message passing libraries on a distributed memory system. Ray effects due to angular discretization and errors due to domain decomposition are minimized so that these effects introduce only small variations, in order to focus on the influence of flux limiters on errors due to spatial discretization, known as numerical diffusion, smearing or false scattering. Results are presented for DOM-integrated quantities such as heat flux, irradiation and emission. A variety of flux limiters are compared to "exact" solutions available in the literature, such as the integral solution of the RTE for pure absorbing-emitting media and isotropic scattering cases, and a Monte Carlo solution for a forward scattering case. Additionally, a non-homogeneous 3D enclosure is included to extend the use of flux limiters to more practical cases. The overall balance of convergence, accuracy, speed and stability using flux limiters is shown to be superior to that of step schemes for every test case.
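
    For reference, the kind of flux limiter evaluated in such studies can be written as a function psi(r) of the ratio r of consecutive solution gradients: psi = 0 recovers the diffusive step (upwind) scheme, while larger psi sharpens profiles while remaining in the TVD region. These are the classical limiter formulas, not necessarily the exact set compared in the paper.

```python
import numpy as np

# Classical flux limiters psi(r); r is the ratio of consecutive solution
# gradients. All vanish for r <= 0, satisfy psi(1) = 1 (second order on
# smooth data), and stay within the TVD bound 0 <= psi <= 2.
def minmod(r):
    return np.maximum(0.0, np.minimum(1.0, r))

def van_leer(r):
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def superbee(r):
    return np.maximum.reduce([np.zeros_like(r),
                              np.minimum(2.0 * r, 1.0),
                              np.minimum(r, 2.0)])

r = np.linspace(-2.0, 4.0, 601)
```

    Inside a DOM sweep, the limiter blends the step scheme with a second-order reconstruction of the upwind face intensity, which is how it suppresses the numerical smearing and false scattering discussed above without introducing oscillations.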

  20. Implementation of radiation shielding calculation methods. Volume 2: Seminar/Workshop notes

    NASA Technical Reports Server (NTRS)

    Capo, M. A.; Disney, R. K.

    1971-01-01

    Detailed descriptions are presented of the input data for each of the MSFC computer codes applied to the analysis of a realistic nuclear propelled vehicle. The analytical techniques employed include cross section data, preparation, one and two dimensional discrete ordinates transport, point kernel, and single scatter methods.

  1. Comparison of discrete ordinate and Monte Carlo simulations of polarized radiative transfer in two coupled slabs with different refractive indices.

    PubMed

    Cohen, D; Stamnes, S; Tanikawa, T; Sommersten, E R; Stamnes, J J; Lotsberg, J K; Stamnes, K

    2013-04-22

    A comparison is presented of two different methods for polarized radiative transfer in coupled media consisting of two adjacent slabs with different refractive indices, each slab being a stratified medium with no change in optical properties except in the direction of stratification. One of the methods is based on solving the integro-differential radiative transfer equation for the two coupled slabs using the discrete ordinate approximation. The other method is based on probabilistic and statistical concepts and simulates the propagation of polarized light using the Monte Carlo approach. The emphasis is on non-Rayleigh scattering for particles in the Mie regime. Comparisons with benchmark results available for a slab with constant refractive index show that both methods reproduce these benchmark results when the refractive index is set to be the same in the two slabs. Computed results for test cases with coupling (different refractive indices in the two slabs) show that the two methods produce essentially identical results for identical input in terms of absorption and scattering coefficients and scattering phase matrices.

  2. Fast radiative transfer models for retrieval of cloud properties in the back-scattering region: application to DSCOVR-EPIC sensor

    NASA Astrophysics Data System (ADS)

    Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego

    2017-04-01

    In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen A-band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors of the computed radiances are less than 1.5% compared to the results obtained with the exact phase function. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.

  3. Radiative heat transfer in strongly forward scattering media using the discrete ordinates method

    NASA Astrophysics Data System (ADS)

    Granate, Pedro; Coelho, Pedro J.; Roger, Maxime

    2016-03-01

    The discrete ordinates method (DOM) is widely used to solve the radiative transfer equation, often yielding satisfactory results. However, in the presence of strongly forward scattering media, this method does not generally conserve the scattering energy and the phase function asymmetry factor. Because of this, normalization of the phase function has been proposed to guarantee that the scattering energy and the asymmetry factor are conserved. Various authors have used different normalization techniques. Three of these are compared in the present work, along with two other methods, one based on the finite volume method (FVM) and another based on the spherical harmonics discrete ordinates method (SHDOM). In addition, the approximation of the Henyey-Greenstein phase function by a different one is investigated as an alternative to phase function normalization. The approximate phase function is given by the sum of a Dirac delta function, which accounts for the forward scattering peak, and a smoother scaled phase function. In this study, these techniques are applied to three scalar radiative transfer test cases, namely a three-dimensional cubic domain with a purely scattering medium, an axisymmetric cylindrical enclosure containing an emitting-absorbing-scattering medium, and a three-dimensional transient problem with collimated irradiation. The present results show that accurate predictions are achieved for strongly forward scattering media when the phase function is normalized in such a way that both the scattered energy and the phase function asymmetry factor are conserved. The normalization of the phase function may be avoided by using the FVM or the SHDOM to evaluate the in-scattering term of the radiative transfer equation. Both methods yield results whose accuracy is similar to that obtained using the DOM along with normalization of the phase function. Very satisfactory predictions were also achieved using the delta-M phase function, while the delta-Eddington phase function and the transport approximation may perform poorly.
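
    The conservation problem described above is easy to reproduce: evaluating the Henyey-Greenstein phase function at a discrete quadrature and summing does not return the scattered energy or the asymmetry factor exactly. A small sketch using a simple energy-only renormalization (the paper compares more elaborate schemes that also conserve the asymmetry factor):

```python
import numpy as np

def henyey_greenstein(mu, g):
    """Henyey-Greenstein phase function of the scattering cosine mu,
    normalized in the continuum so that (1/2) * integral over [-1, 1] is 1."""
    return (1.0 - g * g) / (1.0 + g * g - 2.0 * g * mu) ** 1.5

g = 0.9                                     # strongly forward-peaked scattering
mu, w = np.polynomial.legendre.leggauss(8)  # coarse angular quadrature
p = henyey_greenstein(mu, g)

energy = 0.5 * np.sum(w * p)                # would be 1 if energy were conserved
p_norm = p / energy                         # energy-conserving renormalization
asym = 0.5 * np.sum(w * mu * p_norm)        # discrete asymmetry factor
```

    A single scale factor restores the scattered energy but leaves the discrete asymmetry factor short of g, which is why the schemes favored in the paper adjust the discrete phase matrix so that both quantities are conserved simultaneously.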

  4. Quadratic Finite Element Method for 1D Deterministic Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tolar, Jr., D R; Ferguson, J M

    2004-01-06

    In the discrete ordinates, or SN, numerical solution of the transport equation, both the spatial (r) and angular (Ω) dependences of the angular flux ψ(r, Ω) are modeled discretely. While significant effort has been devoted toward improving the spatial discretization of the angular flux, we focus on improving the angular discretization of ψ(r, Ω). Specifically, we employ a Petrov-Galerkin quadratic finite element approximation for the differencing of the angular variable (μ) in developing the one-dimensional (1D) spherical geometry SN equations. We develop an algorithm that shows faster convergence with angular resolution than conventional SN algorithms.

  5. Quasi-heterogeneous efficient 3-D discrete ordinates CANDU calculations using Attila

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preeti, T.; Rulko, R.

    2012-07-01

    In this paper, 3-D quasi-heterogeneous, large-scale parallel Attila calculations of a generic CANDU test problem, consisting of 42 complete fuel channels and a reactivity device perpendicular to the fuel, are presented. The solution method is discrete ordinates (SN), and the computational model is quasi-heterogeneous, i.e., the fuel bundle is partially homogenized into five homogeneous rings, consistent with the DRAGON code model used by industry for incremental cross-section generation. In the calculations, the HELIOS-generated 45 macroscopic cross-section library was used. This approach to CANDU calculations has the following advantages: 1) it allows detailed bundle (and eventually channel) power calculations for each fuel ring in a bundle, 2) it allows exact reactivity device representation for precise reactivity worth calculation, and 3) it eliminates the need for incremental cross-sections. Our results are compared to a reference Monte Carlo MCNP solution. In addition, the performance of the Attila SN method in CANDU calculations characterized by significant upscattering is discussed. (authors)

  6. Modeling Personalized Email Prioritization: Classification-based and Regression-based Approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo S.; Yang, Y.; Carbonell, J.

    2011-10-24

    Email overload, even after spam filtering, presents a serious productivity challenge for busy professionals and executives. One solution is automated prioritization of incoming emails to ensure the most important are read and processed quickly, while others are processed later, as/if time permits, in declining priority levels. This paper presents a study of machine learning approaches to email prioritization into discrete levels, comparing ordinal regression versus classifier cascades. Given the ordinal nature of discrete email priority levels, SVM ordinal regression would be expected to perform well, but surprisingly a cascade of SVM classifiers significantly outperforms ordinal regression for email prioritization. In contrast, SVM regression performs well -- better than classifiers -- on selected UCI data sets. This unexpected performance inversion is analyzed and results are presented, providing core functionality for email prioritization systems.
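    The cascade idea can be sketched in a Frank-and-Hall style: one binary classifier per "is the priority greater than k?" question, with the predicted level given by the number of positive votes. This is a hypothetical illustration with a toy threshold learner standing in for the paper's SVMs; all class and function names are ours:

```python
import numpy as np

class OrdinalCascade:
    """Cascade for ordinal prediction: classifier k answers 'label > k?';
    the predicted level is the count of positive votes. `make_clf` is any
    factory returning an object with fit(X, y) / predict(X)."""
    def __init__(self, make_clf, n_levels):
        self.n_levels = n_levels
        self.clfs = [make_clf() for _ in range(n_levels - 1)]

    def fit(self, X, y):
        for k, clf in enumerate(self.clfs):
            clf.fit(X, (y > k).astype(int))   # binary target: label exceeds k
        return self

    def predict(self, X):
        votes = np.stack([clf.predict(X) for clf in self.clfs])
        return votes.sum(axis=0)              # ordinal level in 0..n_levels-1

class Stump:
    """Minimal binary learner on a single feature: threshold at the midpoint
    between the two class means (a toy stand-in for an SVM)."""
    def fit(self, X, y):
        self.t = 0.5 * (X[y == 1].mean() + X[y == 0].mean())

    def predict(self, X):
        return (X[:, 0] > self.t).astype(int)
```

    The cascade preserves the ordering of levels by construction, which is the structural advantage the paper weighs against direct ordinal regression.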

  7. Vectorial finite elements for solving the radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Badri, M. A.; Jolivet, P.; Rousseau, B.; Le Corre, S.; Digonnet, H.; Favennec, Y.

    2018-06-01

    The discrete ordinate method coupled with the finite element method is often used for the spatio-angular discretization of the radiative transfer equation. In this paper we attempt to improve upon such a discretization technique. Instead of using standard finite elements, we reformulate the radiative transfer equation using vectorial finite elements. In comparison to standard finite elements, this reformulation yields faster timings for the linear system assemblies, as well as for the solution phase when using scattering media. The proposed vectorial finite element discretization for solving the radiative transfer equation is cross-validated against a benchmark problem available in the literature. In addition, we have used the method of manufactured solutions to verify the order of accuracy of our discretization technique within different absorbing, scattering, and emitting media. For solving large radiation problems on parallel computers, the vectorial finite element method is parallelized using domain decomposition. The proposed domain decomposition method scales to a large number of processes, and its performance is unaffected by changes in the optical thickness of the medium. Our parallel solver is used to solve a large-scale radiative transfer problem of Kelvin-cell radiation.

  8. Clouds in the atmospheres of extrasolar planets. IV. On the scattering greenhouse effect of CO2 ice particles: Numerical radiative transfer studies

    NASA Astrophysics Data System (ADS)

    Kitzmann, D.; Patzer, A. B. C.; Rauer, H.

    2013-09-01

    Context. Owing to their wavelength-dependent absorption and scattering properties, clouds have a strong impact on the climate of planetary atmospheres. The potential greenhouse effect of CO2 ice clouds in the atmospheres of terrestrial extrasolar planets is of particular interest because it might influence the position, and thus the extension, of the outer boundary of the classic habitable zone around main sequence stars. Such a greenhouse effect, however, is a complicated function of the CO2 ice particles' optical properties. Aims: We study the radiative effects of CO2 ice particles obtained by different numerical treatments used to solve the radiative transfer equation. To determine the effectiveness of the scattering greenhouse effect caused by CO2 ice clouds, the radiative transfer calculations are performed over the relevant wide range of particle sizes and optical depths, employing different numerical methods. Methods: We used Mie theory to calculate the optical properties of a particle polydispersion. The radiative transfer calculations were done with a high-order discrete ordinate method (DISORT). Two-stream radiative transfer methods were used for comparison with previous studies. Results: The comparison between the results of a high-order discrete ordinate method and simpler two-stream approaches reveals large deviations in the potential scattering efficiency of the greenhouse effect. The two-stream methods overestimate the transmitted and reflected radiation, thereby yielding a higher scattering greenhouse effect. For the particular case of a cool M-type dwarf, the CO2 ice particles show no strong effective scattering greenhouse effect when using the high-order discrete ordinate method, whereas a positive net greenhouse effect was found for the two-stream radiative transfer schemes. As a result, previous studies of the effects of CO2 ice clouds using two-stream approximations overestimated the atmospheric warming caused by the scattering greenhouse effect.
Consequently, the scattering greenhouse effect of CO2 ice particles seems to be less effective than previously estimated. In general, higher-order radiative transfer methods are needed to describe the effects of CO2 ice clouds accurately, as indicated by our numerical radiative transfer studies.

  9. Regenerating time series from ordinal networks.

    PubMed

    McCullough, Michael; Sakellariou, Konstantinos; Stemler, Thomas; Small, Michael

    2017-03-01

    Recently proposed ordinal networks not only afford novel methods of nonlinear time series analysis but also constitute stochastic approximations of the deterministic flow time series from which the network models are constructed. In this paper, we construct ordinal networks from discretely sampled continuous chaotic time series and then regenerate new time series by taking random walks on the ordinal network. We then investigate the extent to which the dynamics of the original time series are encoded in the ordinal networks and retained through the process of regenerating new time series by using several distinct quantitative approaches. First, we use recurrence quantification analysis on traditional recurrence plots and order recurrence plots to compare the temporal structure of the original time series with random walk surrogate time series. Second, we estimate the largest Lyapunov exponent from the original time series and investigate the extent to which this invariant measure can be estimated from the surrogate time series. Finally, estimates of correlation dimension are computed to compare the topological properties of the original and surrogate time series dynamics. Our findings show that ordinal networks constructed from univariate time series data constitute stochastic models which approximate important dynamical properties of the original systems.
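    The construction-and-regeneration pipeline described here can be sketched as follows (a minimal illustration of ordinal partitioning, transition counting, and a weighted random walk; function names are illustrative, and details such as the embedding delay are omitted):

```python
import numpy as np
from collections import defaultdict

def ordinal_symbols(x, m=3):
    """Map each length-m window of the series to its permutation pattern."""
    return [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]

def build_ordinal_network(symbols):
    """Directed weighted graph of transitions between successive symbols."""
    net = defaultdict(lambda: defaultdict(int))
    for a, b in zip(symbols, symbols[1:]):
        net[a][b] += 1
    return net

def random_walk(net, start, steps, rng):
    """Regenerate a symbolic sequence by a transition-count-weighted walk."""
    seq, node = [start], start
    for _ in range(steps):
        nbrs = list(net[node])
        if not nbrs:                 # dead end: restart at the initial node
            node = start
            continue
        wts = np.array([net[node][b] for b in nbrs], dtype=float)
        node = nbrs[rng.choice(len(nbrs), p=wts / wts.sum())]
        seq.append(node)
    return seq
```

    The surrogate analyses in the paper (recurrence quantification, Lyapunov exponent, correlation dimension) are then applied to sequences regenerated this way.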

  10. Regenerating time series from ordinal networks

    NASA Astrophysics Data System (ADS)

    McCullough, Michael; Sakellariou, Konstantinos; Stemler, Thomas; Small, Michael

    2017-03-01

    Recently proposed ordinal networks not only afford novel methods of nonlinear time series analysis but also constitute stochastic approximations of the deterministic flow time series from which the network models are constructed. In this paper, we construct ordinal networks from discretely sampled continuous chaotic time series and then regenerate new time series by taking random walks on the ordinal network. We then investigate the extent to which the dynamics of the original time series are encoded in the ordinal networks and retained through the process of regenerating new time series by using several distinct quantitative approaches. First, we use recurrence quantification analysis on traditional recurrence plots and order recurrence plots to compare the temporal structure of the original time series with random walk surrogate time series. Second, we estimate the largest Lyapunov exponent from the original time series and investigate the extent to which this invariant measure can be estimated from the surrogate time series. Finally, estimates of correlation dimension are computed to compare the topological properties of the original and surrogate time series dynamics. Our findings show that ordinal networks constructed from univariate time series data constitute stochastic models which approximate important dynamical properties of the original systems.

  11. Radiative Transfer Model for Operational Retrieval of Cloud Parameters from DSCOVR-EPIC Measurements

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Molina Garcia, V.; Doicu, A.; Loyola, D. G.

    2016-12-01

    The Earth Polychromatic Imaging Camera (EPIC) onboard the Deep Space Climate Observatory (DSCOVR) measures the radiance in the backscattering region. To make sure that all details in the backward glory are covered, a large number of streams is required by a standard radiative transfer model based on the discrete ordinates method. Even the use of the delta-M scaling and the TMS correction does not substantially reduce the number of streams. The aim of this work is to analyze the capability of a fast radiative transfer model to retrieve cloud parameters operationally from EPIC measurements. The radiative transfer model combines the discrete ordinates method with the matrix exponential for the computation of radiances and the matrix operator method for the calculation of the reflection and transmission matrices. Standard acceleration techniques, such as the use of the normalized right and left eigenvectors, the telescoping technique, the Padé approximation, and the successive-order-of-scattering approximation, are implemented. In addition, the model may compute the reflection matrix of the cloud by means of asymptotic theory, and may use the equivalent Lambertian cloud model. The various approximations are analyzed from the point of view of efficiency and accuracy.

  12. Computing Radiative Transfer in a 3D Medium

    NASA Technical Reports Server (NTRS)

    Von Allmen, Paul; Lee, Seungwon

    2012-01-01

    A package of software computes the time-dependent propagation of a narrow laser beam in an arbitrary three-dimensional (3D) medium with absorption and scattering, using the transient-discrete-ordinates method and a direct integration method. Unlike prior software that utilizes a Monte Carlo method, this software enables simulation at very small signal-to-noise ratios. The ability to simulate propagation of a narrow laser beam in a 3D medium is an improvement over other discrete-ordinate software. Unlike other direct-integration software, this software is not limited to simulation of propagation of thermal radiation with broad angular spread in three dimensions or of a laser pulse with narrow angular spread in two dimensions. Uses for this software include (1) computing scattering of a pulsed laser beam on a material having given elastic scattering and absorption profiles, and (2) evaluating concepts for laser-based instruments for sensing oceanic turbulence and related measurements of oceanic mixed-layer depths. With suitable augmentation, this software could be used to compute radiative transfer in ultrasound imaging in biological tissues, radiative transfer in the upper Earth crust for oil exploration, and propagation of laser pulses in telecommunication applications.

  13. Spectral collocation method with a flexible angular discretization scheme for radiative transfer in multi-layer graded index medium

    NASA Astrophysics Data System (ADS)

    Wei, Linyang; Qi, Hong; Sun, Jianping; Ren, Yatao; Ruan, Liming

    2017-05-01

    The spectral collocation method (SCM) is employed to solve the radiative transfer in multi-layer semitransparent medium with graded index. A new flexible angular discretization scheme is employed to discretize the solid angle domain freely, overcoming the limit on the number of discrete radiative directions imposed by the traditional SN discrete ordinate scheme. Three radial basis function interpolation approaches, namely multi-quadric (MQ), inverse multi-quadric (IMQ) and inverse quadratic (IQ) interpolation, are employed to couple the radiative intensity at the interface between two adjacent layers, and numerical experiments show that MQ interpolation has the highest accuracy and best stability. Various radiative transfer problems in double-layer semitransparent media with different thermophysical properties are investigated, and the influence of these thermophysical properties on the radiative transfer procedure in double-layer semitransparent media is analyzed. All the simulated results show that the present SCM with the new angular discretization scheme can predict the radiative transfer in multi-layer semitransparent media with graded index efficiently and accurately.

  14. Introduction to the Theory of Atmospheric Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Buglia, J. J.

    1986-01-01

    The fundamental physical and mathematical principles governing the transmission of radiation through the atmosphere are presented, with emphasis on the scattering of visible and near-IR radiation. The classical two-stream, thin-atmosphere, and Eddington approximations, along with some of their offspring, are developed in detail, along with the discrete ordinates method of Chandrasekhar. The adding and doubling methods are discussed from basic principles, and references for further reading are suggested.
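    As a small worked example of the two-stream machinery introduced in such a treatment, the closed-form albedo of a semi-infinite, isotropically scattering atmosphere can be coded directly. Note that the exact form depends on the two-stream variant chosen; the version below, R = (1 - sqrt(1 - omega)) / (1 + sqrt(1 - omega)) with omega the single-scattering albedo, is the commonly quoted similarity form and is offered only as a sketch:

```python
import math

def two_stream_semi_infinite_albedo(omega):
    """Two-stream reflectance of a semi-infinite, isotropically scattering
    layer: R = (1 - s) / (1 + s) with s = sqrt(1 - omega), where omega is
    the single-scattering albedo (0 = pure absorption, 1 = conservative)."""
    s = math.sqrt(1.0 - omega)
    return (1.0 - s) / (1.0 + s)
```

    The limiting cases behave as expected: a purely absorbing medium reflects nothing, while a conservatively scattering semi-infinite medium eventually returns all incident radiation.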

  15. High performance computation of radiative transfer equation using the finite element method

    NASA Astrophysics Data System (ADS)

    Badri, M. A.; Jolivet, P.; Rousseau, B.; Favennec, Y.

    2018-05-01

    This article deals with an efficient strategy for numerically simulating radiative transfer phenomena using distributed computing. The finite element method alongside the discrete ordinate method is used for spatio-angular discretization of the monochromatic steady-state radiative transfer equation in an anisotropically scattering medium. Two very different parallelization methods, angular decomposition and spatial (domain) decomposition, are presented. To do so, the finite element method is used in a vectorial way. A detailed comparison of scalability, performance, and efficiency on thousands of processors is established for two- and three-dimensional heterogeneous test cases. Timings show that both algorithms scale well when using proper preconditioners. It is also observed that our angular decomposition scheme outperforms our domain decomposition method. Overall, we perform numerical simulations at scales that were previously unattainable by standard radiative transfer equation solvers.

  16. Radiant heat exchange calculations in radiantly heated and cooled enclosures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, K.S.; Zhang, P.

    1995-08-01

    This paper presents the development of a three-dimensional mathematical model to compute the radiant heat exchange between surfaces separated by a transparent and/or opaque medium. The model formulation accommodates arbitrary arrangements of the interior surfaces, as well as arbitrary placement of obstacles within the enclosure. The discrete ordinates radiation model is applied and has the capability to analyze the effect of irregular geometries and diverse surface temperatures and radiative properties. The model is verified by comparing calculated heat transfer rates to heat transfer rates determined from the exact radiosity method for four different enclosures. The four enclosures were selected to provide a wide range of verification. This three-dimensional model based on the discrete ordinates method can be applied to a building to assist the design engineer in sizing a radiant heating system. By coupling this model with a convective and conductive heat transfer model and a thermal comfort model, the comfort levels throughout the room can be easily and efficiently mapped for a given radiant heater location. In addition, objects such as airplanes, trucks, furniture, and partitions can be easily incorporated to determine their effect on the performance of the radiant heating system.

  17. Ex-vessel neutron dosimetry analysis for Westinghouse 4-loop XL pressurized water reactor plant using the RadTrack(TM) Code System with the 3D parallel discrete ordinates code RAPTOR-M3G

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, J.; Alpan, F. A.; Fischer, G.A.

    2011-07-01

    The traditional two-dimensional (2D)/one-dimensional (1D) synthesis methodology has been widely used to calculate fast neutron (>1.0 MeV) fluence exposure to the reactor pressure vessel in the belt-line region. However, this methodology cannot be expected to provide accurate fast neutron fluence calculations at elevations far above or below the active core region. A three-dimensional (3D) parallel discrete ordinates calculation for ex-vessel neutron dosimetry on a Westinghouse 4-Loop XL Pressurized Water Reactor has been performed, showing good agreement between calculated and measured results. Furthermore, the results show very different fast neutron flux values at some of the former plate locations and at elevations above and below the active core than those calculated by the 2D/1D synthesis method. This indicates that for certain irregular reactor internal structures, where the fast neutron flux has a very strong local effect, a 3D transport method is required to calculate accurate fast neutron exposure. (authors)

  18. Discontinuous finite element method for vector radiative transfer

    NASA Astrophysics Data System (ADS)

    Wang, Cun-Hai; Yi, Hong-Liang; Tan, He-Ping

    2017-03-01

    The discontinuous finite element method (DFEM) is applied to solve the vector radiative transfer in participating media. The derivation in a discrete form of the vector radiation governing equations is presented, in which the angular space is discretized by the discrete-ordinates approach with a local refined modification, and the spatial domain is discretized into finite non-overlapped discontinuous elements. The elements in the whole solution domain are connected by modelling the boundary numerical flux between adjacent elements, which makes the DFEM numerically stable for solving radiative transfer equations. Several problems of vector radiative transfer are tested to verify the performance of the developed DFEM, including vector radiative transfer in a one-dimensional parallel slab containing a Mie/Rayleigh/strong forward scattering medium and in a two-dimensional square medium. The DFEM results agree very well with the benchmark solutions in published references, showing that the developed DFEM is accurate and effective for solving vector radiative transfer problems.

  19. Rarefied gas flow simulations using high-order gas-kinetic unified algorithms for Boltzmann model equations

    NASA Astrophysics Data System (ADS)

    Li, Zhi-Hui; Peng, Ao-Ping; Zhang, Han-Xin; Yang, Jaw-Yen

    2015-04-01

    This article reviews rarefied gas flow computations based on nonlinear model Boltzmann equations using deterministic high-order gas-kinetic unified algorithms (GKUA) in phase space. The nonlinear Boltzmann model equations considered include the BGK model, the Shakhov model, the Ellipsoidal Statistical model and the Morse model. Several high-order gas-kinetic unified algorithms, which combine the discrete velocity ordinate method in velocity space and the compact high-order finite-difference schemes in physical space, are developed. The parallel strategies implemented with the accompanying algorithms are of equal importance. Accurate computations of rarefied gas flow problems using various kinetic models over wide ranges of Mach numbers 1.2-20 and Knudsen numbers 0.0001-5 are reported. The effects of different high resolution schemes on the flow resolution under the same discrete velocity ordinate method are studied. A conservative discrete velocity ordinate method to ensure the kinetic compatibility condition is also implemented. The present algorithms are tested for the one-dimensional unsteady shock-tube problems with various Knudsen numbers, the steady normal shock wave structures for different Mach numbers, the two-dimensional flows past a circular cylinder and a NACA 0012 airfoil to verify the present methodology and to simulate gas transport phenomena covering various flow regimes. Illustrations of large scale parallel computations of three-dimensional hypersonic rarefied flows over the reusable sphere-cone satellite and the re-entry spacecraft using almost the largest computer systems available in China are also reported. The present computed results are compared with the theoretical prediction from gas dynamics, related DSMC results, slip N-S solutions and experimental data, and good agreement can be found. 
The numerical experience indicates that although the direct model Boltzmann equation solver in phase space can be computationally expensive, the present GKUAs for kinetic model Boltzmann equations, in conjunction with currently available high-performance parallel computing power, can provide a vital engineering tool for analyzing rarefied gas flows covering the whole range of flow regimes in aerospace engineering applications.

  20. Functional data analysis: An approach for environmental ordination and matching discrete with continuous observations

    EPA Science Inventory

    Investigators are frequently confronted with data sets that include both discrete observations and extended time series of environmental data that had been collected by autonomous recorders. Evaluating the relationships between these two kinds of data is challenging. A common a...

  1. Verification of Three Dimensional Triangular Prismatic Discrete Ordinates Transport Code ENSEMBLE-TRIZ by Comparison with Monte Carlo Code GMVP

    NASA Astrophysics Data System (ADS)

    Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi

    2014-06-01

    This paper deals with the verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe sodium-cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of the initial core and at the beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.

  2. A Bayesian hierarchical model for discrete choice data in health care.

    PubMed

    Antonio, Anna Liza M; Weiss, Robert E; Saigal, Christopher S; Dahan, Ely; Crespi, Catherine M

    2017-01-01

    In discrete choice experiments, patients are presented with sets of health states described by various attributes and asked to make choices from among them. Discrete choice experiments allow health care researchers to study the preferences of individual patients by eliciting trade-offs between different aspects of health-related quality of life. However, many discrete choice experiments yield data with incomplete ranking information and sparsity due to the limited number of choice sets presented to each patient, making it challenging to estimate patient preferences. Moreover, methods to identify outliers in discrete choice data are lacking. We develop a Bayesian hierarchical random effects rank-ordered multinomial logit model for discrete choice data. Missing ranks are accounted for by marginalizing over all possible permutations of unranked alternatives to estimate individual patient preferences, which are modeled as a function of patient covariates. We provide a Bayesian version of relative attribute importance, and adapt the use of the conditional predictive ordinate to identify outlying choice sets and outlying individuals with unusual preferences compared to the population. The model is applied to data from a study using a discrete choice experiment to estimate individual patient preferences for health states related to prostate cancer treatment.
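    The rank-ordered ("exploded") multinomial logit at the core of such models can be sketched for fixed utilities. The hierarchical random effects, covariate model, and marginalization over unranked alternatives described in the abstract are omitted; this is only the base likelihood, and the names are illustrative:

```python
import numpy as np

def rank_ordered_logit_logprob(utilities, ranking):
    """Log-probability of a full ranking under the rank-ordered ('exploded')
    multinomial logit: the ranking decomposes into successive best choices
    from the shrinking set of remaining alternatives."""
    remaining = list(range(len(utilities)))
    logp = 0.0
    for alt in ranking:
        u = np.array([utilities[j] for j in remaining])
        # softmax log-probability of choosing `alt` among `remaining`
        logp += utilities[alt] - np.logaddexp.reduce(u)
        remaining.remove(alt)
    return logp
```

    Summing the resulting probabilities over all permutations gives 1, which is the sanity check that the exploded decomposition defines a proper distribution over rankings.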

  3. Implementation of radiation shielding calculation methods. Volume 1: Synopsis of methods and summary of results

    NASA Technical Reports Server (NTRS)

    Capo, M. A.; Disney, R. K.

    1971-01-01

    The work performed in the following areas is summarized: (1) A realistic nuclear-propelled vehicle was analyzed using the Marshall Space Flight Center computer code package. This code package includes one- and two-dimensional discrete ordinates transport, point kernel, and single-scatter techniques, as well as cross-section preparation and data-processing codes. (2) Techniques were developed to improve the automated data transfer in the coupled computation method of the computer code package and to improve the utilization of this code package on the Univac-1108 computer system. (3) The MSFC master data libraries were updated.

  4. Ridit Analysis for Cooper-Harper and Other Ordinal Ratings for Sparse Data - A Distance-based Approach

    DTIC Science & Technology

    2016-09-01

    The method of this paper is to fit empirical Beta distributions to observed data, and then to use a randomization approach to make inferences on the difference between distributions, enabling a Ridit analysis on the often sparse data sets in many Flying Qualities applications. One such distance measure is the discrete-probability-distribution version of the (squared) Hellinger distance (Yang & Le Cam, 2000): H^2(P, Q) = 1 - Σ_i √(p_i q_i).

  5. Analytic approach to photoelectron transport.

    NASA Technical Reports Server (NTRS)

    Stolarski, R. S.

    1972-01-01

    The equation governing the transport of photoelectrons in the ionosphere is shown to be equivalent to the equation of radiative transfer. In the single-energy approximation this equation is solved in closed form by the method of discrete ordinates for isotropic scattering and for a single-constituent atmosphere. The results include prediction of the angular distribution of photoelectrons at all altitudes and, in particular, the angular distribution of the escape flux. The implications of these solutions in real atmosphere calculations are discussed.

  6. Graphical Models for Ordinal Data

    PubMed Central

    Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji

    2014-01-01

    A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey. PMID:26120267
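    The generative assumption behind this model class can be sketched directly: ordinal data arise by thresholding a latent multivariate Gaussian at per-variable cutpoints. The sampler below is a toy illustration of that assumption, not the authors' EM-like estimation algorithm; names and parameter values are ours:

```python
import numpy as np

def sample_ordinal(n, corr, cutpoints, rng):
    """Draw n ordinal vectors by discretizing a latent multivariate Gaussian:
    latent coordinate j is thresholded at cutpoints[j] (K cuts -> K+1 levels)."""
    L = np.linalg.cholesky(corr)                      # latent correlation structure
    z = rng.standard_normal((n, corr.shape[0])) @ L.T # correlated latent Gaussians
    return np.stack([np.digitize(z[:, j], cutpoints[j])
                     for j in range(corr.shape[0])], axis=1)
```

    Discretization attenuates but does not destroy the latent dependence, which is why the concentration matrix of the latent Gaussian remains identifiable from the ordinal observations.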

  7. APC: A New Code for Atmospheric Polarization Computations

    NASA Technical Reports Server (NTRS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2014-01-01

    A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.

  8. Hybrid discrete ordinates and characteristics method for solving the linear Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Yi, Ce

    With the ability of computer hardware and software increasing rapidly, deterministic methods to solve the linear Boltzmann equation (LBE) have attracted some attention for computational applications in both the nuclear engineering and medical physics fields. Among various deterministic methods, the discrete ordinates method (SN) and the method of characteristics (MOC) are two of the most widely used. The SN method is the traditional approach to solving the LBE, valued for its stability and efficiency, while the MOC has advantages in treating complicated geometries. However, in 3-D problems requiring a dense discretization grid in phase space (i.e., a large number of spatial meshes, directions, or energy groups), both methods can suffer from the need for large amounts of memory and computation time. In our study, we developed a new hybrid algorithm by combining the two methods into one code, TITAN. The hybrid approach is specifically designed for application to problems containing low-scattering regions. A new serial 3-D time-independent transport code has been developed. Under the hybrid approach, the preferred method can be applied in different regions (blocks) within the same problem model. Since the characteristics method is numerically more efficient in low-scattering media, the hybrid approach uses a block-oriented characteristics solver in low-scattering regions and a block-oriented SN solver in the remainder of the physical model. In the TITAN code, a physical problem model is divided into a number of coarse meshes (blocks) in Cartesian geometry. Either the characteristics solver or the SN solver can be chosen to solve the LBE within a coarse mesh. A coarse mesh can be filled with fine meshes or characteristic rays depending on the solver assigned to it.
Furthermore, with its object-oriented programming paradigm and layered code structure, TITAN allows different individual spatial meshing schemes and angular quadrature sets for each coarse mesh. Two quadrature types (level-symmetric and Legendre-Chebyshev quadrature) along with the ordinate splitting techniques (rectangular splitting and PN-TN splitting) are implemented. In the SN solver, we apply a memory-efficient 'front-line' style paradigm to handle the fine mesh interface fluxes. In the characteristics solver, we have developed a novel 'backward' ray-tracing approach, in which a bi-linear interpolation procedure is used on the incoming boundaries of a coarse mesh. A CPU-efficient scattering kernel is shared by both solvers within the source iteration scheme. Angular and spatial projection techniques are developed to transfer the angular fluxes on the interfaces of coarse meshes with different discretization grids. The performance of the hybrid algorithm is tested on a number of benchmark problems in both the nuclear engineering and medical physics fields. Among them are the Kobayashi benchmark problems and a computed tomography (CT) device model. We also developed an extra sweep procedure with the fictitious quadrature technique to calculate angular fluxes along directions of interest. The technique is applied in a single photon emission computed tomography (SPECT) phantom model to simulate the SPECT projection images. The accuracy and efficiency of the TITAN code are demonstrated in these benchmarks, along with its scalability. A modified version of the characteristics solver is integrated in the PENTRAN code and tested within the parallel engine of PENTRAN. The limitations of the hybrid algorithm are also studied.

  9. Heat transfer analysis of a lab scale solar receiver using the discrete ordinates model

    NASA Astrophysics Data System (ADS)

    Dordevich, Milorad C. W.

    This thesis documents the development, implementation and simulation outcomes of the Discrete Ordinates Radiation Model in ANSYS FLUENT for simulating the radiative heat transfer occurring in the San Diego State University lab-scale Small Particle Heat Exchange Receiver. In tandem, it also documents how well the Discrete Ordinates Radiation Model results compare with those from the in-house developed Monte Carlo Ray Trace Method in a number of simplified geometries. A secondary goal of this study was the inclusion of new physics, specifically buoyancy. Implementation of an additional Monte Carlo Ray Trace software package known as VEGAS, which was specifically developed to model lab-scale solar simulators and provide directional, flux and beam-spread information for the aperture boundary condition, was also a goal of this study. Once the model was established, test cases were run to assess its predictive capabilities. Agreement within 15% was obtained against laboratory measurements made in the San Diego State University Combustion and Solar Energy Laboratory, with the metrics of comparison being the thermal efficiency and the outlet, wall and aperture quartz temperatures. Parametric testing additionally showed that the thermal efficiency of the system was strongly dependent on the mass flow rate and particle loading. It was also shown that the orientation of the small particle heat exchange receiver is important in attaining optimal efficiency because buoyancy-induced effects cannot be neglected. The analyses presented in this work were all performed on the lab-scale small particle heat exchange receiver, which is 0.38 m in diameter by 0.51 m tall and operated with an input irradiation flux of 3 kWth, a nominal mass flow rate of 2 g/s, and a suspended particle mass loading of 2 g/m3.
Finally, based on insight gained during the implementation and development of the model, a new and improved design was simulated to predict how the efficiency of the small particle heat exchange receiver could be improved through a few simple internal geometry design modifications. It was shown that the theoretical calculated efficiency of the small particle heat exchange receiver could be improved from 64% to 87% with adjustments to the internal geometry, mass flow rate, and mass loading.

  10. Ray Effect Mitigation Through Reference Frame Rotation

    DOE PAGES

    Tencer, John

    2016-05-01

    The discrete ordinates method is a popular and versatile technique for solving the radiative transport equation, but a major drawback is the presence of ray effects. Mitigating ray effects can yield significantly more accurate results and enhanced numerical stability for combined-mode codes. Moreover, when ray effects are present, the solution is highly dependent on the relative orientation of the geometry and the global reference frame, which is an undesirable property. A novel ray effect mitigation technique is proposed in which the computed solution is averaged over various reference frame orientations.

  11. An S N Algorithm for Modern Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Randal Scott

    2016-08-29

    LANL discrete ordinates transport packages are required to perform large, computationally intensive time-dependent calculations on massively parallel architectures, where even a single such calculation may need many months to complete. While KBA methods scale out well to very large numbers of compute nodes, we are limited by practical constraints on the number of such nodes we can actually apply to any given calculation. Instead, we describe a modified KBA algorithm that allows realization of the reductions in solution time offered by both the current, and future, architectural changes within a compute node.
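The wavefront dependency structure that KBA-style sweep algorithms exploit can be sketched for a single 2-D octant (an illustrative schedule under assumed (+x, +y) sweep directions, not LANL's modified algorithm):

```python
def kba_wavefronts(nx, ny):
    """Diagonal wavefront schedule for a 2-D transport sweep in the
    (+x, +y) octant: each cell (i, j) depends only on its upstream
    neighbors (i-1, j) and (i, j-1), so all cells on one anti-diagonal
    are mutually independent and can be processed in parallel."""
    fronts = []
    for d in range(nx + ny - 1):
        front = [(i, d - i) for i in range(nx) if 0 <= d - i < ny]
        fronts.append(front)
    return fronts
```

Every cell's upstream neighbors land on strictly earlier fronts, which is the property that lets KBA pipeline sweeps across a decomposed spatial domain.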

  12. Flow of rarefied gases over two-dimensional bodies

    NASA Technical Reports Server (NTRS)

    Jeng, Duen-Ren; De Witt, Kenneth J.; Keith, Theo G., Jr.; Chung, Chan-Hong

    1989-01-01

    A kinetic-theory analysis is made of the flow of rarefied gases over two-dimensional bodies of arbitrary curvature. The Boltzmann equation simplified by a model collision integral is written in an arbitrary orthogonal curvilinear coordinate system, and solved by means of finite-difference approximation with the discrete ordinate method. A numerical code is developed which can be applied to any two-dimensional submerged body of arbitrary curvature for the flow regimes from free-molecular to slip at transonic Mach numbers. Predictions are made for the case of a right circular cylinder.

  13. Skyshine radiation from a pressurized water reactor containment dome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, W.H.

    1986-06-01

    The radiation dose rates resulting from airborne activity inside a post-accident pressurized water reactor containment are calculated by a combined discrete ordinates/Monte Carlo method. The calculated total dose rates and the skyshine component are presented as a function of distance from the containment at three different elevations for various gamma-ray source energies. The one-dimensional discrete ordinates code ANISN is used to approximate the skyshine dose rates from the hemispherical dome, and the results compare favorably with more rigorous results calculated by a three-dimensional Monte Carlo code.

  14. Graphical aids for visualizing and interpreting patterns in departures from agreement in ordinal categorical observer agreement data.

    PubMed

    Bangdiwala, Shrikant I

    2017-01-01

    When studying the agreement between two observers rating the same n units into the same k discrete ordinal categories, Bangdiwala (1985) proposed using the "agreement chart" to visually assess agreement. This article proposes that often it is more interesting to focus on the patterns of disagreement and visually understanding the departures from perfect agreement. The article reviews the use of graphical techniques for descriptively assessing agreement and disagreements, and also reviews some of the available summary statistics that quantify such relationships.
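The statistic behind Bangdiwala's agreement chart can be written down in a few lines. This is a minimal sketch of the unweighted B statistic (diagonal mass relative to the marginal rectangles of the chart); the table below is hypothetical:

```python
import numpy as np

def bangdiwala_b(table):
    """Bangdiwala's B from a k x k cross-classification of two raters:
    sum of squared diagonal counts over the sum of the products of the
    marginal totals (the rectangle areas drawn in the agreement chart).
    B = 1 for perfect agreement, smaller values indicate disagreement."""
    t = np.asarray(table, dtype=float)
    diag = np.diag(t)
    row, col = t.sum(axis=1), t.sum(axis=0)
    return (diag ** 2).sum() / (row * col).sum()

# Perfect agreement puts all mass on the diagonal and gives B = 1.
perfect = np.diag([10, 20, 30])
```

For a 2 x 2 table with all four cells equal, B drops to 0.25, reflecting substantial disagreement.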

  15. Newsracks and the First Amendment.

    ERIC Educational Resources Information Center

    Stevens, George E.

    1989-01-01

    Discusses court cases dealing with whether a community may ban newsracks, how much discretion city officials may exercise in regulating vending machines, and what limitations in display and placement are reasonable. Finds that acceptable city ordinances are narrow and content neutral. (RS)

  16. Is Best-Worst Scaling Suitable for Health State Valuation? A Comparison with Discrete Choice Experiments.

    PubMed

    Krucien, Nicolas; Watson, Verity; Ryan, Mandy

    2017-12-01

    Health utility indices (HUIs) are widely used in economic evaluation. The best-worst scaling (BWS) method is being used to value dimensions of HUIs. However, little is known about the properties of this method. This paper investigates the validity of the BWS method for developing HUIs, comparing it to another ordinal valuation method, the discrete choice experiment (DCE). Using a parametric approach, we find a low level of concordance between the two methods, with evidence of preference reversals. BWS responses are subject to decision biases, with significant effects on individuals' preferences. Nonparametric tests indicate that BWS data have lower stability, monotonicity and continuity compared to DCE data, suggesting that BWS provides lower-quality data. As a consequence, for both theoretical and technical reasons, practitioners should be cautious both about using the BWS method to measure health-related preferences and about using HUIs based on BWS data. Given existing evidence, it seems that the DCE method is the better method, at least because its limitations (and measurement properties) have been extensively researched. Copyright © 2016 John Wiley & Sons, Ltd.

  17. 3D unstructured-mesh radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morel, J.

    1997-12-31

    Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations, and is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation, including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: Sn (discrete-ordinates), Pn (spherical harmonics), and SPn (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard Sn discretization in angle in conjunction with trilinear-discontinuous spatial differencing and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.

  18. The finite element model for the propagation of light in scattering media: a direct method for domains with nonscattering regions.

    PubMed

    Arridge, S R; Dehghani, H; Schweiger, M; Okada, E

    2000-01-01

    We present a method for handling nonscattering regions within diffusing domains. The method develops from an iterative radiosity-diffusion approach using Green's functions that was computationally slow. Here we present an improved implementation using a finite element method (FEM) that is direct. The fundamental idea is to introduce extra equations into the standard diffusion FEM to represent nondiffusive light propagation across a nonscattering region. By appropriate mesh node ordering the computational time is not much greater than for diffusion alone. We compare results from this method with those from a discrete ordinate transport code, and with Monte Carlo calculations. The agreement is very good, and, in addition, our scheme allows us to easily model time-dependent and frequency domain problems.

  19. Power and sample size evaluation for the Cochran-Mantel-Haenszel mean score (Wilcoxon rank sum) test and the Cochran-Armitage test for trend.

    PubMed

    Lachin, John M

    2011-11-10

    The power of a chi-square test, and thus the required sample size, are a function of the noncentrality parameter that can be obtained as the limiting expectation of the test statistic under an alternative hypothesis specification. Herein, we apply this principle to derive simple expressions for two tests that are commonly applied to discrete ordinal data. The Wilcoxon rank sum test for the equality of distributions in two groups is algebraically equivalent to the Mann-Whitney test. The Kruskal-Wallis test applies to multiple groups. These tests are equivalent to a Cochran-Mantel-Haenszel mean score test using rank scores for a set of C discrete categories. Although various authors have assessed the power function of the Wilcoxon and Mann-Whitney tests, herein it is shown that the power of these tests with discrete observations, that is, with tied ranks, is readily provided by the power function of the corresponding Cochran-Mantel-Haenszel mean score test for two and R > 2 groups. These expressions yield results virtually identical to those derived previously for rank scores and also apply to other score functions. The Cochran-Armitage test for trend assesses whether there is a monotonically increasing or decreasing trend in the proportions with a positive outcome or response over the C ordered categories of an ordinal independent variable, for example, dose. Herein, it is shown that the power of the test is a function of the slope of the response probabilities over the ordinal scores assigned to the groups, which yields simple expressions for the power of the test. Copyright © 2011 John Wiley & Sons, Ltd.
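As a worked check of the trend statistic discussed above, here is the standard textbook form of the Cochran-Armitage Z (the dose-group counts below are made up for illustration; this is not the paper's power derivation):

```python
import numpy as np

def cochran_armitage_z(positives, totals, scores):
    """Cochran-Armitage trend test Z statistic for a 2 x C table:
    positives[i] successes out of totals[i] subjects in ordinal group i
    with assigned score scores[i].  Z**2 is referred to chi-square with
    one degree of freedom; the noncentrality under an alternative drives
    the power and sample-size expressions."""
    x, n, s = map(np.asarray, (positives, totals, scores))
    N = n.sum()
    pbar = x.sum() / N                              # pooled response rate
    num = np.sum(s * (x - n * pbar))                # score-weighted excess
    var = pbar * (1 - pbar) * (np.sum(n * s ** 2) - np.sum(n * s) ** 2 / N)
    return num / np.sqrt(var)

# Response rising from 20% to 40% to 60% over three equally spaced doses.
z = cochran_armitage_z([10, 20, 30], [50, 50, 50], [0, 1, 2])
```

For these counts the numerator is 20 and the variance term is 24, giving Z ≈ 4.08, a highly significant positive trend.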

  20. MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations.

  1. Fire Suppression M and S Validation (Status and Challenges), Systems Fire Protection Information Exchange

    DTIC Science & Technology

    2015-10-14

    Combustion model: finite-rate kinetics with 14 species and 12 reactions. Radiation model: participating-media discrete ordinate method with a WSGG model for CO2, H2O and soot. Inhibition of JP-8 combustion: physical acting agents dilute heat or dilute reactants (e.g., water, nitrogen); chemical acting agents reduce flame... Approved for Public Release; distribution is unlimited. Overview of reduced kinetics scheme for FM200: R1: JP-8 + O2 => CO + CO2 + H2O; R2: CO + O2 <=> CO2; R3: HFP + JP-8

  2. The LBM program at the EPFL/LOTUS Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    File, J.; Jassby, D.L.; Tsang, F.Y.

    1986-11-01

    An experimental program of neutron transport studies of the Lithium Blanket Module (LBM) is being carried out with the LOTUS point-neutron source facility at Ecole Polytechnique Federale de Lausanne (EPFL), Switzerland. Preliminary experiments use passive neutron dosimetry within the fuel rods in the LBM central zone, as well as both thermal extraction and dissolution methods to assay tritium bred in Li/sub 2/O diagnostic wafers and LBM pellets. These measurements are being compared and reconciled with each other and with the predictions of two-dimensional discrete-ordinates and continuous-energy Monte-Carlo analyses of the LOTUS/LBM system.

  3. Rescaling quality of life values from discrete choice experiments for use as QALYs: a cautionary tale

    PubMed Central

    Flynn, Terry N; Louviere, Jordan J; Marley, Anthony AJ; Coast, Joanna; Peters, Tim J

    2008-01-01

    Background: Researchers are increasingly investigating the potential for ordinal tasks such as ranking and discrete choice experiments to estimate QALY health state values. However, the assumptions of random utility theory, which underpin the statistical models used to provide these estimates, have received insufficient attention. In particular, the assumptions made about the decisions between living states and the death state are not satisfied, at least for some people. Estimated values are likely to be incorrectly anchored with respect to death (zero) in such circumstances. Methods: Data from the Investigating Choice Experiments for the preferences of older people CAPability instrument (ICECAP) valuation exercise were analysed. The values (previously anchored to the worst possible state) were rescaled using an ordinal model proposed previously to estimate QALY-like values. Bootstrapping was conducted to vary artificially the proportion of people who conformed to the conventional random utility model underpinning the analyses. Results: Only 26% of respondents conformed unequivocally to the assumptions of conventional random utility theory. At least 14% of respondents unequivocally violated the assumptions. Varying the relative proportions of conforming respondents in sensitivity analyses led to large changes in the estimated QALY values, particularly for lower-valued states. As a result these values could be either positive (considered to be better than death) or negative (considered to be worse than death). Conclusion: Use of a statistical model such as conditional (multinomial) regression to anchor quality of life values from ordinal data to death is inappropriate in the presence of respondents who do not conform to the assumptions of conventional random utility theory.
This is clearest when estimating values for that group of respondents observed in valuation samples who refuse to consider any living state to be worse than death: in such circumstances the model cannot be estimated. Only a valuation task requiring respondents to make choices in which both length and quality of life vary can produce estimates that properly reflect the preferences of all respondents. PMID:18945358

  4. 3D numerical modelling of the propagation of radiative intensity through a X-ray tomographied ligament

    NASA Astrophysics Data System (ADS)

    Le Hardy, David; Badri, Mohd Afeef; Rousseau, Benoit; Chupin, Sylvain; Rochais, Denis; Favennec, Yann

    2017-06-01

    In order to explain the macroscopic radiative behaviour of an open-cell ceramic foam, knowledge of its solid phase distribution in space and of the radiative contributions of this solid phase is required. The solid phase in an open-cell ceramic foam is arranged as a porous skeleton, which is itself composed of an interconnected network of ligaments. Typically, ligaments, being based on assemblies of more or less compacted grains, exhibit an anisotropic geometry with a concave cross section having a lateral size of around one hundred microns. Therefore, ligaments are likely to emit, absorb and scatter thermal radiation. This framework explains why experimental investigations at this scale must be developed to extract accurate homogenized radiative properties regardless of the shape and size of the ligaments. To support this development, a 3D numerical investigation of radiative intensity propagation through a real ligament, scanned beforehand by X-ray micro-tomography, is presented in this paper. The Radiative Transfer Equation (RTE), applied to the resulting meshed volume, is solved by combining the Discrete Ordinate Method (DOM) with a Streamline Upwind Petrov-Galerkin (SUPG) numerical scheme. Particular attention is paid to proposing an improved discretization procedure (spatial and angular) based on ordinate parallelization with the aim of reaching fast convergence. Towards the end of this article, we present the effects of the local radiative properties of three ceramic materials (silicon carbide, alumina and zirconia), which are often used for designing open-cell refractory ceramic foams.

  5. PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C

    2007-09-01

    The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.
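The core CADIS relations, in which the adjoint flux yields both a biased source and consistent weight-window centers, can be sketched on a toy one-group mesh (hypothetical arrays for illustration; ADVANTG's actual implementation operates on 3-D space-energy meshes):

```python
import numpy as np

def cadis_parameters(source, adjoint):
    """Minimal sketch of the CADIS construction on a 1-D mesh:
       R       = sum_i q_i * phi_adj_i    (estimate of detector response)
       qhat_i  = q_i * phi_adj_i / R      (biased source pdf)
       w_i     = q_i / qhat_i = R / phi_adj_i
    so particles born from the biased source carry a weight that sits
    exactly at the weight-window center, making source biasing and the
    windows consistent."""
    q = np.asarray(source, dtype=float)
    phi_adj = np.asarray(adjoint, dtype=float)
    R = np.sum(q * phi_adj)
    q_biased = q * phi_adj / R
    ww_center = R / phi_adj
    return q_biased, ww_center

q = np.array([0.5, 0.3, 0.2])            # hypothetical source pdf
phi_adj = np.array([1e-3, 1e-2, 1e-1])   # hypothetical adjoint (importance)
q_b, ww = cadis_parameters(q, phi_adj)
```

The biased pdf still sums to one, and the birth weight q_i/qhat_i equals the window center everywhere, which is the consistency property that makes CADIS effective.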

  6. Numerical Computation of Flame Spread over a Thin Solid in Forced Concurrent Flow with Gas-phase Radiation

    NASA Technical Reports Server (NTRS)

    Jiang, Ching-Biau; T'ien, James S.

    1994-01-01

    Excerpts are presented from a paper describing the numerical examination of concurrent-flow flame spread over a thin solid in purely forced flow with gas-phase radiation. The computational model solves the two-dimensional, elliptic, steady, and laminar conservation equations for mass, momentum, energy, and chemical species. Gas-phase combustion is modeled via a one-step, second-order finite-rate Arrhenius reaction. Gas-phase radiation, treated as a gray non-scattering medium, is solved by an S-N discrete ordinates method. A simplified solid-phase treatment assumes a zeroth-order pyrolysis relation and includes radiative interaction between the surface and the gas phase.

  7. Heat Transfer Modelling of Glass Media within TPV Systems

    NASA Astrophysics Data System (ADS)

    Bauer, Thomas; Forbes, Ian; Penlington, Roger; Pearsall, Nicola

    2004-11-01

    Understanding and optimisation of heat transfer, and in particular of radiative heat transfer in terms of spectral, angular and spatial radiation distributions, is important to achieve high system efficiencies and high electrical power densities for thermophotovoltaics (TPV). This work reviews heat transfer models and uses the Discrete Ordinates method. First, one-dimensional heat transfer in fused silica (quartz glass) shields was examined for the common arrangement radiator-air-glass-air-PV cell. It is concluded that an alternative arrangement, radiator-glass-air-PV cell with increased thickness of fused silica, should have advantages in terms of improved transmission of convertible radiation and enhanced suppression of non-convertible radiation.

  8. Measurement and modeling of advanced coal conversion processes, Volume II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solomon, P.R.; Serio, M.A.; Hamblen, D.G.

    1993-06-01

    A two dimensional, steady-state model for describing a variety of reactive and nonreactive flows, including pulverized coal combustion and gasification, is presented. The model, referred to as 93-PCGC-2 is applicable to cylindrical, axi-symmetric systems. Turbulence is accounted for in both the fluid mechanics equations and the combustion scheme. Radiation from gases, walls, and particles is taken into account using a discrete ordinates method. The particle phase is modeled in a lagrangian framework, such that mean paths of particle groups are followed. A new coal-general devolatilization submodel (FG-DVC) with coal swelling and char reactivity submodels has been added.

  9. Least squares regression methods for clustered ROC data with discrete covariates.

    PubMed

    Tang, Liansheng Larry; Zhang, Wei; Li, Qizhai; Ye, Xuan; Chan, Leighton

    2016-07-01

    The receiver operating characteristic (ROC) curve is a popular tool to evaluate and compare the accuracy of diagnostic tests in distinguishing the diseased group from the nondiseased group when test results are continuous or ordinal. A complicated data setting occurs when multiple tests are measured on abnormal and normal locations from the same subject and the measurements are clustered within the subject. Although least squares regression methods can be used to estimate the ROC curve from correlated data, their extension to clustered data has not been studied, and the statistical properties of the least squares methods under the clustering setting are unknown. In this article, we develop least squares ROC methods that allow the baseline and link functions to differ and, more importantly, accommodate clustered data with discrete covariates. The methods can generate smooth ROC curves that satisfy the inherent continuous property of the true underlying curve. The least squares methods are shown to be more efficient than existing nonparametric ROC methods under appropriate model assumptions in simulation studies. We apply the methods to a real example in the detection of glaucomatous deterioration. We also derive the asymptotic properties of the proposed methods. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Efficient and Accurate Computation of Non-Negative Anisotropic Group Scattering Cross Sections for Discrete Ordinates and Monte Carlo Radiation Transport

    DTIC Science & Technology

    2002-07-01

    …Adler-Adler, and Kalbach-Mann representations of the scatter cross sections that are used for some isotopes in ENDF/B-VI are not included. They are not

  11. Monte Carlo and discrete-ordinate simulations of spectral radiances in a coupled air-tissue system.

    PubMed

    Hestenes, Kjersti; Nielsen, Kristian P; Zhao, Lu; Stamnes, Jakob J; Stamnes, Knut

    2007-04-20

    We perform a detailed comparison study of Monte Carlo (MC) simulations and discrete-ordinate radiative-transfer (DISORT) calculations of spectral radiances in a 1D coupled air-tissue (CAT) system consisting of horizontal plane-parallel layers. The MC and DISORT models have the same physical basis, including coupling between the air and the tissue, and we use the same air and tissue input parameters for both codes. We find excellent agreement between radiances obtained with the two codes, both above and in the tissue. Our tests cover typical optical properties of skin tissue at the 280, 540, and 650 nm wavelengths. The normalized volume scattering function for internal structures in the skin is represented by the one-parameter Henyey-Greenstein function for large particles and the Rayleigh scattering function for small particles. The CAT-DISORT code is found to be approximately 1000 times faster than the CAT-MC code. We also show that the spectral radiance field is strongly dependent on the inherent optical properties of the skin tissue.
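The one-parameter Henyey-Greenstein function used in this comparison has a simple closed form. A quick Gauss-Legendre quadrature check (g = 0.8 chosen here as a typical tissue-like asymmetry, not a value from the paper) confirms unit normalization and that the mean scattering cosine equals g:

```python
import numpy as np

def henyey_greenstein(mu, g):
    """Henyey-Greenstein phase function of the scattering cosine mu,
    normalized so that its integral over mu in [-1, 1] equals 1;
    g is the asymmetry parameter (mean cosine of scattering)."""
    return 0.5 * (1 - g ** 2) / (1 + g ** 2 - 2 * g * mu) ** 1.5

# Verify the two defining moments by Gauss-Legendre quadrature.
mu, w = np.polynomial.legendre.leggauss(128)
g = 0.8
norm = np.sum(w * henyey_greenstein(mu, g))            # should be 1
mean_cos = np.sum(w * mu * henyey_greenstein(mu, g))   # should be g
```

For strongly forward-peaked media (g close to 1) the function has a near-singularity just outside mu = 1, so high quadrature orders are needed for accurate moments.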

  12. General Purpose Fortran Program for Discrete-Ordinate-Method Radiative Transfer in Scattering and Emitting Layered Media: An Update of DISORT

    NASA Technical Reports Server (NTRS)

    Tsay, Si-Chee; Stamnes, Knut; Wiscombe, Warren; Laszlo, Istvan; Einaudi, Franco (Technical Monitor)

    2000-01-01

    This update reports a state-of-the-art discrete ordinate algorithm for monochromatic unpolarized radiative transfer in non-isothermal, vertically inhomogeneous, but horizontally homogeneous media. The physical processes included are Planckian thermal emission, scattering with arbitrary phase function, absorption, and surface bidirectional reflection. The system may be driven by parallel or isotropic diffuse radiation incident at the top boundary, as well as by internal thermal sources and thermal emission from the boundaries. Radiances, fluxes, and mean intensities are returned at user-specified angles and levels. DISORT has enjoyed considerable popularity in the atmospheric science and other communities since its introduction in 1988. Several new DISORT features are described in this update: intensity correction algorithms designed to compensate for the delta-M forward-peak scaling and obtain accurate intensities even in low orders of approximation; a more general surface bidirectional reflection option; and an exponential-linear approximation of the Planck function allowing more accurate solutions in the presence of large temperature gradients. DISORT has been designed to be an exemplar of good scientific software as well as a program of intrinsic utility. An extraordinary effort has been made to make it numerically well-conditioned, error-resistant, and user-friendly, and to take advantage of robust existing software tools. A thorough test suite is provided to verify the program both against published results and for consistency where there are no published results. This careful attention to software design has been just as important to DISORT's popularity as its powerful algorithmic content.

  13. High mobility of large mass movements: a study by means of FEM/DEM simulations

    NASA Astrophysics Data System (ADS)

    Manzella, I.; Lisjak, A.; Grasselli, G.

    2013-12-01

    Large mass movements, such as rock avalanches and large volcanic debris avalanches, are characterized by extremely long propagation, which cannot be modelled using a normal sliding friction law. For this reason, several studies and theories, derived from field observations, physical principles and laboratory experiments, exist that try to explain their high mobility. In order to investigate in more depth some of the processes invoked by these theories, simulations have been run with a new numerical tool called Y-GUI, based on the combined Finite Element/Discrete Element Method (FEM/DEM). The FEM/DEM method is a numerical technique developed by Munjiza et al. (1995) in which Discrete Element Method (DEM) algorithms are used to model the interaction between different solids, while Finite Element Method (FEM) principles are used to analyse their deformability, making it possible to explicitly simulate a material's sudden loss of cohesion (i.e. brittle failure). In particular, numerical tests have been run inspired by the small-scale experiments of Manzella and Labiouse (2013). They consist of rectangular blocks released on a slope; each block is a rectangular discrete element made of a mesh of finite elements that are allowed to fragment. These simulations have highlighted the influence on the propagation of block packing, i.e. whether the elements are piled into an ordered geometrical structure before failure or chaotically disposed as a loose material, and of the topography, i.e. whether the slope break is smooth and regular or not. In addition, the effect of fracturing, i.e. fragmentation, on the total runout has been studied and highlighted.

  14. Automated variance reduction for MCNP using deterministic methods.

    PubMed

    Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B

    2005-01-01

    In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.
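The adjoint-flux-to-weight-window mapping can be sketched in a few lines. This is a hedged illustration of the general CADIS-style idea, not MCNP's or PARTISN's actual interfaces; the adjoint values and the window-ratio normalisation below are invented for the example:

```python
import numpy as np

# Weight-window lower bounds inversely proportional to the adjoint flux
# (importance): important regions get low target weights, so particles
# travelling toward the detector are split rather than rouletted.
def weight_window_lower_bounds(phi_adj, response=1.0, ratio=5.0):
    center = response / np.asarray(phi_adj)   # target weight per mesh cell
    return 2.0 * center / (ratio + 1.0)       # window lower bound

# illustrative adjoint flux on a 1-D mesh, ordered source -> detector
phi_adj = [1e-1, 1.0, 1e1, 1e2]
wl = weight_window_lower_bounds(phi_adj)
# importance grows toward the detector, so the allowed weights shrink
```

The same adjoint solution also supplies the energy-dependent importance used to bias the source spectrum, as the abstract notes.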

  15. Unified implicit kinetic scheme for steady multiscale heat transfer based on the phonon Boltzmann transport equation

    NASA Astrophysics Data System (ADS)

    Zhang, Chuang; Guo, Zhaoli; Chen, Songze

    2017-12-01

    An implicit kinetic scheme is proposed to solve the stationary phonon Boltzmann transport equation (BTE) for multiscale heat transfer problem. Compared to the conventional discrete ordinate method, the present method employs a macroscopic equation to accelerate the convergence in the diffusive regime. The macroscopic equation can be taken as a moment equation for phonon BTE. The heat flux in the macroscopic equation is evaluated from the nonequilibrium distribution function in the BTE, while the equilibrium state in BTE is determined by the macroscopic equation. These two processes exchange information from different scales, such that the method is applicable to the problems with a wide range of Knudsen numbers. Implicit discretization is implemented to solve both the macroscopic equation and the BTE. In addition, a memory reduction technique, which is originally developed for the stationary kinetic equation, is also extended to phonon BTE. Numerical comparisons show that the present scheme can predict reasonable results both in ballistic and diffusive regimes with high efficiency, while the memory requirement is on the same order as solving the Fourier law of heat conduction. The excellent agreement with benchmark and the rapid converging history prove that the proposed macro-micro coupling is a feasible solution to multiscale heat transfer problems.

  16. Multivariate normal maximum likelihood with both ordinal and continuous variables, and data missing at random.

    PubMed

    Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C

    2018-04-01

    A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.

  17. Sub-Scale Analysis of New Large Aircraft Pool Fire-Suppression

    DTIC Science & Technology

    2016-01-01

    discrete ordinates radiation and single step Khan and Greeves soot model provided radiation and soot interaction. Agent spray dynamics were... Notable differences observed showed a modeled increase in the mockup surface heat-up rate as well as a modeled decreased rate of soot production... 488 K SUPPRESSION STARTED; large deviation between sensors due to sensor alignment challenges and asymmetric fuel surface ignition; unremarkable

  18. Penalized Ordinal Regression Methods for Predicting Stage of Cancer in High-Dimensional Covariate Spaces.

    PubMed

    Gentry, Amanda Elswick; Jackson-Cook, Colleen K; Lyon, Debra E; Archer, Kellie J

    2015-01-01

    The pathological description of the stage of a tumor is an important clinical designation and is considered, like many other forms of biomedical data, an ordinal outcome. Currently, statistical methods for predicting an ordinal outcome using clinical, demographic, and high-dimensional correlated features are lacking. In this paper, we propose a method that fits an ordinal response model to predict an ordinal outcome for high-dimensional covariate spaces. Our method penalizes some covariates (high-throughput genomic features) without penalizing others (such as demographic and/or clinical covariates). We demonstrate the application of our method to predict the stage of breast cancer. In our model, breast cancer subtype is a nonpenalized predictor, and CpG site methylation values from the Illumina Human Methylation 450K assay are penalized predictors. The method has been made available in the ordinalgmifs package in the R programming environment.
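Ordinal response models of this kind are typically built on cumulative-logit (proportional odds) probabilities, with a penalty applied to the genomic coefficients during fitting. The following sketch shows only the unpenalized cumulative-logit probabilities, with illustrative cutpoints and linear predictor (an assumption about the model family, not the `ordinalgmifs` internals):

```python
import numpy as np

# Cumulative-logit probabilities: P(Y <= k) = logistic(alpha_k - eta);
# adjacent differences of the CDF give the per-category probabilities.
def cumlogit_probs(eta, cutpoints):
    cdf = 1.0 / (1.0 + np.exp(-(np.asarray(cutpoints) - eta)))
    cdf = np.concatenate([[0.0], cdf, [1.0]])
    return np.diff(cdf)

# four ordered stages; eta is the linear predictor from the covariates
p = cumlogit_probs(eta=0.3, cutpoints=[-1.0, 0.0, 1.0])
# p is a valid probability distribution over the ordered categories
```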

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitzmann, D., E-mail: daniel.kitzmann@csh.unibe.ch

    Carbon dioxide ice clouds are thought to play an important role for cold terrestrial planets with thick CO{sub 2} dominated atmospheres. Various previous studies showed that a scattering greenhouse effect by carbon dioxide ice clouds could result in a massive warming of the planetary surface. However, all of these studies only employed simplified two-stream radiative transfer schemes to describe the anisotropic scattering. Using accurate radiative transfer models with a general discrete ordinate method, this study revisits this important effect and shows that the positive climatic impact of carbon dioxide clouds was strongly overestimated in the past. The revised scattering greenhouse effect can have important implications for the early Mars, but also for planets like the early Earth or the position of the outer boundary of the habitable zone.

  20. Characteristic correlation study of UV disinfection performance for ballast water treatment

    NASA Astrophysics Data System (ADS)

    Ba, Te; Li, Hongying; Osman, Hafiiz; Kang, Chang-Wei

    2016-11-01

    Characteristic correlation between ultraviolet disinfection performance and operating parameters, including ultraviolet transmittance (UVT), lamp power and water flow rate, was studied by numerical and experimental methods. A three-stage model was developed to simulate the fluid flow, UV radiation and the trajectories of microorganisms. The Navier-Stokes equations with k-epsilon turbulence were solved to model the fluid flow, while a discrete ordinates (DO) radiation model and a discrete phase model (DPM) were used to introduce UV radiation and microorganism trajectories into the model, respectively. The UV dose statistical distribution for the microorganisms was found to move to higher values with increasing UVT and lamp power, but to lower values as the water flow rate increases. Further investigation shows that the fluence rate increases exponentially with UVT but linearly with lamp power. The average and minimum residence times decrease linearly with the water flow rate, while the maximum residence time decreases rapidly over a certain range. The current study can be used as a digital design and performance-evaluation tool for UV reactors for ballast water treatment.
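The dose statistics described above come from integrating the fluence rate along each simulated trajectory. A minimal sketch of that bookkeeping (toy numbers and a fixed time step, not the paper's reactor model):

```python
import numpy as np

# UV dose along a particle track: dose = sum of fluence rate x time step.
def uv_dose(fluence_rates, dt):
    return float(np.sum(np.asarray(fluence_rates) * dt))

rates = [120.0, 150.0, 90.0, 60.0]  # W/m^2 sampled along a toy trajectory
dose = uv_dose(rates, dt=0.01)      # J/m^2 accumulated over the path
```

Repeating this over many DPM trajectories yields the dose distribution whose shift with UVT, lamp power, and flow rate the study analyses.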

  1. A comparison of methods for converting DCE values onto the full health-dead QALY scale.

    PubMed

    Rowen, Donna; Brazier, John; Van Hout, Ben

    2015-04-01

    Preference elicitation techniques such as time trade-off (TTO) and standard gamble (SG) receive criticism for their complexity and difficulties of use. Ordinal techniques such as discrete choice experiment (DCE) are arguably easier to understand but generate values that are not anchored onto the full health-dead 1-0 quality-adjusted life-year (QALY) scale required for use in economic evaluation. This article compares existing methods for converting modeled DCE latent values onto the full health-dead QALY scale: 1) anchoring DCE values using dead as valued in the DCE and 2) anchoring DCE values using TTO value for worst state to 2 new methods: 3) mapping DCE values onto TTO and 4) combining DCE and TTO data in a hybrid model. Models are compared using their ability to predict mean TTO health state values. We use postal DCE data (n = 263) and TTO data (n = 307) collected by interview in a general population valuation study of an asthma condition-specific measure (AQL-5D). New methods 3 and 4 using mapping and hybrid models are better able to predict mean TTO health state values (mean absolute difference [MAD], 0.052-0.084) than the anchor-based methods (MAD, 0.075-0.093) and were better able to predict mean TTO health state values even when using in their estimation a subsample of the available TTO data. These new mapping and hybrid methods have a potentially useful role for producing values on the QALY scale from data elicited using ordinal techniques such as DCE for use in economic evaluation that makes best use of the desirable properties of each elicitation technique and elicited data. Further research is encouraged. © The Author(s) 2014.
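Anchoring method 2 above amounts to a linear rescaling: full health keeps value 1 while the worst state is pinned to its TTO value. A sketch with invented numbers:

```python
# Rescale latent DCE values so that latent = 1 stays at 1 and the worst
# state's latent value maps onto its observed TTO value.
def anchor_to_worst_tto(latent, latent_worst, tto_worst):
    return 1.0 - (1.0 - latent) * (1.0 - tto_worst) / (1.0 - latent_worst)

# e.g. a state halfway down the latent scale, worst state valued -0.2 by TTO
v = anchor_to_worst_tto(latent=0.5, latent_worst=0.0, tto_worst=-0.2)
```

States below the worst-state TTO value can then fall below zero, i.e. be valued worse than dead on the QALY scale, which is exactly what the anchoring is meant to allow.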

  2. Least-squares collocation meshless approach for radiative heat transfer in absorbing and scattering media

    NASA Astrophysics Data System (ADS)

    Liu, L. H.; Tan, J. Y.

    2007-02-01

    A least-squares collocation meshless method is employed for solving the radiative heat transfer in absorbing, emitting and scattering media. The least-squares collocation meshless method for radiative transfer is based on the discrete ordinates equation. A moving least-squares approximation is applied to construct the trial functions. In addition to the collocation points used to construct the trial functions, a number of auxiliary points are adopted to form the total residuals of the problem. The least-squares technique is used to obtain the solution of the problem by minimizing the summation of residuals of all collocation and auxiliary points. Three numerical examples are studied to illustrate the performance of this new solution method. The numerical results are compared with other benchmark approximate solutions. The comparison shows that the least-squares collocation meshless method is efficient, accurate and stable, and can be used for solving the radiative heat transfer in absorbing, emitting and scattering media.
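The core numerical step, minimising summed squared residuals over more points than unknowns, reduces to an overdetermined linear least-squares solve. A self-contained toy version (random matrices standing in for the moving least-squares trial functions):

```python
import numpy as np

# 40 residual points (collocation + auxiliary) and 10 unknown expansion
# coefficients: minimise ||A c - b||^2 for the overdetermined system.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
c_true = rng.standard_normal(10)
b = A @ c_true                       # consistent right-hand side
c, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
# with a consistent full-rank system the coefficients are recovered
```

Using more residual points than unknowns is what stabilises collocation methods, which can otherwise be sensitive to the placement of individual points.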

  3. Multilevel acceleration of scattering-source iterations with application to electron transport

    DOE PAGES

    Drumm, Clif; Fan, Wesley

    2017-08-18

    Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (S N) or spherical-harmonics (P N) solve to accelerate convergence of a high-order S N source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace. TSA has been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and problems with artificial cross sections with large scattering ratios. These methods were compared and evaluated by considering material discontinuities and scattering anisotropy. The accelerations obtained are highly problem dependent, but speedup factors around 10 have been observed in typical applications.
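Why SI needs acceleration can be seen in an infinite homogeneous medium, where a transport sweep reduces the iteration to a scalar fixed point whose error contracts only by the scattering ratio c per sweep (a textbook illustration, not SCEPTRE's algorithm):

```python
# Source iteration collapses to phi_{k+1} = c*phi_k + q in an infinite
# homogeneous medium; the error shrinks by a factor c each sweep, so
# convergence stalls as the scattering ratio c -> 1.
def source_iteration(c, q, sweeps):
    phi = 0.0
    for _ in range(sweeps):
        phi = c * phi + q
    return phi

c, q = 0.99, 1.0
exact = q / (1.0 - c)                   # limit of the iteration
approx = source_iteration(c, q, 200)
# after 200 sweeps the remaining relative error is c**200, roughly 13%
```

TSA and Krylov methods such as GMRES attack exactly this near-unit contraction factor, which is why the largest speedups appear in problems with large scattering ratios.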

  4. Simulating Ordinal Data

    ERIC Educational Resources Information Center

    Ferrari, Pier Alda; Barbiero, Alessandro

    2012-01-01

    The increasing use of ordinal variables in different fields has led to the introduction of new statistical methods for their analysis. The performance of these methods needs to be investigated under a number of experimental conditions. Procedures to simulate from ordinal variables are then required. In this article, we deal with simulation from…
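One standard simulation recipe (an assumption about the general approach, not necessarily the authors' procedure) thresholds a latent standard normal at cutpoints chosen to hit target category probabilities:

```python
import numpy as np
from statistics import NormalDist

probs = [0.2, 0.5, 0.3]              # target ordinal category probabilities
cum = np.cumsum(probs)[:-1]          # interior cumulative probabilities
cuts = [NormalDist().inv_cdf(float(p)) for p in cum]

z = np.random.default_rng(42).standard_normal(100_000)  # latent draws
y = np.digitize(z, cuts)             # ordinal categories 0, 1, 2
freq = np.bincount(y) / len(y)       # empirical frequencies, close to probs
```

Correlated ordinal variables follow the same pattern with a multivariate latent normal, thresholded coordinate by coordinate.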

  5. WWER-1000 core and reflector parameters investigation in the LR-0 reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zaritsky, S. M.; Alekseev, N. I.; Bolshagin, S. N.

    2006-07-01

    Measurements and calculations carried out in the core and reflector of WWER-1000 mock-up are discussed: - the determination of the pin-to-pin power distribution in the core by means of gamma-scanning of fuel pins and pin-to-pin calculations with Monte Carlo code MCU-REA and diffusion codes MOBY-DICK (with WIMS-D4 cell constants preparation) and RADAR - the fast neutron spectra measurements by proton recoil method inside the experimental channel in the core and inside the channel in the baffle, and corresponding calculations in P{sub 3}S{sub 8} approximation of discrete ordinates method with code DORT and BUGLE-96 library - the neutron spectra evaluations (adjustment) in the same channels in energy region 0.5 eV-18 MeV based on the activation and solid state track detectors measurements. (authors)

  6. Generalized Fokker-Planck theory for electron and photon transport in biological tissues: application to radiotherapy.

    PubMed

    Olbrant, Edgar; Frank, Martin

    2010-12-01

    In this paper, we study a deterministic method for particle transport in biological tissues. The method is specifically developed for dose calculations in cancer therapy and for radiological imaging. Generalized Fokker-Planck (GFP) theory [Leakeas and Larsen, Nucl. Sci. Eng. 137 (2001), pp. 236-250] has been developed to improve the Fokker-Planck (FP) equation in cases where scattering is forward-peaked and where there is a sufficient amount of large-angle scattering. We compare grid-based numerical solutions to FP and GFP in realistic medical applications. First, electron dose calculations in heterogeneous parts of the human body are performed. Therefore, accurate electron scattering cross sections are included and their incorporation into our model is extensively described. Second, we solve GFP approximations of the radiative transport equation to investigate reflectance and transmittance of light in biological tissues. All results are compared with either Monte Carlo or discrete-ordinates transport solutions.

  7. Delta Clipper-Experimental In-Ground Effect on Base-Heating Environment

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    1998-01-01

    A quasitransient in-ground effect method is developed to study the effect of vertical landing on a launch vehicle base-heating environment. This computational methodology is based on a three-dimensional, pressure-based, viscous flow, chemically reacting, computational fluid dynamics formulation. Important in-ground base-flow physics such as the fountain-jet formation, plume growth, air entrainment, and plume afterburning are captured with the present methodology. Convective and radiative base-heat fluxes are computed for comparison with those of a flight test. The influence of the laminar Prandtl number on the convective heat flux is included in this study. A radiative direction-dependency test is conducted using both the discrete ordinate and finite volume methods. Treatment of the plume afterburning is found to be very important for accurate prediction of the base-heat fluxes. Convective and radiative base-heat fluxes predicted by the model using a finite rate chemistry option compared reasonably well with flight-test data.

  8. An efficient direct solver for rarefied gas flows with arbitrary statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diaz, Manuel A., E-mail: f99543083@ntu.edu.tw; Yang, Jaw-Yen, E-mail: yangjy@iam.ntu.edu.tw; Center of Advanced Study in Theoretical Science, National Taiwan University, Taipei 10167, Taiwan

    2016-01-15

    A new numerical methodology associated with a unified treatment is presented to solve the Boltzmann–BGK equation of gas dynamics for the classical and quantum gases described by the Bose–Einstein and Fermi–Dirac statistics. Utilizing a class of globally-stiffly-accurate implicit–explicit Runge–Kutta scheme for the temporal evolution, associated with the discrete ordinate method for the quadratures in the momentum space and the weighted essentially non-oscillatory method for the spatial discretization, the proposed scheme is asymptotic-preserving and imposes no non-linear solver or requires the knowledge of fugacity and temperature to capture the flow structures in the hydrodynamic (Euler) limit. The proposed treatment overcomes the limitations found in the work by Yang and Muljadi (2011) [33] due to the non-linear nature of quantum relations, and can be applied in studying the dynamics of a gas with internal degrees of freedom with correct values of the ratio of specific heat for the flow regimes for all Knudsen numbers and energy wavelengths. The present methodology is numerically validated with the unified treatment by the one-dimensional shock tube problem and the two-dimensional Riemann problems for gases of arbitrary statistics. Descriptions of ideal quantum gases including rotational degrees of freedom have been successfully achieved under the proposed methodology.

  9. Comparison of Ordinal and Nominal Classification Trees to Predict Ordinal Expert-Based Occupational Exposure Estimates in a Case–Control Study

    PubMed Central

    Wheeler, David C.; Archer, Kellie J.; Burstyn, Igor; Yu, Kai; Stewart, Patricia A.; Colt, Joanne S.; Baris, Dalsu; Karagas, Margaret R.; Schwenn, Molly; Johnson, Alison; Armenti, Karla; Silverman, Debra T.; Friesen, Melissa C.

    2015-01-01

    Objectives: To evaluate occupational exposures in case–control studies, exposure assessors typically review each job individually to assign exposure estimates. This process lacks transparency and does not provide a mechanism for recreating the decision rules in other studies. In our previous work, nominal (unordered categorical) classification trees (CTs) generally successfully predicted expert-assessed ordinal exposure estimates (i.e. none, low, medium, high) derived from occupational questionnaire responses, but room for improvement remained. Our objective was to determine if using recently developed ordinal CTs would improve the performance of nominal trees in predicting ordinal occupational diesel exhaust exposure estimates in a case–control study. Methods: We used one nominal and four ordinal CT methods to predict expert-assessed probability, intensity, and frequency estimates of occupational diesel exhaust exposure (each categorized as none, low, medium, or high) derived from questionnaire responses for the 14983 jobs in the New England Bladder Cancer Study. To replicate the common use of a single tree, we applied each method to a single sample of 70% of the jobs, using 15% to test and 15% to validate each method. To characterize variability in performance, we conducted a resampling analysis that repeated the sample draws 100 times. We evaluated agreement between the tree predictions and expert estimates using Somers’ d, which measures differences in terms of ordinal association between predicted and observed scores and can be interpreted similarly to a correlation coefficient. 
Results: From the resampling analysis, compared with the nominal tree, an ordinal CT method that used a quadratic misclassification function and controlled tree size based on total misclassification cost had a slightly better predictive performance that was statistically significant for the frequency metric (Somers’ d: nominal tree = 0.61; ordinal tree = 0.63) and similar performance for the probability (nominal = 0.65; ordinal = 0.66) and intensity (nominal = 0.65; ordinal = 0.65) metrics. The best ordinal CT predicted fewer cases of large disagreement with the expert assessments (i.e. no exposure predicted for a job with high exposure and vice versa) compared with the nominal tree across all of the exposure metrics. For example, the percent of jobs with expert-assigned high intensity of exposure that the model predicted as no exposure was 29% for the nominal tree and 22% for the best ordinal tree. Conclusions: The overall agreements were similar across CT models; however, the use of ordinal models reduced the magnitude of the discrepancy when disagreements occurred. As the best performing model can vary by situation, researchers should consider evaluating multiple CT methods to maximize the predictive performance within their data. PMID:25433003
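Somers' d, the agreement measure used above, can be computed by brute force on small arrays: concordant minus discordant pairs, normalised over pairs not tied on the predictor (a didactic O(n^2) sketch; `scipy.stats.somersd` offers a vetted implementation):

```python
# Somers' d_{Y|X}: (concordant - discordant) / (pairs not tied on X);
# ranges from -1 (perfect reversal) to 1 (perfect ordinal agreement).
def somers_d(x, y):
    n_c = n_d = t_y = 0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            if x[i] == x[j]:
                continue                    # pairs tied on X are excluded
            dy = y[i] - y[j]
            if dy == 0:
                t_y += 1                    # tied on Y only
            elif (x[i] - x[j]) * dy > 0:
                n_c += 1                    # concordant pair
            else:
                n_d += 1                    # discordant pair
    return (n_c - n_d) / (n_c + n_d + t_y)
```

Identical expert and predicted scores give d = 1, a full reversal gives d = -1, which is why the statistic can be read like a correlation coefficient.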

  10. Important features of home-based support services for older Australians and their informal carers.

    PubMed

    McCaffrey, Nikki; Gill, Liz; Kaambwa, Billingsley; Cameron, Ian D; Patterson, Jan; Crotty, Maria; Ratcliffe, Julie

    2015-11-01

    In Australia, newly initiated, publicly subsidised 'Home-Care Packages' designed to assist older people (≥ 65 years of age) living in their own home must now be offered on a 'consumer-directed care' (CDC) basis by service providers. However, CDC models have largely developed in the absence of evidence on users' views and preferences. The aim of this study was to determine what features (attributes) of consumer-directed, home-based support services are important to older people and their informal carers to inform the design of a discrete choice experiment (DCE). Semi-structured, face-to-face interviews were conducted in December 2012-November 2013 with 17 older people receiving home-based support services and 10 informal carers from 5 providers located in South Australia and New South Wales. Salient service characteristics important to participants were determined using thematic and constant comparative analysis and formulated into attributes and attribute levels for presentation within a DCE. Initially, eight broad themes were identified: information and knowledge, choice and control, self-managed continuum, effective co-ordination, effective communication, responsiveness and flexibility, continuity and planning. Attributes were formulated for the DCE by combining overlapping themes such as effective communication and co-ordination, and the self-managed continuum and planning into single attributes. Six salient service features that characterise consumer preferences for the provision of home-based support service models were identified: choice of provider, choice of support worker, flexibility in care activities provided, contact with the service co-ordinator, managing the budget and saving unspent funds. Best practice indicates that qualitative research with individuals who represent the population of interest should guide attribute selection for a DCE and this is the first study to employ such methods in aged care service provision. 
Further development of services could incorporate methods of consumer engagement such as DCEs which facilitate the identification and quantification of users' views and preferences on alternative models of delivery. © 2015 John Wiley & Sons Ltd.

  11. Coordinate control of initiative mating device for autonomous underwater vehicle based on TDES

    NASA Astrophysics Data System (ADS)

    Yan, Zhe-Ping; Hou, Shu-Ping

    2005-06-01

    A novel initiative mating device, which has four 2-degree-of-freedom manipulators around the mating skirt, is proposed for mating between the skirt of an AUV (autonomous underwater vehicle) and a disabled submarine. The primary function of the device is to maintain exact mating between the skirt and the disabled submarine in a harsh subsea environment. According to the characteristics of rescue, an automaton model is brought forward to describe the mating procedure between the AUV and the manipulators. The coordinated control is implemented by a TDES (timed discrete event system). After taking the timing problem into account, simulation testing shows this to be a useful method for controlling the mating. The result shows that intelligent coordinated control based on TDES shortens the whole mating procedure by about 70 seconds.

  12. Revisiting the Scattering Greenhouse Effect of CO2 Ice Clouds

    NASA Astrophysics Data System (ADS)

    Kitzmann, D.

    2016-02-01

    Carbon dioxide ice clouds are thought to play an important role for cold terrestrial planets with thick CO2 dominated atmospheres. Various previous studies showed that a scattering greenhouse effect by carbon dioxide ice clouds could result in a massive warming of the planetary surface. However, all of these studies only employed simplified two-stream radiative transfer schemes to describe the anisotropic scattering. Using accurate radiative transfer models with a general discrete ordinate method, this study revisits this important effect and shows that the positive climatic impact of carbon dioxide clouds was strongly overestimated in the past. The revised scattering greenhouse effect can have important implications for the early Mars, but also for planets like the early Earth or the position of the outer boundary of the habitable zone.

  13. Confirmatory Factor Analysis of Ordinal Variables with Misspecified Models

    ERIC Educational Resources Information Center

    Yang-Wallentin, Fan; Joreskog, Karl G.; Luo, Hao

    2010-01-01

    Ordinal variables are common in many empirical investigations in the social and behavioral sciences. Researchers often apply the maximum likelihood method to fit structural equation models to ordinal data. This assumes that the observed measures have normal distributions, which is not the case when the variables are ordinal. A better approach is…

  14. Correlational Analysis of Ordinal Data: From Pearson's "r" to Bayesian Polychoric Correlation

    ERIC Educational Resources Information Center

    Choi, Jaehwa; Peters, Michelle; Mueller, Ralph O.

    2010-01-01

    Correlational analyses are one of the most popular quantitative methods, yet also one of the most frequently misused methods in social and behavioral research, especially when analyzing ordinal data from Likert or other rating scales. Although several correlational analysis options have been developed for ordinal data, there seems to be a lack…

  15. Proposed Ordinance for the Regulation of Cable Television. Working Draft.

    ERIC Educational Resources Information Center

    Chicago City Council, IL.

    A model ordinance is proposed for the regulation of cable television in the city of Chicago. It defines the language of the ordinance, sets forth the method of granting franchises, and describes the terms of the franchises. The duties of a commission to regulate cable television are listed and the method of selecting commission members is…

  16. The assignment of scores procedure for ordinal categorical data.

    PubMed

    Chen, Han-Ching; Wang, Nae-Sheng

    2014-01-01

    Ordinal data are the most frequently encountered type of data in the social sciences. Many statistical methods can be used to process such data. One common method is to assign scores to the data, convert them into interval data, and then perform further statistical analysis. Several authors have recently developed methods for assigning scores to ordered categorical data. This paper proposes an approach that defines an assigned-score system for an ordinal categorical variable based on an underlying continuous latent distribution, with the interpretation illustrated by three case-study examples. The results show that the proposed score system works well for skewed ordinal categorical data.
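The latent-distribution idea can be illustrated with a standard normal: score each ordered category by the conditional mean of the latent variable over that category's probability interval (an illustration of the general idea, not the paper's exact scoring system):

```python
from statistics import NormalDist

# Score category k by E[Z | category k] for a latent standard normal Z
# partitioned at the cumulative category probabilities; uses the closed
# form E[Z | a < Z < b] = (pdf(a) - pdf(b)) / P(a < Z < b).
def latent_normal_scores(probs):
    nd = NormalDist()
    pdf = lambda t: 0.0 if abs(t) == float("inf") else nd.pdf(t)
    scores, lo = [], 0.0
    for p in probs:
        hi = lo + p
        a = nd.inv_cdf(lo) if lo > 0.0 else float("-inf")
        b = nd.inv_cdf(hi) if hi < 1.0 else float("inf")
        scores.append((pdf(a) - pdf(b)) / p)
        lo = hi
    return scores

s = latent_normal_scores([0.25, 0.5, 0.25])
# symmetric category probabilities give scores symmetric about zero
```

Skewed category probabilities shift the cutpoints and hence the scores, which is how a latent-based system adapts to skewed ordinal data where equally spaced integer scores do not.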

  17. Iterative discrete ordinates solution of the equation for surface-reflected radiance

    NASA Astrophysics Data System (ADS)

    Radkevich, Alexander

    2017-11-01

    This paper presents a new method of numerical solution of the integral equation for the radiance reflected from an anisotropic surface. The equation relates the radiance at the surface level with BRDF and solutions of the standard radiative transfer problems for a slab with no reflection on its surfaces. It is also shown that the kernel of the equation satisfies the condition for the existence of a unique solution and the convergence of the successive approximations to that solution. The developed method features two basic steps: discretization on a 2D quadrature, and solving the resulting system of algebraic equations with a successive over-relaxation method based on the Gauss-Seidel iterative process. Presented numerical examples show good agreement between the surface-reflected radiance obtained with DISORT and the proposed method. Analysis of contributions of the direct and diffuse (but not yet reflected) parts of the downward radiance to the total solution is performed. Together, they represent a very good initial guess for the iterative process. This fact ensures fast convergence. Numerical evidence is given that the fastest convergence occurs with the relaxation parameter of 1 (no relaxation). An integral equation for BRDF is derived as inversion of the original equation. The potential of this new equation for BRDF retrievals is analyzed. The approach is found not viable, as the BRDF equation appears to be an ill-posed problem, and it requires knowledge of the surface-reflected radiance on the entire domain of both Sun and viewing zenith angles.
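The solver stage, Gauss-Seidel with successive over-relaxation, can be sketched generically (a textbook SOR on a small dense system, not the paper's radiative-transfer discretisation); omega = 1 recovers plain Gauss-Seidel, the setting the paper found fastest:

```python
import numpy as np

# SOR sweep: each unknown is updated in place from the latest values;
# omega blends the old value with the Gauss-Seidel update.
def sor(A, b, omega=1.0, sweeps=200):
    x = np.zeros_like(b, dtype=float)
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # diagonally dominant toy system
b = np.array([1.0, 2.0])
x = sor(A, b)                            # converges to the solution of A x = b
```

A good initial guess, like the direct-plus-diffuse downward radiance described above, reduces the number of sweeps needed, which is the source of the fast convergence the paper reports.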

  18. Numerical solutions of ideal quantum gas dynamical flows governed by semiclassical ellipsoidal-statistical distribution.

    PubMed

    Yang, Jaw-Yen; Yan, Chih-Yuan; Diaz, Manuel; Huang, Juan-Chen; Li, Zhihui; Zhang, Hanxin

    2014-01-08

    The ideal quantum gas dynamics as manifested by the semiclassical ellipsoidal-statistical (ES) equilibrium distribution derived in Wu et al. (Wu et al . 2012 Proc. R. Soc. A 468 , 1799-1823 (doi:10.1098/rspa.2011.0673)) is numerically studied for particles of three statistics. This anisotropic ES equilibrium distribution was derived using the maximum entropy principle and conserves the mass, momentum and energy, but differs from the standard Fermi-Dirac or Bose-Einstein distribution. The present numerical method combines the discrete velocity (or momentum) ordinate method in momentum space and the high-resolution shock-capturing method in physical space. A decoding procedure to obtain the necessary parameters for determining the ES distribution is also devised. Computations of two-dimensional Riemann problems are presented, and various contours of the quantities unique to this ES model are illustrated. The main flow features, such as shock waves, expansion waves and slip lines and their complex nonlinear interactions, are depicted and found to be consistent with existing calculations for a classical gas.

  19. Fast genomic predictions via Bayesian G-BLUP and multilocus models of threshold traits including censored Gaussian data.

    PubMed

    Kärkkäinen, Hanni P; Sillanpää, Mikko J

    2013-09-04

    Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed.

  20. Fast Genomic Predictions via Bayesian G-BLUP and Multilocus Models of Threshold Traits Including Censored Gaussian Data

    PubMed Central

    Kärkkäinen, Hanni P.; Sillanpää, Mikko J.

    2013-01-01

    Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed. PMID:23821618

  1. Estimated breeding values for canine hip dysplasia radiographic traits in a cohort of Australian German Shepherd dogs.

    PubMed

    Wilson, Bethany J; Nicholas, Frank W; James, John W; Wade, Claire M; Thomson, Peter C

    2013-01-01

    Canine hip dysplasia (CHD) is a serious and common musculoskeletal disease of pedigree dogs and therefore represents both an important welfare concern and an imperative breeding priority. The typical heritability estimates for radiographic CHD traits suggest that the accuracy of breeding dog selection could be substantially improved by the use of estimated breeding values (EBVs) in place of selection based on phenotypes of individuals. The British Veterinary Association/Kennel Club scoring method is a complex measure composed of nine bilateral ordinal traits, intended to evaluate both early and late dysplastic changes. However, the ordinal nature of the traits may represent a technical challenge for calculation of EBVs using linear methods. The purpose of the current study was to calculate EBVs of British Veterinary Association/Kennel Club traits in the Australian population of German Shepherd Dogs, using linear (both as individual traits and a summed phenotype), binary and ordinal methods to determine the optimal method for EBV calculation. Ordinal EBVs correlated well with linear EBVs (r = 0.90-0.99) and somewhat well with EBVs for the sum of the individual traits (r = 0.58-0.92). Correlation of ordinal and binary EBVs varied widely (r = 0.24-0.99) depending on the trait and cut-point considered. The ordinal EBVs had increased accuracy (0.48-0.69) of selection compared with accuracies from individual phenotype-based selection (0.40-0.52). Despite the high correlations between linear and ordinal EBVs, the underlying relationship between EBVs calculated by the two methods was not always linear, leading us to suggest that ordinal models should be used wherever possible. As the population of German Shepherd Dogs which was studied was purportedly under selection for the traits studied, we examined the EBVs for evidence of a genetic trend in these traits and found substantial genetic improvement over time. This study suggests that the use of ordinal EBVs could increase the rate of genetic improvement in this population.

  2. The effect of ordinances requiring smoke-free restaurants and bars on revenues: a follow-up.

    PubMed Central

    Glantz, S A; Smith, L R

    1997-01-01

    OBJECTIVES: The purpose of this study was to extend an earlier evaluation of the economic effects of ordinances requiring smoke-free restaurants and bars. METHODS: Sales tax data for 15 cities with smoke-free restaurant ordinances, 5 cities and 2 counties with smoke-free bar ordinances, and matched comparison locations were analyzed by multiple regression, including time and a dummy variable for the ordinance. RESULTS: Ordinances had no significant effect on the fraction of total retail sales that went to eating and drinking places or on the ratio between sales in communities with ordinances and sales in comparison communities. Ordinances requiring smoke-free bars had no significant effect on the fraction of revenues going to eating and drinking places that serve all types of liquor. CONCLUSIONS: Smoke-free ordinances do not adversely affect either restaurant or bar sales. PMID:9357356

  3. Characterization of high order spatial discretizations and lumping techniques for discontinuous finite element SN transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maginot, P. G.; Ragusa, J. C.; Morel, J. E.

    2013-07-01

    We examine several possible methods of mass matrix lumping for discontinuous finite element discrete ordinates transport using a Lagrange interpolatory polynomial trial space. Though positive outflow angular flux is guaranteed with traditional mass matrix lumping in a purely absorbing 1-D slab cell for the linear discontinuous approximation, we show that when used with higher degree interpolatory polynomial trial spaces, traditional lumping does not yield strictly positive outflows and does not increase in accuracy with an increase in trial space polynomial degree. As an alternative, we examine methods which are 'self-lumping'. Self-lumping methods yield diagonal mass matrices by using numerical quadrature restricted to the Lagrange interpolatory points. Using equally-spaced interpolatory points, self-lumping is achieved through the use of closed Newton-Cotes formulas, resulting in strictly positive outflows in pure absorbers for odd power polynomials in 1-D slab geometry. By changing interpolatory points from the traditional equally-spaced points to the quadrature points of the Gauss-Legendre or Lobatto-Gauss-Legendre quadratures, it is possible to generate solution representations with a diagonal mass matrix and a strictly positive outflow for any degree polynomial solution representation in a pure absorber medium in 1-D slab geometry. Further, there is no inherent limit to local truncation error order of accuracy when using interpolatory points that correspond to the quadrature points of high order accuracy numerical quadrature schemes. (authors)
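    The "self-lumping" construction is easy to demonstrate: when the quadrature points coincide with the Lagrange interpolation points, each basis function satisfies l_i(x_q) = δ_iq, so the quadrature-evaluated mass matrix collapses to a diagonal of the quadrature weights. A short numpy sketch for the Lobatto-Gauss-Legendre case (our own illustration, not the authors' code):

```python
import numpy as np
from numpy.polynomial import legendre as leg

def lgl_nodes_weights(n):
    """Lobatto-Gauss-Legendre nodes/weights for n+1 points on [-1, 1]:
    endpoints plus the roots of P_n'(x), with w = 2 / (n(n+1) P_n(x)^2)."""
    Pn = leg.Legendre.basis(n)
    x = np.concatenate(([-1.0], np.sort(Pn.deriv().roots().real), [1.0]))
    w = 2.0 / (n * (n + 1) * Pn(x) ** 2)
    return x, w

def lumped_mass_matrix(x, w):
    """M_ij = sum_q w_q l_i(x_q) l_j(x_q), quadrature at the interpolation
    points themselves; since l_i(x_q) = delta_iq this is exactly diag(w)."""
    V = np.eye(len(x))            # l_i evaluated at the nodes x_q
    return V.T @ np.diag(w) @ V

x, w = lgl_nodes_weights(3)       # cubic trial space, 4 LGL points
M = lumped_mass_matrix(x, w)
assert np.allclose(M, np.diag(w))   # diagonal ("self-lumped") mass matrix
assert np.isclose(w.sum(), 2.0)     # weights integrate constants exactly
```

The positivity of outflow for arbitrary polynomial degree reported above is a property of the resulting transport scheme; the snippet only verifies the diagonal mass-matrix structure.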

  4. Calculations of the skyshine gamma-ray dose rates from independent spent fuel storage installations (ISFSI) under worst case accident conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pace, J.V. III; Cramer, S.N.; Knight, J.R.

    1980-09-01

    Calculations of the skyshine gamma-ray dose rates from three spent fuel storage pools under worst case accident conditions have been made using the discrete ordinates code DOT-IV and the Monte Carlo code MORSE and have been compared to those of two previous methods. The DNA 37N-21G group cross-section library was utilized in the calculations, together with the Claiborne-Trubey gamma-ray dose factors taken from the same library. Plots of all results are presented. It was found that the dose was a strong function of the iron thickness over the fuel assemblies, the initial angular distribution of the emitted radiation, and the photon source near the top of the assemblies. 16 refs., 11 figs., 7 tabs.

  5. Shock-wave structure in a partially ionized gas

    NASA Technical Reports Server (NTRS)

    Lu, C. S.; Huang, A. B.

    1974-01-01

    The structure of a steady plane shock in a partially ionized gas has been investigated using the Boltzmann equation with a kinetic model as the governing equation and the discrete ordinate method as a tool. The effects of the electric field induced by the charge separation on the shock structure have also been studied. Although the three species of an ionized gas travel with approximately the same macroscopic velocity, the individual distribution functions are found to be very different. In a strong shock the atom distribution function may have double peaks, while the ion distribution function has only one peak. Electrons are heated up much earlier than ions and atoms in a partially ionized gas. Because the interactions of electrons with atoms and with ions are different, the ion temperature can be different from the atom temperature.

  6. Combustion chamber analysis code

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-01-01

    A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  7. Calculation and experimental validation of spectral properties of microsize grains surrounded by nanoparticles.

    PubMed

    Yu, Haitong; Liu, Dong; Duan, Yuanyuan; Wang, Xiaodong

    2014-04-07

    Opacified aerogels are particulate thermal insulating materials in which micrometric opacifier mineral grains are surrounded by silica aerogel nanoparticles. A geometric model was developed to characterize the spectral properties of such microsize grains surrounded by much smaller particles. The model represents the material's microstructure with the spherical opacifier's spectral properties calculated using the multi-sphere T-matrix (MSTM) algorithm. The results are validated by comparing the measured reflectance of an opacified aerogel slab against the value predicted using the discrete ordinate method (DOM) based on calculated optical properties. The results suggest that the large particles embedded in the nanoparticle matrices show different scattering and absorption properties from the single scattering condition and that the MSTM and DOM algorithms are both useful for calculating the spectral and radiative properties of this particulate system.

  8. Satellite Remote Sensing of Tropical Precipitation and Ice Clouds for GCM Verification

    NASA Technical Reports Server (NTRS)

    Evans, K. Franklin

    2001-01-01

    This project, supported by the NASA New Investigator Program, has primarily been funding a graduate student, Darren McKague. Since August 1999 Darren has been working part time at Raytheon, while continuing his PhD research. Darren is planning to finish his thesis work in May 2001, thus some of the work described here is ongoing. The proposed research was to use GOES visible and infrared imager data and SSM/I microwave data to obtain joint distributions of cirrus cloud ice mass and precipitation for a study region in the Eastern Tropical Pacific. These joint distributions of cirrus cloud and rainfall were to be compared to those from the CSU general circulation model to evaluate the cloud microphysical and cumulus parameterizations in the GCM. Existing algorithms were to be used for the retrieval of cloud ice water path from GOES (Minnis) and rainfall from SSM/I (Wilheit). A theoretical study using radiative transfer models and realistic variations in cloud and precipitation profiles was to be used to estimate the retrieval errors. Due to the unavailability of the GOES satellite cloud retrieval algorithm from Dr. Minnis (a co-PI), there was a change in the approach and emphasis of the project. The new approach was to develop a completely new type of remote sensing algorithm - one to directly retrieve joint probability density functions (pdf's) of cloud properties from multi-dimensional histograms of satellite radiances. The usual approach is to retrieve individual pixels of variables (i.e. cloud optical depth), and then aggregate the information. Only statistical information is actually needed, however, and so a more direct method is desirable. We developed forward radiative transfer models for the SSM/I and GOES channels, originally for testing the retrieval algorithms. The visible and near infrared ice scattering information is obtained from geometric ray tracing of fractal ice crystals (Andreas Macke), while the mid-infrared and microwave scattering is computed with Mie scattering. The radiative transfer is performed with the Spherical Harmonic Discrete Ordinate Method (developed by the PI), and infrared molecular absorption is included with the correlated k-distribution method. The SHDOM radiances have been validated by comparison to version 2 of DISORT (the community "standard" discrete-ordinates radiative transfer model), however we use SHDOM since it is computationally more efficient.

  9. Vectorization of transport and diffusion computations on the CDC Cyber 205

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Shumays, I.K.

    1986-01-01

    The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.
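    The tridiagonal systems at the heart of such diffusion computations are classically solved with the serial Thomas algorithm, the forward/backward recurrence that vectorized schemes such as odd-even cyclic reduction were devised to replace on machines like the Cyber 205. A sketch of that serial baseline (our own illustration, not the report's code), tested on the symmetric positive definite (-1, 2, -1) diffusion stencil:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (a[0] and c[-1] are unused)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                 # forward elimination (inherently serial)
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# SPD test system from the 1-D diffusion stencil (-1, 2, -1)
n = 64
a = np.full(n, -1.0); a[0] = 0.0
c = np.full(n, -1.0); c[-1] = 0.0
b = np.full(n, 2.0)
x_true = np.random.default_rng(0).standard_normal(n)
d = b * x_true + np.r_[0.0, -x_true[:-1]] + np.r_[-x_true[1:], 0.0]
assert np.allclose(thomas(a, b, c, d), x_true)
```

The point of cyclic reduction is to replace these data-dependent loops with about log2(n) sweeps whose eliminations are independent and therefore vectorizable, which is what made it the better fit for the Cyber 205.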

  10. Numerical solutions of the semiclassical Boltzmann ellipsoidal-statistical kinetic model equation

    PubMed Central

    Yang, Jaw-Yen; Yan, Chin-Yuan; Huang, Juan-Chen; Li, Zhihui

    2014-01-01

    Computations of rarefied gas dynamical flows governed by the semiclassical Boltzmann ellipsoidal-statistical (ES) kinetic model equation using an accurate numerical method are presented. The semiclassical ES model was derived through the maximum entropy principle and conserves not only the mass, momentum and energy, but also contains additional higher order moments that differ from the standard quantum distributions. A different decoding procedure to obtain the necessary parameters for determining the ES distribution is also devised. The numerical method in phase space combines the discrete-ordinate method in momentum space and the high-resolution shock capturing method in physical space. Numerical solutions of two-dimensional Riemann problems for two configurations covering various degrees of rarefaction are presented and various contours of the quantities unique to this new model are illustrated. When the relaxation time becomes very small, the main flow features display behavior similar to that of ideal quantum gas dynamics, and the present solutions are found to be consistent with existing calculations for a classical gas. The effect of a parameter that permits an adjustable Prandtl number in the flow is also studied. PMID:25104904

  11. Benchmark gamma-ray skyshine experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nason, R.R.; Shultis, J.K.; Faw, R.E.

    1982-01-01

    A benchmark gamma-ray skyshine experiment is described in which 60Co sources were either collimated into an upward 150-deg conical beam or shielded vertically by two different thicknesses of concrete. A NaI(Tl) spectrometer and a high pressure ion chamber were used to measure, respectively, the energy spectrum and the 4π exposure rate of the air-reflected gamma photons up to 700 m from the source. Analyses of the data and comparison to DOT discrete ordinates calculations are presented.

  12. S4 solution of the transport equation for eigenvalues using Legendre polynomials

    NASA Astrophysics Data System (ADS)

    Öztürk, Hakan; Bülbül, Ahmet

    2017-09-01

    Numerical solution of the transport equation for monoenergetic neutrons scattered isotropically through the medium of a finite homogeneous slab is studied for the determination of the eigenvalues. After obtaining the discrete ordinates form of the transport equation, separated homogeneous and particular solutions are formed and then the eigenvalues are calculated using the Gauss-Legendre quadrature set. Then, the calculated eigenvalues for various values of c0, the mean number of secondary neutrons per collision, are given in tables.
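    The calculation described amounts to generating a Gauss-Legendre quadrature set {μ_m, w_m} and finding the roots ν of the discrete-ordinates characteristic equation 1 = (c0 ν / 2) Σ_m w_m / (ν - μ_m), obtained by substituting exp(-x/ν) solutions into the S_N equations. A hypothetical numpy sketch for the S4 set (function names are ours, not the authors'):

```python
import numpy as np

# S4 Gauss-Legendre ordinates mu_m and weights w_m on [-1, 1]
mu, w = np.polynomial.legendre.leggauss(4)

def characteristic(nu, c0):
    """Discrete-ordinates characteristic function; its roots are the
    eigenvalues nu of the S_N slab transport equation."""
    return 1.0 - 0.5 * c0 * nu * np.sum(w / (nu - mu))

def largest_eigenvalue(c0, lo=1.0 + 1e-9, hi=50.0, tol=1e-12):
    """Bisection for the dominant eigenvalue nu0 > 1 (subcritical c0 < 1)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if characteristic(mid, c0) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

nu0 = largest_eigenvalue(0.9)
```

For c0 = 0.9 the S4 root lies close to the exact transport value (about 1.903); refining the quadrature order drives the discrete eigenvalue toward that limit.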

  13. Extending radiative transfer models by use of Bayes rule. [in atmospheric science

    NASA Technical Reports Server (NTRS)

    Whitney, C.

    1977-01-01

    This paper presents a procedure that extends some existing radiative transfer modeling techniques to problems in atmospheric science where curvature and layering of the medium and dynamic range and angular resolution of the signal are important. Example problems include twilight and limb scan simulations. Techniques that are extended include successive orders of scattering, matrix operator, doubling, Gauss-Seidel iteration, discrete ordinates and spherical harmonics. The procedure for extending them is based on Bayes' rule from probability theory.

  14. Generating Multivariate Ordinal Data via Entropy Principles.

    PubMed

    Lee, Yen; Kaplan, David

    2018-03-01

    When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been applied for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate the multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness not only focusing on the levels of non-normality but also on the variations in the distribution shapes. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust [Formula: see text] and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.

  15. Introducing Students to Plant Geography: Polar Ordination Applied to Hanging Gardens.

    ERIC Educational Resources Information Center

    Malanson, George P.; And Others

    1993-01-01

    Reports on a research study in which college students used a statistical ordination method to reveal relationships among plant community structures and physical, disturbance, and spatial variables. Concludes that polar ordination helps students understand the methodology of plant geography and encourages further student research. (CFR)

  16. Clean Indoor Air Ordinance Coverage in the Appalachian Region of the United States

    PubMed Central

    Liber, Alex; Pennell, Michael; Nealy, Darren; Hammer, Jana; Berman, Micah

    2010-01-01

    Objectives. We sought to quantitatively examine the pattern of, and socioeconomic factors associated with, adoption of clean indoor air ordinances in Appalachia. Methods. We collected and reviewed clean indoor air ordinances in Appalachian communities in 6 states and rated the ordinances for completeness of coverage in workplaces, restaurants, and bars. Additionally, we computed a strength score to measure coverage in 7 locations. We fit mixed-effects models to determine whether the presence of a comprehensive ordinance and the ordinance strength were related to community socioeconomic disadvantage. Results. Of the 332 communities included in the analysis, fewer than 20% had adopted a comprehensive workplace, restaurant, or bar ordinance. Most ordinances were weak, achieving on average only 43% of the total possible points. Communities with a higher unemployment rate were less likely and those with a higher education level were more likely to have a strong ordinance. Conclusions. The majority of residents in these communities are not protected from secondhand smoke. Efforts to pass strong statewide clean indoor air laws should take priority over local initiatives in these states. PMID:20466957

  17. Visualization of nuclear particle trajectories in nuclear oil-well logging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Case, C.R.; Chiaramonte, J.M.

    Nuclear oil-well logging measures specific properties of subsurface geological formations as a function of depth in the well. The knowledge gained is used to evaluate the hydrocarbon potential of the surrounding oil field. The measurements are made by lowering an instrument package into an oil well and slowly extracting it at a constant speed. During the extraction phase, neutrons or gamma rays are emitted from the tool, interact with the formation, and scatter back to the detectors located within the tool. Even though only a small percentage of the emitted particles ever reach the detectors, mathematical modeling has been very successful in the accurate prediction of these detector responses. The two dominant methods used to model these devices have been the two-dimensional discrete ordinates method and the three-dimensional Monte Carlo method; the Monte Carlo method has routinely been used to investigate the response characteristics of nuclear tools. A special Los Alamos National Laboratory version of their standard MCNP Monte Carlo code retains the details of each particle history for later viewing within SABRINA, a companion three-dimensional geometry modeling and debugging code.

  18. Numerical solutions of ideal quantum gas dynamical flows governed by semiclassical ellipsoidal-statistical distribution

    PubMed Central

    Yang, Jaw-Yen; Yan, Chih-Yuan; Diaz, Manuel; Huang, Juan-Chen; Li, Zhihui; Zhang, Hanxin

    2014-01-01

    The ideal quantum gas dynamics as manifested by the semiclassical ellipsoidal-statistical (ES) equilibrium distribution derived in Wu et al. (Wu et al. 2012 Proc. R. Soc. A 468, 1799–1823 (doi:10.1098/rspa.2011.0673)) is numerically studied for particles of three statistics. This anisotropic ES equilibrium distribution was derived using the maximum entropy principle and conserves the mass, momentum and energy, but differs from the standard Fermi–Dirac or Bose–Einstein distribution. The present numerical method combines the discrete velocity (or momentum) ordinate method in momentum space and the high-resolution shock-capturing method in physical space. A decoding procedure to obtain the necessary parameters for determining the ES distribution is also devised. Computations of two-dimensional Riemann problems are presented, and various contours of the quantities unique to this ES model are illustrated. The main flow features, such as shock waves, expansion waves and slip lines and their complex nonlinear interactions, are depicted and found to be consistent with existing calculations for a classical gas. PMID:24399919

  19. Unified Least Squares Methods for the Evaluation of Diagnostic Tests With the Gold Standard

    PubMed Central

    Tang, Liansheng Larry; Yuan, Ao; Collins, John; Che, Xuan; Chan, Leighton

    2017-01-01

    The article proposes a unified least squares method to estimate the receiver operating characteristic (ROC) parameters for continuous and ordinal diagnostic tests, such as cancer biomarkers. The method is based on a linear model framework using the empirically estimated sensitivities and specificities as input “data.” It gives consistent estimates for regression and accuracy parameters when the underlying continuous test results are normally distributed after some monotonic transformation. The key difference between the proposed method and the method of Tang and Zhou lies in the response variable. The response variable in the latter is transformed empirical ROC curves at different thresholds. It takes on many values for continuous test results, but few values for ordinal test results. The limited number of values for the response variable makes it impractical for ordinal data. However, the response variable in the proposed method takes on many more distinct values so that the method yields valid estimates for ordinal data. Extensive simulation studies are conducted to investigate and compare the finite sample performance of the proposed method with an existing method, and the method is then used to analyze 2 real cancer diagnostic examples as an illustration. PMID:28469385

  20. A rapid radiative transfer model for reflection of solar radiation

    NASA Technical Reports Server (NTRS)

    Xiang, X.; Smith, E. A.; Justus, C. G.

    1994-01-01

    A rapid analytical radiative transfer model for reflection of solar radiation in plane-parallel atmospheres is developed based on the Sobolev approach and the delta function transformation technique. A distinct advantage of this model over alternative two-stream solutions is that in addition to yielding the irradiance components, which turn out to be mathematically equivalent to the delta-Eddington approximation, the radiance field can also be expanded in a mathematically consistent fashion. Tests with the model against a more precise multistream discrete ordinate model over a wide range of input parameters demonstrate that the new approximate method typically produces average radiance differences of less than 5%, with worst average differences of approximately 10%-15%. By the same token, the computational speed of the new model is some tens to thousands of times faster than that of the more precise model when its stream resolution is set to generate precise calculations.
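    The delta function transformation referred to above is the standard delta-Eddington scaling: a fraction f = g² of the scattering phase function is treated as an unscattered forward peak, and the optical depth, single-scattering albedo and asymmetry factor are rescaled accordingly. A small sketch of those standard formulas (our illustration, not this model's code):

```python
def delta_scale(tau, omega, g):
    """Delta-Eddington scaling (Joseph, Wiscombe & Weinman 1976): remove the
    forward-scattering fraction f = g**2 and rescale the optical properties.
        tau'   = (1 - omega*f) * tau
        omega' = (1 - f) * omega / (1 - omega*f)
        g'     = (g - f) / (1 - f)  ==  g / (1 + g)
    """
    f = g * g
    tau_p = (1.0 - omega * f) * tau
    omega_p = (1.0 - f) * omega / (1.0 - omega * f)
    g_p = (g - f) / (1.0 - f)
    return tau_p, omega_p, g_p

# Typical cloud-like inputs (hypothetical values for illustration)
tau_p, omega_p, g_p = delta_scale(tau=1.0, omega=0.99, g=0.85)
```

After scaling, a two-stream or low-order discrete ordinate solver sees a thinner, less forward-peaked medium, which is what makes delta-Eddington irradiances so accurate for strongly peaked phase functions.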

  1. Non-destructive testing of ceramic materials using mid-infrared ultrashort-pulse laser

    NASA Astrophysics Data System (ADS)

    Sun, S. C.; Qi, Hong; An, X. Y.; Ren, Y. T.; Qiao, Y. B.; Ruan, Liming M.

    2018-04-01

    The non-destructive testing (NDT) of ceramic materials using a mid-infrared ultrashort-pulse laser is investigated in this study. The discrete ordinate method is applied to solve the transient radiative transfer equation in a 2D semitransparent medium, and the emerging radiative intensity on the boundary serves as input for the inverse analysis. The sequential quadratic programming algorithm is employed as the inverse technique to optimize the objective function, in which the gradient of the objective function with respect to the reconstruction parameters is calculated using the adjoint model. Two reticulated porous ceramics including partially stabilized zirconia and oxide-bonded silicon carbide are tested. The retrieval results show that the main characteristics of defects such as optical properties, geometric shapes and positions can be accurately reconstructed by the present model. The proposed technique is effective and robust in the NDT of ceramics even with measurement errors.

  2. Rarefied gas flow through two-dimensional nozzles

    NASA Technical Reports Server (NTRS)

    De Witt, Kenneth J.; Jeng, Duen-Ren; Keith, Theo G., Jr.; Chung, Chan-Hong

    1989-01-01

    A kinetic theory analysis is made of the flow of a rarefied gas from one reservoir to another through two-dimensional nozzles with arbitrary curvature. The Boltzmann equation simplified by a model collision integral is solved by means of finite-difference approximations with the discrete ordinate method. The physical space is transformed by a general grid generation technique and the velocity space is transformed to a polar coordinate system. A numerical code is developed which can be applied to any two-dimensional passage of complicated geometry for the flow regimes from free-molecular to slip. Numerical values of flow quantities can be calculated for the entire physical space, including both inside the nozzle and in the outside plume. Predictions are made for the case of parallel slots and compared with existing literature data. Also, results for the cases of convergent or divergent slots and two-dimensional nozzles with arbitrary curvature at arbitrary Knudsen number are presented.

  3. Ordinal feature selection for iris and palmprint recognition.

    PubMed

    Sun, Zhenan; Wang, Libin; Tan, Tieniu

    2014-09-01

    Ordinal measures have been demonstrated as an effective feature representation model for iris and palmprint recognition. However, ordinal measures are a general concept of image analysis, and numerous variants with different parameter settings, such as location, scale, orientation, and so on, can be derived to construct a huge feature space. This paper proposes a novel optimization formulation for ordinal feature selection with successful applications to both iris and palmprint recognition. The objective function of the proposed feature selection method has two parts, i.e., misclassification error of intra- and interclass matching samples and weighted sparsity of ordinal feature descriptors. Therefore, the feature selection aims to achieve an accurate and sparse representation of ordinal measures. The optimization is subject to a number of linear inequality constraints, which require that all intra- and interclass matching pairs are well separated with a large margin. Ordinal feature selection is formulated as a linear programming (LP) problem so that a solution can be efficiently obtained even on a large-scale feature pool and training database. Extensive experimental results demonstrate that the proposed LP formulation is advantageous over existing feature selection methods, such as mRMR, ReliefF, Boosting, and Lasso for biometric recognition, reporting state-of-the-art accuracy on CASIA and PolyU databases.

  4. Inferring network structure in non-normal and mixed discrete-continuous genomic data.

    PubMed

    Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran

    2018-03-01

    Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. © 2017, The International Biometric Society.

  6. Reactor Dosimetry Applications Using RAPTOR-M3G:. a New Parallel 3-D Radiation Transport Code

    NASA Astrophysics Data System (ADS)

    Longoni, Gianluca; Anderson, Stanwood L.

    2009-08-01

    The numerical solution of the Linearized Boltzmann Equation (LBE) via the Discrete Ordinates method (SN) requires extensive computational resources for large 3-D neutron and gamma transport applications due to the concurrent discretization of the angular, spatial, and energy domains. This paper will discuss the development of RAPTOR-M3G (RApid Parallel Transport Of Radiation - Multiple 3D Geometries), a new 3-D parallel radiation transport code, and its application to the calculation of ex-vessel neutron dosimetry responses in the cavity of a commercial 2-loop Pressurized Water Reactor (PWR). RAPTOR-M3G is based on domain decomposition algorithms, where the spatial and angular domains are allocated and processed on multi-processor computer architectures. As compared to traditional single-processor applications, this approach reduces the computational load as well as the memory requirement per processor, yielding an efficient solution methodology for large 3-D problems. Measured neutron dosimetry responses in the reactor cavity air gap will be compared to the RAPTOR-M3G predictions. This paper is organized as follows: Section 1 discusses the RAPTOR-M3G methodology; Section 2 describes the 2-loop PWR model and the numerical results obtained; Section 3 addresses the parallel performance of the code; and Section 4 concludes this paper with final remarks and future work.

  7. Tycho 2: A Proxy Application for Kinetic Transport Sweeps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrett, Charles Kristopher; Warsa, James S.

    2016-09-14

    Tycho 2 is a proxy application that implements discrete ordinates (SN) kinetic transport sweeps on unstructured, 3D, tetrahedral meshes. It has been designed to be small and require minimal dependencies to make collaboration and experimentation as easy as possible. Tycho 2 has been released as open source software. The software is currently in a beta release with plans for a stable release (version 1.0) before the end of the year. The code is parallelized via MPI across spatial cells and OpenMP across angles. Currently, several parallelization algorithms are implemented.

  8. Shielding Analyses for VISION Beam Line at SNS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popova, Irina; Gallmeier, Franz X

    2014-01-01

    Full-scale neutron and gamma transport analyses were performed to design shielding around the VISION beam line: the instrument shielding enclosure, the beam stop, and the secondary shutter, including a temporary beam stop for the still-closed neighboring beam line, to meet the requirement of dose rates below 0.25 mrem/h at 30 cm from the shielding surface. The beam stop and temporary beam stop analyses were performed with the discrete ordinates code DORT, in addition to Monte Carlo analyses with the MCNPX code. A comparison of the results is presented.

  9. The effect of ordinances requiring smoke-free restaurants on restaurant sales.

    PubMed Central

    Glantz, S A; Smith, L R

    1994-01-01

    OBJECTIVES: The effect on restaurant revenues of local ordinances requiring smoke-free restaurants is an important consideration for restaurateurs themselves and for the cities that depend on sales tax revenues to provide services. METHODS: Data were obtained from the California State Board of Equalization and the Colorado State Department of Revenue on taxable restaurant sales from 1986 (1982 for Aspen) through 1993 for all 15 cities where ordinances were in force, as well as for 15 similar control communities without smoke-free ordinances during this period. These data were analyzed using multiple regression, including time and a dummy variable for whether an ordinance was in force. Total restaurant sales were analyzed as a fraction of total retail sales, and restaurant sales in smoke-free cities were compared with those in control communities similar in population, median income, and other factors. RESULTS: Ordinances had no significant effect on the fraction of total retail sales that went to restaurants or on the ratio of restaurant sales in communities with ordinances to those in the matched control communities. CONCLUSIONS: Smoke-free restaurant ordinances do not adversely affect restaurant sales. PMID:8017529
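
    The regression design in METHODS can be sketched as follows (all numbers are synthetic and purely illustrative, not the study's data): a time trend plus a dummy variable for whether an ordinance was in force, fitted by ordinary least squares.

```python
import numpy as np

# Synthetic illustration of the study's regression design: quarterly
# restaurant sales as a fraction of total retail sales, regressed on a
# time trend and a dummy for an ordinance being in force.
rng = np.random.default_rng(1)
n = 32
quarters = np.arange(n)
ordinance = (quarters >= 16).astype(float)   # ordinance enacted mid-series
frac = 0.24 + 0.0005 * quarters + rng.normal(0.0, 0.004, n)  # no true effect

X = np.column_stack([np.ones(n), quarters, ordinance])
beta, *_ = np.linalg.lstsq(X, frac, rcond=None)
# beta[2] estimates the ordinance effect; with no true effect built into
# the synthetic data it comes out near zero, mirroring the study's finding.
```

    A significance test on beta[2] (as in the study's multiple regression) would then decide whether the ordinance coefficient is distinguishable from zero.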

  10. Evaluation of the CPU time for solving the radiative transfer equation with high-order resolution schemes applying the normalized weighting-factor method

    NASA Astrophysics Data System (ADS)

    Xamán, J.; Zavala-Guillén, I.; Hernández-López, I.; Uriarte-Flores, J.; Hernández-Pérez, I.; Macías-Melo, E. V.; Aguilar-Castro, K. M.

    2018-03-01

    In this paper, we evaluated the convergence rate (CPU time) of a new mathematical formulation for the numerical solution of the radiative transfer equation (RTE) with several High-Order (HO) and High-Resolution (HR) schemes. In computational fluid dynamics, this procedure is known as the Normalized Weighting-Factor (NWF) method, and it is adopted here. The NWF method is used to incorporate the high-order resolution schemes in the discretized RTE. The NWF method is compared, in terms of the computer time needed to obtain a converged solution, with the widely used deferred-correction (DC) technique for calculations of a two-dimensional cavity with emitting-absorbing-scattering gray media using the discrete ordinates method. Six parameters, viz. the grid size, the order of quadrature, the absorption coefficient, the emissivity of the boundary surface, the under-relaxation factor, and the scattering albedo, are considered to evaluate ten schemes. The results showed that, using the DC method, the scheme with the lowest CPU time is in general the SOU. In contrast, compared with the DC procedure, the CPU time for the DIAMOND and QUICK schemes using the NWF method is between 3.8 and 23.1% faster and between 12.6 and 56.1% faster, respectively. However, the other schemes are more time consuming when the NWF is used instead of the DC method. Additionally, a second test case was presented, and the results showed that, depending on the problem under consideration, the NWF procedure may be computationally faster or slower than the DC method. As an example, the CPU times for the QUICK and SMART schemes are 61.8 and 203.7% slower, respectively, when the NWF formulation is used for the second test case. Finally, future research is required to explore the computational cost of the NWF method in more complex problems.

  11. ANOVA with Rasch Measures.

    ERIC Educational Resources Information Center

    Linacre, John Michael

    Various methods of estimating main effects from ordinal data are presented and contrasted. Problems discussed include: (1) at what level to accumulate ordinal data into linear measures; (2) how to maintain scaling across analyses; and (3) the inevitable confounding of within cell variance with measurement error. An example shows three methods of…

  12. Ordinal convolutional neural networks for predicting RDoC positive valence psychiatric symptom severity scores.

    PubMed

    Rios, Anthony; Kavuluru, Ramakanth

    2017-11-01

    The CEGS N-GRID 2016 Shared Task in Clinical Natural Language Processing (NLP) provided a set of 1000 neuropsychiatric notes to participants as part of a competition to predict psychiatric symptom severity scores. This paper summarizes our methods, results, and experiences based on our participation in the second track of the shared task. Classical methods of text classification usually fall into one of three problem types: binary, multi-class, and multi-label classification. In this effort, we study ordinal regression problems with text data where misclassifications are penalized differently based on how far apart the ground truth and model predictions are on the ordinal scale. Specifically, we present our entries (methods and results) in the N-GRID shared task in predicting research domain criteria (RDoC) positive valence ordinal symptom severity scores (absent, mild, moderate, and severe) from psychiatric notes. We propose a novel convolutional neural network (CNN) model designed to handle ordinal regression tasks on psychiatric notes. Broadly speaking, our model combines an ordinal loss function, a CNN, and conventional feature engineering (wide features) into a single model which is learned end-to-end. Given interpretability is an important concern with nonlinear models, we apply a recent approach called locally interpretable model-agnostic explanation (LIME) to identify important words that lead to instance specific predictions. Our best model entered into the shared task placed third among 24 teams and scored a macro mean absolute error (MMAE) based normalized score (100·(1-MMAE)) of 83.86. Since the competition, we improved our score (using basic ensembling) to 85.55, comparable with the winning shared task entry. Applying LIME to model predictions, we demonstrate the feasibility of instance specific prediction interpretation by identifying words that led to a particular decision. 
In this paper, we present a method that successfully uses wide features and an ordinal loss function applied to convolutional neural networks for ordinal text classification specifically in predicting psychiatric symptom severity scores. Our approach leads to excellent performance on the N-GRID shared task and is also amenable to interpretability using existing model-agnostic approaches. Copyright © 2017 Elsevier Inc. All rights reserved.
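
    The ordinal loss idea can be sketched in the cumulative-threshold style commonly used for neural ordinal regression (a generic illustration, not the paper's exact loss or network): each severity level is encoded as a vector of threshold indicators, so predictions farther from the true level on the ordinal scale incur a larger penalty.

```python
import numpy as np

# Generic cumulative-threshold sketch of an ordinal loss (not the authors'
# exact loss or CNN): severity levels absent(0), mild(1), moderate(2),
# severe(3) are encoded as K-1 threshold indicators.
K = 4

def ordinal_encode(y):
    # level k -> [1]*k + [0]*(K-1-k); e.g. moderate (2) -> [1, 1, 0]
    return (np.arange(K - 1) < y).astype(float)

def ordinal_loss(logits, y):
    # sum of per-threshold binary cross-entropies
    p = 1.0 / (1.0 + np.exp(-logits))
    t = ordinal_encode(y)
    return -np.sum(t * np.log(p) + (1 - t) * np.log(1 - p))

# A model that predicts "mild" is penalized more when the truth is
# "moderate" than when it is "mild", and more still when it is "severe":
mild_logits = np.array([3.0, -3.0, -3.0])
assert ordinal_loss(mild_logits, 2) > ordinal_loss(mild_logits, 1)
assert ordinal_loss(mild_logits, 3) > ordinal_loss(mild_logits, 2)
```

    This distance-sensitive penalty is exactly what distinguishes ordinal regression from plain multi-class classification in the shared task setting.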

  13. Quantitative characterisation of audio data by ordinal symbolic dynamics

    NASA Astrophysics Data System (ADS)

    Aschenbrenner, T.; Monetti, R.; Amigó, J. M.; Bunk, W.

    2013-06-01

    Ordinal symbolic dynamics has developed into a valuable method to describe complex systems. Recently, using the concept of transcripts, the coupling behaviour of systems was assessed, combining the properties of the symmetric group with information theoretic ideas. In this contribution, methods from the field of ordinal symbolic dynamics are applied to the characterisation of audio data. Coupling complexity between frequency bands of solo violin music, as a fingerprint of the instrument, is used for classification purposes within a support vector machine scheme. Our results suggest that coupling complexity is able to capture essential characteristics, sufficient to distinguish among different violins.
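
    The basic building block of ordinal symbolic dynamics, mapping sliding windows of a signal to permutation patterns from which pattern statistics (and, via transcripts, coupling measures) are derived, can be sketched as follows (function names are illustrative):

```python
import numpy as np

# Each sliding window of a signal is mapped to its permutation (ordinal)
# pattern; statistics of the pattern distribution, here the permutation
# entropy, then characterise the signal.
def ordinal_patterns(x, m=3):
    # symbol for each window = ranking (argsort) of its m samples
    return [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]

x = np.sin(np.linspace(0.0, 4.0 * np.pi, 50))
pats = ordinal_patterns(x, m=3)

# Permutation entropy: Shannon entropy of the empirical pattern distribution
_, counts = np.unique(np.array(pats), axis=0, return_counts=True)
p = counts / counts.sum()
perm_entropy = -np.sum(p * np.log(p))
```

    Applied per frequency band of the audio signal, such pattern distributions are the raw material from which the coupling-complexity features used for violin classification are built.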

  14. Empirical Histograms in Item Response Theory with Ordinal Data

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2007-01-01

    The purpose of this research is to describe, test, and illustrate a new implementation of the empirical histogram (EH) method for ordinal items. The EH method involves the estimation of item response model parameters simultaneously with the approximation of the distribution of the random latent variable (theta) as a histogram. Software for the EH…

  15. A solar radiation model for use in climate studies

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah

    1992-01-01

    A solar radiation routine is developed for use in climate studies that includes absorption and scattering due to ozone, water vapor, oxygen, carbon dioxide, clouds, and aerosols. Rayleigh scattering is also included. Broadband parameterization is used to compute the absorption by water vapor in a clear atmosphere, and the k-distribution method is applied to compute fluxes in a scattering atmosphere. The reflectivity and transmissivity of a scattering layer are computed analytically using the delta-four-stream discrete-ordinate approximation. The two-stream adding method is then applied to compute fluxes for a composite of clear and scattering layers. Compared to the results of high spectral resolution and detailed multiple-scattering calculations, fluxes and heating rates are accurately computed to within a few percent. The high accuracy of the flux and heating-rate calculations is achieved with a reasonable amount of computing time. With the UV and visible region grouped into four bands, this solar radiation routine is useful not only for climate studies but also for studies on photolysis in the upper atmosphere and photosynthesis in the biosphere.
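
    The two-stream adding step mentioned above, compositing the reflectivity and transmissivity of two layers while accounting for multiple reflections between them, follows a standard closed form; a minimal sketch (a textbook simplification assuming homogeneous layers with the same reflectivity from either side, not the routine's actual code):

```python
# Two-stream "adding" of layers: the denominator sums the geometric series
# of inter-reflections bouncing between layer 1 and layer 2.
def add_layers(R1, T1, R2, T2):
    denom = 1.0 - R1 * R2
    R = R1 + T1 * R2 * T1 / denom   # reflected directly or via layer 2
    T = T1 * T2 / denom             # transmitted through both layers
    return R, T

# Two non-absorbing layers (R + T = 1 each) conserve energy when combined:
R, T = add_layers(0.3, 0.7, 0.3, 0.7)
assert abs(R + T - 1.0) < 1e-12
# Absorbing layers (R + T < 1) lose energy, as expected:
Ra, Ta = add_layers(0.2, 0.5, 0.2, 0.5)
assert Ra + Ta < 1.0
```

    Applying this step layer by layer down the column is what lets the routine composite clear and scattering layers cheaply.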

  16. Non-gray gas radiation effect on mixed convection in lid driven square cavity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cherifi, Mohammed, E-mail: production1998@yahoo.fr; Benbrik, Abderrahmane, E-mail: abenbrik@umbb.dz; Laouar-Meftah, Siham, E-mail: laouarmeftah@gmail.com

    A numerical study is performed to investigate the effect of non-gray radiation on mixed convection in a vertical two-sided lid-driven square cavity filled with an air-H2O-CO2 gas mixture. The vertical moving walls of the enclosure are maintained at two different but uniform temperatures. The horizontal walls are thermally insulated and considered adiabatic. The governing differential equations are solved by a finite-volume method, and the SIMPLE algorithm is adopted to solve the pressure-velocity coupling. The radiative transfer equation (RTE) is solved by the discrete ordinates method (DOM). The spectral line weighted sum of gray gases model (SLW) is used to account for non-gray radiation properties. Simulations are performed in configurations where thermal and shear forces induce cooperating buoyancy forces. Streamlines, isotherms, and Nusselt numbers are analyzed for three different values of Richardson's number (from 0.1 to 10) and under three different medium assumptions (transparent medium, gray medium using the Planck mean absorption coefficient, and non-gray medium).

  17. Estimation of the latent mediated effect with ordinal data using the limited-information and Bayesian full-information approaches.

    PubMed

    Chen, Jinsong; Zhang, Dake; Choi, Jaehwa

    2015-12-01

    It is common to encounter latent variables with ordinal data in social or behavioral research. Although a mediated effect of latent variables (latent mediated effect, or LME) with ordinal data may appear to be a straightforward combination of LME with continuous data and latent variables with ordinal data, the methodological challenges to combine the two are not trivial. This research covers model structures as complex as LME and formulates both point and interval estimates of LME for ordinal data using the Bayesian full-information approach. We also combine weighted least squares (WLS) estimation with the bias-corrected bootstrapping (BCB; Efron Journal of the American Statistical Association, 82, 171-185, 1987) method or the traditional delta method as the limited-information approach. We evaluated the viability of these different approaches across various conditions through simulation studies, and provide an empirical example to illustrate the approaches. We found that the Bayesian approach with reasonably informative priors is preferred when both point and interval estimates are of interest and the sample size is 200 or above.

  18. Year End Progress Report on Rattlesnake Improvements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yaqi; DeHart, Mark David; Gleicher, Frederick Nathan

    Rattlesnake is a MOOSE-based radiation transport application developed at INL to support modern multi-physics simulations. At the beginning of the last year, Rattlesnake was able to perform steady-state, transient, and eigenvalue calculations for the multigroup radiation transport equations. Various discretization schemes have been implemented, including the continuous finite element method (FEM) with the discrete ordinates method (SN) and the spherical harmonics expansion method (PN) for the self-adjoint angular flux (SAAF) formulation, continuous FEM (CFEM) with SN for the least-squares (LS) formulation, and the diffusion approximation with CFEM and discontinuous FEM (DFEM). A separate toolkit, YAKXS, for multigroup cross section management was developed to support Rattlesnake calculations with feedback both from changes in field variables, such as fuel temperature and coolant density, and from changes in isotope inventory. The framework for nonlinear diffusion acceleration (NDA) within Rattlesnake has been set up, and NDA calculations have been performed both with the SAAF-SN-CFEM scheme and with Monte Carlo via OpenMC. Rattlesnake was also coupled with BISON and RELAP-7 for full-core multiphysics simulations. Within the last fiscal year, significant improvements have been made in Rattlesnake. Rattlesnake development was migrated into our internal GITLAB development environment at the end of year 2014; since then, a total of 369 merge requests have been accepted into Rattlesnake. It is noted that the MOOSE framework on which Rattlesnake is based is under continuous development, and improvements made in MOOSE can improve Rattlesnake; it is acknowledged that MOOSE developers spent effort patching Rattlesnake for the improvements made on the framework side. This report will not cover the code restructuring for better readability and modularity, or the documentation improvements, on which we have spent tremendous effort; it details only some of the improvements in the following sections.

  19. Earth's rotation irregularities derived from UTIBLI by method of multi-composing of ordinates

    NASA Astrophysics Data System (ADS)

    Segan, S.; Damjanov, I.; Surlan, B.

    Using the method of multi-composing of ordinates we have identified in Earth's rotation a long-periodic term with a period similar to the relaxation time of Chandler nutation. There was not enough information to assess its origin. We demonstrate that the method can be used even in the case when the data time span is comparable to the period of harmonic component.

  20. A comparison of three methods of assessing differential item functioning (DIF) in the Hospital Anxiety Depression Scale: ordinal logistic regression, Rasch analysis and the Mantel chi-square procedure.

    PubMed

    Cameron, Isobel M; Scott, Neil W; Adler, Mats; Reid, Ian C

    2014-12-01

    It is important for clinical practice and research that measurement scales of well-being and quality of life exhibit only minimal differential item functioning (DIF). DIF occurs where different groups of people endorse items in a scale to different extents after being matched by the intended scale attribute. We investigate the equivalence or otherwise of common methods of assessing DIF. Three methods of measuring age- and sex-related DIF (ordinal logistic regression, Rasch analysis, and the Mantel χ² procedure) were applied to Hospital Anxiety Depression Scale (HADS) data pertaining to a sample of 1,068 patients consulting primary care practitioners. Three items were flagged by all three approaches as having either age- or sex-related DIF with a consistent direction of effect; a further three items identified did not meet stricter criteria for important DIF using at least one method. When applying strict criteria for significant DIF, ordinal logistic regression was slightly less sensitive. Ordinal logistic regression, Rasch analysis, and contingency table methods yielded consistent results when identifying DIF in the HADS depression and HADS anxiety scales. Regardless of the methods applied, investigators should use a combination of statistical significance, magnitude of the DIF effect, and investigator judgement when interpreting the results.

  1. Multitasking TORT under UNICOS: Parallel performance models and measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, A.; Azmy, Y.Y.

    1999-09-27

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The results of the comparison of parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  3. Predicting multi-level drug response with gene expression profile in multiple myeloma using hierarchical ordinal regression.

    PubMed

    Zhang, Xinyan; Li, Bingzong; Han, Huiying; Song, Sha; Xu, Hongxia; Hong, Yating; Yi, Nengjun; Zhuang, Wenzhuo

    2018-05-10

    Multiple myeloma (MM), like other cancers, is caused by the accumulation of genetic abnormalities. Heterogeneity exists in the patients' response to treatments, for example, bortezomib. This urges efforts to identify biomarkers from numerous molecular features and build predictive models for identifying patients that can benefit from a certain treatment scheme. However, previous studies treated the multi-level ordinal drug response as a binary response where only responsive and non-responsive groups are considered. It is desirable to directly analyze the multi-level drug response, rather than combining the response to two groups. In this study, we present a novel method to identify significantly associated biomarkers and then develop ordinal genomic classifier using the hierarchical ordinal logistic model. The proposed hierarchical ordinal logistic model employs the heavy-tailed Cauchy prior on the coefficients and is fitted by an efficient quasi-Newton algorithm. We apply our hierarchical ordinal regression approach to analyze two publicly available datasets for MM with five-level drug response and numerous gene expression measures. Our results show that our method is able to identify genes associated with the multi-level drug response and to generate powerful predictive models for predicting the multi-level response. The proposed method allows us to jointly fit numerous correlated predictors and thus build efficient models for predicting the multi-level drug response. The predictive model for the multi-level drug response can be more informative than the previous approaches. Thus, the proposed approach provides a powerful tool for predicting multi-level drug response and has important impact on cancer studies.

  4. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    PubMed

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Detection of illegal transfer of videos over the Internet

    NASA Astrophysics Data System (ADS)

    Chaisorn, Lekha; Sainui, Janya; Manders, Corey

    2010-07-01

    In this paper, a method for detecting infringements or modifications of a video in real time is proposed. The method first segments a video stream into shots, after which it extracts some reference frames as keyframes. This process is performed using a Singular Value Decomposition (SVD) technique developed in this work. Next, for each input video (represented by its keyframes), an ordinal-based signature and SIFT (Scale Invariant Feature Transform) descriptors are generated. The ordinal-based method employs a two-level bitmap indexing scheme to construct the index for each video signature. The first level clusters all input keyframes into k clusters, while the second level converts the ordinal-based signatures into bitmap vectors. The SIFT-based method, on the other hand, directly uses the descriptors as the index. Given a suspect video (being streamed or transferred on the Internet), we generate its signature (ordinal and SIFT descriptors) and then compute the similarity between that signature and the signatures in the database, based on the ordinal signature and the SIFT descriptors separately. For the similarity measure, Boolean operators are utilized during the matching process in addition to the Euclidean distance. We have tested our system in several experiments on 50 videos (each about 1/2 hour in duration) obtained from the TRECVID 2006 data set. For the experimental setup, we refer to the conditions provided by TRECVID 2009 for the "Content-based copy detection" task; in addition, we refer to the requirements issued in the call for proposals by the MPEG standard for a similar task. Initial results show that our framework is effective and robust. Compared with our previous work, in addition to the reductions in storage space and computation time achieved in the ordinal-based method, introducing the SIFT features raises the overall accuracy to an F1 measure of about 96% (an improvement of about 8%).
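
    The rank-based nature of an ordinal signature can be sketched as follows (the block grid and function names are illustrative, not the paper's exact parameters): the frame is divided into blocks and only the rank order of the block mean intensities is kept, which makes the signature robust to global brightness and contrast changes.

```python
import numpy as np

# Illustrative ordinal-based keyframe signature: divide the frame into a
# grid of blocks, average each block, and keep only the rank order of the
# block means. Ranks survive global intensity changes, which suits copy
# detection of re-encoded or re-graded videos.
def ordinal_signature(frame, grid=3):
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    means = [frame[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].mean()
             for i in range(grid) for j in range(grid)]
    return np.argsort(np.argsort(means))  # rank of each block's mean

frame = np.arange(81, dtype=float).reshape(9, 9)
sig = ordinal_signature(frame)
# Invariant under an affine intensity change applied to the whole frame:
assert np.array_equal(sig, ordinal_signature(2.0 * frame + 10.0))
```

    Signatures like this one are what the two-level bitmap index described above clusters and converts to bitmap vectors for fast matching.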

  6. Coupled atmosphere/canopy model for remote sensing of plant reflectance features

    NASA Technical Reports Server (NTRS)

    Gerstl, S. A.; Zardecki, A.

    1985-01-01

    Solar radiative transfer through a coupled system of atmosphere and plant canopy is modeled as a multiple-scattering problem through a layered medium of random scatterers. The radiative transfer equation is solved by the discrete-ordinates finite-element method. Analytic expressions are derived that allow the calculation of scattering and absorption cross sections for any plant canopy layer from measurable biophysical parameters such as the leaf area index, leaf angle distribution, and individual leaf reflectance and transmittance data. An expression for a canopy scattering phase function is also given. Computational results are in good agreement with spectral reflectance measurements taken directly above a soybean canopy, and the concept of greenness and brightness transforms of Landsat MSS data is reconfirmed with the computed results. A sensitivity analysis with the coupled atmosphere/canopy model quantifies how satellite-sensed spectral radiances are affected by increased atmospheric aerosols, by varying leaf area index, by anisotropic leaf scattering, and by non-Lambertian soil boundary conditions. Possible extensions to a 2-D model are also discussed.

  7. Effects of Gravity on Soot Formation in a Coflow Laminar Methane/Air Diffusion Flame

    NASA Astrophysics Data System (ADS)

    Kong, Wenjun; Liu, Fengshan

    2010-04-01

    Simulations of a laminar coflow methane/air diffusion flame at atmospheric pressure are conducted to gain a better understanding of the effects of gravity on soot formation, using detailed gas-phase chemistry and complex thermal and transport properties coupled with a semiempirical two-equation soot model and a nongray radiation model. Soot oxidation by O2, OH and O was considered. Thermal radiation was calculated using the discrete ordinate method coupled with a statistical narrow-band correlated-K model. The spectral absorption coefficient of soot was obtained from Rayleigh's theory for small particles. The results show that the peak temperature decreases with the decrease of the gravity level. The peak soot volume fraction in microgravity is about twice that in normal gravity under the present conditions. The numerical results agree very well with available experimental results. The predicted results also show that gravity affects the location and intensity of soot nucleation and surface growth.

  8. Multidimensional Modeling of Atmospheric Effects and Surface Heterogeneities on Remote Sensing

    NASA Technical Reports Server (NTRS)

    Gerstl, S. A. W.; Simmer, C.; Zardecki, A. (Principal Investigator)

    1985-01-01

    The overall goal of this project is to establish a modeling capability that allows a quantitative determination of atmospheric effects on remote sensing including the effects of surface heterogeneities. This includes an improved understanding of aerosol and haze effects in connection with structural, angular, and spatial surface heterogeneities. One important objective of the research is the possible identification of intrinsic surface or canopy characteristics that might be invariant to atmospheric perturbations so that they could be used for scene identification. Conversely, an equally important objective is to find a correction algorithm for atmospheric effects in satellite-sensed surface reflectances. The technical approach is centered around a systematic model and code development effort based on existing, highly advanced computer codes that were originally developed for nuclear radiation shielding applications. Computational techniques for the numerical solution of the radiative transfer equation are adapted on the basis of the discrete-ordinates finite-element method which proved highly successful for one and two-dimensional radiative transfer problems with fully resolved angular representation of the radiation field.

  9. Radiative transfer code SHARM for atmospheric and terrestrial applications

    NASA Astrophysics Data System (ADS)

    Lyapustin, A. I.

    2005-12-01

    An overview of the publicly available radiative transfer Spherical Harmonics code (SHARM) is presented. SHARM is a rigorous code, as accurate as the Discrete Ordinate Radiative Transfer (DISORT) code, yet faster. It performs simultaneous calculations for different solar zenith angles, view zenith angles, and view azimuths and allows the user to make multiwavelength calculations in one run. The Δ-M method is implemented for calculations with highly anisotropic phase functions. Rayleigh scattering is automatically included as a function of wavelength, surface elevation, and the selected vertical profile of one of the standard atmospheric models. The current version of the SHARM code does not explicitly include atmospheric gaseous absorption, which should be provided by the user. The SHARM code has several built-in models of the bidirectional reflectance of land and wind-ruffled water surfaces that are most widely used in research and satellite data processing. A modification of the SHARM code with the built-in Mie algorithm designed for calculations with spherical aerosols is also described.
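
    The Δ-M method mentioned above truncates the strong forward peak of an anisotropic phase function at M Legendre terms and rescales the optical properties to compensate. A hedged sketch of the standard Wiscombe (1977) similarity scaling, using Henyey-Greenstein moments purely for illustration (the exact conventions inside SHARM are not reproduced here):

```python
import numpy as np

def delta_m_scale(chi, omega, tau, M):
    """Delta-M scaling of a phase-function Legendre expansion.

    chi   : Legendre moments chi_0..chi_N of the phase function (chi_0 = 1)
    omega : single-scattering albedo
    tau   : optical depth
    M     : truncation order
    Returns scaled moments chi'_0..chi'_{M-1}, omega', tau' with
        f      = chi_M                      (truncated forward-peak fraction)
        chi'_l = (chi_l - f) / (1 - f)
        omega' = (1 - f) omega / (1 - f omega)
        tau'   = (1 - f omega) tau
    """
    f = chi[M]
    chi_p = (chi[:M] - f) / (1.0 - f)
    omega_p = (1.0 - f) * omega / (1.0 - f * omega)
    tau_p = (1.0 - f * omega) * tau
    return chi_p, omega_p, tau_p

# Henyey-Greenstein moments chi_l = g**l for a strongly forward phase function
g = 0.85
chi = g ** np.arange(0, 17)
chi_p, omega_p, tau_p = delta_m_scale(chi, omega=0.9, tau=1.0, M=16)
assert abs(chi_p[0] - 1.0) < 1e-12   # zeroth moment stays normalized
```

    The scaled problem is less anisotropic and so converges with far fewer discrete-ordinate streams, which is part of why codes like SHARM and DISORT apply this transformation by default.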

  10. Radiative transfer code SHARM for atmospheric and terrestrial applications.

    PubMed

    Lyapustin, A I

    2005-12-20

    An overview of the publicly available radiative transfer Spherical Harmonics code (SHARM) is presented. SHARM is a rigorous code, as accurate as the Discrete Ordinate Radiative Transfer (DISORT) code, yet faster. It performs simultaneous calculations for different solar zenith angles, view zenith angles, and view azimuths and allows the user to make multiwavelength calculations in one run. The Delta-M method is implemented for calculations with highly anisotropic phase functions. Rayleigh scattering is automatically included as a function of wavelength, surface elevation, and the selected vertical profile of one of the standard atmospheric models. The current version of the SHARM code does not explicitly include atmospheric gaseous absorption, which should be provided by the user. The SHARM code has several built-in models of the bidirectional reflectance of land and wind-ruffled water surfaces that are most widely used in research and satellite data processing. A modification of the SHARM code with the built-in Mie algorithm designed for calculations with spherical aerosols is also described.

  11. Normalization of a collimated 14.7 MeV neutron source in a neutron spectrometry system for benchmark experiments

    NASA Astrophysics Data System (ADS)

    Ofek, R.; Tsechanski, A.; Shani, G.

    1988-05-01

    In the present study a method for normalizing a collimated 14.7 MeV neutron beam is introduced. It combines a measurement of the fast-neutron scalar flux passing through the collimator, using copper-foil activation, with a neutron transport calculation of the foil activation per unit source neutron, carried out with the discrete-ordinates transport code DOT 4.2. The geometry of the collimated neutron beam consists of a D-T neutron source positioned 30 cm in front of a 6 cm diameter collimator that passes through a 120 cm thick paraffin wall. The neutron flux emitted from the D-T source was counted by an NE-213 scintillator simultaneously with the irradiation of the copper foil. The normalization factor of the D-T source thus determined is used for an absolute flux calibration of the NE-213 scintillator. The major contributions to the uncertainty in the determination of the normalization factor, and their origins, are discussed.

  12. Investigation on combustion characteristics and NO formation of methane with swirling and non-swirling high temperature air

    NASA Astrophysics Data System (ADS)

    Li, Xing; Jia, Li

    2014-10-01

    Combustion characteristics of methane jet flames in an industrial burner operating in the high-temperature combustion regime were investigated experimentally and numerically to clarify the effects of swirling high-temperature air on combustion. The Speziale-Sarkar-Gatski (SSG) Reynolds stress model, the Eddy-Dissipation Model (EDM), and the Discrete Ordinates Method (DOM) combined with the Weighted-Sum-of-Grey-Gases (WSGG) model were employed for the numerical simulation. Both the thermal-NO and prompt-NO mechanisms were considered to evaluate NO formation. Measured and computed temperature distributions and NO emissions show that the combustion characteristics of methane jet flames differ markedly between the swirling and non-swirling patterns: non-swirling high-temperature air produced high NO formation, while swirling high-temperature air achieved significant NO suppression. Furthermore, the computed velocity fields, dimensionless major-species mole fraction distributions, and thermal-NO molar reaction rate profiles indicate that an inner exhaust-gas recirculation formed in the combustion zone in the swirling case.

  13. Overstatement in happiness reporting with ordinal, bounded scale.

    PubMed

    Tanaka, Saori C; Yamada, Katsunori; Kitada, Ryo; Tanaka, Satoshi; Sugawara, Sho K; Ohtake, Fumio; Sadato, Norihiro

    2016-02-18

    There are various methods by which people can express subjective evaluations quantitatively. For example, happiness can be measured on a scale from 1 to 10, and has been suggested as a measure of economic policy. However, there is resistance to these types of measurement from economists, who often regard welfare to be a cardinal, unbounded quantity. It is unclear whether there are differences between subjective evaluation reported on ordinal, bounded scales and on cardinal, unbounded scales. To answer this question, we developed functional magnetic resonance imaging experimental tasks for reporting happiness from monetary gain and the perception of visual stimulus. Subjects tended to report higher values when they used ordinal scales instead of cardinal scales. There were differences in neural activation between ordinal and cardinal reporting scales. The posterior parietal area showed greater activation when subjects used an ordinal scale instead of a cardinal scale. Importantly, the striatum exhibited greater activation when asked to report happiness on an ordinal scale than when asked to report on a cardinal scale. The finding that ordinal (bounded) scales are associated with higher reported happiness and greater activation in the reward system shows that overstatement bias in happiness data must be considered.

  14. Sample size and power estimation for studies with health related quality of life outcomes: a comparison of four methods using the SF-36.

    PubMed

    Walters, Stephen J

    2004-05-25

    We describe and compare four different methods for estimating sample size and power, when the primary outcome of the study is a Health Related Quality of Life (HRQoL) measure. These methods are: 1. assuming a Normal distribution and comparing two means; 2. using a non-parametric method; 3. Whitehead's method based on the proportional odds model; 4. the bootstrap. We illustrate the various methods, using data from the SF-36. For simplicity this paper deals with studies designed to compare the effectiveness (or superiority) of a new treatment compared to a standard treatment at a single point in time. The results show that if the HRQoL outcome has a limited number of discrete values (< 7) and/or the expected proportion of cases at the boundaries is high (scoring 0 or 100), then we would recommend using Whitehead's method (Method 3). Alternatively, if the HRQoL outcome has a large number of distinct values and the proportion at the boundaries is low, then we would recommend using Method 1. If a pilot or historical dataset is readily available (to estimate the shape of the distribution) then bootstrap simulation (Method 4) based on this data will provide a more accurate and reliable sample size estimate than conventional methods (Methods 1, 2, or 3). In the absence of a reliable pilot set, bootstrapping is not appropriate and conventional methods of sample size estimation or simulation will need to be used. Fortunately, with the increasing use of HRQoL outcomes in research, historical datasets are becoming more readily available. Strictly speaking, our results and conclusions only apply to the SF-36 outcome measure. Further empirical work is required to see whether these results hold true for other HRQoL outcomes. 
However, the SF-36 has many features in common with other HRQoL outcomes: multi-dimensional, ordinal or discrete response categories with upper and lower bounds, and skewed distributions, so therefore, we believe these results and conclusions using the SF-36 will be appropriate for other HRQoL measures.
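
    Whitehead's proportional-odds method (Method 3) admits a compact closed form for two equal-sized groups. A hedged sketch, using the commonly cited formula N = 6(z_{1-α/2} + z_{1-β})² / [(log OR)² (1 − Σ p̄_i³)], where p̄_i averages the category proportions over the two arms; the category probabilities and odds ratio below are illustrative and are not taken from the SF-36 data:

```python
from math import log
from statistics import NormalDist

def whitehead_n_total(p_ctrl, odds_ratio, alpha=0.05, power=0.9):
    """Total sample size (two equal arms) for an ordinal outcome under the
    proportional-odds model (Whitehead's formula):
        N = 6 (z_{1-a/2} + z_{1-b})^2 / ((log OR)^2 (1 - sum(pbar_i^3)))
    p_ctrl: control-arm category probabilities (must sum to 1).
    """
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    # Treatment-arm cumulative probabilities implied by the odds-ratio shift:
    #   Q_trt = OR * Q / (1 - Q + OR * Q)
    cum, cum_trt = 0.0, []
    for p in p_ctrl:
        cum += p
        cum_trt.append(odds_ratio * cum / (1 - cum + odds_ratio * cum)
                       if cum < 1 else 1.0)
    p_trt = [cum_trt[0]] + [cum_trt[i] - cum_trt[i - 1]
                            for i in range(1, len(cum_trt))]
    pbar = [(pc + pt) / 2 for pc, pt in zip(p_ctrl, p_trt)]
    denom = log(odds_ratio) ** 2 * (1 - sum(p ** 3 for p in pbar))
    return 6 * (za + zb) ** 2 / denom

# Illustrative 4-category outcome, 90% power to detect an odds ratio of 1.5
n = whitehead_n_total([0.2, 0.3, 0.3, 0.2], odds_ratio=1.5)
```

    The (1 − Σ p̄_i³) factor shows why the method suits heavily bounded outcomes: a large spike of scores at 0 or 100 inflates Σ p̄_i³ and hence the required sample size.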

  15. The reconstruction of atomic co-ordinates from a protein stereo ribbon diagram when additional information for sufficient sidechain positions is available

    NASA Astrophysics Data System (ADS)

    Lopes de Oliveira, Paulo Sérgio; Garratt, Richard Charles

    1998-11-01

    We describe the application of a method for the reconstruction of three-dimensional atomic co-ordinates from a stereo ribbon diagram of a protein when additional information for some of the sidechain positions is available. The method has applications in cases where the 3D co-ordinates have not been made available by any means other than the original publication and are of interest as models for molecular replacement, homology modelling etc. The approach is, on the one hand, more general than other methods which are based on stereo figures which present specific atomic positions, but on the other hand relies on input from a specialist. Its exact implementation will depend on the figure of interest. We have applied the method to the case of the α-d-galactose-binding lectin jacalin with a resultant RMS deviation, compared to the crystal structure, of 1.5 Å for the 133 Cα positions of the α-chain and 2.6 Å for the less regular β-chain. The success of the method depends on the secondary structure of the protein under consideration and the orientation of the stereo diagram itself but can be expected to reproduce the mainchain co-ordinates more accurately than the sidechains. Some ways in which the method may be generalised to other cases are discussed.

  16. DSSPcont: continuous secondary structure assignments for proteins

    PubMed Central

    Carter, Phil; Andersen, Claus A. F.; Rost, Burkhard

    2003-01-01

    The DSSP program automatically assigns the secondary structure for each residue from the three-dimensional co-ordinates of a protein structure to one of eight states. However, discrete assignments are incomplete in that they cannot capture the continuum of thermal fluctuations. Therefore, DSSPcont (http://cubic.bioc.columbia.edu/services/DSSPcont) introduces a continuous assignment of secondary structure that replaces ‘static’ by ‘dynamic’ states. Technically, the continuum results from calculating weighted averages over 10 discrete DSSP assignments with different hydrogen bond thresholds. A DSSPcont assignment for a particular residue is a percentage likelihood of eight secondary structure states, derived from a weighted average of the ten DSSP assignments. The continuous assignments have two important features: (i) they reflect the structural variations due to thermal fluctuations as detected by NMR spectroscopy; and (ii) they reproduce the structural variation between many NMR models from one single model. Therefore, functionally important variation can be extracted from a single X-ray structure using the continuous assignment procedure. PMID:12824310
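
    The weighted-average construction described above can be sketched directly. The actual DSSPcont weights per hydrogen-bond threshold are not reproduced here; uniform weights and a toy assignment are used purely for illustration:

```python
import numpy as np

STATES = list("HGIEBTSC")   # the eight DSSP secondary-structure states

def continuous_assignment(assignments, weights):
    """Turn 10 discrete DSSP assignments (one per hydrogen-bond threshold)
    into a percentage likelihood over the 8 states via a weighted average."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    likelihood = np.zeros(len(STATES))
    for state, weight in zip(assignments, w):
        likelihood[STATES.index(state)] += weight
    return 100.0 * likelihood   # percentages summing to 100

# e.g. a residue assigned helix at 7 thresholds and turn at the other 3
pct = continuous_assignment(list("HHHHHHHTTT"), weights=[1] * 10)
```

    A residue that flips between states across thresholds thus receives a split likelihood (here 70% H, 30% T) rather than a single static label, which is how the continuum reflects thermal fluctuations.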

  17. Causal mediation analysis with a binary outcome and multiple continuous or ordinal mediators: Simulations and application to an alcohol intervention.

    PubMed

    Nguyen, Trang Quynh; Webb-Vargas, Yenny; Koning, Ina M; Stuart, Elizabeth A

    We investigate a method to estimate the combined effect of multiple continuous/ordinal mediators on a binary outcome: 1) fit a structural equation model with probit link for the outcome and identity/probit link for continuous/ordinal mediators, 2) predict potential outcome probabilities, and 3) compute natural direct and indirect effects. Step 2 involves rescaling the latent continuous variable underlying the outcome to address residual mediator variance/covariance. We evaluate the estimation of risk-difference- and risk-ratio-based effects (RDs, RRs) using the ML, WLSMV and Bayes estimators in Mplus. Across most variations in path-coefficient and mediator-residual-correlation signs and strengths, and confounding situations investigated, the method performs well with all estimators, but favors ML/WLSMV for RDs with continuous mediators, and Bayes for RRs with ordinal mediators. Bayes outperforms WLSMV/ML regardless of mediator type when estimating RRs with small potential outcome probabilities and in two other special cases. An adolescent alcohol prevention study is used for illustration.

  18. Measuring information interactions on the ordinal pattern of stock time series

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaojun; Shang, Pengjian; Wang, Jing

    2013-02-01

    The interactions among time series as individual components of complex systems can be quantified by measuring to what extent they exchange information among each other. In many applications, one focuses not on the original series but on its ordinal pattern. In such cases, trivial noises appear more likely to be filtered and the abrupt influence of extreme values can be weakened. Cross-sample entropy and inner composition alignment have been introduced as prominent methods to estimate the information interactions of complex systems. In this paper, we modify both methods to detect the interactions among the ordinal pattern of stock return and volatility series, and we try to uncover the information exchanges across sectors in Chinese stock markets.
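
    Mapping a series to its ordinal pattern, as described above, amounts to replacing each sliding window by the permutation that sorts it; the subsequent entropy and alignment measures then operate on these symbol sequences. A minimal sketch:

```python
def ordinal_pattern(window):
    """Permutation (ordinal pattern) of a window: the index order that
    sorts its values, so only the rank structure is retained."""
    return tuple(sorted(range(len(window)), key=window.__getitem__))

def pattern_series(x, m=3):
    """Map a series to its sequence of order-m ordinal patterns."""
    return [ordinal_pattern(x[i:i + m]) for i in range(len(x) - m + 1)]

x = [4, 7, 9, 10, 6, 11, 3]
pats = pattern_series(x, m=3)
# The first window (4, 7, 9) is increasing, giving pattern (0, 1, 2).
assert pats[0] == (0, 1, 2)
```

    Because only ranks survive, an extreme return (e.g. replacing 11 by 1100) leaves the pattern sequence unchanged, which is precisely the robustness to extreme values noted in the abstract.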

  19. Non-proportional odds multivariate logistic regression of ordinal family data.

    PubMed

    Zaloumis, Sophie G; Scurrah, Katrina J; Harrap, Stephen B; Ellis, Justine A; Gurrin, Lyle C

    2015-03-01

    Methods to examine whether genetic and/or environmental sources can account for the residual variation in ordinal family data usually assume proportional odds. However, standard software to fit the non-proportional odds model to ordinal family data is limited because the correlation structure of family data is more complex than for other types of clustered data. To perform these analyses we propose the non-proportional odds multivariate logistic regression model and take a simulation-based approach to model fitting using Markov chain Monte Carlo methods, such as partially collapsed Gibbs sampling and the Metropolis algorithm. We applied the proposed methodology to male pattern baldness data from the Victorian Family Heart Study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Linear Characteristic Spatial Quadrature for Discrete Ordinates Neutral Particle Transport on Arbitrary Triangles

    DTIC Science & Technology

    1993-06-01

    Ω·∇ψ(r, Ω) + σ_t(r) ψ(r, Ω) = σ_s(r) φ(r) + S_EXT(r). (4) The scalar flux, φ, is related to the angular flux, ψ, by φ(r) = ∫ dΩ ψ(r, Ω) (5) and the particle current, J, by J(r) = ∫ dΩ Ω ψ(r, Ω). In the (u, v) triangle co-ordinates the characteristic form reads μ′ ∂ψ(u, v, μ′)/∂u + σ_t(u, v) ψ(u, v, μ′) = σ_s(u, v) φ(u, v) + S_EXT(u, v). (92) Assuming the area of the triangle is sufficiently small that the cross sections are constant over it, the edge balance of Eq. (98) follows, where ψ_IN and ψ_OUT are angular flux averages along the input and output edges, respectively, and are defined by ψ_IN = (1/ℓ) ∫ ds ψ(s, ν). (99)

  1. Co-Ordinating Education during Emergencies and Reconstruction: Challenges and Responsibilities

    ERIC Educational Resources Information Center

    Sommers, Marc

    2004-01-01

    While co-ordination is essentially a method of getting institutions to work together, it is clearly not synonymous with togetherness. Undercurrents of suspicion and distrust between individuals and institutional actors can affect important relationships and give rise to enduring misunderstandings and perplexing challenges. Turf battles involving…

  2. Multi-time-scale heat transfer modeling of turbid tissues exposed to short-pulsed irradiations.

    PubMed

    Kim, Kyunghan; Guo, Zhixiong

    2007-05-01

    A combined hyperbolic radiation and conduction heat transfer model is developed to simulate multi-time-scale heat transfer in turbid tissues exposed to short-pulsed irradiations. The initial temperature response of a tissue to an ultrashort pulse irradiation is analyzed by the volume-average method in combination with the transient discrete ordinates method for modeling the ultrafast radiation heat transfer. This response is found to reach pseudo steady state within 1 ns for the considered tissues. The single-pulse result is then utilized to obtain the temperature response to pulse-train irradiation at the microsecond/millisecond time scales. After that, the temperature field is predicted by the hyperbolic heat conduction model, which is solved by MacCormack's scheme with error-term correction. Finally, the hyperbolic conduction is compared with the traditional parabolic heat diffusion model. It is found that the maximum local temperatures are larger in the hyperbolic prediction than in the parabolic prediction. In the modeled dermis tissue, a 7% non-dimensional temperature increase is found. After about 10 thermal relaxation times, thermal waves fade away and the predictions of the hyperbolic and parabolic models are consistent.
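
    The step from the single-pulse response to the pulse-train response relies on linearity of the thermal problem: shifted copies of the (pseudo-steady) single-pulse response are superposed. A toy sketch of that superposition, where the decay profile is illustrative and not the paper's computed response:

```python
import numpy as np

def pulse_train_response(single, period_steps, n_pulses, length):
    """Superpose shifted copies of a single-pulse temperature response to
    build the response to a pulse train, assuming a linear thermal problem."""
    out = np.zeros(length)
    for k in range(n_pulses):
        start = k * period_steps
        stop = min(start + len(single), length)
        out[start:stop] += single[:stop - start]
    return out

single = np.exp(-np.linspace(0.0, 5.0, 50))   # toy single-pulse decay
train = pulse_train_response(single, period_steps=20, n_pulses=3, length=100)
assert train.max() >= single.max()            # accumulation raises the peak
```

    When the inter-pulse period is short relative to the decay time, the residual heat from earlier pulses accumulates, which is why the microsecond/millisecond pulse-train scale must be treated separately from the nanosecond single-pulse scale.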

  3. Minimization of annotation work: diagnosis of mammographic masses via active learning

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Zhang, Jingyang; Xie, Hongzhi; Zhang, Shuyang; Gu, Lixu

    2018-06-01

    The prerequisite for establishing an effective prediction system for mammographic diagnosis is the annotation of each mammographic image. The manual annotation work is time-consuming and laborious, which becomes a great hindrance for researchers. In this article, we propose a novel active learning algorithm that can adequately address this problem, leading to the minimization of the labeling costs on the premise of guaranteed performance. Our proposed method is different from the existing active learning methods designed for the general problem as it is specifically designed for mammographic images. Through its modified discriminant functions and improved sample query criteria, the proposed method can fully utilize the pairing of mammographic images and select the most valuable images from both the mediolateral and craniocaudal views. Moreover, in order to extend active learning to the ordinal regression problem, which has no precedent in existing studies, but is essential for mammographic diagnosis (mammographic diagnosis is not only a classification task, but also an ordinal regression task for predicting an ordinal variable, viz. the malignancy risk of lesions), multiple sample query criteria need to be taken into consideration simultaneously. We formulate it as a criteria integration problem and further present an algorithm based on self-adaptive weighted rank aggregation to achieve a good solution. The efficacy of the proposed method was demonstrated on thousands of mammographic images from the digital database for screening mammography. The labeling costs of obtaining optimal performance in the classification and ordinal regression task respectively fell to 33.8 and 19.8 percent of their original costs. The proposed method also generated 1228 wins, 369 ties and 47 losses for the classification task, and 1933 wins, 258 ties and 185 losses for the ordinal regression task compared to the other state-of-the-art active learning algorithms. 
By taking into account the particularities of mammographic images, the proposed active learning method can indeed reduce the manual annotation work to a great extent without sacrificing the performance of the prediction system for mammographic diagnosis.

  4. Minimization of annotation work: diagnosis of mammographic masses via active learning.

    PubMed

    Zhao, Yu; Zhang, Jingyang; Xie, Hongzhi; Zhang, Shuyang; Gu, Lixu

    2018-05-22

    The prerequisite for establishing an effective prediction system for mammographic diagnosis is the annotation of each mammographic image. The manual annotation work is time-consuming and laborious, which becomes a great hindrance for researchers. In this article, we propose a novel active learning algorithm that can adequately address this problem, leading to the minimization of the labeling costs on the premise of guaranteed performance. Our proposed method is different from the existing active learning methods designed for the general problem as it is specifically designed for mammographic images. Through its modified discriminant functions and improved sample query criteria, the proposed method can fully utilize the pairing of mammographic images and select the most valuable images from both the mediolateral and craniocaudal views. Moreover, in order to extend active learning to the ordinal regression problem, which has no precedent in existing studies, but is essential for mammographic diagnosis (mammographic diagnosis is not only a classification task, but also an ordinal regression task for predicting an ordinal variable, viz. the malignancy risk of lesions), multiple sample query criteria need to be taken into consideration simultaneously. We formulate it as a criteria integration problem and further present an algorithm based on self-adaptive weighted rank aggregation to achieve a good solution. The efficacy of the proposed method was demonstrated on thousands of mammographic images from the digital database for screening mammography. The labeling costs of obtaining optimal performance in the classification and ordinal regression task respectively fell to 33.8 and 19.8 percent of their original costs. The proposed method also generated 1228 wins, 369 ties and 47 losses for the classification task, and 1933 wins, 258 ties and 185 losses for the ordinal regression task compared to the other state-of-the-art active learning algorithms. 
By taking into account the particularities of mammographic images, the proposed active learning method can indeed reduce the manual annotation work to a great extent without sacrificing the performance of the prediction system for mammographic diagnosis.

  5. Impact of San Francisco’s Toy Ordinance on Restaurants and Children’s Food Purchases, 2011–2012

    PubMed Central

    Saelens, Brian E.; Kapphahn, Kristopher I.; Hekler, Eric B.; Buman, Matthew P.; Goldstein, Benjamin A.; Krukowski, Rebecca A.; O’Donohue, Laura S.; Gardner, Christopher D.; King, Abby C.

    2014-01-01

    Introduction In 2011, San Francisco passed the first citywide ordinance to improve the nutritional standards of children’s meals sold at restaurants by preventing the giving away of free toys or other incentives with meals unless nutritional criteria were met. This study examined the impact of the Healthy Food Incentives Ordinance at ordinance-affected restaurants on restaurant response (eg, toy-distribution practices, change in children’s menus), and the energy and nutrient content of all orders and children’s-meal–only orders purchased for children aged 0 through 12 years. Methods Restaurant responses were examined from January 2010 through March 2012. Parent–caregiver/child dyads (n = 762) who were restaurant customers were surveyed at 2 points before and 1 seasonally matched point after ordinance enactment at Chain A and B restaurants (n = 30) in 2011 and 2012. Results Both restaurant chains responded to the ordinance by selling toys separately from children’s meals, but neither changed their menus to meet ordinance-specified nutrition criteria. Among children for whom children’s meals were purchased, significant decreases in kilocalories, sodium, and fat per order were likely due to changes in children’s side dishes and beverages at Chain A. Conclusion Although the changes at Chain A did not appear to be directly in response to the ordinance, the transition to a more healthful beverage and default side dish was consistent with the intent of the ordinance. Study results underscore the importance of policy wording, support the concept that more healthful defaults may be a powerful approach for improving dietary intake, and suggest that public policies may contribute to positive restaurant changes. PMID:25032837

  6. Multivariate decoding of brain images using ordinal regression.

    PubMed

    Doyle, O M; Ashburner, J; Zelaya, F O; Williams, S C R; Mehta, M A; Marquand, A F

    2013-11-01

    Neuroimaging data are increasingly being used to predict potential outcomes or groupings, such as clinical severity, drug dose response, and transitional illness states. In these examples, the variable (target) we want to predict is ordinal in nature. Conventional classification schemes assume that the targets are nominal and hence ignore their ranked nature, whereas parametric and/or non-parametric regression models enforce a metric notion of distance between classes. Here, we propose a novel, alternative multivariate approach that overcomes these limitations - whole brain probabilistic ordinal regression using a Gaussian process framework. We applied this technique to two data sets of pharmacological neuroimaging data from healthy volunteers. The first study was designed to investigate the effect of ketamine on brain activity and its subsequent modulation with two compounds - lamotrigine and risperidone. The second study investigates the effect of scopolamine on cerebral blood flow and its modulation using donepezil. We compared ordinal regression to multi-class classification schemes and metric regression. Considering the modulation of ketamine with lamotrigine, we found that ordinal regression significantly outperformed multi-class classification and metric regression in terms of accuracy and mean absolute error. However, for risperidone ordinal regression significantly outperformed metric regression but performed similarly to multi-class classification both in terms of accuracy and mean absolute error. For the scopolamine data set, ordinal regression was found to outperform both multi-class and metric regression techniques considering the regional cerebral blood flow in the anterior cingulate cortex. Ordinal regression was thus the only method that performed well in all cases. 
Our results indicate the potential of an ordinal regression approach for neuroimaging data while providing a fully probabilistic framework with elegant approaches for model selection. Copyright © 2013. Published by Elsevier Inc.
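
    The contrast drawn above between nominal classification, metric regression, and ordinal regression hinges on the threshold (cumulative-link) construction: a single latent score is cut by ordered thresholds into ranked classes. A minimal sketch with a logistic link; the paper itself uses a Gaussian-process probit formulation, and the score and thresholds below are illustrative:

```python
import numpy as np

def cumulative_probs(f, thresholds):
    """Class probabilities for ordinal regression with latent score f and
    ordered thresholds b_1 < ... < b_{K-1} (threshold model):
        P(y <= k) = sigmoid(b_k - f)
    so class probabilities are successive differences of the CDF values.
    """
    s = 1.0 / (1.0 + np.exp(-(np.asarray(thresholds, dtype=float) - f)))
    cdf = np.concatenate(([0.0], s, [1.0]))
    return np.diff(cdf)

p = cumulative_probs(f=0.3, thresholds=[-1.0, 0.5, 2.0])
assert (p >= 0).all()
```

    Because the classes share one latent dimension, predictions respect the ranking of the targets, unlike a nominal multi-class scheme, while avoiding the metric notion of distance that ordinary regression imposes.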

  7. Bayesian inference for joint modelling of longitudinal continuous, binary and ordinal events.

    PubMed

    Li, Qiuju; Pan, Jianxin; Belcher, John

    2016-12-01

    In medical studies, repeated measurements of continuous, binary and ordinal outcomes are routinely collected from the same patient. Instead of modelling each outcome separately, in this study we propose to jointly model the trivariate longitudinal responses, so as to take account of the inherent association between the different outcomes and thus improve statistical inferences. This work is motivated by a large cohort study in the North West of England, involving trivariate responses from each patient: Body Mass Index, Depression (Yes/No) ascertained with cut-off score not less than 8 at the Hospital Anxiety and Depression Scale, and Pain Interference generated from the Medical Outcomes Study 36-item short-form health survey with values returned on an ordinal scale 1-5. There are some well-established methods for combined continuous and binary, or even continuous and ordinal responses, but little work was done on the joint analysis of continuous, binary and ordinal responses. We propose conditional joint random-effects models, which take into account the inherent association between the continuous, binary and ordinal outcomes. Bayesian analysis methods are used to make statistical inferences. Simulation studies show that, by jointly modelling the trivariate outcomes, standard deviations of the estimates of parameters in the models are smaller and much more stable, leading to more efficient parameter estimates and reliable statistical inferences. In the real data analysis, the proposed joint analysis yields a much smaller deviance information criterion value than the separate analysis, and shows other good statistical properties too. © The Author(s) 2014.

  8. Hybrid parallel code acceleration methods in full-core reactor physics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courau, T.; Plagne, L.; Ponicot, A.

    2012-07-01

    When dealing with nuclear reactor calculation schemes, the need for three dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the k_eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)
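
    The sensitivity of the iteration count to the initial guess reported above is a general feature of unaccelerated source iteration, whose error contracts by roughly the scattering ratio c = σ_s/σ_t per sweep. A toy infinite-medium illustration (not the PENTRAN scheme):

```python
# Unaccelerated source iteration in an infinite homogeneous medium:
#   phi_{k+1} = (sigma_s * phi_k + q) / sigma_t
# converges geometrically with ratio c = sigma_s / sigma_t to
#   phi = q / (sigma_t - sigma_s).
sigma_t, sigma_s, q = 1.0, 0.9, 1.0
phi_exact = q / (sigma_t - sigma_s)          # = 10.0
phi, iters = 0.0, 0
while abs(phi - phi_exact) / phi_exact > 1e-6:
    phi = (sigma_s * phi + q) / sigma_t
    iters += 1
# With the high scattering ratio c = 0.9 this takes on the order of
# 130 sweeps, which is why acceleration schemes (DSA, SPN initialization)
# or good initial guesses matter for full-core Sn calculations.
```

    Starting the iteration from a guess already close to phi_exact shrinks the initial error and hence the sweep count proportionally, mirroring the factor-of-6 reduction the abstract reports for the COCAGNE-initialized solution.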

  9. Economic incentives for oak woodland preservation and conservation

    Treesearch

    Rosi Dagit; Cy Carlberg; Christy Cuba; Thomas Scott

    2015-01-01

    Numerous ordinances and laws recognize the value of oak trees and woodlands, and dictate serious and expensive consequences for removing or harming them. Unfortunately, the methods used to calculate these values are equally numerous and often inconsistent. More important, these ordinances typically lack economic incentives to avoid impacts to oak woodland values...

  10. Contributions to the Underlying Bivariate Normal Method for Factor Analyzing Ordinal Data

    ERIC Educational Resources Information Center

    Xi, Nuo; Browne, Michael W.

    2014-01-01

    A promising "underlying bivariate normal" approach was proposed by Jöreskog and Moustaki for use in the factor analysis of ordinal data. This was a limited information approach that involved the maximization of a composite likelihood function. Its advantage over full-information maximum likelihood was that very much less computation was…

  11. Causal mediation analysis with a binary outcome and multiple continuous or ordinal mediators: Simulations and application to an alcohol intervention

    PubMed Central

    Nguyen, Trang Quynh; Webb-Vargas, Yenny; Koning, Ina M.; Stuart, Elizabeth A.

    2016-01-01

    We investigate a method to estimate the combined effect of multiple continuous/ordinal mediators on a binary outcome: 1) fit a structural equation model with probit link for the outcome and identity/probit link for continuous/ordinal mediators, 2) predict potential outcome probabilities, and 3) compute natural direct and indirect effects. Step 2 involves rescaling the latent continuous variable underlying the outcome to address residual mediator variance/covariance. We evaluate the estimation of risk-difference- and risk-ratio-based effects (RDs, RRs) using the ML, WLSMV and Bayes estimators in Mplus. Across most variations in path-coefficient and mediator-residual-correlation signs and strengths, and confounding situations investigated, the method performs well with all estimators, but favors ML/WLSMV for RDs with continuous mediators, and Bayes for RRs with ordinal mediators. Bayes outperforms WLSMV/ML regardless of mediator type when estimating RRs with small potential outcome probabilities and in two other special cases. An adolescent alcohol prevention study is used for illustration. PMID:27158217
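
    The rescaling in step 2 can be sketched for the simplest hypothetical case of a single normally distributed mediator with a probit outcome. The coefficients below are made up, and the formula is the standard marginalization of a probit over the mediator residual (latent scale inflated to sqrt(1 + b2^2*sigma_m^2)), not the authors' Mplus implementation.

```python
from math import erf, sqrt

def Phi(t):
    # standard normal CDF
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def potential_prob(x_out, x_med, a0, a1, b0, b1, b2, sig_m):
    """P(Y(x_out, M(x_med)) = 1) for a probit outcome and one normal
    mediator: averaging over the mediator residual inflates the latent
    scale from 1 to sqrt(1 + b2^2 * sig_m^2)."""
    mean = b0 + b1 * x_out + b2 * (a0 + a1 * x_med)
    return Phi(mean / sqrt(1.0 + b2 ** 2 * sig_m ** 2))

# Illustrative (made-up) coefficients.
a0, a1, sig_m = 0.2, 0.6, 1.0      # mediator model: M = a0 + a1*X + e
b0, b1, b2 = -0.5, 0.3, 0.4        # probit outcome model

p11 = potential_prob(1, 1, a0, a1, b0, b1, b2, sig_m)
p10 = potential_prob(1, 0, a0, a1, b0, b1, b2, sig_m)
p00 = potential_prob(0, 0, a0, a1, b0, b1, b2, sig_m)

nde_rd = p10 - p00           # natural direct effect, risk-difference scale
nie_rd = p11 - p10           # natural indirect effect, risk-difference scale
rr_total = p11 / p00         # total effect on the risk-ratio scale
print(nde_rd, nie_rd, rr_total)
```

    On the risk-difference scale the direct and indirect effects add up to the total effect by construction; the risk-ratio versions multiply instead.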

  12. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

    PubMed Central

    Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario

    2014-01-01

    Background: Selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are reviewed with several medical examples. We present two ordinal-variable clustering examples, ordinal variables being the more challenging type in analysis, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed with two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold-standard groups of malignant and benign cases that had been identified by clinical tests. Results: The sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: By using a clustering algorithm appropriate to the measurement scale of the variables in the study, high performance is achieved. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565

  13. Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.

    PubMed

    Lee, Sunbok; Lei, Man-Kit; Brody, Gene H

    2015-06-01

    Distinguishing between ordinal and disordinal interactions in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of the crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interactions requires sample sizes of more than 500 to provide confidence intervals sufficiently narrow to identify the location of the crossover point. (c) 2015 APA, all rights reserved.
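
    The crossover-point logic can be sketched as follows, assuming the common parameterization y = b0 + b1*x + b2*z + b3*x*z with a binary moderator z, so that the two simple regression lines cross at x* = -b2/b3. The simulated data and the percentile-bootstrap variant shown are illustrative only, not the study's simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y = b0 + b1*x + b2*z + b3*x*z + noise, with a binary
# moderator z; the true crossover of the two simple regression lines is
# x* = -b2/b3 = 2.0 here (b2 = -1.0, b3 = 0.5).
n = 400
x = rng.uniform(-4, 8, n)
z = rng.integers(0, 2, n).astype(float)
y = 1.0 + 0.3 * x - 1.0 * z + 0.5 * x * z + rng.normal(0.0, 0.5, n)

def crossover(x, z, y):
    """OLS fit of y on [1, x, z, x*z]; the z=0 and z=1 lines cross where
    b2 + b3*x = 0, i.e. at x* = -b2/b3."""
    X = np.column_stack([np.ones_like(x), x, z, x * z])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return -b[2] / b[3]

point = crossover(x, z, y)

# Percentile bootstrap: refit on resampled cases, take empirical quantiles.
boot = [crossover(x[idx], z[idx], y[idx])
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"crossover {point:.2f}, 95% percentile CI ({lo:.2f}, {hi:.2f})")
```

    If the CI lies entirely inside the observed range of x, the interaction is disordinal within that range; b3 near zero makes the ratio unstable, which is exactly the "abnormally wide interval" problem the abstract flags.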

  14. Communicating Risk with Parents: Exploring the Methods and Beliefs of Outdoor Education Coordinators in Victoria, Australia

    ERIC Educational Resources Information Center

    Dallat, Clare

    2009-01-01

    This paper examines the risk communication strategies currently being employed by seven outdoor education co-ordinators in Government schools in Victoria, Australia. Of particular interest are the beliefs and assumptions held by these co-ordinators in relation to communicating risk with parents. Current policy stipulates that parents must be…

  15. Rate My Stake: Interpretation of Ordinal Stake Ratings

    Treesearch

    Patricia Lebow; Grant Kirker

    2014-01-01

    Ordinal rating systems are commonly employed to evaluate biodeterioration of wood exposed outdoors over long periods of time. The purpose of these ratings is to compare the durability of test systems to nondurable wood products or known durable wood products. There are many reasons why these systems have evolved as the chosen method of evaluation, including having an...

  16. Forward Monte Carlo Computations of Polarized Microwave Radiation

    NASA Technical Reports Server (NTRS)

    Battaglia, A.; Kummerow, C.

    2000-01-01

    Microwave radiative transfer computations continue to acquire greater importance as the emphasis in remote sensing shifts towards understanding the microphysical properties of clouds and, with these, the nonlinear relation between rainfall rates and satellite-observed radiances. A first step toward realistic radiative simulations has been the introduction of techniques capable of treating the three-dimensional geometry generated by ever more sophisticated cloud-resolving models. To date, a series of numerical codes have been developed to treat spherical and randomly oriented axisymmetric particles. Backward and backward-forward Monte Carlo methods are, indeed, efficient in this field. These methods, however, cannot deal properly with oriented particles, which seem to play an important role in polarization signatures over stratiform precipitation. Moreover, beyond the polarization channel, the next generation of fully polarimetric radiometers challenges us to better understand the behavior of the last two Stokes parameters as well. In order to solve the vector radiative transfer equation, one-dimensional numerical models have been developed. These codes, unfortunately, treat the atmosphere as horizontally homogeneous, with horizontally infinite plane-parallel layers. The next development step for microwave radiative transfer codes must be fully polarized 3-D methods. Recently a 3-D polarized radiative transfer model based on the discrete ordinate method was presented. A forward MC code was developed that treats oriented nonspherical hydrometeors, but only for plane-parallel situations.

  17. Exponential Characteristic Spatial Quadrature for Discrete Ordinates Neutral Particle Transport in Slab Geometry

    DTIC Science & Technology

    1992-03-01


  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zardecki, A.

    The effect of multiple scattering on the validity of the Beer-Lambert law is discussed for a wide range of particle-size parameters and optical depths. To predict the amount of received radiant power, appropriate correction terms are introduced. For particles larger than or comparable to the wavelength of radiation, the small-angle approximation is adequate, whereas for small, densely packed particles the diffusion theory is advantageously employed. These two approaches are used in the context of the problem of laser-beam propagation in a dense aerosol medium. In addition, preliminary results obtained by using a two-dimensional finite-element discrete-ordinates transport code are described. Multiple-scattering effects for laser propagation in fog, cloud, rain, and aerosol cloud are modeled.

  19. Effect of temperature oscillation on thermal characteristics of an aluminum thin film

    NASA Astrophysics Data System (ADS)

    Ali, H.; Yilbas, B. S.

    2014-12-01

    Energy transport in an aluminum thin film is examined due to a temperature disturbance at the film edge. Thermal separation of the electron and lattice systems is considered in the analysis, and the temperature variation in each sub-system is formulated. The transient analysis of frequency-dependent and frequency-independent phonon radiative transport incorporating electron-phonon coupling is carried out in the thin film. The dispersion relations of aluminum are used in the frequency-dependent analysis. The temperature at one edge of the film is oscillated at various frequencies, and the temporal response of the phonon intensity distribution in the film is predicted numerically using the discrete ordinate method. To assess the phonon transport characteristics, an equivalent equilibrium temperature is introduced. It is found that the equivalent equilibrium temperature in the electron and lattice sub-systems oscillates due to the temperature oscillation at the film edge. The amplitude of the temperature oscillation reduces as the distance along the film thickness increases toward the low-temperature edge of the film. The equivalent equilibrium temperature attains lower values for the frequency-dependent solution of the phonon transport equation than for the frequency-independent solution.

  20. A new spherical model for computing the radiation field available for photolysis and heating at twilight

    NASA Technical Reports Server (NTRS)

    Dahlback, Arne; Stamnes, Knut

    1991-01-01

    Accurate computation of atmospheric photodissociation and heating rates is needed in photochemical models. These quantities are proportional to the mean intensity of the solar radiation penetrating to various levels in the atmosphere. For large solar zenith angles a solution of the radiative transfer equation valid for a spherical atmosphere is required in order to obtain accurate values of the mean intensity. Such a solution, based on a perturbation technique combined with the discrete ordinate method, is presented. Mean intensity calculations are carried out for various solar zenith angles. These results are compared with calculations from a plane parallel radiative transfer model in order to assess the importance of using correct geometry around sunrise and sunset. This comparison shows, in agreement with previous investigations, that for solar zenith angles less than 90 deg adequate solutions are obtained for plane parallel geometry as long as spherical geometry is used to compute the direct beam attenuation; but for solar zenith angles greater than 90 deg this pseudospherical plane parallel approximation overestimates the mean intensity.
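
    The key distinction here, treating the scattering in plane-parallel geometry while attenuating the direct beam along a spherically correct path, can be sketched numerically. The exponential atmosphere and the coefficients below are assumptions for illustration, not the authors' model.

```python
import math

R_EARTH = 6371.0   # km
H = 8.0            # km, assumed exponential scale height
K0 = 0.02          # km^-1, assumed surface extinction coefficient

def extinction(z):
    # assumed exponential extinction profile, zero below the surface
    return K0 * math.exp(-z / H) if z >= 0 else 0.0

def slant_tau_spherical(z0, theta_deg, ds=0.1, s_max=4000.0):
    """Optical depth from altitude z0 to space along a straight ray at
    solar zenith angle theta, with correct spherical-shell geometry
    (trapezoidal integration along the ray)."""
    mu = math.cos(math.radians(theta_deg))
    r0 = R_EARTH + z0
    tau, s = 0.0, 0.0
    k_prev = extinction(z0)
    while s < s_max:
        s += ds
        r = math.sqrt(r0 ** 2 + s ** 2 + 2.0 * r0 * s * mu)
        k = extinction(r - R_EARTH)
        tau += 0.5 * (k_prev + k) * ds
        k_prev = k
    return tau

def slant_tau_plane_parallel(z0, theta_deg):
    """Plane-parallel slant path: vertical optical depth divided by mu."""
    return K0 * H * math.exp(-z0 / H) / math.cos(math.radians(theta_deg))

# At moderate zenith angles the two agree closely; at and beyond 90 deg the
# plane-parallel value blows up while the spherical path stays finite.
print(slant_tau_spherical(0.0, 60.0), slant_tau_plane_parallel(0.0, 60.0))
print(slant_tau_spherical(30.0, 91.0))  # finite: the ray grazes the shell
```

    The second print is the twilight case the abstract addresses: for zenith angles past 90 deg only the spherical path gives a meaningful direct-beam attenuation.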

  1. Common radiation analysis model for 75,000 pound thrust NERVA engine (1137400E)

    NASA Technical Reports Server (NTRS)

    Warman, E. A.; Lindsey, B. A.

    1972-01-01

    The mathematical model and sources of radiation used for the radiation analysis and shielding activities in support of the design of the 1137400E version of the 75,000 lb thrust NERVA engine are presented. The nuclear subsystem (NSS) and non-nuclear components are discussed. The geometrical model for the NSS is two dimensional, as required for the DOT discrete ordinates computer code or for an azimuthally symmetrical three dimensional Point Kernel or Monte Carlo code. The geometrical model for the non-nuclear components is three dimensional in the FASTER geometry format. This geometry routine is inherent in the ANSC versions of the QAD and GGG Point Kernel programs and the COHORT Monte Carlo program. Data are included pertaining to a pressure vessel surface radiation source data tape which has been used as the basis for starting ANSC analyses with the DASH code to bridge into the COHORT Monte Carlo code using the WANL supplied DOT angular flux leakage data. In addition to the model descriptions and sources of radiation, the methods of analyses are briefly described.

  2. Experimental investigation on the infrared refraction and extinction properties of rock dust in tunneling face of coal mine.

    PubMed

    Wang, Wenzheng; Wang, Yanming; Shi, Guoqing

    2015-12-10

    Comprehensive experimental research on the fundamental optical properties of dust pollution in a coal mine is presented. Rock dust generated in a tunneling roadway was sampled, and the spectral complex refractive index within the infrared range of 2.5-25 μm was obtained by Fourier transform infrared spectroscopy measurement and the Kramers-Kronig relation. The experimental results were validated as consistent with equivalent optical constants simulated by effective medium theory based on component analysis by x-ray fluorescence, which shows that the top three mineral components are SiO2 (62.06%), Al2O3 (21.26%), and Fe2O3 (4.27%). The complex refractive index and the spatial distribution, measured with a dust filter and particle size analyzer, were used in the simulation of the extinction properties of rock dust along the tunneling roadway, solved by the discrete ordinates method and a Mie scattering model. The results illustrate that transmission is obviously enhanced with increasing height from the floor but weakened with increasing horizontal distance from the air duct.

  3. A New Analysis of the Spectra Obtained by the Venera Missions in the Venusian Atmosphere. I. The Analysis of the Data Received from the Venera-11 Probe at Altitudes Below 37 km in the 0.44–0.66 µm Wavelength Range

    NASA Astrophysics Data System (ADS)

    Maiorov, B. S.; Ignat'ev, N. I.; Moroz, V. I.; Zasova, L. V.; Moshkin, B. E.; Khatuntsev, I. V.; Ekonomov, A. P.

    2005-07-01

    The processes of solar radiation extinction in the deep layers of the Venus atmosphere in the wavelength range from 0.44 to 0.66 µm have been considered. The spectra of solar radiation scattered in the atmosphere of Venus at various altitudes above the planetary surface, measured by the Venera-11 entry probe in December 1978, are used as observational data. The problem of the data analysis is solved by selecting an atmospheric model; the discrete-ordinate method is applied in the calculations. For the altitude interval from 2–10 km to 36 km, the altitude and spectral dependences of the volume coefficient of true absorption have been obtained. At altitudes of 3–19 km, the spectral dependence is close to the wavelength dependence of the absorption cross section of S3 molecules, whence it follows that the mixing ratio of this sulfur allotrope increases with altitude from 0.03 to 0.1 ppbv.

  4. Tropical cyclone warm core analyses with FY-3 microwave temperature sounder data

    NASA Astrophysics Data System (ADS)

    Liu, Zhe; Bai, Jie; Zhang, Wenjun; Yan, Jun; Zhou, Zhuhua

    2014-05-01

    Space-borne microwave instruments are well suited to analyzing Tropical Cyclone (TC) warm core structure, because certain wavelengths of microwave energy are able to penetrate the cirrus above a TC. With a vector discrete-ordinate microwave radiative transfer model, the basic atmospheric parameters of Hurricane BOB are used to simulate the upwelling brightness temperatures on each channel of the Microwave Temperature Sounder (MWTS) onboard FY-3A/3B. Based on the simulation, the warm core structure of super typhoon "Muifa" (1109) is analyzed with MWTS channel 3. Through the radiative and hydrostatic equations, TC warm core brightness temperature anomalies are related to surface pressure anomalies. In order to correct the radiation attenuation caused by the MWTS scan geometry, and to improve the capability of capturing the relatively complete warm core radiation, an algorithm is devised to correct the bias in the received warm core microwave radiation; the corrected signal shows a time-variant tendency similar to that of "Muifa's" minimal sea level pressure as described by TC best track data. As the next generation of FY-3 satellites will be launched in 2012, this method will be further verified.

  5. Crystallographic effects during radiative melting of semitransparent materials

    NASA Astrophysics Data System (ADS)

    Webb, B. W.; Viskanta, R.

    1987-10-01

    Experiments have been performed to illustrate crystallographic effects during radiative melting of unconfined vertical layers of semitransparent material. Radiative melting of a polycrystalline paraffin was performed, and the instantaneous layer weight and transmittance were measured using a cantilever beam technique and a thermopile radiation detector, respectively. The effects of radiative flux, initial solid subcooling, spectral distribution of the irradiation, and crystal structure of the solid, as determined qualitatively by the sample solidification rate, were studied. Experimental results show conclusively the dominant influence of crystallographic effects in the form of multiple internal scattering of radiation during the melting process. A theoretical model is formulated to predict the melting rate of the material. Radiation transfer is treated by solving the one-dimensional radiative transfer equation for an absorbing-scattering medium using the discrete ordinates method. Melting rate and global layer reflectance as predicted by the model agree well with experimental data. Parametric studies conducted with the model illustrate the sensitivity of the melting behavior to such variables as incident radiative flux, initial layer opacity (material extinction coefficient), and scattering asymmetry factor.

  6. Visual grading characteristics and ordinal regression analysis during optimisation of CT head examinations.

    PubMed

    Zarb, Francis; McEntee, Mark F; Rainford, Louise

    2015-06-01

    To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA, followed by VGC and ordinal regression analysis. VGC alone indicated that the optimised protocols had image quality similar to that of the current protocols. Ordinal logistic regression analysis provided an in-depth evaluation, criterion by criterion, allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24% to 36%. In the second centre a 29% reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is a need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.

  7. The compulsory psychiatric regime in Hong Kong: Constitutional and ethical perspectives.

    PubMed

    Cheung, Daisy

    This article examines the compulsory psychiatric regime in Hong Kong. Under section 36 of the Mental Health Ordinance, which authorises long-term detention of psychiatric patients, a District Judge is required to countersign the form filled out by the registered medical practitioners in order for the detention to be valid. Case law, however, has shown that the role of the District Judge is merely administrative. This article suggests that, as it currently stands, the compulsory psychiatric regime in Hong Kong is unconstitutional because it fails the proportionality test. In light of this conclusion, the author proposes two solutions to deal with the issue, by common law or by legislative reform. The former would see an exercise of discretion by the courts read into section 36, while the latter would involve piecemeal reform of the relevant provisions to give the courts an explicit discretion to consider substantive issues when reviewing compulsory detention applications. The author argues that these solutions would introduce effective judicial supervision into the compulsory psychiatric regime and safeguard against abuse of process. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Bounded influence function based inference in joint modelling of ordinal partial linear model and accelerated failure time model.

    PubMed

    Chakraborty, Arindom

    2016-12-01

    A common objective in longitudinal studies is to characterize the relationship between a longitudinal response process and time-to-event data. The ordinal nature of the response and possibly missing information on covariates add complications to the joint model. In such circumstances, influential observations often present in the data may upset the analysis. In this paper, a joint model based on an ordinal partial mixed model and an accelerated failure time model is used to account for the repeated ordered response and the time-to-event data, respectively. Here, we propose an influence function-based robust estimation method. A Monte Carlo expectation-maximization algorithm is used for parameter estimation. A detailed simulation study has been done to evaluate the performance of the proposed method. As an application, data on muscular dystrophy among children are used. Robust estimates are then compared with classical maximum likelihood estimates. © The Author(s) 2014.

  9. A comparison of bivariate, multivariate random-effects, and Poisson correlated gamma-frailty models to meta-analyze individual patient data of ordinal scale diagnostic tests.

    PubMed

    Simoneau, Gabrielle; Levis, Brooke; Cuijpers, Pim; Ioannidis, John P A; Patten, Scott B; Shrier, Ian; Bombardier, Charles H; de Lima Osório, Flavia; Fann, Jesse R; Gjerdingen, Dwenda; Lamers, Femke; Lotrakul, Manote; Löwe, Bernd; Shaaban, Juwita; Stafford, Lesley; van Weert, Henk C P M; Whooley, Mary A; Wittkampf, Karin A; Yeung, Albert S; Thombs, Brett D; Benedetti, Andrea

    2017-11-01

    Individual patient data (IPD) meta-analyses are increasingly common in the literature. In the context of estimating the diagnostic accuracy of ordinal or semi-continuous scale tests, sensitivity and specificity are often reported for a given threshold or a small set of thresholds, and a meta-analysis is conducted via a bivariate approach to account for their correlation. When IPD are available, sensitivity and specificity can be pooled for every possible threshold. Our objective was to compare the bivariate approach, which can be applied separately at every threshold, to two multivariate methods: the ordinal multivariate random-effects model and the Poisson correlated gamma-frailty model. Our comparison was empirical, using IPD from 13 studies that evaluated the diagnostic accuracy of the 9-item Patient Health Questionnaire depression screening tool, and included simulations. The empirical comparison showed that the implementation of the two multivariate methods is more laborious in terms of computational time and sensitivity to user-supplied values compared to the bivariate approach. Simulations showed that ignoring the within-study correlation of sensitivity and specificity across thresholds did not worsen inferences with the bivariate approach compared to the Poisson model. The ordinal approach was not suitable for simulations because the model was highly sensitive to user-supplied starting values. We tentatively recommend the bivariate approach rather than more complex multivariate methods for IPD diagnostic accuracy meta-analyses of ordinal scale tests, although the limited type of diagnostic data considered in the simulation study restricts the generalization of our findings. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Ordinal pattern statistics for the assessment of heart rate variability

    NASA Astrophysics Data System (ADS)

    Graff, G.; Graff, B.; Kaczkowska, A.; Makowiec, D.; Amigó, J. M.; Piskorski, J.; Narkiewicz, K.; Guzik, P.

    2013-06-01

    The recognition of all main features of a healthy heart rhythm (the so-called sinus rhythm) is still one of the biggest challenges in contemporary cardiology. Recently the interesting physiological phenomenon of heart rate asymmetry has been observed. This phenomenon is related to unbalanced contributions of heart rate decelerations and accelerations to heart rate variability. In this paper we apply methods based on the concept of ordinal pattern to the analysis of electrocardiograms (inter-peak intervals) of healthy subjects in the supine position. This way we observe new regularities of the heart rhythm related to the distribution of ordinal patterns of lengths 3 and 4.
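
    Extracting the distribution of ordinal patterns of length 3 from a series of inter-peak intervals can be sketched as follows. This is a generic implementation of permutation patterns, not the authors' exact pipeline.

```python
from collections import Counter
from itertools import permutations

def ordinal_pattern_distribution(series, m=3):
    """Relative frequency of each length-m ordinal pattern. A window's
    pattern is the permutation that sorts it; sorted() is stable, so
    ties are broken by order of appearance."""
    counts = Counter()
    for i in range(len(series) - m + 1):
        window = series[i:i + m]
        counts[tuple(sorted(range(m), key=lambda k: window[k]))] += 1
    total = sum(counts.values())
    return {p: counts.get(p, 0) / total for p in permutations(range(m))}

# A monotonically increasing series realizes only the identity pattern.
dist = ordinal_pattern_distribution([1, 2, 3, 4, 5, 6], m=3)
print(dist[(0, 1, 2)])  # 1.0
```

    Asymmetry between "mostly increasing" and "mostly decreasing" patterns in such a distribution is one way the deceleration/acceleration imbalance mentioned above can show up.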

  11. Analytical Models of Exoplanetary Atmospheres. IV. Improved Two-stream Radiative Transfer for the Treatment of Aerosols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heng, Kevin; Kitzmann, Daniel, E-mail: kevin.heng@csh.unibe.ch, E-mail: daniel.kitzmann@csh.unibe.ch

    We present a novel generalization of the two-stream method of radiative transfer, which allows for the accurate treatment of radiative transfer in the presence of strong infrared scattering by aerosols. We prove that this generalization involves only a simple modification of the coupling coefficients and transmission functions in the hemispheric two-stream method. This modification originates from allowing the ratio of the first Eddington coefficients to depart from unity. At the heart of the method is the fact that this ratio may be computed once and for all over the entire range of values of the single-scattering albedo and scattering asymmetry factor. We benchmark our improved two-stream method by calculating the fraction of flux reflected by a single atmospheric layer (the reflectivity) and comparing these calculations to those performed using a 32-stream discrete-ordinates method. We further compare our improved two-stream method to the two-stream source function (16 streams) and delta-Eddington methods, demonstrating that it is often more accurate at the order-of-magnitude level. Finally, we illustrate its accuracy using a toy model of the early Martian atmosphere hosting a cloud layer composed of carbon dioxide ice particles. The simplicity of implementation and accuracy of our improved two-stream method renders it suitable for implementation in three-dimensional general circulation models. In other words, our improved two-stream method has the ease of implementation of a standard two-stream method, but the accuracy of a 32-stream method.
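
    The single-layer reflectivity benchmark can be illustrated with the conventional hemispheric two-stream closure (gamma1 = 2 - omega0*(1 + g), gamma2 = omega0*(1 - g)). Note these are the standard textbook coefficients, not the improved coupling coefficients derived in the paper; the layer sits over a non-reflecting lower boundary and omega0 < 1 is assumed.

```python
import math

def layer_reflect_transmit(tau, omega0, g):
    """Diffuse reflectivity/transmissivity of a homogeneous layer under
    the hemispheric two-stream closure, black lower boundary, for
    single-scattering albedo omega0 < 1 and asymmetry factor g."""
    g1 = 2.0 - omega0 * (1.0 + g)
    g2 = omega0 * (1.0 - g)
    lam = math.sqrt(g1 * g1 - g2 * g2)
    denom = lam * math.cosh(lam * tau) + g1 * math.sinh(lam * tau)
    refl = g2 * math.sinh(lam * tau) / denom
    trans = lam / denom
    return refl, trans

R, T = layer_reflect_transmit(tau=1.0, omega0=0.9, g=0.5)
print(R, T, 1.0 - R - T)  # absorbed fraction is positive for omega0 < 1
```

    In the semi-infinite limit the reflectivity tends to gamma2/(gamma1 + lambda); the paper's modification changes gamma1 and gamma2 (via the Eddington-coefficient ratio) while leaving this algebraic structure intact.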

  12. Dissimilarity measure based on ordinal pattern for physiological signals

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Shang, Pengjian; Shi, Wenbin; Cui, Xingran

    2016-08-01

    Complex physiologic signals may carry information about their underlying mechanisms. In this paper, we introduce a dissimilarity measure to capture the features of the underlying dynamics of various types of physiologic signals, based on rank order statistics of ordinal patterns. Simulated 1/f noise and white noise are used to evaluate the effect of data length, embedding dimension and time delay on this measure. We then apply this measure to different physiologic signals. The method can successfully characterize the unique underlying patterns of subjects at similar physiologic states. It can also serve as a good discriminative tool for the healthy young, healthy elderly, congestive heart failure, atrial fibrillation and white noise groups. Furthermore, when the underlying ordinal patterns of each group are examined in detail, it is found that the distributions of ordinal patterns vary significantly between healthy and pathologic states, as well as with aging.
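
    One simple instantiation of such a measure, assumed here for illustration and not necessarily the authors' definition, is the L1 distance between the ordinal-pattern frequency vectors of two series:

```python
from collections import Counter
from itertools import permutations
import random

def pattern_freqs(series, m=3):
    # relative frequency of each length-m ordinal pattern, in a fixed order
    c = Counter(tuple(sorted(range(m), key=lambda k: series[i + k]))
                for i in range(len(series) - m + 1))
    n = sum(c.values())
    return [c.get(p, 0) / n for p in permutations(range(m))]

def ordinal_dissimilarity(x, y, m=3):
    """L1 distance between ordinal-pattern frequency vectors: 0 for
    identical pattern statistics, at most 2 for disjoint supports."""
    return sum(abs(a - b) for a, b in zip(pattern_freqs(x, m), pattern_freqs(y, m)))

random.seed(1)
noise = [random.random() for _ in range(2000)]   # white noise surrogate
trend = list(range(2000))                        # monotone: one pattern only
print(ordinal_dissimilarity(noise, noise))       # 0.0
print(ordinal_dissimilarity(noise, trend))       # well above 0
```

    Because the measure depends only on pattern ranks, it is invariant to monotone amplitude transformations, which is one reason ordinal statistics are attractive for noisy physiologic recordings.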

  13. Development of an atmospheric infrared radiation model with high clouds for target detection

    NASA Astrophysics Data System (ADS)

    Bellisario, Christophe; Malherbe, Claire; Schweitzer, Caroline; Stein, Karin

    2016-10-01

    In the field of target detection, the simulation of the camera FOV (field of view) background is a significant issue. The presence of heterogeneous clouds might have a strong impact on a target detection algorithm. In order to address this issue, we present here the construction of the CERAMIC package (Cloudy Environment for RAdiance and MIcrophysics Computation) that combines cloud microphysical computation and 3D radiance computation to produce a 3D atmospheric infrared radiance in attendance of clouds. The input of CERAMIC starts with an observer with a spatial position and a defined FOV (by the mean of a zenithal angle and an azimuthal angle). We introduce a 3D cloud generator provided by the French LaMP for statistical and simplified physics. The cloud generator is implemented with atmospheric profiles including heterogeneity factor for 3D fluctuations. CERAMIC also includes a cloud database from the French CNRM for a physical approach. We present here some statistics developed about the spatial and time evolution of the clouds. Molecular optical properties are provided by the model MATISSE (Modélisation Avancée de la Terre pour l'Imagerie et la Simulation des Scènes et de leur Environnement). The 3D radiance is computed with the model LUCI (for LUminance de CIrrus). It takes into account 3D microphysics with a resolution of 5 cm-1 over a SWIR bandwidth. In order to have a fast computation time, most of the radiance contributors are calculated with analytical expressions. The multiple scattering phenomena are more difficult to model. Here a discrete ordinate method with correlated-K precision to compute the average radiance is used. We add a 3D fluctuations model (based on a behavioral model) taking into account microphysics variations. In fine, the following parameters are calculated: transmission, thermal radiance, single scattering radiance, radiance observed through the cloud and multiple scattering radiance. 
Spatial images are produced, with a dimension of 10 km x 10 km and a resolution of 0.1 km, with each contribution to the radiance separated. We present here the first results, with examples of typical scenarios. A 1D comparison with the MATISSE model, separating each calculated radiance component, is made in order to validate the outputs. The 3D performance of the code is shown by comparing LUCI to the SHDOM reference code, which uses the Spherical Harmonic Discrete Ordinate Method for 3D atmospheric radiative transfer. The results obtained by the different codes show strong agreement, and the sources of the small differences are discussed. A substantial gain in computation time is observed for LUCI versus SHDOM. We finally conclude on various scenarios for case analysis.
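The correlated-k treatment used for the band-averaged multiple scattering can be illustrated with a minimal sketch of the generic k-distribution technique (the function name and quadrature choices are illustrative assumptions, not LUCI's actual implementation):

```python
import numpy as np

def correlated_k_transmission(k_samples, path, n_g=8):
    """Band-averaged transmission via a k-distribution.

    k_samples: spectral absorption coefficients sampled across the band
    path:      absorber amount (optical path)
    n_g:       Gauss-Legendre points over the cumulative variable g
    """
    # Reorder the absorption coefficients into their cumulative distribution k(g)
    k_sorted = np.sort(np.asarray(k_samples, dtype=float))
    g_grid = (np.arange(k_sorted.size) + 0.5) / k_sorted.size
    # Gauss-Legendre nodes/weights mapped from [-1, 1] to g in [0, 1]
    x, w = np.polynomial.legendre.leggauss(n_g)
    g_nodes = 0.5 * (x + 1.0)
    weights = 0.5 * w
    k_g = np.interp(g_nodes, g_grid, k_sorted)  # k as a smooth function of g
    return float(np.sum(weights * np.exp(-k_g * path)))
```

Because k(g) is smooth and monotone, a handful of quadrature points reproduces the line-by-line band mean `np.mean(np.exp(-k_samples * path))` at a small fraction of its cost.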

  14. Discrete Ordinate Quadrature Selection for Reactor-based Eigenvalue Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarrell, Joshua J; Evans, Thomas M; Davidson, Gregory G

    2013-01-01

In this paper we analyze the effect of various quadrature sets on the eigenvalues of several reactor-based problems, including a two-dimensional (2D) fuel pin, a 2D lattice of fuel pins, and a three-dimensional (3D) reactor core problem. While many quadrature sets have been applied to neutral particle discrete ordinate transport calculations, the Level Symmetric (LS) and the Gauss-Chebyshev product (GC) sets are the most widely used in production-level reactor simulations. Other quadrature sets, such as Quadruple Range (QR) sets, have been shown to be more accurate in shielding applications. In this paper, we compare the LS, GC, QR, and the recently developed linear-discontinuous finite element (LDFE) sets, as well as give a brief overview of other proposed quadrature sets. We show that, for a given number of angles, the QR sets are more accurate than the LS and GC in all types of reactor problems analyzed (2D and 3D). We also show that the LDFE sets are more accurate than the LS and GC sets for these problems. We conclude that, for problems where tens to hundreds of quadrature points (directions) per octant are appropriate, QR sets should regularly be used because they have integration properties similar to the LS and GC sets, have no noticeable impact on the speed of convergence of the solution when compared with other quadrature sets, and yield more accurate results. We note that, for very high-order scattering problems, the QR sets exactly integrate fewer angular flux moments over the unit sphere than the GC sets. The effects of those inexact integrations have yet to be analyzed. We also note that the LDFE sets only exactly integrate the zeroth and first angular flux moments. Pin power comparisons and analyses are not included in this paper and are left for future work.
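The Gauss-Chebyshev product quadrature discussed above can be sketched as follows; this is the generic construction (Gauss-Legendre nodes in the polar cosine, equally weighted azimuthal points), with an illustrative function name rather than any production code's API:

```python
import numpy as np

def gauss_chebyshev_product(n_polar, n_azi):
    """Product quadrature on the unit sphere: Gauss-Legendre in the polar
    cosine mu, equally weighted (Chebyshev) points in azimuth phi.
    Returns direction cosines (N, 3) and weights summing to 4*pi."""
    mu, w_mu = np.polynomial.legendre.leggauss(n_polar)
    phi = (np.arange(n_azi) + 0.5) * (2.0 * np.pi / n_azi)
    w_phi = np.full(n_azi, 2.0 * np.pi / n_azi)
    s = np.sqrt(1.0 - mu**2)                      # sin(theta) for each polar node
    omx = np.outer(s, np.cos(phi)).ravel()
    omy = np.outer(s, np.sin(phi)).ravel()
    omz = np.repeat(mu, n_azi)
    w = np.outer(w_mu, w_phi).ravel()
    return np.stack([omx, omy, omz], axis=1), w
```

A quick sanity check on such a set is to verify the low-order angular flux moments it integrates exactly: the zeroth moment (total solid angle 4*pi), the first moment (zero by symmetry), and the second polar moment (4*pi/3).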

  15. Discrete ordinate quadrature selection for reactor-based Eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarrell, J. J.; Evans, T. M.; Davidson, G. G.

    2013-07-01

In this paper we analyze the effect of various quadrature sets on the eigenvalues of several reactor-based problems, including a two-dimensional (2D) fuel pin, a 2D lattice of fuel pins, and a three-dimensional (3D) reactor core problem. While many quadrature sets have been applied to neutral particle discrete ordinate transport calculations, the Level Symmetric (LS) and the Gauss-Chebyshev product (GC) sets are the most widely used in production-level reactor simulations. Other quadrature sets, such as Quadruple Range (QR) sets, have been shown to be more accurate in shielding applications. In this paper, we compare the LS, GC, QR, and the recently developed linear-discontinuous finite element (LDFE) sets, as well as give a brief overview of other proposed quadrature sets. We show that, for a given number of angles, the QR sets are more accurate than the LS and GC in all types of reactor problems analyzed (2D and 3D). We also show that the LDFE sets are more accurate than the LS and GC sets for these problems. We conclude that, for problems where tens to hundreds of quadrature points (directions) per octant are appropriate, QR sets should regularly be used because they have integration properties similar to the LS and GC sets, have no noticeable impact on the speed of convergence of the solution when compared with other quadrature sets, and yield more accurate results. We note that, for very high-order scattering problems, the QR sets exactly integrate fewer angular flux moments over the unit sphere than the GC sets. The effects of those inexact integrations have yet to be analyzed. We also note that the LDFE sets only exactly integrate the zeroth and first angular flux moments. Pin power comparisons and analyses are not included in this paper and are left for future work. (authors)

  16. Preventing homicide: an evaluation of the efficacy of a Detroit gun ordinance.

    PubMed Central

    O'Carroll, P W; Loftin, C; Waller, J B; McDowall, D; Bukoff, A; Scott, R O; Mercy, J A; Wiersema, B

    1991-01-01

BACKGROUND: In November 1986, a Detroit, Michigan city ordinance requiring mandatory jail sentences for illegally carrying a firearm in public was passed to preserve "the public peace, health, safety, and welfare of the people." METHODS: We conducted a set of interrupted time-series analyses to evaluate the impact of the law on the incidence of homicides, hypothesizing that the ordinance, by its nature, would affect only firearm homicides and homicides committed outside (e.g., on the street). RESULTS: The incidence of homicide in general increased after the law was passed, but the increases in non-firearm homicides and homicides committed inside (e.g., in a home) were either statistically significant or approached statistical significance (p = .006 and p = .070, respectively), whereas changes in the incidence of firearm homicides and homicides committed outside were not statistically significant (p = .238 and p = .418, respectively). We also determined that the ordinance was essentially unenforced, apparently because of a critical shortage of jail space. CONCLUSIONS: Our findings are consistent with a model in which the ordinance had a dampening effect on firearm homicides occurring in public in Detroit. The apparent preventive effect evident in the time series analyses may have been due to publicity about the ordinance, whereas the small size of the effect may have been due to the lack of enforcement. PMID:2014857
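The interrupted time-series design described in the Methods can be sketched as a segmented regression on a single series. This is a simplified illustration with hypothetical variable names, not the authors' exact model, which also included community fixed effects and lagged terms:

```python
import numpy as np

def its_fit(y, t, t0):
    """Interrupted time-series (segmented) regression:
    y = b0 + b1*t + b2*step(t >= t0) + b3*(t - t0)*step.
    Returns least-squares estimates of
    [baseline level, baseline trend, level change, trend change]."""
    t = np.asarray(t, dtype=float)
    step = (t >= t0).astype(float)                   # 1 after the intervention
    X = np.column_stack([np.ones_like(t), t, step, (t - t0) * step])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta
```

The coefficient on the step term estimates the immediate level change at the intervention date, which is the quantity an evaluation like this one tests for significance.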

  17. The impact of ordinate scaling on the visual analysis of single-case data.

    PubMed

    Dart, Evan H; Radley, Keith C

    2017-08-01

Visual analysis is the primary method for detecting the presence of treatment effects in graphically displayed single-case data, and it is often referred to as the "gold standard." Although researchers have developed standards for the application of visual analysis (e.g., Horner et al., 2005), over- and underestimation of effect size magnitude is not uncommon among analysts. Several characteristics have been identified as potential contributors to these errors; however, researchers have largely focused on characteristics of the data itself (e.g., autocorrelation), paying less attention to characteristics of the graphic display, which are largely in the control of the analyst (e.g., ordinate scaling). The current study investigated the impact that differences in ordinate scaling, a graphic display characteristic, had on experts' accuracy in judgments regarding the magnitude of effect present in single-case percentage data. Thirty-two participants were asked to evaluate eight ABAB data sets (2 each presenting null, small, moderate, and large effects) along with three iterations of each (32 graphs in total) in which only the ordinate scale was manipulated. Results suggest that raters are less accurate in their detection of treatment effects as the ordinate scale is constricted. Additionally, raters were more likely to overestimate the size of a treatment effect when the ordinate scale was constricted. Copyright © 2017 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  18. Results of a Neutronic Simulation of HTR-Proteus Core 4.2 using PEBBED and other INL Reactor Physics Tools: FY-09 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hans D. Gougar

The Idaho National Laboratory’s deterministic neutronics analysis codes and methods were applied to the computation of the core multiplication factor of the HTR-Proteus pebble bed reactor critical facility. A combination of unit cell calculations (COMBINE-PEBDAN), 1-D discrete ordinates transport (SCAMP), and nodal diffusion calculations (PEBBED) were employed to yield keff and flux profiles. Preliminary results indicate that these tools, as currently configured and used, do not yield satisfactory estimates of keff. If control rods are not modeled, these methods can deliver much better agreement with experimental core eigenvalues, which suggests that development efforts should focus on modeling control rod and other absorber regions. Under some assumptions and in 1D subcore analyses, diffusion theory agrees well with transport. This suggests that developments in specific areas can produce a viable core simulation approach. Some corrections have been identified and can be further developed, specifically: treatment of the upper void region, treatment of inter-pebble streaming, and explicit (multiscale) transport modeling of TRISO fuel particles as a first step in cross section generation. Until corrections are made that yield better agreement with experiment, conclusions from core design and burnup analyses should be regarded as qualitative and not benchmark quality.

  19. Comparison of Satellite Surveying to Traditional Surveying Methods for the Resources Industry

    NASA Astrophysics Data System (ADS)

    Osborne, B. P.; Osborne, V. J.; Kruger, M. L.

Modern ground-based survey methods involve detailed survey, which provides three-space co-ordinates for surveyed points to a high level of accuracy. The instruments are operated by surveyors, who process the raw results to create survey location maps for the subject of the survey. Such surveys are conducted for a location or region and referenced to the earth global co-ordinate system with global positioning system (GPS) positioning. Due to this referencing, the survey is only as accurate as the GPS reference system. Satellite remote-sensing surveys utilize satellite imagery that has been processed using commercial geographic information system software. Three-space co-ordinate maps are generated, with an accuracy determined by the datum position accuracy and optical resolution of the satellite platform. This paper presents a case study which compares topographic surveying undertaken by traditional survey methods with satellite surveying for the same location. The purpose of this study is to assess the viability of satellite remote sensing for surveying in the resources industry. The case study involves a topographic survey of a dune field for a prospective mining project area in Pakistan. This site has been surveyed using modern surveying techniques, and the results are compared to a satellite survey performed on the same area. Analysis of the results from the traditional survey and from the satellite survey involved a comparison of the derived spatial co-ordinates from each method. In addition, comparisons have been made of costs and turnaround time for both methods. The results of this application of remote sensing are of particular interest for surveys in areas with remote and extreme environments, weather extremes, political unrest, and poor travel links, which are commonly associated with mining projects. Such areas frequently suffer language barriers, poor onsite technical support, and scarce resources.

  20. Introduction of Parallel GPGPU Acceleration Algorithms for the Solution of Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Godoy, William F.; Liu, Xu

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is a recent technique that allows the parallel graphics processing unit (GPU) to accelerate calculations performed sequentially by the central processing unit (CPU). To introduce GPGPU to radiative transfer, the Gauss-Seidel solution of the well-known expressions for 1-D and 3-D homogeneous, isotropic media is selected as a test case. Different algorithms are introduced to balance memory and GPU-CPU communication, critical aspects of GPGPU. Results show that speed-ups of one to two orders of magnitude are obtained when compared to sequential solutions. The underlying value of GPGPU is its potential extension in radiative solvers (e.g., Monte Carlo, discrete ordinates) at a minimal learning curve.

  1. Development and validation of P-MODTRAN7 and P-MCScene, 1D and 3D polarimetric radiative transfer models

    NASA Astrophysics Data System (ADS)

    Hawes, Frederick T.; Berk, Alexander; Richtsmeier, Steven C.

    2016-05-01

A validated, polarimetric 3-dimensional simulation capability, P-MCScene, is being developed by generalizing Spectral Sciences' Monte Carlo-based synthetic scene simulation model, MCScene, to include calculation of all four Stokes components. P-MCScene polarimetric optical databases will be generated by a new version (MODTRAN7) of the government-standard MODTRAN radiative transfer algorithm. The conversion of MODTRAN6 to a polarimetric model is being accomplished by (1) introducing polarimetric data, (2) vectorizing the MODTRAN radiation calculations, and (3) integrating the newly revised and validated vector discrete ordinate model VDISORT3. Early results, presented here, demonstrate a clear pathway to the long-term goal of fully validated polarimetric models.

  2. Ordinal logistic regression analysis on the nutritional status of children in KarangKitri village

    NASA Astrophysics Data System (ADS)

    Ohyver, Margaretha; Yongharto, Kimmy Octavian

    2015-09-01

Ordinal logistic regression is a statistical technique that can be used to describe the relationship between an ordinal response variable and one or more independent variables. This method has been used in various fields, including the health field. In this research, ordinal logistic regression is used to describe the relationship between the nutritional status of children and age, gender, height, and family status. Nutritional status of children in this research is divided into over nutrition, well nutrition, less nutrition, and malnutrition. The purpose of this research is to describe the characteristics of children in the KarangKitri village and to determine the factors that influence their nutritional status. Three findings were obtained from this research. First, there are still children who are not categorized as having well nutritional status. Second, there are children from families of sufficient economic level whose nutritional status is nevertheless not normal. Third, the factors that affect the nutritional level of children are age, family status, and height.
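The cumulative-logit (proportional-odds) model that underlies ordinal logistic regression can be sketched as follows; `cumulative_logit_probs` is an illustrative helper, not code from the paper:

```python
import numpy as np

def cumulative_logit_probs(x, beta, thresholds):
    """Category probabilities under a proportional-odds model:
    P(Y <= j) = sigmoid(theta_j - x . beta), with increasing thresholds.
    Returns one probability per ordinal category (len(thresholds) + 1)."""
    eta = np.asarray(thresholds, dtype=float) - float(np.dot(x, beta))
    cum = 1.0 / (1.0 + np.exp(-eta))               # cumulative probabilities
    cum = np.concatenate([cum, [1.0]])             # last category closes at 1
    return np.diff(np.concatenate([[0.0], cum]))   # per-category probabilities
```

With four ordered categories (e.g., malnutrition, less, well, over nutrition), three thresholds split the cumulative scale; a single coefficient vector shifts all cumulative logits together, which is the proportional-odds assumption.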

  3. Spectral distribution of solar radiation

    NASA Technical Reports Server (NTRS)

    Mecherikunnel, A. T.; Richmond, J.

    1980-01-01

Available quantitative data on solar total and spectral irradiance are examined in the context of utilization of solar irradiance for terrestrial applications of solar energy. The extraterrestrial solar total and spectral irradiance values are also reviewed. Computed values of solar spectral irradiance at ground level for different air mass values and various levels of atmospheric pollution or turbidity are presented. Wavelengths are given for computation of solar absorptance, transmittance, and reflectance by the 100 selected-ordinate method and by the 50 selected-ordinate method for air mass 1.5 and 2 solar spectral irradiance at the four levels of atmospheric pollution.

  4. Numerical developments for short-pulsed Near Infra-Red laser spectroscopy. Part I: direct treatment

    NASA Astrophysics Data System (ADS)

    Boulanger, Joan; Charette, André

    2005-03-01

This two-part study is devoted to the numerical treatment of short-pulsed laser near infra-red spectroscopy. The overall goal is to address the possibility of numerical inverse treatment based on a recently developed direct model to solve the transient radiative transfer equation. This model has been constructed to incorporate the latest improvements in short-pulsed laser interaction with semi-transparent media, and it combines a discrete ordinates computation of the implicit source term appearing in the radiative transfer equation with an explicit treatment of the transport of the light intensity using advection schemes, a method encountered in reactive flow dynamics. The incident collimated beam is solved analytically through the Bouguer-Beer-Lambert extinction law. In this first part, the direct model is extended to fully non-homogeneous materials and tested with two different spatial schemes in order to be adapted to the inversion methods presented in the second part. First, the fundamental methods and schemes used in the direct model are presented. Then, tests are conducted by comparison with numerical simulations given as references. In a third and last part, multi-dimensional extensions of the code are provided. This allows presentation of numerical results of short-pulse propagation in 1, 2 and 3D homogeneous and non-homogeneous materials, together with parametric studies of medium properties and pulse shape. For comparison, an integral method adapted to non-homogeneous media irradiated by a pulsed laser beam is also developed for the 3D case.

  5. Optimal application of Morrison's iterative noise removal for deconvolution. Appendices

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1987-01-01

Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Filters of various lengths are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots are produced of error versus filter length, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise is added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
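The inverse-filter deconvolution described above (dividing the data spectrum by the transform of the system response) can be sketched as follows. The function name is illustrative, and the sketch assumes circular convolution and a response whose spectrum has no zeros:

```python
import numpy as np

def inverse_filter_deconvolve(data, response):
    """Deconvolve by the inverse-filter approach: divide the discrete
    Fourier transform of the data by that of the system response."""
    D = np.fft.fft(data)
    R = np.fft.fft(response, n=len(data))   # zero-pad response to data length
    return np.real(np.fft.ifft(D / R))
```

For noiseless data this recovers the input essentially exactly, which is why the abstract applies noise removal before, not after, the inverse filter: noise at frequencies where the response spectrum is small is strongly amplified by the division.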

  6. Solution of the within-group multidimensional discrete ordinates transport equations on massively parallel architectures

    NASA Astrophysics Data System (ADS)

    Zerr, Robert Joseph

    2011-12-01

The integral transport matrix method (ITMM) has been used as the kernel of new parallel solution methods for the discrete ordinates approximation of the within-group neutron transport equation. The ITMM abandons the repetitive mesh sweeps of the traditional source iteration (SI) scheme in favor of constructing stored operators that account for the direct coupling factors among all the cells and between the cells and boundary surfaces. The main goals of this work were to develop the algorithms that construct these operators and employ them in the solution process, determine the most suitable way to parallelize the entire procedure, and evaluate the behavior and performance of the developed methods for an increasing number of processes. This project compares the effectiveness of the ITMM with the SI scheme parallelized with the Koch-Baker-Alcouffe (KBA) method. The primary parallel solution method involves a decomposition of the domain into smaller spatial sub-domains, each with its own transport matrices, coupled together via interface boundary angular fluxes. Each sub-domain has its own set of ITMM operators and represents an independent transport problem. Multiple iterative parallel solution methods have been investigated, including parallel block Jacobi (PBJ), parallel red/black Gauss-Seidel (PGS), and parallel GMRES (PGMRES). The fastest observed parallel solution method, PGS, was used in a weak scaling comparison with the PARTISN code. Compared to the state-of-the-art SI-KBA with diffusion synthetic acceleration (DSA), this new method without acceleration/preconditioning is not competitive for any problem parameters considered. The best comparisons occur for problems that are difficult for SI DSA, namely highly scattering and optically thick. SI DSA execution time curves are generally steeper than the PGS ones.
However, until further testing is performed it cannot be concluded that SI DSA does not outperform the ITMM with PGS even on several thousand or tens of thousands of processors. The PGS method does outperform SI DSA for the periodic heterogeneous layers (PHL) configuration problems. Although this demonstrates a relative strength/weakness between the two methods, the practicality of these problems is much less, further limiting instances where it would be beneficial to select ITMM over SI DSA. The results strongly indicate a need for a robust, stable, and efficient acceleration method (or preconditioner for PGMRES). The spatial multigrid (SMG) method is currently incomplete in that it does not work for all cases considered and does not effectively improve the convergence rate for all values of scattering ratio c or cell dimension h. Nevertheless, it does display the desired trend for highly scattering, optically thin problems. That is, it tends to lower the rate of growth of number of iterations with increasing number of processes, P, while not increasing the number of additional operations per iteration to the extent that the total execution time of the rapidly converging accelerated iterations exceeds that of the slower unaccelerated iterations. A predictive parallel performance model has been developed for the PBJ method. Timing tests were performed such that trend lines could be fitted to the data for the different components and used to estimate the execution times. Applied to the weak scaling results, the model notably underestimates construction time, but combined with a slight overestimation in iterative solution time, the model predicts total execution time very well for large P. It also does a decent job with the strong scaling results, closely predicting the construction time and time per iteration, especially as P increases. 
Although not shown to be competitive up to 1,024 processing elements with the current state of the art, the parallelized ITMM exhibits promising scaling trends. Ultimately, compared to the KBA method, the parallelized ITMM may be found to be a very attractive option for transport calculations spatially decomposed over several tens of thousands of processes. Acceleration/preconditioning of the parallelized ITMM once developed will improve the convergence rate and improve its competitiveness. (Abstract shortened by UMI.)

  7. Analysis of noise-induced temporal correlations in neuronal spike sequences

    NASA Astrophysics Data System (ADS)

    Reinoso, José A.; Torrent, M. C.; Masoller, Cristina

    2016-11-01

    We investigate temporal correlations in sequences of noise-induced neuronal spikes, using a symbolic method of time-series analysis. We focus on the sequence of time-intervals between consecutive spikes (inter-spike-intervals, ISIs). The analysis method, known as ordinal analysis, transforms the ISI sequence into a sequence of ordinal patterns (OPs), which are defined in terms of the relative ordering of consecutive ISIs. The ISI sequences are obtained from extensive simulations of two neuron models (FitzHugh-Nagumo, FHN, and integrate-and-fire, IF), with correlated noise. We find that, as the noise strength increases, temporal order gradually emerges, revealed by the existence of more frequent ordinal patterns in the ISI sequence. While in the FHN model the most frequent OP depends on the noise strength, in the IF model it is independent of the noise strength. In both models, the correlation time of the noise affects the OP probabilities but does not modify the most probable pattern.
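The ordinal-pattern transformation applied to the ISI sequences can be sketched as follows. This is generic order-3 ordinal analysis with an illustrative function name; ties are broken by `argsort` order, a detail the abstract does not specify:

```python
import numpy as np
from collections import Counter

def ordinal_pattern_probs(isis, order=3):
    """Ordinal-pattern probabilities of a sequence: each window of `order`
    consecutive values is mapped to the permutation of its ranks, and the
    relative frequency of each permutation is returned."""
    isis = np.asarray(isis, dtype=float)
    counts = Counter(
        tuple(np.argsort(isis[i:i + order]))
        for i in range(isis.size - order + 1)
    )
    total = sum(counts.values())
    return {pattern: c / total for pattern, c in counts.items()}
```

Deviations of these probabilities from the uniform value 1/order! reveal the kind of temporal order in the ISI sequence that the abstract reports emerging as the noise strength increases.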

  8. A Detailed Comparison of Multidimensional Boltzmann Neutrino Transport Methods in Core-collapse Supernovae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.

The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. We carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.

  9. A Detailed Comparison of Multidimensional Boltzmann Neutrino Transport Methods in Core-collapse Supernovae

    DOE PAGES

    Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.; ...

    2017-10-03

The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. In this paper, we carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Finally, included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.

  10. A Detailed Comparison of Multidimensional Boltzmann Neutrino Transport Methods in Core-collapse Supernovae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.

The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. In this paper, we carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Finally, included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.

  11. A Detailed Comparison of Multidimensional Boltzmann Neutrino Transport Methods in Core-collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Richers, Sherwood; Nagakura, Hiroki; Ott, Christian D.; Dolence, Joshua; Sumiyoshi, Kohsuke; Yamada, Shoichi

    2017-10-01

    The mechanism driving core-collapse supernovae is sensitive to the interplay between matter and neutrino radiation. However, neutrino radiation transport is very difficult to simulate, and several radiation transport methods of varying levels of approximation are available. We carefully compare for the first time in multiple spatial dimensions the discrete ordinates (DO) code of Nagakura, Yamada, and Sumiyoshi and the Monte Carlo (MC) code Sedonu, under the assumptions of a static fluid background, flat spacetime, elastic scattering, and full special relativity. We find remarkably good agreement in all spectral, angular, and fluid interaction quantities, lending confidence to both methods. The DO method excels in determining the heating and cooling rates in the optically thick region. The MC method predicts sharper angular features due to the effectively infinite angular resolution, but struggles to drive down noise in quantities where subtractive cancellation is prevalent, such as the net gain in the protoneutron star and off-diagonal components of the Eddington tensor. We also find that errors in the angular moments of the distribution functions induced by neglecting velocity dependence are subdominant to those from limited momentum-space resolution. We briefly compare directly computed second angular moments to those predicted by popular algebraic two-moment closures, and we find that the errors from the approximate closures are comparable to the difference between the DO and MC methods. Included in this work is an improved Sedonu code, which now implements a fully special relativistic, time-independent version of the grid-agnostic MC random walk approximation.

  12. Numerical investigations of low-density nozzle flow by solving the Boltzmann equation

    NASA Technical Reports Server (NTRS)

    Deng, Zheng-Tao; Liaw, Goang-Shin; Chou, Lynn Chen

    1995-01-01

    A two-dimensional finite-difference code to solve the BGK-Boltzmann equation has been developed. The solution procedure consists of three steps: (1) transforming the BGK-Boltzmann equation into two simultaneous partial differential equations by taking moments of the distribution function with respect to the molecular velocity u_z, with weighting factors 1 and u_z^2; (2) solving the transformed equations in physical space by a time-marching technique with four-stage Runge-Kutta time integration, for a given discrete ordinate; Roe's second-order upwind difference scheme is used to discretize the convective terms, and the collision terms are treated as source terms; and (3) using the newly calculated distribution functions at each point in physical space to calculate the macroscopic flow parameters by a modified Gaussian quadrature formula. Steps 2 and 3 are repeated until the convergence criterion is reached. A low-density nozzle flow field has been calculated with this newly developed code. The BGK-Boltzmann solution and experimental data show excellent agreement, demonstrating that numerical solutions of the BGK-Boltzmann equation are ready for experimental validation.
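
    The moment-recovery step (3) can be sketched in a few lines. The Gauss-Hermite node layout and the Maxwellian test distribution below are illustrative assumptions, not the paper's actual discretization or its "modified" quadrature:

```python
import numpy as np

def macroscopic_moments(f_at_nodes, nodes, weights):
    """Density, bulk velocity, and temperature from f(u) sampled at quadrature nodes."""
    rho = np.sum(weights * f_at_nodes)
    u_bulk = np.sum(weights * nodes * f_at_nodes) / rho
    # temperature from the centered second moment (kB = m = 1 units)
    T = np.sum(weights * (nodes - u_bulk) ** 2 * f_at_nodes) / rho
    return rho, u_bulk, T

# Gauss-Hermite nodes/weights, rescaled so the rule integrates g(u) du
# directly (the exp(-x^2) weight is absorbed into the weights).
x, w = np.polynomial.hermite.hermgauss(40)
nodes = np.sqrt(2.0) * x
weights = np.sqrt(2.0) * w * np.exp(x ** 2)

# Test distribution: Maxwellian with rho = 2, u = 0.3, T = 1
rho0, u0, T0 = 2.0, 0.3, 1.0
f = rho0 / np.sqrt(2 * np.pi * T0) * np.exp(-(nodes - u0) ** 2 / (2 * T0))

rho, u_bulk, T = macroscopic_moments(f, nodes, weights)
```

    With 40 nodes the recovered moments match the Maxwellian's parameters to near machine precision, which is the property that makes a modest set of discrete ordinates in velocity space sufficient.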

  13. No association of smoke-free ordinances with profits from bingo and charitable games in Massachusetts

    PubMed Central

    Glantz, S; Wilson-Loots, R

    2003-01-01

    Background: Because it is widely played, claims that smoking restrictions will adversely affect bingo games are used as an argument against these policies. We used publicly available data from Massachusetts to assess the impact of 100% smoke-free ordinances on profits from bingo and other gambling sponsored by charitable organisations between 1985 and 2001. Methods: We conducted two analyses: (1) a general linear model implementation of a time series analysis with net profits (adjusted to 2001 dollars) as the dependent variable, and community (as a fixed effect), year, lagged net profits, and the length of time the ordinance had been in force as the independent variables; (2) multiple linear regression of total state profits against time, lagged profits, and the percentage of the entire state population in communities that allow charitable gaming but prohibit smoking. Results: The general linear model analysis of data from individual communities showed that, while adjusted profits fell over time, this effect was not related to the presence of an ordinance. The analysis in terms of the fraction of the population living in communities with ordinances yielded the same result. Conclusion: Policymakers can implement smoke-free policies without concern that these policies will affect charitable gaming. PMID:14660778
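
    The second analysis (statewide regression with a lagged-profit term) can be sketched as follows. The series below is synthetic: profits decline over time but are unrelated to the smoke-free population fraction, mirroring the paper's null result; variable names and the shape of the coverage series are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1986, 2002, dtype=float)
# Fraction of state population covered by smoke-free ordinances (rising over time)
frac_smokefree = np.clip((years - 1992) * 0.05, 0.0, 1.0)
# Synthetic statewide profits: secular decline plus noise, no ordinance effect
profits = 100.0 - 2.0 * (years - 1985) + rng.normal(0.0, 1.0, years.size)

# Regress profits on time, last year's profits (lag), and the smoke-free fraction
y = profits[1:]
X = np.column_stack([
    np.ones(y.size),       # intercept
    years[1:],             # time trend
    profits[:-1],          # lagged profits
    frac_smokefree[1:],    # smoke-free population fraction
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
frac_coef = beta[3]        # coefficient of interest
```

    In the paper's setting the hypothesis test on this coefficient is what fails to reject "no ordinance effect"; the sketch only shows the design-matrix construction with the lag term.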

  14. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.

    PubMed

    Kim, Sehwi; Jung, Inkyung

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.
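
    The Gini coefficient at the heart of the selection criterion is computed from a Lorenz curve. A generic sketch follows (the cluster-based construction of Han et al., which orders reported clusters rather than raw values, differs in detail):

```python
def gini_coefficient(values):
    """Gini coefficient of nonnegative values via the area under the Lorenz curve."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    cum = 0.0
    lorenz_area = 0.0
    for x in xs:
        prev = cum
        cum += x / total
        lorenz_area += (prev + cum) / (2 * n)  # trapezoid under the Lorenz curve
    return 1.0 - 2.0 * lorenz_area

equal = gini_coefficient([1, 1, 1, 1])    # perfect equality -> 0.0
skewed = gini_coefficient([0, 0, 0, 1])   # fully concentrated -> 0.75
```

    The maximum reported cluster size is then chosen as the candidate that maximizes this coefficient, i.e., the size at which the reported clusters are most "unequal" in how they concentrate the observed cases.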

  15. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data

    PubMed Central

    Kim, Sehwi

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns. PMID:28753674

  16. Evaluation of redundancy analysis to identify signatures of local adaptation.

    PubMed

    Capblancq, Thibaut; Luu, Keurcien; Blum, Michael G B; Bazin, Eric

    2018-05-26

    Ordination is a common tool in ecology that aims at representing complex biological information in a reduced space. In landscape genetics, ordination methods such as principal component analysis (PCA) have been used to detect adaptive variation based on genomic data. Taking advantage of environmental data in addition to genotype data, redundancy analysis (RDA) is another ordination approach that is useful to detect adaptive variation. This paper aims at proposing a test statistic based on RDA to search for loci under selection. We compare redundancy analysis to pcadapt, which is a nonconstrained ordination method, and to a latent factor mixed model (LFMM), which is a univariate genotype-environment association method. Individual-based simulations identify evolutionary scenarios where RDA genome scans have a greater statistical power than genome scans based on PCA. By constraining the analysis with environmental variables, RDA performs better than PCA in identifying adaptive variation when selection gradients are weakly correlated with population structure. Additionally, we show that if RDA and LFMM have a similar power to identify genetic markers associated with environmental variables, the RDA-based procedure has the advantage to identify the main selective gradients as a combination of environmental variables. To give a concrete illustration of RDA in population genomics, we apply this method to the detection of outliers and selective gradients on an SNP data set of Populus trichocarpa (Geraldes et al., 2013). The RDA-based approach identifies the main selective gradient contrasting southern and coastal populations to northern and continental populations in the northwestern American coast. This article is protected by copyright. All rights reserved.
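
    The core of an RDA genome scan is constrained ordination: regress the genotype matrix on environmental predictors, then take principal axes of the fitted values; loci with extreme loadings on the leading constrained axes are outlier candidates. A minimal sketch with synthetic data follows; the Mahalanobis-style score and all names are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ind, n_loci, n_env = 100, 50, 2
env = rng.normal(size=(n_ind, n_env))
geno = rng.normal(size=(n_ind, n_loci))
geno[:, 0] += 2.0 * env[:, 0]          # one locus driven by the environment

# Center, fit multivariate least squares, and decompose the fitted part
Y = geno - geno.mean(axis=0)
X = env - env.mean(axis=0)
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
fitted = X @ B                          # environmentally explained genotype variation
U, s, Vt = np.linalg.svd(fitted, full_matrices=False)

loadings = Vt[:n_env].T * s[:n_env]     # locus loadings on the constrained axes
score = np.sum((loadings / loadings.std(axis=0)) ** 2, axis=1)
outlier = int(np.argmax(score))         # the environmentally driven locus stands out
```

    Because the axes are linear combinations of the environmental variables, the loading pattern of an outlier locus directly indicates which selective gradient drives it, which is the advantage over unconstrained PCA scans noted in the abstract.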

  17. Rethinking Residue: Determining the Perceptual Continuum of Residue on FEES to Enable Better Measurement.

    PubMed

    Pisegna, Jessica M; Kaneoka, Asako; Leonard, Rebecca; Langmore, Susan E

    2018-02-01

    The goal of this work was to better understand perceptual judgments of pharyngeal residue on flexible endoscopic evaluation of swallowing (FEES) and the influence of a visual analog scale (VAS) versus an ordinal scale on clinician ratings. The intent was to determine if perceptual judgments of residue were more accurately described by equal or unequal intervals. Thirty-three speech-language pathologists rated pharyngeal residue from 75 FEES videos representing a wide range of residue severities for thin liquid, applesauce, and cracker boluses. Clinicians rated their impression of the overall residue amount in each video on a VAS and, in a different session, on a five-point ordinal scale. Residue ratings were made in two separate sessions separated by several weeks. Statistical correlations of the two rating methods were carried out and best-fit models were determined for each bolus type. A total of 2475 VAS ratings and 2473 ordinal ratings were collected. Residue ratings from both methods (VAS and ordinal) were strongly correlated for all bolus types. The best fit for the data was a quadratic model representing unequal intervals, which significantly improved the r² values for each bolus type (cracker r² = 0.98, applesauce r² = 0.99, thin liquid r² = 0.98, all p < 0.0001). Perceptual ratings of pharyngeal residue demonstrated a statistical relationship consistent with unequal intervals. The present findings support the use of a VAS to rate residue on FEES, allowing for greater precision as compared to traditional ordinal rating scales. Perceptual judgments of pharyngeal residue reflected unequal intervals, an important concept that should be considered in future rating scales.
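
    The model comparison behind the "unequal intervals" conclusion amounts to fitting linear and quadratic curves mapping ordinal steps to mean VAS ratings and comparing r². The toy data below are synthetic (widening gaps between ordinal steps), not the study's FEES ratings:

```python
import numpy as np

ordinal = np.array([1, 2, 3, 4, 5], dtype=float)
vas = np.array([2.0, 10.0, 30.0, 58.0, 95.0])  # widening gaps -> unequal intervals

def r_squared(y, y_hat):
    """Coefficient of determination for fitted values y_hat."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

r2_lin = r_squared(vas, np.polyval(np.polyfit(ordinal, vas, 1), ordinal))
r2_quad = r_squared(vas, np.polyval(np.polyfit(ordinal, vas, 2), ordinal))
```

    A quadratic fit captures the accelerating spacing that a linear (equal-interval) model misses, so r2_quad exceeds r2_lin, which is the pattern the study reports for all three bolus types.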

  18. Design Analysis of SNS Target Station Biological Shielding Monolith with Proton Power Uprate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bekar, Kursat B.; Ibrahim, Ahmad M.

    2017-05-01

    This report documents the analysis of the dose rate in the experiment area outside the Spallation Neutron Source (SNS) target station shielding monolith with proton beam energy of 1.3 GeV. The analysis implemented a coupled three dimensional (3D)/two dimensional (2D) approach that used both the Monte Carlo N-Particle Extended (MCNPX) 3D Monte Carlo code and the Discrete Ordinates Transport (DORT) two dimensional deterministic code. The analysis with proton beam energy of 1.3 GeV showed that the dose rate in continuously occupied areas on the lateral surface outside the SNS target station shielding monolith is less than 0.25 mrem/h, which complies with the SNS facility design objective. However, the methods and codes used in this analysis are out of date and unsupported, and the 2D approximation of the target shielding monolith does not accurately represent the geometry. We recommend that this analysis be updated with modern codes and libraries such as ADVANTG or SHIFT. These codes have demonstrated very high efficiency in performing full 3D radiation shielding analyses of similar and even more difficult problems.

  19. MONET: multidimensional radiative cloud scene model

    NASA Astrophysics Data System (ADS)

    Chervet, Patrick

    1999-12-01

    All cloud fields exhibit variable structures (bulges) and heterogeneities in their water distributions. With the development of multidimensional radiative models by the atmospheric community, it is now possible to describe horizontal heterogeneities of the cloud medium and to study their influence on radiative quantities. We have developed a complete radiative cloud scene generator, called MONET (French acronym for MOdelisation des Nuages En Tridim.), to compute radiative cloud scenes from visible to infrared wavelengths for various viewing and solar conditions, different spatial scales, and various locations on the Earth. MONET is composed of two parts: a cloud medium generator (CSSM -- Cloud Scene Simulation Model) developed by the Air Force Research Laboratory, and a multidimensional radiative code (SHDOM -- Spherical Harmonic Discrete Ordinate Method) developed at the University of Colorado by Evans. MONET computes images for several scenarios defined by user inputs: date, location, viewing angles, wavelength, spatial resolution, meteorological conditions (atmospheric profiles, cloud types), etc. For the same cloud scene, we can output different viewing conditions and/or various wavelengths. Shadowing effects on clouds or the ground are taken into account. This code is useful for studying heterogeneity effects on satellite data for various cloud types and spatial resolutions, and for determining specifications of new imaging sensors.

  20. A Kinetics Model for KrF Laser Amplifiers

    NASA Astrophysics Data System (ADS)

    Giuliani, J. L.; Kepple, P.; Lehmberg, R.; Obenschain, S. P.; Petrov, G.

    1999-11-01

    A computer kinetics code has been developed to model the temporal and spatial behavior of an e-beam pumped KrF laser amplifier. The deposition of the primary beam electrons is assumed to be spatially uniform and the energy distribution function of the nascent electron population is calculated to be near Maxwellian below 10 eV. For an initial Kr/Ar/F2 composition, the code calculates the densities of 24 species subject to over 100 reactions with 1-D spatial resolution (typically 16 zones) along the longitudinal lasing axis. Enthalpy accounting for each process is performed to partition the energy into internal, thermal, and radiative components. The electron as well as the heavy particle temperatures are followed for energy conservation and excitation rates. Transport of the lasing photons is performed along the axis on a dense subgrid using the method of characteristics. Amplified spontaneous emission is calculated using a discrete ordinates approach and includes contributions to the local intensity from the whole amplifier volume. Specular reflection off side walls and the rear mirror are included. Results of the model will be compared with data from the NRL NIKE laser and other published results.

  1. The emergence of temporal language in Nicaraguan Sign Language

    PubMed Central

    Kocab, Annemarie; Senghas, Ann; Snedeker, Jesse

    2016-01-01

    Understanding what uniquely human properties account for the creation and transmission of language has been a central goal of cognitive science. Recently, the study of emerging sign languages, such as Nicaraguan Sign Language (NSL), has offered the opportunity to better understand how languages are created and the roles of the individual learner and the community of users. Here, we examined the emergence of two types of temporal language in NSL, comparing the linguistic devices for conveying temporal information among three sequential age cohorts of signers. Experiment 1 showed that while all three cohorts of signers could communicate about linearly ordered discrete events, only the second and third generations of signers successfully communicated information about events with more complex temporal structure. Experiment 2 showed that signers could discriminate between the types of temporal events in a nonverbal task. Finally, Experiment 3 investigated the ordinal use of numbers (e.g., first, second) in NSL signers, indicating that one strategy younger signers might have for accurately describing events in time might be to use ordinal numbers to mark each event. While the capacity for representing temporal concepts appears to be present in the human mind from the onset of language creation, the linguistic devices to convey temporality do not appear immediately. Evidently, temporal language emerges over generations of language transmission, as a product of individual minds interacting within a community of users. PMID:27591549

  2. The emergence of temporal language in Nicaraguan Sign Language.

    PubMed

    Kocab, Annemarie; Senghas, Ann; Snedeker, Jesse

    2016-11-01

    Understanding what uniquely human properties account for the creation and transmission of language has been a central goal of cognitive science. Recently, the study of emerging sign languages, such as Nicaraguan Sign Language (NSL), has offered the opportunity to better understand how languages are created and the roles of the individual learner and the community of users. Here, we examined the emergence of two types of temporal language in NSL, comparing the linguistic devices for conveying temporal information among three sequential age cohorts of signers. Experiment 1 showed that while all three cohorts of signers could communicate about linearly ordered discrete events, only the second and third generations of signers successfully communicated information about events with more complex temporal structure. Experiment 2 showed that signers could discriminate between the types of temporal events in a nonverbal task. Finally, Experiment 3 investigated the ordinal use of numbers (e.g., first, second) in NSL signers, indicating that one strategy younger signers might have for accurately describing events in time might be to use ordinal numbers to mark each event. While the capacity for representing temporal concepts appears to be present in the human mind from the onset of language creation, the linguistic devices to convey temporality do not appear immediately. Evidently, temporal language emerges over generations of language transmission, as a product of individual minds interacting within a community of users. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2011-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
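
    The CADIS construction itself is compact: given a forward source q and an approximate adjoint (importance) flux from the deterministic solve, the biased source is q·φ†/R and the weight-window centers are R/φ†, where R = Σ q·φ† estimates the detector response. The toy mesh below is illustrative; real codes (ADVANTG/MAVRIC) apply this per space-energy cell from a Denovo adjoint solution:

```python
import numpy as np

q = np.array([0.0, 1.0, 3.0, 0.5])         # forward source by cell
phi_adj = np.array([4.0, 2.0, 1.0, 0.25])  # adjoint flux (importance) by cell

R = np.sum(q * phi_adj)                    # estimated detector response
q_biased = q * phi_adj / R                 # biased source pdf
w_center = np.where(phi_adj > 0, R / phi_adj, np.inf)  # weight-window centers
```

    The "consistent" in CADIS is visible here: particles born from the biased source with their window-center weights carry exactly the analog expectation, since q_biased * w_center == q cell by cell, so source biasing and transport biasing never fight each other.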

  4. Tobacco industry litigation to deter local public health ordinances: the industry usually loses in court

    PubMed Central

    Nixon, M; Mahmoud, L; Glantz, S

    2004-01-01

    Background: The tobacco industry uses claims of state preemption or violations of the US Constitution in litigation to overturn local tobacco control ordinances. Methods: Collection of lawsuits filed or threatened against local governments in the USA; review of previously secret tobacco industry documents; interviews with key informants. Results: The industry is most likely to prevail when a court holds that there is explicit preemption language by the state legislature to exclusively regulate tobacco. The industry has a much weaker record on claims of implied preemption and has lost all challenges brought under equal protection claims in the cases we located. Although the tobacco industry is willing to spend substantial amounts of money on these lawsuits, it never won on constitutional equal protection grounds and lost or dropped 60% (16/27) of the cases it brought claiming implied state preemption. Conclusions: Municipalities should continue to pass ordinances and be prepared to defend them against claims of implied preemption or on constitutional grounds. If the ordinance is properly prepared they will likely prevail. Health advocates should be prepared to assist in this process. PMID:14985600

  5. Processing Ordinality and Quantity: The Case of Developmental Dyscalculia

    PubMed Central

    Rubinsten, Orly; Sury, Dana

    2011-01-01

    In contrast to quantity processing, to date the nature of ordinality has received little attention from researchers, despite the fact that both quantity and ordinality are embodied in numerical information. Here we ask if there are two separate core systems that lie at the foundations of numerical cognition: (1) the traditional and well-accepted numerical magnitude system and (2) a core system for representing ordinal information. We report two novel experiments of ordinal processing that explored the relation between ordinal and numerical information processing in typically developing adults and adults with developmental dyscalculia (DD). Participants made “ordered” or “non-ordered” judgments about 3 groups of dots (non-symbolic numerical stimuli; Experiment 1) and 3 numbers (symbolic task; Experiment 2). In contrast to previous findings and arguments about a quantity deficit in DD participants, when quantity and ordinality are dissociated (as in the current tasks), DD participants exhibited a normal ratio effect in the non-symbolic ordinal task. They did not show, however, the ordinality effect. The ordinality effect in DD appeared only when area and density were randomized, and only in the descending direction. In the symbolic task, the ordinality effect was modulated by ratio and direction in both groups. These findings suggest that there might be two separate cognitive representations of ordinal and quantity information and that linguistic knowledge may facilitate estimation of ordinal information. PMID:21935374

  6. Processing ordinality and quantity: the case of developmental dyscalculia.

    PubMed

    Rubinsten, Orly; Sury, Dana

    2011-01-01

    In contrast to quantity processing, to date the nature of ordinality has received little attention from researchers, despite the fact that both quantity and ordinality are embodied in numerical information. Here we ask if there are two separate core systems that lie at the foundations of numerical cognition: (1) the traditional and well-accepted numerical magnitude system and (2) a core system for representing ordinal information. We report two novel experiments of ordinal processing that explored the relation between ordinal and numerical information processing in typically developing adults and adults with developmental dyscalculia (DD). Participants made "ordered" or "non-ordered" judgments about 3 groups of dots (non-symbolic numerical stimuli; Experiment 1) and 3 numbers (symbolic task; Experiment 2). In contrast to previous findings and arguments about a quantity deficit in DD participants, when quantity and ordinality are dissociated (as in the current tasks), DD participants exhibited a normal ratio effect in the non-symbolic ordinal task. They did not show, however, the ordinality effect. The ordinality effect in DD appeared only when area and density were randomized, and only in the descending direction. In the symbolic task, the ordinality effect was modulated by ratio and direction in both groups. These findings suggest that there might be two separate cognitive representations of ordinal and quantity information and that linguistic knowledge may facilitate estimation of ordinal information.

  7. Underage alcohol policies across 50 California cities: an assessment of best practices

    PubMed Central

    2012-01-01

    Background We pursue two primary goals in this article: (1) to test a methodology and develop a dataset on U.S. local-level alcohol policy ordinances, and (2) to evaluate the presence, comprehensiveness, and stringency of eight local alcohol policies in 50 diverse California cities in relationship to recommended best practices in both public health literature and governmental recommendations to reduce underage drinking. Methods Following best practice recommendations from a wide array of authoritative sources, we selected eight local alcohol policy topics (e.g., conditional use permits, responsible beverage service training, social host ordinances, window/billboard advertising ordinances), and determined the presence or absence as well as the stringency (restrictiveness) and comprehensiveness (number of provisions) of each ordinance in each of the 50 cities in 2009. Following the alcohol policy literature, we created scores for each city on each type of ordinance and its associated components. We used these data to evaluate the extent to which recommendations for best practices to reduce underage alcohol use are being followed. Results (1) Compiling datasets of local-level alcohol policy laws and their comprehensiveness and stringency is achievable, even absent comprehensive, on-line, or other legal research tools. (2) We find that, with some exceptions, most of the 50 cities do not have high scores for presence, comprehensiveness, or stringency across the eight key policies. Critical policies such as responsible beverage service and deemed approved ordinances are uncommon, and, when present, they are generally neither comprehensive nor stringent. Even within policies that have higher adoption rates, central elements are missing across many or most cities’ ordinances. Conclusion This study demonstrates the viability of original legal data collection in the U.S. pertaining to local ordinances and of creating quantitative scores for each policy type to reflect comprehensiveness and stringency. Analysis of the resulting dataset reveals that, although the 50 cities have taken important steps to improve public health with regard to underage alcohol use and abuse, there is a great deal more that needs to be done to bring these cities into compliance with best practice recommendations. PMID:22734468

  8. Compatible Spatial Discretizations for Partial Differential Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold, Douglas N., ed.

    From May 11--15, 2004, the Institute for Mathematics and its Applications held a hot topics workshop on Compatible Spatial Discretizations for Partial Differential Equations. The numerical solution of partial differential equations (PDE) is a fundamental task in science and engineering. The goal of the workshop was to bring together a spectrum of scientists at the forefront of the research in the numerical solution of PDEs to discuss compatible spatial discretizations. We define compatible spatial discretizations as those that inherit or mimic fundamental properties of the PDE such as topology, conservation, symmetries, and positivity structures and maximum principles. A wide variety of discretization methods applied across a wide range of scientific and engineering applications have been designed to or found to inherit or mimic intrinsic spatial structure and reproduce fundamental properties of the solution of the continuous PDE model at the finite dimensional level. A profusion of such methods and concepts relevant to understanding them have been developed and explored: mixed finite element methods, mimetic finite differences, support operator methods, control volume methods, discrete differential forms, Whitney forms, conservative differencing, discrete Hodge operators, discrete Helmholtz decomposition, finite integration techniques, staggered grid and dual grid methods, etc. This workshop seeks to foster communication among the diverse groups of researchers designing, applying, and studying such methods as well as researchers involved in practical solution of large scale problems that may benefit from advancements in such discretizations; to help elucidate the relations between the different methods and concepts; and to generally advance our understanding in the area of compatible spatial discretization methods for PDE.
    Particular points of emphasis included: + Identification of intrinsic properties of PDE models that are critical for the fidelity of numerical simulations. + Identification and design of compatible spatial discretizations of PDEs, their classification, analysis, and relations. + Relationships between different compatible spatial discretization methods and concepts which have been developed; + Impact of compatible spatial discretizations upon physical fidelity, verification and validation of simulations, especially in large-scale, multiphysics settings. + How solvers address the demands placed upon them by compatible spatial discretizations. This report provides information about the program and abstracts of all the presentations.
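
    A tiny demonstration of the "mimetic"/support-operator idea listed above: on a staggered 1-D grid with homogeneous boundaries, define the discrete gradient as a forward difference and derive the discrete divergence as its negative transpose. The pair then satisfies a discrete integration-by-parts identity exactly, and their composition is a symmetric negative semidefinite Laplacian, structure inherited from the continuous operators. Grid size and sample fields are arbitrary:

```python
import numpy as np

n, h = 8, 1.0 / 8
# (n-1) x n forward-difference matrix: (G p)_i = (p[i+1] - p[i]) / h
G = (np.eye(n - 1, n, 1) - np.eye(n - 1, n)) / h
D = -G.T                                    # divergence defined by duality with G

p = np.arange(n, dtype=float)               # scalar field at cell centers
u = np.arange(n - 1, dtype=float) + 0.5     # flux field at faces

# Discrete integration by parts holds exactly (no boundary terms):
ibp = (G @ p) @ u + p @ (D @ u)

L = D @ G                                   # discrete Laplacian, equals -G^T G
```

    Defining D by duality, rather than discretizing it independently, is precisely what guarantees that L is symmetric negative semidefinite and that discrete conservation holds, regardless of grid spacing.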

  9. Multiple Scattering Principal Component-based Radiative Transfer Model (PCRTM) from Far IR to UV-Vis

    NASA Astrophysics Data System (ADS)

    Liu, X.; Wu, W.; Yang, Q.

    2017-12-01

    Modern hyperspectral satellite remote sensors such as AIRS, CrIS, IASI, and CLARREO all require accurate and fast radiative transfer models that can deal with multiple scattering of clouds and aerosols to explore the information content. However, performing full radiative transfer calculations using multi-stream methods such as discrete ordinate (DISORT), doubling and adding (AD), or successive order of scattering (SOS) is very time consuming. We have developed a principal component-based radiative transfer model (PCRTM) that reduces the computational burden by orders of magnitude while maintaining high accuracy. By exploiting spectral correlations, PCRTM reduces the number of radiative transfer calculations in the frequency domain. It further uses a hybrid stream method to decrease the number of calls to the computationally expensive multiple scattering calculations with high stream numbers. Other fast parameterizations in the infrared spectral region reduce the computational time to milliseconds for an AIRS forward simulation (2378 spectral channels). PCRTM has been developed to cover the spectral range from the far IR to UV-Vis. The model has been used for satellite data inversions, proxy data generation, inter-satellite calibrations, spectral fingerprinting, and climate OSSEs. We will show examples of applying PCRTM to single field-of-view cloudy retrievals of atmospheric temperature, moisture, trace gases, clouds, and surface parameters. We will also show how PCRTM is used for the NASA CLARREO project.
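
    The compression idea PCRTM exploits can be illustrated in a few lines: hyperspectral radiances are highly correlated across channels, so a spectrum can be represented by a handful of principal-component scores instead of thousands of channel radiances. The synthetic spectra below stand in for real forward-model output; this is only the PCA step, not PCRTM's full monochromatic-to-PC-score regression:

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_train, n_modes = 500, 200, 5

# Training spectra: smooth low-dimensional structure plus small noise
basis = np.array([np.sin((k + 1) * np.linspace(0, np.pi, n_channels))
                  for k in range(n_modes)])
coefs = rng.normal(size=(n_train, n_modes))
train = coefs @ basis + 1e-3 * rng.normal(size=(n_train, n_channels))

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
eofs = Vt[:n_modes]                       # leading principal components

# Compress/reconstruct a new spectrum with just n_modes numbers
spec = rng.normal(size=n_modes) @ basis
scores = (spec - mean) @ eofs.T           # 5 scores instead of 500 radiances
recon = mean + scores @ eofs

rel_err = np.linalg.norm(recon - spec) / np.linalg.norm(spec)
```

    Because only the PC scores need to be predicted by the radiative transfer model, the number of expensive multiple-scattering calculations drops from the channel count to the (much smaller) mode count.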

  10. Zero inflation in ordinal data: Incorporating susceptibility to response through the use of a mixture model

    PubMed Central

    Kelley, Mary E.; Anderson, Stewart J.

    2008-01-01

    Summary The aim of the paper is to produce a methodology that will allow users of ordinal scale data to more accurately model the distribution of ordinal outcomes in which some subjects are susceptible to exhibiting the response and some are not (i.e., the dependent variable exhibits zero inflation). This situation occurs with ordinal scales in which there is an anchor that represents the absence of the symptom or activity, such as “none”, “never” or “normal”, and is particularly common when measuring abnormal behavior, symptoms, and side effects. Due to the unusually large number of zeros, traditional statistical tests of association can be non-informative. We propose a mixture model for ordinal data with a built-in probability of non-response that allows modeling of the range (e.g., severity) of the scale, while simultaneously modeling the presence/absence of the symptom. Simulations show that the model is well behaved and a likelihood ratio test can be used to choose between the zero-inflated and the traditional proportional odds model. The model, however, does have minor restrictions on the nature of the covariates that must be satisfied in order for the model to be identifiable. The method is particularly relevant for public health research such as large epidemiological surveys where more careful documentation of the reasons for response may be difficult. PMID:18351711
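
    The mixture structure described above can be sketched directly: with probability (1 - pi) a subject is not susceptible and always responds "none" (category 0); with probability pi the response follows a proportional-odds model over categories 0..K. The cutpoints and pi below are illustrative values, not fitted estimates:

```python
import math

def ordinal_probs(cutpoints, eta=0.0):
    """Proportional-odds category probabilities for linear predictor eta."""
    cdf = [1.0 / (1.0 + math.exp(-(c - eta))) for c in cutpoints] + [1.0]
    return [cdf[0]] + [cdf[k] - cdf[k - 1] for k in range(1, len(cdf))]

def zero_inflated_probs(pi, cutpoints, eta=0.0):
    """Mix a point mass at category 0 (non-susceptible) with the ordinal model."""
    p = ordinal_probs(cutpoints, eta)
    return [(1.0 - pi) + pi * p[0]] + [pi * pk for pk in p[1:]]

probs = zero_inflated_probs(pi=0.7, cutpoints=[-1.0, 0.5, 2.0])
```

    The first category's probability is inflated above what the proportional-odds model alone would give, which is exactly the excess-zeros behavior the likelihood ratio test in the paper checks for against the plain proportional-odds model.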

  11. Bayesian Nonparametric Ordination for the Analysis of Microbial Communities.

    PubMed

    Ren, Boyu; Bacallado, Sergio; Favaro, Stefano; Holmes, Susan; Trippa, Lorenzo

    2017-01-01

    Human microbiome studies use sequencing technologies to measure the abundance of bacterial species or Operational Taxonomic Units (OTUs) in samples of biological material. Typically the data are organized in contingency tables with OTU counts across heterogeneous biological samples. In the microbial ecology community, ordination methods are frequently used to investigate latent factors or clusters that capture and describe variations of OTU counts across biological samples. It remains important to evaluate how uncertainty in estimates of each biological sample's microbial distribution propagates to ordination analyses, including visualization of clusters and projections of biological samples on low dimensional spaces. We propose a Bayesian analysis for dependent distributions to endow frequently used ordinations with estimates of uncertainty. A Bayesian nonparametric prior for dependent normalized random measures is constructed, which is marginally equivalent to the normalized generalized Gamma process, a well-known prior for nonparametric analyses. In our prior, the dependence and similarity between microbial distributions is represented by latent factors that concentrate in a low dimensional space. We use a shrinkage prior to tune the dimensionality of the latent factors. The resulting posterior samples of model parameters can be used to evaluate uncertainty in analyses routinely applied in microbiome studies. Specifically, by combining them with multivariate data analysis techniques we can visualize credible regions in ecological ordination plots. The characteristics of the proposed model are illustrated through a simulation study and applications in two microbiome datasets.

  12. Sensitivity of Particle Size in Discrete Element Method to Particle Gas Method (DEM_PGM) Coupling in Underbody Blast Simulations

    DTIC Science & Technology

    2016-06-12

    Particle Size in Discrete Element Method to Particle Gas Method (DEM_PGM) Coupling in Underbody Blast Simulations Venkatesh Babu, Kumar Kulkarni, Sanjay...buried in soil viz., (1) coupled discrete element & particle gas methods (DEM-PGM) and (2) Arbitrary Lagrangian-Eulerian (ALE), are investigated. The...DEM_PGM and identify the limitations/strengths compared to the ALE method. Discrete Element Method (DEM) can model individual particle directly, and

  13. Discrete Trials Teaching

    ERIC Educational Resources Information Center

    Ghezzi, Patrick M.

    2007-01-01

    The advantages of emphasizing discrete trials "teaching" over discrete trials "training" are presented first, followed by a discussion of discrete trials as a method of teaching that emerged historically--and as a matter of necessity for difficult learners such as those with autism--from discrete trials as a method for laboratory research. The…

  14. A method of power analysis based on piecewise discrete Fourier transform

    NASA Astrophysics Data System (ADS)

    Xin, Miaomiao; Zhang, Yanchi; Xie, Da

    2018-04-01

    The paper analyzes existing feature extraction methods, examining the characteristics of the discrete Fourier transform and of piecewise aggregation approximation. Combining the advantages of the two methods, a new piecewise discrete Fourier transform is proposed. The method is then used to analyze the lighting power of a large customer. Time series feature maps for four different cases are compared across the original data, the discrete Fourier transform, piecewise aggregation approximation, and the piecewise discrete Fourier transform. The new method reflects both the overall trend of the electricity usage and its internal changes.
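    The construction, as we read it, can be sketched as follows (segment length, number of kept coefficients, and the synthetic load series are all invented for illustration):

```python
import numpy as np

# Sketch of a piecewise discrete Fourier transform: split the series into
# fixed-length segments, DFT each segment, and keep only the leading
# coefficients, so per-segment detail and the overall level are both retained.
def piecewise_dft(series, segment_len, n_keep):
    n_seg = len(series) // segment_len
    segments = np.asarray(series[:n_seg * segment_len]).reshape(n_seg, segment_len)
    return np.fft.rfft(segments, axis=1)[:, :n_keep]

def reconstruct(coeffs, segment_len):
    full = np.zeros((coeffs.shape[0], segment_len // 2 + 1), dtype=complex)
    full[:, :coeffs.shape[1]] = coeffs
    return np.fft.irfft(full, n=segment_len, axis=1).ravel()

t = np.linspace(0.0, 1.0, 240, endpoint=False)
# One sinusoidal cycle per 24-sample segment plus a per-segment level shift.
load = 5.0 + 2.0 * np.sin(2 * np.pi * 10 * t) + np.repeat(0.1 * np.arange(10), 24)
approx = reconstruct(piecewise_dft(load, segment_len=24, n_keep=6), segment_len=24)
err = np.max(np.abs(approx - load))
print(err)   # exact here: the kept coefficients cover the level and the cycle
```

    Keeping a handful of coefficients per segment captures the slow trend (through the per-segment DC terms) while the higher kept bins carry the local oscillation.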

  15. Adopting a Patient-Centered Approach to Primary Outcome Analysis of Acute Stroke Trials by Use of a Utility-Weighted Modified Rankin Scale

    PubMed Central

    Chaisinanunkul, Napasri; Adeoye, Opeolu; Lewis, Roger J.; Grotta, James C.; Broderick, Joseph; Jovin, Tudor G.; Nogueira, Raul G.; Elm, Jordan; Graves, Todd; Berry, Scott; Lees, Kennedy R.; Barreto, Andrew D.; Saver, Jeffrey L.

    2015-01-01

    Background and Purpose Although the modified Rankin Scale (mRS) is the most commonly employed primary endpoint in acute stroke trials, its power is limited when analyzed in dichotomized fashion and its indication of effect size is challenging to interpret when analyzed ordinally. Weighting the seven Rankin levels by utilities may improve scale interpretability while preserving statistical power. Methods A utility-weighted mRS (UW-mRS) was derived by averaging values from time-tradeoff (patient centered) and person-tradeoff (clinician centered) studies. The UW-mRS, standard ordinal mRS, and dichotomized mRS were applied to 11 trials or meta-analyses of acute stroke treatments, including lytic, endovascular reperfusion, blood pressure moderation, and hemicraniectomy interventions. Results Utility values were: mRS 0 = 1.0; mRS 1 = 0.91; mRS 2 = 0.76; mRS 3 = 0.65; mRS 4 = 0.33; mRS 5 and 6 = 0. For trials with unidirectional treatment effects, the UW-mRS paralleled the ordinal mRS and outperformed dichotomous mRS analyses. Both the UW-mRS and the ordinal mRS were statistically significant in six of eight unidirectional effect trials, while dichotomous analyses were statistically significant in two to four of eight. In bidirectional effect trials, both the UW-mRS and ordinal tests captured the divergent treatment effects by showing neutral results, whereas some dichotomized analyses showed positive results. Mean utility differences in trials with statistically significant positive results ranged from 0.026 to 0.249. Conclusion A utility-weighted mRS performs similarly to the standard ordinal mRS in detecting treatment effects in actual stroke trials and ensures the quantitative outcome is a valid reflection of patient-centered benefits. PMID:26138130
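    Applying the utility weights reported in the abstract to arm-level mRS outcomes illustrates the computation (the two outcome distributions below are invented for illustration):

```python
# Sketch of a utility-weighted mRS analysis: each patient's mRS grade is
# mapped to a utility (weights taken from the abstract) and treatment arms
# are compared on mean utility.
UTILITY = {0: 1.0, 1: 0.91, 2: 0.76, 3: 0.65, 4: 0.33, 5: 0.0, 6: 0.0}

def mean_utility(mrs_grades):
    return sum(UTILITY[g] for g in mrs_grades) / len(mrs_grades)

# Hypothetical outcome distributions, for illustration only.
treatment = [0, 0, 1, 1, 2, 2, 3, 4, 5, 6]
control = [0, 1, 2, 2, 3, 3, 4, 4, 5, 6]
diff = mean_utility(treatment) - mean_utility(control)
print(round(diff, 3))
```

    The resulting mean utility difference is directly interpretable as a quality-of-life gain, which is the interpretability advantage the abstract claims over a raw ordinal shift statistic.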

  16. London Measure of Unplanned Pregnancy: guidance for its use as an outcome measure

    PubMed Central

    Hall, Jennifer A; Barrett, Geraldine; Copas, Andrew; Stephenson, Judith

    2017-01-01

    Background The London Measure of Unplanned Pregnancy (LMUP) is a psychometrically validated measure of the degree of intention of a current or recent pregnancy. The LMUP is increasingly being used worldwide, and can be used to evaluate family planning or preconception care programs. However, beyond recommending the use of the full LMUP scale, there is no published guidance on how to use the LMUP as an outcome measure. Ordinal logistic regression has been recommended informally, but studies published to date have all used binary logistic regression and dichotomized the scale at different cut points. There is thus a need for evidence-based guidance to provide a standardized methodology for multivariate analysis and to enable comparison of results. This paper makes recommendations for the regression method for analysis of the LMUP as an outcome measure. Materials and methods Data collected from 4,244 pregnant women in Malawi were used to compare five regression methods: linear, logistic with two cut points, and ordinal logistic with either the full or grouped LMUP score. The recommendations were then tested on the original UK LMUP data. Results There were small but no important differences in the findings across the regression models. Logistic regression resulted in the largest loss of information, and assumptions were violated for the linear and ordinal logistic regression. Consequently, robust standard errors were used for linear regression and a partial proportional odds ordinal logistic regression model attempted. The latter could only be fitted for grouped LMUP score. Conclusion We recommend the linear regression model with robust standard errors to make full use of the LMUP score when analyzed as an outcome measure. Ordinal logistic regression could be considered, but a partial proportional odds model with grouped LMUP score may be required. Logistic regression is the least-favored option, due to the loss of information. 
For logistic regression, the cut point for un/planned pregnancy should be between nine and ten. These recommendations will standardize the analysis of LMUP data and enhance comparability of results across studies. PMID:28435343

  17. Comparison of two Galerkin quadrature methods

    DOE PAGES

    Morel, Jim E.; Warsa, James; Franke, Brian C.; ...

    2017-02-21

    Here, we compare two methods for generating Galerkin quadratures. In method 1, the standard S_N method is used to generate the moment-to-discrete matrix and the discrete-to-moment matrix is generated by inverting the moment-to-discrete matrix. This is a particular form of the original Galerkin quadrature method. In method 2, which we introduce here, the standard S_N method is used to generate the discrete-to-moment matrix and the moment-to-discrete matrix is generated by inverting the discrete-to-moment matrix. With an N-point quadrature, method 1 has the advantage that it preserves N eigenvalues and N eigenvectors of the scattering operator in a pointwise sense. With an N-point quadrature, method 2 has the advantage that it generates consistent angular moment equations from the corresponding S_N equations while preserving N eigenvalues of the scattering operator. Our computational results indicate that these two methods are quite comparable for the test problem considered.
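    For a 1D Gauss-Legendre S_N quadrature the two constructions can be sketched as follows. In this special case the standard discrete-to-moment and moment-to-discrete operators are already exact inverses, so methods 1 and 2 coincide; they differ for quadratures that do not integrate the moment products exactly:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# Sketch of the two Galerkin quadrature constructions for a 1D S_N
# quadrature with Gauss-Legendre points mu_m and weights w_m.
N = 8
mu, w = leggauss(N)
P = np.array([Legendre.basis(l)(mu) for l in range(N)])  # P[l, m] = P_l(mu_m)

# Standard S_N operators: moment-to-discrete M and discrete-to-moment D.
M_std = np.array([(2 * l + 1) / 2 * P[l] for l in range(N)]).T  # (points, moments)
D_std = P * w                                                   # (moments, points)

# Method 1: keep the standard M, define D by inversion.
D1 = np.linalg.inv(M_std)
# Method 2: keep the standard D, define M by inversion.
M2 = np.linalg.inv(D_std)

# For 1D Gauss-Legendre the standard pair is already mutually inverse,
# so both methods reproduce the standard operators.
print(np.allclose(D1, D_std), np.allclose(M2, M_std))
```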

  19. Squeezing Interval Change From Ordinal Panel Data: Latent Growth Curves With Ordinal Outcomes

    ERIC Educational Resources Information Center

    Mehta, Paras D.; Neale, Michael C.; Flay, Brian R.

    2004-01-01

    A didactic on latent growth curve modeling for ordinal outcomes is presented. The conceptual aspects of modeling growth with ordinal variables and the notion of threshold invariance are illustrated graphically using a hypothetical example. The ordinal growth model is described in terms of 3 nested models: (a) multivariate normality of the…

  1. Coexistence of cyclic (CH3OH)2(H2O)8 heterodecamer and acyclic water trimer in the channels of silver-azelate framework

    NASA Astrophysics Data System (ADS)

    Luo, Geng-Geng; Zhu, Rui-Min; He, Wei-Jun; Li, Ming-Zhi; Zhao, Qing-Hua; Li, Dong-Xu; Dai, Jing-Cao

    2012-08-01

    Flexible azelaic acid (H2aze) and 1,3-bis(4-pyridyl)propane (bpp) react ultrasonically with silver(I) oxide, generating a new metal-organic framework [Ag2(bpp)2(aze)·7H2O·CH3OH]n (1) that forms a 3D supramolecular structure through H-bonding interactions between solvent molecules and carboxylate O atoms with void spaces. Two kinds of solvent clusters, a discrete cyclic (CH3OH)2(H2O)8 heterodecamer and an acyclic water trimer, occupy the channels in the structure. Furthermore, 1 exhibits strong photoluminescence maximized at 500 nm upon 350 nm excitation at room temperature, whose CIE chromaticity coordinates (x = 0.28, y = 0.44) are close to the edge of the green component.

  2. Parallel deterministic transport sweeps of structured and unstructured meshes with overloaded mesh decompositions

    DOE PAGES

    Pautz, Shawn D.; Bailey, Teresa S.

    2016-11-29

    Here, the efficiency of discrete ordinates transport sweeps depends on the scheduling algorithm, the domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10^5 processor cores.

  3. Measured and calculated fast neutron spectra in a depleted uranium and lithium hydride shielded reactor

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.; Mueller, R. A.

    1973-01-01

    Measurements of MeV neutron spectra were made at the surface of a lithium hydride and depleted uranium shielded reactor. Four shield configurations were considered; these were assembled progressively with cylindrical shells of 5-centimeter-thick depleted uranium, 13-centimeter-thick lithium hydride, 5-centimeter-thick depleted uranium, 13-centimeter-thick lithium hydride, 5-centimeter-thick depleted uranium, and 3-centimeter-thick depleted uranium. Measurements were made with an NE-218 scintillation spectrometer; proton pulse height distributions were differentiated to obtain neutron spectra. Calculations were made using the two-dimensional discrete ordinates code DOT and ENDF/B (version 3) cross sections. Good agreement between measured and calculated spectral shape was observed. Absolute measured and calculated fluxes were within 50 percent of one another; observed discrepancies in absolute flux may be due to cross section errors.

  4. Deregulation, Distrust, and Democracy: State and Local Action to Ensure Equitable Access to Healthy, Sustainably Produced Food.

    PubMed

    Wiley, Lindsay F

    2015-01-01

    Environmental, public health, alternative food, and food justice advocates are working together to achieve incremental agricultural subsidy and nutrition assistance reforms that increase access to fresh fruits and vegetables. When it comes to targeting food and beverage products for increased regulation and decreased consumption, however, the priorities of various food reform movements diverge. This article argues that foundational legal issues, including preemption of state and local authority to protect the public's health and welfare, increasing First Amendment protection for commercial speech, and eroding judicial deference to legislative policy judgments, present a more promising avenue for collaboration across movements than discrete food reform priorities around issues like sugary drinks, genetic modification, or organics. Using the Vermont Genetically Modified Organism (GMO) Labeling Act litigation, the Kauai GMO Cultivation Ordinance litigation, the New York City Sugary Drinks Portion Rule litigation, and the Cleveland Trans Fat Ban litigation as case studies, I discuss the foundational legal challenges faced by diverse food reformers, even when their discrete reform priorities diverge. I also explore the broader implications of cooperation among groups that respond differently to the "irrationalities" (from the public health perspective) or "values" (from the environmental and alternative food perspective) that permeate public risk perception for democratic governance in the face of scientific uncertainty.

  5. Development and preliminary validation of a questionnaire to measure satisfaction with home care in Greece: an exploratory factor analysis of polychoric correlations

    PubMed Central

    2010-01-01

    Background The primary aim of this study was to develop and psychometrically test a Greek-language instrument for measuring satisfaction with home care. The first empirical evidence about the level of satisfaction with these services in Greece is also provided. Methods The questionnaire resulted from literature search, on-site observation and cognitive interviews. It was applied in 2006 to a sample of 201 enrollees of five home care programs in the city of Thessaloniki and contains 31 items that measure satisfaction with individual service attributes and are expressed on a 5-point Likert scale. The latter has been usually considered in practice as an interval scale, although it is in principle ordinal. We thus treated the variable as an ordinal one, but also employed the traditional approach in order to compare the findings. Our analysis was therefore based on ordinal measures such as the polychoric correlation, Kendall's Tau b coefficient and ordinal Cronbach's alpha. Exploratory factor analysis was followed by an assessment of internal consistency reliability, test-retest reliability, construct validity and sensitivity. Results Analyses with ordinal and interval scale measures produced in essence very similar results and identified four multi-item scales. Three of these were found to be reliable and valid: socioeconomic change, staff skills and attitudes and service appropriateness. A fourth dimension -service planning- had lower internal consistency reliability and yet very satisfactory test-retest reliability, construct validity and floor and ceiling effects. The global satisfaction scale created was also quite reliable. Overall, participants were satisfied -yet not very satisfied- with home care services. More room for improvement seems to exist for the socio-economic and planning aspects of care and less for staff skills and attitudes and appropriateness of provided services. 
Conclusions The methods developed seem to be a promising tool for the measurement of home care satisfaction in Greece. PMID:20602759

  6. A single camera roentgen stereophotogrammetry method for static displacement analysis.

    PubMed

    Gussekloo, S W; Janssen, B A; George Vosselman, M; Bout, R G

    2000-06-01

    A new method to quantify motion or deformation of bony structures has been developed, since quantification is often difficult due to overlying tissue and the currently used roentgen stereophotogrammetry method requires significant investment. In our method, a single stationary roentgen source is used, as opposed to the usual two, which, in combination with a fixed radiogram cassette holder, forms a camera with constant interior orientation. By rotating the experimental object, it is possible to achieve a sufficient angle between the various viewing directions, enabling photogrammetric calculations. The photogrammetric procedure was performed on digitised radiograms and involved template matching to increase accuracy. Co-ordinates of spherical markers in the head of a bird (Rhea americana) were calculated with an accuracy of 0.12 mm. When these co-ordinates were used in a deformation analysis, relocations of about 0.5 mm could be accurately determined.

  7. Causal analysis of ordinal treatments and binary outcomes under truncation by death.

    PubMed

    Wang, Linbo; Richardson, Thomas S; Zhou, Xiao-Hua

    2017-06-01

    It is common that in multi-arm randomized trials, the outcome of interest is "truncated by death," meaning that it is only observed or well-defined conditioning on an intermediate outcome. In this case, in addition to pairwise contrasts, the joint inference for all treatment arms is also of interest. Under a monotonicity assumption we present methods for both pairwise and joint causal analyses of ordinal treatments and binary outcomes in presence of truncation by death. We illustrate via examples the appropriateness of our assumptions in different scientific contexts.

  8. An approach to solve group-decision-making problems with ordinal interval numbers.

    PubMed

    Fan, Zhi-Ping; Liu, Yang

    2010-10-01

    The ordinal interval number is a form of uncertain preference information in group decision making (GDM), while it is seldom discussed in the existing research. This paper investigates how the ranking order of alternatives is determined based on preference information of ordinal interval numbers in GDM problems. When ranking a large quantity of ordinal interval numbers, the efficiency and accuracy of the ranking process are critical. A new approach is proposed to rank alternatives using ordinal interval numbers when every ranking ordinal in an ordinal interval number is thought to be uniformly and independently distributed in its interval. First, we give the definition of possibility degree on comparing two ordinal interval numbers and the related theory analysis. Then, to rank alternatives, by comparing multiple ordinal interval numbers, a collective expectation possibility degree matrix on pairwise comparisons of alternatives is built, and an optimization model based on this matrix is constructed. Furthermore, an algorithm is also presented to rank alternatives by solving the model. Finally, two examples are used to illustrate the use of the proposed approach.
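    The possibility degree under the paper's uniformity assumption can be sketched by direct enumeration (the exact definition is assumed here: the probability that one alternative ranks strictly better, plus half the probability of a tie):

```python
from fractions import Fraction
from itertools import product

# Sketch of the possibility degree for comparing two ordinal interval numbers,
# assuming each ranking ordinal is uniformly and independently distributed
# over its (inclusive, integer) interval; rank 1 is best.
def possibility_degree(a, b):
    pairs = list(product(range(a[0], a[1] + 1), range(b[0], b[1] + 1)))
    better = sum(1 for x, y in pairs if x < y)   # lower rank = better
    ties = sum(1 for x, y in pairs if x == y)
    return Fraction(better, len(pairs)) + Fraction(ties, 2 * len(pairs))

print(possibility_degree((1, 2), (2, 3)))        # -> 7/8: a usually ranks ahead
```

    Exact rational arithmetic keeps the resulting pairwise-comparison matrix free of rounding error; for wide intervals a closed-form count would replace the enumeration.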

  9. [A correction method of baseline drift of discrete spectrum of NIR].

    PubMed

    Hu, Ai-Qin; Yuan, Hong-Fu; Song, Chun-Feng; Li, Xiao-Yu

    2014-10-01

    In the present paper, a new correction method of baseline drift of discrete spectrum is proposed by combination of cubic spline interpolation and first order derivative. A fitting spectrum is constructed by cubic spline interpolation, using the datum in discrete spectrum as interpolation nodes. The fitting spectrum is differentiable. First order derivative is applied to the fitting spectrum to calculate derivative spectrum. The spectral wavelengths which are the same as the original discrete spectrum were taken out from the derivative spectrum to constitute the first derivative spectra of the discrete spectra, thereby to correct the baseline drift of the discrete spectra. The effects of the new method were demonstrated by comparison of the performances of multivariate models built using original spectra, direct differential spectra and the spectra pretreated by the new method. The results show that negative effects on the performance of multivariate model caused by baseline drift of discrete spectra can be effectively eliminated by the new method.
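    The correction can be sketched as follows (synthetic spectrum and drift; scipy's CubicSpline stands in for the paper's cubic spline interpolation):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Sketch of the baseline-drift correction: fit a cubic spline through the
# discrete spectrum, differentiate the smooth spline analytically, and sample
# the derivative back at the original wavelengths. An additive slow drift is
# largely removed by the first-order differentiation.
wavelengths = np.linspace(1000.0, 2500.0, 60)              # discrete NIR grid, nm
signal = np.exp(-0.5 * ((wavelengths - 1700.0) / 60.0) ** 2)  # an absorption band
drift = 0.002 * (wavelengths - 1000.0) + 0.3               # linear baseline drift

spline = CubicSpline(wavelengths, signal + drift)
corrected = spline.derivative()(wavelengths)               # first-derivative spectrum

# The drift contributes only its constant slope to the derivative spectrum.
spline_clean = CubicSpline(wavelengths, signal)
residual = corrected - spline_clean.derivative()(wavelengths)
err = np.max(np.abs(residual - 0.002))
print(err)
```

    Because the spline is differentiable everywhere, the derivative spectrum avoids the noise amplification of naive finite differencing of the raw discrete points.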

  10. Discontinuous Finite Element Quasidiffusion Methods

    DOE PAGES

    Anistratov, Dmitriy Yurievich; Warsa, James S.

    2018-05-21

    Here in this paper, two-level methods for solving transport problems in one-dimensional slab geometry based on the quasi-diffusion (QD) method are developed. A linear discontinuous finite element method (LDFEM) is derived for the spatial discretization of the low-order QD (LOQD) equations. It involves special interface conditions at the cell edges based on the idea of QD boundary conditions (BCs). We consider different kinds of QD BCs to formulate the necessary cell-interface conditions. We develop two-level methods with independent discretization of the high-order transport equation and LOQD equations, where the transport equation is discretized using the method of characteristics and the LDFEM is applied to the LOQD equations. We also formulate closures that lead to the discretization consistent with a LDFEM discretization of the transport equation. The proposed methods are studied by means of test problems formulated with the method of manufactured solutions. Numerical experiments are presented demonstrating the performance of the proposed methods. Lastly, we also show that the method with independent discretization has the asymptotic diffusion limit.

  12. On the discretization and control of an SEIR epidemic model with a periodic impulsive vaccination

    NASA Astrophysics Data System (ADS)

    Alonso-Quesada, S.; De la Sen, M.; Ibeas, A.

    2017-01-01

    This paper deals with the discretization and control of an SEIR epidemic model. Such a model describes the transmission of an infectious disease among a time-varying host population. The model assumes mortality from causes related to the disease. Our study proposes a discretization method including a free-design parameter to be adjusted for guaranteeing the positivity of the resulting discrete-time model. Such a method provides a discrete-time model close to the continuous-time one without the need for the sampling period to be as small as other commonly used discretization methods require. This fact makes possible the design of impulsive vaccination control strategies with less burden of measurements and related computations if one uses the proposed instead of other discretization methods. The proposed discretization method and the impulsive vaccination strategy designed on the resulting discretized model are the main novelties of the paper. The paper includes (i) the analysis of the positivity of the obtained discrete-time SEIR model, (ii) the study of stability of the disease-free equilibrium point of a normalized version of such a discrete-time model and (iii) the existence and the attractivity of a globally asymptotically stable disease-free periodic solution under a periodic impulsive vaccination. Concretely, the exposed and infectious subpopulations asymptotically converge to zero as time tends to infinity while the normalized subpopulations of susceptible and recovered by immunization individuals oscillate in the context of such a solution. Finally, a numerical example illustrates the theoretic results.
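    A simplified version of the setting can be sketched as follows (an explicit positivity-preserving discretization of an SEIR model with a periodic impulsive vaccination; parameter values are invented, and the paper's mortality terms and free-design discretization parameter are omitted for brevity):

```python
# Explicit SEIR step; h is chosen small enough that every state stays >= 0
# (in particular, h * beta <= 1 guarantees new infections never exceed S).
def seir_step(S, E, I, R, h, beta=0.5, sigma=0.2, gamma=0.1):
    N = S + E + I + R
    new_inf = h * beta * S * I / N
    return (S - new_inf,
            E + new_inf - h * sigma * E,
            I + h * sigma * E - h * gamma * I,
            R + h * gamma * I)

state = (990.0, 5.0, 5.0, 0.0)
for k in range(200):
    state = seir_step(*state, h=0.5)
    if (k + 1) % 40 == 0:                 # periodic impulsive vaccination: S -> R
        S, E, I, R = state
        state = (0.7 * S, E, I, R + 0.3 * S)

print(min(state) >= 0.0, round(sum(state), 6))  # positivity and conservation
```

    The impulsive vaccination appears as an instantaneous jump of a fraction of the susceptibles into the recovered class between integration steps, mirroring the periodic impulsive strategy analyzed in the paper.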

  13. City curfew ordinances and teenage motor vehicle injury.

    PubMed

    Preusser, D F; Williams, A F; Lund, A K; Zador, P L

    1990-08-01

    Several U.S. cities have curfew ordinances that limit the late night activities of minor teenagers in public places including highways. Detroit, Cleveland, and Columbus, which have curfew ordinances, were compared to Cincinnati, which does not have such an ordinance. The curfew ordinances were associated with a 23% reduction in motor vehicle related injury for 13- to 17-year-olds as passengers, drivers, pedestrians, or bicyclists during the curfew hours. It was concluded that city curfew ordinances, like the statewide driving curfews studied in other states, can reduce motor vehicle injury to teenagers during the particularly hazardous late night hours.

  14. On reinitializing level set functions

    NASA Astrophysics Data System (ADS)

    Min, Chohong

    2010-04-01

    In this paper, we consider reinitializing level set functions through the equation ϕ_t + sgn(ϕ0)(‖∇ϕ‖ − 1) = 0 [16]. The method of Russo and Smereka [11] is taken in the spatial discretization of the equation. The spatial discretization is, simply speaking, the second order ENO finite difference with subcell resolution near the interface. Our main interest is in the temporal discretization of the equation. We compare three temporal discretizations: the second order Runge-Kutta method, the forward Euler method, and a Gauss-Seidel iteration of the forward Euler method. Since the time in the equation is fictitious, one would hypothesize that all the temporal discretizations produce the same stationary state. Since the absolute stability region of the forward Euler method is not wide enough to include all the eigenvalues of the linearized semi-discrete system of the second order ENO spatial discretization, one would also hypothesize that the forward Euler temporal discretization should invoke numerical instability. Our results in this paper contradict both hypotheses. The Runge-Kutta and Gauss-Seidel methods obtain second order accuracy, and the forward Euler method converges with order between one and two. Examining all their properties, we conclude that the Gauss-Seidel method is the best among the three. Compared to the Runge-Kutta method, it is twice as fast and requires half the memory for the same accuracy.
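    A minimal 1D version of this reinitialization, using forward Euler in pseudo-time with first-order Godunov upwinding (not the paper's second-order ENO scheme with subcell resolution), can be sketched as:

```python
import numpy as np

# Relax phi toward a signed distance function by iterating
# phi_t + sgn(phi0) (|phi_x| - 1) = 0 in pseudo-time.
n = 101
dx = 0.02
x = (np.arange(n) - n // 2) * dx
phi0 = 3.0 * x                 # wrong slope, correct zero crossing at x = 0
phi = phi0.copy()
s = np.sign(phi0)              # sgn(phi0); exactly 0 at the interface node

dt = 0.5 * dx                  # CFL-limited pseudo-time step
for _ in range(400):
    dminus = np.diff(phi, prepend=phi[0]) / dx   # backward differences
    dplus = np.diff(phi, append=phi[-1]) / dx    # forward differences
    # Godunov upwinding for |phi_x|, switching on the sign of sgn(phi0).
    grad = np.where(
        s > 0,
        np.sqrt(np.maximum(np.maximum(dminus, 0.0) ** 2,
                           np.minimum(dplus, 0.0) ** 2)),
        np.sqrt(np.maximum(np.minimum(dminus, 0.0) ** 2,
                           np.maximum(dplus, 0.0) ** 2)),
    )
    phi -= dt * s * (grad - 1.0)

err = np.max(np.abs(np.abs(np.diff(phi) / dx) - 1.0))
print(err)                     # |phi_x| has relaxed to 1: a signed distance
```

    The interface node (where sgn(ϕ0) = 0) never moves, which is the discrete analogue of the subcell-resolution idea of keeping the zero level set fixed during reinitialization.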

  15. Investigation into discretization methods of the six-parameter Iwan model

    NASA Astrophysics Data System (ADS)

    Li, Yikun; Hao, Zhiming; Feng, Jiaquan; Zhang, Dingguo

    2017-02-01

    Iwan model is widely applied for the purpose of describing nonlinear mechanisms of jointed structures. In this paper, parameter identification procedures of the six-parameter Iwan model based on joint experiments with different preload techniques are performed. Four kinds of discretization methods deduced from stiffness equation of the six-parameter Iwan model are provided, which can be used to discretize the integral-form Iwan model into a sum of finite Jenkins elements. In finite element simulation, the influences of discretization methods and numbers of Jenkins elements on computing accuracy are discussed. Simulation results indicate that a higher accuracy can be obtained with larger numbers of Jenkins elements. It is also shown that compared with other three kinds of discretization methods, the geometric series discretization based on stiffness provides the highest computing accuracy.
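    A generic sketch of discretizing an Iwan model into a finite sum of Jenkins (spring-slider) elements follows. The slider strengths below follow a geometric series, echoing the geometric series discretization the paper finds most accurate, but the six-parameter model's exact strength distribution is not reproduced here and all parameter values are invented:

```python
import numpy as np

n_el = 50
k = 1.0                                   # common element stiffness
f_y = 0.01 * 1.15 ** np.arange(n_el)      # geometric series of slider strengths

def iwan_force(displacement_history):
    """Total force of the parallel Jenkins elements along a displacement path."""
    slider = np.zeros(n_el)               # slider (plastic) displacements
    forces = []
    for u in displacement_history:
        f_el = k * (u - slider)
        yielded = np.abs(f_el) > f_y
        # Return-map yielded sliders so each element force sits on its limit.
        slider[yielded] = u - np.sign(f_el[yielded]) * f_y[yielded] / k
        forces.append((k * (u - slider)).sum())
    return np.array(forces)

u = np.concatenate([np.linspace(0.0, 1.0, 50), np.linspace(1.0, -1.0, 100)])
f = iwan_force(u)
print(f[49], f[-1])    # peak force at u = 1, then hysteretic unloading to u = -1
```

    Progressive yielding of the weaker sliders produces the joint-like hysteresis, and refining the discretization (more Jenkins elements) converges toward the integral-form Iwan model, consistent with the accuracy trend reported in the abstract.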

  16. Gap-minimal systems of notations and the constructible hierarchy

    NASA Technical Reports Server (NTRS)

    Lucian, M. L.

    1972-01-01

    If a constructibly countable ordinal alpha is a gap ordinal, then the order type of the set of index ordinals smaller than alpha is exactly alpha. The gap ordinals are the only points of discontinuity of a certain ordinal-valued function. The notion of gap minimality for well ordered systems of notations is defined, and the existence of gap-minimal systems of notations of arbitrarily large constructibly countable length is established.

  17. 78 FR 54670 - Miami Tribe of Oklahoma-Liquor Control Ordinance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-05

    ... Tribe of Oklahoma--Liquor Control Ordinance AGENCY: Bureau of Indian Affairs, Interior. ACTION: Notice. SUMMARY: This notice publishes the Miami Tribe of Oklahoma--Liquor Control Ordinance. This Ordinance... Oklahoma, increases the ability of the tribal government to control the distribution and possession of...

  18. Tax revenue in Mississippi communities following implementation of smoke-free ordinances: an examination of tourism and economic development tax revenues.

    PubMed

    McMillen, Robert; Shackelford, Signe

    2012-10-01

    There is no safe level of exposure to tobacco smoke. More than 60 Mississippi communities have passed smoke-free ordinances in the past six years. Opponents claim that these ordinances harm local businesses. Mississippi law allows municipalities to place a tourism and economic development (TED) tax on local restaurants and hotels/motels. The objective of this study is to examine the impact of these ordinances on TED tax revenues. This study applies a pre/post quasi-experimental design to compare TED tax revenue before and after implementing ordinances. Descriptive analyses indicated that inflation-adjusted tax revenues increased during the 12 months following implementation of smoke-free ordinances while there was no change in aggregated control communities. Multivariate fixed-effects analyses found no statistically significant effect of smoke-free ordinances on hospitality tax revenue. No evidence was found that smoke-free ordinances have an adverse effect on the local hospitality industry.

  19. First results of an Investigation of Sulfur Dioxide in the Ultraviolet from Pioneer Venus through Venus Express

    NASA Astrophysics Data System (ADS)

    McGouldrick, Kevin; Molaverdikhani, K.; Esposito, L. W.; Pankratz, C. K.

    2010-10-01

    The Laboratory for Atmospheric and Space Physics is carrying out a project to restore and preserve data products from several past missions for archival and use by the scientific community. This project includes the restoration of data from Mariner 6/7, Pioneer Venus, Voyager 1/2, and Galileo. Here, we present initial results of this project that involve Pioneer Venus Orbiter Ultraviolet Spectrometer (PVO UVS) data. Using the Discrete Ordinate Method for Radiative Transfer (DISORT), we generate a suite of models for the three free parameters in the upper atmosphere of Venus in which we are interested: sulfur dioxide abundance at 40mb, scale height of sulfur dioxide, and the typical radius of the upper haze particles (assumed to be composed of 84.5% sulfuric acid). We calculate best fits to our radiative transfer model results for multi-spectral images taken with PVO UVS, as well as the 'visible' channel (includes wavelengths from 290nm to about 1000nm) of the mapping mode of the Visible and Infrared Thermal Imaging Spectrometer (VIRTIS-M-Vis) on the Venus Express spacecraft, currently orbiting Venus. This work is funded through the NASA Planetary Mission Data Analysis Program, NNH08ZDA001N.

  20. Numerical study of entropy generation due to coupled laminar and turbulent mixed convection and thermal radiation in an enclosure filled with a semitransparent medium.

    PubMed

    Goodarzi, M; Safaei, M R; Oztop, Hakan F; Karimipour, A; Sadeghinezhad, E; Dahari, M; Kazi, S N; Jomhari, N

    2014-01-01

    The effect of radiation on laminar and turbulent mixed convection heat transfer of a semitransparent medium in a square enclosure was studied numerically using the Finite Volume Method. A structured mesh and the SIMPLE algorithm were utilized to model the governing equations. Turbulence and radiation were modeled with the RNG k-ε model and Discrete Ordinates (DO) model, respectively. For Richardson numbers ranging from 0.1 to 10, simulations were performed for Rayleigh numbers in laminar flow (10⁴) and turbulent flow (10⁸). The model predictions were validated against previous numerical studies and good agreement was observed. The simulated results indicate that for laminar and turbulent motion states, computing the radiation heat transfer significantly enhanced the Nusselt number (Nu) as well as the heat transfer coefficient. Higher Richardson numbers did not noticeably affect the average Nusselt number and corresponding heat transfer rate. Besides, as expected, the heat transfer rate for the turbulent flow regime surpassed that in the laminar regime. The simulations additionally demonstrated that for a constant Richardson number, computing the radiation heat transfer majorly affected the heat transfer structure in the enclosure; however, its impact on the fluid flow structure was negligible.
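
    The Richardson-number sweep described above can be set up at fixed Rayleigh number through the relations Ri = Gr/Re² and Gr = Ra/Pr. A small sketch; the Prandtl number and values below are illustrative, not the study's:

```python
import math

def reynolds_for(ra, pr, ri):
    """Reynolds number giving a target Richardson number Ri = Gr / Re**2
    at fixed Rayleigh number, using Gr = Ra / Pr. Values are illustrative;
    the study's actual Prandtl number and geometry are not reproduced."""
    gr = ra / pr
    return math.sqrt(gr / ri)

# Laminar case Ra = 1e4: sweeping Ri from 0.1 up to 10 means sweeping Re down
re_forced = reynolds_for(1e4, 0.71, 0.1)    # nearly forced convection
re_natural = reynolds_for(1e4, 0.71, 10.0)  # buoyancy-dominated
```

    Higher Ri at fixed Ra thus corresponds to weaker forced motion, which is why the buoyancy-dominated end of the sweep behaves like natural convection.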

  1. Numerical Study of Entropy Generation due to Coupled Laminar and Turbulent Mixed Convection and Thermal Radiation in an Enclosure Filled with a Semitransparent Medium

    PubMed Central

    Goodarzi, M.; Safaei, M. R.; Oztop, Hakan F.; Karimipour, A.; Sadeghinezhad, E.; Dahari, M.; Kazi, S. N.; Jomhari, N.

    2014-01-01

    The effect of radiation on laminar and turbulent mixed convection heat transfer of a semitransparent medium in a square enclosure was studied numerically using the Finite Volume Method. A structured mesh and the SIMPLE algorithm were utilized to model the governing equations. Turbulence and radiation were modeled with the RNG k-ε model and Discrete Ordinates (DO) model, respectively. For Richardson numbers ranging from 0.1 to 10, simulations were performed for Rayleigh numbers in laminar flow (10⁴) and turbulent flow (10⁸). The model predictions were validated against previous numerical studies and good agreement was observed. The simulated results indicate that for laminar and turbulent motion states, computing the radiation heat transfer significantly enhanced the Nusselt number (Nu) as well as the heat transfer coefficient. Higher Richardson numbers did not noticeably affect the average Nusselt number and corresponding heat transfer rate. Besides, as expected, the heat transfer rate for the turbulent flow regime surpassed that in the laminar regime. The simulations additionally demonstrated that for a constant Richardson number, computing the radiation heat transfer majorly affected the heat transfer structure in the enclosure; however, its impact on the fluid flow structure was negligible. PMID:24778601

  2. A comparison of gray and non-gray modeling approaches to radiative transfer in pool fire simulations.

    PubMed

    Krishnamoorthy, Gautham

    2010-10-15

    Decoupled radiative heat transfer calculations of 30 cm-diameter toluene and heptane pool fires are performed employing the discrete ordinates method. The composition and temperature fields within the fires are created from detailed experimental measurements of soot volume fractions based on absorption and emission, temperature statistics and correlations found in the literature. The measured temperature variance data is utilized to compute the temperature self-correlation term for modeling turbulence-radiation interactions. In the toluene pool fire, the presence of cold soot near the fuel surface is found to suppress the average radiation feedback to the pool surface by 27%. The performances of four gray and three non-gray radiative property models for the gases are also compared. The average variations in radiative transfer predictions due to differences in the spectroscopic and experimental databases employed in the property model formulations are found to be between 10% and 20%. Clear differences between the gray and non-gray modeling strategies are seen when the mean beam length is computed based on traditionally employed geometric relations. Therefore, a correction to the mean beam length is proposed to improve the agreement between gray and non-gray modeling in simulations of open pool fires. 2010 Elsevier B.V. All rights reserved.
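
    The mean beam length referred to above is traditionally estimated from enclosure geometry alone. A minimal sketch of the standard geometric relations; the paper's proposed correction for open pool fires is not reproduced here:

```python
import math

def mean_beam_length(volume, area, corrected=True):
    """Geometric mean beam length of an enclosure: L0 = 4 V / A in the
    optically thin limit, and the traditionally used corrected value
    L = 0.9 * L0 = 3.6 V / A."""
    L0 = 4.0 * volume / area
    return 0.9 * L0 if corrected else L0

# Sphere of diameter D = 1: V / A = D / 6, so L0 = 2D/3 and L = 0.6 D
D = 1.0
V = math.pi * D ** 3 / 6.0
A = math.pi * D ** 2
```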

  3. Extension of the Bgl Broad Group Cross Section Library

    NASA Astrophysics Data System (ADS)

    Kirilova, Desislava; Belousov, Sergey; Ilieva, Krassimira

    2009-08-01

    The broad group cross-section libraries BUGLE and BGL are applied in reactor shielding calculations using the DOORS package, which is based on the discrete ordinates method and a multigroup approximation of the neutron cross-sections. The BUGLE and BGL libraries are problem oriented for the PWR and VVER reactor types, respectively. They were generated by collapsing the problem-independent fine group library VITAMIN-B6, applying one-dimensional PWR and VVER radial models of the reactor middle plane using the SCALE software package. The surveillance assemblies (SA) of VVER-1000/320 are located on the baffle above the reactor core upper edge, in a region where geometry and materials differ from those of the middle plane and where the neutron field gradient is very high, which results in a different neutron spectrum. Application of the aforementioned libraries to the neutron fluence calculation in the SA region could therefore introduce additional inaccuracy. This was the main reason to study the necessity of extending the BGL library with cross-sections appropriate for the SA region. A comparative analysis of the neutron spectra in the SA region, calculated with the VITAMIN-B6 and BGL libraries using the two-dimensional code DORT, was performed to evaluate the applicability of BGL to SA calculations.

  4. 75 FR 65373 - Klamath Tribes Liquor Control Ordinance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-22

    ... DEPARTMENT OF THE INTERIOR Bureau of Indian Affairs Klamath Tribes Liquor Control Ordinance AGENCY... certification of the amendment to the Klamath Tribes Liquor Control Ordinance. The first Ordinance was published... and controls the sale, possession and distribution of liquor within the tribal lands. The tribal lands...

  5. Ordinal probability effect measures for group comparisons in multinomial cumulative link models.

    PubMed

    Agresti, Alan; Kateri, Maria

    2017-03-01

    We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/√2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/√2)/[1+exp(β/√2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
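
    These closed-form measures are easy to evaluate. A minimal sketch, using the 1/√2 scaling of β that arises because the difference of two independent unit-variance latent errors has variance 2; the function name is illustrative:

```python
from math import erf, exp, sqrt

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ordinal_superiority(beta, link):
    """Gamma = P(Y1 > Y2) under a latent-variable cumulative link model
    with group effect beta. Probit and log-log are exact; logit is an
    approximation."""
    if link == "probit":
        return norm_cdf(beta / sqrt(2.0))
    if link == "loglog":
        return exp(beta) / (1.0 + exp(beta))
    if link == "logit":
        b = beta / sqrt(2.0)
        return exp(b) / (1.0 + exp(b))
    raise ValueError("unknown link: " + link)

# beta = 0 (no group effect) gives gamma = 0.5 for every link
```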

  6. Augmenting the Deliberative Method for Ranking Risks.

    PubMed

    Susel, Irving; Lasley, Trace; Montezemolo, Mark; Piper, Joel

    2016-01-01

    The Department of Homeland Security (DHS) characterized and prioritized the physical cross-border threats and hazards to the nation stemming from terrorism, market-driven illicit flows of people and goods (illegal immigration, narcotics, funds, counterfeits, and weaponry), and other nonmarket concerns (movement of diseases, pests, and invasive species). These threats and hazards pose a wide diversity of consequences with very different combinations of magnitudes and likelihoods, making it very challenging to prioritize them. This article presents the approach that was used at DHS to arrive at a consensus regarding the threats and hazards that stand out from the rest based on the overall risk they pose. Due to time constraints for the decision analysis, it was not feasible to apply multiattribute methodologies like multiattribute utility theory or the analytic hierarchy process. Using a holistic approach was considered, such as the deliberative method for ranking risks first published in this journal. However, an ordinal ranking alone does not indicate relative or absolute magnitude differences among the risks. Therefore, the use of the deliberative method for ranking risks is not sufficient for deciding whether there is a material difference between the top-ranked and bottom-ranked risks, let alone deciding what the stand-out risks are. To address this limitation of ordinal rankings, the deliberative method for ranking risks was augmented by adding an additional step to transform the ordinal ranking into a ratio scale ranking. This additional step enabled the selection of stand-out risks to help prioritize further analysis. © 2015 Society for Risk Analysis.

  7. Ordinality and the nature of symbolic numbers.

    PubMed

    Lyons, Ian M; Beilock, Sian L

    2013-10-23

    The view that representations of symbolic and nonsymbolic numbers are closely tied to one another is widespread. However, the link between symbolic and nonsymbolic numbers is almost always inferred from cardinal processing tasks. In the current work, we show that considering ordinality instead points to striking differences between symbolic and nonsymbolic numbers. Human behavioral and neural data show that ordinal processing of symbolic numbers (Are three Indo-Arabic numerals in numerical order?) is distinct from symbolic cardinal processing (Which of two numerals represents the greater quantity?) and nonsymbolic number processing (ordinal and cardinal judgments of dot-arrays). Behaviorally, distance-effects were reversed when assessing ordinality in symbolic numbers, but canonical distance-effects were observed for cardinal judgments of symbolic numbers and all nonsymbolic judgments. At the neural level, symbolic number-ordering was the only numerical task that did not show number-specific activity (greater than control) in the intraparietal sulcus. Only activity in left premotor cortex was specifically associated with symbolic number-ordering. For nonsymbolic numbers, activation in cognitive-control areas during ordinal processing and a high degree of overlap between ordinal and cardinal processing networks indicate that nonsymbolic ordinality is assessed via iterative cardinality judgments. This contrasts with a striking lack of neural overlap between ordinal and cardinal judgments anywhere in the brain for symbolic numbers, suggesting that symbolic number processing varies substantially with computational context. Ordinal processing sheds light on key differences between symbolic and nonsymbolic number processing both behaviorally and in the brain. Ordinality may prove important for understanding the power of representing numbers symbolically.

  8. Social Host Ordinances and Policies. Prevention Update

    ERIC Educational Resources Information Center

    Higher Education Center for Alcohol, Drug Abuse, and Violence Prevention, 2011

    2011-01-01

    Social host liability laws (also known as teen party ordinances, loud or unruly gathering ordinances, or response costs ordinances) target the location in which underage drinking takes place. Social host liability laws hold noncommercial individuals responsible for underage drinking events on property they own, lease, or otherwise control. They…

  9. 25 CFR 522.8 - Publication of class III ordinance and approval.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Section 522.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR APPROVAL OF CLASS II AND CLASS III ORDINANCES AND RESOLUTIONS SUBMISSION OF GAMING ORDINANCE OR RESOLUTION § 522.8 Publication of class III ordinance and approval. The Chairman shall publish a class III tribal gaming...

  10. 27 CFR 478.24 - Compilation of State laws and published ordinances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... and published ordinances. 478.24 Section 478.24 Alcohol, Tobacco Products, and Firearms BUREAU OF... published ordinances. (a) The Director shall annually revise and furnish Federal firearms licensees with a compilation of State laws and published ordinances which are relevant to the enforcement of this part. The...

  11. Characteristics of Community Newspaper Coverage of Tobacco Control and Its Relationship to the Passage of Tobacco Ordinances.

    PubMed

    Eckler, Petya; Rodgers, Shelly; Everett, Kevin

    2016-10-01

    To answer the call for more systematic surveillance, analysis and evaluation of tobacco news coverage, a 6-year content analysis of newspaper stories from Missouri was conducted to evaluate the presence of public health facts and characteristics of stories framed for or against tobacco control. The method was a content analysis of all Missouri newspapers (N = 381) from September 2006 to November 2011 for a total sample of 4711. Results were connected to the larger, societal context within which newspaper stories reside, i.e., towns that passed or did not pass a smoke-free ordinance during the project intervention. Results showed the majority of news stories were about tobacco control, which were mostly written at the local level, were episodic, and carried a positive slant toward tobacco control. However, there were more negative than positive headlines, and more negative editorials than non-editorials. Tobacco control stories used fewer public health facts than non-tobacco control stories. Towns with existing smoke-free ordinances had more tobacco control stories, and towns without smoke-free ordinances had fewer tobacco control stories and more non-tobacco control stories, suggesting a connection between news media coverage and the passage of smoke-free policies. We conclude that the tobacco industry may have had success in impacting news stories in no-ordinance cities by diverting attention from tobacco control to secondary topics, such as youth smoking, which meant stories had fewer public health facts and fewer positive health benefits in towns that may have needed these details most.

  12. A Framework to Reduce Infectious Disease Risk from Urban Poultry in the United States

    PubMed Central

    Tobin, Molly R.; Goldshear, Jesse L.; Price, Lance B.; Graham, Jay P.

    2015-01-01

    Objectives Backyard poultry ownership is increasingly common in U.S. cities and is regulated at the local level. Human contact with live poultry is a well-known risk for infection with zoonotic pathogens, notably Salmonella, yet the ability of local jurisdictions to reduce the risk of infectious disease transmission from poultry to humans is unstudied. We reviewed urban poultry ordinances in the United States and reported Salmonella outbreaks from backyard poultry to identify regulatory gaps in preventing zoonotic pathogen transmission. Based on this analysis, we propose regulatory guidelines for U.S. cities to reduce infectious disease risk from backyard poultry ownership. Methods We assessed local ordinances in the 150 most populous U.S. jurisdictions for content related to noncommercial poultry ownership using online resources and communications with government officials. We also performed a literature review using publicly available data sources to identify human infectious disease outbreaks caused by contact with backyard poultry. Results Of the cities reviewed, 93% (n=139) permit poultry in some capacity. Most urban poultry ordinances share common characteristics focused on reducing nuisance to neighbors. Ordinances do not address many pathways of transmission relevant to poultry-to-human transmission of pathogens, such as manure management. Conclusions To reduce the risk of pathogen exposure from backyard poultry, urban ordinances should incorporate the following seven components: limited flock size, composting of manure in sealed containers, prohibition of slaughter, required veterinary care for sick birds, appropriate disposal of dead birds, annual permits linked to consumer education, and a registry of poultry owners. PMID:26346104

  13. The energy-dependent electron loss model: backscattering and application to heterogeneous slab media.

    PubMed

    Lee, Tae Kyu; Sandison, George A

    2003-01-21

    Electron backscattering has been incorporated into the energy-dependent electron loss (EL) model and the resulting algorithm is applied to predict dose deposition in slab heterogeneous media. This algorithm utilizes a reflection coefficient from the interface that is computed on the basis of Goudsmit-Saunderson theory and an average energy for the backscattered electrons based on Everhart's theory. Predictions of dose deposition in slab heterogeneous media are compared to the Monte Carlo based dose planning method (DPM) and a numerical discrete ordinates method (DOM). The slab media studied comprised water/Pb, water/Al, water/bone, water/bone/water, and water/lung/water, with incident electron beam energies of 10 MeV and 18 MeV. The predicted dose enhancement due to backscattering is accurate to within 3% of dose maximum even for lead as the backscattering medium. Dose discrepancies at large depths beyond the interface were as high as 5% of dose maximum and we speculate that this error may be attributed to the EL model assuming a Gaussian energy distribution for the electrons at depth. The computational cost is low compared to Monte Carlo simulations making the EL model attractive as a fast dose engine for dose optimization algorithms. The predictive power of the algorithm demonstrates that the small angle scattering restriction on the EL model can be overcome while retaining dose calculation accuracy and requiring only one free variable, χ, in the algorithm to be determined in advance of calculation.

  14. Benchmark measurements and calculations of a 3-dimensional neutron streaming experiment

    NASA Astrophysics Data System (ADS)

    Barnett, D. A., Jr.

    1991-02-01

    An experimental assembly known as the Dog-Legged Void assembly was constructed to measure the effect of neutron streaming in iron and void regions. The primary purpose of the measurements was to provide benchmark data against which various neutron transport calculation tools could be compared. The measurements included neutron flux spectra at four locations and integral measurements at two locations in the iron streaming path, as well as integral measurements along several axial traverses. These data have been used in the verification of Oak Ridge National Laboratory's three-dimensional discrete ordinates code, TORT. For a base case calculation using one-half inch mesh spacing, finite difference spatial differencing, an S16 quadrature, and P1 cross sections in the MUFT multigroup structure, the calculated solution agreed with the spectral measurements to within 18 percent and with the integral measurements to within 24 percent. Variations on the base case using a fewgroup energy structure and P1 and P3 cross sections showed similar agreement. Calculations using a linear nodal spatial differencing scheme and fewgroup cross sections also showed similar agreement. For the same mesh size, the nodal method was seen to require 2.2 times as much CPU time as the finite difference method. A nodal calculation using a typical mesh spacing of 2 inches, which had approximately 32 times fewer mesh cells than the base case, agreed with the measurements to within 34 percent and yet required only 8 percent of the CPU time.
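
    A one-dimensional analogue of the S16 angular quadrature used in the base case can be sketched with Gauss-Legendre nodes; TORT itself uses level-symmetric quadrature sets in three dimensions, which are not reproduced here:

```python
import numpy as np

def sn_quadrature_1d(n):
    """Gauss-Legendre ordinates mu and weights w for a 1-D slab-geometry
    S_N sweep. A one-dimensional analogue only; three-dimensional codes
    use level-symmetric sets."""
    mu, w = np.polynomial.legendre.leggauss(n)
    return mu, w

mu, w = sn_quadrature_1d(16)
# n-point Gauss-Legendre integrates polynomials of degree 2n-1 exactly on [-1, 1]
```

    The weights sum to 2 (the measure of the interval) and reproduce the low-order angular moments exactly, which is the property discrete ordinates sweeps rely on.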

  15. Differences between sliding semi-landmark methods in geometric morphometrics, with an application to human craniofacial and dental variation

    PubMed Central

    Ivan Perez, S; Bernal, Valeria; Gonzalez, Paula N

    2006-01-01

    Over the last decade, geometric morphometric methods have been applied increasingly to the study of human form. When too few landmarks are available, outlines can be digitized as series of discrete points. The individual points must be slid along a tangential direction so as to remove tangential variation, because contours should be homologous from subject to subject whereas their individual points need not. This variation can be removed by minimizing either bending energy (BE) or Procrustes distance (D) with respect to a mean reference form. Because these two criteria make different assumptions, it becomes necessary to study how these differences modify the results obtained. We performed bootstrap-based Goodall's F-tests, Foote's measurement, principal component (PC) and discriminant function analyses on human molars and craniometric data to compare the results obtained by the two criteria. Results show that: (1) F-scores and P-values were similar for both criteria; (2) results of Foote's measurement show that both criteria yield different estimates of within- and between-sample variation; (3) there is low correlation between the first PC axes obtained by D and BE; (4) the percentage of correct classification is similar for BE and D, but the ordination of groups along discriminant scores differs between them. The differences between criteria can alter the results when morphological variation in the sample is small, as in the analysis of modern human populations. PMID:16761977
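
    The Procrustes distance criterion (D) mentioned above can be sketched for two landmark configurations; the sliding of semi-landmarks itself is omitted:

```python
import numpy as np

def procrustes_distance(X, Y):
    """Partial Procrustes distance between two (k x m) landmark
    configurations: centre, scale to unit centroid size, rotate Y
    optimally onto X, then take the Frobenius-norm residual."""
    def normalize(Z):
        Z = Z - Z.mean(axis=0)
        return Z / np.linalg.norm(Z)
    Xn = normalize(np.asarray(X, float))
    Yn = normalize(np.asarray(Y, float))
    # optimal (possibly reflecting) rotation from the orthogonal Procrustes SVD
    U, _, Vt = np.linalg.svd(Yn.T @ Xn)
    return np.linalg.norm(Xn - Yn @ (U @ Vt))

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
c, s = np.cos(0.3), np.sin(0.3)
rotated = 2.0 * square @ np.array([[c, -s], [s, c]]).T + 5.0
# translation, scale, and rotation are removed, so the distance is ~0
```

    Sliding under the D criterion moves each semi-landmark along its tangent so as to minimize exactly this residual against the mean reference form.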

  16. Simulation of the Microwave Emission of Multi-layered Snowpacks Using the Dense Media Radiative Transfer Theory: the DMRT-ML Model

    NASA Technical Reports Server (NTRS)

    Picard, G.; Brucker, Ludovic; Roy, A.; Dupont, F.; Fily, M.; Royer, A.; Harlow, C.

    2013-01-01

    DMRT-ML is a physically based numerical model designed to compute the thermal microwave emission of a given snowpack. Its main application is the simulation of brightness temperatures at frequencies in the range 1-200 GHz, similar to those acquired routinely by space-based microwave radiometers. The model is based on the Dense Media Radiative Transfer (DMRT) theory for the computation of the snow scattering and extinction coefficients and on the Discrete Ordinate Method (DISORT) to numerically solve the radiative transfer equation. The snowpack is modeled as a stack of multiple horizontal snow layers and an optional underlying interface representing the soil or the bottom ice. The model handles both dry and wet snow conditions. Such a general design allows the model to account for a wide range of snow conditions. Hitherto, the model has been used to simulate the thermal emission of the deep firn on ice sheets, shallow snowpacks overlying soil in Arctic and Alpine regions, and overlying ice on the large ice sheet margins and glaciers. DMRT-ML has thus been validated in three very different conditions: Antarctica, Barnes Ice Cap (Canada) and Canadian tundra. It has recently been used in conjunction with inverse methods to retrieve snow grain size from remote sensing data. The model is written in Fortran90 and available to the snow remote sensing community as open-source software. A convenient user interface is provided in Python.

  17. The energy-dependent electron loss model: backscattering and application to heterogeneous slab media

    NASA Astrophysics Data System (ADS)

    Lee, Tae Kyu; Sandison, George A.

    2003-01-01

    Electron backscattering has been incorporated into the energy-dependent electron loss (EL) model and the resulting algorithm is applied to predict dose deposition in slab heterogeneous media. This algorithm utilizes a reflection coefficient from the interface that is computed on the basis of Goudsmit-Saunderson theory and an average energy for the backscattered electrons based on Everhart's theory. Predictions of dose deposition in slab heterogeneous media are compared to the Monte Carlo based dose planning method (DPM) and a numerical discrete ordinates method (DOM). The slab media studied comprised water/Pb, water/Al, water/bone, water/bone/water, and water/lung/water, with incident electron beam energies of 10 MeV and 18 MeV. The predicted dose enhancement due to backscattering is accurate to within 3% of dose maximum even for lead as the backscattering medium. Dose discrepancies at large depths beyond the interface were as high as 5% of dose maximum and we speculate that this error may be attributed to the EL model assuming a Gaussian energy distribution for the electrons at depth. The computational cost is low compared to Monte Carlo simulations making the EL model attractive as a fast dose engine for dose optimization algorithms. The predictive power of the algorithm demonstrates that the small angle scattering restriction on the EL model can be overcome while retaining dose calculation accuracy and requiring only one free variable, χ, in the algorithm to be determined in advance of calculation.

  18. Local co-ordination and case management can enhance Indigenous eye care – a qualitative study

    PubMed Central

    2013-01-01

    Background Indigenous adults suffer six times more blindness than other Australians, but 94% of this vision loss is unnecessary, being preventable or treatable. We have explored the barriers and solutions to improve Indigenous eye health and proposed the significant system changes required to close the gap for Indigenous eye health. This paper aims to identify the local co-ordination and case management requirements necessary to improve eye care for Indigenous Australians. Methods A qualitative study, using semi-structured interviews, focus groups, stakeholder workshops and meetings, was conducted in community, private practice, hospital, non-government organisation and government settings. Data were collected at 21 sites across Australia. Semi-structured interviews were conducted with 289 people working in Indigenous health and eye care; focus group discussions with 81 community members; stakeholder workshops involving 86 individuals; and separate meetings with 75 people. In total, 531 people participated in the consultations. Barriers and issues were identified through thematic analysis, and policy solutions were developed through iterative consultation. Results Poorly co-ordinated eye care services for Indigenous Australians are inefficient and costly, and result in poorer outcomes for patients, communities and health care providers. Services are more effective where there is good co-ordination of services and case management of patients along the pathway of care. The establishment of clear pathways of care, the development of local and regional partnerships to manage services and service providers, and the deployment of a sufficient workforce with clear roles and responsibilities have the potential to achieve important improvements in eye care. Conclusions Co-ordination is key to closing the gap in eye care for Indigenous Australians. Properly co-ordinated care and support along the patient pathway through case management will save money by preventing dropout of patients who have not received treatment, and a successfully functioning system will encourage more people to enter care. PMID:23822115

  19. Finite element, modal co-ordinate analysis of structures subjected to moving loads

    NASA Astrophysics Data System (ADS)

    Olsson, M.

    1985-03-01

    Some of the possibilities of the finite element method in the moving load problem are demonstrated. The bridge-vehicle interaction phenomenon is considered by deriving a general bridge-vehicle element which is believed to be novel. This element may be regarded as a finite element with time-dependent and unsymmetric element matrices. The bridge response is formulated in modal co-ordinates thereby reducing the number of equations to be solved within each time step. Illustrative examples are shown for the special case of a beam bridge model and a one-axle vehicle model.
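
    The modal co-ordinate reduction described above can be illustrated in a static setting. The sketch below is an illustration under simplifying assumptions (identity mass matrix, static point load), not the paper's bridge-vehicle element: it projects a stiffness system onto its lowest eigenmodes, which is the step that reduces the number of equations.

```python
import numpy as np

def modal_solve(K, f, n_modes):
    """Solve K u = f in modal co-ordinates, keeping only the lowest n_modes.

    With a unit (identity) mass matrix the eigenvectors of K decouple the
    equations; each retained mode contributes (phi_i . f / omega_i^2) phi_i.
    Truncating the modal basis is what keeps the reduced system small."""
    w2, Phi = np.linalg.eigh(K)          # eigenvalues ascending: lowest modes first
    q = (Phi.T @ f) / w2                 # modal co-ordinates
    return Phi[:, :n_modes] @ q[:n_modes]

# Toy "beam": tridiagonal stiffness matrix for 8 degrees of freedom.
n = 8
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.zeros(n)
f[3] = 1.0                               # point load at one node

u_full = modal_solve(K, f, n_modes=n)    # all modes: reproduces the exact solution
u_few = modal_solve(K, f, n_modes=3)     # reduced model: few equations, small error
```

Keeping more modes can only shrink the error, since the omitted-mode contributions are orthogonal; in the dynamic problem the same truncation is applied within each time step.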

  20. Functional traits, convergent evolution, and periodic tables of niches.

    PubMed

    Winemiller, Kirk O; Fitzgerald, Daniel B; Bower, Luke M; Pianka, Eric R

    2015-08-01

    Ecology is often said to lack general theories sufficiently predictive for applications. Here, we examine the concept of a periodic table of niches and feasibility of niche classification schemes from functional trait and performance data. Niche differences and their influence on ecological patterns and processes could be revealed effectively by first performing data reduction/ordination analyses separately on matrices of trait and performance data compiled according to logical associations with five basic niche 'dimensions', or aspects: habitat, life history, trophic, defence and metabolic. Resultant patterns then are integrated to produce interpretable niche gradients, ordinations and classifications. Degree of scheme periodicity would depend on degrees of niche conservatism and convergence causing species clustering across multiple niche dimensions. We analysed a sample data set containing trait and performance data to contrast two approaches for producing niche schemes: species ordination within niche gradient space, and niche categorisation according to trait-value thresholds. Creation of niche schemes useful for advancing ecological knowledge and its applications will depend on research that produces functional trait and performance datasets directly related to niche dimensions along with criteria for data standardisation and quality. As larger databases are compiled, opportunities will emerge to explore new methods for data reduction, ordination and classification. © 2015 The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
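
    The data reduction/ordination step described above can be illustrated with a PCA-style projection. PCA is only one of several ordination methods and the paper does not prescribe a particular one, so this is a generic sketch with a hypothetical trait matrix, not the authors' analysis:

```python
import numpy as np

def ordinate(traits, n_axes=2):
    """PCA-style ordination: project species (rows) onto the leading axes
    of their centred functional-trait matrix (columns = traits).

    The first axis captures the largest share of trait variance, the second
    the next largest, and so on; species scores on the first few axes give
    interpretable niche gradients of the kind the abstract describes."""
    X = np.asarray(traits, dtype=float)
    X = X - X.mean(axis=0)                        # centre each trait
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_axes].T                      # species scores on leading axes

# Hypothetical trait matrix: 12 species x 5 functional traits.
rng = np.random.default_rng(0)
scores = ordinate(rng.random((12, 5)), n_axes=2)
```

In the paper's scheme such an ordination would be run separately per niche dimension (habitat, life history, trophic, defence, metabolic) and the resulting gradients then integrated.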

  1. Media advocacy, tobacco control policy change and teen smoking in Florida

    PubMed Central

    Niederdeppe, Jeff; Farrelly, Matthew C; Wenter, Dana

    2007-01-01

    Objective To assess whether media advocacy activities implemented by the Florida Tobacco Control Program contributed to increased news coverage, policy changes and reductions in youth smoking. Methods A content analysis of news coverage appearing in Florida newspapers between 22 April 1998 and 31 December 2001 was conducted, and patterns of coverage before and after the implementation of media advocacy efforts to promote tobacco product placement ordinances were compared. Event history analysis was used to assess whether news coverage increased the probability of enacting these ordinances in 23 of 67 Florida counties, and ordinary least squares (OLS) regression was used to gauge the effect of these policies on changes in youth smoking prevalence. Results The volume of programme-related news coverage decreased after the onset of media advocacy efforts, but the ratio of coverage about Students Working Against Tobacco (the Florida Tobacco Control Program's youth advocacy organisation) relative to other topics increased. News coverage contributed to the passage of tobacco product placement ordinances in Florida counties, but these ordinances did not lead to reduced youth smoking. Conclusion This study adds to the growing literature supporting the use of media advocacy as a tool to change health-related policies. However, results suggest caution in choosing policy goals that may or may not influence health behaviour. PMID:17297073

  2. Bringing Healthy Retail to Urban "Food Swamps": a Case Study of CBPR-Informed Policy and Neighborhood Change in San Francisco.

    PubMed

    Minkler, Meredith; Estrada, Jessica; Thayer, Ryan; Juachon, Lisa; Wakimoto, Patricia; Falbe, Jennifer

    2018-04-09

    In urban "food swamps" like San Francisco's Tenderloin, the absence of full-service grocery stores and plethora of corner stores saturated with tobacco, alcohol, and processed food contribute to high rates of chronic disease. We explore the genesis of the Tenderloin Healthy Corner Store Coalition, its relationship with health department and academic partners, and its contributions to the passage and implementation of a healthy retail ordinance through community-based participatory research (CBPR), capacity building, and advocacy. The healthy retail ordinance incentivizes small stores to increase space for healthy foods and decrease tobacco and alcohol availability. Through Yin's multi-method case study analysis, we examined the partnership's processes and contributions to the ordinance within the framework of Kingdon's three-stage policymaking model. We also assessed preliminary outcomes of the ordinance, including a 35% increase in produce sales and moderate declines in tobacco sales in the first four stores participating in the Tenderloin, as well as a "ripple effect," through which non-participating stores also improved their retail environments. Despite challenges, CBPR partnerships led by a strong community coalition concerned with bedrock issues like food justice and neighborhood inequities in tobacco exposure may represent an important avenue for health equity-focused research and its translation into practice.

  3. Using ordinal partition transition networks to analyze ECG data

    NASA Astrophysics Data System (ADS)

    Kulp, Christopher W.; Chobot, Jeremy M.; Freitas, Helena R.; Sprechini, Gene D.

    2016-07-01

    Electrocardiogram (ECG) data from patients with a variety of heart conditions are studied using ordinal pattern partition networks. The ordinal pattern partition networks are formed from the ECG time series by symbolizing the data into ordinal patterns. The ordinal patterns form the nodes of the network and edges are defined through the time ordering of the ordinal patterns in the symbolized time series. A network measure, called the mean degree, is computed from each time series-generated network. In addition, the entropy and the number of non-occurring ordinal patterns (NFP) are computed for each series. The distributions of mean degrees, entropies, and NFPs for each heart condition studied are compared. A statistically significant difference between healthy patients and several groups of unhealthy patients with varying heart conditions is found for the distributions of the mean degrees, unlike for any of the distributions of the entropies or NFPs.
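
    The symbolization-to-network pipeline described above can be sketched in a few lines. This is a generic illustration: the pattern convention (rank order via argsort) and the use of the directed out-degree are choices of this sketch, not necessarily those of the paper.

```python
from collections import Counter, defaultdict
from math import factorial, log

def ordinal_pattern(window):
    # Rank order of the values, e.g. (1.2, 3.4, 0.5) -> (2, 0, 1).
    return tuple(sorted(range(len(window)), key=window.__getitem__))

def ordinal_partition_network(series, order=3):
    """Symbolize `series` into order-d ordinal patterns and link patterns
    that follow one another in time (nodes = patterns, directed edges)."""
    symbols = [ordinal_pattern(series[i:i + order])
               for i in range(len(series) - order + 1)]
    edges = defaultdict(set)
    for a, b in zip(symbols, symbols[1:]):
        edges[a].add(b)
    return symbols, edges

def network_measures(series, order=3):
    symbols, edges = ordinal_partition_network(series, order)
    mean_degree = sum(len(v) for v in edges.values()) / len(edges)
    counts = Counter(symbols)
    n = len(symbols)
    entropy = -sum(c / n * log(c / n) for c in counts.values())
    nfp = factorial(order) - len(counts)  # non-occurring ("forbidden") patterns
    return mean_degree, entropy, nfp
```

A monotone series collapses to a single self-looping node (mean degree 1, zero entropy, 5 of the 6 order-3 patterns forbidden); richer dynamics populate more nodes and edges, which is what the degree and entropy distributions discriminate between patient groups.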

  4. 75 FR 51102 - Liquor Ordinance of the Wichita and Affiliated Tribes; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-18

    ... Tribes; Correction AGENCY: Bureau of Indian Affairs, Interior ACTION: Notice; correction SUMMARY: The... Liquor Ordinance of the Wichita and Affiliated Tribes. The notice refers to an amended ordinance of the Wichita and Affiliated Tribes when in fact the Liquor Ordinance adopted by Resolution No. WT-10-31 on May...

  5. Estimating Ordinal Reliability for Likert-Type and Ordinal Item Response Data: A Conceptual, Empirical, and Practical Guide

    ERIC Educational Resources Information Center

    Gadermann, Anne M.; Guhn, Martin; Zumbo, Bruno D.

    2012-01-01

    This paper provides a conceptual, empirical, and practical guide for estimating ordinal reliability coefficients for ordinal item response data (also referred to as Likert, Likert-type, ordered categorical, or rating scale item responses). Conventionally, reliability coefficients, such as Cronbach's alpha, are calculated using a Pearson…
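
    For context, conventional alpha is easy to compute; the paper's point is that for ordinal items its Pearson-based ingredients should be replaced by a polychoric correlation matrix. The sketch below shows only the conventional covariance-based formula (the polychoric estimation step, the substance of ordinal alpha, is deliberately omitted), with a small hypothetical Likert dataset:

```python
import numpy as np

def cronbach_alpha(items):
    """Conventional Cronbach's alpha for an (n_respondents, n_items) array.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    Ordinal alpha applies the same logic to a polychoric rather than a
    Pearson correlation matrix (not implemented here)."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Three perfectly consistent 5-point items: alpha = 1 exactly.
likert = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4], [5, 5, 5]])
```

With coarse ordinal items, the Pearson-based version above tends to understate reliability, which is the motivation for the ordinal coefficients the paper advocates.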

  6. Ordinal measures for iris recognition.

    PubMed

    Sun, Zhenan; Tan, Tieniu

    2009-12-01

    Images of a human iris contain rich texture information useful for identity authentication. A key and still open issue in iris recognition is how best to represent such textural information using a compact set of features (iris features). In this paper, we propose using ordinal measures for iris feature representation with the objective of characterizing qualitative relationships between iris regions rather than precise measurements of iris image structures. Such a representation may lose some image-specific information, but it achieves a good trade-off between distinctiveness and robustness. We show that ordinal measures are intrinsic features of iris patterns and largely invariant to illumination changes. Moreover, compactness and low computational complexity of ordinal measures enable highly efficient iris recognition. Ordinal measures are a general concept useful for image analysis and many variants can be derived for ordinal feature extraction. In this paper, we develop multilobe differential filters to compute ordinal measures with flexible intralobe and interlobe parameters such as location, scale, orientation, and distance. Experimental results on three public iris image databases demonstrate the effectiveness of the proposed ordinal feature models.
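
    The core of an ordinal measure is just the sign of a differential comparison between image regions. Below is a minimal sketch of that idea; the region layout is hypothetical and the code does not implement the paper's multilobe differential filters, only the qualitative-ordering principle behind them:

```python
import numpy as np

def ordinal_bit(image, regions_a, regions_b):
    """Return 1 if the average intensity over `regions_a` exceeds that over
    `regions_b`, else 0.  Only the qualitative ordering is kept, which is
    why the feature is largely invariant to monotonic illumination changes."""
    avg = lambda regions: np.mean([image[rs, cs].mean() for rs, cs in regions])
    return int(avg(regions_a) > avg(regions_b))

rng = np.random.default_rng(42)
iris_patch = rng.random((16, 16))          # stand-in for a normalized iris patch
lobes_a = [(slice(0, 4), slice(0, 4))]                                   # one lobe
lobes_b = [(slice(8, 12), slice(4, 8)), (slice(12, 16), slice(8, 12))]   # two lobes

bit = ordinal_bit(iris_patch, lobes_a, lobes_b)
# A global brightness/contrast change leaves the ordinal code unchanged:
same_bit = ordinal_bit(0.5 * iris_patch + 0.2, lobes_a, lobes_b)
```

A full iris code concatenates many such bits computed at varying lobe locations, scales, orientations and inter-lobe distances, as the abstract describes.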

  7. Solutions to Some Nonlinear Equations from Nonmetric Data.

    ERIC Educational Resources Information Center

    Rule, Stanley J.

    1979-01-01

    A method to provide estimates of parameters of specified nonlinear equations from ordinal data generated from a crossed design is presented. The statistical basis for the method, called NOPE (nonmetric parameter estimation), as well as examples using artificial data, are presented. (Author/JKS)

  8. Food marketing to children through toys: response of restaurants to the first U.S. toy ordinance.

    PubMed

    Otten, Jennifer J; Hekler, Eric B; Krukowski, Rebecca A; Buman, Matthew P; Saelens, Brian E; Gardner, Christopher D; King, Abby C

    2012-01-01

    On August 9, 2010, Santa Clara County CA became the first U.S. jurisdiction to implement an ordinance that prohibits the distribution of toys and other incentives to children in conjunction with meals, foods, or beverages that do not meet minimal nutritional criteria. Restaurants had many different options for complying with this ordinance, such as introducing more healthful menu options, reformulating current menu items, or changing marketing or toy distribution practices. To assess how ordinance-affected restaurants changed their child menus, marketing, and toy distribution practices relative to non-affected restaurants. Children's menu items and child-directed marketing and toy distribution practices were examined before and at two time points after ordinance implementation (from July through November 2010) at ordinance-affected fast-food restaurants compared with demographically matched unaffected same-chain restaurants using the Children's Menu Assessment tool. Affected restaurants showed a 2.8- to 3.4-fold improvement in Children's Menu Assessment scores from pre- to post-ordinance with minimal changes at unaffected restaurants. Response to the ordinance varied by restaurant. Improvements were seen in on-site nutritional guidance; promotion of healthy meals, beverages, and side items; and toy marketing and distribution activities. The ordinance appears to have positively influenced marketing of healthful menu items and toys as well as toy distribution practices at ordinance-affected restaurants, but did not affect the number of healthful food items offered. Copyright © 2012 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  9. Computational analysis of Variable Thrust Engine (VTE) performance

    NASA Technical Reports Server (NTRS)

    Giridharan, M. G.; Krishnan, A.; Przekwas, A. J.

    1993-01-01

    The Variable Thrust Engine (VTE) of the Orbital Maneuvering Vehicle (OMV) uses a hypergolic propellant combination of Monomethyl Hydrazine (MMH) and Nitrogen Tetroxide (NTO) as fuel and oxidizer, respectively. The performance of the VTE depends on a number of complex interacting phenomena such as atomization, spray dynamics, vaporization, turbulent mixing, convective/radiative heat transfer, and hypergolic combustion. This study involved the development of a comprehensive numerical methodology to facilitate detailed analysis of the VTE. An existing Computational Fluid Dynamics (CFD) code was extensively modified to include the following models: a two-liquid, two-phase Eulerian-Lagrangian spray model; a chemical equilibrium model; and a discrete ordinate radiation heat transfer model. The modified code was used to conduct a series of simulations to assess the effects of various physical phenomena and boundary conditions on the VTE performance. The details of the models and the results of the simulations are presented.

  10. Radiative transfer simulations of the two-dimensional ocean glint reflectance and determination of the sea surface roughness.

    PubMed

    Lin, Zhenyi; Li, Wei; Gatebe, Charles; Poudyal, Rajesh; Stamnes, Knut

    2016-02-20

    An optimized discrete-ordinate radiative transfer model (DISORT3) with a pseudo-two-dimensional bidirectional reflectance distribution function (BRDF) is used to simulate and validate ocean glint reflectances at an infrared wavelength (1036 nm) by matching model results with a complete set of BRDF measurements obtained from the NASA cloud absorption radiometer (CAR) deployed on an aircraft. The surface roughness is then obtained through a retrieval algorithm and is used to extend the simulation into the visible spectral range where diffuse reflectance becomes important. In general, the simulated reflectances and surface roughness information are in good agreement with the measurements, and the diffuse reflectance in the visible, ignored in current glint algorithms, is shown to be important. The successful implementation of this new treatment of ocean glint reflectance and surface roughness in DISORT3 will help improve glint correction algorithms in current and future ocean color remote sensing applications.

  11. The DANTE Boltzmann transport solver: An unstructured mesh, 3-D, spherical harmonics algorithm compatible with parallel computer architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGhee, J.M.; Roberts, R.M.; Morel, J.E.

    1997-06-01

    A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second-order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion-based preconditioner for scattering-dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated.

  12. Radiative Transfer Simulations of the Two-Dimensional Ocean Glint Reflectance and Determination of the Sea Surface Roughness

    NASA Technical Reports Server (NTRS)

    Lin, Zhenyi; Li, Wei; Gatebe, Charles; Poudyal, Rajesh; Stamnes, Knut

    2016-01-01

    An optimized discrete-ordinate radiative transfer model (DISORT3) with a pseudo-two-dimensional bidirectional reflectance distribution function (BRDF) is used to simulate and validate ocean glint reflectances at an infrared wavelength (1036 nm) by matching model results with a complete set of BRDF measurements obtained from the NASA cloud absorption radiometer (CAR) deployed on an aircraft. The surface roughness is then obtained through a retrieval algorithm and is used to extend the simulation into the visible spectral range where diffuse reflectance becomes important. In general, the simulated reflectances and surface roughness information are in good agreement with the measurements, and the diffuse reflectance in the visible, ignored in current glint algorithms, is shown to be important. The successful implementation of this new treatment of ocean glint reflectance and surface roughness in DISORT3 will help improve glint correction algorithms in current and future ocean color remote sensing applications.

  13. Some Remarks on GMRES for Transport Theory

    NASA Technical Reports Server (NTRS)

    Patton, Bruce W.; Holloway, James Paul

    2003-01-01

    We review some work on the application of GMRES to the solution of the discrete ordinates transport equation in one-dimension. We note that GMRES can be applied directly to the angular flux vector, or it can be applied to only a vector of flux moments as needed to compute the scattering operator of the transport equation. In the former case we illustrate both the delights and defects of ILU right-preconditioners for problems with anisotropic scatter and for problems with upscatter. When working with flux moments we note that GMRES can be used as an accelerator for any existing transport code whose solver is based on a stationary fixed-point iteration, including transport sweeps and DSA transport sweeps. We also provide some numerical illustrations of this idea. We finally show how space can be traded for speed by taking multiple transport sweeps per GMRES iteration. Key Words: transport equation, GMRES, Krylov subspace
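
    The wrap-an-existing-solver idea above can be sketched matrix-free. In this toy (an illustration, not a transport code: the operator M merely stands in for a slowly converging sweep/scattering map), the stationary iteration x <- M x + q has the same solution as the linear system (I - M) x = q, and a minimal unrestarted GMRES solves that system using only products with the operator:

```python
import numpy as np

def gmres(matvec, b, tol=1e-10, maxiter=None):
    """Minimal unrestarted GMRES for A x = b, given only the map x -> A x."""
    n = b.size
    maxiter = maxiter or n
    beta = np.linalg.norm(b)
    Q = [b / beta]                              # Arnoldi basis (x0 = 0)
    H = np.zeros((maxiter + 1, maxiter))
    for j in range(maxiter):
        w = matvec(Q[j])
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = Q[i] @ w
            w = w - H[i, j] * Q[i]
        H[j + 1, j] = np.linalg.norm(w)
        e1 = np.zeros(j + 2)
        e1[0] = beta
        # Small least-squares problem: minimize ||beta*e1 - H y||.
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        if np.linalg.norm(H[:j + 2, :j + 1] @ y - e1) < tol or H[j + 1, j] < 1e-14:
            break
        Q.append(w / H[j + 1, j])
    return np.column_stack(Q[: y.size]) @ y

# Toy "scattering" operator: a contraction, so x <- M x + q converges, slowly.
n = 20
M = 0.3 * (np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
q = np.ones(n)

# Hand the fixed-point residual operator v -> (I - M) v to GMRES instead of
# iterating x <- M x + q; each matvec plays the role of one transport sweep.
x = gmres(lambda v: v - M @ v, q)
```

This is exactly the sense in which GMRES can accelerate any code built around a stationary fixed-point iteration: the existing sweep is reused unchanged as the matrix-vector product.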

  14. Los Alamos radiation transport code system on desktop computing platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Briesmeister, J.F.; Brinkley, F.W.; Clark, B.A.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. These codes were originally developed many years ago and have undergone continual improvement. With a large initial effort and continued vigilance, the codes are easily portable from one type of hardware to another. The performance of scientific workstations (SWS) has evolved to the point that such platforms can be used routinely to perform sophisticated radiation transport calculations. As personal computer (PC) performance approaches that of the SWS, the hardware options for desktop radiation transport calculations expand considerably. The current status of the radiation transport codes within the LARTCS is described: MCNP, SABRINA, LAHET, ONEDANT, TWODANT, TWOHEX, and ONELD. Specifically, the authors discuss hardware systems on which the codes run and present code performance comparisons for various machines.

  15. Assessment and validation of the community radiative transfer model for ice cloud conditions

    NASA Astrophysics Data System (ADS)

    Yi, Bingqi; Yang, Ping; Weng, Fuzhong; Liu, Quanhua

    2014-11-01

    The performance of the Community Radiative Transfer Model (CRTM) under ice cloud conditions is evaluated and improved with the implementation of MODIS collection 6 ice cloud optical property model based on the use of severely roughened solid column aggregates and a modified Gamma particle size distribution. New ice cloud bulk scattering properties (namely, the extinction efficiency, single-scattering albedo, asymmetry factor, and scattering phase function) suitable for application to the CRTM are calculated by using the most up-to-date ice particle optical property library. CRTM-based simulations illustrate reasonable accuracy in comparison with the counterparts derived from a combination of the Discrete Ordinate Radiative Transfer (DISORT) model and the Line-by-line Radiative Transfer Model (LBLRTM). Furthermore, simulations of the top of the atmosphere brightness temperature with CRTM for the Crosstrack Infrared Sounder (CrIS) are carried out to further evaluate the updated CRTM ice cloud optical property look-up table.

  16. Neutron skyshine from intense 14-MeV neutron source facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, T.; Hayashi, K.; Takahashi, A.

    1985-07-01

    The dose distribution and the spectrum variation of neutrons due to the skyshine effect have been measured with the high-efficiency rem counter, the multisphere spectrometer, and the NE-213 scintillator in the environment surrounding an intense 14-MeV neutron source facility. The dose distribution and the energy spectra of neutrons around the facility used as a skyshine source have also been measured to enable the absolute evaluation of the skyshine effect. The skyshine effect was analyzed by two multigroup Monte Carlo codes, NIMSAC and MMCR-2, by two discrete ordinates S_n codes, ANISN and DOT3.5, and by the shield structure design code for skyshine, SKYSHINE-II. The calculated results show good agreement with the measured results in absolute values. These experimental results should be useful as benchmark data for skyshine analysis and for shielding design of fusion facilities.

  17. Validation of GOSAT XCO2 and XCH4 retrieved by PPDF-S method and evaluation of sensitivity of aerosols to gas concentrations

    NASA Astrophysics Data System (ADS)

    Iwasaki, C.; Imasu, R.; Bril, A.; Yokota, T.; Yoshida, Y.; Morino, I.; Oshchepkov, S.; Rokotyan, N.; Zakharov, V.; Gribanov, K.

    2017-12-01

    The photon path length probability density function-Simultaneous (PPDF-S) method is one of the effective algorithms for retrieving column-averaged concentrations of carbon dioxide (XCO2) and methane (XCH4) from Greenhouse gases Observing SATellite (GOSAT) spectra in the Short Wavelength InfraRed (SWIR) [Oshchepkov et al., 2013]. In this study, we validated XCO2 and XCH4 retrieved by the PPDF-S method through comparison with Total Carbon Column Observing Network (TCCON) data [Wunch et al., 2011] from 26 sites, including the additional site of the Ural Atmospheric Station at Kourovka [57.038°N and 59.545°E], Russia. Validation results using TCCON data show that the bias and its standard deviation of the PPDF-S data are respectively 0.48 and 2.10 ppm for XCO2, and -0.73 and 15.77 ppb for XCH4. The results for XCO2 are almost identical with those of Iwasaki et al. [2017], for which the validation data were limited to 11 selected sites. However, the bias of XCH4 shows the opposite sign to that of Iwasaki et al. [2017]. Furthermore, the data at Kourovka showed different features, particularly for XCH4. In order to investigate the causes of the differences, we have carried out simulation studies mainly focusing on the effects of aerosols, which modify the light path length of solar radiation [O'Brien and Rayner, 2002; Aben et al., 2007; Oshchepkov et al., 2008]. Based on the simulation studies using the multiple radiation transfer code based on the Discrete Ordinate Method (DOM), Polarization System for Transfer of Atmospheric Radiation3 (Pstar3) [Ota et al., 2010], the sensitivity of aerosols to gas concentrations was examined.

  18. 25 CFR 522.7 - Disapproval of a class III ordinance.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Disapproval of a class III ordinance. 522.7 Section 522.7 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR APPROVAL OF CLASS II AND CLASS III ORDINANCES AND RESOLUTIONS SUBMISSION OF GAMING ORDINANCE OR RESOLUTION § 522.7 Disapproval of a class III...

  19. 25 CFR 522.5 - Disapproval of a class II ordinance.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Disapproval of a class II ordinance. 522.5 Section 522.5 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR APPROVAL OF CLASS II AND CLASS III ORDINANCES AND RESOLUTIONS SUBMISSION OF GAMING ORDINANCE OR RESOLUTION § 522.5 Disapproval of a class II...

  20. Ordinary Least Squares Estimation of Parameters in Exploratory Factor Analysis with Ordinal Data

    ERIC Educational Resources Information Center

    Lee, Chun-Ting; Zhang, Guangjian; Edwards, Michael C.

    2012-01-01

    Exploratory factor analysis (EFA) is often conducted with ordinal data (e.g., items with 5-point responses) in the social and behavioral sciences. These ordinal variables are often treated as if they were continuous in practice. An alternative strategy is to assume that a normally distributed continuous variable underlies each ordinal variable.…

  1. Local Area Co-Ordination: Strengthening Support for People with Learning Disabilities in Scotland

    ERIC Educational Resources Information Center

    Stalker, Kirsten Ogilvie; Malloch, Margaret; Barry, Monica Anne; Watson, June Ann

    2008-01-01

    This paper reports the findings of a study commissioned by the Scottish Executive which examined the introduction and implementation of local area co-ordination (LAC) in Scotland. A questionnaire about their posts was completed by 44 local area co-ordinators, interviews were conducted with 35 local area co-ordinators and 14 managers and case…

  2. Interpreting Significant Discrete-Time Periods in Survival Analysis.

    ERIC Educational Resources Information Center

    Schumacker, Randall E.; Denson, Kathleen B.

    Discrete-time survival analysis is a new method for educational researchers to employ when looking at the timing of certain educational events. Previous continuous-time methods do not allow for the flexibility inherent in a discrete-time method. Because both time-invariant and time-varying predictor variables can now be used, the interaction of…
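
    The person-period expansion at the heart of discrete-time survival analysis is mechanical and easy to sketch. The field names below are hypothetical, and the hazard model itself (a logistic regression on these rows, with period dummies plus time-invariant and time-varying predictors) is omitted:

```python
def person_period(records):
    """Expand (subject_id, last_period, event_occurred) into person-period rows.

    Each subject contributes one row per discrete period at risk; the binary
    outcome is 1 only in the period in which the event occurred.  Censored
    subjects (event_occurred = False) get 0 in every period they were observed.
    """
    rows = []
    for sid, last, event in records:
        for t in range(1, last + 1):
            rows.append((sid, t, int(event and t == last)))
    return rows

# Subject 1 experiences the event in period 3; subject 2 is censored after period 2.
rows = person_period([(1, 3, True), (2, 2, False)])
```

Fitting an ordinary logistic regression to the expanded rows yields the discrete-time hazard in each period, which is what lets both time-invariant and time-varying predictors enter the model.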

  3. Meshfree Modeling of Munitions Penetration in Soils

    DTIC Science & Technology

    2017-04-01

    [Front-matter excerpts from the report: figure list (e.g., Figure 2, "Nodal smoothing domain for the modified stabilized nonconforming nodal integration"; Figure 17, "Discretization for the…") and list of acronyms: DEM, discrete element methods; FEM, finite element methods; MSNNI, modified stabilized nonconforming nodal integration; RK…]

  4. Nonlinear stochastic interacting dynamics and complexity of financial gasket fractal-like lattice percolation

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Wang, Jun

    2018-05-01

    A novel nonlinear stochastic interacting price dynamics is proposed and investigated via bond percolation on a Sierpinski gasket fractal-like lattice, with the aim of providing a new approach to reproduce and study the complexity dynamics of real security markets. Fractal-like lattices correspond to finite graphs with vertices and edges, which are similar to fractals, and the Sierpinski gasket is a well-known example of a fractal. Fractional ordinal array entropy and fractional ordinal array complexity are introduced to analyze the complexity behaviors of financial signals. To comprehend the fluctuation characteristics of the stochastic price evolution more deeply, complexity analysis of random logarithmic returns and volatility is performed, including power-law distribution, fractional sample entropy and fractional ordinal array complexity. To further verify the rationality and validity of the developed stochastic price evolution, actual security market datasets are also studied with the same statistical methods for comparison. The empirical results show that this stochastic price dynamics can reconstruct complexity behaviors of the actual security markets to some extent.

  5. Tourism and hotel revenues before and after passage of smoke-free restaurant ordinances.

    PubMed

    Glantz, S A; Charlesworth, A

    1999-05-26

    Claims that ordinances requiring smoke-free restaurants will adversely affect tourism have been used to argue against passing such ordinances. Data exist regarding the validity of these claims. To determine the changes in hotel revenues and international tourism after passage of smoke-free restaurant ordinances in locales where the effect has been debated. Comparison of hotel revenues and tourism rates before and after passage of 100% smoke-free restaurant ordinances and comparison with US hotel revenue overall. Three states (California, Utah, and Vermont) and 6 cities (Boulder, Colo; Flagstaff, Ariz; Los Angeles, Calif; Mesa, Ariz; New York, NY; and San Francisco, Calif) in which the effect on tourism of smoke-free restaurant ordinances had been debated. Hotel room revenues and hotel revenues as a fraction of total retail sales compared with preordinance revenues and overall US revenues. In constant 1997 dollars, passage of the smoke-free restaurant ordinance was associated with a statistically significant increase in the rate of change of hotel revenues in 4 localities, no significant change in 4 localities, and a significant slowing in the rate of increase (but not a decrease) in 1 locality. There was no significant change in the rate of change of hotel revenues as a fraction of total retail sales (P=.16) or total US hotel revenues associated with the ordinances when pooled across all localities (P = .93). International tourism was either unaffected or increased following implementation of the smoke-free ordinances. Smoke-free ordinances do not appear to adversely affect, and may increase, tourist business.

  6. Co-ordinated action between youth-care and sports: facilitators and barriers.

    PubMed

    Hermens, Niels; de Langen, Lisanne; Verkooijen, Kirsten T; Koelen, Maria A

    2017-07-01

    In the Netherlands, youth-care organisations and community sports clubs are collaborating to increase socially vulnerable youths' participation in sport. This is rooted in the idea that sports clubs are settings for youth development. As not much is known about co-ordinated action involving professional care organisations and community sports clubs, this study aims to generate insight into facilitators of and barriers to successful co-ordinated action between these two organisations. A cross-sectional study was conducted using in-depth semi-structured qualitative interview data. In total, 23 interviews were held at five locations where co-ordinated action between youth-care and sports takes place. Interviewees were youth-care workers, representatives from community sports clubs, and Care Sport Connectors who were assigned to encourage and manage the co-ordinated action. Using inductive coding procedures, this study shows that existing and good relationships, a boundary spanner, care workers' attitudes, knowledge and competences of the participants, organisational policies and ambitions, and some elements external to the co-ordinated action were reported to be facilitators or barriers. In addition, the participants reported that the different facilitators and barriers influenced the success of the co-ordinated action at different stages of the co-ordinated action. Future research is recommended to further explore the role of boundary spanners in co-ordinated action involving social care organisations and community sports clubs, and to identify what external elements (e.g. events, processes, national policies) are turning points in the formation, implementation and continuation of such co-ordinated action. © 2017 John Wiley & Sons Ltd.

  7. Structure, function and five basic needs of the global health research system

    PubMed Central

    Rudan, Igor; Sridhar, Devi

    2016-01-01

Background Two major initiatives that were set up to support and co-ordinate global health research efforts have been largely discontinued in recent years: the Global Forum for Health Research and World Health Organization's Department for Research Policy and Cooperation. These developments provide an interesting case study into the factors that contribute to the sustainability of initiatives to support and co-ordinate global health research in the 21st century. Methods We reviewed the history of attempts to govern, support or co-ordinate research in global health. Moreover, we studied the changes and shifts in funding flows attributed to global health research. This allowed us to map the structure of the global health research system, as it has evolved under the increased funding contributions of the past decade. Bearing in mind its structure, core functions and dynamic nature, we proposed a framework on how to effectively support the system to increase its efficiency. Results Based on our framework, which charted the structure and function of the global health research system and exposed places and roles for many stakeholders within the system, five basic needs emerged: (i) to co-ordinate funding among donors more effectively; (ii) to prioritize among many research ideas; (iii) to quickly recognize results of successful research; (iv) to ensure broad and rapid dissemination of results and their accessibility; and (v) to evaluate return on investments in health research. Conclusion The global health research system has evolved rapidly and spontaneously. It has not been optimally efficient, but it is possible to identify solutions that could improve this.
There are already examples of effective responses to the need for prioritization of research questions (eg, the CHNRI method), quick recognition of important research (eg, systems used by editors of the leading journals) and rapid and broadly accessible publication of the new knowledge (eg, the journal PLoS One). It is still necessary to develop tools that could assist donors to co-ordinate funding and ensure more equity between areas in the provided support, and to evaluate the value for money invested in health research. PMID:26401270

  8. Survey of local forestry-related ordinances and regulations in the south

    Treesearch

    Jonathan J. Spink; Karry L. Haney; John L. Greene

    2000-01-01

A survey of the 13 southern states was conducted in 1999-2000 to obtain a comprehensive list of forestry-related ordinances enacted by various local governments. Each ordinance was examined to determine the date of adoption, regulatory objective, and its regulatory provisions. Based on the regulatory objective, the ordinances were categorized into five general types:...

  9. Numerical solution of the time fractional reaction-diffusion equation with a moving boundary

    NASA Astrophysics Data System (ADS)

    Zheng, Minling; Liu, Fawang; Liu, Qingxia; Burrage, Kevin; Simpson, Matthew J.

    2017-06-01

A fractional reaction-diffusion model with a moving boundary is presented in this paper. An efficient numerical method is constructed to solve this moving boundary problem. Our method makes use of a finite difference approximation for the temporal discretization, and spectral approximation for the spatial discretization. The stability and convergence of the method are studied, and the errors of both the semi-discrete and fully-discrete schemes are derived. Numerical examples, motivated by problems from developmental biology, show a good agreement with the theoretical analysis and illustrate the efficiency of our method.

  10. Knowledge of the ordinal position of list items in pigeons.

    PubMed

    Scarf, Damian; Colombo, Michael

    2011-10-01

    Ordinal knowledge is a fundamental aspect of advanced cognition. It is self-evident that humans represent ordinal knowledge, and over the past 20 years it has become clear that nonhuman primates share this ability. In contrast, evidence that nonprimate species represent ordinal knowledge is missing from the comparative literature. To address this issue, in the present experiment we trained pigeons on three 4-item lists and then tested them with derived lists in which, relative to the training lists, the ordinal position of the items was either maintained or changed. Similar to the findings with human and nonhuman primates, our pigeons performed markedly better on the maintained lists compared to the changed lists, and displayed errors consistent with the view that they used their knowledge of ordinal position to guide responding on the derived lists. These findings demonstrate that the ability to acquire ordinal knowledge is not unique to the primate lineage. (PsycINFO Database Record (c) 2011 APA, all rights reserved).

  11. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2010-01-01

This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2) to O(10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.

  12. Cost-sensitive AdaBoost algorithm for ordinal regression based on extreme learning machine.

    PubMed

    Riccardi, Annalisa; Fernández-Navarro, Francisco; Carloni, Sante

    2014-10-01

In this paper, the well-known stagewise additive modeling using a multiclass exponential (SAMME) boosting algorithm is extended to address problems where there exists a natural order in the targets using a cost-sensitive approach. The proposed ensemble model uses an extreme learning machine (ELM) model as a base classifier (with the Gaussian kernel and an additional regularization parameter). The closed form of the derived weighted least squares problem is provided, and it is employed to estimate analytically the parameters connecting the hidden layer to the output layer at each iteration of the boosting algorithm. Compared to the state-of-the-art boosting algorithms, in particular those using ELM as the base classifier, the suggested technique does not require the generation of a new training dataset at each iteration. The adoption of the weighted least squares formulation of the problem is presented as an unbiased alternative to the existing ELM boosting techniques. Moreover, the addition of a cost model for weighting the patterns according to the order of the targets further enables the classifier to tackle ordinal regression problems. The proposed method has been validated in an experimental study comparing it with existing ensemble methods and ELM techniques for ordinal regression, showing competitive results.
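The closed-form ELM fit referred to above can be illustrated with a minimal numpy sketch. This is a plain regularized ELM on synthetic data; the hidden-layer size, data, and tanh activation are illustrative assumptions, and the cost-sensitive SAMME wrapper and Gaussian kernel of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, T, n_hidden=50, C=1.0):
    """Fit a regularized extreme learning machine.
    W and b are random and stay fixed; only beta is solved in closed form."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                 # hidden-layer activations
    # Closed-form ridge solution: beta = (H'H + I/C)^-1 H'T
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy 3-class problem with an ordered target and one-hot coding
X = rng.normal(size=(120, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int) + (X[:, 0] + X[:, 1] > 1).astype(int)
T = np.eye(3)[y]

W, b, beta = elm_fit(X, T)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
print((pred == y).mean())   # training accuracy on the toy data
```

In the boosting setting described in the abstract, the unweighted solve above would be replaced by a weighted least-squares solve using the boosting weights at each iteration.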

  13. Controlled pattern imputation for sensitivity analysis of longitudinal binary and ordinal outcomes with nonignorable dropout.

    PubMed

    Tang, Yongqiang

    2018-04-30

    The controlled imputation method refers to a class of pattern mixture models that have been commonly used as sensitivity analyses of longitudinal clinical trials with nonignorable dropout in recent years. These pattern mixture models assume that participants in the experimental arm after dropout have similar response profiles to the control participants or have worse outcomes than otherwise similar participants who remain on the experimental treatment. In spite of its popularity, the controlled imputation has not been formally developed for longitudinal binary and ordinal outcomes partially due to the lack of a natural multivariate distribution for such endpoints. In this paper, we propose 2 approaches for implementing the controlled imputation for binary and ordinal data based respectively on the sequential logistic regression and the multivariate probit model. Efficient Markov chain Monte Carlo algorithms are developed for missing data imputation by using the monotone data augmentation technique for the sequential logistic regression and a parameter-expanded monotone data augmentation scheme for the multivariate probit model. We assess the performance of the proposed procedures by simulation and the analysis of a schizophrenia clinical trial and compare them with the fully conditional specification, last observation carried forward, and baseline observation carried forward imputation methods. Copyright © 2018 John Wiley & Sons, Ltd.

  14. On analyzing ordinal data when responses and covariates are both missing at random.

    PubMed

    Rana, Subrata; Roy, Surupa; Das, Kalyan

    2016-08-01

On many occasions, particularly in biomedical studies, data are unavailable for some responses and covariates. This leads to biased inference in the analysis when a substantial proportion of responses, or a covariate, or both are missing. Except in a few situations, methods for missing data have previously been considered either for missing responses or for missing covariates; comparatively little attention has been directed to accounting for both missing responses and missing covariates, which is partly attributable to the complexity of modeling and computation. This is important because the precise impact of substantial missing data depends on the association between the two missing-data processes as well. The real difficulty arises when the responses are ordinal by nature. We develop a joint model to take into account simultaneously the association between the ordinal response variable and the covariates and also that between the missing-data indicators. Such a complex model has been analyzed here by using the Markov chain Monte Carlo approach and also by the Monte Carlo relative likelihood approach. Their performance in estimating the model parameters in finite samples has been examined. We illustrate the application of these two methods using data from an orthodontic study. Analysis of such data provides some interesting information on human habits. © The Author(s) 2013.

  15. Numerical study of the effects of lamp configuration and reactor wall roughness in an open channel water disinfection UV reactor.

    PubMed

    Sultan, Tipu

    2016-07-01

This article describes the assessment of a numerical procedure used to determine the UV lamp configuration and surface roughness effects on an open channel water disinfection UV reactor. The performance of the open channel water disinfection UV reactor was numerically analyzed on the basis of the performance indicator, reduction equivalent dose (RED). The RED values were calculated as a function of the Reynolds number to monitor the performance. The flow through the open channel UV reactor was modelled using a k-ε model with a scalable wall function, a discrete ordinate (DO) model for fluence rate calculation, a volume of fluid (VOF) model to locate the unknown free surface, a discrete phase model (DPM) to track the pathogen transport, and a modified law of the wall to incorporate the reactor wall roughness effects. The performance analysis was carried out using commercial CFD software (ANSYS Fluent 15.0). Four case studies were analyzed based on open channel UV reactor type (horizontal and vertical) and lamp configuration (parallel and staggered). The results show that lamp configuration can play an important role in the performance of an open channel water disinfection UV reactor. The effects of the reactor wall roughness were Reynolds number dependent. The proposed methodology is useful for performance optimization of an open channel water disinfection UV reactor. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Discrete choice experiments of pharmacy services: a systematic review.

    PubMed

    Vass, Caroline; Gray, Ewan; Payne, Katherine

    2016-06-01

Background Two previous systematic reviews have summarised the application of discrete choice experiments to value preferences for pharmacy services. These reviews identified a total of twelve studies and described how discrete choice experiments have been used to value pharmacy services but did not describe or discuss the application of methods used in the design or analysis. Aims (1) To update the most recent systematic review and critically appraise current discrete choice experiments of pharmacy services in line with published reporting criteria; and (2) To provide an overview of key methodological developments in the design and analysis of discrete choice experiments. Methods The review used a comprehensive strategy to identify eligible studies (published between 1990 and 2015) by searching electronic databases for key terms related to discrete choice and best-worst scaling (BWS) experiments. All healthcare choice experiments were then hand-searched for key terms relating to pharmacy. Data were extracted using a published checklist. Results A total of 17 discrete choice experiments eliciting preferences for pharmacy services were identified for inclusion in the review. No BWS studies were identified. The studies elicited preferences from a variety of populations (pharmacists, patients, students) for a range of pharmacy services. Most studies were from a United Kingdom setting, although examples from Europe, Australia and North America were also identified. Discrete choice experiments for pharmacy services tended to include more attributes than non-pharmacy choice experiments. Few studies reported the use of qualitative research methods in the design and interpretation of the experiments (n = 9) or the use of new methods of analysis to identify and quantify preference and scale heterogeneity (n = 4). No studies reported the use of Bayesian methods in their experimental design.
Conclusion Incorporating more sophisticated methods in the design of pharmacy-related discrete choice experiments could help researchers produce more efficient experiments which are better suited to valuing complex pharmacy services. Pharmacy-related discrete choice experiments could also benefit from more sophisticated analytical techniques such as investigations into scale and preference heterogeneity. Employing these sophisticated methods for both design and analysis could extend the usefulness of discrete choice experiments to inform health and pharmacy policy.

  17. A developed nearly analytic discrete method for forward modeling in the frequency domain

    NASA Astrophysics Data System (ADS)

    Liu, Shaolin; Lang, Chao; Yang, Hui; Wang, Wenshuai

    2018-02-01

    High-efficiency forward modeling methods play a fundamental role in full waveform inversion (FWI). In this paper, the developed nearly analytic discrete (DNAD) method is proposed to accelerate frequency-domain forward modeling processes. We first derive the discretization of frequency-domain wave equations via numerical schemes based on the nearly analytic discrete (NAD) method to obtain a linear system. The coefficients of numerical stencils are optimized to make the linear system easier to solve and to minimize computing time. Wavefield simulation and numerical dispersion analysis are performed to compare the numerical behavior of DNAD method with that of the conventional NAD method. The results demonstrate the superiority of our proposed method. Finally, the DNAD method is implemented in frequency-domain FWI, and high-resolution inverse results are obtained.
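Frequency-domain forward modeling of the kind accelerated above amounts to assembling and solving a Helmholtz-type linear system at each frequency. The sketch below uses a generic second-order 1-D finite-difference stencil with a manufactured solution, not the NAD/DNAD stencils of the paper; grid size and wavenumber are illustrative assumptions.

```python
import numpy as np

# 1-D Helmholtz problem u'' + k^2 u = f on (0, 1), u(0) = u(1) = 0
n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
k = 2 * np.pi * 3.0                       # wavenumber: 3 wavelengths

# Assemble the standard second-order stencil (d^2/dx^2 + k^2)
main = -2.0 / h**2 + k**2
off = 1.0 / h**2
A = (np.diag(np.full(n, main))
     + np.diag(np.full(n - 1, off), 1)
     + np.diag(np.full(n - 1, off), -1))

# Manufactured solution u = sin(pi x) gives f = (k^2 - pi^2) sin(pi x)
u_exact = np.sin(np.pi * x)
f = (k**2 - np.pi**2) * u_exact
u = np.linalg.solve(A, f)
print(np.max(np.abs(u - u_exact)))        # small discretization error
```

Optimizing the stencil coefficients, as the DNAD method does, targets exactly this step: making the assembled system cheaper to solve and less dispersive at a given grid density.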

  18. Community-level policy responses to state marijuana legalization in Washington State

    PubMed Central

    Dilley, Julia A.; Hitchcock, Laura; McGroder, Nancy; Greto, Lindsey A.; Richardson, Susan M.

    2017-01-01

Background Washington State (WA) legalized a recreational marijuana market, including growing, processing and retail sales, through voter initiative 502 in November 2012. Legalized recreational marijuana retail sales began in July 2014. In response to state legalization of recreational marijuana, some cities and counties within the state have passed local ordinances that either further regulated marijuana markets or banned them completely. The purpose of this study is to describe local-level marijuana regulations on recreational retail sales within the context of a state that had legalized a recreational marijuana market. Methods Marijuana-related ordinances were collected from all 142 cities in the state with more than 3,000 residents and from all 39 counties. Policies that were in place as of June 30, 2016 (two years after the state's recreational market opening) to regulate recreational marijuana retail sales within communities were systematically coded. Results A total of 125 cities and 30 counties had passed local ordinances to address recreational marijuana retail sales. Multiple communities implemented retail market bans, including some temporary bans (moratoria) while studying whether to pursue other policy options. As of June 30, 2016, 30% of the state population lived in places that had temporarily or permanently banned retail sales. Communities most frequently enacted zoning policies explicitly regulating where marijuana businesses could be established. Other policies included in ordinances placed limits on business hours and distance requirements (buffers) between marijuana businesses and youth-related land use types or other sensitive areas. Conclusions State legalization does not necessarily result in uniform community environments that regulate recreational marijuana markets. Local ordinances vary among communities within Washington following statewide legalization.
Further study is needed to describe how such local policies affect variation in public health and social outcomes. PMID:28365192

  19. From Discrete Space-Time to Minkowski Space: Basic Mechanisms, Methods and Perspectives

    NASA Astrophysics Data System (ADS)

    Finster, Felix

This survey article reviews recent results on fermion systems in discrete space-time and corresponding systems in Minkowski space. After a basic introduction to the discrete setting, we explain a mechanism of spontaneous symmetry breaking which leads to the emergence of a discrete causal structure. As methods to study the transition between discrete space-time and Minkowski space, we describe a lattice model for a static and isotropic space-time, outline the analysis of regularization tails of vacuum Dirac sea configurations, and introduce a Lorentz invariant action for the masses of the Dirac seas. We mention the method of the continuum limit, which allows one to analyze interacting systems. Open problems are discussed.

  20. Urban Runoff: Model Ordinances for Erosion and Sediment Control

    EPA Pesticide Factsheets

The model ordinance in this section borrows language from existing erosion and sediment control ordinances, highlighting features that might help prevent erosion and sedimentation and protect natural resources more fully.

  1. [Ecological relationships among artificial vegetations during their restoration in Antaibao mining area].

    PubMed

    Zhang, Guilian; Zhang, Jintun; Guo, Xiaoyu

    2005-01-01

Using the methods of TWINSPAN, DCA and DCCA, and considering the relations between plant species, communities and environmental factors, this paper studied the ecological relationships among artificial vegetations during their restoration in the Antaibao mining area. The 63 collected quadrats were classified into 12 community types by TWINSPAN, and the distribution of the communities comprehensively reflected the influence of environmental factors. DCA ordination indicated that soil water content, which increased with restoration time, was the main factor restricting the distribution of the communities. DCCA ordination showed that soil organic matter content was the decisive factor in the development of the communities.

  2. Multigrid and Krylov Subspace Methods for the Discrete Stokes Equations

    NASA Technical Reports Server (NTRS)

    Elman, Howard C.

    1996-01-01

    Discretization of the Stokes equations produces a symmetric indefinite system of linear equations. For stable discretizations, a variety of numerical methods have been proposed that have rates of convergence independent of the mesh size used in the discretization. In this paper, we compare the performance of four such methods: variants of the Uzawa, preconditioned conjugate gradient, preconditioned conjugate residual, and multigrid methods, for solving several two-dimensional model problems. The results indicate that where it is applicable, multigrid with smoothing based on incomplete factorization is more efficient than the other methods, but typically by no more than a factor of two. The conjugate residual method has the advantage of being both independent of iteration parameters and widely applicable.
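Of the four solvers compared, the Uzawa variant is the simplest to sketch. The toy numpy example below runs Uzawa on a small synthetic saddle-point system of the same algebraic form a stable Stokes discretization produces; the matrices, step size and iteration count are illustrative assumptions, not an actual Stokes discretization.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)        # SPD "velocity" block
B = np.eye(m, n)                   # simple full-rank constraint block
f = rng.normal(size=n)
g = np.zeros(m)

# Reference solution of the full symmetric indefinite system
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
exact = np.linalg.solve(K, np.concatenate([f, g]))

# Uzawa iteration: an A-solve for u, then a gradient step on the pressure
S = B @ np.linalg.solve(A, B.T)            # Schur complement (step size only)
alpha = 1.0 / np.linalg.norm(S, 2)         # safe step size, below 2/||S||
p = np.zeros(m)
for _ in range(500):
    u = np.linalg.solve(A, f - B.T @ p)
    p = p + alpha * (B @ u - g)

err = np.linalg.norm(np.concatenate([u, p]) - exact)
print(err)
```

In practice the inner A-solve is itself approximated (e.g. by multigrid or an incomplete factorization), which is where the performance differences reported in the paper arise.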

  3. Improved numerical methods for turbulent viscous recirculating flows

    NASA Technical Reports Server (NTRS)

    Turan, A.; Vandoormaal, J. P.

    1988-01-01

The performance of discrete methods for the prediction of fluid flows can be enhanced by improving the convergence rate of solvers and by increasing the accuracy of the discrete representation of the equations of motion. This report evaluates the gains in solver performance that are available when various acceleration methods are applied. Various discretizations are also examined, and two are recommended because of their accuracy and robustness. Insertion of the improved discretization and solver accelerator into a TEACH-type code, which has been widely applied to combustor flows, illustrates the substantial gains to be achieved.

  4. Quantitative Evaluation of Management Courses: Part 1

    ERIC Educational Resources Information Center

    Cunningham, Cyril

    1973-01-01

    The author describes how he developed a method of evaluating and comparing management courses of different types and lengths by applying an ordinal system of relative values using a process of transmutation. (MS)

  5. The person trade-off method and the transitivity principle: an example from preferences over age weighting.

    PubMed

    Dolan, Paul; Tsuchiya, Aki

    2003-06-01

The person trade-off (PTO) is increasingly being used to elicit preferences in health. This paper explores the measurement properties of the PTO method in the context of a study about how members of the public prioritise between patients of different ages. In particular, it considers whether PTO responses satisfy the transitivity principle; that is, whether one PTO response can be inferred from two other PTO responses. The results suggest that very few responses to PTO questions satisfy the cardinal transitivity condition. However, the results also suggest that cardinal transitivity will hold, on average, once respondents who fail to satisfy the ordinal transitivity condition have been excluded from the analysis. This suggests that future PTO studies should build in checks for ordinal transitivity. Copyright 2002 John Wiley & Sons, Ltd.
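The two transitivity conditions discussed above can be made concrete with a small sketch. The trade-off ratios below are hypothetical, and the tolerance is an arbitrary choice.

```python
# Transitivity checks for person trade-off (PTO) responses.
# A ratio r_ab means "1 unit of benefit to group A is judged
# equivalent to r_ab units of benefit to group B".

def cardinally_transitive(r_ab, r_bc, r_ac, tol=0.1):
    """Strong check: r_ab * r_bc should equal r_ac (within tolerance)."""
    implied = r_ab * r_bc
    return abs(implied - r_ac) <= tol * r_ac

def ordinally_transitive(r_ab, r_bc, r_ac):
    """Weaker check: if A is favoured over B and B over C,
    A must be favoured over C (all ratios on the same side of 1)."""
    if r_ab >= 1 and r_bc >= 1:
        return r_ac >= 1
    if r_ab <= 1 and r_bc <= 1:
        return r_ac <= 1
    return True   # mixed directions impose no ordinal constraint

# A respondent values the young over the middle-aged (1:2) and the
# middle-aged over the old (1:3); cardinal transitivity implies 1:6.
print(cardinally_transitive(2, 3, 6))   # True
print(cardinally_transitive(2, 3, 4))   # False: 2*3 = 6, far from 4
print(ordinally_transitive(2, 3, 4))    # True: direction is consistent
```

The second and third calls illustrate the paper's distinction: a response set can fail the cardinal condition while still satisfying the ordinal one.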

  6. Methods for the analysis of ordinal response data in medical image quality assessment.

    PubMed

    Keeble, Claire; Baxter, Paul D; Gislason-Lee, Amber J; Treadgold, Laura A; Davies, Andrew G

    2016-07-01

    The assessment of image quality in medical imaging often requires observers to rate images for some metric or detectability task. These subjective results are used in optimization, radiation dose reduction or system comparison studies and may be compared to objective measures from a computer vision algorithm performing the same task. One popular scoring approach is to use a Likert scale, then assign consecutive numbers to the categories. The mean of these response values is then taken and used for comparison with the objective or second subjective response. Agreement is often assessed using correlation coefficients. We highlight a number of weaknesses in this common approach, including inappropriate analyses of ordinal data and the inability to properly account for correlations caused by repeated images or observers. We suggest alternative data collection and analysis techniques such as amendments to the scale and multilevel proportional odds models. We detail the suitability of each approach depending upon the data structure and demonstrate each method using a medical imaging example. Whilst others have raised some of these issues, we evaluated the entire study from data collection to analysis, suggested sources for software and further reading, and provided a checklist plus flowchart for use with any ordinal data. We hope that raised awareness of the limitations of the current approaches will encourage greater method consideration and the utilization of a more appropriate analysis. More accurate comparisons between measures in medical imaging will lead to a more robust contribution to the imaging literature and ultimately improved patient care.
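One alternative suggested above to averaging Likert codes is a proportional odds (cumulative logit) model. The sketch below fits a single-level version by maximum likelihood on simulated ratings; the multilevel extension for repeated images or observers is omitted, and all data and starting values are synthetic assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Simulated 4-category Likert responses driven by one covariate x
n, J = 400, 4
x = rng.normal(size=n)
latent = 1.5 * x + rng.logistic(size=n)     # true slope is 1.5
cuts = np.array([-1.0, 0.0, 1.0])
y = np.searchsorted(cuts, latent)           # categories 0..3

def nll(params):
    """Negative log-likelihood of the proportional odds model.
    params = [beta, theta_1, log(gap_2), log(gap_3)] keeps cuts ordered."""
    beta = params[0]
    theta = np.cumsum([params[1], np.exp(params[2]), np.exp(params[3])])
    eta = theta[None, :] - beta * x[:, None]
    # Cumulative probabilities P(Y <= j | x), padded with 0 and 1
    cdf = np.hstack([np.zeros((n, 1)), 1 / (1 + np.exp(-eta)),
                     np.ones((n, 1))])
    p = cdf[np.arange(n), y + 1] - cdf[np.arange(n), y]
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

res = minimize(nll, x0=np.array([0.0, -1.0, 0.0, 0.0]),
               method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
beta_hat = res.x[0]
print(beta_hat)   # close to the true slope of 1.5
```

Unlike the mean-of-codes approach, the fitted slope has a direct interpretation as a log cumulative odds ratio and makes no assumption that the Likert categories are equally spaced.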

  7. On the ordinality of numbers: A review of neural and behavioral studies.

    PubMed

    Lyons, I M; Vogel, S E; Ansari, D

    2016-01-01

The last several years have seen steady growth in research on the cognitive and neuronal mechanisms underlying how numbers are represented as part of ordered sequences. In the present review, we synthesize what is currently known about numerical ordinality from behavioral and neuroimaging research, point out major gaps in our current knowledge, and propose several hypotheses that may bear further investigation. Evidence suggests that how we process ordinality differs from how we process cardinality, but that this difference depends strongly on context: in particular, whether numbers are presented symbolically or nonsymbolically. Results also reveal many commonalities between numerical and nonnumerical ordinal processing; however, the degree to which numerical ordinality can be reduced to domain-general mechanisms remains unclear. One proposal is that numerical ordinality relies upon more general short-term memory mechanisms as well as more numerically specific long-term memory representations. It is also evident that numerical ordinality is highly multifaceted, with symbolic representations in particular allowing for a wide range of different types of ordinal relations, the complexity of which appears to increase over development. We examine the proposal that these relations may form the basis of a richer set of associations that may prove crucial to the emergence of more complex math abilities and concepts. In sum, ordinality appears to be an important and relatively understudied facet of numerical cognition that presents substantial opportunities for new and ground-breaking research. © 2016 Elsevier B.V. All rights reserved.

  8. Compliance to two city convenience store ordinance requirements

    PubMed Central

    Menéndez, Cammie K Chaumont; Amandus, Harlan E; Wu, Nan; Hendricks, Scott A

    2015-01-01

    Background Robbery-related homicides and assaults are the leading cause of death in retail businesses. Robbery reduction approaches focus on compliance to Crime Prevention Through Environmental Design (CPTED) guidelines. Purpose We evaluated the level of compliance to CPTED guidelines specified by convenience store safety ordinances effective in 2010 in Dallas and Houston, Texas, USA. Methods Convenience stores were defined as businesses less than 10 000 square feet that sell grocery items. Store managers were interviewed for store ordinance requirements from August to November 2011, in a random sample of 594 (289 in Dallas, 305 in Houston) convenience stores that were open before and after the effective dates of their city’s ordinance. Data were collected in 2011 and analysed in 2012–2014. Results Overall, 9% of stores were in full compliance, although 79% reported being registered with the police departments as compliant. Compliance was consistently significantly higher in Dallas than in Houston for many requirements and by store type. Compliance was lower among single owner-operator stores compared with corporate/franchise stores. Compliance to individual requirements was lowest for signage and visibility. Conclusions Full compliance to the required safety measures is consistent with industry ‘best practices’ and evidence-based workplace violence prevention research findings. In Houston and Dallas compliance was higher for some CPTED requirements but not the less costly approaches that are also the more straightforward to adopt. PMID:26337569

  9. Simulating Soft Shadows with Graphics Hardware,

    DTIC Science & Technology

    1997-01-15

    This radiance texture is analogous to the mesh of radiosity values computed in a radiosity algorithm. Unlike a radiosity algorithm, however, our...discretely. Several researchers have explored continuous visibility methods for soft shadow computation and radiosity mesh generation. With this approach...times of several seconds [9]. Most radiosity methods discretize each surface into a mesh of elements and then use discrete methods such as ray

  10. Dual Formulations of Mixed Finite Element Methods with Applications

    PubMed Central

    Gillette, Andrew; Bajaj, Chandrajit

    2011-01-01

    Mixed finite element methods solve a PDE using two or more variables. The theory of Discrete Exterior Calculus explains why the degrees of freedom associated to the different variables should be stored on both primal and dual domain meshes with a discrete Hodge star used to transfer information between the meshes. We show through analysis and examples that the choice of discrete Hodge star is essential to the numerical stability of the method. Additionally, we define interpolation functions and discrete Hodge stars on dual meshes which can be used to create previously unconsidered mixed methods. Examples from magnetostatics and Darcy flow are examined in detail. PMID:21984841

  11. Spatial homogenization methods for pin-by-pin neutron transport calculations

    NASA Astrophysics Data System (ADS)

    Kozlowski, Tomasz

    For practical reactor core applications low-order transport approximations such as SP3 have been shown to provide sufficient accuracy for both static and transient calculations with considerably less computational expense than the discrete ordinate or the full spherical harmonics methods. These methods have been applied in several core simulators where homogenization was performed at the level of the pin cell. One of the principal problems has been to recover the error introduced by pin-cell homogenization. Two basic approaches to treat pin-cell homogenization error have been proposed: Superhomogenization (SPH) factors and Pin-Cell Discontinuity Factors (PDF). These methods are based on well established Equivalence Theory and Generalized Equivalence Theory to generate appropriate group constants. These methods are able to treat all sources of error together, allowing even few-group diffusion with one mesh per cell to reproduce the reference solution. A detailed investigation and consistent comparison of both homogenization techniques showed potential of PDF approach to improve accuracy of core calculation, but also reveal its limitation. In principle, the method is applicable only for the boundary conditions at which it was created, i.e. for boundary conditions considered during the homogenization process---normally zero current. Therefore, there exists a need to improve this method, making it more general and environment independent. The goal of proposed general homogenization technique is to create a function that is able to correctly predict the appropriate correction factor with only homogeneous information available, i.e. a function based on heterogeneous solution that could approximate PDFs using homogeneous solution. It has been shown that the PDF can be well approximated by least-square polynomial fit of non-dimensional heterogeneous solution and later used for PDF prediction using homogeneous solution. 
This shows promise for PDF prediction under off-reference conditions, such as during reactor transients, which produce conditions that typically cannot be anticipated a priori.
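The fitting-and-prediction step described in this abstract can be sketched with a least-squares polynomial fit. The feature values and reference PDFs below are hypothetical stand-ins for the non-dimensional heterogeneous solution and reference discontinuity factors, not data from the cited work:

```python
import numpy as np

# Hypothetical training data: a non-dimensional feature of the heterogeneous
# solution (e.g. a normalized surface-to-average flux ratio) paired with
# reference pin-cell discontinuity factors (PDFs).
feature = np.linspace(0.8, 1.2, 21)
pdf_ref = 1.0 + 0.3 * (feature - 1.0) - 0.5 * (feature - 1.0) ** 2

# Least-squares polynomial fit (degree 2) of PDF vs. the feature.
coeffs = np.polyfit(feature, pdf_ref, deg=2)

# Prediction step: evaluate the fitted polynomial on a feature value
# computed from a homogeneous solution.
pdf_predicted = np.polyval(coeffs, 1.05)
```

In the abstract's setting the fit is built once from a reference heterogeneous calculation and then reused to predict PDFs whenever only a homogeneous solution is available.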

  12. Numerical integration techniques for curved-element discretizations of molecule-solvent interfaces.

    PubMed

    Bardhan, Jaydeep P; Altman, Michael D; Willis, David J; Lippow, Shaun M; Tidor, Bruce; White, Jacob K

    2007-07-07

    Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, here methods were developed to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work [J. Comput.-Aided Mol. Des. 9, 149 (1995)], two classes of curved elements were defined that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. Numerical integration techniques are presented that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, a set of calculations are presented that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. 
The extra accuracy is attributed to the exact representation of the solute-solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that the methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online as supplemental material.

  13. Generalized Mantel-Haenszel Methods for Differential Item Functioning Detection

    ERIC Educational Resources Information Center

    Fidalgo, Angel M.; Madeira, Jaqueline M.

    2008-01-01

    Mantel-Haenszel methods comprise a highly flexible methodology for assessing the degree of association between two categorical variables, whether they are nominal or ordinal, while controlling for other variables. The versatility of Mantel-Haenszel analytical approaches has made them very popular in the assessment of the differential functioning…

  14. Tests of Independence for Ordinal Data Using Bootstrap.

    ERIC Educational Resources Information Center

    Chan, Wai; Yung, Yiu-Fai; Bentler, Peter M.; Tang, Man-Lai

    1998-01-01

    Two bootstrap tests are proposed to test the independence hypothesis in a two-way cross table. Monte Carlo studies are used to compare the traditional asymptotic test with these bootstrap methods, and the bootstrap methods are found superior in two ways: control of Type I error and statistical power. (SLD)
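A bootstrap test of independence for a two-way table can be sketched as follows. This is a generic illustration (Pearson chi-square statistic, resampling under the independence null from the observed marginals), not necessarily the specific tests proposed in the cited paper:

```python
import numpy as np

def chi2_stat(table):
    """Pearson chi-square statistic for a two-way contingency table."""
    total = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / total
    mask = expected > 0          # empty rows/columns contribute nothing
    return ((table[mask] - expected[mask]) ** 2 / expected[mask]).sum()

def bootstrap_independence_test(table, n_boot=500, seed=0):
    """Bootstrap p-value: resample n observations with row and column
    labels drawn independently from the observed marginals (the null)."""
    rng = np.random.default_rng(seed)
    table = np.asarray(table, dtype=float)
    n = int(table.sum())
    p_row = table.sum(axis=1) / n
    p_col = table.sum(axis=0) / n
    observed = chi2_stat(table)
    hits = 0
    for _ in range(n_boot):
        rows = rng.choice(len(p_row), size=n, p=p_row)
        cols = rng.choice(len(p_col), size=n, p=p_col)
        boot = np.zeros_like(table)
        np.add.at(boot, (rows, cols), 1.0)
        hits += chi2_stat(boot) >= observed
    return hits / n_boot
```

For a strongly dependent table such as `[[30, 5], [5, 30]]` the resampled statistics rarely reach the observed value, giving a p-value near zero.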

  15. Improving Your Exploratory Factor Analysis for Ordinal Data: A Demonstration Using FACTOR

    ERIC Educational Resources Information Center

    Baglin, James

    2014-01-01

    Exploratory factor analysis (EFA) methods are used extensively in the field of assessment and evaluation. Due to EFA's widespread use, common methods and practices have come under close scrutiny. A substantial body of literature has been compiled highlighting problems with many of the methods and practices used in EFA, and, in response, many…

  16. Adaptive Discrete Hypergraph Matching.

    PubMed

    Yan, Junchi; Li, Changsheng; Li, Yin; Cao, Guitao

    2018-02-01

This paper addresses the problem of hypergraph matching using higher-order affinity information. We propose a solver that iteratively updates the solution in the discrete domain by linear assignment approximation. The proposed method is guaranteed to converge to a stationary discrete solution and avoids the annealing procedure and ad hoc post-binarization step that are required in several previous methods. Specifically, we start with a simple iterative discrete gradient assignment solver. This solver can be trapped in an -circle sequence under moderate conditions, where is the order of the graph matching problem. We then devise an adaptive relaxation mechanism to jump out of this degenerate case and show that the resulting new path will converge to a fixed solution in the discrete domain. The proposed method is tested on both synthetic and real-world benchmarks. The experimental results corroborate the efficacy of our method.

  17. Methods for discrete solitons in nonlinear lattices.

    PubMed

    Ablowitz, Mark J; Musslimani, Ziad H; Biondini, Gino

    2002-02-01

    A method to find discrete solitons in nonlinear lattices is introduced. Using nonlinear optical waveguide arrays as a prototype application, both stationary and traveling-wave solitons are investigated. In the limit of small wave velocity, a fully discrete perturbative analysis yields formulas for the mode shapes and velocity.

  18. Discretization vs. Rounding Error in Euler's Method

    ERIC Educational Resources Information Center

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
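The trade-off the abstract describes is easy to observe numerically. The sketch below applies explicit Euler to y' = y, y(0) = 1, whose exact value at t = 1 is e; halving the step size roughly halves the discretization error (first-order convergence), while the growing step count is where rounding error eventually accumulates:

```python
import math

def euler(f, y0, t0, t1, n):
    """Explicit Euler with n steps from t0 to t1."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# y' = y, y(0) = 1  =>  y(1) = e; the error shrinks roughly like O(h)
errors = [abs(euler(lambda t, y: y, 1.0, 0.0, 1.0, n) - math.e)
          for n in (10, 100, 1000)]
```

Each tenfold reduction in step size cuts the error by close to a factor of ten, which is the signature of a first-order method.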

  19. Reduction from cost-sensitive ordinal ranking to weighted binary classification.

    PubMed

    Lin, Hsuan-Tien; Li, Ling

    2012-05-01

    We present a reduction framework from ordinal ranking to binary classification. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranker from the binary classifier. Based on the framework, we show that a weighted 0/1 loss of the binary classifier upper-bounds the mislabeling cost of the ranker, both error-wise and regret-wise. Our framework allows not only the design of good ordinal ranking algorithms based on well-tuned binary classification approaches, but also the derivation of new generalization bounds for ordinal ranking from known bounds for binary classification. In addition, our framework unifies many existing ordinal ranking algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms. In addition, the newly designed algorithms lead to better cost-sensitive ordinal ranking performance, as well as improved listwise ranking performance.
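The first step of the framework, extracting extended binary examples from ordinal examples, can be sketched as below. This is an illustrative reconstruction of the reduction idea with unit weights (absolute cost), not the authors' code:

```python
import numpy as np

def extend_examples(X, y, K):
    """Reduce ordinal ranking with ranks 1..K to weighted binary
    classification: each (x, rank) yields K-1 examples asking
    'is the rank greater than k?' for k = 1..K-1."""
    Xe, ye, we = [], [], []
    for x, rank in zip(X, y):
        for k in range(1, K):
            Xe.append(np.append(x, k))   # feature vector extended with threshold index
            ye.append(1 if rank > k else -1)
            we.append(1.0)               # absolute cost => unit weights
    return np.array(Xe), np.array(ye), np.array(we)

def rank_from_classifier(predict, x, K):
    """Construct a ranker: count the thresholds the binary classifier
    says are exceeded."""
    return 1 + sum(predict(np.append(x, k)) > 0 for k in range(1, K))
```

Any binary learner can then be trained on `(Xe, ye, we)`, and the abstract's bound says its weighted 0/1 loss upper-bounds the ranker's mislabeling cost.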

  20. Wheat mill stream properties for discrete element method modeling

    USDA-ARS?s Scientific Manuscript database

    A discrete phase approach based on individual wheat kernel characteristics is needed to overcome the limitations of previous statistical models and accurately predict the milling behavior of wheat. As a first step to develop a discrete element method (DEM) model for the wheat milling process, this s...

  1. Direct Discrete Method for Neutronic Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vosoughi, Naser; Akbar Salehi, Ali; Shahriari, Majid

    The objective of this paper is to introduce a new direct method for neutronic calculations. This method which is named Direct Discrete Method, is simpler than the neutron Transport equation and also more compatible with physical meaning of problems. This method is based on physic of problem and with meshing of the desired geometry, writing the balance equation for each mesh intervals and with notice to the conjunction between these mesh intervals, produce the final discrete equations series without production of neutron transport differential equation and mandatory passing from differential equation bridge. We have produced neutron discrete equations for amore » cylindrical shape with two boundary conditions in one group energy. The correction of the results from this method are tested with MCNP-4B code execution. (authors)« less

  2. The discrete adjoint method for parameter identification in multibody system dynamics.

    PubMed

    Lauß, Thomas; Oberpeilsteiner, Stefan; Steiner, Wolfgang; Nachbagauer, Karin

    2018-01-01

The adjoint method is an elegant approach for computing the gradient of a cost function in order to identify a set of parameters. An additional set of differential equations has to be solved to compute the adjoint variables, which are then used for the gradient computation. However, the accuracy of the numerical solution of the adjoint differential equation has a great impact on the gradient. Hence, an alternative approach is the discrete adjoint method, where the adjoint differential equations are replaced by algebraic equations. Therefore, a finite difference scheme is constructed for the adjoint system directly from the numerical time integration method. The method provides the exact gradient of the discretized cost function subject to the discretized equations of motion.
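The key property, that the discrete adjoint gives the exact gradient of the discretized cost, can be demonstrated on a toy scalar problem. The sketch below uses explicit Euler for x' = -p·x with cost J = x_N², which is far simpler than a multibody system but exhibits the same mechanism:

```python
def forward(p, x0=1.0, h=0.01, N=100):
    """Explicit Euler trajectory for x' = -p * x."""
    xs = [x0]
    for _ in range(N):
        xs.append(xs[-1] + h * (-p * xs[-1]))
    return xs

def discrete_adjoint_gradient(p, x0=1.0, h=0.01, N=100):
    """Gradient dJ/dp of J = x_N^2 via the discrete adjoint: the adjoint
    recursion is built from the time-stepping scheme itself, not from a
    discretized continuous adjoint equation."""
    xs = forward(p, x0, h, N)
    lam = 2.0 * xs[-1]                 # dJ/dx_N
    grad = 0.0
    for n in range(N - 1, -1, -1):
        grad += lam * h * (-xs[n])     # lam_{n+1} * d(step)/dp at step n
        lam = lam * (1.0 - h * p)      # lam_n = lam_{n+1} * d(step)/dx
    return grad
```

Because the adjoint is derived from the discrete map, the result matches a finite-difference gradient of the discretized cost to machine-level accuracy.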

  3. Impact of San Francisco's toy ordinance on restaurants and children's food purchases, 2011-2012.

    PubMed

    Otten, Jennifer J; Saelens, Brian E; Kapphahn, Kristopher I; Hekler, Eric B; Buman, Matthew P; Goldstein, Benjamin A; Krukowski, Rebecca A; O'Donohue, Laura S; Gardner, Christopher D; King, Abby C

    2014-07-17

    In 2011, San Francisco passed the first citywide ordinance to improve the nutritional standards of children's meals sold at restaurants by preventing the giving away of free toys or other incentives with meals unless nutritional criteria were met. This study examined the impact of the Healthy Food Incentives Ordinance at ordinance-affected restaurants on restaurant response (eg, toy-distribution practices, change in children's menus), and the energy and nutrient content of all orders and children's-meal-only orders purchased for children aged 0 through 12 years. Restaurant responses were examined from January 2010 through March 2012. Parent-caregiver/child dyads (n = 762) who were restaurant customers were surveyed at 2 points before and 1 seasonally matched point after ordinance enactment at Chain A and B restaurants (n = 30) in 2011 and 2012. Both restaurant chains responded to the ordinance by selling toys separately from children's meals, but neither changed their menus to meet ordinance-specified nutrition criteria. Among children for whom children's meals were purchased, significant decreases in kilocalories, sodium, and fat per order were likely due to changes in children's side dishes and beverages at Chain A. Although the changes at Chain A did not appear to be directly in response to the ordinance, the transition to a more healthful beverage and default side dish was consistent with the intent of the ordinance. Study results underscore the importance of policy wording, support the concept that more healthful defaults may be a powerful approach for improving dietary intake, and suggest that public policies may contribute to positive restaurant changes.

  4. Modified symplectic schemes with nearly-analytic discrete operators for acoustic wave simulations

    NASA Astrophysics Data System (ADS)

    Liu, Shaolin; Yang, Dinghui; Lang, Chao; Wang, Wenshuai; Pan, Zhide

    2017-04-01

    Using a structure-preserving algorithm significantly increases the computational efficiency of solving wave equations. However, only a few explicit symplectic schemes are available in the literature, and the capabilities of these symplectic schemes have not been sufficiently exploited. Here, we propose a modified strategy to construct explicit symplectic schemes for time advance. The acoustic wave equation is transformed into a Hamiltonian system. The classical symplectic partitioned Runge-Kutta (PRK) method is used for the temporal discretization. Additional spatial differential terms are added to the PRK schemes to form the modified symplectic methods and then two modified time-advancing symplectic methods with all of positive symplectic coefficients are then constructed. The spatial differential operators are approximated by nearly-analytic discrete (NAD) operators, and we call the fully discretized scheme modified symplectic nearly analytic discrete (MSNAD) method. Theoretical analyses show that the MSNAD methods exhibit less numerical dispersion and higher stability limits than conventional methods. Three numerical experiments are conducted to verify the advantages of the MSNAD methods, such as their numerical accuracy, computational cost, stability, and long-term calculation capability.
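The structure-preservation payoff that motivates this work can be illustrated with the simplest symplectic partitioned scheme. The sketch below is Störmer-Verlet for a harmonic oscillator, not the paper's modified PRK/NAD scheme; it shows the long-time energy behavior that symplectic integrators provide:

```python
def verlet(q, p, n_steps, h=0.05):
    """Stormer-Verlet (a symplectic partitioned scheme) for H = (p^2 + q^2)/2."""
    for _ in range(n_steps):
        p -= 0.5 * h * q      # half kick  (dV/dq = q)
        q += h * p            # drift
        p -= 0.5 * h * q      # half kick
    return q, p

energy = lambda q, p: 0.5 * (p * p + q * q)
q, p = 1.0, 0.0
e0 = energy(q, p)
q, p = verlet(q, p, 10000)        # integrate to t = 500
drift = abs(energy(q, p) - e0)    # stays bounded for symplectic schemes
```

A non-symplectic explicit scheme of the same order would show a systematic energy drift over such a long integration, which is exactly the long-term calculation capability the abstract emphasizes.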

  5. Evaluation of diagnostic accuracy in detecting ordered symptom statuses without a gold standard

    PubMed Central

    Wang, Zheyu; Zhou, Xiao-Hua; Wang, Miqu

    2011-01-01

Our research is motivated by 2 methodological problems in assessing the diagnostic accuracy of traditional Chinese medicine (TCM) doctors in detecting a particular symptom whose true status has an ordinal scale and is unknown: imperfect gold standard bias and ordinal-scale symptom status. In this paper, we proposed a nonparametric maximum likelihood method for estimating and comparing the accuracy of different doctors in detecting a particular symptom without a gold standard when the true symptom status has multiple ordered classes. In addition, we extended the concept of the area under the receiver operating characteristic curve to a hyper-dimensional overall accuracy for diagnostic accuracy, with alternative graphs for displaying a visual result. The simulation studies showed that the proposed method had good performance in terms of bias and mean squared error. Finally, we applied our method to our motivating example of assessing the diagnostic abilities of 5 TCM doctors in detecting symptoms related to Chills disease. PMID:21209155

  6. A Method for Optimal Load Dispatch of a Multi-zone Power System with Zonal Exchange Constraints

    NASA Astrophysics Data System (ADS)

    Hazarika, Durlav; Das, Ranjay

    2018-04-01

This paper presents a method for economic generation scheduling of a multi-zone power system with inter-zonal operational constraints. For this purpose, generator rescheduling for a multi-area power system with inter-zonal operational constraints is represented as a two-step optimal generation scheduling problem. First, optimal generation scheduling is carried out for the zone having surplus or deficient generation with proper spinning reserve, using the co-ordination equation. The power exchange required for the deficit zones and zones having no generation is estimated based on the load demand and generation of each zone. Incremental transmission loss formulas are formulated for the transmission lines participating in the power transfer among the zones. Using these incremental transmission loss expressions in the co-ordination equation, the optimal generation scheduling for the zonal exchange is determined. Simulation is carried out on the IEEE 118-bus test system to examine the applicability and validity of the method.

  7. A Surrogate Technique for Investigating Deterministic Dynamics in Discrete Human Movement.

    PubMed

    Taylor, Paul G; Small, Michael; Lee, Kwee-Yum; Landeo, Raul; O'Meara, Damien M; Millett, Emma L

    2016-10-01

    Entropy is an effective tool for investigation of human movement variability. However, before applying entropy, it can be beneficial to employ analyses to confirm that observed data are not solely the result of stochastic processes. This can be achieved by contrasting observed data with that produced using surrogate methods. Unlike continuous movement, no appropriate method has been applied to discrete human movement. This article proposes a novel surrogate method for discrete movement data, outlining the processes for determining its critical values. The proposed technique reliably generated surrogates for discrete joint angle time series, destroying fine-scale dynamics of the observed signal, while maintaining macro structural characteristics. Comparison of entropy estimates indicated observed signals had greater regularity than surrogates and were not only the result of stochastic but also deterministic processes. The proposed surrogate method is both a valid and reliable technique to investigate determinism in other discrete human movement time series.
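The article proposes its own surrogate algorithm, which is not reproduced here. As a generic illustration of the same idea (destroy fine-scale dynamics while keeping macro structure), the small-shuffle surrogate of Nakamura and Small perturbs each sample's time index and reorders the series:

```python
import numpy as np

def small_shuffle_surrogate(x, A=1.0, rng=None):
    """Small-shuffle surrogate: perturb each sample's time index by
    A * N(0,1) and reorder. Local (fine-scale) temporal structure is
    destroyed while the large-scale shape of the series is preserved."""
    rng = np.random.default_rng() if rng is None else rng
    idx = np.arange(len(x)) + A * rng.standard_normal(len(x))
    return np.asarray(x)[np.argsort(idx)]

x = np.sin(np.linspace(0, np.pi, 500))   # smooth "macro" structure
s = small_shuffle_surrogate(x, A=1.0, rng=np.random.default_rng(0))
# s is a permutation of x: local ordering scrambled, overall shape kept
```

Comparing an entropy estimate between the observed series and an ensemble of such surrogates is the kind of contrast the abstract describes for detecting determinism.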

  8. Adaptive Event-Triggered Control Based on Heuristic Dynamic Programming for Nonlinear Discrete-Time Systems.

    PubMed

    Dong, Lu; Zhong, Xiangnan; Sun, Changyin; He, Haibo

    2017-07-01

    This paper presents the design of a novel adaptive event-triggered control method based on the heuristic dynamic programming (HDP) technique for nonlinear discrete-time systems with unknown system dynamics. In the proposed method, the control law is only updated when the event-triggered condition is violated. Compared with the periodic updates in the traditional adaptive dynamic programming (ADP) control, the proposed method can reduce the computation and transmission cost. An actor-critic framework is used to learn the optimal event-triggered control law and the value function. Furthermore, a model network is designed to estimate the system state vector. The main contribution of this paper is to design a new trigger threshold for discrete-time systems. A detailed Lyapunov stability analysis shows that our proposed event-triggered controller can asymptotically stabilize the discrete-time systems. Finally, we test our method on two different discrete-time systems, and the simulation results are included.
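The triggering mechanism itself (as opposed to the paper's HDP learning scheme) can be shown on a toy scalar system: the control state is only refreshed when the holding error violates a state-dependent threshold, so the system stabilizes with fewer updates than time steps. All gains and thresholds below are illustrative choices, not values from the paper:

```python
def simulate_event_triggered(x0=5.0, steps=60, threshold=0.1):
    """Toy event-triggered state feedback for the unstable scalar plant
    x[k+1] = x + 0.1 * (0.2 * x + u), u = -x_held."""
    x, x_held, updates = x0, x0, 0
    for _ in range(steps):
        if abs(x - x_held) > threshold * abs(x):   # event-trigger condition
            x_held = x                             # "transmit" state, refresh control
            updates += 1
        u = -1.0 * x_held                          # feedback gain K = 1
        x = x + 0.1 * (0.2 * x + u)                # plant update (h = 0.1, a = 0.2)
    return x, updates

final_x, n_updates = simulate_event_triggered()
```

The state converges toward zero even though the control is updated on only a fraction of the time steps, which is the computation/transmission saving the abstract claims.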

  9. Numerical modelling of soot formation and oxidation in laminar coflow non-smoking and smoking ethylene diffusion flames

    NASA Astrophysics Data System (ADS)

    Liu, Fengshan; Guo, Hongsheng; Smallwood, Gregory J.; Gülder, Ömer L.

    2003-06-01

    A numerical study of soot formation and oxidation in axisymmetric laminar coflow non-smoking and smoking ethylene diffusion flames was conducted using detailed gas-phase chemistry and complex thermal and transport properties. A modified two-equation soot model was employed to describe soot nucleation, growth and oxidation. Interaction between the gas-phase chemistry and soot chemistry was taken into account. Radiation heat transfer by both soot and radiating gases was calculated using the discrete-ordinates method coupled with a statistical narrow-band correlated-k based band model, and was used to evaluate the simple optically thin approximation. The governing equations in fully elliptic form were solved. The current models in the literature describing soot oxidation by O2 and OH have to be modified in order to predict the smoking flame. The modified soot oxidation model has only moderate effects on the calculation of the non-smoking flame, but dramatically affects the soot oxidation near the flame tip in the smoking flame. Numerical results of temperature, soot volume fraction and primary soot particle size and number density were compared with experimental data in the literature. Relatively good agreement was found between the prediction and the experimental data. The optically thin approximation radiation model significantly underpredicts temperatures in the upper portion of both flames, seriously affecting the soot prediction.

  10. [The Effect of Observation Geometry on Polarized Skylight Spectrum].

    PubMed

    Zhang, Ren-bin; Wang, Ling-mei; Gao, Jun; Wang, Chi

    2015-03-01

We study the spectral characteristics of polarized skylight as the observation geometry changes with solar zenith angle (SZA), viewing zenith angle (VZA), and relative azimuth angle (RAA). Simulation of the cloudless daylight polarization spectrum is carried out with a vector discrete-ordinates solver for the radiative transfer equation. In the Sun's principal and perpendicular planes, spectral irradiance data at wavelengths between 0.4 and 3 μm are calculated to extend the atmospheric polarization spectral information under the following conditions: the MODTRAN solar reference spectrum is the only illumination source, and the main factors influencing polarized radiative transfer include the underlying surface albedo, aerosol layers and components, and the absorption of trace gases. The simulation analysis shows: (1) When the relative azimuth angle is zero, the magnitude of the spectrum U/I is below 10(-7) and V/I is negligible; the degree of polarization and the spectrum Q/I are shaped like the letter V or a mirrored letter U. (2) In twilight, when the Sun is not in the field of view of the detector, the polarization of the daytime sky has two maxima near 0.51 and 2.75 μm and a minimum near 1.5 μm. For arbitrary observation geometry, the spectral signal of V/I may be ignored. Depending on the observation geometry, choosing different spectral bands or polarized signals will aid target detection.

  11. A nuclear DNA perspective on delineating evolutionarily significant lineages in polyploids: the case of the endangered shortnose sturgeon (Acipenser brevirostrum)

    USGS Publications Warehouse

    King, Timothy L.; Henderson, Anne P.; Kynard, Boyd E.; Kieffer, Micah C.; Peterson, Douglas L.; Aunins, Aaron W.; Brown, Bonnie L.

    2014-01-01

    The shortnose sturgeon, Acipenser brevirostrum, oft considered a phylogenetic relic, is listed as an “endangered species threatened with extinction” in the US and “Vulnerable” on the IUCN Red List. Effective conservation of A. brevirostrum depends on understanding its diversity and evolutionary processes, yet challenges associated with the polyploid nature of its nuclear genome have heretofore limited population genetic analysis to maternally inherited haploid characters. We developed a suite of polysomic microsatellite DNA markers and characterized a sample of 561 shortnose sturgeon collected from major extant populations along the North American Atlantic coast. The 181 alleles observed at 11 loci were scored as binary loci and the data were subjected to multivariate ordination, Bayesian clustering, hierarchical partitioning of variance, and among-population distance metric tests. The methods uncovered moderately high levels of gene diversity suggesting population structuring across and within three metapopulations (Northeast, Mid-Atlantic, and Southeast) that encompass seven demographically discrete and evolutionarily distinct lineages. The predicted groups are consistent with previously described behavioral patterns, especially dispersal and migration, supporting the interpretation that A. brevirostrum exhibit adaptive differences based on watershed. Combined with results of prior genetic (mitochondrial DNA) and behavioral studies, the current work suggests that dispersal is an important factor in maintaining genetic diversity in A. brevirostrum and that the basic unit for conservation management is arguably the local population.

  12. A nuclear DNA perspective on delineating evolutionarily significant lineages in polyploids: the case of the endangered shortnose sturgeon (Acipenser brevirostrum).

    PubMed

    King, Tim L; Henderson, Anne P; Kynard, Boyd E; Kieffer, Micah C; Peterson, Douglas L; Aunins, Aaron W; Brown, Bonnie L

    2014-01-01

    The shortnose sturgeon, Acipenser brevirostrum, oft considered a phylogenetic relic, is listed as an "endangered species threatened with extinction" in the US and "Vulnerable" on the IUCN Red List. Effective conservation of A. brevirostrum depends on understanding its diversity and evolutionary processes, yet challenges associated with the polyploid nature of its nuclear genome have heretofore limited population genetic analysis to maternally inherited haploid characters. We developed a suite of polysomic microsatellite DNA markers and characterized a sample of 561 shortnose sturgeon collected from major extant populations along the North American Atlantic coast. The 181 alleles observed at 11 loci were scored as binary loci and the data were subjected to multivariate ordination, Bayesian clustering, hierarchical partitioning of variance, and among-population distance metric tests. The methods uncovered moderately high levels of gene diversity suggesting population structuring across and within three metapopulations (Northeast, Mid-Atlantic, and Southeast) that encompass seven demographically discrete and evolutionarily distinct lineages. The predicted groups are consistent with previously described behavioral patterns, especially dispersal and migration, supporting the interpretation that A. brevirostrum exhibit adaptive differences based on watershed. Combined with results of prior genetic (mitochondrial DNA) and behavioral studies, the current work suggests that dispersal is an important factor in maintaining genetic diversity in A. brevirostrum and that the basic unit for conservation management is arguably the local population.

  13. Fast frequency domain method to detect skew in a document image

    NASA Astrophysics Data System (ADS)

    Mehta, Sunita; Walia, Ekta; Dutta, Maitreyee

    2015-12-01

In this paper, a new fast frequency-domain method based on the Discrete Wavelet Transform and the Fast Fourier Transform is implemented for determining the skew angle of a document image. First, image size reduction is performed using the two-dimensional Discrete Wavelet Transform, and then the skew angle is computed using the Fast Fourier Transform. The skew angle error is almost negligible. The proposed method is evaluated on a large number of documents having skew between -90° and +90°, and the results are compared with the Moments with Discrete Wavelet Transform method and other commonly used existing methods. The proposed method works more efficiently than the existing methods. It also works with typed and picture documents having different fonts and resolutions, overcoming the drawback of the recently proposed Moments with Discrete Wavelet Transform method, which does not work with picture documents.

  14. A Scenario-Based Parametric Analysis of Stable Marriage Approaches to the Army Officer Assignment Problem

    DTIC Science & Technology

    2017-03-23

solutions obtained through their proposed method to comparative instances of a generalized assignment problem with either ordinal cost components or... method flag: Designates the method by which the changed/new assignment problem instance is solved. methodFlag = 0: SMAWarmstart Returns a matching... of randomized perturbations. We examine the contrasts between these methods in the context of assigning Army Officers among a set of identified

  15. A survival tree method for the analysis of discrete event times in clinical and epidemiological studies.

    PubMed

    Schmid, Matthias; Küchenhoff, Helmut; Hoerauf, Achim; Tutz, Gerhard

    2016-02-28

    Survival trees are a popular alternative to parametric survival modeling when there are interactions between the predictor variables or when the aim is to stratify patients into prognostic subgroups. A limitation of classical survival tree methodology is that most algorithms for tree construction are designed for continuous outcome variables. Hence, classical methods might not be appropriate if failure time data are measured on a discrete time scale (as is often the case in longitudinal studies where data are collected, e.g., quarterly or yearly). To address this issue, we develop a method for discrete survival tree construction. The proposed technique is based on the result that the likelihood of a discrete survival model is equivalent to the likelihood of a regression model for binary outcome data. Hence, we modify tree construction methods for binary outcomes such that they result in optimized partitions for the estimation of discrete hazard functions. By applying the proposed method to data from a randomized trial in patients with filarial lymphedema, we demonstrate how discrete survival trees can be used to identify clinically relevant patient groups with similar survival behavior. Copyright © 2015 John Wiley & Sons, Ltd.
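The likelihood equivalence the method rests on corresponds to the standard person-period expansion of discrete survival data: each subject becomes one binary row per period at risk, and a binary model fitted to these rows estimates the discrete hazard. A minimal sketch of that expansion:

```python
def person_period(data):
    """Expand discrete survival data into binary person-period rows.
    Each subject (time t, event indicator d, covariates x) becomes rows
    (period k, x, y) for k = 1..t, with y = 1 only in the event period."""
    rows = []
    for t, d, x in data:
        for k in range(1, t + 1):
            y = 1 if (k == t and d == 1) else 0
            rows.append((k, x, y))
    return rows
```

Tree construction methods for binary outcomes can then partition these rows, which is how the abstract's discrete survival trees obtain optimized partitions for hazard estimation.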

  16. An energy-stable method for solving the incompressible Navier-Stokes equations with non-slip boundary condition

    NASA Astrophysics Data System (ADS)

    Lee, Byungjoon; Min, Chohong

    2018-05-01

We introduce a stable method for solving the incompressible Navier-Stokes equations with variable density and viscosity. Our method is stable in the sense that it does not increase the total energy of the dynamics, i.e., the sum of kinetic and potential energy. Instead of velocity, a new state variable is chosen so that the kinetic energy is given by the L2 norm of the new variable. The Navier-Stokes equations are rephrased with respect to the new variable, and a stable time discretization for the rephrased equations is presented. Taking the incompressibility in the Marker-And-Cell (MAC) grid into consideration, we present a modified Lax-Friedrichs method that is L2 stable. Utilizing discrete integration by parts on the MAC grid and the modified Lax-Friedrichs method, the time discretization is extended to a full discretization. An explicit CFL condition for the stability of the full discretization is given and mathematically proved.

  17. Setting up virgin stress conditions in discrete element models.

    PubMed

    Rojek, J; Karlis, G F; Malinowski, L J; Beer, G

    2013-03-01

    In the present work, a methodology for setting up virgin stress conditions in discrete element models is proposed. The developed algorithm is applicable to discrete or coupled discrete/continuum modeling of underground excavation employing the discrete element method (DEM). Since the DEM works with contact forces rather than stresses there is a need for the conversion of pre-excavation stresses to contact forces for the DEM model. Different possibilities of setting up virgin stress conditions in the DEM model are reviewed and critically assessed. Finally, a new method to obtain a discrete element model with contact forces equivalent to given macroscopic virgin stresses is proposed. The test examples presented show that good results may be obtained regardless of the shape of the DEM domain.

  18. Setting up virgin stress conditions in discrete element models

    PubMed Central

    Rojek, J.; Karlis, G.F.; Malinowski, L.J.; Beer, G.

    2013-01-01

    In the present work, a methodology for setting up virgin stress conditions in discrete element models is proposed. The developed algorithm is applicable to discrete or coupled discrete/continuum modeling of underground excavation employing the discrete element method (DEM). Since the DEM works with contact forces rather than stresses there is a need for the conversion of pre-excavation stresses to contact forces for the DEM model. Different possibilities of setting up virgin stress conditions in the DEM model are reviewed and critically assessed. Finally, a new method to obtain a discrete element model with contact forces equivalent to given macroscopic virgin stresses is proposed. The test examples presented show that good results may be obtained regardless of the shape of the DEM domain. PMID:27087731

  19. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    NASA Astrophysics Data System (ADS)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: the infiltration parameters are obtained in the first stage and the unit hydrograph ordinates are estimated in the second. To combine the two stages into a single one, a modified penalty parameter approach is proposed for converting the constrained optimization problem into an unconstrained one. The approach is designed so that the model first obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using genetic algorithms. A reduction factor is applied to the penalty parameter so that the optimal infiltration parameters already found are not destroyed during the subsequent generations of the genetic algorithm that search for the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is effective, simple in concept, and has potential for field application.
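
    The single-stage penalty formulation can be sketched as follows. The objective, constraint, and variable names are illustrative stand-ins, not the paper's exact formulation; the idea is that constraint violations are added to the sum of squared errors with a weight that a reduction factor would then shrink across GA generations:

```python
import numpy as np

def penalized_objective(uh, p_eff, q_obs, penalty_weight):
    """Single-stage objective for UH estimation (illustrative sketch).

    uh      : candidate unit hydrograph ordinates
    p_eff   : effective rainfall pulses (already net of infiltration)
    q_obs   : observed direct runoff
    The constraint handled by the penalty here is simple non-negativity
    of the UH ordinates; the paper's constraints are more elaborate.
    """
    q_sim = np.convolve(p_eff, uh)[: len(q_obs)]   # discrete convolution
    sse = np.sum((q_obs - q_sim) ** 2)
    violation = np.sum(np.minimum(uh, 0.0) ** 2)   # squared negative parts
    return sse + penalty_weight * violation

p = np.array([1.0, 0.5])
uh_true = np.array([0.2, 0.5, 0.2, 0.1])
q = np.convolve(p, uh_true)[:5]

feasible = penalized_objective(uh_true, p, q, penalty_weight=1e6)
infeasible = penalized_objective(np.array([0.3, 0.7, -0.2, 0.2]), p, q, 1e6)
# a reduction factor would shrink penalty_weight over later GA generations
```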

  20. Testing the Two-Layer Model for Correcting Near Cloud Reflectance Enhancement Using LES SHDOM Simulated Radiances

    NASA Technical Reports Server (NTRS)

    Wen, Guoyong; Marshak, Alexander; Varnai, Tamas; Levy, Robert

    2016-01-01

    A transition zone exists between cloudy and clear skies, in which clouds scatter solar radiation into clear-sky regions. From a satellite perspective, it appears that clouds enhance the radiation nearby. We seek a simple method to estimate this enhancement, since it is computationally expensive to account for all three-dimensional (3-D) scattering processes. In previous studies, we developed a simple two-layer model (2LM) that estimated the radiation scattered via cloud-molecular interactions. Here we have developed a new model to account for cloud-surface interaction (CSI). We test the models by comparing them to calculations from full 3-D radiative transfer simulations of realistic cloud scenes. For these scenes, Moderate Resolution Imaging Spectroradiometer (MODIS)-like radiance fields were computed with the Spherical Harmonic Discrete Ordinate Method (SHDOM), based on a large number of cumulus fields simulated by the University of California, Los Angeles (UCLA) large eddy simulation (LES) model. We find that the original 2LM model, which estimates cloud-air molecule interactions, accounts for 64% of the total reflectance enhancement, and the new model (2LM+CSI), which also includes cloud-surface interactions, accounts for nearly 80%. We discuss the possibility of accounting for cloud-aerosol radiative interactions in 3-D cloud-induced reflectance enhancement, which may explain the remaining 20% of the enhancement. Because these are simple models, the corrections can be applied to global satellite observations (e.g., MODIS) and help to reduce biases in aerosol and other clear-sky retrievals.

  1. Statistical Models for the Analysis of Zero-Inflated Pain Intensity Numeric Rating Scale Data.

    PubMed

    Goulet, Joseph L; Buta, Eugenia; Bathulapalli, Harini; Gueorguieva, Ralitza; Brandt, Cynthia A

    2017-03-01

    Pain intensity is often measured in clinical and research settings using the 0 to 10 numeric rating scale (NRS). NRS scores are recorded as discrete values, and in some samples they may display a high proportion of zeroes and a right-skewed distribution. Despite this, statistical methods for normally distributed data are frequently used in the analysis of NRS data. We present results from an observational cross-sectional study examining the association of NRS scores with patient characteristics using data collected from a large cohort of 18,935 veterans in Department of Veterans Affairs care diagnosed with a potentially painful musculoskeletal disorder. The mean (variance) NRS pain was 3.0 (7.5), and 34% of patients reported no pain (NRS = 0). We compared the following statistical models for analyzing NRS scores: linear regression, generalized linear models (Poisson and negative binomial), zero-inflated and hurdle models for data with an excess of zeroes, and a cumulative logit model for ordinal data. We examined model fit, interpretability of results, and whether conclusions about the predictor effects changed across models. In this study, models that accommodate zero inflation provided a better fit than the other models. These models should be considered for the analysis of NRS data with a large proportion of zeroes. We examined and analyzed pain data from a large cohort of veterans with musculoskeletal disorders. We found that many reported no current pain on the NRS on the diagnosis date. We present several alternative statistical methods for the analysis of pain intensity data with a large proportion of zeroes. Published by Elsevier Inc.
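
    The advantage of zero-inflation models for data like these (34% zeros, mean 3.0, variance 7.5) can be sketched by comparing log-likelihoods on simulated NRS-like counts. The mixture and its parameters below are illustrative, not the paper's fitted model:

```python
import numpy as np
from math import lgamma

def log_fact(y):
    # log(y!) via the log-gamma function
    return np.array([lgamma(v + 1.0) for v in y])

def poisson_loglik(y, lam):
    return np.sum(y * np.log(lam) - lam - log_fact(y))

def zip_loglik(y, pi, lam):
    """Zero-inflated Poisson: zeros arise from a point mass (prob pi) or
    from the Poisson part; positive counts only from the Poisson part."""
    ll = np.where(
        y == 0,
        np.log(pi + (1.0 - pi) * np.exp(-lam)),
        np.log(1.0 - pi) + y * np.log(lam) - lam - log_fact(y),
    )
    return np.sum(ll)

rng = np.random.default_rng(0)
n, pi_true, lam_true = 5000, 0.34, 4.0
is_zero = rng.random(n) < pi_true
y = np.where(is_zero, 0, rng.poisson(lam_true, size=n))

ll_pois = poisson_loglik(y, y.mean())        # plain Poisson at its MLE
ll_zip = zip_loglik(y, pi_true, lam_true)    # ZIP at the generating values
# the zero-inflated model accommodates the excess zeros far better
```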

  2. Automated Weight-Window Generation for Threat Detection Applications Using ADVANTG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, Scott W; Miller, Thomas Martin; Evans, Thomas M

    2009-01-01

    Deterministic transport codes have been used for some time to generate weight-window parameters that can improve the efficiency of Monte Carlo simulations. As the use of this hybrid computational technique becomes more widespread, the scope of its applications is expanding. An active source of new applications is the field of homeland security--particularly the detection of nuclear material threats. For these problems, automated hybrid methods offer an efficient alternative to trial-and-error variance reduction techniques (e.g., geometry splitting or the stochastic weight window generator). The ADVANTG code has been developed to automate the generation of weight-window parameters for MCNP using the Consistent Adjoint Driven Importance Sampling method and employs the TORT or Denovo 3-D discrete ordinates codes to generate importance maps. In this paper, we describe the application of ADVANTG to a set of threat-detection simulations. We present numerical results for an 'active-interrogation' problem in which a standard cargo container is irradiated by a deuterium-tritium fusion neutron generator. We also present results for two passive detection problems in which a cargo container holding a shielded neutron or gamma source is placed near a portal monitor. For the passive detection problems, ADVANTG obtains an O(10^4) speedup and, for a detailed gamma spectrum tally, an average O(10^2) speedup relative to implicit-capture-only simulations, including the deterministic calculation time. For the active-interrogation problem, an O(10^4) speedup is obtained when compared to a simulation with angular source biasing and crude geometry splitting.
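
    The CADIS method that ADVANTG automates derives both a biased source and weight-window centers from an adjoint (importance) map. A toy 1-D sketch with made-up flux values (ADVANTG builds these quantities from 3-D TORT/Denovo adjoint solutions):

```python
import numpy as np

def cadis_parameters(source, adjoint_flux):
    """CADIS-style biased source and weight-window centers on a toy
    1-D spatial mesh.

    The 1-D setup and the numbers below are illustrative; the key
    relations are q_hat = q * phi_adj / R and w = R / phi_adj.
    """
    response = np.sum(source * adjoint_flux)       # estimated detector response R
    biased_source = source * adjoint_flux / response
    ww_centers = response / adjoint_flux           # w(r) = R / phi_adj(r)
    return biased_source, ww_centers

source = np.array([1.0, 1.0, 0.0, 0.0])           # source in the left cells
adjoint = np.array([0.01, 0.05, 0.2, 1.0])        # importance grows toward detector
q_hat, ww = cadis_parameters(source, adjoint)
# particles born in important regions start with proportionally lower weight
```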

  3. Revised users manual, Pulverized Coal Gasification or Combustion: 2-dimensional (87-PCGC-2): Final report, Volume 2. [87-PCGC-2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, P.J.; Smoot, L.D.; Brewster, B.S.

    1987-12-01

    A two-dimensional, steady-state model for describing a variety of reactive and non-reactive flows, including pulverized coal combustion and gasification, is presented. Recent code revisions and additions are described. The model, referred to as 87-PCGC-2, is applicable to cylindrical axi-symmetric systems. Turbulence is accounted for in both the fluid mechanics equations and the combustion scheme. Radiation from gases, walls, and particles is taken into account using either a flux method or the discrete ordinates method. The particle phase is modeled in a Lagrangian framework, such that mean paths of particle groups are followed. Several multi-step coal devolatilization schemes are included, along with a heterogeneous reaction scheme that allows for both diffusion and chemical reaction. Major gas-phase reactions are modeled assuming local instantaneous equilibrium, so that the reaction rates are limited by the turbulent mixing rate. A NOx finite-rate chemistry submodel is included which integrates chemical kinetics and the statistics of the turbulence. The gas phase is described by elliptic partial differential equations that are solved by an iterative line-by-line technique. Under-relaxation is used to achieve numerical stability. The generalized nature of the model allows for calculation of isothermal fluid mechanics, gaseous combustion, droplet combustion, particulate combustion, and various mixtures of the above, including combustion of coal-water and coal-oil slurries. Both combustion and gasification environments are permissible. User information and theory are presented, along with sample problems. 106 refs.

  4. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
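
    The coverage analysis the study argues for can be sketched in a few lines: simulate many samples, form an interval for each, and count how often the true value is captured. The normal-data t-interval below is a generic illustration, not the ordinal CFA setting of the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
n, sims, mu, sigma = 20, 10000, 0.0, 1.0

# one row per simulated study
samples = rng.normal(mu, sigma, size=(sims, n))
means = samples.mean(axis=1)
ses = samples.std(axis=1, ddof=1) / np.sqrt(n)
z = 2.093  # t critical value for df = 19, 95% interval

lo, hi = means - z * ses, means + z * ses
coverage = np.mean((lo <= mu) & (mu <= hi))
# coverage should sit close to the nominal 0.95; a systematic shortfall
# is the 'undercoverage' the abstract describes for non-Bayesian methods
```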

  5. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.

  6. A measure of association for ordered categorical data in population-based studies

    PubMed Central

    Nelson, Kerrie P; Edwards, Don

    2016-01-01

    Ordinal classification scales are commonly used to define a patient’s disease status in screening and diagnostic tests such as mammography. Challenges arise in agreement studies when evaluating the association between many raters’ classifications of patients’ disease or health status when an ordered categorical scale is used. In this paper, we describe a population-based approach and chance-corrected measure of association to evaluate the strength of relationship between multiple raters’ ordinal classifications where any number of raters can be accommodated. In contrast to Shrout and Fleiss’ intraclass correlation coefficient, the proposed measure of association is invariant with respect to changes in disease prevalence. We demonstrate how unique characteristics of individual raters can be explored using random effects. Simulation studies are conducted to demonstrate the properties of the proposed method under varying assumptions. The methods are applied to two large-scale agreement studies of breast cancer screening and prostate cancer severity. PMID:27184590

  7. a Numerical Method for Stability Analysis of Pinned Flexible Mechanisms

    NASA Astrophysics Data System (ADS)

    Beale, D. G.; Lee, S. W.

    1996-05-01

    A technique is presented to investigate the stability of mechanisms with pin-jointed flexible members. The method relies on a special floating frame from which elastic link co-ordinates are defined. Energies are easily developed for use in a Lagrange equation formulation, leading to a set of non-linear, mixed ordinary differential-algebraic equations of motion with constraints. Stability and bifurcation analysis is handled using a numerical procedure (generalized co-ordinate partitioning) that avoids the tedious and difficult task of analytically reducing the system of equations to a number equalling the system degrees of freedom. The proposed method was applied to (1) a slider-crank mechanism with a flexible connecting rod and a crank rotating at constant speed, and (2) a four-bar linkage with a flexible coupler and a constant-speed crank. In both cases, a single pinned-pinned beam bending mode is employed to develop resonance curves and stability boundaries in the crank length-crank speed parameter plane. Flip and fold bifurcations are common occurrences in both mechanisms. The accuracy of the proposed method was also verified by comparison with previous experimental results [1].

  8. Fisher Scoring Method for Parameter Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    NASA Astrophysics Data System (ADS)

    Widyaningsih, Purnami; Retno Sari Saputro, Dewi; Nugrahani Putri, Aulia

    2017-06-01

    The GWOLR model combines geographically weighted regression (GWR) and ordinal logistic regression (OLR) models. Its parameters are estimated by maximum likelihood. Such estimation, however, yields a difficult-to-solve system of nonlinear equations, and therefore a numerical approximation approach is required. The iterative approximation approach, in general, uses the Newton-Raphson (NR) method. The NR method has a disadvantage: its Hessian matrix of second derivatives must be evaluated at every iteration, and the iteration does not always converge. With regard to this matter, the NR method is modified by replacing the Hessian matrix with the Fisher information matrix, a variant termed Fisher scoring (FS). The present research seeks to determine GWOLR model parameter estimates using the Fisher scoring method and to apply the estimation to data on the level of vulnerability to Dengue Hemorrhagic Fever (DHF) in Semarang. The research concludes that health facilities make the greatest contribution to the probability of the number of DHF sufferers in both villages. Based on the number of sufferers, the IR category of DHF in both villages can be determined.
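
    The Fisher scoring substitution described above can be sketched on ordinary binary logistic regression, where the Fisher information has the closed form X^T W X with W = diag(p(1-p)); the GWOLR paper applies the same replacement within its geographically weighted ordinal likelihood. The data here are simulated for illustration:

```python
import numpy as np

def fisher_scoring_logistic(X, y, iters=25):
    """Fisher scoring for plain binary logistic regression.

    For this model the Fisher information equals the negative expected
    Hessian, so the update matches Newton-Raphson but is guaranteed to
    use a positive semi-definite matrix.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        score = X.T @ (y - p)                 # gradient of the log-likelihood
        W = p * (1.0 - p)
        info = X.T @ (X * W[:, None])         # Fisher information matrix
        beta = beta + np.linalg.solve(info, score)
    return beta, score

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
true_beta = np.array([-0.5, 1.5])
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

beta_hat, final_score = fisher_scoring_logistic(X, y)
# at convergence the score (gradient) vanishes
```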

  9. STUDYING TRAVEL-RELATED INDIVIDUAL ASSESSMENTS AND DESIRES BY COMBINING HIERARCHICALLY STRUCTURED ORDINAL VARIABLES

    PubMed Central

    Song, Tingting; Wittkowski, Knut M.

    2010-01-01

    Ordinal measures are frequently encountered in travel behavior research. This paper presents a new method for combining them when a hierarchical structure of the data can be presumed. The method is applied to study the subjective assessment of the amount of travel by different transportation modes among a group of French clerical workers, along with the desire to increase or decrease the use of such modes. Some advantages of this approach over traditional data reduction techniques such as factor analysis, when applied to ordinal data, are then illustrated. In this study, combining evidence from several variables sheds light on the observed moderately negative relationship between the personal assessment of the amount of travel and the desire to increase or decrease it, thus integrating previous partial (univariate) results. We find a latent demand for travel, which helps to clarify the behavioral mechanisms behind the induced traffic phenomenon. Categorizing the above relationship by transportation mode shows a desire for a less environmentally friendly mix of modes (i.e., a greater desire to use heavy motorized modes and a lower desire to use two-wheeled modes) whenever the respondents do not feel that they travel extensively. This result, combined with previous theoretical investigations concerning the determinants of the desire to alter trip consumption levels, shows the importance of making people aware of how much they travel. PMID:20953273

  10. Reduction of the discretization stencil of direct forcing immersed boundary methods on rectangular cells: The ghost node shifting method

    NASA Astrophysics Data System (ADS)

    Picot, Joris; Glockner, Stéphane

    2018-07-01

    We present an analytical study of discretization stencils for the Poisson problem and the incompressible Navier-Stokes problem when used with some direct forcing immersed boundary methods. This study uses, but is not limited to, second-order discretizations and Ghost-Cell Finite-Difference methods. We show that the stencil size increases with the aspect ratio of rectangular cells, which is undesirable as it breaks assumptions of some linear system solvers. To circumvent this drawback, a modification of the Ghost-Cell Finite-Difference methods is proposed to reduce the size of the discretization stencil to the one observed for square cells, i.e. with an aspect ratio equal to one. Numerical results validate the proposed method in terms of accuracy and convergence for the Poisson problem with both Dirichlet and Neumann boundary conditions. An improvement in error levels is also observed. In addition, we show that the application of the chosen Ghost-Cell Finite-Difference methods to the Navier-Stokes problem, discretized by a pressure-correction method, requires an additional interpolation step. This extra step is implemented and validated through well-known test cases of the Navier-Stokes equations.

  11. Numerical solution of the Saint-Venant equations by an efficient hybrid finite-volume/finite-difference method

    NASA Astrophysics Data System (ADS)

    Lai, Wencong; Khan, Abdul A.

    2018-04-01

    A computationally efficient hybrid finite-volume/finite-difference method is proposed for the numerical solution of the Saint-Venant equations in one-dimensional open channel flows. The method adopts a mass-conservative finite volume discretization for the continuity equation and a semi-implicit finite difference discretization for the dynamic-wave momentum equation. The spatial discretization of the convective flux term in the momentum equation employs an upwind scheme, and the water-surface gradient term is discretized using three different schemes. The performance of the numerical method is investigated in terms of efficiency and accuracy using various examples, including steady flow over a bump, dam-break flow over wet and dry downstream channels, wetting and drying in a parabolic bowl, and dam-break floods in laboratory physical models. Numerical solutions from the hybrid method are compared with solutions from a finite volume method along with analytic solutions or experimental measurements. The comparisons demonstrate that the hybrid method is efficient, accurate, and robust in modeling various flow scenarios, including subcritical, supercritical, and transcritical flows. In this method, the QUICK scheme for the surface slope discretization is more accurate and less diffusive than the center difference and the weighted average schemes.
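
    The mass-conservative finite-volume treatment of the continuity equation can be sketched with a first-order upwind flux on a toy 1-D periodic problem (the paper's scheme couples this with a semi-implicit momentum update on real channels, omitted here):

```python
import numpy as np

def upwind_fv_step(h, u, dx, dt):
    """One mass-conservative finite-volume step for h_t + (u h)_x = 0
    with constant u > 0 and periodic boundaries (first-order upwind flux).

    A toy stand-in for the abstract's continuity-equation discretization:
    because cell updates are differences of face fluxes, total mass is
    conserved to round-off.
    """
    flux = u * h                      # upwind: face i+1/2 uses cell i
    return h - (dt / dx) * (flux - np.roll(flux, 1))

n, u = 100, 1.0
dx = 1.0 / n
dt = 0.4 * dx / u                     # CFL number 0.4
h = 1.0 + 0.5 * np.sin(2.0 * np.pi * np.arange(n) * dx)
mass0 = h.sum() * dx
for _ in range(100):
    h = upwind_fv_step(h, u, dx, dt)
mass1 = h.sum() * dx
# conservative flux form: mass0 and mass1 agree to round-off
```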

  12. 75 FR 75694 - Klamath Tribes Liquor Control Ordinance Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-06

    ... DEPARTMENT OF THE INTERIOR Bureau of Indian Affairs Klamath Tribes Liquor Control Ordinance... Control Ordinance of the Klamath Tribes. This correction removes incorrect references to an amended... follows: SUMMARY: This notice publishes the Secretary's certification of the Klamath Tribes Liquor Control...

  13. Development of a computer program to obtain ordinates for NACA 4-digit, 4-digit modified, 5-digit, and 16 series airfoils

    NASA Technical Reports Server (NTRS)

    Ladson, C. L.; Brooks, Cuyler W., Jr.

    1975-01-01

    A computer program developed to calculate the ordinates and surface slopes of any thickness, symmetrical or cambered NACA airfoil of the 4-digit, 4-digit modified, 5-digit, and 16-series airfoil families is presented. The program produces plots of the airfoil nondimensional ordinates and a punch card output of ordinates in the input format of a readily available program for determining the pressure distributions of arbitrary airfoils in subsonic potential viscous flow.
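
    The thickness ordinates such a program computes for the 4-digit family follow the standard published polynomial; a minimal sketch for the NACA 0012 (camber-line equations for cambered sections are omitted):

```python
import numpy as np

def naca4_thickness(x, t):
    """Half-thickness distribution of a NACA 4-digit airfoil (chord = 1).

    Standard published formula; `t` is the maximum thickness as a chord
    fraction. (The closed-trailing-edge variant replaces -0.1015 with
    -0.1036.)
    """
    return 5.0 * t * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                      + 0.2843 * x**3 - 0.1015 * x**4)

x = np.linspace(0.0, 1.0, 1001)
yt = naca4_thickness(x, t=0.12)      # NACA 0012
# maximum half-thickness of about 0.06 occurs near x = 0.30
```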

  14. The added value of ordinal analysis in clinical trials: an example in traumatic brain injury.

    PubMed

    Roozenbeek, Bob; Lingsma, Hester F; Perel, Pablo; Edwards, Phil; Roberts, Ian; Murray, Gordon D; Maas, Andrew Ir; Steyerberg, Ewout W

    2011-01-01

    In clinical trials, ordinal outcome measures are often dichotomized into two categories. In traumatic brain injury (TBI) the 5-point Glasgow outcome scale (GOS) is collapsed into unfavourable versus favourable outcome. Simulation studies have shown that exploiting the ordinal nature of the GOS increases chances of detecting treatment effects. The objective of this study is to quantify the benefits of ordinal analysis in the real-life situation of a large TBI trial. We used data from the CRASH trial that investigated the efficacy of corticosteroids in TBI patients (n = 9,554). We applied two techniques for ordinal analysis: proportional odds analysis and the sliding dichotomy approach, where the GOS is dichotomized at different cut-offs according to baseline prognostic risk. These approaches were compared to dichotomous analysis. The information density in each analysis was indicated by a Wald statistic. All analyses were adjusted for baseline characteristics. Dichotomous analysis of the six-month GOS showed a non-significant treatment effect (OR = 1.09, 95% CI 0.98 to 1.21, P = 0.096). Ordinal analysis with proportional odds regression or sliding dichotomy showed highly statistically significant treatment effects (OR 1.15, 95% CI 1.06 to 1.25, P = 0.0007 and 1.19, 95% CI 1.08 to 1.30, P = 0.0002), with 2.05-fold and 2.56-fold higher information density compared to the dichotomous approach respectively. Analysis of the CRASH trial data confirmed that ordinal analysis of outcome substantially increases statistical power. We expect these results to hold for other fields of critical care medicine that use ordinal outcome measures and recommend that future trials adopt ordinal analyses. This will permit detection of smaller treatment effects.

  15. The Outlier Detection for Ordinal Data Using Scalling Technique of Regression Coefficients

    NASA Astrophysics Data System (ADS)

    Adnan, Arisman; Sugiarto, Sigit

    2017-06-01

    The aim of this study is to detect outliers by using the coefficients of ordinal logistic regression (OLR) for the case of k category responses, where scores range from 1 (best) to 8 (worst). We detect them by using the sum of moduli of the ordinal regression coefficients calculated by the jackknife technique. This technique is improved by scaling the regression coefficients to their means. The R language has been used on a set of ordinal data from a reference distribution. Furthermore, we compare this approach with studentised residual plots of the jackknife technique for ANOVA (analysis of variance) and OLR. This study shows that the jackknife technique, along with proper scaling, can reveal outliers in ordinal regression reasonably well.
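
    The jackknife statistic described above can be sketched with ordinary least squares standing in for ordinal logistic regression: refit with each case deleted, record the sum of moduli of the coefficients, and flag the case whose deletion moves that sum furthest from the rest. The data and the planted outlier are illustrative:

```python
import numpy as np

def jackknife_coef_moduli(X, y):
    """Leave-one-out regression fits and the sum of moduli of the
    coefficients for each deletion.

    The paper jackknifes ordinal-logistic coefficients; plain least
    squares is used here only to show the mechanics.
    """
    n = len(y)
    stats = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        stats[i] = np.sum(np.abs(beta))
    return stats

rng = np.random.default_rng(3)
n = 40
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=n)
y[7] += 25.0                       # plant a gross outlier at case 7

s = jackknife_coef_moduli(X, y)
suspect = int(np.argmax(np.abs(s - np.median(s))))
# deleting the true outlier shifts the coefficients most
```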

  16. Discrete exterior calculus discretization of incompressible Navier-Stokes equations over surface simplicial meshes

    NASA Astrophysics Data System (ADS)

    Mohamed, Mamdouh S.; Hirani, Anil N.; Samtaney, Ravi

    2016-05-01

    A conservative discretization of incompressible Navier-Stokes equations is developed based on discrete exterior calculus (DEC). A distinguishing feature of our method is the use of an algebraic discretization of the interior product operator and a combinatorial discretization of the wedge product. The governing equations are first rewritten using the exterior calculus notation, replacing vector calculus differential operators by the exterior derivative, Hodge star and wedge product operators. The discretization is then carried out by substituting with the corresponding discrete operators based on the DEC framework. Numerical experiments for flows over surfaces reveal a second order accuracy for the developed scheme when using structured-triangular meshes, and first order accuracy for otherwise unstructured meshes. By construction, the method is conservative in that both mass and vorticity are conserved up to machine precision. The relative error in kinetic energy for inviscid flow test cases converges in a second order fashion with both the mesh size and the time step.

  17. Numerical Integration Techniques for Curved-Element Discretizations of Molecule–Solvent Interfaces

    PubMed Central

    Bardhan, Jaydeep P.; Altman, Michael D.; Willis, David J.; Lippow, Shaun M.; Tidor, Bruce; White, Jacob K.

    2012-01-01

    Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, we have developed methods to model several important surface formulations using exact surface discretizations. Following and refining Zauhar’s work (J. Comp.-Aid. Mol. Des. 9:149-159, 1995), we define two classes of curved elements that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. We then present numerical integration techniques that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, we present a set of calculations that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. 
The extra accuracy is attributed to the exact representation of the solute–solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that our methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online at http://web.mit.edu/tidor. PMID:17627358

  18. Application of the discrete generalized multigroup method to ultra-fine energy mesh in infinite medium calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibson, N. A.; Forget, B.

    2012-07-01

    The Discrete Generalized Multigroup (DGM) method uses discrete Legendre orthogonal polynomials to expand the energy dependence of the multigroup neutron transport equation. This allows a solution on a fine energy mesh to be approximated for a cost comparable to a solution on a coarse energy mesh. The DGM method is applied to an ultra-fine energy mesh (14,767 groups) to avoid using self-shielding methodologies without introducing the cost usually associated with such energy discretization. Results show DGM to converge to the reference ultra-fine solution after a small number of recondensation steps for multiple infinite medium compositions. (authors)
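
    The DGM expansion can be sketched with a discrete orthogonal polynomial basis: expand fine-group data, keep only the low-order moments for cheap calculations, and recover the coarse-group average from the zeroth moment alone. The QR construction below is a numerically convenient stand-in for the discrete Legendre polynomials, and the fine-group data are a toy example:

```python
import numpy as np

def discrete_orthonormal_basis(n):
    """Orthonormal polynomial basis on n equally spaced points, built by
    QR on a Vandermonde matrix; spans the same space as the discrete
    Legendre polynomials the DGM method uses."""
    x = np.linspace(-1.0, 1.0, n)
    V = np.vander(x, n, increasing=True)   # columns 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)
    return Q                               # Q[:, k] is the order-k polynomial

n = 16                                     # fine groups within one coarse group
P = discrete_orthonormal_basis(n)
sigma = 1.0 / (1.0 + np.linspace(0.0, 3.0, n)) ** 2   # toy fine-group data

coeffs = P.T @ sigma                      # expansion moments
full = P @ coeffs                         # complete expansion: exact
trunc = P[:, :3] @ coeffs[:3]             # low-order (cheap) approximation
# the zeroth moment alone reproduces the flat coarse-group average
```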

  19. 25 CFR 522.2 - Submission requirements.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR APPROVAL OF CLASS II AND CLASS III ORDINANCES AND RESOLUTIONS SUBMISSION OF GAMING ORDINANCE OR RESOLUTION § 522.2 Submission requirements. A tribe... officials and key employees; (d) Copies of all tribal gaming regulations; (e) When an ordinance or...

  20. Cable Television Report and Suggested Ordinance.

    ERIC Educational Resources Information Center

    League of California Cities, Sacramento.

    Guidelines and suggested ordinances for cable television regulation by local governments are comprehensively discussed in this report. The emphasis is placed on franchising the cable operator. Seventeen legal aspects of franchising are reviewed, and an exemplary ordinance is presented. In addition, current statistics about cable franchising in…

  1. An account of co-ordination mechanisms for humanitarian assistance during the international response to the 1994 crisis in Rwanda.

    PubMed

    Borton, J

    1996-12-01

    This paper examines the co-ordination strategies developed to respond to the Great Lakes crisis following the events of April 1994. It analyses the different functions and mechanisms which sought to achieve a co-ordinated response--ranging from facilitation at one extreme to management and direction at the other. The different regimes developed to facilitate co-ordination within Rwanda and neighbouring countries, focusing on both inter-agency and inter-country co-ordination issues, are then analysed. Finally, the paper highlights the absence of mechanisms to achieve coherence between the humanitarian, political and security domains. It concludes that effective co-ordination is critical not only to achieve programme efficiency, but to ensure that the appropriate instruments and strategies to respond to complex political emergencies are in place. It proposes a radical re-shaping of international humanitarian, political and security institutions, particularly the United Nations, to improve the effectiveness of humanitarian and political responses to crises such as that in the Great Lakes.

  2. No toy for you! The healthy food incentives ordinance: paternalism or consumer protection?

    PubMed

    Etow, Alexis M

    2012-01-01

    The newest approach to discouraging children's unhealthy eating habits, amidst increasing rates of childhood obesity and other diet-related diseases, seeks to ban something that is not even edible. In 2010, San Francisco enacted the Healthy Food Incentives Ordinance, which prohibits toys in kids' meals if the meals do not meet certain nutritional requirements. Notwithstanding the Ordinance's impact on interstate commerce or potential infringement on companies' commercial speech rights and on parents' rights to determine what their children eat, this Comment argues that the Ordinance does not violate the dormant Commerce Clause, the First Amendment, or substantive due process. The irony is that although the Ordinance likely avoids the constitutional hurdles that hindered earlier measures aimed at childhood obesity, it intrudes on civil liberties more than its predecessors. This Comment analyzes the legality of the Healthy Food Incentives Ordinance to understand its implications on subsequent legislation aimed at combating childhood obesity and on the progression of public health law.

  3. Method of grid generation

    DOEpatents

    Barnette, Daniel W.

    2002-01-01

    The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling since discretization errors are minimized, and reduced cost grid generation since less human interaction is required.

  4. Order-constrained linear optimization.

    PubMed

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
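A brute-force sketch of the two-stage idea (not the authors' algorithm, which builds on the maximum rank correlation estimator): scan candidate weight directions, keep the ordinal fit (Kendall's τ) as the primary criterion, and break ties by the least-squares error after an affine calibration. The grid search, data, and function names are all illustrative.

```python
import numpy as np

def kendall_tau(u, v):
    # Kendall's tau-a from the sign agreement of all ordered pairs (vectorized).
    du = np.sign(u[:, None] - u[None, :])
    dv = np.sign(v[:, None] - v[None, :])
    n = len(u)
    return float(np.sum(du * dv)) / (n * (n - 1))

def oclo_fit(X, y, n_angles=360):
    # Stage 1: maximize the ordinal fit (Kendall's tau) over weight directions.
    # Stage 2: among tau-maximizers, prefer the best least-squares affine fit.
    best_key, best_w, best_c = None, None, None
    for a in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        w = np.array([np.cos(a), np.sin(a)])
        z = X @ w
        A = np.column_stack([np.ones(len(z)), z])
        c, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = float(np.sum((A @ c - y) ** 2))
        key = (kendall_tau(z, y), -sse)          # lexicographic: tau first, then SSE
        if best_key is None or key > best_key:
            best_key, best_w, best_c = key, w, c
    return best_w, best_c

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.normal(size=60)
w, c = oclo_fit(X, y)
```

With well-behaved data the recovered direction is close to the ordinary least-squares direction, illustrating the abstract's point that OCLO loses little when OLS assumptions hold.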

  5. Estimating the proportion of true null hypotheses when the statistics are discrete.

    PubMed

    Dialsingh, Isaac; Austin, Stefanie R; Altman, Naomi S

    2015-07-15

In high-dimensional testing problems π0, the proportion of null hypotheses that are true, is an important parameter. For discrete test statistics, the P values come from a discrete distribution with finite support and the null distribution may depend on an ancillary statistic such as a table margin that varies among the test statistics. Methods for estimating π0 developed for continuous test statistics, which depend on a uniform or identical null distribution of P values, may not perform well when applied to discrete testing problems. This article introduces a number of π0 estimators, the regression and 'T' methods, that perform well with discrete test statistics and also assesses how well methods developed for or adapted from continuous tests perform with discrete tests. We demonstrate the usefulness of these estimators in the analysis of high-throughput biological RNA-seq and single-nucleotide polymorphism data. The methods are implemented in R. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
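For contrast with the discrete-statistic setting the abstract addresses, here is a minimal sketch of the classical Storey-type π0 estimator for continuous P values, the kind of estimator the article's methods are designed to improve upon; the simulation setup is invented for illustration.

```python
import numpy as np

def storey_pi0(pvals, lam=0.5):
    # Storey's estimator: null P values above lambda are ~uniform, so the
    # fraction exceeding lambda, rescaled by (1 - lambda), estimates pi0.
    pvals = np.asarray(pvals)
    return min(1.0, float(np.mean(pvals > lam)) / (1.0 - lam))

rng = np.random.default_rng(1)
m, pi0_true = 10000, 0.8
null_p = rng.uniform(size=int(m * pi0_true))                # true nulls: uniform P values
alt_p = rng.beta(0.2, 5.0, size=int(m * (1 - pi0_true)))    # alternatives concentrate near 0
pi0_hat = storey_pi0(np.concatenate([null_p, alt_p]))
```

With discrete tests the null P values are not uniform (their support depends on table margins), which is exactly why this continuous-case estimator can be biased there.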

  6. Conservative discretization of the Landau collision integral

    DOE PAGES

    Hirvijoki, E.; Adams, M. F.

    2017-03-28

Here we describe a density-, momentum-, and energy-conserving discretization of the nonlinear Landau collision integral. The method is suitable for both the finite-element and discontinuous Galerkin methods and does not require structured meshes. The conservation laws for the discretization are proven algebraically and demonstrated numerically for an axially symmetric nonlinear relaxation problem using a finite-element implementation.

  7. Reconsidering the use of rankings in the valuation of health states: a model for estimating cardinal values from ordinal data

    PubMed Central

    Salomon, Joshua A

    2003-01-01

    Background In survey studies on health-state valuations, ordinal ranking exercises often are used as precursors to other elicitation methods such as the time trade-off (TTO) or standard gamble, but the ranking data have not been used in deriving cardinal valuations. This study reconsiders the role of ordinal ranks in valuing health and introduces a new approach to estimate interval-scaled valuations based on aggregate ranking data. Methods Analyses were undertaken on data from a previously published general population survey study in the United Kingdom that included rankings and TTO values for hypothetical states described using the EQ-5D classification system. The EQ-5D includes five domains (mobility, self-care, usual activities, pain/discomfort and anxiety/depression) with three possible levels on each. Rank data were analysed using a random utility model, operationalized through conditional logit regression. In the statistical model, probabilities of observed rankings were related to the latent utilities of different health states, modeled as a linear function of EQ-5D domain scores, as in previously reported EQ-5D valuation functions. Predicted valuations based on the conditional logit model were compared to observed TTO values for the 42 states in the study and to predictions based on a model estimated directly from the TTO values. Models were evaluated using the intraclass correlation coefficient (ICC) between predictions and mean observations, and the root mean squared error of predictions at the individual level. Results Agreement between predicted valuations from the rank model and observed TTO values was very high, with an ICC of 0.97, only marginally lower than for predictions based on the model estimated directly from TTO values (ICC = 0.99). Individual-level errors were also comparable in the two models, with root mean squared errors of 0.503 and 0.496 for the rank-based and TTO-based predictions, respectively. 
Conclusions Modeling health-state valuations based on ordinal ranks can provide results that are similar to those obtained from more widely analyzed valuation techniques such as the TTO. The information content in aggregate ranking data is not currently exploited to full advantage. The possibility of estimating cardinal valuations from ordinal ranks could also simplify future data collection dramatically and facilitate wider empirical study of health-state valuations in diverse settings and population groups. PMID:14687419

  8. 3D-radiative transfer in terrestrial atmosphere: An efficient parallel numerical procedure

    NASA Astrophysics Data System (ADS)

    Bass, L. P.; Germogenova, T. A.; Nikolaeva, O. V.; Kokhanovsky, A. A.; Kuznetsov, V. S.

    2003-04-01

Light propagation and scattering in the terrestrial atmosphere is usually studied in the framework of 1D radiative transfer theory [1]. However, in reality particles (e.g., ice crystals, solid and liquid aerosols, cloud droplets) are randomly distributed in 3D space. In particular, their concentrations vary both in the vertical and horizontal directions. Therefore, 3D effects influence modern cloud and aerosol retrieval procedures, which are currently based on 1D radiative transfer theory. It should be pointed out that the standard radiative transfer equation allows one to study these more complex situations as well [2]. In recent years the parallel version of the 2D and 3D RADUGA code has been developed. This version is successfully used in gamma and neutron transport problems [3]. Applications of this code to atmospheric radiative transfer problems are given in [4], and the capabilities of the code are presented in [5]. The RADUGA code system is a universal solver of radiative transfer problems for complicated models, including 2D and 3D aerosol and cloud fields with arbitrary scattering anisotropy, light absorption, inhomogeneous underlying surface and topography. Both delta-type and distributed light sources can be accounted for in the framework of the algorithm developed. The accurate numerical procedure is based on the new discrete ordinate SWDD scheme [6]. The algorithm is specifically designed for parallel supercomputers. The version RADUGA 5.1(P) can run on the MBC1000M [7] (768 processors with 10 Gb of hard disk memory per processor); its peak performance is 1 Tflops. The corresponding scalar version RADUGA 5.1 runs on a PC. As a first application of the algorithm developed, we have studied the shadowing effects of clouds on the neighboring cloudless atmosphere, depending on the cloud optical thickness, surface albedo, and illumination conditions. This is of importance for the development of modern satellite aerosol retrieval algorithms.
[1] Sobolev, V. V., 1972: Light Scattering in Planetary Atmospheres, Moscow: Nauka. [2] Evans, K. F., 1998: The spherical harmonics discrete ordinate method for three-dimensional atmospheric radiative transfer, J. Atmos. Sci., 55, 429-446. [3] Bass, L. P., Germogenova, T. A., Kuznetsov, V. S., Nikolaeva, O. V.: RADUGA 5.1 and RADUGA 5.1(P) codes for stationary transport equation solution in 2D and 3D geometries on single- and multiprocessor computers. Report at the seminar "Algorithms and Codes for Neutron-Physical Calculations of Nuclear Reactors" (Neutronica 2001), Obninsk, Russia, 30 October - 2 November 2001. [4] Germogenova, T. A., Bass, L. P., Kuznetsov, V. S., Nikolaeva, O. V.: Mathematical modeling on parallel computers of solar and laser radiation transport in the 3D atmosphere. Report at the International Symposium of CIS countries "Atmospheric Radiation", 18-21 June 2002, St. Petersburg, Russia, pp. 15-16. [5] Bass, L. P., Germogenova, T. A., Nikolaeva, O. V., Kuznetsov, V. S.: Radiative Transfer Universal 2D-3D Code RADUGA 5.1(P) for Multiprocessor Computers. Abstract, poster report at this meeting. [6] Bass, L. P., Nikolaeva, O. V.: Correct Calculation of Angular Flux Distribution in Strongly Heterogeneous Media and Voids. Proc. of the Joint International Conference on Mathematical Methods and Supercomputing for Nuclear Applications, Saratoga Springs, New York, October 5-9, 1997, pp. 995-1004. [7] http://www/jscc.ru

  9. Using an Ordinal Outranking Method Supporting the Acquisition of Military Equipment

    DTIC Science & Technology

    2009-10-01

will concentrate on the well-known ORESTE method ([10],[12]) which is complementary to...the PROMETHEE methods. There are other methods belonging to...the PROMETHEE methods. This MCDM method is taught in the curriculum of the High Staff College for Military Administrators of the Belgian MoD...C(b,a) similar to the preference indicators π(a,b) and π(b,a) of the PROMETHEE methods (see [4] and SAS-080 14 and SAS-080 15). These

  10. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    PubMed

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance the computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods in regards to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
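The trapezoidal step can be illustrated on a toy linear ODE. This sketch assumes exact state values (the article first smooths noisy data with penalized splines); the model and parameter are invented for illustration.

```python
import numpy as np

# Estimate theta in x'(t) = -theta * x(t) from sampled states, using the
# trapezoidal discretization as a linear regression.
theta_true = 0.7
t = np.linspace(0.0, 5.0, 51)
h = t[1] - t[0]
x = np.exp(-theta_true * t)            # exact states stand in for spline-smoothed data

# Trapezoidal rule: x_{i+1} - x_i = -(h/2) * theta * (x_i + x_{i+1})
# -> regress y_i = x_{i+1} - x_i on z_i = -(h/2) * (x_i + x_{i+1})
y = x[1:] - x[:-1]
z = -(h / 2.0) * (x[1:] + x[:-1])
theta_hat = float(np.dot(z, y) / np.dot(z, z))
```

The second-order accuracy of the trapezoidal rule makes the discretization bias O(h^2), which is the cost/accuracy balance the abstract argues for.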

  11. Research on the Factors Influencing the Measurement Errors of the Discrete Rogowski Coil †

    PubMed Central

    Xu, Mengyuan; Yan, Jing; Geng, Yingsan; Zhang, Kun; Sun, Chao

    2018-01-01

An innovative array of magnetic coils (the discrete Rogowski coil, RC) with the advantages of flexible structure, miniaturization and mass producibility is investigated. First, the mutual inductance between the discrete RC and circular and rectangular conductors is calculated using the magnetic vector potential (MVP) method. The results are found to be consistent with those calculated using the finite element method, but the MVP method is simpler and more practical. Then, the influence of conductor section parameters, inclination, and eccentricity on the accuracy of the discrete RC is calculated to provide a reference. Studying the influence of an external current on the discrete RC's interference error reveals optimal values for the length, winding density, and position arrangement of the solenoids. It is also found that the eccentricity and interference errors decrease as the number of solenoids increases. Finally, a discrete RC prototype is devised and manufactured. The experimental results show consistent output characteristics, with the calculated sensitivity and mutual inductance of the discrete RC being very close to the measured values. The influence of an external conductor on the measurement of the discrete RC is analyzed experimentally, and the results show that interference from an external current decreases with increasing distance between the external and measured conductors. PMID:29534006

  12. Research on the Factors Influencing the Measurement Errors of the Discrete Rogowski Coil.

    PubMed

    Xu, Mengyuan; Yan, Jing; Geng, Yingsan; Zhang, Kun; Sun, Chao

    2018-03-13

An innovative array of magnetic coils (the discrete Rogowski coil, RC) with the advantages of flexible structure, miniaturization and mass producibility is investigated. First, the mutual inductance between the discrete RC and circular and rectangular conductors is calculated using the magnetic vector potential (MVP) method. The results are found to be consistent with those calculated using the finite element method, but the MVP method is simpler and more practical. Then, the influence of conductor section parameters, inclination, and eccentricity on the accuracy of the discrete RC is calculated to provide a reference. Studying the influence of an external current on the discrete RC's interference error reveals optimal values for the length, winding density, and position arrangement of the solenoids. It is also found that the eccentricity and interference errors decrease as the number of solenoids increases. Finally, a discrete RC prototype is devised and manufactured. The experimental results show consistent output characteristics, with the calculated sensitivity and mutual inductance of the discrete RC being very close to the measured values. The influence of an external conductor on the measurement of the discrete RC is analyzed experimentally, and the results show that interference from an external current decreases with increasing distance between the external and measured conductors.

  13. 77 FR 34981 - Stillaguamish Tribe of Indians-Liquor Control Ordinance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-12

    ... DEPARTMENT OF THE INTERIOR Bureau of Indian Affairs Stillaguamish Tribe of Indians--Liquor Control... publishes the Stillaguamish Tribe of Indians' Liquor Control Ordinance. The Ordinance regulates and controls... of the Stillaguamish Tribe of Indians, will increase the ability of the tribal government to control...

  14. 7 CFR 1901.204 - Compliance reviews.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Administrator, Community and Business Programs, for each recipient. (4) Mandatory hook-up ordinance. Compliance... under the provisions of a mandatory hook-up ordinance will consist of a certification by the borrower or grantee that the ordinance is still in effect and is being enforced. (5) Forwarding noncompliance report...

  15. 7 CFR 1901.204 - Compliance reviews.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Administrator, Community and Business Programs, for each recipient. (4) Mandatory hook-up ordinance. Compliance... under the provisions of a mandatory hook-up ordinance will consist of a certification by the borrower or grantee that the ordinance is still in effect and is being enforced. (5) Forwarding noncompliance report...

  16. 7 CFR 1901.204 - Compliance reviews.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Administrator, Community and Business Programs, for each recipient. (4) Mandatory hook-up ordinance. Compliance... under the provisions of a mandatory hook-up ordinance will consist of a certification by the borrower or grantee that the ordinance is still in effect and is being enforced. (5) Forwarding noncompliance report...

  17. 7 CFR 1901.204 - Compliance reviews.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Administrator, Community and Business Programs, for each recipient. (4) Mandatory hook-up ordinance. Compliance... under the provisions of a mandatory hook-up ordinance will consist of a certification by the borrower or grantee that the ordinance is still in effect and is being enforced. (5) Forwarding noncompliance report...

  18. Discretization of Continuous Time Discrete Scale Invariant Processes: Estimation and Spectra

    NASA Astrophysics Data System (ADS)

    Rezakhah, Saeid; Maleki, Yasaman

    2016-07-01

By imposing a flexible sampling scheme, we provide a discretization of continuous time discrete scale invariant (DSI) processes, which yields a subsidiary discrete time DSI process. Then, by introducing a simple random measure, we provide a second continuous time DSI process which is a proper approximation of the first one. This enables us to establish a bilateral relation between the covariance functions of the subsidiary process and the new continuous time process. The time varying spectral representation of such a continuous time DSI process is characterized, and its spectrum is estimated. Also, a new method for estimating the time-dependent Hurst parameter of such processes is provided, which gives a more accurate estimation. The performance of this estimation method is studied via simulation. Finally, the method is applied to real data from the S&P 500 and Dow Jones indices for selected periods.

  19. Segmentation of discrete vector fields.

    PubMed

    Li, Hongyu; Chen, Wenbin; Shen, I-Fan

    2006-01-01

    In this paper, we propose an approach for 2D discrete vector field segmentation based on the Green function and normalized cut. The method is inspired by discrete Hodge Decomposition such that a discrete vector field can be broken down into three simpler components, namely, curl-free, divergence-free, and harmonic components. We show that the Green Function Method (GFM) can be used to approximate the curl-free and the divergence-free components to achieve our goal of the vector field segmentation. The final segmentation curves that represent the boundaries of the influence region of singularities are obtained from the optimal vector field segmentations. These curves are composed of piecewise smooth contours or streamlines. Our method is applicable to both linear and nonlinear discrete vector fields. Experiments show that the segmentations obtained using our approach essentially agree with human perceptual judgement.

  20. Local discretization method for overdamped Brownian motion on a potential with multiple deep wells.

    PubMed

    Nguyen, P T T; Challis, K J; Jack, M W

    2016-11-01

We present a general method for transforming the continuous diffusion equation describing overdamped Brownian motion on a time-independent potential with multiple deep wells to a discrete master equation. The method is based on an expansion in localized basis states of local metastable potentials that match the full potential in the region of each potential well. Unlike previous basis methods for discretizing Brownian motion on a potential, this approach is valid for periodic potentials with varying multiple deep wells per period and can also be applied to nonperiodic systems. We apply the method to a range of potentials and find that potential wells that are deeper than about five times the thermal energy can be associated with a discrete localized state, while shallower wells are better incorporated into the local metastable potentials of neighboring deep potential wells.
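The continuous-to-discrete mapping the abstract describes can be illustrated with a simpler, standard finite-difference discretization of the overdamped (Smoluchowski) diffusion equation into a tridiagonal master-equation generator; this is not the authors' localized-basis construction. The hopping rates below are a common choice that satisfies detailed balance, so the Boltzmann distribution is stationary by construction; the double-well potential is invented for illustration.

```python
import numpy as np

kT = 1.0
x = np.linspace(-2.0, 2.0, 201)
h = x[1] - x[0]
V = (x**2 - 1.0)**2 / 0.1          # double-well potential with wells deep compared to kT
D = 1.0                            # diffusion coefficient

# Nearest-neighbor hopping rates obeying detailed balance w.r.t. exp(-V/kT)
up = (D / h**2) * np.exp(-(V[1:] - V[:-1]) / (2 * kT))   # rate i -> i+1
dn = (D / h**2) * np.exp(-(V[:-1] - V[1:]) / (2 * kT))   # rate i+1 -> i

n = len(x)
L = np.zeros((n, n))
L[np.arange(1, n), np.arange(0, n - 1)] = up   # gain at i+1 from i
L[np.arange(0, n - 1), np.arange(1, n)] = dn   # gain at i from i+1
L -= np.diag(L.sum(axis=0))                    # columns sum to zero: probability conserved

p_boltz = np.exp(-V / kT)
p_boltz /= p_boltz.sum()
residual = np.linalg.norm(L @ p_boltz)         # ~0: Boltzmann is stationary
```

The resulting generator L defines the discrete master equation dp/dt = L p; the paper's basis-state approach instead associates one discrete state per deep well.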

  1. Local discretization method for overdamped Brownian motion on a potential with multiple deep wells

    NASA Astrophysics Data System (ADS)

    Nguyen, P. T. T.; Challis, K. J.; Jack, M. W.

    2016-11-01

We present a general method for transforming the continuous diffusion equation describing overdamped Brownian motion on a time-independent potential with multiple deep wells to a discrete master equation. The method is based on an expansion in localized basis states of local metastable potentials that match the full potential in the region of each potential well. Unlike previous basis methods for discretizing Brownian motion on a potential, this approach is valid for periodic potentials with varying multiple deep wells per period and can also be applied to nonperiodic systems. We apply the method to a range of potentials and find that potential wells that are deeper than about five times the thermal energy can be associated with a discrete localized state, while shallower wells are better incorporated into the local metastable potentials of neighboring deep potential wells.

  2. A fast radiative transfer method for the simulation of visible satellite imagery

    NASA Astrophysics Data System (ADS)

    Scheck, Leonhard; Frèrebeau, Pascal; Buras-Schnell, Robert; Mayer, Bernhard

    2016-05-01

A computationally efficient radiative transfer method for the simulation of visible satellite images is presented. The top of atmosphere reflectance is approximated by a function depending on vertically integrated optical depths and effective particle sizes for water and ice clouds, the surface albedo, the sun and satellite zenith angles and the scattering angle. A look-up table (LUT) for this reflectance function is generated by means of the discrete ordinate method (DISORT). For a constant scattering angle the reflectance is a relatively smooth and symmetric function of the two zenith angles, which can be well approximated by the lowest-order terms of a 2D Fourier series. By storing only the lowest Fourier coefficients and adopting a non-equidistant grid for the scattering angle, the LUT is reduced to a size of 21 MB per satellite channel. The computation of the top of atmosphere reflectance requires only the calculation of the cloud parameters from the model state and the evaluation and interpolation of the reflectance function using the compressed LUT and is thus orders of magnitude faster than DISORT. The accuracy of the method is tested by generating synthetic satellite images for the 0.6 μm and 0.8 μm channels of the SEVIRI instrument for operational COSMO-DE model forecasts from the German Weather Service (DWD) and comparing them to DISORT results. For a test period in June the root mean squared absolute reflectance error is about 10^-2 and the mean relative reflectance error is less than 2% for both channels. For scattering angles larger than 170° the rapid variation of reflectance with the particle size related to the backscatter glory reduces the accuracy and the errors increase by a factor of 3-4. Speed and accuracy of the new method are sufficient for operational data assimilation and high-resolution model verification applications.
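The LUT compression idea can be sketched with a toy reflectance function that is band-limited on the sampled grid, so a few low-order 2D Fourier coefficients reconstruct it almost exactly; the function, grid, and truncation order are invented for illustration (the operational LUT tabulates DISORT output).

```python
import numpy as np

# Toy "reflectance" on a periodic angle grid, built from a few low harmonics.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
t1, t2 = np.meshgrid(x, x, indexing="ij")
R = 0.3 + 0.2 * np.cos(t1) * np.cos(t2) + 0.05 * np.cos(2.0 * t1 + t2)

C = np.fft.fft2(R)
K = 6                                  # keep only the K lowest frequencies per axis
idx = np.r_[0:K, n - K:n]              # low positive and negative frequency bins
mask = np.zeros_like(C)
mask[np.ix_(idx, idx)] = 1.0
R_hat = np.real(np.fft.ifft2(C * mask))

rel_err = np.linalg.norm(R_hat - R) / np.linalg.norm(R)
compression = (2 * K) ** 2 / n ** 2    # fraction of coefficients stored (~3.5%)
```

Storing only the retained coefficients is what shrinks the table; evaluating the truncated series at arbitrary angles then replaces a full DISORT call.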

  3. 3D modeling of satellite spectral images, radiation budget and energy budget of urban landscapes

    NASA Astrophysics Data System (ADS)

    Gastellu-Etchegorry, J. P.

    2008-12-01

    DART EB is a model that is being developed for simulating the 3D (3 dimensional) energy budget of urban and natural scenes, possibly with topography and atmosphere. It simulates all non radiative energy mechanisms (heat conduction, turbulent momentum and heat fluxes, water reservoir evolution, etc.). It uses DART model (Discrete Anisotropic Radiative Transfer) for simulating radiative mechanisms: 3D radiative budget of 3D scenes and their remote sensing images expressed in terms of reflectance or brightness temperature values, for any atmosphere, wavelength, sun/view direction, altitude and spatial resolution. It uses an innovative multispectral approach (ray tracing, exact kernel, discrete ordinate techniques) over the whole optical domain. This paper presents two major and recent improvements of DART for adapting it to urban canopies. (1) Simulation of the geometry and optical characteristics of urban elements (houses, etc.). (2) Modeling of thermal infrared emission by vegetation and urban elements. The new DART version was used in the context of the CAPITOUL project. For that, districts of the Toulouse urban data base (Autocad format) were translated into DART scenes. This allowed us to simulate visible, near infrared and thermal infrared satellite images of Toulouse districts. Moreover, the 3D radiation budget was used by DARTEB for simulating the time evolution of a number of geophysical quantities of various surface elements (roads, walls, roofs). Results were successfully compared with ground measurements of the CAPITOUL project.

  4. Safety belt usage before and after enactment of a mandatory usage ordinance (Lexington-Fayette County, Kentucky)

    DOT National Transportation Integrated Search

    1990-10-01

    In the absence of a statewide law, a local ordinance was passed by the Lexington-Fayette Urban County Government mandating use of safety belts. The objective of this study was to conduct surveys before the ordinance was passed, during the implementat...

  5. Cardination and Ordination Learning in Young Children.

    ERIC Educational Resources Information Center

    Stock, William; Flora, June

This paper analyzes Brainerd's work in assessing the developmental sequence of ordination and cardination concepts of number, and describes a study which investigated the hypothesis that task-specific difficulty could explain Brainerd's data. Three new tasks were designed for the assessment of ordination and cardination and administered to a…

  6. 25 CFR 522.6 - Approval requirements for class III ordinances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Section 522.6 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR APPROVAL OF CLASS II AND CLASS III ORDINANCES AND RESOLUTIONS SUBMISSION OF GAMING ORDINANCE OR RESOLUTION § 522.6 Approval...) The tribe shall have the sole proprietary interest in and responsibility for the conduct of any gaming...

  7. 36 CFR 28.15 - Approval of local zoning ordinances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Approval of local zoning ordinances. 28.15 Section 28.15 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR FIRE ISLAND NATIONAL SEASHORE: ZONING STANDARDS Federal Standards and Approval of Local Ordinances...

  8. Bayesian Adaptive Lasso for Ordinal Regression with Latent Variables

    ERIC Educational Resources Information Center

    Feng, Xiang-Nan; Wu, Hao-Tian; Song, Xin-Yuan

    2017-01-01

    We consider an ordinal regression model with latent variables to investigate the effects of observable and latent explanatory variables on the ordinal responses of interest. Each latent variable is characterized by correlated observed variables through a confirmatory factor analysis model. We develop a Bayesian adaptive lasso procedure to conduct…

  9. 36 CFR 28.15 - Approval of local zoning ordinances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Approval of local zoning ordinances. 28.15 Section 28.15 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR FIRE ISLAND NATIONAL SEASHORE: ZONING STANDARDS Federal Standards and Approval of Local Ordinances...

  10. 25 CFR 900.136 - Do tribal employment rights ordinances apply to construction contracts and subcontracts?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Do tribal employment rights ordinances apply to... OF THE INTERIOR, AND INDIAN HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES CONTRACTS UNDER... rights ordinances apply to construction contracts and subcontracts? Yes. Tribal employment rights...

  11. The Duluth Clean Indoor Air Ordinance: Problems and Success in Fighting the Tobacco Industry at the Local Level in the 21st Century

    PubMed Central

    Tsoukalas, Theodore; Glantz, Stanton A.

    2003-01-01

    Case study methodology was used to investigate the tobacco industry’s strategies to fight local tobacco control efforts in Duluth, Minn. The industry opposed the clean indoor air ordinance indirectly through allies and front groups and directly in a referendum. Health groups failed to win a strong ordinance because they framed it as a youth issue rather than a workplace issue and failed to engage the industry’s economic claims. Opponents’ overexploitation of weaknesses in the ordinance allowed health advocates to construct a stronger version. Health advocates should assume that the tobacco industry will oppose all local tobacco control measures indirectly, directly, or both. Clean indoor air ordinances should be framed as workplace safety issues. PMID:12893598

  12. Environmental Gradient Analysis, Ordination, and Classification in Environmental Impact Assessments.

    DTIC Science & Technology

    1987-09-01

    agglomerative clustering algorithms for mainframe computers: (1) the unweighted pair-group method that uses arithmetic averages (UPGMA), (2) the...hierarchical agglomerative unweighted pair-group method using arithmetic averages (UPGMA), which is also called average linkage clustering. This method was...dendrograms produced by weighted clustering (93). Sneath and Sokal (94), Romesburg (84), and Seber (90) also strongly recommend the UPGMA. A dendrogram
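
    The UPGMA (average-linkage) clustering the report recommends can be sketched in a few lines. This is a generic illustration, not code from the report; the three-item distance matrix is invented for the example.

```python
from itertools import combinations

def upgma(dist, labels):
    """Average-linkage (UPGMA) agglomerative clustering.

    dist: dict mapping frozenset({a, b}) -> distance between items a and b.
    labels: list of item names.
    Returns a nested-tuple dendrogram (left, right, merge_height).
    """
    # Each active cluster is a (members, subtree) pair keyed by an integer id.
    clusters = {i: ((lab,), lab) for i, lab in enumerate(labels)}

    def d(ca, cb):
        # UPGMA distance: arithmetic mean over all between-cluster item pairs.
        pairs = [(a, b) for a in clusters[ca][0] for b in clusters[cb][0]]
        return sum(dist[frozenset(p)] for p in pairs) / len(pairs)

    next_id = len(labels)
    while len(clusters) > 1:
        ca, cb = min(combinations(clusters, 2), key=lambda p: d(*p))
        h = d(ca, cb)
        merged = (clusters[ca][0] + clusters[cb][0],
                  (clusters[ca][1], clusters[cb][1], h))
        del clusters[ca], clusters[cb]
        clusters[next_id] = merged
        next_id += 1
    return next(iter(clusters.values()))[1]

# Toy distances: A and B are close, C is far from both.
D = {frozenset(p): v for p, v in [
    (("A", "B"), 2.0), (("A", "C"), 6.0), (("B", "C"), 6.0)]}
print(upgma(D, ["A", "B", "C"]))
```

A and B merge first at height 2.0, and C joins at the average distance 6.0, which is exactly the unweighted-average behaviour that distinguishes UPGMA from weighted clustering.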

  13. Mutual Information between Discrete Variables with Many Categories using Recursive Adaptive Partitioning

    PubMed Central

    Seok, Junhee; Seon Kang, Yeong

    2015-01-01

    Mutual information, a general measure of the relatedness between two random variables, has been actively used in the analysis of biomedical data. The mutual information between two discrete variables is conventionally calculated from their joint probabilities estimated from the frequency of observed samples in each combination of variable categories. However, this conventional approach is no longer efficient for discrete variables with many categories, which can be easily found in large-scale biomedical data such as diagnosis codes, drug compounds, and genotypes. Here, we propose a method to provide stable estimations for the mutual information between discrete variables with many categories. Simulation studies showed that the proposed method reduced the estimation errors 45-fold and improved the correlation coefficients with true values 99-fold, compared with the conventional calculation of mutual information. The proposed method was also demonstrated through a case study for diagnostic data in electronic health records. This method is expected to be useful in the analysis of various biomedical data with discrete variables. PMID:26046461
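
    The "conventional approach" the abstract contrasts against — the plug-in estimate of mutual information from observed joint frequencies — can be sketched as follows. This is a generic illustration of the baseline, not the authors' proposed adaptive-partitioning method.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in mutual information estimate (in bits) from paired
    discrete samples: joint probabilities are taken directly from the
    observed frequency of each category combination."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))   # joint counts
    px = Counter(xs)             # marginal counts
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint * log2( p_joint / (p_x * p_y) ), with counts folded in.
        mi += p_joint * log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# Perfectly dependent binary variables: MI equals the entropy, 1 bit.
print(mutual_information([0, 1, 0, 1], [0, 1, 0, 1]))  # 1.0
```

With many categories and few samples per cell, these frequency estimates become unstable, which is exactly the regime the paper's method targets.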

  14. GEMPIC: geometric electromagnetic particle-in-cell methods

    NASA Astrophysics Data System (ADS)

    Kraus, Michael; Kormann, Katharina; Morrison, Philip J.; Sonnendrücker, Eric

    2017-08-01

    We present a novel framework for finite element particle-in-cell methods based on the discretization of the underlying Hamiltonian structure of the Vlasov-Maxwell system. We derive a semi-discrete Poisson bracket, which retains the defining properties of a bracket, anti-symmetry and the Jacobi identity, as well as conservation of its Casimir invariants, implying that the semi-discrete system is still a Hamiltonian system. In order to obtain a fully discrete Poisson integrator, the semi-discrete bracket is used in conjunction with Hamiltonian splitting methods for integration in time. Techniques from finite element exterior calculus ensure conservation of the divergence of the magnetic field and Gauss' law as well as stability of the field solver. The resulting methods are gauge invariant, feature exact charge conservation and show excellent long-time energy and momentum behaviour. Due to the generality of our framework, these conservation properties are guaranteed independently of a particular choice of the finite element basis, as long as the corresponding finite element spaces satisfy certain compatibility conditions.
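
    The Hamiltonian splitting idea the abstract relies on can be illustrated on a toy separable Hamiltonian. This is not the Vlasov-Maxwell integrator itself, only a sketch (under invented parameters) of why composing exactly solved sub-flows gives the good long-time energy behaviour the paper reports.

```python
def strang_step(q, p, dt):
    """One Strang-splitting step for H(q, p) = p**2/2 + q**2/2,
    split as H = T(p) + V(q); each sub-flow is solved exactly,
    so the composition is a symplectic map."""
    p -= 0.5 * dt * q   # half kick from V(q)
    q += dt * p         # full drift from T(p)
    p -= 0.5 * dt * q   # half kick from V(q)
    return q, p

q, p, dt = 1.0, 0.0, 0.01
e0 = 0.5 * (p * p + q * q)
for _ in range(100_000):
    q, p = strang_step(q, p, dt)
energy_drift = abs(0.5 * (p * p + q * q) - e0)
print(energy_drift)  # bounded oscillation, no secular drift
```

A non-symplectic scheme (e.g. explicit Euler) would show energy growing steadily over the same 100,000 steps; the split scheme keeps the error bounded at O(dt**2).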

  15. Adjoint-Based Methodology for Time-Dependent Optimization

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2008-01-01

    This paper presents a discrete adjoint method for a broad class of time-dependent optimization problems. The time-dependent adjoint equations are derived in terms of the discrete residual of an arbitrary finite volume scheme which approximates unsteady conservation law equations. Although only the 2-D unsteady Euler equations are considered in the present analysis, this time-dependent adjoint method is applicable to the 3-D unsteady Reynolds-averaged Navier-Stokes equations with minor modifications. The discrete adjoint operators involving the derivatives of the discrete residual and the cost functional with respect to the flow variables are computed using a complex-variable approach, which provides discrete consistency and drastically reduces the implementation and debugging cycle. The implementation of the time-dependent adjoint method is validated by comparing the sensitivity derivative with that obtained by forward mode differentiation. Our numerical results show that O(10) optimization iterations of the steepest descent method are needed to reduce the objective functional by 3-6 orders of magnitude for test problems considered.
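
    The complex-variable (complex-step) approach used for the adjoint operators can be sketched in a few lines. The test function and step size here are illustrative, not taken from the paper.

```python
import cmath
import math

def complex_step_derivative(f, x, h=1e-30):
    """Complex-step differentiation: the imaginary part of f(x + ih)
    carries h*f'(x) with no subtractive cancellation, so derivatives
    are accurate to machine precision even for tiny h."""
    return f(complex(x, h)).imag / h

f = lambda z: z**3 + cmath.sin(z)
x = 1.3
approx = complex_step_derivative(f, x)
exact = 3 * x**2 + math.cos(x)   # analytic derivative for comparison
print(approx, exact)
```

Unlike a finite-difference quotient, there is no difference of nearly equal numbers, which is why the step can be taken as small as 1e-30 and why the paper cites the approach as giving discrete consistency.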

  16. A FINITE-DIFFERENCE, DISCRETE-WAVENUMBER METHOD FOR CALCULATING RADAR TRACES

    EPA Science Inventory

    A hybrid of the finite-difference method and the discrete-wavenumber method is developed to calculate radar traces. The method is based on a three-dimensional model defined in the Cartesian coordinate system; the electromagnetic properties of the model are symmetric with respect ...

  17. A DEIM Induced CUR Factorization

    DTIC Science & Technology

    2015-09-18

    CUR approximate matrix factorization based on the Discrete Empirical Interpolation Method (DEIM). For a given matrix A, such a factorization provides a...CUR approximations based on leverage scores. 1 Introduction This work presents a new CUR matrix factorization based upon the Discrete Empirical...SUPPLEMENTARY NOTES 14. ABSTRACT We derive a CUR approximate matrix factorization based on the Discrete Empirical Interpolation Method (DEIM). For a given

  18. Using Discrete Choice Experiments to Inform the Benefit-Risk Assessment of Medicines: Are We Ready Yet?

    PubMed

    Vass, Caroline M; Payne, Katherine

    2017-09-01

    There is emerging interest in the use of discrete choice experiments as a means of quantifying the perceived balance between benefits and risks (quantitative benefit-risk assessment) of new healthcare interventions, such as medicines, under assessment by regulatory agencies. For stated preference data on benefit-risk assessment to be used in regulatory decision making, the methods to generate these data must be valid, reliable and capable of producing meaningful estimates understood by decision makers. Some reporting guidelines exist for discrete choice experiments, and for related methods such as conjoint analysis. However, existing guidelines focus on reporting standards, are general in focus and do not consider the requirements for using discrete choice experiments specifically for quantifying benefit-risk assessments in the context of regulatory decision making. This opinion piece outlines the current state of play in using discrete choice experiments for benefit-risk assessment and proposes key areas that need to be addressed to demonstrate that discrete choice experiments are an appropriate and valid stated preference elicitation method in this context. Methodological research is required to establish: how robust the results of discrete choice experiments are to formats and methods of risk communication; how information in the discrete choice experiment can be presented effectively to respondents; whose preferences should be elicited; the correct underlying utility function and analytical model; the impact of heterogeneity in preferences; and the generalisability of the results. We believe these methodological issues should be addressed, alongside developing a 'reference case', before agencies can safely and confidently use discrete choice experiments for quantitative benefit-risk assessment in the context of regulatory decision making for new medicines and healthcare products.

  19. The Relation of Finite Element and Finite Difference Methods

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1976-01-01

    Finite element and finite difference methods are examined in order to bring out their relationship. It is shown that both methods use two types of discrete representations of continuous functions. They differ in that finite difference methods emphasize the discretization of the independent variables, while finite element methods emphasize the discretization of the dependent variables (referred to as functional approximations). An important point is that finite element methods use global piecewise functional approximations, while finite difference methods normally use local functional approximations. A general conclusion is that finite element methods are best designed to handle complex boundaries, while finite difference methods are superior for complex equations. It is also shown that finite volume difference methods possess many of the advantages attributed to finite element methods.
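
    The "local functional approximation" that characterizes finite difference methods can be made concrete with a three-point stencil. This is a generic example, not one from the paper.

```python
import math

def second_derivative(u, x, h):
    """Central three-point stencil for u''(x): a purely local
    approximation built from grid values at x-h, x, and x+h."""
    return (u(x - h) - 2 * u(x) + u(x + h)) / (h * h)

x, h = 0.7, 1e-3
approx = second_derivative(math.sin, x, h)
exact = -math.sin(x)                     # (sin)'' = -sin
print(abs(approx - exact))               # O(h**2) truncation error
```

A finite element method would instead expand u in global piecewise basis functions and determine the coefficients variationally; the stencil above uses only the three neighbouring grid values.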

  20. ON THE ROLE OF INVOLUTIONS IN THE DISCONTINUOUS GALERKIN DISCRETIZATION OF MAXWELL AND MAGNETOHYDRODYNAMIC SYSTEMS

    NASA Technical Reports Server (NTRS)

    Barth, Timothy

    2005-01-01

    The role of involutions in energy stability of the discontinuous Galerkin (DG) discretization of Maxwell and magnetohydrodynamic (MHD) systems is examined. Important differences are identified in the symmetrization of the Maxwell and MHD systems that impact the construction of energy stable discretizations using the DG method. Specifically, general sufficient conditions to be imposed on the DG numerical flux and approximation space are given so that energy stability is retained. These sufficient conditions reveal the favorable energy consequence of imposing continuity in the normal component of the magnetic induction field at interelement boundaries for MHD discretizations. Counterintuitively, this condition is not required for stability of Maxwell discretizations using the discontinuous Galerkin method.

  1. CFD Modeling of Flow, Temperature, and Concentration Fields in a Pilot-Scale Rotary Hearth Furnace

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Su, Fu-Yong; Wen, Zhi; Li, Zhi; Yong, Hai-Quan; Feng, Xiao-Hong

    2014-01-01

    A three-dimensional mathematical model for simulation of flow, temperature, and concentration fields in a pilot-scale rotary hearth furnace (RHF) has been developed using a commercial computational fluid dynamics software, FLUENT. The layer of composite pellets under the hearth is assumed to be a porous media layer with a CO source and an energy sink calculated by an independent mathematical model. User-defined functions are developed and linked to FLUENT to model the reduction process of the composite-pellet layer. The standard k-ɛ turbulence model in combination with standard wall functions is used for modeling of gas flow. Turbulence-chemistry interaction is taken into account through the eddy-dissipation model. The discrete ordinates model is used for modeling of radiative heat transfer. A comparison is made between the predictions of the present model and the data from a test of the pilot-scale RHF, and a reasonable agreement is found. Finally, flow field, temperature, and CO concentration fields in the furnace are investigated by the model.

  2. A parametric simulation of solar chimney power plant

    NASA Astrophysics Data System (ADS)

    Beng Hooi, Lim; Kannan Thangavelu, Saravana

    2018-01-01

    Strong solar radiation, a continuous supply of sunlight and environmentally friendly factors have made the solar chimney power plant highly feasible to build in Malaysia. A solar chimney power plant produces an upward buoyancy force through the greenhouse effect. A numerical simulation was performed on a model of a solar chimney power plant using the ANSYS Fluent software, applying the standard k-epsilon turbulence model and the discrete ordinates (DO) radiation model to solve the relevant equations. A parametric study was carried out to evaluate the performance of the solar chimney power plant, focusing on the temperature rise in the collector, the air velocity at the chimney base, and the pressure drop inside the chimney, based on the computed temperature, velocity, and static pressure distributions. The model demonstrates its reliability through comparison with the experimental data of the Manzanares (Spain) prototype. Based on the numerical results, power capacity and efficiency were analysed theoretically. The results indicate that stronger solar radiation and a larger prototype will improve the performance of a solar chimney power plant.

  3. Blanket activation and afterheat for the Compact Reversed-Field Pinch Reactor

    NASA Astrophysics Data System (ADS)

    Davidson, J. W.; Battat, M. E.

    A detailed assessment has been made of the activation and afterheat for a Compact Reversed-Field Pinch Reactor (CRFPR) blanket using a two-dimensional model that included the limiter, the vacuum ducts, and the manifolds and headers for cooling the limiter and the first and second walls. Region-averaged, multigroup fluxes and prompt gamma-ray/neutron heating rates were calculated using the two-dimensional, discrete-ordinates code TRISM. Activation and depletion calculations were performed with the code FORIG using one-group cross sections generated with the TRISM region-averaged fluxes. Afterheat calculations were performed for regions near the plasma (i.e., the limiter, first wall, etc.), assuming a 10-day irradiation. Decay heats were computed for decay periods up to 100 minutes. For the activation calculations, the irradiation period was taken to be one year and blanket activity inventories were computed for decay times to 4 x 10 years. These activities were also calculated as the toxicity-weighted biological hazard potential (BHP).

  4. CAN'T MISS--conquer any number task by making important statistics simple. Part 1. Types of variables, mean, median, variance, and standard deviation.

    PubMed

    Hansen, John P

    2003-01-01

    Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 1, presents basic information about data including a classification system that describes the four major types of variables: continuous quantitative variable, discrete quantitative variable, ordinal categorical variable (including the binomial variable), and nominal categorical variable. A histogram is a graph that displays the frequency distribution for a continuous variable. The article also demonstrates how to calculate the mean, median, standard deviation, and variance for a continuous variable.
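
    The statistics defined in Part 1 of the series can be computed directly; the sample data below are invented for illustration.

```python
import math

def summary_stats(xs):
    """Mean, median, sample variance, and standard deviation for a
    continuous quantitative variable."""
    n = len(xs)
    mean = sum(xs) / n
    s = sorted(xs)
    mid = n // 2
    # Median: middle value, or the average of the two middle values.
    median = s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2
    # Sample variance divides by n - 1 (inference from a sample).
    variance = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return mean, median, variance, math.sqrt(variance)

print(summary_stats([2, 4, 4, 4, 5, 5, 7, 9]))
```

Note the n - 1 denominator: because the article's context is inferential statistics on samples rather than whole populations, the unbiased sample variance is the appropriate choice.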

  5. MCNP capabilities for nuclear well logging calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Little, R.C.; Briesmeister, J.F.

    The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. This paper discusses how the general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo neutron photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons as either single particles or coupled particles can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data.

  6. Verification of ARES transport code system with TAKEDA benchmarks

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Zhang, Bin; Zhang, Penghe; Chen, Mengteng; Zhao, Jingchang; Zhang, Shun; Chen, Yixue

    2015-10-01

    Neutron transport modeling and simulation are central to many areas of nuclear technology, including reactor core analysis, radiation shielding and radiation detection. In this paper the series of TAKEDA benchmarks is modeled to verify the criticality calculation capability of ARES, a discrete ordinates neutral-particle transport code system. The SALOME platform is coupled with ARES to provide geometry modeling and mesh generation functions. The Koch-Baker-Alcouffe parallel sweep algorithm is applied to accelerate the traditional transport calculation process. The results show that the eigenvalues calculated by ARES are in excellent agreement with the reference values presented in NEACRP-L-330, with a difference of less than 30 pcm except for the first case of model 3. Additionally, ARES provides accurate flux distributions compared to the reference values, with a deviation of less than 2% for region-averaged fluxes in all cases. These results confirm the feasibility of the ARES-SALOME coupling and demonstrate that ARES performs well in criticality calculations.

  7. Integrated, Step-Wise, Mass-Isotopomeric Flux Analysis of the TCA Cycle.

    PubMed

    Alves, Tiago C; Pongratz, Rebecca L; Zhao, Xiaojian; Yarborough, Orlando; Sereda, Sam; Shirihai, Orian; Cline, Gary W; Mason, Graeme; Kibbey, Richard G

    2015-11-03

    Mass isotopomer multi-ordinate spectral analysis (MIMOSA) is a step-wise flux analysis platform to measure discrete glycolytic and mitochondrial metabolic rates. Importantly, direct citrate synthesis rates were obtained by deconvolving the mass spectra generated from [U-(13)C6]-D-glucose labeling for position-specific enrichments of mitochondrial acetyl-CoA, oxaloacetate, and citrate. Comprehensive steady-state and dynamic analyses of key metabolic rates (pyruvate dehydrogenase, β-oxidation, pyruvate carboxylase, isocitrate dehydrogenase, and PEP/pyruvate cycling) were calculated from the position-specific transfer of (13)C from sequential precursors to their products. Important limitations of previous techniques were identified. In INS-1 cells, citrate synthase rates correlated with both insulin secretion and oxygen consumption. Pyruvate carboxylase rates were substantially lower than previously reported but showed the highest fold change in response to glucose stimulation. In conclusion, MIMOSA measures key metabolic rates from the precursor/product position-specific transfer of (13)C-label between metabolites and has broad applicability to any glucose-oxidizing cell. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feder, Russell; Youssef, Mahamoud; Klabacha, Jonathan

    US ITER is one of seven partner domestic agencies (DAs) contributing components to the ITER project. Four diagnostic port plug packages (two equatorial ports and two upper ports) will be engineered and fabricated by Princeton Plasma Physics Lab (PPPL). Diagnostic port plugs, as illustrated in Fig. 1, are large, primarily stainless steel structures that serve several roles on ITER. The port plugs are the primary vacuum seal and tritium confinement barriers for the vessel. The port plugs also house several plasma diagnostic systems and other machine service equipment. Finally, each port plug must shield high-energy neutrons and gamma photons from escaping and creating radiological problems in maintenance areas behind the port plugs. The optimization of the balance between adequate shielding and the need for high-performance, high-throughput diagnostic systems is the focus of this paper. Neutronics calculations are also needed for assessing nuclear heating and nuclear damage in the port plug and diagnostic components. Attila, a commercially available discrete-ordinates software package, is used for all diagnostic port plug neutronics analysis studies at PPPL.

  9. Variable selection in discrete survival models including heterogeneity.

    PubMed

    Groll, Andreas; Tutz, Gerhard

    2017-04-01

    Several variable selection procedures are available for continuous time-to-event data. However, if time is measured discretely, and therefore many ties occur, models for continuous time are inadequate. We propose penalized likelihood methods that perform efficient variable selection in discrete survival modeling with explicit modeling of the heterogeneity in the population. The method is based on a combination of ridge and lasso type penalties that are tailored to the case of discrete survival. The performance is studied in simulation studies and an application to the birth of the first child.

  10. Appropriate Statistical Analysis for Two Independent Groups of Likert-Type Data

    ERIC Educational Resources Information Center

    Warachan, Boonyasit

    2011-01-01

    The objective of this research was to determine the robustness and statistical power of three different methods for testing the hypothesis that ordinal samples of five and seven Likert categories come from equal populations. The three methods are the two sample t-test with equal variances, the Mann-Whitney test, and the Kolmogorov-Smirnov test. In…
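
    Of the three tests compared, the Mann-Whitney test is the one most often suggested for ordinal Likert data because it works on ranks. A minimal pure-Python sketch of its U statistic, with midranks for the ties that Likert scales inevitably produce (this is a generic illustration, not the dissertation's own code):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for two independent samples, using
    midranks so tied Likert responses share an average rank."""
    pooled = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        # Find the block of tied values starting at position i.
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        midrank = (i + j) / 2 + 1        # average 1-based rank of the block
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = midrank
        i = j + 1
    r_a = sum(ranks[:len(a)])            # rank sum of the first sample
    return r_a - len(a) * (len(a) + 1) / 2

# Five-category Likert responses for two groups:
print(mann_whitney_u([1, 2, 2, 3], [3, 4, 4, 5]))  # 0.5
```

A U near n1*n2/2 (here 8) indicates overlapping groups; the small value 0.5 reflects the first group's systematically lower responses.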

  11. A validation study of a rapid field-based rating system for discriminating among flow permanence classes of headwater streams in South Carolina

    EPA Science Inventory

    Rapid field-based protocols for classifying flow permanence of headwater streams are needed to inform timely regulatory decisions. Such an existing method was developed for and has been used in North Carolina since 1997. The method uses ordinal scoring of 26 geomorphology, hydr...

  12. A Cross-Sectional Comparison of Perceived Quality of Primary Care by Hypertensive Patients in Shanghai and Shenzhen, China

    PubMed Central

    Li, Haitao; Wei, Xiaolin; Wong, Martin Chi-Sang; Wong, Samuel Yeung-Shan; Yang, Nan; Griffiths, Sian M.

    2015-01-01

    Hypertension should be best managed under primary care settings. This study aimed to compare, between Shanghai and Shenzhen, the perceived quality of primary care in terms of accessibility, continuity, co-ordination, and comprehensiveness among hypertensive patients. A cross-sectional study was conducted in Shanghai and Shenzhen, China. A multistage random sampling method was used to select 8 community health centers. Data from primary care users were collected through on-site face-to-face interviews using the primary care assessment tool. The good-quality standard was set as a value of 3 for each attribute and a value of 18 for the total score. We included 568 patients in Shanghai and 128 patients in Shenzhen. Compared with those in Shenzhen, hypertensive patients in Shanghai reported a higher score in co-ordination of information (3.37 vs 3.66; P < 0.001), but lower scores in continuity of care (3.36 vs 3.27; P < 0.001), and comprehensiveness-service provision (3.26 vs 2.79; P < 0.001). There was no statistically significant difference in total scores between the 2 cities (18.19 vs 18.15). Over three-quarters of hypertensive patients in both cities reported accessibility (97.2% vs 91.4%) and co-ordination of services (76.1% vs 80.5%) meeting the good-quality standard, while less than one-quarter of them rated continuity of care (23.6% vs 22.7%), co-ordination of information (4.8% vs 21.1%), and comprehensiveness-service availability (15.1% vs 25.0%) as meeting that standard. Compared with Shenzhen, the perceived quality of primary care for hypertensive patients in Shanghai was better in terms of co-ordination of information, but poorer on continuity of care and comprehensiveness-service provision. Our study suggests that there is room for quality improvement in both cities. PMID:26313780

  13. ADAM: analysis of discrete models of biological systems using computer algebra.

    PubMed

    Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard

    2011-07-20

    Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. 
ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.
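
    The exponential blow-up that ADAM's algebraic methods are designed to avoid is easy to see in the naive alternative: exhaustively simulating every state of a synchronous Boolean network to find its attractors. The sketch below (with an invented two-gene toy network) works only for small n.

```python
from itertools import product

def attractors(update, n):
    """Find all attractors of an n-variable synchronous Boolean network
    by following every one of the 2**n states to its limit cycle --
    feasible only for small n."""
    found = set()
    for state in product((0, 1), repeat=n):
        seen = []
        while state not in seen:
            seen.append(state)
            state = update(state)
        cycle = seen[seen.index(state):]   # trajectory tail = limit cycle
        found.add(frozenset(cycle))        # canonical form: count once
    return found

# Toy 2-gene network: x' = y, y' = x  (two fixed points and one 2-cycle).
step = lambda s: (s[1], s[0])
print(attractors(step, 2))
```

ADAM instead converts the network to a polynomial dynamical system over a finite field and finds attractors by solving polynomial equations, sidestepping the 2**n enumeration above.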

  14. Finite Volume Methods: Foundation and Analysis

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Ohlberger, Mario

    2003-01-01

    Finite volume methods are a class of discretization schemes that have proven highly successful in approximating the solution of a wide variety of conservation law systems. They are extensively used in fluid mechanics, porous media flow, meteorology, electromagnetics, models of biological processes, semiconductor device simulation and many other engineering areas governed by conservative systems that can be written in integral control volume form. This article reviews elements of the foundation and analysis of modern finite volume methods. The primary advantages of these methods are numerical robustness through the attainment of discrete maximum (minimum) principles, applicability on very general unstructured meshes, and the intrinsic local conservation properties of the resulting schemes. Throughout this article, specific attention is given to scalar nonlinear hyperbolic conservation laws and the development of high order accurate schemes for discretizing them. A key tool in the design and analysis of finite volume schemes suitable for non-oscillatory discontinuity capturing is discrete maximum principle analysis. A number of building blocks used in the development of numerical schemes possessing local discrete maximum principles are reviewed in one and several space dimensions, e.g. monotone fluxes, E-fluxes, TVD discretization, non-oscillatory reconstruction, slope limiters, positive coefficient schemes, etc. When available, theoretical results concerning a priori and a posteriori error estimates are given. Further advanced topics are then considered such as high order time integration, discretization of diffusion terms and the extension to systems of nonlinear conservation laws.
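
    A minimal example in the spirit of the review: the upwind flux for 1-D linear advection on a periodic grid is a monotone flux, so the scheme conserves mass exactly and satisfies a discrete maximum principle. This is a generic sketch, not code from the article.

```python
def upwind_advection_step(u, a, dt, dx):
    """One finite volume step for u_t + a*u_x = 0 (a > 0) on a periodic
    grid with the upwind numerical flux F_{i+1/2} = a*u_i.  Each new
    cell average is the convex combination (1-c)*u_i + c*u_{i-1}."""
    c = a * dt / dx                     # CFL number, must satisfy c <= 1
    n = len(u)
    return [u[i] - c * (u[i] - u[i - 1]) for i in range(n)]

u = [0.0] * 10
u[3] = 1.0                              # a single cell of "mass"
total0 = sum(u)
for _ in range(50):
    u = upwind_advection_step(u, a=1.0, dt=0.05, dx=0.1)
print(min(u), max(u), sum(u))           # bounds preserved, mass conserved
```

Because each update is a convex combination of old cell averages (a positive coefficient scheme, in the article's terminology), no new extrema can appear, and the telescoping fluxes make the total exactly conserved on the periodic grid.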

  15. A FINITE-DIFFERENCE, DISCRETE-WAVENUMBER METHOD FOR CALCULATING RADAR TRACES

    EPA Science Inventory

    A hybrid of the finite-difference method and the discrete-wavenumber method is developed to calculate radar traces. The method is based on a three-dimensional model defined in the Cartesian coordinate system; the electromag-netic properties of the model are symmetric with respect...

  16. Environmental diversity as a surrogate for species representation.

    PubMed

    Beier, Paul; de Albuquerque, Fábio Suzart

    2015-10-01

    Because many species have not been described and most species ranges have not been mapped, conservation planners often use surrogates for conservation planning, but evidence for surrogate effectiveness is weak. Surrogates are well-mapped features such as soil types, landforms, occurrences of an easily observed taxon (discrete surrogates), and well-mapped environmental conditions (continuous surrogate). In the context of reserve selection, the idea is that a set of sites selected to span diversity in the surrogate will efficiently represent most species. Environmental diversity (ED) is a rarely used surrogate that selects sites to efficiently span multivariate ordination space. Because it selects across continuous environmental space, ED should perform better than discrete surrogates (which necessarily ignore within-bin and between-bin heterogeneity). Despite this theoretical advantage, ED appears to have performed poorly in previous tests of its ability to identify 50 × 50 km cells that represented vertebrates in Western Europe. Using an improved implementation of ED, we retested ED on Western European birds, mammals, reptiles, amphibians, and combined terrestrial vertebrates. We also tested ED on data sets for plants of Zimbabwe, birds of Spain, and birds of Arizona (United States). Sites selected using ED represented European mammals no better than randomly selected cells, but they represented species in the other 7 data sets with 20% to 84% effectiveness. This far exceeds the performance in previous tests of ED, and exceeds the performance of most discrete surrogates. We believe ED performed poorly in previous tests because those tests considered only a few candidate explanatory variables and used suboptimal forms of ED's selection algorithm. 
We suggest future work on ED focus on analyses at finer grain sizes more relevant to conservation decisions, explore the effect of selecting the explanatory variables most associated with species turnover, and investigate whether nonclimate abiotic variables can provide useful surrogates in an ED framework. © 2015 Society for Conservation Biology.
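    The core idea of an ED-style surrogate can be illustrated with a toy site-selection sketch. Note the hedge: the paper's ED implementation uses a p-median criterion over ordination space; the greedy maxmin rule and the synthetic two-variable "environment" below are simplifications invented purely to show what "spanning continuous environmental space" means.

```python
# Toy sketch of environmental-diversity style site selection:
# greedily pick sites so the chosen set spans multivariate
# environmental space (maxmin criterion). Illustrative only; the
# ED surrogate discussed above uses a p-median criterion, and the
# choice of explanatory variables matters greatly.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_sites(env, k):
    """env: list of environmental coordinate tuples, one per site."""
    n = len(env)
    # start from the site closest to the environmental centroid
    centroid = tuple(sum(v) / n for v in zip(*env))
    chosen = [min(range(n), key=lambda i: dist(env[i], centroid))]
    while len(chosen) < k:
        # add the site farthest from its nearest already-chosen site
        best = max((i for i in range(n) if i not in chosen),
                   key=lambda i: min(dist(env[i], env[j]) for j in chosen))
        chosen.append(best)
    return chosen

# six hypothetical sites described by two environmental variables
sites = [(0, 0), (0.1, 0), (5, 5), (5.2, 5.1), (0, 9), (9, 0)]
picked = select_sites(sites, 3)
```

    The selection skips near-duplicate sites (environmentally redundant cells) in favor of spread, which is the intuition behind ED outperforming binned discrete surrogates.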

  17. Numerical and Non-Numerical Ordinality Processing in Children with and without Developmental Dyscalculia: Evidence from fMRI

    ERIC Educational Resources Information Center

    Kaufmann, L.; Vogel, S. E.; Starke, M.; Kremser, C.; Schocke, M.

    2009-01-01

    Ordinality is--beyond numerical magnitude (i.e., quantity)--an important characteristic of the number system. There is converging empirical evidence that (intra)parietal brain regions mediate number magnitude processing. Furthermore, recent findings suggest that the human intraparietal sulcus (IPS) supports magnitude and ordinality in a…

  18. 25 CFR 522.1 - Scope of this part.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR APPROVAL OF CLASS II AND CLASS III ORDINANCES AND RESOLUTIONS SUBMISSION OF GAMING ORDINANCE OR RESOLUTION § 522.1 Scope of this part. This part applies to any gaming ordinance or resolution adopted by a tribe after February 22, 1993. Part 523 of this chapter...

  19. Land and Liberty: The Ordinances of the 1780s.

    ERIC Educational Resources Information Center

    Sheehan, Bernard W.

    The U.S. Constitution established the broad legal frame for the U.S. political order; the ordinances provided the indispensable means for the expansion of that order across the continent. The first effort at organizing the northwest occurred in 1784. Written by Thomas Jefferson, the Ordinance of 1784 defined the stages through which territories…

  20. Educational Legislation in Colonial Zimbabwe (1899-1979)

    ERIC Educational Resources Information Center

    Richards, Kimberly; Govere, Ephraim

    2003-01-01

    This article focuses on a historical series of education acts that impacted on education in Rhodesia. These Acts are the: (1) 1899 Education Ordinance; (2) 1903 Education Ordinance; (3) 1907 Education Ordinance; (4) 1929 Department of Native Development Act; (5) 1930 Compulsory Education Act; (6) 1959 African Education Act; (7) 1973 Education Act;…

  1. On pseudo-spectral time discretizations in summation-by-parts form

    NASA Astrophysics Data System (ADS)

    Ruggiu, Andrea A.; Nordström, Jan

    2018-05-01

    Fully-implicit discrete formulations in summation-by-parts form for initial-boundary value problems must be invertible in order to provide well functioning procedures. We prove that, under mild assumptions, pseudo-spectral collocation methods for the time derivative lead to invertible discrete systems when energy-stable spatial discretizations are used.

  2. Comparison of a discrete steepest ascent method with the continuous steepest ascent method for optimal programing

    NASA Technical Reports Server (NTRS)

    Childs, A. G.

    1971-01-01

    A discrete steepest ascent method that allows controls that are not piecewise constant (for example, all continuous piecewise linear controls) was derived for the solution of optimal programming problems. This method is based on the continuous steepest ascent method of Bryson and Denham and new concepts introduced by Kelley and Denham in their development of compatible adjoints for taking into account the effects of numerical integration. The method is a generalization of the algorithm suggested by Canon, Cullum, and Polak, with the details of the gradient computation given. The discrete method was compared with the continuous method for an aerodynamics problem for which an analytic solution is given by Pontryagin's maximum principle, and numerical results are presented. The discrete method converges more rapidly than the continuous method at first, but then, for some undetermined reason, loses its exponential convergence rate. A comparison was also made with the algorithm of Canon, Cullum, and Polak using piecewise constant controls; this algorithm is very competitive with the continuous algorithm.

  3. Cluster analysis of European Y-chromosomal STR haplotypes using the discrete Laplace method.

    PubMed

    Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels

    2014-07-01

    The European Y-chromosomal short tandem repeat (STR) haplotype distribution has previously been analysed in various ways. Here, we introduce a new way of analysing population substructure using a new method based on clustering within the discrete Laplace exponential family that models the probability distribution of the Y-STR haplotypes. Creating a consistent statistical model of the haplotypes enables us to perform a wide range of analyses. Previously, haplotype frequency estimation using the discrete Laplace method has been validated. In this paper we investigate how the discrete Laplace method can be used for cluster analysis to further validate the discrete Laplace method. A very important practical fact is that the calculations can be performed on a normal computer. We identified two sub-clusters of the Eastern and Western European Y-STR haplotypes similar to results of previous studies. We also compared pairwise distances (between geographically separated samples) with those obtained using the AMOVA method and found good agreement. Further analyses that are impossible with AMOVA were made using the discrete Laplace method: analysis of the homogeneity in two different ways and calculating marginal STR distributions. We found that the Y-STR haplotypes from e.g. Finland were relatively homogeneous as opposed to the relatively heterogeneous Y-STR haplotypes from e.g. Lublin, Eastern Poland and Berlin, Germany. We demonstrated that the observed distributions of alleles at each locus were similar to the expected ones. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
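    The building block of the model above is the discrete Laplace distribution. As a hedged sketch (function and parameter names are illustrative, not the authors' implementation), the pmf and an independent-loci haplotype probability look like this:

```python
# Minimal sketch of the discrete Laplace distribution used to model
# deviations of Y-STR alleles from a central haplotype. Names and
# the independence-across-loci form below are illustrative only.
def dlaplace_pmf(x, p):
    """P(X = x) = (1 - p)/(1 + p) * p**|x| for integer x, 0 < p < 1."""
    return (1.0 - p) / (1.0 + p) * p ** abs(x)

def haplotype_prob(hap, center, ps):
    """Probability of a haplotype as a product of independent
    discrete Laplace terms, one per STR locus."""
    prob = 1.0
    for allele, c, p in zip(hap, center, ps):
        prob *= dlaplace_pmf(allele - c, p)
    return prob

# the pmf sums to 1 over the integers (checked on a wide window)
total = sum(dlaplace_pmf(x, 0.3) for x in range(-50, 51))
```

    Cluster analysis then amounts to fitting a mixture of such haplotype distributions, one component per sub-population.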

  4. The new immigration contestation: social movements and local immigration policy making in the United States, 2000-2011.

    PubMed

    Steil, Justin Peter; Vasi, Ion Bogdan

    2014-01-01

    Analyzing oppositional social movements in the context of municipal immigration ordinances, the authors examine whether the explanatory power of resource mobilization, political process, and strain theories of social movements' impact on policy outcomes differs when considering proactive as opposed to reactive movements. The adoption of pro-immigrant (proactive) ordinances was facilitated by the presence of immigrant community organizations and of sympathetic local political allies. The adoption of anti-immigrant (reactive) ordinances was influenced by structural social changes, such as rapid increases in the local Latino population, that were framed as threats. The study also finds that pro-immigrant protest events can influence policy in two ways, contributing both to the passage of pro-immigrant ordinances in the locality where protests occur and also inhibiting the passage of anti-immigrant ordinances in neighboring cities.

  5. A fast numerical method for the valuation of American lookback put options

    NASA Astrophysics Data System (ADS)

    Song, Haiming; Zhang, Qi; Zhang, Ran

    2015-10-01

    A fast and efficient numerical method is proposed and analyzed for the valuation of American lookback options. The American lookback option pricing problem is essentially a two-dimensional unbounded nonlinear parabolic problem. We reformulate it into a two-dimensional parabolic linear complementarity problem (LCP) on an unbounded domain. The numeraire transformation and a domain truncation technique are employed to convert the two-dimensional unbounded LCP into a one-dimensional bounded one. The variational inequality (VI) form corresponding to the one-dimensional bounded LCP is then derived. The resulting bounded VI is discretized by a finite element method. Meanwhile, the stability of the semi-discrete solution and the symmetric positive definiteness of the full-discrete matrix are established for the bounded VI. The discretized VI related to the options is solved by a projection and contraction method. Numerical experiments are conducted to test the performance of the proposed method.
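    The projection step at the heart of such solvers can be illustrated on a tiny linear complementarity problem. This is not the paper's projection and contraction method; it is the simpler projected Gauss-Seidel iteration for the same problem class, with made-up test data, shown only to convey how the nonnegativity constraint is enforced by projection.

```python
# Illustrative projected Gauss-Seidel solver for a small linear
# complementarity problem: find x >= 0 with A*x + q >= 0 and
# x . (A*x + q) = 0. A simpler relative of the projection and
# contraction method cited above; A and q are invented test data.
def projected_gauss_seidel(A, q, iters=200):
    n = len(q)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            residual = q[i] + sum(A[i][j] * x[j] for j in range(n))
            # project the unconstrained Gauss-Seidel update onto x_i >= 0
            x[i] = max(0.0, x[i] - residual / A[i][i])
    return x

A = [[4.0, -1.0], [-1.0, 4.0]]   # symmetric positive definite
q = [-3.0, 6.0]
x = projected_gauss_seidel(A, q)
```

    For this data the solution is x = (0.75, 0): the second component is pinned at the constraint while its residual stays positive, exactly the complementarity structure an American option's exercise boundary produces.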

  6. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate a image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also, a method for generating a rate-distortion-optimal quantization table, using discrete cosine transform-based digital image compression, and operating a discrete cosine transform-based digital image compression and decompression system are provided.

  7. Conservative, unconditionally stable discretization methods for Hamiltonian equations, applied to wave motion in lattice equations modeling protein molecules

    NASA Astrophysics Data System (ADS)

    LeMesurier, Brenton

    2012-01-01

    A new approach is described for generating exactly energy-momentum conserving time discretizations for a wide class of Hamiltonian systems of DEs with quadratic momenta, including mechanical systems with central forces; it is well-suited in particular to the large systems that arise in both spatial discretizations of nonlinear wave equations and lattice equations such as the Davydov System modeling energetic pulse propagation in protein molecules. The method is unconditionally stable, making it well-suited to equations of broadly “Discrete NLS form”, including many arising in nonlinear optics. Key features of the resulting discretizations are exact conservation of both the Hamiltonian and quadratic conserved quantities related to continuous linear symmetries, preservation of time reversal symmetry, unconditional stability, and respecting the linearity of certain terms. The last feature allows a simple, efficient iterative solution of the resulting nonlinear algebraic systems that retain unconditional stability, avoiding the need for full Newton-type solvers. One distinction from earlier work on conservative discretizations is a new and more straightforward nearly canonical procedure for constructing the discretizations, based on a “discrete gradient calculus with product rule” that mimics the essential properties of partial derivatives. This numerical method is then used to study the Davydov system, revealing that previously conjectured continuum limit approximations by NLS do not hold, but that sech-like pulses related to NLS solitons can nevertheless sometimes arise.
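    A one-degree-of-freedom version of the discrete-gradient idea can be written in a few lines. This is the generic midpoint (secant-slope) discrete-gradient scheme for H(q, p) = p^2/2 + V(q), shown as a hedged stand-in for the paper's construction; the quartic potential and step size are arbitrary test choices.

```python
# Energy-conserving "discrete gradient" time step for a 1-DOF
# Hamiltonian H(q, p) = p**2/2 + V(q): replace V'(q) by the secant
# slope (V(q1) - V(q0))/(q1 - q0), which makes the energy balance
# telescope exactly. Generic sketch, not the paper's exact scheme.
def step(q0, p0, h, V, dV, tol=1e-14):
    q1, p1 = q0, p0
    for _ in range(200):  # fixed-point iteration for the implicit step
        g = (V(q1) - V(q0)) / (q1 - q0) if q1 != q0 else dV(q0)
        p1_new = p0 - h * g
        q1_new = q0 + h * (p0 + p1_new) / 2.0
        done = abs(q1_new - q1) + abs(p1_new - p1) < tol
        q1, p1 = q1_new, p1_new
        if done:
            break
    return q1, p1

V = lambda q: 0.25 * q ** 4          # quartic potential (nonlinear test case)
dV = lambda q: q ** 3
q, p = 1.0, 0.0
H0 = 0.5 * p ** 2 + V(q)
for _ in range(1000):
    q, p = step(q, p, 0.05, V, dV)
H1 = 0.5 * p ** 2 + V(q)
```

    Because the kinetic term is quadratic in p, the fixed-point iteration only has to resolve the potential term, which mirrors the abstract's point that linearity of certain terms permits a simple iterative solve instead of a full Newton method.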

  8. Discrete-continuous variable structural synthesis using dual methods

    NASA Technical Reports Server (NTRS)

    Schmit, L. A.; Fleury, C.

    1980-01-01

    Approximation concepts and dual methods are extended to solve structural synthesis problems involving a mix of discrete and continuous sizing type of design variables. Pure discrete and pure continuous variable problems can be handled as special cases. The basic mathematical programming statement of the structural synthesis problem is converted into a sequence of explicit approximate primal problems of separable form. These problems are solved by constructing continuous explicit dual functions, which are maximized subject to simple nonnegativity constraints on the dual variables. A newly devised gradient projection type of algorithm called DUAL 1, which includes special features for handling dual function gradient discontinuities that arise from the discrete primal variables, is used to find the solution of each dual problem. Computational implementation is accomplished by incorporating the DUAL 1 algorithm into the ACCESS 3 program as a new optimizer option. The power of the method set forth is demonstrated by presenting numerical results for several example problems, including a pure discrete variable treatment of a metallic swept wing and a mixed discrete-continuous variable solution for a thin delta wing with fiber composite skins.

  9. Posterior Predictive Bayesian Phylogenetic Model Selection

    PubMed Central

    Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn

    2014-01-01

    We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892
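    The CPO/LPML computation itself is simple enough to sketch from posterior samples: CPO for site i is the harmonic mean of that site's likelihood across posterior draws, and LPML is the sum of log CPOs. The per-site likelihood values below are invented toy numbers, not data from the paper.

```python
# Sketch of the conditional predictive ordinate (CPO) and log
# pseudomarginal likelihood (LPML) from posterior samples. Toy
# likelihood values are invented for illustration.
import math

def cpo(site_likes):
    """site_likes: likelihood of one site under each posterior draw.
    CPO_i is the harmonic mean: S / sum_s (1 / L_is)."""
    return len(site_likes) / sum(1.0 / l for l in site_likes)

def lpml(likes_by_site):
    return sum(math.log(cpo(site)) for site in likes_by_site)

draws = [
    [0.10, 0.12, 0.08],   # site 1 likelihood under 3 posterior draws
    [0.40, 0.35, 0.45],   # site 2
]
score = lpml(draws)
```

    As the abstract notes, this only needs a sample from the posterior (no extra simulation), which is what makes CPO a cheap cross-validation-style check; the harmonic-mean form can be numerically unstable when some draws assign a site very low likelihood, so real implementations work on the log scale.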

  10. Flow analysis for efficient design of wavy structured microchannel mixing devices

    NASA Astrophysics Data System (ADS)

    Kanchan, Mithun; Maniyeri, Ranjith

    2018-04-01

    Microfluidics is a rapidly growing field of applied research, strongly driven by the demands of biotechnology and medical innovation. Lab-on-chip (LOC) is one such application, which integrates a biological laboratory onto a single micro-channel-based fluidic chip. Since fluid flow in such devices is restricted to the laminar regime, designing an efficient passive modulator that induces chaotic mixing in such diffusion-dominated flow is a major challenge. In the present work, two-dimensional numerical simulation of viscous incompressible flow is carried out using the immersed boundary method (IBM) to obtain an efficient design for wavy structured micro-channel mixing devices. The continuity and Navier-Stokes equations governing the flow are solved by a fractional-step based finite volume method on a staggered Cartesian grid system. IBM uses Eulerian co-ordinates to describe the fluid flow and Lagrangian co-ordinates to describe the solid boundary. A Dirac delta function is used to couple these two co-ordinate systems. A tether forcing term is used to impose the no-slip boundary condition at the interface between the wavy structure and the fluid. Fluid flow analysis with varying Reynolds number is carried out for four wavy structure models and one straight-line model. By analyzing fluid accumulation zones and flow velocities, it can be concluded that the straight-line structure performs better mixing at low Reynolds numbers and Model 2 at higher Reynolds numbers. Thus wavy structures can be incorporated in micro-channels to improve mixing efficiency.
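    The regularized Dirac delta that couples the Lagrangian boundary to the Eulerian grid in immersed boundary methods has several standard forms. The abstract does not specify which kernel is used; the sketch below shows one common choice, Peskin's 4-point kernel, purely for illustration.

```python
# Peskin's 4-point discrete delta kernel, a standard (assumed, not
# paper-specified) choice for Eulerian-Lagrangian coupling in
# immersed boundary methods. It has compact support of 4 grid cells
# and sums to 1 over the grid for any boundary-point offset.
import math

def phi(r):
    """One-dimensional 4-point kernel, support |r| < 2."""
    r = abs(r)
    if r < 1.0:
        return (3.0 - 2.0 * r + math.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
    if r < 2.0:
        return (5.0 - 2.0 * r - math.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
    return 0.0

def delta2d(x, y, h):
    """2-D regularized delta as a tensor product, scaled by spacing h."""
    return phi(x / h) * phi(y / h) / (h * h)

# partition-of-unity check for an off-grid boundary point at 0.3
s = sum(phi(0.3 - i) for i in range(-3, 5))
```

    The same kernel is used twice per time step: to spread the tether force from Lagrangian boundary points onto the grid, and to interpolate grid velocities back onto the boundary.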

  11. Assessing the influence of rater and subject characteristics on measures of agreement for ordinal ratings.

    PubMed

    Nelson, Kerrie P; Mitani, Aya A; Edwards, Don

    2017-09-10

    Widespread inconsistencies are commonly observed between physicians' ordinal classifications in screening test results such as mammography. These discrepancies have motivated large-scale agreement studies where many raters contribute ratings. The primary goal of these studies is to identify factors related to physicians and patients' test results, which may lead to stronger consistency between raters' classifications. While ordered categorical scales are frequently used to classify screening test results, very few statistical approaches exist to model agreement between multiple raters. Here we develop a flexible and comprehensive approach to assess the influence of rater and subject characteristics on agreement between multiple raters' ordinal classifications in large-scale agreement studies. Our approach is based upon the class of generalized linear mixed models. Novel summary model-based measures are proposed to assess agreement between all, or a subgroup of, raters, such as experienced physicians. Hypothesis tests are described to formally identify factors such as physicians' level of experience that play an important role in improving consistency of ratings between raters. We demonstrate how unique characteristics of individual raters can be assessed via conditional modes generated during the modeling process. Simulation studies are presented to demonstrate the performance of the proposed methods and summary measure of agreement. The methods are applied to a large-scale mammography agreement study to investigate the effects of rater and patient characteristics on the strength of agreement between radiologists. Copyright © 2017 John Wiley & Sons, Ltd.

  12. Reliability of Total Test Scores When Considered as Ordinal Measurements

    ERIC Educational Resources Information Center

    Biswas, Ajoy Kumar

    2006-01-01

    This article studies the ordinal reliability of (total) test scores. This study is based on a classical-type linear model of observed score (X), true score (T), and random error (E). Based on the idea of Kendall's tau-a coefficient, a measure of ordinal reliability for small-examinee populations is developed. This measure is extended to large…
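    The building block the article starts from, Kendall's tau-a, is easy to sketch: the difference between the proportions of concordant and discordant pairs over all n(n-1)/2 pairs. Note this is plain tau-a between two score vectors, not the article's derived reliability coefficient; the example scores are invented.

```python
# Kendall's tau-a between two sets of (total test) scores: the
# proportion of concordant pairs minus the proportion of discordant
# pairs, over all n*(n-1)/2 pairs. Ties count as neither.
def tau_a(x, y):
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# hypothetical scores of 5 examinees on two parallel forms
scores_form_a = [1, 2, 3, 4, 5]
scores_form_b = [1, 3, 2, 4, 5]
t = tau_a(scores_form_a, scores_form_b)
```

    Here one of the ten pairs is inverted between the two orderings, so tau-a is 0.8; an ordinal reliability built on tau-a asks how close to 1 this agreement stays across replications.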

  13. Economic Analysis of a Living Wage Ordinance.

    ERIC Educational Resources Information Center

    Tolley, George; Bernstein, Peter

    A study estimated the costs of the "Chicago Jobs and Living Wage Ordinance" that would require firms that receive assistance from the city of Chicago to pay their workers an hourly wage of at least $7.60. An estimate of the additional labor cost that would result from the proposed Ordinance was calculated. Results of a survey of…

  14. Dyslexia and Developmental Co-Ordination Disorder in Further and Higher Education--Similarities and Differences. Does the "Label" Influence the Support Given?

    ERIC Educational Resources Information Center

    Kirby, Amanda; Sugden, David; Beveridge, Sally; Edwards, Lisa; Edwards, Rachel

    2008-01-01

    Developmental co-ordination disorder (DCD) is a developmental disorder affecting motor co-ordination. The "Diagnostics Statistics Manual"--IV classification for DCD describes difficulties across a range of activities of daily living, impacting on everyday skills and academic performance in school. Recent evidence has shown that…

  15. The Development and Standardization of the Adult Developmental Co-Ordination Disorders/Dyspraxia Checklist (ADC)

    ERIC Educational Resources Information Center

    Kirby, Amanda; Edwards, Lisa; Sugden, David; Rosenblum, Sara

    2010-01-01

    Developmental Co-ordination Disorder (DCD), also known as Dyspraxia in the United Kingdom (U.K.), is a developmental disorder affecting motor co-ordination. In the past this was regarded as a childhood disorder, however there is increasing evidence that a significant number of children will continue to have persistent difficulties into adulthood.…

  16. How to Plan an Ordinance: An Outline and Some Examples.

    ERIC Educational Resources Information Center

    Cable Television Information Center, Washington, DC.

    Designed for public officials who must make policy decisions concerning cable television, this booklet forms a checklist to ensure that all basic questions have been considered in drafting an ordinance. The purpose of a cable television ordinance is to develop a law listing the specifications and obligations that will govern the franchising of a…

  17. 25 CFR 11.108 - How are tribal ordinances affected by this part?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 1 2014-04-01 2014-04-01 false How are tribal ordinances affected by this part? 11.108 Section 11.108 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Application; Jurisdiction § 11.108 How are tribal ordinances affected by...

  18. 25 CFR 11.108 - How are tribal ordinances affected by this part?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 1 2013-04-01 2013-04-01 false How are tribal ordinances affected by this part? 11.108 Section 11.108 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Application; Jurisdiction § 11.108 How are tribal ordinances affected by...

  19. 25 CFR 11.108 - How are tribal ordinances affected by this part?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 1 2012-04-01 2011-04-01 true How are tribal ordinances affected by this part? 11.108 Section 11.108 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Application; Jurisdiction § 11.108 How are tribal ordinances affected by...

  20. 25 CFR 11.108 - How are tribal ordinances affected by this part?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false How are tribal ordinances affected by this part? 11.108 Section 11.108 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Application; Jurisdiction § 11.108 How are tribal ordinances affected by...

  1. An Algorithm for Converting Ordinal Scale Measurement Data to Interval/Ratio Scale

    ERIC Educational Resources Information Center

    Granberg-Rademacker, J. Scott

    2010-01-01

    The extensive use of survey instruments in the social sciences has long created debate and concern about validity of outcomes, especially among instruments that gather ordinal-level data. Ordinal-level survey measurement of concepts that could be measured at the interval or ratio level produce errors because respondents are forced to truncate or…

  2. Evaluation of new techniques for the calculation of internal recirculating flows

    NASA Technical Reports Server (NTRS)

    Van Doormaal, J. P.; Turan, A.; Raithby, G. D.

    1987-01-01

    The performance of discrete methods for the prediction of fluid flows can be enhanced by improving the convergence rate of solvers and by increasing the accuracy of the discrete representation of the equations of motion. This paper evaluates the gains in solver performance that are available when various acceleration methods are applied. Various discretizations are also examined and two are recommended because of their accuracy and robustness. Insertion of the improved discretization and solver accelerator into a TEACH code, that has been widely applied to combustor flows, illustrates the substantial gains that can be achieved.

  3. Method for distributed agent-based non-expert simulation of manufacturing process behavior

    DOEpatents

    Ivezic, Nenad; Potok, Thomas E.

    2004-11-30

    A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each process; and programming each agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
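    The shape of the scheme can be sketched in a few lines: agents attached to processes react to discrete events delivered in a message loop. The event names follow the abstract (clock tick, resources received, request for output production); the class, its fields, and the cycle-time behavior are invented for illustration and are not the patented implementation.

```python
# Toy sketch of agents attached to manufacturing processes reacting
# to discrete events in a message loop. Event names come from the
# abstract; everything else is illustrative.
class ProcessAgent:
    def __init__(self, name, cycle_time):
        self.name = name
        self.cycle_time = cycle_time  # ticks needed per finished unit
        self.progress = 0
        self.stock = 0                # finished units on hand

    def handle(self, event):
        if event == "clock_tick":
            self.progress += 1
            if self.progress >= self.cycle_time:
                self.progress = 0
                self.stock += 1       # a unit is completed
        elif event == "resources_received":
            pass                      # would unblock production here
        elif event == "request_output":
            if self.stock > 0:
                self.stock -= 1
                return 1              # ship one unit
            return 0

agents = [ProcessAgent("mill", 2), ProcessAgent("lathe", 3)]
for tick in range(6):                 # message loop: broadcast events
    for a in agents:
        a.handle("clock_tick")
shipped = agents[0].handle("request_output")
```

    Because every agent only sees messages, the same objects can model a whole factory on one processor, which is the "single-processor distributed agent" point of the claim.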

  4. Associations Between County and Municipality Zoning Ordinances and Access to Fruit And Vegetable Outlets in Rural North Carolina, 2012

    PubMed Central

    Mayo, Mariel Leah; Chriqui, Jamie F.

    2013-01-01

    Introduction Zoning ordinances and land-use plans may influence the community food environment by determining placement and access to food outlets, which subsequently support or hinder residents’ attempts to eat healthfully. The objective of this study was to examine associations between healthful food zoning scores as derived from information on local zoning ordinances, county demographics, and residents’ access to fruit and vegetable outlets in rural northeastern North Carolina. Methods From November 2012 through March 2013, county and municipality zoning ordinances were identified and double-coded by using the Bridging the Gap food code/policy audit form. A healthful food zoning score was derived by assigning points for the allowed use of fruit and vegetable outlets. Pearson coefficients were calculated to examine correlations between the healthful food zoning score, county demographics, and the number of fruit and vegetable outlets. In March and April 2013, qualitative interviews were conducted among county and municipal staff members knowledgeable about local zoning and planning to ascertain implementation and enforcement of zoning to support fruit and vegetable outlets. Results We found a strong positive correlation between healthful food zoning scores and the number of fruit and vegetable outlets in 13 northeastern North Carolina counties (r = 0.66, P = .01). Major themes in implementation and enforcement of zoning to support fruit and vegetable outlets included strict enforcement versus lack of enforcement of zoning regulations. Conclusion Increasing the range of permitted uses in zoning districts to include fruit and vegetable outlets may increase access to healthful fruit and vegetable outlets in rural communities. PMID:24309091

  5. Starfish (Asteroidea, Echinodermata) from the Faroe Islands; spatial distribution and abundance

    NASA Astrophysics Data System (ADS)

    Ringvold, H.; Andersen, T.

    2016-01-01

    "Marine benthic fauna of the Faroe Islands" (BIOFAR) is a large programme with a focus on collecting invertebrate fauna from the Faroes (62°N and 7°W). Cruises were undertaken from 1987 to 1990, and starfish (Asteroidea, Echinodermata) collected during this time were analysed. Asteroidea were sampled at ~50% of all BIOFAR stations. A Detritus sledge and a Triangular dredge proved to be the most efficient equipment, collecting over 60% of the specimens. In total 2473 specimens were collected from 20 to 1500 m depth, including 41 species from 17 families and 31 genera. Henricia pertusa (O. F. Müller, 1776) group, Pontaster tenuispinus (Düben & Koren, 1846), and Leptychaster arcticus (M. Sars, 1851) showed highest relative abundance. Maximum species diversity was found at 500-700 m depth, which coincides with the transition zone of water masses (North Icelandic Winter Water and Arctic Intermediate Water (NI/AI)) at approximately 400-600 m depth. 63% of the species were recorded at an average-weighted depth above 600 m. Two different ordination methods (detrended correspondence analysis (DCA) and nonmetric multidimensional scaling (NMDS)) gave highly consistent representations of the community structure gradients. The first ordination axis scores did not show significant relationships with any environmental variable. Biological covariates like the presence of Lophelia corals were not significantly related to ordination scores on any axis. The second ordination axis scores were significantly correlated with depth. Temperature and salinity were highly correlated (r=0.90), and both negatively correlated with depth (r=-0.69 and r=-0.57, respectively).

  6. Building code compliance and enforcement: The experience of San Francisco's residential energy conservation ordinance and California's building standards for new construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vine, E.

    1990-11-01

    As part of Lawrence Berkeley Laboratory's (LBL) technical assistance to the Sustainable City Project, compliance and enforcement activities related to local and state building codes for existing and new construction were evaluated in two case studies. The analysis of the City of San Francisco's Residential Energy Conservation Ordinance (RECO) showed that a limited, prescriptive energy conservation ordinance for existing residential construction can be enforced relatively easily with little administrative cost, and that compliance with such ordinances can be quite high. Compliance with the code was facilitated by extensive publicity, an informed public concerned with the cost of energy and knowledgeable about energy efficiency, the threat of punishment (Order of Abatement), the use of private inspectors, and training workshops for City and private inspectors. The analysis of California's Title 24 Standards for new residential and commercial construction showed that enforcement of this type of code for many climate zones is more complex and requires extensive administrative support for education and training of inspectors, architects, engineers, and builders. Under this code, prescriptive and performance approaches for compliance are permitted, resulting in the demand for alternative methods of enforcement: technical assistance, plan review, field inspection, and computer analysis. In contrast to existing construction, building design and new materials and construction practices are of critical importance in new construction, creating a need for extensive technical assistance and extensive interaction between enforcement personnel and the building community. Compliance problems associated with building design and installation did occur in both residential and nonresidential buildings. 12 refs., 5 tabs.

  7. Statistical performance and information content of time lag analysis and redundancy analysis in time series modeling.

    PubMed

    Angeler, David G; Viedma, Olga; Moreno, José M

    2009-11-01

    Time lag analysis (TLA) is a distance-based approach used to study temporal dynamics of ecological communities by measuring community dissimilarity over increasing time lags. Despite its increased use in recent years, its performance in comparison with other more direct methods (i.e., canonical ordination) has not been evaluated. This study fills this gap using extensive simulations and real data sets from experimental temporary ponds (true zooplankton communities) and landscape studies (landscape categories as pseudo-communities) that differ in community structure and anthropogenic stress history. Modeling time with a principal coordinates of neighbour matrices (PCNM) approach, the canonical ordination technique (redundancy analysis; RDA) consistently outperformed the other statistical tests (i.e., TLAs, Mantel test, and RDA based on linear time trends) using all real data. In addition, the RDA-PCNM revealed different patterns of temporal change, and the strength of each individual time pattern, in terms of adjusted variance explained, could be evaluated. It also identified species contributions to these patterns of temporal change. This additional information is not provided by distance-based methods. The simulation study revealed better Type I error properties of the canonical ordination techniques compared with the distance-based approaches when no deterministic component of change was imposed on the communities. The simulation also revealed that strong emphasis on uniform deterministic change and low variability at other temporal scales is needed to result in decreased statistical power of the RDA-PCNM approach relative to the other methods. Based on the statistical performance of and information content provided by RDA-PCNM models, this technique serves ecologists as a powerful tool for modeling temporal change of ecological (pseudo-)communities.
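    The TLA computation being evaluated is simple to sketch: take all sample pairs separated by a given lag, average their community dissimilarity, and watch how that average grows with lag. The drifting toy community below is invented; real TLA regresses dissimilarity on the square root of the lag and tests the slope.

```python
# Sketch of time lag analysis (TLA): mean community dissimilarity
# between all sample pairs at each time lag. Toy data invented.
import math

def dissimilarity(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mean_dissim_by_lag(series):
    """series: list of community composition vectors ordered in time."""
    n = len(series)
    out = {}
    for lag in range(1, n):
        pairs = [dissimilarity(series[t], series[t + lag])
                 for t in range(n - lag)]
        out[lag] = sum(pairs) / len(pairs)
    return out

# a two-species community drifting steadily in composition
series = [(float(t), 10.0 - t) for t in range(5)]
curve = mean_dissim_by_lag(series)
```

    A steadily rising curve signals directional change; the study's point is that RDA with PCNM time variables recovers the same signal plus which species drive it, which this pairwise-distance summary cannot.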

  8. Protein normal-mode dynamics: trypsin inhibitor, crambin, ribonuclease and lysozyme.

    PubMed

    Levitt, M; Sander, C; Stern, P S

    1985-02-05

    We have developed a new method for modelling protein dynamics using normal-mode analysis in internal co-ordinates. This method, normal-mode dynamics, is particularly well suited for modelling collective motion, makes possible direct visualization of biologically interesting modes, and is complementary to the more time-consuming simulation of molecular dynamics trajectories. The essential assumption and limitation of normal-mode analysis is that the molecular potential energy varies quadratically. Our study starts with energy minimization of the X-ray co-ordinates with respect to the single-bond torsion angles. The main technical task is the calculation of second derivative matrices of kinetic and potential energy with respect to the torsion angle co-ordinates. These enter into a generalized eigenvalue problem, and the final eigenvalues and eigenvectors provide a complete description of the motion in the basic 0.1 to 10 picosecond range. Thermodynamic averages of amplitudes, fluctuations and correlations can be calculated efficiently using analytical formulae. The general method presented here is applied to four proteins, trypsin inhibitor, crambin, ribonuclease and lysozyme. When the resulting atomic motion is visualized by computer graphics, it is clear that the motion of each protein is collective with all atoms participating in each mode. The slow modes, with frequencies below 10 cm-1 (periods longer than about 3 ps), are the most interesting in that the motion in these modes is segmental. The root-mean-square atomic fluctuations, which are dominated by a few slow modes, agree well with experimental temperature factors (B values). The normal-mode dynamics of these four proteins have many features in common, although in the larger molecules, lysozyme and ribonuclease, there is low frequency domain motion about the active site.
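
    The core computation described above is a generalized eigenvalue problem built from second-derivative matrices of the potential and kinetic energy. The following is a toy sketch with a hypothetical two-mass spring system in Cartesian coordinates (not the paper's torsion-angle coordinates); with unit masses the kinetic-energy matrix is the identity, so the generalized problem reduces to a standard symmetric eigenproblem.

```python
import numpy as np

# Toy normal-mode analysis: two unit masses coupled by three unit
# springs in a line. K is the potential-energy Hessian; with unit
# masses the kinetic-energy matrix M is the identity, so the
# generalized problem K v = omega^2 M v reduces to a standard
# symmetric eigenproblem (for a general M use scipy.linalg.eigh(K, M)).
K = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
omega2, modes = np.linalg.eigh(K)   # eigenvalues in ascending order
freqs = np.sqrt(omega2)             # normal-mode angular frequencies
# Slowest mode (omega = 1): both masses move in phase -- the collective,
# segmental kind of motion the abstract highlights; fastest mode
# (omega = sqrt(3)): antiphase stretching.
```

    Thermal amplitudes then follow analytically from the frequencies, which is why normal-mode dynamics avoids long trajectory integration.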

  9. Nonconforming mortar element methods: Application to spectral discretizations

    NASA Technical Reports Server (NTRS)

    Maday, Yvon; Mavriplis, Cathy; Patera, Anthony

    1988-01-01

    Spectral element methods are p-type weighted residual techniques for partial differential equations that combine the generality of finite element methods with the accuracy of spectral methods. Presented here is a new nonconforming discretization which greatly improves the flexibility of the spectral element approach as regards automatic mesh generation and non-propagating local mesh refinement. The method is based on the introduction of an auxiliary mortar trace space, and constitutes a new approach to discretization-driven domain decomposition characterized by a clean decoupling of the local, structure-preserving residual evaluations and the transmission of boundary and continuity conditions. The flexibility of the mortar method is illustrated by several nonconforming adaptive Navier-Stokes calculations in complex geometry.

  10. Case management for high-intensity service users: towards a relational approach to care co-ordination.

    PubMed

    McEvoy, Phil; Escott, Diane; Bee, Penny

    2011-01-01

    This study is based on a formative evaluation of a case management service for high-intensity service users in Northern England. The evaluation had three main purposes: (i) to assess the quality of the organisational infrastructure; (ii) to obtain a better understanding of the key influences that played a role in shaping the development of the service; and (iii) to identify potential changes in practice that may help to improve the quality of service provision. The evaluation was informed by Gittell's relational co-ordination theory, which focuses upon cross-boundary working practices that facilitate task integration. The Assessment of Chronic Illness Care Survey was used to assess the organisational infrastructure and qualitative interviews with front line staff were conducted to explore the key influences that shaped the development of the service. A high level of strategic commitment and political support for integrated working was identified. However, the quality of care co-ordination was variable. The most prominent operational factor that appeared to influence the scope and quality of care co-ordination was the pattern of interaction between the case managers and their co-workers. The co-ordination of patient care was much more effective in integrated co-ordination networks. Key features included clearly defined, task focussed, relational workspaces with interactive forums where case managers could engage with co-workers in discussions about the management of interdependent care activities. In dispersed co-ordination networks with fewer relational workspaces, the case managers struggled to work as effectively. The evaluation concluded that the creation of flexible and efficient task focused relational workspaces that are systemically managed and adequately resourced could help to improve the quality of care co-ordination, particularly in dispersed networks. © 2010 Blackwell Publishing Ltd.

  11. Projected health impact of the Los Angeles City living wage ordinance

    PubMed Central

    Cole, B.; Shimkhada, R.; Morgenstern, H.; Kominski, G.; Fielding, J.; Wu, S.

    2005-01-01

    Study objective: To estimate the relative health effects of the income and health insurance provisions of the Los Angeles City living wage ordinance. Setting and participants: About 10 000 employees of city contractors are subject to the Los Angeles City living wage ordinance, which establishes an annually adjusted minimum wage ($7.99 per hour in July 2002) and requires employers to contribute $1.25 per hour worked towards employees' health insurance, or, if health insurance is not provided, to add this amount to wages. Design: As part of a comprehensive health impact assessment (HIA), we used estimates of the effects of health insurance and income on mortality from the published literature to construct a model to estimate and compare potential reductions in mortality attributable to the increases in wage and changes in health insurance status among workers covered by the Los Angeles City living wage ordinance. Results: The model predicts that the ordinance currently reduces mortality by 1.4 deaths per year per 10 000 workers at a cost of $27.5 million per death prevented. If the ordinance were modified so that all uninsured workers received health insurance, mortality would be reduced by eight deaths per year per 10 000 workers at a cost of $3.4 million per death prevented. Conclusions: The health insurance provisions of the ordinance have the potential to benefit the health of covered workers far more cost effectively than the wage provisions of the ordinance. This analytical model can be adapted and used in other health impact assessments of related policy actions that might affect either income or access to health insurance in the affected population. PMID:16020640

  12. Projected health impact of the Los Angeles City living wage ordinance.

    PubMed

    Cole, Brian L; Shimkhada, Riti; Morgenstern, Hal; Kominski, Gerald; Fielding, Jonathan E; Wu, Sheng

    2005-08-01

    To estimate the relative health effects of the income and health insurance provisions of the Los Angeles City living wage ordinance. About 10 000 employees of city contractors are subject to the Los Angeles City living wage ordinance, which establishes an annually adjusted minimum wage (7.99 US dollars per hour in July 2002) and requires employers to contribute 1.25 US dollars per hour worked towards employees' health insurance, or, if health insurance is not provided, to add this amount to wages. As part of a comprehensive health impact assessment (HIA), we used estimates of the effects of health insurance and income on mortality from the published literature to construct a model to estimate and compare potential reductions in mortality attributable to the increases in wage and changes in health insurance status among workers covered by the Los Angeles City living wage ordinance. The model predicts that the ordinance currently reduces mortality by 1.4 deaths per year per 10,000 workers at a cost of 27.5 million US dollars per death prevented. If the ordinance were modified so that all uninsured workers received health insurance, mortality would be reduced by eight deaths per year per 10,000 workers at a cost of 3.4 million US dollars per death prevented. The health insurance provisions of the ordinance have the potential to benefit the health of covered workers far more cost effectively than the wage provisions of the ordinance. This analytical model can be adapted and used in other health impact assessments of related policy actions that might affect either income or access to health insurance in the affected population.
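
    A back-of-envelope check of the cost-effectiveness comparison in the abstract. The per-scenario figures come from the abstract itself; the implied total costs are our own reading (deaths prevented times cost per death), not values stated by the authors.

```python
# Figures from the abstract, per 10 000 covered workers per year:
# (deaths prevented per year, cost per death prevented in dollars).
# The implied total costs below are a back-of-envelope inference on
# our part, not values stated by the authors.
scenarios = {
    "current ordinance": (1.4, 27.5e6),
    "all workers insured": (8.0, 3.4e6),
}
implied_cost = {name: deaths * cost_per_death
                for name, (deaths, cost_per_death) in scenarios.items()}
# The insurance scenario prevents roughly six times as many deaths for
# a lower implied total outlay -- the abstract's cost-effectiveness point.
```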

  13. Two modified symplectic partitioned Runge-Kutta methods for solving the elastic wave equation

    NASA Astrophysics Data System (ADS)

    Su, Bo; Tuo, Xianguo; Xu, Ling

    2017-08-01

    Based on a modified strategy, two modified symplectic partitioned Runge-Kutta (PRK) methods are proposed for the temporal discretization of the elastic wave equation. The two symplectic schemes are similar in form but are different in nature. After the spatial discretization of the elastic wave equation, the ordinary Hamiltonian formulation for the elastic wave equation is presented. The PRK scheme is then applied for time integration. An additional term associated with spatial discretization is inserted into the different stages of the PRK scheme. Theoretical analyses are conducted to evaluate the numerical dispersion and stability of the two novel PRK methods. A finite difference method is used to approximate the spatial derivatives since the two schemes are independent of the spatial discretization technique used. The numerical solutions computed by the two new schemes are compared with those computed by a conventional symplectic PRK. The numerical results verify the new methods and are superior to those generated by conventional schemes in seismic wave modeling.
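
    For readers unfamiliar with symplectic partitioned Runge-Kutta integration, here is the classic second-order example, Stoermer-Verlet, applied to a single harmonic oscillator as a one-mode stand-in for the spatially discretized wave equation. This is a generic illustration, not the authors' modified schemes; the hallmark behavior is that the energy error stays bounded over long integration times.

```python
# Stoermer-Verlet, the classic second-order symplectic partitioned
# Runge-Kutta scheme, applied to q'' = -q (a one-mode stand-in for the
# spatially discretized elastic wave equation).
def verlet(q, p, dt, steps, force=lambda q: -q):
    for _ in range(steps):
        p = p + 0.5 * dt * force(q)   # half kick (momentum stage)
        q = q + dt * p                # drift (position stage)
        p = p + 0.5 * dt * force(q)   # half kick
    return q, p

q0, p0 = 1.0, 0.0                     # energy E = 0.5
q1, p1 = verlet(q0, p0, dt=0.01, steps=10_000)   # integrate to t = 100
energy_drift = abs(0.5 * (p1**2 + q1**2) - 0.5)
# Symplectic schemes keep this drift bounded (O(dt^2)) for all time,
# unlike non-symplectic integrators whose energy error grows secularly.
```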

  14. Overview of Existing Wind Energy Ordinances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oteri, F.

    2008-12-01

    Due to increased energy demand in the United States, rural communities with limited or no experience with wind energy now have the opportunity to become involved in this industry. Communities with good wind resources may be approached by entities with plans to develop the resource. Although these opportunities can create new revenue in the form of construction jobs and land lease payments, they also create a new responsibility on the part of local governments to ensure that ordinances will be established to aid the development of safe facilities that will be embraced by the community. The purpose of this report is to educate and engage state and local governments, as well as policymakers, about existing large wind energy ordinances. These groups will have a collection of examples to utilize when they attempt to draft a new large wind energy ordinance in a town or county without existing ordinances.

  15. On the Total Variation of High-Order Semi-Discrete Central Schemes for Conservation Laws

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Levy, Doron

    2004-01-01

    We discuss a new fifth-order, semi-discrete, central-upwind scheme for solving one-dimensional systems of conservation laws. This scheme combines a fifth-order WENO reconstruction, a semi-discrete central-upwind numerical flux, and a strong stability preserving Runge-Kutta method. We test our method with various examples, and give particular attention to the evolution of the total variation of the approximations.
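
    The total variation the authors track can be computed directly. The sketch below pairs the TV diagnostic with a first-order upwind scheme for linear advection, a simple monotone (hence provably total-variation diminishing) stand-in for the paper's fifth-order central-upwind scheme, whose TV behavior is the subject of the study.

```python
import numpy as np

def total_variation(u):
    """TV(u) = sum_i |u_{i+1} - u_i| on a periodic grid."""
    return np.sum(np.abs(np.roll(u, -1) - u))

# First-order upwind step for u_t + u_x = 0 (periodic), stable for
# CFL = dt/dx <= 1. Monotone schemes like this are TVD; high-order
# schemes such as the paper's must be constructed carefully to avoid
# spurious oscillations that would increase TV.
def upwind_step(u, cfl):
    return u - cfl * (u - np.roll(u, 1))

u = np.where(np.arange(100) < 50, 1.0, 0.0)   # step initial data
tv0 = total_variation(u)                      # = 2 for this profile
for _ in range(200):
    u = upwind_step(u, cfl=0.5)
# For a monotone scheme, TV never increases.
```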

  16. An asymptotic preserving unified gas kinetic scheme for frequency-dependent radiative transfer equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Wenjun, E-mail: sun_wenjun@iapcm.ac.cn; Jiang, Song, E-mail: jiang@iapcm.ac.cn; Xu, Kun, E-mail: makxu@ust.hk

    This paper presents an extension of previous work (Sun et al., 2015 [22]) of the unified gas kinetic scheme (UGKS) for the gray radiative transfer equations to the frequency-dependent (multi-group) radiative transfer system. Different from the gray radiative transfer equations, where the optical opacity is only a function of local material temperature, the simulation of frequency-dependent radiative transfer is associated with additional difficulties from the frequency-dependent opacity. For the multiple frequency radiation, the opacity depends on both the spatial location and the frequency. For example, the opacity is typically a decreasing function of frequency. At the same spatial region the transport physics can be optically thick for the low frequency photons, and optically thin for high frequency ones. Therefore, the optical thickness is not a simple function of space location. In this paper, the UGKS for frequency-dependent radiative system is developed. The UGKS is a finite volume method and the transport physics is modeled according to the ratio of the cell size to the photon's frequency-dependent mean free path. When the cell size is much larger than the photon's mean free path, a diffusion solution for such a frequency radiation will be obtained. On the other hand, when the cell size is much smaller than the photon's mean free path, a free transport mechanism will be recovered. In the regime between the above two limits, with the variation of the ratio between the local cell size and photon's mean free path, the UGKS provides a smooth transition in the physical and frequency space to capture the corresponding transport physics accurately. The seemingly straightforward extension of the UGKS from the gray to multiple frequency radiation system is due to its intrinsic consistent multiple scale transport modeling, but it still involves lots of work to properly discretize the multiple groups in order to design an asymptotic preserving (AP) scheme in all regimes. The current scheme is tested in a few frequency-dependent radiation problems, and the results are compared with the solutions from the well-defined implicit Monte Carlo (IMC) method. The UGKS is much more efficient than IMC, and the computational times of both schemes for all test cases are listed. The UGKS seems to be the first discrete ordinate method (DOM) for the accurate capturing of multiple frequency radiative transport physics from ballistic particle motion to the diffusive wave propagation.
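
    A toy illustration of the multi-regime point above: with an opacity that falls off with frequency, the same cell is optically thick for low-frequency groups and optically thin for high-frequency ones. The kappa ~ nu^-3 scaling, group frequencies, and tau = 1 threshold here are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Per-group optical thickness tau_g = kappa(nu_g) * dx decides which
# transport regime each frequency group sits in within the same cell.
nu = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # group center frequencies (arb. units)
kappa = 10.0 * nu**-3                       # assumed opacity, decreasing in nu
dx = 1.0                                    # cell size
tau = kappa * dx                            # optical thickness per group
regime = np.where(tau > 1.0, "diffusive", "transport")
# Low-frequency groups are diffusive (tau >> 1), high-frequency groups
# are nearly free-streaming (tau << 1) -- the UGKS must bridge both in
# a single cell, which is what the asymptotic preserving property buys.
```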

  17. Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.

    PubMed

    Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao

    2015-04-01

    Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnoses and classification. However, these data are usually redundant and noisy, and only a subset of them present distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography-based optimization is proposed to select a good subset of informative genes relevant to the classification. In the proposed algorithm, the Fisher-Markov selector is first used to choose a fixed number of genes. Secondly, to make biogeography-based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance the exploration and exploitation ability. Discrete biogeography-based optimization (DBBO) is then constructed by integrating the discrete migration model and the discrete mutation model. Finally, the DBBO method is used for feature selection, with three classifiers evaluated under 10-fold cross-validation. To show the effectiveness and efficiency of the algorithm, it is tested on four breast cancer benchmark datasets. In comparison with a genetic algorithm, particle swarm optimization, a differential evolution algorithm, and hybrid biogeography-based optimization, experimental results demonstrate that the proposed method is better than, or at least comparable with, previous methods from the literature in terms of the quality of the solutions obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
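
    A minimal sketch of binary biogeography-based optimization for feature selection, under loud assumptions: the migration and mutation operators below are generic textbook forms, not the authors' specific discrete models, and the fitness (overlap with a hidden "informative" mask) is a cheap stand-in for the cross-validated classifier accuracy used in the paper.

```python
import random

# Each habitat is a 0/1 mask over features; migration copies bits from
# fitter habitats, mutation flips bits. Fitness here is a toy stand-in
# for classifier accuracy: agreement with a hidden informative mask.
random.seed(0)
n_feat, n_hab, n_gen = 20, 10, 60
target = [1 if i < 5 else 0 for i in range(n_feat)]   # 5 informative features

def fitness(h):
    return sum(1 for a, b in zip(h, target) if a == b)

habitats = [[random.randint(0, 1) for _ in range(n_feat)] for _ in range(n_hab)]
for _ in range(n_gen):
    habitats.sort(key=fitness, reverse=True)
    for rank in range(1, n_hab):                # best habitat kept as elite
        immigration = rank / (n_hab - 1)        # worse habitats immigrate more
        h = habitats[rank][:]
        for i in range(n_feat):
            if random.random() < immigration:   # discrete migration:
                donor = habitats[random.randrange(rank)]  # copy bit from a fitter habitat
                h[i] = donor[i]
            if random.random() < 0.02:          # discrete mutation: flip the bit
                h[i] = 1 - h[i]
        habitats[rank] = h
best = max(habitats, key=fitness)               # selected feature mask
```

    In the paper's setting the selected mask would then feed the classifiers for 10-fold cross-validation.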

  18. New preconditioning strategy for Jacobian-free solvers for variably saturated flows with Richards’ equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipnikov, Konstantin; Moulton, David; Svyatskiy, Daniil

    2016-04-29

    We develop a new approach for solving the nonlinear Richards’ equation arising in variably saturated flow modeling. The growing complexity of geometric models for simulation of subsurface flows leads to the necessity of using unstructured meshes and advanced discretization methods. Typically, a numerical solution is obtained by first discretizing PDEs and then solving the resulting system of nonlinear discrete equations with a Newton-Raphson-type method. Efficiency and robustness of the existing solvers rely on many factors, including an empiric quality control of intermediate iterates, complexity of the employed discretization method and a customized preconditioner. We propose and analyze a new preconditioning strategy that is based on a stable discretization of the continuum Jacobian. We will show with numerical experiments for challenging problems in subsurface hydrology that this new preconditioner improves convergence of the existing Jacobian-free solvers 3-20 times. Furthermore, we show that the Picard method with this preconditioner becomes a more efficient nonlinear solver than a few widely used Jacobian-free solvers.
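
    The "Jacobian-free" part refers to the standard Newton-Krylov trick: Krylov solvers only need Jacobian-vector products, which can be approximated by a finite difference of the nonlinear residual, so the Jacobian is never assembled; the preconditioner (here, a stably discretized continuum Jacobian) is applied around these products. A generic sketch of that matvec, checked against an exact Jacobian for a hypothetical residual:

```python
import numpy as np

def jfnk_matvec(F, u, v, eps=1e-7):
    """Approximate J(u) @ v without forming J, via a one-sided
    finite difference of the nonlinear residual F."""
    return (F(u + eps * v) - F(u)) / eps

# Verify against an exact Jacobian for the simple residual
# F(u) = u**3 - 1, whose Jacobian is diag(3 * u**2).
F = lambda u: u**3 - 1.0
u = np.array([1.0, 2.0, 0.5])
v = np.array([1.0, -1.0, 2.0])
approx = jfnk_matvec(F, u, v)
exact = 3.0 * u**2 * v
```

    In practice the Krylov solver (e.g., GMRES) calls this matvec at each inner iteration, and a good preconditioner sharply reduces how many such calls are needed.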

  19. A fast semi-discrete Kansa method to solve the two-dimensional spatiotemporal fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Sun, HongGuang; Liu, Xiaoting; Zhang, Yong; Pang, Guofei; Garrard, Rhiannon

    2017-09-01

    Fractional-order diffusion equations (FDEs) extend classical diffusion equations by quantifying anomalous diffusion frequently observed in heterogeneous media. Real-world diffusion can be multi-dimensional, requiring efficient numerical solvers that can handle long-term memory embedded in mass transport. To address this challenge, a semi-discrete Kansa method is developed to approximate the two-dimensional spatiotemporal FDE, where the Kansa approach first discretizes the FDE, then the Gauss-Jacobi quadrature rule solves the corresponding matrix, and finally the Mittag-Leffler function provides an analytical solution for the resultant time-fractional ordinary differential equation. Numerical experiments are then conducted to check how the accuracy and convergence rate of the numerical solution are affected by the distribution mode and number of spatial discretization nodes. Applications further show that the numerical method can efficiently solve two-dimensional spatiotemporal FDE models with either a continuous or discrete mixing measure. Hence this study provides an efficient and fast computational method for modeling super-diffusive, sub-diffusive, and mixed diffusive processes in large, two-dimensional domains with irregular shapes.
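
    The last stage described above reduces to time-fractional ODEs of the form D^alpha y = -lam * y, whose exact solution is the Mittag-Leffler function y(t) = E_alpha(-lam * t**alpha). A truncated power series is the simplest (if not the most numerically robust) way to evaluate it; this generic evaluation is illustrative and not the authors' implementation.

```python
import math

def mittag_leffler(z, alpha, terms=60):
    """Truncated series E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1).
    Adequate for modest |z|; dedicated algorithms are needed otherwise."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

# Sanity check: for alpha = 1 the Mittag-Leffler function is exp(z),
# recovering the classical (non-fractional) diffusion decay.
y_classical = mittag_leffler(-0.5, alpha=1.0)
y_fractional = mittag_leffler(-0.5, alpha=0.8)   # sub-diffusive case
```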

  20. A Discrete Probability Function Method for the Equation of Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A discrete probability function (DPF) method for the equation of radiative transfer is derived. The DPF is defined as the integral of the probability density function (PDF) over a discrete interval. The derivation allows the evaluation of the PDF of intensities leaving desired radiation paths including turbulence-radiation interactions without the use of computer intensive stochastic methods. The DPF method has a distinct advantage over conventional PDF methods since the creation of a partial differential equation from the equation of transfer is avoided. Further, convergence of all moments of intensity is guaranteed at the basic level of simulation unlike the stochastic method where the number of realizations for convergence of higher order moments increases rapidly. The DPF method is described for a representative path with approximately integral-length scale-sized spatial discretization. The results show good agreement with measurements in a propylene/air flame except for the effects of intermittency resulting from highly correlated realizations. The method can be extended to the treatment of spatial correlations as described in the Appendix. However, information regarding spatial correlations in turbulent flames is needed prior to the execution of this extension.
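
    The DPF idea in miniature: replace a continuous PDF by its integrals over discrete bins, then push those probabilities through the radiative transfer relation directly, with no stochastic sampling. The two-state absorption coefficient, path length, and source term below are illustrative assumptions standing in for turbulence fluctuations along a path segment.

```python
import numpy as np

I_in, S, ds = 1.0, 2.0, 0.1            # incoming intensity, source, segment length
kappa = np.array([1.0, 5.0])           # possible absorption coefficients
p_kappa = np.array([0.7, 0.3])         # DPF of kappa (probabilities sum to 1)

trans = np.exp(-kappa * ds)            # transmissivity for each state
I_out = I_in * trans + S * (1.0 - trans)   # exiting intensity per state
# The exiting-intensity DPF is simply the pairs (I_out, p_kappa); its
# moments follow by direct summation, with no Monte Carlo realizations
# and guaranteed convergence of all moments at this level of discretization.
mean_I_out = np.sum(p_kappa * I_out)
```

    Chaining segments amounts to convolving DPFs along the path, which is where spatial correlation information (as noted in the abstract) would enter.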
