Science.gov

Sample records for adaptive refinement procedure

  1. An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.

    1993-01-01

    We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to quantify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
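
    To make the refinement indicator concrete: a Richardson-type estimate compares solutions computed with spacings h and 2h, and the leading error term for a scheme of order p scales as (u_2h - u_h)/(2^p - 1). The sketch below is a hypothetical illustration, not the authors' code; it flags cells where such an estimate exceeds a tolerance.

        import numpy as np

        def richardson_flags(u_h, u_2h, p=2, tol=1e-3):
            # u_h: solution on a fine grid of N cells; u_2h: the same field
            # computed on a grid twice as coarse (N/2 cells).
            # Inject coarse values onto the fine cells, then estimate the
            # fine-grid error from the Richardson expansion.
            u_2h_on_h = np.repeat(u_2h, 2)
            err = np.abs(u_2h_on_h - u_h) / (2**p - 1)
            return err > tol   # True where refinement is indicated

        # e.g. flags = richardson_flags(u_fine, u_coarse, p=2, tol=1e-4)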

  2. An adaptive refinement procedure for transient thermal analysis using nodeless variable finite elements

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, R.; Wieting, Allan R.; Thornton, Earl A.

    1990-01-01

    An adaptive mesh refinement procedure that uses nodeless variables and quadratic interpolation functions is presented for analyzing transient thermal problems. A temperature based finite element scheme with Crank-Nicolson time marching is used to obtain the thermal solution. The strategies used for mesh adaption, computing refinement indicators, and time marching are described. Examples in one and two dimensions are presented and comparisons are made with exact solutions. The effectiveness of this procedure for transient thermal analysis is reflected in good solution accuracy, reduction in number of elements used, and computational efficiency.

  3. Issues in adaptive mesh refinement

    SciTech Connect

    Dai, William Wenlong

    2009-01-01

    In this paper, we present an approach for patch-based adaptive mesh refinement (AMR) for multi-physics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, and management of patches. Among the special features of this patch-based AMR are symmetry preserving, efficiency of refinement, a special implementation of flux correction, and patch management in parallel computing environments. Here, higher efficiency of refinement means fewer unnecessarily refined cells for a given set of cells to be refined. To demonstrate the capability of the AMR framework, hydrodynamics simulations with many levels of refinement are shown in both two and three dimensions.

  4. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; Wissink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of the fine grid required for an accurate solution are dynamic properties of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the

  5. Adaptive Mesh Refinement in CTH

    SciTech Connect

    Crawford, David

    1999-05-04

    This paper reports progress on implementing a new adaptive mesh refinement capability in the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based, with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor, and massively parallel platforms. An approximate factor of three in memory and performance improvements over comparable-resolution non-adaptive calculations has been demonstrated for a number of problems.

  6. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extending a previously written serial code that uses a logically Cartesian structured mesh into a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two dimensions or an oct-tree in three). Each grid block has a logically Cartesian mesh. The package supports one-, two-, and three-dimensional models.
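
    The tree of sub-grid blocks that PARAMESH maintains can be pictured with a few lines of code. This is an illustrative sketch only (the Block class and its fields are invented here, and PARAMESH itself is Fortran 90): refining a leaf replaces it with 2**d children, giving a quad-tree in two dimensions and an oct-tree in three.

        from dataclasses import dataclass, field

        @dataclass
        class Block:
            level: int            # depth in the refinement tree
            origin: tuple         # lower corner of the block's patch
            width: float          # physical extent of the block per side
            children: list = field(default_factory=list)

            def refine(self, dim=2):
                # Split this leaf into 2**dim half-width child blocks.
                h = self.width / 2.0
                for i in range(2**dim):
                    shift = tuple(h * ((i >> k) & 1) for k in range(dim))
                    corner = tuple(o + s for o, s in zip(self.origin, shift))
                    self.children.append(Block(self.level + 1, corner, h))

        root = Block(level=0, origin=(0.0, 0.0), width=1.0)
        root.refine(dim=2)   # four children, i.e. one quad-tree node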

  7. Adaptive mesh refinement in Titanium

    SciTech Connect

    Colella, Phillip; Wen, Tong

    2005-01-21

    In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study in which we implement a subset of Chombo's functionality in Titanium. Chombo is a software package that applies the adaptive mesh refinement methodology to numerical partial differential equations at the production level. Chombo takes a library approach to parallel programming (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation on two grid configurations from a real application. Line counts for both implementations are also provided.

  8. Adaptive Mesh Refinement for Microelectronic Device Design

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Lou, John; Norton, Charles

    1999-01-01

    Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement), or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchical elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smooth out and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of
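
    The h-/p-refinement choice described above fits a short adaptive loop. The sketch below is schematic: the element objects and the estimate_error/is_smooth callbacks are assumptions, not part of the package described. Smooth regions get higher-order elements; non-smooth regions get smaller ones.

        def adapt(elements, estimate_error, is_smooth, tol):
            new_elements = []
            for e in elements:
                if estimate_error(e) <= tol:
                    new_elements.append(e)          # accurate enough: keep
                elif is_smooth(e):
                    e.p += 1                        # p-refinement: raise order
                    new_elements.append(e)
                else:
                    new_elements.extend(e.split())  # h-refinement: split element
            return new_elements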

  9. Arbitrary Lagrangian Eulerian Adaptive Mesh Refinement

    SciTech Connect

    Koniges, A.; Eder, D.; Masters, N.; Fisher, A.; Anderson, R.; Gunney, B.; Wang, P.; Benson, D.; Dixit, P.

    2009-09-29

    This is a simulation code involving an ALE (arbitrary Lagrangian-Eulerian) hydrocode with AMR (adaptive mesh refinement) and pluggable physics packages for material strength, heat conduction, radiation diffusion, and laser ray tracing, developed at LLNL, UCSD, and Berkeley Lab. The code is an extension of the open-source SAMRAI (Structured Adaptive Mesh Refinement Application Interface) code/library. The code can be used in laser facilities such as the National Ignition Facility. The code is also being applied to slurry flow (landslides).

  10. Adaptive mesh refinement for storm surge

    NASA Astrophysics Data System (ADS)

    Mandli, Kyle T.; Dawson, Clint N.

    2014-03-01

    An approach to utilizing adaptive mesh refinement algorithms for storm surge modeling is proposed. Numerical models currently exist that can resolve the details of coastal regions but are often too costly to be run in an ensemble forecasting framework without significant computing resources. The application of adaptive mesh refinement algorithms substantially lowers the computational cost of a storm surge model run while retaining much of the desired coastal resolution. The approach presented is implemented in the GEOCLAW framework and compared to ADCIRC for Hurricane Ike, along with observed tide gauge data and the computational cost of each model run.

  11. Parallel object-oriented adaptive mesh refinement

    SciTech Connect

    Balsara, D.; Quinlan, D.J.

    1997-04-01

    In this paper we study adaptive mesh refinement (AMR) for elliptic and hyperbolic systems. We use the Asynchronous Fast Adaptive Composite Grid Method (AFACX), a parallel algorithm based upon the Fast Adaptive Composite Grid Method (FAC), as a test case of an adaptive elliptic solver. For our hyperbolic system example we use TVD and ENO schemes for solving the Euler and MHD equations. We use the structured grid load balancer MLB as a tool for obtaining a load-balanced distribution in a parallel environment. Parallel adaptive mesh refinement poses difficulties in expressing the basic single grid solver, whether elliptic or hyperbolic, in a fashion that parallelizes seamlessly. It also requires that these basic solvers work together within the adaptive mesh refinement algorithm, which uses the single grid solvers as one part of its adaptive solution process. We show that use of AMR++, an object-oriented library within the OVERTURE Framework, simplifies the development of AMR applications. Parallel support is provided and abstracted through the use of the P++ parallel array class.

  12. Adaptive Hybrid Mesh Refinement for Multiphysics Applications

    SciTech Connect

    Khamayseh, Ahmed K; de Almeida, Valmor F

    2007-01-01

    The accuracy and convergence of computational solutions of mesh-based methods are strongly dependent on the quality of the mesh used. We have developed methods for optimizing meshes that are composed of elements of arbitrary polygonal and polyhedral type. We present in this research the development of r-h hybrid adaptive meshing technology tailored to application areas relevant to multi-physics modeling and simulation. Solution-based adaptation methods are used to reposition mesh nodes (r-adaptation) or to refine the mesh cells (h-adaptation) to minimize solution error. The numerical methods perform either r-adaptive mesh optimization or h-adaptive mesh refinement on the initial isotropic or anisotropic meshes to maximize the equidistribution of a weighted geometric and/or solution function. We have successfully introduced r-h adaptivity to a least-squares method with spherical harmonics basis functions for the solution of the spherical shallow atmosphere model used in climate forecasting. In addition, application of this technology covers a wide range of disciplines in the computational sciences, most notably time-dependent multi-physics, multi-scale modeling and simulation.

  13. Fully implicit adaptive mesh refinement MHD algorithm

    NASA Astrophysics Data System (ADS)

    Philip, Bobby

    2005-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite (FAC) algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations will be presented on a variety of problems.

  14. Adaptive refinement tools for tetrahedral unstructured grids

    NASA Technical Reports Server (NTRS)

    Pao, S. Paul (Inventor); Abdol-Hamid, Khaled S. (Inventor)

    2011-01-01

    An exemplary embodiment providing one or more improvements includes software which is robust, efficient, and has a very fast run time for user directed grid enrichment and flow solution adaptive grid refinement. All user selectable options (e.g., the choice of functions, the choice of thresholds, etc.), other than a pre-marked cell list, can be entered on the command line. The ease of application is an asset for flow physics research and preliminary design CFD analysis where fast grid modification is often needed to deal with unanticipated development of flow details.

  15. Visualization of adaptive mesh refinement data

    NASA Astrophysics Data System (ADS)

    Weber, Gunther H.; Hagen, Hans; Hamann, Bernd; Joy, Kenneth I.; Ligocki, Terry J.; Ma, Kwan-Liu; Shalf, John M.

    2001-05-01

    The complexity of physical phenomena often varies substantially over space and time. There can be regions where a physical phenomenon/quantity varies very little over a large extent. At the same time, there can be small regions where the same quantity exhibits highly complex variations. Adaptive mesh refinement (AMR) is a technique used in computational fluid dynamics to simulate phenomena with drastically varying scales in the complexity of the simulated variables. Using multiple nested grids of different resolutions, AMR combines the topological simplicity of structured-rectilinear grids, permitting efficient computation and storage, with the possibility of adapting grid resolution in regions of complex behavior. We present methods for direct volume rendering of AMR data. Our methods utilize AMR grids directly for efficiency of the visualization process. We apply a hardware-accelerated rendering method to AMR data supporting interactive manipulation of color-transfer functions and viewing parameters. We also present a cell-projection-based rendering technique for AMR data.

  16. Adaptive mesh refinement techniques for electrical impedance tomography.

    PubMed

    Molinari, M; Cox, S J; Blott, B H; Daniell, G J

    2001-02-01

    Adaptive mesh refinement techniques can be applied to increase the efficiency of electrical impedance tomography reconstruction algorithms by reducing computational and storage cost as well as providing problem-dependent solution structures. A self-adaptive refinement algorithm based on an a posteriori error estimate has been developed and its results are shown in comparison with uniform mesh refinement for a simple head model.

  17. An adaptive mesh refinement algorithm for the discrete ordinates method

    SciTech Connect

    Jessee, J.P.; Fiveland, W.A.; Howell, L.H.; Colella, P.; Pember, R.B.

    1996-03-01

    The discrete ordinates form of the radiative transport equation (RTE) is spatially discretized and solved using an adaptive mesh refinement (AMR) algorithm. This technique permits local grid refinement to minimize the spatial discretization error of the RTE. An error estimator is applied to define regions for local grid refinement; overlapping refined grids are recursively placed in these regions; and the RTE is then solved over the entire domain. The procedure continues until the spatial discretization error has been reduced to a sufficient level. The following aspects of the algorithm are discussed: error estimation, grid generation, communication between refined levels, and solution sequencing. This initial formulation employs the step scheme, and is valid for absorbing and isotropically scattering media in two-dimensional enclosures. The utility of the algorithm is tested by comparing the convergence characteristics and accuracy to those of the standard single-grid algorithm for several benchmark cases. The AMR algorithm provides a reduction in memory requirements and maintains the convergence characteristics of the standard single-grid algorithm; however, the cases illustrate that efficiency gains of the AMR algorithm will not be fully realized until three-dimensional geometries are considered.
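
    The solve-estimate-refine cycle described in this abstract can be summarized by a short driver loop; this is a sketch under assumed helper callbacks (solve, estimate, and refine are placeholders, not the authors' routines).

        def solve_with_amr(base_grid, solve, estimate, refine, tol, max_levels=5):
            # Solve on the current hierarchy, estimate the spatial
            # discretization error, overlay a finer level where the estimate
            # exceeds tol, and repeat until the error is acceptable.
            hierarchy = [base_grid]
            while True:
                solution = solve(hierarchy)
                flagged = estimate(solution, tol)
                if not flagged or len(hierarchy) > max_levels:
                    return solution, hierarchy
                hierarchy.append(refine(hierarchy[-1], flagged))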

  18. Elliptic Solvers for Adaptive Mesh Refinement Grids

    SciTech Connect

    Quinlan, D.J.; Dendy, J.E., Jr.; Shapira, Y.

    1999-06-03

    We are developing multigrid methods that will efficiently solve elliptic problems with anisotropic and discontinuous coefficients on adaptive grids. The final product will be a library that provides for the simplified solution of such problems. This library will directly benefit the efforts of other Laboratory groups. The focus of this work is research on serial and parallel elliptic algorithms and the inclusion of our black-box multigrid techniques into this new setting. The approach applies the Los Alamos object-oriented class libraries that greatly simplify the development of serial and parallel adaptive mesh refinement applications. In the final year of this LDRD, we focused on putting the software together; in particular, we completed the final AMR++ library, we wrote tutorials and manuals, and we built example applications. We implemented the Fast Adaptive Composite Grid method as the principal elliptic solver. We presented results at the Overset Grid Conference and at other, more AMR-specific conferences. We worked on optimization of serial and parallel performance and published several papers on the details of this work. Performance remains an important issue and is the subject of continuing research work.

  19. Efficiency considerations in triangular adaptive mesh refinement.

    PubMed

    Behrens, Jörn; Bader, Michael

    2009-11-28

    Locally or adaptively refined meshes have been successfully applied to simulation applications involving multi-scale phenomena in the geosciences. In particular, for situations with complex geometries or domain boundaries, meshes with triangular or tetrahedral cells demonstrate their superior ability to accurately represent relevant realistic features. On the other hand, these methods require more complex data structures and are therefore less easily implemented, maintained and optimized. Acceptance in the Earth-system modelling community is still low. One of the major drawbacks is posed by indirect addressing due to unstructured or dynamically changing data structures and the correspondingly lower efficiency of the related computations. In this paper, we will derive several strategies to circumvent this efficiency constraint. In particular, we will apply recent computational science methods in combination with results of classical mathematics (space-filling curves) in order to linearize the complex data and access structure.
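
    A simple example of the space-filling-curve idea is Morton (Z-order) indexing, shown below for integer cell coordinates. The paper itself concerns triangular meshes, where Sierpinski-type curves are the natural choice, so treat this sketch only as the structured-grid analogue of the linearization step.

        def morton_index(x, y, bits=16):
            # Interleave the bits of (x, y) so that cells that are close in
            # 2-D space tend to be close in the resulting 1-D ordering.
            z = 0
            for b in range(bits):
                z |= ((x >> b) & 1) << (2 * b)
                z |= ((y >> b) & 1) << (2 * b + 1)
            return z

        cells = [(i, j) for i in range(4) for j in range(4)]
        cells.sort(key=lambda c: morton_index(*c))  # cache-friendlier traversal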

  1. COSMOLOGICAL ADAPTIVE MESH REFINEMENT MAGNETOHYDRODYNAMICS WITH ENZO

    SciTech Connect

    Collins, David C.; Xu, Hao; Norman, Michael L.; Li, Hui; Li, Shengtai

    2010-02-01

    In this work, we present EnzoMHD, the extension of the cosmological code Enzo to include the effects of magnetic fields through the ideal magnetohydrodynamics approximation. We use a higher order Godunov method for the computation of interface fluxes. We use two constrained transport methods to compute the electric field from those interface fluxes, which simultaneously advances the induction equation and maintains the divergence of the magnetic field. A second-order divergence-free reconstruction technique is used to interpolate the magnetic fields in the block-structured adaptive mesh refinement framework already extant in Enzo. This reconstruction also preserves the divergence of the magnetic field to machine precision. We use operator splitting to include gravity and cosmological expansion. We then present a series of cosmological and non-cosmological test problems to demonstrate the quality of solution resulting from this combination of solvers.
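
    The constraint that constrained transport maintains can be verified directly on the staggered mesh. Below is a minimal sketch (not EnzoMHD code; the face-centered array shapes are an assumption about the layout):

        import numpy as np

        def div_B(Bx, By, dx, dy):
            # Bx lives on x-faces, shape (nx+1, ny); By on y-faces,
            # shape (nx, ny+1). Returns the cell-centered finite-volume
            # divergence, which constrained transport keeps at machine
            # precision in every cell.
            return (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy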

  2. Visualization of Scalar Adaptive Mesh Refinement Data

    SciTech Connect

    Weber, Gunther H.; Beckner, Vince E.; Childs, Hank; Ligocki, Terry J.; Miller, Mark C.; Van Straalen, Brian; Bethel, E. Wes

    2007-12-06

    Adaptive Mesh Refinement (AMR) is a highly effective computational method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations, which must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR grids as a first-class data type, and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, an open-source visualization tool, to accommodate AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR scalar data visualization research.

  3. GRChombo: Numerical relativity with adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Clough, Katy; Figueras, Pau; Finkel, Hal; Kunesch, Markus; Lim, Eugene A.; Tunyasuvunakool, Saran

    2015-12-01

    In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block-structured Berger-Rigoutsos grid generation. The code supports non-trivial ‘many-boxes-in-many-boxes’ mesh hierarchies and massive parallelism through the Message Passing Interface. GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3 + 1 setting, while also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.

  4. Visualization Tools for Adaptive Mesh Refinement Data

    SciTech Connect

    Weber, Gunther H.; Beckner, Vincent E.; Childs, Hank; Ligocki, Terry J.; Miller, Mark C.; Van Straalen, Brian; Bethel, E. Wes

    2007-05-09

    Adaptive Mesh Refinement (AMR) is a highly effective method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations that must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR as a first-class data type, and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, an open-source visualization tool, to accommodate AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR visualization research and tools and describe how VisIt currently handles AMR data.

  5. Adaptive Mesh Refinement Simulations of Relativistic Binaries

    NASA Astrophysics Data System (ADS)

    Motl, Patrick M.; Anderson, M.; Lehner, L.; Olabarrieta, I.; Tohline, J. E.; Liebling, S. L.; Rahman, T.; Hirschman, E.; Neilsen, D.

    2006-09-01

    We present recent results from our efforts to evolve relativistic binaries composed of compact objects. We simultaneously solve the general relativistic hydrodynamics equations to evolve the material components of the binary and Einstein's equations to evolve the space-time. These two codes are coupled through an adaptive mesh refinement driver (had). One of the ultimate goals of this project is to address the merger of a neutron star and black hole and assess the possible observational signature of such systems as gamma ray bursts. This work has been supported in part by NSF grants AST 04-07070 and PHY 03-26311 and in part through NASA's ATP program grant NAG5-13430. The computations were performed primarily at NCSA through grant MCA98N043 and at LSU's Center for Computation & Technology.

  6. A parallel adaptive mesh refinement algorithm

    NASA Technical Reports Server (NTRS)

    Quirk, James J.; Hanebutte, Ulf R.

    1993-01-01

    Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.

  7. Adaptive h-refinement for reduced-order models

    DOE PAGES

    Carlberg, Kevin T.

    2014-11-05

    Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
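
    The splitting operation at the heart of this scheme is easy to state: each child vector keeps the parent's entries on one cluster of state indices and is zero elsewhere, so the children have disjoint support and together span the parent. A schematic sketch follows (the index partition is assumed to come from the offline k-means tree):

        import numpy as np

        def split_basis_vector(v, index_groups):
            # v: one reduced-basis vector over the full-order state.
            # index_groups: disjoint index sets from the offline clustering.
            children = []
            for idx in index_groups:
                child = np.zeros_like(v)
                child[idx] = v[idx]   # restrict v to this cluster's support
                children.append(child)
            return children

        v = np.array([1.0, 2.0, 3.0, 4.0])
        kids = split_basis_vector(v, [np.array([0, 1]), np.array([2, 3])])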

  8. Parallel adaptive mesh refinement for electronic structure calculations

    SciTech Connect

    Kohn, S.; Weare, J.; Ong, E.; Baden, S.

    1996-12-01

    We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradients with multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.

  9. A multiblock/multilevel mesh refinement procedure for CFD computations

    NASA Astrophysics Data System (ADS)

    Teigland, Rune; Eliassen, Inge K.

    2001-07-01

    A multiblock/multilevel algorithm with local refinement for general two- and three-dimensional fluid flow is presented. The patch-based local refinement procedure is presented in detail, along with its algorithmic implementation. The multiblock implementation is essentially block-unstructured, i.e. each block has its own local curvilinear co-ordinate system. Refined grid patches can be put anywhere in the computational domain and can extend across block boundaries. To simplify the implementation, while still maintaining sufficient generality, the refinement is restricted to successively halving the grid size within a selected patch. The multiblock approach is implemented within the framework of the well-known SIMPLE solution strategy. Computational experiments showing the effect of using the multilevel solution procedure are presented for a sample elliptic problem and a few benchmark problems of computational fluid dynamics (CFD).

  10. Adaptive mesh refinement for stochastic reaction-diffusion processes

    SciTech Connect

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2011-01-01

    We present an algorithm for adaptive mesh refinement applied to mesoscopic stochastic simulations of spatially evolving reaction-diffusion processes. The transition rates for the diffusion process are derived on adaptive, locally refined structured meshes. Convergence of the diffusion process is presented and the fluctuations of the stochastic process are verified. Furthermore, a refinement criterion is proposed for the evolution of the adaptive mesh. The method is validated in simulations of reaction-diffusion processes as described by the Fisher-Kolmogorov and Gray-Scott equations.

  11. An adaptive grid-based all hexahedral meshing algorithm based on 2-refinement.

    SciTech Connect

    Edgel, Jared; Benzley, Steven E.; Owen, Steven James

    2010-08-01

    Most adaptive mesh generation algorithms employ a 3-refinement method. This method, although easy to employ, provides a mesh that is often too coarse in some areas and over-refined in others, and because it generates 27 new hexes in place of a single hex, it offers little control over mesh density. This paper presents an adaptive all-hexahedral grid-based meshing algorithm that instead employs a 2-refinement insertion method, which divides the hex to be refined into eight new hexes and therefore allows much greater control over mesh density. The resulting meshes are efficient for analysis, providing high element density in specific locations and reduced density in other areas. In addition, this tool can be effectively used for inside-out hexahedral grid-based schemes, using Cartesian structured grids for the base mesh, which have shown great promise in accommodating automatic all-hexahedral algorithms. A two-layer transition zone increases element quality and keeps transitions from lower to higher mesh densities smooth, and templates allow both convex and concave refinement.
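
    The density contrast between the two schemes follows from simple counting: subdividing each edge of a hexahedron into r segments produces r^3 children, so 3-refinement replaces one hex with 3^3 = 27 cells wherever refinement is requested, while 2-refinement replaces it with 2^3 = 8, a much smaller jump in local mesh density.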

  12. Structured Adaptive Mesh Refinement Application Infrastructure

    SciTech Connect

    2010-07-15

    SAMRAI is an object-oriented support library for structured adaptive mesh refinement (SAMR) simulation of computational science problems modeled by systems of partial differential equations (PDEs). SAMRAI is developed and maintained in the Center for Applied Scientific Computing (CASC) under ASCI ITS and PSE support. SAMRAI is used in a variety of application research efforts at LLNL and in academia. These applications are developed in collaboration with SAMRAI development team members.

  13. Adaptive particle refinement and derefinement applied to the smoothed particle hydrodynamics method

    NASA Astrophysics Data System (ADS)

    Barcarolo, D. A.; Le Touzé, D.; Oger, G.; de Vuyst, F.

    2014-09-01

    SPH simulations are usually performed with a uniform particle distribution. New techniques have recently been proposed to enable the use of spatially varying particle distributions, which encouraged the development of automatic adaptivity and particle refinement/derefinement algorithms. All these efforts resulted in very interesting and promising procedures leading to more efficient and faster SPH simulations. In this article, a family of particle refinement techniques is reviewed and a new derefinement technique is proposed and validated through several test cases involving both free-surface and viscous flows. This new procedure allows higher resolution in the regions requiring increased accuracy, and several levels of refinement can be used, as is often done with adaptive mesh refinement in mesh-based methods.

  14. Divergence-Free Adaptive Mesh Refinement for Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Balsara, Dinshaw S.

    2001-12-01

    Several physical systems, such as nonrelativistic and relativistic magnetohydrodynamics (MHD), radiation MHD, electromagnetics, and incompressible hydrodynamics, satisfy Stokes' law type equations for the divergence-free evolution of vector fields. In this paper we present a full-fledged scheme for the second-order accurate, divergence-free evolution of vector fields on an adaptive mesh refinement (AMR) hierarchy. We focus here on adaptive mesh MHD. However, the scheme has applicability to the other systems of equations mentioned above. The scheme is based on making a significant advance in the divergence-free reconstruction of vector fields. In that sense, it complements the earlier work of D. S. Balsara and D. S. Spicer (1999, J. Comput. Phys. 149, 270), where we discussed the divergence-free time-update of vector fields which satisfy Stokes' law type evolution equations. Our advance in divergence-free reconstruction of vector fields is such that it reduces to the total variation diminishing (TVD) property for one-dimensional evolution and yet goes beyond it in multiple dimensions. For that reason, it is extremely suitable for the construction of higher order Godunov schemes for MHD. Both the two-dimensional and three-dimensional reconstruction strategies are developed. A slight extension of the divergence-free reconstruction procedure yields a divergence-free prolongation strategy for prolonging magnetic fields on AMR hierarchies. Divergence-free restriction is also discussed. Because our work is based on an integral formulation, divergence-free restriction and prolongation can be carried out on AMR meshes with any integral refinement ratio, though we specialize the expressions for the most popular situation where the refinement ratio is two. Furthermore, we pay attention to the fact that in order to efficiently evolve the MHD equations on AMR hierarchies, the refined meshes must evolve in time with time steps that are a fraction of their parent mesh's time step.

  15. Elliptic Solvers with Adaptive Mesh Refinement on Complex Geometries

    SciTech Connect

    Philip, B.

    2000-07-24

    Adaptive Mesh Refinement (AMR) is a numerical technique for locally tailoring the resolution computational grids. Multilevel algorithms for solving elliptic problems on adaptive grids include the Fast Adaptive Composite grid method (FAC) and its parallel variants (AFAC and AFACx). Theory that confirms the independence of the convergence rates of FAC and AFAC on the number of refinement levels exists under certain ellipticity and approximation property conditions. Similar theory needs to be developed for AFACx. The effectiveness of multigrid-based elliptic solvers such as FAC, AFAC, and AFACx on adaptively refined overlapping grids is not clearly understood. Finally, a non-trivial eye model problem will be solved by combining the power of using overlapping grids for complex moving geometries, AMR, and multilevel elliptic solvers.

  16. Protein structure refinement with adaptively restrained homologous replicas.

    PubMed

    Della Corte, Dennis; Wildberg, André; Schröder, Gunnar F

    2016-09-01

    A novel protein refinement protocol is presented which utilizes molecular dynamics (MD) simulations of an ensemble of adaptively restrained homologous replicas. This approach adds evolutionary information to the force field and reduces random conformational fluctuations by coupling several replicas. It is shown that this protocol refines the majority of models from the CASP11 refinement category and that larger conformational changes of the starting structure are possible than with current state-of-the-art methods. The performance of this protocol in the CASP11 experiment is discussed. We found that the quality of the refined model is correlated with the structural variance of the coupled replicas, which therefore provides a good estimator of model quality. Furthermore, some remarkable refinement results are discussed in detail. Proteins 2016; 84(Suppl 1):302-313. PMID: 26441154

  17. Adaptive mesh refinement techniques for 3-D skin electrode modeling.

    PubMed

    Sawicki, Bartosz; Okoniewski, Michal

    2010-03-01

    In this paper, we develop a 3-D adaptive mesh refinement technique. The algorithm is constructed with an electrical impedance tomography forward problem and the finite-element method in mind, but is applicable to a much wider class of problems. We use the method to evaluate the distribution of currents injected into a model of a human body through skin contact electrodes. We demonstrate that the technique leads to a significantly improved solution, particularly near the electrodes. We discuss error estimation, efficiency, and quality of the refinement algorithm, and methods that allow for preserving mesh attributes in the refinement process.

  18. Effects of adaptive refinement on the inverse EEG solution

    NASA Astrophysics Data System (ADS)

    Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.

    1995-10-01

    One of the fundamental problems in electroencephalography can be characterized by an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically, the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution; moreover, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of electric and potential fields within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.

  19. Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics

    SciTech Connect

    Lomov, I; Pember, R; Greenough, J; Liu, B

    2005-10-18

    We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses the hierarchical, structured grid approach first developed by Berger and Oliger (1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors during the separate advances. The methodology presented here is based on a single grid algorithm developed for multimaterial gas dynamics by Colella et al. (1993), refined by Greenough et al. (1995), and extended to the solution of solid mechanics problems with significant strength by Lomov and Rubin (2003). The single grid algorithm uses a second-order Godunov scheme with an approximate single fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified by the fact that highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion is solely concerned with hydrodynamics modeling. Results from a number of simulations for flows with and without strength will be presented.
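
    The recursive advance-and-synchronize integration described above follows the classic Berger-Oliger pattern, which can be sketched as follows (schematic only; the level objects with step and synchronize_with methods are invented here, not Geodyn/Raptor interfaces):

        def advance(level, hierarchy, dt, refine_ratio=2):
            # Advance this level by dt, take refine_ratio substeps on the
            # next finer level to reach the same time, then synchronize the
            # two levels to remove conservation errors at coarse-fine edges.
            hierarchy[level].step(dt)
            if level + 1 < len(hierarchy):
                for _ in range(refine_ratio):
                    advance(level + 1, hierarchy, dt / refine_ratio, refine_ratio)
                hierarchy[level].synchronize_with(hierarchy[level + 1])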

  20. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically Cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extending an existing serial code which uses a logically Cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later, if they so desire, adding adaptivity.

  1. Procedures and computer programs for telescopic mesh refinement using MODFLOW

    USGS Publications Warehouse

    Leake, Stanley A.; Claar, David V.

    1999-01-01

    Ground-water models are commonly used to evaluate flow systems in areas that are small relative to entire aquifer systems. In many of these analyses, simulation of the entire flow system is not desirable or will not allow sufficient detail in the area of interest. The procedure of telescopic mesh refinement allows use of a small, detailed model in the area of interest by taking boundary conditions from a larger model that encompasses the model in the area of interest. Some previous studies have used telescopic mesh refinement; however, better procedures are needed in carrying out telescopic mesh refinement using the U.S. Geological Survey ground-water flow model, referred to as MODFLOW. This report presents general procedures and three computer programs for use in telescopic mesh refinement with MODFLOW. The first computer program, MODTMR, constructs MODFLOW data sets for a local or embedded model using MODFLOW data sets and simulation results from a regional or encompassing model. The second computer program, TMRDIFF, provides a means of comparing head or drawdown in the local model with head or drawdown in the corresponding area of the regional model. The third program, RIVGRID, provides a means of constructing data sets for the River Package, Drain Package, General-Head Boundary Package, and Stream Package for regional and local models using grid-independent data specifying locations of these features. RIVGRID may be needed in some applications of telescopic mesh refinement because regional-model data sets do not contain enough information on locations of head-dependent flow features to properly locate the features in local models. The program is a general utility program that can be used in constructing data sets for head-dependent flow packages for any MODFLOW model under construction.

  2. Projection of Discontinuous Galerkin Variable Distributions During Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Ballesteros, Carlos; Herrmann, Marcus

    2012-11-01

    Adaptive mesh refinement (AMR) methods decrease the computational expense of CFD simulations by increasing the density of solution cells only in areas of the computational domain that are of interest in that particular simulation. In particular, unstructured Cartesian AMR has several advantages over other AMR approaches, as it does not require the creation of numerous guard-cell blocks, neighboring-cell lookups become straightforward, and the hexahedral nature of the mesh cells greatly simplifies the refinement and coarsening operations. The h-refinement from this AMR approach can be leveraged by making use of highly accurate but computationally costly methods, such as the Discontinuous Galerkin (DG) numerical method. DG methods are capable of high orders of accuracy while retaining stencil locality, a property critical to AMR using unstructured meshes. However, the use of DG methods with AMR requires the use of special flux and projection operators during refinement and coarsening operations in order to retain the high order of accuracy. The flux and projection operators needed for refinement and coarsening of unstructured Cartesian adaptive meshes using Legendre polynomial test functions will be discussed, and their performance will be shown using standard test cases.
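
    In one dimension the refinement projection for a Legendre basis takes only a few lines; the sketch below is illustrative only (the paper's operators act on hexahedral cells). It re-expands a parent-cell polynomial on each half-interval by L2 projection with Gauss-Legendre quadrature, which is exact at this order.

        import numpy as np
        from numpy.polynomial import legendre as leg

        def project_to_children(coef):
            # coef: Legendre coefficients of the parent solution on [-1, 1].
            n = len(coef)
            x, w = leg.leggauss(n)   # quadrature nodes on the child interval
            children = []
            for a, b in [(-1.0, 0.0), (0.0, 1.0)]:
                xp = 0.5 * (b - a) * x + 0.5 * (a + b)  # child nodes, parent coords
                vals = leg.legval(xp, coef)             # parent solution there
                c = [(2 * k + 1) / 2.0 *
                     np.sum(w * vals * leg.legval(x, [0] * k + [1]))
                     for k in range(n)]
                children.append(np.array(c))
            return children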

  3. Adaptive Mesh Refinement Algorithms for Parallel Unstructured Finite Element Codes

    SciTech Connect

    Parsons, I D; Solberg, J M

    2006-02-03

    This project produced algorithms for and software implementations of adaptive mesh refinement (AMR) methods for solving practical solid and thermal mechanics problems on multiprocessor parallel computers using unstructured finite element meshes. The overall goal is to provide computational solutions that are accurate to some prescribed tolerance, and adaptivity is the correct path toward this goal. These new tools will enable analysts to conduct more reliable simulations at reduced cost, both in terms of analyst and computer time. Previous academic research in the field of adaptive mesh refinement has produced a voluminous literature focused on error estimators and demonstration problems; relatively little progress has been made on producing efficient implementations suitable for large-scale problem solving on state-of-the-art computer systems. Research issues that were considered include: effective error estimators for nonlinear structural mechanics; local meshing at irregular geometric boundaries; and constructing efficient software for parallel computing environments.

  4. AMR++: Object-Oriented Parallel Adaptive Mesh Refinement

    SciTech Connect

    Quinlan, D.; Philip, B.

    2000-02-02

    Adaptive mesh refinement (AMR) computations are complicated by their dynamic nature. The development of solvers for realistic applications is complicated by both the complexity of the AMR and the geometry of realistic problem domains. The additional complexity of distributed memory parallelism within such AMR applications most commonly exceeds the level of complexity that can be reasonably maintained with traditional approaches toward software development. This paper will present the details of our object-oriented work on the simplification of the use of adaptive mesh refinement on applications with complex geometries for both serial and distributed memory parallel computation. We will present an independent set of object-oriented abstractions (C++ libraries) well suited to the development of such seemingly intractable scientific computations. As an example of the use of this object-oriented approach we will present recent results of an application modeling fluid flow in the eye. Within this example, the geometry is too complicated for a single curvilinear coordinate grid and so a set of overlapping curvilinear coordinate grids is used. Adaptive mesh refinement and the grid generation work required to support the refinement process are coupled together in the solution of essentially elliptic equations within this domain. This paper will focus on the management of complexity within development of the AMR++ library, which forms a part of the Overture object-oriented framework for the solution of partial differential equations within scientific computing.

  5. An adaptive embedded mesh procedure for leading-edge vortex flows

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.

    1989-01-01

    A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
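
    The threshold-selection step can be made concrete with a few lines of hypothetical code: sweep candidate thresholds, record how many cells each one flags, and take the threshold at the point of maximum curvature (the knee) of that curve.

        import numpy as np

        def knee_threshold(indicator, candidates):
            # indicator: per-cell refinement parameter; candidates: an
            # increasing array of trial thresholds. N(t) counts the cells
            # flagged at threshold t; the knee of N(t) sets the threshold.
            N = np.array([(indicator > t).sum() for t in candidates], float)
            curvature = np.abs(np.gradient(np.gradient(N)))
            return candidates[int(np.argmax(curvature))]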

  6. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear what the most suitable criteria for adaptation are. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times
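
    The goal-oriented (dual-weighted residual) estimate mentioned here weights local residuals by an adjoint solution. Schematically, for an output functional J, discrete solution u_h, and adjoint (dual) solution z,

        J(u) - J(u_h) \approx \sum_K \rho_K(u_h)\, \omega_K(z),

    where \rho_K measures the residual of u_h on cell K and \omega_K is a weight derived from z; the cells with the largest products are refined first, so the mesh is adapted specifically for the observable of interest.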

  7. Adaptive mesh refinement for shocks and material interfaces

    SciTech Connect

    Dai, William Wenlong

    2010-01-01

    There are three kinds of adaptive mesh refinement (AMR) for structured meshes. Block-based AMR sometimes over-refines meshes. Cell-based AMR treats each cell individually and thus loses the advantage of the regular structure of the mesh. Patch-based AMR is intended to combine the advantages of block- and cell-based AMR, i.e., the regular structure of the meshes and sharply delimited regions of refinement. But patch-based AMR has its own difficulties; for example, it typically cannot preserve symmetries of physics problems. In this paper, we will present an approach to patch-based AMR for hydrodynamics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, management of patches, and load balance. The special features of this patch-based AMR include symmetry preserving, efficiency of refinement across shock fronts and material interfaces, a special implementation of flux correction, and patch management in parallel computing environments. To demonstrate the capability of the AMR framework, we will show both two- and three-dimensional hydrodynamics simulations with many levels of refinement.
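
    The clustering step (grouping flagged cells into rectangular patches) can be illustrated with a simplified signature-based splitter in the spirit of Berger-Rigoutsos; the sketch below is not Dai's algorithm and ignores efficiency thresholds and symmetry preservation:

      import numpy as np

      def cluster(flags, origin=(0, 0)):
          # Trim empty margins of the flagged region.
          rows = np.flatnonzero(flags.any(axis=1))
          cols = np.flatnonzero(flags.any(axis=0))
          if rows.size == 0:
              return []
          r0, c0 = rows[0], cols[0]
          box = flags[r0:rows[-1] + 1, c0:cols[-1] + 1]
          origin = (origin[0] + r0, origin[1] + c0)
          # Split at the first row/column whose "signature" (flag count) is zero.
          for axis in (0, 1):
              sig = box.sum(axis=1 - axis)
              holes = np.flatnonzero(sig == 0)
              if holes.size:
                  k = holes[0]
                  top, bottom = np.split(box, [k], axis=axis)
                  shifted = ((origin[0] + k, origin[1]) if axis == 0
                             else (origin[0], origin[1] + k))
                  return cluster(top, origin) + cluster(bottom, shifted)
          return [(origin, box.shape)]          # one rectangular patch

      flags = np.zeros((16, 16), dtype=bool)
      flags[2:5, 2:6] = True                    # two separated flagged regions
      flags[10:14, 9:12] = True
      print(cluster(flags))                     # [((2, 2), (3, 4)), ((10, 9), (4, 3))]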

  8. A fourth order accurate adaptive mesh refinement method for Poisson's equation

    SciTech Connect

    Barad, Michael; Colella, Phillip

    2004-08-20

    We present a block-structured adaptive mesh refinement (AMR) method for computing solutions to Poisson's equation in two and three dimensions. It is based on a conservative, finite-volume formulation of the classical Mehrstellen methods. This is combined with finite volume AMR discretizations to obtain a method that is fourth-order accurate in solution error, and with easily verifiable solvability conditions for Neumann and periodic boundary conditions.
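
    For reference, one standard form of the 2D Mehrstellen (compact nine-point) discretization that such fourth-order methods build on is

      \frac{1}{6h^2}\Big[\,4\big(u_{i+1,j}+u_{i-1,j}+u_{i,j+1}+u_{i,j-1}\big)
        + u_{i+1,j+1}+u_{i+1,j-1}+u_{i-1,j+1}+u_{i-1,j-1} - 20\,u_{i,j}\Big]
        = f_{i,j} + \frac{h^2}{12}\,\Delta_h f_{i,j},

    where \Delta_h is the usual five-point Laplacian; the h^2/12 correction applied to the right-hand side is what raises the compact stencil from second to fourth order.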

  9. Block-structured adaptive mesh refinement - theory, implementation and application

    SciTech Connect

    Deiterding, Ralf

    2011-01-01

    Structured adaptive mesh refinement (SAMR) techniques can enable cutting-edge simulations of problems governed by conservation laws. Focusing on the strictly hyperbolic case, these notes explain all algorithmic and mathematical details of a technically relevant implementation tailored for distributed memory computers. An overview of the background of commonly used finite volume discretizations for gas dynamics is included and typical benchmarks to quantify accuracy and performance of the dynamically adaptive code are discussed. Large-scale simulations of shock-induced realistic combustion in non-Cartesian geometry and shock-driven fluid-structure interaction with fully coupled dynamic boundary motion demonstrate the applicability of the discussed techniques for complex scenarios.

  10. Fully implicit adaptive mesh refinement algorithm for reduced MHD

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Pernice, Michael; Chacón, Luis

    2006-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. References: L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006).
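
    "Jacobian-free" here means the Newton linear systems are solved by a Krylov method that needs only Jacobian-vector products, which can be approximated by differencing the nonlinear residual. A minimal sketch on a toy 1D problem (the residual, the choice of eps, and all names are illustrative assumptions, not this group's solver):

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def residual(u):
          # Toy nonlinear residual F(u) = 0: discrete u'' + 0.1*exp(u),
          # homogeneous Dirichlet boundaries.
          F = np.empty_like(u)
          F[0], F[-1] = u[0], u[-1]
          F[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:] + 0.1 * np.exp(u[1:-1])
          return F

      def jfnk(u, tol=1e-10, max_newton=20):
          for _ in range(max_newton):
              F = residual(u)
              if np.linalg.norm(F) < tol:
                  break
              eps = 1e-7 * (1.0 + np.linalg.norm(u))
              # Matrix-free Jacobian-vector product: J v ~ (F(u + eps v) - F(u)) / eps
              J = LinearOperator((u.size, u.size),
                                 matvec=lambda v: (residual(u + eps * v) - F) / eps)
              du, _ = gmres(J, -F)          # the Jacobian matrix is never formed
              u = u + du
          return u

      u = jfnk(np.zeros(65))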

  11. Fully Threaded Tree for Adaptive Refinement Fluid Dynamics Simulations

    NASA Technical Reports Server (NTRS)

    Khokhlov, A. M.

    1997-01-01

    A fully threaded tree (FTT) for adaptive refinement of regular meshes is described. By using a tree threaded at all levels, tree traversals for finding nearest neighbors are avoided. All operations on the tree, including tree modifications, are O(N), where N is the number of cells, and are performed in parallel. An efficient implementation of the tree is described that requires 2N words of memory. A filtering algorithm for removing high frequency noise during mesh refinement is described. An FTT can be used in various numerical applications. In this paper, it is applied to the integration of the Euler equations of fluid dynamics. An adaptive mesh time stepping algorithm is described in which different time steps are used at different levels of the tree. Time stepping and mesh refinement are interleaved to avoid the extensive buffer layers of fine mesh that were otherwise required ahead of moving shocks. Test examples are presented, and the FTT performance is evaluated. A three-dimensional simulation of the interaction of a shock wave with a spherical bubble is carried out, showing the development of azimuthal perturbations on the bubble surface.
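
    The key property, neighbor access without tree traversal, comes from storing neighbor links in every cell and updating them during refinement. A 1D toy sketch (the paper's tree is 3D, packs cells into octs, and fits in 2N words; the class layout here is ours):

      class Cell:
          """One cell of a (1D, for brevity) fully threaded tree: besides parent
          and children, each cell keeps direct links to its level neighbors."""
          __slots__ = ("level", "left", "right", "children", "parent")

          def __init__(self, level=0, parent=None):
              self.level, self.parent = level, parent
              self.left = self.right = None        # threaded neighbor links
              self.children = None

          def refine(self):
              a = Cell(self.level + 1, self)
              b = Cell(self.level + 1, self)
              a.right, b.left = b, a
              # Thread the new cells to the children of this cell's neighbors,
              # if those neighbors are refined too: O(1), no tree traversal.
              if self.left is not None and self.left.children:
                  a.left = self.left.children[-1]
                  self.left.children[-1].right = a
              if self.right is not None and self.right.children:
                  b.right = self.right.children[0]
                  self.right.children[0].left = b
              self.children = (a, b)

      root = Cell()
      root.refine()
      root.children[0].refine()
      root.children[1].refine()
      print(root.children[0].children[1].right is root.children[1].children[0])  # True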

  12. Implicit adaptive mesh refinement for 2D reduced resistive magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Chacón, Luis; Pernice, Michael

    2008-10-01

    An implicit structured adaptive mesh refinement (SAMR) solver for 2D reduced magnetohydrodynamics (MHD) is described. The time-implicit discretization is able to step over fast normal modes, while the spatial adaptivity resolves thin, dynamically evolving features. A Jacobian-free Newton-Krylov method is used for the nonlinear solver engine. For preconditioning, we have extended the optimal "physics-based" approach developed in [L. Chacón, D.A. Knoll, J.M. Finn, An implicit, nonlinear reduced resistive MHD solver, J. Comput. Phys. 178 (2002) 15-36] (which employed multigrid solver technology in the preconditioner for scalability) to SAMR grids using the well-known Fast Adaptive Composite grid (FAC) method [S. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1989]. A grid convergence study demonstrates that the solver performance is independent of the number of grid levels and only depends on the finest resolution considered, and that it scales well with grid refinement. The study of error generation and propagation in our SAMR implementation demonstrates that high-order (cubic) interpolation during regridding, combined with a robustly damping second-order temporal scheme such as BDF2, is required to minimize impact of grid errors at coarse-fine interfaces on the overall error of the computation for this MHD application. We also demonstrate that our implementation features the desired property that the overall numerical error is dependent only on the finest resolution level considered, and not on the base-grid resolution or on the number of refinement levels present during the simulation. We demonstrate the effectiveness of the tool on several challenging problems.
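
    For reference, the standard BDF2 formula for u_t = f(u), with the strong high-frequency damping alluded to above, is

      \frac{3\,u^{n+1} - 4\,u^{n} + u^{n-1}}{2\,\Delta t} = f\big(u^{n+1}\big),

    a second-order, L-stable implicit scheme; this robust damping is consistent with the abstract's observation that such a scheme limits the impact of coarse-fine interface errors on the overall computation.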

  13. Structured adaptive mesh refinement on the Connection Machine

    SciTech Connect

    Berger, M.J. (Courant Inst. of Mathematical Sciences); Saltzman, J.S.

    1993-01-01

    Adaptive mesh refinement has proven itself to be a useful tool in a large collection of applications. By refining only a small portion of the computational domain, computational savings of up to a factor of 80 in three-dimensional calculations have been obtained on serial machines. A natural question is: can this algorithm be used on massively parallel machines and still achieve the same efficiencies? We have designed a data layout scheme for mapping grid points to processors that preserves locality and minimizes global communication for the CM-200. The effect of the data layout scheme is that, at the finest level, nearby grid points from adjacent grids in physical space are in adjacent memory locations. Furthermore, coarse grid points are arranged in memory to be near their associated fine grid points. We show applications of the algorithm to inviscid compressible fluid flow in two space dimensions.
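
    One standard locality-preserving layout of this general kind interleaves coordinate bits into a Morton key and hands contiguous key ranges to processors; this illustrates the idea only and is not the paper's CM-200 layout:

      def morton(ix, iy, bits=8):
          # Interleave the bits of (ix, iy): nearby 2D points get nearby keys,
          # so a block of consecutive keys maps to a compact spatial region.
          key = 0
          for b in range(bits):
              key |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
          return key

      def owner(ix, iy, nprocs=16):
          return morton(ix, iy) * nprocs // (1 << 16)  # contiguous key range per proc

      # A fine point (2*ix, 2*iy) has key 4*morton(ix, iy), so parents and their
      # children tend to land on the same processor.
      print(owner(3, 5), owner(6, 10))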

  14. AMRA: An Adaptive Mesh Refinement hydrodynamic code for astrophysics

    NASA Astrophysics Data System (ADS)

    Plewa, T.; Müller, E.

    2001-08-01

    Implementation details and test cases of a newly developed hydrodynamic code, amra, are presented. The numerical scheme exploits the adaptive mesh refinement technique coupled to modern high-resolution schemes which are suitable for relativistic and non-relativistic flows. Various physical processes are incorporated using the operator splitting approach, and include self-gravity, nuclear burning, physical viscosity, implicit and explicit schemes for conductive transport, simplified photoionization, and radiative losses from an optically thin plasma. Several aspects related to the accuracy and stability of the scheme are discussed in the context of hydrodynamic and astrophysical flows.

  15. Computational relativistic astrophysics with adaptive mesh refinement: Testbeds

    SciTech Connect

    Evans, Edwin; Iyer, Sai; Tao Jian; Wolfmeyer, Randy; Zhang Huimin; Schnetter, Erik; Suen, Wai-Mo

    2005-04-15

    We have carried out numerical simulations of strongly gravitating systems based on the Einstein equations coupled to the relativistic hydrodynamic equations using adaptive mesh refinement (AMR) techniques. We show AMR simulations of NS binary inspiral and coalescence carried out on a workstation having an accuracy equivalent to that of a 1025^3 regular unigrid simulation, which is, to the best of our knowledge, larger than all previous simulations of similar NS systems on supercomputers. We believe this capability opens new possibilities in general relativistic simulations.

  16. Adaptive mesh refinement for 1-dimensional gas dynamics

    SciTech Connect

    Hedstrom, G.; Rodrigue, G.; Berger, M.; Oliger, J.

    1982-01-01

    We consider the solution of the one-dimensional equations of gas dynamics. Accurate numerical solutions are difficult to obtain on a given spatial mesh because of the existence of physical regions where components of the exact solution are either discontinuous or have large gradient changes. Numerical methods treat these phenomena in a variety of ways. In this paper, the method of adaptive mesh refinement is used. A thorough description of this method for general hyperbolic systems is given elsewhere, and only the properties of the method pertinent to this system are elaborated here.

  17. Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Modiano, David; Colella, Phillip

    1995-01-01

    To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for a transonic flow over a NACA0012 airfoil (AGARD-03 test case) and a reflection of a shock over a double wedge.

  18. Adaptive mesh refinement in curvilinear body-fitted grid systems

    NASA Astrophysics Data System (ADS)

    Steinthorsson, Erlendur; Modiano, David; Colella, Phillip

    1995-10-01

    To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for a transonic flow over a NACA0012 airfoil (AGARD-03 test case) and a reflection of a shock over a double wedge.

  19. CONSTRAINED-TRANSPORT MAGNETOHYDRODYNAMICS WITH ADAPTIVE MESH REFINEMENT IN CHARM

    SciTech Connect

    Miniati, Francesco; Martin, Daniel F.

    2011-07-01

    We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.
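
    Concretely, constrained transport advances each face-centered field component by the circulation of the edge-centered electric field around that face; for the x-face at (i+1/2, j, k), in standard form (consistent with, though not copied from, the paper),

      B_{x,\,i+\frac12,j,k}^{n+1} = B_{x,\,i+\frac12,j,k}^{n}
        - \frac{\Delta t}{\Delta y}\Big(E_{z,\,i+\frac12,j+\frac12,k} - E_{z,\,i+\frac12,j-\frac12,k}\Big)
        + \frac{\Delta t}{\Delta z}\Big(E_{y,\,i+\frac12,j,k+\frac12} - E_{y,\,i+\frac12,j,k-\frac12}\Big).

    Summing the six face updates of a cell telescopes the shared edge values, which is why the discrete divergence of B stays at round-off; the reflux-curl operation mentioned above restores this telescoping across coarse-fine boundaries.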

  20. Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units

    SciTech Connect

    Beckingsale, D. A.; Gaudin, W. P.; Hornung, R. D.; Gunney, B. T.; Gamblin, T.; Herdman, J. A.; Jarvis, S. A.

    2014-11-17

    Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy and introduces complex communication and data movement requirements into a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory's Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.
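
    At their core, the coarsen and refine operators are conservative block averaging and interpolation. A minimal CPU sketch of that arithmetic (a numpy stand-in for the paper's GPU kernels; piecewise-constant refinement stands in for their interpolation):

      import numpy as np

      def coarsen(fine, r=2):
          # Conservative restriction: average each r-by-r block of fine cells.
          ny, nx = fine.shape
          return fine.reshape(ny // r, r, nx // r, r).mean(axis=(1, 3))

      def refine(coarse, r=2):
          # Piecewise-constant prolongation: copy each coarse value into r*r cells
          # (a production code would use conservative linear interpolation).
          return np.repeat(np.repeat(coarse, r, axis=0), r, axis=1)

      patch = np.arange(16.0).reshape(4, 4)
      assert np.allclose(coarsen(refine(patch)), patch)  # refine-then-coarsen is exact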

  1. Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Leng, W.; Zhong, S.

    2008-12-01

    In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. New techniques, collectively known as adaptive mesh refinement (AMR), allow local mesh refinement wherever high resolution is needed, while leaving other regions at relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement AMR techniques in 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code achieves high accuracy with a relatively small number of elements, both for isoviscous cases (7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method in the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results of van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to trace their evolution precisely. Our AMR code is thus well suited to thermal-chemical convection problems that need high resolution to resolve the evolution of chemical boundaries, such as entrainment problems [Sleep, 1988].

  2. A Diffusion Synthetic Acceleration Method for Block Adaptive Mesh Refinement.

    SciTech Connect

    Ward, R. C.; Baker, R. S.; Morel, J. E.

    2005-01-01

    A prototype two-dimensional Diffusion Synthetic Acceleration (DSA) method on a Block-based Adaptive Mesh Refinement (BAMR) transport mesh has been developed. The Block-Adaptive Mesh Refinement Diffusion Synthetic Acceleration (BAMR-DSA) method was tested in the PARallel TIme-Dependent SN (PARTISN) deterministic transport code. The BAMR-DSA equations are derived by differencing the DSA equation using a vertex-centered diffusion discretization that is diamond-like and may be characterized as 'partially' consistent. The derivation of a diffusion discretization that is fully consistent with diamond transport differencing on a BAMR mesh does not appear to be possible. However, despite being only partially consistent, the BAMR-DSA method is effective for many applications. The BAMR-DSA solver was implemented and tested in two dimensions for rectangular (XY) and cylindrical (RZ) geometries. Testing results confirm that a partially consistent BAMR-DSA method will introduce instabilities for extreme cases, e.g., scattering ratios approaching 1.0 with optically thick cells, but for most realistic problems the BAMR-DSA method provides effective acceleration. The initial implementation, which used full-matrix storage and LU decomposition to solve the BAMR-DSA equations, has been extended to include Compressed Sparse Row (CSR) storage and a Conjugate Gradient (CG) solver; the CSR storage and CG solution methods are significantly faster and more memory-efficient.

  3. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    SciTech Connect

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress toward SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  4. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE PAGES

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress toward SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  5. Using Adaptive Mesh Refinement to Simulate Storm Surge

    NASA Astrophysics Data System (ADS)

    Mandli, K. T.; Dawson, C.

    2012-12-01

    Coastal hazards related to strong storms such as hurricanes and typhoons are among the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulation is of great interest to coastal communities that need to plan for the rise in sea level during these storms. Unfortunately, these simulations require high resolution in regions of interest to capture the relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in the design of infrastructure or in forecasting with ensembles of probable storms. One way to address the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR works by decomposing the computational domain into regions whose resolution may vary as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows computational resolution to be placed independently of user interaction, of expectations about the dynamics of the flow, and of particular regions of interest such as harbors. Many applications have only been made feasible by AMR-type algorithms, which allow otherwise impractical simulations to be performed at much lower computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented the relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues concerning refinement criteria, optimal resolution and refinement ratios, and inundation.

  6. Optimal imaging with adaptive mesh refinement in electrical impedance tomography.

    PubMed

    Molinari, Marc; Blott, Barry H; Cox, Simon J; Daniell, Geoffrey J

    2002-02-01

    In non-linear electrical impedance tomography the goodness of fit of the trial images is assessed by the well-established statistical χ² criterion applied to the measured and predicted datasets. Further selection from the range of images that fit the data is effected by imposing an explicit constraint on the form of the image, such as the minimization of the image gradients. In particular, the logarithm of the image gradients is chosen so that conductive and resistive deviations are treated in the same way. In this paper we introduce the idea of adaptive mesh refinement to the 2D problem so that the local scale of the mesh is always matched to the scale of the image structures. This improves the reconstruction resolution so that the image constraint adopted dominates and is not perturbed by the mesh discretization. The avoidance of unnecessary mesh elements optimizes the speed of reconstruction without degrading the resulting images. Starting with a mesh scale length of the order of the electrode separation, it is shown that, for data obtained at presently achievable signal-to-noise ratios of 60 to 80 dB, one or two refinement stages are sufficient to generate high quality images.

  7. N-Body Code with Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Yahagi, Hideki; Yoshii, Yuzuru

    2001-09-01

    We have developed a simulation code with techniques that enhance both the spatial and time resolution of the particle-mesh (PM) method, for which the spatial resolution is restricted by the spacing of the structured mesh. The adaptive-mesh refinement (AMR) technique subdivides the cells that satisfy the refinement criterion recursively. The hierarchical meshes are maintained by a special data structure and are modified in accordance with changes in the particle distribution. In general, as the resolution of a simulation increases, its time step must be shortened and more computational time is required to complete the simulation. Since AMR enhances the spatial resolution locally, we also reduce the time step locally, instead of shortening it globally. For this purpose, we use the technique of hierarchical time steps (HTS), which changes the time step from particle to particle, depending on the size of the cell in which the particle resides. Test calculations show that our implementation of AMR and HTS is successful. We have performed cosmological simulation runs based on our code and found that many halo objects have density profiles that are well fitted by the universal profile proposed in 1996 by Navarro, Frenk, & White over the entire range of their radius.
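
    In its simplest form (a standard choice, shown here only to fix ideas), hierarchical time stepping ties the step to the level ℓ of the cell a particle occupies:

      \Delta t_{\ell} = \frac{\Delta t_{0}}{2^{\ell}},

    so that, with \Delta x_{\ell} = \Delta x_{0}/2^{\ell}, the ratio \Delta t_{\ell}/\Delta x_{\ell} is level-independent and a particle in a level-ℓ cell takes 2^ℓ substeps per coarse step.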

  8. Tsunami modelling with adaptively refined finite volume methods

    USGS Publications Warehouse

    LeVeque, R.J.; George, D.L.; Berger, M.J.

    2011-01-01

    Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.

  9. ENZO: AN ADAPTIVE MESH REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Bryan, Greg L.; Turk, Matthew J.; Norman, Michael L.; Bordner, James; Xu, Hao; Kritsuk, Alexei G.; O'Shea, Brian W.; Smith, Britton; Abel, Tom; Wang, Peng; Skillman, Samuel W.; Wise, John H.; Reynolds, Daniel R.; Collins, David C.; Harkness, Robert P.; Kim, Ji-hoon; Kuhlen, Michael; Goldbaum, Nathan; Hummels, Cameron; Collaboration: Enzo Collaboration; and others

    2014-04-01

    This paper describes the open-source code Enzo, which uses block-structured adaptive mesh refinement to provide high spatial and temporal resolution for modeling astrophysical fluid flows. The code is Cartesian, can be run in one, two, and three dimensions, and supports a wide variety of physics including hydrodynamics, ideal and non-ideal magnetohydrodynamics, N-body dynamics (and, more broadly, self-gravity of fluids and particles), primordial gas chemistry, optically thin radiative cooling of primordial and metal-enriched plasmas (as well as some optically-thick cooling models), radiation transport, cosmological expansion, and models for star formation and feedback in a cosmological context. In addition to explaining the algorithms implemented, we present solutions for a wide range of test problems, demonstrate the code's parallel performance, and discuss the Enzo collaboration's code development methodology.

  10. Adaptive Mesh Refinement in Computational Astrophysics -- Methods and Applications

    NASA Astrophysics Data System (ADS)

    Balsara, D.

    2001-12-01

    The advent of robust, reliable and accurate higher-order Godunov schemes for many of the systems of equations of interest in computational astrophysics has made it important to understand how to solve them in multi-scale fashion. This is because the physics associated with astrophysical phenomena evolves in multi-scale fashion, and we wish to arrive at a multi-scale simulation capability to represent that physics. Because astrophysical systems have magnetic fields, multi-scale magnetohydrodynamics (MHD) is of special interest. In this paper we first discuss general issues in adaptive mesh refinement (AMR). We then focus on the important issues in carrying out divergence-free AMR-MHD and catalogue the progress we have made in that area. We show that AMR methods lend themselves to easy parallelization. We then discuss applications of the RIEMANN framework for AMR-MHD to problems in computational astrophysics.

  11. Production-quality Tools for Adaptive Mesh Refinement Visualization

    SciTech Connect

    Weber, Gunther H.; Childs, Hank; Bonnell, Kathleen; Meredith, Jeremy; Miller, Mark; Whitlock, Brad; Bethel, E. Wes

    2007-10-25

    Adaptive Mesh Refinement (AMR) is a highly effective simulation method for spanning a large range of spatiotemporal scales, such as astrophysical simulations that must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR as a first-class data type, and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is extending and deploying VisIt, an open source visualization tool that accommodates AMR as a first-class data type, for use as production-quality, parallel-capable AMR visual data analysis infrastructure. This effort will help science teams that use AMR-based simulations and develop their own AMR visual data analysis software to realize cost and labor savings.

  12. Visualizing Geophysical Flow Problems with Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Sevre, E. O.; Yuen, D. A.; George, D. L.; Lee, S.

    2011-12-01

    Adaptive Mesh Refinement (AMR) is a technique used in software to decompose a computational domain based on the level of refinement necessary for spatial and temporal calculations. Compared to uniform grids, AMR runs can reduce computational time by very large, problem-dependent factors. In this paper we look at techniques for visualizing tsunami simulations that were run with AMR using the GeoClaw [Berger2011-1, Berger2011-2] software. Because of the computational efficiency of AMR, good techniques for visualizing AMR data are worth developing: with good visualization tools, geoscientists can spend more time interpreting results and analyzing data. Good visualization tools can be adapted easily to work with a variety of output formats, and the goal of this work is to provide a foundation for geoscientists to build on. In the past year GeoClaw has been used to model the 2011 Tohoku tsunami, which originated off the coast of Sendai, Japan, and delivered catastrophic damage to the Fukushima power plant. The aftermath of this single geologic event was still making headlines 4 months after the fact [Fackler2011]. GeoClaw uses the shallow water equations to model a variety of flows ranging from tsunamis to floods to landslides and debris flows [George2011]. With the advanced computations provided by AMR, it is important for researchers to visualize the results in ways that are meaningful both to scientists and to civilians affected by the potential outcomes of the computation. Special visualization techniques can be used to examine data generated with AMR. By incorporating these techniques into their software, geoscientists will be able to harness powerful computational tools, such as GeoClaw, while also maintaining an informative view of their data.

  13. Adaptive Multiresolution or Adaptive Mesh Refinement? A Case Study for 2D Euler Equations

    SciTech Connect

    Deiterding, Ralf; Domingues, Margarete O.; Gomes, Sonia M.; Roussel, Olivier; Schneider, Kai

    2009-01-01

    We present adaptive multiresolution (MR) computations of the two-dimensional compressible Euler equations for a classical Riemann problem. The results are then compared with respect to accuracy and computational efficiency, in terms of CPU time and memory requirements, with the corresponding finite volume scheme on a regular grid. For the same test-case, we also perform computations using adaptive mesh refinement (AMR) imposing similar accuracy requirements. The results thus obtained are compared in terms of computational overhead and compression of the computational grid, using in addition either local or global time stepping strategies. We preliminarily conclude that the multiresolution techniques yield improved memory compression and gain in CPU time with respect to the adaptive mesh refinement method.

  14. Adaptive Input Reconstruction with Application to Model Refinement, State Estimation, and Adaptive Control

    NASA Astrophysics Data System (ADS)

    D'Amato, Anthony M.

    Input reconstruction is the process of using the output of a system to estimate its input. In some cases, input reconstruction can be accomplished by determining the output of the inverse of a model of the system whose input is the output of the original system. Inversion, however, requires an exact and fully known analytical model, and is limited by instabilities arising from nonminimum-phase zeros. The main contribution of this work is a novel technique for input reconstruction that does not require model inversion. This technique is based on a retrospective cost, which requires a limited number of Markov parameters. Retrospective cost input reconstruction (RCIR) does not require knowledge of nonminimum-phase zero locations or an analytical model of the system. RCIR provides a technique that can be used for model refinement, state estimation, and adaptive control. In the model refinement application, data are used to refine or improve a model of a system. It is assumed that the difference between the model output and the data is due to an unmodeled subsystem whose interconnection with the modeled system is inaccessible, that is, the interconnection signals cannot be measured and thus standard system identification techniques cannot be used. Using input reconstruction, these inaccessible signals can be estimated, and the inaccessible subsystem can be fitted. We demonstrate input reconstruction in a model refinement framework by identifying unknown physics in a space weather model and by estimating an unknown film growth in a lithium ion battery. The same technique can be used to obtain estimates of states that cannot be directly measured. Adaptive control can be formulated as a model-refinement problem, where the unknown subsystem is the idealized controller that minimizes a measured performance variable. Minimal modeling input reconstruction for adaptive control is useful for applications where modeling information may be difficult to obtain. We demonstrate

  15. Cosmos++: Relativistic Magnetohydrodynamics on Unstructured Grids with Local Adaptive Refinement

    SciTech Connect

    Anninos, P; Fragile, P C; Salmonson, J D

    2005-05-06

    A new code and methodology are introduced for solving the fully general relativistic magnetohydrodynamic (GRMHD) equations using time-explicit, finite-volume discretization. The code has options for solving the GRMHD equations using traditional artificial-viscosity (AV) or non-oscillatory central difference (NOCD) methods, or a new extended AV (eAV) scheme using artificial viscosity together with a dual energy-flux-conserving formulation. The dual energy approach allows for accurate modeling of highly relativistic flows at boost factors well beyond what has been achieved to date by standard artificial-viscosity methods. It provides the benefit of Godunov methods in capturing highly Lorentz-boosted flows, but without complicated Riemann solvers, and the advantages of traditional artificial-viscosity methods in their speed and flexibility. Additionally, the GRMHD equations are solved on an unstructured grid that supports local adaptive mesh refinement using a fully threaded oct-tree (in three dimensions) network to traverse the grid hierarchy across levels and immediate neighbors. A number of tests are presented to demonstrate the robustness of the numerical algorithms and the adaptive mesh framework over a wide spectrum of problems, boosts, and astrophysical applications, including relativistic shock tubes, shock collisions, magnetosonic shocks, Alfvén wave propagation, blast waves, magnetized Bondi flow, and the magneto-rotational instability in Kerr black hole spacetimes.

  16. Adaptive Mesh Refinement in Reactive Transport Modeling of Subsurface Environments

    NASA Astrophysics Data System (ADS)

    Molins, S.; Day, M.; Trebotich, D.; Graves, D. T.

    2015-12-01

    Adaptive mesh refinement (AMR) is a numerical technique for locally adjusting the resolution of computational grids. AMR makes it possible to superimpose levels of finer grids on the global computational grid in an adaptive manner, allowing for more accurate calculations locally. AMR codes rely on the fundamental concept that the solution can be computed in different regions of the domain with different spatial resolutions. AMR codes have been applied to a wide range of problems, including (but not limited to) fully compressible hydrodynamics, astrophysical flows, cosmological applications, combustion, blood flow, heat transfer in nuclear reactors, and land ice and atmospheric models for climate. In subsurface applications, and in particular in reactive transport modeling, AMR may be especially useful in accurately capturing concentration gradients (hence, reaction rates) that develop in localized areas of the simulation domain. Accurate evaluation of reaction rates is critical in many subsurface applications. In this contribution, we discuss recent applications that bring AMR capabilities to bear on reactive transport problems from the pore scale to the flood plain scale.

  17. 3D Compressible Melt Transport with Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Dannberg, Juliane; Heister, Timo

    2015-04-01

    Melt generation and migration have been the subject of numerous investigations, but their typical time and length scales are vastly different from those of mantle convection, which makes it difficult to study these processes in a unified framework. The equations that describe coupled Stokes-Darcy flow were derived long ago, and they have been successfully implemented and applied in numerical models (Keller et al., 2013). However, modelling magma dynamics poses the challenge of highly non-linear and spatially variable material properties, in particular the viscosity. Applying adaptive mesh refinement to this type of problem is particularly advantageous, as the resolution can be increased in mesh cells where melt is present and viscosity gradients are high, whereas a lower resolution is sufficient in regions without melt. In addition, previous models neglect the compressibility of both the solid and the fluid phase. However, experiments have shown that the melt density change from the depth of melt generation to the surface leads to a volume increase of up to 20%. Considering these volume changes in both phases also ensures self-consistency of models that strive to link melt generation to processes in the deeper mantle, where the compressibility of the solid phase becomes more important. We describe our extension of the finite-element mantle convection code ASPECT (Kronbichler et al., 2012) that allows for solving additional equations describing the behaviour of silicate melt percolating through and interacting with a viscously deforming host rock. We use the original compressible formulation of the McKenzie equations, augmented by an equation for the conservation of energy. This approach includes both melt migration and melt generation with the accompanying latent heat effects. We evaluate the functionality and potential of this method using a series of simple model setups and benchmarks, comparing results of the compressible and incompressible formulation and

  18. A Spectral Adaptive Mesh Refinement Method for the Burgers equation

    NASA Astrophysics Data System (ADS)

    Nasr Azadani, Leila; Staples, Anne

    2013-03-01

    Adaptive mesh refinement (AMR) is a powerful technique in computational fluid dynamics (CFD). Many CFD problems have a wide range of scales which vary with time and space. In order to resolve all the scales numerically, high grid resolutions are required; the smaller the scales, the higher the resolution must be. However, small scales usually form in a small portion of the domain or during a limited period of time. AMR is an efficient method for solving these types of problems, allowing high grid resolution where and when it is needed and minimizing memory and CPU time. Here we formulate a spectral version of AMR in order to accelerate simulations of a 1D model for isotropic homogeneous turbulence, the Burgers equation, as a first test of this method. Using pseudo-spectral methods, we apply AMR in Fourier space. The spectral AMR (SAMR) method we present is applied to the Burgers equation, and the results are compared with results obtained using standard solution methods on a fine mesh.
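
    As a concrete baseline for what is being accelerated, a minimal uniform-resolution pseudo-spectral solver for the viscous Burgers equation u_t + u u_x = nu u_xx follows (illustrative parameters and forward-Euler stepping for brevity; not the authors' SAMR code):

      import numpy as np

      N, nu, dt = 256, 0.05, 1e-3
      x = 2.0 * np.pi * np.arange(N) / N
      ik = 1j * np.fft.fftfreq(N, d=1.0 / N)       # i*k for each Fourier mode

      u = np.sin(x)                                 # initial condition
      for step in range(2000):
          u_hat = np.fft.fft(u)
          u_x = np.fft.ifft(ik * u_hat).real        # derivative in Fourier space
          u_xx = np.fft.ifft(ik**2 * u_hat).real    # (ik)^2 = -k^2
          u = u + dt * (-u * u_x + nu * u_xx)       # nonlinear term in real space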

  19. Parallel adaptive mesh refinement techniques for plasticity problems

    NASA Technical Reports Server (NTRS)

    Barry, W. J.; Jones, M. T.; Plassmann, P. E.

    1997-01-01

    The accurate modeling of the nonlinear properties of materials can be computationally expensive. Parallel computing offers an attractive way to solve such problems; however, the efficient use of these systems requires the vertical integration of a number of very different software components. We explore the solution of two- and three-dimensional, small-strain plasticity problems. We consider a finite-element formulation of the problem with adaptive refinement of an unstructured mesh to accurately model plastic transition zones. We present a framework for the parallel implementation of such complex algorithms. This framework, using libraries from the SUMAA3d project, allows a user to build a parallel finite-element application without writing any parallel code. To demonstrate the effectiveness of this approach on widely varying parallel architectures, we present experimental results from an IBM SP parallel computer and an ATM-connected network of Sun UltraSparc workstations. The results detail the parallel performance of the computational phases of the application as the material is incrementally loaded.

  20. Star formation with adaptive mesh refinement and magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Collins, David C.

    2009-01-01

    In this thesis, we develop an adaptive mesh refinement (AMR) code including magnetic fields, and use it to perform high resolution simulations of magnetized molecular clouds. The purpose of these simulations is to study present-day star formation in the presence of turbulence and magnetic fields. We first present MHDEnzo, the extension of the cosmology and astrophysics code Enzo to include the effects of magnetic fields. We use a higher-order Godunov Riemann solver for the computation of interface fluxes; constrained transport to compute the electric field from those interface fluxes, which advances the induction equation in a divergence-free manner; a divergence-free reconstruction technique to interpolate the magnetic fields to fine grids; and operator splitting to include gravity and cosmological expansion. We present a series of test problems to demonstrate the quality of solution achieved. Additionally, we present several other solvers that were developed along the way. Finally we present results from several AMR simulations that study isothermal turbulence in the presence of magnetic fields and self-gravity. Ten simulations with initial Mach number 8.9 were studied, varying several parameters: the virial parameter α from 0.52 to 3.1; whether they were continuously stirred or allowed to decay; and the number of refinement levels (4 or 6). Measurements of the density probability density function (PDF) were made, showing both the expected log-normal distribution and an additional power law. Measurements of the line-of-sight magnetic field vs. column density were made, giving excellent agreement with recent observations. The line width vs. size relationship was measured and compared, with good agreement, to observations, reproducing both turbulent and collapse signatures. The core mass distribution was measured and agrees well with observations of Serpens and Perseus core samples, but the power-law distribution in Ophiuchus is not reproduced by our simulations. Finally we

  1. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I. (Princeton, Inst. Advanced Study)

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth- and fifth-order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.

  2. An object-oriented approach for parallel self adaptive mesh refinement on block structured grids

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Witsch, Kristian; Quinlan, Daniel

    1993-01-01

    Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library permitting efficient development of architecture-independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmer's work is reduced primarily to specifying the serial single-grid application, with the parallel and self-adaptive mesh refinement code obtained with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.

  3. Refining Procedures: A Needs Analysis Project at Kuwait University.

    ERIC Educational Resources Information Center

    Basturkmen, Helen

    1998-01-01

    Outlines the procedures followed in the needs analysis (NA) project carried out in 1996 at the College of Petroleum and Engineering at Kuwait University. Focuses on the steps taken in the project and the rationale behind them. Offers an illustration of an NA project and shows the procedural steps involved. (Author/VWL)

  4. An Arbitrary Lagrangian-Eulerian Method with Local Adaptive Mesh Refinement for Modeling Compressible Flow

    NASA Astrophysics Data System (ADS)

    Anderson, Robert; Pember, Richard; Elliott, Noah

    2001-11-01

    We present a method, ALE-AMR, for modeling unsteady compressible flow that combines a staggered grid arbitrary Lagrangian-Eulerian (ALE) scheme with structured local adaptive mesh refinement (AMR). The ALE method is a three step scheme on a staggered grid of quadrilateral cells: Lagrangian advance, mesh relaxation, and remap. The AMR scheme uses a mesh hierarchy that is dynamic in time and is composed of nested structured grids of varying resolution. The integration algorithm on the hierarchy is a recursive procedure in which the coarse grids are advanced a single time step, the fine grids are advanced to the same time, and the coarse and fine grid solutions are synchronized. The novel details of ALE-AMR are primarily motivated by the need to reconcile and extend AMR techniques typically employed for stationary rectangular meshes with cell-centered quantities to the moving quadrilateral meshes with staggered quantities used in the ALE scheme. Solutions of several test problems are discussed.
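
    The recursive integration procedure can be written down almost verbatim; the sketch below is schematic (placeholder function bodies, time refinement ratio fixed at 2), not the ALE-AMR implementation:

      def advance(levels, level, dt):
          # One coarse step, then substeps on the next finer level up to the
          # same time, then coarse-fine synchronization.
          step_level(levels, level, dt)
          if level + 1 < len(levels):
              r = 2                                 # time refinement ratio
              for _ in range(r):
                  advance(levels, level + 1, dt / r)
              synchronize(levels, level)            # average down, flux fixups

      def step_level(levels, level, dt):            # stand-in for advance/relax/remap
          print(f"advance level {level} by dt = {dt}")

      def synchronize(levels, level):
          print(f"synchronize level {level} with level {level + 1}")

      advance([0, 1, 2], 0, 1.0)                    # three-level hierarchy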

  5. Practical improvements of multi-grid iteration for adaptive mesh refinement method

    NASA Astrophysics Data System (ADS)

    Miyashita, Hisashi; Yamada, Yoshiyuki

    2005-03-01

    Adaptive mesh refinement (AMR) is a powerful tool for efficiently solving multi-scaled problems. However, the plain AMR method has a well-known critical drawback: it cannot be applied to non-local problems. Although multi-grid iteration (MGI) can be regarded as a good remedy for non-local problems such as the Poisson equation, we observed fundamental difficulties in applying the MGI technique within AMR to realistic problems under complicated mesh layouts, because it does not converge, or it requires too many iterations even when it does converge. To cope with this problem, when updating the next approximation in the MGI process we calculate total corrections that are accurate relative to the current residual, by introducing a new iteration for such a total correction. This procedure greatly accelerates MGI convergence, especially under complicated mesh layouts.

  6. Adaptive Modeling Procedure Selection by Data Perturbation*

    PubMed Central

    Zhang, Yongli; Shen, Xiaotong

    2015-01-01

    Many procedures have been developed to deal with the high-dimensional problem that is emerging in various business and economics areas. To evaluate and compare these procedures, modeling uncertainty caused by model selection and parameter estimation has to be assessed and integrated into a modeling process. To do this, a data perturbation method estimates the modeling uncertainty inherent in a selection process by perturbing the data. Critical to data perturbation is the size of perturbation, as the perturbed data should resemble the original dataset. To account for the modeling uncertainty, we derive the optimal size of perturbation, which adapts to the data, the model space, and other relevant factors in the context of linear regression. On this basis, we develop an adaptive data-perturbation method that, unlike its nonadaptive counterpart, performs well in different situations. This leads to a data-adaptive model selection method. Both theoretical and numerical analysis suggest that the data-adaptive model selection method adapts to distinct situations in that it yields consistent model selection and optimal prediction, without knowing which situation exists a priori. The proposed method is applied to real data from the commodity market and outperforms its competitors in terms of price forecasting accuracy. PMID:26640319

  7. ENZO+MORAY: radiation hydrodynamics adaptive mesh refinement simulations with adaptive ray tracing

    NASA Astrophysics Data System (ADS)

    Wise, John H.; Abel, Tom

    2011-07-01

    We describe a photon-conserving radiative transfer algorithm, using a spatially-adaptive ray-tracing scheme, and its parallel implementation into the adaptive mesh refinement cosmological hydrodynamics code ENZO. By coupling the solver with the energy equation and non-equilibrium chemistry network, our radiation hydrodynamics framework can be utilized to study a broad range of astrophysical problems, such as stellar and black hole feedback. Inaccuracies can arise from large time-steps and poor sampling; therefore, we devised an adaptive time-stepping scheme and a fast approximation of the optically-thin radiation field with multiple sources. We test the method with several radiative transfer and radiation hydrodynamics tests that are given in Iliev et al. We further test our method with more dynamical situations, for example, the propagation of an ionization front through a Rayleigh-Taylor instability, time-varying luminosities and collimated radiation. The test suite also includes an expanding H II region in a magnetized medium, utilizing the newly implemented magnetohydrodynamics module in ENZO. This method linearly scales with the number of point sources and number of grid cells. Our implementation is scalable to 512 processors on distributed memory machines and can include the radiation pressure and secondary ionizations from X-ray radiation. It is included in the newest public release of ENZO.

  8. Adaptation of Block-Structured Adaptive Mesh Refinement to Particle-In-Cell simulations

    NASA Astrophysics Data System (ADS)

    Vay, Jean-Luc; Colella, Phillip; McCorquodale, Peter; Friedman, Alex; Grote, Dave

    2001-10-01

    Particle-In-Cell (PIC) methods which solve the Maxwell equations (or a simplification) on a regular Cartesian grid are routinely used to simulate plasma and particle beam systems. Several techniques have been developed to accommodate irregular boundaries and scale variations. We describe here an ongoing effort to adapt the block-structured Adaptive Mesh Refinement (AMR) algorithm (http://seesar.lbl.gov/AMR/) to the Particle-In-Cell method. The AMR technique connects grids having different resolutions, using interpolation. Special care has to be taken to avoid the introduction of spurious forces close to the boundary of the inner, high-resolution grid, or at least to reduce such forces to an acceptable level. The Berkeley AMR library CHOMBO has been modified and coupled to WARP3d (D.P. Grote et al., Fusion Engineering and Design 32-33 (1996), 193-200), a PIC code which is used for the development of high current accelerators for Heavy Ion Fusion. The methods and preliminary results will be presented.

  9. An independent refinement and integration procedure in multiregion finite element analysis

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, T.; Raju, I. S.

    1992-01-01

    An independent refinement and integration procedure is developed to couple together independently modeled (global and local) regions in a single analysis. The models can have different levels of refinement and along the interface between them the finite element nodes need not coincide with one another. In the local model all the nodes except the nodes at the interface are statically condensed and the reduced stiffness matrix is obtained. For this static condensation a modified frontal solution technique is employed. A spline interpolation function that satisfies the linear isotropic plate bending differential equation is used to relate the local model interface nodal displacements to the global model interface displacements. The proposed independent refinement and integration procedure is evaluated by applying it to two- and three-dimensional cases involving inplane and out-of-plane deformation. The procedure yielded very accurate results for all the examples studied.
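
    The static condensation step amounts to forming a Schur complement of the local stiffness matrix with respect to the interface degrees of freedom. A dense-matrix sketch in Python; the paper's modified frontal solver produces the same reduced matrix without assembling K in full.

        import numpy as np

        def condense(K, interior, interface):
            """Statically condense the interior DOFs of a local model; the
            returned Schur complement is the reduced stiffness seen at the
            interface: K_bb - K_bi K_ii^{-1} K_ib."""
            Kii = K[np.ix_(interior, interior)]
            Kib = K[np.ix_(interior, interface)]
            Kbi = K[np.ix_(interface, interior)]
            Kbb = K[np.ix_(interface, interface)]
            return Kbb - Kbi @ np.linalg.solve(Kii, Kib)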

  10. F-8C adaptive control law refinement and software development

    NASA Technical Reports Server (NTRS)

    Hartmann, G. L.; Stein, G.

    1981-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.
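
    Conceptually, the parallel-channel scheme replaces an iterative likelihood search with a bank of fixed filters. In this schematic Python sketch, each channel runs a Kalman filter for one fixed parameter value (here, a candidate state matrix A) and accumulates the innovations log-likelihood; the channel with the largest value wins. The linear-Gaussian model structure and statistics are placeholders, not the F-8C design values.

        import numpy as np

        def channel_log_likelihood(y, A, C, Q, R):
            """Run one Kalman filter channel for a fixed model A over the
            measurement sequence y and accumulate the innovations
            log-likelihood (up to an additive constant)."""
            n = A.shape[0]
            x = np.zeros(n)
            P = np.eye(n)
            ll = 0.0
            for yk in y:
                x = A @ x                          # predict
                P = A @ P @ A.T + Q
                v = yk - C @ x                     # innovation
                S = C @ P @ C.T + R
                ll += -0.5 * (v @ np.linalg.solve(S, v) + np.log(np.linalg.det(S)))
                K = P @ C.T @ np.linalg.inv(S)     # update
                x = x + K @ v
                P = (np.eye(n) - K @ C) @ P
            return ll

        def select_channel(y, models, C, Q, R):
            """Pick the fixed parameter-space location whose channel best
            explains the data -- no iterative search required."""
            return int(np.argmax([channel_log_likelihood(y, A, C, Q, R)
                                  for A in models]))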

  11. FUN3D Grid Refinement and Adaptation Studies for the Ares Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.; Vasta, Veer; Carlson, Jan-Renee; Park, Mike; Mineck, Raymond E.

    2010-01-01

    This paper presents grid refinement and adaptation studies performed in conjunction with computational aeroelastic analyses of the Ares crew launch vehicle (CLV). The unstructured grids used in this analysis were created with GridTool and VGRID while the adaptation was performed using the Computational Fluid Dynamic (CFD) code FUN3D with a feature based adaptation software tool. GridTool was developed by ViGYAN, Inc. while the last three software suites were developed by NASA Langley Research Center. The feature based adaptation software used here operates by aligning control volumes with shock and Mach line structures and by refining/de-refining where necessary. It does not redistribute node points on the surface. This paper assesses the sensitivity of the complex flow field about a launch vehicle to grid refinement. It also assesses the potential of feature based grid adaptation to improve the accuracy of CFD analysis for a complex launch vehicle configuration. The feature based adaptation shows the potential to improve the resolution of shocks and shear layers. Further development of the capability to adapt the boundary layer and surface grids of a tetrahedral grid is required for significant improvements in modeling the flow field.

  12. Adaptive mesh refinement for time-domain electromagnetics using vector finite elements: a feasibility study.

    SciTech Connect

    Turner, C. David; Kotulski, Joseph Daniel; Pasik, Michael Francis

    2005-12-01

    This report investigates the feasibility of applying Adaptive Mesh Refinement (AMR) techniques to a vector finite element formulation for the wave equation in three dimensions. Possible error estimators are considered first. Next, approaches for refining tetrahedral elements are reviewed. AMR capabilities within the Nevada framework are then evaluated. We summarize our conclusions on the feasibility of AMR for time-domain vector finite elements and identify a path forward.

  13. Refinement trajectory and determination of eigenstates by a wavelet based adaptive method

    SciTech Connect

    Pipek, Janos; Nagy, Szilvia

    2006-11-07

    The detail structure of the wave function is analyzed at various refinement levels using the methods of wavelet analysis. The eigenvalue problem of a model system is solved in granular Hilbert spaces, and the trajectory of the eigenstates is traced in terms of the resolution. An adaptive method is developed for identifying the fine structure localization regions, where further refinement of the wave function is necessary.

  14. Adaptive Mesh Refinement for High Accuracy Wall Loss Determination in Accelerating Cavity Design

    SciTech Connect

    Ge, L

    2004-06-14

    This paper presents the improvement in wall loss determination when adaptive mesh refinement (AMR) methods are used with the parallel finite element eigensolver Omega3P. We show that significant reduction in the number of degrees of freedom (DOFs) as well as a faster rate of convergence can be achieved as compared with results from uniform mesh refinement in determining cavity wall loss to a desired accuracy. Test cases for which measurements are available will be examined, and comparison with uniform refinement results will be discussed.

  15. A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.

    1999-01-01

    The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.

  16. FLY: a Tree Code for Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Antonuccio-Delogu, V.; Costa, A.; Ferro, D.

    FLY is a public domain parallel treecode, which makes heavy use of the one-sided communication paradigm to handle the management of the tree structure. It implements the equations for cosmological evolution and can be run for different cosmological models. This paper shows an example of the integration of a tree N-body code with an adaptive mesh, following the PARAMESH scheme. This new implementation will allow the FLY output, and more generally any binary output, to be used with any hydrodynamics code that adopts the PARAMESH data structure, to study compressible flow problems.

  17. A fast, robust, and simple implicit method for adaptive time-stepping on adaptive mesh-refinement grids

    NASA Astrophysics Data System (ADS)

    Commerçon, B.; Debout, V.; Teyssier, R.

    2014-03-01

    Context. Implicit solvers present strong limitations when used on supercomputing facilities and in particular for adaptive mesh-refinement codes. Aims: We present a new method for implicit adaptive time-stepping on adaptive mesh-refinement grids. We implement it in the radiation-hydrodynamics solver we designed for the RAMSES code for astrophysical purposes and, more particularly, for protostellar collapse. Methods: We briefly recall the radiation-hydrodynamics equations and the adaptive time-stepping methodology used for hydrodynamical solvers. We then introduce the different types of boundary conditions (Dirichlet, Neumann, and Robin) that are used at the interface between levels and present our implementation of the new method in the RAMSES code. The method is tested against classical diffusion and radiation-hydrodynamics tests, after which we present an application for protostellar collapse. Results: We show that using Dirichlet boundary conditions at level interfaces is a good compromise between robustness and accuracy and that it can be used in structure formation calculations. The gain in computational time over our former unique time step method ranges from factors of 5 to 50, depending on the level of adaptive time-stepping and on the problem. We successfully compare the old and new methods for protostellar collapse calculations that involve highly nonlinear physics. Conclusions: We have developed a simple but robust method for adaptive time-stepping of implicit schemes on adaptive mesh-refinement grids. It can be applied to a wide variety of physical problems that involve diffusion processes.
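
    The interface treatment can be illustrated schematically: during one coarse step, a refined level takes several implicit substeps, with the Dirichlet value at the coarse/fine interface interpolated in time from the coarse solution. The 1-D backward-Euler diffusion sketch below illustrates only this boundary-condition idea, not the RAMSES radiation solver itself.

        import numpy as np

        def be_diffusion(u, dt, h, bc_left, bc_right):
            """One backward-Euler step of u_t = u_xx for the interior
            unknowns u, with Dirichlet values at the two ends."""
            n = u.size
            a = dt / (h * h)
            A = (np.diag((1.0 + 2.0 * a) * np.ones(n))
                 + np.diag(-a * np.ones(n - 1), 1)
                 + np.diag(-a * np.ones(n - 1), -1))
            b = u.copy()
            b[0] += a * bc_left
            b[-1] += a * bc_right
            return np.linalg.solve(A, b)

        def advance_fine_level(u_fine, h, dt_coarse, iface_old, iface_new, nsub=2):
            """Advance a refined level through one coarse step using nsub
            implicit substeps; the Dirichlet datum at the level interface is
            linearly interpolated in time from the coarse solution (the far
            boundary is held at zero here for simplicity)."""
            dt = dt_coarse / nsub
            for k in range(1, nsub + 1):
                bc = iface_old + (iface_new - iface_old) * k / nsub
                u_fine = be_diffusion(u_fine, dt, h, bc_left=bc, bc_right=0.0)
            return u_fine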

  18. 40 CFR 80.128 - Alternative agreed upon procedures for refiners and importers.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Alternative agreed upon procedures for refiners and importers. 80.128 Section 80.128 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Attest Engagements §...

  19. 40 CFR 80.133 - Agreed-upon procedures for refiners and importers.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Agreed-upon procedures for refiners and importers. 80.133 Section 80.133 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Attest Engagements §...

  20. Adaptive h-refinement for reduced-order models

    SciTech Connect

    Carlberg, Kevin T.

    2014-11-05

    Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
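
    The splitting operation itself is compact: a basis vector is cut into children with disjoint support along groups of state indices identified offline by recursive k-means on snapshot data. A minimal one-level sketch, with SciPy's k-means standing in for the paper's clustering (the `seed` keyword assumes a recent SciPy):

        import numpy as np
        from scipy.cluster.vq import kmeans2

        def cluster_states(snapshots, k=2, seed=0):
            """Group state indices by k-means on their snapshot time series
            (rows = state variables, columns = snapshots)."""
            _, labels = kmeans2(snapshots, k, minit='++', seed=seed)
            return [np.where(labels == j)[0] for j in range(k)]

        def split_basis_vector(v, groups):
            """Split one basis vector into children with disjoint support;
            the children together span the parent and enrich the basis."""
            children = []
            for g in groups:
                child = np.zeros_like(v)
                child[g] = v[g]
                children.append(child)
            return children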

  1. Thickness-based adaptive mesh refinement methods for multi-phase flow simulations with thin regions

    SciTech Connect

    Chen, Xiaodong; Yang, Vigor

    2014-07-15

    In numerical simulations of multi-scale, multi-phase flows, grid refinement is required to resolve regions with small scales. A notable example is liquid-jet atomization and subsequent droplet dynamics. It is essential to characterize the detailed flow physics with variable length scales with high fidelity, in order to elucidate the underlying mechanisms. In this paper, two thickness-based mesh refinement schemes are developed based on distance- and topology-oriented criteria for thin regions with confining wall/plane of symmetry and in any situation, respectively. Both techniques are implemented in a general framework with a volume-of-fluid formulation and an adaptive-mesh-refinement capability. The distance-oriented technique compares against a critical value, the ratio of an interfacial cell size to the distance between the mass center of the cell and a reference plane. The topology-oriented technique is developed from digital topology theories to handle more general conditions. The requirement for interfacial mesh refinement can be detected swiftly, without the need of thickness information, equation solving, variable averaging or mesh repairing. The mesh refinement level increases smoothly on demand in thin regions. The schemes have been verified and validated against several benchmark cases to demonstrate their effectiveness and robustness. These include the dynamics of colliding droplets, droplet motions in a microchannel, and atomization of liquid impinging jets. Overall, the thickness-based refinement technique provides highly adaptive meshes for problems with thin regions in an efficient and fully automatic manner.
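
    The distance-oriented test reduces to a single ratio once the geometry is known. A sketch, with an assumed critical ratio:

        import numpy as np

        def needs_thin_region_refinement(cell_size, cell_center,
                                         plane_point, plane_normal,
                                         ratio_crit=0.5):
            """Distance-oriented criterion: flag an interfacial cell when
            the ratio of its size to the distance between its mass center
            and a reference plane (wall or symmetry plane) exceeds a
            critical value (the 0.5 here is a placeholder)."""
            n = np.asarray(plane_normal, dtype=float)
            n /= np.linalg.norm(n)
            d = abs(np.dot(np.asarray(cell_center) - np.asarray(plane_point), n))
            return cell_size / max(d, 1e-30) > ratio_crit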

  2. Refinement of in vivo surgical procedures for cardiac gene and cell transfer in rats.

    PubMed

    Sato, Motoki; Kerton, Angela; Harding, Sian E

    2009-03-01

    In studies of gene and cell transfer for the treatment of heart disease, direct intramyocardial injection and antegrade intracoronary injection are common methods of delivering biomaterials to the heart. The authors, who carried out these surgical procedures in 377 rats, describe their methodology in detail and discuss surgical refinements that substantially reduced rat mortality. These refinements include a rigorous fluid replacement regimen, use of inhalational anesthesia instead of injectable agents, exposure of the heart without direct contact and use of a chest drainage cannula to remove air from the pleural cavity and prevent lung collapse.

  3. Multilevel Error Estimation and Adaptive h-Refinement for Cartesian Meshes with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    This paper presents the development of a mesh adaptation module for a multilevel Cartesian solver. While the module allows mesh refinement to be driven by a variety of different refinement parameters, a central feature in its design is the incorporation of a multilevel error estimator based upon direct estimates of the local truncation error using tau-extrapolation. This error indicator exploits the fact that in regions of uniform Cartesian mesh, the spatial operator is exactly the same on the fine and coarse grids, and local truncation error estimates can be constructed by evaluating the residual on the coarse grid of the restricted solution from the fine grid. A new strategy for adaptive h-refinement is also developed to prevent errors in smooth regions of the flow from being masked by shocks and other discontinuous features. For certain classes of error histograms, this strategy is optimal for achieving equidistribution of the refinement parameters on hierarchical meshes, and therefore ensures grid converged solutions will be achieved for appropriately chosen refinement parameters. The robustness and accuracy of the adaptation module is demonstrated using both simple model problems and complex three-dimensional examples using meshes with 10^6 to 10^7 cells.
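
    On a patch of uniform mesh the indicator reduces to a few lines: apply the coarse operator to the restricted fine solution and subtract the restriction of the fine-grid operator applied to the same solution. A 1-D Poisson sketch, with simple injection standing in for the restriction operator:

        import numpy as np

        def laplacian(u, h):
            L = np.zeros_like(u)
            L[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
            return L

        def inject(u):
            """Restriction by injection onto the coarse grid."""
            return u[::2]

        def tau_estimate(u_fine, h):
            """Tau-extrapolation estimate of the local truncation error:
            tau ~ L_2h(R u_h) - R(L_h u_h), which is nonzero only where the
            fine-grid solution is not resolved by the coarse operator."""
            return laplacian(inject(u_fine), 2.0 * h) - inject(laplacian(u_fine, h))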

  4. A GPU implementation of adaptive mesh refinement to simulate tsunamis generated by landslides

    NASA Astrophysics Data System (ADS)

    de la Asunción, Marc; Castro, Manuel J.

    2016-04-01

    In this work we propose a CUDA implementation for the simulation of landslide-generated tsunamis using a two-layer Savage-Hutter type model and adaptive mesh refinement (AMR). The AMR method consists of dynamically increasing the spatial resolution of the regions of interest of the domain while keeping the rest of the domain at low resolution, thus obtaining better runtimes and similar results compared to increasing the spatial resolution of the entire domain. Our AMR implementation uses a patch-based approach, it supports up to three levels, power-of-two ratios of refinement, different refinement criteria and also several user parameters to control the refinement and clustering behaviour. A strategy based on the variation of the cell values during the simulation is used to interpolate and propagate the values of the fine cells. Several numerical experiments using artificial and realistic scenarios are presented.

  5. An Adaptive Mesh Refinement Strategy for Immersed Boundary/Interface Methods.

    PubMed

    Li, Zhilin; Song, Peng

    2012-01-01

    An adaptive mesh refinement strategy is proposed in this paper for the Immersed Boundary and Immersed Interface methods for two-dimensional elliptic interface problems involving singular sources. The interface is represented by the zero level set of a Lipschitz function φ(x,y). Our adaptive mesh refinement is done within a small tube of |φ(x,y)|≤ δ with finer Cartesian meshes. The discrete linear system of equations is solved by a multigrid solver. The AMR methods can obtain solutions with accuracy similar to that on a uniform fine grid by distributing the mesh more economically, thereby reducing the size of the linear system of equations. Numerical examples presented show the efficiency of the grid refinement strategy.
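
    The flagging logic implied by this strategy is compact. A 2-D sketch; the padding width used to grade the mesh away from the interface is an assumption:

        import numpy as np

        def flag_tube_cells(phi, delta):
            """Mark cells inside the narrow tube |phi| <= delta around the
            zero level set, where the finer Cartesian mesh is placed."""
            return np.abs(phi) <= delta

        def pad_flags(flags, width=1):
            """Grow the flagged region by `width` cells so the mesh level
            changes gradually away from the interface."""
            out = flags.copy()
            for _ in range(width):
                grown = out.copy()
                grown[1:, :] |= out[:-1, :]
                grown[:-1, :] |= out[1:, :]
                grown[:, 1:] |= out[:, :-1]
                grown[:, :-1] |= out[:, 1:]
                out = grown
            return out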

  6. An Adaptive Mesh Refinement Strategy for Immersed Boundary/Interface Methods

    PubMed Central

    Li, Zhilin; Song, Peng

    2012-01-01

    An adaptive mesh refinement strategy is proposed in this paper for the Immersed Boundary and Immersed Interface methods for two-dimensional elliptic interface problems involving singular sources. The interface is represented by the zero level set of a Lipschitz function φ(x,y). Our adaptive mesh refinement is done within a small tube of |φ(x,y)|≤ δ with finer Cartesian meshes. The discrete linear system of equations is solved by a multigrid solver. The AMR methods can obtain solutions with accuracy similar to that on a uniform fine grid by distributing the mesh more economically, thereby reducing the size of the linear system of equations. Numerical examples presented show the efficiency of the grid refinement strategy. PMID:22670155

  7. An Efficient Means of Adaptive Refinement Within Systems of Overset Grids

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    1996-01-01

    An efficient means of adaptive refinement within systems of overset grids is presented. Problem domains are segregated into near-body and off-body fields. Near-body fields are discretized via overlapping body-fitted grids that extend only a short distance from body surfaces. Off-body fields are discretized via systems of overlapping uniform Cartesian grids of varying levels of refinement. A novel off-body grid generation and management scheme provides the mechanism for carrying out adaptive refinement of off-body flow dynamics and solid body motion. The scheme allows for very efficient use of memory resources, and flow solvers and domain connectivity routines that can exploit the structure inherent to uniform Cartesian grids.

  8. Advances in Rotor Performance and Turbulent Wake Simulation Using DES and Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    2012-01-01

    Time-dependent Navier-Stokes simulations have been carried out for a rigid V22 rotor in hover, and a flexible UH-60A rotor in forward flight. Emphasis is placed on understanding and characterizing the effects of high-order spatial differencing, grid resolution, and Spalart-Allmaras (SA) detached eddy simulation (DES) in predicting the rotor figure of merit (FM) and resolving the turbulent rotor wake. The FM was accurately predicted within experimental error using SA-DES. Moreover, a new adaptive mesh refinement (AMR) procedure revealed a complex and more realistic turbulent rotor wake, including the formation of turbulent structures resembling vortical worms. Time-dependent flow visualization played a crucial role in understanding the physical mechanisms involved in these complex viscous flows. The predicted vortex core growth with wake age was in good agreement with experiment. High-resolution wakes for the UH-60A in forward flight exhibited complex turbulent interactions and turbulent worms, similar to the V22. The normal force and pitching moment coefficients were in good agreement with flight-test data.

  9. Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.

  10. An adaptive grid refinement strategy for the simulation of negative streamers

    SciTech Connect

    Montijn, C. (E-mail: carolynne.montijn@cwi.nl); Hundsdorfer, W. (E-mail: willem.hundsdorfer@cwi.nl); Ebert, U. (E-mail: ute.ebert@cwi.nl)

    2006-12-10

    The evolution of negative streamers during electric breakdown of a non-attaching gas can be described by a two-fluid model for electrons and positive ions. It consists of continuity equations for the charged particles including drift, diffusion and reaction in the local electric field, coupled to the Poisson equation for the electric potential. The model generates field enhancement and steep propagating ionization fronts at the tip of growing ionized filaments. An adaptive grid refinement method for the simulation of these structures is presented. It uses finite volume spatial discretizations and explicit time stepping, which allows the decoupling of the grids for the continuity equations from those for the Poisson equation. Standard refinement methods in which the refinement criterion is based on local error monitors fail due to the pulled character of the streamer front that propagates into a linearly unstable state. We present a refinement method which deals with all these features. Tests on one-dimensional streamer fronts as well as on three-dimensional streamers with cylindrical symmetry (hence effectively 2D for numerical purposes) are carried out successfully. Results on fine grids are presented, they show that such an adaptive grid method is needed to capture the streamer characteristics well. This refinement strategy enables us to adequately compute negative streamers in pure gases in the parameter regime where a physical instability appears: branching streamers.

  11. Integration over two-dimensional Brillouin zones by adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Henk, J.

    2001-07-01

    Adaptive mesh-refinement (AMR) schemes for integration over two-dimensional Brillouin zones are presented and their properties are investigated in detail. A salient feature of these integration techniques is that the grid of sampling points is automatically adapted to the integrand in such a way that regions with high accuracy demand are sampled with high density, while the other regions are sampled with low density. This adaptation may save a sizable amount of computation time in comparison with those integration methods without mesh refinement. Several AMR schemes for one- and two-dimensional integration are introduced. As an application, the spin-dependent conductance of electronic tunneling through planar junctions is investigated and discussed with regard to Brillouin zone integration.
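
    The essence of such schemes is recursive subdivision wherever a refined estimate disagrees with a coarse one. A small recursive midpoint-rule sketch over a rectangular zone (the schemes in the paper are more elaborate):

        def adaptive_integrate(f, x0, x1, y0, y1, tol=1e-6, depth=0, max_depth=12):
            """Integrate f over a rectangle; subdivide into four children
            wherever the refined estimate disagrees with the coarse one, so
            sampling density adapts automatically to the integrand."""
            def midpoint(a, b, c, d):
                return f(0.5 * (a + b), 0.5 * (c + d)) * (b - a) * (d - c)
            coarse = midpoint(x0, x1, y0, y1)
            xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
            quads = [(x0, xm, y0, ym), (xm, x1, y0, ym),
                     (x0, xm, ym, y1), (xm, x1, ym, y1)]
            fine = sum(midpoint(*q) for q in quads)
            if abs(fine - coarse) < tol or depth >= max_depth:
                return fine
            return sum(adaptive_integrate(f, *q, tol=tol / 4.0,
                                          depth=depth + 1, max_depth=max_depth)
                       for q in quads)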

  12. Adaptively-refined overlapping grids for the numerical solution of systems of hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.

    1995-01-01

    Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods have been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of boundary geometry is important. The complex geometry is represented by using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.

  13. Adaptive Distributed Environment for Procedure Training (ADEPT)

    NASA Technical Reports Server (NTRS)

    Domeshek, Eric; Ong, James; Mohammed, John

    2013-01-01

    ADEPT (Adaptive Distributed Environment for Procedure Training) is designed to provide more effective, flexible, and portable training for NASA systems controllers. When creating a training scenario, an exercise author can specify a representative rationale structure using the graphical user interface, annotating the results with instructional texts where needed. The author's structure may distinguish between essential and optional parts of the rationale, and may also include "red herrings" - hypotheses that are essential to consider, until evidence and reasoning allow them to be ruled out. The system is built from pre-existing components, including Stottler Henke's SimVentive instructional simulation authoring tool and runtime. To that, a capability was added to author and exploit explicit control decision rationale representations. ADEPT uses SimVentive's Scalable Vector Graphics (SVG)-based interactive graphic display capability as the basis of the tool for quickly noting aspects of decision rationale in graph form. The ADEPT prototype is built in Java, and will run on any computer using Windows, MacOS, or Linux. No special peripheral equipment is required. The software enables a style of student/tutor interaction focused on the reasoning behind systems control behavior that better mimics proven Socratic human tutoring behaviors for highly cognitive skills. It supports fast, easy, and convenient authoring of such tutoring behaviors, allowing specification of detailed scenario-specific, but content-sensitive, high-quality tutor hints and feedback. The system places relatively light data-entry demands on the student to enable its rationale-centered discussions, and provides a support mechanism for fostering coherence in the student/tutor dialog by including focusing, sequencing, and utterance tuning mechanisms intended to better fit tutor hints and feedback into the ongoing context.

  14. A Parallel Ocean Model With Adaptive Mesh Refinement Capability For Global Ocean Prediction

    SciTech Connect

    Herrnstein, Aaron R.

    2005-12-01

    An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No

  15. Defect structure of a nematic liquid crystal around a spherical particle: adaptive mesh refinement approach.

    PubMed

    Fukuda, Jun-ichi; Yoneya, Makoto; Yokoyama, Hiroshi

    2002-04-01

    We investigate numerically the structure of topological defects close to a spherical particle immersed in a uniformly aligned nematic liquid crystal. To this end we have implemented an adaptive mesh refinement scheme in an axi-symmetric three-dimensional system, which makes it feasible to take into account properly the large length scale difference between the particle and the topological defects. The adaptive mesh refinement scheme proves to be quite efficient and useful in the investigation of not only the macroscopic properties such as the defect position but also the fine structure of defects. It can be shown that a hyperbolic hedgehog that accompanies a particle with strong homeotropic anchoring takes the structure of a ring.

  16. Adaptive mesh refinement techniques for the immersed interface method applied to flow problems.

    PubMed

    Li, Zhilin; Song, Peng

    2013-06-01

    In this paper, we develop an adaptive mesh refinement strategy of the Immersed Interface Method for flow problems with a moving interface. The work is built on the AMR method developed for two-dimensional elliptic interface problems in the paper [12] (CiCP, 12(2012), 515-527). The interface is captured by the zero level set of a Lipschitz continuous function φ(x, y, t). Our adaptive mesh refinement is built within a small band of |φ(x, y, t)| ≤ δ with finer Cartesian meshes. The AMR-IIM is validated for Stokes and Navier-Stokes equations with exact solutions, moving interfaces driven by the surface tension, and classical bubble deformation problems. A new simple area preserving strategy is also proposed in this paper for the level set method.

  17. Adaptive mesh refinement techniques for the immersed interface method applied to flow problems

    PubMed Central

    Li, Zhilin; Song, Peng

    2013-01-01

    In this paper, we develop an adaptive mesh refinement strategy of the Immersed Interface Method for flow problems with a moving interface. The work is built on the AMR method developed for two-dimensional elliptic interface problems in the paper [12] (CiCP, 12(2012), 515–527). The interface is captured by the zero level set of a Lipschitz continuous function φ(x, y, t). Our adaptive mesh refinement is built within a small band of |φ(x, y, t)| ≤ δ with finer Cartesian meshes. The AMR-IIM is validated for Stokes and Navier-Stokes equations with exact solutions, moving interfaces driven by the surface tension, and classical bubble deformation problems. A new simple area preserving strategy is also proposed in this paper for the level set method. PMID:23794763

  18. Adaptive mesh refinement and multilevel iteration for multiphase, multicomponent flow in porous media

    SciTech Connect

    Hornung, R.D.

    1996-12-31

    An adaptive local mesh refinement (AMR) algorithm originally developed for unsteady gas dynamics is extended to multi-phase flow in porous media. Within the AMR framework, we combine specialized numerical methods to treat the different aspects of the partial differential equations. Multi-level iteration and domain decomposition techniques are incorporated to accommodate elliptic/parabolic behavior. High-resolution shock capturing schemes are used in the time integration of the hyperbolic mass conservation equations. When combined with AMR, these numerical schemes provide high resolution locally in a more efficient manner than if they were applied on a uniformly fine computational mesh. We will discuss the interplay of physical, mathematical, and numerical concerns in the application of adaptive mesh refinement to flow in porous media problems of practical interest.

  19. Implementation of Implicit Adaptive Mesh Refinement in an Unstructured Finite-Volume Flow Solver

    NASA Technical Reports Server (NTRS)

    Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.

    2013-01-01

    This paper explores the implementation of adaptive mesh refinement in an unstructured, finite-volume solver. Unsteady and steady problems are considered. The effect on the recovery of high-order numerics is explored and the results are favorable. Important to this work is the ability to provide a path for efficient, implicit time advancement. A method using a simple refinement sensor based on undivided differences is discussed and applied to a practical problem: a shock-shock interaction on a hypersonic, inviscid double-wedge. Cases are compared to uniform grids without the use of adapted meshes in order to assess error and computational expense. Discussion of difficulties, advances, and future work prepare this method for additional research. The potential for this method in more complicated flows is described.
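
    A sensor based on undivided differences is easy to state; a 1-D sketch with an assumed threshold:

        import numpy as np

        def undivided_difference_sensor(q, threshold=0.05):
            """Flag cells where the undivided second difference of a flow
            quantity q is large; there is no division by the mesh spacing,
            so the sensor is insensitive to the local cell size."""
            d = np.zeros_like(q)
            d[1:-1] = np.abs(q[2:] - 2.0 * q[1:-1] + q[:-2])
            return d > threshold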

  20. COMET-AR User's Manual: COmputational MEchanics Testbed with Adaptive Refinement

    NASA Technical Reports Server (NTRS)

    Moas, E. (Editor)

    1997-01-01

    The COMET-AR User's Manual provides a reference manual for the Computational Structural Mechanics Testbed with Adaptive Refinement (COMET-AR), a software system developed jointly by Lockheed Palo Alto Research Laboratory and NASA Langley Research Center under contract NAS1-18444. The COMET-AR system is an extended version of an earlier finite element based structural analysis system called COMET, also developed by Lockheed and NASA. The primary extensions are the adaptive mesh refinement capabilities and a new "object-like" database interface that makes COMET-AR easier to extend further. This User's Manual provides a detailed description of the user interface to COMET-AR from the viewpoint of a structural analyst.

  1. Logically rectangular finite volume methods with adaptive refinement on the sphere.

    PubMed

    Berger, Marsha J; Calhoun, Donna A; Helzel, Christiane; LeVeque, Randall J

    2009-11-28

    The logically rectangular finite volume grids for two-dimensional partial differential equations on a sphere and for three-dimensional problems in a spherical shell introduced recently have nearly uniform cell size, avoiding severe Courant number restrictions. We present recent results with adaptive mesh refinement using the GeoClaw software and demonstrate well-balanced methods that exactly maintain equilibrium solutions, such as shallow water equations for an ocean at rest over arbitrary bathymetry.

  2. A 3-D adaptive mesh refinement algorithm for multimaterial gas dynamics

    SciTech Connect

    Puckett, E.G.; Saltzman, J.S.

    1991-08-12

    Adaptive Mesh Refinement (AMR) in conjunction with high order upwind finite difference methods has been used effectively on a variety of problems. In this paper we discuss an implementation of an AMR finite difference method that solves the equations of gas dynamics with two material species in three dimensions. An equation for the evolution of volume fractions augments the gas dynamics system. The material interface is preserved and tracked from the volume fractions using a piecewise linear reconstruction technique. 14 refs., 4 figs.

  3. Interactive solution-adaptive grid generation procedure

    NASA Technical Reports Server (NTRS)

    Henderson, Todd L.; Choo, Yung K.; Lee, Ki D.

    1992-01-01

    TURBO-AD is an interactive solution-adaptive grid generation program under development. The program combines an interactive algebraic grid generation technique and a solution adaptive grid generation technique into a single interactive package. The control point form uses a sparse collection of control points to algebraically generate a field grid. This technique provides local grid control capability and is well suited to interactive work due to its speed and efficiency. A mapping from the physical domain to a parametric domain was used to alleviate difficulties encountered near outwardly concave boundaries in the control point technique. Therefore, all grid modifications are performed on the unit square in the parametric domain, and the new adapted grid is then mapped back to the physical domain. The grid adaption is achieved by adapting the control points to a numerical solution in the parametric domain using control sources obtained from the flow properties. Then a new modified grid is generated from the adapted control net. This process is efficient because the number of control points is much less than the number of grid points and the generation of the grid is an efficient algebraic process. TURBO-AD provides the user with both local and global controls.

  4. Error estimation and adaptive mesh refinement for parallel analysis of shell structures

    NASA Technical Reports Server (NTRS)

    Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

    1994-01-01

    The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers, where access to neighboring elements residing on different processors may incur significant overhead. In addition, such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on a variational formulation of the element stiffness and are element-dependent; their derivations are retained for developmental purposes. The second two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness, which we demonstrate through mesh refinement for two-dimensional plane stress problems. The parallelization of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.

  5. Error estimation and adaptive mesh refinement for parallel analysis of shell structures

    NASA Astrophysics Data System (ADS)

    Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

    1994-11-01

    The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers, where access to neighboring elements residing on different processors may incur significant overhead. In addition, such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on a variational formulation of the element stiffness and are element-dependent; their derivations are retained for developmental purposes. The second two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness, which we demonstrate through mesh refinement for two-dimensional plane stress problems. The parallelization of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.

  6. Implementation and application of adaptive mesh refinement for thermochemical mantle convection studies

    NASA Astrophysics Data System (ADS)

    Leng, Wei; Zhong, Shijie

    2011-04-01

    Numerical modeling of mantle convection is challenging. Owing to the multiscale nature of mantle dynamics, high resolution is often required in localized regions, with coarser resolution being sufficient elsewhere. When investigating thermochemical mantle convection, high resolution is required to resolve sharp and often discontinuous boundaries between distinct chemical components. In this paper, we present a 2-D finite element code with adaptive mesh refinement techniques for simulating compressible thermochemical mantle convection. By comparing model predictions with a range of analytical and previously published benchmark solutions, we demonstrate the accuracy of our code. By refining and coarsening the mesh according to certain criteria and dynamically adjusting the number of particles in each element, our code can simulate such problems efficiently, dramatically reducing the computational requirements (in terms of memory and CPU time) when compared to a fixed, uniform mesh simulation. The resolving capabilities of the technique are further highlighted by examining plume-induced entrainment in a thermochemical mantle convection simulation.

  7. Adaptive Mesh Refinement and Adaptive Time Integration for Electrical Wave Propagation on the Purkinje System.

    PubMed

    Ying, Wenjun; Henriquez, Craig S

    2015-01-01

    An algorithm that is adaptive in both space and time is presented for simulating electrical wave propagation in the Purkinje system of the heart. The equations governing the distribution of electric potential over the system are solved in time with the method of lines. At each timestep, by an operator splitting technique, the space-dependent but linear diffusion part and the nonlinear but space-independent reactions part in the partial differential equations are integrated separately with implicit schemes, which have better stability and allow larger timesteps than explicit ones. The linear diffusion equation on each edge of the system is spatially discretized with the continuous piecewise linear finite element method. The adaptive algorithm can automatically recognize when and where the electrical wave starts to leave or enter the computational domain due to external current/voltage stimulation, self-excitation, or local change of membrane properties. Numerical examples demonstrating efficiency and accuracy of the adaptive algorithm are presented.
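
    The splitting can be sketched on a single edge: the space-coupled but linear diffusion and the nonlinear but node-local reactions are each advanced implicitly. The scalar kinetics f and the dense tridiagonal solve below are illustrative stand-ins for the membrane model and the finite element discretization.

        import numpy as np

        def diffusion_step(v, dt, h):
            """Backward-Euler step of the linear, space-coupled part
            v_t = v_xx along one edge; the end values are held fixed."""
            n = v.size
            a = dt / (h * h)
            A = (np.diag((1.0 + 2.0 * a) * np.ones(n - 2))
                 + np.diag(-a * np.ones(n - 3), 1)
                 + np.diag(-a * np.ones(n - 3), -1))
            b = v[1:-1].copy()
            b[0] += a * v[0]
            b[-1] += a * v[-1]
            out = v.copy()
            out[1:-1] = np.linalg.solve(A, b)
            return out

        def reaction_step(v, dt, f, dfdv, newton_iters=4):
            """Backward-Euler step of the nonlinear but node-local kinetics
            dv/dt = f(v), solved at every node by a few Newton iterations."""
            w = v.copy()
            for _ in range(newton_iters):
                w -= (w - v - dt * f(w)) / (1.0 - dt * dfdv(w))
            return w

        def split_step(v, dt, h, f, dfdv):
            """Operator splitting: implicit diffusion, then implicit
            reactions, each unconditionally stable on its own."""
            return reaction_step(diffusion_step(v, dt, h), dt, f, dfdv)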

  8. Adaptive Mesh Refinement and Adaptive Time Integration for Electrical Wave Propagation on the Purkinje System

    PubMed Central

    Ying, Wenjun; Henriquez, Craig S.

    2015-01-01

    An algorithm that is adaptive in both space and time is presented for simulating electrical wave propagation in the Purkinje system of the heart. The equations governing the distribution of electric potential over the system are solved in time with the method of lines. At each timestep, by an operator splitting technique, the space-dependent but linear diffusion part and the nonlinear but space-independent reactions part in the partial differential equations are integrated separately with implicit schemes, which have better stability and allow larger timesteps than explicit ones. The linear diffusion equation on each edge of the system is spatially discretized with the continuous piecewise linear finite element method. The adaptive algorithm can automatically recognize when and where the electrical wave starts to leave or enter the computational domain due to external current/voltage stimulation, self-excitation, or local change of membrane properties. Numerical examples demonstrating efficiency and accuracy of the adaptive algorithm are presented. PMID:26581455

  9. Single-pass GPU-raycasting for structured adaptive mesh refinement data

    NASA Astrophysics Data System (ADS)

    Kaehler, Ralf; Abel, Tom

    2013-01-01

    Structured Adaptive Mesh Refinement (SAMR) is a popular numerical technique to study processes with high spatial and temporal dynamic range. It reduces computational requirements by adapting the lattice on which the underlying differential equations are solved to most efficiently represent the solution. Particularly in astrophysics and cosmology, such simulations can now capture spatial scales ten orders of magnitude apart and more. The irregular locations and extensions of the refined regions in the SAMR scheme, and the fact that different resolution levels partially overlap, pose a challenge for GPU-based direct volume rendering methods. kD-trees have proven advantageous for subdividing the data domain into non-overlapping blocks of equally sized cells, optimal for the texture units of current graphics hardware, but previous GPU-supported raycasting approaches for SAMR data using this data structure required a separate rendering pass for each node, preventing the application of many advanced lighting schemes that require simultaneous access to more than one block of cells. In this paper we present the first single-pass GPU-raycasting algorithm for SAMR data that is based on a kD-tree. The tree is efficiently encoded by a set of 3D textures, which allows complete rays to be sampled adaptively, entirely on the GPU and without any CPU interaction. We discuss two different data storage strategies to access the grid data on the GPU and apply them to several datasets to prove the benefits of the proposed method.

  10. Development of three-dimensional hydrodynamical and MHD codes using Adaptive Mesh Refinement scheme with TVD

    NASA Astrophysics Data System (ADS)

    den, M.; Yamashita, K.; Ogawa, T.

    Three-dimensional (3D) hydrodynamical (HD) and magnetohydrodynamical (MHD) simulation codes using an adaptive mesh refinement (AMR) scheme are developed. This method places fine grids over areas of interest, such as shock waves, in order to obtain high resolution, and places uniform grids with lower resolution elsewhere. The AMR scheme can thus provide a combination of high solution accuracy and computational robustness. We demonstrate numerical results for a simplified model of shock propagation, which strongly indicate that the AMR techniques have the ability to resolve disturbances in interplanetary space. We also present simulation results for the MHD code.

  11. Geophysical astrophysical spectral-element adaptive refinement (GASpAR): Object-oriented h-adaptive fluid dynamics simulation

    NASA Astrophysics Data System (ADS)

    Rosenberg, Duane; Fournier, Aimé; Fischer, Paul; Pouquet, Annick

    2006-06-01

    An object-oriented geophysical and astrophysical spectral-element adaptive refinement (GASpAR) code is introduced. Like most spectral-element codes, GASpAR combines finite-element efficiency with spectral-method accuracy. It is also designed to be flexible enough for a range of geophysics and astrophysics applications where turbulence or other complex multiscale problems arise. The formalism accommodates both conforming and non-conforming elements. Several aspects of this code derive from existing methods, but here are synthesized into a new formulation of dynamic adaptive refinement (DARe) of non-conforming h-type. As a demonstration of the code, several new 2D test cases are introduced that have time-dependent analytic solutions and exhibit localized flow features, including the 2D Burgers equation with straight, curved-radial and oblique-colliding fronts. These are proposed as standard test problems for comparable DARe codes. Quantitative errors are reported for 2D spatial and temporal convergence of DARe.

  12. Using high-order methods on adaptively refined block-structured meshes - discretizations, interpolations, and filters.

    SciTech Connect

    Ray, Jaideep; Lefantzi, Sophia; Najm, Habib N.; Kennedy, Christopher A.

    2006-01-01

    Block-structured adaptively refined meshes (SAMR) strive for efficient resolution of partial differential equations (PDEs) solved on large computational domains by clustering mesh points only where required by large gradients. Previous work has indicated that fourth-order convergence can be achieved on such meshes by using a suitable combination of high-order discretizations, interpolations, and filters and can deliver significant computational savings over conventional second-order methods at engineering error tolerances. In this paper, we explore the interactions between the errors introduced by discretizations, interpolations and filters. We develop general expressions for high-order discretizations, interpolations, and filters, in multiple dimensions, using a Fourier approach, facilitating the high-order SAMR implementation. We derive a formulation for the necessary interpolation order for given discretization and derivative orders. We also illustrate this order relationship empirically using one and two-dimensional model problems on refined meshes. We study the observed increase in accuracy with increasing interpolation order. We also examine the empirically observed order of convergence, as the effective resolution of the mesh is increased by successively adding levels of refinement, with different orders of discretization, interpolation, or filtering.

  13. Arbitrary Lagrangian-Eulerian Method with Local Structured Adaptive Mesh Refinement for Modeling Shock Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2001-10-22

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. This method facilitates the solution of problems currently at and beyond the boundary of soluble problems by traditional ALE methods by focusing computational resources where they are required through dynamic adaption. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.

  14. Parallel Computation of Three-Dimensional Flows using Overlapping Grids with Adaptive Mesh Refinement

    SciTech Connect

    Henshaw, W; Schwendeman, D

    2007-11-15

    This paper describes an approach for the numerical solution of time-dependent partial differential equations in complex three-dimensional domains. The domains are represented by overlapping structured grids, and block-structured adaptive mesh refinement (AMR) is employed to locally increase the grid resolution. In addition, the numerical method is implemented on parallel distributed-memory computers using a domain-decomposition approach. The implementation is flexible so that each base grid within the overlapping grid structure and its associated refinement grids can be independently partitioned over a chosen set of processors. A modified bin-packing algorithm is used to specify the partition for each grid so that the computational work is evenly distributed amongst the processors. All components of the AMR algorithm such as error estimation, regridding, and interpolation are performed in parallel. The parallel time-stepping algorithm is illustrated for initial-boundary-value problems involving a linear advection-diffusion equation and the (nonlinear) reactive Euler equations. Numerical results are presented for both equations to demonstrate the accuracy and correctness of the parallel approach. Exact solutions of the advection-diffusion equation are constructed, and these are used to check the corresponding numerical solutions for a variety of tests involving different overlapping grids, different numbers of refinement levels and refinement ratios, and different numbers of processors. The problem of planar shock diffraction by a sphere is considered as an illustration of the numerical approach for the Euler equations, and a problem involving the initiation of a detonation from a hot spot in a T-shaped pipe is considered to demonstrate the numerical approach for the reactive case. For both problems, the solutions are shown to be well resolved on the finest grid. The parallel performance of the approach is examined in detail for the shock diffraction problem.

  15. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-02-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach to improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphics processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented on the GPU, allowing hundreds of patches to be advanced in parallel. The computational overhead associated with data transfer between the CPU and GPU is carefully reduced by utilizing the GPU's capability for asynchronous memory copies, and the computation of ghost-zone values for each patch is hidden by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations on different hardware implementations, in which detailed timing analyses provide comparisons between computations with and without GPU acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096^3 effective resolution and 16 GPUs with 8192^3 effective resolution, respectively.

  16. A Domain-Decomposed Multilevel Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2000-01-01

    Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
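
    The space-filling-curve idea can be sketched in a few lines: assign each cell a key by interleaving the bits of its integer coordinates, sort by key, and cut the resulting one-dimensional ordering into equal contiguous chunks. The Morton (Z-order) key below is an illustrative choice on our part; the solver's actual curve and partitioning details may differ.

        def morton3(i, j, k, bits=10):
            """Interleave the bits of (i, j, k) into a single Z-order key."""
            key = 0
            for b in range(bits):
                key |= (((i >> b) & 1) << (3*b)) \
                     | (((j >> b) & 1) << (3*b + 1)) \
                     | (((k >> b) & 1) << (3*b + 2))
            return key

        def decompose(cells, nproc):
            """Sort cells along the curve and split into contiguous chunks."""
            order = sorted(cells, key=lambda c: morton3(*c))
            n = len(order)
            return [order[p*n // nproc:(p + 1)*n // nproc] for p in range(nproc)]

        cells = [(i, j, k) for i in range(4) for j in range(4) for k in range(4)]
        parts = decompose(cells, 4)
        print([len(p) for p in parts], parts[0][:4])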

  17. Axisymmetric modeling of cometary mass loading on an adaptively refined grid: MHD results

    NASA Technical Reports Server (NTRS)

    Gombosi, Tamas I.; Powell, Kenneth G.; De Zeeuw, Darren L.

    1994-01-01

    The first results of an axisymmetric magnetohydrodynamic (MHD) model of the interaction of an expanding cometary atmosphere with the solar wind are presented. The model assumes that far upstream the plasma flow lines are parallel to the magnetic field vector. The effects of mass loading and ion-neutral friction are taken into account by the governing equations, which are solved on an adaptively refined unstructured grid using a Monotone Upstream Centered Schemes for Conservation Laws (MUSCL)-type numerical technique. The combination of the adaptive refinement with the MUSCL scheme allows the entire cometary atmosphere to be modeled, while still resolving both the shock and the region near the nucleus of the comet. The main findings are the following: (1) A shock is formed approximately 0.45 Mkm upstream of the comet (its location is controlled by the sonic and Alfvenic Mach numbers of the ambient solar wind flow and by the cometary mass addition rate). (2) A contact surface is formed approximately 5,600 km upstream of the nucleus, separating an outward expanding cometary ionosphere from the nearly stagnating solar wind flow. The location of the contact surface is controlled by the upstream flow conditions, the mass loading rate and the ion-neutral drag. The contact surface is also the boundary of the diamagnetic cavity. (3) A closed inner shock terminates the supersonic expansion of the cometary ionosphere. This inner shock is closer to the nucleus on the dayside than on the nightside.

  18. Performance Characteristics of an Adaptive Mesh Refinement Calculation on Scalar and Vector Platforms

    SciTech Connect

    Welcome, Michael; Rendleman, Charles; Oliker, Leonid; Biswas, Rupak

    2006-01-31

    Adaptive mesh refinement (AMR) is a powerful technique that reduces the resources necessary to solve otherwise intractable problems in computational science. The AMR strategy solves the problem on a relatively coarse grid, and dynamically refines it in regions requiring higher resolution. However, AMR codes tend to be far more complicated than their uniform grid counterparts due to the software infrastructure necessary to dynamically manage the hierarchical grid framework. Despite this complexity, it is generally believed that future multi-scale applications will increasingly rely on adaptive methods to study problems at unprecedented scale and resolution. Recently, a new generation of parallel-vector architectures have become available that promise to achieve extremely high sustained performance for a wide range of applications, and are the foundation of many leadership-class computing systems worldwide. It is therefore imperative to understand the tradeoffs between conventional scalar and parallel-vector platforms for solving AMR-based calculations. In this paper, we examine the HyperCLaw AMR framework to compare and contrast performance on the Cray X1E, IBM Power3 and Power5, and SGI Altix. To the best of our knowledge, this is the first work that investigates and characterizes the performance of an AMR calculation on modern parallel-vector systems.

  19. Isosurface Computation Made Simple: Hardware Acceleration, Adaptive Refinement and Tetrahedral Stripping

    SciTech Connect

    Pascucci, V

    2004-02-18

    This paper presents a simple approach for rendering isosurfaces of a scalar field. Using the vertex programming capability of commodity graphics cards, we transfer the cost of computing an isosurface from the Central Processing Unit (CPU), running the main application, to the Graphics Processing Unit (GPU), rendering the images. We consider a tetrahedral decomposition of the domain and draw one quadrangle (quad) primitive per tetrahedron. A vertex program transforms the quad into the piece of isosurface within the tetrahedron (see Figure 2). In this way, the main application is only devoted to streaming the vertices of the tetrahedra from main memory to the graphics card. For adaptively refined rectilinear grids, the optimization of this streaming process leads to the definition of a new 3D space-filling curve, which generalizes the 2D Sierpinski curve used for efficient rendering of triangulated terrains. We maintain the simplicity of the scheme when constructing view-dependent adaptive refinements of the domain mesh. In particular, we guarantee the absence of T-junctions by satisfying local bounds in our nested error basis. The expensive stage of fixing cracks in the mesh is completely avoided. We discuss practical tradeoffs in the distribution of the workload between the application and the graphics hardware. With current GPUs it is convenient to perform certain computations on the main CPU. Beyond performance considerations, which will change with new generations of GPUs, this approach has the major advantage of completely avoiding the storage in memory of the isosurface vertices and triangles.

  20. Simulating Multi-scale Fluid Flows Using Adaptive Mesh Refinement Methods

    NASA Astrophysics Data System (ADS)

    Rowe, Kristopher; Lamb, Kevin

    2015-11-01

    When modelling flows with disparate length scales one must use a computational mesh that is fine enough to capture the smallest phenomena of interest. Traditional computational fluid dynamics models apply a mesh of uniform resolution to the entire computational domain; however, if the smallest scales of interest are isolated, much of the computational resources used in these simulations will be wasted in regions where they are not needed. Adaptive mesh refinement methods seek to apply resolution only where it is needed. Beginning with a single coarse grid, a nested hierarchy of block-structured grids is built in regions of the fluid flow where more resolution is necessary. As the fluid flow varies in time, this hierarchy of grids is dynamically rebuilt to follow the phenomena of interest. Through the modelling of the interaction of vortices with wall boundary layers, it will be demonstrated that adaptive mesh refinement methods produce results equivalent to those of traditional single-resolution codes while using fewer processors, less memory, and less wall-clock time. Additionally, it is possible to model such flows to higher Reynolds numbers than have been feasible previously. This work was supported by NSERC and SHARCNET.

  1. A new adaptive mesh refinement data structure with an application to detonation

    NASA Astrophysics Data System (ADS)

    Ji, Hua; Lien, Fue-Sang; Yee, Eugene

    2010-11-01

    A new Cell-based Structured Adaptive Mesh Refinement (CSAMR) data structure is developed. In our CSAMR data structure, Cartesian-like indices are used to identify each cell. With these stored indices, the information on the parent, children and neighbors of a given cell can be accessed simply and efficiently. Owing to the use of these indices, the computer memory required for storage of the proposed AMR data structure is only 5/8 of a word per cell, in contrast to the conventional oct-tree [P. MacNeice, K.M. Olson, C. Mobarry, R. deFainchtein, C. Packer, PARAMESH: a parallel adaptive mesh refinement community toolkit, Comput. Phys. Commun. 126 (2000) 330] and the fully threaded tree (FTT) [A.M. Khokhlov, Fully threaded tree algorithms for adaptive mesh fluid dynamics simulations, J. Comput. Phys. 143 (1998) 519] data structures, which require, respectively, 19 and 2 3/8 words per cell for storage of the connectivity information. Because the connectivity information (e.g., parent, children and neighbors) of a cell in our proposed AMR data structure can be accessed using only the cell indices, a tree structure, which was required in previous approaches for the organization of the AMR data, is no longer needed. Instead, a much simpler hash table structure is used to maintain the AMR data, with the entry keys in the hash table obtained directly from the explicitly stored cell indices. The proposed AMR data structure simplifies the implementation and parallelization of an AMR code. Two three-dimensional test cases are used to illustrate and evaluate the computational performance of the new CSAMR data structure.
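
    The essence of index-based addressing can be sketched briefly: with (level, i, j, k) stored for each cell, parent, children, and face neighbors follow from integer arithmetic, so a hash table keyed by the indices replaces an explicit tree. The following fragment is an illustrative reconstruction of the idea, not the CSAMR implementation.

        def parent(level, i, j, k):
            return (level - 1, i // 2, j // 2, k // 2)

        def children(level, i, j, k):
            return [(level + 1, 2*i + a, 2*j + b, 2*k + c)
                    for a in (0, 1) for b in (0, 1) for c in (0, 1)]

        def neighbor(level, i, j, k, axis, step):
            idx = [i, j, k]
            idx[axis] += step
            return (level, *idx)

        cells = {(0, 0, 0, 0): "root"}          # hash table keyed by indices
        for key in children(0, 0, 0, 0):        # refine the root cell
            cells[key] = "leaf"

        probe = neighbor(1, 0, 0, 0, axis=0, step=1)   # +x neighbor of a child
        print(probe, probe in cells, parent(*probe) in cells)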

  2. Refinements in husbandry, care and common procedures for non-human primates: Ninth report of the BVAAWF/FRAME/RSPCA/UFAW Joint Working Group on Refinement.

    PubMed

    Jennings, M; Prescott, M J; Buchanan-Smith, Hannah M; Gamble, Malcolm R; Gore, Mauvis; Hawkins, Penny; Hubrecht, Robert; Hudson, Shirley; Jennings, Maggy; Keeley, Joanne R; Morris, Keith; Morton, David B; Owen, Steve; Pearce, Peter C; Prescott, Mark J; Robb, David; Rumble, Rob J; Wolfensohn, Sarah; Buist, David

    2009-04-01

    Whenever animals are used in research, minimizing pain and distress and promoting good welfare should be as important an objective as achieving the experimental results. This is important for humanitarian reasons, for good science, for economic reasons and in order to satisfy the broad legal principles in international legislation. It is possible to refine both husbandry and procedures to minimize suffering and improve welfare in a number of ways, and this can be greatly facilitated by ensuring that up-to-date information is readily available. The need to provide such information led the British Veterinary Association Animal Welfare Foundation (BVAAWF), the Fund for the Replacement of Animals in Medical Experiments (FRAME), the Royal Society for the Prevention of Cruelty to Animals (RSPCA) and the Universities Federation for Animal Welfare (UFAW) to establish a Joint Working Group on Refinement (JWGR) in the UK. The chair is Professor David Morton and the secretariat is provided by the RSPCA. This report is the ninth in the JWGR series. The RSPCA is opposed to the use of animals in experiments that cause pain, suffering, distress or lasting harm, and together with FRAME has particular concerns about the continued use of non-human primates. The replacement of primate experiments is a primary goal for the RSPCA and FRAME. However, both organizations share with others in the Working Group the common aim of replacing primate experiments wherever possible, reducing suffering and improving welfare while primate use continues. The reports of the refinement workshops are intended to help achieve these aims. This report of the BVAAWF/FRAME/RSPCA/UFAW Joint Working Group on Refinement (JWGR) sets out practical guidance on refining the

  3. Standard and goal-oriented adaptive mesh refinement applied to radiation transport on 2D unstructured triangular meshes

    SciTech Connect

    Yaqi Wang; Jean C. Ragusa

    2011-02-01

    Standard and goal-oriented adaptive mesh refinement (AMR) techniques are presented for the linear Boltzmann transport equation. A posteriori error estimates are employed to drive the AMR process and are based on angular-moment information rather than on directional information, leading to direction-independent adapted meshes. An error estimate based on a two-mesh approach and a jump-based error indicator are compared for various test problems. In addition to the standard AMR approach, where the global error in the solution is diminished, a goal-oriented AMR procedure is devised and aims at reducing the error in user-specified quantities of interest. The quantities of interest are functionals of the solution and may include, for instance, point-wise flux values or average reaction rates in a subdomain. A high-order (up to order 4) Discontinuous Galerkin technique with standard upwinding is employed for the spatial discretization; the discrete ordinates method is used to treat the angular variable.
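
    A jump-based indicator of the kind compared above is simple to state: a cell is flagged for refinement when the inter-cell solution jumps it touches exceed a tolerance. The one-dimensional, piecewise-constant sketch below is a deliberate simplification of the idea (the paper works with discontinuous Galerkin transport solutions); the threshold and test data are invented.

        import numpy as np

        u = np.where(np.linspace(0, 1, 32) < 0.5, 1.0, 0.1)   # step-like data
        jumps = np.abs(np.diff(u))                            # |u_{i+1} - u_i|
        eta = np.zeros_like(u)
        eta[:-1] += jumps          # each jump charges both adjacent cells
        eta[1:] += jumps
        flag = eta > 0.1 * eta.max()                          # refine criterion
        print("cells flagged for refinement:", np.flatnonzero(flag))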

  4. Parallel adaptive mesh refinement method based on WENO finite difference scheme for the simulation of multi-dimensional detonation

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang

    2015-10-01

    For numerical simulation of detonation, the computational cost of using uniform meshes is large due to the vast separation in both time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper aims to propose an AMR method with high order accuracy for numerical investigation of multi-dimensional detonation. A well-designed AMR method based on the finite difference weighted essentially non-oscillatory (WENO) scheme, named AMR&WENO, is proposed. A new cell-based data structure is used to organize the adaptive meshes. The new data structure makes it possible for cells to communicate with each other quickly and easily. In order to develop an AMR method with high order accuracy, high order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload-balanced parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that the AMR&WENO method is accurate and has high resolution. Moreover, we evaluate and compare the performance of the uniform mesh WENO scheme and the parallel AMR&WENO method. The comparison results provide further insight into the high performance of the parallel AMR&WENO method.

  5. Relativistic magnetohydrodynamics in dynamical spacetimes: A new adaptive mesh refinement implementation

    SciTech Connect

    Etienne, Zachariah B.; Liu, Yuk Tung; Shapiro, Stuart L.

    2010-10-15

    We have written and tested a new general relativistic magnetohydrodynamics code, capable of evolving magnetohydrodynamic (MHD) fluids in dynamical spacetimes with adaptive mesh refinement (AMR). Our code solves the Einstein-Maxwell-MHD system of coupled equations in full 3+1 dimensions, evolving the metric via the Baumgarte-Shapiro-Shibata-Nakamura formalism and the MHD and magnetic induction equations via a conservative, high-resolution shock-capturing scheme. The induction equations are recast as an evolution equation for the magnetic vector potential, which exists on a grid that is staggered with respect to the hydrodynamic and metric variables. The divergenceless constraint ∇·B = 0 is satisfied automatically because B is computed as the curl of the vector potential. Our MHD scheme is fully compatible with AMR, so that fluids at AMR refinement boundaries maintain ∇·B = 0. In simulations with uniform grid spacing, our MHD scheme is numerically equivalent to a commonly used, staggered-mesh constrained-transport scheme. We present code validation test results, both in Minkowski and curved spacetimes. They include magnetized shocks, nonlinear Alfven waves, cylindrical explosions, cylindrical rotating disks, magnetized Bondi tests, and the collapse of a magnetized rotating star. Some of the more stringent tests involve black holes. We find good agreement between analytic and numerical solutions in these tests, and achieve convergence at the expected order.
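
    Why the vector-potential formulation keeps ∇·B = 0 at the discrete level can be seen in a few lines: if the face-centered B is the discrete curl of A, the cell-centered divergence telescopes to zero identically. The 2D sketch below (our demo, with A = A_z stored at cell corners and an arbitrary smooth test potential) verifies this to machine precision.

        import numpy as np

        nx = ny = 64
        dx = dy = 1.0 / nx
        x = np.linspace(0.0, 1.0, nx + 1)
        A = np.sin(2*np.pi*x)[:, None] * np.cos(2*np.pi*x)[None, :]  # A_z at nodes

        Bx = (A[:, 1:] - A[:, :-1]) / dy      # Bx =  dA_z/dy on x-faces
        By = -(A[1:, :] - A[:-1, :]) / dx     # By = -dA_z/dx on y-faces

        div = (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy
        print("max |div B| =", np.abs(div).max())   # telescopes to machine zero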

  6. Galaxy Mergers with Adaptive Mesh Refinement: Star Formation and Hot Gas Outflow

    SciTech Connect

    Kim, Ji-hoon; Wise, John H.; Abel, Tom; /KIPAC, Menlo Park /Stanford U., Phys. Dept.

    2011-06-22

    In hierarchical structure formation, merging of galaxies is frequent and known to dramatically affect their properties. To comprehend these interactions, high-resolution simulations are indispensable because of the nonlinear coupling between pc and Mpc scales. To this end, we present the first adaptive mesh refinement (AMR) simulation of two merging, low mass, initially gas-rich galaxies (1.8 × 10^10 M_⊙ each), including star formation and feedback. With the galaxies resolved by approximately 2 × 10^7 total computational elements, we achieve unprecedented resolution of the multiphase interstellar medium, finding a widespread starburst in the merging galaxies via shock-induced star formation. The high dynamic range of AMR also allows us to follow the interplay between the galaxies and their embedding medium, depicting how galactic outflows and a hot metal-rich halo form. These results demonstrate that AMR provides a powerful tool for understanding interacting galaxies.

  7. Detached Eddy Simulation of the UH-60 Rotor Wake Using Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.; Ahmad, Jasim U.

    2012-01-01

    Time-dependent Navier-Stokes flow simulations have been carried out for a UH-60 rotor with simplified hub in forward flight and hover flight conditions. Flexible rotor blades and flight trim conditions are modeled and established by loosely coupling the OVERFLOW Computational Fluid Dynamics (CFD) code with the CAMRAD II helicopter comprehensive code. High order spatial differences, Adaptive Mesh Refinement (AMR), and Detached Eddy Simulation (DES) are used to obtain highly resolved vortex wakes, where the largest turbulent structures are captured. Special attention is directed towards ensuring the dual time accuracy is within the asymptotic range, and verifying the loose coupling convergence process using AMR. The AMR/DES simulation produced vortical worms for forward flight and hover conditions, similar to previous results obtained for the TRAM rotor in hover. AMR proved to be an efficient means to capture a rotor wake without a priori knowledge of the wake shape.

  8. Numerical Relativistic Magnetohydrodynamics with ADER Discontinuous Galerkin methods on adaptively refined meshes.

    NASA Astrophysics Data System (ADS)

    Zanotti, O.; Dumbser, M.; Fambri, F.

    2016-05-01

    We describe a new method for the solution of the ideal MHD equations in special relativity which adopts the following strategy: (i) the main scheme is based on Discontinuous Galerkin (DG) methods, allowing for an arbitrary accuracy of order N+1, where N is the degree of the basis polynomials; (ii) in order to cope with oscillations at discontinuities, an "a posteriori" sub-cell limiter is activated, which scatters the DG polynomials of the previous time-step onto a set of 2N+1 sub-cells, over which the solution is recomputed by means of a robust finite volume scheme; (iii) a local spacetime Discontinuous Galerkin predictor is applied both on the main grid of the DG scheme and on the sub-grid of the finite volume scheme; (iv) adaptive mesh refinement (AMR) with local time-stepping is used. We validate the new scheme and comment on its potential applications in high energy astrophysics.

  9. 3D Adaptive Mesh Refinement Simulations of Pellet Injection in Tokamaks

    SciTech Connect

    R. Samtaney; S.C. Jardin; P. Colella; D.F. Martin

    2003-10-20

    We present results of Adaptive Mesh Refinement (AMR) simulations of the pellet injection process, a proven method of refueling tokamaks. AMR is a computationally efficient way to provide the resolution required to simulate realistic pellet sizes relative to device dimensions. The mathematical model comprises the single-fluid MHD equations with source terms in the continuity equation, along with a pellet ablation rate model. The numerical method developed is an explicit unsplit upwinding treatment of the 8-wave formulation, coupled with a MAC projection method to enforce the solenoidal property of the magnetic field. The Chombo framework is used for AMR. The role of the E × B drift in mass redistribution during inside and outside pellet injections is emphasized.
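
    The projection step mentioned above can be illustrated compactly: solve a Poisson equation for a potential whose gradient carries the spurious divergence, then subtract it. The sketch below uses a periodic, collocated, FFT-based Poisson solve for brevity; the paper's MAC projection operates on a staggered grid, so this conveys only the idea, not the method.

        import numpy as np

        n = 64
        k = 2*np.pi*np.fft.fftfreq(n, d=1.0/n)
        KX, KY = np.meshgrid(k, k, indexing="ij")
        K2 = KX**2 + KY**2
        K2[0, 0] = 1.0                       # leave the mean mode untouched

        def div(Bx, By):
            bxh, byh = np.fft.fft2(Bx), np.fft.fft2(By)
            return np.real(np.fft.ifft2(1j*KX*bxh + 1j*KY*byh))

        rng = np.random.default_rng(0)
        Bx, By = rng.standard_normal((2, n, n))        # field with divergence

        phi_hat = np.fft.fft2(div(Bx, By)) / (-K2)     # solve lap(phi) = div(B)
        Bx -= np.real(np.fft.ifft2(1j*KX*phi_hat))     # B <- B - grad(phi)
        By -= np.real(np.fft.ifft2(1j*KY*phi_hat))
        print("max |div B| after projection:", np.abs(div(Bx, By)).max())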

  10. Dynamic Implicit 3D Adaptive Mesh Refinement for Non-Equilibrium Radiation Diffusion

    SciTech Connect

    Philip, Bobby; Wang, Zhen; Berrill, Mark A; Rodriguez Rodriguez, Manuel; Pernice, Michael

    2014-01-01

    The time-dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems, we describe our research on combining techniques that will also find use more broadly for long-term time integration of nonlinear multiphysics systems: implicit time integration for efficient long-term integration of stiff multiphysics systems; local control theory based step size control to minimize the required global number of time steps while controlling accuracy; dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs; Jacobian-free Newton-Krylov methods on AMR grids for efficient nonlinear solution; and optimal multilevel preconditioner components that provide level-independent linear solver convergence.
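
    The Jacobian-free Newton-Krylov ingredient is worth a small sketch: the Krylov solver needs only Jacobian-vector products, which can be approximated by a finite-difference directional derivative, so the Jacobian is never assembled. The toy residual below merely stands in for a discretized radiation-diffusion residual; it is an illustration, not the authors' solver.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def F(u):                        # toy nonlinear residual
            return u**3 + u - 1.0

        u = np.zeros(50)
        for it in range(12):
            r = F(u)
            if np.linalg.norm(r) < 1e-10:
                break
            eps = 1e-7
            J = LinearOperator((u.size, u.size),                 # matrix-free J
                               matvec=lambda v: (F(u + eps*v) - r) / eps)
            du, _ = gmres(J, -r, atol=1e-10)
            u += du
        print("Newton iterations:", it, " |F(u)| =", np.linalg.norm(F(u)))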

  11. The GeoClaw software for depth-averaged flows with adaptive refinement

    USGS Publications Warehouse

    Berger, M.J.; George, D.L.; LeVeque, R.J.; Mandli, K.T.

    2011-01-01

    Many geophysical flow or wave propagation problems can be modeled with two-dimensional depth-averaged equations, of which the shallow water equations are the simplest example. We describe the GeoClaw software that has been designed to solve problems of this nature, consisting of open source Fortran programs together with Python tools for the user interface and flow visualization. This software uses high-resolution shock-capturing finite volume methods on logically rectangular grids, including latitude-longitude grids on the sphere. Dry states are handled automatically to model inundation. The code incorporates adaptive mesh refinement to allow the efficient solution of large-scale geophysical problems. Examples are given illustrating its use for modeling tsunamis and dam-break flooding problems. Documentation and download information are available at www.clawpack.org/geoclaw.

  12. On the Computation of Integral Curves in Adaptive Mesh Refinement Vector Fields

    SciTech Connect

    Deines, Eduard; Weber, Gunther H.; Garth, Christoph; Van Straalen, Brian; Borovikov, Sergey; Martin, Daniel F.; Joy, Kenneth I.

    2011-06-27

    Integral curves, such as streamlines, streaklines, pathlines, and timelines, are an essential tool in the analysis of vector field structures, offering straightforward and intuitive interpretation of visualization results. While such curves have a long-standing tradition in vector field visualization, their application to Adaptive Mesh Refinement (AMR) simulation results poses unique problems. AMR is a highly effective discretization method for a variety of physical simulation problems and has recently been applied to the study of vector fields in flow and magnetohydrodynamic applications. The cell-centered nature of AMR data and discontinuities in the vector field representation arising from AMR level boundaries complicate the application of numerical integration methods to compute integral curves. In this paper, we propose a novel approach to alleviate these problems and show its application to streamline visualization in an AMR model of the magnetic field of the solar system as well as to a simulation of two incompressible viscous vortex rings merging.
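
    The core complication can be sketched in a few lines: every right-hand-side evaluation of the integrator must first locate the finest patch containing the query point, because that patch defines the vector field there. The fourth-order Runge-Kutta fragment below, with made-up patches and an analytic rotation field, only illustrates this lookup-then-evaluate pattern, not the paper's treatment of level-boundary discontinuities.

        import numpy as np

        patches = [                    # (level, (xlo, xhi, ylo, yhi))
            (0, (0.0, 1.0, 0.0, 1.0)),
            (1, (0.4, 0.6, 0.4, 0.6)),
        ]

        def velocity(p):
            """Sample the field from the finest patch containing p."""
            lev = max(l for l, (a, b, c, d) in patches
                      if a <= p[0] <= b and c <= p[1] <= d)
            return np.array([-(p[1] - 0.5), p[0] - 0.5]), lev

        def rk4_step(p, h):
            k1, lev = velocity(p)
            k2, _ = velocity(p + 0.5*h*k1)
            k3, _ = velocity(p + 0.5*h*k2)
            k4, _ = velocity(p + h*k3)
            return p + h/6.0*(k1 + 2*k2 + 2*k3 + k4), lev

        p = np.array([0.62, 0.50])     # streamline crosses the fine patch
        for _ in range(8):
            p, lev = rk4_step(p, 0.2)
            print(f"level {lev}: point {p.round(4)}")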

  13. Level-by-level artificial viscosity and visualization for MHD simulation with adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Hatori, Tomoharu; Ito, Atsushi M.; Nunami, Masanori; Usui, Hideyuki; Miura, Hideaki

    2016-08-01

    We propose a numerical method to determine the artificial viscosity in magnetohydrodynamics (MHD) simulations with the adaptive mesh refinement (AMR) method, where the artificial viscosity is adaptively changed according to the resolution level of the AMR hierarchy. Although the suitable value of the artificial viscosity depends on the governing equations and the model of the target problem, it can be determined by von Neumann stability analysis. By means of the new method, the "level-by-level artificial viscosity method," MHD simulations of the Rayleigh-Taylor instability (RTI) are carried out with the AMR method. The validity of the level-by-level artificial viscosity method is confirmed by comparing the linear growth rates of the RTI between the AMR simulations and simple simulations with a uniform grid and uniform artificial viscosity whose resolution matches that of the highest level of the AMR simulation. Moreover, in the nonlinear phase of the RTI, the secondary instability is clearly observed, and visualization of the hierarchical data structure of the AMR calculation shows high-resolution regions floating up like terraced fields. In applications of the method to general fluid simulations, the growth of small structures can be sufficiently reproduced, while the divergence of numerical solutions is suppressed.
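
    A minimal sketch of the level-by-level scaling, under one reading of the abstract: if the artificial viscosity enters as an explicit diffusive term, von Neumann analysis bounds it by roughly dx^2/(2 * dim * dt), so each level (with dx halved per level) receives its own viscosity from its own bound. All constants below are illustrative assumptions, not values from the paper.

        def level_viscosity(nu0, level, safety=0.5):
            """Viscosity on a level with spacing dx0 / 2**level; the dx^2
            stability bound shrinks by a factor of 4 per level."""
            return safety * nu0 / 4.0**level

        dx0, dt, dim = 1.0/64, 1.0e-4, 3
        nu0 = dx0**2 / (2*dim*dt)            # coarsest-level von Neumann bound
        for lev in range(4):
            print(f"level {lev}: dx = {dx0/2**lev:.5f}, "
                  f"nu = {level_viscosity(nu0, lev):.4f}")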

  14. Dynamically adaptive mesh refinement technique for image reconstruction in optical tomography.

    PubMed

    Soloviev, Vadim Y; Krasnosselskaia, Lada V

    2006-04-20

    A novel adaptive mesh technique is introduced for problems of image reconstruction in luminescence optical tomography. A dynamical adaptation of the three-dimensional scheme based on the finite-volume formulation reduces computational time and balances the ill-posed nature of the inverse problem. The arbitrary shape of the bounding surface is handled by an additional refinement of computational cells on the boundary. Dynamical shrinking of the search volume is introduced to improve computational performance and accuracy while locating the luminescence target. Light propagation in the medium is modeled by the telegraph equation, and the image-reconstruction algorithm is derived from the Fredholm integral equation of the first kind. Stability and computational efficiency of the introduced method are demonstrated for image reconstruction of one and two spherical luminescent objects embedded within a breastlike tissue phantom. Experimental measurements are simulated by the solution of the forward problem on a grid of 5x5 light guides attached to the surface of the phantom.

  15. Staggered grid lagrangian method with local structured adaptive mesh refinement for modeling shock hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliot, N S

    2000-09-26

    A new method for the solution of the unsteady Euler equations has been developed. The method combines staggered grid Lagrangian techniques with structured local adaptive mesh refinement (AMR). This method is a precursor to a more general adaptive arbitrary Lagrangian-Eulerian (ALE-AMR) algorithm under development, which will facilitate the solution of problems currently at and beyond the reach of traditional ALE methods by focusing computational resources where they are required. Many of the core issues involved in the development of the ALE-AMR method hinge upon the integration of AMR with a Lagrange step, which is the focus of the work described here. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. These new algorithmic components are first developed in one dimension and are then generalized to two dimensions. Solutions of several model problems involving shock hydrodynamics are presented and discussed.

  16. 40 CFR 80.128 - Alternative agreed upon procedures for refiners and importers.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Attest Engagements § 80.128... gasoline was produced or imported. Obtain the refiner's or importer's internal laboratory analyses for each... imported. Obtain refiner's or importer's internal lab analysis for each batch and agree the consistency...

  17. Development of a scalable gas-dynamics solver with adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Korkut, Burak

    There are various computational physics areas in which Direct Simulation Monte Carlo (DSMC) and Particle in Cell (PIC) methods are being employed. The accuracy of results from such simulations depends on the fidelity of the physical models being used. The computationally demanding nature of these problems makes them ideal candidates for modern supercomputers. The software developed to run such simulations also needs special attention so that maintainability and extensibility are considered alongside recent numerical methods and programming paradigms. Suited for gas-dynamics problems, a code called SUGAR (Scalable Unstructured Gas dynamics with Adaptive mesh Refinement) has recently been developed in C++ with MPI. Physical and numerical models were added to this framework to simulate ion thruster plumes. SUGAR is used to model the charge-exchange (CEX) reactions occurring between the neutral and ion species as well as the induced electric field effect due to ions. Multiple adaptive mesh refinement (AMR) meshes were used in order to capture the different physical length scales present in the flow. A multiple-thruster configuration was run to extend the studies to cases with no axial or radial symmetry, which can only be modeled with a three-dimensional simulation capability. The combined plume structure showed interactions between individual thrusters, which the AMR capability captured in an automated way. The back flow of ions was found to occur when CEX and momentum-exchange (MEX) collisions are present, and to be strongly enhanced when the induced electric field is considered. The ion energy distributions in the back flow region were obtained, and it was found that the inclusion of electric field modeling is the most important factor in determining their shape. The plume back flow structure was also examined for a triple-thruster, 3-D geometry case and it was found that the ion velocity in the back flow region appears to be

  18. An Adaptively-Refined, Cartesian, Cell-Based Scheme for the Euler and Navier-Stokes Equations. Ph.D. Thesis - Michigan Univ.

    NASA Technical Reports Server (NTRS)

    Coirier, William John

    1994-01-01

    A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to that of a structured solver formulation. A non-smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other, to experimental results and/or theory for a series of low and moderate Reynolds number flow fields. The most suitable viscous discretization is demonstrated for geometrically-complicated internal flows. For flows at high Reynolds numbers, both an altered grid-generation procedure and a

  19. Relativistic Flows Using Spatial And Temporal Adaptive Structured Mesh Refinement. I. Hydrodynamics

    SciTech Connect

    Wang, Peng; Abel, Tom; Zhang, Weiqun; /KIPAC, Menlo Park

    2007-04-02

    Astrophysical relativistic flow problems require high resolution three-dimensional numerical simulations. In this paper, we describe a new parallel three-dimensional code for simulations of special relativistic hydrodynamics (SRHD) using both spatially and temporally structured adaptive mesh refinement (AMR). We use the method of lines to discretize the SRHD equations spatially and a total variation diminishing (TVD) Runge-Kutta scheme for time integration. For spatial reconstruction, we have implemented the piecewise linear method (PLM), the piecewise parabolic method (PPM), third-order convex essentially non-oscillatory (CENO), and third- and fifth-order weighted essentially non-oscillatory (WENO) schemes. Flux is computed using either direct flux reconstruction or approximate Riemann solvers including HLL, modified Marquina flux, local Lax-Friedrichs flux formulas, and HLLC. The AMR part of the code is built on top of the cosmological Eulerian AMR code enzo, which uses the Berger-Colella AMR algorithm and is parallelized with dynamic load balancing using the widely available Message Passing Interface library. We discuss the coupling of the AMR framework with the relativistic solvers and show its performance on eleven test problems.
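
    The TVD Runge-Kutta time integrator cited above is short enough to write out; below is the standard three-stage, third-order Shu-Osher form, demonstrated on linear advection with first-order upwinding. The demo problem, resolution, and CFL number are our choices, not the paper's.

        import numpy as np

        def ssp_rk3(u, L, dt):
            """Three-stage, third-order TVD (SSP) Runge-Kutta step."""
            u1 = u + dt * L(u)
            u2 = 0.75*u + 0.25*(u1 + dt * L(u1))
            return u/3.0 + 2.0/3.0*(u2 + dt * L(u2))

        n = 200
        dx, dt = 1.0/n, 0.4/n                     # CFL = 0.4
        x = np.arange(n) * dx
        u0 = np.exp(-200*(x - 0.3)**2)
        L = lambda v: -(v - np.roll(v, 1)) / dx   # periodic upwind d/dx
        u = u0.copy()
        for _ in range(100):
            u = ssp_rk3(u, L, dt)
        print("total mass conserved:", np.isclose(u.sum(), u0.sum()))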

  20. MASS AND MAGNETIC DISTRIBUTIONS IN SELF-GRAVITATING SUPER-ALFVENIC TURBULENCE WITH ADAPTIVE MESH REFINEMENT

    SciTech Connect

    Collins, David C.; Norman, Michael L.; Padoan, Paolo; Xu Hao

    2011-04-10

    In this work, we present the mass and magnetic distributions found in a recent adaptive mesh refinement magnetohydrodynamic simulation of supersonic, super-Alfvenic, self-gravitating turbulence. Power-law tails are found in both the mass density and magnetic field probability density functions, with P(ρ) ∝ ρ^{-1.6} and P(B) ∝ B^{-2.7}. A power-law relationship is also found between magnetic field strength and density, with B ∝ ρ^{0.5}, throughout the collapsing gas. The mass distribution of gravitationally bound cores is shown to be in excellent agreement with recent observations of prestellar cores. The mass-to-flux distribution of the cores is also found to be in excellent agreement with recent Zeeman splitting measurements. We also compare the relationship between velocity dispersion and density in the same cores, and find an increasing relationship between the two, with σ ∝ n^{0.25}, also in agreement with the observations. We then estimate the potential effects of ambipolar diffusion in our cores and find that, due to the weakness of the magnetic field in our simulation, the inclusion of ambipolar diffusion will not cause significant alterations of the flow dynamics.

  1. Numerical simulation of current sheet formation in a quasiseparatrix layer using adaptive mesh refinement

    SciTech Connect

    Effenberger, Frederic; Thust, Kay; Grauer, Rainer; Dreher, Juergen; Arnold, Lukas

    2011-03-15

    The formation of a thin current sheet in a magnetic quasiseparatrix layer (QSL) is investigated by means of numerical simulation using a simplified ideal, low-β, MHD model. The initial configuration and driving boundary conditions are relevant to phenomena observed in the solar corona and were studied earlier by Aulanier et al. [Astron. Astrophys. 444, 961 (2005)]. Extending that work, we use the technique of adaptive mesh refinement (AMR) to significantly enhance the local spatial resolution of the current sheet during its formation, which enables us to follow the evolution into a later stage. Our simulations are in good agreement with the results of Aulanier et al. up to the time calculated in that work. In a later phase, we observe an essentially unarrested collapse of the sheet to length scales that are more than one order of magnitude smaller than those reported earlier. The current density attains correspondingly larger maximum values within the sheet. During this thinning process, which is finally limited by lack of resolution even in the AMR studies, the current sheet moves upward, following a global expansion of the magnetic structure during the quasistatic evolution. The sheet is locally one-dimensional and the plasma flow in its vicinity, when transformed into a comoving frame, qualitatively resembles a stagnation point flow. In conclusion, our simulations support the idea that extremely high current densities are generated in the vicinity of QSLs as a response to external perturbations, with no sign of saturation.

  2. ADAPTIVE MESH REFINEMENT SIMULATIONS OF GALAXY FORMATION: EXPLORING NUMERICAL AND PHYSICAL PARAMETERS

    SciTech Connect

    Hummels, Cameron B.; Bryan, Greg L.

    2012-04-20

    We carry out adaptive mesh refinement cosmological simulations of Milky Way mass halos in order to investigate the formation of disk-like galaxies in a Λ-dominated cold dark matter model. We evolve a suite of five halos to z = 0 and find gas disk formation in each; however, in agreement with previous smoothed particle hydrodynamics simulations (that did not include a subgrid feedback model), the rotation curves of all halos are centrally peaked due to a massive spheroidal component. Our standard model includes radiative cooling and star formation, but no feedback. We further investigate this angular momentum problem by systematically modifying various simulation parameters including: (1) spatial resolution, ranging from 1700 to 212 pc; (2) an additional pressure component to ensure that the Jeans length is always resolved; (3) low star formation efficiency, going down to 0.1%; (4) fixed physical resolution as opposed to comoving resolution; (5) a supernova feedback model that injects thermal energy into the local cell; and (6) a subgrid feedback model which suppresses cooling in the immediate vicinity of a star formation event. Of all of these, we find that only the last (cooling suppression) has any impact on the massive spheroidal component. In particular, a simulation with cooling suppression and feedback results in a rotation curve that, while still peaked, is considerably reduced from our standard runs.

  3. Efficient simulation of three-dimensional anisotropic cardiac tissue using an adaptive mesh refinement method.

    PubMed

    Cherry, Elizabeth M; Greenside, Henry S; Henriquez, Craig S

    2003-09-01

    A recently developed space-time adaptive mesh refinement algorithm (AMRA) for simulating isotropic one- and two-dimensional excitable media is generalized to simulate three-dimensional anisotropic media. The accuracy and efficiency of the algorithm are investigated for anisotropic and inhomogeneous 2D and 3D domains using the Luo-Rudy 1 (LR1) and FitzHugh-Nagumo models. For a propagating wave in a 3D slab of tissue with LR1 membrane kinetics and rotational anisotropy comparable to that found in the human heart, factors of 50 and 30 are found, respectively, for the speedup and for the savings in memory compared to an algorithm using a uniform space-time mesh at the finest resolution of the AMRA method. For anisotropic 2D and 3D media, we find no reduction in accuracy compared to a uniform space-time mesh. These results suggest that the AMRA will be able to simulate the 3D electrical dynamics of canine ventricles quantitatively for 1 s using 32 1-GHz Alpha processors in approximately 9 h.

  4. HIGH-RESOLUTION SIMULATIONS OF CONVECTION PRECEDING IGNITION IN TYPE Ia SUPERNOVAE USING ADAPTIVE MESH REFINEMENT

    SciTech Connect

    Nonaka, A.; Aspden, A. J.; Almgren, A. S.; Bell, J. B.; Zingale, M.; Woosley, S. E.

    2012-01-20

    We extend our previous three-dimensional, full-star simulations of the final hours of convection preceding ignition in Type Ia supernovae to higher resolution using the adaptive mesh refinement capability of our low Mach number code, MAESTRO. We report the statistics of the ignition of the first flame at an effective 4.34 km resolution and general flow field properties at an effective 2.17 km resolution. We find that off-center ignition is likely, with a radius of 50 km most favored and a likely range of 40-75 km. This is consistent with our previous coarser (8.68 km resolution) simulations, implying that we have achieved sufficient resolution in our determination of likely ignition radii. The dynamics of the last few hot spots preceding ignition suggest that a multiple ignition scenario is not likely. With improved resolution, we can more clearly see the general flow pattern in the convective region, characterized by a strong outward plume with a lower speed recirculation. We show that the convective core is turbulent with a Kolmogorov spectrum and has a lower turbulent intensity and larger integral length scale than previously thought (on the order of 16 km s^-1 and 200 km, respectively), and we discuss the potential consequences for the first flames.

  5. Finite-difference lattice Boltzmann method with a block-structured adaptive-mesh-refinement technique.

    PubMed

    Fakhari, Abbas; Lee, Taehun

    2014-03-01

    An adaptive-mesh-refinement (AMR) algorithm for the finite-difference lattice Boltzmann method (FDLBM) is presented in this study. The idea behind the proposed AMR is to remove the need for a tree-type data structure. Instead, pointer attributes are used to determine the neighbors of a certain block via appropriate adjustment of its children identifications. As a result, the memory and time required for tree traversal are completely eliminated, leaving us with an efficient algorithm that is easier to implement and use on parallel machines. To allow different mesh sizes at separate parts of the computational domain, the Eulerian formulation of the streaming process is invoked. As a result, there is no need for rescaling the distribution functions or using a temporal interpolation at the fine-coarse grid boundaries. The accuracy and efficiency of the proposed FDLBM AMR are extensively assessed by investigating a variety of vorticity-dominated flow fields, including Taylor-Green vortex flow, lid-driven cavity flow, thin shear layer flow, and the flow past a square cylinder.

  6. Compact integration factor methods for complex domains and adaptive mesh refinement.

    PubMed

    Liu, Xinfeng; Nie, Qing

    2010-08-10

    The implicit integration factor (IIF) method, a class of efficient semi-implicit temporal schemes, was introduced recently for stiff reaction-diffusion equations. To reduce the cost of IIF, the compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of exponential matrices associated with the diffusion operators in two and three spatial dimensions for Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinates, due to the compact representation of the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF to other curvilinear coordinates through examples of polar and spherical coordinates. The new cIIF method in polar and spherical coordinates has similar computational efficiency and stability properties to the cIIF in Cartesian coordinates. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability condition of cIIF. Because the second-order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply these methods to simulating a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR, curvilinear, and Cartesian coordinates. Excellent performance of the new methods is observed.
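
    The "compact" part of cIIF admits a short sketch: on a 2D Cartesian mesh the diffusion operator splits directionally as A U + U B^T, so the exponential integration factor is applied through two small one-dimensional matrix exponentials rather than the exponential of the full 2D operator. For brevity the reaction term below is stepped explicitly; the actual cIIF treats it implicitly, and all parameters here are illustrative assumptions.

        import numpy as np
        from scipy.linalg import expm

        n, D, dt = 32, 0.1, 1.0e-3
        dx = 1.0 / (n + 1)
        lap = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n - 1), 1)
               + np.diag(np.ones(n - 1), -1)) / dx**2   # 1D Dirichlet Laplacian
        E = expm(dt * D * lap)              # one small n x n matrix exponential

        def f(U):                           # toy reaction term
            return U * (1.0 - U)

        U = np.random.default_rng(1).random((n, n))
        for _ in range(100):
            U = E @ (U + dt * f(U)) @ E.T   # compact integration-factor step
        print("solution bounds:", float(U.min()), float(U.max()))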

  7. GALAXY CLUSTER RADIO RELICS IN ADAPTIVE MESH REFINEMENT COSMOLOGICAL SIMULATIONS: RELIC PROPERTIES AND SCALING RELATIONSHIPS

    SciTech Connect

    Skillman, Samuel W.; Hallman, Eric J.; Burns, Jack O.; Smith, Britton D.; O'Shea, Brian W.; Turk, Matthew J.

    2011-07-10

    Cosmological shocks are a critical part of large-scale structure formation, and are responsible for heating the intracluster medium in galaxy clusters. In addition, they are capable of accelerating non-thermal electrons and protons. In this work, we focus on the acceleration of electrons at shock fronts, which is thought to be responsible for radio relics, extended radio features in the vicinity of merging galaxy clusters. By combining high-resolution adaptive mesh refinement/N-body cosmological simulations with an accurate shock-finding algorithm and a model for electron acceleration, we calculate the expected synchrotron emission resulting from cosmological structure formation. We produce synthetic radio maps of a large sample of galaxy clusters and present luminosity functions and scaling relationships. With upcoming long-wavelength radio telescopes, we expect to see an abundance of radio emission associated with merger shocks in the intracluster medium. By producing observationally motivated statistics, we provide predictions that can be compared with observations to further improve our understanding of magnetic fields and electron shock acceleration.

  8. Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion

    SciTech Connect

    B. Philip; Z. Wang; M.A. Berrill; M. Birke; M. Pernice

    2014-04-01

    The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems, we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian-free Newton–Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level-independent solver convergence.

  9. GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid

    NASA Astrophysics Data System (ADS)

    Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua

    2016-10-01

    A GPU accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, cell-based adaptive mesh refinement (AMR) is fully implemented on the GPU for an unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and null memory recycling is realized to improve the efficiency of memory utilization. It is found that results obtained by the GPUs agree very well with the exact or experimental results in the literature. An acceleration ratio of 4 is obtained between the parallel code running on the older GT9800 GPU and the serial code running on an E3-1230 V2 CPU. With the optimization of configuring a larger L1 cache and adopting shared-memory-based atomic operations on the newer C2050 GPU, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes achieve a 2x speedup on the GT9800 and 18x on the Tesla C2050, which demonstrates that running the cell-based AMR method in parallel on the GPU is feasible and efficient. Our results also indicate that new developments in GPU architecture benefit fluid dynamics computing significantly.

  10. Compact integration factor methods for complex domains and adaptive mesh refinement

    PubMed Central

    Liu, Xinfeng; Nie, Qing

    2010-01-01

    The implicit integration factor (IIF) method, a class of efficient semi-implicit temporal schemes, was introduced recently for stiff reaction-diffusion equations. To reduce the cost of IIF, the compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of exponential matrices associated with the diffusion operators in two and three spatial dimensions for Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinates, due to the compact representation of the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF to other curvilinear coordinates through examples of polar and spherical coordinates. The new cIIF method in polar and spherical coordinates has similar computational efficiency and stability properties to the cIIF in Cartesian coordinates. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability condition of cIIF. Because the second-order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply these methods to simulating a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR, curvilinear, and Cartesian coordinates. Excellent performance of the new methods is observed. PMID:20543883

  11. A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model

    SciTech Connect

    Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A

    2009-03-03

    Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during the remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR, which relaxes the requirement that material boundaries must lie along mesh boundaries.

  12. Cell-based Adaptive Mesh Refinement on the GPU with Applications to Exascale Supercomputing

    NASA Astrophysics Data System (ADS)

    Trujillo, Dennis; Robey, Robert; Davis, Neal; Nicholaeff, David

    2011-10-01

    We present an OpenCL implementation of a cell-based adaptive mesh refinement (AMR) scheme for the shallow water equations. The challenges associated with localizing the algorithm architecture to fully exploit the massive number of parallel threads on the GPU are discussed. This includes a proof of concept that a cell-based AMR code can be effectively implemented, even on a small scale, in the memory and threading model provided by OpenCL. Additionally, the program requires dynamic memory in order to properly implement the mesh; as this is not supported in the OpenCL 1.1 standard, a combination of CPU memory management and GPU computation effectively implements a dynamic memory allocation scheme. Load balancing is achieved through a new stencil-based implementation of a space-filling curve, eliminating the need for a complete recalculation of the indexing on the mesh. A Cartesian grid hash table scheme to allow fast parallel neighbor accesses is also discussed. Finally, the relative speedup of the GPU-enabled AMR code is compared to the original serial version. We conclude that parallelization using the GPU provides significant speedup for typical numerical applications and is feasible for scientific applications in the next generation of supercomputing.

  13. Initiating technical refinements in high-level golfers: Evidence for contradictory procedures.

    PubMed

    Carson, Howie J; Collins, Dave; Richards, Jim

    2016-01-01

    When developing motor skills there are several outcomes available to an athlete depending on their skill status and needs. Whereas the skill acquisition and performance literature is abundant, an under-researched outcome relates to the refinement of already acquired and well-established skills. Contrary to current recommendations for athletes to employ an external focus of attention and a representative practice design, Carson and Collins' (2011) [Refining and regaining skills in fixation/diversification stage performers: The Five-A Model. International Review of Sport and Exercise Psychology, 4, 146-167. doi:10.1080/1750984x.2011.613682] Five-A Model requires an initial narrowed internal focus on the technical aspect needing refinement: the implication being that environments which limit external sources of information would be beneficial to achieving this task. Therefore, the purpose of this paper was to (1) provide a literature-based explanation for why techniques counter to current recommendations may be (temporarily) appropriate within the skill refinement process and (2) provide empirical evidence for such efficacy. Kinematic data and self-perception reports are provided from high-level golfers attempting to consciously initiate technical refinements while executing shots onto a driving range and into a close-proximity net (i.e. with limited knowledge of results). It was hypothesised that greater control over intended refinements would occur when environmental stimuli were reduced in the most unrepresentative practice condition (i.e. hitting into a net). Results confirmed this, as evidenced by reduced intra-individual movement variability for all participants' individual refinements, despite little or no difference in mental effort reported. This research offers coaches guidance when working with performers who may find conscious recall difficult during the skill refinement process. PMID:26428876

  14. Using the Chombo Adaptive Mesh Refinement Model in Shallow Water Mode to Simulate Interactions of Tropical Cyclone-like Vortices

    NASA Astrophysics Data System (ADS)

    Ferguson, J. O.; Jablonowski, C.; Johansen, H.; McCorquodale, P.; Ullrich, P. A.

    2015-12-01

    Complex multi-scale atmospheric phenomena such as tropical cyclones challenge the coarse uniform grids of conventional climate models. Adaptive mesh refinement (AMR) techniques seek to mitigate these problems by providing sufficiently high-resolution grid patches only over features of interest while limiting the computational burden of requiring such resolutions globally. One such model is the non-hydrostatic, finite-volume Chombo-AMR general circulation model (GCM), which implements refinement in both space and time on a cubed-sphere grid. The 2D shallow-water equations exhibit many of the complexities of 3D GCM dynamical cores and serve as an effective method for testing the dynamical core and the refinement strategies of adaptive atmospheric models. We implement a shallow-water test case consisting of a pair of interacting tropical cyclone-like vortices. Small changes in the initial conditions can lead to a variety of interactions that develop fine-scale spiral band structures and large-scale wave trains. We investigate the accuracy and efficiency of AMR's ability to capture and effectively follow the evolution of the vortices in time. These simulations serve to test the effectiveness of refinement for both static and dynamic grid configurations as well as the sensitivity of the model results to the refinement criteria.

  15. An Innovative Adaptive Pushover Procedure Based on Storey Shear

    SciTech Connect

    Shakeri, Kazem; Shayanfar, Mohsen A.

    2008-07-08

    Since conventional pushover analyses are unable to consider the effect of the higher modes and the progressive variation in dynamic properties, recent years have witnessed the development of some advanced adaptive pushover methods. In these methods, however, using quadratic combination rules to combine the modal forces results in positive load-pattern values at all storeys, and the sign reversals of the higher modes are removed; consequently, these methods do not have a major advantage over their non-adaptive counterparts. Herein an innovative adaptive pushover method based on storey shear is proposed which can take the sign reversals of the higher modes into account. In each storey the applied load pattern is derived from the storey shear profile; consequently, the sign of the applied loads can change in consecutive steps. The accuracy of the proposed procedure is examined by applying it to a 20-storey steel building, for which it gives a good estimate of the peak response in the inelastic phase.
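
    A schematic of the storey-shear idea, with invented numbers: modal storey shears (not forces) are combined by SRSS, and the applied load at each storey is recovered by differencing the combined shear profile, which is what lets the loads change sign.

    ```python
    import numpy as np

    # Illustrative modal storey forces (rows: modes; columns: storeys,
    # bottom first). In practice these come from an eigenvalue analysis
    # updated at each pushover step.
    f_modal = np.array([[0.10, 0.20, 0.30, 0.40, 0.50],
                        [1.00, 0.80, 0.00, -0.80, -1.00],
                        [0.20, -0.10, -0.20, 0.10, 0.20]])

    # Storey shear of mode m at storey i = sum of that mode's forces at
    # and above storey i (cumulative sum taken from the top down).
    V_modal = np.cumsum(f_modal[:, ::-1], axis=1)[:, ::-1]

    V = np.sqrt((V_modal**2).sum(axis=0))   # SRSS-combined storey shears

    # Load pattern: the shear increment across each storey (the shear
    # above the roof is zero). Note the negative lower-storey entries,
    # which SRSS-combined *forces* could never produce.
    F = V - np.append(V[1:], 0.0)
    print(F)
    ```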

  16. Lyapunov exponents and adaptive mesh refinement for high-speed flows using a discontinuous Galerkin scheme

    NASA Astrophysics Data System (ADS)

    Moura, R. C.; Silva, A. F. C.; Bigarella, E. D. V.; Fazenda, A. L.; Ortega, M. A.

    2016-08-01

    This paper proposes two important improvements to shock-capturing strategies using a discontinuous Galerkin scheme, namely, accurate shock identification via finite-time Lyapunov exponent (FTLE) operators and efficient shock treatment through a point-implicit discretization of a PDE-based artificial viscosity technique. The advocated approach is based on the FTLE operator, originally developed in the context of dynamical systems theory to identify certain types of coherent structures in a flow. We propose the application of FTLEs in the detection of shock waves and demonstrate the operator's ability to identify strong and weak shocks equally well. The detection algorithm is coupled with a mesh refinement procedure and applied to transonic and supersonic flows. While the proposed strategy can be used potentially with any numerical method, a high-order discontinuous Galerkin solver is used in this study. In this context, two artificial viscosity approaches are employed to regularize the solution near shocks: an element-wise constant viscosity technique and a PDE-based smooth viscosity model. As the latter approach is more sophisticated and preferable for complex problems, a point-implicit discretization in time is proposed to reduce the extra stiffness introduced by the PDE-based technique, making it more competitive in terms of computational cost.
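
    The FTLE construction itself is generic and can be sketched on a uniform 2D grid (illustrative of the operator, not of the paper's discontinuous Galerkin implementation): given the flow-map endpoints of tracers integrated over a time T, the exponent is built from the largest eigenvalue of the Cauchy-Green tensor of the flow-map gradient.

    ```python
    import numpy as np

    def ftle(phi_x, phi_y, dx, dy, T):
        """FTLE field from flow-map components phi_x, phi_y on a uniform grid."""
        dxdX, dxdY = np.gradient(phi_x, dx, dy)
        dydX, dydY = np.gradient(phi_y, dx, dy)
        out = np.zeros_like(phi_x)
        for idx in np.ndindex(phi_x.shape):
            F = np.array([[dxdX[idx], dxdY[idx]],
                          [dydX[idx], dydY[idx]]])      # flow-map gradient
            lam_max = np.linalg.eigvalsh(F.T @ F)[-1]   # Cauchy-Green, max eig
            out[idx] = np.log(lam_max) / (2.0 * abs(T))
        return out
    ```

    Ridges of this field mark strong local stretching; cells where it exceeds a threshold would then be flagged for the mesh refinement procedure.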

  17. Three-dimensional Wavelet-based Adaptive Mesh Refinement for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.; Semakin, A. N.

    2013-12-01

    Accurate numerical simulations of global scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems such as the adverse effects of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and the large number of reacting species. In our previous work we have shown that in order to achieve an adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the difficulty described above, we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid-spacing throughout the entire domain. The method uses a multi-grid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically, we obtained an order of magnitude increase in computational speed for numerical simulations performed on a twelve-core single processor workstation. We have applied the WAMR method for numerical
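
    The refinement criterion at the heart of wavelet-based methods can be illustrated in one dimension (a schematic, not the WAMR algorithm itself): the detail coefficient of a fine-grid point is its deviation from interpolation off the next coarser grid, and points whose detail exceeds a tolerance are flagged for refinement.

    ```python
    import numpy as np

    def flag_for_refinement(u, tol):
        # Detail at odd points: deviation from linear interpolation of the
        # even (coarse-grid) neighbours; large details mean sharp features.
        detail = np.abs(u[1:-1:2] - 0.5 * (u[:-2:2] + u[2::2]))
        flags = np.zeros(u.size, dtype=bool)
        flags[1:-1:2] = detail > tol
        return flags

    x = np.linspace(0.0, 1.0, 65)
    u = np.tanh((x - 0.5) / 0.02)       # sharp plume edge near x = 0.5
    print(np.where(flag_for_refinement(u, 1e-3))[0])
    ```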

  18. Computations of Unsteady Viscous Compressible Flows Using Adaptive Mesh Refinement in Curvilinear Body-fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Modiano, David; Colella, Phillip

    1994-01-01

    A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology mean that a high degree of optimization can be achieved on computers with vector processors.

  19. GAMMA-RAY BURST DYNAMICS AND AFTERGLOW RADIATION FROM ADAPTIVE MESH REFINEMENT, SPECIAL RELATIVISTIC HYDRODYNAMIC SIMULATIONS

    SciTech Connect

    De Colle, Fabio; Ramirez-Ruiz, Enrico; Granot, Jonathan; Lopez-Camara, Diego

    2012-02-20

    We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^-k, bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.

  20. Gamma-Ray Burst Dynamics and Afterglow Radiation from Adaptive Mesh Refinement, Special Relativistic Hydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    De Colle, Fabio; Granot, Jonathan; López-Cámara, Diego; Ramirez-Ruiz, Enrico

    2012-02-01

    We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^-k, bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.

  1. 40 CFR 80.128 - Alternative agreed upon procedures for refiners and importers.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... gasoline or conventional sub-octane blendstock, and the compliance calculations which include oxygenate... blended with conventional gasoline or sub-octane blendstock that was produced or imported by the refiner... oxygenate was blended with conventional gasoline or conventional sub-octane blendstock that was produced...

  2. 40 CFR 80.128 - Alternative agreed upon procedures for refiners and importers.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... gasoline or conventional sub-octane blendstock, and the compliance calculations which include oxygenate... blended with conventional gasoline or sub-octane blendstock that was produced or imported by the refiner... oxygenate was blended with conventional gasoline or conventional sub-octane blendstock that was produced...

  3. 40 CFR 80.128 - Alternative agreed upon procedures for refiners and importers.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... gasoline or conventional sub-octane blendstock, and the compliance calculations which include oxygenate... blended with conventional gasoline or sub-octane blendstock that was produced or imported by the refiner... oxygenate was blended with conventional gasoline or conventional sub-octane blendstock that was produced...

  4. A STABLE, ACCURATE METHODOLOGY FOR HIGH MACH NUMBER, STRONG MAGNETIC FIELD MHD TURBULENCE WITH ADAPTIVE MESH REFINEMENT: RESOLUTION AND REFINEMENT STUDIES

    SciTech Connect

    Li, Pak Shing; Klein, Richard I.; Martin, Daniel F.; McKee, Christopher F.

    2012-02-01

    Performing a stable, long-duration simulation of driven MHD turbulence with a high thermal Mach number and a strong initial magnetic field is a challenge to high-order Godunov ideal MHD schemes because of the difficulty in guaranteeing positivity of the density and pressure. We have implemented a robust combination of reconstruction schemes, Riemann solvers, limiters, and constrained transport electromotive force averaging schemes that can meet this challenge, and using this strategy, we have developed a new adaptive mesh refinement (AMR) MHD module of the ORION2 code. We investigate the effects of AMR on several statistical properties of a turbulent ideal MHD system with a thermal Mach number of 10 and a plasma β₀ of 0.1 as initial conditions; our code is shown to be stable for simulations with higher Mach numbers (M_rms = 17.3) and smaller plasma beta (β₀ = 0.0067) as well. Our results show that the quality of the turbulence simulation is generally related to the volume-averaged refinement. Our AMR simulations show that the turbulent dissipation coefficient for supersonic MHD turbulence is about 0.5, in agreement with unigrid simulations.

  5. Vertical Scan (V-SCAN) for 3-D Grid Adaptive Mesh Refinement for an atmospheric Model Dynamical Core

    NASA Astrophysics Data System (ADS)

    Andronova, N. G.; Vandenberg, D.; Oehmke, R.; Stout, Q. F.; Penner, J. E.

    2009-12-01

    One of the major building blocks of a rigorous representation of cloud evolution in global atmospheric models is a parallel adaptive grid MPI-based communication library (an Adaptive Blocks for Locally Cartesian Topologies library -- ABLCarT), which manages the block-structured data layout, handles ghost cell updates among neighboring blocks and splits a block as refinements occur. The library has several modules that provide a layer of abstraction for adaptive refinement: blocks, which contain individual cells of user data; shells - the global geometry for the problem, including a sphere, reduced sphere, and now a 3D sphere; a load balancer for placement of blocks onto processors; and a communication support layer which encapsulates all data movement. A major performance concern with adaptive mesh refinement is how to represent calculations that need to be sequenced in a particular order in a direction, such as calculating integrals along a specific path (e.g. atmospheric pressure or geopotential in the vertical dimension). This concern is compounded if the blocks have varying levels of refinement, or are scattered across different processors, as can be the case in parallel computing. In this paper we describe an implementation in ABLCarT of a vertical scan operation, which allows computing along vertical paths in the correct order across blocks, transparently with respect to their resolution and processor location. We test this functionality on a 2D and a 3D advection problem, which tests the performance of the model's dynamics (transport) and physics (sources and sinks) for different model resolutions needed for inclusion of cloud formation.
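
    The essence of such a scan can be sketched as follows (an invented data layout, not the ABLCarT interface): blocks belonging to one vertical column are visited top-down and a running integral, here hydrostatic pressure accumulated from rho*g*dz, is threaded through them, so the sequencing stays correct regardless of block refinement level or processor ownership.

    ```python
    def vertical_scan(column_blocks, g=9.81):
        # Each block: dict with 'z_top', per-cell 'rho' (listed top-down),
        # and cell thickness 'dz'. Order the blocks topmost first.
        column_blocks.sort(key=lambda b: -b["z_top"])
        p, out = 0.0, []
        for blk in column_blocks:
            for rho in blk["rho"]:
                p += rho * g * blk["dz"]   # carry the integral across blocks
                out.append(p)
        return out

    coarse = {"z_top": 10e3, "rho": [0.4, 0.6], "dz": 2.5e3}
    fine = {"z_top": 5e3, "rho": [0.8, 0.9, 1.0, 1.1], "dz": 1.25e3}
    print(vertical_scan([fine, coarse]))   # the upper block is scanned first
    ```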

  6. A Freestream-Preserving High-Order Finite-Volume Method for Mapped Grids with Adaptive-Mesh Refinement

    SciTech Connect

    Guzik, S; McCorquodale, P; Colella, P

    2011-12-16

    A fourth-order accurate finite-volume method is presented for solving time-dependent hyperbolic systems of conservation laws on mapped grids that are adaptively refined in space and time. Novel considerations for formulating the semi-discrete system of equations in computational space combined with detailed mechanisms for accommodating the adapting grids ensure that conservation is maintained and that the divergence of a constant vector field is always zero (freestream-preservation property). Advancement in time is achieved with a fourth-order Runge-Kutta method.

  7. A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: A gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment and other accepted computational results for a series of low and moderate Reynolds number flows.
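
    A minimal quadtree sketch of the recursive-subdivision step (the paper's binary-tree storage, cut-cell generation and connectivity handling are considerably richer; the refinement test below is invented): starting from one Cartesian cell covering the domain, any flagged cell is split into four children.

    ```python
    class Cell:
        def __init__(self, x, y, size):
            self.x, self.y, self.size = x, y, size   # lower-left corner, width
            self.children = []

        def refine(self, needs_refinement, max_depth):
            if max_depth == 0 or not needs_refinement(self):
                return
            h = self.size / 2.0
            self.children = [Cell(self.x + i * h, self.y + j * h, h)
                             for i in (0, 1) for j in (0, 1)]
            for c in self.children:
                c.refine(needs_refinement, max_depth - 1)

    # Refine toward a circular body of radius 0.5 centred at the origin.
    def near_body(c):
        cx, cy = c.x + c.size / 2.0, c.y + c.size / 2.0
        return abs((cx**2 + cy**2) ** 0.5 - 0.5) < c.size

    root = Cell(-1.0, -1.0, 2.0)          # single cell spanning the domain
    root.refine(near_body, max_depth=6)
    ```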

  8. A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1994-01-01

    A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: a gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.

  9. pH-zone-refining counter-current chromatography: Origin, mechanism, procedure and applications

    PubMed Central

    Ito, Yoichiro

    2012-01-01

    Since 1980, high-speed counter-current chromatography (HSCCC) has been used for separation and purification of natural and synthetic products in the standard elution mode. In 1991, a novel elution mode called pH-zone-refining CCC was introduced after the incidental discovery that an organic acid in the sample solution caused an acidic analyte to elute as a sharp peak. The cause of this sharp peak formation was found to be bromoacetic acid present in the sample solution, which formed a sharp trailing border that trapped the acidic analyte. Further studies on the separation of DNP-amino acids with three spacer acids in the stationary phase revealed that increased sample size resulted in the formation of fused rectangular peaks, each preserving high purity and zone pH with sharp boundaries. The mechanism of this phenomenon was found to be the formation of a sharp trailing border of an acid (retainer) in the column which moves at a lower rate than that of the mobile phase. In order to facilitate the application of the method, a new method was devised using a retainer-eluter pair to form a sharp retainer rear border which moves through the column at a desired rate regardless of the composition of the two-phase solvent system. This was achieved by adding the retainer to the stationary phase and the eluter to the mobile phase at a given molar ratio. Using this new method, the hydrodynamics of pH-zone-refining CCC was diagrammatically illustrated with three acidic samples. In this review paper, typical pH-zone-refining CCC separations are presented, including affinity separations with a ligand and a separation of a racemic mixture using a chiral selector in the stationary phase. Major characteristics of pH-zone-refining CCC over conventional HSCCC are as follows: the sample loading capacity is increased over 10 times; fractions are highly concentrated, near the saturation level; yield is improved by increasing the sample size; and minute charged compounds are concentrated and detected at the peak

  10. Adaptive Mesh Refinement Cosmological Simulations of Cosmic Rays in Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Skillman, Samuel William

    2013-12-01

    Galaxy clusters are unique astrophysical laboratories that contain many thermal and non-thermal phenomena. In particular, they are hosts to cosmic shocks, which propagate through the intracluster medium as a by-product of structure formation. It is believed that at these shock fronts, magnetic field inhomogeneities in a compressing flow may lead to the acceleration of cosmic ray electrons and ions. These relativistic particles decay and radiate through a variety of mechanisms, and have observational signatures in radio, hard X-ray, and Gamma-ray wavelengths. We begin this dissertation by developing a method to find shocks in cosmological adaptive mesh refinement simulations of structure formation. After describing the evolution of shock properties through cosmic time, we make estimates for the amount of kinetic energy processed and the total number of cosmic ray protons that could be accelerated at these shocks. We then use this method of shock finding and a model for the acceleration of and radio synchrotron emission from cosmic ray electrons to estimate the radio emission properties in large scale structures. By examining the time-evolution of the radio emission with respect to the X-ray emission during a galaxy cluster merger, we find that the relative timing of the enhancements in each are important consequences of the shock dynamics. By calculating the radio emission expected from a given mass galaxy cluster, we make estimates for future large-area radio surveys. Next, we use a state-of-the-art magnetohydrodynamic simulation to follow the electron acceleration in a massive merging galaxy cluster. We use the magnetic field information to calculate not only the total radio emission, but also create radio polarization maps that are compared to recent observations. We find that we can naturally reproduce Mpc-scale radio emission that resemble many of the known double radio relic systems. Finally, motivated by our previous studies, we develop and introduce a

  11. An Adaptive Ridge Procedure for L0 Regularization

    PubMed Central

    Frommlet, Florian; Nuel, Grégory

    2016-01-01

    Penalized selection criteria like AIC or BIC are among the most popular methods for variable selection. Their theoretical properties have been studied intensively and are well understood, but making use of them in the case of high-dimensional data is difficult due to the non-convex optimization problem induced by L0 penalties. In this paper we introduce an adaptive ridge procedure (AR), in which iteratively weighted ridge problems are solved whose weights are updated in such a way that the procedure converges towards selection with L0 penalties. After introducing AR, its specific shrinkage properties are studied in the particular case of orthogonal linear regression. Based on extensive simulations for the non-orthogonal case as well as for Poisson regression, the performance of AR is studied and compared with SCAD and adaptive LASSO. Furthermore, an efficient implementation of AR in the context of least-squares segmentation is presented. The paper ends with an illustrative example of applying AR to analyze GWAS data. PMID:26849123
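
    A schematic version of the iteration, using the weight update w_j = 1/(beta_j^2 + delta^2) that makes the weighted ridge penalty approximate an L0 penalty (lam, delta and the data below are illustrative; the actual procedure includes convergence checks and tuning):

    ```python
    import numpy as np

    def adaptive_ridge(X, y, lam=1.0, delta=1e-5, n_iter=50):
        w = np.ones(X.shape[1])
        for _ in range(n_iter):
            # Weighted ridge solve, then update the weights so that small
            # coefficients are penalized ever harder (driving them to zero).
            beta = np.linalg.solve(X.T @ X + lam * np.diag(w), X.T @ y)
            w = 1.0 / (beta**2 + delta**2)
        return beta

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 10))
    beta_true = np.zeros(10)
    beta_true[:3] = [3.0, -2.0, 1.5]
    y = X @ beta_true + 0.1 * rng.standard_normal(100)
    print(np.round(adaptive_ridge(X, y), 3))   # near-zero null coefficients
    ```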

  12. Adjoint-based error estimation and mesh adaptation for the correction procedure via reconstruction method

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Wang, Z. J.

    2015-08-01

    Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction (CPR) formulation to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. Super-convergent functional and error estimates for the output are obtained with the CPR method. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost compared with uniform grid refinement.
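
    The adjoint-weighted-residual principle driving such adaptation can be shown on a toy linear problem (not the CPR discretization): for A u = f with output J(u) = g^T u, solving the adjoint A^T psi = g turns the residual of a coarse solution into an estimate of the output error; for a linear problem the estimate is exact.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    A = np.eye(8) + 0.1 * rng.standard_normal((8, 8))
    f = rng.standard_normal(8)
    g = rng.standard_normal(8)

    u_exact = np.linalg.solve(A, f)
    u_H = u_exact + 1e-2 * rng.standard_normal(8)  # stand-in coarse solution

    psi = np.linalg.solve(A.T, g)                  # adjoint solution
    est = psi @ (f - A @ u_H)                      # adjoint-weighted residual
    print(est, g @ (u_exact - u_H))                # estimate vs. true error
    ```

    In the mesh-adaptation setting, the elementwise contributions to this inner product localize the output error and rank cells for refinement.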

  13. COLLABORATIVE RESEARCH: CONTINUOUS DYNAMIC GRID ADAPTATION IN A GLOBAL ATMOSPHERIC MODEL: APPLICATION AND REFINEMENT

    SciTech Connect

    Gutowski, William J.; Prusa, Joseph M.; Smolarkiewicz, Piotr K.

    2012-05-08

    This project had the goals of advancing the performance capabilities of the numerical general circulation model EULAG and using it to produce a fully operational atmospheric global climate model (AGCM) that can employ either static or dynamic grid stretching for targeted phenomena. The resulting AGCM combined EULAG's advanced dynamics core with the "physics" of the NCAR Community Atmospheric Model (CAM). The work summarized here improved model performance and tested both EULAG and the coupled CAM-EULAG in several ways to demonstrate grid stretching and the ability to simulate a wide range of scales very well, that is, multi-scale capability. We leveraged our effort through interaction with an international EULAG community that has collectively developed new features and applications of EULAG, which we exploited for our own work. Overall, the work contributed to over 40 peer-reviewed publications and over 70 conference/workshop/seminar presentations, many of them invited. EULAG is a non-hydrostatic, parallel computational model for all-scale geophysical flows. EULAG's name derives from its two computational options: EULerian (flux form) or semi-LAGrangian (advective form). The model combines nonoscillatory forward-in-time (NFT) numerical algorithms with a robust elliptic Krylov solver. A signature feature of EULAG is that it is formulated in generalized time-dependent curvilinear coordinates; in particular, this enables grid adaptivity. In total, these features give EULAG novel advantages over many existing dynamical cores. For EULAG itself, numerical advances included refining boundary conditions and filters for optimizing model performance in polar regions. We also added flexibility to the model's underlying formulation, allowing it to work with the pseudo-compressible equation set of Durran in addition to EULAG's standard anelastic formulation. Work in collaboration with others also extended the demonstrated range of

  14. Large Eddy simulation of compressible flows with a low-numerical dissipation patch-based adaptive mesh refinement method

    NASA Astrophysics Data System (ADS)

    Pantano, Carlos

    2005-11-01

    We describe a hybrid finite difference method for large-eddy simulation (LES) of compressible flows with a low-numerical dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and as such, it utilizes refinement to computational advantage. The numerical method for the resolved scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite difference formulation is used in turbulent flow regions while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model of Misra & Pullin (1997).

  15. Symmetry-adapted Wannier functions in the maximal localization procedure

    NASA Astrophysics Data System (ADS)

    Sakuma, R.

    2013-06-01

    A procedure to construct symmetry-adapted Wannier functions in the framework of the maximally localized Wannier function approach [Marzari and Vanderbilt, Phys. Rev. B 56, 12847 (1997); Souza, Marzari, and Vanderbilt, Phys. Rev. B 65, 035109 (2001)] is presented. In this scheme, the minimization of the spread functional of the Wannier functions is performed with constraints that are derived from symmetry properties of the specified set of the Wannier functions and the Bloch functions used to construct them; therefore, one can obtain a solution that does not necessarily yield the global minimum of the spread functional. As a test of this approach, results of atom-centered Wannier functions for GaAs and Cu are presented.

  16. Development of Adaptive Model Refinement (AMoR) for Multiphysics and Multifidelity Problems

    SciTech Connect

    Turinsky, Paul

    2015-02-09

    This project investigated the development and utilization of Adaptive Model Refinement (AMoR) for nuclear systems simulation applications. AMoR refers to the utilization of several models of physical phenomena which differ in prediction fidelity. If the highest fidelity model is judged always to provide or exceed the desired fidelity, then, if one can determine the difference in a Quantity of Interest (QoI) between the highest fidelity model and lower fidelity models, one can select the lowest fidelity model that still provides the QoI to the desired accuracy. Assuming lower fidelity models require less computational resources, computational efficiency can be realized in this manner provided the QoI value can be accurately and efficiently evaluated. This work utilized Generalized Perturbation Theory (GPT) to evaluate the QoI, by convolving the GPT solution with the residual of the highest fidelity model determined using the solution from lower fidelity models. Specifically, a reactor core neutronics problem and a thermal-hydraulics problem were studied to develop and utilize AMoR. The highest fidelity neutronics model was based upon the 3D space-time, two-group, nodal diffusion equations as solved in the NESTLE computer code. Added to the NESTLE code was the ability to determine the time-dependent GPT neutron flux. The lower fidelity neutronics model was based upon the point kinetics equations along with utilization of a prolongation operator to determine the 3D space-time, two-group flux. The highest fidelity thermal-hydraulics model was based upon the space-time equations governing fluid flow in a closed channel around a heat generating fuel rod. The Homogenous Equilibrium Mixture (HEM) model was used for the fluid and the Finite Difference Method was applied to both the coolant and fuel pin energy conservation equations. The lower fidelity thermal-hydraulic model was based upon the same equations as used for the highest fidelity model but now with coarse spatial

  17. Radiographic skills learning: procedure simulation using adaptive hypermedia.

    PubMed

    Costaridou, L; Panayiotakis, G; Pallikarakis, N; Proimos, B

    1996-10-01

    The design and development of a simulation tool supporting the learning of radiographic skills is reported. The tool is built from textual, graphical and iconic resources, organized according to a building-block, adaptive hypermedia approach, and is supported by an image base of radiographs. It offers interactive, user-controlled simulation of radiographic imaging procedures. The development is based on a commercially available environment (Toolbook 3.0, Asymetrix Corporation). The core of the system is an attributed precedence (priority) graph, which represents a task outline (concept and resources structure) that is dynamically adjusted to selected procedures. The user interface imitates a conventional radiography system, i.e. operating console, tube, table, patient and cassette. System parameters, such as patient positioning, focus-to-patient distance, magnification, field dimensions, tube voltage and mAs, are under user control. Their effects on image quality are presented by means of an image base acquired under controlled exposure conditions. Innovative use of hypermedia, computer-based learning and simulation principles and technology in the development of this tool resulted in an enhanced interactive environment providing radiographic parameter control and visualization of parameter effects on image quality. PMID:9038530

  18. Three-Dimensional Parallel Adaptive Mesh Refinement Simulations of Shock-Driven Turbulent Mixing in Plane and Converging Geometries

    SciTech Connect

    Lombardini, Manuel; Deiterding, Ralf

    2010-01-01

    This paper presents the use of a dynamically adaptive mesh refinement strategy for simulations of shock-driven turbulent mixing. Large-eddy simulations are necessary due to the high Reynolds number turbulent regime. In this approach, the large scales are simulated directly and the small scales, at which viscous dissipation occurs, are modeled. A low-numerical-dissipation centered finite-difference scheme is used in turbulent flow regions while a shock-capturing method is employed to capture shocks. Three-dimensional parallel simulations of the Richtmyer-Meshkov instability performed in plane and converging geometries are described.

  19. Application of adaptive mesh refinement to particle-in-cell simulations of plasmas and beams

    SciTech Connect

    Vay, J.-L.; Colella, P.; Kwan, J.W.; McCorquodale, P.; Serafini, D.B.; Friedman, A.; Grote, D.P.; Westenskow, G.; Adam, J.-C.; Heron, A.; Haber, I.

    2003-11-04

    Plasma simulations are often rendered challenging by the disparity of scales in time and in space which must be resolved. When these disparities are in distinctive zones of the simulation domain, a method which has proven to be effective in other areas (e.g. fluid dynamics simulations) is the mesh refinement technique. We briefly discuss the challenges posed by coupling this technique with plasma Particle-In-Cell simulations, and present examples of application in Heavy Ion Fusion and related fields which illustrate the effectiveness of the approach. We also report on the status of a collaboration under way at Lawrence Berkeley National Laboratory between the Applied Numerical Algorithms Group (ANAG) and the Heavy Ion Fusion group to upgrade ANAG's mesh refinement library Chombo to include the tools needed by Particle-In-Cell simulation codes.

  20. Constrained-Transport Magnetohydrodynamics with Adaptive-Mesh-Refinement in CHARM

    SciTech Connect

    Miniatii, Francesco; Martin, Daniel

    2011-05-24

    We present the implementation of a three-dimensional, second order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit Corner-Transport-Upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the Piecewise-Parabolic-Method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a Constrained-Transport (CT) method. The so-called "multidimensional MHD source terms" required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.

  1. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J.D. Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  2. SIMULATING MAGNETOHYDRODYNAMICAL FLOW WITH CONSTRAINED TRANSPORT AND ADAPTIVE MESH REFINEMENT: ALGORITHMS AND TESTS OF THE AstroBEAR CODE

    SciTech Connect

    Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.

    2009-06-15

    A description is given of the algorithms implemented in the AstroBEAR adaptive mesh refinement code for ideal magnetohydrodynamics. The code provides several high-resolution shock-capturing schemes which are constructed to maintain conserved quantities of the flow in a finite-volume sense. Divergence-free magnetic field topologies are maintained to machine precision by collating the components of the magnetic field on a cell-interface staggered grid and utilizing the constrained transport approach for integrating the induction equations. The maintenance of magnetic field topologies on adaptive grids is achieved using prolongation and restriction operators which preserve the divergence and curl of the magnetic field across collocated grids of different resolutions. The robustness and correctness of the code are demonstrated by comparing the numerical solution of various tests with analytical solutions or previously published numerical solutions obtained by other codes.

  3. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGES

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  4. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  5. Nematic liquid crystal around a spherical particle: Investigation of the defect structure and its stability using adaptive mesh refinement.

    PubMed

    Fukuda, Jun-Ichi; Yoneya, Makoto; Yokoyama, Hiroshi

    2004-01-01

    We investigate the orientation profile and the structure of topological defects of a nematic liquid crystal around a spherical particle using an adaptive mesh refinement scheme developed by us previously. The previous work [J. Fukuda et al., Phys. Rev. E 65, 041709 (2002)] was devoted to the investigation of the fine structure of a hyperbolic hedgehog defect that the particle accompanies, and in this paper we present the equilibrium profile of the Saturn ring configuration. The radius of the Saturn ring r_d in units of the particle radius R_0 increases weakly with the increase of ε, the ratio of the nematic coherence length to R_0. Next we discuss the energetic stability of a hedgehog and a Saturn ring. The use of the adaptive mesh refinement scheme together with a tensor orientational order parameter Q_αβ allows us to calculate the elastic energy of a nematic liquid crystal without any assumption about the structure and the energy of the defect core, as was made in previous similar studies. The reduced free energy of a nematic liquid crystal, F̃ = F/(L_1 R_0), with L_1 being the elastic constant, is almost independent of ε in the hedgehog configuration, while it shows a logarithmic dependence on ε in the Saturn ring configuration. This result clearly indicates that the energetic stability of a hedgehog relative to a Saturn ring for a large particle is definitely attributed to the large defect energy of the Saturn ring with a large radius.

  6. Interface Reconstruction in Two- and Three-Dimensional Arbitrary Lagrangian-Eulerian Adaptive Mesh Refinement Simulations

    SciTech Connect

    Masters, N D; Anderson, R W; Elliott, N S; Fisher, A C; Gunney, B T; Koniges, A E

    2007-08-28

    Modeling of high power laser and ignition facilities requires new techniques because of the higher energies and higher operational costs. We report on the development and application of a new interface reconstruction algorithm for a chamber modeling code that combines ALE (Arbitrary Lagrangian-Eulerian) techniques with AMR (Adaptive Mesh Refinement). The code is used for the simulation of complex target elements in the National Ignition Facility (NIF) and other similar facilities. The interface reconstruction scheme is required to adequately describe the debris/shrapnel (including fragments or droplets) resulting from energized materials that could affect optics or diagnostic sensors. Traditional ICF modeling codes that choose to implement ALE + AMR techniques will also benefit from this new scheme. The ALE formulation requires material interfaces (including those of generated particles or droplets) to be tracked. We present the interface reconstruction scheme developed for NIF's ALE-AMR and discuss how it is affected by adaptive mesh refinement and the ALE mesh. Results of the code are shown for NIF and OMEGA target configurations.

  7. A procedure for the estimation of the numerical uncertainty of CFD calculations based on grid refinement studies

    SciTech Connect

    Eça, L.; Hoekstra, M.

    2014-04-01

    This paper offers a procedure for the estimation of the numerical uncertainty of any integral or local flow quantity as a result of a fluid flow computation; the procedure requires solutions on systematically refined grids. The error is estimated with power series expansions as a function of the typical cell size. These expansions, of which four types are used, are fitted to the data in the least-squares sense. The selection of the best error estimate is based on the standard deviation of the fits. The error estimate is converted into an uncertainty with a safety factor that depends on the observed order of grid convergence and on the standard deviation of the fit. For well-behaved data sets, i.e. monotonic convergence with the expected observed order of grid convergence and no scatter in the data, the method reduces to the well-known Grid Convergence Index. Examples of application of the procedure are included. Highlights: estimation of the numerical uncertainty of any integral or local flow quantity; least-squares fits to power series expansions to handle noisy data; excellent results obtained for manufactured solutions; consistent results obtained for practical CFD calculations; reduction to the well-known Grid Convergence Index for well-behaved data sets.
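
    The central fit can be sketched as follows (invented data; the published procedure fits four expansion types and uses the fit standard deviations to select the estimate and the safety factor): solutions phi(h) on systematically refined grids are fitted in the least-squares sense to phi0 + a*h^p, giving the observed order p and an error estimate for the finest grid.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    h = np.array([1.0, 0.5, 0.25, 0.125])             # relative cell sizes
    phi = np.array([1.1120, 1.0310, 1.0075, 1.0018])  # computed flow quantity

    model = lambda h, phi0, a, p: phi0 + a * h**p
    (phi0, a, p), _ = curve_fit(model, h, phi, p0=(phi[-1], 0.1, 2.0))

    err = abs(a * h[-1] ** p)               # error estimate on finest grid
    Fs = 1.25 if 0.5 <= p <= 2.5 else 3.0   # crude stand-in safety factor
    print(f"phi0 = {phi0:.4f}, order p = {p:.2f}, U = {Fs * err:.4f}")
    ```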

  8. Parallelization of Unsteady Adaptive Mesh Refinement for Unstructured Navier-Stokes Solvers

    NASA Technical Reports Server (NTRS)

    Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.

    2014-01-01

    This paper explores the implementation of the MPI parallelization in a Navier-Stokes solver using adaptive mesh refinement. Viscous and inviscid test problems are considered for the purpose of benchmarking, as are implicit and explicit time advancement methods. The main test problem for comparison includes effects from boundary layers and other viscous features and requires a large number of grid points for accurate computation. Experimental validation against double cone experiments in hypersonic flow is shown. The adaptive mesh refinement shows promise for a staple test problem in the hypersonic community. Extension to more advanced techniques for more complicated flows is described.

  9. Laser ray tracing in a parallel arbitrary Lagrangian-Eulerian adaptive mesh refinement hydrocode

    NASA Astrophysics Data System (ADS)

    Masters, N. D.; Kaiser, T. B.; Anderson, R. W.; Eder, D. C.; Fisher, A. C.; Koniges, A. E.

    2010-08-01

    ALE-AMR is a new hydrocode that we are developing as a predictive modeling tool for debris and shrapnel formation in high-energy laser experiments. In this paper we present our approach to implementing laser ray tracing in ALE-AMR. We present the basic concepts of laser ray tracing and our approach to efficiently traverse the adaptive mesh hierarchy.

  10. Moving Overlapping Grids with Adaptive Mesh Refinement for High-Speed Reactive and Non-reactive Flow

    SciTech Connect

    Henshaw, W D; Schwendeman, D W

    2005-08-30

    We consider the solution of the reactive and non-reactive Euler equations on two-dimensional domains that evolve in time. The domains are discretized using moving overlapping grids. In a typical grid construction, boundary-fitted grids are used to represent moving boundaries, and these grids overlap with stationary background Cartesian grids. Block-structured adaptive mesh refinement (AMR) is used to resolve fine-scale features in the flow such as shocks and detonations. Refinement grids are added to base-level grids according to an estimate of the error, and these refinement grids move with their corresponding base-level grids. The numerical approximation of the governing equations takes place in the parameter space of each component grid which is defined by a mapping from (fixed) parameter space to (moving) physical space. The mapped equations are solved numerically using a second-order extension of Godunov's method. The stiff source term in the reactive case is handled using a Runge-Kutta error-control scheme. We consider cases when the boundaries move according to a prescribed function of time and when the boundaries of embedded bodies move according to the surface stress exerted by the fluid. In the latter case, the Newton-Euler equations describe the motion of the center of mass of each body and the rotation about it, and these equations are integrated numerically using a second-order predictor-corrector scheme. Numerical boundary conditions at slip walls are described, and numerical results are presented for both reactive and non-reactive flows in order to demonstrate the use and accuracy of the numerical approach.
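
    The second-order predictor-corrector for the rigid-body motion can be illustrated generically (the force F below is a placeholder; in the paper it comes from the surface stress integrated over each embedded body):

    ```python
    import numpy as np

    def step(x, v, t, dt, m, F):
        a0 = F(t, x, v) / m
        xp = x + dt * v + 0.5 * dt**2 * a0   # predictor for position
        vp = v + dt * a0                     # predictor for velocity
        a1 = F(t + dt, xp, vp) / m           # force at the predicted state
        vn = v + 0.5 * dt * (a0 + a1)        # trapezoidal corrector
        xn = x + 0.5 * dt * (v + vn)
        return xn, vn

    F = lambda t, x, v: -4.0 * x - 0.1 * v   # damped-spring stand-in force
    x, v = np.array([1.0, 0.0]), np.zeros(2)
    for n in range(1000):
        x, v = step(x, v, n * 1e-3, 1e-3, m=1.0, F=F)
    ```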

  11. Refining Trait Resilience: Identifying Engineering, Ecological, and Adaptive Facets from Extant Measures of Resilience.

    PubMed

    Maltby, John; Day, Liz; Hall, Sophie

    2015-01-01

    The current paper presents a new measure of trait resilience derived from three common mechanisms identified in ecological theory: Engineering, Ecological and Adaptive (EEA) resilience. Exploratory and confirmatory factor analyses of five existing resilience scales suggest that the three trait resilience facets emerge, and can be reduced to a 12-item scale. The conceptualization and value of EEA resilience within the wider trait and well-being psychology is illustrated in terms of differing relationships with adaptive expressions of the traits of the five-factor personality model and the contribution to well-being after controlling for personality and coping, or over time. The current findings suggest that EEA resilience is a useful and parsimonious model and measure of trait resilience that can readily be placed within wider trait psychology and that is found to contribute to individual well-being. PMID:26132197

  12. woptic: Optical conductivity with Wannier functions and adaptive k-mesh refinement

    NASA Astrophysics Data System (ADS)

    Assmann, E.; Wissgott, P.; Kuneš, J.; Toschi, A.; Blaha, P.; Held, K.

    2016-05-01

    We present an algorithm for the adaptive tetrahedral integration over the Brillouin zone of crystalline materials, and apply it to compute the optical conductivity, dc conductivity, and thermopower. For these quantities, whose contributions are often localized in small portions of the Brillouin zone, adaptive integration is especially relevant. Our implementation, the woptic package, is tied into the WIEN2WANNIER framework and allows including a local many-body self energy, e.g. from dynamical mean-field theory (DMFT). Wannier functions and dipole matrix elements are computed with the DFT package WIEN2k and Wannier90. For illustration, we show DFT results for fcc-Al and DMFT results for the correlated metal SrVO3.

  13. High-Performance Reactive Fluid Flow Simulations Using Adaptive Mesh Refinement on Thousands of Processors

    NASA Astrophysics Data System (ADS)

    Calder, A. C.; Curtis, B. C.; Dursi, L. J.; Fryxell, B.; Henry, G.; MacNeice, P.; Olson, K.; Ricker, P.; Rosner, R.; Timmes, F. X.; Tufo, H. M.; Truran, J. W.; Zingale, M.

    We present simulations and performance results of nuclear burning fronts in supernovae on the largest domain and at the finest spatial resolution studied to date. These simulations were performed on the Intel ASCI-Red machine at Sandia National Laboratories using FLASH, a code developed at the Center for Astrophysical Thermonuclear Flashes at the University of Chicago. FLASH is a modular, adaptive mesh, parallel simulation code capable of handling compressible, reactive fluid flows in astrophysical environments. FLASH is written primarily in Fortran 90, uses the Message-Passing Interface library for inter-processor communication and portability, and employs the PARAMESH package to manage a block-structured adaptive mesh that places blocks only where the resolution is required and tracks rapidly changing flow features, such as detonation fronts, with ease. We describe the key algorithms and their implementation as well as the optimizations required to achieve sustained performance of 238 GFLOPS on 6420 processors of ASCI-Red in 64-bit arithmetic.

  16. A low numerical dissipation patch-based adaptive mesh refinement method for large-eddy simulation of compressible flows

    NASA Astrophysics Data System (ADS)

    Pantano, C.; Deiterding, R.; Hill, D. J.; Pullin, D. I.

    2007-01-01

    We present a methodology for the large-eddy simulation of compressible flows with a low-numerical dissipation scheme and structured adaptive mesh refinement (SAMR). A description of a conservative, flux-based hybrid numerical method that uses both centered finite-difference and a weighted essentially non-oscillatory (WENO) scheme is given, encompassing the cases of scheme alternation and internal mesh interfaces resulting from SAMR. In this method, the centered scheme is used in turbulent flow regions while WENO is employed to capture shocks. One-, two- and three-dimensional numerical experiments and example simulations are presented, including homogeneous shock-free turbulence, a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability.
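
    The flux-switching idea in this record can be sketched in one dimension. The following is schematic rather than the authors' scheme: a local Lax-Friedrichs flux stands in for the WENO reconstruction, the shock sensor is a simple normalized solution jump with an assumed threshold, and Burgers' flux is used for concreteness.

        import numpy as np

        def hybrid_flux(u, f, sensor_tol=0.1):
            """Interface fluxes for a 1-D scalar conservation law: a centered
            (low-dissipation) flux in smooth regions, a dissipative upwind
            flux (standing in for WENO) where the shock sensor fires."""
            fi = f(u)
            centered = 0.5 * (fi[:-1] + fi[1:])
            # Local Lax-Friedrichs flux; |u| bounds the wave speed for Burgers.
            alpha = np.maximum(np.abs(u[:-1]), np.abs(u[1:]))
            upwind = centered - 0.5 * alpha * (u[1:] - u[:-1])
            # Sensor: normalized jump of the solution across the interface.
            jump = np.abs(u[1:] - u[:-1]) / (np.abs(u[1:]) + np.abs(u[:-1]) + 1e-12)
            return np.where(jump > sensor_tol, upwind, centered)

        def step(u, dx, dt, f=lambda q: 0.5 * q * q):
            # Conservative update of the interior cells (boundaries left fixed).
            flux = hybrid_flux(u, f)
            u[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
            return u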

  17. THREE-DIMENSIONAL ADAPTIVE MESH REFINEMENT SIMULATIONS OF LONG-DURATION GAMMA-RAY BURST JETS INSIDE MASSIVE PROGENITOR STARS

    SciTech Connect

    Lopez-Camara, D.; Lazzati, Davide; Morsony, Brian J.; Begelman, Mitchell C.

    2013-04-10

    We present the results of special relativistic, adaptive mesh refinement, 3D simulations of gamma-ray burst jets expanding inside a realistic stellar progenitor. Our simulations confirm that relativistic jets can propagate and break out of the progenitor star while remaining relativistic. This result is independent of the resolution, even though the amount of turbulence and variability observed in the simulations is greater at higher resolutions. We find that the propagation of the jet head inside the progenitor star is slightly faster in 3D simulations compared to 2D ones at the same resolution. This behavior seems to be due to the fact that the jet head in 3D simulations can wobble around the jet axis, finding the spot of least resistance to proceed. Most of the average jet properties, such as density, pressure, and Lorentz factor, are only marginally affected by the dimensionality of the simulations and therefore results from 2D simulations can be considered reliable.

  18. ADAPTIVELY REFINED LARGE EDDY SIMULATIONS OF A GALAXY CLUSTER: TURBULENCE MODELING AND THE PHYSICS OF THE INTRACLUSTER MEDIUM

    SciTech Connect

    Maier, A.; Schmidt, W.; Iapichino, L.; Niemeyer, J. C.

    2009-12-10

    We present a numerical scheme for modeling unresolved turbulence in cosmological adaptive mesh refinement codes. As a first application, we study the evolution of turbulence in the intracluster medium (ICM) and in the core of a galaxy cluster. Simulations with and without a subgrid scale (SGS) model are compared in detail. Since the flow in the ICM is subsonic, the global turbulent energy contribution at the unresolved length scales is smaller than 1% of the internal energy. We find that the production of turbulence is closely correlated with merger events occurring in the cluster environment, and its dissipation locally affects the cluster energy budget. Because of this additional source of dissipation, the core temperature is larger and the density is smaller in the presence of SGS turbulence than in the standard adiabatic run, resulting in a higher entropy core value.

  19. On solving the 3-D phase field equations by employing a parallel-adaptive mesh refinement (Para-AMR) algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Z.; Xiong, S. M.

    2015-05-01

    An algorithm comprising adaptive mesh refinement (AMR) and parallel (Para-) computing capabilities was developed to efficiently solve the coupled phase field equations in 3-D. The AMR was achieved based on a gradient criterion and the point clustering algorithm introduced by Berger (1991). To reduce the time for mesh generation, a dynamic regridding approach was developed based on the magnitude of the maximum phase advancing velocity. Local data at each computing process was then constructed and parallel computation was realized based on the hierarchical grid structure created during the AMR. Numerical tests and simulations on single and multi-dendrite growth were performed and results show that the proposed algorithm could shorten the computing time for 3-D phase field simulations by about two orders of magnitude and enable one to gain much more insight into the underlying physics of dendrite growth in solidification.
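
    The two adaptation ingredients mentioned here, gradient-based flagging and a regridding interval tied to the fastest phase-advancing velocity, can be sketched as follows. The exact criteria and constants in the paper may differ; the names and the buffer choice below are illustrative.

        import numpy as np

        def flag_cells(phi, dx, grad_tol):
            # Flag cells whose phase-field gradient magnitude exceeds the tolerance.
            gx, gy, gz = np.gradient(phi, dx)
            return np.sqrt(gx**2 + gy**2 + gz**2) > grad_tol

        def regrid_interval(v_max, dx, dt, buffer_cells=2):
            # Regrid only every n steps, where n is the number of time steps the
            # fastest-advancing interface needs to cross the refined buffer zone.
            return max(1, int(buffer_cells * dx / (v_max * dt)))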

  20. Procedure for Adapting Direct Simulation Monte Carlo Meshes

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.

    1992-01-01

    A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.

  1. Laser Ray Tracing in a Parallel Arbitrary Lagrangian-Eulerian Adaptive Mesh Refinement Hydrocode

    SciTech Connect

    Masters, N D; Kaiser, T B; Anderson, R W; Eder, D C; Fisher, A C; Koniges, A E

    2009-09-28

    ALE-AMR is a new hydrocode that we are developing as a predictive modeling tool for debris and shrapnel formation in high-energy laser experiments. In this paper we present our approach to implementing laser ray tracing in ALE-AMR. We present the equations of laser ray tracing and our approach to efficient traversal of the adaptive mesh hierarchy, in which computational rays propagate through a virtual composite mesh consisting of the finest-resolution representation of the modeled space. We also anticipate simulations that will be compared to experiments for code validation.

  2. PBR: a heavy-atom refinement and phasing procedure to reduce phase bias when heavy-atom derivatives contain common sites.

    PubMed

    Chabrière, E; Charon, M; Vellieux, F M

    1999-02-01

    A procedure, called PBR (phase-bias reduction), has been developed to properly refine heavy-atom derivatives and to generate less biased heavy-atom phases when these derivatives contain common heavy-atom sites. Two independent events are obtained by splitting the refinement and phasing calculations into two stages, the first in which one of the derivatives having common sites is used together with the native amplitudes and the second in which both derivatives with common sites are used simultaneously, with one of them being used as the native data set. Improved centroid phases and the corresponding figures of merit are obtained by phase combination. This procedure has been used in the structure determination of the iron-cluster-containing protein pyruvate-ferredoxin oxidoreductase. When the common heavy-atom sites are properly treated by the PBR procedure, the resulting calculated centroid phases are improved with respect to classical heavy-atom refinement centroid phases where all derivatives are refined together. This leads to improved electron-density distributions, since anomalous difference Fourier maps calculated with the PBR-refined centroid phases and corresponding figures of merit show more clearly the positions of the iron sites.

  3. Patched based methods for adaptive mesh refinement solutions of partial differential equations

    SciTech Connect

    Saltzman, J.

    1997-09-02

    This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, which is "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes. These schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundary and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods and elliptically constrained flow solution methods with diffusion.

  4. Total enthalpy-based lattice Boltzmann method with adaptive mesh refinement for solid-liquid phase change

    NASA Astrophysics Data System (ADS)

    Huang, Rongzong; Wu, Huiying

    2016-06-01

    A total enthalpy-based lattice Boltzmann (LB) method with adaptive mesh refinement (AMR) is developed in this paper to efficiently simulate solid-liquid phase change problems where variables vary significantly near the phase interface and thus a finer grid is required. For the total enthalpy-based LB method, the velocity field is solved by an incompressible LB model with multiple-relaxation-time (MRT) collision scheme, and the temperature field is solved by a total enthalpy-based MRT LB model with the phase interface effects considered and the deviation term eliminated. With a kinetic assumption that the density distribution function for solid phase is at equilibrium state, a volumetric LB scheme is proposed to accurately realize the nonslip velocity condition on the diffusive phase interface and in the solid phase. As compared with the previous schemes, this scheme can avoid nonphysical flow in the solid phase. As for the AMR approach, it is developed based on multiblock grids. An indicator function is introduced to control the adaptive generation of multiblock grids, which can guarantee the existence of overlap area between adjacent blocks for information exchange. Since MRT collision schemes are used, the information exchange is directly carried out in the moment space. Numerical tests are firstly performed to validate the strict satisfaction of the nonslip velocity condition, and then melting problems in a square cavity with different Prandtl numbers and Rayleigh numbers are simulated, which demonstrate that the present method can handle solid-liquid phase change problems with high efficiency and accuracy.

  5. Refining the calculation procedure for estimating the influence of flashing steam in steam turbine heaters on the increase of rotor rotation frequency during rejection of electric load

    NASA Astrophysics Data System (ADS)

    Novoselov, V. B.; Shekhter, M. V.

    2012-12-01

    A refined procedure for estimating the effect of the flashing of condensate in a steam turbine's regenerative and delivery-water heaters on the increase of rotor rotation frequency during rejection of electric load is presented. The results of calculations carried out according to the proposed procedure as applied to the delivery-water and regenerative heaters of a T-110/120-12.8 turbine are given.

  6. Adapting Assessment Procedures for Delivery via an Automated Format.

    ERIC Educational Resources Information Center

    Kelly, Karen L.; And Others

    The Office of Personnel Management (OPM) decided to explore alternative examining procedures for positions covered by the Administrative Careers with America (ACWA) examination. One requirement for new procedures was that they be automated for use with OPM's recently developed Microcomputer Assisted Rating System (MARS), a highly efficient system…

  7. Refinement and evaluation of helicopter real-time self-adaptive active vibration controller algorithms

    NASA Technical Reports Server (NTRS)

    Davis, M. W.

    1984-01-01

    A Real-Time Self-Adaptive (RTSA) active vibration controller was used as the framework in developing a computer program for a generic controller that can be used to alleviate helicopter vibration. Based upon on-line identification of system parameters, the generic controller minimizes vibration in the fuselage by closed-loop implementation of higher harmonic control in the main rotor system. The new generic controller incorporates a set of improved algorithms that gives the capability to readily define many different configurations by selecting one of three different controller types (deterministic, cautious, and dual), one of two linear system models (local and global), and one or more of several methods of applying limits on control inputs (external and/or internal limits on higher harmonic pitch amplitude and rate). A helicopter rotor simulation analysis was used to evaluate the algorithms associated with the alternative controller types as applied to the four-bladed H-34 rotor mounted on the NASA Ames Rotor Test Apparatus (RTA) which represents the fuselage. After proper tuning, all three controllers provide more effective vibration reduction and converge more quickly and smoothly with smaller control inputs than the initial RTSA controller (deterministic with external pitch-rate limiting). It is demonstrated that internal limiting of the control inputs significantly improves the overall performance of the deterministic controller.

  8. A Domain-Decomposed Multi-Level Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.; Nixon, David (Technical Monitor)

    1998-01-01

    The work presents a new on-the-fly domain decomposition technique for mapping grids and solution algorithms to parallel machines; it is applicable to both shared-memory and message-passing architectures. It will be demonstrated on the Cray T3E, HP Exemplar, and SGI Origin 2000. Computing time has been secured on all these platforms. The decomposition technique is an outgrowth of techniques used in computational physics for simulations of N-body problems and the event horizons of black holes, and has not been previously used by the CFD community. Since the technique offers on-the-fly partitioning, it offers a substantial increase in flexibility for computing in heterogeneous environments, where the number of available processors may not be known at the time of job submission. In addition, since it is dynamic it permits the job to be repartitioned without global communication in cases where additional processors become available after the simulation has begun, or in cases where dynamic mesh adaptation changes the mesh size during the course of a simulation. The platform for this partitioning strategy is a completely new Cartesian Euler solver targeted at parallel machines which may be used in conjunction with Ames' "Cart3D" arbitrary geometry simulation package.

  9. New algorithms for field-theoretic block copolymer simulations: Progress on using adaptive-mesh refinement and sparse matrix solvers in SCFT calculations

    NASA Astrophysics Data System (ADS)

    Sides, Scott; Jamroz, Ben; Crockett, Robert; Pletzer, Alexander

    2012-02-01

    Self-consistent field theory (SCFT) for dense polymer melts has been highly successful in describing complex morphologies in block copolymers. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. The modified diffusion equations that arise as a consequence of the coarse-graining procedure in the SCF theory can be efficiently solved with a pseudo-spectral (PS) method that uses fast Fourier transforms on uniform Cartesian grids. However, PS methods can be difficult to apply in many block copolymer SCFT simulations (e.g., confinement, interface adsorption) in which small spatial regions might require finer resolution than most of the simulation grid. Progress on using new solver algorithms to address these problems will be presented. The Tech-X Chompst project aims at marrying the best of adaptive mesh refinement with linear matrix solver algorithms. The Tech-X code PolySwift++ is an SCFT simulation platform that leverages ongoing development in coupling Chombo, a package for solving PDEs via block-structured AMR calculations and embedded boundaries, with PETSc, a toolkit that includes a large assortment of sparse linear solvers.
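
    The pseudo-spectral update referred to here can be sketched for the modified diffusion equation dq/ds = lap(q) - w(r) q on a periodic grid. This is a generic textbook-style Strang splitting, not PolySwift++ code; all names and the grid setup are illustrative.

        import numpy as np

        def mde_step(q, w, ds, k2):
            """One Strang-split pseudo-spectral step of dq/ds = lap(q) - w*q.
            q, w: real arrays on the grid; k2: |k|^2 on the FFT grid."""
            q = np.exp(-0.5 * ds * w) * q                              # half-step of the field term
            q = np.fft.ifftn(np.exp(-ds * k2) * np.fft.fftn(q)).real  # exact diffusion in k-space
            return np.exp(-0.5 * ds * w) * q                          # second half-step

        # Minimal setup on a 32^3 periodic box of side L.
        L, n = 10.0, 32
        k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
        KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
        k2 = KX**2 + KY**2 + KZ**2
        q = np.ones((n, n, n))
        w = np.zeros((n, n, n))   # placeholder potential field
        q = mde_step(q, w, ds=0.01, k2=k2)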

  10. Procedure for Adaptive Laboratory Evolution of Microorganisms Using a Chemostat.

    PubMed

    Jeong, Haeyoung; Lee, Sang J; Kim, Pil

    2016-01-01

    Natural evolution involves genetic diversity, environmental change, and selection between small populations. Adaptive laboratory evolution (ALE) refers to the experimental situation in which evolution is observed using living organisms under controlled conditions and stressors; organisms are thereby artificially forced to make evolutionary changes. Microorganisms are subject to a variety of stressors in the environment and are capable of regulating certain stress-inducible proteins to increase their chances of survival. Naturally occurring spontaneous mutations bring about changes in a microorganism's genome that affect its chances of survival. Long-term exposure to chemostat culture provokes an accumulation of spontaneous mutations and renders the most adaptable strain dominant. Compared to the colony transfer and serial transfer methods, chemostat culture entails the highest number of cell divisions and, therefore, the highest number of diverse populations. Although chemostat culture for ALE requires more complicated culture devices, it is less labor intensive once the operation begins. Comparative genomic and transcriptome analyses of the adapted strain provide evolutionary clues as to how the stressors contribute to mutations that overcome the stress. The goal of the current paper is to bring about accelerated evolution of microorganisms under controlled laboratory conditions.

  12. Parallelization of GeoClaw code for modeling geophysical flows with adaptive mesh refinement on many-core systems

    USGS Publications Warehouse

    Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.

    2011-01-01

    We parallelized the GeoClaw code on one-level grid using OpenMP in March, 2011 to meet the urgent need of simulating near-shore tsunami waves from the 2011 Tohoku event and achieved over 75% of the potential speed-up on an eight core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing, we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we will show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantage of each of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capabilities of running GeoClaw efficiently on many-core systems. We will also show a novel simulation of the Tohoku 2011 Tsunami waves inundating the Sendai airport and Fukushima Nuclear Power Plants, over which the finest grid distance of 20 meters is achieved through a 4-level AMR. This simulation yields quite good predictions about the wave heights and travel time of the tsunami waves. © 2011 IEEE.

  13. The formation of entropy cores in non-radiative galaxy cluster simulations: smoothed particle hydrodynamics versus adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Power, C.; Read, J. I.; Hobbs, A.

    2014-06-01

    We simulate cosmological galaxy cluster formation using three different approaches to solving the equations of non-radiative hydrodynamics - classic smoothed particle hydrodynamics (SPH), novel SPH with a higher order dissipation switch (SPHS), and an adaptive mesh refinement (AMR) method. Comparing spherically averaged entropy profiles, we find that SPHS and AMR approaches result in a well-defined entropy core that converges rapidly with increasing mass and force resolution. In contrast, the central entropy profile in the SPH approach is sensitive to the cluster's assembly history and shows poor numerical convergence. We trace this disagreement to the known artificial surface tension in SPH that appears at phase boundaries. Systematically varying the numerical dissipation in SPHS, we study the contributions of numerical and physical dissipation to the entropy core and argue that numerical dissipation is required to ensure single-valued fluid quantities in converging flows. However, provided it occurs only at the resolution limit and does not propagate errors to larger scales, its effect is benign - there is no requirement to build `sub-grid' models of unresolved turbulence for galaxy cluster simulations. We conclude that entropy cores in non-radiative galaxy cluster simulations are physical, resulting from entropy generation in shocked gas during cluster assembly.

  14. Temperature Structure of the Intracluster Medium from Smoothed-particle Hydrodynamics and Adaptive-mesh Refinement Simulations

    NASA Astrophysics Data System (ADS)

    Rasia, Elena; Lau, Erwin T.; Borgani, Stefano; Nagai, Daisuke; Dolag, Klaus; Avestruz, Camille; Granato, Gian Luigi; Mazzotta, Pasquale; Murante, Giuseppe; Nelson, Kaylea; Ragone-Figueroa, Cinthia

    2014-08-01

    Analyses of cosmological hydrodynamic simulations of galaxy clusters suggest that X-ray masses can be underestimated by 10%-30%. The largest bias originates from both violation of hydrostatic equilibrium (HE) and an additional temperature bias caused by inhomogeneities in the X-ray-emitting intracluster medium (ICM). To elucidate this large dispersion among theoretical predictions, we evaluate the degree of temperature structures in cluster sets simulated either with smoothed-particle hydrodynamics (SPH) or adaptive-mesh refinement (AMR) codes. We find that the SPH simulations produce larger temperature variations connected to the persistence of both substructures and their stripped cold gas. This difference is more evident in nonradiative simulations, whereas it is reduced in the presence of radiative cooling. We also find that the temperature variation in radiative cluster simulations is generally in agreement with that observed in the central regions of clusters. Around R500 the temperature inhomogeneities of the SPH simulations can generate twice the typical HE mass bias of the AMR sample. We emphasize that a detailed understanding of the physical processes responsible for the complex thermal structure in ICM requires improved resolution and high-sensitivity observations in order to extend the analysis to higher temperature systems and larger cluster-centric radii.

  16. An Immersed Boundary - Adaptive Mesh Refinement solver (IB-AMR) for high fidelity fully resolved wind turbine simulations

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2015-11-01

    The geometrical details of wind turbines determine the structure of the turbulence in the near and far wake and should be taken into account when performing high fidelity calculations. Multi-resolution simulations coupled with an immersed boundary method constitute a powerful framework for high-fidelity calculations past wind farms located over complex terrains. We develop a 3D Immersed-Boundary Adaptive Mesh Refinement flow solver (IB-AMR) which enables turbine-resolving LES of wind turbines. The idea of using a hybrid staggered/non-staggered grid layout adopted in the Curvilinear Immersed Boundary Method (CURVIB) has been successfully incorporated on unstructured meshes and the fractional step method has been employed. The overall performance and robustness of the second order accurate, parallel, unstructured solver is evaluated by comparing the numerical simulations against conforming grid calculations and experimental measurements of laminar and turbulent flows over complex geometries. We also present turbine-resolving multi-scale LES considering all the details affecting the induced flow field; including the geometry of the tower, the nacelle and especially the rotor blades of a wind tunnel scale turbine. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the Sandia National Laboratories.

  17. Io's Plasma Environment During the Galileo Flyby: Global Three-Dimensional MHD Modeling with Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Combi, M. R.; Kabin, K.; Gombosi, T. I.; DeZeeuw, D. L.; Powell, K. G.

    1998-01-01

    The first results for applying a three-dimensional multiscale ideal MHD model for the mass-loaded flow of Jupiter's corotating magnetospheric plasma past Io are presented. The model is able to consider simultaneously physically realistic conditions for ion mass loading, ion-neutral drag, and intrinsic magnetic field in a full global calculation without imposing artificial dissipation. Io is modeled with an extended neutral atmosphere which loads the corotating plasma torus flow with mass, momentum, and energy. The governing equations are solved using adaptive mesh refinement on an unstructured Cartesian grid using an upwind scheme for MHD. For the work described in this paper we explored a range of models without an intrinsic magnetic field for Io. We compare our results with particle and field measurements made during the December 7, 1995, flyby of Io, as published by the Galileo Orbiter experiment teams. For two extreme cases of lower boundary conditions at Io, our model can quantitatively explain the variation of density along the spacecraft trajectory and can reproduce the general appearance of the variations of magnetic field and ion pressure and temperature. The net fresh ion mass-loading rates are in the range of approximately 300-650 kg/s, and equivalent charge exchange mass-loading rates are in the range of approximately 540-1150 kg/s in the vicinity of Io.

  18. Adaptive learning in a compartmental model of visual cortex—how feedback enables stable category learning and refinement

    PubMed Central

    Layher, Georg; Schrodt, Fabian; Butz, Martin V.; Neumann, Heiko

    2014-01-01

    The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards form two separate mammalian categories, both of which are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in computational neuroscience. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of additional (sub-) category representations. We demonstrate the temporal evolution of such learning and show how the proposed combination of an associative memory with a modulatory feedback integration successfully establishes category and subcategory representations.

  19. A low-numerical dissipation, patch-based adaptive-mesh-refinement method for large-eddy simulation of compressible flows

    NASA Astrophysics Data System (ADS)

    Pantano, C.; Deiterding, R.; Hill, D. J.; Pullin, D. I.

    2006-09-01

    This paper describes a hybrid finite-difference method for the large-eddy simulation of compressible flows with low-numerical dissipation and structured adaptive mesh refinement (SAMR). A conservative flux-based approach is described with an explicit centered scheme used in turbulent flow regions while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. Three-dimensional numerical simulations of a Richtmyer-Meshkov instability are presented.

  20. [Adaptive procedures for measuring arterial blood flow velocity in retinal vessels using indicator technique].

    PubMed

    Vilser, W; Schack, B; Bareshova, E; Senff, I; Bräuer-Burchardt, C; Münch, K; Strobel, J

    1995-10-01

    There are highly significant differences (up to 800%) between measurements of arterial blood velocity obtained with the indicator and laser-Doppler techniques. A new measuring procedure for the analysis of indicator dilution curves was developed based on an indicator model and experimental results. The use of this new measuring procedure reduces the mean systematic error between the indicator and laser-Doppler techniques to values around 10%. With the introduction of adaptive measuring arrays for the creation of indicator dilution curves and the application of adaptive algorithms for centering and spectral normalizing of the dilution curves, improved reproducibility can be expected.

  1. Multilevel adaptive solution procedure for material nonlinear problems in visual programming environment

    SciTech Connect

    Kim, D.; Ghanem, R.

    1994-12-31

    A multigrid solution technique for material nonlinear problems in a visual programming environment using the finite element method is discussed. The nonlinear equation of equilibrium is linearized to incremental form using the Newton-Raphson technique; a multigrid technique is then used to solve the linear equations at each Newton-Raphson step. In the process, adaptive mesh refinement, which is based on the bisection of a pair of triangles, is used to form the grid hierarchy for multigrid iteration. The solution process is implemented in a visual programming environment with distributed computing capability, which enables more intuitive understanding of the solution process and more effective use of resources.
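
    The overall solution loop described here can be skeletonized as below. A dense direct solve stands in for the multigrid iteration at each Newton-Raphson step, and residual/tangent are user-supplied callables; this is an illustration, not the authors' implementation.

        import numpy as np

        def solve_nonlinear(residual, tangent, u0, n_increments=10, tol=1e-8, max_newton=20):
            """Incremental-load Newton-Raphson driver.
            residual(u, lam): out-of-balance force at load factor lam
            tangent(u, lam):  consistent tangent stiffness matrix"""
            u = u0.copy()
            for inc in range(1, n_increments + 1):
                lam = inc / n_increments            # load factor for this increment
                for _ in range(max_newton):
                    r = residual(u, lam)
                    if np.linalg.norm(r) < tol:
                        break
                    K = tangent(u, lam)
                    u -= np.linalg.solve(K, r)      # a multigrid V-cycle would replace this
            return u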

  2. A Procedure for Controlling General Test Overlap in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Chen, Shu-Ying

    2010-01-01

    To date, exposure control procedures that are designed to control test overlap in computerized adaptive tests (CATs) are based on the assumption of item sharing between pairs of examinees. However, in practice, examinees may obtain test information from more than one previous test taker. This larger scope of information sharing needs to be…

  3. Performance of a Block Structured, Hierarchical Adaptive Mesh Refinement Code on the 64k Node IBM BlueGene/L Computer

    SciTech Connect

    Greenough, Jeffrey A.; de Supinski, Bronis R.; Yates, Robert K.; Rendleman, Charles A.; Skinner, David; Beckner, Vince; Lijewski, Mike; Bell, John; Sexton, James C.

    2005-04-25

    We describe the performance of the block-structured Adaptive Mesh Refinement (AMR) code Raptor on the 32k node IBM BlueGene/L computer. This machine represents a significant step forward towards petascale computing. As such, it presents Raptor with many challenges for utilizing the hardware efficiently. In terms of performance, Raptor shows excellent weak and strong scaling when running in single level mode (no adaptivity). Hardware performance monitors show Raptor achieves an aggregate performance of 3.0 Tflops in the main integration kernel on the 32k system. Results from preliminary AMR runs on a prototype astrophysical problem demonstrate the efficiency of the current software when running at large scale. The BG/L system is enabling a physics problem to be considered that represents a factor of 64 increase in overall size compared to the largest ones of this type computed to date. Finally, we provide a description of the development work currently underway to address our inefficiencies.

  4. Flowfield-Dependent Variation (FDV) method for compressible, incompressible, viscous, and inviscid flow interactions with FDV adaptive mesh refinements and parallel processing

    NASA Astrophysics Data System (ADS)

    Heard, Gary Wayne

    A new approach to solution-adaptive grid refinement using the finite element method and Flowfield-Dependent Variation (FDV) theory applied to the Navier-Stokes system of equations is discussed. Flowfield-Dependent Variation (FDV) parameters are introduced into a modified Taylor series expansion of the conservation variables, with the Navier-Stokes system of equations substituted into the Taylor series. The FDV parameters are calculated from the current flowfield conditions, and automatically adjust the resulting equations from elliptic to parabolic to hyperbolic in type to assure solution accuracy in evolving fluid flowfields that may consist of interactions between regions of compressible and incompressible flow, viscous and inviscid flow, and turbulent and laminar flow. The system of equations is solved using an element-by-element iterative GMRES solver with the elements grouped together to allow the element operations to be performed in parallel. The FDV parameters play many roles in the numerical scheme. One of these roles is to control the formation of shock wave discontinuities at high speeds and pressure oscillations at low speeds. To demonstrate these abilities, various example problems are shown, including supersonic flows over a flat plate and a compression corner, and flows involving triple shock waves generated on fin geometries for high speed compressible flows. Furthermore, analysis of low speed incompressible flows is presented in the form of flow in a lid-driven cavity at various Reynolds numbers. Another role of the FDV parameters is their use as error indicators for a solution-adaptive mesh. The finite element grid is refined as dictated by the magnitude of the FDV parameters. Examples of adaptive grids generated using the FDV parameters as error indicators are presented for supersonic flow over flat plate/compression ramp combinations in both two and three dimensions. Grids refined using the FDV parameters as error indicators are comparable to ones

  5. Three dimensional hydrodynamic calculations with adaptive mesh refinement of the evolution of Rayleigh Taylor and Richtmyer Meshkov instabilities in converging geometry: Multi-mode perturbations

    SciTech Connect

    Klein, R.I. |; Bell, J.; Pember, R.; Kelleher, T.

    1993-04-01

    The authors present results for high resolution hydrodynamic calculations of the growth and development of instabilities in shock driven imploding spherical geometries in both 2D and 3D. They solve the Eulerian equations of hydrodynamics with a high order Godunov approach using local adaptive mesh refinement to study the temporal and spatial development of the turbulent mixing layer resulting from both Richtmyer Meshkov and Rayleigh Taylor instabilities. The use of a high resolution Eulerian discretization with adaptive mesh refinement permits them to study the detailed three-dimensional growth of multi-mode perturbations far into the non-linear regime for converging geometries. They discuss convergence properties of the simulations by calculating global properties of the flow. They discuss the time evolution of the turbulent mixing layer and compare its development to a simple theory for a turbulent mix model in spherical geometry based on Plesset's equation. Their 3D calculations show that the constant found in the planar incompressible experiments of Read and Youngs may not be universal for converging compressible flow. They show the 3D time trace of transitional onset to a mixing state using the temporal evolution of volume rendered imaging. Their preliminary results suggest that the turbulent mixing layer loses memory of its initial perturbations for classical Richtmyer Meshkov and Rayleigh Taylor instabilities in spherically imploding shells. They discuss the time evolution of mixed volume fraction and the role of vorticity in converging 3D flows in enhancing the growth of a turbulent mixing layer.

  6. Solving the ECG forward problem by means of standard h- and h-hierarchical adaptive linear boundary element method: comparison with two refinement schemes.

    PubMed

    Shou, Guofa; Xia, Ling; Jiang, Mingfeng; Wei, Qing; Liu, Feng; Crozier, Stuart

    2009-05-01

    The boundary element method (BEM) is a commonly used numerical approach to solve biomedical electromagnetic volume conductor models such as ECG and EEG problems, in which only the interfaces between various tissue regions need to be modeled. The quality of the boundary element discretization affects the accuracy of the numerical solution, and the construction of high-quality meshes is time-consuming and always problem-dependent. Adaptive BEM (aBEM) has been developed and validated as an effective method to tackle such problems in electromagnetic and mechanical fields, but has not been extensively investigated in the ECG problem. In this paper, the h aBEM, which produces refined meshes through adaptive adjustment of the elements' connection, is investigated for the ECG forward problem. Two different refinement schemes, adding one new node (SH1) and adding three new nodes (SH3), are applied for the h aBEM calculation. In order to save the computational time, the h-hierarchical aBEM is also used through the introduction of the h-hierarchical shape functions for SH3. The algorithms were evaluated with a single-layer homogeneous sphere model with assumed dipole sources and a geometrically realistic heart-torso model. The simulations showed that h aBEM can produce better mesh results and is more accurate and effective than the traditional BEM for the ECG problem. With the same refinement scheme (SH3), the h-hierarchical aBEM saves about 9% of the computational cost compared to the standard h aBEM.
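
    The two refinement schemes can be pictured on a single triangle, as sketched below. The abstract specifies only the number of added nodes, so the choice of the longest edge for SH1 is an assumption made for illustration.

        import numpy as np

        def refine_sh1(tri):
            # SH1: add one node at the midpoint of the longest edge and bisect.
            a, b, c = tri
            edges = [(a, b, c), (b, c, a), (c, a, b)]        # (edge p-q, opposite r)
            p, q, r = max(edges, key=lambda e: np.linalg.norm(e[1] - e[0]))
            m = 0.5 * (p + q)
            return [np.array([p, m, r]), np.array([m, q, r])]

        def refine_sh3(tri):
            # SH3: add three edge-midpoint nodes, splitting into four triangles.
            a, b, c = tri
            ab, bc, ca = 0.5 * (a + b), 0.5 * (b + c), 0.5 * (c + a)
            return [np.array([a, ab, ca]), np.array([ab, b, bc]),
                    np.array([ca, bc, c]), np.array([ab, bc, ca])]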

  7. A comparison of the structures of lean and rich axisymmetric laminar Bunsen flames: application of local rectangular refinement solution-adaptive gridding

    NASA Astrophysics Data System (ADS)

    Bennett, Beth Anne V.; Fielding, Joseph; Mauro, Richard J.; Long, Marshall B.; Smooke, Mitchell D.

    1999-12-01

    Axisymmetric laminar methane-air Bunsen flames are computed for two equivalence ratios: lean (Φ = 0.776), in which the traditional Bunsen cone forms above the burner; and rich (Φ = 1.243), in which the premixed Bunsen cone is accompanied by a diffusion flame halo located further downstream. Because the extremely large gradients at premixed flame fronts greatly exceed those in diffusion flames, their resolution requires a more sophisticated adaptive numerical method than those ordinarily applied to diffusion flames. The local rectangular refinement (LRR) solution-adaptive gridding method produces robust unstructured rectangular grids, utilizes multiple-scale finite-difference discretizations, and incorporates Newton's method to solve elliptic partial differential equation systems simultaneously. The LRR method is applied to the vorticity-velocity formulation of the fully elliptic governing equations, in conjunction with detailed chemistry, multicomponent transport and an optically-thin radiation model. The computed lean flame is lifted above the burner, and this liftoff is verified experimentally. For both lean and rich flames, grid spacing greatly influences the Bunsen cone's position, which only stabilizes with adequate refinement. In the rich configuration, the oxygen-free region above the Bunsen cone inhibits the complete decay of CH4, thus indirectly initiating the diffusion flame halo where CO oxidizes to CO2. In general, the results computed by the LRR method agree quite well with those obtained on equivalently refined conventional grids, yet the former require less than half the computational resources.

  8. Paradoxical results of adaptive false discovery rate procedures in neuroimaging studies.

    PubMed

    Reiss, Philip T; Schwartzman, Armin; Lu, Feihan; Huang, Lei; Proal, Erika

    2012-12-01

    Adaptive false discovery rate (FDR) procedures, which offer greater power than the original FDR procedure of Benjamini and Hochberg, are often applied to statistical maps of the brain. When a large proportion of the null hypotheses are false, as in the case of widespread effects such as cortical thinning throughout much of the brain, adaptive FDR methods can surprisingly reject more null hypotheses than not accounting for multiple testing at all, i.e., using uncorrected p-values. A straightforward mathematical argument is presented to explain why this can occur with the q-value method of Storey and colleagues, and a simulation study shows that it can also occur, to a lesser extent, with a two-stage FDR procedure due to Benjamini and colleagues. We demonstrate the phenomenon with reference to a published data set documenting cortical thinning in attention deficit/hyperactivity disorder. The paper concludes with recommendations for how to proceed when adaptive FDR results of this kind are encountered in practice.
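
    The paradox is easy to reproduce numerically: when most null hypotheses are false, the estimated true-null proportion pi0 is small, and adaptive BH at level alpha/pi0 can reject more tests than uncorrected p < alpha. The simulation below is our own illustration with arbitrary parameters, not data from the paper.

        import numpy as np

        def bh_reject(p, alpha):
            """Benjamini-Hochberg step-up: reject the k smallest p-values,
            where k is the largest index with p_(k) <= (k/m) * alpha."""
            m = len(p)
            order = np.argsort(p)
            below = p[order] <= alpha * np.arange(1, m + 1) / m
            k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
            reject = np.zeros(m, dtype=bool)
            reject[order[:k]] = True
            return reject

        def storey_pi0(p, lam=0.5):
            # Storey's estimate of the proportion of true nulls.
            return min(1.0, np.mean(p > lam) / (1.0 - lam))

        rng = np.random.default_rng(0)
        m, n_null = 10000, 1000                    # 90% of tests carry real signal
        p = np.concatenate([rng.uniform(size=n_null),
                            rng.beta(0.1, 8.0, size=m - n_null)])
        alpha = 0.05
        pi0 = storey_pi0(p)
        print("pi0 estimate:", pi0)
        print("uncorrected rejections:", np.sum(p <= alpha))
        print("adaptive BH (alpha/pi0):", np.sum(bh_reject(p, alpha / pi0)))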

  9. View planning and mesh refinement effects on a semi-automatic three-dimensional photorealistic texture mapping procedure

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong; Yang, Yuanfan

    2012-02-01

    A novel three-dimensional (3-D) photorealistic texturing process is presented that applies a view-planning and view-sequencing algorithm to the 3-D coarse model to determine a set of best viewing angles for capturing the individual real-world objects/building's images. The best sequence of views will generate sets of visible edges in each view to serve as a guide for camera field shots by either manual adjustment or equipment alignment. The best view tries to cover as many objects/building surfaces as possible in one shot. This will lead to a smaller total number of shots taken for a complete model reconstruction requiring texturing with photo-realistic effects. The direct linear transformation method (DLT) is used for reprojection of 3-D model vertices onto a two-dimensional (2-D) image plane for actual texture mapping. Given this method, the actual camera orientations do not have to be unique and can be set arbitrarily without heavy and expensive positioning equipment. We also present results of a study on the texture-mapping precision as a function of the level of visible mesh subdivision. In addition, the control point selection for the DLT method used for reprojection of 3-D model vertices onto 2-D textured images is also investigated for its effects on mapping precision. By using DLT and perspective projection theories on a coarse model feature points, this technique will allow accurate 3-D texture mapping of refined model meshes of real-world buildings. The novel integration flow of this research not only greatly reduces the human labor and intensive equipment requirements of traditional methods, but also generates a more appealing photo-realistic appearance of reconstructed models, which is useful in many multimedia applications. The roles of view planning (VP) are multifold. VP can (1) reduce the repetitive texture-mapping computation load, (2) can present a set of visible model wireframe edges that can serve as a guide for images with sharp edges and

  10. An adaptive weighted ensemble procedure for efficient computation of free energies and first passage rates.

    PubMed

    Bhatt, Divesh; Bahar, Ivet

    2012-09-14

    We introduce an adaptive weighted-ensemble procedure (aWEP) for efficient and accurate evaluation of first-passage rates between states for two-state systems. The basic idea that distinguishes aWEP from conventional weighted-ensemble (WE) methodology is the division of the configuration space into smaller regions and equilibration of the trajectories within each region upon adaptive partitioning of the regions themselves into small grids. The equilibrated conditional/transition probabilities between each pair of regions lead to the determination of populations of the regions and the first-passage times between regions, which in turn are combined to evaluate the first passage times for the forward and backward transitions between the two states. The application of the procedure to a non-trivial coarse-grained model of a 70-residue calcium binding domain of calmodulin is shown to efficiently yield information on the equilibrium probabilities of the two states as well as their first passage times. Notably, the new procedure is significantly more efficient than the canonical implementation of the WE procedure, and this improvement becomes even more significant at low temperatures.
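
    For context, the split/merge bookkeeping of a conventional WE resampling step within a single bin can be sketched as follows; aWEP's region partitioning and within-region equilibration are omitted, and all names are illustrative.

        import numpy as np

        def resample_bin(walkers, weights, target, rng):
            """Split/merge the walkers of one bin to `target` copies while
            conserving total weight. Walkers are opaque configurations."""
            walkers, weights = list(walkers), list(weights)
            # Merge: combine the two lightest walkers; keep one of the pair with
            # probability proportional to weight, so averages stay unbiased.
            while len(walkers) > target:
                i, j = np.argsort(weights)[:2]
                w = weights[i] + weights[j]
                keep = i if rng.random() < weights[i] / w else j
                drop = j if keep == i else i
                weights[keep] = w
                del walkers[drop], weights[drop]
            # Split: clone the heaviest walker, halving its weight.
            while len(walkers) < target:
                i = int(np.argmax(weights))
                walkers.append(walkers[i])
                weights[i] *= 0.5
                weights.append(weights[i])
            return walkers, weights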

  11. Prism Adaptation and Aftereffect: Specifying the Properties of a Procedural Memory System

    PubMed Central

    Fernández-Ruiz, Juan; Díaz, Rosalinda

    1999-01-01

    Prism adaptation, a form of procedural learning, is a phenomenon in which the motor system adapts to new visuospatial coordinates imposed by prisms that displace the visual field. Once the prisms are withdrawn, the degree and strength of the adaptation can be measured by the spatial deviation of the motor actions in the direction opposite to the visual displacement imposed by the prisms, a phenomenon known as aftereffect. This study was designed to define the variables that affect the acquisition and retention of the aftereffect. Subjects were required to throw balls to a target in front of them before, during, and after lateral displacement of the visual field with prismatic spectacles. The diopters of the prisms and the number of throws were varied among different groups of subjects. The results show that the adaptation process is dependent on the number of interactions between the visual and motor system, and not on the time spent wearing the prisms. The results also show that the magnitude of the aftereffect is highly correlated with the magnitude of the adaptation, regardless of the diopters of the prisms or the number of throws. Finally, the results suggest that persistence of the aftereffect depends on the number of throws after the adaptation is complete. On the basis of these results, we propose that the system underlying this kind of learning stores at least two different parameters, the contents (measured as the magnitude of displacement) and the persistence (measured as the number of throws to return to the baseline) of the learned information. PMID:10355523

  12. Multi-threaded adaptive extrapolation procedure for Feynman loop integrals in the physical region

    NASA Astrophysics Data System (ADS)

    de Doncker, E.; Yuasa, F.; Assaf, R.

    2013-08-01

    Feynman loop integrals appear in higher order corrections of interaction cross section calculations in perturbative quantum field theory. The integrals are computationally intensive, especially in view of singularities which may occur within the integration domain. For the treatment of threshold and infrared singularities we developed techniques using iterated (repeated) adaptive integration and extrapolation. In this paper we describe a shared memory parallelization and its application to one- and two-loop problems, by multi-threading in the outer integrations of the iterated integral. The implementation is layered over OpenMP and retains the adaptive procedure of the sequential method exactly. We give performance results for loop integrals associated with various types of diagrams including one-loop box, pentagon, two-loop self-energy and two-loop vertex diagrams.
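
    The parallel structure described, outer integration abscissas farmed out to workers while each worker performs the inner adaptive integration, can be sketched with the standard library and SciPy. The integrand, the fixed midpoint outer rule, the use of processes rather than OpenMP threads, and the omission of the extrapolation stage are all simplifications.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor
        from scipy.integrate import quad

        def integrand(x, y):
            return 1.0 / (x * x + y * y + 0.01)   # mildly peaked near the origin

        def inner(x):
            # Innermost adaptive integration; one task per outer abscissa.
            return quad(lambda y: integrand(x, y), 0.0, 1.0)[0]

        def outer(n_outer=64):
            # Midpoint rule on [0, 1] for the outer variable; each abscissa is
            # an independent task, which is the parallelism exploited above.
            xs = (np.arange(n_outer) + 0.5) / n_outer
            with ProcessPoolExecutor() as pool:
                vals = list(pool.map(inner, xs))
            return np.mean(vals)                  # interval length is 1

        if __name__ == "__main__":
            print(outer())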

  13. An adaptive procedure for the numerical parameters of a particle simulation

    NASA Astrophysics Data System (ADS)

    Galitzine, Cyril; Boyd, Iain D.

    2015-01-01

    In this article, a computational procedure that automatically determines the optimum time step, cell weight and species weights for steady-state multi-species DSMC (direct simulation Monte Carlo) simulations is presented. The time step is required to satisfy the basic requirements of the DSMC method while the weight and relative weights fields are chosen so as to obtain a user-specified average number of particles in all cells of the domain. The procedure allows efficient DSMC simulations to be conducted with minimal user input and can be integrated into existing DSMC codes. The adaptive method is used to simulate a test case consisting of two counterflowing jets at a Knudsen number of 0.015. Large accuracy gains for sampled number densities and velocities over a standard simulation approach for the same number of particles are observed.
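
    The parameter choices sketched in this abstract reduce to simple formulas: a particle weight that makes the expected per-cell count hit the user's target, and a time step bounded by the local mean collision time. The helpers below are an arithmetic sketch with an assumed safety factor, not the authors' code.

        import numpy as np

        def cell_weight(number_density, cell_volume, target_particles):
            # Real molecules represented by each simulated particle, chosen so
            # the cell holds about `target_particles` DSMC particles.
            return number_density * cell_volume / target_particles

        def time_step(mean_collision_time, safety=0.2):
            # DSMC needs dt well below the mean collision time; take a fraction
            # of the domain minimum (the safety factor is an assumed choice).
            return safety * np.min(mean_collision_time)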

  14. An adaptive clinical trials procedure for a sensitive subgroup examined in the multiple sclerosis context.

    PubMed

    Riddell, Corinne A; Zhao, Yinshan; Petkau, John

    2016-08-01

    The biomarker-adaptive threshold design (BATD) allows researchers to simultaneously study the efficacy of treatment in the overall group and to investigate the relationship between a hypothesized predictive biomarker and the treatment effect on the primary outcome. It was originally developed for survival outcomes in Phase III clinical trials where the biomarker of interest is measured on a continuous scale. In this paper, generalizations of the BATD to accommodate count biomarkers and outcomes are developed and then studied in the multiple sclerosis (MS) context where the number of relapses is a commonly used outcome. Through simulation studies, we find that the BATD has increased power compared with a traditional fixed procedure under varying scenarios for which there exists a sensitive patient subgroup. As an illustration, we apply the procedure to two hypothesized markers, baseline enhancing lesion count and disease duration at baseline, using data from a previously completed trial. MS duration appears to be a predictive marker for this dataset, and the procedure indicates that the treatment effect is strongest for patients who have had MS for less than 7.8 years. The procedure holds promise of enhanced statistical power when the treatment effect is greatest in a sensitive patient subgroup.

  15. An adaptive gating approach for x-ray dose reduction during cardiac interventional procedures

    SciTech Connect

    Abdel-Malek, A.; Yassa, F.; Bloomer, J.

    1994-03-01

    The increasing number of cardiac interventional procedures has resulted in a tremendous increase in the x-ray dose absorbed by radiologists as well as patients. A new method is presented for x-ray dose reduction which utilizes adaptive tube pulse-rate scheduling in pulsed fluoroscopic systems. In the proposed system, pulse-rate scheduling depends on the heart muscle activity phase determined through continuous guided segmentation of the patient's electrocardiogram (ECG). Displaying images generated at the proposed adaptive nonuniform rate is visually unacceptable; therefore, a frame-filling approach is devised to ensure a 30 frame/sec display rate. The authors adopted two approaches for the frame-filling portion of the system depending on the imaging mode used in the procedure. During cine-mode imaging (high x-ray dose), frame-to-frame pixel motion in the collected images is estimated using a pel-recursive algorithm, followed by motion-based pixel interpolation to estimate the frames necessary to increase the rate to 30 frames/sec. The other frame-filling approach is adopted during fluoro-mode imaging (low x-ray dose), characterized by low signal-to-noise ratio images. This approach consists of simply holding the last collected frame for as many frames as necessary to maintain the real-time display rate.
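
    The fluoro-mode frame-filling rule described above (hold the last acquired frame until the next one arrives) can be illustrated with a toy resampler; the timings and names here are assumptions for illustration only.

      def fill_frames(acquisitions, duration_s, display_fps=30):
          """acquisitions: list of (time_s, frame) pairs sorted by time.
          Returns one frame per display tick, repeating the newest frame."""
          display, idx, last = [], 0, None
          for k in range(int(duration_s * display_fps)):
              t = k / display_fps
              while idx < len(acquisitions) and acquisitions[idx][0] <= t:
                  last = acquisitions[idx][1]    # newest frame at or before t
                  idx += 1
              display.append(last)               # hold last frame (None until first)
          return display

      print(fill_frames([(0.0, "F0"), (0.1, "F1"), (0.4, "F2")], duration_s=0.5))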

  16. Tetrahedral and Hexahedral Mesh Adaptation for CFD Problems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Strawn, Roger C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents two unstructured mesh adaptation schemes for problems in computational fluid dynamics. The procedures allow localized grid refinement and coarsening to efficiently capture aerodynamic flow features of interest. The first procedure is for purely tetrahedral grids; unfortunately, repeated anisotropic adaptation may significantly deteriorate the quality of the mesh. Hexahedral elements, on the other hand, can be subdivided anisotropically without mesh quality problems. Furthermore, hexahedral meshes yield more accurate solutions than their tetrahedral counterparts for the same number of edges. Both the tetrahedral and hexahedral mesh adaptation procedures use edge-based data structures that facilitate efficient subdivision by allowing individual edges to be marked for refinement or coarsening. However, for hexahedral adaptation, pyramids, prisms, and tetrahedra are used as buffer elements between refined and unrefined regions to eliminate hanging vertices. Computational results indicate that the hexahedral adaptation procedure is a viable alternative to adaptive tetrahedral schemes.

  17. qPR: An adaptive partial-report procedure based on Bayesian inference

    PubMed Central

    Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin

    2016-01-01

    Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6–8 cue delays or 600–800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations. PMID:27580045
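
    The adaptive loop described in the abstract (grid posterior over the three decay parameters, cue delay chosen to maximize expected information gain, Bayesian update after each response) can be sketched as follows; the grids, prior and parameter ranges are illustrative assumptions, not the published qPR settings.

      import numpy as np

      delays = np.linspace(0.0, 1.0, 9)                  # candidate cue delays (s)
      A0, TAU, AINF = [g.ravel() for g in np.meshgrid(   # parameter grid (assumed)
          np.linspace(0.6, 1.0, 9),                      # initial availability a0
          np.linspace(0.05, 0.6, 9),                     # decay time constant tau
          np.linspace(0.1, 0.5, 9))]                     # asymptote a_inf

      def p_correct(d):                                  # exponential decay model
          return AINF + (A0 - AINF) * np.exp(-d / TAU)

      def entropy(p):
          p = np.clip(p, 1e-12, 1.0)
          return -(p * np.log(p)).sum()

      def next_delay(post):
          """Pick the cue delay with the largest expected information gain."""
          gains = []
          for d in delays:
              pc = p_correct(d)
              pr = float((post * pc).sum())              # predictive P(correct)
              h_correct = entropy(post * pc / pr)
              h_wrong = entropy(post * (1.0 - pc) / (1.0 - pr))
              gains.append(entropy(post) - (pr * h_correct + (1 - pr) * h_wrong))
          return delays[int(np.argmax(gains))]

      def update(post, d, correct):                      # Bayes rule after response
          like = p_correct(d) if correct else 1.0 - p_correct(d)
          post = post * like
          return post / post.sum()

      post = np.full(A0.size, 1.0 / A0.size)             # uniform prior
      d = next_delay(post); post = update(post, d, correct=True)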

  18. A global initiative to refine acute inhalation studies through the use of 'evident toxicity' as an endpoint: Towards adoption of the fixed concentration procedure.

    PubMed

    Sewell, Fiona; Ragan, Ian; Marczylo, Tim; Anderson, Brian; Braun, Anne; Casey, Warren; Dennison, Ngaire; Griffiths, David; Guest, Robert; Holmes, Tom; van Huygevoort, Ton; Indans, Ian; Kenny, Terry; Kojima, Hajime; Lee, Kyuhong; Prieto, Pilar; Smith, Paul; Smedley, Jason; Stokes, William S; Wnorowski, Gary; Horgan, Graham

    2015-12-01

    Acute inhalation studies are conducted in animals as part of chemical hazard identification and characterisation, including for classification and labelling purposes. Current accepted methods use death as an endpoint (OECD TG403 and TG436), whereas the fixed concentration procedure (FCP) (draft OECD TG433) uses fewer animals and replaces lethality as an endpoint with 'evident toxicity.' Evident toxicity is defined as clear signs of toxicity that predict exposure to the next highest concentration will cause severe toxicity or death in most animals. A global initiative including 20 organisations, led by the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs) has shared data on the clinical signs recorded during acute inhalation studies for 172 substances (primarily dusts or mists) with the aim of making evident toxicity more objective and transferable between laboratories. Pairs of studies (5 male or 5 female rats) with at least a two-fold change in concentration were analysed to determine if there are any signs at the lower dose that could have predicted severe toxicity or death at the higher concentration. The results show that signs such as body weight loss (>10% pre-dosing weight), irregular respiration, tremors and hypoactivity, seen at least once in at least one animal after the day of dosing are highly predictive (positive predictive value > 90%) of severe toxicity or death at the next highest concentration. The working group has used these data to propose changes to TG433 that incorporate a clear indication of the clinical signs that define evident toxicity.

  19. Individual Differences and Test Administration Procedures: A Comparison of Fixed-Item, Computerized-Adaptive, and Self-Adapted Testing.

    ERIC Educational Resources Information Center

    Vispoel, Walter P.; And Others

    1994-01-01

    Vocabulary fixed-item (FIT), computerized-adaptive (CAT), and self-adapted (SAT) tests were compared with 121 college students. CAT was more precise and efficient than SAT, which was more precise and efficient than FIT. SAT also yielded higher ability estimates for individuals with lower verbal self-concepts. (SLD)

  20. Conformal refinement of unstructured quadrilateral meshes

    SciTech Connect

    Garimella, Rao

    2009-01-01

    We present a multilevel adaptive refinement technique for unstructured quadrilateral meshes in which the mesh is kept conformal at all times. This means that the refined mesh, like the original, is formed of only quadrilateral elements that intersect strictly along edges or at vertices, i.e., vertices of one quadrilateral element do not lie in an edge of another quadrilateral. Elements are refined using templates based on 1:3 refinement of edges. We demonstrate that by careful design of the refinement and coarsening strategy, we can maintain high quality elements in the refined mesh. We demonstrate the method on a number of examples with dynamically changing refinement regions.

  1. Procedures for Computing Transonic Flows for Control of Adaptive Wind Tunnels. Ph.D. Thesis - Technische Univ., Berlin, Mar. 1986

    NASA Technical Reports Server (NTRS)

    Rebstock, Rainer

    1987-01-01

    Numerical methods are developed for control of three dimensional adaptive test sections. The physical properties of the design problem occurring in the external field computation are analyzed, and a design procedure suited for solution of the problem is worked out. To do this, the desired wall shape is determined by stepwise modification of an initial contour. The necessary changes in geometry are determined with the aid of a panel procedure, or, with incident flow near the sonic range, with a transonic small perturbation (TSP) procedure. The designed wall shape, together with the wall deflections set during the tunnel run, are the input to a newly derived one-step formula which immediately yields the adapted wall contour. This is particularly important since the classical iterative adaptation scheme is shown to converge poorly for 3D flows. Experimental results obtained in the adaptive test section with eight flexible walls are presented to demonstrate the potential of the procedure. Finally, a method is described to minimize wall interference in 3D flows by adapting only the top and bottom wind tunnel walls.

  2. An Adaptive Landscape Classification Procedure using Geoinformatics and Artificial Neural Networks

    SciTech Connect

    Coleman, Andre Michael

    2008-06-01

    The Adaptive Landscape Classification Procedure (ALCP), which links the advanced geospatial analysis capabilities of Geographic Information Systems (GISs) and Artificial Neural Networks (ANNs) and particularly Self-Organizing Maps (SOMs), is proposed as a method for establishing and reducing complex data relationships. Its adaptive and evolutionary capability is evaluated for situations where varying types of data can be combined to address different prediction and/or management needs such as hydrologic response, water quality, aquatic habitat, groundwater recharge, land use, instrumentation placement, and forecast scenarios. The research presented here documents and presents favorable results of a procedure that aims to be a powerful and flexible spatial data classifier that fuses the strengths of geoinformatics and the intelligence of SOMs to provide data patterns and spatial information for environmental managers and researchers. This research shows how evaluation and analysis of spatial and/or temporal patterns in the landscape can provide insight into complex ecological, hydrological, climatic, and other natural and anthropogenic-influenced processes. Certainly, environmental management and research within heterogeneous watersheds provide challenges for consistent evaluation and understanding of system functions. For instance, watersheds over a range of scales are likely to exhibit varying levels of diversity in their characteristics of climate, hydrology, physiography, ecology, and anthropogenic influence. Furthermore, it has become evident that understanding and analyzing these diverse systems can be difficult not only because of varying natural characteristics, but also because of the availability, quality, and variability of spatial and temporal data. Developments in geospatial technologies, however, are providing a wide range of relevant data, and in many cases, at a high temporal and spatial resolution. Such data resources can take the form of high
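
    For readers unfamiliar with the SOM machinery at the heart of the ALCP, a bare-bones self-organizing-map training loop is sketched below; the grid size, learning-rate schedule and neighborhood function are illustrative assumptions, and no GIS integration is shown.

      import numpy as np

      def train_som(data, grid=(6, 6), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
          """Classical on-line SOM: pull the best-matching unit and its
          neighborhood toward each randomly drawn sample."""
          rng = np.random.default_rng(seed)
          w = rng.random((grid[0], grid[1], data.shape[1]))
          gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
          for t in range(iters):
              x = data[rng.integers(len(data))]
              d2 = ((w - x) ** 2).sum(axis=2)
              by, bx = np.unravel_index(np.argmin(d2), d2.shape)  # best unit
              frac = 1.0 - t / iters                              # decay schedules
              lr, sigma = lr0 * frac, sigma0 * frac + 0.5
              h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma**2))
              w += lr * h[..., None] * (x - w)
          return w

      codebook = train_som(np.random.default_rng(1).random((200, 5)))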

  3. Adaptive correction procedure for TVL1 image deblurring under impulse noise

    NASA Astrophysics Data System (ADS)

    Bai, Minru; Zhang, Xiongjun; Shao, Qianqian

    2016-08-01

    For the problem of image restoration of observed images corrupted by blur and impulse noise, the widely used TVL1 model may deviate from both the data-acquisition model and the prior model, especially for high noise levels. In order to seek a solution of high recovery quality beyond the reach of the TVL1 model, we propose an adaptive correction procedure for TVL1 image deblurring under impulse noise. Then, a proximal alternating direction method of multipliers (ADMM) is presented to solve the corrected TVL1 model and its convergence is also established under very mild conditions. It is verified by numerical experiments that our proposed approach outperforms the TVL1 model in terms of signal-to-noise ratio (SNR) values and visual quality, especially for high noise levels: it can handle salt-and-pepper noise as high as 90% and random-valued noise as high as 70%. In addition, a comparison with a state-of-the-art method, the two-phase method, demonstrates the superiority of the proposed approach.

  4. Coupling a local adaptive grid refinement technique with an interface sharpening scheme for the simulation of two-phase flow and free-surface flows using VOF methodology

    NASA Astrophysics Data System (ADS)

    Malgarinos, Ilias; Nikolopoulos, Nikolaos; Gavaises, Manolis

    2015-11-01

    This study presents the implementation of an interface sharpening scheme on the basis of the Volume of Fluid (VOF) method, as well as its application in a number of theoretical and real cases usually modelled in the literature. More specifically, the solution of an additional sharpening equation along with the standard VOF model equations is proposed, offering the advantage of "restraining" interface numerical diffusion, while also keeping a quite smooth induced velocity field around the interface. This sharpening equation is solved right after volume fraction advection; however, a novel method for its coupling with the momentum equation has been applied in order to save computational time. The advantages of the proposed sharpening scheme are that (a) it is mass conservative, so its application does not compromise one of the most important benefits of the VOF method, and (b) it can be used on coarser grids, since the suppression of numerical diffusion is grid independent. The coupling of the solved equation with an adaptive local grid refinement technique is used to further decrease computational time, while keeping high levels of accuracy in the area of maximum interest (the interface). The numerical algorithm is initially tested against two theoretical benchmark cases for interface tracking methodologies, followed by its validation for the case of a free-falling water droplet accelerated by gravity, as well as the normal impingement of a liquid droplet onto a flat substrate. Results indicate that the coupling of the interface sharpening equation with the HRIC discretization scheme used for the volume fraction flux term not only decreases the interface numerical diffusion, but also allows the induced velocity field to be less perturbed by spurious velocities across the liquid-gas interface. With the use of the proposed algorithmic flow path, coarser grids can replace finer ones at the slight expense of accuracy.

  5. 3D ADAPTIVE MESH REFINEMENT SIMULATIONS OF THE GAS CLOUD G2 BORN WITHIN THE DISKS OF YOUNG STARS IN THE GALACTIC CENTER

    SciTech Connect

    Schartmann, M.; Ballone, A.; Burkert, A.; Gillessen, S.; Genzel, R.; Pfuhl, O.; Eisenhauer, F.; Plewa, P. M.; Ott, T.; George, E. M.; Habibi, M.

    2015-10-01

    The dusty, ionized gas cloud G2 is currently passing the massive black hole in the Galactic Center at a distance of roughly 2400 Schwarzschild radii. We explore the possibility of a starting point of the cloud within the disks of young stars. We make use of the large amount of new observations in order to put constraints on G2's origin. Interpreting the observations as a diffuse cloud of gas, we employ three-dimensional hydrodynamical adaptive mesh refinement (AMR) simulations with the PLUTO code and do a detailed comparison with observational data. The simulations presented in this work update our previously obtained results in multiple ways: (1) high-resolution three-dimensional hydrodynamical AMR simulations are used, (2) the cloud follows the updated orbit based on the Brackett-γ data, (3) a detailed comparison to the observed high-quality position–velocity (PV) diagrams and the evolution of the total Brackett-γ luminosity is done. We concentrate on two unsolved problems of the diffuse cloud scenario: the unphysical formation epoch only shortly before the first detection and the too steep Brackett-γ light curve obtained in simulations, whereas the observations indicate a constant Brackett-γ luminosity between 2004 and 2013. For a given atmosphere and cloud mass, we find a consistent model that can explain both the observed Brackett-γ light curve and the PV diagrams of all epochs. Assuming initial pressure equilibrium with the atmosphere, this can be reached for a starting date earlier than roughly 1900, which is close to apocenter and well within the disks of young stars.

  6. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…

  7. A fast tree-based method for estimating column densities in adaptive mesh refinement codes. Influence of UV radiation field on the structure of molecular clouds

    NASA Astrophysics Data System (ADS)

    Valdivia, Valeska; Hennebelle, Patrick

    2014-11-01

    Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled and their interplay influences the physical and chemical properties of gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims: Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of the UV screening on the dynamics and on the statistical properties of molecular clouds. Methods: We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on the clump formation. Results: We find that the accuracy for the extinction of the tree-based method is better than 10%, while the relative error for the column density can be much larger. We describe the implementation of a method based on precalculating the geometrical terms that noticeably reduces the calculation time. To study the influence of the screening on the statistical properties of molecular clouds we present the probability distribution function of gas and the associated temperature per density bin and the mass spectra for different density thresholds. Conclusions: The tree-based method is fast and accurate enough to be used during numerical simulations since no communication is needed between CPUs when using a fully threaded tree. It is thus well suited to parallel computing. We show that the screening for far UV radiation mainly affects the dense gas, thereby favoring low temperatures and affecting the fragmentation. We show that when we include the screening, more structures are formed with higher densities in comparison to the case that does not include this effect. We

  8. Adaptive kernel independent component analysis and UV spectrometry applied to characterize the procedure for processing prepared rhubarb roots.

    PubMed

    Wang, Guoqing; Hou, Zhenyu; Peng, Yang; Wang, Yanjun; Sun, Xiaoli; Sun, Yu-an

    2011-11-01

    By determination of the number of absorptive chemical components (ACCs) in mixtures using median absolute deviation (MAD) analysis and extraction of spectral profiles of ACCs using kernel independent component analysis (KICA), an adaptive KICA (AKICA) algorithm was proposed. The proposed AKICA algorithm was used to characterize the procedure for processing prepared rhubarb roots by resolution of the measured mixed raw UV spectra of the rhubarb samples that were collected at different steaming intervals. The results show that the spectral features of ACCs in the mixtures can be directly estimated without chemical and physical pre-separation and other prior information. The estimated three independent components (ICs) represent different chemical components in the mixtures, which are mainly polysaccharides (IC1), tannin (IC2), and anthraquinone glycosides (IC3). The variations of the relative concentrations of the ICs can account for the chemical and physical changes during the processing procedure: IC1 increases significantly before the first 5 h, and is nearly invariant after 6 h; IC2 has no significant changes or is slightly decreased during the processing procedure; IC3 decreases significantly before the first 5 h and decreases slightly after 6 h. The changes of IC1 can explain why the colour became black and darkened during the processing procedure, and the changes of IC3 can explain why the processing procedure can reduce the bitter and dry taste of the rhubarb roots. The endpoint of the processing procedure can be determined as 5-6 h, when the increasing or decreasing trends of the estimated ICs are insignificant. The AKICA-UV method provides an alternative approach for the characterization of the processing procedure of rhubarb roots preparation, and provides a novel way for determination of the endpoint of the traditional Chinese medicine (TCM) processing procedure by inspection of the change trends of the ICs.
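
    As a rough, hedged stand-in for the AKICA step (the paper uses kernel ICA with a MAD-based estimate of the number of components; ordinary FastICA from scikit-learn is substituted here by assumption), mixed UV spectra can be decomposed as follows:

      import numpy as np
      from sklearn.decomposition import FastICA

      def resolve_spectra(mixed, n_components=3):
          """mixed: array (n_samples, n_wavelengths) of raw UV spectra taken
          at successive steaming intervals."""
          ica = FastICA(n_components=n_components, random_state=0)
          scores = ica.fit_transform(mixed)    # relative IC trends over time
          profiles = ica.components_           # spectral profile of each IC
          return scores, profiles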

  9. Parallel Adaptive Multi-Mechanics Simulations using Diablo

    SciTech Connect

    Parsons, D; Solberg, J

    2004-12-03

    Coupled multi-mechanics simulations (such as thermal-stress and fluid-structure interaction problems) are of substantial interest to engineering analysts. In addition, adaptive mesh refinement techniques present an attractive alternative to current mesh generation procedures and provide quantitative error bounds that can be used for model verification. This paper discusses spatially adaptive multi-mechanics implicit simulations using the Diablo computer code. (U)

  10. Hirshfeld atom refinement.

    PubMed

    Capelli, Silvia C; Bürgi, Hans-Beat; Dittrich, Birger; Grabowsky, Simon; Jayatilaka, Dylan

    2014-09-01

    Hirshfeld atom refinement (HAR) is a method which determines structural parameters from single-crystal X-ray diffraction data by using an aspherical atom partitioning of tailor-made ab initio quantum mechanical molecular electron densities without any further approximation. Here the original HAR method is extended by implementing an iterative procedure of successive cycles of electron density calculations, Hirshfeld atom scattering factor calculations and structural least-squares refinements, repeated until convergence. The importance of this iterative procedure is illustrated via the example of crystalline ammonia. The new HAR method is then applied to X-ray diffraction data of the dipeptide Gly-l-Ala measured at 12, 50, 100, 150, 220 and 295 K, using Hartree-Fock and BLYP density functional theory electron densities and three different basis sets. All positions and anisotropic displacement parameters (ADPs) are freely refined without constraints or restraints - even those for hydrogen atoms. The results are systematically compared with those from neutron diffraction experiments at the temperatures 12, 50, 150 and 295 K. Although non-hydrogen-atom ADPs differ by up to three combined standard uncertainties (csu's), all other structural parameters agree within less than 2 csu's. Using our best calculations (BLYP/cc-pVTZ, recommended for organic molecules), the accuracy of determining bond lengths involving hydrogen atoms from HAR is better than 0.009 Å for temperatures of 150 K or below; for hydrogen-atom ADPs it is better than 0.006 Å² as judged from the mean absolute X-ray minus neutron differences. These results are among the best ever obtained. Remarkably, the precision of determining bond lengths and ADPs for the hydrogen atoms from the HAR procedure is comparable with that from the neutron measurements - an outcome which is obtained with a routinely achievable resolution of the X-ray data of 0.65 Å.

  11. A Minimax Sequential Procedure in the Context of Computerized Adaptive Mastery Testing.

    ERIC Educational Resources Information Center

    Vos, Hans J.

    The purpose of this paper is to derive optimal rules for variable-length mastery tests in case three mastery classification decisions (nonmastery, partial mastery, and mastery) are distinguished. In a variable-length or adaptive mastery test, the decision is to classify a subject as a master, a partial master, a nonmaster, or continuing sampling…

  12. Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure

    USGS Publications Warehouse

    Salehi, M.; Smith, D.R.

    2005-01-01

    Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such a population, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took the value either 0 or 1 (e.g., indicating presence and absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. © 2005 American Statistical Association and the International Biometric Society.
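
    A toy illustration of the neighborhood-free adaptive idea (assumptions throughout, and without the Murthy-estimator machinery): a second-stage sample is added only when the first-stage simple random sample detects the rare event, so no neighborhoods or edge units are ever sampled.

      import numpy as np

      def two_stage_sample(presence, n1=20, n2=40, seed=0):
          """presence: 0/1 array marking units where the rare event occurs."""
          rng = np.random.default_rng(seed)
          stage1 = rng.choice(presence.size, size=n1, replace=False)
          if presence[stage1].any():             # observed values trigger stage 2
              rest = np.setdiff1d(np.arange(presence.size), stage1)
              stage2 = rng.choice(rest, size=n2, replace=False)
              return np.concatenate([stage1, stage2])
          return stage1

      pop = np.zeros(1000, dtype=int); pop[200:215] = 1   # one rare cluster
      print(two_stage_sample(pop).size)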

  13. Multidisciplinary Procedures for Designing Housing Adaptations for People with Mobility Disabilities.

    PubMed

    Sukkay, Sasicha

    2016-01-01

    Based on a 2013 statistic published by the Thai with Disability Foundation, five percent of Thailand's population are disabled people. Six hundred thousand of them have a mobility disability, and the number is increasing every year. To support them, the Thai government has implemented a number of disability laws and policies. One of the policies is to improve disabled people's quality of life by adapting their houses to facilitate their activities. However, the policy has not been fully realized yet; there is still no specific guideline for housing adaptation for people with disabilities. This study is an attempt to address the lack of standardized criteria for such adaptation by developing a number of effective ones. Our development had three objectives: first, to identify the body functioning of a group of people with mobility disability according to the International Classification of Functioning (ICF) concept; second, to perform post-occupancy evaluation of this group and their houses; and third, with the collected data, to have a group of multidisciplinary experts cooperatively develop criteria for housing adaptation. The major findings were that room dimensions and furniture materials had a real impact on accessibility, and that the toilet and the bedroom were the most difficult areas to access. PMID:27534326

  14. Alpha-Stratified Multistage Computerized Adaptive Testing with beta Blocking.

    ERIC Educational Resources Information Center

    Chang, Hua-Hua; Qian, Jiahe; Yang, Zhiliang

    2001-01-01

    Proposed a refinement, based on the stratification of items developed by D. Weiss (1973), of the computerized adaptive testing item selection procedure of H. Chang and Z. Ying (1999). Simulation studies using an item bank from the Graduate Record Examination show the benefits of the new procedure. (SLD)

  15. Dissociating proportion congruent and conflict adaptation effects in a Simon-Stroop procedure.

    PubMed

    Torres-Quesada, Maryem; Funes, Maria Jesús; Lupiáñez, Juan

    2013-02-01

    Proportion congruent and conflict adaptation are two well-known effects associated with cognitive control. A critical open question is whether they reflect the same or separate cognitive control mechanisms. In this experiment, in a training phase we introduced a proportion congruency manipulation for one conflict type (i.e. Simon), whereas in pre-training and post-training phases two conflict types (e.g. Simon and Spatial Stroop) were displayed with the same incongruent-to-congruent ratio. The results supported the sustained nature of the proportion congruent effect, as it transferred from the training to the post-training phase. Furthermore, this transfer generalized to both conflict types. By contrast, the conflict adaptation effect was specific to conflict type, as it was only observed when the same conflict type (either Simon or Stroop) was presented on two consecutive trials (no effect was observed on conflict type alternation trials). Results are interpreted as supporting the distinction between reactive and proactive control mechanisms.

  16. Detecting DIF for Polytomously Scored Items: An Adaptation of the SIBTEST Procedure. Research Report.

    ERIC Educational Resources Information Center

    Chang, Hua-Hua; And Others

    Recently, R. Shealy and W. Stout (1993) proposed a procedure for detecting differential item functioning (DIF) called SIBTEST. Current versions of SIBTEST can only be used for dichotomously scored items, but this paper presents an extension to handle polytomous items. The paper presents: (1) a discussion of an appropriate definition of DIF for…

  17. Bayesian Procedures for Identifying Aberrant Response-Time Patterns in Adaptive Testing

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Guo, Fanmin

    2008-01-01

    In order to identify aberrant response-time patterns on educational and psychological tests, it is important to be able to separate the speed at which the test taker operates from the time the items require. A lognormal model for response times with this feature was used to derive a Bayesian procedure for detecting aberrant response times.…

  18. EEG-Based BCI System Using Adaptive Features Extraction and Classification Procedures

    PubMed Central

    Mondini, Valeria; Mangia, Anna Lisa; Cappello, Angelo

    2016-01-01

    Motor imagery is a common control strategy in EEG-based brain-computer interfaces (BCIs). However, voluntary control of sensorimotor (SMR) rhythms by imagining a movement can be skilful and unintuitive and usually requires a varying amount of user training. To boost the training process, a whole class of BCI systems have been proposed, providing feedback as early as possible while continuously adapting the underlying classifier model. The present work describes a cue-paced, EEG-based BCI system using motor imagery that falls within the category of the previously mentioned ones. Specifically, our adaptive strategy includes a simple scheme based on a common spatial pattern (CSP) method and support vector machine (SVM) classification. The system's efficacy was proved by online testing on 10 healthy participants. In addition, we suggest some features we implemented to improve a system's “flexibility” and “customizability,” namely, (i) a flexible training session, (ii) an unbalancing in the training conditions, and (iii) the use of adaptive thresholds when giving feedback. PMID:27635129

  2. Designing experimental setup and procedures for studying alpha-particle-induced adaptive response in zebrafish embryos in vivo

    NASA Astrophysics Data System (ADS)

    Choi, V. W. Y.; Lam, R. K. K.; Chong, E. Y. W.; Cheng, S. H.; Yu, K. N.

    2010-03-01

    The present work was devoted to designing the experimental setup and the associated procedures for alpha-particle-induced adaptive response in zebrafish embryos in vivo. Thin PADC films with a thickness of 16 μm were fabricated and employed as support substrates for holding dechorionated zebrafish embryos for alpha-particle irradiation from the bottom through the films. Embryos were collected within 15 min of the start of the light photoperiod, and were then incubated and dechorionated at 4 h post fertilization (hpf). They were then irradiated at 5 hpf by alpha particles using a planar ²⁴¹Am source with an activity of 0.1151 μCi for 24 s (priming dose), and subsequently at 10 hpf using the same source for 240 s (challenging dose). The levels of apoptosis in irradiated zebrafish embryos at 24 hpf were quantified through staining with the vital dye acridine orange, followed by counting the stained cells under a fluorescent microscope. The results revealed the presence of the adaptive response in zebrafish embryos in vivo, and demonstrated the feasibility of the adopted experimental setup and procedures.

  3. Flexible design of two-stage adaptive procedures for phase III clinical trials.

    PubMed

    Koyama, Tatsuki

    2007-07-01

    The recent popularity of two-stage adaptive designs has fueled a number of proposals for their use in phase III clinical trials. Many of these designs assign certain restrictive functional forms to the design elements of stage 2, such as the sample size, critical value and conditional power functions. We propose a more flexible method of design without imposing any particular functional forms on these design elements. Our methodology permits specification of a design based on either conditional or unconditional characteristics, and allows accommodation of a sample size limit. Furthermore, we show how to compute the P value, confidence interval and a reasonable point estimate for any design that can be placed under the proposed framework. PMID:17307399

  4. Adaptation of the Unterzaucher procedure for determination of oxygen-18 in organic substances

    SciTech Connect

    Santrock, J.; Hayes, J.M.

    1987-01-01

    A method for the preparation of carbon dioxide from complex organic material for oxygen isotopic analysis is described. A commercial elemental analyzer has been modified so that oxygen contained in the organic material is quantitatively converted to carbon dioxide by the Schuetze-Unterzaucher technique, chromatographically purified, and transferred to a sample container for subsequent analysis by isotope ratio mass spectrometry. The organic sample is pyrolyzed, the products of pyrolysis are equilibrated with elemental carbon at 1060 °C to produce CO, and the CO is oxidized to CO₂ by I₂O₅. The details of these processes are considered, and a quantitative model is developed to allow correction for contamination of the carbon dioxide oxygen pool by an oxygen blank, oxygen from previous samples (memory), and oxygen from iodine pentoxide. Procedures for determination of the parameters used in the mathematical correction and routine application of the model to isotopic analysis are outlined. At natural abundance, the standard deviation for determination of the fractional abundance of oxygen-18 in a sample of organic material is 2 × 10⁻⁷ (equivalent to 0.1‰). The detection limit for ¹⁸O as a tracer in biological materials is better than 1 atom excess per 10⁶ atoms total O. Analyses of independently established standards show that results obtained by the mathematical correction procedure are accurate and allow determination of abundances of ¹⁸O in the sucrose standards prepared by Hardcastle and Friedman.

  5. Auto-adaptive finite element meshes

    NASA Technical Reports Server (NTRS)

    Richter, Roland; Leyland, Penelope

    1995-01-01

    Accurate capturing of discontinuities within compressible flow computations is achieved by coupling a suitable solver with an automatic adaptive mesh algorithm for unstructured triangular meshes. The mesh adaptation procedures developed rely on non-hierarchical dynamical local refinement/derefinement techniques, which hence enable structural optimization as well as geometrical optimization. The methods described are applied to a number of the ICASE test cases, which are particularly interesting for unsteady flow simulations.

  6. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. The CDAWEB and SPDF data repositories were then queried on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  7. A formal protocol test procedure for the Survivable Adaptable Fiber Optic Embedded Network (SAFENET)

    NASA Astrophysics Data System (ADS)

    High, Wayne

    1993-03-01

    This thesis focuses upon a new method for verifying the correct operation of a complex, high speed fiber optic communication network. These networks are of growing importance to the military because of their increased connectivity, survivability, and reconfigurability. With the introduction and increased dependence on sophisticated software and protocols, it is essential that their operation be correct. Because of the speed and complexity of fiber optic networks being designed today, they are becoming increasingly difficult to test. Previously, testing was accomplished by application of conformance test methods which had little connection with an implementation's specification. The major goal of conformance testing is to ensure that the implementation of a profile is consistent with its specification. Formal specification is needed to ensure that the implementation performs its intended operations while exhibiting desirable behaviors. The new conformance test method presented is based upon the System of Communicating Machine model which uses a formal protocol specification to generate a test sequence. The major contribution of this thesis is the application of the System of Communicating Machine model to formal profile specifications of the Survivable Adaptable Fiber Optic Embedded Network (SAFENET) standard which results in the derivation of test sequences for a SAFENET profile. The results applying this new method to SAFENET's OSI and Lightweight profiles are presented.

  8. PHYCAA+: an optimized, adaptive procedure for measuring and controlling physiological noise in BOLD fMRI.

    PubMed

    Churchill, Nathan W; Strother, Stephen C

    2013-11-15

    The presence of physiological noise in functional MRI can greatly limit the sensitivity and accuracy of BOLD signal measurements, and produce significant false positives. There are two main types of physiological confounds: (1) high-variance signal in non-neuronal tissues of the brain including vascular tracts, sinuses and ventricles, and (2) physiological noise components which extend into gray matter tissue. These physiological effects may also be partially coupled with stimuli (and thus the BOLD response). To address these issues, we have developed PHYCAA+, a significantly improved version of the PHYCAA algorithm (Churchill et al., 2011) that (1) down-weights the variance of voxels in probable non-neuronal tissue, and (2) identifies the multivariate physiological noise subspace in gray matter that is linked to non-neuronal tissue. This model estimates physiological noise directly from EPI data, without requiring external measures of heartbeat and respiration, or manual selection of physiological components. The PHYCAA+ model significantly improves the prediction accuracy and reproducibility of single-subject analyses, compared to PHYCAA and a number of commonly-used physiological correction algorithms. Individual subject denoising with PHYCAA+ is independently validated by showing that it consistently increased between-subject activation overlap, and minimized false-positive signal in non gray-matter loci. The results are demonstrated for both block and fast single-event task designs, applied to standard univariate and adaptive multivariate analysis models.

  9. Dynamic Load Balancing for Adaptive Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Saini, Subhash (Technical Monitor)

    1998-01-01

    Dynamic mesh adaptation on unstructured grids is a powerful tool for computing unsteady three-dimensional problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture phenomena of interest, such procedures make standard computational methods more cost effective. Highly refined meshes are required to accurately capture shock waves, contact discontinuities, vortices, and shear layers in fluid flow problems. Adaptive meshes have also proved to be useful in several other areas of computational science and engineering like computer vision and graphics, semiconductor device modeling, and structural mechanics. Local mesh adaptation provides the opportunity to obtain solutions that are comparable to those obtained on globally-refined grids but at a much lower cost. Additional information is contained in the original extended abstract.

  11. Coloured Petri Net Refinement Specification and Correctness Proof with Coq

    NASA Technical Reports Server (NTRS)

    Choppy, Christine; Mayero, Micaela; Petrucci, Laure

    2009-01-01

    In this work, we address the formalisation in COQ of the refinement of symmetric nets, a subclass of coloured Petri nets. We first provide a formalisation of the net models, and of their type refinement, in COQ. Then the COQ proof assistant is used to prove the refinement correctness lemma. An example adapted from a protocol case study illustrates our work.

  12. Cartesian Off-Body Grid Adaption for Viscous Time-Accurate Flow Simulation

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2011-01-01

    An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
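
    The first refinement sensor named above, the undivided second difference of a solution variable, is easy to state concretely; a one-dimensional sketch (the tolerance and test profile are assumptions) follows:

      import numpy as np

      def flag_cells(u, tol=0.01):
          """u: cell values on a uniform 1-D grid; returns boolean refine flags."""
          flags = np.zeros(u.size, dtype=bool)
          second = np.abs(u[2:] - 2.0 * u[1:-1] + u[:-2])   # undivided difference
          flags[1:-1] = second > tol
          return flags

      u = np.tanh((np.linspace(0, 1, 101) - 0.5) / 0.02)    # sharp layer at x=0.5
      print(np.nonzero(flag_cells(u))[0])                   # cells near the layer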

  13. Robust Refinement as Implemented in TOPAS

    SciTech Connect

    Stone, K.; Stephens, P

    2010-01-01

    A robust refinement procedure is implemented in the program TOPAS through an iterative reweighting of the data. Examples are given of the procedure as applied to fitting partially overlapped peaks by full and partial models, and also of the structures of ibuprofen and acetaminophen in the presence of unmodeled impurity contributions.
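
    A generic iteratively-reweighted least-squares loop conveys the flavor of such robust refinement; the Cauchy-type weight and tuning constants below are assumptions, not the weighting scheme actually implemented in TOPAS.

      import numpy as np

      def robust_fit(design, y, iters=10):
          """Linear least squares with iterative down-weighting of outliers."""
          w = np.ones_like(y)
          for _ in range(iters):
              sw = np.sqrt(w)
              beta, *_ = np.linalg.lstsq(design * sw[:, None], y * sw, rcond=None)
              r = y - design @ beta
              s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale
              w = 1.0 / (1.0 + (r / (2.5 * s)) ** 2)   # down-weight large residuals
          return beta, w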

  14. Cerebellar cathodal tDCS interferes with recalibration and spatial realignment during prism adaptation procedure in healthy subjects.

    PubMed

    Panico, Francesco; Sagliano, Laura; Grossi, Dario; Trojano, Luigi

    2016-06-01

    The aim of this study is to clarify the specific role of the cerebellum during the prism adaptation procedure (PAP), considering its involvement in early prism exposure (i.e., in the recalibration process) and in the post-exposure phase (i.e., in the after-effect, related to spatial realignment). For this purpose we interfered with cerebellar activity by means of cathodal transcranial direct current stimulation (tDCS), while young healthy individuals were asked to perform a pointing task on a touch screen before, during and after wearing base-left prism glasses. The distance from the target dot in each trial (in pixels) on the horizontal and vertical axes was recorded and served as an index of accuracy. Results on the horizontal axis, which was shifted by the prism glasses, revealed that participants who received cathodal stimulation showed increased rightward deviation from the actual position of the target while wearing prisms and a larger leftward deviation from the target after prism removal. Results on the vertical axis, in which no shift was induced, revealed a general trend in the two groups to improve accuracy through the different phases of the task, and a trend, more visible in cathodally stimulated participants, to worsen accuracy from the first to the last movements in each phase. Data on the horizontal axis confirm that the cerebellum is involved in all stages of the PAP, contributing to the early strategic recalibration process as well as to spatial realignment. On the vertical axis, the improving performance across the different stages of the task and the worsening accuracy within each task phase can be ascribed, respectively, to a learning process and to task-related fatigue. PMID:27031676

  15. Biocatalysis in Oil Refining

    SciTech Connect

    Borole, Abhijeet P; Ramirez-Corredores, M. M.

    2007-01-01

    Biocatalysis in Oil Refining focuses on petroleum refining bioprocesses, establishing a connection between science and technology. The microorganisms and biomolecules examined for biocatalytic purposes in oil refining processes are thoroughly detailed. Terminology used by biologists, chemists and engineers is brought into a common language, aiding the understanding of complex biological-chemical-engineering issues. Problems to be addressed by future R&D activities and by new technologies are described and summarized in the last chapter.

  16. Adaptive Meshing Techniques for Viscous Flow Calculations on Mixed Element Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1997-01-01

    An adaptive refinement strategy based on hierarchical element subdivision is formulated and implemented for meshes containing arbitrary mixtures of tetrahedra, hexahedra, prisms and pyramids. Special attention is given to keeping memory overheads as low as possible. This procedure is coupled with an algebraic multigrid flow solver which operates on mixed-element meshes. Inviscid as well as viscous flows are computed on adaptively refined tetrahedral, hexahedral, and hybrid meshes. The efficiency of the method is demonstrated by generating an adapted hexahedral mesh containing 3 million vertices on a relatively inexpensive workstation.

  17. A procedure for weighted summation of the derivatives of reflection coefficients in adaptive Schur filter with application to fault detection in rolling element bearings

    NASA Astrophysics Data System (ADS)

    Makowski, Ryszard; Zimroz, Radoslaw

    2013-07-01

    A procedure for feature extraction using an adaptive Schur filter for damage detection in rolling element bearings is proposed in the paper. Damaged bearings produce impact signals (shocks) related to a local change (loss) of stiffness in the pairs inner/outer race-rolling element. If significant disturbances do not occur (i.e. the signal-to-noise ratio is sufficient), diagnostics is not very complicated and usually envelope analysis is used. Unfortunately, in most industrial examples these impulsive contributions to the vibration are completely masked by noise or other high-energy sources. Moreover, the impulses may have time-varying amplitudes caused by the transmission path, the load, and properties of noise changing in time. Thus, in order to extract the time-varying signal of interest, the solution should be an adaptive one. The proposed approach is based on the normalized exact least-square time-variant lattice filter (adaptive Schur filter). It is characterized by an extremely fast start-up performance, excellent convergence behavior, and fast parameter tracking capability, making this approach interesting. The adaptive Schur filter consists of P sections, estimating, among others, time-varying reflection coefficients (RCs). In this paper it is proposed to use the RCs and their derivatives as diagnostic features. However, it is not convenient to analyze P signals from P sections simultaneously, so a weighted sum of the derivatives of the RCs can be used instead. The key question is how to find the weight values for this summation procedure. The original contributions are: the application of the Schur filter to bearing vibration processing, the proposal of several features that can be used for detection, and the aforementioned procedure of weighted summation of the signals from the sections of the Schur filter. The method of signal processing is well adapted to the analysis of non-stationary time series, so it sounds very promising for the diagnostics of machines working under time-varying load/speed conditions.
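
    Only the weighted-summation step is sketched below (the adaptive Schur lattice itself is omitted); weighting each section's RC derivative by its kurtosis is one plausible impulsiveness criterion, assumed here, and not necessarily the authors' weighting rule.

      import numpy as np
      from scipy.stats import kurtosis

      def weighted_rc_derivative(rc):
          """rc: array (P, T) of time-varying reflection coefficients, one row
          per Schur filter section. Returns a single detection signal."""
          d = np.diff(rc, axis=1)                    # RC derivative per section
          w = np.maximum(kurtosis(d, axis=1), 0.0)   # impulsive sections weigh more
          if w.sum() == 0.0:
              w = np.ones(rc.shape[0])
          return (w / w.sum()) @ d                   # weighted sum across sections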

  18. Algorithm refinement for fluctuating hydrodynamics

    SciTech Connect

    Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.

    2007-07-03

    This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently developed solver for the LLNS equations based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.

  19. Three dimensional automatic refinement method for transient small strain elastoplastic finite element computations

    NASA Astrophysics Data System (ADS)

    Biotteau, E.; Gravouil, A.; Lubrecht, A. A.; Combescure, A.

    2012-01-01

    In this paper, the refinement strategy based on the "Non-Linear Localized Full MultiGrid" solver originally published in Int. J. Numer. Meth. Engng 84(8):947-971 (2010) for 2-D structural problems is extended to 3-D simulations. In this context, some extra information concerning the refinement strategy and the behavior of the error indicators is given. The adaptive strategy is dedicated to the accurate modeling of elastoplastic materials with isotropic hardening in transient dynamics. A multigrid solver with local mesh refinement is used to reduce the amount of computational work needed to achieve an accurate calculation at each time step. The locally refined grids are automatically constructed, depending on the user-prescribed accuracy. The discretization error is estimated by a dedicated error indicator within the multigrid method. In contrast to other adaptive procedures, where grids are erased when new ones are generated, the previous solutions are used recursively to reduce the computing time on the new mesh. Moreover, the adaptive strategy needs no costly coarsening method as the mesh is reassessed at each time step. The multigrid strategy improves the convergence rate of the non-linear solver while ensuring the information transfer between the different meshes. It accounts for the influence of localized non-linearities on the whole structure. All the steps needed to achieve the adaptive strategy are automatically performed within the solver such that the calculation does not depend on user experience. This paper presents three-dimensional results using the adaptive multigrid strategy on elastoplastic structures in transient dynamics and in a linear geometrical framework. Isoparametric cubic elements with energy and plastic work error indicators are used during the calculation.

  20. Laser furnace technology for zone refining

    NASA Technical Reports Server (NTRS)

    Griner, D. B.

    1984-01-01

    A carbon dioxide laser experiment facility is constructed to investigate the problems in using a laser beam to zone refine semiconductor and metal crystals. The hardware includes a computer to control scan mirrors and stepper motors to provide a variety of melt zone patterns. The equipment and its operating procedures are described.

  1. Refinement of herpesvirus B-capsid structure on parallel supercomputers.

    PubMed

    Zhou, Z H; Chiu, W; Haskell, K; Spears, H; Jakana, J; Rixon, F J; Scott, L R

    1998-01-01

    Electron cryomicroscopy and icosahedral reconstruction are used to obtain the three-dimensional structure of the 1250-A-diameter herpesvirus B-capsid. The centers and orientations of particles in focal pairs of 400-kV, spot-scan micrographs are determined and iteratively refined by common-lines-based local and global refinement procedures. We describe the rationale behind choosing shared-memory multiprocessor computers for executing the global refinement, which is the most computationally intensive step in the reconstruction procedure. This refinement has been implemented on three different shared-memory supercomputers. The speedup and efficiency are evaluated by using test data sets with different numbers of particles and processors. Using this parallel refinement program, we refine the herpesvirus B-capsid from 355-particle images to 13-A resolution. The map shows new structural features and interactions of the protein subunits in the three distinct morphological units: penton, hexon, and triplex of this T = 16 icosahedral particle.

  2. A Comparison Of Two Approaches To Modeling Capture Zones At The Site-Scale: Adaptive Mesh Refinement Within A Basin-Scale Model And Site-Scale/Basin-Scale Model Coupling

    NASA Astrophysics Data System (ADS)

    Keating, E. H.; Vesselinov, V. V.

    2001-12-01

    We are evaluating several alternative approaches to the general problem of simulating site-scale flow and transport using fine grid resolution while maintaining consistency with a regional-scale, coarse-grid flow model. In this paper, we use the example of modeling capture zones for water supply wells on the Pajarito Plateau in Northern New Mexico, using the finite-element heat and mass simulator FEHM. We compare two different models: 1) a basin-scale model (~6400 km2) using adaptive mesh refinement to increase grid resolution in the vicinity of the water supply well fields, and 2) a site-scale model (~560 km2) which is coupled to the basin-scale model via specified fluxes along lateral site-scale boundaries. The goals of this study are to estimate capture zones and to determine the robustness of these estimates given uncertainty in the model parameter estimates and fluxes along site-scale boundaries. There are two primary advantages of the site-scale-model approach. It allows us to increase the vertical grid resolution and hence better represent site-scale heterogeneity, and with it we are able to apply a more spatially detailed distribution of recharge on the water table. The primary disadvantages of this approach are difficulties related to 1) transferring basin-model fluxes to lateral site-scale-model boundaries and 2) parameter estimation within the coupled-model framework. Using the parameter estimation code PEST, we calibrated the basin model against the head and flux datasets, estimated fluxes to the lateral boundaries of the site-scale model, and determined their uncertainty. We used these predicted fluxes as lateral boundary conditions in the site-scale model calibration runs. Sensitivity analyses demonstrated that predictions of capture zones using either modeling approach are sensitive to permeability values for a few key hydrostratigraphic units. The uncertainty in some of these key parameters was lower for the basin model than for the site-scale model.

  3. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure.

    PubMed

    Shen, Yi; Dai, Wei; Richards, Virginia M

    2015-03-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by examples of toolbox use. Finally, guidelines and recommendations for parameter configurations are given.
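
    The toolbox itself is MATLAB; the Python sketch below only illustrates the grid-based maximum-likelihood updating at the core of such a procedure (the logistic form, the parameter ranges, and a two-alternative guess rate of 0.5 are assumptions for illustration):

      import numpy as np

      def logistic(x, alpha, beta, lam, gamma=0.5):
          """Psychometric function: threshold alpha, slope beta, lapse rate lam."""
          return gamma + (1.0 - gamma - lam) / (1.0 + np.exp(-beta * (x - alpha)))

      alphas = np.linspace(-10, 10, 61)          # candidate thresholds
      betas = np.logspace(-1, 1, 21)             # candidate slopes
      lams = np.linspace(0.0, 0.1, 5)            # candidate lapse rates
      A, B, L = np.meshgrid(alphas, betas, lams, indexing="ij")

      def update(loglik, x, correct):
          """Fold one trial at stimulus level x into the log-likelihood grid."""
          p = logistic(x, A, B, L)
          loglik = loglik + np.log(p if correct else 1.0 - p)
          i = np.unravel_index(np.argmax(loglik), loglik.shape)
          return loglik, (A[i], B[i], L[i])      # grid and current ML estimate

      loglik = np.zeros_like(A)                  # flat prior in log space
      loglik, est = update(loglik, x=2.0, correct=True)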

  4. Structured programming: Principles, notation, procedure

    NASA Technical Reports Server (NTRS)

    JOST

    1978-01-01

    Structured programs are best represented using a notation which gives a clear representation of the block encapsulation. In this report, a set of symbols which can be used until binding directives are republished is suggested. Structured programming also permits a new procedure for design and testing. Programs can be designed top down; that is, they can start at the highest program plane and penetrate to the lowest plane by step-wise refinements. The testing methodology is also adapted to this procedure. First, the highest program plane is tested, and the programs which are not yet finished in the next lower plane are represented by so-called dummies. They are gradually replaced by the real programs.
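
    A minimal sketch of this top-down procedure (illustrative only, not from the report): the highest plane is written and tested first, while the planes below it are represented by dummies that are later replaced by the real programs:

      def process_order(order):          # highest program plane
          validated = validate(order)
          return ship(validated)

      def validate(order):               # dummy: stands in for the real
          return order                   # validation plane

      def ship(order):                   # dummy: records the call only
          print("would ship:", order)
          return True

      assert process_order({"item": "bolt"})   # the top plane is testable already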

  5. Convection in grain refining

    NASA Technical Reports Server (NTRS)

    Flemings, M. C.; Szekely, J.

    1982-01-01

    The relationship between fluid flow phenomena, nucleation, and grain refinement in solidifying metals, both in the presence and in the absence of a gravitational field, was investigated. The reduction of grain size in hard-to-process melts; the effects of undercooling on structure in solidification processes, including rapid solidification processing; and the control of this undercooling to improve the structures of solidified melts are considered. Grain refining and supercooling, thermal modeling of the solidification process, and heat and fluid flow phenomena in levitated metal droplets are described.

  6. Zone Refining by Laser

    NASA Technical Reports Server (NTRS)

    Griner, D. B.

    1986-01-01

    System developed for studying use of laser beam for zone-refining semiconductors and metals. Specimen scanned with focused CO2 laser beam in such a way that thin zone of molten material moves along specimen and sweeps impurities with it. Zone-melting system comprises microcomputer, laser, electromechanical and optical components for beam control, vacuum chamber that holds specimen, and sensor for determining specimen temperature.

  7. Choices, Frameworks and Refinement

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Islam, Nayeem; Johnson, Ralph; Kougiouris, Panos; Madany, Peter

    1991-01-01

    In this paper we present a method for designing operating systems using object-oriented frameworks. A framework can be refined into subframeworks. Constraints specify the interactions between the subframeworks. We describe how we used object-oriented frameworks to design Choices, an object-oriented operating system.

  8. Adaptive superposition of finite element meshes in linear and nonlinear dynamic analysis

    NASA Astrophysics Data System (ADS)

    Yue, Zhihua

    2005-11-01

    The numerical analysis of transient phenomena in solids, for instance, wave propagation and structural dynamics, is a very important and active area of study in engineering. Despite the current evolutionary state of modern computer hardware, practical analysis of large scale, nonlinear transient problems requires the use of adaptive methods where computational resources are locally allocated according to the interpolation requirements of the solution form. Adaptive analysis of transient problems involves obtaining solutions at many different time steps, each of which requires a sequence of adaptive meshes. Therefore, the execution speed of the adaptive algorithm is of paramount importance. In addition, transient problems require that the solution must be passed from one adaptive mesh to the next adaptive mesh with a bare minimum of solution-transfer error since this form of error compromises the initial conditions used for the next time step. A new adaptive finite element procedure (s-adaptive) is developed in this study for modeling transient phenomena in both linear elastic solids and nonlinear elastic solids caused by progressive damage. The adaptive procedure automatically updates the time step size and the spatial mesh discretization in transient analysis, achieving the accuracy and the efficiency requirements simultaneously. The novel feature of the s-adaptive procedure is the original use of finite element mesh superposition to produce spatial refinement in transient problems. The use of mesh superposition enables the s-adaptive procedure to completely avoid the need for cumbersome multipoint constraint algorithms and mesh generators, which makes the s-adaptive procedure extremely fast. Moreover, the use of mesh superposition enables the s-adaptive procedure to minimize the solution-transfer error. In a series of different solid mechanics problem types including 2-D and 3-D linear elastic quasi-static problems, 2-D material nonlinear quasi-static problems

  9. Measuring acuity of the approximate number system reliably and validly: the evaluation of an adaptive test procedure

    PubMed Central

    Lindskog, Marcus; Winman, Anders; Juslin, Peter; Poom, Leo

    2013-01-01

    Two studies investigated the reliability and predictive validity of commonly used measures and models of Approximate Number System (ANS) acuity. Study 1 investigated reliability by both an empirical approach and a simulation of maximum obtainable reliability under ideal conditions. Results showed that common measures of the Weber fraction (w) are reliable only when using a substantial number of trials, even under ideal conditions. Study 2 compared different purported measures of ANS acuity with respect to convergent and predictive validity in a within-subjects design and evaluated an adaptive test using the ZEST algorithm. Results showed that the adaptive measure can reduce the number of trials needed to reach acceptable reliability. Only direct tests with non-symbolic numerosity discriminations of stimuli presented simultaneously were related to arithmetic fluency. This correlation remained when controlling for general cognitive ability and perceptual speed. Further, the purported indirect measure of ANS acuity in terms of the Numeric Distance Effect (NDE) was not reliable and showed no sign of predictive validity. The non-symbolic NDE for reaction time was significantly related to direct w estimates in a direction contrary to the expected one. Easier stimuli were found to be more reliable, but only harder (7:8 ratio) stimuli contributed to predictive validity. PMID:23964256
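
    For context, a commonly used model of ANS discrimination (an assumption for illustration; the study compares several such measures) ties the Weber fraction w to the probability of correctly judging which of two numerosities n1, n2 is larger:

      import numpy as np
      from scipy.stats import norm

      def p_correct(n1, n2, w):
          """P(correct) under a standard linear ANS model with Weber fraction w."""
          return norm.cdf(abs(n1 - n2) / (w * np.sqrt(n1**2 + n2**2)))

      print(p_correct(7, 8, 0.2))   # ~0.68: the hard 7:8 ratio sits near threshold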

  11. Constrained Self-adaptive Solutions Procedures for Structure Subject to High Temperature Elastic-plastic Creep Effects

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Tovichakchaikul, S.

    1983-01-01

    This paper will develop a new solution strategy which can handle elastic-plastic-creep problems in an inherently stable manner. This is achieved by introducing a new constrained time stepping algorithm which will enable the solution of creep-initiated pre/postbuckling behavior where indefinite tangent stiffnesses are encountered. Due to the generality of the scheme, both monotone and cyclic loading histories can be handled. The presentation will give a thorough overview of current solution schemes and their shortcomings and the development of constrained time stepping algorithms, as well as illustrating the results of several numerical experiments which benchmark the new procedure.

  12. Curved Mesh Correction And Adaptation Tool to Improve COMPASS Electromagnetic Analyses

    SciTech Connect

    Luo, X.; Shephard, M.; Lee, L.Q.; Ng, C.; Ge, L.; /SLAC

    2011-11-14

    SLAC performs large-scale simulations for next-generation accelerator design using higher-order finite elements. This method requires valid curved meshes and adaptive mesh refinement in complex 3D curved domains to achieve its fast rate of convergence. ITAPS has developed a procedure to address those mesh requirements and enable petascale electromagnetic accelerator simulations by SLAC. The results demonstrate that correct, valid curvilinear meshes not only make the simulation more reliable but also improve computational efficiency by up to 30%. This paper presents a procedure to track moving adaptive mesh refinement in curved domains. The procedure is capable of generating suitable curvilinear meshes to enable large-scale accelerator simulations. It can generate valid curved meshes with substantially fewer elements, improving the computational efficiency and reliability of the COMPASS electromagnetic analyses. Future work will focus on the scalable parallelization of all steps for petascale simulations.

  13. Deconvolution of post-adaptive optics images of faint circumstellar environments by means of the inexact Bregman procedure

    NASA Astrophysics Data System (ADS)

    Benfenati, A.; La Camera, A.; Carbillet, M.

    2016-02-01

    Aims: High-dynamic range images of astrophysical objects present some difficulties in their restoration because of the presence of very bright point-wise sources surrounded by faint and smooth structures. We propose a method that enables the restoration of this kind of image by taking such sources into account and, at the same time, improving the contrast enhancement in the final image. Moreover, the proposed approach can help to detect the position of the bright sources. Methods: The classical variational scheme in the presence of Poisson noise seeks the minimum of a functional composed of the generalized Kullback-Leibler divergence and a regularization functional; the latter is employed to preserve some characteristic of the restored image. The inexact Bregman procedure substitutes the regularization function with its inexact Bregman distance. The proposed scheme allows us to control the level of inexactness arising in the computed solution and permits us to employ an overestimation of the regularization parameter (which balances the trade-off between the Kullback-Leibler divergence and the Bregman distance). This aspect is fundamental, since the estimation of this kind of parameter is very difficult in the presence of Poisson noise. Results: The inexact Bregman procedure is tested on a bright unresolved binary star with a faint circumstellar environment. When the sources' positions are exactly known, this scheme provides us with very satisfactory results. In case of inexact knowledge of the sources' positions, it can in addition give some useful information on the true positions. Finally, the inexact Bregman scheme can also be used when information about the binary star's position concerns a connected region instead of isolated pixels.
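
    Schematically, with data y, imaging operator H, regularizer J, current iterate x_k, and regularization parameter beta (notation assumed for illustration), the scheme replaces the regularizer by its inexact Bregman distance:

      \min_{x \ge 0} \; \mathrm{KL}(Hx; y) + \beta\, D_J^{\varepsilon}(x, x_k),
      \qquad
      \mathrm{KL}(Hx; y) = \sum_i \Big[ y_i \log \frac{y_i}{(Hx)_i} + (Hx)_i - y_i \Big],

    where D_J^{\varepsilon} denotes the inexact Bregman distance of J with inexactness level \varepsilon.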

  14. Comment on "A procedure for the estimation of the numerical uncertainty of CFD calculations based on grid refinement studies" (L. Eça and M. Hoekstra, Journal of Computational Physics 262 (2014) 104-130)

    NASA Astrophysics Data System (ADS)

    Xing, Tao; Stern, Frederick

    2015-11-01

    Eça and Hoekstra [1] proposed a procedure for the estimation of the numerical uncertainty of CFD calculations based on the least squares root (LSR) method. We believe that the LSR method has potential value for providing an extended Richardson-extrapolation solution verification procedure for mixed monotonic and oscillatory or only oscillatory convergent solutions (based on the usual systematic grid-triplet convergence condition R). Current Richardson-extrapolation solution verification procedures [2-7] are restricted to monotonically convergent solutions with 0 < R < 1. Procedures for oscillatory convergence simply use either an uncertainty estimate based on the average of the maximum minus the minimum solutions [8,9] or arbitrarily large factors of safety (FS) [2]. However, in our opinion several issues preclude the usefulness of the presented LSR method; five criticisms follow. The solution verification literature needs technical discussion in order to put the LSR method in context. The LSR method has many options, making it very difficult to follow. Fig. 1 provides a block diagram which summarizes the LSR procedure and options, including some with which we disagree. Compared to the grid-triplet and three-step procedure followed by most solution verification methods (convergence condition followed by error and uncertainty estimates), the LSR method follows a four-grid (minimum) and four-step procedure (error estimate, data range parameter Δϕ, FS, and uncertainty estimate).
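
    For reference, the grid-triplet quantities underlying the three-step Richardson-extrapolation procedures mentioned above, for solutions \phi_1 (fine), \phi_2 (medium), \phi_3 (coarse) and refinement ratio r, are:

      \varepsilon_{21} = \phi_2 - \phi_1, \quad
      \varepsilon_{32} = \phi_3 - \phi_2, \quad
      R = \frac{\varepsilon_{21}}{\varepsilon_{32}}, \quad
      p = \frac{\ln(\varepsilon_{32}/\varepsilon_{21})}{\ln r}, \quad
      \delta_{RE} = \frac{\varepsilon_{21}}{r^{p} - 1},

    with 0 < R < 1 indicating monotonic convergence, p the observed order of accuracy, and \delta_{RE} the Richardson error estimate for the fine-grid solution.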

  15. Computations of Aerodynamic Performance Databases Using Output-Based Refinement

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2009-01-01

    Objectives: handle complex geometry problems; control discretization errors via solution-adaptive mesh refinement; and focus on aerodynamic databases for parametric and optimization studies, which demand (1) accuracy: satisfy prescribed error bounds; (2) robustness and speed: may require over 10^5 mesh generations; and (3) automation: avoid user supervision, obtain "expert meshes" independent of user skill, and run every case adaptively in production settings.

  16. Near-Body Grid Adaption for Overset Grids

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.

  17. Peak procedure performance in young adult and aged rats: acquisition and adaptation to a changing temporal criterion.

    PubMed

    Lejeune, H; Ferrara, A; Soffíe, M; Bronchart, M; Wearden, J H

    1998-08-01

    Twenty-four-month-old and 4-month-old rats were trained on a peak-interval procedure, where the time of reinforcement was varied twice between 20 and 40 sec. Peak times from the old rats were consistently longer than the reinforcement time, whereas those from younger animals tracked the 20- and 40-sec durations more closely. Different measures of performance suggested that the old rats were either (1) systematically misremembering the time of reinforcement or (2) using an internal clock with a substantially greater latency to start and stop timing than the younger animals. Old rats also adjusted more slowly to the first transition from 20 to 40 sec than did the younger ones, but not to later transitions. Correlations between measures derived from within-trial patterns of responding conformed in general to detailed predictions derived from scalar expectancy theory. However, some correlation values more closely resembled those derived from a study of peak-interval performance in humans and a theoretical model developed by Cheng and Westwood (1993), than those obtained in previous work with animals, for reasons that are at present unclear.

  18. Worldwide refining and gas processing directory

    SciTech Connect

    1999-11-01

    Statistics are presented on the following: US refining; Canada refining; Europe refining; Africa refining; Asia refining; Latin American refining; Middle East refining; catalyst manufacturers; consulting firms; engineering and construction; US gas processing; international gas processing; plant maintenance providers; process control and simulation systems; and trade associations.

  19. Dynamic grid refinement for partial differential equations on parallel computers

    NASA Technical Reports Server (NTRS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.

  20. Minimally refined biomass fuel

    DOEpatents

    Pearson, Richard K.; Hirschfeld, Tomas B.

    1984-01-01

    A minimally refined fluid composition, suitable as a fuel mixture and derived from biomass material, is comprised of one or more water-soluble carbohydrates such as sucrose, one or more alcohols having less than four carbons, and water. The carbohydrate provides the fuel source; water solubilizes the carbohydrates; and the alcohol aids in the combustion of the carbohydrate and reduces the viscosity of the carbohydrate/water solution. Because less energy is required to obtain the carbohydrate from the raw biomass than to obtain alcohol, an overall energy savings is realized compared to fuels employing alcohol as the primary fuel.

  1. Image denoising filter based on patch-based difference refinement

    NASA Astrophysics Data System (ADS)

    Park, Sang Wook; Kang, Moon Gi

    2012-06-01

    In the denoising literature, much research based on the nonlocal means (NLM) filter has been done, and there have been many variations and improvements regarding the weight function and parameter optimization. Here, an NLM filter with patch-based difference (PBD) refinement is presented. PBD refinement, which is the weighted average of the PBD values, is performed with respect to the difference images of all the locations in a refinement kernel. With refined and denoised PBD values, a pattern-adaptive smoothing threshold and noise-suppressed NLM filter weights are calculated. Owing to the refinement of the PBD values, the patterns are divided into flat regions and texture regions by comparing the sorted values in the PBD domain to the threshold value including the noise standard deviation. Then, two different smoothing thresholds are utilized for denoising each region, respectively, and the NLM filter is finally applied. Experimental results of the proposed scheme are shown in comparison with several state-of-the-art NLM-based denoising methods.
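
    For context, the plain NLM weighting that the PBD refinement builds on can be sketched as follows (a minimal sketch: border handling is omitted and the paper's refinement of the patch differences is not shown):

      import numpy as np

      def nlm_pixel(img, r, c, search, patch, h):
          """Denoise pixel (r, c) of a 2D array with plain NLM weights."""
          ref = img[r-patch:r+patch+1, c-patch:c+patch+1]
          num = den = 0.0
          for dr in range(-search, search + 1):
              for dc in range(-search, search + 1):
                  rr, cc = r + dr, c + dc
                  cand = img[rr-patch:rr+patch+1, cc-patch:cc+patch+1]
                  d2 = np.mean((ref - cand) ** 2)   # patch-based difference
                  w = np.exp(-d2 / h**2)            # similarity weight
                  num += w * img[rr, cc]
                  den += w
          return num / den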

  2. Refinery Efficiency Improvement

    SciTech Connect

    WRI

    2002-05-15

    Refinery processes that convert heavy oils to lighter distillate fuels require heating for distillation, hydrogen addition or carbon rejection (coking). Efficiency is limited by the formation of insoluble carbon-rich coke deposits. Heat exchangers and other refinery units must be shut down for mechanical coke removal, resulting in a significant loss of output and revenue. When a residuum is heated above the temperature at which pyrolysis occurs (340 C, 650 F), there is typically an induction period before coke formation begins (Magaril and Aksenova 1968, Wiehe 1993). To avoid fouling, refiners often stop heating a residuum before coke formation begins, using arbitrary criteria. In many cases, this heating is stopped sooner than necessary, resulting in less than maximum product yield. Western Research Institute (WRI) has developed innovative Coking Index concepts (patent pending) which can be used for process control by refiners to heat residua to the threshold, but not beyond the point, at which coke formation begins when petroleum residua materials are heated at pyrolysis temperatures (Schabron et al. 2001). The development of this universal predictor solves a long-standing problem in petroleum refining. These Coking Indexes have great potential value in improving the efficiency of distillation processes. The Coking Indexes were found to apply to residua in a universal manner, and the theoretical basis for the indexes has been established (Schabron et al. 2001a, 2001b, 2001c). For the first time, a few simple measurements indicate how close undesired coke formation is on the coke formation induction time line. The Coking Indexes can lead to new process controls that can improve refinery distillation efficiency by several percentage points. Petroleum residua consist of an ordered continuum of solvated polar materials usually referred to as asphaltenes dispersed in a lower polarity solvent phase held together by intermediate polarity materials usually referred to as

  3. Refining Radchem Detectors: Iridium

    NASA Astrophysics Data System (ADS)

    Arnold, C. W.; Bredeweg, T. A.; Vieira, D. J.; Bond, E. M.; Jandel, M.; Rusev, G.; Moody, W. A.; Ullmann, J. L.; Couture, A. J.; Mosby, S.; O'Donnell, J. M.; Haight, R. C.

    2013-10-01

    Accurate determination of neutron fluence is an important diagnostic of nuclear device performance, whether the device is a commercial reactor, a critical assembly or an explosive device. One important method for neutron fluence determination, generally referred to as dosimetry, is based on exploiting various threshold reactions of elements such as iridium. It is possible to infer details about the integrated neutron energy spectrum to which the dosimetry sample or ``radiochemical detector'' was exposed by measuring specific activation products post-irradiation. The ability of radchem detectors like iridium to give accurate neutron fluence measurements is limited by the precision of the cross-sections in the production/destruction network (189Ir-193Ir). The Detector for Advanced Neutron Capture Experiments (DANCE) located at LANSCE is ideal for refining neutron capture cross sections of iridium isotopes. Recent results from a measurement of neutron capture on 193-Ir are promising. Plans to measure other iridium isotopes are underway.

  4. Grain refinement in undercooled metals

    SciTech Connect

    Xiao, J.Z.; Yang, H.; Kui, H.W.

    1998-12-31

    Recently, it was demonstrated that grain refinement in metals can take place through two mechanisms, namely, dynamic nucleation and remelting of initially formed dendrites. In this study, it was found that Ni{sub 99.45}B{sub 0.55} undergoes grain refinement, either by dynamic nucleation or by remelting, depending on the initial bulk undercooling just before crystallization. The nature of the grain refinement process is confirmed by microstructural analysis of the undercooled specimens.

  5. Local Mesh Refinement in the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Wu, Yuhui; Wang, Xiao-Yen; Yang, Vigor

    2000-01-01

    A local mesh refinement procedure for the CE/SE method which does not use an iterative procedure in the treatments of grid-to-grid communications is described. It is shown that a refinement ratio higher than ten can be applied successfully across a single coarse grid/fine grid interface.

  6. Dose refinement. ARAC's role

    SciTech Connect

    Ellis, J. S.; Sullivan, T. J.; Baskett, R. L.

    1998-06-01

    The Atmospheric Release Advisory Capability (ARAC), located at the Lawrence Livermore National Laboratory, has since the late 1970's been involved in assessing consequences from nuclear and other hazardous material releases into the atmosphere. ARAC's primary role has been emergency response. However, after the emergency phase, there is still a significant role for dispersion modeling. This work usually involves refining the source term and, hence, the dose to the affected populations as additional information becomes available in the form of source term estimates (release rates, mix of material, and release geometry) and any measurements from passage of the plume and deposition on the ground. Many of the ARAC responses have been documented elsewhere [1]. Some of the more notable radiological releases in which ARAC has participated in the post-emergency phase have been the 1979 Three Mile Island nuclear power plant (NPP) accident outside Harrisburg, PA, the 1986 Chernobyl NPP accident in the Ukraine, and the 1996 Japan Tokai nuclear processing plant explosion. ARAC has also done post-emergency phase analyses for the 1978 Russian satellite COSMOS 954 reentry and subsequent partial burn-up of its onboard nuclear reactor, which deposited radioactive materials on the ground in Canada; the 1986 uranium hexafluoride spill in Gore, OK; the 1993 Russian Tomsk-7 nuclear waste tank explosion; and lesser releases of mostly tritium. In addition, ARAC has performed a key role in the contingency planning for possible accidental releases during the launch of spacecraft with radioisotope thermoelectric generators (RTGs) on board (i.e. Galileo, Ulysses, Mars-Pathfinder, and Cassini), and routinely exercises with the Federal Radiological Monitoring and Assessment Center (FRMAC) in preparation for offsite consequences of radiological releases from NPPs and nuclear weapon accidents or incidents. Several accident post-emergency phase assessments are discussed in this paper in order to illustrate

  7. Improved procedures for in vitro skin irritation testing of sticky and greasy natural botanicals.

    PubMed

    Molinari, J; Eskes, C; Andres, E; Remoué, N; Sá-Rocha, V M; Hurtado, S P; Barrichello, C

    2013-02-01

    Skin irritation evaluation is an important endpoint for the safety assessment of cosmetic ingredients required by various regulatory authorities for notification and/or import of test substances. The present study was undertaken to investigate possible protocol adaptations of the currently validated in vitro skin irritation test methods based on reconstructed human epidermis (RhE) for the testing of plant extracts and natural botanicals. Due to their specific physico-chemical properties, such as lipophilicity, sticky/buttery-like texture, and waxy/creamy foam characteristics, normal washing procedures can lead to incomplete removal of these materials and/or to mechanical damage to the tissues, resulting in an impaired prediction of the true skin irritation potential of the materials. For this reason, different refined washing procedures were evaluated for their ability to ensure appropriate removal of greasy and sticky substances while not altering the normal responses of the validated RhE test method. Amongst the different procedures evaluated, the use of a 0.1% SDS solution in PBS to remove the sticky and greasy test material prior to the normal washing procedures was found to be the most suitable adaptation, ensuring efficient removal of greasy and sticky in-house controls without affecting the results of the negative control. The predictive capacity of the refined 0.1% SDS washing procedure was investigated by using twelve oily and viscous compounds having known skin irritation effects supported by raw and/or peer-reviewed in vivo data. The normal washing procedure resulted in 8 out of 10 correctly predicted compounds, as compared to 9 out of 10 with the refined washing procedures, showing an increase in the predictive ability of the assay. The refined washing procedure correctly identified all in vivo skin irritant materials, showing the same sensitivity as the normal washing procedures, and further increased the specificity of the assay from 5 to 6 correct

  8. Modern refining and petrochemical equipment

    SciTech Connect

    Pugach, V.V.

    1995-07-01

    Petroleum refining and petroleum chemistry are characterized by a whole set of manufacturing processes and methods, whose application depends on the initial raw material and the final products. Therefore, refining and petrochemical equipment has many different operational principles, design solutions, and materials. The activities of the Russian Petroleum Industry are discussed.

  9. Crystal structure refinement with SHELXL

    SciTech Connect

    Sheldrick, George M.

    2015-01-01

    New features added to the refinement program SHELXL since 2008 are described and explained. The improvements in the crystal structure refinement program SHELXL have been closely coupled with the development and increasing importance of the CIF (Crystallographic Information Framework) format for validating and archiving crystal structures. An important simplification is that now only one file in CIF format (for convenience, referred to simply as ‘a CIF’) containing embedded reflection data and SHELXL instructions is needed for a complete structure archive; the program SHREDCIF can be used to extract the .hkl and .ins files required for further refinement with SHELXL. Recent developments in SHELXL facilitate refinement against neutron diffraction data, the treatment of H atoms, the determination of absolute structure, the input of partial structure factors and the refinement of twinned and disordered structures. SHELXL is available free to academics for the Windows, Linux and Mac OS X operating systems, and is particularly suitable for multiple-core processors.

  10. An Automatic Optical and SAR Image Registration Method Using Iterative Multi-Level and Refinement Model

    NASA Astrophysics Data System (ADS)

    Xu, C.; Sui, H. G.; Li, D. R.; Sun, K. M.; Liu, J. Y.

    2016-06-01

    Automatic image registration is a vital yet challenging task, particularly for multi-sensor remote sensing images. Given the diversity of the data, it is unlikely that a single registration algorithm or a single image feature will work satisfactorily for all applications. Focusing on this issue, the main contribution of this paper is to propose an automatic optical-to-SAR image registration method using an iterative multi-level and refinement model. Firstly, a multi-level strategy of coarse-to-fine registration is presented: visual saliency features are used to acquire a coarse registration, specific area and line features are then used to refine the registration result, and after that sub-pixel matching is applied using a KNN graph. Secondly, an iterative strategy that involves adaptive parameter adjustment for re-extracting and re-matching features is presented. Considering the fact that almost all feature-based registration methods rely on feature extraction results, the iterative strategy improves the robustness of feature matching, and all parameters can be automatically and adaptively adjusted in the iterative procedure. Thirdly, a uniform level set segmentation model for optical and SAR images is presented to segment conjugate features, and the Voronoi diagram is introduced into spectral point matching (VSPM) to further enhance the matching accuracy between the two sets of matching points. Experimental results show that the proposed method can effectively and robustly generate sufficient, reliable point pairs and provide accurate registration.

  11. Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.; Van Meter, James R.

    2005-01-01

    A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.

  12. Progressive refinement: more than a means to overcome limited bandwidth

    NASA Astrophysics Data System (ADS)

    Rosenbaum, René; Schumann, Heidrun

    2009-01-01

    Progressive refinement is commonly understood as a means of solving problems imposed by limited system resources. In this publication, we apply this technology as a novel approach to information presentation and device adaptation. Progressive refinement is able to handle different kinds of data and consists of innovative ideas for overcoming the multiple issues imposed by large data volumes. The key feature is the mature use of multiple incremental previews of the data. This leads to a temporal deskew of the information to be presented and provides a causal flow in terms of a tour-through-the-data. Such a presentation is scalable, leading to a significantly simplified adaptation to the available resources, short response times, and reduced visual clutter. Due to its rather beneficial properties and the feedback we received from first implementations, we state that there is high potential for progressive refinement far beyond its currently addressed application context.

  13. Evaluation of total effective dose due to certain environmentally placed naturally occurring radioactive materials using a procedural adaptation of RESRAD code.

    PubMed

    Beauvais, Z S; Thompson, K H; Kearfott, K J

    2009-07-01

    Due to a recent upward trend in the price of uranium and subsequent increased interest in uranium mining, accurate modeling of baseline dose from environmental sources of radioactivity is of increasing interest. Residual radioactivity model and code (RESRAD) is a program used to model environmental movement and calculate the dose due to the inhalation, ingestion, and exposure to radioactive materials following a placement. This paper presents a novel use of RESRAD for the calculation of dose from non-enhanced, or ancient, naturally occurring radioactive material (NORM). In order to use RESRAD to calculate the total effective dose (TED) due to ancient NORM, a procedural adaptation was developed to negate the effects of time progressive distribution of radioactive materials. A dose due to United States' average concentrations of uranium, actinium, and thorium series radionuclides was then calculated. For adults exposed in a residential setting and assumed to eat significant amounts of food grown in NORM concentrated areas, the annual dose due to national average NORM concentrations was 0.935 mSv y(-1). A set of environmental dose factors were calculated for simple estimation of dose from uranium, thorium, and actinium series radionuclides for various age groups and exposure scenarios as a function of elemental uranium and thorium activity concentrations in groundwater and soil. The values of these factors for uranium were lowest for an adult exposed in an industrial setting: 0.00476 microSv kg Bq(-1) y(-1) for soil and 0.00596 microSv m(3) Bq(-1) y(-1) for water (assuming a 1:1 234U:238U activity ratio in water). The uranium factors were highest for infants exposed in a residential setting and assumed to ingest food grown onsite: 34.8 microSv kg Bq(-1) y(-1) in soil and 13.0 microSv m(3) Bq(-1) y(-1) in water. PMID:19509509
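
    Because the reported factors scale linearly with activity concentration, a dose estimate reduces to a weighted sum. A worked example with the adult-industrial uranium factors quoted above (the concentrations are hypothetical):

      f_soil = 0.00476    # microSv per year per (Bq/kg), adult, industrial
      f_water = 0.00596   # microSv per year per (Bq/m3), adult, industrial
      c_soil = 40.0       # hypothetical soil uranium activity, Bq/kg
      c_water = 10.0      # hypothetical water uranium activity, Bq/m3

      annual_dose = f_soil * c_soil + f_water * c_water   # microSv per year
      print(round(annual_dose, 3))                        # 0.25 microSv per year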

  14. Object-oriented philosophy in designing adaptive finite-element package for 3D elliptic differential equations

    NASA Astrophysics Data System (ADS)

    Zhengyong, R.; Jingtian, T.; Changsheng, L.; Xiao, X.

    2007-12-01

    Although adaptive finite-element (AFE) analysis is attracting more and more attention in scientific and engineering fields, its efficient implementation remains an open problem because of its more complex procedures. In this paper, we propose a clear C++ framework implementation to show the powerful properties of object-oriented philosophy (OOP) in designing such complex adaptive procedures. Using the modular facilities of an OOP language, the whole adaptive system is divided into several separate parts such as mesh generation or refinement, the a posteriori error estimator, the adaptive strategy, and the final post-processing. After proper designs are performed locally on these separate modules, a connected framework for the adaptive procedure is finally formed. Based on the general elliptic differential equation, little effort needs to be added to the adaptive framework to run practical simulations. To show the preferable properties of OOP adaptive design, two numerical examples are tested. The first is the 3D direct-current resistivity problem, in which the power of the framework is shown efficiently, as only small additions are needed. Then, in the second, induced polarization (IP) exploration case, a new adaptive procedure is easily added, which adequately shows the strong extensibility and reusability of the OOP language. Finally, we believe that, based on this modular framework implementation of adaptivity using OOP methodology, more advanced adaptive analysis systems will become available in the future.
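
    The authors' framework is C++; the Python sketch below only illustrates the modular decomposition the abstract describes, with every class and method name invented for illustration:

      class ErrorEstimator:
          def estimate(self, mesh, solution):
              # one a posteriori error value per element
              return [abs(r) for r in solution.residuals(mesh)]

      class Refiner:
          def refine(self, mesh, errors, tol):
              # split only the elements whose error exceeds the tolerance
              return mesh.split([i for i, e in enumerate(errors) if e > tol])

      class AdaptiveSolver:
          """Connects the separate modules into one adaptive loop."""
          def __init__(self, solver, estimator, refiner):
              self.solver, self.estimator, self.refiner = solver, estimator, refiner

          def run(self, mesh, tol, max_cycles=10):
              for _ in range(max_cycles):
                  u = self.solver.solve(mesh)
                  errors = self.estimator.estimate(mesh, u)
                  if max(errors) <= tol:
                      break
                  mesh = self.refiner.refine(mesh, errors, tol)
              return u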

  16. The evolution and refinements of varicocele surgery

    PubMed Central

    Marmar, Joel L

    2016-01-01

    Varicoceles have been recognized in clinical practice for over a century. Originally, repairs were utilized for the management of pain but, since 1952, they have mostly been performed for the treatment of male infertility. However, the diagnosis and treatment of varicoceles were controversial, because the pathophysiology was not clear, the entry criteria of the studies varied among centers, and there were few randomized clinical trials. Nevertheless, clinicians continued developing techniques for the correction of varicoceles, basic scientists continued investigations on the pathophysiology of varicoceles, and new outcome data from prospective randomized trials have appeared in the world's literature. Therefore, this special edition of the Asian Journal of Andrology was proposed to report much of the new information related to varicoceles and, as a specific part of this project, the present article was developed as a comprehensive review of the evolution and refinements of the corrective procedures. PMID:26732111

  17. Monitoring, Controlling, Refining Communication Processes

    ERIC Educational Resources Information Center

    Spiess, John

    1975-01-01

    Because internal communications are essential to school system success, monitoring, controlling, and refining communicative processes have become essential activities for the chief school administrator. (Available from Buckeye Association of School Administrators, 750 Brooksedge Blvd., Westerville, Ohio 43081) (Author/IRT)

  18. Refining the shifted topological vertex

    SciTech Connect

    Drissi, L. B.; Jehjouh, H.; Saidi, E. H.

    2009-01-15

    We study aspects of the refining and shifting properties of the 3d MacMahon function C{sub 3}(q) used in topological string theory and BKP hierarchy. We derive the explicit expressions of the shifted topological vertex S{sub {lambda}}{sub {mu}}{sub {nu}}(q) and its refined version T{sub {lambda}}{sub {mu}}{sub {nu}}(q,t). These vertices complete results in literature.

  19. High-resolution numerical simulation and analysis of Mach reflection structures in detonation waves in low-pressure H2 - O2 - Ar mixtures: a summary of results obtained with the adaptive mesh refinement framework AMROC

    SciTech Connect

    Deiterding, Ralf

    2011-01-01

    Numerical simulation can be key to the understanding of the multi-dimensional nature of transient detonation waves. However, the accurate approximation of realistic detonations is demanding as a wide range of scales needs to be resolved. This paper describes a successful solution strategy that utilizes logically rectangular dynamically adaptive meshes. The hydrodynamic transport scheme and the treatment of the non-equilibrium reaction terms are sketched. A ghost fluid approach is integrated into the method to allow for embedded geometrically complex boundaries. Large-scale parallel simulations of unstable detonation structures of Chapman-Jouguet detonations in low-pressure hydrogen-oxygen-argon mixtures demonstrate the efficiency of the described techniques in practice. In particular, computations of regular cellular structures in two and three space dimensions and their development under transient conditions, i.e. under diffraction and for propagation through bends are presented. Some of the observed patterns are classified by shock polar analysis and a diagram of the transition boundaries between possible Mach reflection structures is constructed.

  1. An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Erickson, Larry L.

    1994-01-01

    A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry-adaptive procedure is also incorporated.

  2. Zone refining of plutonium metal

    SciTech Connect

    Blau, M.S.

    1994-08-01

    The zone refining process was applied to Pu metal containing known amounts of impurities. Rod specimens of plutonium metal were melted into and contained in tantalum boats, each of which was passed horizontally through a three-turn, high-frequency coil in such a manner as to cause a narrow molten zone to pass through the Pu metal rod 10 times. The impurity elements Co, Cr, Fe, Ni, Np, and U were found to move in the same direction as the molten zone, as predicted by binary phase diagrams. The elements Al, Am, and Ga moved in the direction opposite to the molten zone, as predicted by binary phase diagrams. As the impurity alloy was zone refined, {delta}-phase plutonium metal crystals were produced. The first few zone refining passes were more effective than each later pass because an oxide layer formed on the rod surface. There was no clear evidence of better impurity movement at the slower zone refining speed. Also, constant or variable coil power appeared to have no effect on impurity movement during a single run (10 passes). This experiment was the first step toward developing a zone refining process for plutonium metal.

  3. Bauxite Mining and Alumina Refining

    PubMed Central

    Frisch, Neale; Olney, David

    2014-01-01

    Objective: To describe bauxite mining and alumina refining processes and to outline the relevant physical, chemical, biological, ergonomic, and psychosocial health risks. Methods: Review article. Results: The most important risks relate to noise, ergonomics, trauma, and caustic soda splashes of the skin/eyes. Other risks of note relate to fatigue, heat, and solar ultraviolet and for some operations tropical diseases, venomous/dangerous animals, and remote locations. Exposures to bauxite dust, alumina dust, and caustic mist in contemporary best-practice bauxite mining and alumina refining operations have not been demonstrated to be associated with clinically significant decrements in lung function. Exposures to bauxite dust and alumina dust at such operations are also not associated with the incidence of cancer. Conclusions: A range of occupational health risks in bauxite mining and alumina refining require the maintenance of effective control measures. PMID:24806720

  4. Successive refinement lattice vector quantization.

    PubMed

    Mukherjee, Debargha; Mitra, Sanjit K

    2002-01-01

    Lattice vector quantization (LVQ) solves the complexity problem of LBG-based vector quantizers, yielding very general codebooks. However, a single-stage LVQ, when applied to high-resolution quantization of a vector, may result in very large and unwieldy indices, making it unsuitable for applications requiring successive refinement. The goal of this work is to develop a unified framework for progressive uniform quantization of vectors without having to sacrifice the mean-squared-error advantage of lattice quantization. A successive refinement uniform vector quantization methodology is developed, where the codebooks in successive stages are all lattice codebooks, each in the shape of the Voronoi regions of the lattice at the previous stage. Such Voronoi-shaped geometric lattice codebooks are named Voronoi lattice VQs (VLVQs). Measures of the efficiency of successive refinement are developed based on the entropy of the indices transmitted by the VLVQs. Additionally, a constructive method for asymptotically optimal uniform quantization is developed using tree-structured subset VLVQs in conjunction with entropy coding. The methodology developed here essentially yields the optimal vector counterpart of scalar "bitplane-wise" refinement. Unfortunately, it is not as trivial to implement as in the scalar case. Furthermore, the benefits of asymptotic optimality in tree-structured subset VLVQs remain elusive in practical nonasymptotic situations. Nevertheless, because scalar bitplane-wise refinement is extensively used in modern wavelet image coders, we have applied the VLVQ techniques to successively refine vectors of wavelet coefficients in the vector set-partitioning (VSPIHT) framework. The results are compared against SPIHT and the previous successive approximation wavelet vector quantization (SA-W-VQ) results of Sampson, da Silva and Ghanbari.
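
    A toy illustration of successive refinement with nested uniform codebooks (a scalar-style analogue of the VLVQ idea on the integer lattice, not the paper's construction): each stage quantizes the current residual with half the previous step size, so every new index refines the reconstruction inside the previous stage's cell:

      import numpy as np

      def refine_stages(x, step, n_stages):
          """Progressively quantize vector x; return per-stage indices and approximation."""
          indices, approx = [], np.zeros_like(x)
          for _ in range(n_stages):
              idx = np.round((x - approx) / step)   # index within the current cell
              indices.append(idx.astype(int))
              approx = approx + idx * step          # progressively refined reconstruction
              step /= 2.0                           # next codebook nests inside the cell
          return indices, approx

      idx, xhat = refine_stages(np.array([0.37, -1.62]), step=1.0, n_stages=4)
      print(xhat)   # approaches the input as stages are added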

  5. Algorithm refinement for stochastic partial differential equations.

    SciTech Connect

    Alexander, F. J.; Garcia, Alejandro L.,; Tartakovsky, D. M.

    2001-01-01

    A hybrid particle/continuum algorithm is formulated for Fickian diffusion in the fluctuating hydrodynamic limit. The particles are taken as independent random walkers; the fluctuating diffusion equation is solved by finite differences with deterministic and white-noise fluxes. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass conservation. This methodology is an extension of Adaptive Mesh and Algorithm Refinement to stochastic partial differential equations. A variety of numerical experiments were performed for both steady and time-dependent scenarios. In all cases the mean and variance of density are captured correctly by the stochastic hybrid algorithm. For a non-stochastic version (i.e., using only deterministic continuum fluxes) the mean density is correct, but the variance is reduced except within the particle region, far from the interface. Extensions of the methodology to fluid mechanics applications are discussed.
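
    The continuum half of such a hybrid can be sketched compactly. Below is a minimal 1D step of the fluctuating diffusion equation with deterministic plus white-noise face fluxes; the noise amplitude follows the standard fluctuating-hydrodynamics discretization for independent random walkers, and all constants and the reflecting walls are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
D, dx, dt, n = 1.0, 1.0, 0.2, 64        # dt < dx^2 / (2 D) for stability
rho = np.full(n, 100.0)                  # mean density (walkers per cell)

def step(rho):
    """One explicit step of the fluctuating diffusion equation.
    Face flux = deterministic Fick flux + white-noise flux whose
    amplitude sqrt(2 D rho / (dx dt)) mimics independent random
    walkers. Zero end fluxes conserve mass exactly."""
    rho_face = 0.5 * (rho[:-1] + rho[1:])
    det = -D * (rho[1:] - rho[:-1]) / dx
    sto = np.sqrt(2.0 * D * np.maximum(rho_face, 0.0) / (dx * dt)) \
          * rng.standard_normal(n - 1)
    flux = np.concatenate(([0.0], det + sto, [0.0]))   # reflecting walls
    return rho - dt / dx * np.diff(flux)

for _ in range(1000):
    rho = step(rho)
print(rho.mean(), rho.var())   # variance ~ mean: Poisson-like fluctuations
```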

  6. Adaptive management

    USGS Publications Warehouse

    Allen, Craig R.; Garmestani, Ahjond S.

    2015-01-01

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial-and-error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative and serves to reduce uncertainty, build knowledge, and improve management over time in a goal-oriented, structured way.

  7. Diffraction-geometry refinement in the DIALS framework.

    PubMed

    Waterman, David G; Winter, Graeme; Gildea, Richard J; Parkhurst, James M; Brewster, Aaron S; Sauter, Nicholas K; Evans, Gwyndaf

    2016-04-01

    Rapid data collection and modern computing resources provide the opportunity to revisit the task of optimizing the model of diffraction geometry prior to integration. A comprehensive description is given of new software that builds upon established methods by performing a single global refinement procedure, utilizing a smoothly varying model of the crystal lattice where appropriate. This global refinement technique extends to multiple data sets, providing useful constraints to handle the problem of correlated parameters, particularly for small wedges of data. Examples of advanced uses of the software are given and the design is explained in detail, with particular emphasis on the flexibility and extensibility it entails. PMID:27050135

  8. Diffraction-geometry refinement in the DIALS framework

    PubMed Central

    Waterman, David G.; Winter, Graeme; Gildea, Richard J.; Parkhurst, James M.; Brewster, Aaron S.; Sauter, Nicholas K.; Evans, Gwyndaf

    2016-01-01

    Rapid data collection and modern computing resources provide the opportunity to revisit the task of optimizing the model of diffraction geometry prior to integration. A comprehensive description is given of new software that builds upon established methods by performing a single global refinement procedure, utilizing a smoothly varying model of the crystal lattice where appropriate. This global refinement technique extends to multiple data sets, providing useful constraints to handle the problem of correlated parameters, particularly for small wedges of data. Examples of advanced uses of the software are given and the design is explained in detail, with particular emphasis on the flexibility and extensibility it entails. PMID:27050135

  9. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  10. A comparison of locally adaptive multigrid methods: LDC, FAC and FIC

    NASA Technical Reports Server (NTRS)

    Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul

    1993-01-01

    This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction), which we proposed recently. These methods are tested on two examples of a two-dimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated in discrete norms, the corresponding local errors, and the convergence rates of these algorithms.
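
    All three methods build on the same two-grid correction skeleton: smooth, restrict the residual, solve a coarse problem, prolong, correct, smooth again. The sketch below shows that generic cycle for a 1D Poisson problem; it is not the LDC, FAC, or FIC algorithm itself, and the damped-Jacobi smoother and injection/linear-interpolation transfers are arbitrary choices.

```python
import numpy as np

def two_grid_poisson(f, u, n_smooth=3):
    """One generic two-grid correction cycle for -u'' = f on a uniform
    1D grid with homogeneous Dirichlet ends (h = 1/(len(u)-1)).
    This is the shared skeleton of multilevel local-refinement methods,
    not the LDC/FAC/FIC specifics."""
    h = 1.0 / (len(u) - 1)

    def smooth(u, f, sweeps):
        for _ in range(sweeps):                      # damped Jacobi
            u[1:-1] += 0.5 * 0.8 * (u[:-2] + u[2:] + h * h * f[1:-1]
                                    - 2.0 * u[1:-1])
        return u

    u = smooth(u, f, n_smooth)
    res = np.zeros_like(u)                           # residual f - A u
    res[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
    rc = res[::2].copy()                             # restrict (injection)
    H, m = 2.0 * h, len(rc)
    A = (np.diag(np.full(m - 2, 2.0)) - np.diag(np.ones(m - 3), 1)
         - np.diag(np.ones(m - 3), -1)) / H**2       # coarse -e'' = rc
    ec = np.zeros(m)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.arange(len(u)) * h, np.arange(m) * H, ec)  # prolong
    return smooth(u + e, f, n_smooth)

n = 65
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid_poisson(f, u)
print(np.abs(u - np.sin(np.pi * x)).max())   # settles at ~O(h^2)
```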

  11. A Title I Refinement: Alaska.

    ERIC Educational Resources Information Center

    Hazelton, Alexander E.; And Others

    Through joint planning with a number of school districts and the Region X Title I Technical Assistance Center, and with the help of a Title I Refinement grant, Alaska has developed a system of data storage and retrieval using microcomputers that assists small school districts in the evaluation and reporting of their Title I programs. Although this…

  12. Multigrid for refined triangle meshes

    SciTech Connect

    Shapira, Yair

    1997-02-01

    A two-level preconditioning method for the solution of (locally) refined finite element schemes using triangle meshes is introduced. In the isotropic SPD case, it is shown that the condition number of the preconditioned stiffness matrix is bounded uniformly for all sufficiently regular triangulations. This is also verified numerically for an isotropic diffusion problem with highly discontinuous coefficients.

  13. Vacuum Refining of Molten Silicon

    NASA Astrophysics Data System (ADS)

    Safarian, Jafar; Tangstad, Merete

    2012-12-01

    Metallurgical fundamentals for vacuum refining of molten silicon and the behavior of different impurities in this process are studied. A novel mass transfer model for the removal of volatile impurities from silicon in vacuum induction refining is developed. The boundary conditions for the vacuum refining system—the equilibrium partial pressures of the dissolved elements and their actual partial pressures under vacuum—are determined through thermodynamic and kinetic approaches. The removal kinetics are shown to differ among impurities and to be controlled by one, two, or all three sequential reaction mechanisms: mass transfer in a melt boundary layer, chemical evaporation on the melt surface, and mass transfer in the gas phase. Vacuum refining experiments from this study and literature data are used to validate the model. The model provides reliable results and agrees with the experimental data for many volatile elements. The removal kinetics of phosphorus, an important impurity in the production of solar-grade silicon, are properly predicted by the model, and phosphorus elimination from silicon is observed to increase significantly with increasing process temperature.
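
    The three-mechanism picture can be made concrete with a series-resistance model. The sketch below assumes the standard form 1/k_tot = 1/k_m + 1/k_e + 1/k_g with first-order removal C(t) = C0 exp(-(A/V) k_tot t); the coefficient values are illustrative assumptions, not the fitted ones from this paper.

```python
import numpy as np

def removal_curve(t, k_melt, k_evap, k_gas, area_over_vol):
    """Series-resistance model for a volatile impurity (e.g. P in Si):
    melt boundary-layer transport, evaporation at the surface, and
    gas-phase transport act in series, so
        1/k_tot = 1/k_m + 1/k_e + 1/k_g,
    and the dissolved impurity decays as
        C(t)/C0 = exp(-(A/V) * k_tot * t).
    The slowest step dominates, which is why different impurities show
    different rate-controlling mechanisms. Values are illustrative."""
    k_tot = 1.0 / (1.0 / k_melt + 1.0 / k_evap + 1.0 / k_gas)
    return np.exp(-area_over_vol * k_tot * t)

t = np.linspace(0.0, 3600.0, 5)                     # seconds
print(removal_curve(t, k_melt=2e-5, k_evap=5e-5, k_gas=1e-4,
                    area_over_vol=0.5))             # fraction remaining
```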

  14. Method for refining contaminated iridium

    DOEpatents

    Heshmatpour, B.; Heestand, R.L.

    1982-08-31

    Contaminated iridium is refined by alloying it with an alloying agent selected from the group consisting of manganese and an alloy of manganese and copper, and then dissolving the alloying agent from the formed alloy to provide a purified iridium powder.

  15. Method for refining contaminated iridium

    DOEpatents

    Heshmatpour, Bahman; Heestand, Richard L.

    1983-01-01

    Contaminated iridium is refined by alloying it with an alloying agent selected from the group consisting of manganese and an alloy of manganese and copper, and then dissolving the alloying agent from the formed alloy to provide a purified iridium powder.

  16. Crystal structure refinement from electron diffraction data

    SciTech Connect

    Dudka, A. P. Avilov, A. S.; Lepeshov, G. G.

    2008-05-15

    A procedure of crystal structure refinement from electron diffraction data is described. The electron diffraction data on polycrystalline films are processed taking into account possible overlap of reflections and two-beam interaction. The diffraction from individual single crystals in an electron microscope equipped with a precession attachment is described using the Bloch-wave method, which takes into account multibeam scattering, and a special approach taking into consideration the specific features of the diffraction geometry in the precession technique. Investigations were performed on LiF, NaF, CaF2, and Si crystals. A method for reducing experimental data, which allows joint electron and X-ray diffraction study, is proposed.

  17. 40 CFR 80.235 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... a small refiner? 80.235 Section 80.235 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Provisions § 80.235 How does a refiner obtain approval as a small refiner? (a) Applications for small refiner....225(d), which must be submitted by June 1, 2002. (b) Applications for small refiner status must...

  18. Bayesian ensemble refinement by replica simulations and reweighting.

    PubMed

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-28

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations. PMID:26723635
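
    As a concrete illustration of the reweighting at the heart of the EROS-type formulation, the sketch below finds maximum-entropy weights for a discrete ensemble that reproduce one ensemble-averaged observable. Real ensemble refinement also balances experimental uncertainty against the prior, which is omitted here; the data are synthetic stand-ins.

```python
import numpy as np
from scipy.optimize import brentq

def maxent_reweight(f, target, p=None):
    """Maximum-entropy reweighting of a discrete ensemble: weights
    w_i proportional to p_i * exp(lam * f_i) perturb the prior p as
    little as possible while matching <f> = target. The weighted
    average is monotone in lam, so a 1D root find suffices."""
    f = np.asarray(f, float)
    p = np.full_like(f, 1.0 / len(f)) if p is None else np.asarray(p, float)

    def gap(lam):
        w = p * np.exp(lam * (f - f.mean()))   # shift for stability
        w /= w.sum()
        return w @ f - target

    lam = brentq(gap, -50.0, 50.0)
    w = p * np.exp(lam * (f - f.mean()))
    return w / w.sum()

f = np.random.default_rng(1).normal(2.0, 0.5, 1000)  # simulated observable
w = maxent_reweight(f, target=2.2)
print(w @ f)   # ~2.2: refined ensemble reproduces the measurement
```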

  19. Bayesian ensemble refinement by replica simulations and reweighting

    NASA Astrophysics Data System (ADS)

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-01

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.

  1. A Comparison of Item Selection Procedures Using Different Ability Estimation Methods in Computerized Adaptive Testing Based on the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Ho, Tsung-Han

    2010-01-01

    Computerized adaptive testing (CAT) provides a highly efficient alternative to the paper-and-pencil test. By selecting items that match examinees' ability levels, CAT not only can shorten test length and administration time but it can also increase measurement precision and reduce measurement error. In CAT, maximum information (MI) is the most…

  2. Automated protein model building combined with iterative structure refinement.

    PubMed

    Perrakis, A; Morris, R; Lamzin, V S

    1999-05-01

    In protein crystallography, much time and effort are often required to trace an initial model from an interpretable electron density map and to refine it until it best agrees with the crystallographic data. Here, we present a method to build and refine a protein model automatically and without user intervention, starting from diffraction data extending to resolution higher than 2.3 Å and reasonable estimates of crystallographic phases. The method is based on an iterative procedure that describes the electron density map as a set of unconnected atoms and then searches for protein-like patterns. Automatic pattern recognition (model building) combined with refinement allows a structural model to be obtained reliably within a few CPU hours. We demonstrate the power of the method with examples of a few recently solved structures.

  3. Solution-Adaptive Cartesian Cell Approach for Viscous and Inviscid Flows

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1996-01-01

    A Cartesian cell-based approach for adaptively refined solutions of the Euler and Navier-Stokes equations in two dimensions is presented. Grids about geometrically complicated bodies are generated automatically, by the recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal cut cells are created using modified polygon-clipping algorithms. The grid is stored in a binary tree data structure that provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite volume formulation. The convective terms are upwinded: a linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The results of a study comparing the accuracy and positivity of two classes of cell-centered, viscous gradient reconstruction procedures are briefly summarized. Adaptively refined solutions of the Navier-Stokes equations are shown using the more robust of these gradient reconstruction procedures, where the results computed by the Cartesian approach are compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
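
    The recursive Cartesian subdivision is simple to sketch. Below is a minimal quadtree refinement driven by a toy indicator; cut-cell clipping against bodies and the flow solver are omitted, and the indicator function is a hypothetical stand-in for a solution-based refinement criterion.

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    """A square Cartesian cell; children arise from 4-way subdivision."""
    x: float
    y: float
    size: float
    children: list = field(default_factory=list)

def refine(cell, indicator, tol, max_depth):
    """Recursively subdivide cells whose refinement indicator exceeds
    tol, mirroring the recursive Cartesian subdivision described above
    (polygon clipping against bodies is omitted)."""
    if max_depth == 0 or indicator(cell) <= tol:
        return
    h = cell.size / 2.0
    for dx in (0.0, h):
        for dy in (0.0, h):
            child = Cell(cell.x + dx, cell.y + dy, h)
            cell.children.append(child)
            refine(child, indicator, tol, max_depth - 1)

# Toy indicator: resolve a circular "shock" of radius 0.5 about the origin
def near_circle(cell):
    cx, cy = cell.x + cell.size / 2.0, cell.y + cell.size / 2.0
    return 1.0 if abs((cx * cx + cy * cy) ** 0.5 - 0.5) < cell.size else 0.0

root = Cell(0.0, 0.0, 1.0)
refine(root, near_circle, tol=0.5, max_depth=6)

def count(c):
    return 1 + sum(count(ch) for ch in c.children)
print(count(root), "cells after adaptive subdivision")
```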

  4. Bayesian refinement of protein functional site matching

    PubMed Central

    Mardia, Kanti V; Nyirongo, Vysaul B; Green, Peter J; Gold, Nicola D; Westhead, David R

    2007-01-01

    Background Matching functional sites is a key problem for the understanding of protein function and evolution. The commonly used graph theoretic approach, and other related approaches, require adjustment of a matching distance threshold a priori according to the noise in atomic positions. This is difficult to pre-determine when matching sites related by varying evolutionary distances and crystallographic precision. Furthermore, sometimes the graph method is unable to identify alternative but important solutions in the neighbourhood of the distance based solution because of strict distance constraints. We consider the Bayesian approach to improve graph based solutions. In principle this approach applies to other methods with strict distance matching constraints. The Bayesian method can flexibly incorporate all types of prior information on specific binding sites (e.g. amino acid types) in contrast to combinatorial formulations. Results We present a new meta-algorithm for matching protein functional sites (active sites and ligand binding sites) based on an initial graph matching followed by refinement using a Markov chain Monte Carlo (MCMC) procedure. This procedure is an innovative extension to our recent work. The method accounts for the 3-dimensional structure of the site as well as the physico-chemical properties of the constituent amino acids. The MCMC procedure can lead to a significant increase in the number of significant matches compared to the graph method as measured independently by rigorously derived p-values. Conclusion MCMC refinement step is able to significantly improve graph based matches. We apply the method to matching NAD(P)(H) binding sites within single Rossmann fold families, between different families in the same superfamily, and in different folds. Within families sites are often well conserved, but there are examples where significant shape based matches do not retain similar amino acid chemistry, indicating that even within families the

  5. Simulating the quartic Galileon gravity model on adaptively refined meshes

    SciTech Connect

    Li, Baojiu; Barreira, Alexandre; Baugh, Carlton M.; Hellwing, Wojciech A.; Koyama, Kazuya; Zhao, Gong-Bo; Pascoli, Silvia

    2013-11-01

    We develop a numerical algorithm to solve the high-order nonlinear derivative-coupling equation associated with the quartic Galileon model, and implement it in a modified version of the ramses N-body code to study the effect of the Galileon field on the large-scale matter clustering. The algorithm is tested for several matter field configurations with different symmetries, and works very well. This enables us to perform the first simulations for a quartic Galileon model which provides a good fit to the cosmic microwave background (CMB) anisotropy, supernovae and baryonic acoustic oscillations (BAO) data. Our result shows that the Vainshtein mechanism in this model is very efficient in suppressing the spatial variations of the scalar field. However, the time variation of the effective Newtonian constant caused by the curvature coupling of the Galileon field cannot be suppressed by the Vainshtein mechanism. This leads to a significant weakening of the strength of gravity in high-density regions at late times, and therefore a weaker matter clustering on small scales. We also find that without the Vainshtein mechanism the model would have behaved in a completely different way, which shows the crucial role played by nonlinearities in modified gravity theories and the importance of performing self-consistent N-body simulations for these theories.

  6. Adaptive finite-element approach for analysis of bone/prosthesis interaction.

    PubMed

    Hübsch, P F; Middleton, J; Meroi, E A; Natali, A N

    1995-01-01

    The study uses the finite-element method to analyse the stress field in a perfectly bonded hip prosthesis arising from loading through body weight. Special attention is paid to the accuracy of the numerical analysis, and adaptive mesh refinement is introduced to reduce the discretisation error. The finite-element procedure developed is especially well suited to analyse the behaviour of a bonded interface as it is capable of calculating accurately the stress at the nodal positions while satisfying the natural discontinuity in the stress field at this location. An orthotropic material model is used for the representation of the behaviour of the bone, and an axisymmetric geometry with non-symmetrical loading is adopted for the analysis. The results demonstrate the usefulness of adaptive mesh refinement and the significance of adopting anisotropic material modelling in the context of tissue/prosthesis interaction.

  7. Entitlements exemptions for new refiners

    SciTech Connect

    Not Available

    1980-02-29

    The practice of exempting start-up inventories from entitlement requirements for new refiners has been called into question by the Office of Hearings and Appeals and other responsible Departmental officials. ERA, with the assistance of the Office of General Counsel, was considering resolving the matter through rulemaking; however, by October 26, 1979, no rulemaking had been published. Because of the absence of published standards for use in granting these entitlements to new refineries, undue reliance was placed on individual judgments that could result in inequities to applicants and increase the potential for fraud and abuse. Recommendations are as follows: (1) if the program for granting entitlements exemptions to new refiners is continued, the Administrator, ERA, should promptly take action to adopt an appropriate regulation to formalize the program by establishing standards and controls that will assure consistent and equitable application; in addition, files containing adjustments given to new refiners should be made complete to support benefits already allowed; and (2) whether the program is continued or discontinued, the General Counsel and the Administrator, ERA, should coordinate on how to evaluate the propriety of inventory adjustments previously granted to new refineries.

  8. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. Unfortunately, an efficient parallel implementation is difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive large-scale numerical computations in a message-passing environment. First, we present an efficient parallel implementation of a tetrahedral mesh adaption scheme. Extremely promising parallel performance is achieved for various refinement and coarsening strategies on a realistic-sized domain. Next we describe PLUM, a novel method for dynamically balancing the processor workloads in adaptive grid computations. This research includes interfacing the parallel mesh adaption procedure based on actual flow solutions to a data remapping module, and incorporating an efficient parallel mesh repartitioner. A significant runtime improvement is achieved by observing that data movement for a refinement step should be performed after the edge-marking phase but before the actual subdivision. We also present optimal and heuristic remapping cost metrics that can accurately predict the total overhead for data redistribution. Several experiments are performed to verify the effectiveness of PLUM on sequences of dynamically adapted unstructured grids. Portability is demonstrated by presenting results on the two vastly different architectures of the SP2 and the Origin2000. Additionally, we evaluate the performance of five state-of-the-art partitioning algorithms that can be used within PLUM. It is shown that for certain classes of unsteady adaption, globally repartitioning the computational mesh produces higher quality results than diffusive repartitioning schemes. We also demonstrate that a coarse starting mesh produces high quality load balancing, at

  9. A Refined Cauchy-Schwarz Inequality

    ERIC Educational Resources Information Center

    Mercer, Peter R.

    2007-01-01

    The author presents a refinement of the Cauchy-Schwarz inequality. He presents computations in which refinements of the triangle inequality and its reverse inequality are obtained for nonzero x and y in a normed linear space.

  10. Reformulated Gasoline Market Affected Refiners Differently, 1995

    EIA Publications

    1996-01-01

    This article focuses on the costs of producing reformulated gasoline (RFG) as experienced by different types of refiners and on how these refiners fared this past summer, given the prices for RFG at the refinery gate.

  11. Thermal Adaptation Methods of Urban Plaza Users in Asia’s Hot-Humid Regions: A Taiwan Case Study

    PubMed Central

    Wu, Chen-Fa; Hsieh, Yen-Fen; Ou, Sheng-Jung

    2015-01-01

    Thermal adaptation studies provide researchers great insight into how people respond to thermal discomfort. This research aims to assess outdoor urban plaza conditions in hot and humid regions of Asia by conducting an evaluation of thermal adaptation. We also propose questionnaire items appropriate for determining the thermal adaptation strategies adopted by urban plaza users. A literature review was conducted, and first-hand data were collected through field observations and interviews on thermal adaptation strategies. Item analysis, Exploratory Factor Analysis (EFA), and Confirmatory Factor Analysis (CFA) were applied to refine the questionnaire items and determine the reliability of the questionnaire evaluation procedure. The reliability and validity of the items and of the construction process were also analyzed. The researchers then developed an evaluation procedure for assessing the thermal adaptation strategies of urban plaza users in hot and humid regions of Asia and administered a questionnaire survey in Taichung’s Municipal Plaza in Taiwan. Results showed that most users responded with behavioral adaptation when experiencing thermal discomfort. However, if the thermal discomfort could not be alleviated, they then adopted psychological strategies. In conclusion, the evaluation procedure for assessing thermal adaptation strategies and the questionnaire developed in this study can be applied to future research on thermal adaptation strategies adopted by urban plaza users in hot and humid regions of Asia. PMID:26516881

  12. Thermal Adaptation Methods of Urban Plaza Users in Asia's Hot-Humid Regions: A Taiwan Case Study.

    PubMed

    Wu, Chen-Fa; Hsieh, Yen-Fen; Ou, Sheng-Jung

    2015-10-27

    Thermal adaptation studies provide researchers great insight into how people respond to thermal discomfort. This research aims to assess outdoor urban plaza conditions in hot and humid regions of Asia by conducting an evaluation of thermal adaptation. We also propose questionnaire items appropriate for determining the thermal adaptation strategies adopted by urban plaza users. A literature review was conducted, and first-hand data were collected through field observations and interviews on thermal adaptation strategies. Item analysis, Exploratory Factor Analysis (EFA), and Confirmatory Factor Analysis (CFA) were applied to refine the questionnaire items and determine the reliability of the questionnaire evaluation procedure. The reliability and validity of the items and of the construction process were also analyzed. The researchers then developed an evaluation procedure for assessing the thermal adaptation strategies of urban plaza users in hot and humid regions of Asia and administered a questionnaire survey in Taichung's Municipal Plaza in Taiwan. Results showed that most users responded with behavioral adaptation when experiencing thermal discomfort. However, if the thermal discomfort could not be alleviated, they then adopted psychological strategies. In conclusion, the evaluation procedure for assessing thermal adaptation strategies and the questionnaire developed in this study can be applied to future research on thermal adaptation strategies adopted by urban plaza users in hot and humid regions of Asia.

  13. Adaptive Image Denoising by Mixture Adaptation

    NASA Astrophysics Data System (ADS)

    Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the Expectation-Maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad-hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper: First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. Experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.
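
    A bare-bones version of the E- and M-steps conveys the adaptation idea. The sketch below adapts the weights and means of a spherical-covariance GMM patch prior to one image's patches; covariance updates and the paper's Bayesian hyper-prior regularization are omitted, and the patches are synthetic stand-ins.

```python
import numpy as np

def em_adapt(patches, pi, mu, var, n_iter=3):
    """Adapt a generic (spherical-covariance) GMM patch prior to one
    image's patches with a few EM passes, in the spirit of EM
    adaptation: responsibilities under the current model (E-step),
    then mixture weights and means re-estimated on this image
    (M-step). Variances are held fixed for brevity."""
    d = patches.shape[1]
    for _ in range(n_iter):
        # E-step: log pi_k + log N(x | mu_k, var_k I), up to a constant
        dist = ((patches[:, None, :] - mu[None]) ** 2).sum(-1)
        log_r = np.log(pi)[None] - 0.5 * (dist / var[None]
                                          + d * np.log(var)[None])
        log_r -= log_r.max(1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(1, keepdims=True)
        # M-step: re-estimate weights and means from this image only
        nk = r.sum(0) + 1e-8
        pi = nk / nk.sum()
        mu = (r.T @ patches) / nk[:, None]
    return pi, mu

rng = np.random.default_rng(0)
patches = rng.normal(0.0, 1.0, (500, 16))      # stand-in for 4x4 patches
pi0 = np.full(8, 1.0 / 8)                      # generic 8-component prior
mu0 = rng.normal(0.0, 1.0, (8, 16))
var0 = np.ones(8)
pi, mu = em_adapt(patches, pi0, mu0, var0)
print(pi.round(3))                             # adapted mixture weights
```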

  14. Firing of pulverized solvent refined coal

    DOEpatents

    Derbidge, T. Craig; Mulholland, James A.; Foster, Edward P.

    1986-01-01

    An air-purged burner for the firing of pulverized solvent refined coal is constructed and operated such that the solvent refined coal can be fired without the coking thereof on the burner components. The air-purged burner is designed for the firing of pulverized solvent refined coal in a tangentially fired boiler.

  15. Application of local mesh refinement in the DSMC method

    NASA Astrophysics Data System (ADS)

    Wu, J.-S.; Tseng, K.-C.; Kuo, C.-H.

    2001-08-01

    The implementation of an adaptive mesh-embedding (h-refinement) scheme using unstructured grids in the two-dimensional Direct Simulation Monte Carlo (DSMC) method is reported. In this technique, local isotropic refinement is used to introduce new cells where the local cell Knudsen number is less than some preset value. This simple scheme, however, has several severe consequences affecting the performance of the DSMC method. Thus, we have applied a technique to remove the hanging nodes by introducing anisotropic refinement in the interfacial cells. This is accomplished by simply connecting the hanging node(s) with the other non-hanging node(s) in the non-refined interfacial cells. This remedy adds a negligible amount of work, yet it removes all the difficulties presented by the first scheme with hanging nodes. We have tested the proposed scheme for argon gas, using different types of mesh (triangular, quadrilateral, or mixed), on a high-speed driven cavity flow. The results show improved flow resolution as compared with that of the unadapted mesh. Finally, we have used triangular adaptive meshes to compute two near-continuum gas flows: a supersonic flow over a cylinder and a supersonic flow over a 35° compression ramp. The results show fairly good agreement with previous studies. In summary, the computational penalties of the proposed adaptive schemes are found to be small compared with the DSMC computation itself, and we conclude that the proposed scheme is superior to the original unadapted scheme in terms of solution accuracy.

  16. Toward a consistent framework for high order mesh refinement schemes in numerical relativity

    NASA Astrophysics Data System (ADS)

    Mongwane, Bishop

    2015-05-01

    It has now become customary in the field of numerical relativity to couple high-order finite difference schemes to mesh refinement algorithms. To this end, different modifications to the standard Berger-Oliger adaptive mesh refinement algorithm have been proposed. In this work we present a fourth-order stable mesh refinement scheme with sub-cycling in time for numerical relativity. We do not use buffer zones to deal with refinement boundaries but explicitly specify boundary data for refined grids. We argue that the incompatibility of the standard mesh refinement algorithm with higher-order Runge-Kutta methods is a manifestation of order-reduction phenomena, caused by inconsistent application of boundary data in the refined grids. Our scheme also addresses the problem of spurious reflections that are generated when propagating waves cross mesh refinement boundaries. We introduce a transition zone on refined levels within which the phase velocity of propagating modes is allowed to decelerate in order to smoothly match the phase velocity of coarser grids. We apply the method to test problems involving propagating waves and show a significant reduction in spurious reflections.

  17. Grain Refinement of Deoxidized Copper

    NASA Astrophysics Data System (ADS)

    Balart, María José; Patel, Jayesh B.; Gao, Feng; Fan, Zhongyun

    2016-08-01

    This study reports the current status of grain refinement of copper accompanied in particular by a critical appraisal of grain refinement of phosphorus-deoxidized, high residual P (DHP) copper microalloyed with 150 ppm Ag. Some deviations exist in terms of the growth restriction factor (Q) framework, on the basis of empirical evidence reported in the literature for grain size measurements of copper with individual additions of 0.05, 0.1, and 0.5 wt pct of Mo, In, Sn, Bi, Sb, Pb, and Se, cast under a protective atmosphere of pure Ar and water quenching. The columnar-to-equiaxed transition (CET) has been observed in copper, with an individual addition of 0.4B and with combined additions of 0.4Zr-0.04P and 0.4Zr-0.04P-0.015Ag and, in a previous study, with combined additions of 0.1Ag-0.069P (in wt pct). CETs in these B- and Zr-treated casts have been ascribed to changes in the morphology and chemistry of particles, concurrently in association with free solute type and availability. No further grain-refining action was observed due to microalloying additions of B, Mg, Ca, Zr, Ti, Mn, In, Fe, and Zn (~0.1 wt pct) with respect to DHP-Cu microalloyed with Ag, and therefore are no longer relevant for the casting conditions studied. The critical microalloying element for grain size control in deoxidized copper and in particular DHP-Cu is Ag.

  18. Grain Refinement of Deoxidized Copper

    NASA Astrophysics Data System (ADS)

    Balart, María José; Patel, Jayesh B.; Gao, Feng; Fan, Zhongyun

    2016-10-01

    This study reports the current status of grain refinement of copper accompanied in particular by a critical appraisal of grain refinement of phosphorus-deoxidized, high residual P (DHP) copper microalloyed with 150 ppm Ag. Some deviations exist in terms of the growth restriction factor (Q) framework, on the basis of empirical evidence reported in the literature for grain size measurements of copper with individual additions of 0.05, 0.1, and 0.5 wt pct of Mo, In, Sn, Bi, Sb, Pb, and Se, cast under a protective atmosphere of pure Ar and water quenching. The columnar-to-equiaxed transition (CET) has been observed in copper, with an individual addition of 0.4B and with combined additions of 0.4Zr-0.04P and 0.4Zr-0.04P-0.015Ag and, in a previous study, with combined additions of 0.1Ag-0.069P (in wt pct). CETs in these B- and Zr-treated casts have been ascribed to changes in the morphology and chemistry of particles, concurrently in association with free solute type and availability. No further grain-refining action was observed due to microalloying additions of B, Mg, Ca, Zr, Ti, Mn, In, Fe, and Zn (~0.1 wt pct) with respect to DHP-Cu microalloyed with Ag, and therefore are no longer relevant for the casting conditions studied. The critical microalloying element for grain size control in deoxidized copper and in particular DHP-Cu is Ag.

  19. Risk-based system refinement

    SciTech Connect

    Winter, V.L.; Berg, R.S.; Dalton, L.J.

    1998-06-01

    When designing a high-consequence system, considerable care should be taken to ensure that the system cannot easily be placed into a high-consequence failure state. A formal system design process should include a model that explicitly shows the complete state space of the system (including failure states) as well as those events (e.g., abnormal environmental conditions, component failures, etc.) that can cause a system to enter a failure state. In this paper the authors present such a model and formally develop a notion of risk-based refinement with respect to the model.

  20. Gaseous Refining of Anode Copper

    NASA Astrophysics Data System (ADS)

    Goyal, Pradeep; Themelis, N. J.; Zanchuk, Walter A.

    1982-12-01

    The refining of blister copper prior to casting into anodes consists of oxidizing the copper melt to remove sulfur and then reducing its oxygen content. The age-old "wood poling" technique for deoxidation is gradually being replaced by the injection of reducing gases through one or two tuyeres. Thermodynamic and mass transfer analysis as well as laboratory tests have shown that the operating efficiency of gas injection can be improved considerably by enhancing mixing and gas-liquid mass transfer conditions within the copper bath. The injection of inert gas through porous plugs offers a viable industrial means for effecting such an improvement.

  1. A novel two-stage discrete crack method based on the screened Poisson equation and local mesh refinement

    NASA Astrophysics Data System (ADS)

    Areias, P.; Rabczuk, T.; de Sá, J. César

    2016-09-01

    We propose an alternative crack propagation algorithm which effectively circumvents the variable-transfer procedure adopted with classical mesh adaptation algorithms. The present alternative consists of two stages: a mesh-creation stage, in which a local damage model is employed to define a crack-conforming mesh, and a subsequent analysis stage, with a localization limiter in the form of a modified screened Poisson equation that avoids crack-path calculations. In the second stage, the crack naturally occurs within the refined region. A staggered scheme for the standard equilibrium and screened Poisson equations is used in this second stage. Element subdivision is based on edge-split operations using a constitutive quantity (damage). To assess the robustness and accuracy of this algorithm, we use five quasi-brittle benchmarks, all successfully solved.
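
    The localization-limiter step lends itself to a short sketch: a screened Poisson solve spreads a raw damage field over a length scale ell, removing mesh dependence. The setup below (zero-flux borders, plain Jacobi solver, constants) is a generic illustration under assumed parameters, not the paper's exact formulation.

```python
import numpy as np

def screened_poisson(d_local, ell, h=1.0, n_iter=2000):
    """Jacobi solve of the screened Poisson equation
        d - ell^2 * Laplacian(d) = d_local,
    used as a localization limiter: it smooths a raw, mesh-dependent
    damage field over the length scale ell. Edge padding gives
    zero-flux (Neumann) borders."""
    d = d_local.copy()
    a = (ell / h) ** 2
    for _ in range(n_iter):
        dp = np.pad(d, 1, mode="edge")
        nbr = (dp[:-2, 1:-1] + dp[2:, 1:-1]
               + dp[1:-1, :-2] + dp[1:-1, 2:])
        d = (d_local + a * nbr) / (1.0 + 4.0 * a)   # Jacobi update
    return d

raw = np.zeros((64, 64))
raw[32, 10:54] = 1.0                      # a sharp one-cell "crack" band
smooth = screened_poisson(raw, ell=3.0)
print(raw.max(), smooth.max())            # the peak spreads over ~ell cells
```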

  2. Zone refining of plutonium metal

    SciTech Connect

    1997-05-01

    The purpose of this study was to investigate zone refining techniques for the purification of plutonium metal. The redistribution of 10 impurity elements from zone melting was examined. Four tantalum boats were loaded with plutonium impurity alloy, placed in a vacuum furnace, heated to 700°C, and held at temperature for one hour. Ten passes were made with each boat. Metallographic and chemical analyses performed on the plutonium rods showed that, after 10 passes, moderate movement of certain elements was achieved. Molten zone speeds of 1 or 2 inches per hour had no effect on impurity element movement. Likewise, the application of constant or variable power had no effect on impurity movement. The study implies that development of a zone refining process to purify plutonium is feasible. Development of a process will be hampered by two factors: (1) the effect of the oxide layer formed on the exposed surface of the material on impurity element redistribution is not understood, and (2) the tantalum container material is not inert in the presence of plutonium. Cold boat studies are planned, with higher temperature and vacuum levels, to determine the effect on these factors. 5 refs., 1 tab., 5 figs.

  3. Grain refinement in undercooled nickel

    SciTech Connect

    Leung, K.K.; Chiu, C.P.; Kui, H.W.

    1995-05-15

    In this paper, the microstructures of undercooled Ni that solidified at various initial bulk undercoolings are examined in detail in order to understand the mechanism of grain refinement in metallic systems. Molten Ni contracts on solidification. In the experiment, since it was covered by molten glass flux, upon crystallization cavities had to form to accommodate the rapid volume change if the undercooled specimen remained in contact with the glass flux, which could not flow so readily. The adhesiveness between Ni and glass flux was confirmed by removing them from a furnace after the whole system had been cooled down to room temperature. Furthermore, it was clear from the micrographs that after a cavity had formed, it did not collapse. It can therefore be concluded that smaller grains are found to concentrate near the void along the minor axis. At still higher undercoolings, the effect was so violent that the voids took irregular shapes. Again, the grains near the cavity are somewhat smaller than those further away. Accordingly, the authors conclude that grain refinement in undercooled Ni was brought about by dynamic nucleation as the cavities formed.

  4. Using output to evaluate and refine rules in rule-based expert systems

    NASA Technical Reports Server (NTRS)

    St.clair, D. C.; Bond, W. E.; Flachsbart, B. B.

    1987-01-01

    The techniques described provide an effective tool which knowledge engineers and domain experts can utilize to help in evaluating and refining rules. These techniques have been used successfully as learning mechanisms in a prototype adaptive diagnostic expert system and are applicable to other types of expert systems. The degree to which they constitute complete evaluation/refinement of an expert system depends on the thoroughness of their use.

  5. Adaptive techniques in electrical impedance tomography reconstruction.

    PubMed

    Li, Taoran; Isaacson, David; Newell, Jonathan C; Saulnier, Gary J

    2014-06-01

    We present an adaptive algorithm for solving the inverse problem in electrical impedance tomography. To strike a balance between the accuracy of the reconstructed images and the computational efficiency of the forward and inverse solvers, we propose to combine an adaptive mesh refinement technique with the adaptive Kaczmarz method. The iterative algorithm adaptively generates the optimal current patterns and a locally-refined mesh given the conductivity estimate and solves for the unknown conductivity distribution with the block Kaczmarz update step. Simulation and experimental results with numerical analysis demonstrate the accuracy and the efficiency of the proposed algorithm.
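
    The Kaczmarz update at the core of the inverse solver is a one-line projection per measurement row. The sketch below shows the classic (non-adaptive) iteration on a synthetic linear system standing in for the linearized EIT problem; the adaptive current patterns and on-the-fly mesh refinement of the paper are not reproduced.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=50, relax=1.0):
    """Classic Kaczmarz iteration: sweep over the rows of A, projecting
    the current estimate onto each hyperplane a_i . x = b_i. For a
    consistent system this converges to a solution; the adaptive method
    above wraps updates like this with optimized current patterns and
    locally-refined meshes."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a, bi in zip(A, b):
            x += relax * (bi - a @ x) / (a @ a) * a
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(80, 40))          # stand-in sensitivity matrix
x_true = rng.normal(size=40)           # "conductivity" perturbation
x = kaczmarz(A, A @ x_true)
print(np.linalg.norm(x - x_true))      # ~0 for this consistent system
```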

  6. More Refined Experiments with Hemoglobin.

    ERIC Educational Resources Information Center

    Morin, Phillippe

    1985-01-01

    Discusses materials needed, procedures used, and typical results obtained for experiments designed to make a numerical stepwise study of the oxygenation of hemoglobin, myoglobin, and other oxygen carriers. (JN)

  7. Quantifying the Adaptive Cycle.

    PubMed

    Angeler, David G; Allen, Craig R; Garmestani, Ahjond S; Gunderson, Lance H; Hjerne, Olle; Winder, Monika

    2015-01-01

    The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994-2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding of how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems. PMID:26716453

  8. Quantifying the adaptive cycle

    USGS Publications Warehouse

    Angeler, David G.; Allen, Craig R.; Garmestani, Ahjond S.; Gunderson, Lance H.; Hjerne, Olle; Winder, Monika

    2015-01-01

    The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994–2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding of how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems.

  9. Quantifying the Adaptive Cycle

    PubMed Central

    Angeler, David G.; Allen, Craig R.; Garmestani, Ahjond S.; Gunderson, Lance H.; Hjerne, Olle; Winder, Monika

    2015-01-01

    The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994–2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding of how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems. PMID:26716453

  10. Refined phase diagram of boron nitride

    SciTech Connect

    Solozhenko, V.; Turkevich, V.Z.; Holzapfel, W.B.

    1999-04-15

    The equilibrium phase diagram of boron nitride thermodynamically calculated by Solozhenko in 1988 has now been refined on the basis of new experimental data on BN melting and extrapolation of the heat capacities of BN polymorphs into the high-temperature region using the adapted pseudo-Debye model. As compared with the earlier diagram, the hBN ⇌ cBN equilibrium line is displaced by 60 K toward higher temperatures. The hBN-cBN-L triple point has been calculated to be at 3480 ± 10 K and 5.9 ± 0.1 GPa, while the hBN-L-V triple point is at T = 3400 ± 20 K and p = 400 ± 20 Pa, which indicates that the region of thermodynamic stability of vapor in the BN phase diagram is extremely small. It has been found that the slope of the cBN melting curve is positive, whereas the slope of the hBN melting curve varies from positive between ambient pressure and 3.4 GPa to negative at higher pressures.

  11. Parallel three-dimensional magnetotelluric inversion using adaptive finite-element method. Part I: theory and synthetic study

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.

    2015-07-01

    This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for forward and inverse problems were decoupled. For calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gain, EM fields for each frequency were calculated using independent meshes in order to account for substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems based on the linearized model resolution matrix was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to fill a gap between initial and true model complexities and resolve emerging 3-D structures better, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit dependency on an initial model guess. Additionally, it is demonstrated

  12. Use of intensity quotients and differences in absolute structure refinement.

    PubMed

    Parsons, Simon; Flack, Howard D; Wagner, Trixie

    2013-06-01

    Several methods for absolute structure refinement were tested using single-crystal X-ray diffraction data collected using Cu Kα radiation for 23 crystals with no element heavier than oxygen: conventional refinement using an inversion twin model, estimation using intensity quotients in SHELXL2012, estimation using Bayesian methods in PLATON, estimation using restraints consisting of numerical intensity differences in CRYSTALS and estimation using differences and quotients in TOPAS-Academic where both quantities were coded in terms of other structural parameters and implemented as restraints. The conventional refinement approach yielded accurate values of the Flack parameter, but with standard uncertainties ranging from 0.15 to 0.77. The other methods also yielded accurate values of the Flack parameter, but with much higher precision. Absolute structure was established in all cases, even for a hydrocarbon. The procedures in which restraints are coded explicitly in terms of other structural parameters enable the Flack parameter to correlate with these other parameters, so that it is determined along with those parameters during refinement. PMID:23719469

  13. Assume-Guarantee Abstraction Refinement Meets Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Bogomolov, Sergiy; Frehse, Goran; Greitschus, Marius; Grosu, Radu; Pasareanu, Corina S.; Podelski, Andreas; Strump, Thomas

    2014-01-01

    Compositional verification techniques in the assume-guarantee style have been successfully applied to transition systems to efficiently reduce the search space by leveraging the compositional nature of the systems under consideration. We adapt these techniques to the domain of hybrid systems with affine dynamics. To build assumptions we introduce an abstraction based on location merging. We integrate the assume-guarantee style analysis with automatic abstraction refinement. We have implemented our approach in the symbolic hybrid model checker SpaceEx. The evaluation shows its practical potential. To the best of our knowledge, this is the first work combining assume-guarantee reasoning with automatic abstraction-refinement in the context of hybrid automata.

  14. Adaptation of the Illness Trajectory Theory to Describe the Work of Transitional Cancer Survivorship

    PubMed Central

    Klimmek, Rachel; Wenzel, Jennifer

    2013-01-01

    Purpose/Objectives Although frameworks for understanding survivorship continue to evolve, most are abstract and do not address the complex context of survivors’ transition following treatment completion. The purpose of this theory adaptation was to examine and refine the Illness Trajectory Theory, which describes the work of managing chronic illness, to address transitional cancer survivorship. Data Sources CINAHL, PubMed, and relevant Institute of Medicine reports were searched for survivors’ experiences during the year following treatment. Data Synthesis Using an abstraction tool, sixty-eight articles were selected from the initial search (N>700). Abstracted data were placed into a priori categories refined according to recommended procedures for theory derivation, followed by expert review. Conclusions Derivation resulted in a framework describing “the work of transitional cancer survivorship” (TCS work). TCS work is defined as survivor tasks, performed alone or with others, to carry out a plan of action for managing one or more aspects of life following primary cancer treatment. Theoretically, survivors engage in 3 reciprocally-interactive lines of work: (1) illness-related; (2) biographical; and (3) everyday life work. Adaptation resulted in refinement of these domains and the addition of survivorship care planning under “illness-related work”. Implications for Nursing Understanding this process of work may allow survivors/co-survivors to better prepare for the post-treatment period. This adaptation provides a framework for future testing and development. Validity and utility of this framework within specific survivor populations should also be explored. PMID:23107863

  15. Dynamics and Adaptive Control for Stability Recovery of Damaged Aircraft

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan; Krishnakumar, Kalmanje; Kaneshige, John; Nespeca, Pascal

    2006-01-01

    This paper presents a recent study of a damaged generic transport model as part of a NASA research project to investigate adaptive control methods for stability recovery of damaged aircraft operating in off-nominal flight conditions under damage and/or failures. Aerodynamic modeling of damage effects is performed using an aerodynamic code to assess changes in the stability and control derivatives of a generic transport aircraft. Certain types of damage, such as damage to one of the wings or horizontal stabilizers, can cause the aircraft to become asymmetric, thus resulting in a coupling between the longitudinal and lateral motions. Flight dynamics for a general asymmetric aircraft are derived to account for changes in the center of gravity that can compromise the stability of the damaged aircraft. An iterative trim analysis for the translational motion is developed to refine the trim procedure by accounting for the effects of control surface deflection. A hybrid direct-indirect neural network adaptive flight control scheme is proposed as an adaptive law for stabilizing the rotational motion of the damaged aircraft. The indirect adaptation is designed to estimate the plant dynamics of the damaged aircraft in conjunction with the direct adaptation that computes the control augmentation. Two approaches are presented: 1) an adaptive law derived from Lyapunov stability theory to ensure that the signals are bounded, and 2) a recursive least-squares method for parameter identification. A hardware-in-the-loop simulation is conducted and demonstrates the effectiveness of the direct neural network adaptive flight control in the stability recovery of the damaged aircraft. A preliminary simulation of the hybrid adaptive flight control has been performed and initial data have shown the effectiveness of the proposed hybrid approach. Future work will include further investigations and high-fidelity simulations of the proposed hybrid adaptive flight control approach.
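
    The second approach, recursive least-squares parameter identification, has the familiar textbook form sketched below (a generic RLS update; variable names and the forgetting factor are assumptions, not the paper's formulation):

        import numpy as np

        def rls_update(theta, P, phi, y, lam=0.99):
            # theta: parameter estimate, P: covariance matrix,
            # phi: regressor vector, y: measured output, lam: forgetting factor
            phi = phi.reshape(-1, 1)
            K = P @ phi / (lam + (phi.T @ P @ phi).item())   # gain vector
            err = y - (phi.T @ theta).item()                 # prediction error
            theta = theta + K.ravel() * err
            P = (P - K @ phi.T @ P) / lam
            return theta, P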

  16. Adaptation to hot environmental conditions: an exploration of the performance basis, procedures and future directions to optimise opportunities for elite athletes.

    PubMed

    Guy, Joshua H; Deakin, Glen B; Edwards, Andrew M; Miller, Catherine M; Pyne, David B

    2015-03-01

    Extreme environmental conditions present athletes with diverse challenges; however, not all sporting events are limited by thermoregulatory parameters. The purpose of this leading article is to identify specific instances where hot environmental conditions either compromise or augment performance and, where heat acclimation appears justified, evaluate the effectiveness of pre-event acclimation processes. To identify events likely to be receptive to pre-competition heat adaptation protocols, we clustered and quantified the magnitude of difference in performance of elite athletes competing in International Association of Athletics Federations (IAAF) World Championships (1999-2011) in hot environments (>25 °C) with those in cooler temperate conditions (<25 °C). Athletes in endurance events performed worse in hot conditions (~3 % reduction in performance, Cohen's d > 0.8; large impairment), while in contrast, performance in short-duration sprint events was augmented in the heat compared with temperate conditions (~1 % improvement, Cohen's d > 0.8; large performance gain). As endurance events were identified as compromised by the heat, we evaluated common short-term heat acclimation (≤7 days, STHA) and medium-term heat acclimation (8-14 days, MTHA) protocols. This process identified beneficial effects of heat acclimation on performance using both STHA (2.4 ± 3.5 %) and MTHA protocols (10.2 ± 14.0 %). These effects were greater for MTHA, which also demonstrated larger reductions in both endpoint exercise heart rate (STHA: -3.5 ± 1.8 % vs MTHA: -7.0 ± 1.9 %) and endpoint core temperature (STHA: -0.7 ± 0.7 % vs MTHA: -0.8 ± 0.3 %). It appears that worthwhile acclimation is achievable for endurance athletes via both short- and medium-length protocols, but more is gained using MTHA. Conversely, it is also conceivable that heat acclimation may be counterproductive for sprinters. As high-performance athletes are often time-poor, shorter duration protocols may

  17. Silicon refinement by chemical vapor transport

    NASA Technical Reports Server (NTRS)

    Olson, J.

    1984-01-01

    Silicon refinement by chemical vapor transport is discussed. The operating characteristics of the purification process, including factors affecting the rate, purification efficiency and photovoltaic quality of the refined silicon were studied. The casting of large alloy plates was accomplished. A larger research scale reactor is characterized, and it is shown that a refined silicon product yields solar cells with near state of the art conversion efficiencies.

  18. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown, so solutions may be computed to a conservatively high accuracy. Computable error estimates offer the possibility of minimizing computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user-specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
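
    The corrected functional produced by such adjoint-based estimates can be written, in generic notation (an assumed standard form, not quoted from this work), as

        \[
          f(u) \;\approx\; f(u_H) \;+\; \psi^{\top} R_h(u_H),
        \]

    where u_H is the solution on the current mesh, R_h(u_H) its residual evaluated in a richer approximation space, and \psi the adjoint solution for the functional f; the sign convention depends on how the residual is defined. The remaining, uncorrected error term is what the mesh adaptation targets.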

  19. Hirshfeld atom refinement for modelling strong hydrogen bonds.

    PubMed

    Woińska, Magdalena; Jayatilaka, Dylan; Spackman, Mark A; Edwards, Alison J; Dominiak, Paulina M; Woźniak, Krzysztof; Nishibori, Eiji; Sugimoto, Kunihisa; Grabowsky, Simon

    2014-09-01

    High-resolution low-temperature synchrotron X-ray diffraction data of the salt L-phenylalaninium hydrogen maleate are used to test the new automated iterative Hirshfeld atom refinement (HAR) procedure for the modelling of strong hydrogen bonds. The HAR models used present the first examples of Z' > 1 treatments in the framework of wavefunction-based refinement methods. L-Phenylalaninium hydrogen maleate exhibits several hydrogen bonds in its crystal structure, of which the shortest and the most challenging to model is the O-H...O intramolecular hydrogen bond present in the hydrogen maleate anion (O...O distance is about 2.41 Å). In particular, the reconstruction of the electron density in the hydrogen maleate moiety and the determination of hydrogen-atom properties [positions, bond distances and anisotropic displacement parameters (ADPs)] are the focus of the study. For comparison to the HAR results, different spherical (independent atom model, IAM) and aspherical (free multipole model, MM; transferable aspherical atom model, TAAM) X-ray refinement techniques as well as results from a low-temperature neutron-diffraction experiment are employed. Hydrogen-atom ADPs are furthermore compared to those derived from a TLS/rigid-body (SHADE) treatment of the X-ray structures. The reference neutron-diffraction experiment reveals a truly symmetric hydrogen bond in the hydrogen maleate anion. Only with HAR is it possible to freely refine hydrogen-atom positions and ADPs from the X-ray data, which leads to the best electron-density model and the closest agreement with the structural parameters derived from the neutron-diffraction experiment, e.g. the symmetric hydrogen position can be reproduced. The multipole-based refinement techniques (MM and TAAM) yield slightly asymmetric positions, whereas the IAM yields a significantly asymmetric position.

  20. Refining the shallow slip deficit

    NASA Astrophysics Data System (ADS)

    Xu, Xiaohua; Tong, Xiaopeng; Sandwell, David T.; Milliner, Christopher W. D.; Dolan, James F.; Hollingsworth, James; Leprince, Sebastien; Ayoub, Francois

    2016-03-01

    Geodetic slip inversions for three major (Mw > 7) strike-slip earthquakes (1992 Landers, 1999 Hector Mine and 2010 El Mayor-Cucapah) show a 15-60 per cent reduction in slip near the surface (depth < 2 km) relative to the slip at deeper depths (4-6 km). This significant difference between surface coseismic slip and slip at depth has been termed the shallow slip deficit (SSD). The large magnitude of this deficit has been an enigma since it cannot be explained by shallow creep during the interseismic period or by triggered slip from nearby earthquakes. One potential explanation for the SSD is that the previous geodetic inversions lack data coverage close to the surface rupture, such that the shallow portions of the slip models are poorly resolved and generally underestimated. In this study, we improve the static coseismic slip inversion for these three earthquakes, especially at shallow depths, by: (1) including data capturing the near-fault deformation from optical imagery and SAR azimuth offsets; (2) refining the interferometric synthetic aperture radar processing with non-boxcar phase filtering, model-dependent range corrections, more complete phase unwrapping by SNAPHU (Statistical Non-linear Approach for Phase Unwrapping) assuming a maximum discontinuity and an on-fault correlation mask; (3) using more detailed, geologically constrained fault geometries and (4) incorporating additional campaign global positioning system (GPS) data. The refined slip models result in much smaller SSDs of 3-19 per cent. We suspect that the remaining minor SSD for these earthquakes likely reflects a combination of our elastic model's inability to fully account for near-surface deformation, which makes our shallow slip estimates minimum values, and potentially small amounts of interseismic fault creep or triggered slip, which could `make up' a small percentage of the coseismic SSD during the interseismic period. Our results indicate that it is imperative that slip inversions include

  1. Femtosecond infrared intrastromal ablation and backscattering-mode adaptive-optics multiphoton microscopy in chicken corneas

    PubMed Central

    Gualda, Emilio J.; Vázquez de Aldana, Javier R.; Martínez-García, M. Carmen; Moreno, Pablo; Hernández-Toro, Juan; Roso, Luis; Artal, Pablo; Bueno, Juan M.

    2011-01-01

    The performance of femtosecond (fs) laser intrastromal ablation was evaluated with backscattering-mode adaptive-optics multiphoton microscopy in ex vivo chicken corneas. The pulse energy of the fs source used for ablation was set to generate two different ablation patterns within the corneal stroma at a certain depth. Intrastromal patterns were imaged with a custom adaptive-optics multiphoton microscope to determine the accuracy of the procedure and verify the outcomes. This study demonstrates the potential of using fs pulses as surgical and monitoring techniques to systematically investigate intratissue ablation. Further refinement of the experimental system by combining both functions into a single fs laser system would be the basis to establish new techniques capable of monitoring corneal surgery without labeling in real-time. Since the backscattering configuration has also been optimized, future in vivo implementations would also be of interest in clinical environments involving corneal ablation procedures. PMID:22076258

  2. Model Checking Linearizability via Refinement

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Chen, Wei; Liu, Yanhong A.; Sun, Jun

    Linearizability is an important correctness criterion for implementations of concurrent objects. Automatic checking of linearizability is challenging because it requires checking that 1) all executions of concurrent operations be serializable, and 2) the serialized executions be correct with respect to the sequential semantics. This paper describes a new method to automatically check linearizability based on refinement relations from abstract specifications to concrete implementations. Our method avoids the often difficult task of determining linearization points in implementations, but can also take advantage of linearization points if they are given. The method exploits model checking of finite state systems specified as concurrent processes with shared variables. Partial order reduction is used to effectively reduce the search space. The approach is built into a toolset that supports a rich set of concurrent operators. The tool has been used to automatically check a variety of implementations of concurrent objects, including the first algorithms for the mailbox problem and scalable NonZero indicators. Our system was able to find all known and injected bugs in these implementations.

  3. Automated knowledge-base refinement

    NASA Technical Reports Server (NTRS)

    Mooney, Raymond J.

    1994-01-01

    Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.

  4. 40 CFR 80.235 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... knowledge. (4) Name, address, phone number, facsimile number and E-mail address (if available) of a... disapproved, the refiner must comply with the standards in § 80.195. (h) If EPA finds that a refiner...

  5. Anomalies in the refinement of isoleucine

    SciTech Connect

    Berntsen, Karen R. M.; Vriend, Gert

    2014-04-01

    The side-chain torsion angles of isoleucines in X-ray protein structures are a function of resolution, secondary structure and refinement software. Detailing the standard torsion angles used in refinement software can improve protein structure refinement. A study of isoleucines in protein structures solved using X-ray crystallography revealed a series of systematic trends for the two side-chain torsion angles χ1 and χ2 dependent on the resolution, secondary structure and refinement software used. The average torsion angles for the nine rotamers were similar in high-resolution structures solved using either the REFMAC, CNS or PHENIX software. However, at low resolution these programs often refine towards somewhat different χ1 and χ2 values. Small systematic differences can be observed between refinement software that uses molecular dynamics-type energy terms (for example CNS) and software that does not use these terms (for example REFMAC). Detailing the standard torsion angles used in refinement software can improve the refinement of protein structures. The target values in the molecular dynamics-type energy functions can also be improved.

  6. Pneumatic conveying of pulverized solvent refined coal

    DOEpatents

    Lennon, Dennis R.

    1984-11-06

    A method for pneumatically conveying solvent refined coal to a burner under conditions of dilute phase pneumatic flow so as to prevent saltation of the solvent refined coal in the transport line by maintaining the transport fluid velocity above approximately 95 ft/sec.

  7. 27 CFR 21.127 - Shellac (refined).

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Shellac (refined). 21.127....127 Shellac (refined). (a) Arsenic content. Not more than 1.4 parts per million as determined by the... petroleum ether and mix thoroughly. Add approximately 2 liters of water and separate a portion of the...

  8. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.

  9. Modeling Languages Refine Vehicle Design

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Cincinnati, Ohio's TechnoSoft Inc. is a leading provider of object-oriented modeling and simulation technology used for commercial and defense applications. With funding from Small Business Innovation Research (SBIR) contracts issued by Langley Research Center, the company continued development on its adaptive modeling language, or AML, originally created for the U.S. Air Force. TechnoSoft then created what is now known as its Integrated Design and Engineering Analysis Environment, or IDEA, which can be used to design a variety of vehicles and machinery. IDEA's customers include clients in green industries, such as designers for power plant exhaust filtration systems and wind turbines.

  10. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    PubMed

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.
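
    The recursive time-stepping on the grid hierarchy can be pictured as follows (a schematic with placeholder single-grid and synchronization steps; names are illustrative, not the authors' code):

        def step_grid(grid, dt):
            # placeholder for the single-grid advance (the total-velocity
            # splitting step described above); here it only tracks time
            grid["t"] = grid.get("t", 0.0) + dt

        def synchronize(coarse, fine):
            # placeholder for flux correction / averaging fine data onto coarse
            pass

        def advance(level, dt, grids, ratio=2):
            # advance this level, subcycle the finer level, then synchronize
            step_grid(grids[level], dt)
            if level + 1 < len(grids):
                for _ in range(ratio):
                    advance(level + 1, dt / ratio, grids, ratio)
                synchronize(grids[level], grids[level + 1])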

  11. A Parallel Second-Order Adaptive Mesh Algorithm for Incompressible Flow in Porous Media

    SciTech Connect

    Pau, George Shu Heng; Almgren, Ann S.; Bell, John B.; Lijewski, Michael J.

    2008-04-01

    In this paper we present a second-order accurate adaptive algorithm for solving multiphase, incompressible flows in porous media. We assume a multiphase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting the total velocity, defined to be the sum of the phase velocities, is divergence-free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behavior of the method.

  12. North Dakota Refining Capacity Study

    SciTech Connect

    Dennis Hill; Kurt Swenson; Carl Tuura; Jim Simon; Robert Vermette; Gilberto Marcha; Steve Kelly; David Wells; Ed Palmer; Kuo Yu; Tram Nguyen; Juliam Migliavacca

    2011-01-05

    According to a 2008 report issued by the United States Geological Survey, North Dakota and Montana have an estimated 3.0 to 4.3 billion barrels of undiscovered, technically recoverable oil in an area known as the Bakken Formation. With the size and remoteness of the discovery, the question became 'can a business case be made for increasing refining capacity in North Dakota?' and, if so, what is the impact on existing players in the region? To answer the question, a study committee comprised of leaders in the region's petroleum industry was brought together to define the scope of the study, hire a consulting firm and oversee the study. The study committee met frequently to provide input on the findings and modify the course of the study, as needed. The study concluded that Petroleum Administration for Defense District II (PADD II) has an oversupply of gasoline. With that in mind, a niche market, naphtha, was identified. Naphtha is used as a diluent for pipelining bitumen (heavy crude) from Canada to crude markets. The study predicted there will continue to be an increase in the demand for naphtha through 2030. The study estimated the optimal configuration for the refinery at 34,000 barrels per day (BPD), producing 15,000 BPD of naphtha, with a 52 percent refinery charge for jet and diesel yield. The financial modeling assumed the sponsor of a refinery would invest its own capital to pay for construction costs. With this assumption, the internal rate of return is 9.2 percent, which is not sufficient to attract traditional investment given the risk factor of the project. With that in mind, those interested in pursuing this niche market will need to identify incentives to improve the rate of return.

  13. Increasingly automated procedure acquisition in dynamic systems

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie; Kedar, Smadar

    1992-01-01

    Procedures are widely used by operators for controlling complex dynamic systems. Currently, most development of such procedures is done manually, consuming a large amount of paper, time, and manpower in the process. While automated knowledge acquisition is an active field of research, not much attention has been paid to the problem of computer-assisted acquisition and refinement of complex procedures for dynamic systems. This paper presents the Procedure Acquisition for Reactive Control Assistant (PARC), which is designed to assist users in more systematically and automatically encoding and refining complex procedures. PARC is able to elicit knowledge interactively from the user during operation of the dynamic system. We categorize procedure refinement into two stages: diagnosis (diagnose the failure and choose a repair) and repair (plan and perform the repair). The basic approach taken in PARC is to assist the user in all steps of this process by providing increased levels of assistance with layered tools. We illustrate the operation of PARC in refining procedures for the control of a robot arm.

  14. Shading-based DEM refinement under a comprehensive imaging model

    NASA Astrophysics Data System (ADS)

    Peng, Jianwei; Zhang, Yi; Shan, Jie

    2015-12-01

    This paper introduces an approach to refine coarse digital elevation models (DEMs) based on the shape-from-shading (SfS) technique using a single image. Different from previous studies, this approach is designed for heterogeneous terrain and derived from a comprehensive (extended) imaging model accounting for the combined effect of atmosphere, reflectance, and shading. To solve this intrinsically ill-posed problem, the least squares method and a subsequent optimization procedure are applied to estimate the shading component, from which the terrain gradient is recovered with a modified optimization method. Integrating the resultant gradients then yields a refined DEM at the same resolution as the input image. The proposed SfS method is evaluated using 30 m Landsat-8 OLI multispectral images and 30 m SRTM DEMs. As demonstrated in this paper, the proposed approach is able to reproduce terrain structures with higher fidelity and, at medium to large up-scale ratios, can achieve elevation accuracy 20-30% better than conventional interpolation methods. Further, this property is shown to be stable and independent of topographic complexity. With the ever-increasing public availability of satellite images and DEMs, the developed technique is meaningful for global or local DEM product refinement.

  15. Protein NMR structures refined without NOE data.

    PubMed

    Ryu, Hyojung; Kim, Tae-Rae; Ahn, SeonJoo; Ji, Sunyoung; Lee, Jinhyuk

    2014-01-01

    The refinement of low-quality structures is an important challenge in protein structure prediction. Many studies have been conducted on protein structure refinement; the refinement of structures derived from NMR spectroscopy has been especially intensively studied. In this study, we generated a flat-bottom distance potential instead of using NOE data, because NOE data contain ambiguity and uncertainty. The potential was derived from distance information in the given structures and prevented structural dislocation during the refinement process. A simulated annealing protocol was used to minimize the potential energy of the structure. The protocol was tested on 134 NMR structures in the Protein Data Bank (PDB) that also have X-ray structures. Among them, 50 structures were used as a training set to find the optimal "width" parameter of the flat-bottom distance potential functions. In the validation set (the other 84 structures), most of the 12 quality assessment scores of the refined structures were significantly improved (total score increased from 1.215 to 2.044). Moreover, the secondary structure similarity of the refined structures was improved over that of the original structures. Finally, we demonstrate that the combination of two energy potentials, the statistical torsion angle potential (STAP) and the flat-bottom distance potential, can drive the refinement of NMR structures.
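
    The flat-bottom restraint itself is simple: zero penalty inside a tolerance band around the reference distance, harmonic growth outside it. A sketch (the exact functional form and force constant are assumptions; the paper tunes the "width" on its 50-structure training set):

        import numpy as np

        def flat_bottom_potential(d, d0, width, k=1.0):
            # zero inside |d - d0| <= width, quadratic outside the band
            excess = np.maximum(np.abs(d - d0) - width, 0.0)
            return k * excess**2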

  16. Firing of pulverized solvent refined coal

    DOEpatents

    Lennon, Dennis R.; Snedden, Richard B.; Foster, Edward P.; Bellas, George T.

    1990-05-15

    A burner for the firing of pulverized solvent refined coal is constructed and operated such that the solvent refined coal can be fired successfully without any performance limitations and without coking of the solvent refined coal on the burner components. The burner is provided with a tangential inlet for primary air and pulverized fuel, a vaned diffusion swirler for the mixture of primary air and fuel, a center water-cooled conical diffuser shielding the incoming fuel from the heat radiation of the flame and deflecting the primary air and fuel stream into the secondary air, and a water-cooled annulus located between the primary and secondary air flows.

  17. Refining of metallurgical-grade silicon

    NASA Technical Reports Server (NTRS)

    Dietl, J.

    1986-01-01

    A basic requirement of large-scale solar cell fabrication is to provide low-cost base material. Unconventional refining of metallurgical-grade silicon represents one of the most promising routes for silicon meltstock processing. The refining concept is based on an optimized combination of metallurgical treatments. Commercially available crude silicon, in this sequence, requires a first pyrometallurgical step of slagging or, alternatively, solvent extraction by aluminum. After grinding and leaching, high purity is attained as an advanced stage of refinement. To reach solar-grade quality, a final pyrometallurgical step is needed: liquid-gas extraction.

  18. Developing Competency in Payroll Procedures

    ERIC Educational Resources Information Center

    Jackson, Allen L.

    1975-01-01

    The author describes a sequence of units that provides for competency in payroll procedures. The units could be the basis for a four to six week minicourse and are adaptable, so that the student, upon completion, will be able to apply his learning to any payroll procedures system. (Author/AJ)

  19. Adaptive wall technology for minimization of wall interferences in transonic wind tunnels

    NASA Technical Reports Server (NTRS)

    Wolf, Stephen W. D.

    1988-01-01

    Modern experimental techniques to improve free air simulations in transonic wind tunnels by use of adaptive wall technology are reviewed. Considered are the significant advantages of adaptive wall testing techniques with respect to wall interferences, Reynolds number, tunnel drive power, and flow quality. The application of these testing techniques relies on making the test section boundaries adjustable and using a rapid wall adjustment procedure. A historical overview shows how the disjointed development of these testing techniques, since 1938, is closely linked to available computer support. An overview of Adaptive Wall Test Section (AWTS) designs shows a preference for use of relatively simple designs with solid adaptive walls in 2- and 3-D testing. Operational aspects of AWTS's are discussed with regard to production type operation where adaptive wall adjustments need to be quick. Both 2- and 3-D data are presented to illustrate the quality of AWTS data over the transonic speed range. Adaptive wall technology is available for general use in 2-D testing, even in cryogenic wind tunnels. In 3-D testing, more refinement of the adaptive wall testing techniques is required before more widespread use can be planned.

  20. Refined Phenotyping of Modic Changes

    PubMed Central

    Määttä, Juhani H.; Karppinen, Jaro; Paananen, Markus; Bow, Cora; Luk, Keith D.K.; Cheung, Kenneth M.C.; Samartzis, Dino

    2016-01-01

    The strength of the associations increased with the number of MC. This large-scale study is the first to definitively note MC types and specific morphologies to be independently associated with prolonged severe LBP and back-related disability. This proposed refined MC phenotype may have direct implications in clinical decision-making as to the development and management of LBP. Understanding of these imaging biomarkers can lead to new preventative and personalized therapeutics related to LBP. PMID:27258491

  1. On-Orbit Model Refinement for Controller Redesign

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Calise, Anthony J.

    1998-01-01

    High performance control design for a flexible space structure is challenging since high fidelity plant models are difficult to obtain a priori. Uncertainty in the control design models typically require a very robust, low performance control design which must be tuned on-orbit to achieve the required performance. A new procedure for refining a multivariable open loop plant model based on closed-loop response data is presented. Using a minimal representation of the state space dynamics, a least squares prediction error method is employed to estimate the plant parameters. This control-relevant system identification procedure stresses the joint nature of the system identification and control design problem by seeking to obtain a model that minimizes the difference between the predicted and actual closed-loop performance. This paper presents an algorithm for iterative closed-loop system identification and controller redesign along with illustrative examples.
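
    The least-squares prediction-error idea can be illustrated with a generic ARX model fit (a sketch only; the model structure and names are assumptions, not the paper's minimal state-space representation):

        import numpy as np

        def arx_least_squares(y, u, na=2, nb=2):
            # fit y[k] = -a1*y[k-1] - ... - a_na*y[k-na]
            #          +  b1*u[k-1] + ... + b_nb*u[k-nb]
            # by minimizing the one-step prediction error
            n = max(na, nb)
            rows, targets = [], []
            for k in range(n, len(y)):
                row = [-y[k - i] for i in range(1, na + 1)]
                row += [u[k - i] for i in range(1, nb + 1)]
                rows.append(row)
                targets.append(y[k])
            theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
            return theta  # [a1, ..., a_na, b1, ..., b_nb]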

  2. U.S. Refining Capacity Utilization

    EIA Publications

    1995-01-01

    This article briefly reviews recent trends in domestic refining capacity utilization and examines in detail the differences in reported crude oil distillation capacities and utilization rates among different classes of refineries.

  3. 1991 worldwide refining and gas processing directory

    SciTech Connect

    Not Available

    1990-01-01

    This book is an authority for immediate information on the industry. You can use it to find new business, analyze market trends, and stay in touch with existing contacts while making new ones. The possibilities for business applications are numerous. Arranged by country, all listings in the directory include address, phone, fax and telex numbers, a description of the company's activities, names of key personnel and their titles, corporate headquarters, branch offices and plant sites. This newly revised edition lists more than 2000 companies and nearly 3000 branch offices and plant locations. This easy-to-use reference also includes several of the most vital and informative surveys of the industry, including the U.S. Refining Survey, the Worldwide Construction Survey in Refining, Sulfur, Gas Processing and Related Fuels, the Worldwide Refining and Gas Processing Survey, the Worldwide Catalyst Report, and the U.S. and Canadian Lube and Wax Capacities Report from the National Petroleum Refiners Association.

  4. Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)

    2000-01-01

    This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever sufficient order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme supplants the usual exact single-state mean-value linearization of flux divergence, typically used for the Euler equations of gasdynamics, by an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier so that correct weak solutions are still obtained in the limit of mesh refinement. Consequently, we then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities together with multi-level mesh refinement are provided to verify the analysis.
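
    Schematically, the modification replaces the exact mean-value linearization of the element residual by a quadrature approximation of the flux-divergence integral (generic notation, not the authors'):

        \[
          \phi^{T} \;=\; \int_{T} \nabla \cdot \mathbf{F}(u)\, d\Omega
          \;\approx\; \sum_{q} w_q \,\big(\nabla \cdot \mathbf{F}(u)\big)(\mathbf{x}_q),
        \]

    with weights w_q and points \mathbf{x}_q supplied by the adaptive quadrature rule; per the analysis above, the resulting conservation error stays below the discretization error, so correct weak solutions are still recovered under mesh refinement.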

  5. Adaptive sparse grid expansions of the vibrational Hamiltonian.

    PubMed

    Strobusch, D; Scheurer, Ch

    2014-02-21

    The vibrational Hamiltonian involves two high dimensional operators, the kinetic energy operator (KEO), and the potential energy surface (PES). Both must be approximated for systems involving more than a few atoms. Adaptive approximation schemes are not only superior to truncated Taylor or many-body expansions (MBE), they also allow for error estimates, and thus operators of predefined precision. To this end, modified sparse grids (SG) are developed that can be combined with adaptive MBEs. This MBE/SG hybrid approach yields a unified, fully adaptive representation of the KEO and the PES. Refinement criteria, based on the vibrational self-consistent field (VSCF) and vibrational configuration interaction (VCI) methods, are presented. The combination of the adaptive MBE/SG approach and the VSCF plus VCI methods yields a black box like procedure to compute accurate vibrational spectra. This is demonstrated on a test set of molecules, comprising water, formaldehyde, methanimine, and ethylene. The test set is first employed to prove convergence for semi-empirical PM3-PESs and subsequently to compute accurate vibrational spectra from CCSD(T)-PESs that agree well with experimental values.

  6. Adaptive sparse grid expansions of the vibrational Hamiltonian

    SciTech Connect

    Strobusch, D.; Scheurer, Ch.

    2014-02-21

    The vibrational Hamiltonian involves two high dimensional operators, the kinetic energy operator (KEO), and the potential energy surface (PES). Both must be approximated for systems involving more than a few atoms. Adaptive approximation schemes are not only superior to truncated Taylor or many-body expansions (MBE), they also allow for error estimates, and thus operators of predefined precision. To this end, modified sparse grids (SG) are developed that can be combined with adaptive MBEs. This MBE/SG hybrid approach yields a unified, fully adaptive representation of the KEO and the PES. Refinement criteria, based on the vibrational self-consistent field (VSCF) and vibrational configuration interaction (VCI) methods, are presented. The combination of the adaptive MBE/SG approach and the VSCF plus VCI methods yields a black box like procedure to compute accurate vibrational spectra. This is demonstrated on a test set of molecules, comprising water, formaldehyde, methanimine, and ethylene. The test set is first employed to prove convergence for semi-empirical PM3-PESs and subsequently to compute accurate vibrational spectra from CCSD(T)-PESs that agree well with experimental values.

  7. 75 FR 33330 - Seamless Refined Copper Pipe and Tube From China and Mexico

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-11

    ... Commission's rules, as amended, 67 FR. 68036 (November 8, 2002). Even where electronic filing of a document... Commission's Handbook on Electronic Filing Procedures, 67 FR 68168, 68173 (November 8, 2002). Additional... COMMISSION Seamless Refined Copper Pipe and Tube From China and Mexico AGENCY: International Trade...

  8. Validation of a simplified field-adapted procedure for routine determinations of methyl mercury at trace levels in natural water samples using species-specific isotope dilution mass spectrometry.

    PubMed

    Lambertsson, Lars; Björn, Erik

    2004-12-01

    A field-adapted procedure based on species-specific isotope dilution (SSID) methodology for trace-level determinations of methyl mercury (CH(3)Hg(+)) in mire, fresh and sea water samples was developed, validated and applied in a field study. In the field study, mire water samples were filtered, standardised volumetrically with isotopically enriched CH(3) (200)Hg(+), and frozen on dry ice. The samples were derivatised in the laboratory without further pre-treatment using sodium tetraethyl borate (NaB(C(2)H(5))(4)) and the ethylated methyl mercury was purge-trapped on Tenax columns. The analyte was thermo-desorbed onto a GC-ICP-MS system for analysis. Investigations preceding field application of the method showed that when using SSID, for all tested matrices, identical results were obtained between samples that were freeze-preserved or analysed unpreserved. For DOC-rich samples (mire water) additional experiments showed no difference in CH(3)Hg(+) concentration between samples that were derivatised without pre-treatment or after liquid extraction. Extractions of samples for matrix-analyte separation prior to derivatisation are therefore not necessary. No formation of CH(3)Hg(+) was observed during sample storage and treatment when spiking samples with (198)Hg(2+). Total uncertainty budgets for the field application of the method showed that for analyte concentrations higher than 1.5 pg g(-1) (as Hg) the relative expanded uncertainty (REU) was approximately 5% and dominated by the uncertainty in the isotope standard concentration. Below 0.5 pg g(-1) (as Hg), the REU was >10% and dominated by variations in the field blank. The uncertainty of the method is sufficiently low to accurately determine CH(3)Hg(+) concentrations at trace levels. The detection limit was determined to be 4 fg g(-1) (as Hg) based on replicate analyses of laboratory blanks. The described procedure is reliable, considerably faster and simplified compared to non-SSID methods and thereby very
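
    For reference, the single-spike isotope dilution relation underlying SSID can be written in its standard form (generic IDMS notation; the symbols are our assumption, not the paper's):

        \[
          n_{s} \;=\; n_{sp}\, \frac{R_{sp} - R_{B}}{R_{B} - R_{s}},
        \]

    where n_s and n_sp are the amounts of analyte in the sample and of the enriched CH(3)(200)Hg(+) spike, R_sp and R_s are the isotope amount ratios (e.g. (200)Hg/(202)Hg) of the spike and the natural sample, and R_B is the ratio measured in the spiked blend.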

  9. Numerical investigation of BB-AMR scheme using entropy production as refinement criterion

    NASA Astrophysics Data System (ADS)

    Altazin, Thomas; Ersoy, Mehmet; Golay, Frédéric; Sous, Damien; Yushchenko, Lyudmyla

    2016-03-01

    In this work, a parallel finite volume scheme on unstructured meshes is applied to fluid flow for multidimensional hyperbolic systems of conservation laws. It is based on a block-based adaptive mesh refinement strategy which allows quick meshing and easy parallelisation. As a continuation and extension of previous work, the numerical density of entropy production is used as the mesh refinement criterion, combined with a local time-stepping method to keep the computational cost in check. We then numerically investigate its efficiency through several test cases, with comparisons against exact solutions or experimental data.
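
    The criterion itself reduces to thresholding a per-cell scalar. A minimal sketch (the two-threshold refine/coarsen logic is an assumed illustration of how such a criterion is typically applied, not the BB-AMR code):

        import numpy as np

        def refinement_flags(entropy_production, refine_tol, coarsen_tol):
            # entropy_production: per-cell numerical density of entropy production
            S = np.abs(entropy_production)
            refine_mask  = S > refine_tol     # blocks to subdivide
            coarsen_mask = S < coarsen_tol    # blocks that may be merged
            return refine_mask, coarsen_mask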

  10. Estimator reduction and convergence of adaptive BEM.

    PubMed

    Aurada, Markus; Ferraz-Leite, Samuel; Praetorius, Dirk

    2012-06-01

    A posteriori error estimation and related adaptive mesh-refining algorithms have proven to be powerful tools in today's scientific computing. Contrary to adaptive finite element methods, convergence of adaptive boundary element schemes is, however, widely open. We propose a relaxed notion of convergence of adaptive boundary element schemes. Instead of asking for convergence of the error to zero, we only aim to prove estimator convergence, in the sense that the adaptive algorithm drives the underlying error estimator to zero. We observe that certain error estimators satisfy an estimator reduction property which is sufficient for estimator convergence. The elementary analysis is based only on Dörfler marking and inverse estimates, not on reliability and efficiency of the error estimator at hand. In particular, our approach gives a first mathematical justification for the proposed steering of anisotropic mesh-refinements, which is mandatory for optimal convergence behavior in 3D boundary element computations.
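
    The Dörfler (bulk) marking the analysis relies on selects a minimal set of elements carrying a fixed fraction θ of the total squared error indicator; a small sketch:

        import numpy as np

        def doerfler_marking(eta, theta=0.5):
            # return the minimal set M with sum over T in M of eta_T^2
            # >= theta * sum over all T of eta_T^2
            order = np.argsort(eta**2)[::-1]          # largest indicators first
            cumulative = np.cumsum(eta[order]**2)
            m = np.searchsorted(cumulative, theta * cumulative[-1]) + 1
            return order[:m]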

  11. Estimator reduction and convergence of adaptive BEM

    PubMed Central

    Aurada, Markus; Ferraz-Leite, Samuel; Praetorius, Dirk

    2012-01-01

    A posteriori error estimation and related adaptive mesh-refining algorithms have proven to be powerful tools in today's scientific computing. Contrary to adaptive finite element methods, convergence of adaptive boundary element schemes is, however, widely open. We propose a relaxed notion of convergence of adaptive boundary element schemes. Instead of asking for convergence of the error to zero, we only aim to prove estimator convergence, in the sense that the adaptive algorithm drives the underlying error estimator to zero. We observe that certain error estimators satisfy an estimator reduction property which is sufficient for estimator convergence. The elementary analysis is based only on Dörfler marking and inverse estimates, not on reliability and efficiency of the error estimator at hand. In particular, our approach gives a first mathematical justification for the proposed steering of anisotropic mesh-refinements, which is mandatory for optimal convergence behavior in 3D boundary element computations. PMID:23482248

  12. Software for Refining or Coarsening Computational Grids

    NASA Technical Reports Server (NTRS)

    Daines, Russell; Woods, Jody

    2003-01-01

    A computer program performs calculations for refinement or coarsening of computational grids of the type called structured (signifying that they are geometrically regular and/or are specified by relatively simple algebraic expressions). This program is designed to facilitate analysis of the numerical effects of changing structured grids utilized in computational fluid dynamics (CFD) software. Unlike prior grid-refinement and -coarsening programs, this program is not limited to doubling or halving: the user can specify any refinement or coarsening ratio, which can have a noninteger value. In addition to this ratio, the program accepts, as input, a grid file and the associated restart file, which is basically a file containing the most recent iteration of flow-field variables computed on the grid. The program then refines or coarsens the grid as specified, while maintaining the geometry and the stretching characteristics of the original grid. The program can interpolate from the input restart file to create a restart file for the refined or coarsened grid. The program provides a graphical user interface that facilitates the entry of input data for the grid-generation and restart-interpolation routines.
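
    One way to honor a noninteger ratio while preserving the original stretching is to resample the grid in its normalized index parameter; a 1-D sketch under that assumption (not the program's actual algorithm):

        import numpy as np

        def refine_grid_1d(x, ratio):
            # resample to round(ratio * (n - 1)) + 1 points, preserving the
            # original point distribution by interpolating in index space
            n = len(x)
            s_old = np.linspace(0.0, 1.0, n)
            n_new = int(round(ratio * (n - 1))) + 1   # ratio may be noninteger
            s_new = np.linspace(0.0, 1.0, n_new)
            return np.interp(s_new, s_old, x)

    A ratio below 1 coarsens the grid in the same way; higher-order interpolation would better preserve strong stretching.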

  13. Software for Refining or Coarsening Computational Grids

    NASA Technical Reports Server (NTRS)

    Daines, Russell; Woods, Jody

    2002-01-01

    A computer program performs calculations for refinement or coarsening of computational grids of the type called 'structured' (signifying that they are geometrically regular and/or are specified by relatively simple algebraic expressions). This program is designed to facilitate analysis of the numerical effects of changing structured grids utilized in computational fluid dynamics (CFD) software. Unlike prior grid-refinement and -coarsening programs, this program is not limited to doubling or halving: the user can specify any refinement or coarsening ratio, which can have a noninteger value. In addition to this ratio, the program accepts, as input, a grid file and the associated restart file, which is basically a file containing the most recent iteration of flow-field variables computed on the grid. The program then refines or coarsens the grid as specified, while maintaining the geometry and the stretching characteristics of the original grid. The program can interpolate from the input restart file to create a restart file for the refined or coarsened grid. The program provides a graphical user interface that facilitates the entry of input data for the grid-generation and restart-interpolation routines.

  14. Software for Refining or Coarsening Computational Grids

    NASA Technical Reports Server (NTRS)

    Daines, Russell; Woods, Jody

    2002-01-01

    A computer program performs calculations for refinement or coarsening of computational grids of the type called "structured" (signifying that they are geometrically regular and/or are specified by relatively simple algebraic expressions). This program is designed to facilitate analysis of the numerical effects of changing structured grids utilized in computational fluid dynamics (CFD) software. Unlike prior grid-refinement and -coarsening programs, this program is not limited to doubling or halving: the user can specify any refinement or coarsening ratio, which can have a noninteger value. In addition to this ratio, the program accepts, as input, a grid file and the associated restart file, which is basically a file containing the most recent iteration of flow-field variables computed on the grid. The program then refines or coarsens the grid as specified, while maintaining the geometry and the stretching characteristics of the original grid. The program can interpolate from the input restart file to create a restart file for the refined or coarsened grid. The program provides a graphical user interface that facilitates the entry of input data for the grid-generation and restart-interpolation routines.

  15. Parallel tetrahedral mesh refinement with MOAB.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2008-12-01

    In this report, we present the novel functionality of parallel tetrahedral mesh refinement which we have implemented in MOAB. This report details work done to implement parallel, edge-based, tetrahedral refinement into MOAB. The theoretical basis for this work is contained in [PT04, PT05, TP06] while information on design, performance, and operation specific to MOAB are contained herein. As MOAB is intended mainly for use in pre-processing and simulation (as opposed to the post-processing bent of previous papers), the primary use case is different: rather than refining elements with non-linear basis functions, the goal is to increase the number of degrees of freedom in some region in order to more accurately represent the solution to some system of equations that cannot be solved analytically. Also, MOAB has a unique mesh representation which impacts the algorithm. This introduction contains a brief review of streaming edge-based tetrahedral refinement. The remainder of the report is broken into three sections: design and implementation, performance, and conclusions. Appendix A contains instructions for end users (simulation authors) on how to employ the refiner.

  16. Software abstractions and computational issues in parallel structure adaptive mesh methods for electronic structure calculations

    SciTech Connect

    Kohn, S.; Weare, J.; Ong, E.; Baden, S.

    1997-05-01

    We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradient with FAC multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.

  17. 40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2...

  18. 40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2...

  19. Adapted Canoeing for the Handicapped.

    ERIC Educational Resources Information Center

    Frith, Greg H.; Warren, L. D.

    1984-01-01

    Safety as well as instructional recommendations are offered for adapting canoeing as a recreational activity for handicapped students. Major steps of the instructional program feature orientation to the water and canoe, entry and exit techniques, and mobility procedures. (CL)

  20. Refining Linear Fuzzy Rules by Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Khedkar, Pratap S.; Malkani, Anil

    1996-01-01

    Linear fuzzy rules are increasingly being used in the development of fuzzy logic systems. Radial basis functions have also been used in the antecedents of the rules for clustering in product space which can automatically generate a set of linear fuzzy rules from an input/output data set. Manual methods are usually used in refining these rules. This paper presents a method for refining the parameters of these rules using reinforcement learning which can be applied in domains where supervised input-output data is not available and reinforcements are received only after a long sequence of actions. This is shown for a generalization of radial basis functions. The formation of fuzzy rules from data and their automatic refinement is an important step in closing the gap between the application of reinforcement learning methods in the domains where only some limited input-output data is available.

  1. Metal decontamination for waste minimization using liquid metal refining technology

    SciTech Connect

    Joyce, E.L. Jr.; Lally, B.; Ozturk, B.; Fruehan, R.J.

    1993-09-01

    The current Department of Energy Mixed Waste Treatment Project flowsheet indicates that no conventional technology, other than surface decontamination, exists for metal processing. Current Department of Energy guidelines require retrievable storage of all metallic wastes containing transuranic elements above a certain concentration. This project is in support of the National Mixed Low Level Waste Treatment Program. Because of the high cost of disposal, it is important to develop an effective decontamination and volume reduction method for low-level contaminated metals. It is important to be able to decontaminate complex shapes whose surfaces are hidden or inaccessible to surface decontamination processes, and to destroy organic contamination. These goals can be achieved by adapting commercial metal refining processes to handle radioactive and organic contaminated metal. The radioactive components are concentrated in the slag, which is subsequently vitrified; hazardous organics are destroyed by the intense heat of the bath. The metal, after having been melted and purified, could be recycled for use within the DOE complex. In this project, we evaluated current state-of-the-art technologies for metal refining, with special reference to the removal of radioactive contaminants and the destruction of hazardous organics. This evaluation was based on literature reports, industrial experience, plant visits, thermodynamic calculations, and engineering aspects of the various processes. The key issues addressed included radioactive partitioning between the metal and slag phases, minimization of secondary wastes, operability of the process subject to widely varying feed chemistry, and the ability to seal the candidate process to prevent the release of hazardous species.

  2. Adaptive Sampling Designs.

    ERIC Educational Resources Information Center

    Flournoy, Nancy

    Designs for sequential sampling procedures that adapt to cumulative information are discussed. A familiar illustration is the play-the-winner rule in which there are two treatments; after a random start, the same treatment is continued as long as each successive subject registers a success. When a failure occurs, the other treatment is used until…
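
    A minimal simulation of the play-the-winner rule described above (the success probabilities are invented for illustration):

      import random

      def play_the_winner(p_success, n_subjects, seed=1):
          """Keep the current treatment after each success; switch to the
          other treatment after each failure."""
          rng = random.Random(seed)
          arm = rng.randrange(2)             # random start
          assignments = []
          for _ in range(n_subjects):
              assignments.append(arm)
              if rng.random() >= p_success[arm]:
                  arm = 1 - arm              # failure: switch treatments
          return assignments

      # The better treatment (70% vs. 40% success) is used more often.
      assignments = play_the_winner([0.4, 0.7], 1000)
      print(sum(assignments) / len(assignments))   # fraction on arm 1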

  3. Prism Adaptation in Schizophrenia

    ERIC Educational Resources Information Center

    Bigelow, Nirav O.; Turner, Beth M.; Andreasen, Nancy C.; Paulsen, Jane S.; O'Leary, Daniel S.; Ho, Beng-Choon

    2006-01-01

    The prism adaptation test examines procedural learning (PL) in which performance facilitation occurs with practice on tasks without the need for conscious awareness. Dynamic interactions between frontostriatal cortices, basal ganglia, and the cerebellum have been shown to play key roles in PL. Disruptions within these neural networks have also…

  4. Adaptive Physical Education.

    ERIC Educational Resources Information Center

    Muller, Robert M.

    GRADES OR AGES: Elementary grades. SUBJECT MATTER: Adaptive physical education. ORGANIZATION AND PHYSICAL APPEARANCE: The aims and objectives of the program and the screening procedure are described. Common postural deviations are identified and a number of congenital and other defects described. Details of the modified program are given. There is…

  5. High Quality Visual Hull Reconstruction by Delaunay Refinement

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Gavrilova, Marina L.

    In this paper, we employ Delaunay triangulation techniques to reconstruct high quality visual hulls. From a set of calibrated images, the algorithm first computes a sparse set of initial points with a dandelion model and builds a Delaunay triangulation restricted to the visual hull surface. It then iteratively refines the triangulation by inserting new sampling points, which are the intersections between the visual hull surface and the Voronoi edges dual to the triangulation's facets, until certain criteria are satisfied. The intersections are computed by cutting line segments with the visual hull, which reduces to the problem of intersecting a line segment with polygonal contours in 2D. A barrel-grid structure is developed to quickly pick out possibly intersecting contour segments and thus accelerate the 2D intersection process. Our algorithm is robust, fast, and fully adaptive, and it produces precise and smooth mesh models composed of well-shaped triangles.

  6. Using supercritical fluids to refine hydrocarbons

    DOEpatents

    Yarbro, Stephen Lee

    2015-06-09

    A system and method for reactively refining hydrocarbons, such as heavy oils with API gravities of less than 20 degrees and bitumen-like hydrocarbons with viscosities greater than 1000 cp at standard temperature and pressure, using a selected fluid at supercritical conditions. A reaction portion of the system and method delivers lightweight, volatile hydrocarbons to an associated contacting unit which operates in mixed subcritical/supercritical or supercritical modes. Using thermal diffusion, multiphase contact, or a momentum generating pressure gradient, the contacting unit separates the reaction products into portions that are viable for use or sale without further conventional refining and hydro-processing techniques.

  7. Unstructured adaptive mesh computations of rotorcraft high-speed impulsive noise

    NASA Technical Reports Server (NTRS)

    Strawn, Roger; Garceau, Michael; Biswas, Rupak

    1993-01-01

    A new method is developed for modeling helicopter high-speed impulsive (HSI) noise. The aerodynamics and acoustics near the rotor blade tip are computed by solving the Euler equations on an unstructured grid. A stationary Kirchhoff surface integral is then used to propagate these acoustic signals to the far field. The near-field Euler solver uses a solution-adaptive grid scheme to improve the resolution of the acoustic signal. Grid points are locally added and/or deleted from the mesh at each adaptive step. An important part of this procedure is the choice of an appropriate error indicator. The error indicator is computed from the flow field solution and determines the regions for mesh coarsening and refinement. Computed results for HSI noise compare favorably with experimental data for three different hovering rotor cases.
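
    The abstract does not reproduce the exact form of the indicator, so the sketch below uses a generic gradient-based indicator on a 1-D field to show how cells are flagged for refinement and coarsening; the quantile thresholds are assumptions for the example.

      import numpy as np

      def flag_cells(q, x, refine_frac=0.1, coarsen_frac=0.4):
          """Flag cells from a simple gradient-based error indicator
          (a stand-in for the flow-field indicator described above)."""
          indicator = np.abs(np.diff(q) / np.diff(x))     # one value per cell
          hi = np.quantile(indicator, 1.0 - refine_frac)  # largest gradients
          lo = np.quantile(indicator, coarsen_frac)       # smallest gradients
          return indicator >= hi, indicator <= lo

      x = np.linspace(0.0, 1.0, 101)
      q = np.tanh((x - 0.5) / 0.02)        # sharp, acoustic-like front
      refine, coarsen = flag_cells(q, x)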

  8. An adaptive level set method

    SciTech Connect

    Milne, R.B.

    1995-12-01

    This thesis describes a new method for the numerical solution of partial differential equations of the parabolic type on an adaptively refined mesh in two or more spatial dimensions. The method is motivated and developed in the context of the level set formulation for the curvature dependent propagation of surfaces in three dimensions. In that setting, it realizes the multiple advantages of decreased computational effort, localized accuracy enhancement, and compatibility with problems containing a range of length scales.

  9. Refiners boost crude capacity; Petrochemical production up

    SciTech Connect

    Corbett, R.A.

    1988-03-21

    Continuing demand strength in refined products and petrochemical markets caused refiners to boost crude-charging capacity slightly again last year, and petrochemical producers to increase production worldwide. Product demand strength is, in large part, due to stable product prices resulting from a stabilization of crude oil prices. Crude prices strengthened somewhat in 1987. That, coupled with fierce product competition, unfortunately drove refining margins negative in many regions of the U.S. during the last half of 1987. But with continued strong demand for gasoline, and an increased demand for higher octane gasoline, margins could turn positive by 1989 and remain so for a few years. U.S. refiners also had to have facilities in place to meet the final requirements of the U.S. Environmental Protection Agency's lead phase-down rules on Jan. 1, 1988. In petrochemicals, plastics demand kept basic petrochemical plants at good utilization levels worldwide. U.S. production of basics such as ethylene and propylene showed solid increases. Many of the derivatives of the basic petrochemical products also showed good production gains. Increased petrochemical production and high plant utilization rates didn't spur plant construction projects, however. Worldwide petrochemical plant projects declined slightly from 1986 figures.

  10. Refiners respond to strategic driving forces

    SciTech Connect

    Gonzalez, R.G.

    1996-05-01

    Better days should lie ahead for the international refining industry. While political unrest, lingering uncertainty regarding environmental policies, slowing world economic growth, overcapacity, and a poor image will continue to plague the industry, margins in most areas appear to have bottomed out. Current margins, and even modestly improved margins, do not cover the cost of capital on certain equipment nor provide the returns necessary to achieve reinvestment economics. Refiners must determine how to improve the financial performance of their assets given this reality. Low margins and returns are generally characteristic of mature industries. Many of the business strategies employed by emerging businesses are no longer viable for refiners. The cost-cutting programs of the '90s have mainly been realized, leaving little to be gained from further reduction. Consequently, refiners will have to concentrate on increasing efficiency and delivering higher value products to survive. Rather than focusing solely on their competition, companies will emphasize substantial improvements in their own operations to achieve financial targets. This trend is clearly shown by the growing reliance on benchmarking services.

  11. Energy Bandwidth for Petroleum Refining Processes

    SciTech Connect

    none,

    2006-10-01

    The petroleum refining energy bandwidth report analyzes the most energy-intensive unit operations used in U.S. refineries: crude oil distillation, fluid catalytic cracking, catalytic hydrotreating, catalytic reforming, and alkylation. The "bandwidth" provides a snapshot of the energy losses that can potentially be recovered through best practices and technology R&D.

  12. Refining aggregate exposure: example using parabens.

    PubMed

    Cowan-Ellsberry, Christina E; Robison, Steven H

    2009-12-01

    The need to understand and estimate quantitatively the aggregate exposure to ingredients used broadly in a variety of product types continues to grow. Currently, aggregate exposure is most commonly estimated by using a very simplistic approach of adding or summing the exposures from all the individual product types in which the chemical is used. However, the more broadly the ingredient is used in related consumer products, the more likely this summation will result in an unrealistic estimate of exposure, because individuals in the population vary in their patterns of product use, including co-use and non-use. Furthermore, the ingredient may not be used in all products of a given type. An approach is described for refining this aggregate exposure using data on (1) co-use and non-use patterns of product use, (2) extent of products in which the ingredient is used, and (3) dermal penetration and metabolism. This approach and the relative refinement in the aggregate exposure from incorporating these data are illustrated using methyl, n-propyl, n-butyl and ethyl parabens, the most widely used preservative system in personal care and cosmetic products. When these refining factors were used, the aggregate exposure compared to the simple addition approach was reduced by 51%, 58%, 90% and 92% for methyl, n-propyl, n-butyl and ethyl parabens, respectively. Since biomonitoring integrates all sources and routes of exposure, the estimates using this approach were compared to available paraben biomonitoring data. Comparison to the 95th percentile of these data showed that these refined estimates were still conservative by factors of 2-92. All of our refined estimates of aggregate exposure are less than the ADI of 10 mg/kg/day for parabens.
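
    A toy calculation in the spirit of the approach: aggregate exposure is first formed by simple addition, then refined with co-use probabilities, market-share factors, and dermal penetration. Every number below is invented for illustration; none is taken from the paraben data in the study.

      # Hypothetical inputs: per-product exposure (mg/kg/day), probability
      # of co-use, and the fraction of products containing the ingredient.
      products = {
          "lotion":    (0.40, 0.9, 0.6),
          "shampoo":   (0.15, 0.7, 0.5),
          "deodorant": (0.25, 0.5, 0.3),
      }
      dermal_penetration = 0.3             # absorbed fraction (hypothetical)

      simple_sum = sum(e for e, _, _ in products.values())
      refined = sum(e * co_use * share for e, co_use, share
                    in products.values()) * dermal_penetration

      print(f"simple addition: {simple_sum:.3f} mg/kg/day")
      print(f"refined:         {refined:.3f} mg/kg/day")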

  13. Refining industry trends: Europe and surroundings

    SciTech Connect

    Guariguata, U.G.

    1997-05-01

    The European refining industry, along with its counterparts, is struggling with low profitability due to excess primary and conversion capacity, high operating costs, and impending stringent environmental regulations that will require significant investments with hard-to-justify returns. This region was also faced in the early 1980s with excess capacity on the order of 4 MMb/d while satisfying the demand of that period by operating at very low utilization rates (60%). As was the case in the US, the rebalancing of capacity led to the closure of some 51 refineries. Since the early 1990s, the increase in demand growth has essentially balanced capacity, and utilization rates have settled around 90%. During the last two decades, the major oil companies have reduced their presence in the European refining sector, giving some state oil companies and producing countries the opportunity to gain access to the consumer market through the purchase of refining capacity in various countries: specifically, Kuwait in Italy; Libya and Venezuela in Germany; and Norway in other areas of Scandinavia. Although the market share for this new cast of characters remains small (4%) relative to participation by the majors (35%), their involvement in the European refining business set the foundation whereby US independent refiners relinquished control over assets that could not be operated profitably as part of a previous vertically integrated structure, unless access to the crude was ensured. The passage of time still seems to render this model valid.

  14. Curved mesh generation and mesh refinement using Lagrangian solid mechanics

    SciTech Connect

    Persson, P.-O.; Peraire, J.

    2008-12-31

    We propose a method for generating well-shaped curved unstructured meshes using a nonlinear elasticity analogy. The geometry of the domain to be meshed is represented as an elastic solid. The undeformed geometry is the initial mesh of linear triangular or tetrahedral elements. The external loading results from prescribing a boundary displacement to be that of the curved geometry, and the final configuration is determined by solving for the equilibrium configuration. The deformations are represented using piecewise polynomials within each element of the original mesh. When the mesh is sufficiently fine to resolve the solid deformation, this method guarantees non-intersecting elements even for highly distorted or anisotropic initial meshes. We describe the method and the solution procedures, and we show a number of examples of two and three dimensional simplex meshes with curved boundaries. We also demonstrate how to use the technique for local refinement of non-curved meshes in the presence of curved boundaries.
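
    The paper solves a nonlinear elasticity problem; the sketch below substitutes the simplest linear stand-in (Laplacian relaxation with pinned boundary nodes) purely to show the structure of "prescribe a boundary displacement, solve for the interior". The tiny mesh and displacements are hypothetical.

      import numpy as np

      def relax(nodes, neighbors, boundary_disp, n_iter=200):
          """Interior nodes relax to the average of their neighbors while
          boundary nodes are pinned at their displaced (curved) positions."""
          pos = {k: np.array(v, float) for k, v in nodes.items()}
          for k, d in boundary_disp.items():
              pos[k] = pos[k] + np.array(d, float)   # apply boundary motion
          for _ in range(n_iter):
              for k in nodes:
                  if k not in boundary_disp:         # interior nodes only
                      pos[k] = np.mean([pos[n] for n in neighbors[k]], axis=0)
          return pos

      nodes = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1), 4: (0.5, 0.5)}
      neighbors = {4: [0, 1, 2, 3]}
      boundary_disp = {0: (0, 0), 1: (0, 0), 2: (0, 0.2), 3: (0, 0.2)}
      print(relax(nodes, neighbors, boundary_disp)[4])  # center follows warp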

  15. Adaptive Management

    EPA Science Inventory

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...

  16. HUMAN RELIABILITY ANALYSIS FOR COMPUTERIZED PROCEDURES

    SciTech Connect

    Ronald L. Boring; David I. Gertman; Katya Le Blanc

    2011-09-01

    This paper provides a characterization of human reliability analysis (HRA) issues for computerized procedures in nuclear power plant control rooms. It is beyond the scope of this paper to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper provides a review of HRA as applied to traditional paper-based procedures, followed by a discussion of what specific factors should additionally be considered in HRAs for computerized procedures. Performance shaping factors and failure modes unique to computerized procedures are highlighted. Since there is no definitive guide to HRA for paper-based procedures, this paper also serves to clarify the existing guidance on paper-based procedures before delving into the unique aspects of computerized procedures.

  17. Efficient triangular adaptive meshes for tsunami simulations

    NASA Astrophysics Data System (ADS)

    Behrens, J.

    2012-04-01

    With improving technology and increased sensor density for accurate determination of tsunamigenic earthquake source parameters and, subsequently, uplift distribution, real-time simulation of even near-field tsunami hazard appears feasible in the near future. In order to support such efforts, a new generation of tsunami models is currently under development. These models comprise adaptively refined meshes, in order to save computational resources (in areas of low wave activity) and still represent the inherently multi-scale behavior of a tsunami approaching coastal waters. So far, these methods have been based on quadtree-type quadrilateral refinement. The method introduced here is based on binary-tree refinement on triangular grids. By utilizing the structure stemming from the refinement strategy, a very efficient method can be achieved, with a triangular mesh able to accurately represent complex boundaries.
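
    The essence of binary-tree refinement on triangular grids is repeated bisection; a minimal longest-edge bisection step (not the authors' code) looks like this:

      import numpy as np

      def bisect(tri, verts):
          """Bisect a triangle across its longest edge, returning the two
          children; `verts` grows by the new midpoint vertex."""
          i, j, k = tri
          # (a, b) are the endpoints of the longest edge; c is opposite.
          a, b, c = max([(i, j, k), (j, k, i), (k, i, j)],
                        key=lambda e: np.linalg.norm(verts[e[0]] - verts[e[1]]))
          verts.append((verts[a] + verts[b]) / 2)    # midpoint vertex
          m = len(verts) - 1
          return [(a, m, c), (m, b, c)]

      verts = [np.array(p, float) for p in [(0, 0), (2, 0), (0, 1)]]
      children = bisect((0, 1, 2), verts)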

  18. Hierarchy-Direction Selective Approach for Locally Adaptive Sparse Grids

    SciTech Connect

    Stoyanov, Miroslav K

    2013-09-01

    We consider the problem of multidimensional adaptive hierarchical interpolation. We use sparse grid points and functions induced from a one-dimensional hierarchical rule via tensor products. The classical locally adaptive sparse grid algorithm uses an isotropic refinement from the coarser to the denser levels of the hierarchy. However, the multidimensional hierarchy provides a more complex structure that allows for various anisotropic and hierarchy-selective refinement techniques. We consider these more advanced refinement techniques and apply them to a number of simple test functions chosen to demonstrate the various advantages and disadvantages of each method. While there is no refinement scheme that is optimal for all functions, the fully adaptive family-direction-selective technique is usually more stable and requires fewer samples.
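
    A 1-D sketch of the classical locally adaptive rule the abstract takes as its baseline: a node's children are added only where the hierarchical surplus is large. This is written for one dimension and assumes zero boundary values; it is not the authors' implementation.

      def hat(l, i, x):
          """Hierarchical hat basis on [0, 1]: node at i / 2**l."""
          return max(0.0, 1.0 - abs(x * 2**l - i))

      def interpolate(surpluses, x):
          return sum(w * hat(l, i, x) for (l, i), w in surpluses.items())

      def adapt(f, tol=1e-2, max_level=12):
          surpluses, active = {}, [(1, 1)]          # level-1 midpoint
          while active:
              l, i = active.pop()
              x = i / 2**l
              w = f(x) - interpolate(surpluses, x)  # hierarchical surplus
              surpluses[(l, i)] = w
              if abs(w) > tol and l < max_level:    # refine only where needed
                  active += [(l + 1, 2 * i - 1), (l + 1, 2 * i + 1)]
          return surpluses

      surp = adapt(lambda x: 1.0 / (1.0 + 25 * (x - 0.5) ** 2))
      print(len(surp), "nodes kept")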

  19. Dental Procedures.

    PubMed

    Ramponi, Denise R

    2016-01-01

    Dental problems are a common complaint in emergency departments in the United States. A wide variety of dental issues are addressed in emergency department visits, such as dental caries, loose teeth, dental trauma, gingival infections, and dry socket syndrome. Review of the most common dental blocks and dental procedures will give the practitioner the opportunity to make the patient more comfortable and reduce the amount of analgesia the patient will need upon discharge. Familiarity with dental equipment and with tooth and mouth anatomy will help prepare the practitioner to perform these dental procedures. PMID:27482994

  20. 40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., phone number, facsimile number, and e-mail address of a corporate contact person. (d) Approval of a...) beginning with the averaging period beginning July 1, 2012. (f) If EPA finds that a refiner provided...

  1. 40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., phone number, facsimile number, and e-mail address of a corporate contact person. (d) Approval of a...) beginning with the averaging period beginning July 1, 2012. (f) If EPA finds that a refiner provided...

  2. Adaptive numerical methods for partial differential equations

    SciTech Connect

    Colella, P.

    1995-07-01

    This review describes a structured approach to adaptivity. The Adaptive Mesh Refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.

  3. Fireplace adapters

    SciTech Connect

    Hunt, R.L.

    1983-12-27

    An adapter is disclosed for use with a fireplace. The stove pipe of a stove standing in a room to be heated may be connected to the flue of the chimney so that products of combustion from the stove may be safely exhausted through the flue and outwardly of the chimney. The adapter may be easily installed within the fireplace by removing the damper plate and fitting the adapter to the damper frame. Each of a pair of bolts has a portion which hooks over a portion of the damper frame and a threaded end depending from the hook portion and extending through a hole in the adapter. Nuts are threaded on the bolts and are adapted to force the adapter into a tight fit with the damper frame.

  4. Adaptive Assessment for Nonacademic Secondary Reading.

    ERIC Educational Resources Information Center

    Hittleman, Daniel R.

    Adaptive assessment procedures are a means of determining the quality of a reader's performance in a variety of reading situations and on a variety of written materials. Such procedures are consistent with the idea that there are functional competencies which change with the reading task. Adaptive assessment takes into account that a lack of…

  5. GalaxyRefineComplex: Refinement of protein-protein complex model structures driven by interface repacking.

    PubMed

    Heo, Lim; Lee, Hasup; Seok, Chaok

    2016-01-01

    Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex. PMID:27535582

  6. GalaxyRefineComplex: Refinement of protein-protein complex model structures driven by interface repacking

    PubMed Central

    Heo, Lim; Lee, Hasup; Seok, Chaok

    2016-01-01

    Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex. PMID:27535582

  7. A Selective Refinement Approach for Computing the Distance Functions of Curves

    SciTech Connect

    Laney, D A; Duchaineau, M A; Max, N L

    2000-12-01

    We present an adaptive signed distance transform algorithm for curves in the plane. A hierarchy of bounding boxes is required for the input curves. We demonstrate the algorithm on the isocontours of a turbulence simulation. The algorithm provides guaranteed error bounds with a selective refinement approach. The domain over which the signed distance function is desired is adaptively triangulated, and piecewise discontinuous linear approximations are constructed within each triangle. The resulting transform performs work only where requested and does not rely on a preset sampling rate or other constraints.
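
    A small analogue of the selective refinement idea: a cell is subdivided only where a cheap interpolant of the signed distance misses the true value by more than a tolerance, so work is performed only where requested. The circle geometry and tolerance are assumptions for the example.

      import math

      def sd_circle(x, y, r=0.5):
          """Signed distance to a circle centered at the origin."""
          return math.hypot(x, y) - r

      def refine(x0, y0, size, tol, depth=0, max_depth=8):
          """Subdivide a square cell until its center value is well
          predicted by the average of its corner values."""
          corners = [sd_circle(x0 + dx, y0 + dy)
                     for dx in (0.0, size) for dy in (0.0, size)]
          center = sd_circle(x0 + size / 2, y0 + size / 2)
          if abs(center - sum(corners) / 4.0) <= tol or depth >= max_depth:
              return [(x0, y0, size)]              # accept as a leaf cell
          half, cells = size / 2, []
          for dx in (0.0, half):
              for dy in (0.0, half):
                  cells += refine(x0 + dx, y0 + dy, half, tol,
                                  depth + 1, max_depth)
          return cells

      leaves = refine(-1.0, -1.0, 2.0, tol=1e-3)
      print(len(leaves), "leaf cells")             # dense only near the curve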

  8. Adaptive interface for personalizing information seeking.

    PubMed

    Narayanan, S; Koppaka, Lavanya; Edala, Narasimha; Loritz, Don; Daley, Raymond

    2004-12-01

    An adaptive interface autonomously adjusts its display and available actions to current goals and abilities of the user by assessing user status, system task, and the context. Knowledge content adaptability is needed for knowledge acquisition and refinement tasks. In the case of knowledge content adaptability, the requirements of interface design focus on the elicitation of information from the user and the refinement of information based on patterns of interaction. In such cases, the emphasis on adaptability is on facilitating information search and knowledge discovery. In this article, we present research on adaptive interfaces that facilitates personalized information seeking from a large data warehouse. The resulting proof-of-concept system, called source recommendation system (SRS), assists users in locating and navigating data sources in the repository. Based on the initial user query and an analysis of the content of the search results, the SRS system generates a profile of the user tailored to the individual's context during information seeking. The user profiles are refined successively and are used in progressively guiding the user to the appropriate set of sources within the knowledge base. The SRS system is implemented as an Internet browser plug-in to provide a seamless and unobtrusive, personalized experience to the users during the information search process. The rationale behind our approach, system design, empirical evaluation, and implications for research on adaptive interfaces are described in this paper.

  9. Dinosaurs can fly -- High performance refining

    SciTech Connect

    Treat, J.E.

    1995-09-01

    High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the products markets one serves; and the most likely drivers and direction of future market forces. The author discusses all three points, then describes how to measure company performance. To become a true high performance refiner often involves redesigning the organization as well as the business processes. The author discusses such redesigning. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.

  10. Crystallization in lactose refining-a review.

    PubMed

    Wong, Shin Yee; Hartel, Richard W

    2014-03-01

    In the dairy industry, crystallization is an important separation process used in the refining of lactose from whey solutions. In the refining operation, lactose crystals are separated from the whey solution through nucleation, growth, and/or aggregation. The rate of crystallization is determined by the combined effect of crystallizer design, processing parameters, and impurities on the kinetics of the process. This review summarizes studies on lactose crystallization, including the mechanism, theory of crystallization, and the impact of various factors affecting the crystallization kinetics. In addition, an overview of the industrial crystallization operation highlights the problems faced by the lactose manufacturer. The approaches that are beneficial to the lactose manufacturer for process optimization or improvement are summarized in this review. Over the years, much knowledge has been acquired through extensive research. However, the industrial crystallization process is still far from optimized. Therefore, future effort should focus on transferring the new knowledge and technology to the dairy industry.

  11. The indirect electrochemical refining of lunar ores

    NASA Technical Reports Server (NTRS)

    Semkow, Krystyna W.; Sammells, Anthony F.

    1987-01-01

    Recent work performed on an electrolytic cell is reported which addresses the implicit limitations in various approaches to refining lunar ores. The cell uses an oxygen vacancy conducting stabilized zirconia solid electrolyte to effect separation between a molten salt catholyte compartment where alkali metals are deposited, and an oxygen-evolving anode of composition La(0.89)Sr(0.1)MnO3. The cell configuration is shown and discussed along with a polarization curve and a steady-state current-voltage curve. In a practical cell, cathodically deposited liquid lithium would be continuously removed from the electrolytic cell and used as a valuable reducing agent for ore refining under lunar conditions. Oxygen would be indirectly electrochemically extracted from lunar ores for breathing purposes.

  12. Improve corrosion control in refining processes

    SciTech Connect

    Kane, R.D.; Cayard, M.S.

    1995-11-01

    New guidelines show how to control corrosion and environmental cracking of process equipment when processing feedstocks containing sulfur and/or naphthenic acids. To be cost-competitive, refiners must be able to process crudes of opportunity. These feedstocks, when processed under high temperatures, pressures, and alkaline conditions, can cause brittle cracks and blisters in susceptible steel-fabricated equipment. Even with advances in steel metallurgy, wet H2S cracking continues to be a problem. New research data show that process conditions such as temperature, pH, and flow rate are key factors in the corrosion process. Before selecting equipment material, operators must understand the corrosion mechanisms present within process conditions. Several case histories investigate the corrosion reactions found when refining naphthenic crudes and operating amine gas-sweetening systems. These examples show how to use process controls, inhibitors, and/or metallurgy to control corrosion and environmental cracking, to improve material selection, and to extend equipment service life.

  13. Using supercritical fluids to refine hydrocarbons

    DOEpatents

    Yarbro, Stephen Lee

    2014-11-25

    This is a method to reactively refine hydrocarbons, such as heavy oils with API gravities of less than 20 degrees and bitumen-like hydrocarbons with viscosities greater than 1000 cp at standard temperature and pressure, using a selected fluid at supercritical conditions. The reaction portion of the method delivers lighter weight, more volatile hydrocarbons to an attached contacting device that operates in mixed subcritical or supercritical modes. This separates the reaction products into portions that are viable for use or sale without further conventional refining and hydro-processing techniques. The method produces valuable products with fewer processing steps and lower costs, and increases worker safety through reduced processing and handling; it allows greater opportunity for new oil field development, with subsequent positive economic impact, and reduces the carbon dioxide emissions and wastes typical of conventional refineries.

  14. Rapidly-Exploring Roadmaps: Weighing Exploration vs. Refinement in Optimal Motion Planning.

    PubMed

    Alterovitz, Ron; Patil, Sachin; Derbakova, Anna

    2011-01-01

    Computing globally optimal motion plans requires exploring the configuration space to identify reachable free space regions as well as refining understanding of already explored regions to find better paths. We present the rapidly-exploring roadmap (RRM), a new method for single-query optimal motion planning that allows the user to explicitly consider the trade-off between exploration and refinement. RRM initially explores the configuration space like a rapidly exploring random tree (RRT). Once a path is found, RRM uses a user-specified parameter to weigh whether to explore further or to refine the explored space by adding edges to the current roadmap to find higher quality paths in the explored space. Unlike prior methods, RRM does not focus solely on exploration or refine prematurely. We demonstrate the performance of RRM and the trade-off between exploration and refinement using two examples, a point robot moving in a plane and a concentric tube robot capable of following curved trajectories inside patient anatomy for minimally invasive medical procedures.
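
    A compressed sketch of the explore/refine trade-off for a point robot in an obstacle-free unit square (collision checking omitted); the bias parameter plays the role of RRM's user-specified weighting.

      import math, random

      rng = random.Random(0)
      nodes = [(0.0, 0.0)]                 # start configuration
      edges = set()

      def nearest(p):
          return min(range(len(nodes)), key=lambda i: math.dist(nodes[i], p))

      def step(explore_bias=0.5, step_len=0.1, radius=0.2):
          if rng.random() < explore_bias:  # explore: RRT-style extension
              q = (rng.uniform(-1, 1), rng.uniform(-1, 1))
              i = nearest(q)
              d = math.dist(nodes[i], q)
              t = min(1.0, step_len / d) if d > 0 else 0.0
              nodes.append((nodes[i][0] + t * (q[0] - nodes[i][0]),
                            nodes[i][1] + t * (q[1] - nodes[i][1])))
              edges.add((i, len(nodes) - 1))
          else:                            # refine: densify the roadmap
              i = rng.randrange(len(nodes))
              for j, p in enumerate(nodes):
                  if j != i and math.dist(nodes[i], p) < radius:
                      edges.add((min(i, j), max(i, j)))

      for _ in range(500):
          step()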

  15. Adaptive management: Chapter 1

    USGS Publications Warehouse

    Allen, Craig R.; Garmestani, Ahjond S.; Allen, Craig R.; Garmestani, Ahjond S.

    2015-01-01

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.

  16. Substance abuse in the refining industry

    SciTech Connect

    Little, A. Jr. ); Ross, J.K. ); Lavorerio, R. ); Richards, T.A. )

    1989-01-01

    In order to provide some background for the NPRA Annual Meeting Management Session panel discussion on Substance Abuse in the Refining and Petrochemical Industries, NPRA distributed a questionnaire to member companies requesting information regarding the status of their individual substance abuse policies. The questionnaire was designed to identify general trends in the industry. The aggregate responses to the survey are summarized in this paper, as background for the Substance Abuse panel discussions.

  17. Adaptive Impedance Analysis of Grooved Surface using the Finite Element Method

    SciTech Connect

    Wang, L; /SLAC

    2007-07-06

    A grooved surface is proposed to reduce the secondary emission yield in the dipole and wiggler magnets of the International Linear Collider. An analysis of the impedance of the grooved surface based on an adaptive finite element method is presented in this paper. The performance of the adaptive algorithms, based on an element-by-element h-refinement technique, is assessed. The features of the refinement indicators, adaptation criteria, and error estimation parameters are discussed.

  18. A space–angle DGFEM approach for the Boltzmann radiation transport equation with local angular refinement

    SciTech Connect

    Kópházi, József Lathouwers, Danny

    2015-09-15

    In this paper a new method for the discretization of the radiation transport equation is presented, based on a discontinuous Galerkin method in space and angle that allows for local refinement in angle where any spatial element can support its own angular discretization. To cope with the discontinuous spatial nature of the solution, a generalized Riemann procedure is required to distinguish between incoming and outgoing contributions of the numerical fluxes. A new consistent framework is introduced that is based on the solution of a generalized eigenvalue problem. The resulting numerical fluxes for the various possible cases where neighboring elements have an equal, higher or lower level of refinement in angle are derived based on tensor algebra and the resulting expressions have a very clear physical interpretation. The choice of discontinuous trial functions not only has the advantage of easing local refinement, it also facilitates the use of efficient sweep-based solvers due to decoupling of unknowns on a large scale thereby approaching the efficiency of discrete ordinates methods with local angular resolution. The approach is illustrated by a series of numerical experiments. Results show high orders of convergence for the scalar flux on angular refinement. The generalized Riemann upwinding procedure leads to stable and consistent solutions. Further the sweep-based solver performs well when used as a preconditioner for a Krylov method.

  19. Humanoid Mobile Manipulation Using Controller Refinement

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric

    2006-01-01

    An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. In this paper, it is proposed that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.

  20. Humanoid Mobile Manipulation Using Controller Refinement

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric; Brock, Oliver

    2006-01-01

    An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. This paper proposes that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.

  1. Protectionism and the US refining industry

    SciTech Connect

    Brossard, E.B.

    1985-01-01

    Almost unnoticed in the US press is the entrance of the US into the international market as a major exporter of oil products. The author describes his views on protective tariffs, particularly with regard to the US refining industry. He concludes that the new demands for protectionism by some refiners, if enacted into legislation by Congress, would not only raise costs for all energy consumers but would also adversely affect American industry, beginning with the US exporting refiners that have recently entered the international products market. There would be retaliation by other countries and massive defaults by countries like Mexico. It is not in the national interest for the US to engage in oil tariffs or quotas that may harm the economies of our friendly trading partners - partners upon whom the US is dependent for one-third of its oil consumption and whom the US will need in time of crisis. Discussed are the US oil industry, OPEC, Venezuela, shutdowns, modernization, exports, imports, the spot market, Western European refiners, and internationalization vs. protectionism. 19 tabs. (DMC)

  2. Problems persist for French refining sector

    SciTech Connect

    Not Available

    1992-07-27

    This paper reports that France's refiners face a continuing shortfall of middle distillate capacity and a persistent surplus of heavy fuel oil. That is the main conclusion of the official Hydrocarbon Directorate's report on how France's refining sector performed in 1991. Imports were up: the directorate noted that although net production of refined products in French refineries rose to 1.534 million b/d in 1991 from 1.48 million b/d in 1990, product imports jumped 9.7% to 602,000 b/d in the period. The glut of heavy fuel oil eased to some extent last year because French nuclear power capacity, heavily dependent on ample water supplies, was crimped by drought. That spawned fuel switching. The most noteworthy increase in imports was for motor diesel, climbing to 176,000 b/d from 148,000 b/d in 1990. Tax credits are spurring French consumption of that fuel. For the first time, consumption of motor diesel in 1991 outstripped that of gasoline, at 374,000 b/d and 356,000 b/d respectively.

  3. Using Induction to Refine Information Retrieval Strategies

    NASA Technical Reports Server (NTRS)

    Baudin, Catherine; Pell, Barney; Kedar, Smadar

    1994-01-01

    Conceptual information retrieval systems use structured document indices, domain knowledge and a set of heuristic retrieval strategies to match user queries with a set of indices describing the document's content. Such retrieval strategies increase the set of relevant documents retrieved (increase recall), but at the expense of returning additional irrelevant documents (decrease precision). Usually in conceptual information retrieval systems this tradeoff is managed by hand and with difficulty. This paper discusses ways of managing this tradeoff by the application of standard induction algorithms to refine the retrieval strategies in an engineering design domain. We gathered examples of query/retrieval pairs during the system's operation using feedback from a user on the retrieved information. We then fed these examples to the induction algorithm and generated decision trees that refine the existing set of retrieval strategies. We found that (1) induction improved the precision on a set of queries generated by another user, without a significant loss in recall, and (2) in an interactive mode, the decision trees pointed out flaws in the retrieval and indexing knowledge and suggested ways to refine the retrieval strategies.
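
    A sketch of the induction step using scikit-learn: query/retrieval pairs labeled by user feedback train a decision tree whose induced rules can be read back as refinements of the retrieval strategies. The three-feature encoding is hypothetical, invented for the example.

      from sklearn.tree import DecisionTreeClassifier, export_text

      # Each row: (strategy id, matched index terms, domain-term overlap);
      # label 1 means the user judged the retrieved document relevant.
      X = [[0, 3, 0.8], [0, 1, 0.2], [1, 2, 0.6],
           [1, 0, 0.1], [2, 4, 0.9], [2, 1, 0.3]]
      y = [1, 0, 1, 0, 1, 0]

      tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
      print(export_text(tree, feature_names=[
          "strategy_id", "matched_terms", "term_overlap"]))
      # Rules such as "matched_terms <= 1 -> irrelevant" suggest suppressing
      # low-precision firings of a strategy, trading recall for precision.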

  4. Improved Crystallographic Structures using Extensive Combinatorial Refinement

    PubMed Central

    Nwachukwu, Jerome C.; Southern, Mark R.; Kiefer, James R.; Afonine, Pavel V.; Adams, Paul D.; Terwilliger, Thomas C.; Nettles, Kendall W.

    2013-01-01

    Identifying errors and alternate conformers, and modeling multiple main-chain conformers in poorly ordered regions, are overarching problems in crystallographic structure determination that have limited automation efforts and structure quality. Here, we show that implementation of a full-factorial designed set of standard refinement approaches, which we call ExCoR (Extensive Combinatorial Refinement), significantly improves structural models compared to the traditional linear-tree approach, in which individual algorithms are tested linearly and only incorporated if the model improves. ExCoR markedly improved maps and models, and reveals building errors and alternate conformations that were masked by traditional refinement approaches. Surprisingly, an individual algorithm that renders a model worse in isolation could still be necessary to produce the best overall model, suggesting that model distortion allows escape from local minima of the optimization target function, here shown to be a hallmark limitation of the traditional approach. ExCoR thus provides a simple approach to improving structure determination. PMID:24076406

  5. Individual Educational Planning System for Adult Students. Model Refinement and Implementation.

    ERIC Educational Resources Information Center

    Manning, J. Dale; And Others

    This report describes a project to adapt and/or develop instruments, techniques, and procedures for establishing an individual educational planning system for adult students at Salt Lake Community High School. Chapter 1 describes procedures to identify and assess existing approaches for developing individual educational plans (IEPs). It critiques…

  6. Investigations in adaptive processing of multispectral data

    NASA Technical Reports Server (NTRS)

    Kriegler, F. J.; Horwitz, H. M.

    1973-01-01

    Adaptive data processing procedures are applied to the problem of classifying objects in a scene scanned by a multispectral sensor. These procedures show a performance improvement over standard nonadaptive techniques. Some sources of error in classification are identified, and those correctable by adaptive processing are discussed. Experiments in adaptation of signature means by decision-directed methods are described. Some of these methods assume correlation between the trajectories of different signature means; for others this assumption is not made.

  7. Adaptive SPECT

    PubMed Central

    Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.

    2008-01-01

    Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
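
    For reference, the standard textbook forms of the two linear observers named above are (the paper's adaptive, task-specific expressions may differ):

      \[ \mathbf{w}_{\mathrm{Hot}} = \mathbf{K}_{\mathbf{g}}^{-1}\,\Delta\bar{\mathbf{g}}, \qquad \mathrm{SNR}_{\mathrm{Hot}}^{2} = \Delta\bar{\mathbf{g}}^{T}\,\mathbf{K}_{\mathbf{g}}^{-1}\,\Delta\bar{\mathbf{g}} \]
      \[ \hat{\boldsymbol{\theta}}(\mathbf{g}) = \bar{\boldsymbol{\theta}} + \mathbf{K}_{\boldsymbol{\theta}\mathbf{g}}\,\mathbf{K}_{\mathbf{g}}^{-1}\,(\mathbf{g} - \bar{\mathbf{g}}) \]

    Here \(\mathbf{K}_{\mathbf{g}}\) is the covariance of the data \(\mathbf{g}\), \(\Delta\bar{\mathbf{g}}\) is the difference of the mean data under the two hypotheses (signal present versus absent), and \(\mathbf{K}_{\boldsymbol{\theta}\mathbf{g}}\) is the cross-covariance between the parameters \(\boldsymbol{\theta}\) and the data.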

  8. 14. INTERIOR VIEW OF REFINING MILL, SHOWING CONVEYOR BELT IN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. INTERIOR VIEW OF REFINING MILL, SHOWING CONVEYOR BELT IN PULVERIZING AND PACKING PLANT, LOOKING NORTH - Clay Spur Bentonite Plant & Camp, Refining Mill, Clay Spur Siding on Burlington Northern Railroad, Osage, Weston County, WY

  9. 8. VIEW OF CRUDE CRUSHING AND DRYING PLANT AT REFINING ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. VIEW OF CRUDE CRUSHING AND DRYING PLANT AT REFINING MILL, LOOKING NORTHEAST - Clay Spur Bentonite Plant & Camp, Refining Mill, Clay Spur Siding on Burlington Northern Railroad, Osage, Weston County, WY

  10. Grain Refinement of Permanent Mold Cast Copper Base Alloys

    SciTech Connect

    Sadayappan, M.; Thomson, J.P.; Elboujdaini, M.; Gu, G. Ping; Sahoo, M.

    2005-04-01

    Grain refinement is a well established process for many cast and wrought alloys. The mechanical properties of various alloys can be enhanced by reducing the grain size. Refinement is also known to improve casting characteristics such as fluidity and resistance to hot tearing. Grain refinement of copper-base alloys is not widely used, especially in the sand casting process. However, in permanent mold casting of copper alloys it is now common to use grain refinement to counteract the problem of severe hot tearing, which also improves the pressure tightness of plumbing components. The mechanism of grain refinement in copper-base alloys is not well understood. The issues to be studied include the effect of minor alloy additions on the microstructure, their interaction with the grain refiner, the effect of cooling rate, and loss of grain refinement (fading). In this investigation, efforts were made to explore and understand grain refinement of copper alloys, especially under permanent mold casting conditions.

  11. Adaptive mesh strategies for the spectral element method

    NASA Technical Reports Server (NTRS)

    Mavriplis, Catherine

    1992-01-01

    An adaptive spectral method was developed for the efficient solution of time-dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.

  12. Adaptive Computing.

    ERIC Educational Resources Information Center

    Harrell, William

    1999-01-01

    Provides information on various adaptive technology resources available to people with disabilities. (Contains 19 references, an annotated list of 129 websites, and 12 additional print resources.) (JOW)

  13. Contour adaptation.

    PubMed

    Anstis, Stuart

    2013-01-01

    It is known that adaptation to a disk that flickers between black and white at 3-8 Hz on a gray surround renders invisible a congruent gray test disk viewed afterwards. This is contrast adaptation. We now report that adapting simply to the flickering circular outline of the disk can have the same effect. We call this "contour adaptation." This adaptation does not transfer interocularly, and apparently applies only to luminance, not color. One can adapt selectively to only some of the contours in a display, making only these contours temporarily invisible. For instance, a plaid comprises a vertical grating superimposed on a horizontal grating. If one first adapts to appropriate flickering vertical lines, the vertical components of the plaid disappear and it looks like a horizontal grating. Also, we simulated a Cornsweet (1970) edge, and we selectively adapted out the subjective and objective contours of a Kanizsa (1976) subjective square. By temporarily removing edges, contour adaptation offers a new technique to study the role of visual edges, and it demonstrates how brightness information is concentrated in edges and propagates from them as it fills in surfaces.

  14. California refining in balance as Phase 2 deadline draws near

    SciTech Connect

    Adler, K.

    1996-01-01

    The impact of California's 1996 RFG program on US markets and its implications for refiners worldwide are analyzed. The preparations in the last few months before refiners must produce California Phase 2 RFG are addressed. Subsequent articles will consider the process improvements made by refiners, the early implementation of the program, and what has been learned about refining, gasoline distribution, environmental benefits, and consumer acceptance that can be replicated around the world.

  15. Adaptive EAGLE dynamic solution adaptation and grid quality enhancement

    NASA Technical Reports Server (NTRS)

    Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.

    1992-01-01

    In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.

  16. The blind leading the blind: Mutual refinement of approximate theories

    NASA Technical Reports Server (NTRS)

    Kedar, Smadar T.; Bresina, John L.; Dent, C. Lisa

    1991-01-01

    The mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models which cause the failures. The approach focuses on using one approximate model to refine another.

  17. 48 CFR 208.7304 - Refined precious metals.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....

  18. 48 CFR 208.7304 - Refined precious metals.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....

  19. 48 CFR 208.7304 - Refined precious metals.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....

  20. 48 CFR 208.7304 - Refined precious metals.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....