Science.gov

Sample records for adaptive refinement procedure

  1. An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.

    1993-01-01

    We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to qualify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
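
    The combination the abstract describes, a Richardson-extrapolation error estimate driving a tolerance test, fits in a few lines. The sketch below is a minimal Python illustration, not the code from the paper: it assumes node-centered 1-D grids where the fine grid halves the coarse spacing, the function names are hypothetical, and p=2 matches the second-order MacCormack scheme.

      import numpy as np

      def richardson_error(u_coarse, u_fine, p=2):
          # Restrict the fine solution to the coarse grid by injection
          # (fine nodes 0, 2, 4, ... coincide with the coarse nodes), then
          # estimate the discretization error: for a scheme of order p,
          # error ~ (u_coarse - u_fine) / (2**p - 1).
          u_fine_on_coarse = u_fine[::2]
          return (u_coarse - u_fine_on_coarse) / (2.0**p - 1.0)

      def cells_to_refine(u_coarse, u_fine, tol, p=2):
          # Flag every coarse node whose estimated error exceeds the
          # prescribed tolerance; flagged regions become candidates for
          # recursive refinement of the mesh and the time step.
          return np.abs(richardson_error(u_coarse, u_fine, p)) > tol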

  2. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of the fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the

  3. Adaptive Mesh Refinement in CTH

    SciTech Connect

    Crawford, David

    1999-05-04

    This paper reports progress on implementing a new capability of adaptive mesh refinement into the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor and massively parallel platforms. An approximate factor of three in memory and performance improvements over comparable resolution non-adaptive calculations has been demonstrated for a number of problems.

  4. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
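
    As a rough illustration of the tree-of-blocks organization described above (and not PARAMESH's actual Fortran 90 interface), the Python sketch below builds a quad-tree whose nodes are fixed-size, logically Cartesian blocks; the class and method names are hypothetical.

      class Block:
          """One node of a quad-tree of grid blocks: either a leaf carrying
          solution data or a parent with four half-size children."""

          def __init__(self, x0, y0, size, level=0, nx=8, ny=8):
              self.x0, self.y0, self.size = x0, y0, size  # block extent
              self.level = level                          # refinement level
              self.nx, self.ny = nx, ny                   # cells per block (fixed)
              self.children = []                          # empty => leaf

          def refine(self):
              # Split into four children, each covering one quadrant; the
              # cell count per block is fixed, so spatial resolution doubles.
              h = self.size / 2.0
              args = dict(level=self.level + 1, nx=self.nx, ny=self.ny)
              self.children = [Block(self.x0,     self.y0,     h, **args),
                               Block(self.x0 + h, self.y0,     h, **args),
                               Block(self.x0,     self.y0 + h, h, **args),
                               Block(self.x0 + h, self.y0 + h, h, **args)]

          def leaves(self):
              # Yield the leaf blocks; these hold the active solution.
              if not self.children:
                  yield self
              else:
                  for c in self.children:
                      yield from c.leaves()

    A solver would then loop over root.leaves(), refining whichever leaves an error indicator selects; the oct-tree case adds a z extent and eight children per split.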

  5. Adaptive mesh refinement in titanium

    SciTech Connect

    Colella, Phillip; Wen, Tong

    2005-01-21

    In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study in which we implement a subset of Chombo's functionality in Titanium. Chombo is a software package applying the Adaptive Mesh Refinement methodology to numerical Partial Differential Equations at the production level. Chombo takes a library approach to parallel programming (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation based on two grid configurations from a real application. Counts of lines of code for both implementations are also provided.

  6. Adaptive Mesh Refinement for Microelectronic Device Design

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Lou, John; Norton, Charles

    1999-01-01

    Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchical elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smooth out and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of

  7. Cartesian-cell based grid generation and adaptive mesh refinement

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1993-01-01

    Viewgraphs on Cartesian-cell based grid generation and adaptive mesh refinement are presented. Topics covered include: grid generation; cell cutting; data structures; flow solver formulation; adaptive mesh refinement; and viscous flow.

  8. Adaptive refinement tools for tetrahedral unstructured grids

    NASA Technical Reports Server (NTRS)

    Pao, S. Paul (Inventor); Abdol-Hamid, Khaled S. (Inventor)

    2011-01-01

    An exemplary embodiment providing one or more improvements includes software which is robust, efficient, and has a very fast run time for user directed grid enrichment and flow solution adaptive grid refinement. All user selectable options (e.g., the choice of functions, the choice of thresholds, etc.), other than a pre-marked cell list, can be entered on the command line. The ease of application is an asset for flow physics research and preliminary design CFD analysis where fast grid modification is often needed to deal with unanticipated development of flow details.

  9. Adaptive Mesh Refinement for ICF Calculations

    NASA Astrophysics Data System (ADS)

    Fyfe, David

    2005-10-01

    This paper describes our use of the package PARAMESH to create an Adaptive Mesh Refinement (AMR) version of NRL's FASTRAD3D code. PARAMESH was designed to create an MPI-based AMR code from a block structured serial code such as FASTRAD3D. FASTRAD3D is a compressible hydrodynamics code containing the physical effects relevant for the simulation of high-temperature plasmas, including inertial confinement fusion (ICF) Rayleigh-Taylor unstable direct drive laser targets. These effects include inverse bremsstrahlung laser energy absorption, classical flux-limited Spitzer thermal conduction, real (table look-up) equation-of-state with either separate or identical electron and ion temperatures, multi-group variable Eddington radiation transport, and multi-group alpha particle transport and thermonuclear burn. Numerically, this physics requires an elliptic solver and a ray tracing approach on the AMR grid, which is the main subject of this paper. A sample ICF calculation will be presented. MacNeice et al., "PARAMESH: A parallel adaptive mesh refinement community tool," Computer Physics Communications, 126 (2000), pp. 330-354.

  10. Current sheets, reconnection and adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Marliani, Christiane

    1998-11-01

    Adaptive structured mesh refinement methods have proved to be an appropriate tool for the numerical study of a variety of problems where largely separated length scales are involved, e.g. [R. Grauer, C. Marliani, K. Germaschewski, PRL, 80, 4177, (1998)]. Typical examples in plasma physics are the current sheets in magnetohydrodynamic flows. Their dynamics is investigated in the framework of incompressible MHD. We present simulations of the ideal and inviscid dynamics in two and three dimensions. In addition, we show numerical simulations for the resistive case in two dimensions. Specifically, we show simulations for the case of the doubly periodic coalescence instability. At the onset of the reconnection process the kinetic energy rises and drops rapidly and afterwards settles into an oscillatory phase. The timescale of the magnetic reconnection process is not affected by these fast events but consistent with the Sweet-Parker model of stationary reconnection. Taking into account the electron inertia terms in the generalized Ohm's law the electron skin depth is introduced as an additional parameter. The modified equations allow for magnetic reconnection in the collisionless regime. Current density and vorticity concentrate in extremely long and thin sheets. Their dynamics becomes numerically accessible by means of adaptive mesh refinement.

  11. Elliptic Solvers for Adaptive Mesh Refinement Grids

    SciTech Connect

    Quinlan, D.J.; Dendy, J.E., Jr.; Shapira, Y.

    1999-06-03

    We are developing multigrid methods that will efficiently solve elliptic problems with anisotropic and discontinuous coefficients on adaptive grids. The final product will be a library that provides for the simplified solution of such problems. This library will directly benefit the efforts of other Laboratory groups. The focus of this work is research on serial and parallel elliptic algorithms and the inclusion of our black-box multigrid techniques into this new setting. The approach applies the Los Alamos object-oriented class libraries that greatly simplify the development of serial and parallel adaptive mesh refinement applications. In the final year of this LDRD, we focused on putting the software together; in particular we completed the final AMR++ library, we wrote tutorials and manuals, and we built example applications. We implemented the Fast Adaptive Composite Grid method as the principal elliptic solver. We presented results at the Overset Grid Conference and other, more AMR-specific conferences. We worked on optimization of serial and parallel performance and published several papers on the details of this work. Performance remains an important issue and is the subject of continuing research work.

  12. GRChombo: Numerical relativity with adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Clough, Katy; Figueras, Pau; Finkel, Hal; Kunesch, Markus; Lim, Eugene A.; Tunyasuvunakool, Saran

    2015-12-01

    In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block-structured Berger-Rigoutsos grid generation. The code supports non-trivial ‘many-boxes-in-many-boxes’ mesh hierarchies and massive parallelism through the message passing interface. GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3 + 1 setting, while also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.

  13. Visualization of Scalar Adaptive Mesh Refinement Data

    SciTech Connect

    VACET; Weber, Gunther; Weber, Gunther H.; Beckner, Vince E.; Childs, Hank; Ligocki, Terry J.; Miller, Mark C.; Van Straalen, Brian; Bethel, E. Wes

    2007-12-06

    Adaptive Mesh Refinement (AMR) is a highly effective computation method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations, which must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR grids as a first class data type and AMR code teams use custom built applications for AMR visualization. The Department of Energy's (DOE's) Science Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, which is an open source visualization tool that accommodates AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR scalar data visualization research.

  14. A parallel adaptive mesh refinement algorithm

    NASA Technical Reports Server (NTRS)

    Quirk, James J.; Hanebutte, Ulf R.

    1993-01-01

    Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.

  15. Adaptive h-refinement for reduced-order models

    DOE PAGES

    Carlberg, Kevin T.

    2014-11-05

    Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
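
    A toy version of the offline tree construction sketched above, assuming snapshots is a float matrix with one row per state variable; the hand-rolled two-cluster k-means and all names are illustrative, and a real implementation would add the dual-weighted-residual logic that chooses which vectors to split online.

      import numpy as np

      def two_means(rows, iters=20, seed=0):
          # Plain 2-means over rows, where each row is the snapshot
          # history of one state variable (degree of freedom).
          rng = np.random.default_rng(seed)
          centers = rows[rng.choice(len(rows), 2, replace=False)]
          for _ in range(iters):
              d = np.linalg.norm(rows[:, None, :] - centers[None, :, :], axis=2)
              labels = d.argmin(axis=1)
              for k in range(2):
                  if np.any(labels == k):
                      centers[k] = rows[labels == k].mean(axis=0)
          return labels

      def build_tree(dofs, snapshots, min_size=2):
          # dofs: integer index array (e.g. np.arange(n)). Recursively
          # cluster the state variables; each node records the DOF indices
          # forming the support of one potential split vector.
          node = {"dofs": dofs, "children": []}
          if len(dofs) >= 2 * min_size:
              labels = two_means(snapshots[dofs])
              for k in range(2):
                  child = dofs[labels == k]
                  if min_size <= len(child) < len(dofs):
                      node["children"].append(build_tree(child, snapshots, min_size))
          return node

      def split_vector(v, node):
          # 'Split' a basis vector: each child keeps v's entries on its
          # cluster and is zero elsewhere, giving disjoint supports.
          out = []
          for child in node["children"]:
              w = np.zeros_like(v)
              w[child["dofs"]] = v[child["dofs"]]
              out.append(w)
          return out

    When the children partition the parent's support, the split vectors sum back to v, so refinement enlarges the basis without discarding what it already represents.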

  16. Carpet: Adaptive Mesh Refinement for the Cactus Framework

    NASA Astrophysics Data System (ADS)

    Schnetter, Erik; Hawley, Scott; Hawke, Ian

    2016-11-01

    Carpet is an adaptive mesh refinement and multi-patch driver for the Cactus Framework (ascl:1102.013). Cactus is a software framework for solving time-dependent partial differential equations on block-structured grids, and Carpet acts as a driver layer providing adaptive mesh refinement, multi-patch capability, as well as parallelization and efficient I/O.

  17. Parallel adaptive mesh refinement for electronic structure calculations

    SciTech Connect

    Kohn, S.; Weare, J.; Ong, E.; Baden, S.

    1996-12-01

    We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradients with multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.

  18. Revisiting and Refining the Multicultural Assessment Procedure.

    ERIC Educational Resources Information Center

    Ridley, Charles R.; Hill, Carrie L.; Li, Lisa C.

    1998-01-01

    Reacts to critiques of the Multicultural Assessment Procedure (MAP). Discusses the definition of culture, the structure of the MAP, cultural versus idiosyncratic data, counselors' knowledge and characteristics, soliciting client feedback and perceptions, and managed care. Encourages colleagues to apply the MAP to their research, practice, and…

  19. Adaptive mesh refinement for stochastic reaction-diffusion processes

    SciTech Connect

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2011-01-01

    We present an algorithm for adaptive mesh refinement applied to mesoscopic stochastic simulations of spatially evolving reaction-diffusion processes. The transition rates for the diffusion process are derived on adaptive, locally refined structured meshes. Convergence of the diffusion process is presented and the fluctuations of the stochastic process are verified. Furthermore, a refinement criterion is proposed for the evolution of the adaptive mesh. The method is validated in simulations of reaction-diffusion processes as described by the Fisher-Kolmogorov and Gray-Scott equations.
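
    The paper derives its own refinement criterion; as a generic stand-in, the sketch below flags cells of a 1-D concentration field where the undivided jump across a face is large relative to the local magnitude. The names, the relative-jump form, and the small floor are illustrative assumptions.

      import numpy as np

      def gradient_flags(u, tol):
          # Relative jump of u across each interior face.
          jump = np.abs(np.diff(u))
          scale = np.maximum(np.abs(u[:-1]), np.abs(u[1:])) + 1e-12
          big = jump / scale > tol
          # A cell is flagged if either of its faces carries a large jump.
          flags = np.zeros(u.shape, dtype=bool)
          flags[:-1] |= big
          flags[1:] |= big
          return flags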

  20. GAMER: GPU-accelerated Adaptive MEsh Refinement code

    NASA Astrophysics Data System (ADS)

    Schive, Hsi-Yu; Tsai, Yu-Chih; Chiueh, Tzihong

    2016-12-01

    GAMER (GPU-accelerated Adaptive MEsh Refinement) serves as a general-purpose adaptive mesh refinement + GPU framework and solves hydrodynamics with self-gravity. The code supports adaptive mesh refinement (AMR), hydrodynamics with self-gravity, and a variety of GPU-accelerated hydrodynamic and Poisson solvers. It also supports hybrid OpenMP/MPI/GPU parallelization, concurrent CPU/GPU execution for performance optimization, and a Hilbert space-filling curve for load balancing. Although the code is designed for simulating galaxy formation, it can be easily modified to solve a variety of applications with different governing equations. All optimization strategies implemented in the code can be inherited straightforwardly.

  1. Adjoint Methods for Guiding Adaptive Mesh Refinement in Tsunami Modeling

    NASA Astrophysics Data System (ADS)

    Davis, B. N.; LeVeque, R. J.

    2016-12-01

    One difficulty in developing numerical methods for tsunami modeling is the fact that solutions contain time-varying regions where much higher resolution is required than elsewhere in the domain, particularly when tracking a tsunami propagating across the ocean. The open source GeoClaw software deals with this issue by using block-structured adaptive mesh refinement to selectively refine around propagating waves. For problems where only a target area of the total solution is of interest (e.g., one coastal community), a method that allows identifying and refining the grid only in regions that influence this target area would significantly reduce the computational cost of finding a solution. In this work, we show that solving the time-dependent adjoint equation and using a suitable inner product with the forward solution allows more precise refinement of the relevant waves. We present the adjoint methodology first in one space dimension for illustration and in a broad context since it could also be used in other adaptive software, and potentially for other tsunami applications beyond adaptive refinement. We then show how this adjoint method has been integrated into the adaptive mesh refinement strategy of the open source GeoClaw software and present tsunami modeling results showing that the accuracy of the solution is maintained and the computational time required is significantly reduced through the integration of the adjoint method into adaptive mesh refinement.
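
    A minimal sketch of the flagging step just described (not GeoClaw's actual interface): given the forward solution and the backward-in-time adjoint sampled on the same cells, the magnitude of their inner product estimates each cell's influence on the target area, and only influential cells are refined. The array shapes and names are assumptions.

      import numpy as np

      def adjoint_flags(q_forward, q_adjoint, tol):
          # q_forward, q_adjoint: (num_cells, num_variables) arrays holding
          # the forward and adjoint solutions at matching times; the
          # pointwise inner product measures how strongly a perturbation
          # in each cell can affect the quantity of interest.
          influence = np.abs(np.sum(q_forward * q_adjoint, axis=1))
          return influence > tol  # True marks cells to refine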

  2. Adaptive Mesh and Algorithm Refinement Using Direct Simulation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Garcia, Alejandro L.; Bell, John B.; Crutchfield, William Y.; Alder, Berni J.

    1999-09-01

    Adaptive mesh and algorithm refinement (AMAR) embeds a particle method within a continuum method at the finest level of an adaptive mesh refinement (AMR) hierarchy. The coupling between the particle region and the overlaying continuum grid is algorithmically equivalent to that between the fine and coarse levels of AMR. Direct simulation Monte Carlo (DSMC) is used as the particle algorithm embedded within a Godunov-type compressible Navier-Stokes solver. Several examples are presented and compared with purely continuum calculations.

  3. Adaptive Local Grid Refinement in Computational Fluid Mechanics.

    DTIC Science & Technology

    1987-11-01

    Adaptive mesh refinements in reservoir simulation applications (R.E. Ewing), Proceedings Intl. Conference on Accuracy Est. and Adaptive Refine...; reservoir simulation (R.E. Ewing and J.V. Koebbe), Innovative Numerical Methods in Engineering (R.P. Shaw, J. Periaux, A. Chaudouet, J. Wu...); Universities, Cheyenne, Wyoming, February 21, 1986; 9. Finite element techniques for reservoir simulation, Fourth International Symposium on Numerical

  4. Elliptic Solvers with Adaptive Mesh Refinement on Complex Geometries

    SciTech Connect

    Philip, B.

    2000-07-24

    Adaptive Mesh Refinement (AMR) is a numerical technique for locally tailoring the resolution of computational grids. Multilevel algorithms for solving elliptic problems on adaptive grids include the Fast Adaptive Composite grid method (FAC) and its parallel variants (AFAC and AFACx). Theory that confirms the independence of the convergence rates of FAC and AFAC on the number of refinement levels exists under certain ellipticity and approximation property conditions. Similar theory needs to be developed for AFACx. The effectiveness of multigrid-based elliptic solvers such as FAC, AFAC, and AFACx on adaptively refined overlapping grids is not clearly understood. Finally, a non-trivial eye model problem will be solved by combining the power of using overlapping grids for complex moving geometries, AMR, and multilevel elliptic solvers.

  5. Adaptive mesh refinement strategies in isogeometric analysis— A computational comparison

    NASA Astrophysics Data System (ADS)

    Hennig, Paul; Kästner, Markus; Morgenstern, Philipp; Peterseim, Daniel

    2017-04-01

    We explain four variants of an adaptive finite element method with cubic splines and compare their performance in simple elliptic model problems. The methods in comparison are Truncated Hierarchical B-splines with two different refinement strategies, T-splines with the refinement strategy introduced by Scott et al. in 2012, and T-splines with an alternative refinement strategy introduced by some of the authors. In four examples, including singular and non-singular problems of linear elasticity and the Poisson problem, the H1-errors of the discrete solutions, the number of degrees of freedom as well as sparsity patterns and condition numbers of the discretized problem are compared.

  6. Effects of adaptive refinement on the inverse EEG solution

    NASA Astrophysics Data System (ADS)

    Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.

    1995-10-01

    One of the fundamental problems in electroencephalography can be characterized by an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of electric and potential fields within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.

  7. Procedures and computer programs for telescopic mesh refinement using MODFLOW

    USGS Publications Warehouse

    Leake, Stanley A.; Claar, David V.

    1999-01-01

    Ground-water models are commonly used to evaluate flow systems in areas that are small relative to entire aquifer systems. In many of these analyses, simulation of the entire flow system is not desirable or will not allow sufficient detail in the area of interest. The procedure of telescopic mesh refinement allows use of a small, detailed model in the area of interest by taking boundary conditions from a larger model that encompasses the model in the area of interest. Some previous studies have used telescopic mesh refinement; however, better procedures are needed in carrying out telescopic mesh refinement using the U.S. Geological Survey ground-water flow model, referred to as MODFLOW. This report presents general procedures and three computer programs for use in telescopic mesh refinement with MODFLOW. The first computer program, MODTMR, constructs MODFLOW data sets for a local or embedded model using MODFLOW data sets and simulation results from a regional or encompassing model. The second computer program, TMRDIFF, provides a means of comparing head or drawdown in the local model with head or drawdown in the corresponding area of the regional model. The third program, RIVGRID, provides a means of constructing data sets for the River Package, Drain Package, General-Head Boundary Package, and Stream Package for regional and local models using grid-independent data specifying locations of these features. RIVGRID may be needed in some applications of telescopic mesh refinement because regional-model data sets do not contain enough information on locations of head-dependent flow features to properly locate the features in local models. The program is a general utility program that can be used in constructing data sets for head-dependent flow packages for any MODFLOW model under construction.
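
    MODTMR itself reads and writes MODFLOW data sets, but the heart of the boundary transfer can be shown in a few lines: interpolate the regional model's computed heads onto the perimeter cells of the local grid and impose them as specified heads. The bilinear interpolation and every name below are illustrative, not the USGS program's actual procedure.

      import numpy as np

      def regional_head_at(x, y, heads, x0, y0, dx, dy):
          # Bilinear interpolation of the regional head array (rows = y,
          # columns = x, origin at x0, y0) to an arbitrary point.
          i = int((x - x0) / dx)
          j = int((y - y0) / dy)
          fx = (x - x0) / dx - i
          fy = (y - y0) / dy - j
          return ((1 - fx) * (1 - fy) * heads[j, i]
                  + fx * (1 - fy) * heads[j, i + 1]
                  + (1 - fx) * fy * heads[j + 1, i]
                  + fx * fy * heads[j + 1, i + 1])

      def local_boundary_heads(perimeter_xy, heads, x0, y0, dx, dy):
          # Specified-head values for each (x, y) cell center on the
          # local model's perimeter, taken from the regional solution.
          return [regional_head_at(x, y, heads, x0, y0, dx, dy)
                  for x, y in perimeter_xy]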

  8. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically Cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically Cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.

  9. Adaptive Mesh Refinement Algorithms for Parallel Unstructured Finite Element Codes

    SciTech Connect

    Parsons, I D; Solberg, J M

    2006-02-03

    This project produced algorithms for and software implementations of adaptive mesh refinement (AMR) methods for solving practical solid and thermal mechanics problems on multiprocessor parallel computers using unstructured finite element meshes. The overall goal is to provide computational solutions that are accurate to some prescribed tolerance, and adaptivity is the correct path toward this goal. These new tools will enable analysts to conduct more reliable simulations at reduced cost, both in terms of analyst and computer time. Previous academic research in the field of adaptive mesh refinement has produced a voluminous literature focused on error estimators and demonstration problems; relatively little progress has been made on producing efficient implementations suitable for large-scale problem solving on state-of-the-art computer systems. Research issues that were considered include: effective error estimators for nonlinear structural mechanics; local meshing at irregular geometric boundaries; and constructing efficient software for parallel computing environments.

  10. AMR++: Object-Oriented Parallel Adaptive Mesh Refinement

    SciTech Connect

    Quinlan, D.; Philip, B.

    2000-02-02

    Adaptive mesh refinement (AMR) computations are complicated by their dynamic nature. The development of solvers for realistic applications is complicated by both the complexity of the AMR and the geometry of realistic problem domains. The additional complexity of distributed memory parallelism within such AMR applications most commonly exceeds the level of complexity that can be reasonably maintained with traditional approaches toward software development. This paper will present the details of our object-oriented work on the simplification of the use of adaptive mesh refinement on applications with complex geometries for both serial and distributed memory parallel computation. We will present an independent set of object-oriented abstractions (C++ libraries) well suited to the development of such seemingly intractable scientific computations. As an example of the use of this object-oriented approach we will present recent results of an application modeling fluid flow in the eye. Within this example, the geometry is too complicated for a single curvilinear coordinate grid and so a set of overlapping curvilinear coordinate grids is used. Adaptive mesh refinement and the required grid generation work to support the refinement process is coupled together in the solution of essentially elliptic equations within this domain. This paper will focus on the management of complexity within development of the AMR++ library which forms a part of the Overture object-oriented framework for the solution of partial differential equations within scientific computing.

  11. Automatic adaptive grid refinement for the Euler equations

    NASA Technical Reports Server (NTRS)

    Berger, M. J.; Jameson, A.

    1983-01-01

    A method of adaptive grid refinement for the solution of the steady Euler equations for transonic flow is presented. The algorithm automatically decides where the coarse grid accuracy is insufficient, and creates locally uniform refined grids in these regions. This typically occurs at the leading and trailing edges. The solution is then integrated to steady state using the same integrator (FLO52) in the interior of each grid. The boundary conditions needed on the fine grids are examined and the importance of treating the fine/coarse grid interface conservatively is discussed. Numerical results are presented.
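
    A schematic of the decide-then-refine step (FLO52 and the real clustering logic are far richer): evaluate an error indicator on the coarse grid, flag cells above a threshold, and cover the flags with a locally uniform patch. Returning a single bounding box is a deliberate simplification; production clusterers generate several tight boxes.

      import numpy as np

      def refine_boxes(indicator, tol):
          # Flag coarse cells where the indicator exceeds tol, then cover
          # the flagged region with one rectangle (i_lo, j_lo, i_hi, j_hi)
          # in coarse-grid index space.
          jj, ii = np.nonzero(indicator > tol)
          if ii.size == 0:
              return []
          return [(ii.min(), jj.min(), ii.max(), jj.max())]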

  12. Error sensitivity to refinement: a criterion for optimal grid adaptation

    NASA Astrophysics Data System (ADS)

    Luchini, Paolo; Giannetti, Flavio; Citro, Vincenzo

    2016-11-01

    Most indicators used for automatic grid refinement are suboptimal, in the sense that they do not really minimize the global solution error. This paper concerns a new indicator, related to the sensitivity map of global stability problems, suitable for an optimal grid refinement that minimizes the global solution error. The new criterion is derived from the properties of the adjoint operator and provides a map of the sensitivity of the global error (or its estimate) to a local mesh refinement. Examples are presented for both a scalar partial differential equation and for the system of Navier-Stokes equations. In the last case, we also present a grid-adaptation algorithm based on the new estimator and on the FreeFem++ software that improves the accuracy of the solution by almost two orders of magnitude by redistributing the nodes of the initial computational mesh.

  13. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times

  14. An adaptive embedded mesh procedure for leading-edge vortex flows

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.

    1989-01-01

    A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
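
    The threshold-selection rule described above, picking the value at the knee of the cells-flagged-versus-threshold curve, can be sketched directly. The discrete second difference as a curvature proxy and the function name are assumptions; the candidate thresholds are taken to be sorted and evenly spaced.

      import numpy as np

      def choose_threshold(param, candidates):
          # param: refinement parameter per cell; candidates: sorted,
          # evenly spaced trial thresholds (at least three). Count how
          # many cells each trial would flag, then return the threshold
          # where that curve bends most sharply.
          counts = np.array([(param > t).sum() for t in candidates], float)
          curvature = np.abs(np.diff(counts, n=2))
          return candidates[1 + int(curvature.argmax())]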

  15. Adaptive mesh refinement for shocks and material interfaces

    SciTech Connect

    Dai, William Wenlong

    2010-01-01

    There are three kinds of adaptive mesh refinement (AMR) in structured meshes. Block-based AMR sometimes over-refines meshes. Cell-based AMR treats cells individually and thus loses the advantage of the nature of structured meshes. Patch-based AMR is intended to combine the advantages of block- and cell-based AMR, i.e., the nature of structured meshes and sharp regions of refinement. But patch-based AMR has its own difficulties. For example, patch-based AMR typically cannot preserve symmetries of physics problems. In this paper, we will present an approach for a patch-based AMR for hydrodynamics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, management of patches, and load balance. The special features of this patch-based AMR include symmetry preserving, efficiency of refinement across shock fronts and material interfaces, special implementation of flux correction, and patch management in parallel computing environments. To demonstrate the capability of the AMR framework, we will show both two- and three-dimensional hydrodynamics simulations with many levels of refinement.

  16. Parallel Adaptive Mesh Refinement for High-Order Finite-Volume Schemes in Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Schwing, Alan Michael

    comparisons across a range of regimes. Unsteady and steady applications are considered in both subsonic and supersonic flows. Inviscid and viscous simulations achieve similar results at a much reduced cost when employing dynamic mesh adaptation. Several techniques for guiding adaptation are compared. Detailed analysis of statistics from the instrumented solver enable understanding of the costs associated with adaptation. Adaptive mesh refinement shows promise for the test cases presented here. It can be considerably faster than using conventional grids and provides accurate results. The procedures for adapting the grid are light-weight enough to not require significant computational time and yield significant reductions in grid size.

  17. Block-structured adaptive mesh refinement - theory, implementation and application

    SciTech Connect

    Deiterding, Ralf

    2011-01-01

    Structured adaptive mesh refinement (SAMR) techniques can enable cutting-edge simulations of problems governed by conservation laws. Focusing on the strictly hyperbolic case, these notes explain all algorithmic and mathematical details of a technically relevant implementation tailored for distributed memory computers. An overview of the background of commonly used finite volume discretizations for gas dynamics is included and typical benchmarks to quantify accuracy and performance of the dynamically adaptive code are discussed. Large-scale simulations of shock-induced realistic combustion in non-Cartesian geometry and shock-driven fluid-structure interaction with fully coupled dynamic boundary motion demonstrate the applicability of the discussed techniques for complex scenarios.

  18. Cosmological fluid mechanics with adaptively refined large eddy simulations

    NASA Astrophysics Data System (ADS)

    Schmidt, W.; Almgren, A. S.; Braun, H.; Engels, J. F.; Niemeyer, J. C.; Schulz, J.; Mekuria, R. R.; Aspden, A. J.; Bell, J. B.

    2014-06-01

    We investigate turbulence generated by cosmological structure formation by means of large eddy simulations using adaptive mesh refinement. In contrast to the widely used implicit large eddy simulations, which resolve a limited range of length-scales and treat the effect of turbulent velocity fluctuations below the grid scale solely by numerical dissipation, we apply a subgrid-scale model for the numerically unresolved fraction of the turbulence energy. For simulations with adaptive mesh refinement, we utilize a new methodology that allows us to adjust the scale-dependent energy variables in such a way that the sum of resolved and unresolved energies is globally conserved. We test our approach in simulations of randomly forced turbulence, a gravitationally bound cloud in a wind, and the Santa Barbara cluster. To treat inhomogeneous turbulence, we introduce an adaptive Kalman filtering technique that separates turbulent velocity fluctuations on resolved length-scales from the non-turbulent bulk flow. From the magnitude of the fluctuating component and the subgrid-scale turbulence energy, a total turbulent velocity dispersion of several 100 km s^-1 is obtained for the Santa Barbara cluster, while the low-density gas outside the accretion shocks is nearly devoid of turbulence. The energy flux through the turbulent cascade and the dissipation rate predicted by the subgrid-scale model correspond to dynamical time-scales around 5 Gyr, independent of numerical resolution.

  19. Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Modiano, David; Colella, Phillip

    1995-01-01

    To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for a transonic flow over a NACA0012 airfoil (AGARD-03 test case) and a reflection of a shock over a double wedge.

  20. CONSTRAINED-TRANSPORT MAGNETOHYDRODYNAMICS WITH ADAPTIVE MESH REFINEMENT IN CHARM

    SciTech Connect

    Miniati, Francesco; Martin, Daniel F.

    2011-07-01

    We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.

  1. Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units

    SciTech Connect

    Beckingsale, D. A.; Gaudin, W. P.; Hornung, R. D.; Gunney, B. T.; Gamblin, T.; Herdman, J. A.; Jarvis, S. A.

    2014-11-17

    Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy, adding complex communication and data movement requirements to a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory’s Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.
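
    In serial NumPy form the coarsen and refine operators mentioned above reduce to the short sketch below; the paper's library implements them as data-parallel GPU kernels, so these two functions are illustrative stand-ins assuming cell-centered data and a refinement ratio of 2.

      import numpy as np

      def coarsen(fine):
          # Conservative 2:1 restriction: each coarse cell is the average
          # of the four fine cells it covers (fine dimensions even).
          return 0.25 * (fine[0::2, 0::2] + fine[1::2, 0::2]
                         + fine[0::2, 1::2] + fine[1::2, 1::2])

      def refine(coarse):
          # Piecewise-constant 2:1 prolongation: copy each coarse value
          # into the four fine cells it covers (real codes usually
          # interpolate to higher order).
          return np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)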

  2. Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Leng, W.; Zhong, S.

    2008-12-01

    In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. A new technique, adaptive mesh refinement (AMR), allows local mesh refinement wherever high resolution is needed, while leaving other regions with relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement AMR techniques in 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (i.e. 7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (i.e. 14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method into the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results in van Keken et al. [1997] with much fewer elements and tracers compared with uniform-mesh models (i.e. 7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to precisely trace their evolution. Our AMR code is thus well suited to thermal-chemical convection problems that require high resolution to resolve the evolution of chemical boundaries, such as entrainment problems [Sleep, 1988].

  3. Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations

    NASA Astrophysics Data System (ADS)

    Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.

    2012-09-01

    Computer simulations are important in current cosmological research. Those simulations run in parallel on thousands of processors and produce huge amounts of data. Adaptive mesh refinement is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses octree adaptive mesh refinement. Compared to grid-based AMR, octree AMR has the advantage of fitting the adaptive resolution of the grid very precisely to the local problem complexity. However, this specific octree data type needs specific software to be visualized, as generic visualization tools work on Cartesian grid data types. This is why the PYMSES software has also been developed by our team. It relies on the Python scripting language to ensure modular and easy access for exploring these specific data. In order to take advantage of the High Performance Computer which runs the RAMSES simulation, it also uses MPI and multiprocessing to run some parallel code. We present our PYMSES software in more detail with some performance benchmarks. PYMSES currently has two visualization techniques which work directly on the AMR. The first is a splatting technique, and the second is a custom ray-tracing technique. Both have their own advantages and drawbacks. We have also compared two parallel programming techniques: the Python multiprocessing library versus the use of MPI runs. The load balancing strategy has to be smartly defined in order to achieve a good speed-up in our computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.

  4. Using Adaptive Mesh Refinement to Simulate Storm Surge

    NASA Astrophysics Data System (ADS)

    Mandli, K. T.; Dawson, C.

    2012-12-01

    Coastal hazards related to strong storms such as hurricanes and typhoons are among the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the subsequent rise in sea level during these storms. Unfortunately these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in the design of infrastructure or forecasting with ensembles of probable storms. One solution to address the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectation of the dynamics of the flow as well as particular regions of interest such as harbors. Simulations of many different applications have been made possible only by using AMR-type algorithms, which have allowed otherwise impractical simulations to be performed for much less computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.

  5. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE PAGES

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.
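
    The tile-clustering idea that the abstract extends can be sketched compactly: cut the index space into fixed-size tiles and promote any tile containing a flagged cell to a patch of the next refinement level. The tile size and names are assumptions; SAMRAI's production algorithm adds the flexibility and parallelism this sketch omits.

      import numpy as np

      def tile_cluster(flags, tile=8):
          # flags: boolean array of flagged cells. Returns patches as
          # (i_lo, j_lo, i_hi, j_hi) boxes aligned to a tile grid.
          ny, nx = flags.shape
          boxes = []
          for j in range(0, ny, tile):
              for i in range(0, nx, tile):
                  if flags[j:j + tile, i:i + tile].any():
                      boxes.append((i, j, min(i + tile, nx), min(j + tile, ny)))
          return boxes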

  7. Tsunami modelling with adaptively refined finite volume methods

    USGS Publications Warehouse

    LeVeque, R.J.; George, D.L.; Berger, M.J.

    2011-01-01

    Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
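    GeoClaw's wetting-and-drying algorithm is considerably more careful, especially near refinement boundaries, but the basic bookkeeping is a depth tolerance below which a cell is treated as dry. A minimal sketch, with a hypothetical function name and assumed array layout:

        import numpy as np

        def apply_dry_tolerance(h, hu, hv, dry_tol=1e-3):
            """Treat cells with depth below dry_tol as dry: zero the depth and momenta."""
            dry = h < dry_tol
            return (np.where(dry, 0.0, h),
                    np.where(dry, 0.0, hu),
                    np.where(dry, 0.0, hv))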

  8. Production-quality Tools for Adaptive Mesh Refinement Visualization

    SciTech Connect

    Weber, Gunther H.; Childs, Hank; Bonnell, Kathleen; Meredith, Jeremy; Miller, Mark; Whitlock, Brad; Bethel, E. Wes

    2007-10-25

    Adaptive Mesh Refinement (AMR) is a highly effective simulation method for spanning a large range of spatiotemporal scales, such as astrophysical simulations that must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR as a first-class data type, and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Science Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is extending and deploying VisIt, an open source visualization tool that accommodates AMR as a first-class data type, for use as production-quality, parallel-capable AMR visual data analysis infrastructure. This effort will help science teams that use AMR-based simulations and who develop their own AMR visual data analysis software to realize cost and labor savings.

  9. Efficient Plasma Ion Source Modeling With Adaptive Mesh Refinement (Abstract)

    SciTech Connect

    Kim, J.S.; Vay, J.L.; Friedman, A.; Grote, D.P.

    2005-03-15

    Ion beam drivers for high energy density physics and inertial fusion energy research require high-brightness beams, so there is little margin of error allowed for aberration at the emitter. Thus, accurate plasma ion source computer modeling is required to model the plasma sheath region and time-dependent effects correctly. A computer plasma source simulation module that can be used with a powerful heavy ion fusion code, WARP, or as a standalone code, is being developed. In order to treat the plasma sheath region accurately and efficiently, the module will have the capability of handling multiple spatial scale problems by using Adaptive Mesh Refinement (AMR). We will report on our progress on the project.

  10. Adaptive Mesh Refinement in Reactive Transport Modeling of Subsurface Environments

    NASA Astrophysics Data System (ADS)

    Molins, S.; Day, M.; Trebotich, D.; Graves, D. T.

    2015-12-01

    Adaptive mesh refinement (AMR) is a numerical technique for locally adjusting the resolution of computational grids. AMR makes it possible to superimpose levels of finer grids on the global computational grid in an adaptive manner, allowing for more accurate calculations locally. AMR codes rely on the fundamental concept that the solution can be computed in different regions of the domain with different spatial resolutions. AMR codes have been applied to a wide range of problems, including (but not limited to) fully compressible hydrodynamics, astrophysical flows, cosmological applications, combustion, blood flow, heat transfer in nuclear reactors, and land ice and atmospheric models for climate. In subsurface applications, in particular reactive transport modeling, AMR may be particularly useful in accurately capturing concentration gradients (hence, reaction rates) that develop in localized areas of the simulation domain. Accurate evaluation of reaction rates is critical in many subsurface applications. In this contribution, we will discuss recent applications that bring AMR capabilities to bear on reactive transport problems from the pore scale to the flood plain scale.

  11. Cosmos++: Relativistic Magnetohydrodynamics on Unstructured Grids with Local Adaptive Refinement

    SciTech Connect

    Anninos, P; Fragile, P C; Salmonson, J D

    2005-05-06

    A new code and methodology are introduced for solving the fully general relativistic magnetohydrodynamic (GRMHD) equations using time-explicit, finite-volume discretization. The code has options for solving the GRMHD equations using traditional artificial-viscosity (AV) or non-oscillatory central difference (NOCD) methods, or a new extended AV (eAV) scheme using artificial viscosity together with a dual energy-flux-conserving formulation. The dual energy approach allows for accurate modeling of highly relativistic flows at boost factors well beyond what has been achieved to date by standard artificial viscosity methods. It provides the benefit of Godunov methods in capturing highly Lorentz-boosted flows, but without complicated Riemann solvers, and the advantages of traditional artificial viscosity methods in their speed and flexibility. Additionally, the GRMHD equations are solved on an unstructured grid that supports local adaptive mesh refinement using a fully threaded oct-tree (in three dimensions) network to traverse the grid hierarchy across levels and immediate neighbors. A number of tests are presented to demonstrate the robustness of the numerical algorithms and adaptive mesh framework over a wide spectrum of problems, boosts, and astrophysical applications, including relativistic shock tubes, shock collisions, magnetosonic shocks, Alfvén wave propagation, blast waves, magnetized Bondi flow, and the magneto-rotational instability in Kerr black hole spacetimes.

  12. Adaptive remeshing method in 2D based on refinement and coarsening techniques

    NASA Astrophysics Data System (ADS)

    Giraud-Moreau, L.; Borouchaki, H.; Cherouat, A.

    2007-04-01

    The analysis of mechanical structures using the Finite Element Method, in the framework of large elastoplastic strains, needs frequent remeshing of the deformed domain during computation. Remeshing is necessary for two main reasons: the large geometric distortion of finite elements and the adaptation of the mesh size to the physical behavior of the solution. This paper presents an adaptive remeshing method for remeshing a mechanical structure in two dimensions subjected to large elastoplastic deformations with damage. The proposed remeshing technique includes adaptive refinement and coarsening procedures, based on geometrical and physical criteria. The proposed method has been integrated in a computational environment using the ABAQUS solver. Numerical examples show the efficiency of the proposed approach.

  13. Groundwater flow parameter estimation using refinement and coarsening indicators for adaptive downscaling parameterization

    NASA Astrophysics Data System (ADS)

    Hassane, Mamadou Maina F. Z.; Ackerer, P.

    2017-02-01

    In the context of parameter identification by inverse methods, an optimized adaptive downscaling parameterization is described in this work. The adaptive downscaling parameterization consists of (i) defining a parameter mesh for each parameter, independent of the flow model mesh, (ii) optimizing the parameter set related to the parameter mesh, and (iii) if the match between observed and computed heads is not accurate enough, creating a new parameter mesh via refinement (downscaling) and performing a new optimization of the parameters. Refinement and coarsening indicators are defined to optimize the parameter mesh refinement. The robustness of the refinement and coarsening indicators was tested by comparing the results of inversions using refinement without indicators, refinement with only refinement indicators, and refinement with both coarsening and refinement indicators. These examples showed that the indicators significantly reduce the number of degrees of freedom necessary to solve the inverse problem without a loss of accuracy. They therefore limit over-parameterization.

  14. Adaptive Input Reconstruction with Application to Model Refinement, State Estimation, and Adaptive Control

    NASA Astrophysics Data System (ADS)

    D'Amato, Anthony M.

    Input reconstruction is the process of using the output of a system to estimate its input. In some cases, input reconstruction can be accomplished by determining the output of the inverse of a model of the system whose input is the output of the original system. Inversion, however, requires an exact and fully known analytical model, and is limited by instabilities arising from nonminimum-phase zeros. The main contribution of this work is a novel technique for input reconstruction that does not require model inversion. This technique is based on a retrospective cost, which requires a limited number of Markov parameters. Retrospective cost input reconstruction (RCIR) does not require knowledge of nonminimum-phase zero locations or an analytical model of the system. RCIR provides a technique that can be used for model refinement, state estimation, and adaptive control. In the model refinement application, data are used to refine or improve a model of a system. It is assumed that the difference between the model output and the data is due to an unmodeled subsystem whose interconnection with the modeled system is inaccessible, that is, the interconnection signals cannot be measured and thus standard system identification techniques cannot be used. Using input reconstruction, these inaccessible signals can be estimated, and the inaccessible subsystem can be fitted. We demonstrate input reconstruction in a model refinement framework by identifying unknown physics in a space weather model and by estimating an unknown film growth in a lithium ion battery. The same technique can be used to obtain estimates of states that cannot be directly measured. Adaptive control can be formulated as a model-refinement problem, where the unknown subsystem is the idealized controller that minimizes a measured performance variable. Minimal modeling input reconstruction for adaptive control is useful for applications where modeling information may be difficult to obtain. We demonstrate

  15. Stratified and Maximum Information Item Selection Procedures in Computer Adaptive Testing

    ERIC Educational Resources Information Center

    Deng, Hui; Ansley, Timothy; Chang, Hua-Hua

    2010-01-01

    In this study we evaluated and compared three item selection procedures: the maximum Fisher information procedure (F), the a-stratified multistage computer adaptive testing (CAT) (STR), and a refined stratification procedure that allows more items to be selected from the high a strata and fewer items from the low a strata (USTR), along with…

  16. Simulation of nonpoint source contamination based on adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Kourakos, G.; Harter, T.

    2014-12-01

    Contamination of groundwater aquifers from nonpoint sources is a worldwide problem. Typical agricultural groundwater basins receive contamination from a large array (on the order of 10^5-10^6) of spatially and temporally heterogeneous sources such as fields, crops, dairies, etc., while the received contaminants emerge, at significantly uncertain time lags, at a large array of discharge surfaces such as public supply, domestic, and irrigation wells and streams. To support decision making in such complex regimes, several approaches have been developed, which can be grouped into three categories: i) index methods, ii) regression methods, and iii) physically based methods. Among the three, physically based methods are considered more accurate, but at the cost of higher computational demand. In this work we present a physically based simulation framework which exploits the latest hardware and software developments to simulate large (>>1,000 km²) groundwater basins. First, we simulate groundwater flow using a sufficiently detailed mesh to capture the spatial heterogeneity. To achieve optimal mesh quality, we combine adaptive mesh refinement with the nonlinear solution for unconfined flow. Starting from a coarse grid, the mesh is refined iteratively in the parts of the domain where the flow heterogeneity appears higher, resulting in an optimal grid. Second, we simulate the nonpoint source pollution based on the detailed velocity field computed in the previous step. In our approach we use the streamline model, where the 3D transport problem is decomposed into multiple 1D transport problems. The proposed framework is applied to simulate nonpoint source pollution in the Central Valley aquifer system, California.
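    The refine-as-you-solve loop described here can be summarized at pseudocode level as follows (a sketch with hypothetical helpers; the authors' actual solver and heterogeneity indicator are not specified in the abstract):

        def solve_with_amr(mesh, solve_flow, heterogeneity, refine, max_iter=10, tol=0.1):
            """Iteratively refine the mesh where flow heterogeneity is high.

            solve_flow(mesh)    -> head field on the current mesh
            heterogeneity(head) -> per-cell indicator, e.g. local head-gradient variance
            refine(mesh, flags) -> new mesh with flagged cells subdivided
            """
            for _ in range(max_iter):
                head = solve_flow(mesh)
                flags = heterogeneity(head) > tol
                if not flags.any():
                    break                      # grid is fine enough everywhere
                mesh = refine(mesh, flags)
            return mesh, head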

  17. Parallel adaptive mesh refinement techniques for plasticity problems

    SciTech Connect

    Barry, W.J.; Jones, M.T.; Plassmann, P.E.

    1997-12-31

    The accurate modeling of the nonlinear properties of materials can be computationally expensive. Parallel computing offers an attractive way to solve such problems; however, the efficient use of these systems requires the vertical integration of a number of very different software components. We explore the solution of two- and three-dimensional, small-strain plasticity problems. We consider a finite-element formulation of the problem with adaptive refinement of an unstructured mesh to accurately model plastic transition zones. We present a framework for the parallel implementation of such complex algorithms. This framework, using libraries from the SUMAA3d project, allows a user to build a parallel finite-element application without writing any parallel code. To demonstrate the effectiveness of this approach on widely varying parallel architectures, we present experimental results from an IBM SP parallel computer and an ATM-connected network of Sun UltraSparc workstations. The results detail the parallel performance of the computational phases of the application as the material is incrementally loaded.

  19. Composite-Grid Techniques and Adaptive Mesh Refinement in Computational Fluid Dynamics

    DTIC Science & Technology

    1990-01-01

    …the equations governing the flow. The patched adaptive mesh refinement technique, devised at Stanford by Oliger et al. [OLI84], copes with these sources of error efficiently by refining… differential equation, as in the numerical grid generation methods proposed by Thompson et al. [THO85], or simply a list of pairs of points in…

  20. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I.; /Princeton, Inst. Advanced Study

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth- and fifth-order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods, and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two, and three dimensions and in Cartesian, cylindrical, and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
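    For reference, the third-order TVD Runge-Kutta scheme mentioned here is the standard Shu-Osher scheme; given a spatial operator L(u), such as a WENO flux divergence, one step reads (our own sketch, not the RAM code):

        def tvd_rk3_step(u, L, dt):
            """One step of the third-order TVD Runge-Kutta scheme of Shu and Osher.

            u  : current state (NumPy array)
            L  : callable returning the spatial operator, du/dt = L(u)
            dt : time step
            """
            u1 = u + dt * L(u)
            u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
            return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))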

  1. An object-oriented approach for parallel self adaptive mesh refinement on block structured grids

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Witsch, Kristian; Quinlan, Daniel

    1993-01-01

    Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library permitting efficient development of architecture-independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmer's work is largely reduced to specifying the serial single-grid application; the parallel, self-adaptive mesh refinement code is then obtained with minimal additional effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g., in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.

  2. Refining Diagnostic Procedures for Adults With Symptoms of ADHD.

    PubMed

    Sibley, Margaret H; Coxe, Stefany; Molina, Brooke S G

    2017-04-01

    Attention deficit/hyperactivity disorder (ADHD) is a chronic disorder that afflicts individuals into adulthood. The field continues to refine diagnostic standards for ADHD in adults, complicated by the disorder's heterogeneous presentation, subjective symptoms, and overlap with other disorders. Two key diagnostic questions are from whom to collect diagnostic information and which symptoms should be contained on an adult diagnostic checklist. Using a trifactor model, Martel et al. examine these questions in a sample of adults with and without self-identified ADHD symptoms. In this response, we highlight the importance of their finding that self and informant symptom reports differ in a sample of adults who acknowledge ADHD symptoms. We also review issues that continue to face the field related to model specification, evaluating symptom utility, and sample composition, discussing how these issues influence conclusions that may be drawn from Martel et al. and similar investigations. We conclude that the article makes an important research contribution about the nature of self and informant ADHD symptom reports but emphasize that symptom checklist refinement must occur through a broad lens that considers work from a range of sample types and clinically informative analytic strategies.

  3. Spectral-element adaptive refinement magnetohydrodynamic simulations of the island coalescence instability

    NASA Astrophysics Data System (ADS)

    Rosenberg, D.; Pouquet, A.; Germaschewski, K.; Ng, C. S.; Bhattacharjee, A.

    2006-10-01

    A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code is applied to simulate the problem of the island coalescence instability (ICI) in 2D. The MHD solver is explicit, and uses the Elsasser formulation on high-order elements. It automatically takes advantage of the adaptive grid mechanics that have been described in [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys., 215, 59-80 (2006)], allowing both statically refined and dynamically refined grids. ICI is an MHD process that can produce strong current sheets and subsequent reconnection and heating in a high-Lundquist-number plasma such as the solar corona [cf. Ng and Bhattacharjee, Phys. Plasmas, 5, 4028 (1998)]. Thus, it is desirable to use adaptive refinement grids to increase resolution while maintaining accuracy. Results are compared with simulations using a finite-difference method on the same refinement grid, as well as with pseudo-spectral simulations using a uniform grid.

  4. Experiences with an adaptive mesh refinement algorithm in numerical relativity.

    NASA Astrophysics Data System (ADS)

    Choptuik, M. W.

    An implementation of the Berger/Oliger mesh refinement algorithm for a model problem in numerical relativity is described. The principles of operation of the method are reviewed and its use in conjunction with leap-frog schemes is considered. The performance of the algorithm is illustrated with results from a study of the Einstein/massless scalar field equations in spherical symmetry.
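    The heart of the Berger/Oliger algorithm is its recursive time stepping: each refinement level takes r substeps of size dt/r for every coarse step, after which fine data are averaged back onto the coarse grid. A schematic sketch under assumed interfaces (level.finer, level.grid, and restrict_from are hypothetical names, not from the paper):

        def advance(level, dt, advance_one_step, refine_ratio=2):
            """Recursively advance an AMR hierarchy one coarse step (Berger-Oliger style).

            level            : object with .grid and an optional .finer child level
            advance_one_step : integrator for a single grid, e.g. a leap-frog step
            """
            advance_one_step(level.grid, dt)
            if level.finer is not None:
                for _ in range(refine_ratio):   # subcycle the finer level in time
                    advance(level.finer, dt / refine_ratio, advance_one_step, refine_ratio)
                level.grid.restrict_from(level.finer.grid)  # average fine data onto coarse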

  5. Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Shervani-Tabar, Navid; Vasilyev, Oleg V.

    2016-11-01

    This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods which rely on the unit normal vector, the Stabilized Conservative Level Set (SCLS) method uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the direction normal to the interface, thus preserving the conservative level set properties, while away from the interface the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches, due to the need for finer resolution in the vicinity of the interface than in the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt on steep gradients in the solution while retaining a predetermined order of accuracy.

  6. F-8C adaptive control law refinement and software development

    NASA Technical Reports Server (NTRS)

    Hartmann, G. L.; Stein, G.

    1981-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.

  7. FUN3D Grid Refinement and Adaptation Studies for the Ares Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.; Vasta, Veer; Carlson, Jan-Renee; Park, Mike; Mineck, Raymond E.

    2010-01-01

    This paper presents grid refinement and adaptation studies performed in conjunction with computational aeroelastic analyses of the Ares crew launch vehicle (CLV). The unstructured grids used in this analysis were created with GridTool and VGRID, while the adaptation was performed using the Computational Fluid Dynamics (CFD) code FUN3D with a feature-based adaptation software tool. GridTool was developed by ViGYAN, Inc., while the latter three software suites were developed by NASA Langley Research Center. The feature-based adaptation software used here operates by aligning control volumes with shock and Mach line structures and by refining/de-refining where necessary. It does not redistribute node points on the surface. This paper assesses the sensitivity of the complex flow field about a launch vehicle to grid refinement. It also assesses the potential of feature-based grid adaptation to improve the accuracy of CFD analysis for a complex launch vehicle configuration. The feature-based adaptation shows the potential to improve the resolution of shocks and shear layers. Further development of the capability to adapt the boundary layer and surface grids of a tetrahedral grid is required for significant improvements in modeling the flow field.

  8. 40 CFR 80.133 - Agreed-upon procedures for refiners and importers.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 40, Protection of Environment. Section 80.133, Agreed-upon procedures for refiners and importers: Environmental Protection Agency (Continued), Air Programs (Continued), Regulation of Fuels and Fuel Additives, Attest Engagements §...

  9. Refining the Measurement of Axis II: A Q-sort Procedure for Assessing Personality Pathology.

    ERIC Educational Resources Information Center

    Shedler, Jonathan; Westen, Drew

    1998-01-01

    Results from a study involving 153 clinicians who used the new Shedler-Westen Assessment Procedure (a Q-sort approach) and eight patient interviews suggest the usefulness of the SWAP to measure personality disorders and refine categories and criteria according to Axis II of the "Diagnostic and Statistical Manual of Mental Disorders"…

  10. Adaptive mesh refinement for time-domain electromagnetics using vector finite elements :a feasibility study.

    SciTech Connect

    Turner, C. David; Kotulski, Joseph Daniel; Pasik, Michael Francis

    2005-12-01

    This report investigates the feasibility of applying Adaptive Mesh Refinement (AMR) techniques to a vector finite element formulation for the wave equation in three dimensions. Possible error estimators are considered first. Next, approaches for refining tetrahedral elements are reviewed. AMR capabilities within the Nevada framework are then evaluated. We summarize our conclusions on the feasibility of AMR for time-domain vector finite elements and identify a path forward.

  11. Refinement trajectory and determination of eigenstates by a wavelet based adaptive method

    SciTech Connect

    Pipek, Janos; Nagy, Szilvia

    2006-11-07

    The detailed structure of the wave function is analyzed at various refinement levels using the methods of wavelet analysis. The eigenvalue problem of a model system is solved in granular Hilbert spaces, and the trajectory of the eigenstates is traced in terms of the resolution. An adaptive method is developed for identifying the fine-structure localization regions, where further refinement of the wave function is necessary.

  12. A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.

    1999-01-01

    The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.

  13. A User's Guide to AMR1D: An Instructional Adaptive Mesh Refinement Code for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    deFainchtein, Rosalinda

    1996-01-01

    This report documents the code AMR1D, which is currently posted on the World Wide Web (http://sdcd.gsfc.nasa.gov/ESS/exchange/contrib/de-fainchtein/adaptive_mesh_refinement.html). AMR1D is a one-dimensional finite element fluid-dynamics solver, capable of adaptive mesh refinement (AMR). It was written as an instructional tool for AMR on unstructured mesh codes. It is meant to illustrate the minimum requirements for AMR in more than one dimension. For that purpose, it uses the same type of data structure that would be necessary in a two-dimensional AMR code (loosely following the algorithm described by Lohner).

  14. Adaptive h-refinement for reduced-order models

    SciTech Connect

    Carlberg, Kevin T.

    2014-11-05

    Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
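    As a concrete picture of the offline tree construction described above, the following sketch (our own illustration, not the paper's code) recursively clusters state-variable indices with k-means on snapshot rows, so that each basis vector can later be split into children with disjoint support:

        import numpy as np
        from sklearn.cluster import KMeans

        def build_split_tree(snapshots, idx=None, k=2, min_size=4, depth=0, max_depth=3):
            """Recursively cluster state indices by their behavior across snapshots.

            snapshots : (n_states, n_snapshots) array of full-order solution data
            Returns a nested dict {"indices", "children"} defining disjoint supports.
            """
            if idx is None:
                idx = np.arange(snapshots.shape[0])
            node = {"indices": idx, "children": []}
            if depth < max_depth and len(idx) >= k * min_size:
                labels = KMeans(n_clusters=k, n_init=10).fit_predict(snapshots[idx])
                for c in range(k):
                    node["children"].append(build_split_tree(
                        snapshots, idx[labels == c], k, min_size, depth + 1, max_depth))
            return node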

  15. Thickness-based adaptive mesh refinement methods for multi-phase flow simulations with thin regions

    SciTech Connect

    Chen, Xiaodong; Yang, Vigor

    2014-07-15

    In numerical simulations of multi-scale, multi-phase flows, grid refinement is required to resolve regions with small scales. A notable example is liquid-jet atomization and subsequent droplet dynamics. It is essential to characterize the detailed flow physics with variable length scales with high fidelity, in order to elucidate the underlying mechanisms. In this paper, two thickness-based mesh refinement schemes are developed based on distance- and topology-oriented criteria for thin regions with confining wall/plane of symmetry and in any situation, respectively. Both techniques are implemented in a general framework with a volume-of-fluid formulation and an adaptive-mesh-refinement capability. The distance-oriented technique compares against a critical value, the ratio of an interfacial cell size to the distance between the mass center of the cell and a reference plane. The topology-oriented technique is developed from digital topology theories to handle more general conditions. The requirement for interfacial mesh refinement can be detected swiftly, without the need of thickness information, equation solving, variable averaging or mesh repairing. The mesh refinement level increases smoothly on demand in thin regions. The schemes have been verified and validated against several benchmark cases to demonstrate their effectiveness and robustness. These include the dynamics of colliding droplets, droplet motions in a microchannel, and atomization of liquid impinging jets. Overall, the thickness-based refinement technique provides highly adaptive meshes for problems with thin regions in an efficient and fully automatic manner.

  16. Multilevel Error Estimation and Adaptive h-Refinement for Cartesian Meshes with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    This paper presents the development of a mesh adaptation module for a multilevel Cartesian solver. While the module allows mesh refinement to be driven by a variety of different refinement parameters, a central feature in its design is the incorporation of a multilevel error estimator based upon direct estimates of the local truncation error using tau-extrapolation. This error indicator exploits the fact that in regions of uniform Cartesian mesh, the spatial operator is exactly the same on the fine and coarse grids, and local truncation error estimates can be constructed by evaluating the residual on the coarse grid of the restricted solution from the fine grid. A new strategy for adaptive h-refinement is also developed to prevent errors in smooth regions of the flow from being masked by shocks and other discontinuous features. For certain classes of error histograms, this strategy is optimal for achieving equidistribution of the refinement parameters on hierarchical meshes, and therefore ensures grid-converged solutions will be achieved for appropriately chosen refinement parameters. The robustness and accuracy of the adaptation module is demonstrated using both simple model problems and complex three-dimensional examples using meshes with 10^6 to 10^7 cells.
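    The key trick, evaluating the coarse-grid residual of the restricted fine-grid solution, can be written down in a few lines for a 1D model operator (a sketch under our own simplifying assumptions, not the paper's solver):

        import numpy as np

        def truncation_error_estimate(u_fine, h, operator):
            """Estimate local truncation error via tau-extrapolation (1D sketch).

            u_fine   : solution on the fine grid (spacing h)
            operator : discrete spatial operator L(u, h), same stencil on both grids
            """
            u_coarse = u_fine[::2]               # restrict by injection (vertex-aligned grids)
            tau = operator(u_coarse, 2 * h)      # coarse-grid residual of restricted solution
            return tau - operator(u_fine, h)[::2]  # difference isolates the truncation error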

  17. Adaptive Mesh Refinement With Spectral Accuracy for Magnetohydrodynamics in Two Space Dimensions

    NASA Astrophysics Data System (ADS)

    Rosenberg, D.; Pouquet, A.; Mininni, P.

    2006-12-01

    We examine the effect of accuracy of high-order adaptive mesh refinement (AMR) in the context of a classical configuration of magnetic reconnection in two space dimensions, the so-called Orszag-Tang vortex, made up of a magnetic X-point centered on a stagnation point of the velocity. A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code is applied to simulate this problem. The MHD solver is explicit, and uses the Elsasser formulation on high-order elements. It automatically takes advantage of the adaptive grid mechanics that have been described elsewhere [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys. 215, 59-80 (2006)] in the fluid context, allowing both statically refined and dynamically refined grids. Comparisons with pseudo-spectral computations are performed. Refinement and coarsening criteria are examined, and several tests are described. We show that low-order truncation, even with a comparable number of global degrees of freedom, fails to correctly model some strong (inf-norm) quantities in this problem, even though it adequately satisfies the weak (integrated) balance diagnostics.

  18. A conforming to interface structured adaptive mesh refinement technique for modeling fracture problems

    NASA Astrophysics Data System (ADS)

    Soghrati, Soheil; Xiao, Fei; Nagarajan, Anand

    2016-12-01

    A Conforming to Interface Structured Adaptive Mesh Refinement (CISAMR) technique is introduced for the automated transformation of a structured grid into a conforming mesh with appropriate element aspect ratios. The CISAMR algorithm is composed of three main phases: (i) Structured Adaptive Mesh Refinement (SAMR) of the background grid; (ii) r-adaptivity of the nodes of elements cut by the crack; (iii) sub-triangulation of the elements deformed during the r-adaptivity process and those with hanging nodes generated during the SAMR process. The required considerations for the treatment of crack tips and branching cracks are also discussed in this manuscript. Regardless of the complexity of the problem geometry and without using iterative smoothing or optimization techniques, CISAMR ensures that aspect ratios of conforming elements are lower than three. Multiple numerical examples are presented to demonstrate the application of CISAMR for modeling linear elastic fracture problems with intricate morphologies.

  20. A GPU implementation of adaptive mesh refinement to simulate tsunamis generated by landslides

    NASA Astrophysics Data System (ADS)

    de la Asunción, Marc; Castro, Manuel J.

    2016-04-01

    In this work we propose a CUDA implementation for the simulation of landslide-generated tsunamis using a two-layer Savage-Hutter type model and adaptive mesh refinement (AMR). The AMR method consists of dynamically increasing the spatial resolution of the regions of interest of the domain while keeping the rest of the domain at low resolution, thus obtaining better runtimes and similar results compared to increasing the spatial resolution of the entire domain. Our AMR implementation uses a patch-based approach, it supports up to three levels, power-of-two ratios of refinement, different refinement criteria and also several user parameters to control the refinement and clustering behaviour. A strategy based on the variation of the cell values during the simulation is used to interpolate and propagate the values of the fine cells. Several numerical experiments using artificial and realistic scenarios are presented.

  1. Adaptive local grid refinement for the compressible 3-D Euler equations

    NASA Astrophysics Data System (ADS)

    Schoenfeld, Thilo

    A method is presented based on a three-dimensional Euler code, using the explicit finite volume technique and a Runge-Kutta scheme, and applied in an adaptive version to the transonic flow around wings. The method allows embedded subgrids at two levels of refinement. Computations are performed both with various fixed refined grids and in an adaptive version applying a pressure- or density-gradient sensor. Comparing embedded-grid computations with calculations on a single coarse or fine mesh shows that the local grid refinement technique is an effective framework for obtaining well-resolved solutions with a minimum of grid points.

  2. Advances in Rotor Performance and Turbulent Wake Simulation Using DES and Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    2012-01-01

    Time-dependent Navier-Stokes simulations have been carried out for a rigid V22 rotor in hover, and a flexible UH-60A rotor in forward flight. Emphasis is placed on understanding and characterizing the effects of high-order spatial differencing, grid resolution, and Spalart-Allmaras (SA) detached eddy simulation (DES) in predicting the rotor figure of merit (FM) and resolving the turbulent rotor wake. The FM was accurately predicted within experimental error using SA-DES. Moreover, a new adaptive mesh refinement (AMR) procedure revealed a complex and more realistic turbulent rotor wake, including the formation of turbulent structures resembling vortical worms. Time-dependent flow visualization played a crucial role in understanding the physical mechanisms involved in these complex viscous flows. The predicted vortex core growth with wake age was in good agreement with experiment. High-resolution wakes for the UH-60A in forward flight exhibited complex turbulent interactions and turbulent worms, similar to the V22. The normal force and pitching moment coefficients were in good agreement with flight-test data.

  3. Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.

  4. Analysis of Adaptive Mesh Refinement for IMEX Discontinuous Galerkin Solutions of the Compressible Euler Equations with Application to Atmospheric Simulations

    DTIC Science & Technology

    2013-01-01

    …high-order discontinuous Galerkin method on quadrilateral grids with non-conforming elements. We perform a detailed analysis of the cost of AMR by comparing… Keywords: adaptive mesh refinement, discontinuous Galerkin method, non-conforming mesh, IMEX, compressible Euler equations, atmospheric simulations.

  5. Adaptively-refined overlapping grids for the numerical solution of systems of hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.

    1995-01-01

    Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods have been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of boundary geometry is important. The complex geometry is represented by using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.

  6. A Parallel Ocean Model With Adaptive Mesh Refinement Capability For Global Ocean Prediction

    SciTech Connect

    Herrnstein, Aaron R.

    2005-12-01

    An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No

  7. COMET-AR User's Manual: COmputational MEchanics Testbed with Adaptive Refinement

    NASA Technical Reports Server (NTRS)

    Moas, E. (Editor)

    1997-01-01

    The COMET-AR User's Manual provides a reference manual for the Computational Structural Mechanics Testbed with Adaptive Refinement (COMET-AR), a software system developed jointly by Lockheed Palo Alto Research Laboratory and NASA Langley Research Center under contract NAS1-18444. The COMET-AR system is an extended version of an earlier finite element based structural analysis system called COMET, also developed by Lockheed and NASA. The primary extensions are the adaptive mesh refinement capabilities and a new "object-like" database interface that makes COMET-AR easier to extend further. This User's Manual provides a detailed description of the user interface to COMET-AR from the viewpoint of a structural analyst.

  8. Adaptive mesh refinement and multilevel iteration for multiphase, multicomponent flow in porous media

    SciTech Connect

    Hornung, R.D.

    1996-12-31

    An adaptive local mesh refinement (AMR) algorithm originally developed for unsteady gas dynamics is extended to multi-phase flow in porous media. Within the AMR framework, we combine specialized numerical methods to treat the different aspects of the partial differential equations. Multi-level iteration and domain decomposition techniques are incorporated to accommodate elliptic/parabolic behavior. High-resolution shock capturing schemes are used in the time integration of the hyperbolic mass conservation equations. When combined with AMR, these numerical schemes provide high resolution locally in a more efficient manner than if they were applied on a uniformly fine computational mesh. We will discuss the interplay of physical, mathematical, and numerical concerns in the application of adaptive mesh refinement to flow in porous media problems of practical interest.

  9. Implementation of Implicit Adaptive Mesh Refinement in an Unstructured Finite-Volume Flow Solver

    NASA Technical Reports Server (NTRS)

    Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.

    2013-01-01

    This paper explores the implementation of adaptive mesh refinement in an unstructured, finite-volume solver. Unsteady and steady problems are considered. The effect on the recovery of high-order numerics is explored, and the results are favorable. Important to this work is the ability to provide a path for efficient, implicit time advancement. A method using a simple refinement sensor based on undivided differences is discussed and applied to a practical problem: a shock-shock interaction on a hypersonic, inviscid double wedge. Cases are compared to uniform grids without the use of adapted meshes in order to assess error and computational expense. Discussion of difficulties, advances, and future work prepares this method for additional research. The potential for this method in more complicated flows is described.
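    A refinement sensor based on undivided differences is typically a normalized jump in a flow quantity between neighboring cells; a minimal 1D version (our own illustration, not the paper's code) could read:

        import numpy as np

        def gradient_sensor(q, threshold=0.05):
            """Flag cells whose normalized undivided difference of q exceeds threshold."""
            dq = np.abs(np.diff(q))
            sensor = dq / (np.abs(q[:-1]) + np.abs(q[1:]) + 1e-12)  # avoid division by zero
            flags = np.zeros_like(q, dtype=bool)
            flags[:-1] |= sensor > threshold   # flag both cells sharing a large jump
            flags[1:] |= sensor > threshold
            return flags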

  10. General relativistic hydrodynamics with Adaptive-Mesh Refinement (AMR) and modeling of accretion disks

    NASA Astrophysics Data System (ADS)

    Donmez, Orhan

    We present a general procedure for solving the General Relativistic Hydrodynamical (GRH) equations with Adaptive Mesh Refinement (AMR) and for modeling an accretion disk around a black hole. To do this, the GRH equations are written in a conservative form to exploit their hyperbolic character. The numerical solution of the general relativistic hydrodynamic equations is carried out with High-Resolution Shock-Capturing (HRSC) schemes, specifically designed to solve nonlinear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. We use Marquina fluxes with MUSCL left and right states to solve the GRH equations. First, we carry out different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations to verify the second-order convergence of the code in 1D, 2D, and 3D. Second, we solve the GRH equations and use general relativistic test problems to compare the numerical solutions with analytic ones. To do so, we couple the flux part of the GRH equations with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time. The test problems examined include shock tubes, geodesic flows, and circular motion of a particle around a black hole. Finally, we apply this code to accretion disk problems around a black hole using the Schwarzschild metric as the background of the computational domain. We find spiral shocks on the accretion disk; these are observationally expected results. We also examine the star-disk interaction near a massive black hole. We find that when stars are ground down or a hole is punched in the accretion disk, shock waves are created which destroy the accretion disk.
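    The Strang splitting used to couple the flux and source parts preserves second-order accuracy by symmetrizing the sub-steps; schematically (a sketch with hypothetical update functions, not the authors' code):

        def strang_step(u, dt, advance_flux, advance_source):
            """Advance u by dt with Strang splitting: half source, full flux, half source.

            advance_flux(u, dt)   : conservative update from the hyperbolic flux terms
            advance_source(u, dt) : update from the geometric source terms
            """
            u = advance_source(u, 0.5 * dt)
            u = advance_flux(u, dt)
            return advance_source(u, 0.5 * dt)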

  11. An angularly refineable phase space finite element method with approximate sweeping procedure

    SciTech Connect

    Kophazi, J.; Lathouwers, D.

    2013-07-01

    An angularly refineable phase space finite element method is proposed to solve the neutron transport equation. The method combines the advantages of two recently published schemes. The angular domain is discretized into small patches, and patch-wise discontinuous angular basis functions are restricted to these patches, i.e., there is no overlap between basis functions corresponding to different patches. This approach yields block-diagonal Jacobians with small block size and retains the possibility of S_n-like approximate sweeping of the spatially discontinuous elements, in order to provide efficient preconditioners for the solution procedure. On the other hand, the preservation of the full FEM framework (as opposed to collocation into a high-order S_n scheme) retains the possibility of the Galerkin interpolated connection between phase space elements at arbitrary levels of discretization. Since the basis vectors are not orthonormal, a generalization of the Riemann procedure is introduced to separate the incoming and outgoing contributions in the case of unstructured meshes. However, due to the properties of the angular discretization, the Riemann procedure can be avoided at a large fraction of the faces, and this fraction rapidly increases as the level of refinement increases, contributing to the computational efficiency. In this paper the properties of the discretization scheme are studied with uniform refinement, using an iterative solver based on the S_2 sweep order of the spatial elements. The fourth-order convergence of the scalar flux is shown, as anticipated from earlier schemes, and the rapidly decreasing fraction of required Riemann faces is illustrated.

  12. Adaptive Distributed Environment for Procedure Training (ADEPT)

    NASA Technical Reports Server (NTRS)

    Domeshek, Eric; Ong, James; Mohammed, John

    2013-01-01

    ADEPT (Adaptive Distributed Environment for Procedure Training) is designed to provide more effective, flexible, and portable training for NASA systems controllers. When creating a training scenario, an exercise author can specify a representative rationale structure using the graphical user interface, annotating the results with instructional texts where needed. The author's structure may distinguish between essential and optional parts of the rationale, and may also include "red herrings" - hypotheses that are essential to consider, until evidence and reasoning allow them to be ruled out. The system is built from pre-existing components, including Stottler Henke's SimVentive instructional simulation authoring tool and runtime. To that, a capability was added to author and exploit explicit control decision rationale representations. ADEPT uses SimVentive's Scalable Vector Graphics (SVG)-based interactive graphic display capability as the basis of the tool for quickly noting aspects of decision rationale in graph form. The ADEPT prototype is built in Java, and will run on any computer using Windows, MacOS, or Linux. No special peripheral equipment is required. The software enables a style of student/tutor interaction focused on the reasoning behind systems control behavior that better mimics proven Socratic human tutoring behaviors for highly cognitive skills. It supports fast, easy, and convenient authoring of such tutoring behaviors, allowing specification of detailed scenario-specific, but context-sensitive, high-quality tutor hints and feedback. The system places relatively light data-entry demands on the student to enable its rationale-centered discussions, and provides a support mechanism for fostering coherence in the student/tutor dialog by including focusing, sequencing, and utterance tuning mechanisms intended to better fit tutor hints and feedback into the ongoing context.

  13. Error estimation and adaptive mesh refinement for parallel analysis of shell structures

    NASA Technical Reports Server (NTRS)

    Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

    1994-01-01

    The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers, where access to neighboring elements residing on different processors may incur significant overhead. In addition, such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on a variational formulation of the element stiffness and are element-dependent; their derivations are retained for developmental purposes. The other two indicators match or exceed the first two in performance but require no special formulation of the element stiffness, and they are used to drive mesh refinement, which we demonstrate for two-dimensional plane stress problems. The parallelization of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated on two-dimensional plane-stress and three-dimensional shell problems.
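
    As a concrete illustration (a minimal sketch, not the authors' projection onto intrinsic displacement modes), an element-level, element-independent indicator can be built from nothing more than the element stiffness matrix and its nodal displacements, which is what makes the approach attractive on parallel machines:

        import numpy as np

        def element_energy_indicator(K_e, u_e):
            # Strain energy of one element from its stiffness matrix K_e and
            # nodal displacement vector u_e; no neighbor element data needed.
            return 0.5 * u_e @ K_e @ u_e

        def mark_for_refinement(indicators, fraction=0.3):
            # Flag elements carrying the largest share of the error proxy.
            threshold = fraction * max(indicators)
            return [i for i, eta in enumerate(indicators) if eta > threshold]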

  14. Least-squares spectral element solution of incompressible Navier-Stokes equations with adaptive refinement

    NASA Astrophysics Data System (ADS)

    Ozcelikkale, Altug; Sert, Cuneyt

    2012-05-01

    Least-squares spectral element solutions of steady, two-dimensional, incompressible flows are obtained by approximating the velocity, pressure and vorticity variable set on Gauss-Lobatto-Legendre nodes. The Constrained Approximation Method is used for h- and p-type nonconforming interfaces of quadrilateral elements. Adaptive solutions are obtained using a posteriori error estimates based on the least-squares functional and the spectral coefficients. Effective use of p-refinement to overcome the poor mass conservation drawback of the least-squares formulation, and the successful combined use of h- and p-refinement to solve problems with geometric singularities, are demonstrated. Capabilities and limitations of the developed code are presented using Kovasznay flow, flow past a circular cylinder in a channel, and backward-facing step flow.

  15. ADER-WENO finite volume schemes with space-time adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael; Zanotti, Olindo; Hidalgo, Arturo; Balsara, Dinshaw S.

    2013-09-01

    We present the first high order one-step ADER-WENO finite volume scheme with adaptive mesh refinement (AMR) in multiple space dimensions. High order spatial accuracy is obtained through a WENO reconstruction, while a high order one-step time discretization is achieved using a local space-time discontinuous Galerkin predictor method. Due to the one-step nature of the underlying scheme, the resulting algorithm is particularly well suited for an AMR strategy on space-time adaptive meshes, i.e. with time-accurate local time stepping. The AMR property has been implemented 'cell-by-cell', with a standard tree-type algorithm, while the scheme has been parallelized via the message passing interface (MPI) paradigm. The new scheme has been tested over a wide range of examples for nonlinear systems of hyperbolic conservation laws, including the classical Euler equations of compressible gas dynamics and the equations of magnetohydrodynamics (MHD). High order of accuracy in space and time has been confirmed via a numerical convergence study, and a detailed analysis of the computational speed-up with respect to highly refined uniform meshes is also presented. We also show test problems where the presented high order AMR scheme behaves clearly better than traditional second order AMR methods. The proposed scheme, which combines for the first time high order ADER methods with space-time adaptive grids in two and three space dimensions, is likely to become a useful tool in several fields of computational physics, applied mathematics and mechanics.
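
    For reference, the WENO reconstruction underlying such schemes is compact enough to sketch. The following Python function (a standard fifth-order Jiang-Shu reconstruction, not the paper's ADER predictor or AMR machinery) returns the left-biased interface value from five cell averages:

        def weno5_left(v0, v1, v2, v3, v4, eps=1e-6):
            # Reconstruct u at interface i+1/2 from cell averages u[i-2..i+2].
            # Candidate third-order reconstructions on three substencils:
            p0 = (2*v0 - 7*v1 + 11*v2) / 6.0
            p1 = (-v1 + 5*v2 + 2*v3) / 6.0
            p2 = (2*v2 + 5*v3 - v4) / 6.0
            # Smoothness indicators of the substencils:
            b0 = 13/12*(v0 - 2*v1 + v2)**2 + 0.25*(v0 - 4*v1 + 3*v2)**2
            b1 = 13/12*(v1 - 2*v2 + v3)**2 + 0.25*(v1 - v3)**2
            b2 = 13/12*(v2 - 2*v3 + v4)**2 + 0.25*(3*v2 - 4*v3 + v4)**2
            # Nonlinear weights biased toward the smooth substencils:
            a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
            return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)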

  16. Interactive solution-adaptive grid generation procedure

    NASA Technical Reports Server (NTRS)

    Henderson, Todd L.; Choo, Yung K.; Lee, Ki D.

    1992-01-01

    TURBO-AD is an interactive solution-adaptive grid generation program under development. The program combines an interactive algebraic grid generation technique and a solution-adaptive grid generation technique into a single interactive package. The control point form uses a sparse collection of control points to algebraically generate a field grid. This technique provides local grid control capability and is well suited to interactive work due to its speed and efficiency. A mapping from the physical domain to a parametric domain is used to alleviate difficulties encountered near outwardly concave boundaries in the control point technique. Therefore, all grid modifications are performed on the unit square in the parametric domain, and the new adapted grid is then mapped back to the physical domain. The grid adaptation is achieved by adapting the control points to a numerical solution in the parametric domain using control sources obtained from the flow properties. Then a new modified grid is generated from the adapted control net. This process is efficient because the number of control points is much smaller than the number of grid points and the generation of the grid is an efficient algebraic process. TURBO-AD provides the user with both local and global controls.

  17. Adaptive Mesh Refinement and Adaptive Time Integration for Electrical Wave Propagation on the Purkinje System.

    PubMed

    Ying, Wenjun; Henriquez, Craig S

    2015-01-01

    An algorithm that is adaptive in both space and time is presented for simulating electrical wave propagation in the Purkinje system of the heart. The equations governing the distribution of electric potential over the system are solved in time with the method of lines. At each timestep, by an operator splitting technique, the space-dependent but linear diffusion part and the nonlinear but space-independent reaction part of the partial differential equations are integrated separately with implicit schemes, which have better stability and allow larger timesteps than explicit ones. The linear diffusion equation on each edge of the system is spatially discretized with the continuous piecewise linear finite element method. The adaptive algorithm can automatically recognize when and where the electrical wave starts to leave or enter the computational domain due to external current/voltage stimulation, self-excitation, or local change of membrane properties. Numerical examples demonstrating the efficiency and accuracy of the adaptive algorithm are presented.
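
    A minimal one-dimensional sketch of this splitting (not the authors' code; the cubic reaction f(u) = u(1-u)(u-a) is a hypothetical stand-in for the membrane model) advances the linear diffusion part with backward Euler on an edge and the pointwise reaction part with a few Newton iterations per node:

        import numpy as np

        def split_step(u, dt, dx, D=1.0, a=0.1):
            n = len(u)
            # Implicit (backward Euler) diffusion: solve (I - dt*D*L) v = u,
            # with L the 1D Laplacian and zero-flux ends.
            r = dt * D / dx**2
            A = np.eye(n) * (1 + 2*r)
            for i in range(n - 1):
                A[i, i+1] = A[i+1, i] = -r
            A[0, 0] = A[-1, -1] = 1 + r
            u = np.linalg.solve(A, u)
            # Implicit reaction step, solved nodewise by Newton iteration.
            v = u.copy()
            for _ in range(5):
                f = v*(1 - v)*(v - a)
                df = -3*v**2 + 2*(1 + a)*v - a
                v = v - (v - u - dt*f) / (1 - dt*df)
            return v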

  18. Testing Refinement Criteria in Adaptive Discontinuous Galerkin Simulations of Dry Atmospheric Convection

    DTIC Science & Technology

    Müller, Andreas; Behrens, Jörn; Giraldo, Francis X.; Wirth, Volkmar

    2011-12-22

  19. Using high-order methods on adaptively refined block-structured meshes - discretizations, interpolations, and filters.

    SciTech Connect

    Ray, Jaideep; Lefantzi, Sophia; Najm, Habib N.; Kennedy, Christopher A.

    2006-01-01

    Block-structured adaptively refined meshes (SAMR) strive for efficient resolution of partial differential equations (PDEs) solved on large computational domains by clustering mesh points only where required by large gradients. Previous work has indicated that fourth-order convergence can be achieved on such meshes by using a suitable combination of high-order discretizations, interpolations, and filters, and that this combination can deliver significant computational savings over conventional second-order methods at engineering error tolerances. In this paper, we explore the interactions between the errors introduced by discretizations, interpolations and filters. We develop general expressions for high-order discretizations, interpolations, and filters, in multiple dimensions, using a Fourier approach, facilitating the high-order SAMR implementation. We derive a formulation for the necessary interpolation order for given discretization and derivative orders. We also illustrate this order relationship empirically using one- and two-dimensional model problems on refined meshes. We study the observed increase in accuracy with increasing interpolation order. We also examine the empirically observed order of convergence, as the effective resolution of the mesh is increased by successively adding levels of refinement, with different orders of discretization, interpolation, or filtering.
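
    The empirical order studies mentioned above typically rest on the standard two-grid estimate; a minimal version (assuming errors measured against an exact or highly resolved reference solution):

        import math

        def observed_order(e_coarse, e_fine, ratio=2.0):
            # Observed convergence order from errors on two meshes whose
            # spacings differ by 'ratio' (e.g. one extra refinement level).
            return math.log(e_coarse / e_fine) / math.log(ratio)

        # Errors of 1.6e-3 and 1.1e-4 on grids h and h/2 give order ~ 3.9,
        # consistent with a fourth-order method.
        print(observed_order(1.6e-3, 1.1e-4))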

  1. A new refinement indicator for adaptive parameterization: Application to the estimation of the diffusion coefficient in an elliptic problem

    NASA Astrophysics Data System (ADS)

    Hayek, Mohamed; Ackerer, Philippe; Sonnendrücker, Éric

    2009-02-01

    We propose a new refinement indicator (NRI) for adaptive parameterization to determine the diffusion coefficient in an elliptic equation in two-dimensional space. The diffusion coefficient is assumed to be a piecewise constant space function. The unknowns are both the parameter values and the zonation. Refinement indicators are used to localize parameter discontinuities in order to construct the zonation (parameterization) iteratively. The refinement indicator is usually obtained from the first-order effect on the objective function of removing degrees of freedom for a current set of parameters. In this work, in order to reduce the computational cost, we propose a new refinement indicator based on the second-order effect on the objective function. This new refinement indicator depends on the objective function and on its first and second derivatives with respect to the parameter constraints. Numerical experiments show the high efficiency of the new refinement indicator compared to the standard one.

  2. MGGHAT: Elliptic PDE software with adaptive refinement, multigrid and high order finite elements

    NASA Technical Reports Server (NTRS)

    Mitchell, William F.

    1993-01-01

    MGGHAT (MultiGrid Galerkin Hierarchical Adaptive Triangles) is a program for the solution of linear second order elliptic partial differential equations in two dimensional polygonal domains. This program is now available for public use. It is a finite element method with linear, quadratic or cubic elements over triangles. The adaptive refinement via newest vertex bisection and the multigrid iteration are both based on a hierarchical basis formulation. Visualization is available at run time through an X Window display, and a posteriori through output files that can be used as GNUPLOT input. In this paper, we describe the methods used by MGGHAT, define the problem domain for which it is appropriate, illustrate use of the program, show numerical and graphical examples, and explain how to obtain the software.
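
    The newest vertex bisection rule used by MGGHAT is simple to state: each triangle stores its newest vertex, and the refinement edge is the edge opposite it. A minimal Python sketch (index conventions are illustrative, not MGGHAT's data structures):

        def bisect(tri, vertices):
            # 'tri' is (i0, i1, i2) with the newest vertex last, so the
            # refinement edge is (i0, i1); split it at its midpoint.
            i0, i1, i2 = tri
            (x0, y0), (x1, y1) = vertices[i0], vertices[i1]
            vertices.append(((x0 + x1) / 2.0, (y0 + y1) / 2.0))
            m = len(vertices) - 1
            # Each child lists the midpoint last: it is now the newest vertex.
            return (i2, i0, m), (i1, i2, m)

        vertices = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
        children = bisect((0, 1, 2), vertices)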

  3. Arbitrary Lagrangian-Eulerian Method with Local Structured Adaptive Mesh Refinement for Modeling Shock Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2001-10-22

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for the solution of the Euler equations. By focusing computational resources where they are required through dynamic adaption, this method facilitates the solution of problems currently at and beyond the reach of traditional ALE methods. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.

  4. Parallel computation of three-dimensional flows using overlapping grids with adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Henshaw, William D.; Schwendeman, Donald W.

    2008-08-01

    This paper describes an approach for the numerical solution of time-dependent partial differential equations in complex three-dimensional domains. The domains are represented by overlapping structured grids, and block-structured adaptive mesh refinement (AMR) is employed to locally increase the grid resolution. In addition, the numerical method is implemented on parallel distributed-memory computers using a domain-decomposition approach. The implementation is flexible so that each base grid within the overlapping grid structure and its associated refinement grids can be independently partitioned over a chosen set of processors. A modified bin-packing algorithm is used to specify the partition for each grid so that the computational work is evenly distributed amongst the processors. All components of the AMR algorithm such as error estimation, regridding, and interpolation are performed in parallel. The parallel time-stepping algorithm is illustrated for initial-boundary-value problems involving a linear advection-diffusion equation and the (nonlinear) reactive Euler equations. Numerical results are presented for both equations to demonstrate the accuracy and correctness of the parallel approach. Exact solutions of the advection-diffusion equation are constructed, and these are used to check the corresponding numerical solutions for a variety of tests involving different overlapping grids, different numbers of refinement levels and refinement ratios, and different numbers of processors. The problem of planar shock diffraction by a sphere is considered as an illustration of the numerical approach for the Euler equations, and a problem involving the initiation of a detonation from a hot spot in a T-shaped pipe is considered to demonstrate the numerical approach for the reactive case. For both problems, the accuracy of the numerical solutions is assessed quantitatively through an estimation of the errors from a grid convergence study. The parallel performance of the approach is examined in detail for the shock diffraction problem.
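
    The load-balancing step can be pictured with a plain greedy bin-packing heuristic (a simplification; the paper's modified bin-packing algorithm is not reproduced here): grids are sorted by estimated work and assigned, largest first, to the currently least-loaded processor.

        import heapq

        def partition(grid_work, n_proc):
            # grid_work maps grid id -> estimated work (e.g. cell count).
            heap = [(0.0, rank, []) for rank in range(n_proc)]
            heapq.heapify(heap)
            for gid, work in sorted(grid_work.items(), key=lambda kv: -kv[1]):
                load, rank, grids = heapq.heappop(heap)
                grids.append(gid)
                heapq.heappush(heap, (load + work, rank, grids))
            return {rank: grids for _, rank, grids in heap}

        print(partition({'base': 8.0, 'ref1': 5.0, 'ref2': 5.0, 'ref3': 2.0}, 2))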

  5. Parallel Computation of Three-Dimensional Flows using Overlapping Grids with Adaptive Mesh Refinement

    SciTech Connect

    Henshaw, W; Schwendeman, D

    2007-11-15

    This paper describes an approach for the numerical solution of time-dependent partial differential equations in complex three-dimensional domains. The domains are represented by overlapping structured grids, and block-structured adaptive mesh refinement (AMR) is employed to locally increase the grid resolution. In addition, the numerical method is implemented on parallel distributed-memory computers using a domain-decomposition approach. The implementation is flexible so that each base grid within the overlapping grid structure and its associated refinement grids can be independently partitioned over a chosen set of processors. A modified bin-packing algorithm is used to specify the partition for each grid so that the computational work is evenly distributed amongst the processors. All components of the AMR algorithm such as error estimation, regridding, and interpolation are performed in parallel. The parallel time-stepping algorithm is illustrated for initial-boundary-value problems involving a linear advection-diffusion equation and the (nonlinear) reactive Euler equations. Numerical results are presented for both equations to demonstrate the accuracy and correctness of the parallel approach. Exact solutions of the advection-diffusion equation are constructed, and these are used to check the corresponding numerical solutions for a variety of tests involving different overlapping grids, different numbers of refinement levels and refinement ratios, and different numbers of processors. The problem of planar shock diffraction by a sphere is considered as an illustration of the numerical approach for the Euler equations, and a problem involving the initiation of a detonation from a hot spot in a T-shaped pipe is considered to demonstrate the numerical approach for the reactive case. For both problems, the solutions are shown to be well resolved on the finest grid. The parallel performance of the approach is examined in detail for the shock diffraction problem.

  6. Refinements in husbandry, care and common procedures for non-human primates: Ninth report of the BVAAWF/FRAME/RSPCA/UFAW Joint Working Group on Refinement.

    PubMed

    Jennings, M; Prescott, M J; Buchanan-Smith, Hannah M; Gamble, Malcolm R; Gore, Mauvis; Hawkins, Penny; Hubrecht, Robert; Hudson, Shirley; Jennings, Maggy; Keeley, Joanne R; Morris, Keith; Morton, David B; Owen, Steve; Pearce, Peter C; Prescott, Mark J; Robb, David; Rumble, Rob J; Wolfensohn, Sarah; Buist, David

    2009-04-01

    Preface: Whenever animals are used in research, minimizing pain and distress and promoting good welfare should be as important an objective as achieving the experimental results. This is important for humanitarian reasons, for good science, for economic reasons and in order to satisfy the broad legal principles in international legislation. It is possible to refine both husbandry and procedures to minimize suffering and improve welfare in a number of ways, and this can be greatly facilitated by ensuring that up-to-date information is readily available. The need to provide such information led the British Veterinary Association Animal Welfare Foundation (BVAAWF), the Fund for the Replacement of Animals in Medical Experiments (FRAME), the Royal Society for the Prevention of Cruelty to Animals (RSPCA) and the Universities Federation for Animal Welfare (UFAW) to establish a Joint Working Group on Refinement (JWGR) in the UK. The chair is Professor David Morton and the secretariat is provided by the RSPCA. This report is the ninth in the JWGR series. The RSPCA is opposed to the use of animals in experiments that cause pain, suffering, distress or lasting harm and together with FRAME has particular concerns about the continued use of non-human primates. The replacement of primate experiments is a primary goal for the RSPCA and FRAME. However, both organizations share with others in the Working Group the common aim of replacing primate experiments wherever possible, reducing suffering and improving welfare while primate use continues. The reports of the refinement workshops are intended to help achieve these aims. This report, produced by the BVAAWF/FRAME/RSPCA/UFAW Joint Working Group on Refinement (JWGR), sets out practical guidance on refining the husbandry, care and common procedures for non-human primates.

  7. A Domain-Decomposed Multilevel Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2000-01-01

    Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.

  8. Axisymmetric modeling of cometary mass loading on an adaptively refined grid: MHD results

    NASA Technical Reports Server (NTRS)

    Gombosi, Tamas I.; Powell, Kenneth G.; De Zeeuw, Darren L.

    1994-01-01

    The first results of an axisymmetric magnetohydrodynamic (MHD) model of the interaction of an expanding cometary atmosphere with the solar wind are presented. The model assumes that far upstream the plasma flow lines are parallel to the magnetic field vector. The effects of mass loading and ion-neutral friction are taken into account by the governing equations, which are solved on an adaptively refined unstructured grid using a Monotone Upstream Centered Schemes for Conservation Laws (MUSCL)-type numerical technique. The combination of the adaptive refinement with the MUSCL scheme allows the entire cometary atmosphere to be modeled, while still resolving both the shock and the near-nucleus region of the comet. The main findings are the following: (1) A shock is formed approximately 0.45 Mkm upstream of the comet (its location is controlled by the sonic and Alfvenic Mach numbers of the ambient solar wind flow and by the cometary mass addition rate). (2) A contact surface is formed approximately 5,600 km upstream of the nucleus, separating an outward expanding cometary ionosphere from the nearly stagnating solar wind flow. The location of the contact surface is controlled by the upstream flow conditions, the mass loading rate and the ion-neutral drag. The contact surface is also the boundary of the diamagnetic cavity. (3) A closed inner shock terminates the supersonic expansion of the cometary ionosphere. This inner shock is closer to the nucleus on the dayside than on the nightside.

  9. AMRSim: an object-oriented performance simulator for parallel adaptive mesh refinement

    SciTech Connect

    Miller, B; Philip, B; Quinlan, D; Wissink, A

    2001-01-08

    Adaptive mesh refinement is complicated by both the algorithms and the dynamic nature of the computations. In parallel, the complexity of getting good performance depends upon the architecture and the application. Most attempts to address the complexity of AMR have led to the development of library solutions, most of them object-oriented libraries or frameworks. All attempts to date have made numerous and sometimes conflicting assumptions, which makes the evaluation of the performance of AMR across different applications and architectures difficult or impracticable. The evaluation of different approaches can alternatively be accomplished through simulation of the different AMR processes. In this paper we outline our research work to simulate the processing of adaptive mesh refinement grids using a distributed array class library (P++). This paper presents a combined analytic and empirical approach, since details of the algorithms can be readily predicted (separated into specific phases), while the performance associated with the dynamic behavior must be studied empirically. The result, AMRSim, provides a simple way to develop bounds on the expected performance of AMR calculations subject to constraints given by the algorithms, frameworks, and architecture.

  10. Parallel grid library with adaptive mesh refinement for development of highly scalable simulations

    NASA Astrophysics Data System (ADS)

    Honkonen, I.; von Alfthan, S.; Sandroos, A.; Janhunen, P.; Palmroth, M.

    2012-04-01

    As the single CPU core performance is saturating while the number of cores in the fastest supercomputers increases exponentially, the parallel performance of simulations on distributed memory machines is crucial. At the same time, utilizing efficiently the large number of available cores presents a challenge, especially in simulations with run-time adaptive mesh refinement. We have developed a generic grid library (dccrg) aimed at finite volume simulations that is easy to use and scales well up to tens of thousands of cores. The grid has several attractive features: It 1) allows an arbitrary C++ class or structure to be used as cell data; 2) provides a simple interface for adaptive mesh refinement during a simulation; 3) encapsulates the details of MPI communication when updating the data of neighboring cells between processes; and 4) provides a simple interface to run-time load balancing, e.g. domain decomposition, through the Zoltan library. Dccrg is freely available for anyone to use, study and modify under the GNU Lesser General Public License v3. We will present the implementation of dccrg, simple and advanced usage examples and scalability results on various supercomputers and problems.

  11. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-02-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096^3 effective resolution and 16 GPUs with 8192^3 effective resolution, respectively.

  12. Standard and goal-oriented adaptive mesh refinement applied to radiation transport on 2D unstructured triangular meshes

    SciTech Connect

    Wang Yaqi; Ragusa, Jean C.

    2011-02-01

    Standard and goal-oriented adaptive mesh refinement (AMR) techniques are presented for the linear Boltzmann transport equation. A posteriori error estimates are employed to drive the AMR process and are based on angular-moment information rather than on directional information, leading to direction-independent adapted meshes. An error estimate based on a two-mesh approach and a jump-based error indicator are compared for various test problems. In addition to the standard AMR approach, where the global error in the solution is diminished, a goal-oriented AMR procedure is devised and aims at reducing the error in user-specified quantities of interest. The quantities of interest are functionals of the solution and may include, for instance, point-wise flux values or average reaction rates in a subdomain. A high-order (up to order 4) Discontinuous Galerkin technique with standard upwinding is employed for the spatial discretization; the discrete ordinates method is used to treat the angular variable.
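
    A jump-based indicator of the kind compared above is particularly cheap. In a one-dimensional sketch (illustrative only; the paper works with angular moments on 2D triangular meshes), each cell accumulates the solution jumps at its faces, and cells above a threshold are marked:

        import numpy as np

        def jump_indicator(u):
            # u holds one (discontinuous) value per cell, as in DG/FV schemes.
            jumps = np.abs(np.diff(u))      # one jump per interior face
            eta = np.zeros_like(u)
            eta[:-1] += jumps               # add each face jump to both
            eta[1:] += jumps                # neighboring cells
            return eta

        def mark(eta, fraction=0.5):
            # Refine cells whose indicator exceeds a fraction of the maximum.
            return np.nonzero(eta > fraction * eta.max())[0]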

  13. Parallel adaptive mesh refinement method based on WENO finite difference scheme for the simulation of multi-dimensional detonation

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang

    2015-10-01

    For the numerical simulation of detonation, the computational cost of using uniform meshes is large due to the vast separation in both time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper aims to propose an AMR method with high order accuracy for the numerical investigation of multi-dimensional detonation. A well-designed AMR method based on the finite difference weighted essentially non-oscillatory (WENO) scheme, named AMR&WENO, is proposed. A new cell-based data structure is used to organize the adaptive meshes. The new data structure makes it possible for cells to communicate with each other quickly and easily. In order to develop an AMR method with high order accuracy, high order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload-balancing parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that AMR&WENO is accurate and has a high resolution. Moreover, we evaluate and compare the performance of the uniform mesh WENO scheme and the parallel AMR&WENO method. The comparison results provide further insight into the high performance of the parallel AMR&WENO method.

  14. Detached Eddy Simulation of the UH-60 Rotor Wake Using Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.; Ahmad, Jasim U.

    2012-01-01

    Time-dependent Navier-Stokes flow simulations have been carried out for a UH-60 rotor with simplified hub in forward flight and hover flight conditions. Flexible rotor blades and flight trim conditions are modeled and established by loosely coupling the OVERFLOW Computational Fluid Dynamics (CFD) code with the CAMRAD II helicopter comprehensive code. High order spatial differences, Adaptive Mesh Refinement (AMR), and Detached Eddy Simulation (DES) are used to obtain highly resolved vortex wakes, where the largest turbulent structures are captured. Special attention is directed towards ensuring that the dual time accuracy is within the asymptotic range, and towards verifying the loose coupling convergence process using AMR. The AMR/DES simulation produced vortical worms for forward flight and hover conditions, similar to previous results obtained for the TRAM rotor in hover. AMR proved to be an efficient means of capturing a rotor wake without a priori knowledge of the wake shape.

  15. On the Computation of Integral Curves in Adaptive Mesh Refinement Vector Fields

    SciTech Connect

    Deines, Eduard; Weber, Gunther H.; Garth, Christoph; Van Straalen, Brian; Borovikov, Sergey; Martin, Daniel F.; Joy, Kenneth I.

    2011-06-27

    Integral curves, such as streamlines, streaklines, pathlines, and timelines, are an essential tool in the analysis of vector field structures, offering straightforward and intuitive interpretation of visualization results. While such curves have a long-standing tradition in vector field visualization, their application to Adaptive Mesh Refinement (AMR) simulation results poses unique problems. AMR is a highly effective discretization method for a variety of physical simulation problems and has recently been applied to the study of vector fields in flow and magnetohydrodynamic applications. The cell-centered nature of AMR data and discontinuities in the vector field representation arising from AMR level boundaries complicate the application of numerical integration methods to compute integral curves. In this paper, we propose a novel approach to alleviate these problems and show its application to streamline visualization in an AMR model of the magnetic field of the solar system as well as to a simulation of two incompressible viscous vortex rings merging.
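
    The numerical core of streamline computation is easy to sketch on a single uniform grid; the difficulty the paper addresses is exactly what this sketch omits, namely cell-centered AMR data and level boundaries. A minimal RK4 tracer with bilinear interpolation (unit grid spacing, no bounds checking):

        import numpy as np

        def bilerp(f, x, y):
            # Bilinear interpolation of node-centered field f[j, i] at (x, y).
            i, j = int(x), int(y)
            fx, fy = x - i, y - j
            return ((1-fx)*(1-fy)*f[j, i] + fx*(1-fy)*f[j, i+1] +
                    (1-fx)*fy*f[j+1, i] + fx*fy*f[j+1, i+1])

        def streamline(vx, vy, p0, h=0.2, n_steps=200):
            # Integrate dx/dt = v(x) with classical fourth-order Runge-Kutta.
            v = lambda p: np.array([bilerp(vx, *p), bilerp(vy, *p)])
            pts = [np.asarray(p0, dtype=float)]
            for _ in range(n_steps):
                p = pts[-1]
                k1 = v(p); k2 = v(p + 0.5*h*k1)
                k3 = v(p + 0.5*h*k2); k4 = v(p + h*k3)
                pts.append(p + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4))
            return np.array(pts)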

  16. Galaxy Mergers with Adaptive Mesh Refinement: Star Formation and Hot Gas Outflow

    SciTech Connect

    Kim, Ji-hoon; Wise, John H.; Abel, Tom; /KIPAC, Menlo Park /Stanford U., Phys. Dept.

    2011-06-22

    In hierarchical structure formation, merging of galaxies is frequent and known to dramatically affect their properties. To comprehend these interactions high-resolution simulations are indispensable because of the nonlinear coupling between pc and Mpc scales. To this end, we present the first adaptive mesh refinement (AMR) simulation of two merging, low mass, initially gas-rich galaxies (1.8 x 10^10 M_sun each), including star formation and feedback. With galaxies resolved by ~2 x 10^7 total computational elements, we achieve unprecedented resolution of the multiphase interstellar medium, finding a widespread starburst in the merging galaxies via shock-induced star formation. The high dynamic range of AMR also allows us to follow the interplay between the galaxies and their embedding medium depicting how galactic outflows and a hot metal-rich halo form. These results demonstrate that AMR provides a powerful tool in understanding interacting galaxies.

  17. Damping of spurious numerical reflections off of coarse-fine adaptive mesh refinement grid boundaries

    NASA Astrophysics Data System (ADS)

    Chilton, Sven; Colella, Phillip

    2010-11-01

    Adaptive mesh refinement (AMR) is an efficient technique for solving systems of partial differential equations numerically. The underlying algorithm determines where and when a base spatial and temporal grid must be resolved further in order to achieve the desired precision and accuracy in the numerical solution. However, propagating wave solutions prove problematic for AMR. In systems with low degrees of dissipation (e.g. the Maxwell-Vlasov system) a wave traveling from a finely resolved region into a coarsely resolved region encounters a numerical impedance mismatch, resulting in spurious reflections off of the coarse-fine grid boundary. These reflected waves then become trapped inside the fine region. Here, we present a scheme for damping these spurious reflections. We demonstrate its application to the scalar wave equation and an implementation for Maxwell's Equations. We also discuss a possible extension to the Maxwell-Vlasov system.
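
    One common way to damp such trapped waves, shown here only as a generic sponge-layer sketch (the abstract does not specify the authors' scheme), is to relax the solution toward a reference state in a few cells adjacent to the coarse-fine boundary, with a damping rate that ramps up smoothly:

        import numpy as np

        def apply_sponge(u, u_ref, dt, width=8, sigma_max=2.0):
            # Damp u toward u_ref over the last 'width' cells of a fine patch
            # (taken here as the cells abutting the coarse-fine boundary).
            n = len(u)
            for k in range(width):
                i = n - width + k
                sigma = sigma_max * ((k + 1) / width)**2   # quadratic ramp
                u[i] -= dt * sigma * (u[i] - u_ref[i])
            return u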

  18. The GeoClaw software for depth-averaged flows with adaptive refinement

    USGS Publications Warehouse

    Berger, M.J.; George, D.L.; LeVeque, R.J.; Mandli, K.T.

    2011-01-01

    Many geophysical flow or wave propagation problems can be modeled with two-dimensional depth-averaged equations, of which the shallow water equations are the simplest example. We describe the GeoClaw software that has been designed to solve problems of this nature, consisting of open source Fortran programs together with Python tools for the user interface and flow visualization. This software uses high-resolution shock-capturing finite volume methods on logically rectangular grids, including latitude-longitude grids on the sphere. Dry states are handled automatically to model inundation. The code incorporates adaptive mesh refinement to allow the efficient solution of large-scale geophysical problems. Examples are given illustrating its use for modeling tsunamis and dam-break flooding problems. Documentation and download information is available at www.clawpack.org/geoclaw. © 2011.

  19. 3D Adaptive Mesh Refinement Simulations of Pellet Injection in Tokamaks

    SciTech Connect

    R. Samtaney; S.C. Jardin; P. Colella; D.F. Martin

    2003-10-20

    We present results of Adaptive Mesh Refinement (AMR) simulations of the pellet injection process, a proven method of refueling tokamaks. AMR is a computationally efficient way to provide the resolution required to simulate realistic pellet sizes relative to device dimensions. The mathematical model comprises single-fluid MHD equations with source terms in the continuity equation, along with a pellet ablation rate model. The numerical method developed is an explicit unsplit upwinding treatment of the 8-wave formulation, coupled with a MAC projection method to enforce the solenoidal property of the magnetic field. The Chombo framework is used for AMR. The role of the E x B drift in mass redistribution during inside and outside pellet injections is emphasized.

  20. MPI parallelization of full PIC simulation code with Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Matsui, Tatsuki; Nunami, Masanori; Usui, Hideyuki; Moritaka, Toseo

    2010-11-01

    A new parallelization technique developed for the PIC method with adaptive mesh refinement (AMR) is introduced. In the AMR technique, the complicated cell arrangements are organized and managed as interconnected pointers with multiple resolution levels, forming a fully threaded tree structure as a whole. In order to retain this tree structure distributed over multiple processes, remote memory access, an extended feature of the MPI-2 standard, is employed. Another important feature of the present simulation technique is domain decomposition according to a modified Morton ordering. This algorithm groups together equal numbers of particle calculation loops, which allows for better load balance. Using this advanced simulation code, preliminary results for basic physical problems are exhibited as a validity check, together with benchmarks that test the performance and the scalability.
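
    Plain (unmodified) Morton ordering interleaves the bits of the integer cell coordinates, so that sorting cells by key keeps spatial neighbors close together in the ordering; the paper's modification for load balancing is not reproduced in this sketch:

        def morton_key_2d(x, y, bits=16):
            # Interleave the bits of cell coordinates (x, y) into a Z-order key.
            key = 0
            for b in range(bits):
                key |= ((x >> b) & 1) << (2*b) | ((y >> b) & 1) << (2*b + 1)
            return key

        cells = [(3, 1), (0, 0), (1, 2), (2, 2)]
        print(sorted(cells, key=lambda c: morton_key_2d(*c)))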

  1. Automatic procedure for generating symmetry adapted wavefunctions.

    PubMed

    Johansson, Marcus; Veryazov, Valera

    2017-01-01

    Automatic detection of point groups as well as symmetrisation of molecular geometry and wavefunctions are useful tools in computational quantum chemistry. Algorithms for developing these tools, as well as an implementation, are presented. The symmetry detection algorithm is a clustering algorithm for symmetry-invariant properties, combined with logical deduction of possible symmetry elements using the geometry of sets of symmetrically equivalent atoms. An algorithm for determining the symmetry adapted linear combinations (SALCs) of atomic orbitals is also presented. The SALCs are constructed with the use of projection operators for the irreducible representations, as well as subgroups for determining splitting fields for a canonical basis. The character tables for the point groups are auto-generated, and the algorithm is described. Symmetrisation of molecules uses a projection into the totally symmetric space, whereas for wavefunctions projection as well as partner function determination and averaging are used. The software has been released as a stand-alone, open source library under the MIT license and integrated into both computational and molecular modelling software.

  2. A simple procedure to evaluate the efficiency of bio-macromolecular rigid-body refinement by small-angle scattering.

    PubMed

    Gabel, Frank

    2012-01-01

    A simple and rapid procedure is presented that enables evaluation and visualization of refinement efficiency for bio-macromolecular complexes consisting of two subunits in a given orientation by using small-angle scattering. Subunit orientations within a complex can be provided in practice by NMR residual dipolar couplings, an approach that has been combined with increasing success to complement small-angle data. The procedure is illustrated by applying it to several systems composed of two simple geometric bodies (ellipsoids) and to protein complexes from the protein data bank that vary in subunit size and anisometry. The effects of the experimental small-angle scattering range (Q-range) and data noise level on the refinement efficiency are investigated and discussed. The procedure can be used in two ways: (1) either as a quick preliminary test to probe the refinement capacity expected for a given bio-macromolecular complex prior to sophisticated and time-consuming experiments and data analysis, or (2) as an a posteriori check of the stability and accuracy of a refined model and for illustration of the residual degrees of freedom of the subunit positions that are in agreement with both small-angle data and restraints on subunit orientation (as provided, e.g., by NMR).

  3. Staggered grid lagrangian method with local structured adaptive mesh refinement for modeling shock hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliot, N S

    2000-09-26

    A new method for the solution of the unsteady Euler equations has been developed. The method combines staggered grid Lagrangian techniques with structured local adaptive mesh refinement (AMR). This method is a precursor to a more general adaptive arbitrary Lagrangian Eulerian (ALE-AMR) algorithm under development, which will facilitate the solution of problems currently at and beyond the boundary of soluble problems by traditional ALE methods by focusing computational resources where they are required. Many of the core issues involved in the development of the ALE-AMR method hinge upon the integration of AMR with a Lagrange step, which is the focus of the work described here. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. These new algorithmic components are first developed in one dimension and are then generalized to two dimensions. Solutions of several model problems involving shock hydrodynamics are presented and discussed.

  4. Cultural adaptation and health literacy refinement of a brief depression intervention for Latinos in a low-resource setting.

    PubMed

    Ramos, Zorangelí; Alegría, Margarita

    2014-04-01

    Few studies addressing the mental health needs of Latinos describe how interventions are tailored or culturally adapted to address the needs of their target population. Without reference to this process, efforts to replicate results and provide working models of the adaptation process for other researchers are thwarted. The purpose of this article is to describe the process of a cultural adaptation that included accommodations for health literacy of a brief telephone cognitive-behavioral depression intervention for Latinos in low-resource settings. We followed a five-stage approach (i.e., information gathering, preliminary adaptation, preliminary testing, adaptation, and refinement) as described by Barrera, Castro, Strycker, and Toobert (2013) to structure our process. Cultural adaptations included condensation of the sessions, review, and modifications of materials presented to participants including the addition of visual aids, culturally relevant metaphors, values, and proverbs. Feedback from key stakeholders, including clinician and study participants, was fundamental to the adaptation process. Areas for further inquiry and adaptation identified in our process include revisions to the presentation of "cognitive restructuring" to participants and the inclusion of participant beliefs about the cause of their depression. Cultural adaptation is a dynamic process, requiring numerous refinements to ensure that an intervention is tailored and relevant to the target population.

  5. Development of a scalable gas-dynamics solver with adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Korkut, Burak

    There are various computational physics areas in which Direct Simulation Monte Carlo (DSMC) and Particle in Cell (PIC) methods are being employed. The accuracy of results from such simulations depends on the fidelity of the physical models being used. The computationally demanding nature of these problems makes them ideal candidates to make use of modern supercomputers. The software developed to run such simulations also needs special attention so that maintainability and extendability are considered alongside recent numerical methods and programming paradigms. Suited for gas-dynamics problems, a software package called SUGAR (Scalable Unstructured Gas dynamics with Adaptive mesh Refinement) has recently been developed and written in C++ and MPI. Physical and numerical models were added to this framework to simulate ion thruster plumes. SUGAR is used to model the charge-exchange (CEX) reactions occurring between the neutral and ion species as well as the induced electric field effect due to ions. Multiple adaptive mesh refinement (AMR) meshes were used in order to capture different physical length scales present in the flow. A multiple-thruster configuration was run to extend the studies to cases for which there is no axial or radial symmetry present, which could only be modeled with a three-dimensional simulation capability. The combined plume structure showed interactions between individual thrusters, with the AMR capability capturing this in an automated way. The back flow for ions was found to occur when CEX and momentum-exchange (MEX) collisions are present, and to be strongly enhanced when the induced electric field is considered. The ion energy distributions in the back flow region were obtained and it was found that the inclusion of the electric field modeling is the most important factor in determining its shape. The plume back flow structure was also examined for a triple-thruster, 3-D geometry case and it was found that the ion velocity in the back flow region appears to be

  6. An Adaptively-Refined, Cartesian, Cell-Based Scheme for the Euler and Navier-Stokes Equations. Ph.D. Thesis - Michigan Univ.

    NASA Technical Reports Server (NTRS)

    Coirier, William John

    1994-01-01

    A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to a structured solver formulation. A non-smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other, to experimental results and/or theory for a series of low and moderate Reynolds number flow fields. The most suitable viscous discretization is demonstrated for geometrically complicated internal flows. For flows at high Reynolds numbers, both an altered grid-generation procedure and a
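
    The recursive subdivision driving the grid generation can be sketched as a quadtree (two-dimensional, matching the thesis; the predicate below is a hypothetical stand-in for the cut-cell and solution-adaptation tests, and polygon clipping is omitted):

        class Cell:
            def __init__(self, x, y, size, depth=0):
                self.x, self.y, self.size, self.depth = x, y, size, depth
                self.children = []          # empty list marks a leaf cell

            def refine(self, needs_refining, max_depth=8):
                # Recursively split into four while the predicate flags the cell.
                if self.depth >= max_depth or not needs_refining(self):
                    return
                h = self.size / 2.0
                self.children = [Cell(self.x + i*h, self.y + j*h, h, self.depth + 1)
                                 for i in (0, 1) for j in (0, 1)]
                for child in self.children:
                    child.refine(needs_refining, max_depth)

        # Refine toward a hypothetical circular body of radius 0.3 at the origin.
        near_body = lambda c: abs((c.x + c.size/2)**2 +
                                  (c.y + c.size/2)**2 - 0.3**2) < c.size
        root = Cell(-1.0, -1.0, 2.0)
        root.refine(near_body)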

  7. A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model

    SciTech Connect

    Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A

    2009-03-03

    Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during the remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR, which relaxes the requirement that material boundaries must be along mesh boundaries.

  8. Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion

    SciTech Connect

    B. Philip; Z. Wang; M.A. Berrill; M. Birke; M. Pernice

    2014-04-01

    The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton–Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.
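
    The step-size control idea can be illustrated with a conventional PI controller driven by a local error estimate (a generic sketch; the constants and the controller are textbook choices, not the authors' local control theory formulation):

        def pi_step_size(dt, err, err_prev, tol, order,
                         kP=0.4, kI=0.3, growth=5.0):
            # Next step from current/previous local error estimates; 'order'
            # is the accuracy order of the time integrator.
            if err == 0.0:
                return dt * growth
            factor = (tol / err)**(kI / order) * (err_prev / err)**(kP / order)
            return dt * min(growth, max(0.2, 0.9 * factor))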

  9. Compact integration factor methods for complex domains and adaptive mesh refinement.

    PubMed

    Liu, Xinfeng; Nie, Qing

    2010-08-10

    The implicit integration factor (IIF) method, a class of efficient semi-implicit temporal schemes, was introduced recently for stiff reaction-diffusion equations. To reduce the cost of IIF, the compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of exponential matrices associated with the diffusion operators in two and three spatial dimensions for Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinates, due to the compact representation of the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF to other curvilinear coordinates through the examples of polar and spherical coordinates. The new cIIF method in polar and spherical coordinates has computational efficiency and stability properties similar to those of cIIF in Cartesian coordinates. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability condition of cIIF. Because the second-order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply these methods to simulating a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR, curvilinear and Cartesian coordinates. Excellent performance of the new methods is observed.

  10. Multigroup radiation hydrodynamics with flux-limited diffusion and adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    González, M.; Vaytet, N.; Commerçon, B.; Masson, J.

    2015-06-01

    Context. Radiative transfer plays a crucial role in the star formation process. Because of the high computational cost, radiation-hydrodynamics simulations performed up to now have mainly been carried out in the grey approximation. In recent years, multifrequency radiation-hydrodynamics models have started to be developed in an attempt to better account for the large variations in opacities as a function of frequency. Aims: We wish to develop an efficient multigroup algorithm for the adaptive mesh refinement code RAMSES which is suited to heavy proto-stellar collapse calculations. Methods: Because of the prohibitive timestep constraints of an explicit radiative transfer method, we constructed a time-implicit solver based on a stabilized bi-conjugate gradient algorithm, and implemented it in RAMSES under the flux-limited diffusion approximation. Results: We present a series of tests that demonstrate the high performance of our scheme in dealing with frequency-dependent radiation-hydrodynamic flows. We also present a preliminary simulation of a 3D proto-stellar collapse using 20 frequency groups. Differences between grey and multigroup results are briefly discussed, and the large amount of information this new method brings us is also illustrated. Conclusions: We have implemented a multigroup flux-limited diffusion algorithm in the RAMSES code. The method performed well against standard radiation-hydrodynamics tests, and was also shown to be ripe for exploitation in the computational star formation context.
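
    The building block of such a time-implicit flux-limited diffusion solver is a large sparse linear solve per step. As a hedged, single-group stand-in for the multigroup system in RAMSES, the sketch below advances E_t = div(D grad E) by one backward-Euler step and hands the resulting system to SciPy's stabilized bi-conjugate gradient routine; the grid, time step, and diffusion coefficient are invented for the example.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import bicgstab

    n, h, dt, D = 100, 1e-2, 1e-3, 0.5
    lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    A = sp.identity(n) - dt * D * lap          # backward-Euler system matrix

    x = np.arange(n) * h
    E_old = np.exp(-((x - 0.5) ** 2) / 1e-3)   # initial radiation energy pulse
    E_new, info = bicgstab(A, E_old)           # stabilized bi-conjugate gradient
    assert info == 0                           # 0 signals Krylov convergence
    ```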

  11. Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion

    NASA Astrophysics Data System (ADS)

    Philip, B.; Wang, Z.; Berrill, M. A.; Birke, M.; Pernice, M.

    2014-04-01

    The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.

  12. MASS AND MAGNETIC DISTRIBUTIONS IN SELF-GRAVITATING SUPER-ALFVENIC TURBULENCE WITH ADAPTIVE MESH REFINEMENT

    SciTech Connect

    Collins, David C.; Norman, Michael L.; Padoan, Paolo; Xu Hao

    2011-04-10

    In this work, we present the mass and magnetic distributions found in a recent adaptive mesh refinement magnetohydrodynamic simulation of supersonic, super-Alfvénic, self-gravitating turbulence. Power-law tails are found in both the mass density and magnetic field probability density functions, with P(ρ) ∝ ρ^-1.6 and P(B) ∝ B^-2.7. A power-law relationship is also found between magnetic field strength and density, with B ∝ ρ^0.5, throughout the collapsing gas. The mass distribution of gravitationally bound cores is shown to be in excellent agreement with recent observations of prestellar cores. The mass-to-flux distribution of cores is also found to be in excellent agreement with recent Zeeman splitting measurements. We also compare the relationship between velocity dispersion and density in the same cores, and find an increasing relationship between the two, with σ ∝ n^0.25, also in agreement with the observations. We then estimate the potential effects of ambipolar diffusion in our cores and find that, owing to the weakness of the magnetic field in our simulation, including ambipolar diffusion would not significantly alter the flow dynamics.

  13. Numerical simulation of current sheet formation in a quasiseparatrix layer using adaptive mesh refinement

    SciTech Connect

    Effenberger, Frederic; Thust, Kay; Grauer, Rainer; Dreher, Juergen; Arnold, Lukas

    2011-03-15

    The formation of a thin current sheet in a magnetic quasiseparatrix layer (QSL) is investigated by means of numerical simulation using a simplified ideal, low-β, MHD model. The initial configuration and driving boundary conditions are relevant to phenomena observed in the solar corona and were studied earlier by Aulanier et al. [Astron. Astrophys. 444, 961 (2005)]. Extending that work, we use the technique of adaptive mesh refinement (AMR) to significantly enhance the local spatial resolution of the current sheet during its formation, which enables us to follow the evolution into a later stage. Our simulations are in good agreement with the results of Aulanier et al. up to the time reached in that work. In a later phase, we observe a basically unarrested collapse of the sheet to length scales more than one order of magnitude smaller than those reported earlier. The current density attains correspondingly larger maximum values within the sheet. During this thinning process, which is finally limited by lack of resolution even in the AMR studies, the current sheet moves upward, following a global expansion of the magnetic structure during the quasistatic evolution. The sheet is locally one-dimensional, and the plasma flow in its vicinity, when transformed into a comoving frame, qualitatively resembles a stagnation-point flow. In conclusion, our simulations support the idea that extremely high current densities are generated in the vicinities of QSLs as a response to external perturbations, with no sign of saturation.

  14. GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid

    NASA Astrophysics Data System (ADS)

    Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua

    2016-10-01

    A GPU-accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, cell-based adaptive mesh refinement (AMR) is fully implemented on the GPU for the unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and null memory recycling is realized to improve the efficiency of memory utilization. It is found that results obtained on GPUs agree very well with the exact or experimental results in the literature. An acceleration ratio of 4 is obtained between the parallel code running on the older GT9800 GPU and the serial code running on an E3-1230 V2 CPU. With the optimization of configuring a larger L1 cache and adopting shared-memory-based atomic operations on the newer C2050 GPU, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes achieve a 2x speedup on the GT9800 and 18x on the Tesla C2050, which demonstrates that running the cell-based AMR method in parallel on the GPU is feasible and efficient. Our results also indicate that new developments in GPU architecture significantly benefit fluid dynamics computing.

  15. Relativistic Flows Using Spatial And Temporal Adaptive Structured Mesh Refinement. I. Hydrodynamics

    SciTech Connect

    Wang, Peng; Abel, Tom; Zhang, Weiqun; /KIPAC, Menlo Park

    2007-04-02

    Astrophysical relativistic flow problems require high-resolution three-dimensional numerical simulations. In this paper, we describe a new parallel three-dimensional code for simulations of special relativistic hydrodynamics (SRHD) using both spatially and temporally structured adaptive mesh refinement (AMR). We use the method of lines to discretize the SRHD equations spatially and a total variation diminishing (TVD) Runge-Kutta scheme for time integration. For spatial reconstruction, we have implemented the piecewise linear method (PLM), the piecewise parabolic method (PPM), third-order convex essentially non-oscillatory (CENO), and third- and fifth-order weighted essentially non-oscillatory (WENO) schemes. Fluxes are computed using either direct flux reconstruction or approximate Riemann solvers, including HLL, HLLC, the modified Marquina flux, and the local Lax-Friedrichs flux formula. The AMR part of the code is built on top of the cosmological Eulerian AMR code enzo, which uses the Berger-Colella AMR algorithm and is parallelized with dynamic load balancing using the widely available Message Passing Interface library. We discuss the coupling of the AMR framework with the relativistic solvers and show the code's performance on eleven test problems.
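
    The time integrator named here is standard and compact enough to show in full. The sketch below implements the third-order TVD (strong-stability-preserving) Runge-Kutta scheme of Shu and Osher for any method-of-lines right-hand side; the upwind advection operator used to exercise it is an assumption for the example, not the SRHD flux of the paper.

    ```python
    import numpy as np

    def tvd_rk3_step(u, rhs, dt):
        """Third-order TVD (SSP) Runge-Kutta step of Shu & Osher for a
        method-of-lines semi-discretization du/dt = rhs(u)."""
        u1 = u + dt * rhs(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
        return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2))

    # Stand-in rhs: first-order upwind advection u_t + a u_x = 0, periodic.
    n, a = 200, 1.0
    h = 1.0 / n
    rhs = lambda u: -a * (u - np.roll(u, 1)) / h

    x = np.arange(n) * h
    u = np.sin(2.0 * np.pi * x)
    for _ in range(100):
        u = tvd_rk3_step(u, rhs, dt=0.4 * h)   # CFL-limited time step
    ```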

  16. Adaptive Mesh Refinement for a High-Symmetry Singular Euler Flow

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Bhattacharjee, A.; Grauer, R.

    2002-11-01

    Starting from a highly symmetric initial condition motivated by the work of Kida [J. Phys. Soc. Jpn. 54, 2132 (1985)] and Boratav and Pelz [Phys. Fluids 6, 2757 (1994)], we use the technique of block-structured adaptive mesh refinement (AMR) to numerically investigate the development of a self-similar singular solution to the incompressible Euler equations. The scheme, previously used by Grauer et al. [Phys. Rev. Lett. 84, 4850 (1998)], is particularly well suited to following the development of singular structures, as it allows for effective resolutions far beyond those accessible using fixed-grid algorithms. A self-similar collapse is observed in the simulation, where the maximum vorticity blows up as 1/(t_crit - t). Ng and Bhattacharjee [Phys. Rev. E 54, 1530 (1996)] have presented a sufficient condition for a finite-time singularity in this highly symmetric flow involving the fourth-order spatial derivative of the pressure at and near the origin. We test this sufficient condition and investigate the evolution of the spatial range over which it holds in our numerical results. We also demonstrate numerically that this singularity is unstable: in a full simulation that does not build in the symmetries of the initial condition, small perturbations introduced by AMR lead to nonsymmetric evolution of the vortices.
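
    A common diagnostic behind statements like "the maximum vorticity blows up as 1/(t_crit - t)" is that the inverse of the maximum vorticity then decays linearly in time, so a linear fit extrapolated to zero estimates the singularity time. The sketch below demonstrates this on synthetic data; the constants are invented and no claim is made about the paper's actual time series.

    ```python
    import numpy as np

    # Synthetic series mimicking max|omega|(t) ~ C / (t_c - t).
    t = np.linspace(0.0, 0.8, 20)
    t_c_true = 1.0
    omega_max = 10.0 / (t_c_true - t)

    # 1/max|omega| is then linear in t; its zero crossing estimates t_c.
    slope, intercept = np.polyfit(t, 1.0 / omega_max, 1)
    print(-intercept / slope)   # ~1.0, recovering the singularity time
    ```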

  17. 3D Boltzmann Simulation of the Io's Plasma Environment with Adaptive Mesh and Particle Refinement

    NASA Astrophysics Data System (ADS)

    Lipatov, A. S.; Combi, M. R.

    2002-12-01

    The global dynamics of the ionized and neutral components in the environment of Io plays an important role in the interaction of Jupiter's corotating magnetospheric plasma with Io [Combi et al., 2002; 1998; Kabin et al., 2001]. Stationary simulations of this problem have been performed with MHD [Combi et al., 1998; Linker et al., 1998; Kabin et al., 2001] and electrodynamic [Saur et al., 1999] approaches. In this report, we develop a method of kinetic ion-neutral simulation based on multiscale adaptive mesh, particle, and algorithm refinement. This method employs a fluid description for electrons, whereas drift-kinetic and particle approaches are used for ions. The method takes into account charge-exchange and photoionization processes. The first results of such a simulation of the dynamics of ions in Io's environment are discussed in this report. References: M. R. Combi et al., J. Geophys. Res., 103, 9071, 1998. M. R. Combi, T. I. Gombosi, and K. Kabin, Atmospheres in the Solar System: Comparative Aeronomy, Geophys. Monograph Series, 130, 151, 2002. K. Kabin et al., Planetary and Space Sci., 49, 337, 2001. J. A. Linker et al., J. Geophys. Res., 103(E9), 19867, 1998. J. Saur et al., J. Geophys. Res., 104, 25105, 1999.

  18. Finite-difference lattice Boltzmann method with a block-structured adaptive-mesh-refinement technique.

    PubMed

    Fakhari, Abbas; Lee, Taehun

    2014-03-01

    An adaptive-mesh-refinement (AMR) algorithm for the finite-difference lattice Boltzmann method (FDLBM) is presented in this study. The idea behind the proposed AMR is to remove the need for a tree-type data structure. Instead, pointer attributes are used to determine the neighbors of a certain block via appropriate adjustment of its children identifications. As a result, the memory and time required for tree traversal are completely eliminated, leaving us with an efficient algorithm that is easier to implement and use on parallel machines. To allow different mesh sizes at separate parts of the computational domain, the Eulerian formulation of the streaming process is invoked. As a result, there is no need for rescaling the distribution functions or using a temporal interpolation at the fine-coarse grid boundaries. The accuracy and efficiency of the proposed FDLBM AMR are extensively assessed by investigating a variety of vorticity-dominated flow fields, including Taylor-Green vortex flow, lid-driven cavity flow, thin shear layer flow, and the flow past a square cylinder.

  19. Finite-difference lattice Boltzmann method with a block-structured adaptive-mesh-refinement technique

    NASA Astrophysics Data System (ADS)

    Fakhari, Abbas; Lee, Taehun

    2014-03-01

    An adaptive-mesh-refinement (AMR) algorithm for the finite-difference lattice Boltzmann method (FDLBM) is presented in this study. The idea behind the proposed AMR is to remove the need for a tree-type data structure. Instead, pointer attributes are used to determine the neighbors of a certain block via appropriate adjustment of its children identifications. As a result, the memory and time required for tree traversal are completely eliminated, leaving us with an efficient algorithm that is easier to implement and use on parallel machines. To allow different mesh sizes at separate parts of the computational domain, the Eulerian formulation of the streaming process is invoked. As a result, there is no need for rescaling the distribution functions or using a temporal interpolation at the fine-coarse grid boundaries. The accuracy and efficiency of the proposed FDLBM AMR are extensively assessed by investigating a variety of vorticity-dominated flow fields, including Taylor-Green vortex flow, lid-driven cavity flow, thin shear layer flow, and the flow past a square cylinder.

  20. A solution-adaptive mesh algorithm for dynamic/static refinement of two and three dimensional grids

    NASA Technical Reports Server (NTRS)

    Benson, Rusty A.; Mcrae, D. S.

    1991-01-01

    An adaptive grid algorithm has been developed in two and three dimensions that can be used dynamically with a solver or as part of a grid refinement process. The algorithm employs a transformation from the Cartesian coordinate system to a general coordinate space, which is defined as a parallelepiped in three dimensions. A weighting function, independent for each coordinate direction, is developed that will provide the desired refinement criteria in regions of high solution gradient. The adaptation is performed in the general coordinate space and the new grid locations are returned to the Cartesian space via a simple, one-step inverse mapping. The algorithm for relocation of the mesh points in the parametric space is based on the center of mass for distributed weights. Dynamic solution-adaptive results are presented for laminar flows in two and three dimensions.
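
    A minimal 1D caricature of such center-of-mass relocation is sketched below: each interior node is moved to the weighted center of mass of its two neighboring cells, so iterating the pass clusters nodes where a gradient-based weight function is large. The particular weight and iteration count are assumptions for the example, and the full algorithm operates in the mapped parametric space rather than directly in physical 1D coordinates.

    ```python
    import numpy as np

    def relocate(x, weight, iters=50):
        """Repeatedly move each interior node to the weighted center of mass
        of its two neighboring cells; nodes migrate toward, and cluster in,
        regions where the weight is large. Ordering is preserved because the
        new position always lies between the adjacent cell centers."""
        x = x.copy()
        for _ in range(iters):
            c = 0.5 * (x[:-1] + x[1:])           # cell centers
            w = weight(c)                        # one weight per cell
            x[1:-1] = (w[:-1] * c[:-1] + w[1:] * c[1:]) / (w[:-1] + w[1:])
        return x

    # Hypothetical gradient-based weight peaked at a steep feature at x = 0.5.
    weight = lambda c: 1.0 + 50.0 * np.exp(-200.0 * (c - 0.5) ** 2)
    x = relocate(np.linspace(0.0, 1.0, 41), weight)
    print(np.diff(x).min(), np.diff(x).max())    # smallest cells near x = 0.5
    ```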

  1. Lyapunov exponents and adaptive mesh refinement for high-speed flows using a discontinuous Galerkin scheme

    NASA Astrophysics Data System (ADS)

    Moura, R. C.; Silva, A. F. C.; Bigarella, E. D. V.; Fazenda, A. L.; Ortega, M. A.

    2016-08-01

    This paper proposes two important improvements to shock-capturing strategies using a discontinuous Galerkin scheme, namely, accurate shock identification via finite-time Lyapunov exponent (FTLE) operators and efficient shock treatment through a point-implicit discretization of a PDE-based artificial viscosity technique. The advocated approach is based on the FTLE operator, originally developed in the context of dynamical systems theory to identify certain types of coherent structures in a flow. We propose the application of FTLEs in the detection of shock waves and demonstrate the operator's ability to identify strong and weak shocks equally well. The detection algorithm is coupled with a mesh refinement procedure and applied to transonic and supersonic flows. While the proposed strategy can be used potentially with any numerical method, a high-order discontinuous Galerkin solver is used in this study. In this context, two artificial viscosity approaches are employed to regularize the solution near shocks: an element-wise constant viscosity technique and a PDE-based smooth viscosity model. As the latter approach is more sophisticated and preferable for complex problems, a point-implicit discretization in time is proposed to reduce the extra stiffness introduced by the PDE-based technique, making it more competitive in terms of computational cost.
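
    The FTLE operator itself is straightforward to prototype: advect a grid of tracers through the flow map, differentiate the map numerically, and take the logarithm of the largest singular value via the Cauchy-Green tensor. The hedged sketch below does this for a steady cellular velocity field chosen purely for illustration; a shock-detection application would feed in the computed flow field and tune the integration time T.

    ```python
    import numpy as np

    def ftle(vel, X, Y, T, nsteps=200):
        """Finite-time Lyapunov exponent on a 2D grid: advect tracers through
        the flow map, differentiate the map numerically, and take the log of
        its largest singular value via the Cauchy-Green tensor."""
        x, y = X[0], Y[:, 0]
        dt = T / nsteps
        px, py = X.copy(), Y.copy()
        for _ in range(nsteps):                    # simple forward-Euler advection
            u, v = vel(px, py)
            px, py = px + dt * u, py + dt * v
        dpx_dx = np.gradient(px, x, axis=1)
        dpx_dy = np.gradient(px, y, axis=0)
        dpy_dx = np.gradient(py, x, axis=1)
        dpy_dy = np.gradient(py, y, axis=0)
        c11 = dpx_dx**2 + dpy_dx**2                # Cauchy-Green tensor entries
        c12 = dpx_dx * dpx_dy + dpy_dx * dpy_dy
        c22 = dpx_dy**2 + dpy_dy**2
        lam_max = 0.5 * (c11 + c22 + np.sqrt((c11 - c22) ** 2 + 4.0 * c12**2))
        return np.log(np.sqrt(lam_max)) / abs(T)

    x = y = np.linspace(0.0, 2.0 * np.pi, 64)
    X, Y = np.meshgrid(x, y)
    cellular = lambda x, y: (-np.sin(x) * np.cos(y), np.cos(x) * np.sin(y))
    field = ftle(cellular, X, Y, T=5.0)            # ridges mark strong separation
    print(field.max())
    ```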

  2. Modeling gravitational instabilities in self-gravitating protoplanetary disks with adaptive mesh refinement techniques

    NASA Astrophysics Data System (ADS)

    Lichtenberg, Tim; Schleicher, Dominik R. G.

    2015-07-01

    The astonishing diversity in the observed planetary population requires theoretical efforts and advances in planet formation theories. The use of numerical approaches provides a method to tackle the weaknesses of current models and is an important tool to close gaps in poorly constrained areas such as the rapid formation of giant planets in highly evolved systems. So far, most numerical approaches make use of Lagrangian-based smoothed-particle hydrodynamics techniques or grid-based 2D axisymmetric simulations. We present a new global disk setup to model the first stages of giant planet formation via gravitational instabilities (GI) in 3D with the block-structured adaptive mesh refinement (AMR) hydrodynamics code enzo. With this setup, we explore the potential impact of AMR techniques on the fragmentation and clumping due to large-scale instabilities using different AMR configurations. Additionally, we seek to derive general resolution criteria for global simulations of self-gravitating disks of variable extent. We run a grid of simulations with varying AMR settings, including runs with a static grid for comparison. Additionally, we study the effects of varying the disk radius. The physical settings involve disks with Rdisk = 10, 100, and 300 AU, with a mass of Mdisk ≈ 0.05 M⊙ and a central object of subsolar mass (M⋆ = 0.646 M⊙). To validate our thermodynamical approach we include a set of simulations with a dynamically stable profile (Qinit = 3) and similar grid parameters. The development of fragmentation and the buildup of distinct clumps in the disk is strongly dependent on the chosen AMR grid settings. By combining our findings from the resolution and parameter studies we find a general lower limit criterion to be able to resolve GI induced fragmentation features and distinct clumps, which induce turbulence in the disk and seed giant planet formation. Irrespective of the physical extension of the disk, topologically disconnected clump features are only

  3. 40 CFR 80.128 - Alternative agreed upon procedures for refiners and importers.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... inventories to the refiner's or importer's perpetual inventory records. (c) Obtain separate listings of all... “ether only,” or using the assumptions in §§ 80.83(c)(1)(ii) (A) and (B) in the case of RBOB designated... sample: (1) Obtain the composite sample internal laboratory analyses results; and (2) Agree the...

  4. Three-dimensional Wavelet-based Adaptive Mesh Refinement for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.; Semakin, A. N.

    2013-12-01

    Accurate numerical simulations of global-scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems, such as the adverse effects of air pollutants on human health, ecosystems, and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity, and a large number of reacting species. In our previous work we have shown that in order to achieve an adequate convergence rate and accuracy, the mesh spacing in numerical simulations of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the difficulty described above, we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid spacing throughout the entire domain. The method uses a multigrid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations, and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically, we obtained an order of magnitude increase in computational speed for numerical simulations performed on a twelve-core single-processor workstation. We have applied the WAMR method for numerical
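
    The essence of a wavelet-based refinement criterion can be shown in a few lines: the detail coefficient of a fine-grid point is its deviation from an interpolation off the coarser level, and only points whose details exceed a tolerance are retained. The 1D sketch below uses linear prediction and a sharp tanh front as stand-ins; the actual WAMR algorithm works with higher-order wavelets in three dimensions.

    ```python
    import numpy as np

    def detail_coefficients(u):
        """Deviation of each odd (fine-level) sample from a linear
        interpolation of its even (coarse-level) neighbors; large values
        flag points that must be kept on the fine grid."""
        coarse = u[::2]                              # even samples = coarse level
        pred = 0.5 * (coarse[:-1] + coarse[1:])      # prediction at odd points
        return np.abs(u[1::2][: pred.size] - pred)

    x = np.linspace(0.0, 1.0, 257)
    u = np.tanh(200.0 * (x - 0.5))                   # sharp front at x = 0.5
    d = detail_coefficients(u)
    refine = d > 1e-3 * np.abs(u).max()
    print(refine.sum(), "of", d.size, "points flagged near the front")
    ```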

  5. DMM assessments of attachment and adaptation: Procedures, validity and utility.

    PubMed

    Farnfield, Steve; Hautamäki, Airi; Nørbech, Peder; Sahhar, Nicola

    2010-07-01

    This article gives a brief overview of the Dynamic-Maturational Model of attachment and adaptation (DMM; Crittenden, 2008), together with the various DMM assessments of attachment that have been developed for specific stages of development. Each assessment is discussed in terms of procedure, outcomes, validity, advantages and limitations, comparable procedures, and areas for further research and validation. The aims are twofold: to provide an introduction to DMM theory and its application, which underlie the articles in this issue of CCPP; and to provide researchers and clinicians with a guide to DMM assessments.

  6. Computations of Unsteady Viscous Compressible Flows Using Adaptive Mesh Refinement in Curvilinear Body-fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Modiano, David; Colella, Phillip

    1994-01-01

    A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.

  7. A novel hyperbolic grid generation procedure with inherent adaptive dissipation

    SciTech Connect

    Tai, C.H.; Yin, S.L.; Soong, C.Y.

    1995-01-01

    This paper reports a novel hyperbolic grid-generation procedure with inherent adaptive dissipation (HGAD), which is capable of reducing oscillation and overlapping of grid lines. In the present work, upwind differencing is applied to discretize the hyperbolic system and, thereby, to develop the adaptive dissipation coefficient. Complex configurations featuring geometric discontinuity and exceptional concavity and convexity are used as test cases for comparing the present HGAD procedure with conventional hyperbolic and elliptic ones. The results reveal that the HGAD method is superior in the orthogonality and smoothness of the grid system. In addition, the computational efficiency of the flow solver may be improved by using the present HGAD procedure. 15 refs., 8 figs.

  8. A Stable, Accurate Methodology for High Mach Number, Strong Magnetic Field MHD Turbulence with Adaptive Mesh Refinement: Resolution and Refinement Studies

    NASA Astrophysics Data System (ADS)

    Li, Pak Shing; Martin, Daniel F.; Klein, Richard I.; McKee, Christopher F.

    2012-02-01

    Performing a stable, long-duration simulation of driven MHD turbulence with a high thermal Mach number and a strong initial magnetic field is a challenge to high-order Godunov ideal MHD schemes because of the difficulty in guaranteeing positivity of the density and pressure. We have implemented a robust combination of reconstruction schemes, Riemann solvers, limiters, and constrained transport electromotive force averaging schemes that can meet this challenge, and using this strategy, we have developed a new adaptive mesh refinement (AMR) MHD module of the ORION2 code. We investigate the effects of AMR on several statistical properties of a turbulent ideal MHD system with a thermal Mach number of 10 and a plasma β0 of 0.1 as initial conditions; our code is shown to be stable for simulations with higher Mach numbers (M_rms = 17.3) and smaller plasma beta (β0 = 0.0067) as well. Our results show that the quality of the turbulence simulation is generally related to the volume-averaged refinement. Our AMR simulations show that the turbulent dissipation coefficient for supersonic MHD turbulence is about 0.5, in agreement with unigrid simulations.

  9. A STABLE, ACCURATE METHODOLOGY FOR HIGH MACH NUMBER, STRONG MAGNETIC FIELD MHD TURBULENCE WITH ADAPTIVE MESH REFINEMENT: RESOLUTION AND REFINEMENT STUDIES

    SciTech Connect

    Li, Pak Shing; Klein, Richard I.; Martin, Daniel F.; McKee, Christopher F.

    2012-02-01

    Performing a stable, long-duration simulation of driven MHD turbulence with a high thermal Mach number and a strong initial magnetic field is a challenge to high-order Godunov ideal MHD schemes because of the difficulty in guaranteeing positivity of the density and pressure. We have implemented a robust combination of reconstruction schemes, Riemann solvers, limiters, and constrained transport electromotive force averaging schemes that can meet this challenge, and using this strategy, we have developed a new adaptive mesh refinement (AMR) MHD module of the ORION2 code. We investigate the effects of AMR on several statistical properties of a turbulent ideal MHD system with a thermal Mach number of 10 and a plasma β0 of 0.1 as initial conditions; our code is shown to be stable for simulations with higher Mach numbers (M_rms = 17.3) and smaller plasma beta (β0 = 0.0067) as well. Our results show that the quality of the turbulence simulation is generally related to the volume-averaged refinement. Our AMR simulations show that the turbulent dissipation coefficient for supersonic MHD turbulence is about 0.5, in agreement with unigrid simulations.

  10. A learning heuristic for space mapping and searching self-organizing systems using adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Phillips, Carolyn L.

    2014-09-01

    In a complex self-organizing system, small changes in the interactions between the system's components can result in different emergent macrostructures or macrobehavior. In chemical engineering and material science, such spontaneously self-assembling systems, using polymers, nanoscale or colloidal-scale particles, DNA, or other precursors, are an attractive way to create materials that are precisely engineered at a fine scale. Changes to the interactions can often be described by a set of parameters. Different contiguous regions in this parameter space correspond to different ordered states. Since these ordered states are emergent, often experiment, not analysis, is necessary to create a diagram of ordered states over the parameter space. By issuing queries to points in the parameter space (e.g., performing a computational or physical experiment), ordered states can be discovered and mapped. Queries can be costly in terms of resources or time, however. In general, one would like to learn the most information using the fewest queries. Here we introduce a learning heuristic for issuing queries to map and search a two-dimensional parameter space. Using a method inspired by adaptive mesh refinement, the heuristic iteratively issues batches of queries to be executed in parallel based on past information. By adjusting the search criteria, different types of searches (for example, a uniform search, exploring boundaries, sampling all regions equally) can be flexibly implemented. We show that this method will densely search the space, while preferentially targeting certain features. Using numerical examples, including a study simulating the self-assembly of complex crystals, we show how this heuristic can discover new regions and map boundaries more accurately than a uniformly distributed set of queries.
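
    The sketch below illustrates the flavor of such an AMR-inspired search on a two-dimensional parameter space: each batch queries the corners of the current cells, and only cells whose corners report different ordered states are subdivided, so queries concentrate on phase boundaries. The toy two-state `label` function stands in for an expensive computational or physical experiment, and the uniform-depth loop is a simplification of the adjustable search criteria described above.

    ```python
    import numpy as np

    def refine_map(label, cells, depth=5):
        """Quadtree-style mapping of a 2D parameter space: cells whose corner
        queries disagree straddle a boundary between ordered states and are
        split into four children; uniform cells are kept as-is."""
        for _ in range(depth):
            next_cells = []
            for (x0, y0, x1, y1) in cells:
                corners = [(x0, y0), (x0, y1), (x1, y0), (x1, y1)]
                states = {label(x, y) for x, y in corners}   # batch of queries
                if len(states) > 1:                          # boundary cell
                    xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
                    next_cells += [(x0, y0, xm, ym), (xm, y0, x1, ym),
                                   (x0, ym, xm, y1), (xm, ym, x1, y1)]
                else:
                    next_cells.append((x0, y0, x1, y1))
            cells = next_cells
        return cells

    label = lambda x, y: int(x**2 + y**2 < 1.0)   # toy two-state diagram
    cells = refine_map(label, [(0.0, 0.0, 2.0, 2.0)])
    print(len(cells), "cells after refinement along the circular boundary")
    ```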

  11. GAMMA-RAY BURST DYNAMICS AND AFTERGLOW RADIATION FROM ADAPTIVE MESH REFINEMENT, SPECIAL RELATIVISTIC HYDRODYNAMIC SIMULATIONS

    SciTech Connect

    De Colle, Fabio; Ramirez-Ruiz, Enrico; Granot, Jonathan; Lopez-Camara, Diego

    2012-02-20

    We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^-k, bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.

  12. Gamma-Ray Burst Dynamics and Afterglow Radiation from Adaptive Mesh Refinement, Special Relativistic Hydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    De Colle, Fabio; Granot, Jonathan; López-Cámara, Diego; Ramirez-Ruiz, Enrico

    2012-02-01

    We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^-k, bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.

  13. Vertical Scan (V-SCAN) for 3-D Grid Adaptive Mesh Refinement for an atmospheric Model Dynamical Core

    NASA Astrophysics Data System (ADS)

    Andronova, N. G.; Vandenberg, D.; Oehmke, R.; Stout, Q. F.; Penner, J. E.

    2009-12-01

    One of the major building blocks of a rigorous representation of cloud evolution in global atmospheric models is a parallel adaptive grid MPI-based communication library (an Adaptive Blocks for Locally Cartesian Topologies library -- ABLCarT), which manages the block-structured data layout, handles ghost cell updates among neighboring blocks and splits a block as refinements occur. The library has several modules that provide a layer of abstraction for adaptive refinement: blocks, which contain individual cells of user data; shells - the global geometry for the problem, including a sphere, reduced sphere, and now a 3D sphere; a load balancer for placement of blocks onto processors; and a communication support layer which encapsulates all data movement. A major performance concern with adaptive mesh refinement is how to represent calculations that need to be sequenced in a particular order along a direction, such as calculating integrals along a specific path (e.g. atmospheric pressure or geopotential in the vertical dimension). This concern is compounded if the blocks have varying levels of refinement, or are scattered across different processors, as can be the case in parallel computing. In this paper we describe an implementation in ABLCarT of a vertical scan operation, which allows computing along vertical paths in the correct order across blocks, transparent to their resolution and processor location. We test this functionality on a 2D and a 3D advection problem, which tests the performance of the model's dynamics (transport) and physics (sources and sinks) for different model resolutions needed for inclusion of cloud formation.
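
    The following sketch captures the serial essence of a vertical scan: blocks covering a column at different resolutions are visited strictly top-down, and the running integral (here hydrostatic pressure) is carried across block boundaries. The block layout, density profile, and boundary value are invented for the example; the actual ABLCarT operation additionally handles blocks scattered across processors.

    ```python
    import numpy as np

    g = 9.81
    blocks = [                                 # (top [m], bottom [m], n cells), unordered
        (4000.0, 2000.0, 8),                   # coarse upper block
        (2000.0, 1000.0, 16),                  # refined middle block
        (1000.0, 0.0, 8),
    ]
    p = 600e2                                  # pressure at column top [Pa]
    for top, bottom, n in sorted(blocks, key=lambda b: -b[0]):   # enforce scan order
        z = np.linspace(top, bottom, n + 1)
        for k in range(n):
            rho = 1.2 * np.exp(-z[k] / 8000.0)                   # toy density profile
            p += rho * g * (z[k] - z[k + 1])   # dp = -rho*g*dz, accumulated downward
    print(p)                                   # surface pressure, ~970 hPa here
    ```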

  14. 3-D grid refinement using the University of Michigan adaptive mesh library for a pure advective test

    NASA Astrophysics Data System (ADS)

    Oehmke, R.; Vandenberg, D.; Andronova, N.; Penner, J.; Stout, Q.; Zubov, V.; Jablonowski, C.

    2008-05-01

    The numerical representation of the partial differential equations (PDE) for high resolution atmospheric dynamical and physical features requires division of the atmospheric volume into a set of 3D grids, each of which has a not quite rectangular form. Each location on the grid contains multiple data that together represent the state of Earth's atmosphere. For successful numerical integration of the PDEs, the size of each grid box is used to define the Courant-Friedrichs-Lewy criterion in setting the time step. 3D adaptive representations of a sphere are needed to represent the evolution of clouds. In this paper we present the University of Michigan adaptive mesh library - a library that supports the production of parallel codes with use of adaptation on a sphere. The library manages the block-structured data layout, handles ghost cell updates among neighboring blocks and splits blocks as refinements occur. The library has several modules that provide a layer of abstraction for adaptive refinement: blocks, which contain individual cells of user data; shells — the global geometry for the problem, including a sphere, reduced sphere, and now a 3D sphere; a load balancer for placement of blocks onto processors; and a communication support layer which encapsulates all data movement. Users provide data manipulation functions for performing interpolation of user data when refining blocks. We rigorously test the library using refinement of the modeled vertical transport of a tracer with prescribed atmospheric sources and sinks. It is both a 2D and a 3D test, and bridges the performance of the model's dynamics and physics needed for inclusion of cloud formation.

  15. pH-zone-refining counter-current chromatography: Origin, mechanism, procedure and applications

    PubMed Central

    Ito, Yoichiro

    2012-01-01

    Since 1980, high-speed counter-current chromatography (HSCCC) has been used for separation and purification of natural and synthetic products in a standard elution mode. In 1991, a novel elution mode called pH-zone refining CCC was introduced from an incidental discovery that an organic acid in the sample solution formed the sharp peak of an acid analyte. The cause of this sharp peak formation was found to be bromoacetic acid present in the sample solution which formed a sharp trailing border to trap the acidic analyte. Further studies on the separation of DNP-amino acids with three spacer acids in the stationary phase revealed that increased sample size resulted in the formation of fused rectangular peaks, each preserving high purity and zone pH with sharp boundaries. The mechanism of this phenomenon was found to be the formation of a sharp trailing border of an acid (retainer) in the column which moves at a lower rate than that of the mobile phase. In order to facilitate the application of the method, a new method was devised using a set of retainer and eluter to form a sharp retainer rear border which moves through the column at a desired rate regardless of the composition of the two-phase solvent system. This was achieved by adding the retainer in the stationary phase and the eluter in the mobile phase at a given molar ratio. Using this new method the hydrodynamics of pH-zone-refining CCC was diagrammatically illustrated by three acidic samples. In this review paper, typical pH-zone-refining CCC separations were presented, including affinity separations with a ligand and a separation of a racemic mixture using a chiral selector in the stationary phase. Major characteristics of pH-zone-refining CCC over conventional HSCCC are as follows: the sample loading capacity is increased over 10 times; fractions are highly concentrated near saturation level; yield is improved by increasing the sample size; minute charged compounds are concentrated and detected at the peak

  16. pH-zone-refining counter-current chromatography: origin, mechanism, procedure and applications.

    PubMed

    Ito, Yoichiro

    2013-01-04

    Since 1980, high-speed counter-current chromatography (HSCCC) has been used for separation and purification of natural and synthetic products in a standard elution mode. In 1991, a novel elution mode called pH-zone refining CCC was introduced from an incidental discovery that an organic acid in the sample solution formed the sharp peak of an acid analyte. The cause of this sharp peak formation was found to be bromoacetic acid present in the sample solution which formed a sharp trailing border to trap the acidic analyte. Further studies on the separation of DNP-amino acids with three spacer acids in the stationary phase revealed that increased sample size resulted in the formation of fused rectangular peaks, each preserving high purity and zone pH with sharp boundaries. The mechanism of this phenomenon was found to be the formation of a sharp trailing border of an acid (retainer) in the column which moves at a lower rate than that of the mobile phase. In order to facilitate the application of the method, a new method was devised using a set of retainer and eluter to form a sharp retainer rear border which moves through the column at a desired rate regardless of the composition of the two-phase solvent system. This was achieved by adding the retainer in the stationary phase and the eluter in the mobile phase at a given molar ratio. Using this new method the hydrodynamics of pH-zone-refining CCC was diagrammatically illustrated by three acidic samples. In this review paper, typical pH-zone-refining CCC separations were presented, including affinity separations with a ligand and a separation of a racemic mixture using a chiral selector in the stationary phase. Major characteristics of pH-zone-refining CCC over conventional HSCCC are as follows: the sample loading capacity is increased over 10 times; fractions are highly concentrated near saturation level; yield is improved by increasing the sample size; minute charged compounds are concentrated and detected at the peak

  17. A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1994-01-01

    A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: a gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
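
    The recursive subdivision described here is easy to prototype. The hedged sketch below refines, down to a maximum depth, every Cartesian cell whose corners straddle a body boundary (a circle standing in for a real geometry); the actual code additionally clips the cut cells into N-sided polygons and stores the hierarchy in a binary tree for cell-to-cell connectivity.

    ```python
    def inside(x, y):
        return x**2 + y**2 < 0.25                 # toy body: circle of radius 0.5

    def subdivide(x0, y0, size, depth, cells):
        """Recursively split a Cartesian cell into four children wherever its
        corners straddle the body boundary (a 'cut' cell); cells entirely
        inside or outside the body, or at max depth, become leaves."""
        xs = [x0, x0 + size, x0, x0 + size]
        ys = [y0, y0, y0 + size, y0 + size]
        flags = [inside(x, y) for x, y in zip(xs, ys)]
        if depth == 0 or all(flags) or not any(flags):
            cells.append((x0, y0, size))          # leaf cell
            return
        half = 0.5 * size                          # cut cell: refine into 4
        for dx in (0.0, half):
            for dy in (0.0, half):
                subdivide(x0 + dx, y0 + dy, half, depth - 1, cells)

    cells = []
    subdivide(-1.0, -1.0, 2.0, depth=6, cells=cells)
    print(len(cells), "leaf cells, refined along the body surface")
    ```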

  18. A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: A gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment and other accepted computational results for a series of low and moderate Reynolds number flows.

  19. A Freestream-Preserving High-Order Finite-Volume Method for Mapped Grids with Adaptive-Mesh Refinement

    SciTech Connect

    Guzik, S; McCorquodale, P; Colella, P

    2011-12-16

    A fourth-order accurate finite-volume method is presented for solving time-dependent hyperbolic systems of conservation laws on mapped grids that are adaptively refined in space and time. Novel considerations for formulating the semi-discrete system of equations in computational space combined with detailed mechanisms for accommodating the adapting grids ensure that conservation is maintained and that the divergence of a constant vector field is always zero (freestream-preservation property). Advancement in time is achieved with a fourth-order Runge-Kutta method.

  20. A Refinement of Risk Analysis Procedures for Trichloroethylene Through the Use of Monte Carlo Method in Conjunction with Physiologically Based Pharmacokinetic Modeling

    DTIC Science & Technology

    1993-09-01

    This study refines risk analysis procedures for trichloroethylene (TCE) using a physiologically based pharmacokinetic (PBPK) model in conjunction... promulgate, and better present, more realistic standards.... Keywords: risk analysis; physiologically based pharmacokinetics (PBPK); trichloroethylene; Monte Carlo method.

  1. Simulations of recoiling black holes: adaptive mesh refinement and radiative transfer

    NASA Astrophysics Data System (ADS)

    Meliani, Zakaria; Mizuno, Yosuke; Olivares, Hector; Porth, Oliver; Rezzolla, Luciano; Younsi, Ziri

    2017-01-01

    Context. In many astrophysical phenomena, and especially in those that involve the high-energy regimes that always accompany the astronomical phenomenology of black holes and neutron stars, physical conditions that are achieved are extreme in terms of speeds, temperatures, and gravitational fields. In such relativistic regimes, numerical calculations are the only tool to accurately model the dynamics of the flows and the transport of radiation in the accreting matter. Aims: We here continue our effort of modelling the behaviour of matter when it orbits or is accreted onto a generic black hole by developing a new numerical code that employs advanced techniques geared towards solving the equations of general-relativistic hydrodynamics. Methods: More specifically, the new code employs a number of high-resolution shock-capturing Riemann solvers and reconstruction algorithms, exploiting the enhanced accuracy and the reduced computational cost of adaptive mesh-refinement (AMR) techniques. In addition, the code makes use of sophisticated ray-tracing libraries that, coupled with general-relativistic radiation-transfer calculations, allow us to accurately compute the electromagnetic emissions from such accretion flows. Results: We validate the new code by presenting an extensive series of stationary accretion flows either in spherical or axial symmetry that are performed either in two or three spatial dimensions. In addition, we consider the highly nonlinear scenario of a recoiling black hole produced in the merger of a supermassive black-hole binary interacting with the surrounding circumbinary disc. In this way, we can present for the first time ray-traced images of the shocked fluid and the light curve resulting from consistent general-relativistic radiation-transport calculations from this process. Conclusions: The work presented here lays the ground for the development of a generic computational infrastructure employing AMR techniques to accurately and self

  2. COLLABORATIVE RESEARCH: CONTINUOUS DYNAMIC GRID ADAPTATION IN A GLOBAL ATMOSPHERIC MODEL: APPLICATION AND REFINEMENT

    SciTech Connect

    Gutowski, William J.; Prusa, Joseph M.; Smolarkiewicz, Piotr K.

    2012-05-08

    This project had the goals of advancing the performance capabilities of the numerical general circulation model EULAG and using it to produce a fully operational atmospheric global climate model (AGCM) that can employ either static or dynamic grid stretching for targeted phenomena. The resulting AGCM combined EULAG's advanced dynamics core with the "physics" of the NCAR Community Atmospheric Model (CAM). The effort discussed below shows how we improved model performance and tested both EULAG and the coupled CAM-EULAG in several ways to demonstrate the grid stretching and the ability to simulate a wide range of scales very well, that is, multi-scale capability. We leveraged our effort through interaction with an international EULAG community that has collectively developed new features and applications of EULAG, which we exploited for our own work summarized here. Overall, the work contributed to over 40 peer-reviewed publications and over 70 conference/workshop/seminar presentations, many of them invited. EULAG Advances: EULAG is a non-hydrostatic, parallel computational model for all-scale geophysical flows. EULAG's name derives from its two computational options: EULerian (flux form) or semi-LAGrangian (advective form). The model combines nonoscillatory forward-in-time (NFT) numerical algorithms with a robust elliptic Krylov solver. A signature feature of EULAG is that it is formulated in generalized time-dependent curvilinear coordinates. In particular, this enables grid adaptivity. In total, these features give EULAG novel advantages over many existing dynamical cores. For EULAG itself, numerical advances included refining boundary conditions and filters for optimizing model performance in polar regions. We also added flexibility to the model's underlying formulation, allowing it to work with the pseudo-compressible equation set of Durran in addition to EULAG's standard anelastic formulation. Work in collaboration with others also extended the demonstrated range of

  3. Development of Adaptive Model Refinement (AMoR) for Multiphysics and Multifidelity Problems

    SciTech Connect

    Turinsky, Paul

    2015-02-09

    This project investigated the development and utilization of Adaptive Model Refinement (AMoR) for nuclear systems simulation applications. AMoR refers to the utilization of several models of physical phenomena that differ in prediction fidelity. If the highest-fidelity model is judged to always provide or exceed the desired fidelity, then, if one can determine the difference in a Quantity of Interest (QoI) between the highest-fidelity model and lower-fidelity models, one could utilize the fidelity model that just provides the desired accuracy in the QoI. Since lower-fidelity models require fewer computational resources, computational efficiency can be realized in this manner, provided the QoI value can be evaluated accurately and efficiently. This work utilized Generalized Perturbation Theory (GPT) to evaluate the QoI, by convoluting the GPT solution with the residual of the highest-fidelity model evaluated using the solution from lower-fidelity models. Specifically, a reactor core neutronics problem and a thermal-hydraulics problem were studied to develop and utilize AMoR. The highest-fidelity neutronics model was based upon the 3D space-time, two-group, nodal diffusion equations as solved in the NESTLE computer code. Added to the NESTLE code was the ability to determine the time-dependent GPT neutron flux. The lower-fidelity neutronics model was based upon the point kinetics equations along with utilization of a prolongation operator to determine the 3D space-time, two-group flux. The highest-fidelity thermal-hydraulics model was based upon the space-time equations governing fluid flow in a closed channel around a heat-generating fuel rod. The Homogeneous Equilibrium Mixture (HEM) model was used for the fluid, and the Finite Difference Method was applied to both the coolant and fuel pin energy conservation equations. The lower-fidelity thermal-hydraulic model was based upon the same equations as used for the highest-fidelity model but now with coarse spatial
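
    The adjoint-based mechanics behind this use of GPT can be shown on a linear stand-in: for A u = b with QoI q·u, the difference between the high-fidelity QoI and that of any cheaper solution equals the adjoint-weighted residual, so the correction can be evaluated without solving the high-fidelity system. The matrices and the diagonal "low-fidelity model" below are invented for the example and are not the NESTLE models.

    ```python
    import numpy as np

    # Linear stand-in for the adjoint/GPT correction: the QoI error of a
    # low-fidelity solution equals the adjoint-weighted residual of the
    # high-fidelity equations, q.u_hi - q.u_lo = psi . (b - A u_lo),
    # exactly in the linear case.
    rng = np.random.default_rng(1)
    n = 20
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # high-fidelity operator
    b = rng.standard_normal(n)
    q = rng.standard_normal(n)                          # QoI functional q . u

    u_hi = np.linalg.solve(A, b)                        # reference answer
    u_lo = np.linalg.solve(np.diag(np.diag(A)), b)      # cheap low-fidelity model

    psi = np.linalg.solve(A.T, q)                       # adjoint (GPT) solution
    dq = psi @ (b - A @ u_lo)                           # residual-weighted correction

    print(q @ u_hi - q @ u_lo)   # true QoI difference
    print(dq)                    # matches: decide if low fidelity suffices
    ```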

  4. Constrained-Transport Magnetohydrodynamics with Adaptive-Mesh-Refinement in CHARM

    SciTech Connect

    Miniatii, Francesco; Martin, Daniel

    2011-05-24

    We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit Corner-Transport-Upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the Piecewise Parabolic Method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a Constrained-Transport (CT) method. The so-called "multidimensional MHD source terms" required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.
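
    The divergence-preserving property of constrained transport is a purely discrete identity and can be verified in a few lines. The hedged 2D sketch below places Bx and By on faces and the EMF Ez on corners of a periodic staggered grid; a single curl-of-Ez update leaves the discrete divergence of B unchanged to round-off. The random fields are placeholders, not a physical state.

    ```python
    import numpy as np

    # 2D constrained-transport update on a staggered, periodic grid: Bx on
    # left x-faces, By on bottom y-faces, Ez on cell corners. Updating B from
    # the curl of Ez keeps the discrete divergence of B exactly constant.
    n, h, dt = 32, 1.0 / 32, 1e-3
    rng = np.random.default_rng(0)
    Bx = rng.standard_normal((n, n))
    By = rng.standard_normal((n, n))
    Ez = rng.standard_normal((n, n))      # arbitrary corner EMF for the demo

    def divB(Bx, By):
        return ((np.roll(Bx, -1, axis=1) - Bx)
                + (np.roll(By, -1, axis=0) - By)) / h

    d0 = divB(Bx, By)
    # Faraday's law via Stokes' theorem on each face (periodic wrap-around):
    Bx -= dt * (np.roll(Ez, -1, axis=0) - Ez) / h   # dBx/dt = -dEz/dy
    By += dt * (np.roll(Ez, -1, axis=1) - Ez) / h   # dBy/dt = +dEz/dx
    print(np.max(np.abs(divB(Bx, By) - d0)))        # ~1e-16: div B unchanged
    ```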

  5. A procedure for the estimation of the numerical uncertainty of CFD calculations based on grid refinement studies

    SciTech Connect

    Eça, L.; Hoekstra, M.

    2014-04-01

    This paper offers a procedure for the estimation of the numerical uncertainty of any integral or local flow quantity as a result of a fluid flow computation; the procedure requires solutions on systematically refined grids. The error is estimated with power series expansions as a function of the typical cell size. These expansions, of which four types are used, are fitted to the data in the least-squares sense. The selection of the best error estimate is based on the standard deviation of the fits. The error estimate is converted into an uncertainty with a safety factor that depends on the observed order of grid convergence and on the standard deviation of the fit. For well-behaved data sets, i.e. monotonic convergence with the expected observed order of grid convergence and no scatter in the data, the method reduces to the well-known Grid Convergence Index. Examples of application of the procedure are included.
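
    The numerical core of such a procedure fits a truncated power-series error expansion to solutions from systematically refined grids. The hedged sketch below fits the one-term model phi(h) = phi0 + alpha*h^p by nonlinear least squares and reads off the extrapolated value, the observed order, and a finest-grid error estimate; the data are manufactured, and the paper's additional expansions, fit selection, and safety factors are omitted.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    h = np.array([1.0, 0.5, 0.25, 0.125])            # relative cell sizes
    phi = np.array([1.2, 1.05, 1.0125, 1.003125])    # manufactured flow quantity

    # One-term error expansion: phi(h) = phi0 + alpha * h**p.
    model = lambda h, phi0, alpha, p: phi0 + alpha * h**p
    (phi0, alpha, p), _ = curve_fit(model, h, phi, p0=(phi[-1], 1.0, 2.0))

    err = abs(alpha * h[-1] ** p)    # error estimate on the finest grid
    print(f"extrapolated={phi0:.6f}  observed order={p:.2f}  error={err:.2e}")
    ```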

  6. Symmetry-adapted Wannier functions in the maximal localization procedure

    NASA Astrophysics Data System (ADS)

    Sakuma, R.

    2013-06-01

    A procedure to construct symmetry-adapted Wannier functions in the framework of the maximally localized Wannier function approach [Marzari and Vanderbilt, Phys. Rev. B 56, 12847 (1997); Souza, Marzari, and Vanderbilt, Phys. Rev. B 65, 035109 (2001)] is presented. In this scheme, the minimization of the spread functional of the Wannier functions is performed with constraints that are derived from symmetry properties of the specified set of the Wannier functions and the Bloch functions used to construct them; therefore, one can obtain a solution that does not necessarily yield the global minimum of the spread functional. As a test of this approach, results of atom-centered Wannier functions for GaAs and Cu are presented.

  7. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGES

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
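
    The enhancement step can be written compactly. In the notation below (ours, not the paper's), Q_sg is the sparse-grid surrogate of the quantity of interest at a random sample xi, lambda the adjoint solution, and r the residual of the discretized PDE evaluated at the interpolated state u_sg(xi); the same estimate epsilon can then be used to decide which stochastic dimensions to refine first:

    \[
      \tilde{Q}(\xi) \;=\; Q_{\mathrm{sg}}(\xi) + \varepsilon(\xi),
      \qquad
      \varepsilon(\xi) \;\approx\; \langle \lambda(\xi),\, r\!\left(u_{\mathrm{sg}}(\xi)\right)\rangle .
    \]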

  8. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  9. One "8"-shaped scleral suture to treat rhegmatogenous retinal detachment: a refined procedure of minimal scleral buckling.

    PubMed

    Min, H Y; Chen, D; Chen, Y; Dong, F T

    2014-08-28

    The aim of this study was to investigate the outcomes of one "8"-shaped scleral suture of minimal scleral buckling (MSB) surgery without sub-retinal drainage for rhegmatogenous retinal detachment (RRD) treatment. Thirty patients (30 eyes) with RRD were recruited, and all 30 eyes were repaired with one "8"-shaped scleral suture of minimal buckling without subretinal drainage by one surgeon. The refined MSB procedure is described. Reattachment time and best-corrected visual acuity (BCVA) were observed. The age of the 30 patients ranged from 17 to 65 years (mean, 43.1 ± 8.6 years). The retinas of 19 eyes (63.3%) reattached within 12 h of the operations, and those of the remaining 11 eyes (36.7%) reattached within 72 h. The average time of follow-up was 10.4 ± 2.8 months. BCVA increased in 27 eyes (90%), whereas that of 3 eyes did not change. The mean preoperative BCVA was 0.738 ± 0.368 log minimal angle of resolution (logMAR) and the mean postoperative BCVA was 0.422 ± 0.278 logMAR; the difference was statistically significant (P < 0.05). In only one eye, the buckling sponge had become exposed through the conjunctiva and was removed; the retina remained attached. In conclusion, an "8"-shaped scleral suture of MSB without sub-retinal drainage is an efficient procedure for treating selected RRD cases.

  10. Parallelization of Unsteady Adaptive Mesh Refinement for Unstructured Navier-Stokes Solvers

    NASA Technical Reports Server (NTRS)

    Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.

    2014-01-01

    This paper explores the implementation of MPI parallelization in a Navier-Stokes solver using adaptive mesh refinement. Viscous and inviscid test problems are considered for the purpose of benchmarking, as are implicit and explicit time advancement methods. The main test problem for comparison includes effects from boundary layers and other viscous features and requires a large number of grid points for accurate computation. Experimental validation against double cone experiments in hypersonic flow is shown. The adaptive mesh refinement shows promise for a staple test problem in the hypersonic community. Extension to more advanced techniques for more complicated flows is described.

  11. Code Development of Three-Dimensional General Relativistic Hydrodynamics with AMR (Adaptive-Mesh Refinement) and Results from Special and General Relativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Dönmez, Orhan

    2004-09-01

    In this paper, the general procedure to solve the general relativistic hydrodynamical (GRH) equations with adaptive-mesh refinement (AMR) is presented. To achieve this, the GRH equations are written in conservation form to exploit their hyperbolic character. The numerical solutions of the GRH equations are obtained by high-resolution shock-capturing (HRSC) schemes, specifically designed to solve nonlinear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. The Marquina fluxes with MUSCL left and right states are used to solve the GRH equations. First, different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations are carried out to verify the second-order convergence of the code in one, two, and three dimensions. Results from uniform and AMR grids are compared. It is found that the adaptive grid performs better as the resolution is increased. Second, the GRH equations are tested using two different test problems, geodesic flow and circular motion of a particle. To do this, the flux part of the GRH equations is coupled with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time.

  12. Moving Overlapping Grids with Adaptive Mesh Refinement for High-Speed Reactive and Non-reactive Flow

    SciTech Connect

    Henshaw, W D; Schwendeman, D W

    2005-08-30

    We consider the solution of the reactive and non-reactive Euler equations on two-dimensional domains that evolve in time. The domains are discretized using moving overlapping grids. In a typical grid construction, boundary-fitted grids are used to represent moving boundaries, and these grids overlap with stationary background Cartesian grids. Block-structured adaptive mesh refinement (AMR) is used to resolve fine-scale features in the flow such as shocks and detonations. Refinement grids are added to base-level grids according to an estimate of the error, and these refinement grids move with their corresponding base-level grids. The numerical approximation of the governing equations takes place in the parameter space of each component grid which is defined by a mapping from (fixed) parameter space to (moving) physical space. The mapped equations are solved numerically using a second-order extension of Godunov's method. The stiff source term in the reactive case is handled using a Runge-Kutta error-control scheme. We consider cases when the boundaries move according to a prescribed function of time and when the boundaries of embedded bodies move according to the surface stress exerted by the fluid. In the latter case, the Newton-Euler equations describe the motion of the center of mass of each body and the rotation about it, and these equations are integrated numerically using a second-order predictor-corrector scheme. Numerical boundary conditions at slip walls are described, and numerical results are presented for both reactive and non-reactive flows in order to demonstrate the use and accuracy of the numerical approach.

  13. Analysis of adaptive mesh refinement for IMEX discontinuous Galerkin solutions of the compressible Euler equations with application to atmospheric simulations

    NASA Astrophysics Data System (ADS)

    Kopera, Michal A.; Giraldo, Francis X.

    2014-10-01

    The resolutions of interest in atmospheric simulations require prohibitively large computational resources. Adaptive mesh refinement (AMR) tries to mitigate this problem by putting high resolution in crucial areas of the domain. We investigate the performance of a tree-based AMR algorithm for the high-order discontinuous Galerkin method on quadrilateral grids with non-conforming elements. We perform a detailed analysis of the cost of AMR by comparing it to uniform reference simulations of two standard atmospheric test cases: density current and rising thermal bubble. The analysis shows up to a 15-times speed-up of the AMR simulations, with the cost of mesh adaptation below 1% of the total runtime. We pay particular attention to the implicit-explicit (IMEX) time integration methods and show that the ARK2 method is more robust with respect to dynamically adapting meshes than BDF2. Preliminary analysis of preconditioning reveals that it can be an important factor in the AMR overhead. Compiler optimizations provide significant runtime reduction and positively affect the effectiveness of AMR, allowing for speed-ups greater than the simple performance model would suggest.

  14. Refining Trait Resilience: Identifying Engineering, Ecological, and Adaptive Facets from Extant Measures of Resilience.

    PubMed

    Maltby, John; Day, Liz; Hall, Sophie

    2015-01-01

    The current paper presents a new measure of trait resilience derived from three common mechanisms identified in ecological theory: Engineering, Ecological and Adaptive (EEA) resilience. Exploratory and confirmatory factor analyses of five existing resilience scales suggest that the three trait resilience facets emerge, and can be reduced to a 12-item scale. The conceptualization and value of EEA resilience within the wider trait and well-being psychology is illustrated in terms of differing relationships with adaptive expressions of the traits of the five-factor personality model and the contribution to well-being after controlling for personality and coping, or over time. The current findings suggest that EEA resilience is a useful and parsimonious model and measure of trait resilience that can readily be placed within wider trait psychology and that is found to contribute to individual well-being.

  15. Refining Trait Resilience: Identifying Engineering, Ecological, and Adaptive Facets from Extant Measures of Resilience

    PubMed Central

    Maltby, John; Day, Liz; Hall, Sophie

    2015-01-01

    The current paper presents a new measure of trait resilience derived from three common mechanisms identified in ecological theory: Engineering, Ecological and Adaptive (EEA) resilience. Exploratory and confirmatory factor analyses of five existing resilience scales suggest that the three trait resilience facets emerge, and can be reduced to a 12-item scale. The conceptualization and value of EEA resilience within the wider trait and well-being psychology is illustrated in terms of differing relationships with adaptive expressions of the traits of the five-factor personality model and the contribution to well-being after controlling for personality and coping, or over time. The current findings suggest that EEA resilience is a useful and parsimonious model and measure of trait resilience that can readily be placed within wider trait psychology and that is found to contribute to individual well-being. PMID:26132197

  16. Adaptive-mesh-refinement simulation of partial coalescence cascade of a droplet at a liquid-liquid interface

    NASA Astrophysics Data System (ADS)

    Fakhari, Abbas; Bolster, Diogo

    2016-11-01

    A three-dimensional (3D) adaptive mesh refinement (AMR) algorithm on structured Cartesian grids is developed, and supplemented by a mesoscopic multiphase-flow solver based on state-of-the-art lattice Boltzmann methods (LBM). Using this in-house AMR-LBM routine, we present fully 3D simulations of partial coalescence of a liquid drop with an initially flat interface at small Ohnesorge and Bond numbers. Qualitatively, our numerical simulations are in excellent agreement with experimental observations. Partial coalescence cascades are successfully observed at very small Ohnesorge numbers (Oh ~ 10^-4). The fact that partial coalescence is absent in similar 2D simulations suggests that the Rayleigh-Plateau instability may be the principal driving mechanism responsible for this phenomenon.

  17. THREE-DIMENSIONAL ADAPTIVE MESH REFINEMENT SIMULATIONS OF LONG-DURATION GAMMA-RAY BURST JETS INSIDE MASSIVE PROGENITOR STARS

    SciTech Connect

    Lopez-Camara, D.; Lazzati, Davide; Morsony, Brian J.; Begelman, Mitchell C.

    2013-04-10

    We present the results of special relativistic, adaptive mesh refinement, 3D simulations of gamma-ray burst jets expanding inside a realistic stellar progenitor. Our simulations confirm that relativistic jets can propagate and break out of the progenitor star while remaining relativistic. This result is independent of the resolution, even though the amount of turbulence and variability observed in the simulations is greater at higher resolutions. We find that the propagation of the jet head inside the progenitor star is slightly faster in 3D simulations compared to 2D ones at the same resolution. This behavior seems to be due to the fact that the jet head in 3D simulations can wobble around the jet axis, finding the spot of least resistance to proceed. Most of the average jet properties, such as density, pressure, and Lorentz factor, are only marginally affected by the dimensionality of the simulations and therefore results from 2D simulations can be considered reliable.

  18. Eutectic pattern transition under different temperature gradients: A phase field study coupled with the parallel adaptive-mesh-refinement algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, A.; Guo, Z.; Xiong, S.-M.

    2017-03-01

    Eutectic pattern transition under an externally imposed temperature gradient was studied using the phase field method coupled with a novel parallel adaptive-mesh-refinement (Para-AMR) algorithm. Numerical tests revealed that the Para-AMR algorithm could improve the computational efficiency by two orders of magnitude and thus made it possible to perform large-scale simulations without compromising accuracy. Results showed that the direction of the temperature gradient played a crucial role in determining the eutectic patterns during solidification, which agreed well with experimental observations. In particular, the presence of the transverse temperature gradient could tilt the eutectic patterns, and in 3D simulations, the eutectic microstructure would alter from lamellar to rod-like and/or from rod-like to dumbbell-shaped. Furthermore, under a radial temperature gradient, the eutectic would evolve from a dumbbell-shaped or clover-shaped pattern to an isolated rod-like pattern.

  19. GeoClawSed: A Model with Finite Volume and Adaptive Refinement Method for Tsunami Sediment Transport

    NASA Astrophysics Data System (ADS)

    Tang, H.; Weiss, R.

    2015-12-01

    The shallow-water and advection-diffusion equations are commonly used for tsunami sediment-transport modeling. GeoClawSed is based on GeoClaw and adds a bed-updating and avalanching scheme to the two-dimensional coupled system combining the shallow-water and advection-diffusion equations, a set of hyperbolic integral conservation laws. The modeling system consists of three coupled model components: (1) the shallow-water equations for hydrodynamics; (2) the advection-diffusion equation for sediment transport; and (3) an equation for morphodynamics. For the hydrodynamic part, finite-volume wave propagation methods (high-resolution Godunov-type methods) are applied to the shallow-water equations. The well-known Riemann solver in GeoClaw is capable of dealing with the diverse flow regimes present during tsunami flows. For the sediment-transport part, the advection-diffusion equation is employed to calculate the distribution of sediment in the water column. In the fully coupled version, the advection-diffusion equation is also included in the Riemann solver. The Van Leer method is applied for calculating the sediment flux in each direction. The bed-updating and avalanching scheme (morphodynamics) is used for updating the topography during tsunami wave propagation. The adaptive refinement method is extended to the hydrodynamic part, the sediment-transport model, and the topography. GeoClawSed can evolve different resolutions and accurately capture discontinuities in both the flow dynamics and the sediment transport. Together, these components make GeoClawSed suitable for modeling tsunami propagation, inundation, sediment transport, and topography change. Finally, GeoClawSed is applied to studying marine and terrestrial deposit distributions after a tsunami. Keywords: Tsunami; Sediment Transport; Shallow Water Equations; Advection-Diffusion Equation; Adaptive Refinement Method
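
    For reference, one common form of such a coupled system is sketched below; the notation is ours, not necessarily GeoClawSed's: h is flow depth, u depth-averaged velocity, C depth-averaged sediment concentration, b bed elevation, lambda bed porosity, kappa a diffusivity, and E and D entrainment and deposition rates.

    \[
    \begin{aligned}
      &\partial_t h + \nabla\cdot(h\mathbf{u}) = 0,\\
      &\partial_t(h\mathbf{u}) + \nabla\cdot\!\left(h\mathbf{u}\otimes\mathbf{u} + \tfrac{1}{2}gh^2\,\mathbf{I}\right) = -gh\,\nabla b,\\
      &\partial_t(hC) + \nabla\cdot(hC\,\mathbf{u}) = \nabla\cdot(\kappa h\,\nabla C) + E - D,\\
      &(1-\lambda)\,\partial_t b = D - E.
    \end{aligned}
    \]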

  20. Refining the calculation procedure for estimating the influence of flashing steam in steam turbine heaters on the increase of rotor rotation frequency during rejection of electric load

    NASA Astrophysics Data System (ADS)

    Novoselov, V. B.; Shekhter, M. V.

    2012-12-01

    A refined procedure for estimating the effect that the flashing of condensate in a steam turbine's regenerative and delivery-water heaters has on the increase of rotor rotation frequency during rejection of electric load is presented. The results of calculations carried out according to the proposed procedure, as applied to the delivery-water and regenerative heaters of a T-110/120-12.8 turbine, are given.

  1. Patched based methods for adaptive mesh refinement solutions of partial differential equations

    SciTech Connect

    Saltzman, J.

    1997-09-02

    This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, which was "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs, with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes. These schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundaries, and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods, and elliptically constrained flow solution methods with diffusion.

  2. Total enthalpy-based lattice Boltzmann method with adaptive mesh refinement for solid-liquid phase change

    NASA Astrophysics Data System (ADS)

    Huang, Rongzong; Wu, Huiying

    2016-06-01

    A total enthalpy-based lattice Boltzmann (LB) method with adaptive mesh refinement (AMR) is developed in this paper to efficiently simulate solid-liquid phase change problems, in which variables vary significantly near the phase interface and a finer grid is therefore required. For the total enthalpy-based LB method, the velocity field is solved by an incompressible LB model with a multiple-relaxation-time (MRT) collision scheme, and the temperature field is solved by a total enthalpy-based MRT LB model with the phase interface effects considered and the deviation term eliminated. With a kinetic assumption that the density distribution function for the solid phase is at its equilibrium state, a volumetric LB scheme is proposed to accurately realize the nonslip velocity condition on the diffusive phase interface and in the solid phase. As compared with previous schemes, this scheme can avoid nonphysical flow in the solid phase. The AMR approach is developed based on multiblock grids. An indicator function is introduced to control the adaptive generation of multiblock grids, which guarantees the existence of an overlap area between adjacent blocks for information exchange. Since MRT collision schemes are used, the information exchange is carried out directly in moment space. Numerical tests are first performed to validate the strict satisfaction of the nonslip velocity condition, and then melting problems in a square cavity with different Prandtl numbers and Rayleigh numbers are simulated, which demonstrate that the present method can handle solid-liquid phase change problems with high efficiency and accuracy.

  3. Procedure for Adapting Direct Simulation Monte Carlo Meshes

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.

    1992-01-01

    A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.

  4. A Domain-Decomposed Multi-Level Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.; Nixon, David (Technical Monitor)

    1998-01-01

    This work presents a new on-the-fly domain decomposition technique for mapping grids and solution algorithms to parallel machines; it is applicable to both shared-memory and message-passing architectures. It will be demonstrated on the Cray T3E, HP Exemplar, and SGI Origin 2000; computing time has been secured on all these platforms. The decomposition technique is an outgrowth of techniques used in computational physics for simulations of N-body problems and the event horizons of black holes, and has not previously been used by the CFD community. Since the technique offers on-the-fly partitioning, it offers a substantial increase in flexibility for computing in heterogeneous environments, where the number of available processors may not be known at the time of job submission. In addition, since it is dynamic, it permits the job to be repartitioned without global communication in cases where additional processors become available after the simulation has begun, or in cases where dynamic mesh adaptation changes the mesh size during the course of a simulation. The platform for this partitioning strategy is a completely new Cartesian Euler solver targeted at parallel machines, which may be used in conjunction with Ames' "Cart3D" arbitrary geometry simulation package.

  5. Refinement and evaluation of helicopter real-time self-adaptive active vibration controller algorithms

    NASA Technical Reports Server (NTRS)

    Davis, M. W.

    1984-01-01

    A Real-Time Self-Adaptive (RTSA) active vibration controller was used as the framework in developing a computer program for a generic controller that can be used to alleviate helicopter vibration. Based upon on-line identification of system parameters, the generic controller minimizes vibration in the fuselage by closed-loop implementation of higher harmonic control in the main rotor system. The new generic controller incorporates a set of improved algorithms that gives the capability to readily define many different configurations by selecting one of three different controller types (deterministic, cautious, and dual), one of two linear system models (local and global), and one or more of several methods of applying limits on control inputs (external and/or internal limits on higher harmonic pitch amplitude and rate). A helicopter rotor simulation analysis was used to evaluate the algorithms associated with the alternative controller types as applied to the four-bladed H-34 rotor mounted on the NASA Ames Rotor Test Apparatus (RTA) which represents the fuselage. After proper tuning, all three controllers provide more effective vibration reduction and converge more quickly and smoothly with smaller control inputs than the initial RTSA controller (deterministic with external pitch-rate limiting). It is demonstrated that internal limiting of the control inputs significantly improves the overall performance of the deterministic controller.

  6. A "Rearrangement Procedure" for Scoring Adaptive Tests with Review Options

    ERIC Educational Resources Information Center

    Papanastasiou, Elena C.; Reckase, Mark D.

    2007-01-01

    Because of the increased popularity of computerized adaptive testing (CAT), many admissions tests, as well as certification and licensure examinations, have been transformed from their paper-and-pencil versions to computerized adaptive versions. A major difference between paper-and-pencil tests and CAT from an examinee's point of view is that in…

  7. Mesh quality control for multiply-refined tetrahedral grids

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Strawn, Roger

    1994-01-01

    A new algorithm for controlling the quality of multiply-refined tetrahedral meshes is presented in this paper. The basic dynamic mesh adaption procedure allows localized grid refinement and coarsening to efficiently capture aerodynamic flow features in computational fluid dynamics problems; however, repeated application of the procedure may significantly deteriorate the quality of the mesh. Results presented show the effectiveness of this mesh quality algorithm and its potential in the area of helicopter aerodynamics and acoustics.

  8. An Immersed Boundary - Adaptive Mesh Refinement solver (IB-AMR) for high fidelity fully resolved wind turbine simulations

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2015-11-01

    The geometrical details of wind turbines determine the structure of the turbulence in the near and far wake and should be taken into account when performing high-fidelity calculations. Multi-resolution simulations coupled with an immersed boundary method constitute a powerful framework for high-fidelity calculations past wind farms located over complex terrain. We develop a 3D Immersed-Boundary Adaptive Mesh Refinement flow solver (IB-AMR) which enables turbine-resolving LES of wind turbines. The idea of using a hybrid staggered/non-staggered grid layout adopted in the Curvilinear Immersed Boundary Method (CURVIB) has been successfully incorporated on unstructured meshes, and the fractional step method has been employed. The overall performance and robustness of the second-order accurate, parallel, unstructured solver is evaluated by comparing the numerical simulations against conforming grid calculations and experimental measurements of laminar and turbulent flows over complex geometries. We also present turbine-resolving multi-scale LES considering all the details affecting the induced flow field, including the geometry of the tower, the nacelle, and especially the rotor blades of a wind tunnel scale turbine. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the Sandia National Laboratories.

  9. Temperature structure of the intracluster medium from smoothed-particle hydrodynamics and adaptive-mesh refinement simulations

    SciTech Connect

    Rasia, Elena; Lau, Erwin T.; Nagai, Daisuke; Avestruz, Camille; Borgani, Stefano; Dolag, Klaus; Granato, Gian Luigi; Murante, Giuseppe; Ragone-Figueroa, Cinthia; Mazzotta, Pasquale; Nelson, Kaylea

    2014-08-20

    Analyses of cosmological hydrodynamic simulations of galaxy clusters suggest that X-ray masses can be underestimated by 10%-30%. The largest bias originates from both violation of hydrostatic equilibrium (HE) and an additional temperature bias caused by inhomogeneities in the X-ray-emitting intracluster medium (ICM). To elucidate this large dispersion among theoretical predictions, we evaluate the degree of temperature structures in cluster sets simulated either with smoothed-particle hydrodynamics (SPH) or adaptive-mesh refinement (AMR) codes. We find that the SPH simulations produce larger temperature variations connected to the persistence of both substructures and their stripped cold gas. This difference is more evident in nonradiative simulations, whereas it is reduced in the presence of radiative cooling. We also find that the temperature variation in radiative cluster simulations is generally in agreement with that observed in the central regions of clusters. Around R_500 the temperature inhomogeneities of the SPH simulations can generate twice the typical HE mass bias of the AMR sample. We emphasize that a detailed understanding of the physical processes responsible for the complex thermal structure in the ICM requires improved resolution and high-sensitivity observations in order to extend the analysis to higher temperature systems and larger cluster-centric radii.

  10. Parallelization of GeoClaw code for modeling geophysical flows with adaptive mesh refinement on many-core systems

    USGS Publications Warehouse

    Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.

    2011-01-01

    We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating tsunami waves at near-shore from Tohoku 2011, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing - we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capability of running GeoClaw efficiently on many-core systems. We also show a novel simulation of the Tohoku 2011 tsunami waves inundating the Sendai airport and Fukushima Nuclear Power Plants, over which the finest grid distance of 20 meters is achieved through a 4-level AMR. This simulation yields quite good predictions of the wave heights and travel times of the tsunami waves. © 2011 IEEE.

  11. A Block-Structured Adaptive Mesh Refinement Technique with a Finite-Difference-Based Lattice Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Fakhari, Abbas; Lee, Taehun

    2013-11-01

    A novel adaptive mesh refinement (AMR) algorithm for the numerical solution of fluid flow problems is presented in this study. The proposed AMR algorithm can be used to solve partial differential equations including, but not limited to, the Navier-Stokes equations. Here, the lattice Boltzmann method (LBM) is employed as a substitute for the nearly incompressible Navier-Stokes equations. The proposed AMR algorithm is simple, straightforward, and yet efficient. The idea is to remove the need for a tree-type data structure by using the pointer attributes in a unique way, along with an appropriate adjustment of the child blocks' IDs, to determine the neighbors of a certain block. Thanks to this unique way of invoking pointers, there is no need to construct a quad-tree (in 2D) or oct-tree (in 3D) data structure for maintaining the connectivity data between different blocks. As a result, the memory and time required for tree traversal are completely eliminated, leaving a clean and efficient algorithm that is easier to implement and use on parallel machines. Several benchmark studies are carried out to assess the accuracy and efficiency of the proposed AMR-LBM, including lid-driven cavity flow, vortex shedding past a square cylinder, and Kelvin-Helmholtz instability for single-phase and multiphase fluids.
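
    A minimal sketch of the tree-free bookkeeping idea, using a Python dict keyed by (level, i, j) block indices in place of the paper's pointer attributes and child-ID adjustment (the names and layout are ours, not the paper's): parents, children, and neighbors all follow from integer arithmetic on the indices, so no tree traversal is needed.

    blocks = {}                          # (level, i, j) -> block data

    def add_block(level, i, j, data=None):
        blocks[(level, i, j)] = data

    def children(level, i, j):
        """Child keys follow from integer arithmetic on the parent indices."""
        return [(level + 1, 2 * i + di, 2 * j + dj)
                for di in (0, 1) for dj in (0, 1)]

    def neighbor(level, i, j, di, dj):
        """Same-level neighbor if present, else the coarser covering block."""
        key = (level, i + di, j + dj)
        if key in blocks:
            return key
        coarse = (level - 1, (i + di) // 2, (j + dj) // 2)
        return coarse if coarse in blocks else None

    add_block(0, 0, 0)                   # coarse block
    add_block(0, 1, 0)                   # coarse block that gets refined
    for key in children(0, 1, 0):
        add_block(*key)
    print(neighbor(1, 2, 0, -1, 0))      # -> (0, 0, 0), found without a tree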

  12. A Procedure for Empirical Initialization of Adaptive Testing Algorithms.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    In constrained adaptive testing, the numbers of constraints needed to control the content of the tests can easily run into the hundreds. Proper initialization of the algorithm becomes a requirement because the presence of large numbers of constraints slows down the convergence of the ability estimator. In this paper, an empirical initialization of…

  13. ADAPT: A Developmental, Asemantic, and Procedural Model for Transcoding From Verbal to Arabic Numerals

    ERIC Educational Resources Information Center

    Barrouillet, Pierre; Camos, Valerie; Perruchet, Pierre; Seron, Xavier

    2004-01-01

    This article presents a new model of transcoding numbers from verbal to arabic form. This model, called ADAPT, is developmental, asemantic, and procedural. The authors' main proposal is that the transcoding process shifts from an algorithmic strategy to the direct retrieval from memory of digital forms. Thus, the model is evolutive, adaptive, and…

  14. Adaptive learning in a compartmental model of visual cortex—how feedback enables stable category learning and refinement

    PubMed Central

    Layher, Georg; Schrodt, Fabian; Butz, Martin V.; Neumann, Heiko

    2014-01-01

    The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, both of which are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in computational neuroscience. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of additional (sub-) category representations. We demonstrate the temporal evolution of such learning and show how the proposed combination of an associative memory with a modulatory feedback integration successfully establishes category and subcategory representations

  15. Adaptive mesh refinement for singular structures in incompressible MHD and compressible Hall-MHD with electron and ion inertia

    NASA Astrophysics Data System (ADS)

    Grauer, R.; Germaschewski, K.

    The goal of this presentation is threefold. First, the role of singular structures like shocks, vortex tubes, and current sheets for understanding intermittency in small-scale turbulence is demonstrated. Secondly, in order to investigate the time evolution of singular structures, effective numerical techniques have to be applied, like block-structured adaptive mesh refinement combined with recent advances in treating hyperbolic equations. And thirdly, the developed numerical techniques can be perfectly applied to the question of fast reconnection, demonstrated by the example of compressible Hall-MHD including electron and ion inertia. 1 Why is it worth studying singular structures? The motivation for studying singular structures has several sources. In turbulent fluid and plasma flows, the formation of nearly singular structures like shocks, vortex tubes, or current sheets provides an effective mechanism to transport energy from large to small scales. In the last years it has become clear that the nature of the singular structures is a key feature of small-scale intermittency. In a phenomenological way this is established in She-Leveque-like models (She and Leveque, 1994; Grauer, Krug and Marliani, 1994; Politano and Pouquet, 1995; Müller and Biskamp, 2000), which are able to describe some of the scaling properties of high-order structure functions. An additional source which highlights the importance of singular structures originates from studies of a toy model of turbulence, the so-called Burgers turbulence. The very left tail of the probability distribution of velocity increments can be calculated using the instanton approach (Balkovsky, Falkovich, Kolokolov and Lebedev, 1997). Here it is interesting to note that the main contribution in the relevant path integral stems from the singular structures, which are shocks in the Burgers turbulence. From a mathematical point of view the question whether

  16. Introducing an on-line adaptive procedure for prostate image guided intensity modulate proton therapy.

    PubMed

    Zhang, M; Westerly, D C; Mackie, T R

    2011-08-07

    With on-line image guidance (IG), prostate shifts relative to the bony anatomy can be corrected by realigning the patient with respect to the treatment fields. In image guided intensity modulated proton therapy (IG-IMPT), because the proton range is more sensitive to the material it travels through, the realignment may introduce large dose variations. This effect is studied in this work, and an on-line adaptive procedure is proposed to restore the planned dose to the target. A 2D anthropomorphic phantom was constructed from a real prostate patient's CT image. Two-field laterally opposing spot 3D-modulation and 24-field full-arc distal edge tracking (DET) plans were generated with a prescription of 70 Gy to the planning target volume. For the simulated delivery, we considered two types of procedures: the non-adaptive procedure and the on-line adaptive procedure. In the non-adaptive procedure, only patient realignment to match the prostate location in the planning CT was performed. In the on-line adaptive procedure, on top of the patient realignment, the kinetic energy for each individual proton pencil beam was re-determined from the on-line CT image acquired after the realignment and subsequently used for delivery. Dose distributions were re-calculated for individual fractions for the different plans and delivery procedures. The results show that, without adaptation, both the 3D-modulation and the DET plans experienced delivered-dose degradation, with large cold or hot spots in the prostate. The DET plan had worse dose degradation than the 3D-modulation plan. The adaptive procedure effectively restored the planned dose distribution in the DET plan, with delivered prostate D(98%), D(50%), and D(2%) values less than 1% from the prescription. In the 3D-modulation plan, in certain cases the adaptive procedure was not effective in reducing the delivered dose degradation and yielded results similar to the non-adaptive procedure. In conclusion, based on this 2D phantom

  17. Comparison of Disinfection Procedures on the Catheter Adapter-Transfer Set Junction.

    PubMed

    Firanek, Catherine; Szpara, Edward; Polanco, Patricia; Davis, Ira; Sloand, James

    2016-01-01

    Peritonitis is a significant complication of peritoneal dialysis (PD), contributing to mortality and technique failure. Suboptimal disinfection and/or a loose connection at the catheter adapter-transfer set junction are forms of touch contamination that can compromise the integrity of the sterile fluid path and lead to peritonitis. Proper use of the right disinfectants for connections at the PD catheter adapter-transfer set interface can help eliminate bacteria at surface interfaces, secure connections, and prevent bacteria from entering into the sterile fluid pathway. Three studies were conducted to assess the antibacterial effects of various disinfecting agents and procedures, and ensuing security of the catheter adapter-transfer set junction. An open-soak disinfection procedure with 10% povidone iodine improves disinfection and tightness/security of catheter adapter-transfer set connection.

  18. Multilevel adaptive solution procedure for material nonlinear problems in visual programming environment

    SciTech Connect

    Kim, D.; Ghanem, R.

    1994-12-31

    A multigrid solution technique to solve a material nonlinear problem with the finite element method in a visual programming environment is discussed. The nonlinear equation of equilibrium is linearized to incremental form using the Newton-Raphson technique, then the multigrid solution technique is used to solve the linear equations at each Newton-Raphson step. In the process, adaptive mesh refinement, which is based on the bisection of a pair of triangles, is used to form the grid hierarchy for multigrid iteration. The solution process is implemented in a visual programming environment with distributed computing capability, which enables more intuitive understanding of the solution process and more effective use of resources.
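
    A minimal sketch of the nested iteration, assuming a toy 1D nonlinear problem -u'' + u^3 = f and a two-grid V-cycle in place of the paper's triangle-bisection hierarchy (all names and parameter choices here are illustrative): an outer Newton-Raphson loop linearizes, and inner multigrid cycles solve each tangent system.

    import numpy as np

    def interp(nc):
        """Linear prolongation from nc coarse to 2*nc + 1 fine interior points."""
        P = np.zeros((2 * nc + 1, nc))
        for k in range(nc):
            P[2 * k, k] += 0.5          # fine point between coarse k-1 and k
            P[2 * k + 1, k] = 1.0       # fine point coinciding with coarse k
            P[2 * k + 2, k] += 0.5
        return P

    def v_cycle(J, r, P, n_smooth=3, omega=0.8):
        """One two-grid V-cycle for J d = r with a weighted-Jacobi smoother."""
        d = np.zeros_like(r)
        Dinv = 1.0 / np.diag(J)
        for _ in range(n_smooth):                         # pre-smoothing
            d += omega * Dinv * (r - J @ d)
        R = 0.5 * P.T                                     # full-weighting restriction
        dc = np.linalg.solve(R @ J @ P, R @ (r - J @ d))  # coarse-grid correction
        d += P @ dc
        for _ in range(n_smooth):                         # post-smoothing
            d += omega * Dinv * (r - J @ d)
        return d

    # Toy nonlinear problem: -u'' + u^3 = f on (0, 1), zero Dirichlet BCs.
    nc = 31
    n = 2 * nc + 1
    h = 1.0 / (n + 1)
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    f = np.sin(np.pi * np.linspace(h, 1 - h, n))
    P = interp(nc)

    u = np.zeros(n)
    for _ in range(10):                                   # Newton-Raphson loop
        F = A @ u + u**3 - f
        if np.linalg.norm(F) < 1e-10:
            break
        J = A + np.diag(3 * u**2)                         # tangent (Jacobian) matrix
        delta = np.zeros(n)
        for _ in range(10):                               # inner multigrid sweeps
            delta += v_cycle(J, -F - J @ delta, P)
        u += delta
    print(np.linalg.norm(A @ u + u**3 - f))               # final residual norm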

  19. Comparing adaptive procedures for estimating the psychometric function for an auditory gap detection task.

    PubMed

    Shen, Yi

    2013-05-01

    A subject's sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the present study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. These were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing in random responses for the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than did the other two procedures.
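
    For context, the three-parameter psychometric function that such procedures estimate can be written, for a detection task, roughly as follows; the Weibull form and the chance rate are our assumptions, and the paper's exact parameterization may differ.

    import numpy as np

    def psychometric(x, threshold, slope, lapse, guess=0.5):
        """Three-parameter psychometric function for a 2AFC task.

        threshold, slope, and lapse correspond to the three parameters
        estimated by the adaptive procedures; `guess` is the chance rate.
        """
        F = 1.0 - np.exp(-(x / threshold) ** slope)   # underlying Weibull
        return guess + (1.0 - guess - lapse) * F

    # Probability of detecting a 6-ms gap given a 5-ms threshold, slope 2,
    # and a 3% lapse rate (all numbers illustrative).
    print(psychometric(6.0, threshold=5.0, slope=2.0, lapse=0.03))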

  20. An Investigation of Procedures for Computerized Adaptive Testing Using Partial Credit Scoring.

    ERIC Educational Resources Information Center

    Koch, William R.; Dodd, Barbara G.

    1989-01-01

    Various aspects of the computerized adaptive testing (CAT) procedure for partial credit scoring were manipulated, focusing on the effects of the manipulations on operational characteristics of the CAT. The effects of item-pool size, item-pool information, and stepsizes used along the trait continuum were assessed. (TJH)

  1. A procedure for refining a coiled coil protein structure using x-ray fiber diffraction and modeling.

    PubMed Central

    Briki, Fatma; Doucet, Jean; Etchebest, Catherine

    2002-01-01

    We describe a combined use of experimental and simulation techniques to configure side chains in a coiled coil structure. As already demonstrated in a previous work, x-ray diffraction patterns from hard alpha-keratin fibers in the 5.15 Å meridian zone reflect the global configuration of the chi(1) dihedral angle of the coiled coil side chains. Molecular simulations, such as energy minimization and molecular dynamics, and rotameric representation in the PDB are used here on a heterodimeric coiled coil to investigate the dihedral angle distribution along the sequence. Different procedures have been used to build the structure; the quality assessment was based on the agreement between the simulated diffraction patterns and the experimental ones in the fingerprint region of coiled coils (5.15 Å). The best procedure for building a realistic coiled coil structure consists of placing the side chains using molecular dynamics (MD) simulations, followed by side chain positioning using the SMD or SCWRL procedures. The side chains and the backbone are equilibrated during the MD until they reach an equilibrium state for the t/g(+) ratio. Positioning the side chains on the resulting backbone, using the above procedures, gives rise to a well-defined 5.15 Å meridian reflection. PMID:12324400

  2. Refined numerical solution of the transonic flow past a wedge

    NASA Technical Reports Server (NTRS)

    Liang, S.-M.; Fung, K.-Y.

    1985-01-01

    A numerical procedure combining the ideas of solving a modified difference equation and of adaptive mesh refinement is introduced. The numerical solution on a fixed grid is improved by using better approximations of the truncation error computed from local subdomain grid refinements. This technique is used to obtain refined solutions of steady, inviscid, transonic flow past a wedge. The effects of truncation error on the pressure distribution, wave drag, sonic line, and shock position are investigated. By comparing the pressure drag on the wedge and wave drag due to the shocks, a supersonic-to-supersonic shock originating from the wedge shoulder is confirmed.

  3. Three dimensional hydrodynamic calculations with adaptive mesh refinement of the evolution of Rayleigh Taylor and Richtmyer Meshkov instabilities in converging geometry: Multi-mode perturbations

    SciTech Connect

    Klein, R.I. |; Bell, J.; Pember, R.; Kelleher, T.

    1993-04-01

    The authors present results for high-resolution hydrodynamic calculations of the growth and development of instabilities in shock-driven imploding spherical geometries in both 2D and 3D. They solve the Eulerian equations of hydrodynamics with a high-order Godunov approach using local adaptive mesh refinement to study the temporal and spatial development of the turbulent mixing layer resulting from both Richtmyer-Meshkov and Rayleigh-Taylor instabilities. The use of a high-resolution Eulerian discretization with adaptive mesh refinement permits them to study the detailed three-dimensional growth of multi-mode perturbations far into the non-linear regime for converging geometries. They discuss convergence properties of the simulations by calculating global properties of the flow. They discuss the time evolution of the turbulent mixing layer and compare its development to a simple theory for a turbulent mix model in spherical geometry based on Plesset's equation. Their 3D calculations show that the constant found in the planar incompressible experiments of Read and Youngs may not be universal for converging compressible flow. They show the 3D time trace of transitional onset to a mixing state using the temporal evolution of volume-rendered imaging. Their preliminary results suggest that the turbulent mixing layer loses memory of its initial perturbations for classical Richtmyer-Meshkov and Rayleigh-Taylor instabilities in spherically imploding shells. They discuss the time evolution of mixed volume fraction and the role of vorticity in converging 3D flows in enhancing the growth of a turbulent mixing layer.

  4. Auto-adaptive statistical procedure for tracking structural health monitoring data

    NASA Astrophysics Data System (ADS)

    Smith, R. Lowell; Jannarone, Robert J.

    2004-07-01

    Whatever specific methods come to be preferred in the field of structural health/integrity monitoring, the associated raw data will eventually have to provide inputs for appropriate damage accumulation models and decision-making protocols. The status of hardware under investigation will eventually be inferred from the evolution in time of the characteristics of this kind of functional figure of merit. Irrespective of the specific character of the raw and processed data, it is desirable to develop simple, practical procedures to support damage accumulation modeling, status discrimination, and operational decision making in real time. This paper addresses these concerns and presents an auto-adaptive procedure developed to process data output from an array of many dozens of correlated sensors, representing a full complement of information channels associated with typical structural health monitoring applications. The algorithm learns, in statistical terms, the normal behavior patterns of the system and, against that backdrop, is configured to recognize and flag departures from expected behavior. This is accomplished using standard statistical methods, with certain proprietary enhancements employed to address issues of ill conditioning that may arise. Examples have been selected to illustrate how the procedure performs in practice; these are drawn from the fields of nondestructive testing, infrastructure management, and underwater acoustics. The demonstrations presented include the evaluation of historical electric power utilization data for a major facility and a quantitative assessment of the performance benefits of net-centric, auto-adaptive computational procedures as a function of scale.
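
    A minimal sketch of the general, non-proprietary idea: learn the normal multivariate behavior of a correlated sensor array and flag departures via a Mahalanobis distance. The threshold, ridge term, and array size below are illustrative assumptions, not the paper's choices.

    import numpy as np

    rng = np.random.default_rng(1)
    # Stand-in for sensor readings recorded under normal operation.
    normal = rng.multivariate_normal(np.zeros(12), np.eye(12) * 0.5, size=2000)

    mu = normal.mean(axis=0)
    cov = np.cov(normal, rowvar=False)
    # A small ridge term is one simple guard against the ill conditioning
    # the abstract mentions for highly correlated channels.
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(12))

    def flag(sample, threshold=30.0):
        """Flag a sample whose squared Mahalanobis distance is anomalous."""
        d = sample - mu
        return d @ cov_inv @ d > threshold

    print(flag(np.zeros(12)), flag(np.full(12, 3.0)))   # -> False True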

  5. An adaptive weighted ensemble procedure for efficient computation of free energies and first passage rates

    PubMed Central

    Bhatt, Divesh; Bahar, Ivet

    2012-01-01

    We introduce an adaptive weighted-ensemble procedure (aWEP) for efficient and accurate evaluation of first-passage rates between states for two-state systems. The basic idea that distinguishes aWEP from conventional weighted-ensemble (WE) methodology is the division of the configuration space into smaller regions and equilibration of the trajectories within each region upon adaptive partitioning of the regions themselves into small grids. The equilibrated conditional/transition probabilities between each pair of regions lead to the determination of populations of the regions and the first-passage times between regions, which in turn are combined to evaluate the first passage times for the forward and backward transitions between the two states. The application of the procedure to a non-trivial coarse-grained model of a 70-residue calcium binding domain of calmodulin is shown to efficiently yield information on the equilibrium probabilities of the two states as well as their first passage times. Notably, the new procedure is significantly more efficient than the canonical implementation of the WE procedure, and this improvement becomes even more significant at low temperatures. PMID:22979844
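
    A minimal sketch of the post-processing step, assuming the region-to-region transition probabilities have already been equilibrated by the WE machinery (the matrix values and lag time below are toy numbers): region populations follow from the stationary distribution, and mean first-passage times from a linear solve over the non-target regions.

    import numpy as np

    # Row-stochastic transition matrix between three regions, per lag time dt.
    T = np.array([[0.90, 0.10, 0.00],
                  [0.05, 0.90, 0.05],
                  [0.00, 0.10, 0.90]])
    dt = 1.0

    # Stationary populations: left eigenvector of T with eigenvalue 1.
    w, v = np.linalg.eig(T.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi /= pi.sum()

    # Mean first-passage time to region 2: solve (I - Q) m = dt * 1 over
    # the non-target regions, where Q is T restricted to those regions.
    Q = T[:2, :2]
    m = np.linalg.solve(np.eye(2) - Q, dt * np.ones(2))
    print(pi, m)    # populations, and first-passage times from regions 0 and 1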

  6. Toxicity Determination of Explosive Contaminated Soil Leachates to Daphnia magna Using an Adapted Toxicity Characteristic Leaching Procedure

    DTIC Science & Technology

    1993-06-01

    An adapted toxicity characteristic leaching procedure was used to determine the toxicity of soils to Daphnia magna. Soil samples were collected from U.S... (vol/vol). Contaminated soils, munition residues, Daphnia magna, EC50, toxicity.

  7. Statistical inference for response adaptive randomization procedures with adjusted optimal allocation proportions.

    PubMed

    Zhu, Hongjian

    2016-12-12

    Seamless phase II/III clinical trials have attracted increasing attention recently. They mainly use Bayesian response adaptive randomization (RAR) designs. There has been little research into seamless clinical trials using frequentist RAR designs because of the difficulty in performing valid statistical inference following this procedure. The well-designed frequentist RAR designs can target theoretically optimal allocation proportions, and they have explicit asymptotic results. In this paper, we study the asymptotic properties of frequentist RAR designs with adjusted target allocation proportions, and investigate statistical inference for this procedure. The properties of the proposed design provide an important theoretical foundation for advanced seamless clinical trials. Our numerical studies demonstrate that the design is ethical and efficient.

  8. An Investigation of the Efficacy of Criterion Refinement Procedures in Mantel-Haenszel DIF Analysis. Research Report. ETS RR-13-16

    ERIC Educational Resources Information Center

    Zwick, Rebecca; Ye, Lei; Isham, Steven

    2013-01-01

    Differential item functioning (DIF) analysis is a key component in the evaluation of the fairness and validity of educational tests. Although it is often assumed that refinement of the matching criterion always provides more accurate DIF results, the actual situation proves to be more complex. To explore the effectiveness of refinement, we…

  9. Tetrahedral and Hexahedral Mesh Adaptation for CFD Problems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Strawn, Roger C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents two unstructured mesh adaptation schemes for problems in computational fluid dynamics. The procedures allow localized grid refinement and coarsening to efficiently capture aerodynamic flow features of interest. The first procedure is for purely tetrahedral grids; unfortunately, repeated anisotropic adaptation may significantly deteriorate the quality of the mesh. Hexahedral elements, on the other hand, can be subdivided anisotropically without mesh quality problems. Furthermore, hexahedral meshes yield more accurate solutions than their tetrahedral counterparts for the same number of edges. Both the tetrahedral and hexahedral mesh adaptation procedures use edge-based data structures that facilitate efficient subdivision by allowing individual edges to be marked for refinement or coarsening. However, for hexahedral adaptation, pyramids, prisms, and tetrahedra are used as buffer elements between refined and unrefined regions to eliminate hanging vertices. Computational results indicate that the hexahedral adaptation procedure is a viable alternative to adaptive tetrahedral schemes.
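
    A minimal sketch of the edge-based marking idea, keying shared edges by their sorted vertex pair so that neighboring elements automatically agree on refinement flags; the data layout is our illustration, not the paper's actual structures. Which subdivision template (1:2, 1:4, 1:8, etc.) applies to an element then follows from which of its edges are marked.

    from itertools import combinations

    edge_flags = {}                     # (v_min, v_max) -> "refine" | "coarsen"

    def mark_edge(v0, v1, flag="refine"):
        edge_flags[tuple(sorted((v0, v1)))] = flag

    def marked_edges(element):
        """Flagged edges of an element given as a tuple of vertex IDs."""
        return [e for e in combinations(sorted(element), 2) if e in edge_flags]

    tet = (3, 7, 11, 20)                # a tetrahedron's four vertex IDs
    mark_edge(7, 3)
    mark_edge(11, 20)
    print(marked_edges(tet))            # -> [(3, 7), (11, 20)]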

  10. Conformal refinement of unstructured quadrilateral meshes

    SciTech Connect

    Garimella, Rao

    2009-01-01

    We present a multilevel adaptive refinement technique for unstructured quadrilateral meshes in which the mesh is kept conformal at all times. This means that the refined mesh, like the original, is formed of only quadrilateral elements that intersect strictly along edges or at vertices, i.e., vertices of one quadrilateral element do not lie in an edge of another quadrilateral. Elements are refined using templates based on 1:3 refinement of edges. We demonstrate that by careful design of the refinement and coarsening strategy, we can maintain high quality elements in the refined mesh. We demonstrate the method on a number of examples with dynamically changing refinement regions.

  11. Modeling the Dust Properties of z ~ 6 Quasars with ART2—All-Wavelength Radiative Transfer with Adaptive Refinement Tree

    NASA Astrophysics Data System (ADS)

    Li, Yuexing; Hopkins, Philip F.; Hernquist, Lars; Finkbeiner, Douglas P.; Cox, Thomas J.; Springel, Volker; Jiang, Linhua; Fan, Xiaohui; Yoshida, Naoki

    2008-05-01

    The detection of large quantities of dust in z ~ 6 quasars by infrared and radio surveys presents puzzles for the formation and evolution of dust in these early systems. Previously, Li et al. showed that luminous quasars at z ≳ 6 can form through hierarchical mergers of gas-rich galaxies, and that these systems are expected to evolve from starburst through quasar phases. Here, we calculate the dust properties of simulated quasars and their progenitors using a three-dimensional Monte Carlo radiative transfer code, ART2 (All-wavelength Radiative Transfer with Adaptive Refinement Tree). ART2 incorporates a radiative equilibrium algorithm which treats dust emission self-consistently, an adaptive grid method which can efficiently cover a large dynamic range in both spatial and density scales, a multiphase model of the interstellar medium which accounts for the observed scaling relations of molecular clouds, and a supernova-origin model for dust which can explain the existence of dust in cosmologically young objects. By applying ART2 to the hydrodynamic simulations of Li et al., we reproduce the observed spectral energy distribution (SED) and inferred dust properties of SDSS J1148+5251, the most distant Sloan quasar. We find that the dust and infrared emission are closely associated with the formation and evolution of the quasar host. The system evolves from a cold to a warm ultraluminous infrared galaxy (ULIRG) owing to heating and feedback from stars and the active galactic nucleus (AGN). Furthermore, the AGN activity has significant implications for the interpretation of observations of the hosts. Our results suggest that vigorous star formation in merging progenitors is necessary to reproduce the observed dust properties of z ~ 6 quasars, supporting a merger-driven origin for luminous quasars at high redshifts and the starburst-to-quasar evolutionary hypothesis.

  12. qPR: An adaptive partial-report procedure based on Bayesian inference

    PubMed Central

    Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin

    2016-01-01

    Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6–8 cue delays or 600–800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations. PMID:27580045
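
    In outline, one trial of a qPR-style procedure maintains a grid posterior over the decay parameters, picks the cue delay that minimizes the expected posterior entropy (equivalently, maximizes expected information gain), and updates by Bayes' rule. The sketch below follows the abstract's description only loosely: the three-parameter form p(t) = a_inf + (a0 - a_inf) exp(-t/tau), the grids, and the simulated observer are all illustrative assumptions.

        import numpy as np

        # parameter grid for the decay function p(t) = ainf + (a0-ainf)*exp(-t/tau)
        a0s   = np.linspace(0.6, 1.0, 9)
        taus  = np.linspace(0.05, 1.0, 20)      # seconds
        ainfs = np.linspace(0.1, 0.5, 9)
        A0, TAU, AINF = np.meshgrid(a0s, taus, ainfs, indexing="ij")
        post = np.ones(A0.shape); post /= post.sum()   # uniform prior

        delays = np.linspace(0.0, 1.0, 21)      # candidate cue delays

        def pc(t):  # probability correct for every parameter combination
            return AINF + (A0 - AINF) * np.exp(-t / TAU)

        def entropy(p):
            p = p[p > 0]
            return -(p * np.log(p)).sum()

        rng = np.random.default_rng(0)
        truth = dict(a0=0.9, tau=0.3, ainf=0.25)   # simulated observer

        for trial in range(100):
            # delay whose outcome is expected to shrink posterior entropy most
            best, best_h = None, np.inf
            for t in delays:
                p = pc(t)
                pbar = (post * p).sum()                 # predictive P(correct)
                post_c = post * p;       post_c /= post_c.sum()
                post_i = post * (1 - p); post_i /= post_i.sum()
                h = pbar * entropy(post_c) + (1 - pbar) * entropy(post_i)
                if h < best_h:
                    best, best_h = t, h
            # simulate a response, then update the posterior by Bayes' rule
            p_true = truth["ainf"] + (truth["a0"] - truth["ainf"]) * np.exp(-best / truth["tau"])
            correct = rng.random() < p_true
            post = post * (pc(best) if correct else (1 - pc(best)))
            post /= post.sum()

        idx = np.unravel_index(post.argmax(), post.shape)
        print("MAP estimate: a0=%.2f tau=%.2f ainf=%.2f" % (A0[idx], TAU[idx], AINF[idx]))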

  13. Procedures for Computing Transonic Flows for Control of Adaptive Wind Tunnels. Ph.D. Thesis - Technische Univ., Berlin, Mar. 1986

    NASA Technical Reports Server (NTRS)

    Rebstock, Rainer

    1987-01-01

    Numerical methods are developed for control of three dimensional adaptive test sections. The physical properties of the design problem occurring in the external field computation are analyzed, and a design procedure suited for solution of the problem is worked out. To do this, the desired wall shape is determined by stepwise modification of an initial contour. The necessary changes in geometry are determined with the aid of a panel procedure, or, with incident flow near the sonic range, with a transonic small perturbation (TSP) procedure. The designed wall shape, together with the wall deflections set during the tunnel run, are the input to a newly derived one-step formula which immediately yields the adapted wall contour. This is particularly important since the classical iterative adaptation scheme is shown to converge poorly for 3D flows. Experimental results obtained in the adaptive test section with eight flexible walls are presented to demonstrate the potential of the procedure. Finally, a method is described to minimize wall interference in 3D flows by adapting only the top and bottom wind tunnel walls.

  14. 3D Adaptive Mesh Refinement Simulations of the Gas Cloud G2 Born within the Disks of Young Stars in the Galactic Center

    NASA Astrophysics Data System (ADS)

    Schartmann, M.; Ballone, A.; Burkert, A.; Gillessen, S.; Genzel, R.; Pfuhl, O.; Eisenhauer, F.; Plewa, P. M.; Ott, T.; George, E. M.; Habibi, M.

    2015-10-01

    The dusty, ionized gas cloud G2 is currently passing the massive black hole in the Galactic Center at a distance of roughly 2400 Schwarzschild radii. We explore the possibility of a starting point of the cloud within the disks of young stars. We make use of the large amount of new observations in order to put constraints on G2's origin. Interpreting the observations as a diffuse cloud of gas, we employ three-dimensional hydrodynamical adaptive mesh refinement (AMR) simulations with the PLUTO code and do a detailed comparison with observational data. The simulations presented in this work update our previously obtained results in multiple ways: (1) high-resolution three-dimensional hydrodynamical AMR simulations are used, (2) the cloud follows the updated orbit based on the Brackett-γ data, and (3) a detailed comparison to the observed high-quality position-velocity (PV) diagrams and the evolution of the total Brackett-γ luminosity is done. We concentrate on two unsolved problems of the diffuse cloud scenario: the unphysical formation epoch only shortly before the first detection, and the too steep Brackett-γ light curve obtained in simulations, whereas the observations indicate a constant Brackett-γ luminosity between 2004 and 2013. For a given atmosphere and cloud mass, we find a consistent model that can explain both the observed Brackett-γ light curve and the PV diagrams of all epochs. Assuming initial pressure equilibrium with the atmosphere, this can be reached for a starting date earlier than roughly 1900, which is close to apo-center and well within the disks of young stars.

  16. An Adaptive Landscape Classification Procedure using Geoinformatics and Artificial Neural Networks

    SciTech Connect

    Coleman, Andre Michael

    2008-06-01

    The Adaptive Landscape Classification Procedure (ALCP), which links the advanced geospatial analysis capabilities of Geographic Information Systems (GISs) and Artificial Neural Networks (ANNs) and particularly Self-Organizing Maps (SOMs), is proposed as a method for establishing and reducing complex data relationships. Its adaptive and evolutionary capability is evaluated for situations where varying types of data can be combined to address different prediction and/or management needs such as hydrologic response, water quality, aquatic habitat, groundwater recharge, land use, instrumentation placement, and forecast scenarios. The research presented here documents and presents favorable results of a procedure that aims to be a powerful and flexible spatial data classifier that fuses the strengths of geoinformatics and the intelligence of SOMs to provide data patterns and spatial information for environmental managers and researchers. This research shows how evaluation and analysis of spatial and/or temporal patterns in the landscape can provide insight into complex ecological, hydrological, climatic, and other natural and anthropogenic-influenced processes. Certainly, environmental management and research within heterogeneous watersheds provide challenges for consistent evaluation and understanding of system functions. For instance, watersheds over a range of scales are likely to exhibit varying levels of diversity in their characteristics of climate, hydrology, physiography, ecology, and anthropogenic influence. Furthermore, it has become evident that understanding and analyzing these diverse systems can be difficult not only because of varying natural characteristics, but also because of the availability, quality, and variability of spatial and temporal data. Developments in geospatial technologies, however, are providing a wide range of relevant data, and in many cases, at a high temporal and spatial resolution. Such data resources can take the form of high
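
    The SOM at the heart of the ALCP can be pictured with a minimal training loop: each input vector pulls its best-matching map unit toward it, and a shrinking Gaussian neighborhood drags nearby units along, so similar landscape attribute vectors end up in nearby cells. The toy data, map size, and learning schedules below are illustrative assumptions, not the procedure's actual configuration.

        import numpy as np

        rng = np.random.default_rng(0)

        # toy "landscape attribute" vectors, e.g. (slope, soil index, rainfall), scaled to [0, 1]
        X = rng.random((500, 3))

        # 6x6 map of weight vectors
        rows, cols, dim = 6, 6, X.shape[1]
        W = rng.random((rows, cols, dim))
        grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

        n_iter = 3000
        for i in range(n_iter):
            x = X[rng.integers(len(X))]
            frac = i / n_iter
            lr = 0.5 * (1 - frac)                  # decaying learning rate
            sigma = max(3.0 * (1 - frac), 0.5)     # shrinking neighborhood radius
            # best-matching unit (BMU) in feature space
            d2 = ((W - x) ** 2).sum(axis=-1)
            bmu = np.unravel_index(d2.argmin(), d2.shape)
            # Gaussian neighborhood on the map lattice
            g2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
            h = np.exp(-g2 / (2 * sigma ** 2))[..., None]
            W += lr * h * (x - W)

        # classify each sample by its BMU cell
        labels = [np.unravel_index(((W - x) ** 2).sum(-1).argmin(), (rows, cols)) for x in X]
        print("first five cluster cells:", labels[:5])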

  17. Adapted Physical Education, Occupational Therapy, and Physical Therapy in the Public School: Procedures and Recommended Guidelines. Procedures Manual.

    ERIC Educational Resources Information Center

    Colorado State Dept. of Education, Denver. Special Education Services Unit.

    This document is intended to provide guidance in the delivery of motor services to Colorado students with impairments in movement, sensory feedback, and sensory motor areas. Presented first is a rationale for providing adapted physical education, occupational therapy, and/or physical therapy services. The next chapter covers definitions,…

  18. Determining thresholds using adaptive procedures and psychometric fits: evaluating efficiency using theory, simulations, and human experiments.

    PubMed

    Karmali, Faisal; Chaudhuri, Shomesh E; Yi, Yongwoo; Merfeld, Daniel M

    2016-03-01

    When measuring thresholds, careful selection of stimulus amplitude can increase efficiency by increasing the precision of psychometric fit parameters (e.g., decreasing the fit parameter error bars). To find efficient adaptive algorithms for psychometric threshold ("sigma") estimation, we combined analytic approaches, Monte Carlo simulations, and human experiments for a one-interval, binary forced-choice, direction-recognition task. To our knowledge, this is the first time analytic results have been combined and compared with either simulation or human results. Human performance was consistent with theory and not significantly different from simulation predictions. Our analytic approach provides a bound on efficiency, which we compared against the efficiency of standard staircase algorithms, a modified staircase algorithm with asymmetric step sizes, and a maximum likelihood estimation (MLE) procedure. Simulation results suggest that optimal efficiency at determining threshold is provided by the MLE procedure targeting a fraction correct level of 0.92, an asymmetric 4-down, 1-up staircase targeting between 0.86 and 0.92 or a standard 6-down, 1-up staircase. Psychometric test efficiency, computed by comparing simulation and analytic results, was between 41 and 58% for 50 trials for these three algorithms, reaching up to 84% for 200 trials. These approaches were 13-21% more efficient than the commonly used 3-down, 1-up symmetric staircase. We also applied recent advances to reduce accuracy errors using a bias-reduced fitting approach. Taken together, the results lend confidence that the assumptions underlying each approach are reasonable and that human threshold forced-choice decision making is modeled well by detection theory models and mimics simulations based on detection theory models.
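
    For concreteness, the staircase algorithms compared in the abstract are straightforward to simulate: the sketch below implements an n-down/1-up rule with optionally asymmetric step sizes against a cumulative-Gaussian observer. The psychometric model and all numbers are illustrative assumptions, not the authors' exact setup (numpy and scipy assumed available).

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(2)
        sigma_true = 1.0   # threshold parameter of the psychometric function

        def p_correct(x):
            # one-interval direction-recognition task: 50% correct at zero amplitude
            return norm.cdf(x / sigma_true)

        def staircase(n_down, step_down, step_up, n_trials=200, x0=4.0):
            """n-down / 1-up staircase; asymmetric steps shift the convergence point."""
            x, run, track = x0, 0, []
            for _ in range(n_trials):
                track.append(x)
                if rng.random() < p_correct(x):      # correct response
                    run += 1
                    if run == n_down:
                        x, run = x - step_down, 0
                else:                                # incorrect response
                    x, run = x + step_up, 0
                x = max(x, 1e-3)
            return np.asarray(track)

        # a standard symmetric 3-down 1-up (targets ~0.794 correct)
        # versus an asymmetric 4-down 1-up with unequal step sizes
        t1 = staircase(3, 0.5, 0.5)
        t2 = staircase(4, 0.4, 0.8)
        print("mean of last 100 levels:", t1[-100:].mean(), t2[-100:].mean())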

  19. Adaptive correction procedure for TVL1 image deblurring under impulse noise

    NASA Astrophysics Data System (ADS)

    Bai, Minru; Zhang, Xiongjun; Shao, Qianqian

    2016-08-01

    For the problem of image restoration of observed images corrupted by blur and impulse noise, the widely used TVL1 model may deviate from both the data-acquisition model and the prior model, especially for high noise levels. In order to seek a solution of high recovery quality beyond the reach of the TVL1 model, we propose an adaptive correction procedure for TVL1 image deblurring under impulse noise. Then, a proximal alternating direction method of multipliers (ADMM) is presented to solve the corrected TVL1 model and its convergence is also established under very mild conditions. It is verified by numerical experiments that our proposed approach outperforms the TVL1 model in terms of signal-to-noise ratio (SNR) values and visual quality, especially for high noise levels: it can handle salt-and-pepper noise as high as 90% and random-valued noise as high as 70%. In addition, a comparison with a state-of-the-art method, the two-phase method, demonstrates the superiority of the proposed approach.
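
    The corrected model itself is not reproduced here, but the ADMM machinery it builds on can be sketched on a vanilla 1D TVL1 problem (L1 data fidelity plus total variation), where both subproblems reduce to soft-thresholding. The splitting, penalty parameters, and test signal are illustrative assumptions, not the paper's formulation.

        import numpy as np

        def shrink(v, kappa):
            """soft-thresholding: the proximal operator of kappa*||.||_1"""
            return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

        # toy 1D signal corrupted by salt-and-pepper style impulse noise
        rng = np.random.default_rng(0)
        n = 200
        u_true = np.where(np.arange(n) < n // 2, 1.0, 0.0)   # a step edge
        f = u_true.copy()
        hit = rng.random(n) < 0.3
        f[hit] = rng.random(hit.sum())                        # 30% impulse noise

        D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]              # forward differences
        lam, rho = 1.0, 1.0
        A = D.T @ D + np.eye(n)                               # u-update system matrix

        # minimize lam*||D u||_1 + ||u - f||_1 via ADMM on z = Du, w = u - f
        u = f.copy()
        z = D @ u; w = u - f
        p = np.zeros_like(z); q = np.zeros_like(u)
        for _ in range(300):
            u = np.linalg.solve(A, D.T @ (z - p) + f + w - q)
            z = shrink(D @ u + p, lam / rho)
            w = shrink(u - f + q, 1.0 / rho)
            p += D @ u - z          # scaled dual updates
            q += u - f - w

        print("mean restoration error:", np.abs(u - u_true).mean())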

  20. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…

  1. Hirshfeld atom refinement.

    PubMed

    Capelli, Silvia C; Bürgi, Hans-Beat; Dittrich, Birger; Grabowsky, Simon; Jayatilaka, Dylan

    2014-09-01

    Hirshfeld atom refinement (HAR) is a method which determines structural parameters from single-crystal X-ray diffraction data by using an aspherical atom partitioning of tailor-made ab initio quantum mechanical molecular electron densities without any further approximation. Here the original HAR method is extended by implementing an iterative procedure of successive cycles of electron density calculations, Hirshfeld atom scattering factor calculations and structural least-squares refinements, repeated until convergence. The importance of this iterative procedure is illustrated via the example of crystalline ammonia. The new HAR method is then applied to X-ray diffraction data of the dipeptide Gly-l-Ala measured at 12, 50, 100, 150, 220 and 295 K, using Hartree-Fock and BLYP density functional theory electron densities and three different basis sets. All positions and anisotropic displacement parameters (ADPs) are freely refined without constraints or restraints - even those for hydrogen atoms. The results are systematically compared with those from neutron diffraction experiments at the temperatures 12, 50, 150 and 295 K. Although non-hydrogen-atom ADPs differ by up to three combined standard uncertainties (csu's), all other structural parameters agree within less than 2 csu's. Using our best calculations (BLYP/cc-pVTZ, recommended for organic molecules), the accuracy of determining bond lengths involving hydrogen atoms from HAR is better than 0.009 Å for temperatures of 150 K or below; for hydrogen-atom ADPs it is better than 0.006 Å² as judged from the mean absolute X-ray minus neutron differences. These results are among the best ever obtained. Remarkably, the precision of determining bond lengths and ADPs for the hydrogen atoms from the HAR procedure is comparable with that from the neutron measurements - an outcome which is obtained with a routinely achievable resolution of the X-ray data of 0.65 Å.

  2. A Review of ETS Differential Item Functioning Assessment Procedures: Flagging Rules, Minimum Sample Size Requirements, and Criterion Refinement. Research Report. ETS RR-12-08

    ERIC Educational Resources Information Center

    Zwick, Rebecca

    2012-01-01

    Differential item functioning (DIF) analysis is a key component in the evaluation of the fairness and validity of educational tests. The goal of this project was to review the status of ETS DIF analysis procedures, focusing on three aspects: (a) the nature and stringency of the statistical rules used to flag items, (b) the minimum sample size…

  3. Alpha-Stratified Multistage Computerized Adaptive Testing with beta Blocking.

    ERIC Educational Resources Information Center

    Chang, Hua-Hua; Qian, Jiahe; Yang, Zhiliang

    2001-01-01

    Proposed a refinement, based on the stratification of items developed by D. Weiss (1973), of the computerized adaptive testing item selection procedure of H. Chang and Z. Ying (1999). Simulation studies using an item bank from the Graduate Record Examination show the benefits of the new procedure. (SLD)
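
    A minimal sketch of alpha-stratified item selection: the bank is split into strata by discrimination a, low-a strata are used early in the test, and within the active stratum the item whose difficulty b best matches the current ability estimate is chosen. The 2PL bank, the EAP grid update, and all settings below are simulated illustrations, not the authors' configuration.

        import numpy as np

        rng = np.random.default_rng(4)

        # synthetic 2PL item bank: discrimination a, difficulty b
        n_items = 400
        a = rng.lognormal(0.0, 0.3, n_items)
        b = rng.normal(0.0, 1.0, n_items)

        # alpha-stratification: administer low-a items early, save high-a for late
        strata = np.array_split(np.argsort(a), 4)

        p2pl = lambda th, i: 1.0 / (1.0 + np.exp(-a[i] * (th - b[i])))

        grid = np.linspace(-4, 4, 161)        # grid for EAP ability updates
        post = np.exp(-0.5 * grid**2)         # N(0,1) prior
        theta_true, theta_hat, used = 1.0, 0.0, set()

        for k in range(40):                   # 40-item test, 10 items per stratum
            stratum = [i for i in strata[k // 10] if i not in used]
            item = min(stratum, key=lambda i: abs(b[i] - theta_hat))  # b-matching
            used.add(item)
            resp = rng.random() < p2pl(theta_true, item)   # simulated examinee
            like = p2pl(grid, item) if resp else 1.0 - p2pl(grid, item)
            post = post * like
            theta_hat = (grid * post).sum() / post.sum()   # EAP update

        print("final ability estimate:", round(theta_hat, 2))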

  4. Operational Characteristics of Adaptive Testing Procedures Using the Graded Response Model.

    ERIC Educational Resources Information Center

    Dodd, Barbara G.; And Others

    1989-01-01

    General guidelines are developed to assist practitioners in devising operational computerized adaptive testing systems based on the graded response model. The effects of the following major variables were examined: item pool size; stepsize used along the trait continuum until maximum likelihood estimation could be calculated; and stopping rule…

  5. Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure

    USGS Publications Warehouse

    Salehi, M.; Smith, D.R.

    2005-01-01

    Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such a population, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took the value either 0 or 1 (e.g., indicating presence and absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. © 2005 American Statistical Association and the International Biometric Society.

  6. Model Refinement Using Eigensystem Assignment

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.

    2000-01-01

    A novel approach for the refinement of finite-element-based analytical models of flexible structures is presented. The proposed approach models the possible refinements in the mass, damping, and stiffness matrices of the finite element model in the form of a constant gain feedback with acceleration, velocity, and displacement measurements, respectively. Once the free elements of the structural matrices have been defined, the problem of model refinement reduces to obtaining position, velocity, and acceleration gain matrices with appropriate sparsity that reassign a desired subset of the eigenvalues of the model, along with partial mode shapes, from their baseline values to those obtained from system identification test data. A sequential procedure is used to assign one conjugate pair of eigenvalues at each step using symmetric output feedback gain matrices, and the eigenvectors are partially assigned, while ensuring that the eigenvalues assigned in the previous steps are not disturbed. The procedure can also impose that gain matrices be dissipative to guarantee the stability of the refined model. A numerical example, involving finite element model refinement for a structural testbed at NASA Langley Research Center (Controls-Structures-Interaction Evolutionary model), is presented to demonstrate the feasibility of the proposed approach.

  7. Bayesian Procedures for Identifying Aberrant Response-Time Patterns in Adaptive Testing

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Guo, Fanmin

    2008-01-01

    In order to identify aberrant response-time patterns on educational and psychological tests, it is important to be able to separate the speed at which the test taker operates from the time the items require. A lognormal model for response times with this feature was used to derive a Bayesian procedure for detecting aberrant response times.…

  8. Orthogonal Metal Cutting Simulation Using Advanced Constitutive Equations with Damage and Fully Adaptive Numerical Procedure

    NASA Astrophysics Data System (ADS)

    Saanouni, Khemais; Labergère, Carl; Issa, Mazen; Rassineux, Alain

    2010-06-01

    This work proposes a complete adaptive numerical methodology which uses 'advanced' elastoplastic constitutive equations coupling thermal effects, large elasto-viscoplasticity with mixed nonlinear hardening, ductile damage and contact with friction, for 2D machining simulation. Fully coupled (strongly coupled) thermo-elasto-visco-plastic-damage constitutive equations based on the state variables under large plastic deformation, developed for metal forming simulation, are presented. The relevant numerical aspects concerning the local integration scheme as well as the global resolution strategy and the adaptive remeshing facility are briefly discussed. Applications are made to orthogonal metal cutting by chip formation and segmentation under high velocity. The interactions between hardening, plasticity, ductile damage and thermal effects, and their consequences for adiabatic shear band formation including the formation of cracks, are investigated.

  9. EEG-Based BCI System Using Adaptive Features Extraction and Classification Procedures

    PubMed Central

    Mangia, Anna Lisa; Cappello, Angelo

    2016-01-01

    Motor imagery is a common control strategy in EEG-based brain-computer interfaces (BCIs). However, voluntary control of sensorimotor (SMR) rhythms by imagining a movement requires skill, can be unintuitive, and usually demands a varying amount of user training. To boost the training process, a whole class of BCI systems has been proposed that provides feedback as early as possible while continuously adapting the underlying classifier model. The present work describes a cue-paced, EEG-based BCI system using motor imagery that falls within this category. Specifically, our adaptive strategy includes a simple scheme based on a common spatial pattern (CSP) method and support vector machine (SVM) classification. The system's efficacy was proved by online testing on 10 healthy participants. In addition, we suggest some features we implemented to improve the system's "flexibility" and "customizability," namely, (i) a flexible training session, (ii) an unbalancing in the training conditions, and (iii) the use of adaptive thresholds when giving feedback. PMID:27635129
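
    The CSP-plus-SVM pipeline mentioned above can be sketched compactly: spatial filters come from a generalized eigendecomposition of the two class covariance matrices, log-variance of the filtered signals gives the features, and a linear SVM classifies. The synthetic "EEG" and all settings are illustrative assumptions (numpy, scipy, and scikit-learn assumed available).

        import numpy as np
        from scipy.linalg import eigh
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)

        def csp_filters(trials_a, trials_b, n_pairs=2):
            """Common spatial patterns via the generalized eigenproblem
            Ca v = w (Ca + Cb) v; trials are (n_channels, n_samples) arrays."""
            cov = lambda T: np.mean([X @ X.T / np.trace(X @ X.T) for X in T], axis=0)
            Ca, Cb = cov(trials_a), cov(trials_b)
            vals, V = eigh(Ca, Ca + Cb)
            order = np.argsort(vals)
            keep = np.r_[order[:n_pairs], order[-n_pairs:]]   # both extremes
            return V[:, keep].T

        def features(trials, W):
            """log-variance of CSP-filtered signals, the classic SMR feature"""
            return np.array([np.log(np.var(W @ X, axis=1)) for X in trials])

        # synthetic two-class data standing in for band-passed EEG epochs
        def make(n, scale):
            return [rng.normal(size=(8, 256)) * scale[:, None] for _ in range(n)]
        sa = np.array([2.0, 1, 1, 1, 1, 1, 1, 0.5]); sb = sa[::-1]
        train_a, train_b = make(40, sa), make(40, sb)

        W = csp_filters(train_a, train_b)
        Xtr = np.vstack([features(train_a, W), features(train_b, W)])
        ytr = np.r_[np.zeros(40), np.ones(40)]
        clf = SVC(kernel="linear").fit(Xtr, ytr)
        print("training accuracy:", clf.score(Xtr, ytr))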

  10. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  11. Simulation of metal forming processes with a 3D adaptive remeshing procedure

    NASA Astrophysics Data System (ADS)

    Zeramdini, Bessam; Robert, Camille; Germain, Guenael; Pottier, Thomas

    2016-10-01

    In this paper, a fully adaptive 3D numerical methodology based on a tetrahedral element was proposed in order to improve the finite element simulation of any metal forming process. This automatic methodology was implemented in a computational platform which integrates a finite element solver, 3D mesh generation and a field transfer algorithm. The proposed remeshing method was developed in order to solve problems associated with the severe distortion of elements subject to large deformations, to concentrate the elements where the error is large and to coarsen the mesh where the error is small. This leads to a significant reduction in the computation times while maintaining simulation accuracy. In addition, in order to enhance the contact conditions, this method has been coupled with a specific operator to maintain the initial contact between the workpiece nodes and the rigid tool after each remeshing step. In this paper special attention is paid to the data transfer methods and the necessary adaptive remeshing steps are given. Finally, a numerical example is detailed to demonstrate the efficiency of the approach and to compare the results for the different field transfer strategies.

  12. A goal-oriented adaptive procedure for the quasi-continuum method with cluster approximation

    NASA Astrophysics Data System (ADS)

    Memarnahavandi, Arash; Larsson, Fredrik; Runesson, Kenneth

    2015-04-01

    We present a strategy for adaptive error control for the quasi-continuum (QC) method applied to molecular statics problems. The QC method is introduced in two steps: Firstly, introducing QC interpolation while accounting for the exact summation of all the bond energies, we compute goal-oriented error estimators in a straightforward fashion based on the pertinent adjoint (dual) problem. Secondly, for large QC elements the bond energy and its derivatives are typically computed using an appropriate discrete quadrature with cluster approximations, which introduces a model error. The combined error is estimated approximately based on the same dual problem in conjunction with a hierarchical strategy for approximating the residual. As a model problem, we carry out atomistic-to-continuum homogenization of a graphene monolayer, where the carbon-carbon energy bonds are modeled via the Tersoff-Brenner potential, which involves next-nearest neighbor couplings. In particular, we are interested in computing the representative response for an imperfect lattice. Within the goal-oriented framework it becomes natural to choose the macro-scale (continuum) stress as the "quantity of interest". Two different formulations are adopted: the Basic formulation and the Global formulation. The presented numerical investigation shows the accuracy and robustness of the proposed error estimator and the pertinent adaptive algorithm.
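
    For orientation, goal-oriented estimators of this kind rest on the dual-weighted-residual identity; in generic notation (a sketch, not the paper's exact formulation):

        Q(u) - Q(u_h) \;\approx\; R(u_h;\, z - z_h),
        \qquad R(u_h;\, v) := F(v) - a(u_h;\, v),

    where a(u; v) = F(v) for all admissible v is the weak form of the primal problem, z solves the adjoint (dual) problem a'(u_h)(v, z) = Q'(u_h)(v), and z_h is any discrete approximation of z; the adjoint solution weights the residual by the sensitivity of the quantity of interest Q.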

  13. Effect of three resuscitation procedures on respiratory and metabolic adaptation to extra uterine life in newborn calves.

    PubMed

    Uystepruyst, Ch; Coghe, J; Dorts, Th; Harmegnies, N; Delsemme, M-H; Art, T; Lekeux, P

    2002-01-01

    The purpose of this study was to evaluate the effects of three resuscitation procedures on respiratory and metabolic adaptation to extra-uterine life during the first 24 h after birth in healthy newborn calves. Twenty-four newborn calves were randomly grouped into four categories: six calves did not receive any specific resuscitation procedure and were considered as controls (C); six received pharyngeal and nasal suctioning immediately after birth by use of a hand-powered vacuum pump (SUC); six received five litres of cold water poured over their heads immediately after birth (CW) and six were housed in a calf pen with an infrared radiant heater for 24 h after birth (IR). Calves were examined at birth, 5, 15, 30, 45 and 60 min, 2, 3, 6, 12 and 24 h after birth and the following measurements were recorded: physical and clinical examination, arterial blood gas analysis, pulmonary function tests using the oesophageal balloon catheter technique, arterial and venous blood acid-base balance analysis, jugular venous blood sampling for determination of metabolic, haematological and passive immune transfer variables. SUC was accompanied by improved pulmonary function efficiency and by a less pronounced decrease in body temperature. The "head shaking movement" and the subsequent temporary increase in total pulmonary resistance as well as the greater lactic acidosis due to CW were accompanied by more efficient, but statistically non-significant, pulmonary gas exchanges. IR allowed maintenance of higher body temperature without requiring increased catabolism of energetic stores. IR also caused a change in breathing pattern which contributed to better distribution of the ventilation and to slightly improved gas exchange. The results indicate that use of SUC, CW and IR modified respiratory and metabolic adaptation during the first 24 h after birth without side-effects. These resuscitation procedures should be recommended for their specific indication, i.e. cleansing of fetal

  14. Neutron diffraction study of the magnetic structures of PrMn2-xCoxGe2 (x = 0.4, 0.5 and 0.8) with a new refinement procedure

    NASA Astrophysics Data System (ADS)

    Dincer, I.; Elmali, A.; Elerman, Y.; Ehrenberg, H.; Fuess, H.; Isnard, O.

    2004-03-01

    The magnetic structures of PrMn2-xCoxGe2 (x = 0.4, 0.5 and 0.8) with the ThCr2Si2-type structure have been investigated by means of neutron diffraction measurements between 2 and 312 K. We introduced a new refinement procedure to determine the magnetic moments of the Pr and Mn sublattices below the rare-earth ordering temperature T_C(Pr), because the magnetic reflections of the Pr and Mn sublattices overlap. Rietveld refinements demonstrated that above the Curie temperature an intralayer antiferromagnetic ordering within the (001) Mn layers is observed in PrMn1.6Co0.4Ge2 and PrMn1.5Co0.5Ge2, while this intralayer antiferromagnetic ordering is found over the whole temperature range for PrMn1.2Co0.8Ge2. Below the Curie temperature the PrMn1.6Co0.4Ge2 and PrMn1.5Co0.5Ge2 compounds have a canted ferromagnetic structure with canting angles of 62° and 65° at 2 K, respectively. Below 75 and 70 K, a ferromagnetic ordering of the Pr sublattice is observed along the c-axis for these compounds. Below 70 K, a ferromagnetic ordering of the Pr sublattice (not detected by magnetic measurements) is likewise found along the c-axis in PrMn1.2Co0.8Ge2.

  15. Coloured Petri Net Refinement Specification and Correctness Proof with Coq

    NASA Technical Reports Server (NTRS)

    Choppy, Christine; Mayero, Micaela; Petrucci, Laure

    2009-01-01

    In this work, we address the formalisation in Coq of refinement for symmetric nets, a subclass of coloured Petri nets. We first provide a formalisation of the net models and of their type refinement in Coq. The Coq proof assistant is then used to prove the refinement correctness lemma. An example adapted from a protocol illustrates our work.

  16. PHYCAA+: an optimized, adaptive procedure for measuring and controlling physiological noise in BOLD fMRI.

    PubMed

    Churchill, Nathan W; Strother, Stephen C

    2013-11-15

    The presence of physiological noise in functional MRI can greatly limit the sensitivity and accuracy of BOLD signal measurements, and produce significant false positives. There are two main types of physiological confounds: (1) high-variance signal in non-neuronal tissues of the brain including vascular tracts, sinuses and ventricles, and (2) physiological noise components which extend into gray matter tissue. These physiological effects may also be partially coupled with stimuli (and thus the BOLD response). To address these issues, we have developed PHYCAA+, a significantly improved version of the PHYCAA algorithm (Churchill et al., 2011) that (1) down-weights the variance of voxels in probable non-neuronal tissue, and (2) identifies the multivariate physiological noise subspace in gray matter that is linked to non-neuronal tissue. This model estimates physiological noise directly from EPI data, without requiring external measures of heartbeat and respiration, or manual selection of physiological components. The PHYCAA+ model significantly improves the prediction accuracy and reproducibility of single-subject analyses, compared to PHYCAA and a number of commonly-used physiological correction algorithms. Individual subject denoising with PHYCAA+ is independently validated by showing that it consistently increased between-subject activation overlap, and minimized false-positive signal in non gray-matter loci. The results are demonstrated for both block and fast single-event task designs, applied to standard univariate and adaptive multivariate analysis models.

  17. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  18. Robust Refinement as Implemented in TOPAS

    SciTech Connect

    Stone, K.; Stephens, P

    2010-01-01

    A robust refinement procedure is implemented in the program TOPAS through an iterative reweighting of the data. Examples are given of the procedure as applied to fitting partially overlapped peaks by full and partial models, and also of the structures of ibuprofen and acetaminophen in the presence of unmodeled impurity contributions.
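
    TOPAS syntax is not reproduced here, but the underlying idea, iteratively reweighted least squares in which points poorly described by the model receive progressively smaller weights, can be sketched generically. The Gaussian peak model, the Cauchy-type weight function, and all numbers are illustrative assumptions (numpy and scipy assumed available).

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(3)

        # toy profile: one modeled Gaussian peak plus an unmodeled impurity peak
        x = np.linspace(0.0, 10.0, 400)
        gauss = lambda x, h, c, w: h * np.exp(-0.5 * ((x - c) / w) ** 2)
        y = gauss(x, 10, 4, 0.5) + gauss(x, 4, 7, 0.2) + rng.normal(0, 0.1, x.size)

        p = (8.0, 4.5, 0.8)                      # starting values
        w = np.ones_like(x)                      # initial unit weights
        for _ in range(5):                       # outer reweighting loop
            p, _ = curve_fit(gauss, x, y, p0=p, sigma=1.0 / np.sqrt(w))
            r = y - gauss(x, *p)
            s = 1.4826 * np.median(np.abs(r))    # robust scale estimate (MAD)
            # Cauchy-type weights: residuals far beyond the noise level count less
            w = 1.0 / (1.0 + (r / (2.0 * s)) ** 2)

        print("refined peak parameters:", p)     # impurity near x=7 is down-weighted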

  19. Refinement in reanimation of the lower face.

    PubMed

    Sherris, David A

    2004-01-01

    Both the temporalis muscle transfer and the static sling procedure are techniques that improve deglutition, speech, and aesthetics in patients who are afflicted with paralysis of the lower part of the face. A refinement that is applicable to either of these procedures is described. By bringing the perioral attachment of either the muscle or the static sling exactly to the midline of the upper and lower lips, the surgeon can make the patient's mouth more symmetrical. This simple refinement will improve the results obtained with either procedure and has not been associated with any increased perioperative risks or complications.

  20. Cartesian Off-Body Grid Adaption for Viscous Time-Accurate Flow Simulation

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2011-01-01

    An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
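
    As a rough illustration of the refinement sensor mentioned above, the sketch below flags cells of a 1D grid using the undivided (unscaled) second difference of a solution variable; the threshold and the profile are illustrative, and OVERFLOW's multi-level overset machinery is of course far more involved.

        import numpy as np

        def refine_flags(q, threshold):
            """Flag cells of a 1D Cartesian grid whose undivided second
            difference of the solution q exceeds a threshold (no 1/dx^2 scaling,
            so the sensor is mesh-size independent)."""
            d2 = np.zeros_like(q)
            d2[1:-1] = np.abs(q[2:] - 2.0 * q[1:-1] + q[:-2])
            return d2 > threshold

        # example: a smeared shock profile triggers refinement only near the jump
        x = np.linspace(0.0, 1.0, 101)
        q = np.tanh((x - 0.5) / 0.02)
        flags = refine_flags(q, threshold=0.05)
        print("cells flagged for a finer level:", np.flatnonzero(flags))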

  1. 4D laser camera for accurate patient positioning, collision avoidance, image fusion and adaptive approaches during diagnostic and therapeutic procedures.

    PubMed

    Brahme, Anders; Nyman, Peter; Skatt, Björn

    2008-05-01

    A four-dimensional (4D) laser camera (LC) has been developed for accurate patient imaging in diagnostic and therapeutic radiology. A complementary metal-oxide semiconductor camera images the intersection of a scanned fan-shaped laser beam with the surface of the patient and allows real time recording of movements in a three-dimensional (3D) or four-dimensional (4D) format (3D + time). The LC system was first designed as an accurate patient setup tool during diagnostic and therapeutic applications but was found to be of much wider applicability as a general 4D photon "tag" for the surface of the patient in different clinical procedures. It is presently used as a 3D or 4D optical benchmark or tag for accurate delineation of the patient surface as demonstrated for patient auto setup, breathing and heart motion detection. Furthermore, its future potential applications in gating, adaptive therapy, 3D or 4D image fusion between most imaging modalities and image processing are discussed. It is shown that the LC system has a geometrical resolution of about 0.1 mm and that the rigid body repositioning accuracy is about 0.5 mm below 20 mm displacements, 1 mm below 40 mm and better than 2 mm at 70 mm. This indicates a slight need for repeated repositioning when the initial error is larger than about 50 mm. The positioning accuracy with standard patient setup procedures for prostate cancer at Karolinska was found to be about 5-6 mm when independently measured using the LC system. The system was found valuable for positron emission tomography-computed tomography (PET-CT) in vivo tumor and dose delivery imaging where it potentially may allow effective correction for breathing artifacts in 4D PET-CT and image fusion with lymph node atlases for accurate target volume definition in oncology. With an LC system in all imaging and radiation therapy rooms, auto setup during repeated diagnostic and therapeutic procedures may save around 5 min per session, increase accuracy and allow

  2. A Comparison of Content-Balancing Procedures for Estimating Multiple Clinical Domains in Computerized Adaptive Testing: Relative Precision, Validity, and Detection of Persons with Misfitting Responses

    ERIC Educational Resources Information Center

    Riley, Barth B.; Dennis, Michael L.; Conrad, Kendon J.

    2010-01-01

    This simulation study sought to compare four different computerized adaptive testing (CAT) content-balancing procedures designed for use in a multidimensional assessment with respect to measurement precision, symptom severity classification, validity of clinical diagnostic recommendations, and sensitivity to atypical responding. The four…

  3. Algorithm refinement for fluctuating hydrodynamics

    SciTech Connect

    Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.

    2007-07-03

    This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently-developed solver for LLNS, based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.

  4. Cerebellar cathodal tDCS interferes with recalibration and spatial realignment during prism adaptation procedure in healthy subjects.

    PubMed

    Panico, Francesco; Sagliano, Laura; Grossi, Dario; Trojano, Luigi

    2016-06-01

    The aim of this study is to clarify the specific role of the cerebellum during the prism adaptation procedure (PAP), considering its involvement in early prism exposure (i.e., in the recalibration process) and in the post-exposure phase (i.e., in the after-effect, related to spatial realignment). For this purpose we interfered with cerebellar activity by means of cathodal transcranial direct current stimulation (tDCS) while young healthy individuals performed a pointing task on a touch screen before, during and after wearing base-left prism glasses. The distance (in pixels) from the target dot in each trial, on the horizontal and vertical axes, was recorded and served as an index of accuracy. Results on the horizontal axis, which was shifted by the prism glasses, revealed that participants who received cathodal stimulation showed an increased rightward deviation from the actual position of the target while wearing prisms and a larger leftward deviation from the target after prism removal. Results on the vertical axis, on which no shift was induced, revealed a general trend in both groups to improve accuracy through the different phases of the task, and a trend, more visible in cathodally stimulated participants, to worsen accuracy from the first to the last movements in each phase. The horizontal-axis data confirm that the cerebellum is involved in all stages of the PAP, contributing to the early strategic recalibration process as well as to spatial realignment. On the vertical axis, the improving performance across the different stages of the task and the worsening accuracy within each task phase can be ascribed, respectively, to a learning process and to task-related fatigue.

  5. Adaptive Meshing Techniques for Viscous Flow Calculations on Mixed Element Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1997-01-01

    An adaptive refinement strategy based on hierarchical element subdivision is formulated and implemented for meshes containing arbitrary mixtures of tetrahedra, hexahedra, prisms and pyramids. Special attention is given to keeping memory overheads as low as possible. This procedure is coupled with an algebraic multigrid flow solver which operates on mixed-element meshes. Inviscid as well as viscous flows are computed on adaptively refined tetrahedral, hexahedral, and hybrid meshes. The efficiency of the method is demonstrated by generating an adapted hexahedral mesh containing 3 million vertices on a relatively inexpensive workstation.

  6. NAFTA opportunities: Petroleum refining

    SciTech Connect

    Not Available

    1993-01-01

    The North American Free Trade Agreement (NAFTA) creates a more transparent environment for the sale of refined petroleum products to Mexico, and locks in access to Canada's relatively open market for these products. Canada and Mexico are sizable United States export markets for refined petroleum products, with exports of $556 million and $864 million, respectively, in 1992. These markets represent approximately 24 percent of total U.S. exports of these goods.

  7. Mesh refinement in finite element analysis by minimization of the stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1989-01-01

    Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the mesh generated is appropriate for the problem at hand. Since the accuracy of the finite element results is mesh dependent, mesh selection forms a very important step in the analysis. Indeed, in accurate analyses, meshes need to be refined or rezoned until the solution converges to a value such that the error is below a predetermined tolerance. A posteriori methods use error indicators, developed using interpolation and approximation theory, to drive mesh refinement. Others use criteria such as strain energy density variation or stress contours to obtain near-optimal meshes. Although these methods are adaptive, they are expensive. The a priori methods available until now, by contrast, use geometrical parameters such as element aspect ratio and are therefore not adaptive by nature. Here, an adaptive a priori method is developed. The criterion is that minimizing the trace of the stiffness matrix with respect to the nodal coordinates minimizes the potential energy and, as a consequence, provides a good starting mesh. In a few examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to the development of computer algorithms. When the procedure is used in conjunction with a posteriori methods of grid refinement, fewer refinement iterations and fewer degrees of freedom are required for convergence than when it is not used. The mesh obtained is shown to have a uniform distribution of stiffness among the nodes and elements, which leads to a uniform error distribution. Thus the mesh obtained meets the optimality criterion of uniform error distribution.
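
    A minimal 1D illustration of the trace-minimization criterion: for a bar of varying stiffness discretized with 2-node elements, the trace of the assembled stiffness matrix is an explicit function of the node positions and can be minimized directly. The stiffness profile, mesh size, and optimizer settings below are illustrative assumptions, not the paper's examples (numpy and scipy assumed available).

        import numpy as np
        from scipy.optimize import minimize

        # axial rod on [0, 1]; axial stiffness EA varies along the length
        EA = lambda x: 1.0 + 5.0 * np.exp(-((x - 0.3) / 0.1) ** 2)

        def stiffness_trace(interior):
            """Trace of the assembled stiffness matrix of a 1D bar mesh:
            each 2-node element contributes 2*EA(mid)/L to the diagonal."""
            nodes = np.r_[0.0, np.sort(interior), 1.0]
            L = np.diff(nodes)
            if np.any(L <= 0.0):          # keep nodes inside the domain
                return np.inf
            mids = 0.5 * (nodes[:-1] + nodes[1:])
            return np.sum(2.0 * EA(mids) / L)

        n_elem = 10
        x0 = np.linspace(0, 1, n_elem + 1)[1:-1]   # uniform starting mesh
        res = minimize(stiffness_trace, x0, method="Nelder-Mead",
                       options={"maxiter": 20000, "xatol": 1e-8})
        # since the total length is fixed, stationarity gives element lengths
        # roughly proportional to sqrt(EA at the element midpoint)
        print("optimized interior nodes:", np.sort(res.x))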

  8. Refiners get petchems help

    SciTech Connect

    Wood, A.; Cornitius, T.

    1997-06-11

    The U.S. refining industry is facing hard times. Slow growth, tough environmental regulations, and fierce competition - especially in retail gasoline - have squeezed margins and prompted a series of mergers and acquisitions. The trend has affected the smallest and largest players, and a series of transactions over the past two years has created a new industry lineup. Among the larger companies, Mobil and Amoco are the latest to consider a refining merger. That follows recent plans by Ashland and Marathon to merge their refining businesses, and the decision by Shell, Texaco, and Saudi Aramco to combine some U.S. operations. Many of the leading independent refiners have increased their scale by acquiring refinery capacity. With refining still in the doldrums, more independents are taking a closer look at boosting production of petrochemicals, which offer high growth and, usually, better margins. That is being helped by the shift to refinery processes that favor the increased production of light olefins for alkylation and the removal of aromatics, providing opportunity to extract these materials for the petrochemical market. 5 figs., 3 tabs.

  9. US refiners bounce back

    SciTech Connect

    Not Available

    1991-02-28

    The U.S. refining sector has been whipped into high-speed decisions since the invasion of Kuwait last summer, and its flexibility has been severely tested -- especially in the area of pricing. This issue shows facets of the roller-coaster ride such as crude oil costs, product values, and resulting margins. This issue also contains the following: (1) the ED Refining Netback Data Series for the U.S. Gulf and West Coasts, Rotterdam, and Singapore as of Feb. 22, 1991; and (2) the ED Fuel Price/Tax Series for countries of the Eastern Hemisphere, Feb. 1991 edition. 4 figs., 5 tabs.

  10. Convection in grain refining

    NASA Technical Reports Server (NTRS)

    Flemings, M. C.; Szekely, J.

    1982-01-01

    The relationship between fluid flow phenomena, nucleation, and grain refinement in solidifying metals, both in the presence and in the absence of a gravitational field, was investigated. The reduction of grain size in hard-to-process melts; the effects of undercooling on structure in solidification processes, including rapid solidification processing; and control of this undercooling to improve the structures of solidified melts are considered. Grain refining and supercooling, thermal modeling of the solidification process, and heat and fluid flow phenomena in levitated metal droplets are described.

  11. Algorithm refinement for the stochastic Burgers' equation

    SciTech Connect

    Bell, John B.; Foo, Jasmine; Garcia, Alejandro L. E-mail: algarcia@algarcia.org

    2007-04-10

    In this paper, we develop an algorithm refinement (AR) scheme for an excluded random walk model whose mean field behavior is given by the viscous Burgers' equation. AR hybrids use the adaptive mesh refinement framework to model a system using a molecular algorithm where desired while allowing a computationally faster continuum representation to be used in the remainder of the domain. The focus in this paper is the role of fluctuations on the dynamics. In particular, we demonstrate that it is necessary to include a stochastic forcing term in Burgers' equation to accurately capture the correct behavior of the system. The conclusion we draw from this study is that the fidelity of multiscale methods that couple disparate algorithms depends on the consistent modeling of fluctuations in each algorithm and on a coupling, such as algorithm refinement, that preserves this consistency.
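
    A minimal finite-volume sketch of the continuum side of such a hybrid is viscous Burgers' equation with a stochastic flux added to the deterministic one, as described above. The noise amplitude eps and all discretization parameters are illustrative stand-ins, not the paper's fluctuation-dissipation values or its AR coupling.

        import numpy as np

        rng = np.random.default_rng(0)
        n, dx, dt, nu, eps = 200, 1.0 / 200, 1e-5, 0.01, 1e-5
        x = (np.arange(n) + 0.5) * dx
        u = np.sin(2 * np.pi * x)                  # periodic initial data

        for _ in range(5000):
            up = np.roll(u, -1)                    # u_{i+1}
            # deterministic face flux: central advective part plus viscous part
            f = 0.25 * (u**2 + up**2) - nu * (up - u) / dx
            # white-noise flux, scaled like 1/sqrt(dx*dt) per face (assumed form)
            f += np.sqrt(2 * nu * eps / (dx * dt)) * rng.normal(size=n)
            # conservative update: the noisy fluxes telescope, so the mean of u
            # is conserved exactly on the periodic domain
            u = u - dt / dx * (f - np.roll(f, 1))

        print("mean (conserved):", u.mean(), " variance:", u.var())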

  12. Choices, Frameworks and Refinement

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Islam, Nayeem; Johnson, Ralph; Kougiouris, Panos; Madany, Peter

    1991-01-01

    In this paper we present a method for designing operating systems using object-oriented frameworks. A framework can be refined into subframeworks. Constraints specify the interactions between the subframeworks. We describe how we used object-oriented frameworks to design Choices, an object-oriented operating system.

  13. Structured programming: Principles, notation, procedure

    NASA Technical Reports Server (NTRS)

    JOST

    1978-01-01

    Structured programs are best represented using a notation which gives a clear representation of the block encapsulation. In this report, a set of symbols which can be used until binding directives are republished is suggested. Structured programming also allows a new method of procedure for design and testing. Programs can be designed top down, that is, they can start at the highest program plane and penetrate to the lowest plane by stepwise refinements. The testing methodology is also adapted to this procedure. First, the highest program plane is tested, with the programs that are not yet finished in the next lower plane represented by so-called dummies. They are gradually replaced by the real programs.
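
    The stepwise-refinement workflow described here is easy to picture in code: the highest plane is written and tested first, with lower-plane routines represented by dummies (stubs) that are replaced later. A toy sketch, with all names invented for illustration:

        # top plane: runnable and testable before the lower planes exist
        def process_payroll(records):
            validated = validate(records)      # lower-plane routine, still a dummy
            return [compute_pay(r) for r in validated]

        def validate(records):
            # dummy: passes data through until the real check is written
            return records

        def compute_pay(record):
            # dummy: fixed placeholder result, replaced by the real program later
            return {"name": record["name"], "pay": 0.0}

        print(process_payroll([{"name": "Muster", "hours": 38}]))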

  14. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure.

    PubMed

    Shen, Yi; Dai, Wei; Richards, Virginia M

    2015-03-01

    A MATLAB toolbox for the efficient estimation of the threshold, slope, and lapse rate of the psychometric function is described. The toolbox enables the efficient implementation of the updated maximum-likelihood (UML) procedure. The toolbox uses an object-oriented architecture for organizing the experimental variables and computational algorithms, which provides experimenters with flexibility in experimental design and data management. Descriptions of the UML procedure and the UML Toolbox are provided, followed by toolbox use examples. Finally, guidelines and recommendations of parameter configurations are given.

  15. REFINING FLUORINATED COMPOUNDS

    DOEpatents

    Linch, A.L.

    1963-01-01

    This invention relates to a method of refining a liquid perfluorinated hydrocarbon oil containing fluorocarbons of 12 to 28 carbon atoms per molecule by distilling between 150 deg C and 300 deg C at 10 mm Hg absolute pressure. The perfluorinated oil is washed with a chlorinated lower aliphatic hydrocarbon, which maintains a separate liquid phase when mixed with the oil. Impurities detrimental to the stability of the oil are extracted by the chlorinated lower aliphatic hydrocarbon. (AEC)

  16. Refining Lurgi tar acids

    SciTech Connect

    Greco, N.P.

    1984-04-17

    A process is disclosed for removing tar bases and neutral oils from Lurgi tar acids: the tar acids are treated with aqueous sodium bisulfate to convert the tar bases to salts and to hydrolyze the neutral oils, and the tar acids are then distilled to obtain refined tar acid as the distillate while the tar base salts and neutral oil hydrolysis products remain as residue.

  17. Refinement of the ICRF

    NASA Technical Reports Server (NTRS)

    Ma, Chopo

    2004-01-01

    Since the ICRF was generated in 1995, VLBI modeling and estimation, data quality, source position stability analysis, and supporting observational programs have improved markedly. There are developing and potential applications in the areas of space navigation, Earth orientation monitoring, and optical astrometry from space that would benefit from a refined ICRF with enhanced accuracy, stability, and spatial distribution. The convergence of analysis, focused observations, and astrometric needs should drive the production of a new realization in the next few years.

  18. Refining retinoids with heteroatoms.

    PubMed

    Benbrook, D M

    2002-06-01

    Retinoids are a group of synthetic compounds designed to refine the numerous biological activities of retinoic acid into pharmaceuticals for several diseases, including cancer. Designs that conformationally restricted the rotation of the structures resulted in arotinoids that were biologically active but showed increased toxicity. Incorporation of a heteroatom in one cyclic ring of the arotinoid structures drastically reduced the toxicity while retaining biological activity. Clinical trials of a heteroarotinoid, Tazarotene, confirmed the improved chemotherapeutic ratio (efficacy/toxicity).

  19. Computations of Aerodynamic Performance Databases Using Output-Based Refinement

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2009-01-01

    Objectives: handle complex geometry problems; control discretization errors via solution-adaptive mesh refinement; and focus on aerodynamic databases for parametric and optimization studies, which demand (1) accuracy: satisfy prescribed error bounds; (2) robustness and speed: may require over 10^5 mesh generations; and (3) automation: avoid user supervision, obtain "expert meshes" independent of user skill, and run every case adaptively in production settings.

  20. Adaptive superposition of finite element meshes in linear and nonlinear dynamic analysis

    NASA Astrophysics Data System (ADS)

    Yue, Zhihua

    2005-11-01

    The numerical analysis of transient phenomena in solids, for instance, wave propagation and structural dynamics, is a very important and active area of study in engineering. Despite the current evolutionary state of modern computer hardware, practical analysis of large scale, nonlinear transient problems requires the use of adaptive methods where computational resources are locally allocated according to the interpolation requirements of the solution form. Adaptive analysis of transient problems involves obtaining solutions at many different time steps, each of which requires a sequence of adaptive meshes. Therefore, the execution speed of the adaptive algorithm is of paramount importance. In addition, transient problems require that the solution must be passed from one adaptive mesh to the next adaptive mesh with a bare minimum of solution-transfer error since this form of error compromises the initial conditions used for the next time step. A new adaptive finite element procedure (s-adaptive) is developed in this study for modeling transient phenomena in both linear elastic solids and nonlinear elastic solids caused by progressive damage. The adaptive procedure automatically updates the time step size and the spatial mesh discretization in transient analysis, achieving the accuracy and the efficiency requirements simultaneously. The novel feature of the s-adaptive procedure is the original use of finite element mesh superposition to produce spatial refinement in transient problems. The use of mesh superposition enables the s-adaptive procedure to completely avoid the need for cumbersome multipoint constraint algorithms and mesh generators, which makes the s-adaptive procedure extremely fast. Moreover, the use of mesh superposition enables the s-adaptive procedure to minimize the solution-transfer error. In a series of different solid mechanics problem types including 2-D and 3-D linear elastic quasi-static problems, 2-D material nonlinear quasi-static problems

  1. Local block refinement with a multigrid flow solver

    NASA Astrophysics Data System (ADS)

    Lange, C. F.; Schäfer, M.; Durst, F.

    2002-01-01

    A local block refinement procedure for the efficient computation of transient incompressible flows with heat transfer is presented. The procedure uses patched structured grids for the blockwise refinement and a parallel multigrid finite volume method with colocated primitive variables to solve the Navier-Stokes equations. No restriction is imposed on the value of the refinement rate and non-integer rates may also be used. The procedure is analysed with respect to its sensitivity to the refinement rate and to the corresponding accuracy. Several applications exemplify the advantages of the method in comparison with a common block structured grid approach. The results show that it is possible to achieve an improvement in accuracy with simultaneous significant savings in computing time and memory requirements.

  2. Minimally refined biomass fuel

    DOEpatents

    Pearson, Richard K.; Hirschfeld, Tomas B.

    1984-01-01

    A minimally refined fluid composition, suitable as a fuel mixture and derived from biomass material, is comprised of one or more water-soluble carbohydrates such as sucrose, one or more alcohols having less than four carbons, and water. The carbohydrate provides the fuel source; water solubilizes the carbohydrates; and the alcohol aids in the combustion of the carbohydrate and reduces the viscosity of the carbohydrate/water solution. Because less energy is required to obtain the carbohydrate from the raw biomass than to obtain alcohol, an overall energy savings is realized compared to fuels employing alcohol as the primary fuel.

  3. Measuring acuity of the approximate number system reliably and validly: the evaluation of an adaptive test procedure

    PubMed Central

    Lindskog, Marcus; Winman, Anders; Juslin, Peter; Poom, Leo

    2013-01-01

    Two studies investigated the reliability and predictive validity of commonly used measures and models of Approximate Number System (ANS) acuity. Study 1 investigated reliability by both an empirical approach and a simulation of maximum obtainable reliability under ideal conditions. Results showed that common measures of the Weber fraction (w) are reliable only when using a substantial number of trials, even under ideal conditions. Study 2 compared different purported measures of ANS acuity with respect to convergent and predictive validity in a within-subjects design and evaluated an adaptive test using the ZEST algorithm. Results showed that the adaptive measure can reduce the number of trials needed to reach acceptable reliability. Only direct tests with non-symbolic numerosity discriminations of stimuli presented simultaneously were related to arithmetic fluency. This correlation remained when controlling for general cognitive ability and perceptual speed. Further, the purported indirect measure of ANS acuity in terms of the Numeric Distance Effect (NDE) was not reliable and showed no sign of predictive validity. The non-symbolic NDE for reaction time was significantly related to direct w estimates in a direction contrary to that expected. Easier stimuli were found to be more reliable, but only harder (7:8 ratio) stimuli contributed to predictive validity. PMID:23964256
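
    For context, the Weber fraction w mentioned above is conventionally estimated from a discrimination model of roughly the following form (a standard ANS model supplied here as background, not quoted from the paper), where n_1 and n_2 are the two numerosities and \Phi is the standard normal CDF:

        P(\text{correct}) = \Phi\!\left( \frac{|n_1 - n_2|}{w \sqrt{n_1^2 + n_2^2}} \right)

    Larger w means noisier internal magnitude representations, which is why harder ratios (such as 7:8) probe it most directly.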

  4. Constrained Self-adaptive Solutions Procedures for Structure Subject to High Temperature Elastic-plastic Creep Effects

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Tovichakchaikul, S.

    1983-01-01

    This paper develops a new solution strategy which can handle elastic-plastic-creep problems in an inherently stable manner. This is achieved by introducing a new constrained time stepping algorithm which enables the solution of creep-initiated pre/postbuckling behavior where indefinite tangent stiffnesses are encountered. Due to the generality of the scheme, both monotone and cyclic loading histories can be handled. The presentation gives a thorough overview of current solution schemes and their shortcomings, develops the constrained time stepping algorithms, and illustrates the results of several numerical experiments which benchmark the new procedure.

  5. Scatter-plot-based method for noise characteristics evaluation in remote sensing images using adaptive image clustering procedure

    NASA Astrophysics Data System (ADS)

    Abramova, Victoriya V.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2016-10-01

    Several modifications of a scatter-plot-based method for mixed noise parameter estimation are proposed. The modifications relate to the image segmentation stage and are intended to adaptively separate image blocks into clusters, taking into account image peculiarities, and to choose the required number of clusters. A comparative performance analysis of the proposed modifications for images from the TID2008 database is performed. It is shown that the best estimation accuracy is provided by a method with automatic determination of the required number of clusters followed by block separation into clusters using the k-means method. This modification improves the accuracy of noise characteristics estimation by up to 5% for both signal-independent and signal-dependent noise components in comparison to the basic method. Results for real-life data are presented.
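
    The family of scatter-plot methods the abstract refers to can be illustrated with a minimal, hypothetical Python sketch: each block contributes a (local mean, local variance) point, and a line fit var = sigma2_add + k*mean separates the signal-independent and signal-dependent components. The adaptive clustering stage that constitutes the paper's actual contribution is omitted here.

        import numpy as np

        def estimate_mixed_noise(img, block=8):
            # Collect (local mean, local variance) scatter points from blocks.
            h, w = img.shape
            means, variances = [], []
            for i in range(0, h - block + 1, block):
                for j in range(0, w - block + 1, block):
                    blk = img[i:i + block, j:j + block].astype(float)
                    means.append(blk.mean())
                    variances.append(blk.var(ddof=1))
            # Line fit: variance = sigma2_add + k * mean.
            k, sigma2_add = np.polyfit(means, variances, 1)
            return sigma2_add, k

        # Synthetic check on a piecewise-constant image with known mixed noise
        # variance 4 + 0.5*mean (real images would need the clustering stage).
        rng = np.random.default_rng(1)
        clean = np.repeat(np.repeat(rng.uniform(50.0, 200.0, (32, 32)), 8, 0), 8, 1)
        noisy = clean + rng.normal(0.0, np.sqrt(4.0 + 0.5 * clean))
        print(estimate_mixed_noise(noisy))   # roughly (4.0, 0.5)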

  6. Near-Body Grid Adaption for Overset Grids

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.

  7. Refinery Efficiency Improvement

    SciTech Connect

    WRI

    2002-05-15

    Refinery processes that convert heavy oils to lighter distillate fuels require heating for distillation, hydrogen addition, or carbon rejection (coking). Efficiency is limited by the formation of insoluble carbon-rich coke deposits. Heat exchangers and other refinery units must be shut down for mechanical coke removal, resulting in a significant loss of output and revenue. When a residuum is heated above the temperature at which pyrolysis occurs (340 C, 650 F), there is typically an induction period before coke formation begins (Magaril and Aksenova 1968, Wiehe 1993). To avoid fouling, refiners often stop heating a residuum before coke formation begins, using arbitrary criteria. In many cases, this heating is stopped sooner than need be, resulting in less than maximum product yield. Western Research Institute (WRI) has developed innovative Coking Index concepts (patent pending) which refiners can use for process control, heating residua to the threshold at which coke formation begins at pyrolysis temperatures, but not beyond it (Schabron et al. 2001). The development of this universal predictor solves a long-standing problem in petroleum refining. These Coking Indexes have great potential value in improving the efficiency of distillation processes. The Coking Indexes were found to apply to residua in a universal manner, and the theoretical basis for the indexes has been established (Schabron et al. 2001a, 2001b, 2001c). For the first time, a few simple measurements indicate how close undesired coke formation is on the coke formation induction time line. The Coking Indexes can lead to new process controls that can improve refinery distillation efficiency by several percentage points. Petroleum residua consist of an ordered continuum of solvated polar materials usually referred to as asphaltenes dispersed in a lower polarity solvent phase held together by intermediate polarity materials usually referred to as

  8. Purification of Germanium Crystals by Zone Refining

    NASA Astrophysics Data System (ADS)

    Kooi, Kyler; Yang, Gang; Mei, Dongming

    2016-09-01

    Germanium zone refining is one of the most important techniques used to produce high purity germanium (HPGe) single crystals for the fabrication of nuclear radiation detectors. During zone refining the impurities are segregated to different parts of the ingot. In practice, the effective segregation of an impurity depends on many parameters, including molten zone travel speed, the ratio of ingot length to molten zone width, and the number of passes. By studying the theory of these influential factors, perfecting our cleaning and preparation procedures, and analyzing the origin and distribution of our impurities (aluminum, boron, gallium, and phosphorus) identified using photothermal ionization spectroscopy (PTIS), we have optimized these parameters to produce HPGe. We have achieved a net impurity level of 10^10 /cm^3 for our zone-refined ingots, measured with van der Pauw and Hall-effect methods. Zone-refined ingots of this purity can be processed into a detector-grade HPGe single crystal, which can be used to fabricate detectors for dark matter and neutrinoless double beta decay detection. This project was financially supported by DOE Grant (DE-FG02-10ER46709) and the State Governor's Research Center.
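
    As background to the parameters mentioned (travel speed, zone width, number of passes), the classical single-pass zone-refining relation of Pfann gives the solute concentration left in the solid; this is the standard textbook result, not a formula from the paper:

        C_s(x) = C_0 \left[ 1 - (1 - k) e^{-kx/l} \right]

    where C_0 is the initial uniform impurity concentration, k the effective distribution coefficient, l the molten-zone length, and x the distance from the starting end (valid except over the final zone length). Impurities with k < 1 are swept toward the finish end, which is why repeated passes progressively purify the head of the ingot.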

  9. Thailand: refining cultural values.

    PubMed

    Ratanakul, P

    1990-01-01

    In the second of a set of three articles concerned with "bioethics on the Pacific Rim," Ratanakul, director of a research center for Southeast Asian cultures in Thailand, provides an overview of bioethical issues in his country. He focuses on four issues: health care allocation, AIDS, determination of death, and euthanasia. The introduction of Western medicine into Thailand has brought with it a multitude of ethical problems created in part by tension between Western and Buddhist values. For this reason, Ratanakul concludes that "bioethical enquiry in Thailand must not only examine ethical dilemmas that arise in the actual practice of medicine and research in the life sciences, but must also deal with the refinement and clarification of applicable Thai cultural and moral values."

  10. Local Mesh Refinement in the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Wu, Yuhui; Wang, Xiao-Yen; Yang, Vigor

    2000-01-01

    A local mesh refinement procedure for the CE/SE method which does not use an iterative procedure in the treatment of grid-to-grid communication is described. It is shown that a refinement ratio higher than ten can be applied successfully across a single coarse grid/fine grid interface.

  11. Dose refinement. ARAC's role

    SciTech Connect

    Ellis, J. S.; Sullivan, T. J.; Baskett, R. L.

    1998-06-01

    The Atmospheric Release Advisory Capability (ARAC), located at the Lawrence Livermore National Laboratory, has been involved since the late 1970's in assessing consequences from nuclear and other hazardous material releases into the atmosphere. ARAC's primary role has been emergency response. However, after the emergency phase, there is still a significant role for dispersion modeling. This work usually involves refining the source term and, hence, the dose to the populations affected as additional information becomes available in the form of source term estimates (release rates, mix of material, and release geometry) and any measurements from passage of the plume and deposition on the ground. Many of the ARAC responses have been documented elsewhere. Some of the more notable radiological releases in which ARAC has participated in the post-emergency phase have been the 1979 Three Mile Island nuclear power plant (NPP) accident outside Harrisburg, PA, the 1986 Chernobyl NPP accident in the Ukraine, and the 1996 Japan Tokai nuclear processing plant explosion. ARAC has also done post-emergency phase analyses for the 1978 Russian satellite COSMOS 954 reentry and subsequent partial burn-up of its onboard nuclear reactor, which deposited radioactive materials on the ground in Canada; the 1986 uranium hexafluoride spill in Gore, OK; the 1993 Russian Tomsk-7 nuclear waste tank explosion; and lesser releases of mostly tritium. In addition, ARAC has performed a key role in the contingency planning for possible accidental releases during the launch of spacecraft with radioisotope thermoelectric generators (RTGs) on board (i.e., Galileo, Ulysses, Mars Pathfinder, and Cassini), and routinely exercises with the Federal Radiological Monitoring and Assessment Center (FRMAC) in preparation for offsite consequences of radiological releases from NPPs and nuclear weapon accidents or incidents. Several accident post-emergency phase assessments are discussed in this paper in order to illustrate

  12. Acoustic Logging Modeling by Refined Biot's Equations

    NASA Astrophysics Data System (ADS)

    Plyushchenkov, Boris D.; Turchaninov, Victor I.

    An explicit, uniform, completely conservative finite difference scheme for the refined Biot's equations is proposed. This system is modified according to the modern theory of dynamic permeability and tortuosity in a fluid-saturated elastic porous medium. Approximate local boundary transparency conditions are constructed. The acoustic logging device is simulated by the choice of appropriate boundary conditions on its external surface. This scheme and these conditions are suitable for exploring borehole acoustic problems in permeable formations in a realistic axisymmetric situation. The developed approach can also be adapted to the nonsymmetric case.

  13. Crystal structure refinement with SHELXL

    SciTech Connect

    Sheldrick, George M.

    2015-01-01

    New features added to the refinement program SHELXL since 2008 are described and explained. The improvements in the crystal structure refinement program SHELXL have been closely coupled with the development and increasing importance of the CIF (Crystallographic Information Framework) format for validating and archiving crystal structures. An important simplification is that now only one file in CIF format (for convenience, referred to simply as ‘a CIF’) containing embedded reflection data and SHELXL instructions is needed for a complete structure archive; the program SHREDCIF can be used to extract the .hkl and .ins files required for further refinement with SHELXL. Recent developments in SHELXL facilitate refinement against neutron diffraction data, the treatment of H atoms, the determination of absolute structure, the input of partial structure factors and the refinement of twinned and disordered structures. SHELXL is available free to academics for the Windows, Linux and Mac OS X operating systems, and is particularly suitable for multiple-core processors.

  14. Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.; Van Meter, James R.

    2005-01-01

    A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.

  15. Monitoring, Controlling, Refining Communication Processes

    ERIC Educational Resources Information Center

    Spiess, John

    1975-01-01

    Because internal communications are essential to school system success, monitoring, controlling, and refining communicative processes have become essential activities for the chief school administrator. (Available from Buckeye Association of School Administrators, 750 Brooksedge Blvd., Westerville, Ohio 43081) (Author/IRT)

  16. Crystal structure refinement with SHELXL.

    PubMed

    Sheldrick, George M

    2015-01-01

    The improvements in the crystal structure refinement program SHELXL have been closely coupled with the development and increasing importance of the CIF (Crystallographic Information Framework) format for validating and archiving crystal structures. An important simplification is that now only one file in CIF format (for convenience, referred to simply as `a CIF') containing embedded reflection data and SHELXL instructions is needed for a complete structure archive; the program SHREDCIF can be used to extract the .hkl and .ins files required for further refinement with SHELXL. Recent developments in SHELXL facilitate refinement against neutron diffraction data, the treatment of H atoms, the determination of absolute structure, the input of partial structure factors and the refinement of twinned and disordered structures. SHELXL is available free to academics for the Windows, Linux and Mac OS X operating systems, and is particularly suitable for multiple-core processors.

  17. The evolution and refinements of varicocele surgery

    PubMed Central

    Marmar, Joel L

    2016-01-01

    Varicoceles have been recognized in clinical practice for over a century. Originally, these procedures were utilized for the management of pain but, since 1952, repairs have mostly been performed for the treatment of male infertility. However, the diagnosis and treatment of varicoceles remained controversial, because the pathophysiology was not clear, the entry criteria of the studies varied among centers, and there were few randomized clinical trials. Nevertheless, clinicians continued developing techniques for the correction of varicoceles, basic scientists continued investigations on the pathophysiology of varicoceles, and new outcome data from prospective randomized trials have appeared in the world's literature. Therefore, this special edition of the Asian Journal of Andrology was proposed to report much of the new information related to varicoceles and, as a specific part of this project, the present article was developed as a comprehensive review of the evolution and refinements of the corrective procedures. PMID:26732111

  18. Refining the shifted topological vertex

    SciTech Connect

    Drissi, L. B.; Jehjouh, H.; Saidi, E. H.

    2009-01-15

    We study aspects of the refining and shifting properties of the 3d MacMahon function C_3(q) used in topological string theory and BKP hierarchy. We derive the explicit expressions of the shifted topological vertex S_{λμν}(q) and its refined version T_{λμν}(q, t). These vertices complete results in the literature.

  19. Evaluation of total effective dose due to certain environmentally placed naturally occurring radioactive materials using a procedural adaptation of RESRAD code.

    PubMed

    Beauvais, Z S; Thompson, K H; Kearfott, K J

    2009-07-01

    Due to a recent upward trend in the price of uranium and subsequent increased interest in uranium mining, accurate modeling of baseline dose from environmental sources of radioactivity is of increasing interest. The residual radioactivity model and code (RESRAD) is a program used to model environmental movement and calculate the dose due to the inhalation, ingestion, and exposure to radioactive materials following a placement. This paper presents a novel use of RESRAD for the calculation of dose from non-enhanced, or ancient, naturally occurring radioactive material (NORM). In order to use RESRAD to calculate the total effective dose (TED) due to ancient NORM, a procedural adaptation was developed to negate the effects of time-progressive distribution of radioactive materials. A dose due to United States average concentrations of uranium, actinium, and thorium series radionuclides was then calculated. For adults exposed in a residential setting and assumed to eat significant amounts of food grown in NORM-concentrated areas, the annual dose due to national average NORM concentrations was 0.935 mSv y^-1. A set of environmental dose factors was calculated for simple estimation of dose from uranium, thorium, and actinium series radionuclides for various age groups and exposure scenarios as a function of elemental uranium and thorium activity concentrations in groundwater and soil. The values of these factors for uranium were lowest for an adult exposed in an industrial setting: 0.00476 microSv kg Bq^-1 y^-1 for soil and 0.00596 microSv m^3 Bq^-1 y^-1 for water (assuming a 1:1 ^234U:^238U activity ratio in water). The uranium factors were highest for infants exposed in a residential setting and assumed to ingest food grown onsite: 34.8 microSv kg Bq^-1 y^-1 in soil and 13.0 microSv m^3 Bq^-1 y^-1 in water.
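
    To make the use of such dose factors concrete: the annual dose contribution is the factor multiplied by the activity concentration. A worked example with an invented soil activity of 40 Bq kg^-1 (the concentration is hypothetical; the factor is the adult industrial soil value quoted above):

        0.00476 microSv kg Bq^-1 y^-1 × 40 Bq kg^-1 ≈ 0.19 microSv y^-1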

  20. Bauxite Mining and Alumina Refining

    PubMed Central

    Frisch, Neale; Olney, David

    2014-01-01

    Objective: To describe bauxite mining and alumina refining processes and to outline the relevant physical, chemical, biological, ergonomic, and psychosocial health risks. Methods: Review article. Results: The most important risks relate to noise, ergonomics, trauma, and caustic soda splashes to the skin and eyes. Other risks of note relate to fatigue, heat, and solar ultraviolet exposure and, for some operations, tropical diseases, venomous/dangerous animals, and remote locations. Exposures to bauxite dust, alumina dust, and caustic mist in contemporary best-practice bauxite mining and alumina refining operations have not been demonstrated to be associated with clinically significant decrements in lung function. Exposures to bauxite dust and alumina dust at such operations are also not associated with the incidence of cancer. Conclusions: A range of occupational health risks in bauxite mining and alumina refining require the maintenance of effective control measures. PMID:24806720

  1. High-Resolution Numerical Simulation and Analysis of Mach Reflection Structures in Detonation Waves in Low-Pressure H2–O2–Ar Mixtures: A Summary of Results Obtained with the Adaptive Mesh Refinement Framework AMROC

    DOE PAGES

    Deiterding, Ralf

    2011-01-01

    Numerical simulation can be key to the understanding of the multidimensional nature of transient detonation waves. However, the accurate approximation of realistic detonations is demanding as a wide range of scales needs to be resolved. This paper describes a successful solution strategy that utilizes logically rectangular dynamically adaptive meshes. The hydrodynamic transport scheme and the treatment of the nonequilibrium reaction terms are sketched. A ghost fluid approach is integrated into the method to allow for embedded geometrically complex boundaries. Large-scale parallel simulations of unstable detonation structures of Chapman-Jouguet detonations in low-pressure hydrogen-oxygen-argon mixtures demonstrate the efficiency of the described techniques in practice. In particular, computations of regular cellular structures in two and three space dimensions and their development under transient conditions, that is, under diffraction and for propagation through bends, are presented. Some of the observed patterns are classified by shock polar analysis, and a diagram of the transition boundaries between possible Mach reflection structures is constructed.

  2. Mutation at positively selected positions in the binding site for HLA-C shows that KIR2DL1 is a more refined but less adaptable NK cell receptor than KIR2DL3.

    PubMed

    Hilton, Hugo G; Vago, Luca; Older Aguilar, Anastazia M; Moesta, Achim K; Graef, Thorsten; Abi-Rached, Laurent; Norman, Paul J; Guethlein, Lisbeth A; Fleischhauer, Katharina; Parham, Peter

    2012-08-01

    Through recognition of HLA class I, killer cell Ig-like receptors (KIR) modulate NK cell functions in human immunity and reproduction. Although a minority of HLA-A and -B allotypes are KIR ligands, HLA-C allotypes dominate this regulation, because they all carry either the C1 epitope recognized by KIR2DL2/3 or the C2 epitope recognized by KIR2DL1. The C1 epitope and C1-specific KIR evolved first, followed several million years later by the C2 epitope and C2-specific KIR. Strong, varying selection pressure on NK cell functions drove the diversification and divergence of hominid KIR, with six positions in the HLA class I binding site of KIR being targets for positive diversifying selection. Introducing each naturally occurring residue at these positions into KIR2DL1 and KIR2DL3 produced 38 point mutants that were tested for binding to 95 HLA-A, -B, and -C allotypes. Modulating specificity for HLA-C is position 44, whereas positions 71 and 131 control cross-reactivity with HLA-A*11:02. Dominating avidity modulation is position 70, with lesser contributions from positions 68 and 182. KIR2DL3 has lower avidity and broader specificity than KIR2DL1. Mutation could increase the avidity and change the specificity of KIR2DL3, whereas KIR2DL1 specificity was resistant to mutation, and its avidity could only be lowered. The contrasting inflexibility of KIR2DL1 and adaptability of KIR2DL3 fit with C2-specific KIR having evolved from C1-specific KIR, and not vice versa. Substitutions restricted to activating KIR all reduced the avidity of KIR2DL1 and KIR2DL3, further evidence that activating KIR function often becomes subject to selective attenuation.

  3. An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Erickson, Larry L.

    1994-01-01

    A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry-adaptive procedure is also incorporated.

  4. Method for refining contaminated iridium

    DOEpatents

    Heshmatpour, B.; Heestand, R.L.

    1982-08-31

    Contaminated iridium is refined by alloying it with an alloying agent selected from the group consisting of manganese and an alloy of manganese and copper, and then dissolving the alloying agent from the formed alloy to provide a purified iridium powder.

  5. Method for refining contaminated iridium

    DOEpatents

    Heshmatpour, Bahman; Heestand, Richard L.

    1983-01-01

    Contaminated iridium is refined by alloying it with an alloying agent selected from the group consisting of manganese and an alloy of manganese and copper, and then dissolving the alloying agent from the formed alloy to provide a purified iridium powder.

  6. Multigrid for refined triangle meshes

    SciTech Connect

    Shapira, Yair

    1997-02-01

    A two-level preconditioning method for the solution of (locally) refined finite element schemes using triangle meshes is introduced. In the isotropic SPD case, it is shown that the condition number of the preconditioned stiffness matrix is bounded uniformly for all sufficiently regular triangulations. This is also verified numerically for an isotropic diffusion problem with highly discontinuous coefficients.

  7. Refining analgesia strategies using lasers.

    PubMed

    Hampshire, Victoria

    2015-08-01

    Sound programs for the humane care and use of animals within research facilities incorporate experimental refinements such as multimodal approaches for pain management. These approaches can include non-traditional strategies along with more established ones. The use of lasers for pain relief is growing in popularity among companion animal veterinary practitioners and technologists. Therefore, its application in the research sector warrants closer consideration.

  8. GRAIN REFINEMENT OF URANIUM BILLETS

    DOEpatents

    Lewis, L.

    1964-02-25

    A method of refining the grain structure of massive uranium billets without resort to forging is described. The method consists in the steps of beta-quenching the billets, annealing the quenched billets in the upper alpha temperature range, and extrusion upset of the billets to an extent sufficient to increase the cross-sectional area by at least 5 per cent. (AEC)

  9. Bayesian ensemble refinement by replica simulations and reweighting

    NASA Astrophysics Data System (ADS)

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-01

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
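
    The linear scaling highlighted above can be written schematically as follows; this is a hedged sketch with generic symbols, not an excerpt from the paper. For N coupled replicas x_1, ..., x_N, restrained observables y_i, and measured ensemble averages Y_i, a replica restraint energy of the form

        E_N = \frac{k N}{2} \sum_i \left( \frac{1}{N} \sum_{r=1}^{N} y_i(x_r) - Y_i \right)^2

    keeps the per-replica restraint force finite while pinning the replica-averaged observables increasingly tightly; the paper's point is that the overall strength (the prefactor kN) must grow linearly with N for the sampled ensemble to converge to the optimal Bayesian result.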

  10. French refiners grappling with new octane specs, environmental rules

    SciTech Connect

    Not Available

    1991-11-18

    After emerging from the doldrums of the 1980s, France's refining industry faces new challenges to meet tightening gasoline and diesel specifications and greater environmental pressures. This paper reports on a three-stage investment program that is under way to adapt the pared-down and restructured plant network to satisfy a changing products market. Current outlays, involving the first stage of investments, are geared to the rapidly developing unleaded gasoline market to meet quantity and quality requirements.

  11. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  12. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Astrophysics Data System (ADS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-11-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  13. A comparison of locally adaptive multigrid methods: LDC, FAC and FIC

    NASA Technical Reports Server (NTRS)

    Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul

    1993-01-01

    This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction), which we proposed recently. These methods are tested on two examples of a two-dimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.

  14. A Refined Cauchy-Schwarz Inequality

    ERIC Educational Resources Information Center

    Mercer, Peter R.

    2007-01-01

    The author presents a refinement of the Cauchy-Schwarz inequality. He shows his computations in which refinements of the triangle inequality and its reverse inequality are obtained for nonzero x and y in a normed linear space.

  15. Reformulated Gasoline Market Affected Refiners Differently, 1995

    EIA Publications

    1996-01-01

    This article focuses on the costs of producing reformulated gasoline (RFG) as experienced by different types of refiners and on how these refiners fared this past summer, given the prices for RFG at the refinery gate.

  16. Firing of pulverized solvent refined coal

    DOEpatents

    Derbidge, T. Craig; Mulholland, James A.; Foster, Edward P.

    1986-01-01

    An air-purged burner for the firing of pulverized solvent refined coal is constructed and operated such that the solvent refined coal can be fired without the coking thereof on the burner components. The air-purged burner is designed for the firing of pulverized solvent refined coal in a tangentially fired boiler.

  17. Solidification Based Grain Refinement in Steels

    DTIC Science & Technology

    2010-07-20

    thermodynamics. 2) Experimentally verify the effectiveness of possible nucleating compounds. 3) Extend grain refinement theory and solidification knowledge through experimental data. 4) Determine structure-property relationships for the examined grain refiners. 5) Formulate processing techniques for using grain refiners in the steel casting industry. During Fiscal Year 2010, this project worked on determining structure-property relationships.

  18. Solution-Adaptive Cartesian Cell Approach for Viscous and Inviscid Flows

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1996-01-01

    A Cartesian cell-based approach for adaptively refined solutions of the Euler and Navier-Stokes equations in two dimensions is presented. Grids about geometrically complicated bodies are generated automatically, by the recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal cut cells are created using modified polygon-clipping algorithms. The grid is stored in a binary tree data structure that provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite volume formulation. The convective terms are upwinded: a linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The results of a study comparing the accuracy and positivity of two classes of cell-centered, viscous gradient reconstruction procedures are briefly summarized. Adaptively refined solutions of the Navier-Stokes equations are shown using the more robust of these gradient reconstruction procedures, where the results computed by the Cartesian approach are compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
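
    The recursive-subdivision and tree-storage idea described above can be illustrated with a minimal 2-D quadtree sketch in Python (hypothetical code, not from the paper): cells flagged by a sensor are split into four children, and the leaves of the tree form the active grid.

        class Cell:
            # A square Cartesian cell; an empty children list marks a leaf.
            def __init__(self, x, y, size, level=0):
                self.x, self.y, self.size, self.level = x, y, size, level
                self.children = []

            def refine(self, flagged, max_level=6):
                # Recursively subdivide wherever the sensor flags the cell.
                if self.level < max_level and flagged(self):
                    half = self.size / 2
                    self.children = [
                        Cell(self.x + dx * half, self.y + dy * half, half, self.level + 1)
                        for dx in (0, 1) for dy in (0, 1)
                    ]
                    for child in self.children:
                        child.refine(flagged, max_level)

            def leaves(self):
                # Leaf cells form the active computational grid.
                if not self.children:
                    yield self
                else:
                    for c in self.children:
                        yield from c.leaves()

        def near_body(cell):
            # Example sensor: refine toward a circle of radius 0.3 at the origin.
            cx, cy = cell.x + cell.size / 2, cell.y + cell.size / 2
            return abs((cx**2 + cy**2) ** 0.5 - 0.3) < cell.size

        root = Cell(-1.0, -1.0, 2.0)   # one cell encompassing the whole domain
        root.refine(near_body)
        print(sum(1 for _ in root.leaves()), "leaf cells")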

  19. A multigrid method for steady Euler equations on unstructured adaptive grids

    NASA Technical Reports Server (NTRS)

    Riemslagh, Kris; Dick, Erik

    1993-01-01

    A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first-order accurate inner iteration and a second-order correction performed only on the finest grid is used. A multi-stage Jacobi relaxation method is employed as a smoother. Since the grid is unstructured, a Jacobi-type method is chosen. The multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with more or less uniform distribution of nodes but with different resolution are generated by successive refinement of the coarsest grid. Nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure. The coarser multigrid grids are generated by successive coarsening through point removal. The adaption cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.

  20. Adaptive management

    USGS Publications Warehouse

    Allen, Craig R.; Garmestani, Ahjond S.

    2015-01-01

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.

  1. Adaptive Mesh Refinement for Hyperbolic Partial Differential Equations

    DTIC Science & Technology

    1983-03-01

    grids. We use either the Coarse Mesh Approximation Method (Ciment [1971]) or interpolation from a coarser grid to get the boundary values. In Berger...Problems, Math. Comp. 31 (1977), 333-390. M. Ciment, Stable Difference Schemes with Uneven Mesh Spacings, Math. Comp. 25 (1971), 219-227. H. Cramér

  2. Grain Refinement of Deoxidized Copper

    NASA Astrophysics Data System (ADS)

    Balart, María José; Patel, Jayesh B.; Gao, Feng; Fan, Zhongyun

    2016-10-01

    This study reports the current status of grain refinement of copper accompanied in particular by a critical appraisal of grain refinement of phosphorus-deoxidized, high residual P (DHP) copper microalloyed with 150 ppm Ag. Some deviations exist in terms of the growth restriction factor (Q) framework, on the basis of empirical evidence reported in the literature for grain size measurements of copper with individual additions of 0.05, 0.1, and 0.5 wt pct of Mo, In, Sn, Bi, Sb, Pb, and Se, cast under a protective atmosphere of pure Ar and water quenching. The columnar-to-equiaxed transition (CET) has been observed in copper, with an individual addition of 0.4B and with combined additions of 0.4Zr-0.04P and 0.4Zr-0.04P-0.015Ag and, in a previous study, with combined additions of 0.1Ag-0.069P (in wt pct). CETs in these B- and Zr-treated casts have been ascribed to changes in the morphology and chemistry of particles, concurrently in association with free solute type and availability. No further grain-refining action was observed due to microalloying additions of B, Mg, Ca, Zr, Ti, Mn, In, Fe, and Zn (~0.1 wt pct) with respect to DHP-Cu microalloyed with Ag, and these are therefore no longer relevant for the casting conditions studied. The critical microalloying element for grain size control in deoxidized copper, and in particular DHP-Cu, is Ag.
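
    For reference, the growth restriction factor Q invoked above is conventionally defined for a dilute binary addition as

        Q = m c_0 (k - 1)

    where m is the liquidus slope, c_0 the solute content, and k the equilibrium partition coefficient; this standard definition is supplied for context rather than quoted from the paper. Summing the contributions of each microalloying addition is what allows grain size trends to be ranked against the Q framework.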

  3. Crystallographic refinement of ligand complexes

    PubMed Central

    Kleywegt, Gerard J.

    2007-01-01

    Model building and refinement of complexes between biomacromolecules and small molecules requires sensible starting coordinates as well as the specification of restraint sets for all but the most common non-macromolecular entities. Here, it is described why this is necessary, how it can be accomplished and what pitfalls need to be avoided in order to produce chemically plausible models of the low-molecular-weight entities. A number of programs, servers, databases and other resources that can be of assistance in the process are also discussed. PMID:17164531

  4. Improved successive refinement for wavelet-based embedded image compression

    NASA Astrophysics Data System (ADS)

    Creusere, Charles D.

    1999-10-01

    In this paper we consider a new form of successive coefficient refinement which can be used in conjunction with embedded compression algorithms like Shapiro's EZW (Embedded Zerotree Wavelet) and Said & Pearlman's SPIHT (Set Partitioning in Hierarchical Trees). Using the conventional refinement process, the approximation of a coefficient that was earlier determined to be significant is refined by transmitting one of two symbols--an 'up' symbol if the actual coefficient value is in the top half of the current uncertainty interval or a 'down' symbol if it is in the bottom half. In the modified scheme developed here, we transmit one of three symbols instead--'up', 'down', or 'exact'. The new 'exact' symbol tells the decoder that its current approximation of a wavelet coefficient is exact to the level of precision desired. By applying this scheme in earlier work to lossless embedded compression (also called lossy/lossless compression), we achieved significant reductions in encoder and decoder execution times with no adverse impact on compression efficiency. These excellent results for lossless systems inspired us to adapt this refinement approach to lossy embedded compression. Unfortunately, the results we have achieved thus far for lossy compression are not as good.
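
    A minimal Python sketch of the modified refinement step (hypothetical code; entropy coding and significance passes are simplified away) makes the three-symbol idea concrete:

        def refine_symbol(coeff, low, high, tolerance):
            # One refinement step for a coefficient already known significant.
            # The uncertainty interval [low, high) is halved as usual, but an
            # 'exact' symbol is emitted once the midpoint matches coeff to
            # within tolerance, letting the decoder stop refining it early.
            mid = (low + high) / 2
            if abs(coeff - mid) <= tolerance:
                return "exact", mid, mid
            elif coeff >= mid:
                return "up", mid, high      # value lies in the top half
            else:
                return "down", low, mid     # value lies in the bottom half

        # Example: refine coeff = 37 inside the initial interval [32, 64).
        low, high, coeff = 32.0, 64.0, 37.0
        while low < high:
            symbol, low, high = refine_symbol(coeff, low, high, tolerance=0.5)
            print(symbol)                   # down, down, up, down, exact
            if symbol == "exact":
                break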

  5. Using output to evaluate and refine rules in rule-based expert systems

    NASA Technical Reports Server (NTRS)

    St.clair, D. C.; Bond, W. E.; Flachsbart, B. B.

    1987-01-01

    The techniques described provide an effective tool which knowledge engineers and domain experts can utilize to help in evaluating and refining rules. These techniques have been used successfully as learning mechanisms in a prototype adaptive diagnostic expert system and are applicable to other types of expert systems. The degree to which they constitute complete evaluation/refinement of an expert system depends on the thoroughness of their use.

  6. A novel two-stage discrete crack method based on the screened Poisson equation and local mesh refinement

    NASA Astrophysics Data System (ADS)

    Areias, P.; Rabczuk, T.; de Sá, J. César

    2016-12-01

    We propose an alternative crack propagation algorithm which effectively circumvents the variable transfer procedure adopted with classical mesh adaptation algorithms. The present alternative consists of two stages: a mesh-creation stage, where a local damage model is employed with the objective of defining a crack-conforming mesh, and a subsequent analysis stage with a localization limiter in the form of a modified screened Poisson equation, which dispenses with crack path calculations. In the second stage, the crack naturally occurs within the refined region. A staggered scheme for the standard equilibrium and screened Poisson equations is used in this second stage. Element subdivision is based on edge-split operations using a constitutive quantity (damage). To assess the robustness and accuracy of this algorithm, we use five quasi-brittle benchmarks, all successfully solved.
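
    For orientation, localization limiters of the implicit-gradient type are commonly written as a screened Poisson equation of the following generic form; the authors' modified equation may differ in detail, so treat this as a sketch rather than the paper's formulation:

        \bar{\varepsilon} - \ell^2 \nabla^2 \bar{\varepsilon} = \varepsilon

    where \varepsilon is the local damage-driving quantity, \bar{\varepsilon} its smoothed (nonlocal) counterpart, and \ell the length scale that keeps the damage band mesh-independent.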

  7. More Refined Experiments with Hemoglobin.

    ERIC Educational Resources Information Center

    Morin, Phillippe

    1985-01-01

    Discusses materials needed, procedures used, and typical results obtained for experiments designed to make a numerical stepwise study of the oxygenation of hemoglobin, myoglobin, and other oxygen carriers. (JN)

  8. Three-dimensional adaptive grid-embedding Euler technique

    NASA Astrophysics Data System (ADS)

    Davis, Roger L.; Dannenhoffer, John F., III

    1994-06-01

    A new three-dimensional adaptive-grid Euler procedure is presented that automatically detects high-gradient regions in the flow and locally subdivides the computational grid in these regions to provide a uniform, high level of accuracy over the entire domain. A tunable, semistructured data system is utilized that provides global topological unstructured-grid flexibility along with the efficiency of a local, structured-grid system. In addition, this data structure allows the flow solution algorithm to be executed on a wide variety of parallel/vector computing platforms. An explicit, time-marching, control volume procedure is used to integrate the Euler equations to a steady state. In addition, a multiple-grid procedure is used throughout the embedded-grid regions, as well as on subgrids coarser than the initial grid, to accelerate convergence and properly propagate disturbance waves through refined-grid regions. Upon convergence, high flow-gradient regions, where it is assumed that large truncation errors in the solution exist, are detected using a combination of directional refinement vectors that have large components in areas of these gradients. The local computational grid is directionally subdivided in these regions and the flow solution is reinitiated. Overall convergence occurs when a prespecified level of accuracy is reached. Solutions are presented that demonstrate the efficiency and accuracy of the present procedure.

  9. Thermal Adaptation Methods of Urban Plaza Users in Asia’s Hot-Humid Regions: A Taiwan Case Study

    PubMed Central

    Wu, Chen-Fa; Hsieh, Yen-Fen; Ou, Sheng-Jung

    2015-01-01

    Thermal adaptation studies provide researchers great insight into how people respond to thermal discomfort. This research aims to assess outdoor urban plaza conditions in hot and humid regions of Asia by conducting an evaluation of thermal adaptation. We also propose questionnaire items appropriate for determining the thermal adaptation strategies adopted by urban plaza users. A literature review was conducted, and first-hand information on thermal adaptation strategies was collected through field observations and interviews. Item analysis, exploratory factor analysis (EFA), and confirmatory factor analysis (CFA) were applied to refine the questionnaire items and determine the reliability of the questionnaire evaluation procedure. The reliability and validity of the items and the construction process were also analyzed. The researchers then established an evaluation procedure for assessing the thermal adaptation strategies of urban plaza users in hot and humid regions of Asia and formulated a questionnaire survey that was distributed in Taichung’s Municipal Plaza in Taiwan. Results showed that most users responded with behavioral adaptation when experiencing thermal discomfort. However, if the thermal discomfort could not be alleviated, they then adopted psychological strategies. In conclusion, the evaluation procedure for assessing thermal adaptation strategies and the questionnaire developed in this study can be applied to future research on thermal adaptation strategies adopted by urban plaza users in hot and humid regions of Asia. PMID:26516881

  10. Thermal Adaptation Methods of Urban Plaza Users in Asia's Hot-Humid Regions: A Taiwan Case Study.

    PubMed

    Wu, Chen-Fa; Hsieh, Yen-Fen; Ou, Sheng-Jung

    2015-10-27

    Thermal adaptation studies provide researchers great insight into how people respond to thermal discomfort. This research aims to assess outdoor urban plaza conditions in hot and humid regions of Asia by conducting an evaluation of thermal adaptation. We also propose questionnaire items appropriate for determining the thermal adaptation strategies adopted by urban plaza users. A literature review was conducted, and first-hand information on thermal adaptation strategies was collected through field observations and interviews. Item analysis, exploratory factor analysis (EFA), and confirmatory factor analysis (CFA) were applied to refine the questionnaire items and determine the reliability of the questionnaire evaluation procedure. The reliability and validity of the items and the construction process were also analyzed. The researchers then established an evaluation procedure for assessing the thermal adaptation strategies of urban plaza users in hot and humid regions of Asia and formulated a questionnaire survey that was distributed in Taichung's Municipal Plaza in Taiwan. Results showed that most users responded with behavioral adaptation when experiencing thermal discomfort. However, if the thermal discomfort could not be alleviated, they then adopted psychological strategies. In conclusion, the evaluation procedure for assessing thermal adaptation strategies and the questionnaire developed in this study can be applied to future research on thermal adaptation strategies adopted by urban plaza users in hot and humid regions of Asia.

  11. Materials refining on the Moon

    NASA Astrophysics Data System (ADS)

    Landis, Geoffrey A.

    2007-05-01

    Oxygen, metals, silicon, and glass are raw materials that will be required for long-term habitation and production of structural materials and solar arrays on the Moon. A process sequence is proposed for refining these materials from lunar regolith, consisting of separating the required materials from lunar rock with fluorine. The fluorine is brought to the Moon in the form of potassium fluoride, and is liberated from the salt by electrolysis in a eutectic salt melt. Tetrafluorosilane produced by this process is reduced to silicon by a plasma reduction stage; the fluorine salts are reduced to metals by reaction with metallic potassium. Fluorine is recovered from residual MgF2 and CaF2 by reaction with K2O.

  12. Assume-Guarantee Abstraction Refinement Meets Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Bogomolov, Sergiy; Frehse, Goran; Greitschus, Marius; Grosu, Radu; Pasareanu, Corina S.; Podelski, Andreas; Strump, Thomas

    2014-01-01

    Compositional verification techniques in the assume-guarantee style have been successfully applied to transition systems to efficiently reduce the search space by leveraging the compositional nature of the systems under consideration. We adapt these techniques to the domain of hybrid systems with affine dynamics. To build assumptions we introduce an abstraction based on location merging. We integrate the assume-guarantee style analysis with automatic abstraction refinement. We have implemented our approach in the symbolic hybrid model checker SpaceEx. The evaluation shows its practical potential. To the best of our knowledge, this is the first work combining assume-guarantee reasoning with automatic abstraction-refinement in the context of hybrid automata.
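
    To make the abstraction-refinement loop concrete, here is a toy counterexample-guided version on an explicit finite transition system, merging states into blocks much as the paper merges locations. It is a plain-Python sketch under strong simplifications (finite states, no affine dynamics, no assume-guarantee decomposition), not the SpaceEx implementation; all names are illustrative.

      from collections import deque

      def find_abstract_path(trans, block_of, init, bad):
          """BFS over the block-level (abstract) transition relation; returns a
          list of blocks from init's block to bad's block, or None."""
          succ = {}
          for s, t in trans:
              succ.setdefault(block_of[s], set()).add(block_of[t])
          start, goal = block_of[init], block_of[bad]
          parent, queue = {start: None}, deque([start])
          while queue:
              b = queue.popleft()
              if b == goal:
                  path = []
                  while b is not None:
                      path.append(b)
                      b = parent[b]
                  return path[::-1]
              for nb in succ.get(b, ()):
                  if nb not in parent:
                      parent[nb] = b
                      queue.append(nb)
          return None

      def first_spurious_step(trans, block_of, init, path):
          """Replay an abstract path concretely; return the index of the first
          unmatchable step, or None if the path is concretely realizable."""
          frontier = {init}
          for i, b in enumerate(path[1:], start=1):
              frontier = {t for s, t in trans if s in frontier and block_of[t] == b}
              if not frontier:
                  return i
          return None

      def cegar(trans, states, init, bad):
          block_of = {s: 0 for s in states}   # coarsest abstraction: merge all...
          block_of[bad] = 1                   # ...except the bad state
          fresh = 2
          while True:
              path = find_abstract_path(trans, block_of, init, bad)
              if path is None:
                  return "safe"
              i = first_spurious_step(trans, block_of, init, path)
              if i is None:
                  return "counterexample"
              # Refine: split the block before the failed step, separating the
              # states that really can step into path[i] from those that cannot.
              movers = [s for s, t in trans
                        if block_of[s] == path[i - 1] and block_of[t] == path[i]]
              for s in movers:
                  block_of[s] = fresh
              fresh += 1

      # Hypothetical 6-state system in which 'err' is unreachable from 'a':
      states = ["a", "b", "c", "d", "e", "err"]
      trans = [("a", "b"), ("b", "c"), ("c", "a"), ("d", "e"), ("e", "err")]
      print(cegar(trans, states, init="a", bad="err"))   # -> safe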

  13. Silicon refinement by chemical vapor transport

    NASA Technical Reports Server (NTRS)

    Olson, J.

    1984-01-01

    Silicon refinement by chemical vapor transport is discussed. The operating characteristics of the purification process, including factors affecting the rate, purification efficiency, and photovoltaic quality of the refined silicon, were studied. The casting of large alloy plates was accomplished. A larger research-scale reactor is characterized, and it is shown that the refined silicon product yields solar cells with conversion efficiencies near the state of the art.

  14. 1988 worldwide refining and gas processing directory

    SciTech Connect

    Not Available

    1987-01-01

    Innumerable revisions in names, addresses, phone numbers, telex numbers, and cable numbers have been made since the publication of the previous edition. This directory also contains several of the most vital and informative surveys of the petroleum industry, including the U.S. Refining Survey; the Worldwide Construction Survey in Refining, Sulfur, Gas Processing and Related Fuels; the Worldwide Refining and Gas Processing Survey; the Worldwide Catalyst Report; and the U.S. and Canadian Lube and Wax Capacities Report from the National Petroleum Refiners Association.

  15. Adaptive Image Denoising by Mixture Adaptation

    NASA Astrophysics Data System (ADS)

    Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the Expectation-Maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad-hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper: First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. Experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.
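
    The sketch below conveys the flavor of one EM adaptation pass on a Gaussian-mixture patch prior: responsibilities are computed for the observed patches, then the mixture weights and means are blended toward the image-specific statistics. This is a simplified illustration only; the paper's algorithm is derived from a Bayesian hyper-prior, treats noise explicitly and includes covariance updates and pre-filtering. The blending parameter rho and all names are assumptions.

      import numpy as np
      from scipy.stats import multivariate_normal

      def em_adapt(weights, means, covs, patches, rho=0.5):
          """One EM adaptation pass; rho blends generic and image statistics.
          Covariances are held fixed here for brevity."""
          K, N = len(weights), patches.shape[0]
          # E-step: responsibility of each component for each noisy patch.
          resp = np.stack([w * multivariate_normal.pdf(patches, mean=m, cov=c)
                           for w, m, c in zip(weights, means, covs)], axis=1)
          resp /= resp.sum(axis=1, keepdims=True)
          # M-step, damped toward the generic prior instead of replacing it.
          Nk = resp.sum(axis=0)
          new_weights = (1.0 - rho) * weights + rho * Nk / N
          new_means = np.array([(1.0 - rho) * means[k]
                                + rho * (resp[:, k] @ patches) / Nk[k]
                                for k in range(K)])
          return new_weights, new_means, covs

      # Hypothetical use: a 2-component prior on 4-dimensional patches.
      rng = np.random.default_rng(0)
      weights = np.array([0.5, 0.5])
      means = np.zeros((2, 4))
      covs = [np.eye(4), np.eye(4)]
      noisy_patches = rng.normal(0.3, 1.0, size=(200, 4))
      w_new, m_new, _ = em_adapt(weights, means, covs, noisy_patches)
      print(w_new, m_new[0])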

  16. Adaptive Image Denoising by Mixture Adaptation.

    PubMed

    Luo, Enming; Chan, Stanley H; Nguyen, Truong Q

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.

  17. Refining the shallow slip deficit

    NASA Astrophysics Data System (ADS)

    Xu, Xiaohua; Tong, Xiaopeng; Sandwell, David T.; Milliner, Christopher W. D.; Dolan, James F.; Hollingsworth, James; Leprince, Sebastien; Ayoub, Francois

    2016-03-01

    Geodetic slip inversions for three major (Mw > 7) strike-slip earthquakes (1992 Landers, 1999 Hector Mine and 2010 El Mayor-Cucapah) show a 15-60 per cent reduction in slip near the surface (depth < 2 km) relative to the slip at deeper depths (4-6 km). This significant difference between surface coseismic slip and slip at depth has been termed the shallow slip deficit (SSD). The large magnitude of this deficit has been an enigma since it cannot be explained by shallow creep during the interseismic period or by triggered slip from nearby earthquakes. One potential explanation for the SSD is that the previous geodetic inversions lack data coverage close to the surface rupture, such that the shallow portions of the slip models are poorly resolved and generally underestimated. In this study, we improve the static coseismic slip inversion for these three earthquakes, especially at shallow depths, by: (1) including data capturing the near-fault deformation from optical imagery and SAR azimuth offsets; (2) refining the interferometric synthetic aperture radar processing with non-boxcar phase filtering, model-dependent range corrections, and more complete phase unwrapping by SNAPHU (Statistical-cost, Network-flow Algorithm for Phase Unwrapping) assuming a maximum discontinuity and an on-fault correlation mask; (3) using more detailed, geologically constrained fault geometries and (4) incorporating additional campaign global positioning system (GPS) data. The refined slip models result in much smaller SSDs of 3-19 per cent. We suspect that the remaining minor SSD for these earthquakes likely reflects a combination of our elastic model's inability to fully account for near-surface deformation, which renders our estimates of shallow slip lower bounds, and potentially small amounts of interseismic fault creep or triggered slip, which could 'make up' a small percentage of the coseismic SSD during the interseismic period. Our results indicate that it is imperative that slip inversions include

  18. Hirshfeld atom refinement for modelling strong hydrogen bonds.

    PubMed

    Woińska, Magdalena; Jayatilaka, Dylan; Spackman, Mark A; Edwards, Alison J; Dominiak, Paulina M; Woźniak, Krzysztof; Nishibori, Eiji; Sugimoto, Kunihisa; Grabowsky, Simon

    2014-09-01

    High-resolution low-temperature synchrotron X-ray diffraction data of the salt L-phenylalaninium hydrogen maleate are used to test the new automated iterative Hirshfeld atom refinement (HAR) procedure for the modelling of strong hydrogen bonds. The HAR models used present the first examples of Z' > 1 treatments in the framework of wavefunction-based refinement methods. L-Phenylalaninium hydrogen maleate exhibits several hydrogen bonds in its crystal structure, of which the shortest and the most challenging to model is the O-H...O intramolecular hydrogen bond present in the hydrogen maleate anion (O...O distance is about 2.41 Å). In particular, the reconstruction of the electron density in the hydrogen maleate moiety and the determination of hydrogen-atom properties [positions, bond distances and anisotropic displacement parameters (ADPs)] are the focus of the study. For comparison to the HAR results, different spherical (independent atom model, IAM) and aspherical (free multipole model, MM; transferable aspherical atom model, TAAM) X-ray refinement techniques as well as results from a low-temperature neutron-diffraction experiment are employed. Hydrogen-atom ADPs are furthermore compared to those derived from a TLS/rigid-body (SHADE) treatment of the X-ray structures. The reference neutron-diffraction experiment reveals a truly symmetric hydrogen bond in the hydrogen maleate anion. Only with HAR is it possible to freely refine hydrogen-atom positions and ADPs from the X-ray data, which leads to the best electron-density model and the closest agreement with the structural parameters derived from the neutron-diffraction experiment, e.g. the symmetric hydrogen position can be reproduced. The multipole-based refinement techniques (MM and TAAM) yield slightly asymmetric positions, whereas the IAM yields a significantly asymmetric position.

  19. Parallel three-dimensional magnetotelluric inversion using adaptive finite-element method. Part I: theory and synthetic study

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.

    2015-07-01

    This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for the forward and inverse problems were decoupled. For calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gain, EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems based on the linearized model resolution matrix was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to fill the gap between initial and true model complexities and better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighbourhoods of points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit a dependency on the initial model guess. Additionally, it is demonstrated

  20. Automated knowledge-base refinement

    NASA Technical Reports Server (NTRS)

    Mooney, Raymond J.

    1994-01-01

    Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.

  1. Anomalies in the refinement of isoleucine

    SciTech Connect

    Berntsen, Karen R. M.; Vriend, Gert

    2014-04-01

    The side-chain torsion angles of isoleucines in X-ray protein structures are a function of resolution, secondary structure and refinement software. Detailing the standard torsion angles used in refinement software can improve protein structure refinement. A study of isoleucines in protein structures solved using X-ray crystallography revealed a series of systematic trends for the two side-chain torsion angles χ1 and χ2 dependent on the resolution, secondary structure and refinement software used. The average torsion angles for the nine rotamers were similar in high-resolution structures solved using either the REFMAC, CNS or PHENIX software. However, at low resolution these programs often refine towards somewhat different χ1 and χ2 values. Small systematic differences can be observed between refinement software that uses molecular dynamics-type energy terms (for example CNS) and software that does not use these terms (for example REFMAC). Detailing the standard torsion angles used in refinement software can improve the refinement of protein structures. The target values in the molecular dynamics-type energy functions can also be improved.

  2. Adaptation of the Illness Trajectory Theory to Describe the Work of Transitional Cancer Survivorship

    PubMed Central

    Klimmek, Rachel; Wenzel, Jennifer

    2013-01-01

    Purpose/Objectives Although frameworks for understanding survivorship continue to evolve, most are abstract and do not address the complex context of survivors’ transition following treatment completion. The purpose of this theory adaptation was to examine and refine the Illness Trajectory Theory, which describes the work of managing chronic illness, to address transitional cancer survivorship. Data Sources CINAHL, PubMed, and relevant Institute of Medicine reports were searched for survivors’ experiences during the year following treatment. Data Synthesis Using an abstraction tool, sixty-eight articles were selected from the initial search (N>700). Abstracted data were placed into a priori categories refined according to recommended procedures for theory derivation, followed by expert review. Conclusions Derivation resulted in a framework describing “the work of transitional cancer survivorship” (TCS work). TCS work is defined as survivor tasks, performed alone or with others, to carry out a plan of action for managing one or more aspects of life following primary cancer treatment. Theoretically, survivors engage in 3 reciprocally-interactive lines of work: (1) illness-related; (2) biographical; and (3) everyday life work. Adaptation resulted in refinement of these domains and the addition of survivorship care planning under “illness-related work”. Implications for Nursing Understanding this process of work may allow survivors/co-survivors to better prepare for the post-treatment period. This adaptation provides a framework for future testing and development. Validity and utility of this framework within specific survivor populations should also be explored. PMID:23107863

  3. Modeling Languages Refine Vehicle Design

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Cincinnati, Ohio's TechnoSoft Inc. is a leading provider of object-oriented modeling and simulation technology used for commercial and defense applications. With funding from Small Business Innovation Research (SBIR) contracts issued by Langley Research Center, the company continued development on its adaptive modeling language, or AML, originally created for the U.S. Air Force. TechnoSoft then created what is now known as its Integrated Design and Engineering Analysis Environment, or IDEA, which can be used to design a variety of vehicles and machinery. IDEA's customers include clients in green industries, such as designers for power plant exhaust filtration systems and wind turbines.

  4. North Dakota Refining Capacity Study

    SciTech Connect

    Dennis Hill; Kurt Swenson; Carl Tuura; Jim Simon; Robert Vermette; Gilberto Marcha; Steve Kelly; David Wells; Ed Palmer; Kuo Yu; Tram Nguyen; Juliam Migliavacca

    2011-01-05

    According to a 2008 report issued by the United States Geological Survey, North Dakota and Montana have an estimated 3.0 to 4.3 billion barrels of undiscovered, technically recoverable oil in an area known as the Bakken Formation. With the size and remoteness of the discovery, the question became 'can a business case be made for increasing refining capacity in North Dakota?' And, if so, what is the impact on existing players in the region? To answer the question, a study committee comprised of leaders in the region's petroleum industry was brought together to define the scope of the study, hire a consulting firm and oversee the study. The study committee met frequently to provide input on the findings and modify the course of the study, as needed. The study concluded that Petroleum Administration for Defense District II (PADD II) has an oversupply of gasoline. With that in mind, a niche market, naphtha, was identified. Naphtha is used as a diluent for pipelining the bitumen (heavy crude) from Canada to crude markets. The study predicted there will continue to be an increase in the demand for naphtha through 2030. The study estimated the optimal configuration for the refinery at 34,000 barrels per day (BPD), producing 15,000 BPD of naphtha and a 52 percent refinery charge for jet and diesel yield. The financial modeling assumed the sponsor of a refinery would invest its own capital to pay for construction costs. With this assumption, the internal rate of return is 9.2 percent, which is not sufficient to attract traditional investment given the risk factor of the project. With that in mind, those interested in pursuing this niche market will need to identify incentives to improve the rate of return.

  5. Quantifying the adaptive cycle

    USGS Publications Warehouse

    Angeler, David G.; Allen, Craig R.; Garmestani, Ahjond S.; Gunderson, Lance H.; Hjerne, Olle; Winder, Monika

    2015-01-01

    The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994–2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding of how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems.

  6. Quantifying the Adaptive Cycle.

    PubMed

    Angeler, David G; Allen, Craig R; Garmestani, Ahjond S; Gunderson, Lance H; Hjerne, Olle; Winder, Monika

    2015-01-01

    The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994-2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding of how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems.

  7. DSR: enhanced modelling and refinement of disordered structures with SHELXL

    PubMed Central

    Kratzert, Daniel; Holstein, Julian J.; Krossing, Ingo

    2015-01-01

    One of the remaining challenges in single-crystal structure refinement is the proper description of disorder in crystal structures. This paper describes a computer program that performs semi-automatic modelling of disordered moieties in SHELXL [Sheldrick (2015). Acta Cryst. C71, 3–8.]. The new program contains a database that includes molecular fragments and their corresponding stereochemical restraints, and a placement procedure to place these fragments on the desired position in the unit cell. The program is also suitable for speeding up model building of well ordered crystal structures. PMID:26089767

  8. Dynamics and Adaptive Control for Stability Recovery of Damaged Aircraft

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan; Krishnakumar, Kalmanje; Kaneshige, John; Nespeca, Pascal

    2006-01-01

    This paper presents a recent study of a damaged generic transport model as part of a NASA research project to investigate adaptive control methods for stability recovery of damaged aircraft operating in off-nominal flight conditions under damage and/or failures. Aerodynamic modeling of damage effects is performed using an aerodynamic code to assess changes in the stability and control derivatives of a generic transport aircraft. Certain types of damage, such as damage to one of the wings or horizontal stabilizers, can cause the aircraft to become asymmetric, thus resulting in a coupling between the longitudinal and lateral motions. Flight dynamics for a general asymmetric aircraft is derived to account for changes in the center of gravity that can compromise the stability of the damaged aircraft. An iterative trim analysis for the translational motion is developed to refine the trim procedure by accounting for the effects of the control surface deflection. A hybrid direct-indirect neural network adaptive flight control is proposed as an adaptive law for stabilizing the rotational motion of the damaged aircraft. The indirect adaptation is designed to estimate the plant dynamics of the damaged aircraft in conjunction with the direct adaptation that computes the control augmentation. Two approaches are presented: 1) an adaptive law derived from the Lyapunov stability theory to ensure that the signals are bounded, and 2) a recursive least-squares method for parameter identification. A hardware-in-the-loop simulation is conducted and demonstrates the effectiveness of the direct neural network adaptive flight control in the stability recovery of the damaged aircraft. A preliminary simulation of the hybrid adaptive flight control has been performed and initial data have shown the effectiveness of the proposed hybrid approach. Future work will include further investigations and high-fidelity simulations of the proposed hybrid adaptive flight control approach.
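
    Of the two approaches, the recursive least-squares parameter identification lends itself to a compact sketch. The following is a generic RLS estimator, not the paper's implementation; the regressor model and all names are illustrative.

      import numpy as np

      class RecursiveLeastSquares:
          """Generic RLS estimator for y = phi . theta with forgetting factor."""
          def __init__(self, n_params, forgetting=1.0, p0=1e4):
              self.theta = np.zeros(n_params)       # parameter estimates
              self.P = np.eye(n_params) * p0        # estimate covariance
              self.lam = forgetting                 # forgetting factor (<= 1)

          def update(self, phi, y):
              Pphi = self.P @ phi
              gain = Pphi / (self.lam + phi @ Pphi)
              self.theta = self.theta + gain * (y - phi @ self.theta)
              self.P = (self.P - np.outer(gain, Pphi)) / self.lam
              return self.theta

      # Example: recover y = 2*u1 - 0.5*u2 from noisy samples.
      rng = np.random.default_rng(1)
      rls = RecursiveLeastSquares(2)
      for _ in range(500):
          phi = rng.normal(size=2)
          y = phi @ np.array([2.0, -0.5]) + 0.01 * rng.normal()
          theta = rls.update(phi, y)
      print(theta)                                  # approximately [2.0, -0.5]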

  9. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown, so solutions may be computed at a conservatively high resolution. Computable error estimates offer the possibility of minimizing computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user-specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high-lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
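
    The error estimate referred to here is commonly of the adjoint-weighted-residual form; the following is a generic statement of the technique, not necessarily the paper's exact formulation:

      J(u) \approx J(u_H) + \psi^{T} R(u_H), \qquad
      \left( \frac{\partial R}{\partial u} \right)^{T} \psi = \left( \frac{\partial J}{\partial u} \right)^{T}

    where u_H is the coarse-grid solution, R the discrete residual and \psi the adjoint solution for the output J. Cells contributing large values of |\psi_k R_k(u_H)| are natural targets for refinement, and the correction term \psi^T R(u_H) shrinks as the mesh is adapted.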

  10. On the Factor Refinement Principle and its Implementation on Multicore Architectures

    NASA Astrophysics Data System (ADS)

    Mohsin Ali, Md; Moreno Maza, Marc; Xie, Yuzhen

    2012-10-01

    We propose a divide-and-conquer adaptation of the factor refinement algorithm of Bach, Driscoll and Shallit. For an ideal cache of Z words, with L words per block, the original approach suffers O(n^2/L) cache misses, whereas our adaptation incurs only O(n^2/(ZL)) cache misses. We have realized a multithreaded implementation of the latter using Cilk++, targeting multicores. Our code achieves linear speedup on 16 cores for sufficiently large input data.
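
    For reference, the underlying factor refinement computation (the quadratic-time version being adapted, not the paper's cache-efficient divide-and-conquer variant) can be sketched as follows: a list of moduli is repeatedly split on pairwise gcds, with exponents tracked so the product is preserved, until the bases are pairwise coprime.

      # Factor refinement into a coprime base; illustrative quadratic-time sketch.
      from math import gcd

      def factor_refine(nums):
          """Return (base, exponent) pairs with pairwise-coprime bases whose
          product equals the product of nums."""
          pairs = [(n, 1) for n in nums if n > 1]
          done = False
          while not done:
              done = True
              for i in range(len(pairs)):
                  for j in range(i + 1, len(pairs)):
                      (a, ea), (b, eb) = pairs[i], pairs[j]
                      d = gcd(a, b)
                      if d > 1:
                          # a^ea * b^eb == (a/d)^ea * (b/d)^eb * d^(ea+eb)
                          pairs[i], pairs[j] = (a // d, ea), (b // d, eb)
                          pairs.append((d, ea + eb))
                          pairs = [(m, e) for m, e in pairs if m > 1]
                          done = False
                          break
                  if not done:
                      break
          return sorted(pairs)

      print(factor_refine([12, 18]))   # [(2, 3), (3, 3)], i.e. 2^3 * 3^3 = 216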

  11. Protein NMR structures refined without NOE data.

    PubMed

    Ryu, Hyojung; Kim, Tae-Rae; Ahn, SeonJoo; Ji, Sunyoung; Lee, Jinhyuk

    2014-01-01

    The refinement of low-quality structures is an important challenge in protein structure prediction. Many studies have been conducted on protein structure refinement; the refinement of structures derived from NMR spectroscopy has been especially intensively studied. In this study, we generated a flat-bottom distance potential in place of NOE data, because NOE data carry ambiguity and uncertainty. The potential was derived from distance information in given structures and prevented structural dislocation during the refinement process. A simulated annealing protocol was used to minimize the potential energy of the structure. The protocol was tested on 134 NMR structures in the Protein Data Bank (PDB) that also have X-ray structures. Among them, 50 structures were used as a training set to find the optimal "width" parameter in the flat-bottom distance potential functions. In the validation set (the other 84 structures), most of the 12 quality assessment scores of the refined structures were significantly improved (total score increased from 1.215 to 2.044). Moreover, the secondary structure similarity of the refined structures was improved over that of the original structures. Finally, we demonstrate that the combination of two energy potentials, the statistical torsion angle potential (STAP) and the flat-bottom distance potential, can drive the refinement of NMR structures.
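
    A minimal sketch of a flat-bottom distance potential of the kind described: zero penalty within a "width" of the reference distance, growing only beyond it. The quadratic growth outside the flat region and all names are assumptions, not the paper's exact functional form.

      import numpy as np

      def flat_bottom_potential(r, r_ref, width, k=1.0):
          """Penalty on an interatomic distance r given a reference r_ref:
          zero for |r - r_ref| <= width, k * excess^2 beyond that, so small
          deviations from the (imprecise) reference are not penalized."""
          excess = np.maximum(np.abs(r - r_ref) - width, 0.0)
          return k * excess**2

      r = np.linspace(3.0, 7.0, 9)
      print(flat_bottom_potential(r, r_ref=5.0, width=0.5))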

  12. Femtosecond infrared intrastromal ablation and backscattering-mode adaptive-optics multiphoton microscopy in chicken corneas

    PubMed Central

    Gualda, Emilio J.; Vázquez de Aldana, Javier R.; Martínez-García, M. Carmen; Moreno, Pablo; Hernández-Toro, Juan; Roso, Luis; Artal, Pablo; Bueno, Juan M.

    2011-01-01

    The performance of femtosecond (fs) laser intrastromal ablation was evaluated with backscattering-mode adaptive-optics multiphoton microscopy in ex vivo chicken corneas. The pulse energy of the fs source used for ablation was set to generate two different ablation patterns within the corneal stroma at a certain depth. Intrastromal patterns were imaged with a custom adaptive-optics multiphoton microscope to determine the accuracy of the procedure and verify the outcomes. This study demonstrates the potential of using fs pulses as surgical and monitoring techniques to systematically investigate intratissue ablation. Further refinement of the experimental system by combining both functions into a single fs laser system would be the basis to establish new techniques capable of monitoring corneal surgery in real time without labeling. Since the backscattering configuration has also been optimized, future in vivo implementations would also be of interest in clinical environments involving corneal ablation procedures. PMID:22076258

  13. Femtosecond infrared intrastromal ablation and backscattering-mode adaptive-optics multiphoton microscopy in chicken corneas.

    PubMed

    Gualda, Emilio J; Vázquez de Aldana, Javier R; Martínez-García, M Carmen; Moreno, Pablo; Hernández-Toro, Juan; Roso, Luis; Artal, Pablo; Bueno, Juan M

    2011-11-01

    The performance of femtosecond (fs) laser intrastromal ablation was evaluated with backscattering-mode adaptive-optics multiphoton microscopy in ex vivo chicken corneas. The pulse energy of the fs source used for ablation was set to generate two different ablation patterns within the corneal stroma at a certain depth. Intrastromal patterns were imaged with a custom adaptive-optics multiphoton microscope to determine the accuracy of the procedure and verify the outcomes. This study demonstrates the potential of using fs pulses as surgical and monitoring techniques to systematically investigate intratissue ablation. Further refinement of the experimental system by combining both functions into a single fs laser system would be the basis to establish new techniques capable of monitoring corneal surgery in real time without labeling. Since the backscattering configuration has also been optimized, future in vivo implementations would also be of interest in clinical environments involving corneal ablation procedures.

  14. Shading-based DEM refinement under a comprehensive imaging model

    NASA Astrophysics Data System (ADS)

    Peng, Jianwei; Zhang, Yi; Shan, Jie

    2015-12-01

    This paper introduces an approach to refine coarse digital elevation models (DEMs) based on the shape-from-shading (SfS) technique using a single image. Different from previous studies, this approach is designed for heterogeneous terrain and derived from a comprehensive (extended) imaging model accounting for the combined effect of atmosphere, reflectance, and shading. To solve this intrinsically ill-posed problem, the least squares method and a subsequent optimization procedure are applied to estimate the shading component, from which the terrain gradient is recovered with a modified optimization method. Integrating the resultant gradients then yields a refined DEM at the same resolution as the input image. The proposed SfS method is evaluated using 30 m Landsat-8 OLI multispectral images and 30 m SRTM DEMs. As demonstrated in this paper, the proposed approach is able to reproduce terrain structures with higher fidelity and, at medium to large up-scale ratios, can achieve elevation accuracy 20-30% better than conventional interpolation methods. Further, this property is shown to be stable and independent of topographic complexity. With the ever-increasing public availability of satellite images and DEMs, the developed technique is meaningful for global or local DEM product refinement.
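
    The gradient-integration step can be illustrated with the standard FFT-based least-squares integrator (Frankot-Chellappa), assumed here for illustration; the paper's own integration method may differ.

      import numpy as np

      def integrate_gradients(p, q):
          """p = dz/dx (columns), q = dz/dy (rows) on a regular periodic grid;
          returns z up to an additive constant (least-squares integrability)."""
          rows, cols = p.shape
          u = np.fft.fftfreq(cols) * 2.0 * np.pi   # angular frequency per sample (x)
          v = np.fft.fftfreq(rows) * 2.0 * np.pi   # angular frequency per sample (y)
          U, V = np.meshgrid(u, v)
          denom = U**2 + V**2
          denom[0, 0] = 1.0                         # avoid 0/0 at the DC term
          Z = (-1j * U * np.fft.fft2(p) - 1j * V * np.fft.fft2(q)) / denom
          Z[0, 0] = 0.0                             # fix the free constant
          return np.real(np.fft.ifft2(Z))

      # Synthetic check with analytic gradients of a smooth periodic surface:
      n = 64
      jj, ii = np.mgrid[0:n, 0:n]                   # row (y) and column (x) indices
      w = 2.0 * np.pi / n
      z = np.sin(w * ii) * np.cos(w * jj)
      p = w * np.cos(w * ii) * np.cos(w * jj)       # dz/dx per sample
      q = -w * np.sin(w * ii) * np.sin(w * jj)      # dz/dy per sample
      print(np.allclose(integrate_gradients(p, q), z, atol=1e-8))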

  15. Adaptation to hot environmental conditions: an exploration of the performance basis, procedures and future directions to optimise opportunities for elite athletes.

    PubMed

    Guy, Joshua H; Deakin, Glen B; Edwards, Andrew M; Miller, Catherine M; Pyne, David B

    2015-03-01

    Extreme environmental conditions present athletes with diverse challenges; however, not all sporting events are limited by thermoregulatory parameters. The purpose of this leading article is to identify specific instances where hot environmental conditions either compromise or augment performance and, where heat acclimation appears justified, evaluate the effectiveness of pre-event acclimation processes. To identify events likely to be receptive to pre-competition heat adaptation protocols, we clustered and quantified the magnitude of difference in performance of elite athletes competing in International Association of Athletics Federations (IAAF) World Championships (1999-2011) in hot environments (>25 °C) with those in cooler temperate conditions (<25 °C). Athletes in endurance events performed worse in hot conditions (~3 % reduction in performance, Cohen's d > 0.8; large impairment), while in contrast, performance in short-duration sprint events was augmented in the heat compared with temperate conditions (~1 % improvement, Cohen's d > 0.8; large performance gain). As endurance events were identified as compromised by the heat, we evaluated common short-term heat acclimation (≤7 days, STHA) and medium-term heat acclimation (8-14 days, MTHA) protocols. This process identified beneficial effects of heat acclimation on performance using both STHA (2.4 ± 3.5 %) and MTHA protocols (10.2 ± 14.0 %). These effects were greater for MTHA, which also demonstrated larger reductions in both endpoint exercise heart rate (STHA: -3.5 ± 1.8 % vs MTHA: -7.0 ± 1.9 %) and endpoint core temperature (STHA: -0.7 ± 0.7 % vs MTHA: -0.8 ± 0.3 %). It appears that worthwhile acclimation is achievable for endurance athletes via both short- and medium-length protocols, but more is gained using MTHA. Conversely, it is also conceivable that heat acclimation may be counterproductive for sprinters. As high-performance athletes are often time-poor, shorter duration protocols may

  16. Refining of metallurgical-grade silicon

    NASA Technical Reports Server (NTRS)

    Dietl, J.

    1986-01-01

    A basic requirement of large-scale solar cell fabrication is to provide low-cost base material. Unconventional refining of metallurgical-grade silicon represents one of the most promising ways of silicon meltstock processing. The refining concept is based on an optimized combination of metallurgical treatments. Commercially available crude silicon, in this sequence, requires a first pyrometallurgical step by slagging or, alternatively, solvent extraction by aluminum. After grinding and leaching, high-purity quality is gained as an advanced stage of refinement. To reach solar-grade quality a final pyrometallurgical step is needed: liquid-gas extraction.

  17. Firing of pulverized solvent refined coal

    DOEpatents

    Lennon, Dennis R.; Snedden, Richard B.; Foster, Edward P.; Bellas, George T.

    1990-05-15

    A burner for the firing of pulverized solvent refined coal is constructed and operated such that the solvent refined coal can be fired successfully without any performance limitations and without the coking of the solvent refined coal on the burner components. The burner is provided with a tangential inlet of primary air and pulverized fuel, a vaned diffusion swirler for the mixture of primary air and fuel, a center water-cooled conical diffuser shielding the incoming fuel from the heat radiation from the flame and deflecting the primary air and fuel stream into the secondary air, and a water-cooled annulus located between the primary air and secondary air flows.

  18. Increasingly automated procedure acquisition in dynamic systems

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie; Kedar, Smadar

    1992-01-01

    Procedures are widely used by operators for controlling complex dynamic systems. Currently, most development of such procedures is done manually, consuming a large amount of paper, time, and manpower in the process. While automated knowledge acquisition is an active field of research, not much attention has been paid to the problem of computer-assisted acquisition and refinement of complex procedures for dynamic systems. We describe the Procedure Acquisition for Reactive Control Assistant (PARC), which is designed to assist users in more systematically and automatically encoding and refining complex procedures. PARC is able to elicit knowledge interactively from the user during operation of the dynamic system. We categorize procedure refinement into two stages: diagnosis (diagnose the failure and choose a repair) and repair (plan and perform the repair). The basic approach taken in PARC is to assist the user in all steps of this process by providing increased levels of assistance with layered tools. We illustrate the operation of PARC in refining procedures for the control of a robot arm.

  19. Enzyme immunoassays and related procedures in diagnostic medical virology

    PubMed Central

    Kurstak, Edouard; Tijssen, Peter; Kurstak, Christine; Morisset, Richard

    1986-01-01

    This review article describes several applications of the widely used enzyme immunoassay (EIA) procedure. EIA methods have been adapted to solve problems in diagnostic virology where sensitivity, specificity, or practicability is required. Concurrent developments in hybridoma and conjugation methods have increased significantly the use of these assays. A general overview of EIA methods is given together with typical examples of their use in diagnostic medical virology; attention is drawn to possible pitfalls. Recent advances in recombinant DNA technology have made it possible to produce highly specific nucleic acid probes that have a sensitivity approximately 100 times greater than that of EIA. Some applications of these probes are described. Although the non-labelled nucleic acid probes for use in the field are not as refined as non-labelled immunoassays, their range of applications is expected to expand rapidly in the near future. PMID:3533302

  20. New isobaric lignans from Refined Olive Oils as quality markers for Virgin Olive Oils.

    PubMed

    Cecchi, Lorenzo; Innocenti, Marzia; Melani, Fabrizio; Migliorini, Marzia; Conte, Lanfranco; Mulinacci, Nadia

    2017-03-15

    Herein we describe the influence of olive oil refining processes on the lignan profile. The detection of new isobaric lignans is suggested to reveal frauds in commercial extra-Virgin Olive Oils. We analyzed five commercial olive oils by HPLC-DAD-TOF/MS to evaluate their lignan content and detected, for the first time, some isobaric forms of natural (+)-pinoresinol and (+)-1-acetoxypinoresinol. Then we analyzed partially and fully-refined oils from Italy, Tunisia and Spain. The isobaric forms occur only during the bleaching step of the refining process and remain unaltered after the final deodorizing step. Molecular dynamic simulation helped to identify the most probable chemical structures corresponding to these new isobars, with data in agreement with the chromatographic findings. The total lignan amount in commercial olive oils was close to 2 mg/L. Detection of these new lignans can be used as a marker of undeclared refining procedures in commercial extra-virgin and/or Virgin Olive Oils.

  1. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.

  2. Refined Phenotyping of Modic Changes

    PubMed Central

    Määttä, Juhani H.; Karppinen, Jaro; Paananen, Markus; Bow, Cora; Luk, Keith D.K.; Cheung, Kenneth M.C.; Samartzis, Dino

    2016-01-01

    The strength of the associations increased with the number of MC. This large-scale study is the first to definitively note MC types and specific morphologies to be independently associated with prolonged severe LBP and back-related disability. This proposed refined MC phenotype may have direct implications in clinical decision-making as to the development and management of LBP. Understanding of these imaging biomarkers can lead to new preventative and personalized therapeutics related to LBP. PMID:27258491

  3. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    PubMed

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.
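
    The recursive subcycled time-stepping can be sketched independently of any particular PDE. The skeleton below is structural only (the per-grid advance and the coarse-fine synchronization are caller-supplied stubs), not the authors' code.

      # Recursive advance of an AMR hierarchy: the coarse level takes one step
      # of dt; each finer level takes r steps of dt/r to catch up, recursively,
      # after which coarse and fine data are synchronized.
      def advance_hierarchy(levels, lev, dt, advance_one, synchronize, r=2):
          advance_one(levels[lev], dt)
          if lev + 1 < len(levels):
              for _ in range(r):                    # subcycle the finer level
                  advance_hierarchy(levels, lev + 1, dt / r,
                                    advance_one, synchronize, r)
              synchronize(levels[lev], levels[lev + 1])

      # Hypothetical use: each "grid" just tracks its own simulated time.
      levels = [{"t": 0.0}, {"t": 0.0}, {"t": 0.0}]
      advance_hierarchy(levels, 0, dt=1.0,
                        advance_one=lambda g, dt: g.__setitem__("t", g["t"] + dt),
                        synchronize=lambda coarse, fine: None)
      print([g["t"] for g in levels])               # all levels reach t = 1.0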

  4. A Parallel Second-Order Adaptive Mesh Algorithm for Incompressible Flow in Porous Media

    SciTech Connect

    Pau, George Shu Heng; Almgren, Ann S.; Bell, John B.; Lijewski, Michael J.

    2008-04-01

    In this paper we present a second-order accurate adaptive algorithm for solving multiphase, incompressible flows in porous media. We assume a multiphase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting the total velocity, defined to be the sum of the phase velocities, is divergence-free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behavior of the method.

  5. 1987 worldwide refining and gas processing directory

    SciTech Connect

    Not Available

    1986-01-01

    This book delineates an ever-varying aspect of the industry. Personnel names, plant sites, home office locations, sales and relocations - all have been compiled in this book. Inactive refineries have been updated and listed in a special section, as have active major refining, gas processing and construction projects worldwide. This directory also contains several of the most vital and informative surveys of the petroleum industry, including the Worldwide Construction Survey, U.S. Refining Survey, Worldwide Gas Processing Plant Survey, Worldwide Refining Survey, Worldwide Survey of Petroleum Derived Sulfur Production, and Worldwide Catalyst Report. Also included in the directory is the National Petroleum Refiners Association's U.S. and Canadian Lube and Wax Capacities Study.

  6. U.S. Refining Capacity Utilization

    EIA Publications

    1995-01-01

    This article briefly reviews recent trends in domestic refining capacity utilization and examines in detail the differences in reported crude oil distillation capacities and utilization rates among different classes of refineries.

  7. On-Orbit Model Refinement for Controller Redesign

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Calise, Anthony J.

    1998-01-01

    High-performance control design for a flexible space structure is challenging since high-fidelity plant models are difficult to obtain a priori. Uncertainty in the control design models typically requires a very robust, low-performance control design which must be tuned on-orbit to achieve the required performance. A new procedure for refining a multivariable open-loop plant model based on closed-loop response data is presented. Using a minimal representation of the state space dynamics, a least squares prediction error method is employed to estimate the plant parameters. This control-relevant system identification procedure stresses the joint nature of the system identification and control design problem by seeking to obtain a model that minimizes the difference between the predicted and actual closed-loop performance. This paper presents an algorithm for iterative closed-loop system identification and controller redesign, along with illustrative examples.
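
    As a toy version of the least-squares prediction-error step, the sketch below fits a first-order ARX model to input-output data. The model structure and all names are assumptions for illustration; the paper works with a minimal multivariable state-space representation and closed-loop data.

      import numpy as np

      def estimate_arx(u, y):
          """Fit y[k] = a*y[k-1] + b*u[k-1] by least squares; return (a, b)."""
          Phi = np.column_stack([y[:-1], u[:-1]])    # regressor matrix
          theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
          return theta

      # Simulated data from a known plant (a = 0.9, b = 0.5) with small noise:
      rng = np.random.default_rng(2)
      u = rng.normal(size=300)
      y = np.zeros(300)
      for k in range(1, 300):
          y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.normal()
      print(estimate_arx(u, y))                      # approximately [0.9, 0.5]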

  8. Refiners discuss HF alkylation process and issues

    SciTech Connect

    Not Available

    1992-04-06

    Safety and oxygenate operations made HF alkylation a hot topic of discussion at the most recent National Petroleum Refiners Association annual question and answer session on refining and petrochemical technology. This paper provides answers to a variety of questions regarding the mechanical, process, and safety aspects of the HF alkylation process. Among the issues discussed were mitigation techniques, removal of oxygenates from alkylation unit feed, and amylene alkylation.

  9. Refinement of protein structures in explicit solvent.

    PubMed

    Linge, Jens P; Williams, Mark A; Spronk, Christian A E M; Bonvin, Alexandre M J J; Nilges, Michael

    2003-02-15

    We present a CPU efficient protocol for refinement of protein structures in a thin layer of explicit solvent and energy parameters with completely revised dihedral angle terms. Our approach is suitable for protein structures determined by theoretical (e.g., homology modeling or threading) or experimental methods (e.g., NMR). In contrast to other recently proposed refinement protocols, we put a strong emphasis on consistency with widely accepted covalent parameters and computational efficiency. We illustrate the method for NMR structure calculations of three proteins: interleukin-4, ubiquitin, and crambin. We show a comparison of their structure ensembles before and after refinement in water with and without a force field energy term for the dihedral angles; crambin was also refined in DMSO. Our results demonstrate the significant improvement of structure quality by a short refinement in a thin layer of solvent. Further, they show that a dihedral angle energy term in the force field is beneficial for structure calculation and refinement. We discuss the optimal weight for the energy constant for the backbone angle omega and include an extensive discussion of meaning and relevance of the calculated validation criteria, in particular root mean square Z scores for covalent parameters such as bond lengths.

  10. Structure refinement from precession electron diffraction data.

    PubMed

    Palatinus, Lukáš; Jacob, Damien; Cuvillier, Priscille; Klementová, Mariana; Sinkler, Wharton; Marks, Laurence D

    2013-03-01

    Electron diffraction is a unique tool for analysing the crystal structures of very small crystals. In particular, precession electron diffraction has been shown to be a useful method for ab initio structure solution. In this work it is demonstrated that precession electron diffraction data can also be successfully used for structure refinement, if the dynamical theory of diffraction is used for the calculation of diffracted intensities. The method is demonstrated on data from three materials - silicon, orthopyroxene (Mg,Fe)2Si2O6 and gallium-indium tin oxide (Ga,In)4Sn2O10. In particular, it is shown that atomic occupancies of mixed crystallographic sites can be refined to an accuracy approaching X-ray or neutron diffraction methods. In comparison with conventional electron diffraction data, the refinement against precession diffraction data yields significantly lower figures of merit, higher accuracy of refined parameters, much broader radii of convergence, especially for the thickness and orientation of the sample, and significantly reduced correlations between the structure parameters. The full dynamical refinement is compared with refinement using kinematical and two-beam approximations, and is shown to be superior to the latter two.

  11. 76 FR 49468 - Tesoro Refining and Marketing Company, SFPP, L.P.; Notice of Complaint

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-10

    DEPARTMENT OF ENERGY, Federal Energy Regulatory Commission. Take notice that on August 2, 2011, pursuant to Rule 206 of the Rules of Practice and Procedure of the Federal Energy Regulatory Commission...

  12. First principles potential for the acetylene dimer and refinement by fitting to experiments

    NASA Astrophysics Data System (ADS)

    Leforestier, Claude; Tekin, Adem; Jansen, Georg; Herman, Michel

    2011-12-01

    We report the definition and refinement of a new first principles potential for the acetylene dimer. The ab initio calculations were performed with the DFT-SAPT combination of symmetry-adapted intermolecular perturbation method and density functional theory, and fitted to a model site-site functional form. Comparison of the calculated microwave spectrum with experimental data revealed that the barriers to isomerization were too low. This potential was refined by fitting the model parameters in order to reproduce the observed transitions, an excellent agreement within ~1 MHz being achieved.

  13. Zeolites as catalysts in oil refining.

    PubMed

    Primo, Ana; Garcia, Hermenegildo

    2014-11-21

    Oil is nowadays the main energy source, and this prevalent position will most probably continue in the coming decades. This situation is largely due to the degree of maturity that has been achieved in oil refining and petrochemistry as a consequence of the large effort in research and innovation. The remarkable efficiency of oil refining is largely based on the use of zeolites as catalysts. The use of zeolites as catalysts in refining and petrochemistry has been considered one of the major accomplishments in the chemistry of the 20th century. In this tutorial review, the introductory part describes the main features of zeolites in connection with their use as solid acids. The main body of the review describes important refining processes in which zeolites are used, including light naphtha isomerization, olefin alkylation, reforming, cracking and hydrocracking. The final section contains our view on future developments in the field, such as increases in the quality of transportation fuels and the co-processing of an increasing percentage of biofuels together with oil streams. This review is intended to provide the rudiments of zeolite science applied to refining catalysis.

  14. Multidataset Refinement, Resonant Diffraction, and Magnetic Structures

    PubMed Central

    Attfield, J. Paul

    2004-01-01

    The scope of Rietveld and other powder diffraction refinements continues to expand, driven by improvements in instrumentation, methodology and software. This will be illustrated by examples from our research in recent years. Multidataset refinement is now commonplace; the datasets may be from different detectors, e.g., in a time-of-flight experiment, or from separate experiments, such as at several X-ray energies giving resonant information. The complementary use of X-rays and neutrons is exemplified by a recent combined refinement of the monoclinic superstructure of magnetite, Fe3O4, below the 122 K Verwey transition, which reveals evidence for Fe2+/Fe3+ charge ordering. Powder neutron diffraction data continue to be used for the solution and Rietveld refinement of magnetic structures. Time-of-flight instruments on cold neutron sources can produce data that have a high intensity and good resolution at high d-spacings. Such profiles have been used to study incommensurate magnetic structures such as FeAsO4 and β-CrPO4. A multiphase, multidataset refinement of the phase-separated perovskite (Pr0.35Y0.07Th0.04Ca0.04Sr0.5)MnO3 has been used to fit three components with different crystal and magnetic structures at low temperatures. PMID:27366599

  15. Software for Refining or Coarsening Computational Grids

    NASA Technical Reports Server (NTRS)

    Daines, Russell; Woods, Jody

    2003-01-01

    A computer program performs calculations for refinement or coarsening of computational grids of the type called structured (signifying that they are geometrically regular and/or are specified by relatively simple algebraic expressions). This program is designed to facilitate analysis of the numerical effects of changing structured grids utilized in computational fluid dynamics (CFD) software. Unlike prior grid-refinement and -coarsening programs, this program is not limited to doubling or halving: the user can specify any refinement or coarsening ratio, which can have a noninteger value. In addition to this ratio, the program accepts, as input, a grid file and the associated restart file, which is basically a file containing the most recent iteration of flow-field variables computed on the grid. The program then refines or coarsens the grid as specified, while maintaining the geometry and the stretching characteristics of the original grid. The program can interpolate from the input restart file to create a restart file for the refined or coarsened grid. The program provides a graphical user interface that facilitates the entry of input data for the grid-generation and restart-interpolation routines.
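
    A sketch of the core operations in one dimension, under the assumption that stretching is preserved by resampling in a normalized index coordinate; refine_grid and interpolate_restart are illustrative names, not the program's API.

    ```python
    import numpy as np

    def refine_grid(x, ratio):
        """Resample a 1-D structured grid by an arbitrary (noninteger) ratio,
        preserving endpoints and the original stretching via interpolation
        in a normalized index coordinate."""
        n_new = max(2, int(round((x.size - 1) * ratio)) + 1)
        s_old = np.linspace(0.0, 1.0, x.size)
        s_new = np.linspace(0.0, 1.0, n_new)
        return np.interp(s_new, s_old, x)

    def interpolate_restart(x_old, q_old, x_new):
        """Transfer a flow-field variable from the old grid to the new one."""
        return np.interp(x_new, x_old, q_old)

    x_old = np.geomspace(1.0, 10.0, 41)     # stretched grid
    q_old = np.sin(x_old)                   # stand-in restart variable
    x_new = refine_grid(x_old, ratio=1.7)   # noninteger refinement ratio
    q_new = interpolate_restart(x_old, q_old, x_new)
    ```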

  16. Adaptive wall technology for minimization of wall interferences in transonic wind tunnels

    NASA Technical Reports Server (NTRS)

    Wolf, Stephen W. D.

    1988-01-01

    Modern experimental techniques to improve free air simulations in transonic wind tunnels by use of adaptive wall technology are reviewed. Considered are the significant advantages of adaptive wall testing techniques with respect to wall interferences, Reynolds number, tunnel drive power, and flow quality. The application of these testing techniques relies on making the test section boundaries adjustable and using a rapid wall adjustment procedure. A historical overview shows how the disjointed development of these testing techniques, since 1938, is closely linked to available computer support. An overview of Adaptive Wall Test Section (AWTS) designs shows a preference for use of relatively simple designs with solid adaptive walls in 2- and 3-D testing. Operational aspects of AWTS's are discussed with regard to production type operation where adaptive wall adjustments need to be quick. Both 2- and 3-D data are presented to illustrate the quality of AWTS data over the transonic speed range. Adaptive wall technology is available for general use in 2-D testing, even in cryogenic wind tunnels. In 3-D testing, more refinement of the adaptive wall testing techniques is required before more widespread use can be planned.

  17. Residual Distribution Schemes for Conservation Laws Via Adaptive Quadrature

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Abgrall, Remi; Biegel, Bryan (Technical Monitor)

    2000-01-01

    This paper considers a family of nonconservative numerical discretizations for conservation laws which retains the correct weak solution behavior in the limit of mesh refinement whenever sufficient order numerical quadrature is used. Our analysis of 2-D discretizations in nonconservative form follows the 1-D analysis of Hou and Le Floch. For a specific family of nonconservative discretizations, it is shown under mild assumptions that the error arising from non-conservation is strictly smaller than the discretization error in the scheme. In the limit of mesh refinement under the same assumptions, solutions are shown to satisfy an entropy inequality. Using results from this analysis, a variant of the "N" (Narrow) residual distribution scheme of van der Weide and Deconinck is developed for first-order systems of conservation laws. The modified form of the N-scheme supplants the usual exact single-state mean-value linearization of flux divergence, typically used for the Euler equations of gasdynamics, by an equivalent integral form on simplex interiors. This integral form is then numerically approximated using an adaptive quadrature procedure. This renders the scheme nonconservative in the sense described earlier so that correct weak solutions are still obtained in the limit of mesh refinement. Consequently, we then show that the modified form of the N-scheme can be easily applied to general (non-simplicial) element shapes and general systems of first-order conservation laws equipped with an entropy inequality where exact mean-value linearization of the flux divergence is not readily obtained, e.g. magnetohydrodynamics, the Euler equations with certain forms of chemistry, etc. Numerical examples of subsonic, transonic and supersonic flows containing discontinuities together with multi-level mesh refinement are provided to verify the analysis.
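
    The adaptive quadrature ingredient can be illustrated with a classic recursive Simpson rule driven by a Richardson error estimate; this is a generic sketch, not the specific quadrature used in the paper.

    ```python
    import math

    def adaptive_simpson(f, a, b, tol=1e-8):
        def simpson(fa, fm, fb, a, b):
            return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

        def recurse(a, b, fa, fm, fb, whole, tol):
            m, lm, rm = 0.5 * (a + b), 0.25 * (3 * a + b), 0.25 * (a + 3 * b)
            flm, frm = f(lm), f(rm)
            left = simpson(fa, flm, fm, a, m)
            right = simpson(fm, frm, fb, m, b)
            # Richardson-style error estimate decides where to subdivide
            if abs(left + right - whole) <= 15.0 * tol:
                return left + right + (left + right - whole) / 15.0
            return (recurse(a, m, fa, flm, fm, left, tol / 2) +
                    recurse(m, b, fm, frm, fb, right, tol / 2))

        fa, fb, fm = f(a), f(b), f(0.5 * (a + b))
        return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol)

    print(adaptive_simpson(math.sin, 0.0, math.pi))   # ~2.0
    ```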

  18. Psychometric Function Reconstruction from Adaptive Tracking Procedures

    DTIC Science & Technology

    1988-11-29

    [Fragmentary record.] The reduced variability and length of the adaptive track can be characterized using the "sweat factor" defined by Taylor and Creelman (1967), a measure of the efficiency of the estimation procedure. Cited: Taylor, M. M., and Creelman, C. D. (1967). PEST: Efficient estimates on probability functions. Journal of the Acoustical Society of America.

  19. 40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... carried out at each location. (2) Crude oil capacity. (i) The total corporate crude oil capacity of each... 40 Protection of Environment 16 2010-07-01 2010-07-01 false How does a refiner obtain approval as a small refiner? 80.1340 Section 80.1340 Protection of Environment ENVIRONMENTAL PROTECTION...

  20. 40 CFR 80.235 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 16 2011-07-01 2011-07-01 false How does a refiner obtain approval as a small refiner? 80.235 Section 80.235 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... January 1, 1999; and the type of business activities carried out at each location; or (ii) In the case...

  1. 40 CFR 80.235 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false How does a refiner obtain approval as a small refiner? 80.235 Section 80.235 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... January 1, 1999; and the type of business activities carried out at each location; or (ii) In the case...

  2. 40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2...

  3. 40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2...

  4. 40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2...

  5. 40 CFR 80.235 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 17 2014-07-01 2014-07-01 false How does a refiner obtain approval as a small refiner? 80.235 Section 80.235 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... reputable source, such as a professional publication or trade journal. The information submitted to EIA...

  6. 40 CFR 80.235 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 17 2012-07-01 2012-07-01 false How does a refiner obtain approval as a small refiner? 80.235 Section 80.235 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... reputable source, such as a professional publication or trade journal. The information submitted to EIA...

  7. 40 CFR 80.235 - How does a refiner obtain approval as a small refiner?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 17 2013-07-01 2013-07-01 false How does a refiner obtain approval as a small refiner? 80.235 Section 80.235 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... reputable source, such as a professional publication or trade journal. The information submitted to EIA...

  8. FEM electrode refinement for electrical impedance tomography.

    PubMed

    Grychtol, Bartlomiej; Adler, Andy

    2013-01-01

    Electrical Impedance Tomography (EIT) reconstructs images of electrical tissue properties within a body from electrical transfer impedance measurements at surface electrodes. Reconstruction of EIT images requires the solution of an inverse problem in soft field tomography, where a sensitivity matrix, J, of the relationship between internal changes and measurements is calculated, and then a pseudo-inverse of J is used to update the image estimate. It is therefore clear that a precise calculation of J is required for solution accuracy. Since it is generally not possible to use analytic solutions, the finite element method (FEM) is typically used. It has generally been recommended in the EIT literature that FEMs be refined near electrodes, since the electric field and sensitivity is largest there. In this paper we analyze the accuracy requirement for FEM refinement near electrodes in EIT and describe a technique to refine arbitrary FEMs.
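
    A minimal sketch of the reconstruction step described above, assuming a linearized problem: the image update is a Tikhonov-regularized pseudo-inverse of the sensitivity matrix J. The dimensions and regularization strength are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    J = rng.standard_normal((208, 1024))   # sensitivity matrix (meas x pixels)
    dv = rng.standard_normal(208)          # measured boundary voltage changes
    lam = 0.1                              # regularization strength

    # dx = (J^T J + lam^2 I)^(-1) J^T dv  -- damped pseudo-inverse of J
    A = J.T @ J + lam**2 * np.eye(J.shape[1])
    dx = np.linalg.solve(A, J.T @ dv)      # conductivity-change image
    ```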

  9. Refining Linear Fuzzy Rules by Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Khedkar, Pratap S.; Malkani, Anil

    1996-01-01

    Linear fuzzy rules are increasingly being used in the development of fuzzy logic systems. Radial basis functions have also been used in the antecedents of the rules for clustering in product space, which can automatically generate a set of linear fuzzy rules from an input/output data set. Manual methods are usually used to refine these rules. This paper presents a method for refining the parameters of these rules using reinforcement learning, which can be applied in domains where supervised input-output data are not available and reinforcements are received only after a long sequence of actions. This is shown for a generalization of radial basis functions. The formation of fuzzy rules from data and their automatic refinement is an important step in closing the gap between the application of reinforcement learning methods and the domains where only limited input-output data are available.

  10. Adaptive sparse grid expansions of the vibrational Hamiltonian

    SciTech Connect

    Strobusch, D.; Scheurer, Ch.

    2014-02-21

    The vibrational Hamiltonian involves two high-dimensional operators, the kinetic energy operator (KEO) and the potential energy surface (PES). Both must be approximated for systems involving more than a few atoms. Adaptive approximation schemes are not only superior to truncated Taylor or many-body expansions (MBE), they also allow for error estimates, and thus operators of predefined precision. To this end, modified sparse grids (SG) are developed that can be combined with adaptive MBEs. This MBE/SG hybrid approach yields a unified, fully adaptive representation of the KEO and the PES. Refinement criteria, based on the vibrational self-consistent field (VSCF) and vibrational configuration interaction (VCI) methods, are presented. The combination of the adaptive MBE/SG approach and the VSCF plus VCI methods yields a black-box-like procedure to compute accurate vibrational spectra. This is demonstrated on a test set of molecules comprising water, formaldehyde, methanimine, and ethylene. The test set is first employed to prove convergence for semi-empirical PM3-PESs and subsequently to compute accurate vibrational spectra from CCSD(T)-PESs that agree well with experimental values.
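
    The many-body expansion half of the hybrid can be illustrated as below: monomer energies plus pairwise corrections, with a toy Lennard-Jones pair term standing in for an ab initio calculation (no sparse grids or adaptivity shown).

    ```python
    import itertools
    import numpy as np

    def pair_energy(ri, rj):
        # Lennard-Jones stand-in for an expensive 2-body electronic
        # structure calculation
        r = np.linalg.norm(ri - rj)
        return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

    def mbe2(coords, mono_energies):
        """Second-order MBE: monomer energies plus 2-body corrections."""
        total = sum(mono_energies)
        for i, j in itertools.combinations(range(len(coords)), 2):
            total += pair_energy(coords[i], coords[j])
        return total

    coords = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [0.6, 1.0, 0.0]])
    print(mbe2(coords, mono_energies=[0.0, 0.0, 0.0]))
    ```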

  11. Metal decontamination for waste minimization using liquid metal refining technology

    SciTech Connect

    Joyce, E.L. Jr.; Lally, B.; Ozturk, B.; Fruehan, R.J.

    1993-09-01

    The current Department of Energy Mixed Waste Treatment Project flowsheet indicates that no conventional technology, other than surface decontamination, exists for metal processing. Current Department of Energy guidelines require retrievable storage of all metallic wastes containing transuranic elements above a certain concentration. This project is in support of the National Mixed Low Level Waste Treatment Program. Because of the high cost of disposal, it is important to develop an effective decontamination and volume reduction method for low-level contaminated metals. It is important to be able to decontaminate complex shapes whose surfaces are hidden or inaccessible to surface decontamination processes, and to destroy organic contamination. These goals can be achieved by adapting commercial metal refining processes to handle radioactive and organic contaminated metal. The radioactive components are concentrated in the slag, which is subsequently vitrified; hazardous organics are destroyed by the intense heat of the bath. The metal, after having been melted and purified, could be recycled for use within the DOE complex. In this project, we evaluated current state-of-the-art technologies for metal refining, with special reference to the removal of radioactive contaminants and the destruction of hazardous organics. This evaluation was based on literature reports, industrial experience, plant visits, thermodynamic calculations, and engineering aspects of the various processes. The key issues addressed included radioactive partitioning between the metal and slag phases, minimization of secondary wastes, operability of the process subject to widely varying feed chemistry, and the ability to seal the candidate process to prevent the release of hazardous species.

  12. Using supercritical fluids to refine hydrocarbons

    DOEpatents

    Yarbro, Stephen Lee

    2015-06-09

    A system and method for reactively refining hydrocarbons, such as heavy oils with API gravities of less than 20 degrees and bitumen-like hydrocarbons with viscosities greater than 1000 cp at standard temperature and pressure, using a selected fluid at supercritical conditions. A reaction portion of the system and method delivers lightweight, volatile hydrocarbons to an associated contacting unit which operates in mixed subcritical/supercritical or supercritical modes. Using thermal diffusion, multiphase contact, or a momentum generating pressure gradient, the contacting unit separates the reaction products into portions that are viable for use or sale without further conventional refining and hydro-processing techniques.

  13. Parabolic Refined Invariants and Macdonald Polynomials

    NASA Astrophysics Data System (ADS)

    Chuang, Wu-yen; Diaconescu, Duiliu-Emanuel; Donagi, Ron; Pantev, Tony

    2015-05-01

    A string theoretic derivation is given for the conjecture of Hausel, Letellier and Rodriguez-Villegas on the cohomology of character varieties with marked points. Their formula is identified with a refined BPS expansion in the stable pair theory of a local root stack, generalizing previous work of the first two authors in collaboration with Pan. Haiman's geometric construction for Macdonald polynomials is shown to emerge naturally in this context via geometric engineering. In particular this yields a new conjectural relation between Macdonald polynomials and refined local orbifold curve counting invariants. The string theoretic approach also leads to a new spectral cover construction for parabolic Higgs bundles in terms of holomorphic symplectic orbifolds.

  14. California refining: It's all or nothing, now

    SciTech Connect

    Not Available

    1991-07-18

    The State of California has a budget deficit of more than US $14 billion, stringent and costly environmental protection laws, and a giant, fiercely competitive market for high-quality gasoline. This issue of Energy Detente examines some of the emerging consequences of this dramatic combination for petroleum refining. This issue also presents the following: (1) the ED Refining Netback Data Series for the US Gulf and West Coasts, Rotterdam, and Singapore as of July 12, 1991; and (2) the ED Fuel Price/Tax Series for countries of the Western Hemisphere, July 1991 edition. 8 figs., 6 tabs.

  15. Algorithm Refinement for Stochastic Partial Differential Equations. I. Linear Diffusion

    NASA Astrophysics Data System (ADS)

    Alexander, Francis J.; Garcia, Alejandro L.; Tartakovsky, Daniel M.

    2002-10-01

    A hybrid particle/continuum algorithm is formulated for Fickian diffusion in the fluctuating hydrodynamic limit. The particles are taken as independent random walkers; the fluctuating diffusion equation is solved by finite differences with deterministic and white-noise fluxes. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass conservation. This methodology is an extension of Adaptive Mesh and Algorithm Refinement to stochastic partial differential equations. Results from a variety of numerical experiments are presented for both steady and time-dependent scenarios. In all cases the mean and variance of density are captured correctly by the stochastic hybrid algorithm. For a nonstochastic version (i.e., using only deterministic continuum fluxes) the mean density is correct, but the variance is reduced except in particle regions away from the interface. Extensions of the methodology to fluid mechanics applications are discussed.
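
    A sketch of the continuum half of such a hybrid: an explicit finite-difference step for 1-D fluctuating diffusion with a deterministic Fickian flux plus a white-noise flux on each interior face. The noise amplitude and boundary handling are schematic, not the paper's formulation.

    ```python
    import numpy as np

    def step(u, D, dx, dt, noise_amp, rng):
        # deterministic Fickian flux plus a white-noise flux at each face
        flux = -D * np.diff(u) / dx
        flux += noise_amp * rng.standard_normal(flux.size) / np.sqrt(dt * dx)
        du = np.zeros_like(u)
        # divergence (flux-matching) form conserves mass between interior cells
        du[1:-1] = -(flux[1:] - flux[:-1]) / dx
        return u + dt * du

    rng = np.random.default_rng(0)
    u = np.ones(100)
    for _ in range(1000):   # dt*D/dx^2 = 0.2 keeps the explicit step stable
        u = step(u, D=1.0, dx=0.1, dt=0.002, noise_amp=0.05, rng=rng)
    print(u.mean(), u.var())   # mean stays ~1; variance reflects fluctuations
    ```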

  16. Strengthening and grain refinement in an Al-6061 metal matrix composite through intense plastic straining

    SciTech Connect

    Valiev, R.Z.; Islamgaliev, R.K.; Kuzmina, N.F.; Li, Y.; Langdon, T.G.

    1998-12-04

    Intense plastic straining techniques such as torsion straining and equal channel angular (ECA) pressing are processing procedures which may be used to make beneficial changes in the properties of materials through a substantial refinement in the microstructure. Although intense plastic straining procedures have been used for grain refinement in numerous experiments reported over the last decade, there appear to have been no investigations in which these procedures were used with metal matrix composites. The present paper describes a series of experiments in which torsion straining and ECA pressing were applied to an Al-6061 metal matrix composite reinforced with 10 volume % of Al2O3 particulates. As will be demonstrated, intense plastic straining has the potential for both reducing the grain size of the composite to the submicrometer level and increasing the strength at room temperature by a factor in the range of ~2 to ~3.

  17. Refinement of the urine concentration test in rats.

    PubMed

    Kulick, Lisa J; Clemons, Donna J; Hall, Robert L; Koch, Michael A

    2005-01-01

    The urine concentration test is a potentially stressful procedure used to assess renal function. Historically, animals have been deprived of water for 24 h or longer during this test, creating the potential for distress. Refinement of the technique to lessen distress may involve decreasing the water-deprivation period. To determine the feasibility of reduced water-deprivation time, 10 male and 10 female rats were food- and water-deprived for 22 h. Clinical condition and body weights were recorded, and urine was collected every 2 h, beginning 16 h after the onset of food and water deprivation. All rats lost weight (P < 0.001). All rats were clinically normal after 16 h, but 90% of the males and 30% of the females appeared clinically dehydrated after 22 h. After 16 h, mean urine specific gravities were 1.040 and 1.054 for males and females, respectively, and mean urine osmolalities were 1,362 and 2,080 mOsm/kg, respectively, indicating the rats were adequately concentrating urine. The rats in this study tolerated water deprivation relatively well for 16 h but showed clinical signs of dehydration after 22 h. Based on this study, it was concluded that the urine concentration test can be refined such that rats are not deprived of water for more than 16 h without jeopardizing test results.

  18. One technique for refining the global Earth gravity models

    NASA Astrophysics Data System (ADS)

    Koneshov, V. N.; Nepoklonov, V. B.; Polovnev, O. V.

    2017-01-01

    The results of theoretical and experimental research on a technique for refining global Earth geopotential models such as EGM2008 in continental regions are presented. The technique is based on high-resolution satellite data for the Earth's surface topography, which allows the fine structure of the Earth's gravitational field to be captured without additional gravimetry data. The experimental studies were conducted using the example of the new GGMplus global gravity model of the Earth with a resolution of about 0.5 km, which is obtained by expanding the EGM2008 model to degree 2190 with corrections for the topography calculated from the SRTM data. The GGMplus and EGM2008 models are compared with regional geoid models in 21 regions of North America, Australia, Africa, and Europe. The obtained estimates largely support the possibility of refining global geopotential models such as EGM2008 by the procedure implemented in GGMplus, particularly in regions with relatively high elevation differences.

  19. Grain Refining and Microstructural Modification during Solidification.

    DTIC Science & Technology

    1984-10-01

    [Fragmentary record.] Some samples were etched with a solution including 100 ml of distilled water (called etchant A) for 5 to 15 seconds; the others were etched with aqua regia (called etchant B) for 10 to 25 seconds. Keywords: grain refining, microstructure, solidification, phase diagrams, electromagnetic stirring, Cu-Fe.

  20. Theory of a refined earth model

    NASA Technical Reports Server (NTRS)

    Krause, H. G. L.

    1968-01-01

    Refined equations are derived relating the variations of the Earth's gravity and radius as functions of longitude and latitude. They particularly relate the oblateness coefficients of the odd harmonics and the difference of the polar radii (respectively, ellipticities and polar gravity accelerations) in the Northern and Southern Hemispheres.

  1. Refining the Eye: Dermatology and Visual Literacy

    ERIC Educational Resources Information Center

    Zimmermann, Corinne; Huang, Jennifer T.; Buzney, Elizabeth A.

    2016-01-01

    In 2014 the Museum of Fine Arts Boston and Harvard Medical School began a partnership focused on building visual literacy skills for dermatology residents in the Harvard Combined Dermatology Residency Program. "Refining the Eye: Art and Dermatology", a four-session workshop, took place in the museum's galleries and utilized the Visual…

  2. Refiners respond to strategic driving forces

    SciTech Connect

    Gonzalez, R.G.

    1996-05-01

    Better days should lie ahead for the international refining industry. While political unrest, lingering uncertainty regarding environmental policies, slowing world economic growth, overcapacity and a poor image will continue to plague the industry, margins in most areas appear to have bottomed out. Current margins, and even modestly improved margins, do not cover the cost of capital on certain equipment nor provide the returns necessary to achieve reinvestment economics. Refiners must determine how to improve the financial performance of their assets given this reality. Low margins and returns are generally characteristic of mature industries. Many of the business strategies employed by emerging businesses are no longer viable for refiners. The cost-cutting programs of the '90s have largely been realized, leaving little to be gained from further reduction. Consequently, refiners will have to concentrate on increasing efficiency and delivering higher-value products to survive. Rather than focusing solely on their competition, companies will emphasize substantial improvements in their own operations to achieve financial targets. This trend is clearly shown by the growing reliance on benchmarking services.

  3. Energy Bandwidth for Petroleum Refining Processes

    SciTech Connect

    none,

    2006-10-01

    The petroleum refining energy bandwidth report analyzes the most energy-intensive unit operations used in U.S. refineries: crude oil distillation, fluid catalytic cracking, catalytic hydrotreating, catalytic reforming, and alkylation. The "bandwidth" provides a snapshot of the energy losses that can potentially be recovered through best practices and technology R&D.

  4. Laser Vacuum Furnace for Zone Refining

    NASA Technical Reports Server (NTRS)

    Griner, D. B.; Zurburg, F. W.; Penn, W. M.

    1986-01-01

    A laser beam is scanned to produce a moving melt zone. The experimental laser vacuum furnace scans a crystalline wafer with a high-power CO2-laser beam to generate a narrow melt zone with precise control of the temperature gradients around the zone. Although intended for zone refining of silicon or other semiconductors in low gravity, the apparatus has been used in normal gravity.

  5. Solidification Based Grain Refinement in Steels

    DTIC Science & Technology

    2009-07-24

    [Fragmentary record: only reference-list fragments survive, citing work on grain refinement in steelmaking (Alvarez et al.; Metallurgical Transactions, vol. 1, pp. 1987-1995, 1970; Villars, P., Pauling File, 1995, http://crystdb.nims.go.jp/).]

  6. Refining aggregate exposure: example using parabens.

    PubMed

    Cowan-Ellsberry, Christina E; Robison, Steven H

    2009-12-01

    The need to understand and estimate quantitatively the aggregate exposure to ingredients used broadly in a variety of product types continues to grow. Currently aggregate exposure is most commonly estimated by using a very simplistic approach of adding or summing the exposures from all the individual product types in which the chemical is used. However, the more broadly the ingredient is used in related consumer products, the more likely this summation will result in an unrealistic estimate of exposure because individuals in the population vary in their patterns of product use including co-use and non-use. Furthermore the ingredient may not be used in all products of a given type. An approach is described for refining this aggregate exposure using data on (1) co-use and non-use patterns of product use, (2) extent of products in which the ingredient is used and (3) dermal penetration and metabolism. This approach and the relative refinement in the aggregate exposure from incorporating these data is illustrated using methyl, n-propyl, n-butyl and ethyl parabens, the most widely used preservative system in personal care and cosmetic products. When these refining factors were used, the aggregate exposure compared to the simple addition approach was reduced by 51%, 58%, 90% and 92% for methyl, n-propyl, n-butyl and ethyl parabens, respectively. Since biomonitoring integrates all sources and routes of exposure, the estimates using this approach were compared to available paraben biomonitoring data. Comparison to the 95th percentile of these data showed that these refined estimates were still conservative by factors of 2-92. All of our refined estimates of aggregate exposure are less than the ADI of 10 mg/kg/day for parabens.
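
    The refinement logic can be made concrete with toy numbers: a naive aggregate simply sums per-product exposures, while the refined estimate discounts each term by the fraction of products containing the ingredient, the probability of use, and dermal absorption. All values below are illustrative, not the paper's data.

    ```python
    # per product type: (exposure if used, mg/kg/day;
    #                    fraction of products containing the ingredient;
    #                    probability a given person uses the product)
    products = {
        "lotion":  (0.40, 0.6, 0.8),
        "shampoo": (0.10, 0.3, 0.9),
        "makeup":  (0.25, 0.5, 0.5),
    }
    dermal_absorption = 0.3   # fraction penetrating skin (illustrative)

    naive = sum(e for e, _, _ in products.values())
    refined = dermal_absorption * sum(e * f_contain * f_use
                                      for e, f_contain, f_use in products.values())
    print(f"naive: {naive:.3f}  refined: {refined:.3f}  "
          f"reduction: {100 * (1 - refined / naive):.0f}%")
    ```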

  7. Refining industry trends: Europe and surroundings

    SciTech Connect

    Guariguata, U.G.

    1997-05-01

    The European refining industry, along with its counterparts, is struggling with low profitability due to excess primary and conversion capacity, high operating costs and impending decisions on stringent environmental regulations that will require significant investments with hard-to-justify returns. This region was also faced in the early 1980s with excess capacity on the order of 4 MMb/d, satisfying the "at that point" demand by operating at very low utilization rates (60%). As was the case in the US, the rebalancing of capacity led to the closure of some 51 refineries. Since the early 1990s, the increase in demand growth has essentially balanced the capacity threshold, and utilization rates have settled around the 90% range. During the last two decades, the major oil companies have reduced their presence in the European refining sector, giving some state oil companies and producing countries the opportunity to gain access to the consumer market through the purchase of refining capacity in various countries: specifically, Kuwait in Italy; Libya and Venezuela in Germany; and Norway in other areas of Scandinavia. Although the market share for this new cast of characters remains small (4%) relative to participation by the majors (35%), their involvement in the European refining business set the foundation whereby US independent refiners relinquished control over assets that could not be operated profitably as part of a previous vertically integrated structure, unless access to the crude was ensured. The passage of time still seems to render this model valid.

  8. Satellite SAR geocoding with refined RPC model

    NASA Astrophysics Data System (ADS)

    Zhang, Lu; Balz, Timo; Liao, Mingsheng

    2012-04-01

    Recent studies have proved that the Rational Polynomial Camera (RPC) model can act as a reliable replacement for the rigorous Range-Doppler (RD) model in the geometric processing of satellite SAR datasets, but its capability in the absolute geolocation of SAR images has not been evaluated quantitatively. Therefore, in this article the problems of error analysis and refinement of the SAR RPC model are investigated to improve the absolute accuracy of SAR geolocation. Range propagation delay and azimuth timing error are identified as the two major error sources for SAR geolocation. An approach based on SAR image simulation and real-to-simulated image matching is developed to estimate and correct these two errors. Afterwards a refined RPC model can be built from the error-corrected RD model and then used in satellite SAR geocoding. Three experiments with different settings are designed and conducted to comprehensively evaluate the accuracy of SAR geolocation with both the ordinary and the refined RPC models. All the experimental results demonstrate that with RPC model refinement the absolute location accuracy of geocoded SAR images can be improved significantly, particularly in the easting direction. In another experiment the computational efficiencies of SAR geocoding with the RD and RPC models are compared quantitatively; the results show that using the RPC model improves efficiency by a factor of at least 16. In addition, the problem of DEM data selection for SAR image simulation in RPC model refinement is studied in a comparative experiment. The results reveal that the best choice is a DEM dataset with spatial resolution comparable to that of the SAR images.
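
    For orientation, an RPC model evaluates image coordinates as ratios of cubic polynomials in normalized ground coordinates; refinement then amounts to correcting biases before regenerating the coefficients. The sketch below uses the standard 20-term cubic basis with placeholder coefficients.

    ```python
    import numpy as np

    def cubic_terms(P, L, H):
        # the standard 20-term cubic basis in normalized lat (P), lon (L),
        # height (H) used by RPC models
        return np.array([1, L, P, H, L*P, L*H, P*H, L*L, P*P, H*H,
                         P*L*H, L**3, L*P*P, L*H*H, L*L*P, P**3,
                         P*H*H, L*L*H, P*P*H, H**3])

    def rpc_row(P, L, H, num, den):
        # one image coordinate = ratio of two cubic polynomials
        t = cubic_terms(P, L, H)
        return (num @ t) / (den @ t)

    num = np.zeros(20); num[2] = 1.0   # placeholder: row ~ latitude term
    den = np.zeros(20); den[0] = 1.0   # placeholder: unit denominator
    print(rpc_row(0.2, -0.1, 0.05, num, den))
    ```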

  9. Curved mesh generation and mesh refinement using Lagrangian solid mechanics

    SciTech Connect

    Persson, P.-O.; Peraire, J.

    2008-12-31

    We propose a method for generating well-shaped curved unstructured meshes using a nonlinear elasticity analogy. The geometry of the domain to be meshed is represented as an elastic solid. The undeformed geometry is the initial mesh of linear triangular or tetrahedral elements. The external loading results from prescribing a boundary displacement to be that of the curved geometry, and the final configuration is determined by solving for the equilibrium configuration. The deformations are represented using piecewise polynomials within each element of the original mesh. When the mesh is sufficiently fine to resolve the solid deformation, this method guarantees non-intersecting elements even for highly distorted or anisotropic initial meshes. We describe the method and the solution procedures, and we show a number of examples of two- and three-dimensional simplex meshes with curved boundaries. We also demonstrate how to use the technique for local refinement of non-curved meshes in the presence of curved boundaries.

  10. Software abstractions and computational issues in parallel structured adaptive mesh methods for electronic structure calculations

    SciTech Connect

    Kohn, S.; Weare, J.; Ong, E.; Baden, S.

    1997-05-01

    We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where they are most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradient with FAC multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.

  11. wARP: improvement and extension of crystallographic phases by weighted averaging of multiple-refined dummy atomic models.

    PubMed

    Perrakis, A; Sixma, T K; Wilson, K S; Lamzin, V S

    1997-07-01

    wARP is a procedure that substantially improves crystallographic phases (and subsequently electron-density maps) as an additional step after density-modification methods such as solvent flattening and averaging. The initial phase set is used to create a number of dummy atom models which are subjected to least-squares or maximum-likelihood refinement and iterative model updating in an automated refinement procedure (ARP). Averaging of the phase sets calculated from the refined output models and weighting of structure factors by their similarity to an average vector results in a phase set that improves and extends the initial phases substantially. An important requirement is that the native data have a maximum resolution beyond approximately 2.4 A. The wARP procedure shortens the time-consuming step of model building in crystallographic structure determination and helps to prevent the introduction of errors.

  12. X-ray structure refinement using aspherical atomic density functions obtained from quantum-mechanical calculations.

    PubMed

    Jayatilaka, Dylan; Dittrich, Birger

    2008-05-01

    An approach is outlined for X-ray structure refinement using aspherical atomic density fragments obtained by Hirshfeld partitioning of quantum-mechanical molecular electron densities. Results are presented for crystal structure refinements of urea and benzene using these 'Hirshfeld atoms'. With this procedure, the quantum-mechanical non-spherical electron density is taken into account in the structural model, based on the conformation found in the crystal. Contrary to the current consensus in structure refinement, the anisotropic displacement parameters of H atoms obtained from neutron diffraction measurements can be reproduced simply from a least-squares fit using the Hirshfeld atoms derived from the BLYP level of theory, including a simple point-charge model to treat the crystal environment.
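
    The partitioning behind 'Hirshfeld atoms' assigns each atom the share w_A(r) = rho_A(r) / sum_B rho_B(r) of the total density, where the rho_A are spherical free-atom densities. The sketch below uses Gaussians as stand-ins for those densities on a 1-D grid.

    ```python
    import numpy as np

    r = np.linspace(-4.0, 4.0, 801)          # 1-D grid through two "atoms"
    centers, widths = [-0.7, 0.7], [0.5, 0.8]
    rho_free = [np.exp(-((r - c) / w) ** 2) for c, w in zip(centers, widths)]
    promolecule = sum(rho_free)              # superposition of free atoms

    # Hirshfeld weights: each atom's share of the total density at every point
    weights = [rho / promolecule for rho in rho_free]
    rho_total = 1.2 * promolecule            # stand-in molecular density
    rho_atoms = [w * rho_total for w in weights]   # aspherical atom densities
    ```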

  13. Rodent laparoscopy: refinement for rodent drug studies and model development, and monitoring of neoplastic, inflammatory and metabolic diseases.

    PubMed

    Baran, Szczepan W; Perret-Gentil, Marcel I; Johnson, Elizabeth J; Miedel, Emily L; Kehler, James

    2011-10-01

    The refinement of surgical techniques represents a key opportunity to improve the welfare of laboratory rodents, while meeting legal and ethical obligations. Current methods used for monitoring intra-abdominal disease progression in rodents usually involve euthanasia at various time-points for end of study, one-time individual tissue collections. Most rodent organ tumour models are developed by the introduction of tumour cells via laparotomy or via ultrasound-guided indirect visualization. Ischaemic rodent models are often generated using laparotomies. This approach requires a high number of rodents, and in some instances introduces high degrees of morbidity and mortality, thereby increasing study variability and expense. Most importantly, most laparotomies do not promote the highest level of rodent welfare. Recent improvements in laparoscopic equipment and techniques have enabled the adaptation of laparoscopy for rodent procedures. Laparoscopy, which is considered the gold standard for many human abdominal procedures, allows for serial biopsy collections from the same animal, results in decreased pain and tissue trauma as well as quicker postsurgical recovery, and preserves immune function in comparison to the same procedures performed by laparotomy. Laparoscopy improves rodent welfare, decreases inter-animal variability, thereby reducing the number of required animals, allows for the replacement of larger species, decreases expense and improves data yield. This review article compares rodent laparotomy and laparoscopic surgical methods, and describes the utilization of laparoscopy for the development of cancer models and assessment of disease progression to improve data collection and animal welfare. In addition, currently available rodent laparoscopic equipment and instrumentation are presented.

  14. HUMAN RELIABILITY ANALYSIS FOR COMPUTERIZED PROCEDURES

    SciTech Connect

    Ronald L. Boring; David I. Gertman; Katya Le Blanc

    2011-09-01

    This paper provides a characterization of human reliability analysis (HRA) issues for computerized procedures in nuclear power plant control rooms. It is beyond the scope of this paper to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper provides a review of HRA as applied to traditional paper-based procedures, followed by a discussion of what specific factors should additionally be considered in HRAs for computerized procedures. Performance shaping factors and failure modes unique to computerized procedures are highlighted. Since there is no definitive guide to HRA for paper-based procedures, this paper also serves to clarify the existing guidance on paper-based procedures before delving into the unique aspects of computerized procedures.

  15. Developmental refinement of cortical systems for speech and voice processing.

    PubMed

    Bonte, Milene; Ley, Anke; Scharke, Wolfgang; Formisano, Elia

    2016-03-01

    Development typically leads to optimized and adaptive neural mechanisms for the processing of voice and speech. In this fMRI study we investigated how this adaptive processing reaches its mature efficiency by examining the effects of task, age and phonological skills on cortical responses to voice and speech in children (8-9 years), adolescents (14-15 years) and adults. Participants listened to vowels (/a/, /i/, /u/) spoken by different speakers (boy, girl, man) and performed delayed-match-to-sample tasks on vowel and speaker identity. Across age groups, similar behavioral accuracy and comparable sound evoked auditory cortical fMRI responses were observed. Analysis of task-related modulations indicated a developmental enhancement of responses in the (right) superior temporal cortex during the processing of speaker information. This effect was most evident through an analysis based on individually determined voice sensitive regions. Analysis of age effects indicated that the recruitment of regions in the temporal-parietal cortex and posterior cingulate/cingulate gyrus decreased with development. Beyond age-related changes, the strength of speech-evoked activity in left posterior and right middle superior temporal regions significantly scaled with individual differences in phonological skills. Together, these findings suggest a prolonged development of the cortical functional network for speech and voice processing. This development includes a progressive refinement of the neural mechanisms for the selection and analysis of auditory information relevant to the ongoing behavioral task.

  16. Crystallization in lactose refining-a review.

    PubMed

    Wong, Shin Yee; Hartel, Richard W

    2014-03-01

    In the dairy industry, crystallization is an important separation process used in the refining of lactose from whey solutions. In the refining operation, lactose crystals are separated from the whey solution through nucleation, growth, and/or aggregation. The rate of crystallization is determined by the combined effect of crystallizer design, processing parameters, and impurities on the kinetics of the process. This review summarizes studies on lactose crystallization, including the mechanism, theory of crystallization, and the impact of various factors affecting the crystallization kinetics. In addition, an overview of the industrial crystallization operation highlights the problems faced by the lactose manufacturer. The approaches that are beneficial to the lactose manufacturer for process optimization or improvement are summarized in this review. Over the years, much knowledge has been acquired through extensive research. However, the industrial crystallization process is still far from optimized. Therefore, future effort should focus on transferring the new knowledge and technology to the dairy industry.

  17. Dinosaurs can fly -- High performance refining

    SciTech Connect

    Treat, J.E.

    1995-09-01

    High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the products markets one serves; and the most likely drivers and direction of future market forces. The author discusses all three points, then describes measuring the performance of the company. To become a true high performance refiner often involves redesigning the organization as well as the business processes. The author discusses such redesigning. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.

  18. The Refinement of Multi-Agent Systems

    NASA Astrophysics Data System (ADS)

    Aştefănoaei, L.; de Boer, F. S.

    This chapter introduces an encompassing theory of refinement which supports a top-down methodology for designing multi-agent systems. We present a general modelling framework where we identify different abstraction levels of BDI agents. On the one hand, at a higher level of abstraction we introduce the language BUnity as a way to specify “what” an agent can do. On the other hand, at a more concrete layer we introduce the language BUpL as implementing not only what an agent can do but also “when” the agent can do it. At this stage of individual agent design, refinement is understood as trace inclusion. Having the traces of an implementation included in the traces of a given specification means that the implementation is correct with respect to the specification.
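
    Refinement as trace inclusion can be checked directly for small finite models: every action sequence the implementation can perform must also be possible in the specification. The sketch below does this with a joint search over implementation states paired with sets of specification states; it is a generic construction, not the chapter's formalism.

    ```python
    from collections import deque

    def traces_included(impl, spec, impl_init, spec_init):
        """impl/spec: dict state -> list of (action, next_state)."""
        seen, queue = set(), deque([(impl_init, frozenset([spec_init]))])
        while queue:
            s, specs = queue.popleft()
            if (s, specs) in seen:
                continue
            seen.add((s, specs))
            for action, t in impl.get(s, []):
                nxt = frozenset(q2 for q in specs
                                for a, q2 in spec.get(q, []) if a == action)
                if not nxt:
                    return False   # impl performs an action spec cannot
                queue.append((t, nxt))
        return True

    spec = {"s0": [("a", "s0"), ("b", "s0")]}          # allows any a/b trace
    impl = {"i0": [("a", "i1")], "i1": [("b", "i0")]}  # strictly alternates
    print(traces_included(impl, spec, "i0", "s0"))     # True: impl refines spec
    ```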

  19. Automata Learning with Automated Alphabet Abstraction Refinement

    NASA Astrophysics Data System (ADS)

    Howar, Falk; Steffen, Bernhard; Merten, Maik

    Abstraction is the key when learning behavioral models of realistic systems, but it is also the cause of a major problem: the introduction of non-determinism. In this paper, we introduce a method for refining a given abstraction to automatically regain a deterministic behavior on-the-fly during the learning process. Thus the control over abstraction becomes part of the learning process, with the effect that detected non-determinism does not lead to failure, but to a dynamic alphabet abstraction refinement. Like automata learning itself, this method in general is neither sound nor complete, but it also enjoys similar convergence properties even for infinite systems as long as the concrete system itself behaves deterministically, as illustrated with a concrete example.

  20. Using supercritical fluids to refine hydrocarbons

    SciTech Connect

    Yarbro, Stephen Lee

    2014-11-25

    This is a method to reactively refine hydrocarbons, such as heavy oils with API gravities of less than 20 degrees and bitumen-like hydrocarbons with viscosities greater than 1000 cp at standard temperature and pressure, using a selected fluid at supercritical conditions. The reaction portion of the method delivers lighter weight, more volatile hydrocarbons to an attached contacting device that operates in mixed subcritical or supercritical modes. This separates the reaction products into portions that are viable for use or sale without further conventional refining and hydro-processing techniques. The method produces valuable products with fewer processing steps and lower costs, increases worker safety due to less processing and handling, allows greater opportunity for new oil field development and subsequent positive economic impact, and reduces the carbon dioxide emissions and wastes typical of conventional refineries.

  1. The indirect electrochemical refining of lunar ores

    NASA Technical Reports Server (NTRS)

    Semkow, Krystyna W.; Sammells, Anthony F.

    1987-01-01

    Recent work performed on an electrolytic cell is reported which addresses the implicit limitations in various approaches to refining lunar ores. The cell uses an oxygen vacancy conducting stabilized zirconia solid electrolyte to effect separation between a molten salt catholyte compartment where alkali metals are deposited, and an oxygen-evolving anode of composition La(0.89)Sr(0.1)MnO3. The cell configuration is shown and discussed along with a polarization curve and a steady-state current-voltage curve. In a practical cell, cathodically deposited liquid lithium would be continuously removed from the electrolytic cell and used as a valuable reducing agent for ore refining under lunar conditions. Oxygen would be indirectly electrochemically extracted from lunar ores for breathing purposes.

  2. Adapted Canoeing for the Handicapped.

    ERIC Educational Resources Information Center

    Frith, Greg H.; Warren, L. D.

    1984-01-01

    Safety as well as instructional recommendations are offered for adapting canoeing as a recreational activity for handicapped students. Major steps of the instructional program feature orientation to the water and canoe, entry and exit techniques, and mobility procedures. (CL)

  3. Substance abuse in the refining industry

    SciTech Connect

    Little, A. Jr. ); Ross, J.K. ); Lavorerio, R. ); Richards, T.A. )

    1989-01-01

    In order to provide some background for the NPRA Annual Meeting Management Session panel discussion on Substance Abuse in the Refining and Petrochemical Industries, NPRA distributed a questionnaire to member companies requesting information regarding the status of their individual substance abuse policies. The questionnaire was designed to identify general trends in the industry. The aggregate responses to the survey are summarized in this paper, as background for the Substance Abuse panel discussions.

  4. Using Induction to Refine Information Retrieval Strategies

    NASA Technical Reports Server (NTRS)

    Baudin, Catherine; Pell, Barney; Kedar, Smadar

    1994-01-01

    Conceptual information retrieval systems use structured document indices, domain knowledge and a set of heuristic retrieval strategies to match user queries with a set of indices describing the document's content. Such retrieval strategies increase the set of relevant documents retrieved (increase recall), but at the expense of returning additional irrelevant documents (decrease precision). Usually in conceptual information retrieval systems this tradeoff is managed by hand and with difficulty. This paper discusses ways of managing this tradeoff by the application of standard induction algorithms to refine the retrieval strategies in an engineering design domain. We gathered examples of query/retrieval pairs during the system's operation using feedback from a user on the retrieved information. We then fed these examples to the induction algorithm and generated decision trees that refine the existing set of retrieval strategies. We found that (1) induction improved the precision on a set of queries generated by another user, without a significant loss in recall, and (2) in an interactive mode, the decision trees pointed out flaws in the retrieval and indexing knowledge and suggested ways to refine the retrieval strategies.
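
    A minimal sketch of the induction step, assuming scikit-learn is available: query/retrieval pairs labeled by user relevance feedback train a decision tree that suggests when a retrieval strategy should fire. The features here are hypothetical.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # features per retrieval: [strategy id, query/index term overlap,
    # index depth]; label: did the user judge the result relevant?
    X = np.array([[0, 0.9, 1], [0, 0.2, 3], [1, 0.8, 2],
                  [1, 0.1, 1], [0, 0.7, 2], [1, 0.3, 3]])
    y = np.array([1, 0, 1, 0, 1, 0])

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    # the induced tree doubles as a human-readable refinement of the
    # retrieval strategies
    print(export_text(tree, feature_names=["strategy", "overlap", "depth"]))
    ```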

  5. Humanoid Mobile Manipulation Using Controller Refinement

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric

    2006-01-01

    An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. In this paper, it is proposed that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.

  6. Humanoid Mobile Manipulation Using Controller Refinement

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric; Brock, Oliver

    2006-01-01

    An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. This paper proposes that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.

  7. A space–angle DGFEM approach for the Boltzmann radiation transport equation with local angular refinement

    SciTech Connect

    Kópházi, József; Lathouwers, Danny

    2015-09-15

    In this paper a new method for the discretization of the radiation transport equation is presented, based on a discontinuous Galerkin method in space and angle that allows for local refinement in angle where any spatial element can support its own angular discretization. To cope with the discontinuous spatial nature of the solution, a generalized Riemann procedure is required to distinguish between incoming and outgoing contributions of the numerical fluxes. A new consistent framework is introduced that is based on the solution of a generalized eigenvalue problem. The resulting numerical fluxes for the various possible cases where neighboring elements have an equal, higher or lower level of refinement in angle are derived based on tensor algebra and the resulting expressions have a very clear physical interpretation. The choice of discontinuous trial functions not only has the advantage of easing local refinement, it also facilitates the use of efficient sweep-based solvers due to decoupling of unknowns on a large scale thereby approaching the efficiency of discrete ordinates methods with local angular resolution. The approach is illustrated by a series of numerical experiments. Results show high orders of convergence for the scalar flux on angular refinement. The generalized Riemann upwinding procedure leads to stable and consistent solutions. Further the sweep-based solver performs well when used as a preconditioner for a Krylov method.

  8. An adaptive level set method

    SciTech Connect

    Milne, Roger Brent

    1995-12-01

    This thesis describes a new method for the numerical solution of partial differential equations of the parabolic type on an adaptively refined mesh in two or more spatial dimensions. The method is motivated and developed in the context of the level set formulation for the curvature dependent propagation of surfaces in three dimensions. In that setting, it realizes the multiple advantages of decreased computational effort, localized accuracy enhancement, and compatibility with problems containing a range of length scales.

  9. Prism Adaptation in Schizophrenia

    ERIC Educational Resources Information Center

    Bigelow, Nirav O.; Turner, Beth M.; Andreasen, Nancy C.; Paulsen, Jane S.; O'Leary, Daniel S.; Ho, Beng-Choon

    2006-01-01

    The prism adaptation test examines procedural learning (PL) in which performance facilitation occurs with practice on tasks without the need for conscious awareness. Dynamic interactions between frontostriatal cortices, basal ganglia, and the cerebellum have been shown to play key roles in PL. Disruptions within these neural networks have also…

  10. Adaptive Sampling Designs.

    ERIC Educational Resources Information Center

    Flournoy, Nancy

    Designs for sequential sampling procedures that adapt to cumulative information are discussed. A familiar illustration is the play-the-winner rule in which there are two treatments; after a random start, the same treatment is continued as long as each successive subject registers a success. When a failure occurs, the other treatment is used until…
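
    The play-the-winner rule mentioned above is easy to state in code: after a random start, keep assigning the current treatment while it succeeds, and switch after each failure. The success probabilities below are illustrative.

    ```python
    import random

    def play_the_winner(p_success, n_subjects, seed=0):
        rng = random.Random(seed)
        arm = rng.randrange(2)                 # random start
        assignments, outcomes = [], []
        for _ in range(n_subjects):
            success = rng.random() < p_success[arm]
            assignments.append(arm)
            outcomes.append(success)
            if not success:
                arm = 1 - arm                  # switch only after a failure
        return assignments, outcomes

    arms, wins = play_the_winner([0.7, 0.4], 20)
    print(arms)   # the better arm (0) tends to be assigned more often
    ```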

  11. Grain Refinement of Permanent Mold Cast Copper Base Alloys

    SciTech Connect

    M. Sadayappan; J. P. Thomson; M. Elboujdaini; G. Ping Gu; M. Sahoo

    2005-04-01

    Grain refinement is a well established process for many cast and wrought alloys. The mechanical properties of various alloys could be enhanced by reducing the grain size. Refinement is also known to improve casting characteristics such as fluidity and hot tearing. Grain refinement of copper-base alloys is not widely used, especially in sand casting process. However, in permanent mold casting of copper alloys it is now common to use grain refinement to counteract the problem of severe hot tearing which also improves the pressure tightness of plumbing components. The mechanism of grain refinement in copper-base alloys is not well understood. The issues to be studied include the effect of minor alloy additions on the microstructure, their interaction with the grain refiner, effect of cooling rate, and loss of grain refinement (fading). In this investigation, efforts were made to explore and understand grain refinement of copper alloys, especially in permanent mold casting conditions.

  12. Dental Procedures.

    PubMed

    Ramponi, Denise R

    2016-01-01

    Dental problems are a common complaint in emergency departments in the United States. A wide variety of dental issues are addressed in emergency department visits, such as dental caries, loose teeth, dental trauma, gingival infections, and dry socket syndrome. Review of the most common dental blocks and dental procedures will allow the practitioner to make the patient more comfortable and reduce the amount of analgesia the patient will need upon discharge. Familiarity with the dental equipment and with tooth and mouth anatomy will help prepare the practitioner to perform these dental procedures.

  13. Procedures Used in Adjusting the Field Studies Sample.

    ERIC Educational Resources Information Center

    Bertram, Charles L.; And Others

    This paper describes procedures used to stratify and refine an original sample of 951 families with preschool children living in the Appalachia area, for a study to provide information on the target audience for Appalachia Educational Laboratory's Home-Oriented Preschool Education Program (HOPE). The procedure was to force-match the sample…

  14. Refined Monte Carlo method for simulating angle-dependent partial frequency redistributions

    NASA Technical Reports Server (NTRS)

    Lee, J.-S.

    1982-01-01

    A refined algorithm for generating emission frequencies from angle-dependent partial frequency redistribution functions R_II and R_III is described. The improved algorithm has as its basis a 'rejection' technique that, for absorption frequencies x less than 5, involves no approximations. The resulting procedure is found to be essential for effective studies of radiative transfer in optically thick or temperature-varying media involving angle-dependent partial frequency redistributions.
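
    The 'rejection' technique referred to here is standard rejection sampling; a minimal generic sketch follows, with a stand-in target density (the actual R_II and R_III redistribution functions are not reproduced).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def rejection_sample(target_pdf, envelope_max, lo, hi, n):
        """Draw n samples from target_pdf on [lo, hi] by rejection.

        Proposals are uniform on [lo, hi]; a proposal x is accepted with
        probability target_pdf(x) / envelope_max, so envelope_max must be
        at least the maximum of target_pdf on the interval.
        """
        samples = []
        while len(samples) < n:
            x = rng.uniform(lo, hi)
            if rng.uniform(0.0, envelope_max) < target_pdf(x):
                samples.append(x)
        return np.array(samples)

    # Stand-in emission profile, not the actual R_II or R_III functions.
    profile = lambda x: np.exp(-x ** 2)
    freqs = rejection_sample(profile, 1.0, -5.0, 5.0, 1000)
    print(freqs.mean(), freqs.std())
    ```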

  15. The blind leading the blind: Mutual refinement of approximate theories

    NASA Technical Reports Server (NTRS)

    Kedar, Smadar T.; Bresina, John L.; Dent, C. Lisa

    1991-01-01

    The mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models which cause the failures. The approach focuses on using one approximate model to refine another.

  16. 48 CFR 208.7304 - Refined precious metals.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    48 CFR 208.7304 (2010-10-01), Government-Owned Precious Metals: See PGI 208.7304 for a list of refined precious metals managed by DSCP.

  17. 48 CFR 208.7304 - Refined precious metals.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    48 CFR 208.7304 (2013-10-01), Government-Owned Precious Metals: See PGI 208.7304 for a list of refined precious metals managed by DSCP.

  18. 48 CFR 208.7304 - Refined precious metals.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    48 CFR 208.7304 (2012-10-01), Government-Owned Precious Metals: See PGI 208.7304 for a list of refined precious metals managed by DSCP.

  19. 48 CFR 208.7304 - Refined precious metals.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    48 CFR 208.7304 (2011-10-01), Government-Owned Precious Metals: See PGI 208.7304 for a list of refined precious metals managed by DSCP.

  20. 48 CFR 208.7304 - Refined precious metals.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    48 CFR 208.7304 (2014-10-01), Government-Owned Precious Metals: See PGI 208.7304 for a list of refined precious metals managed by DSCP.

  1. Visual Adaptation

    PubMed Central

    Webster, Michael A.

    2015-01-01

    Sensory systems continuously mold themselves to the widely varying contexts in which they must operate. Studies of these adaptations have played a long and central role in vision science. In part this is because the specific adaptations remain a powerful tool for dissecting vision, by exposing the mechanisms that are adapting. That is, “if it adapts, it's there.” Many insights about vision have come from using adaptation in this way, as a method. A second important trend has been the realization that the processes of adaptation are themselves essential to how vision works, and thus are likely to operate at all levels. That is, “if it's there, it adapts.” This has focused interest on the mechanisms of adaptation as the target rather than the probe. Together both approaches have led to an emerging insight of adaptation as a fundamental and ubiquitous coding strategy impacting all aspects of how we see. PMID:26858985

  2. Hierarchy-Direction Selective Approach for Locally Adaptive Sparse Grids

    SciTech Connect

    Stoyanov, Miroslav K

    2013-09-01

    We consider the problem of multidimensional adaptive hierarchical interpolation. We use sparse grid points and functions that are induced from a one-dimensional hierarchical rule via tensor products. The classical locally adaptive sparse grid algorithm uses an isotropic refinement from the coarser to the denser levels of the hierarchy. However, the multidimensional hierarchy provides a more complex structure that allows for various anisotropic and hierarchy-selective refinement techniques. We consider the more advanced refinement techniques and apply them to a number of simple test functions chosen to demonstrate the various advantages and disadvantages of each method. While there is no refinement scheme that is optimal for all functions, the fully adaptive family-direction-selective technique is usually more stable and requires fewer samples.
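
    A minimal sketch of the underlying mechanism in one dimension, assuming the usual hat-function hierarchy (boundary nodes omitted for brevity): a node is refined only where its hierarchical surplus exceeds a tolerance, which is the local adaptivity that the techniques above steer per direction in higher dimensions.

    ```python
    import numpy as np

    def hat(x, center, h):
        """Hat basis function of half-width h centered at `center`."""
        return max(0.0, 1.0 - abs(x - center) / h)

    def adaptive_interp(f, tol=1e-3, max_level=12):
        """1-D locally adaptive hierarchical interpolation.

        Each node carries a hierarchical surplus: f minus the interpolant
        built from coarser levels, evaluated at the node. Children are
        spawned only where the surplus exceeds tol; tensor products of
        this rule give the multidimensional grids discussed above.
        """
        nodes = []                         # (center, half-width, surplus)

        def u(x):
            return sum(s * hat(x, c, h) for c, h, s in nodes)

        active, level = [(0.5, 0.5)], 0    # single root node on [0, 1]
        while active and level <= max_level:
            nxt = []
            for c, h in active:
                s = f(c) - u(c)
                nodes.append((c, h, s))
                if abs(s) > tol:           # refine locally: two children
                    nxt += [(c - h / 2, h / 2), (c + h / 2, h / 2)]
            active, level = nxt, level + 1
        return nodes, u

    nodes, u = adaptive_interp(lambda x: np.exp(-50 * (x - 0.3) ** 2))
    print(len(nodes), abs(u(0.3) - 1.0))   # few nodes, small error at peak
    ```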

  3. Adaptive numerical methods for partial differential equations

    SciTech Connect

    Colella, P.

    1995-07-01

    This review describes a structured approach to adaptivity. The Adaptive Mesh Refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.
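
    A toy illustration of the flagging step, assuming a simple undivided-difference indicator as the "generalized error criterion" (the actual Berger algorithms additionally cluster flags into rectangular patches in 2-D and 3-D):

    ```python
    import numpy as np

    def flag_cells(u, tol):
        """Flag cells whose error indicator exceeds tol; an undivided
        difference stands in for the generalized error criterion."""
        return np.abs(np.diff(u, append=u[-1])) > tol

    def patches(flags):
        """Group contiguous flagged cells into (start, end) index ranges,
        the regions that would receive an overlaid finer grid."""
        out, start = [], None
        for i, f in enumerate(flags):
            if f and start is None:
                start = i
            if not f and start is not None:
                out.append((start, i))
                start = None
        if start is not None:
            out.append((start, len(flags)))
        return out

    x = np.linspace(0.0, 1.0, 101)
    u = np.tanh(50.0 * (x - 0.5))          # steep front near x = 0.5
    print(patches(flag_cells(u, 0.1)))     # one patch around the front
    ```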

  4. Refining the asteroid taxonomy by polarimetric observations

    NASA Astrophysics Data System (ADS)

    Belskaya, I. N.; Fornasier, S.; Tozzi, G. P.; Gil-Hutton, R.; Cellino, A.; Antonyuk, K.; Krugly, Yu. N.; Dovgopol, A. N.; Faggi, S.

    2017-03-01

    We present new results of polarimetric observations of 15 main belt asteroids of different composition. By merging new and published data we determined polarimetric parameters characterizing individual asteroids and mean values of the same parameters characterizing different taxonomic classes. The majority of asteroids show polarimetric phase curves close to the average curve of the corresponding class. We show that using polarimetric data it is possible to refine asteroid taxonomy and derive a polarimetric classification for 283 main belt asteroids. Polarimetric observations of asteroid (21) Lutetia are found to exhibit possible variations of the position angle of the polarization plane over the surface.

  5. Formal language theory: refining the Chomsky hierarchy.

    PubMed

    Jäger, Gerhard; Rogers, James

    2012-07-19

    The first part of this article gives a brief overview of the four levels of the Chomsky hierarchy, with a special emphasis on context-free and regular languages. It then recapitulates the arguments why neither regular nor context-free grammar is sufficiently expressive to capture all phenomena in natural language syntax. In the second part, two refinements of the Chomsky hierarchy are reviewed, which are both relevant to extant research in cognitive science: the mildly context-sensitive languages (which are located between context-free and context-sensitive languages), and the sub-regular hierarchy (which distinguishes several levels of complexity within the class of regular languages).

  6. WASP-41b: Refined Physical Properties

    NASA Astrophysics Data System (ADS)

    Vaňko, M.; Pribulla, T.; Tan, T. G.; Parimucha, Š.; Evans, P.; Mašek, M.

    2015-07-01

    We present the first follow-up study of the transiting system WASP-41 after its discovery in 2011. Our main goal is to refine the physical parameters of the system and to search for possible signs of transit timing variations. The observations used for the analysis were taken from the public archive Exoplanet Transit Database (ETD). The Safronov number and equilibrium temperature of WASP-41b indicate that it belongs to the so-called Class I. No transit timing variations (TTV) were detected.

  7. Refinement Of Hexahedral Cells In Euler Flow Computations

    NASA Technical Reports Server (NTRS)

    Melton, John E.; Cappuccio, Gelsomina; Thomas, Scott D.

    1996-01-01

    Topologically Independent Grid, Euler Refinement (TIGER) computer program solves Euler equations of three-dimensional, unsteady flow of inviscid, compressible fluid by numerical integration on unstructured hexahedral coordinate grid refined where necessary to resolve shocks and other details. Hexahedral cells subdivided, each into eight smaller cells, as needed to refine computational grid in regions of high flow gradients. Grid Interactive Refinement and Flow-Field Examination (GIRAFFE) computer program written in conjunction with TIGER program to display computed flow-field data and to assist researcher in verifying specified boundary conditions and refining grid.

  8. Adaptive Management

    EPA Science Inventory

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...

  9. Free kick instead of cross-validation in maximum-likelihood refinement of macromolecular crystal structures

    SciTech Connect

    Pražnikar, Jure; Turk, Dušan

    2014-12-01

    The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R_free or may leave it out completely.
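
    The random displacement at the heart of the free-kick idea is simple to sketch. The snippet below is a toy stand-in, not the PHENIX implementation, and the kick size is an illustrative assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def free_kick(coords, sigma=0.3):
        """Randomly displace atomic coordinates (Angstroms) to simulate
        model error; phase-error estimates are then derived from this
        perturbed model and the work set, with no test set required.
        The kick size sigma is an illustrative choice.
        """
        return coords + rng.normal(scale=sigma, size=coords.shape)

    # Toy three-atom 'model'; real use would perturb a full macromolecule.
    model = np.array([[0.0, 0.0, 0.0],
                      [1.5, 0.0, 0.0],
                      [1.5, 1.5, 0.0]])
    print(np.linalg.norm(free_kick(model) - model, axis=1))  # kick sizes
    ```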

  10. Increased delignification by white rot fungi after pressure refining Miscanthus.

    PubMed

    Baker, Paul W; Charlton, Adam; Hale, Mike D C

    2015-01-01

    Pressure refining, a pulp-making process that separates the fibres of lignocellulosic materials, deposits lignin granules on the surface of the fibres that could give lignin-degrading enzymes increased access. Three different white rot fungi were grown on pressure refined (at 6 bar and 8 bar) and milled Miscanthus. Growth after 28 days showed the highest biomass losses on milled Miscanthus compared to pressure refined Miscanthus. Ceriporiopsis subvermispora caused a significantly higher proportion of lignin removal when grown on 6 bar pressure refined Miscanthus compared to growth on 8 bar pressure refined Miscanthus and milled Miscanthus. RM22b followed a similar trend but Phlebiopsis gigantea SPLog6 did not. Conversely, with C. subvermispora growing on pressure refined Miscanthus, the proportion of cellulose increased. These results show that two of the three white rot fungi used in this study achieved higher delignification on pressure refined Miscanthus than on milled Miscanthus.

  11. Rapid Glass Refiner Development Program, Final report

    SciTech Connect

    1995-02-20

    A rapid glass refiner (RGR) technology which could be applied to both conventional and advanced glass melting systems would significantly enhance the productivity and the competitiveness of the glass industry in the United States. Therefore, Vortec Corporation, with the support of the US Department of Energy (US DOE) under Cooperative Agreement No. DE-FC07-90ID12911, conducted a research and development program on a unique and innovative approach to rapid glass refining. To provide focus for this research effort, container glass was the primary target among the principal glass types, based on its market size and potential for significant energy savings. Container glass products represent the largest segment of the total glass industry, accounting for 60% of the tonnage produced and over 40% of the annual energy consumption of 232 trillion Btu/yr. Projections of energy consumption and the market penetration of advanced melting and fining into the container glass industry yield a potential energy savings of 7.9 trillion Btu/yr by the year 2020.

  12. Evolutionary Optimization of a Geometrically Refined Truss

    NASA Technical Reports Server (NTRS)

    Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

    Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: genetic algorithms and differential evolution to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.

  13. Molecular refinement of gibbon genome rearrangements.

    PubMed

    Roberto, Roberta; Capozzi, Oronzo; Wilson, Richard K; Mardis, Elaine R; Lomiento, Mariana; Tuzun, Eray; Cheng, Ze; Mootnick, Alan R; Archidiacono, Nicoletta; Rocchi, Mariano; Eichler, Evan E

    2007-02-01

    The gibbon karyotype is known to be extensively rearranged when compared to the human and to the ancestral primate karyotype. By combining a bioinformatics (paired-end sequence analysis) approach and a molecular cytogenetics approach, we have refined the synteny block arrangement of the white-cheeked gibbon (Nomascus leucogenys, NLE) with respect to the human genome. We provide the first detailed clone framework map of the gibbon genome and refine the location of 86 evolutionary breakpoints to <1 Mb resolution. An additional 12 breakpoints, mapping primarily to centromeric and telomeric regions, were mapped to approximately 5 Mb resolution. Our combined FISH and BES analysis indicates that we have effectively subcloned 49 of these breakpoints within NLE gibbon BAC clones, mapped to a median resolution of 79.7 kb. Interestingly, many of the intervals associated with translocations were gene-rich, including some genes associated with normal skeletal development. Comparisons of NLE breakpoints with those of other gibbon species reveal variability in the position, suggesting that chromosomal rearrangement has been a longstanding property of this particular ape lineage. Our data emphasize the synergistic effect of combining computational genomics and cytogenetics and provide a framework for ultimate sequence and assembly of the gibbon genome.

  14. Adaptation to blur

    NASA Astrophysics Data System (ADS)

    Webster, Michael A.; Webster, Shernaaz M.; MacDonald, Jennifer; Bahradwadj, Shrikant R.

    2001-06-01

    Blur is an intrinsic property of the retinal image that can vary substantially in natural viewing. We examined how processes of contrast adaptation might adjust the visual system to regulate the perception of blur. Observers viewed a blurred or sharpened image for 2-5 minutes, and then judged the apparent focus of a series of 0.5-sec test images interleaved with 6-sec periods of readaptation. A 2AFC staircase procedure was used to vary the amplitude spectrum of successive tests to find the image that appeared in focus. Adapting to a blurred image causes a physically focused image to appear too sharp. Opposite after-effects occur for sharpened adapting images. Pronounced biases were observed over a wide range of magnitudes of adapting blur, and were similar for different types of blur. After-effects were also similar for different classes of images but were generally weaker when the adapting and test stimuli were different images, showing that the adaptation is not adjusting simply to blur per se. These adaptive adjustments may strongly influence the perception of blur in normal vision and how it changes with refractive errors.

  15. Refinement and evaluation of the Massachusetts firm-yield estimator model version 2.0

    USGS Publications Warehouse

    Levin, Sara B.; Archfield, Stacey A.; Massey, Andrew J.

    2011-01-01

    The firm yield is the maximum average daily withdrawal that can be extracted from a reservoir without risk of failure during an extended drought period. Previously developed procedures for determining the firm yield of a reservoir were refined and applied to 38 reservoir systems in Massachusetts, including 25 single- and multiple-reservoir systems that were examined during previous studies and 13 additional reservoir systems. Changes to the firm-yield model include refinements to the simulation methods and input data, as well as the addition of several scenario-testing capabilities. The simulation procedure was adapted to run at a daily time step over a 44-year simulation period, and daily streamflow and meteorological data were compiled for all the reservoirs for input to the model. Another change to the model-simulation methods is the adjustment of the scaling factor used in estimating groundwater contributions to the reservoir. The scaling factor is used to convert the daily groundwater-flow rate into a volume by multiplying the rate by the length of reservoir shoreline that is hydrologically connected to the aquifer. Previous firm-yield analyses used a constant scaling factor that was estimated from the reservoir surface area at full pool. The use of a constant scaling factor caused groundwater flows during periods when the reservoir stage was very low to be overestimated. The constant groundwater scaling factor used in previous analyses was replaced with a variable scaling factor that is based on daily reservoir stage. This change reduced instability in the groundwater-flow algorithms and produced more realistic groundwater-flow contributions during periods of low storage. Uncertainty in the firm-yield model arises from many sources, including errors in input data. The sensitivity of the model to uncertainty in streamflow input data and uncertainty in the stage-storage relation was examined. A series of Monte Carlo simulations were performed on 22 reservoirs
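
    The variable scaling factor can be illustrated with a toy calculation. The linear stage-to-shoreline relation below is an assumption for illustration, not the model's actual geometry, and all names are hypothetical.

    ```python
    def groundwater_inflow(rate_per_m, stage, stage_full, shoreline_full):
        """Daily groundwater volume from a per-unit-shoreline flow rate.

        The constant full-pool shoreline of earlier model versions is
        replaced by a stage-dependent connected shoreline, so low-pool
        days contribute less. The linear stage-to-shoreline relation here
        is an illustrative assumption, not the model's actual geometry.
        """
        frac = max(0.0, min(1.0, stage / stage_full))
        return rate_per_m * shoreline_full * frac

    # 0.05 m^3/day per metre of shoreline, half-full pool, 2 km shoreline:
    print(groundwater_inflow(0.05, 5.0, 10.0, 2000.0))   # -> 50.0 m^3/day
    ```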

  16. A Hybrid Segmentation Framework for Computer-Assisted Dental Procedures

    NASA Astrophysics Data System (ADS)

    Hosntalab, Mohammad; Aghaeizadeh Zoroofi, Reza; Abbaspour Tehrani-Fard, Ali; Shirani, Gholamreza; Reza Asharif, Mohammad

    Teeth segmentation in computed tomography (CT) images is a major and challenging task for various computer-assisted procedures. In this paper, we introduce a hybrid method for quantification of teeth in CT volumetric datasets inspired by our previous experience and anatomical knowledge of teeth and jaws. In this regard, we propose a novel segmentation technique using adaptive thresholding, morphological operations, panoramic re-sampling and a variational level set algorithm. The proposed method consists of several steps as follows: first, we determine the operation region in CT slices. Second, the bony tissues are separated from other tissues by utilizing an adaptive thresholding technique based on 3D pulse-coupled neural networks (PCNN). Third, teeth tissue is classified from other bony tissues by employing panorex lines and anatomical knowledge of teeth in the jaws. In this case, the panorex lines are estimated using Otsu thresholding and mathematical morphology operators. The method then calculates the orthogonal lines corresponding to the panorex lines and panoramically re-samples the dataset. Separation of the upper and lower jaws and initial segmentation of the teeth are performed by employing the integral projections of the panoramic dataset. Based on the above-mentioned procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of each tooth and apply a variational level set to refine the initial teeth boundaries to the final contour. In the last step, a surface rendering algorithm known as marching cubes (MC) is applied for volumetric visualization. The proposed algorithm was evaluated on 30 cases. Segmented images were compared with manually outlined contours. We compared the performance of the segmentation method using ROC analysis against thresholding, watershed and our previous works. The proposed method performed best. Our algorithm also has the advantage of high speed compared to our previous works.
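
    Of the components named above, Otsu thresholding is the most self-contained; a textbook sketch (not the authors' 3D PCNN stage) follows.

    ```python
    import numpy as np

    def otsu_threshold(image, nbins=256):
        """Textbook Otsu threshold: the gray level that maximizes the
        between-class variance of the intensity histogram."""
        hist, edges = np.histogram(image.ravel(), bins=nbins)
        p = hist / hist.sum()
        centers = 0.5 * (edges[:-1] + edges[1:])
        w0 = np.cumsum(p)                     # class-0 probability
        mu = np.cumsum(p * centers)           # cumulative mean
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b = (mu[-1] * w0 - mu) ** 2 / (w0 * (1.0 - w0))
        sigma_b[~np.isfinite(sigma_b)] = 0.0  # guard empty classes
        return centers[np.argmax(sigma_b)]

    rng = np.random.default_rng(7)
    img = np.concatenate([rng.normal(50, 10, 5000),    # soft tissue
                          rng.normal(180, 15, 5000)])  # bone
    print(otsu_threshold(img))                # near the histogram valley
    ```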

  17. Efficient Two-Step Procedures for Locating Transition States of Surface Reactions.

    PubMed

    Nikodem, Astrid; Matveev, Alexei V; Zheng, Bo-Xiao; Rösch, Notker

    2013-01-08

    Using various two-step strategies, we examined how to accurately locate transition states (TS) of reactions using the example of eight reactions at metal surfaces with 14-33 moving atoms. These procedures combined four path-finding methods for locating approximate TS structures (nudged elastic band, standard string, climbing image string, and searching string, using a conjugate gradient or a modified steepest-descent method for optimization and two types of coordinate systems) with subsequent local refinement by two dimer methods. The dimer-Lanczos variant designed for this study required on average 20% fewer gradient calls than the standard dimer method. During the path finding phase, using mixed instead of Cartesian coordinates reduced the numbers of gradient calls on average by an additional 21%, while using a modified steepest-descent method improved that key efficiency criterion on average by 13%. For problematic cases we suggest strategies especially adapted to the problem at hand.

  18. Adaptive management: Chapter 1

    USGS Publications Warehouse

    Allen, Craig R.; Garmestani, Ahjond S.; Allen, Craig R.; Garmestani, Ahjond S.

    2015-01-01

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.

  19. Adaptive mesh strategies for the spectral element method

    NASA Technical Reports Server (NTRS)

    Mavriplis, Catherine

    1992-01-01

    An adaptive spectral method was developed for the efficient solution of time-dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.

  20. Inferential Aspects of Adaptive Allocation Rules.

    ERIC Educational Resources Information Center

    Berry, Donald A.

    In clinical trials, adaptive allocation means that the therapies assigned to the next patient or patients depend on the results obtained thus far in the trial. Although many adaptive allocation procedures have been proposed for clinical trials, few have actually used adaptive assignment, largely because classical frequentist measures of inference…

  1. Refining mimicry: phenotypic variation tracks the local optimum.

    PubMed

    Mérot, Claire; Le Poul, Yann; Théry, Marc; Joron, Mathieu

    2016-07-01

    Müllerian mimicry between chemically defended prey is a textbook example of natural selection favouring phenotypic convergence onto a shared warning signal. Studies of mimicry have concentrated on deciphering the ecological and genetic underpinnings of dramatic switches in mimicry association, producing a well-known mosaic distribution of mimicry patterns across geography. However, little is known about the accuracy of resemblance between natural comimics when the local phenotypic optimum varies. In this study, using analyses of wing shape, pattern and hue, we quantify multimodal phenotypic similarity between butterfly comimics sharing the so-called postman pattern in different localities with varying species composition. We show that subtle but consistent variation between populations of the localized species, Heliconius timareta thelxinoe, enhances resemblance to the abundant comimics which drive the mimicry in each locality. Those results suggest that rarer comimics track the changes in the phenotypic optimum caused by gradual changes in the composition of the mimicry community, providing insights into the process by which intraspecific diversity of mimetic pattern may arise. Furthermore, our results suggest a multimodal evolution of similarity, with coordinated convergence in different features of the phenotype such as wing outline, pattern and hue. Finally, multilocus genotyping allows estimating local hybridization rates between H. timareta and comimic H. melpomene in different populations, raising the hypothesis that mimicry refinement between closely related comimics may be enhanced by adaptive introgression at loci modifying the accuracy of resemblance.

  2. Optimization method for electron beam melting and refining of metals

    NASA Astrophysics Data System (ADS)

    Donchev, Veliko; Vutova, Katia

    2014-03-01

    Pure metals and special alloys obtained by electron beam melting and refining (EBMR) in vacuum, using electron beams as a heating source, have many applications in the nuclear and aerospace industries, electronics, medicine, etc. An analytical optimization problem for the EBMR process based on a mathematical heat model is proposed. The criterion used is the minimization of an integral functional of a partial derivative of the temperature in the metal sample. The investigated technological parameters are the electron beam power, beam radius, the metal casting velocity, etc. The optimization problem is discretized using a non-stationary heat model, a corresponding adapted Pismen-Rekford numerical scheme developed by us, and a multidimensional trapezoidal rule. Thus a discrete optimization problem is built in which the criterion is a function of the technological process parameters. The discrete optimization problem is solved heuristically by a cluster optimization method. Corresponding software for the optimization task has been developed. The proposed optimization scheme can be applied for quality improvement of the pure metals (Ta, Ti, Cu, etc.) produced by the modern and ecologically friendly EBMR process.

  3. Towards solution and refinement of organic crystal structures by fitting to the atomic pair distribution function.

    PubMed

    Prill, Dragica; Juhás, Pavol; Billinge, Simon J L; Schmidt, Martin U

    2016-01-01

    A method towards the solution and refinement of organic crystal structures by fitting to the atomic pair distribution function (PDF) is developed. Approximate lattice parameters and molecular geometry must be given as input. The molecule is generally treated as a rigid body. The positions and orientations of the molecules inside the unit cell are optimized starting from random values. The PDF is obtained from carefully measured X-ray powder diffraction data. The method resembles `real-space' methods for structure solution from powder data, but works with PDF data instead of the diffraction pattern itself. As such it may be used in situations where the organic compounds are not long-range-ordered, are poorly crystalline, or nanocrystalline. The procedure was applied to solve and refine the crystal structures of quinacridone (β phase), naphthalene and allopurinol. In the case of allopurinol it was even possible to successfully solve and refine the structure in P1 with four independent molecules. As an example of a flexible molecule, the crystal structure of paracetamol was refined using restraints for bond lengths, bond angles and selected torsion angles. In all cases, the resulting structures are in excellent agreement with structures from single-crystal data.
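
    The geometric core of a PDF fit is the set of interatomic distances, which a rigid-body move changes. The sketch below computes an unweighted pair-distance histogram as a stand-in; a real PDF calculation additionally weights pairs by scattering factors and models instrumental effects.

    ```python
    import numpy as np

    def pair_distance_histogram(coords, r_max=10.0, dr=0.05):
        """Unweighted histogram of interatomic distances, the geometric
        core of the atomic PDF; a full calculation additionally weights
        pairs by scattering factors and applies instrumental damping."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        d = d[np.triu_indices(len(coords), k=1)]      # unique pairs only
        bins = np.arange(0.0, r_max + dr, dr)
        hist, _ = np.histogram(d, bins=bins)
        return 0.5 * (bins[:-1] + bins[1:]), hist

    # A rigid three-atom 'molecule' at two trial positions; a structure
    # search would score each placement against the measured PDF.
    mol = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0], [2.1, 1.2, 0.0]])
    r, g = pair_distance_histogram(np.vstack([mol, mol + [4.0, 0.0, 0.0]]))
    print(r[g > 0])                                   # distances present
    ```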

  4. Towards solution and refinement of organic crystal structures by fitting to the atomic pair distribution function

    DOE PAGES

    Prill, Dragica; Juhas, Pavol; Billinge, Simon J. L.; ...

    2016-01-01

    In this study, a method towards the solution and refinement of organic crystal structures by fitting to the atomic pair distribution function (PDF) is developed. Approximate lattice parameters and molecular geometry must be given as input. The molecule is generally treated as a rigid body. The positions and orientations of the molecules inside the unit cell are optimized starting from random values. The PDF is obtained from carefully measured X-ray powder diffraction data. The method resembles `real-space' methods for structure solution from powder data, but works with PDF data instead of the diffraction pattern itself. As such it may bemore » used in situations where the organic compounds are not long-range-ordered, are poorly crystalline, or nanocrystalline. The procedure was applied to solve and refine the crystal structures of quinacridone (β phase), naphthalene and allopurinol. In the case of allopurinol it was even possible to successfully solve and refine the structure in P1 with four independent molecules. As an example of a flexible molecule, the crystal structure of paracetamol was refined using restraints for bond lengths, bond angles and selected torsion angles. In all cases, the resulting structures are in excellent agreement with structures from single-crystal data.« less

  5. Towards solution and refinement of organic crystal structures by fitting to the atomic pair distribution function

    SciTech Connect

    Prill, Dragica; Juhas, Pavol; Billinge, Simon J. L.; Schmidt, Martin U.

    2016-01-01

    In this study, a method towards the solution and refinement of organic crystal structures by fitting to the atomic pair distribution function (PDF) is developed. Approximate lattice parameters and molecular geometry must be given as input. The molecule is generally treated as a rigid body. The positions and orientations of the molecules inside the unit cell are optimized starting from random values. The PDF is obtained from carefully measured X-ray powder diffraction data. The method resembles `real-space' methods for structure solution from powder data, but works with PDF data instead of the diffraction pattern itself. As such it may be used in situations where the organic compounds are not long-range-ordered, are poorly crystalline, or nanocrystalline. The procedure was applied to solve and refine the crystal structures of quinacridone (β phase), naphthalene and allopurinol. In the case of allopurinol it was even possible to successfully solve and refine the structure in P1 with four independent molecules. As an example of a flexible molecule, the crystal structure of paracetamol was refined using restraints for bond lengths, bond angles and selected torsion angles. In all cases, the resulting structures are in excellent agreement with structures from single-crystal data.

  6. Improvements to local projective noise reduction through higher order and multiscale refinements

    NASA Astrophysics Data System (ADS)

    Moore, Jack Murdoch; Small, Michael; Karrech, Ali

    2015-06-01

    The broad spectrum characteristic of signals from nonlinear systems obstructs noise reduction techniques developed for linear systems. Local projection was developed to reduce noise while preserving nonlinear deterministic structures, and a second order refinement to local projection which was proposed ten years ago does so particularly effectively. It involves adjusting the origin of the projection subspace to better accommodate the geometry of the attractor. This paper describes an analytic motivation for the enhancement from which follows further higher order and multiple scale refinements. However, the established enhancement is frequently as or more effective than the new filters arising from solely geometric considerations. Investigation of the way that measurement errors reinforce or cancel throughout the refined local projection procedure explains the special efficacy of the existing enhancement, and leads to a new second order refinement offering widespread gains. Different local projective filters are found to be best suited to different noise levels. At low noise levels, the optimal order increases as noise increases. At intermediate levels second order tends to be optimal, while at high noise levels prototypical local projection is most effective. The new higher order filters perform better relative to established filters for longer signals or signals corresponding to higher dimensional attractors.
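
    A sketch of prototypical local projection (without the second-order origin adjustment analyzed in the paper): delay-embed the series, then project each point's neighbourhood onto its leading principal directions. Parameter choices are illustrative.

    ```python
    import numpy as np

    def local_projection_denoise(x, dim=5, q=2, k=30):
        """Prototypical local projective noise reduction.

        Delay-embed the scalar series in `dim` dimensions; for each point,
        project its k-nearest-neighbour cloud onto the cloud's q leading
        principal directions (the presumed attractor directions) and drop
        the rest as noise. The second-order origin adjustment analyzed in
        the paper is omitted here.
        """
        n = len(x) - dim + 1
        emb = np.stack([x[i:i + n] for i in range(dim)], axis=1)
        cleaned = emb.copy()
        for i in range(n):
            d = np.linalg.norm(emb - emb[i], axis=1)
            nbrs = emb[np.argsort(d)[:k]]
            center = nbrs.mean(axis=0)
            _, _, vt = np.linalg.svd(nbrs - center, full_matrices=False)
            b = vt[:q]                        # local attractor directions
            cleaned[i] = center + b.T @ (b @ (emb[i] - center))
        y, cnt = np.zeros_like(x), np.zeros_like(x)
        for j in range(dim):                  # average corrected copies
            y[j:j + n] += cleaned[:, j]
            cnt[j:j + n] += 1
        return y / cnt

    t = np.linspace(0.0, 20.0 * np.pi, 2000)
    noisy = np.sin(t) + 0.1 * np.random.default_rng(5).standard_normal(t.size)
    residual = np.std(local_projection_denoise(noisy) - np.sin(t))
    print(residual)   # typically well below the 0.1 input noise level
    ```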

  7. Proving refinement transformations for deriving high-assurance software

    SciTech Connect

    Winter, V.L.; Boyle, J.M.

    1996-05-01

    The construction of a high-assurance system requires some evidence, ideally a proof, that the system as implemented will behave as required. Direct proofs of implementations do not scale up well as systems become more complex and therefore are of limited value. In recent years, refinement-based approaches have been investigated as a means to manage the complexity inherent in the verification process. In a refinement-based approach, a high-level specification is converted into an implementation through a number of refinement steps. The hope is that the proofs of the individual refinement steps will be easier than a direct proof of the implementation. However, if stepwise refinement is performed manually, the number of steps is severely limited, implying that the size of each step is large. If refinement steps are large, then proofs of their correctness will not be much easier than a direct proof of the implementation. The authors describe an approach to refinement-based software development that is based on automatic application of refinements, expressed as program transformations. This automation has the desirable effect that the refinement steps can be extremely small and, thus, easy to prove correct. They give an overview of the TAMPR transformation system that they use for automated refinement. They then focus on some aspects of the semantic framework that they have been developing to enable proofs that TAMPR transformations are correctness preserving. With this framework, proofs of correctness for transformations can be obtained with the assistance of an automated reasoning system.

  8. Level 5: user refinement to aid the fusion process

    NASA Astrophysics Data System (ADS)

    Blasch, Erik P.; Plano, Susan

    2003-04-01

    The revised JDL Fusion model Level 4 process refinement covers a broad spectrum of actions such as sensor management and control. A limitation of Level 4 is the purpose of control - whether it be for user needs or system operation. Level 5, User Refinement, is a modification to the Revised JDL model that distinguishes between machine process refinement and user refinement. User refinement can either be human control actions or refinement of the user's cognitive model. In many cases, fusion research concentrates on the machine and does not take full advantage of the human, who is not only a qualified expert able to refine the fusion process but also the customer for whom the fusion system is designed. Without user refinement, sensor fusion is incomplete and inadequate, and the user neglects its worth. To capture user capabilities, we explore the concept of user refinement through decision and action based on situational leadership models. We develop a Fuse-Act Situational User Refinement (FASUR) model that details four refinement behaviors: Neglect, Consult, Rely, and Interact, and five refinement functions: Planning, Organizing, Coordinating, Directing, and Controlling. Process refinement varies for different systems and different user information needs. By designing a fusion system with a specific user in mind, via Level 5, a fusion architecture can meet users' information needs for varying situations, extend user sensing capabilities for action, and increase human-machine interaction.

  9. Seasat orbit refinement for altimetry application

    NASA Astrophysics Data System (ADS)

    Mohan, S. N.; Hamata, N. E.; Stavert, R. L.; Bierman, G. J.

    1980-12-01

    This paper describes the use of stochastic differential correction models in refining the Seasat orbit based on post-flight analysis of tracking data. The objective is to obtain orbital-height precision that is commensurate with the inherent Seasat altimetry data precision level of 10 cm. Local corrections to a mean ballistic arc, perturbed principally by atmospheric drag variations and local gravitational anomalies, are obtained by the introduction of stochastic dynamical models in conjunction with optimal estimation/smoothing techniques. Assessment of the resulting orbit with 'ground truth' provided by Seasat altimetry data shows that the orbital height precision is improved by 32% when compared to a conventional least-squares solution using the same data set. The orbital height precision realized by employing stochastic differential correction models is in the range of 73 cm to 208 cm rms.

  10. Formal language theory: refining the Chomsky hierarchy

    PubMed Central

    Jäger, Gerhard; Rogers, James

    2012-01-01

    The first part of this article gives a brief overview of the four levels of the Chomsky hierarchy, with a special emphasis on context-free and regular languages. It then recapitulates the arguments why neither regular nor context-free grammar is sufficiently expressive to capture all phenomena in natural language syntax. In the second part, two refinements of the Chomsky hierarchy are reviewed, which are both relevant to extant research in cognitive science: the mildly context-sensitive languages (which are located between context-free and context-sensitive languages), and the sub-regular hierarchy (which distinguishes several levels of complexity within the class of regular languages). PMID:22688632

  11. Error bounds from extra precise iterative refinement

    SciTech Connect

    Demmel, James; Hida, Yozo; Kahan, William; Li, Xiaoye S.; Mukherjee, Soni; Riedy, E. Jason

    2005-02-07

    We present the design and testing of an algorithm for iterative refinement of the solution of linear equations, where the residual is computed with extra precision. This algorithm was originally proposed in the 1960s [6, 22] as a means to compute very accurate solutions to all but the most ill-conditioned linear systems of equations. However, two obstacles have until now prevented its adoption in standard subroutine libraries like LAPACK: (1) there was no standard way to access the higher precision arithmetic needed to compute residuals, and (2) it was unclear how to compute a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard [5] has recently removed the first obstacle. To overcome the second obstacle, we show how a single application of iterative refinement can be used to compute an error bound in any norm at small cost, and use this to compute both an error bound in the usual infinity norm and a componentwise relative error bound. We report extensive test results on over 6.2 million matrices of dimension 5, 10, 100, and 1000. As long as a normwise (resp. componentwise) condition number computed by the algorithm is less than 1/(max{10, √n}·ε_w), the computed normwise (resp. componentwise) error bound is at most 2·max{10, √n}·ε_w, and indeed bounds the true error. Here, n is the matrix dimension and ε_w is the single-precision roundoff error. For worse conditioned problems, we get similarly small correct error bounds in over 89.4% of cases.
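
    The core loop is easy to state. The sketch below emulates "extra precision" by computing residuals in double precision around a single-precision solve; the paper instead uses the extended-precision BLAS, and a production code would factor the matrix once and reuse the factors rather than re-solving.

    ```python
    import numpy as np

    def iterative_refinement(A, b, iters=5):
        """Iterative refinement with an extra-precise residual.

        Solves are done in single precision, but the residual r = b - A x
        is accumulated in double precision -- the key idea of the paper
        (which uses the extended-precision BLAS; a production code would
        also factor A once instead of re-solving each iteration).
        """
        A32 = A.astype(np.float32)
        x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
        for _ in range(iters):
            r = b - A @ x                   # double-precision residual
            d = np.linalg.solve(A32, r.astype(np.float32))
            x += d.astype(np.float64)
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((100, 100))
    x_true = rng.standard_normal(100)
    b = A @ x_true
    err = np.linalg.norm(iterative_refinement(A, b) - x_true)
    print(err)   # near double-precision accuracy for this well-posed case
    ```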

  12. Technical Considerations for Filler and Neuromodulator Refinements

    PubMed Central

    Wilson, Anthony J.; Chang, Brian L.; Percec, Ivona

    2016-01-01

    Background: The toolbox for cosmetic practitioners is growing at an unprecedented rate. There are novel products every year and expanding off-label indications for neurotoxin and soft-tissue filler applications. Consequently, aesthetic physicians are increasingly challenged by the task of selecting the most appropriate products and techniques to achieve optimal patient outcomes. Methods: We employed a PubMed literature search of facial injectables from the past 10 years (2005–2015), with emphasis on those articles embracing evidence-based medicine. We evaluated the scientific background of every product and the physicochemical properties that make each one ideal for specific indications. The 2 senior authors provide commentary regarding their clinical experience with specific technical refinements of neuromodulators and soft-tissue fillers. Results: Neurotoxins and fillers are characterized by unique physical characteristics that distinguish each product. This results in subtle but important differences in their clinical applications. Specific indications and recommendations for the use of the various neurotoxins and soft-tissue fillers are reviewed. The discussion highlights refinements in combination treatments and product physical modifications, according to specific treatment zones. Conclusions: The field of facial aesthetics has evolved dramatically, mostly secondary to our increased understanding of 3-dimensional structural volume restoration. Our work reviews Food and Drug Administration–approved injectables. In addition, we describe how to modify products to fulfill specific indications such as treatment of the mid face, décolletage, hands, and periorbital regions. Although we cannot directly evaluate the duration or exact physical properties of blended products, we argue that “product customization” is safe and provides natural results with excellent patient outcomes. PMID:28018778

  13. Essays on refining markets and environmental policy

    NASA Astrophysics Data System (ADS)

    Oladunjoye, Olusegun Akintunde

    This thesis comprises three essays. The first two essays examine empirically the relationship between crude oil prices and wholesale gasoline prices in the U.S. petroleum refining industry, while the third essay determines the optimal combination of emissions tax and environmental research and development (ER&D) subsidy when firms organize ER&D either competitively or as a research joint venture (RJV). In the first essay, we estimate an error correction model to determine the effects of market structure on the speed of adjustment of wholesale gasoline prices to crude oil price changes. The results indicate that market structure does not have a strong effect on the dynamics of price adjustment in the three regional markets examined. In the second essay, we allow for inventories to affect the relationship between crude oil and wholesale gasoline prices by allowing them to affect the probability of regime change in a Markov-switching model of the refining margin. We find that low gasoline inventory increases the probability of switching from the low margin regime to the high margin regime and also increases the probability of staying in the high margin regime. This is consistent with the predictions of the competitive storage theory. In the third essay, we extend industrial organization R&D theory to the determination of optimal environmental policies. We find that the RJV is socially desirable. In comparison to competitive ER&D, we suggest that regulators should encourage RJVs with a lower emissions tax and higher subsidy, as these will lead to the coordination of ER&D activities and eliminate duplication of effort while firms internalize their technological spillover externality.

  14. Application of the Refined Zigzag Theory to the Modeling of Delaminations in Laminated Composites

    NASA Technical Reports Server (NTRS)

    Groh, Rainer M. J.; Weaver, Paul M.; Tessler, Alexander

    2015-01-01

    The Refined Zigzag Theory is applied to the modeling of delaminations in laminated composites. The commonly used cohesive zone approach is adapted for use within a continuum mechanics model, and then used to predict the onset and propagation of delamination in five cross-ply composite beams. The resin-rich area between individual composite plies is modeled explicitly using thin, discrete layers with isotropic material properties. A damage model is applied to these resin-rich layers to enable tracking of delamination propagation. The displacement jump across the damaged interfacial resin layer is captured using the zigzag function of the Refined Zigzag Theory. The overall model predicts the initiation of delamination to within 8% compared to experimental results and the load drop after propagation is represented accurately.

  15. Interpolation methods and the accuracy of lattice-Boltzmann mesh refinement

    DOE PAGES

    Guzik, Stephen M.; Weisgraber, Todd H.; Colella, Phillip; ...

    2013-12-10

    A lattice-Boltzmann model to solve the equivalent of the Navier-Stokes equations on adaptively refined grids is presented. A method for transferring information across interfaces between different grid resolutions was developed following established techniques for finite-volume representations. This new approach relies on a space-time interpolation and solving constrained least-squares problems to ensure conservation. The effectiveness of this method at maintaining the second order accuracy of lattice-Boltzmann is demonstrated through a series of benchmark simulations and detailed mesh refinement studies. These results exhibit smaller solution errors and improved convergence when compared with similar approaches relying only on spatial interpolation. Examples highlighting the mesh adaptivity of this method are also provided.

  16. Interpolation methods and the accuracy of lattice-Boltzmann mesh refinement

    SciTech Connect

    Guzik, Stephen M.; Weisgraber, Todd H.; Colella, Phillip; Alder, Berni J.

    2013-12-10

    A lattice-Boltzmann model to solve the equivalent of the Navier-Stokes equations on adaptively refined grids is presented. A method for transferring information across interfaces between different grid resolutions was developed following established techniques for finite-volume representations. This new approach relies on a space-time interpolation and solving constrained least-squares problems to ensure conservation. The effectiveness of this method at maintaining the second order accuracy of lattice-Boltzmann is demonstrated through a series of benchmark simulations and detailed mesh refinement studies. These results exhibit smaller solution errors and improved convergence when compared with similar approaches relying only on spatial interpolation. Examples highlighting the mesh adaptivity of this method are also provided.

  17. A goal-oriented adaptive finite-element approach for plane wave 3-D electromagnetic modelling

    NASA Astrophysics Data System (ADS)

    Ren, Zhengyong; Kalscheuer, Thomas; Greenhalgh, Stewart; Maurer, Hansruedi

    2013-08-01

    We have developed a novel goal-oriented adaptive mesh refinement approach for finite-element methods to model plane wave electromagnetic (EM) fields in 3-D earth models based on the electric field differential equation. To handle complicated models of arbitrary conductivity, magnetic permeability and dielectric permittivity involving curved boundaries and surface topography, we employ an unstructured grid approach. The electric field is approximated by linear curl-conforming shape functions which guarantee the divergence-free condition of the electric field within each tetrahedron and continuity of the tangential component of the electric field across the interior boundaries. Based on the non-zero residuals of the approximated electric field and the yet to be satisfied boundary conditions of continuity of both the normal component of the total current density and the tangential component of the magnetic field strength across the interior interfaces, three a posteriori error estimators are proposed as a means to drive the goal-oriented adaptive refinement procedure. The first a posteriori error estimator relies on a combination of the residual of the electric field, the discontinuity of the normal component of the total current density and the discontinuity of the tangential component of the magnetic field strength across the interior faces shared by tetrahedra. The second a posteriori error estimator is expressed in terms of the discontinuity of the normal component of the total current density (conduction plus displacement current). The discontinuity of the tangential component of the magnetic field forms the third a posteriori error estimator. Analytical solutions for magnetotelluric (MT) and radiomagnetotelluric (RMT) fields impinging on a homogeneous half-space model are used to test the performances of the newly developed goal-oriented algorithms using the above three a posteriori error estimators. A trapezoidal topographical model, using normally incident EM waves
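
    The driving loop of any goal-oriented or residual-driven adaptive FEM has the same solve-estimate-mark-refine shape. The 1-D Poisson toy below illustrates it, with a gradient-jump indicator standing in for the three electromagnetic estimators above; it is a generic sketch, not the authors' curl-conforming 3-D formulation.

    ```python
    import numpy as np

    def solve_poisson(nodes, f):
        """P1 finite elements for -u'' = f on (0,1) with u(0) = u(1) = 0."""
        n = len(nodes)
        K, F = np.zeros((n, n)), np.zeros(n)
        for e in range(n - 1):
            h = nodes[e + 1] - nodes[e]
            K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
            F[e:e + 2] += f(0.5 * (nodes[e] + nodes[e + 1])) * h / 2.0
        K[0, :], K[-1, :] = 0.0, 0.0          # Dirichlet boundary rows
        K[0, 0] = K[-1, -1] = 1.0
        F[0] = F[-1] = 0.0
        return np.linalg.solve(K, F)

    def jump_indicator(nodes, u):
        """Per-element indicator from jumps of the discrete gradient at
        interior nodes, scaled by sqrt(h) -- a stand-in estimator."""
        grads = np.diff(u) / np.diff(nodes)
        jumps = np.abs(np.diff(grads))
        eta = np.zeros(len(nodes) - 1)
        eta[:-1] += 0.5 * jumps
        eta[1:] += 0.5 * jumps
        return eta * np.sqrt(np.diff(nodes))

    f = lambda x: 1000.0 * np.exp(-100.0 * (x - 0.5) ** 2)   # sharp source
    nodes = np.linspace(0.0, 1.0, 11)
    for _ in range(8):                         # solve-estimate-mark-refine
        u = solve_poisson(nodes, f)
        eta = jump_indicator(nodes, u)
        mids = 0.5 * (nodes[:-1] + nodes[1:])[eta > 0.5 * eta.max()]
        nodes = np.sort(np.concatenate([nodes, mids]))
    print(len(nodes), "nodes, concentrated near the source")
    ```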

  18. Procedural knowledge

    NASA Technical Reports Server (NTRS)

    Georgeff, Michael P.; Lansky, Amy L.

    1986-01-01

    Much of commonsense knowledge about the real world is in the form of procedures or sequences of actions for achieving particular goals. In this paper, a formalism is presented for representing such knowledge using the notion of process. A declarative semantics for the representation is given, which allows a user to state facts about the effects of doing things in the problem domain of interest. An operational semantics is also provided, which shows how this knowledge can be used to achieve particular goals or to form intentions regarding their achievement. Given both semantics, the formalism additionally serves as an executable specification language suitable for constructing complex systems. A system based on this formalism is described, and examples involving control of an autonomous robot and fault diagnosis for NASA's Space Shuttle are provided.

  19. Adaptive EAGLE dynamic solution adaptation and grid quality enhancement

    NASA Technical Reports Server (NTRS)

    Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.

    1992-01-01

    In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.

  20. Refining waste hardmetals into tungsten oxide nanosheets via facile method

    NASA Astrophysics Data System (ADS)

    Li, Zhifei; Zheng, Guangwei; Wang, Jinshu; Li, Hongyi; Wu, Junshu; Du, Yucheng

    2016-04-01

    A new hydrothermal system has been designed to recycle waste WC-Co hardmetal with low cobalt (Co) content (3%). In the solution system, nitric acid was used to dissolve the Co, H2O2 served as an oxidant to accelerate the oxidation of the WC-Co hardmetal, and fluoride ions (F-) were added to dissolve and recrystallize the generated tungsten oxides, which were found to possess a layered structure using scanning electron microscopy and transmission electron microscopy. The obtained tungsten oxides were identified as WO3·0.33H2O by X-ray diffraction and their specific surface area was measured as 89.2 m2 g-1 via N2 adsorption-desorption techniques. The layered-structure tungsten oxides exhibited a promising capability for removing lead ions (Pb2+) and organic species, such as methyl blue. The adsorption behavior was found to be in agreement with the Langmuir isotherm model. Given the facile synthesis procedure and promising properties of the final products, this new approach should have great potential for refining other waste hardmetals or tungsten products.

  1. Recognition of related proteins by iterative template refinement (ITR).

    PubMed Central

    Yi, T. M.; Lander, E. S.

    1994-01-01

    Predicting the structural fold of a protein is an important and challenging problem. Available computer programs for determining whether a protein sequence is compatible with a known 3-dimensional structure fall into 2 categories: (1) structure-based methods, in which structural features such as local conformation and solvent accessibility are encoded in a template, and (2) sequence-based methods, in which aligned sequences of a set of related proteins are encoded in a template. In both cases, the programs use a static template based on a predetermined set of proteins. Here, we describe a computer-based method, called iterative template refinement (ITR), that uses templates combining structure-based and sequence-based information and employs an iterative search procedure to detect related proteins and sequentially add them to the templates. Starting from a single protein of known structure, ITR performs sequential cycles of database search to construct an expanding tree of templates with the aim of identifying subtle relationships among proteins. Evaluating the performance of ITR on 6 proteins, we found that the method automatically identified a variety of subtle structural similarities to other proteins. For example, the method identified structural similarity between arabinose-binding protein and phosphofructokinase, a relationship that has not been widely recognized. PMID:7987226

  2. On macromolecular refinement at subatomic resolution with interatomic scatterers

    SciTech Connect

    Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.; Lunin, Vladimir Y.; Urzhumtsev, Alexandre

    2007-11-09

    A study of the accurate electron density distribution in molecular crystals at subatomic resolution, better than ~1.0 Å, requires more detailed models than those based on independent spherical atoms. A tool conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model has been proposed, composed of conventional independent spherical atoms augmented by additional scatterers that model bonding effects. Refinement of these mixed models for several benchmark datasets gave results comparable in quality to those of multipolar refinement and superior to those of conventional models. Applications to several datasets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.

  3. Adaptive SPECT

    PubMed Central

    Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.

    2008-01-01

    Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
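
    For orientation, the ideal linear observer mentioned here has a standard closed form (standard definitions, not the paper's adaptive-specific derivation): the Hotelling template and its detection figure of merit are

    $$ w = K_g^{-1}\,\Delta\bar{g}, \qquad \mathrm{SNR}^2_{\mathrm{Hot}} = \Delta\bar{g}^{\,T} K_g^{-1}\,\Delta\bar{g} $$

    where $\Delta\bar{g}$ is the difference between the mean data vectors under the signal-present and signal-absent hypotheses and $K_g$ is the data covariance matrix; the Wiener estimator plays the analogous role for estimation tasks.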

  4. Improved ligand geometries in crystallographic refinement using AFITT in PHENIX

    PubMed Central

    Janowski, Pawel A.; Moriarty, Nigel W.; Kelley, Brian P.; Case, David A.; York, Darrin M.; Adams, Paul D.; Warren, Gregory L.

    2016-01-01

    Modern crystal structure refinement programs rely on geometry restraints to overcome the challenge of a low data-to-parameter ratio. While the classical Engh and Huber restraints work well for standard amino-acid residues, the chemical complexity of small-molecule ligands presents a particular challenge. Most current approaches either limit ligand restraints to those that can be readily described in the Crystallographic Information File (CIF) format, thus sacrificing chemical flexibility and energetic accuracy, or they employ protocols that substantially lengthen the refinement time, potentially hindering rapid automated refinement workflows. PHENIX–AFITT refinement uses a full molecular-mechanics force field for user-selected small-molecule ligands during refinement, eliminating the potentially difficult problem of finding or generating high-quality geometry restraints. It is fully integrated with a standard refinement protocol and requires practically no additional steps from the user, making it ideal for high-throughput workflows. PHENIX–AFITT refinements also handle multiple ligands in a single model, alternate conformations and covalently bound ligands. Here, the results of combining AFITT and the PHENIX software suite on a data set of 189 protein–ligand PDB structures are presented. Refinements using PHENIX–AFITT significantly reduce ligand conformational energy and lead to improved geometries without detriment to the fit to the experimental data. For the data presented, PHENIX–AFITT refinements result in more chemically accurate models for small-molecule ligands. PMID:27599738

  5. New Process for Grain Refinement of Aluminum. Final Report

    SciTech Connect

    Dr. Joseph A. Megy

    2000-09-22

    A new method of grain-refining aluminum, involving in-situ formation of boride nuclei in molten aluminum just prior to casting, has been developed in the subject DOE program over the last thirty months by a team consisting of JDC, Inc., Alcoa Technical Center, GRAS, Inc., Touchstone Labs, and GKS Engineering Services. The manufacturing process to make boron trichloride for grain refining is much simpler than preparing conventional grain refiners, with attendant environmental, capital, and energy savings. The manufacture of boride grain-refining nuclei using the fy-Gem process avoids the clusters, salt, and oxide inclusions that cause quality problems in aluminum today.

  6. Refiners react to changes in the pipeline infrastructure

    SciTech Connect

    Giles, K.A.

    1997-06-01

    Petroleum pipelines have long been a critical component in the distribution of crude and refined products in the U.S. Pipelines are typically the most cost efficient mode of transportation for reasonably consistent flow rates. For obvious reasons, inland refineries and consumers are much more dependent on petroleum pipelines to provide supplies of crude and refined products than refineries and consumers located on the coasts. Significant changes in U.S. distribution patterns for crude and refined products are reshaping the pipeline infrastructure and presenting challenges and opportunities for domestic refiners. These changes are discussed.

  7. Solution adaptive grids applied to low Reynolds number flow

    NASA Astrophysics Data System (ADS)

    de With, G.; Holdø, A. E.; Huld, T. A.

    2003-08-01

    A numerical study has been undertaken to investigate the use of a solution adaptive grid for flow around a cylinder in the laminar flow regime. The purpose of this work is twofold. The first aim is to investigate the suitability of a grid adaptation algorithm and the reduction in mesh size that can be obtained. Second, the flow's well-defined asymmetric structures are ideal for validating the mesh structures produced by refinement and, consequently, the selected refinement criteria. The refinement variable used in this work is a product of the rate of strain and the mesh cell size, and contains two parameters, Cm and Cstr, which determine the order of each term. By altering the order of either term, the refinement behaviour can be modified.
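
    The abstract does not spell out the refinement variable; one plausible reading of the description, with $\Delta$ the local cell size and $\lvert S\rvert$ the rate-of-strain magnitude, is

    $$ \phi = \Delta^{C_m}\,\lvert S\rvert^{C_{str}} $$

    with cells refined where $\phi$ exceeds a threshold: raising $C_m$ emphasizes cell size, raising $C_{str}$ emphasizes strain.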

  8. An Adaptive Discontinuous Galerkin Method for Modeling Atmospheric Convection (Preprint)

    DTIC Science & Technology

    2011-04-13

    One important question for each adaptive numerical model is: how accurate is the adaptive method? ... this criterion is used later for some sensitivity studies. These studies include a comparison between a simulation on an adaptive mesh and a simulation on a uniform mesh, and a sensitivity study concerning the size of the refinement region.

  9. The development and application of the self-adaptive grid code, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.

    1993-01-01

    The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme, the code is extremely fast and uses negligible CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; adaption to this solution will not result in any improvement, and only grid refinement can produce an improved solution. These are complex issues that need to be explored within the context of each specific problem.

  10. Real-time optimal adaptation for planetary geometry and texture: 4-8 tile hierarchies.

    PubMed

    Hwa, Lok M; Duchaineau, Mark A; Joy, Kenneth I

    2005-01-01

    The real-time display of huge geometry and imagery databases involves view-dependent approximations, typically through the use of precomputed hierarchies that are selectively refined at runtime. A classic motivating problem is terrain visualization, in which planetary databases involving billions of elevation and color values are displayed on PC graphics hardware at high frame rates. This paper introduces a new diamond data structure for the basic selective-refinement processing, a streamlined method of representing the well-known hierarchies of right triangles that have enjoyed much success in real-time, view-dependent terrain display. Regular-grid tiles are proposed as the payload data per diamond for both geometry and texture. The use of 4-8 grid refinement and coarsening schemes allows level-of-detail transitions that are twice as gradual as traditional quadtree-based hierarchies, as well as very high-quality low-pass filtering compared to subsampling-based hierarchies. An out-of-core storage organization is introduced based on Sierpinski indices per diamond, along with a tile preprocessing framework based on fine-to-coarse, same-level, and coarse-to-fine gathering operations. To attain optimal frame-to-frame coherence and processing-order priorities, dual split and merge queues are developed, similar to those of the Realtime Optimally Adapting Meshes (ROAM) algorithm, along with an adaptation of the ROAM frustum-culling technique. Example applications of lake detection and procedural terrain generation demonstrate the flexibility of the tile processing framework.
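
    As a runnable illustration of the priority-driven refinement at the heart of such displays (split side only; the real system keeps dual split and merge queues for frame-to-frame coherence, and the halved-error children and string cell labels below are invented for brevity):

    ```python
    import heapq

    def refine_to_budget(roots, error, budget):
        # Greedy split-queue refinement: always split the leaf with the
        # largest error until the leaf budget is reached (max-heap via
        # negated priorities). Children here just halve the parent error.
        heap = [(-error(c), c) for c in roots]
        heapq.heapify(heap)
        leaves = len(heap)
        while heap and leaves < budget:
            neg_err, cell = heapq.heappop(heap)       # worst leaf first
            for child in (cell + "0", cell + "1"):    # binary split
                heapq.heappush(heap, (neg_err / 2.0, child))
            leaves += 1                               # one leaf became two
        return sorted(c for _, c in heap)

    print(refine_to_budget(["A", "B"], lambda c: 1.0, budget=6))
    ```

    In a real terrain system the priority would be a view-dependent screen-space error, and a matching merge queue would coarsen regions that leave the view, keeping the mesh optimal as the camera moves.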

  11. Application of Sequential Interval Estimation to Adaptive Mastery Testing

    ERIC Educational Resources Information Center

    Chang, Yuan-chin Ivan

    2005-01-01

    In this paper, we apply sequential one-sided confidence interval estimation procedures with beta-protection to adaptive mastery testing. The procedures of fixed-width and fixed proportional accuracy confidence interval estimation can be viewed as extensions of one-sided confidence interval procedures. It can be shown that the adaptive mastery…

  12. Helicopter flight dynamics simulation with refined aerodynamic modeling

    NASA Astrophysics Data System (ADS)

    Theodore, Colin Rhys

    This dissertation describes the development of a coupled rotor-fuselage flight dynamic simulation that includes a maneuvering free wake model and a coupled flap-lag-torsion flexible blade representation. This mathematical model is used to investigate effects of main rotor inflow and blade modeling on various flight dynamics characteristics for both articulated and hingeless rotor helicopters. The inclusion of the free wake model requires the development of new numerical procedures for the calculation of trim equilibrium positions, for the extraction of high-order, constant coefficient linearized models, and for the calculation of the free flight responses to arbitrary pilot inputs. The free wake model, previously developed by other investigators at the University of Maryland, is capable of modeling the changes in rotor wake geometry resulting from maneuvers, and the effects of such changes on the main rotor inflow. The overall flight dynamic model is capable of simulating the helicopter behavior during maneuvers that can be arbitrarily large. The combination of sophisticated models of rotor wake and blade flexibility enables the flight dynamics model to capture the effects of maneuvers with unprecedented accuracy for simulations based on first principles: this is the main contribution of the research presented in this dissertation. The increased accuracy brought about by the free wake model significantly improves the predictions of the helicopter trim state for both helicopter configurations considered in this study. This is especially true in low speed flight and hover. The most significant improvements are seen in the predictions of the main rotor collective and power required by the rotor, which can be significantly underpredicted using traditional linear inflow models. Results show that the free-flight on-axis responses to pilot inputs can be predicted with good accuracy with relatively unsophisticated models that do not include either a free wake or a

  13. Climate adaptation

    NASA Astrophysics Data System (ADS)

    Kinzig, Ann P.

    2015-03-01

    This paper is intended as a brief introduction to climate adaptation for a conference otherwise devoted to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lend some insight into factors that may or may not be important to successful adaptation.

  14. Iterative build OMIT maps: Map improvement by iterative model-building and refinement without model bias

    SciTech Connect

    Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf Wilhelm; Afonine, P. V.; Moriarty, N. W.; Zwart, P. H.; Hung, L.-W.; Read, R. J.; Adams, P. D.

    2008-02-12

    A procedure for carrying out iterative model-building, density modification and refinement is presented in which the density in an OMIT region is essentially unbiased by an atomic model. Density from a set of overlapping OMIT regions can be combined to create a composite 'Iterative-Build' OMIT map that is everywhere unbiased by an atomic model but also everywhere benefiting from the model-based information present elsewhere in the unit cell. The procedure may have applications in the validation of specific features in atomic models as well as in overall model validation. The procedure is demonstrated with a molecular replacement structure and with an experimentally-phased structure, and a variation on the method is demonstrated by removing model bias from a structure from the Protein Data Bank.

  15. Refined seismic stratigraphy in prograding carbonates

    SciTech Connect

    Pomar, L. )

    1991-03-01

    Complete exposure of the upper Miocene Reef Complex in the sea cliffs of Mallorca (Spain) allows a more refined interpretation of seismic lines with similar progradational patterns. A 6-km-long high-resolution cross section in the direction of reef progradation displays four hierarchical orders of accretional units. Although all these units are of higher order, they exhibit characteristics similar to those of a third-order depositional sequence and can likewise be interpreted as the result of high-order sea-level cycles. The accretional units are composed of lagoonal horizontal beds, reefal sigmoids and gently dipping slope deposits. They are bounded by erosion surfaces at the top and, basinward, by their correlative conformities. These architectural patterns are similar to progradational sequences seen on seismic lines. On seismic lines, the progradational pattern often shows the following geometrical details: (1) discontinuous climbing high-energy reflectors, (2) truncation of clinoforms by these high-energy reflectors with seaward dips, and (3) transparent areas intercalated between clinoforms. Based on facies distribution in the outcrops of Mallorca, the high-energy reflectors are interpreted as sectors where the erosion surfaces truncated the reef wall and are overlain by lagoonal sediments deposited during the following sea-level rise. The more transparent zones seem to correspond to areas of superposition of undifferentiated lagoonal beds. Offlapping geometries can also be detected in the highest-quality seismic lines. The comparison between seismic and outcrop data provides a more accurate prediction of lithologies, facies distribution, and reservoir properties on seismic profiles.

  16. Rietveld refinement study of PLZT ceramics

    NASA Astrophysics Data System (ADS)

    Kumar, Rakesh; Bavbande, D. V.; Mishra, R.; Bafna, V. H.; Mohan, D.; Kothiyal, G. P.

    2013-02-01

    PLZT ceramics of composition Pb0.93La0.07(Zr0.60Ti0.40)O3, milled for 6 h and 24 h, were prepared by a solid-state synthesis route. The 6 h and 24 h milled samples are denoted PLZT-6 and PLZT-24, respectively. X-ray diffraction (XRD) patterns were recorded at room temperature and analyzed by the Rietveld refinement method. Phase identification shows that all peaks observed in PLZT-6 and PLZT-24 can be indexed to the P4mm space group with tetragonal symmetry. The unit cell parameters of PLZT-6 are a=b=4.0781(5) Å and c=4.0938(7) Å; for PLZT-24 they are a=b=4.0679(4) Å and c=4.1010(5) Å. The axial ratio c/a and unit cell volume of PLZT-6 are 1.0038 and 68.09(2) Å3, respectively. For PLZT-24, the axial ratio c/a is 1.0080, slightly larger than that of PLZT-6, whereas the unit cell volume decreases to 67.88(1) Å3. The average crystallite size was estimated using Scherrer's formula. Dielectric properties were obtained by measuring the capacitance and tan δ loss using a Stanford LCR meter.
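
    The Scherrer estimate used here follows the standard relation

    $$ D = \frac{K\lambda}{\beta\cos\theta} $$

    where $D$ is the mean crystallite size, $K \approx 0.9$ a shape factor, $\lambda$ the X-ray wavelength, $\beta$ the peak full width at half maximum in radians, and $\theta$ the Bragg angle.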

  17. Refining and blending of aviation turbine fuels.

    PubMed

    White, R D

    1999-02-01

    Aviation turbine fuels (jet fuels) are similar to other petroleum products that have a boiling range of approximately 300°F to 550°F. Kerosene and the No. 1 grades of fuel oil, diesel fuel, and gas turbine oil share many physical and chemical properties with jet fuel. The similarity among these products should allow toxicology data on one material to be extrapolated to the others. Refineries in the USA manufacture jet fuel to meet industry standard specifications. Civilian aircraft primarily use Jet A or Jet A-1 fuel as defined by ASTM D 1655. Military aircraft use JP-5 or JP-8 fuel as defined by MIL-T-5624R and MIL-T-83133D, respectively. The freezing point and flash point are the principal differences between the finished fuels. Common refinery processes that produce jet fuel include distillation, caustic treatment, hydrotreating, and hydrocracking. Each of these refining processes may be the final step in producing jet fuel. Sometimes blending of two or more of these refinery process streams is needed to produce jet fuel that meets the desired specifications. Chemical additives allowed for use in jet fuel are also defined in the product specifications. In many cases, the customer rather than the refinery will add additives to the fuel to meet specific storage or flight condition requirements.

  18. Astrocytes refine cortical connectivity at dendritic spines

    PubMed Central

    Risher, W Christopher; Patel, Sagar; Kim, Il Hwan; Uezu, Akiyoshi; Bhagat, Srishti; Wilton, Daniel K; Pilaz, Louis-Jan; Singh Alvarado, Jonnathan; Calhan, Osman Y; Silver, Debra L; Stevens, Beth; Calakos, Nicole; Soderling, Scott H; Eroglu, Cagla

    2014-01-01

    During cortical synaptic development, thalamic axons must establish synaptic connections despite the presence of the more abundant intracortical projections. How thalamocortical synapses are formed and maintained in this competitive environment is unknown. Here, we show that the astrocyte-secreted protein hevin is required for normal thalamocortical synaptic connectivity in the mouse cortex. Absence of hevin results in a profound, long-lasting reduction in thalamocortical synapses accompanied by a transient increase in intracortical excitatory connections. Three-dimensional reconstructions of cortical neurons from serial section electron microscopy (ssEM) revealed that, during early postnatal development, dendritic spines often receive multiple excitatory inputs. Immuno-EM and confocal analyses revealed that the majority of the spines with multiple excitatory contacts (SMECs) receive simultaneous thalamic and cortical inputs. The proportion of SMECs diminishes as the brain develops, but SMECs remain abundant in hevin-null mice. These findings reveal that, through secretion of hevin, astrocytes control an important developmental synaptic refinement process at dendritic spines. DOI: http://dx.doi.org/10.7554/eLife.04047.001 PMID:25517933

  19. Steel refining with an electrochemical cell

    DOEpatents

    Blander, M.; Cook, G.M.

    1988-05-17

    Apparatus is described for processing a metallic fluid containing iron oxide: a container for molten metal including an electrically conductive refractory disposed for contact with the molten metal containing iron oxide; an electrolyte in the form of a basic slag on top of the molten metal; an electrode in the container in contact with the slag, electrically separated from the refractory; and means for establishing a voltage across the refractory and the electrode to reduce iron oxide to iron at the surface of the refractory in contact with the iron-oxide-containing fluid. A process is disclosed for refining an iron product containing not more than about 10% by weight oxygen and not more than about 10% by weight sulfur, comprising providing an electrolyte of a slag containing one or more of calcium oxide, magnesium oxide, silica, or alumina; providing a cathode of the iron product in contact with the electrolyte; providing an anode in contact with the electrolyte, electrically separated from the cathode; and operating an electrochemical cell formed by the anode, the cathode, and the electrolyte to separate oxygen or sulfur present in the iron product. 2 figs.

  20. Steel refining with an electrochemical cell

    DOEpatents

    Blander, Milton; Cook, Glenn M.

    1988-01-01

    Apparatus for processing a metallic fluid containing iron oxide: a container for molten metal including an electrically conductive refractory disposed for contact with the molten metal containing iron oxide; an electrolyte in the form of a basic slag on top of the molten metal; an electrode in the container in contact with the slag, electrically separated from the refractory; and means for establishing a voltage across the refractory and the electrode to reduce iron oxide to iron at the surface of the refractory in contact with the iron-oxide-containing fluid. A process is disclosed for refining an iron product containing not more than about 10% by weight oxygen and not more than about 10% by weight sulfur, comprising providing an electrolyte of a slag containing one or more of calcium oxide, magnesium oxide, silica, or alumina; providing a cathode of the iron product in contact with the electrolyte; providing an anode in contact with the electrolyte, electrically separated from the cathode; and operating an electrochemical cell formed by the anode, the cathode, and the electrolyte to separate oxygen or sulfur present in the iron product.

  1. Steel refining with an electrochemical cell

    DOEpatents

    Blander, M.; Cook, G.M.

    1985-05-21

    Disclosed is an apparatus for processing a metallic fluid containing iron oxide: a container for molten metal including an electrically conductive refractory disposed for contact with the molten metal containing iron oxide; an electrolyte in the form of a basic slag on top of the molten metal; an electrode in the container in contact with the slag, electrically separated from the refractory; and means for establishing a voltage across the refractory and the electrode to reduce iron oxide to iron at the surface of the refractory in contact with the iron-oxide-containing fluid. A process is disclosed for refining an iron product containing not more than about 10% by weight sulfur, comprising providing an electrolyte of a slag containing one or more of calcium oxide, magnesium oxide, silica, or alumina; providing a cathode of the iron product in contact with the electrolyte; providing an anode in contact with the electrolyte, electrically separated from the cathode; and operating an electrochemical cell formed by the anode, the cathode, and the electrolyte to separate oxygen or sulfur present in the iron product.

  2. Refined Pichia pastoris reference genome sequence.

    PubMed

    Sturmberger, Lukas; Chappell, Thomas; Geier, Martina; Krainer, Florian; Day, Kasey J; Vide, Ursa; Trstenjak, Sara; Schiefer, Anja; Richardson, Toby; Soriaga, Leah; Darnhofer, Barbara; Birner-Gruenberger, Ruth; Glick, Benjamin S; Tolstorukov, Ilya; Cregg, James; Madden, Knut; Glieder, Anton

    2016-10-10

    Strains of the species Komagataella phaffii are the most frequently used "Pichia pastoris" strains employed for recombinant protein production as well as for studies on peroxisome biogenesis, autophagy and secretory pathway analyses. Genome sequencing of several different P. pastoris strains has provided the foundation for understanding these cellular functions in recent genomics, transcriptomics and proteomics experiments. This experimentation has identified mistakes, gaps and incorrectly annotated open reading frames in the previously published draft genome sequences. Here, a refined reference genome is presented, generated with genome and transcriptome sequencing data from multiple P. pastoris strains. Twelve major sequence gaps from 20 to 6000 base pairs were closed, and 5111 of 5256 putative open reading frames were manually curated and confirmed by RNA-seq and published LC-MS/MS data, including the addition of new open reading frames (ORFs) and a reduction in the number of spliced genes from 797 to 571. One chromosomal fragment of 76 kbp between two previous gaps on chromosome 1 and another 134 kbp fragment at the end of chromosome 4, as well as several shorter fragments, needed re-orientation. In total, more than 500 positions in the genome have been corrected. This reference genome is presented with new chromosomal numbering, positioning ribosomal repeats at the distal ends of the four chromosomes, and includes predicted chromosomal centromeres as well as the sequences of two linear cytoplasmic plasmids of 13.1 and 9.5 kbp found in some strains of P. pastoris.

  3. Spatially Refined Aerosol Direct Radiative Forcing Efficiencies

    NASA Technical Reports Server (NTRS)

    Henze, Daven K.; Shindell, Drew Todd; Akhtar, Farhan; Spurr, Robert J. D.; Pinder, Robert W.; Loughlin, Dan; Kopacz, Monika; Singh, Kumaresh; Shim, Changsub

    2012-01-01

    Global aerosol direct radiative forcing (DRF) is an important metric for assessing the potential climate impacts of future emissions changes. However, the radiative consequences of emissions perturbations are neither readily quantified nor well understood at the level of detail necessary to assess realistic policy options. To address this challenge, here we show how adjoint model sensitivities can be used to provide highly spatially resolved estimates of the DRF from emissions of black carbon (BC), primary organic carbon (OC), sulfur dioxide (SO2), and ammonia (NH3), using the example of emissions from each sector and country following multiple Representative Concentration Pathways (RCPs). The radiative forcing efficiencies of many individual emissions are found to differ considerably from regional or sectoral averages for NH3, SO2 from the power sector, and BC from domestic, industrial, transportation and biomass burning sources. Consequently, the amount of emissions control required to attain a specific DRF varies at intracontinental scales by up to a factor of 4. These results thus demonstrate both a need and a means for incorporating spatially refined aerosol DRF into analyses of future emissions scenarios and the design of air quality and climate change mitigation policies.
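
    The adjoint-based efficiencies discussed here rest on a standard first-order sensitivity relation: with $J$ the global DRF and $E_i$ the emissions in source region $i$, a single adjoint model run supplies every gradient component in

    $$ \Delta J \;\approx\; \sum_{i}\frac{\partial J}{\partial E_{i}}\,\Delta E_{i} $$

    so the forcing consequence of any emissions perturbation $\Delta E$ can be estimated without re-running the forward model for each scenario.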

  4. 40 CFR 80.128 - Alternative agreed upon procedures for refiners and importers.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... documents designation for consistency with the time and place, and compliance model designations for the..., and simple or complex model certified); and (3) Trace back to the batch or batches in which the... that it conducted physical inspections of the downstream blending operation during the period...

  5. 40 CFR 80.133 - Agreed-upon procedures for refiners and importers.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... tender and the compliance model designations for the tender (VOC-controlled for Region 1 or 2, non VOC-controlled, and simple or complex model certified). (f) Reformulated gasoline batches. Select a sample, in... physical inspections of the downstream blending operation during the period oxygenate was blended;...

  6. Islet transplantation in type 1 diabetes: ongoing challenges, refined procedures, and long-term outcome.

    PubMed

    Shapiro, A M James

    2012-01-01

    Remarkable progress has been made in islet transplantation over a span of 40 years. Once just an experimental curiosity in mice, this procedure has moved forward and can now provide robust therapy for highly selected patients with type 1 diabetes (T1D) refractory to stabilization by other means. This progress could not have occurred without extensive dynamic international collaboration. Currently, 1,085 patients have undergone islet transplantation at 40 international sites since the Edmonton Protocol was reported in 2000 (752 allografts, 333 autografts), according to the Collaborative Islet Transplant Registry. The long-term results of islet transplantation in selected centers now match registry data of pancreas-alone transplantation, with 6 sites reporting five-year insulin independence rates ≥50%. Islet transplantation has been criticized for the use of multiple donor pancreas organs, but progress has also occurred in single-donor success, with 10 sites reporting increased single-donor engraftment. The next wave of innovative clinical trial interventions will address instant blood-mediated inflammatory reaction (IBMIR), apoptosis, and inflammation, and will translate into further marked improvements in single-donor success. Effective control of auto- and alloimmunity is the key to long-term islet function, and high-resolution cellular and antibody-based assays will add considerable precision to this process. Advances in immunosuppression, with new antibody-based targeting of costimulatory blockade and other T-B cellular signaling, will have further profound impact on the safety record of immunotherapy. Clinical trials will shortly test new human stem-cell-derived islets and, in parallel, pig islets for compatibility in patients. Induction of immunological tolerance to self-islet antigens and to allografts is a difficult challenge, but potentially within our grasp.

  7. Refining and End Use Study of Coal Liquids

    SciTech Connect

    1997-10-01

    This report summarizes revisions to the design basis for the linear programming refining model that is being used in the Refining and End Use Study of Coal Liquids. This revision primarily reflects the addition of data for the upgrading of direct coal liquids.

  8. Optimization of Refining Craft for Vegetable Insulating Oil

    NASA Astrophysics Data System (ADS)

    Zhou, Zhu-Jun; Hu, Ting; Cheng, Lin; Tian, Kai; Wang, Xuan; Yang, Jun; Kong, Hai-Yang; Fang, Fu-Xin; Qian, Hang; Fu, Guang-Pan

    2016-05-01

    Vegetable insulating oils, because of their environmental friendliness, are considered ideal substitutes for mineral oil in transformer insulation and cooling. The main steps of the traditional refining process are alkali refining, bleaching, and distillation. This process gives satisfactory results when refining small batches of insulating oil, but it cannot be applied directly to a large-capacity reaction kettle. In this work, rapeseed oil was used as the crude oil and the refining process was optimized for a large-capacity reaction kettle. The optimized process adds an acid degumming step, incorporates sodium silicate into the alkali compound used for alkali refining, and optimizes the ratio of each component. Adding activated clay and activated carbon in a 10:1 ratio during decolorization effectively reduces the acid value and dielectric loss of the oil. Replacing distillation with vacuum degassing further reduces the acid value. Compared with mineral insulating oil, the refined vegetable oil still exhibits a high dielectric loss, and further optimization is needed in the future.

  9. An adaptive multiblock high-order finite-volume method for solving the shallow-water equations on the sphere

    DOE PAGES

    McCorquodale, Peter; Ullrich, Paul; Johansen, Hans; ...

    2015-09-04

    We present a high-order finite-volume approach for solving the shallow-water equations on the sphere, using multiblock grids on the cubed sphere. This approach combines a Runge-Kutta time discretization with a fourth-order accurate spatial discretization, and includes adaptive mesh refinement and refinement in time. Tests show fourth-order convergence for the shallow-water equations as well as for advection in a highly deformational flow. Hierarchical adaptive mesh refinement achieves a solution error comparable to that obtained with uniform resolution at the most refined level of the hierarchy, but with many fewer operations.
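
    As a small runnable illustration of how AMR hierarchies decide where to refine (a generic undivided-difference indicator, not the paper's fourth-order scheme):

    ```python
    import numpy as np

    def tag_cells(q, tol):
        # Flag cells whose undivided second difference exceeds tol --
        # a common style of AMR refinement indicator for 1-D cell data.
        d2 = np.abs(q[2:] - 2.0 * q[1:-1] + q[:-2])
        flags = np.zeros(q.size, dtype=bool)
        flags[1:-1] = d2 > tol
        return flags

    x = np.linspace(0.0, 1.0, 101)
    q = np.tanh((x - 0.5) / 0.02)              # sharp front at x = 0.5
    print(np.where(tag_cells(q, 1e-2))[0])     # indices cluster near the front
    ```

    Flagged cells are grouped into patches and refined in space; with refinement in time, each finer level also takes proportionally smaller time steps, subcycling relative to its parent.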

  10. Refining primary lead by granulation-leaching-electrowinning

    NASA Astrophysics Data System (ADS)

    Ojebuoboh, F.; Wang, S.; Maccagni, M.

    2003-04-01

    This article describes the development of a new process in which lead bullion obtained from smelting concentrates is refined by leaching-electrowinning. In the last half century, the challenge to treat and refine lead in order to minimize emissions of lead and lead compounds has intensified. Within the primary lead industry, the treatment aspect has transformed from the sinter-blast furnace model to direct smelting, creating gains in hygiene, environmental control, and efficiency. The refining aspect has remained based on kettle refining, or to a lesser extent, the Betts electrolytic refining. In the mid-1990s, Asarco investigated a concept based on granulating the lead bullion from the blast furnace. The granular material was fed into the Engitec Fluobor process. This work resulted in the operation of a 45 kg/d pilot plant that could produce lead sheets of 99.9% purity.

  11. A stable interface element scheme for the p-adaptive lifting collocation penalty formulation

    NASA Astrophysics Data System (ADS)

    Cagnone, J. S.; Nadarajah, S. K.

    2012-02-01

    This paper presents a procedure for adaptive polynomial refinement in the context of the lifting collocation penalty (LCP) formulation. The LCP scheme is a high-order unstructured discretization method unifying the discontinuous Galerkin, spectral volume, and spectral difference schemes in a single differential formulation. Due to the differential nature of the scheme, the treatment of inter-cell fluxes for spatially varying polynomial approximations is not straightforward. Specially designed elements are proposed to handle non-conforming polynomial approximations. These elements are constructed such that a conforming interface between polynomial approximations of different degrees is recovered. The stability and conservation properties of the scheme are analyzed, and various inviscid compressible flow calculations are performed to demonstrate the potential of the proposed approach.
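
    One standard way to recover a conforming interface between approximations of different degrees, shown here only as a plausible illustration (the paper's specific element construction may differ), is an L2 projection of the higher-degree trace onto the lower-degree polynomial space on the interface $\Gamma$:

    $$ \int_{\Gamma}\left(\Pi_{p} u_{q} - u_{q}\right) v \, ds = 0 \quad \text{for all } v \in P_{p}(\Gamma), \quad p < q $$

    after which both neighboring elements exchange fluxes through the common interface state $\Pi_{p} u_{q}$, preserving conservation across the non-conforming boundary.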

  12. Feline onychectomy and elective procedures.

    PubMed

    Young, William Phillip

    2002-05-01

    The development of the carbon dioxide (CO2) surgical laser has given veterinarians a new perspective on the field of surgery. Recently developed techniques and refinements of established procedures have opened the field of surgery to applications scarcely imagined as little as 10 years ago. Today's CO2 surgical laser is an adaptable, indispensable tool for the everyday veterinary practitioner. Its use is becoming commonplace in veterinary offices around the world.

  13. Axioms of adaptivity

    PubMed Central

    Carstensen, C.; Feischl, M.; Page, M.; Praetorius, D.

    2014-01-01

    This paper aims first at a simultaneous axiomatic presentation of the proof of optimal convergence rates for adaptive finite element methods and second at some refinements of particular questions like the avoidance of (discrete) lower bounds, inexact solvers, inhomogeneous boundary data, or the use of equivalent error estimators. Four axioms alone guarantee optimality in terms of the error estimators. Compared to the state of the art in the current literature, the improvements of this article can be summarized as follows: First, a general framework is presented which covers the existing literature on optimality of adaptive schemes. The abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. Second, efficiency of the error estimator is neither needed to prove convergence nor quasi-optimal convergence behavior of the error estimator. In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution, and so the upper bound on the optimal marking parameters does not depend on the efficiency constant. Third, some general quasi-Galerkin orthogonality is not only sufficient, but also necessary for the R-linear convergence of the error estimator, which is a fundamental ingredient in the quasi-optimality analysis due to Stevenson 2007. Finally, the general analysis allows for equivalent error estimators and inexact solvers as well as different non-homogeneous and mixed boundary conditions. PMID:25983390
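
    For orientation, the four axioms can be stated schematically (a simplified rendering, with constants $\Lambda_i$, reduction factor $0 < \rho < 1$, estimator $\eta$ on triangulation $T$, refinement $\hat T$, and a distance $d(\cdot,\cdot)$ between the discrete solutions $U$, $\hat U$; the paper's precise statements carry additional fine print):

    $$ \text{(A1) stability:}\quad \bigl|\eta_{\hat T}(\hat T\cap T) - \eta_{T}(\hat T\cap T)\bigr| \le \Lambda_1\, d(\hat U, U) $$
    $$ \text{(A2) reduction:}\quad \eta_{\hat T}(\hat T\setminus T) \le \rho\,\eta_{T}(T\setminus\hat T) + \Lambda_2\, d(\hat U, U) $$
    $$ \text{(A3) quasi-orthogonality:}\quad \sum_{k\ge\ell} d(U_{k+1}, U_k)^2 \le \Lambda_3\,\eta_\ell^2 $$
    $$ \text{(A4) discrete reliability:}\quad d(\hat U, U) \le \Lambda_4\,\eta_{T}(T\setminus\hat T) $$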

  14. Refined structures of mouse P-glycoprotein

    PubMed Central

    Li, Jingzhi; Jaimes, Kimberly F; Aller, Stephen G

    2014-01-01

    The recently determined C. elegans P-glycoprotein (Pgp) structure revealed significant deviations compared to the original mouse Pgp structure, suggesting possible misinterpretations in the latter model. To address this concern, we generated an experimental electron density map from single-wavelength anomalous dispersion phasing of an original mouse Pgp dataset to 3.8 Å resolution. The map exhibited significantly more detail compared to the original MAD map and revealed several regions of the structure that required de novo model building. The improved drug-free structure was refined to 3.8 Å resolution with a 9.4% and 8.1% decrease in Rwork and Rfree, respectively (Rwork = 21.2%, Rfree = 26.6%), and a significant improvement in protein geometry. The improved mouse Pgp model contains ∼95% of residues in the favorable Ramachandran region, compared to only 57% for the original model. The registry of six transmembrane helices was corrected, revealing amino acid residues involved in drug binding that were previously unrecognized. Registry shifts (rotations and translations) for transmembrane helices TM4 and TM5 and the addition of three N-terminal residues were necessary, and were validated with new mercury labeling and anomalous Fourier density. The corrected position of TM4, which forms the frame of a portal for drug entry, had backbone atoms shifted >6 Å from their original positions. The drug translocation pathway of mouse Pgp is 96% identical to that of human Pgp and is enriched in aromatic residues that likely play a collective role in allowing a high degree of polyspecific substrate recognition. PMID:24155053

  15. Large-eddy simulation of wind turbine wake interactions on locally refined Cartesian grids

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2014-11-01

    Performing high-fidelity numerical simulations of turbulent flow in wind farms remains a challenging issue, mainly because of the large computational resources required to accurately simulate the turbine wakes and turbine/turbine interactions. The discretization of the governing equations on structured grids for mesoscale calculations may not be the most efficient approach for resolving the large disparity of spatial scales. A 3D Cartesian grid refinement method, enabling the efficient coupling of the Actuator Line Model (ALM) with locally refined unstructured Cartesian grids adapted to accurately resolve tip vortices and multi-turbine interactions, is presented. Second-order schemes are employed for the discretization of the incompressible Navier-Stokes equations in a hybrid staggered/non-staggered formulation coupled with a fractional step method that ensures the satisfaction of local mass conservation to machine zero. The current approach enables multi-resolution LES of turbulent flow in multi-turbine wind farms. The numerical simulations are in good agreement with experimental measurements and are able to resolve the rich dynamics of turbine wakes on grids containing only a small fraction of the grid nodes that would be required in simulations without local mesh refinement. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the National Science Foundation under Award number NSF PFI:BIC 1318201.
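
    The fractional step method mentioned here enforces a divergence-free velocity through a projection; schematically, in its simplest first-order form (the paper's hybrid staggered/non-staggered, second-order details are omitted):

    $$ u^{*} = u^{n} + \Delta t\left[-(u^{n}\cdot\nabla)u^{n} + \nu\nabla^{2}u^{n}\right], \qquad \nabla^{2}\phi = \frac{\nabla\cdot u^{*}}{\Delta t}, \qquad u^{n+1} = u^{*} - \Delta t\,\nabla\phi $$

    so that $\nabla\cdot u^{n+1} = 0$ holds to the accuracy of the discrete Poisson solve, which is what yields local mass conservation to machine zero.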

  16. A multivariate conditional model for streamflow prediction and spatial precipitation refinement

    NASA Astrophysics Data System (ADS)

    Liu, Zhiyong; Zhou, Ping; Chen, Xiuzhi; Guan, Yinghui

    2015-10-01

    The effective prediction and estimation of hydrometeorological variables are important for water resources planning and management. In this study, we propose a multivariate conditional model for streamflow prediction and the refinement of spatial precipitation estimates. This model consists of high-dimensional vine copulas, conditional bivariate copula simulations, and a quantile-copula function. The vine copula is employed because of its flexibility in modeling the high-dimensional joint distribution of multivariate data by building a hierarchy of conditional bivariate copulas. We investigate two cases to evaluate the performance and applicability of the proposed approach. In the first case, we generate one-month-ahead streamflow forecasts that incorporate multiple predictors, including antecedent precipitation and streamflow records, in a basin located in South China. The prediction accuracy of the vine-based model is compared with that of traditional data-driven models such as support vector regression (SVR) and the adaptive neuro-fuzzy inference system (ANFIS). The results indicate that the proposed model produces more skillful forecasts than SVR and ANFIS. Moreover, this probabilistic model yields additional information concerning the predictive uncertainty. The second case involves refining spatial precipitation estimates derived from the Tropical Rainfall Measuring Mission precipitation product for the Yangtze River basin by incorporating remotely sensed soil moisture data and the observed precipitation from meteorological gauges over the basin. The validation results indicate that the proposed model successfully refines the spatial precipitation estimates. Although this model is tested for specific cases, it can be extended to other hydrometeorological variables for prediction and spatial estimation.
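
    The conditional bivariate simulation step that vines chain together can be illustrated with a Gaussian pair (the study employs vines over various bivariate families; the Gaussian family, correlation value, and sample count below are chosen only for concreteness):

    ```python
    import numpy as np
    from scipy.stats import norm

    def conditional_gaussian_copula_sample(u1, rho, n, rng=None):
        # Draw u2 | u1 from a bivariate Gaussian copula with correlation rho.
        # This conditional ("h-function") step is the building block that
        # vine copulas chain into high-dimensional models.
        rng = np.random.default_rng() if rng is None else rng
        z1 = norm.ppf(u1)                              # uniform margin -> normal
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)
        return norm.cdf(z2)                            # back to the uniform scale

    print(conditional_gaussian_copula_sample(0.9, rho=0.8, n=5))
    ```

    Feeding the conditioned uniforms through the inverse marginal distribution of the target variable (the quantile step) then yields samples on the physical scale, e.g., streamflow given observed predictors.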

  17. A Comparison of Spectral Element and Finite Difference Methods Using Statically Refined Nonconforming Grids for the MHD Island Coalescence Instability Problem

    NASA Astrophysics Data System (ADS)

    Ng, C. S.; Rosenberg, D.; Pouquet, A.; Germaschewski, K.; Bhattacharjee, A.

    2009-04-01

    A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys. 215, 59-80 (2006)] is applied to simulate the problem of the MHD island coalescence instability in two dimensions. The island coalescence instability is a fundamental MHD process that can produce sharp current layers and subsequent reconnection and heating in a high-Lundquist-number plasma such as the solar corona [Ng and Bhattacharjee, Phys. Plasmas, 5, 4028 (1998)]. Due to the formation of thin current layers, it is highly desirable to use adaptively or statically refined grids to resolve them while maintaining accuracy. The outputs of the spectral-element static adaptive refinement simulations are compared with simulations using a finite difference method on the same refined grids, and both methods are compared to pseudo-spectral simulations with uniform grids as baselines. It is shown that, with statically refined grids whose size scales roughly linearly with effective resolution, spectral-element runs can maintain accuracy significantly higher than that of the finite difference runs, in some cases achieving close to full spectral accuracy.

  18. 40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    How does a refiner obtain approval as a small refiner under this subpart? Section 80.551, Protection of Environment... preceding January 1, 2000; and the type of business activities carried out at each location; or (ii) In...

  19. 40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    How does a refiner obtain approval as a small refiner under this subpart? Section 80.551, Protection of Environment... preceding January 1, 2000; and the type of business activities carried out at each location; or (ii) In...

  20. Toothbrush Adaptations.

    ERIC Educational Resources Information Center

    Exceptional Parent, 1987

    1987-01-01

    Suggestions are presented for helping disabled individuals learn to use or adapt toothbrushes for proper dental care. A directory lists dental health instructional materials available from various organizations. (CB)