Adaptive mesh and algorithm refinement using direct simulation Monte Carlo
Garcia, A.L.; Bell, J.B.; Crutchfield, W.Y.; Alder, B.J.
1999-09-01
Adaptive mesh and algorithm refinement (AMAR) embeds a particle method within a continuum method at the finest level of an adaptive mesh refinement (AMR) hierarchy. The coupling between the particle region and the overlaying continuum grid is algorithmically equivalent to that between the fine and coarse levels of AMR. Direct simulation Monte Carlo (DSMC) is used as the particle algorithm embedded within a Godunov-type compressible Navier-Stokes solver. Several examples are presented and compared with purely continuum calculations.
Algorithm refinement for fluctuating hydrodynamics
Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.
2007-07-03
This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently developed solver for LLNS based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.
Algorithm refinement for the stochastic Burgers' equation
Bell, John B.; Foo, Jasmine; Garcia, Alejandro L. (e-mail: algarcia@algarcia.org)
2007-04-10
In this paper, we develop an algorithm refinement (AR) scheme for an excluded random walk model whose mean field behavior is given by the viscous Burgers' equation. AR hybrids use the adaptive mesh refinement framework to model a system using a molecular algorithm where desired while allowing a computationally faster continuum representation to be used in the remainder of the domain. The focus in this paper is the role of fluctuations on the dynamics. In particular, we demonstrate that it is necessary to include a stochastic forcing term in Burgers' equation to accurately capture the correct behavior of the system. The conclusion we draw from this study is that the fidelity of multiscale methods that couple disparate algorithms depends on the consistent modeling of fluctuations in each algorithm and on a coupling, such as algorithm refinement, that preserves this consistency.
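The stochastic forcing term the authors find necessary can be illustrated with a minimal 1-D finite-volume sketch of viscous Burgers' equation with a white-noise flux. This is a generic illustration, not the paper's AR scheme: the noise amplitude `eps`, grid parameters, and function name below are all illustrative choices.

```python
import numpy as np

def stochastic_burgers_step(u, dx, dt, nu, eps, rng):
    """One explicit finite-volume step of viscous Burgers' equation
    with a white-noise stochastic flux (periodic domain). The noise
    amplitude eps is a free illustrative parameter, not a physically
    derived one."""
    uR = np.roll(u, -1)
    # deterministic face flux: average of u^2/2 minus the viscous term
    f = 0.25 * (u**2 + uR**2) - nu * (uR - u) / dx
    # stochastic flux, scaled so its statistical effect is grid-independent
    f += eps * rng.standard_normal(u.size) / np.sqrt(dt * dx)
    # conservative update: difference of face fluxes
    return u - (dt / dx) * (f - np.roll(f, 1))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64, endpoint=False)
u = np.sin(2 * np.pi * x)
for _ in range(200):
    u = stochastic_burgers_step(u, dx=1.0 / 64, dt=1e-4, nu=0.05, eps=1e-3, rng=rng)
```

Because the update is written in flux-difference form, the fluctuations perturb the solution locally but cannot change its mean, mirroring the consistency requirement the abstract emphasizes.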
Performance of a streaming mesh refinement algorithm.
Thompson, David C.; Pebay, Philippe Pierre
2004-08-01
In SAND report 2004-1617, we outline a method for edge-based tetrahedral subdivision that does not rely on saving state or communication to produce compatible tetrahedralizations. This report analyzes the performance of the technique by characterizing (a) mesh quality, (b) execution time, and (c) traits of the algorithm that could affect quality or execution time differently for different meshes. It also details the method used to debug the several hundred subdivision templates that the algorithm relies upon. Mesh quality is on par with other similar refinement schemes and throughput on modern hardware can exceed 600,000 output tetrahedra per second. But if you want to understand the traits of the algorithm, you have to read the report!
Fully implicit adaptive mesh refinement MHD algorithm
NASA Astrophysics Data System (ADS)
Philip, Bobby
2005-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite (FAC) algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully implicit, dynamically adaptive AMR simulations will be presented on a variety of problems.
A parallel adaptive mesh refinement algorithm
NASA Technical Reports Server (NTRS)
Quirk, James J.; Hanebutte, Ulf R.
1993-01-01
Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.
An adaptive mesh refinement algorithm for the discrete ordinates method
Jessee, J.P.; Fiveland, W.A.; Howell, L.H.; Colella, P.; Pember, R.B.
1996-03-01
The discrete ordinates form of the radiative transport equation (RTE) is spatially discretized and solved using an adaptive mesh refinement (AMR) algorithm. This technique permits the local grid refinement to minimize spatial discretization error of the RTE. An error estimator is applied to define regions for local grid refinement; overlapping refined grids are recursively placed in these regions; and the RTE is then solved over the entire domain. The procedure continues until the spatial discretization error has been reduced to a sufficient level. The following aspects of the algorithm are discussed: error estimation, grid generation, communication between refined levels, and solution sequencing. This initial formulation employs the step scheme, and is valid for absorbing and isotropically scattering media in two-dimensional enclosures. The utility of the algorithm is tested by comparing the convergence characteristics and accuracy to those of the standard single-grid algorithm for several benchmark cases. The AMR algorithm provides a reduction in memory requirements and maintains the convergence characteristics of the standard single-grid algorithm; however, the cases illustrate that efficiency gains of the AMR algorithm will not be fully realized until three-dimensional geometries are considered.
Incremental refinement of a multi-user-detection algorithm (II)
NASA Astrophysics Data System (ADS)
Vollmer, M.; Götze, J.
2003-05-01
Multi-user detection is a technique proposed for mobile radio systems based on the CDMA principle, such as the upcoming UMTS. While offering an elegant solution to problems such as intra-cell interference, it demands very significant computational resources. In this paper, we present a high-level approach for reducing the required resources for performing multi-user detection in a 3GPP TDD multi-user system. This approach is based on a displacement representation of the parameters that describe the transmission system, and a generalized Schur algorithm that works on this representation. The Schur algorithm naturally leads to a highly parallel hardware implementation using CORDIC cells. It is shown that this hardware architecture can also be used to compute the initial displacement representation. It is very beneficial to introduce incremental refinement structures into the solution process, both at the algorithmic level and in the individual cells of the hardware architecture. We detail these approximations and present simulation results that confirm their effectiveness.
Fully implicit adaptive mesh refinement algorithm for reduced MHD
NASA Astrophysics Data System (ADS)
Philip, Bobby; Pernice, Michael; Chacon, Luis
2006-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully implicit, dynamically adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. References: L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006).
MISR research-aerosol-algorithm refinements for dark water retrievals
NASA Astrophysics Data System (ADS)
Limbacher, J. A.; Kahn, R. A.
2014-11-01
We explore systematically the cumulative effect of many assumptions made in the Multi-angle Imaging SpectroRadiometer (MISR) research aerosol retrieval algorithm with the aim of quantifying the main sources of uncertainty over ocean, and correcting them to the extent possible. A total of 1129 coincident, surface-based sun photometer spectral aerosol optical depth (AOD) measurements are used for validation. Based on comparisons between these data and our baseline case (similar to the MISR standard algorithm, but without the "modified linear mixing" approximation), for 558 nm AOD < 0.10, a high bias of 0.024 is reduced by about one-third when (1) ocean surface under-light is included and the assumed whitecap reflectance at 672 nm is increased, (2) physically based adjustments in particle microphysical properties and mixtures are made, (3) an adaptive pixel selection method is used, (4) spectral reflectance uncertainty is estimated from vicarious calibration, and (5) minor radiometric calibration changes are made for the 672 and 866 nm channels. Applying (6) more stringent cloud screening (setting the maximum fraction not-clear to 0.50) brings all median spectral biases to about 0.01. When all adjustments except more stringent cloud screening are applied, and a modified acceptance criterion is used, the Root-Mean-Square-Error (RMSE) decreases for all wavelengths by 8-27% for the research algorithm relative to the baseline, and is 12-36% lower than the RMSE for the Version 22 MISR standard algorithm (SA, with no adjustments applied). At 558 nm, 87% of AOD data falls within the greater of 0.05 or 20% of validation values; 62% of the 446 nm AOD data, and > 68% of 558, 672, and 866 nm AOD values fall within the greater of 0.03 or 10%. For the Ångström exponent (ANG), 67% of 1119 validation cases for AOD > 0.01 fall within 0.275 of the sun photometer values, compared to 49% for the SA. ANG RMSE decreases by 17% compared to the SA, and the median absolute error drops by
Using Small-Step Refinement for Algorithm Verification in Computer Science Education
ERIC Educational Resources Information Center
Simic, Danijela
2015-01-01
Stepwise program refinement techniques can be used to simplify program verification. Programs are better understood since their main properties are clearly stated, and verification of rather complex algorithms is reduced to proving simple statements connecting successive program specifications. Additionally, it is easy to analyse similar…
NASA Astrophysics Data System (ADS)
Lau, Erin-Ee-Lin; Chung, Wan-Young
A novel RSSI (Received Signal Strength Indication) refinement algorithm is proposed to enhance the resolution of indoor and outdoor real-time location tracking systems. The proposed refinement algorithm is implemented in two separate phases. During the first phase, called the pre-processing step, RSSI values at different static locations are collected and processed to build a calibrated model for each reference node. Different measurement campaigns, pertinent to each parameter in the model, are carried out to analyze the sensitivity of RSSI. The propagation models constructed for each reference node are needed by the second phase. During the second phase, called the runtime process, real-time tracking is performed. A smoothing algorithm is proposed to minimize the dynamic fluctuation of the radio signal received from each reference node while the mobile target is moving. Filtered RSSI values are converted to distances using the formula calibrated in the first phase. Finally, an iterative trilateration algorithm is used for position estimation. Experiments relevant to the optimization algorithm were carried out in both indoor and outdoor environments, and the results validate the feasibility of the proposed algorithm in reducing the dynamic fluctuation for more accurate position estimation.
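The pipeline described above (smooth the RSSI, convert to distance via a calibrated path-loss model, then trilaterate) can be sketched as follows. This is a generic sketch, not the paper's algorithm: the log-distance model parameters `rssi0` and `n`, the smoothing factor `alpha`, and the linearized least-squares trilateration are all standard stand-ins for the calibrated models and iterative solver the abstract mentions.

```python
import numpy as np

def rssi_to_distance(rssi, rssi0=-40.0, n=2.0):
    """Log-distance path-loss model: rssi = rssi0 - 10*n*log10(d).
    rssi0 (RSSI at 1 m) and exponent n would come from the per-node
    calibration phase; the defaults here are illustrative."""
    return 10 ** ((rssi0 - rssi) / (10 * n))

def smooth(rssi_series, alpha=0.3):
    """Exponential smoothing to damp dynamic RSSI fluctuation."""
    out, s = [], rssi_series[0]
    for r in rssi_series:
        s = alpha * r + (1 - alpha) * s
        out.append(s)
    return out

def trilaterate(anchors, dists):
    """Linearized least-squares position fix from >= 3 reference nodes:
    subtracting the first circle equation from the others yields a
    linear system in the unknown position."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(dists, float)
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + (anchors[1:] ** 2).sum(1) - (anchors[0] ** 2).sum())
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

With more than three reference nodes the least-squares solve automatically averages out residual ranging error, which plays the same role as the paper's iterative refinement.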
Improvement and Refinement of the GPS/MET Data Analysis Algorithm
NASA Technical Reports Server (NTRS)
Herman, Benjamin M.
2003-01-01
The GPS/MET project was a satellite-to-satellite active microwave atmospheric limb sounder using the Global Positioning System transmitters as signal sources. Despite its remarkable success, GPS/MET could not independently sense atmospheric water vapor and ozone. Additionally, the GPS/MET data retrieval algorithm needs to be further improved and refined to enhance the retrieval accuracies in the lower tropospheric region and the upper stratospheric region. The objectives of this proposal were to address these three problem areas.
Efficient modularity optimization by multistep greedy algorithm and vertex mover refinement.
Schuetz, Philipp; Caflisch, Amedeo
2008-04-01
Identifying strongly connected substructures in large networks provides insight into their coarse-grained organization. Several approaches based on the optimization of a quality function, e.g., the modularity, have been proposed. We present here a multistep extension of the greedy algorithm (MSG) that allows the merging of more than one pair of communities at each iteration step. The essential idea is to prevent the premature condensation into few large communities. Upon convergence of the MSG a simple refinement procedure called "vertex mover" (VM) is used for reassigning vertices to neighboring communities to improve the final modularity value. With an appropriate choice of the step width, the combined MSG-VM algorithm is able to find solutions of higher modularity than those reported previously. The multistep extension does not alter the scaling of computational cost of the greedy algorithm. PMID:18517695
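The "vertex mover" refinement step can be sketched generically: given a partition, repeatedly reassign each vertex to whichever neighboring community most increases modularity. This sketch recomputes modularity from scratch for each trial move (the real algorithm would use incremental gain formulas for efficiency), and the function names are mine.

```python
from collections import defaultdict

def modularity(edges, comm):
    """Newman modularity of a partition of an undirected, unweighted graph,
    given as an edge list and a vertex -> community mapping."""
    m = len(edges)
    deg, inside, tot = defaultdict(int), defaultdict(int), defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        if comm[u] == comm[v]:
            inside[comm[u]] += 1
    for v, d in deg.items():
        tot[comm[v]] += d
    return sum(inside[c] / m - (tot[c] / (2 * m)) ** 2 for c in tot)

def vertex_mover(edges, comm):
    """Greedy refinement: move each vertex to the neighboring community
    that most increases modularity; repeat until no move helps."""
    nbrs = defaultdict(set)
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    improved = True
    while improved:
        improved = False
        for v in list(comm):
            best_q, best_c = modularity(edges, comm), comm[v]
            for c in {comm[n] for n in nbrs[v]}:
                old, comm[v] = comm[v], c
                q = modularity(edges, comm)
                if q > best_q + 1e-12:
                    best_q, best_c = q, c
                comm[v] = old
            if best_c != comm[v]:
                comm[v] = best_c
                improved = True
    return comm
```

On a graph of two triangles joined by a single edge, the mover corrects a vertex that was initially assigned to the wrong triangle.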
NASA Astrophysics Data System (ADS)
Lloyd, Lewis John
This work focused on developing a novel method for solving the nonlinear partial differential equations associated with thermal-hydraulic safety analysis software. Traditional methods involve solving large systems of nonlinear equations. One class of methods linearizes the nonlinear equations and attempts to minimize the nonlinear truncation error with timestep size selection. These linearized methods are characterized by low computational cost but reduced accuracy. Another class resolves those nonlinearities by using an iterative nonlinear refinement technique. However, these iterative methods are computationally expensive when multiple iterates are required to resolve the nonlinearities. These two paradigms stand at the opposite ends of a spectrum, and the middle ground had yet to be investigated. This research sought to find that middle ground, a balance between the competing incentives of computational cost and accuracy, by creating a hybrid method: a spatially-selective, nonlinear refinement (SNR) algorithm. As part of this work, the two-phase, three-field software COBRA was converted from a linearized semi-implicit solver to a nonlinearly convergent solver; an operator-based scaling that provides a physically meaningful convergence measure was developed and implemented; and the SNR algorithm was developed to enable a subdomain of the simulation to be subjected to multiple nonlinear iterates while maintaining global consistency. By selecting those areas of the computational domain where nonlinearities are expected to be high and subjecting only them to multiple nonlinear iterations, the accuracy of the nonlinear solver may be obtained without its associated computational cost.
NASA Astrophysics Data System (ADS)
Northrup, Scott A.
A new parallel implicit adaptive mesh refinement (AMR) algorithm is developed for the prediction of unsteady behaviour of laminar flames. The scheme is applied to the solution of the system of partial-differential equations governing time-dependent, two- and three-dimensional, compressible laminar flows for reactive thermally perfect gaseous mixtures. A high-resolution finite-volume spatial discretization procedure is used to solve the conservation form of these equations on body-fitted multi-block hexahedral meshes. A local preconditioning technique is used to remove numerical stiffness and maintain solution accuracy for low-Mach-number, nearly incompressible flows. A flexible block-based octree data structure has been developed and is used to facilitate automatic solution-directed mesh adaptation according to physics-based refinement criteria. The data structure also enables an efficient and scalable parallel implementation via domain decomposition. The parallel implicit formulation makes use of a dual-time-stepping like approach with an implicit second-order backward discretization of the physical time, in which a Jacobian-free inexact Newton method with a preconditioned generalized minimal residual (GMRES) algorithm is used to solve the system of nonlinear algebraic equations arising from the temporal and spatial discretization procedures. An additive Schwarz global preconditioner is used in conjunction with block incomplete LU type local preconditioners for each sub-domain. The Schwarz preconditioning and block-based data structure readily allow efficient and scalable parallel implementations of the implicit AMR approach on distributed-memory multi-processor architectures. The scheme was applied to solutions of steady and unsteady laminar diffusion and premixed methane-air combustion and was found to accurately predict key flame characteristics. For a premixed flame under terrestrial gravity, the scheme accurately predicted the frequency of the natural
2014-01-01
Background: Developing suitable methods for the identification of protein complexes remains an active research area. It is important because it enables a better understanding of cellular functions and malfunctions, and consequently leads to more effective cures for diseases. In this context, various computational approaches have been introduced to complement high-throughput experimental methods, which typically involve large datasets, are expensive in terms of time and cost, and are usually subject to spurious interactions. Results: In this paper, we propose ProRank+, a method which detects protein complexes in protein interaction networks. The presented approach is mainly based on a ranking algorithm, which sorts proteins according to their importance in the interaction network, and a merging procedure, which refines the detected complexes in terms of their protein members. ProRank+ was compared to several state-of-the-art approaches in order to show its effectiveness. It was able to detect more protein complexes with higher quality scores. Conclusions: The experimental results achieved by ProRank+ show its ability to detect protein complexes in protein interaction networks. Eventually, the method could potentially identify previously undiscovered protein complexes. The datasets and source codes are freely available for academic purposes at http://faculty.uaeu.ac.ae/nzaki/Research.htm. PMID:24944073
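A ranking stage of the kind described (sorting proteins by importance in the interaction network) can be sketched with a PageRank-style power iteration on the undirected network. This is a generic stand-in, not ProRank+'s actual ranking function; the damping value and iteration count are conventional defaults.

```python
from collections import defaultdict

def rank_proteins(edges, damping=0.85, iters=100):
    """PageRank-style importance ranking on an undirected protein
    interaction network, given as a list of (protein, protein) edges.
    Returns proteins sorted from most to least important."""
    nbrs = defaultdict(list)
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    nodes = list(nbrs)
    rank = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(iters):
        # each node distributes its damped score equally to its neighbors
        new = {v: (1.0 - damping) / len(nodes) for v in nodes}
        for v in nodes:
            share = damping * rank[v] / len(nbrs[v])
            for w in nbrs[v]:
                new[w] += share
        rank = new
    return sorted(nodes, key=rank.get, reverse=True)
```

On a star-shaped network, the hub protein ranks first, as expected for a centrality-based importance measure.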
NASA Astrophysics Data System (ADS)
Shaaban, Khaled M.; Schalkoff, Robert J.
1995-06-01
Most image processing and feature extraction algorithms consist of a composite sequence of operations to achieve a specific task. Overall algorithm capability depends upon the individual performance of each of these operations. This performance, in turn, is usually controlled by a set of a priori known (or estimated) algorithm parameters. The overall design of an image processing algorithm involves both the selection of the sub-algorithm sequence and of the required operating parameters, and is done using the best available knowledge of the problem and the experience of the algorithm designer. This paper presents a dynamic and adaptive image processing algorithm development structure. The implementation of the dynamic algorithm structure requires solving a classification problem at the decision nodes of an algorithm graph A. The number of required classifiers equals the number of decision nodes. There are several learning techniques that could be used to implement any of these classifiers. Each of these techniques, in turn, requires a training set. This training set can be generated using a modified form of the dynamic algorithm in which a human operator interface replaces all of the decision nodes. An optimization procedure (Nelder-Mead) is employed to assist the operator in finding the best parameter values. Examples of the approach using real-world imagery are shown.
NASA Astrophysics Data System (ADS)
Li, Lin; Kuai, Xi
2014-11-01
Generating a triangulated irregular network (TIN) from contour maps is the most commonly used approach to build Digital Elevation Models (DEMs) for geo-databases. A well-known problem when building a TIN is that many pan slope triangles (or PSTs) may emerge from the vertices of contour lines. Those triangles should be eliminated from the TIN by adding additional terrain points when refining the local TIN. There are many methods and algorithms available for eliminating PSTs in a TIN, but their performances may not satisfy the requirements of some applications where efficiency rather than completeness is critical. This paper investigates commonly used processes for eliminating PSTs and puts forward a new algorithm, referred to as the 'dichotomizing' interpolation algorithm, to achieve a higher efficiency than the conventional 'skeleton' extraction algorithm. Its better performance comes from reducing the number of additional interpolated points to only those that are sufficient and necessary for eliminating PSTs. This goal is reached by dichotomizing PST polygons iteratively and locating additional points in the geometric centers of the polygons. This study verifies, both theoretically and experimentally, the higher efficiency of this new dichotomizing algorithm and also demonstrates its reliability for building DEMs in terms of accuracy for estimating terrain surface elevation.
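The core idea of eliminating a flat triangle by inserting a point at a geometric center can be shown in a much-simplified form. This sketch treats each flat triangle individually and adds one centroid point, whereas the paper's algorithm dichotomizes merged PST polygons recursively; the function names and the interpolator interface are mine.

```python
def eliminate_flat_triangles(pts, elev, tris, interp):
    """Split every 'flat' triangle (all three vertices on the same
    contour elevation) by adding one point at its geometric center,
    whose elevation comes from the supplied terrain interpolator.
    pts and elev are mutated in place; a new triangle list is returned."""
    out = []
    for a, b, c in tris:
        if elev[a] == elev[b] == elev[c]:
            cx = (pts[a][0] + pts[b][0] + pts[c][0]) / 3.0
            cy = (pts[a][1] + pts[b][1] + pts[c][1]) / 3.0
            pts.append((cx, cy))
            elev.append(interp(cx, cy))       # true terrain elevation here
            m = len(pts) - 1
            out += [(a, b, m), (b, c, m), (c, a, m)]
        else:
            out.append((a, b, c))
    return out
```

Because the inserted point carries a different elevation than the three contour vertices, the three sub-triangles are no longer flat.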
A 3-D adaptive mesh refinement algorithm for multimaterial gas dynamics
Puckett, E.G.; Saltzman, J.S.
1991-08-12
Adaptive Mesh Refinement (AMR) in conjunction with high order upwind finite difference methods has been used effectively on a variety of problems. In this paper we discuss an implementation of an AMR finite difference method that solves the equations of gas dynamics with two material species in three dimensions. An equation for the evolution of volume fractions augments the gas dynamics system. The material interface is preserved and tracked from the volume fractions using a piecewise linear reconstruction technique. 14 refs., 4 figs.
Refinement-Cut: User-Guided Segmentation Algorithm for Translational Science
Egger, Jan
2014-01-01
In this contribution, a semi-automatic segmentation algorithm for (medical) image analysis is presented. More precisely, the approach belongs to the category of interactive contouring algorithms, which provide real-time feedback of the segmentation result. However, even with interactive real-time contouring approaches there are always cases where the user cannot find a satisfying segmentation, e.g. due to homogeneous appearance between the object and the background, or noise inside the object. For these difficult cases the algorithm still needs additional user support. However, this additional user support should be intuitive and rapidly integrated into the segmentation process, without breaking the interactive real-time segmentation feedback. I propose a solution where the user can support the algorithm with an easy and fast placement of one or more seed points to guide the algorithm to a satisfying segmentation result, even in difficult cases. These additional seeds restrict the calculation of the segmentation for the algorithm while still allowing the interactive real-time feedback segmentation to continue. For a practical and genuine application in translational science, the approach has been tested on medical data from the clinical routine in 2D and 3D. PMID:24893650
Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process
NASA Astrophysics Data System (ADS)
Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh
2016-06-01
Layered manufacturing machines use the stereolithography (STL) file format to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, the result is geometric distortion and chordal error. Parts manufactured from this file might not satisfy geometric dimensioning and tolerancing requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique, and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is the most suitable for the geometry under consideration. Only the wheel cap part is then manufactured on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.
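The triangular midpoint subdivision the study favors can be sketched on an indexed triangle mesh (the representation MeshLab uses internally; an STL file would first be converted to this form by merging duplicate vertices). Each triangle is split 1-to-4 at its edge midpoints; a shared-midpoint cache keeps the refined mesh watertight. Function and variable names are mine.

```python
def midpoint_subdivide(verts, tris):
    """One pass of midpoint (1-to-4) subdivision of an indexed triangle
    mesh. Edge midpoints are shared between adjacent triangles via a
    cache keyed on the sorted vertex pair, so no cracks are introduced."""
    verts = list(verts)
    cache = {}

    def mid(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            verts.append(tuple((a + b) / 2 for a, b in zip(verts[i], verts[j])))
            cache[key] = len(verts) - 1
        return cache[key]

    out = []
    for a, b, c in tris:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        # three corner triangles plus the central one
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, out
```

Unlike butterfly or Loop subdivision, midpoint subdivision leaves vertex positions unchanged, so the solid's volume is preserved exactly, which is consistent with the study's volume comparison favoring it.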
A node-centered local refinement algorithm for poisson's equation in complex geometries
McCorquodale, Peter; Colella, Phillip; Grote, David P.; Vay, Jean-Luc
2004-05-04
This paper presents a method for solving Poisson's equation with Dirichlet boundary conditions on an irregular bounded three-dimensional region. The method uses a nodal-point discretization and adaptive mesh refinement (AMR) on Cartesian grids, and the AMR multigrid solver of Almgren. The discrete Laplacian operator at internal boundaries comes from either linear or quadratic (Shortley-Weller) extrapolation, and the two methods are compared. It is shown that either way, solution error is second order in the mesh spacing. Error in the gradient of the solution is first order with linear extrapolation, but second order with Shortley-Weller. Examples are given with comparison with the exact solution. The method is also applied to a heavy-ion fusion accelerator problem, showing the advantage of adaptivity.
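The difference between linear and Shortley-Weller (quadratic) boundary extrapolation is easiest to see in one dimension. The sketch below is a 1-D reduction of my own (the paper works on 3-D nodal AMR grids): when the boundary lies a fraction `theta` of the mesh spacing beyond the last interior node, the Shortley-Weller stencil fits a quadratic through the left neighbor, the node, and the boundary value.

```python
def shortley_weller_1d(u_left, u_center, u_boundary, h, theta):
    """Shortley-Weller approximation to u''(x_i) on a 1-D grid when the
    Dirichlet boundary lies at x_i + theta*h with 0 < theta <= 1.
    Derived by combining Taylor expansions at x_i - h and x_i + theta*h
    so that the first-derivative terms cancel; exact for quadratics."""
    return 2.0 * (theta * u_left + u_boundary - (1.0 + theta) * u_center) / (
        h * h * theta * (1.0 + theta))
```

For `theta = 1` this reduces to the standard three-point Laplacian; for `theta < 1` it retains second-order accuracy of the gradient, which is the advantage over linear extrapolation reported in the abstract.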
NASA Astrophysics Data System (ADS)
Pioldi, Fabio; Ferrari, Rosalba; Rizzi, Egidio
2016-02-01
The present paper deals with the seismic modal dynamic identification of frame structures by a refined Frequency Domain Decomposition (rFDD) algorithm, autonomously formulated and implemented within MATLAB. First, the output-only identification technique is outlined analytically and then employed to characterize all modal properties. Synthetic response signals generated prior to the dynamic identification are adopted as input channels, in view of assessing a necessary condition for the procedure's efficiency. Initially, the algorithm is verified on canonical input from random excitation. Then, modal identification is attempted successfully on given seismic input, taken as base excitation, including both strong-motion data and single and multiple input ground motions. Rather than following previous attempts that investigate the role of seismic response signals in the time domain, this paper considers the identification analysis in the frequency domain. Results turn out to be very consistent with the target values, with quite limited errors in the modal estimates, including for the damping ratios, which range from the order of 1% to 10%. Both seismic excitation and high damping values, which prove critical even in the case of well-spaced modes, fail to fulfill traditional FDD assumptions: this demonstrates the robustness of the developed algorithm. Through original strategies and arrangements, the paper shows that a comprehensive rFDD modal dynamic identification of frames under seismic input is feasible, even with concomitant high damping.
NASA Technical Reports Server (NTRS)
Wang, Menghua
2003-01-01
The primary focus of this proposed research is atmospheric correction algorithm evaluation and development, and satellite sensor calibration and characterization. It is well known that atmospheric correction, which removes more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the derived water-leaving signals from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address issues of (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of this research is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has been devoted to (a) understanding and correcting the artifacts that appeared in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method of generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color remote sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) vs. the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Colour Coordinating Group (IOCCG) atmospheric correction working group. In this report, I briefly present and discuss these and some other research activities.
Genetic refinement of cloud-masking algorithms for the multi-spectral thermal imager (MTI)
Hirsch, K. L.; Davis, A. B.; Harvey, N. R.; Rohde, C. A.; Brumby, Steven P.
2001-01-01
The Multi-spectral Thermal Imager (MTI) is a high-performance remote-sensing satellite designed, owned and operated by the U.S. Department of Energy, with a dual mission in environmental studies and in nonproliferation. It has enhanced spatial and radiometric resolutions and state-of-the-art calibration capabilities. This instrumental development puts a new burden on retrieval algorithm developers to pass this accuracy on to the inferred geophysical parameters. In particular, the atmospheric correction scheme assumes that the intervening atmosphere can be modeled as a plane-parallel horizontally-homogeneous medium. A single dense-enough cloud in view of the ground target can easily offset reality from the calculations, hence the need for a reliable cloud-masking algorithm. Pixel-scale cloud detection relies on the simple facts that clouds are generally whiter, brighter, and colder than the ground below; spatially, dense clouds are generally large on some scale. This is a good basis for searching multispectral datacubes for cloud signatures. However, the resulting cloud mask can be very sensitive to the choice of thresholds in whiteness, brightness, temperature, and connectivity. We have used a genetic algorithm trained on (MODIS Airborne Simulator-based) simulated MTI data to design a cloud mask. Its performance is compared quantitatively to hand-drawn training data and to the EOS/Terra MODIS cloud mask.
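The whiter/brighter/colder pixel test and the genetic search over its thresholds can be caricatured as follows. This is a toy sketch with hypothetical feature names and a deliberately simple mutation-plus-elitism loop, not the evolutionary system actually used for MTI; fitness here is plain pixel accuracy against a hand-labelled truth mask.

```python
import numpy as np

def cloud_mask(bands, thresh):
    """Pixel-scale cloud test: whiter, brighter and colder than thresholds."""
    tw, tb, tt = thresh
    return ((bands['whiteness'] > tw) &
            (bands['brightness'] > tb) &
            (bands['temperature'] < tt))

def evolve_thresholds(bands, truth, rng, pop=20, gens=30):
    """Toy genetic search over the three thresholds.

    Fitness is the fraction of pixels on which the candidate mask agrees
    with the hand-labelled truth mask; the best half survives each
    generation and spawns Gaussian-mutated children.
    """
    P = rng.uniform([0.0, 0.0, 200.0], [1.0, 1.0, 300.0], size=(pop, 3))
    for _ in range(gens):
        fit = np.array([np.mean(cloud_mask(bands, p) == truth) for p in P])
        elite = P[np.argsort(fit)[::-1][:pop // 2]]      # selection
        children = elite.copy()
        children[:, :2] += rng.normal(0.0, 0.05, (pop // 2, 2))  # mutation
        children[:, 2] += rng.normal(0.0, 5.0, pop // 2)
        P = np.vstack([elite, children])
    fit = np.array([np.mean(cloud_mask(bands, p) == truth) for p in P])
    return P[np.argmax(fit)], float(fit.max())
```

The real MTI work adds a connectivity criterion and evolves whole pixel-classification pipelines; the sketch only conveys why threshold choice is the sensitive part.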
Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.
2009-06-15
A description is given of the algorithms implemented in the AstroBEAR adaptive mesh-refinement code for ideal magnetohydrodynamics. The code provides several high-resolution shock-capturing schemes which are constructed to maintain conserved quantities of the flow in a finite-volume sense. Divergence-free magnetic field topologies are maintained to machine precision by collating the components of the magnetic field on a cell-interface staggered grid and utilizing the constrained transport approach for integrating the induction equations. The maintenance of magnetic field topologies on adaptive grids is achieved using prolongation and restriction operators which preserve the divergence and curl of the magnetic field across collocated grids of different resolutions. The robustness and correctness of the code are demonstrated by comparing the numerical solution of various tests with analytical solutions or previously published numerical solutions obtained by other codes.
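The divergence-preserving mechanics of constrained transport can be seen in a minimal two-dimensional staggered-grid sketch (our illustration, not AstroBEAR code): face-centred field components are updated from corner electromotive forces, and the telescoping differences cancel so that each cell's discrete divergence is unchanged to machine precision.

```python
import numpy as np

def ct_update(Bx, By, Ez, dt, dx, dy):
    """One constrained-transport induction update on a 2-D staggered grid.

    Bx lives on x-faces (nx+1, ny), By on y-faces (nx, ny+1), and the
    EMF Ez on cell corners (nx+1, ny+1).  The update discretizes
    dBx/dt = -dEz/dy and dBy/dt = +dEz/dx using corner differences.
    """
    Bx = Bx - dt * (Ez[:, 1:] - Ez[:, :-1]) / dy
    By = By + dt * (Ez[1:, :] - Ez[:-1, :]) / dx
    return Bx, By

def cell_divergence(Bx, By, dx, dy):
    """Finite-volume divergence of the face-centred field, per cell."""
    return (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy
```

Because the same corner EMF values enter the Bx and By updates with opposite signs in the divergence sum, any initial divergence (zero or not) is exactly preserved by the update.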
NASA Technical Reports Server (NTRS)
Davis, M. W.
1984-01-01
A Real-Time Self-Adaptive (RTSA) active vibration controller was used as the framework in developing a computer program for a generic controller that can be used to alleviate helicopter vibration. Based upon on-line identification of system parameters, the generic controller minimizes vibration in the fuselage by closed-loop implementation of higher harmonic control in the main rotor system. The new generic controller incorporates a set of improved algorithms that gives the capability to readily define many different configurations by selecting one of three different controller types (deterministic, cautious, and dual), one of two linear system models (local and global), and one or more of several methods of applying limits on control inputs (external and/or internal limits on higher harmonic pitch amplitude and rate). A helicopter rotor simulation analysis was used to evaluate the algorithms associated with the alternative controller types as applied to the four-bladed H-34 rotor mounted on the NASA Ames Rotor Test Apparatus (RTA) which represents the fuselage. After proper tuning, all three controllers provide more effective vibration reduction and converge more quickly and smoothly with smaller control inputs than the initial RTSA controller (deterministic with external pitch-rate limiting). It is demonstrated that internal limiting of the control inputs significantly improves the overall performance of the deterministic controller.
Refined Upper Tropospheric Water Vapor Retrieval Algorithm for GOES-8 Imagery
NASA Astrophysics Data System (ADS)
Molnar, G. I.; McMillan, W. W.; Lightner, K.; McCourt, M.
2002-05-01
Water vapor is the most important greenhouse gas, yet there is still large uncertainty about how it will affect global climate change. It is not well known, for example, whether global warming will initiate an overall moistening or drying of the tropical upper troposphere. Unfortunately, long-term, reliable observations of upper tropospheric humidity (UTH), which exerts significant control on outgoing longwave radiation, are few and far between. On one hand, the older radiosonde observations are very unreliable in the upper troposphere. On the other hand, satellite-observation-based UTH retrievals are still in their infancy. Development of satellite-based UTH retrieval schemes requires reliable "ground truth" measurements and accurate radiative transfer calculations. Here, we extend and update the Soden and Bretherton [1993] UTH-retrieval method for GOES-8 to correspond more accurately with recent radiosonde measurements, using line-by-line radiative transfer calculations to model the satellite-observed radiances. We make use of the high-quality UTH profiles obtained during the CAMEX-4 measurement campaign over the northwestern Caribbean during Aug. 16 - Sept. 24, 2001. Co-located GOES-8 6.7 micron and 11 micron channel radiances are then used to fine-tune the satellite-based UTH retrieval algorithm. The satellite radiances are also modeled using the KCARTA line-by-line radiative transfer code developed at UMBC. Finally, we update the GOES-8 UTH-retrieval scheme coefficients to reflect the use of better "ground truth" and improved radiative transfer calculations, as well as the potential deterioration of the (uncalibrated) satellite radiances.
A revised partiality model and post-refinement algorithm for X-ray free-electron laser data
Ginn, Helen Mary; Brewster, Aaron S.; Hattne, Johan; Evans, Gwyndaf; Wagner, Armin; Grimes, Jonathan M.; Sauter, Nicholas K.; Sutton, Geoff; Stuart, David Ian
2015-05-23
An updated partiality model and post-refinement algorithm for XFEL snapshot diffraction data is presented and confirmed by observing anomalous density for S atoms at an X-ray wavelength of 1.3 Å. Research towards using X-ray free-electron laser (XFEL) data to solve structures using experimental phasing methods such as sulfur single-wavelength anomalous dispersion (SAD) has been hampered by shortcomings in the diffraction models for X-ray diffraction from FELs. Owing to errors in the orientation matrix and overly simple partiality models, researchers have required large numbers of images to converge to reliable estimates for the structure-factor amplitudes, which may not be feasible for all biological systems. Here, data for cytoplasmic polyhedrosis virus type 17 (CPV17) collected at 1.3 Å wavelength at the Linac Coherent Light Source (LCLS) are revisited. A previously published definition of a partiality model for reflections illuminated by self-amplified spontaneous emission (SASE) pulses is built upon, which defines a fraction between 0 and 1 based on the intersection of a reflection with a spread of Ewald spheres modelled by a super-Gaussian wavelength distribution in the X-ray beam. A method of post-refinement to refine the parameters of this model is suggested. This has generated a merged data set with an overall discrepancy (by calculating the R_split value) of 3.15% to 1.46 Å resolution from a 7225-image data set. The atomic numbers of C, N and O atoms in the structure are distinguishable in the electron-density map. There are 13 S atoms within the 237 residues of CPV17, excluding the initial disordered methionine. These only possess 0.42 anomalous scattering electrons each at 1.3 Å wavelength, but the 12 that have single predominant positions are easily detectable in the anomalous difference Fourier map. It is hoped that these improvements will lead towards XFEL experimental phase determination and structure determination by sulfur SAD and will generally increase the utility of the method for difficult cases.
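The flavour of a super-Gaussian partiality model can be conveyed with a one-dimensional sketch: the partiality of a reflection is the fraction of the pulse's spectral weight falling inside the wavelength band that brings the reflection onto an Ewald sphere. This is our simplified illustration of the idea only; the published model works from the actual reflection/Ewald-sphere intersection geometry, and all function names here are assumptions.

```python
import numpy as np

def super_gaussian(wl, mu, sigma, exponent):
    """Flat-topped SASE spectrum model; exponent = 1 recovers a Gaussian."""
    return np.exp(-(((wl - mu) ** 2) / (2.0 * sigma ** 2)) ** exponent)

def partiality(wl_lo, wl_hi, mu, sigma, exponent, n=2001):
    """Fraction (0..1) of spectral weight inside the band [wl_lo, wl_hi]
    that satisfies the diffraction condition for a given reflection."""
    wl = np.linspace(mu - 6.0 * sigma, mu + 6.0 * sigma, n)
    w = super_gaussian(wl, mu, sigma, exponent)
    inside = (wl >= wl_lo) & (wl <= wl_hi)
    return w[inside].sum() / w.sum()   # uniform grid: spacing cancels
```

A band covering the whole spectrum gives partiality 1; a band covering exactly half of a symmetric spectrum gives 0.5, which is the kind of sanity check post-refinement parameterizations must satisfy.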
Commentary to "Multiple Grammars and Second Language Representation," by Luiz Amaral and Tom Roeper
ERIC Educational Resources Information Center
Pérez-Leroux, Ana T.
2014-01-01
In this commentary, the author defends the Multiple Grammars (MG) theory proposed by Luiz Amaral and Tom Roeper (A&R) in the present issue. Topics discussed include second language acquisition, the concept of developmental optionality, and the idea that structural decisions involve the lexical dimension. The author states that A&R's…
Omnivorous Representation Might Lead to Indigestion: Commentary on Amaral and Roeper
ERIC Educational Resources Information Center
Slabakova, Roumyana
2014-01-01
This article offers commentary that the Multiple Grammar (MG) language acquisition theory proposed by Luiz Amaral and Tom Roeper (A&R) in the present issue lacks elaboration of the psychological mechanisms at work in second language acquisition. Topics discussed include optionality in a speaker's grammar and the rules of verb position in…
Wake Up, It Is 2013! Commentary on Luiz Amaral and Tom Roeper's Article
ERIC Educational Resources Information Center
Muysken, Pieter
2014-01-01
This article examines the Multiple Grammars (MG) theory proposed by Luiz Amaral and Tom Roeper in the present issue and presents a critique of the research that went into the theory. Topics discussed include the allegation that the bilinguals and second language learners in the original article are primarily students in an academic setting, Amaral…
Low-thrust orbit transfer optimization with refined Q-law and multi-objective genetic algorithm
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Petropoulos, Anastassios E.; von Allmen, Paul
2005-01-01
An optimization method for low-thrust orbit transfers around a central body is developed using the Q-law and a multi-objective genetic algorithm. In the hybrid method, the Q-law generates candidate orbit transfers, and the multi-objective genetic algorithm optimizes the Q-law control parameters in order to simultaneously minimize both the consumed propellant mass and the flight time of the orbit transfer. This paper addresses the problem of finding optimal orbit transfers for low-thrust spacecraft.
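Simultaneously minimizing propellant mass and flight time yields not a single optimum but a Pareto front of non-dominated transfers, which is what a multi-objective genetic algorithm maintains. A minimal sketch of the dominance filter (ours, not the paper's implementation):

```python
def pareto_front(candidates):
    """Keep candidates not dominated in (propellant_mass, flight_time).

    A transfer dominates another if it is no worse in both objectives
    and strictly better in at least one.
    """
    front = []
    for i, (m_i, t_i) in enumerate(candidates):
        dominated = any(
            (m_j <= m_i and t_j <= t_i) and (m_j < m_i or t_j < t_i)
            for j, (m_j, t_j) in enumerate(candidates) if j != i
        )
        if not dominated:
            front.append((m_i, t_i))
    return front
```

In the hybrid method, each candidate's objective pair would come from propagating a Q-law transfer with a particular set of control parameters; the GA then breeds new parameter sets biased toward the current front.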
NASA Astrophysics Data System (ADS)
Bay, Annick; Mayer, Alexandre
2014-09-01
The efficiency of light-emitting diodes (LEDs) has increased significantly over the past few years, but the overall efficiency is still limited by total internal reflections due to the high dielectric-constant contrast between the incident and emergent media. The bioluminescent organ of fireflies gave incentive for light-extraction enhancement studies. A specific factory-roof-shaped structure was shown, by means of light-propagation simulations and measurements, to enhance light extraction significantly. In order to achieve a similar effect for light-emitting diodes, the structure needs to be adapted to the specific set-up of LEDs. In this context, simulations were carried out to determine the best geometrical parameters. In the present work, the search for a geometry that maximizes the extraction of light has been conducted by using a genetic algorithm. The idealized structure considered previously was generalized to a broader variety of shapes. The genetic algorithm makes it possible to search simultaneously over a wider range of parameters. It is also significantly less time-consuming than the previous approach, which was based on a systematic scan over parameters. The results of the genetic algorithm show that (1) the calculations can be performed in a smaller amount of time and (2) the light extraction can be enhanced even more significantly by using optimal parameters determined by the genetic algorithm for the generalized structure. The combination of the genetic algorithm with the Rigorous Coupled-Wave Analysis method constitutes a strong simulation tool, which provides us with adapted designs for enhancing light extraction from light-emitting diodes.
NASA Astrophysics Data System (ADS)
Hamimi, Z.; Kassem, O. M. K.; El-Sabrouty, M. N.
2015-09-01
The rotation of rigid objects within a flowing viscous medium is a function of several factors, including the degree of non-coaxiality. The relationship between the orientation of such objects and their aspect ratio can be used in vorticity analyses in a variety of geological settings. A vorticity-analysis method for quantitative estimation of the kinematic vorticity number (Wm) has been applied using rotated rigid objects such as quartz and feldspar grains. The kinematic vorticity number determined for the high-temperature mylonitic Abt schist in the Al Amar area, extreme eastern Arabian Shield, ranges from ~0.8 to 0.9. Results obtained from the vorticity and strain analyses indicate that deformation in the area deviated from simple shear. It is concluded that nappe stacking occurred early, during an earlier thrusting event, probably by brittle imbrication. Ductile strain was superimposed on the nappe structure at high pressure, as revealed by a penetrative subhorizontal foliation developed subparallel to the tectonic contacts with the underlying and overlying nappes. Accumulation of ductile strain during underplating was not by simple shear but involved a component of vertical shortening, which caused the subhorizontal foliation in the Al Amar area. In most cases, this foliation formed concurrently with thrust-sheet imbrication, indicating that nappe stacking was associated with vertical shortening.
Vellieux, F M
1998-01-01
A comparison has been made of two methods for electron-density map improvement by the introduction of atomicity, namely the iterative skeletonization procedure of the CCP4 program DM [Cowtan & Main (1993). Acta Cryst. D49, 148-157] and the pseudo-atom introduction followed by the refinement protocol in the program suite DEMON/ANGEL [Vellieux, Hunt, Roy & Read (1995). J. Appl. Cryst. 28, 347-351]. Tests carried out using the 3.0 Å resolution electron density resulting from iterative 12-fold non-crystallographic symmetry averaging and solvent flattening for the Pseudomonas aeruginosa ornithine transcarbamoylase [Villeret, Tricot, Stalon & Dideberg (1995). Proc. Natl Acad. Sci. USA, 92, 10762-10766] indicate that pseudo-atom introduction followed by refinement performs much better than iterative skeletonization: with the former method, a phase improvement of 15.3 degrees is obtained with respect to the initial density modification phases. With iterative skeletonization a phase degradation of 0.4 degrees is obtained. Consequently, the electron-density maps obtained using pseudo-atom phases or pseudo-atom phases combined with density-modification phases are much easier to interpret. These tests also show that for ornithine transcarbamoylase, where 12-fold non-crystallographic symmetry is present in the P1 crystals, G-function coupling leads to the simultaneous decrease of the conventional R factor and of the free R factor, a phenomenon which is not observed when non-crystallographic symmetry is absent from the crystal. The method is far less effective in such a case, and the results obtained suggest that the map sorting followed by refinement stage should be by-passed to obtain interpretable electron-density distributions. PMID:9761819
NASA Astrophysics Data System (ADS)
Ragusa, Maria Alessandra; Russo, Giulia
2016-07-01
Ben Amar and Bianca provide a valuable review of the state of the art in fibrosis modeling [1]. Each paragraph identifies and examines a specific theoretical tool according to its scale level (molecular, cellular or tissue), showing its area of application along with a clear description of its strong and weak points. This critical analysis highlights the need to develop a more suitable and original multiscale approach in the future [2].
Chadha, N; Jasuja, H; Kaur, M; Singh Bahia, M; Silakari, O
2014-01-01
Phosphoinositide 3-kinase alpha (PI3Kα) is a lipid kinase involved in several cellular functions such as cell growth, proliferation, differentiation and survival, and its anomalous regulation leads to cancerous conditions. PI3Kα inhibition completely blocks the cancer signalling pathway, hence it can be explored as an important therapeutic target for cancer treatment. In the present study, docking analysis of 49 selective imidazo[1,2-a]pyrazine inhibitors of PI3Kα was carried out using the QM-Polarized Ligand Docking (QPLD) program of the Schrödinger software, followed by the refinement of receptor-ligand conformations using the Hybrid Monte Carlo algorithm in the Liaison program, and the alignment of refined conformations of the inhibitors was utilized for the development of an atom-based 3D-QSAR model in the PHASE program. Among the five generated models, the best model was selected corresponding to PLS factor 2, displaying the highest value of Q^2 (test) (0.650). The selected model also displayed high values of r^2 (train) (0.917), F-value (166.5) and Pearson-r (0.877) and a low value of SD (0.265). The contour plots generated for the selected 3D-QSAR model were correlated with the results of the docking simulations. Finally, this combined information generated from the 3D-QSAR and docking analyses was used to design new congeners. PMID:24601789
Parallel adaptive mesh refinement within the PUMAA3D Project
NASA Technical Reports Server (NTRS)
Freitag, Lori; Jones, Mark; Plassmann, Paul
1995-01-01
To enable the solution of large-scale applications on distributed memory architectures, we are designing and implementing parallel algorithms for the fundamental tasks of unstructured mesh computation. In this paper, we discuss efficient algorithms developed for two of these tasks: parallel adaptive mesh refinement and mesh partitioning. The algorithms are discussed in the context of two-dimensional finite element solution on triangular meshes, but are suitable for use with a variety of element types and with h- or p-refinement. Results demonstrating the scalability and efficiency of the refinement algorithm and the quality of the mesh partitioning are presented for several test problems on the Intel DELTA.
Mead, T.C.; Sequeira, A.J.; Smith, B.F.
1981-10-13
An improved process is described for solvent refining lubricating oil base stocks from petroleum fractions containing both aromatic and nonaromatic constituents. The process utilizes N-methyl-2-pyrrolidone as a selective solvent for aromatic hydrocarbons, wherein the refined oil fraction and the extract fraction are freed of final traces of solvent by stripping with gaseous ammonia. The process has several advantages over conventional processes, including savings in the energy required for the solvent refining process and reduced corrosion of the process equipment.
Parametric Rietveld refinement
Stinton, Graham W.; Evans, John S. O.
2007-01-01
In this paper the method of parametric Rietveld refinement is described, in which an ensemble of diffraction data collected as a function of time, temperature, pressure or any other variable are fitted to a single evolving structural model. Parametric refinement offers a number of potential benefits over independent or sequential analysis. It can lead to higher precision of refined parameters, offers the possibility of applying physically realistic models during data analysis, allows the refinement of ‘non-crystallographic’ quantities such as temperature or rate constants directly from diffraction data, and can help avoid false minima. PMID:19461841
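The gain from parametric refinement can be illustrated with a toy ensemble fit: instead of refining a peak position independently in each scan, all scans collected at different temperatures are fitted at once with a single physically-motivated law for the peak centre, c(T) = c0 + c1*T. This is our own minimal caricature of the idea (synthetic Gaussian "scans", hypothetical parameter values), not a Rietveld implementation.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic ensemble: one Gaussian "diffraction peak" per temperature,
# whose centre drifts linearly with T (a stand-in for thermal expansion).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
temps = np.array([100.0, 200.0, 300.0, 400.0])
c0_true, c1_true = 4.0, 0.004
scans = [np.exp(-0.5 * ((x - (c0_true + c1_true * T)) / 0.3) ** 2)
         + 0.01 * rng.standard_normal(x.size) for T in temps]

def residuals(p):
    """Stacked residuals of ALL scans against ONE evolving model:
    the peak centre is constrained to follow c(T) = c0 + c1*T."""
    c0, c1, width = p
    r = [scan - np.exp(-0.5 * ((x - (c0 + c1 * T)) / width) ** 2)
         for scan, T in zip(scans, temps)]
    return np.concatenate(r)

fit = least_squares(residuals, x0=[4.2, 0.002, 0.4])
c0_fit, c1_fit, _ = fit.x
```

The "non-crystallographic" quantities c0 and c1 are refined directly from the whole ensemble, which is the essence of the parametric approach: fewer free parameters than four independent fits, and a physically realistic constraint applied during the refinement itself.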
Refining quadrilateral and brick element meshes
Schneiders, R.; Debye, J.
1995-12-31
We consider the problem of refining unstructured quadrilateral and brick element meshes. We present an algorithm which is a generalization of an algorithm developed by Cheng et al. for structured quadrilateral element meshes. The problem is solved for the two-dimensional case. For three dimensions we present a solution for some special cases and a general solution that introduces tetrahedral and pyramidal transition elements.
Parallel Adaptive Mesh Refinement
Diachin, L; Hornung, R; Plassmann, P; WIssink, A
2005-03-04
As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35].
Orthogonal polynomials for refinable linear functionals
NASA Astrophysics Data System (ADS)
Laurie, Dirk; de Villiers, Johan
2006-12-01
A refinable linear functional is one that can be expressed as a convex combination and defined by a finite number of mask coefficients of certain stretched and shifted replicas of itself. The notion generalizes an integral weighted by a refinable function. The key to calculating a Gaussian quadrature formula for such a functional is to find the three-term recursion coefficients for the polynomials orthogonal with respect to that functional. We show how to obtain the recursion coefficients by using only the mask coefficients, and without the aid of modified moments. Our result implies the existence of the corresponding refinable functional whenever the mask coefficients are nonnegative, even when the same mask does not define a refinable function. The algorithm requires O(n^2) rational operations and, thus, can in principle deliver exact results. Numerical evidence suggests that it is also effective in floating-point arithmetic.
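Once the three-term recursion coefficients are in hand, the Gaussian quadrature rule follows from the classical Golub-Welsch construction: the nodes are the eigenvalues of the symmetric tridiagonal Jacobi matrix and the weights come from the first components of its eigenvectors. The sketch below illustrates that final step on the familiar Legendre weight; it is not the authors' mask-based algorithm for computing the coefficients themselves.

```python
import numpy as np

def golub_welsch(alpha, beta, mu0):
    """Gaussian quadrature from three-term recursion coefficients.

    alpha : diagonal recursion coefficients (length n)
    beta  : off-diagonal coefficients (length n-1, all positive)
    mu0   : total mass of the functional (integral of the weight)
    """
    J = (np.diag(alpha)
         + np.diag(np.sqrt(beta), 1)
         + np.diag(np.sqrt(beta), -1))
    nodes, vecs = np.linalg.eigh(J)
    weights = mu0 * vecs[0] ** 2   # squared first components of eigenvectors
    return nodes, weights

# Check case: Legendre weight on [-1, 1], where alpha_k = 0,
# beta_k = k^2 / (4k^2 - 1), and mu0 = 2.
n = 5
k = np.arange(1, n, dtype=float)
nodes, weights = golub_welsch(np.zeros(n), k**2 / (4.0 * k**2 - 1.0), 2.0)
```

The paper's contribution is obtaining alpha and beta directly from the mask coefficients of the refinable functional; with those in place, this eigenvalue step is unchanged.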
Woodle, R.A.
1982-04-20
A dual-solvent refining process is claimed for solvent refining petroleum-based lubricating oil stocks with N-methyl-2-pyrrolidone as the selective solvent for aromatic oils, wherein a highly paraffinic oil having a narrow boiling range approximating the boiling point of N-methyl-2-pyrrolidone is employed as a backwash solvent. The process of the invention results in an increased yield of refined lubricating oil stock of a predetermined quality and simplifies separation of the solvents from the extract and raffinate oil fractions.
Mesh quality control for multiply-refined tetrahedral grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger
1994-01-01
A new algorithm for controlling the quality of multiply-refined tetrahedral meshes is presented in this paper. The basic dynamic mesh adaption procedure allows localized grid refinement and coarsening to efficiently capture aerodynamic flow features in computational fluid dynamics problems; however, repeated application of the procedure may significantly deteriorate the quality of the mesh. Results presented show the effectiveness of this mesh quality algorithm and its potential in the area of helicopter aerodynamics and acoustics.
NASA Astrophysics Data System (ADS)
Napoli, Gaetano
2016-07-01
The term fibrosis refers to the development of fibrous connective tissue, in an organ or in a tissue, as a reparative response to injury or damage. The review article by Ben Amar and Bianca [1] proposes a unified multiscale approach for the modeling of fibrosis, accounting for phenomena occurring at different spatial scales (molecular, cellular and macroscopic). The main aim is to define a general unified framework able to describe the mechanisms, not yet completely understood, that trigger physiological and pathological fibrosis.
NASA Astrophysics Data System (ADS)
Copur, Yalcin
This study compares the modified kraft process, polysulfide pulping, one of the methods to obtain higher pulp yield, with the conventional kraft method. More specifically, the study focuses on the refining behavior of polysulfide pulp, an area with limited literature. Physical, mechanical and chemical properties of kraft and polysulfide pulps (4% elemental sulfur addition to the cooking digester) cooked under the same conditions were studied with regard to their behavior under various levels of PFI refining (0, 3000, 6000 and 9000 revolutions). Polysulfide (PS) pulping, compared to the kraft method, resulted in higher pulp yield and higher pulp kappa number. Polysulfide also gave pulp having higher tensile and burst indices. However, the strength of polysulfide pulp, measured as tear index at a constant tensile index, was found to be 15% lower than that of the kraft pulp. Refining studies showed that the moisture-holding ability of chemical pulps depends mostly on the chemical nature of the pulp. Refining effects such as fibrillation and fine content did not have a significant effect on the hygroscopic behavior of chemical pulp.
REFINE WETLAND REGULATORY PROGRAM
The Tribes will work toward refining a regulatory program by taking a draft wetland conservation code with permitting incorporated to TEB for review. Progress will then proceed in developing a permit tracking system that will track both Tribal and fee land sites within reservati...
Choices, Frameworks and Refinement
NASA Technical Reports Server (NTRS)
Campbell, Roy H.; Islam, Nayeem; Johnson, Ralph; Kougiouris, Panos; Madany, Peter
1991-01-01
In this paper we present a method for designing operating systems using object-oriented frameworks. A framework can be refined into subframeworks. Constraints specify the interactions between the subframeworks. We describe how we used object-oriented frameworks to design Choices, an object-oriented operating system.
Parallel tetrahedral mesh refinement with MOAB.
Thompson, David C.; Pebay, Philippe Pierre
2008-12-01
In this report, we present the novel functionality of parallel tetrahedral mesh refinement which we have implemented in MOAB. This report details work done to implement parallel, edge-based, tetrahedral refinement in MOAB. The theoretical basis for this work is contained in [PT04, PT05, TP06], while information on design, performance, and operation specific to MOAB is contained herein. As MOAB is intended mainly for use in pre-processing and simulation (as opposed to the post-processing bent of previous papers), the primary use case is different: rather than refining elements with non-linear basis functions, the goal is to increase the number of degrees of freedom in some region in order to more accurately represent the solution to some system of equations that cannot be solved analytically. Also, MOAB has a unique mesh representation which impacts the algorithm. This introduction contains a brief review of streaming edge-based tetrahedral refinement. The remainder of the report is broken into three sections: design and implementation, performance, and conclusions. Appendix A contains instructions for end users (simulation authors) on how to employ the refiner.
Number systems, α-splines and refinement
NASA Astrophysics Data System (ADS)
Zube, Severinas
2004-12-01
This paper is concerned with smooth refinable functions on the plane relative to a complex scaling factor α. Characteristic functions of certain self-affine tiles related to a given scaling factor are the simplest examples of such refinable functions. We study the smooth refinable functions obtained by a convolution power of such characteristic functions. Dahlke, Dahmen, and Latour obtained some explicit estimates for the smoothness of the resulting convolution products. In the case α=1+i, we prove better results. We introduce α-splines in two variables which are linear combinations of shifted basic functions. We derive basic properties of α-splines and proceed with a detailed presentation of refinement methods. We illustrate the application of α-splines to subdivision with several examples. It turns out that α-splines produce well-known subdivision algorithms which are based on box splines: Doo-Sabin, Catmull-Clark, Loop, Midedge and some -subdivision schemes with good continuity. The main geometric ingredient in the definition of α-splines is the fundamental domain (a fractal set or a self-affine tile). The properties of the fractal obtained in number theory are important and necessary in order to determine two basic properties of α-splines: partition of unity and the refinement equation.
Issues in adaptive mesh refinement
Dai, William Wenlong
2009-01-01
In this paper, we present an approach to patch-based adaptive mesh refinement (AMR) for multi-physics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, and management of patches. Among the special features of this patch-based AMR are symmetry preserving, efficiency of refinement, a special implementation of flux correction, and patch management in parallel computing environments. Here, higher efficiency of refinement means fewer unnecessarily refined cells for a given set of cells to be refined. To demonstrate the capability of the AMR framework, hydrodynamics simulations with many levels of refinement are shown in both two and three dimensions.
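The flux correction mentioned here keeps a finite-volume scheme conservative at coarse-fine patch boundaries: the coarse cell bordering a refined patch is re-updated with the averaged fine fluxes through the shared face instead of the coarse flux it originally used. A 1-D sketch, with names and setup that are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def reflux(u_coarse, F_coarse, fine_face_fluxes, i_face, dt, dx):
    """Refluxing at coarse face i_face, whose right side is covered by a
    fine patch. The coarse cell i_face-1 originally used F_coarse[i_face]
    as its right-hand flux; replace it by the average of the fine
    sub-step fluxes through the same face to restore conservation."""
    delta = np.mean(fine_face_fluxes) - F_coarse[i_face]
    u_coarse[i_face - 1] -= (dt / dx) * delta
    return u_coarse

# Two coarse cells, a fine patch to the right of face 1, two fine sub-steps:
u = np.array([1.0, 2.0])
F = np.array([0.0, 0.5, 0.0])              # coarse face fluxes
u = reflux(u, F, [0.6, 0.8], i_face=1, dt=0.1, dx=1.0)
# u[0] becomes 1.0 - 0.1*(0.7 - 0.5) = 0.98
```

Without this correction, the mass leaving the coarse cell would differ from the mass entering the fine patch, and conservation errors would accumulate at every refinement boundary.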
Stacey, J.S.; Stoeser, D.B.; Greenwood, W.R.; Fischer, L.B.
1984-01-01
U/Pb zircon model ages for 11 major units from this region indicate three stages of evolution: 1) plate convergence, 2) plate collision and 3) post-orogenic intracratonic activity. Convergence occurred between the western Afif and eastern Ar Rayn plates, which were separated by oceanic crust. Remnants of this crust now comprise the ophiolitic complexes of the Urd group; the oldest plutonic unit studied is from one such complex and gave an age of 694-698 m.y., while detrital zircons from an intercalated sedimentary formation were derived from source rocks with a mean age of 710 m.y. Plate convergence was terminated by collision of the two plates during the Al Amar orogeny, which began at ~670 m.y.; during collision, the Urd group rocks were deformed and in part obducted on to one or other of the plates. Synorogenic granitic rocks were intruded from 670 to 640 m.y., followed from 640 to 630 m.y. by unfoliated dioritic plutons emplaced in the Ar Rayn block. -R.A.H.
Worldwide refining and gas processing directory
1999-11-01
Statistics are presented on the following: US refining; Canada refining; Europe refining; Africa refining; Asia refining; Latin American refining; Middle East refining; catalyst manufacturers; consulting firms; engineering and construction; US gas processing; international gas processing; plant maintenance providers; process control and simulation systems; and trade associations.
Minimally refined biomass fuel
Pearson, Richard K.; Hirschfeld, Tomas B.
1984-01-01
A minimally refined fluid composition, suitable as a fuel mixture and derived from biomass material, is comprised of one or more water-soluble carbohydrates such as sucrose, one or more alcohols having less than four carbons, and water. The carbohydrate provides the fuel source; water solubilizes the carbohydrates; and the alcohol aids in the combustion of the carbohydrate and reduces the viscosity of the carbohydrate/water solution. Because less energy is required to obtain the carbohydrate from the raw biomass than to obtain alcohol, an overall energy savings is realized compared to fuels employing alcohol as the primary fuel.
Parallel object-oriented adaptive mesh refinement
Balsara, D.; Quinlan, D.J.
1997-04-01
In this paper we study adaptive mesh refinement (AMR) for elliptic and hyperbolic systems. We use the Asynchronous Fast Adaptive Composite Grid Method (AFACX), a parallel algorithm based upon the Fast Adaptive Composite Grid Method (FAC), as a test case of an adaptive elliptic solver. For our hyperbolic system example we use TVD and ENO schemes for solving the Euler and MHD equations. We use the structured grid load balancer MLB as a tool for obtaining a load-balanced distribution in a parallel environment. Parallel adaptive mesh refinement poses difficulties in expressing the basic single grid solver, whether elliptic or hyperbolic, in a fashion that parallelizes seamlessly. It also requires that these basic solvers work together within the adaptive mesh refinement algorithm, which uses the single grid solvers as one part of its adaptive solution process. We show that use of AMR++, an object-oriented library within the OVERTURE Framework, simplifies the development of AMR applications. Parallel support is provided and abstracted through the use of the P++ parallel array class.
Using Induction to Refine Information Retrieval Strategies
NASA Technical Reports Server (NTRS)
Baudin, Catherine; Pell, Barney; Kedar, Smadar
1994-01-01
Conceptual information retrieval systems use structured document indices, domain knowledge and a set of heuristic retrieval strategies to match user queries with a set of indices describing the document's content. Such retrieval strategies increase the set of relevant documents retrieved (increase recall), but at the expense of returning additional irrelevant documents (decrease precision). Usually in conceptual information retrieval systems this tradeoff is managed by hand and with difficulty. This paper discusses ways of managing this tradeoff by the application of standard induction algorithms to refine the retrieval strategies in an engineering design domain. We gathered examples of query/retrieval pairs during the system's operation using feedback from a user on the retrieved information. We then fed these examples to the induction algorithm and generated decision trees that refine the existing set of retrieval strategies. We found that (1) induction improved the precision on a set of queries generated by another user, without a significant loss in recall, and (2) in an interactive mode, the decision trees pointed out flaws in the retrieval and indexing knowledge and suggested ways to refine the retrieval strategies.
Adaptive mesh refinement for stochastic reaction-diffusion processes
Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros
2011-01-01
We present an algorithm for adaptive mesh refinement applied to mesoscopic stochastic simulations of spatially evolving reaction-diffusion processes. The transition rates for the diffusion process are derived on adaptive, locally refined structured meshes. Convergence of the diffusion process is presented and the fluctuations of the stochastic process are verified. Furthermore, a refinement criterion is proposed for the evolution of the adaptive mesh. The method is validated in simulations of reaction-diffusion processes as described by the Fisher-Kolmogorov and Gray-Scott equations.
Refinery Efficiency Improvement
WRI
2002-05-15
Refinery processes that convert heavy oils to lighter distillate fuels require heating for distillation, hydrogen addition or carbon rejection (coking). Efficiency is limited by the formation of insoluble carbon-rich coke deposits. Heat exchangers and other refinery units must be shut down for mechanical coke removal, resulting in a significant loss of output and revenue. When a residuum is heated above the temperature at which pyrolysis occurs (340 C, 650 F), there is typically an induction period before coke formation begins (Magaril and Aksenova 1968, Wiehe 1993). To avoid fouling, refiners often stop heating a residuum before coke formation begins, using arbitrary criteria. In many cases, this heating is stopped sooner than need be, resulting in less than maximum product yield. Western Research Institute (WRI) has developed innovative Coking Index concepts (patent pending) which can be used for process control by refiners to heat residua to the threshold, but not beyond the point, at which coke formation begins when petroleum residua materials are heated at pyrolysis temperatures (Schabron et al. 2001). The development of this universal predictor solves a long-standing problem in petroleum refining. These Coking Indexes have great potential value in improving the efficiency of distillation processes. The Coking Indexes were found to apply to residua in a universal manner, and the theoretical basis for the indexes has been established (Schabron et al. 2001a, 2001b, 2001c). For the first time, a few simple measurements indicate how close undesired coke formation is on the coke formation induction timeline. The Coking Indexes can lead to new process controls that can improve refinery distillation efficiency by several percentage points. Petroleum residua consist of an ordered continuum of solvated polar materials, usually referred to as asphaltenes, dispersed in a lower polarity solvent phase held together by intermediate polarity materials usually referred to as
Capelli, Silvia C.; Bürgi, Hans-Beat; Dittrich, Birger; Grabowsky, Simon; Jayatilaka, Dylan
2014-01-01
Hirshfeld atom refinement (HAR) is a method which determines structural parameters from single-crystal X-ray diffraction data by using an aspherical atom partitioning of tailor-made ab initio quantum mechanical molecular electron densities without any further approximation. Here the original HAR method is extended by implementing an iterative procedure of successive cycles of electron density calculations, Hirshfeld atom scattering factor calculations and structural least-squares refinements, repeated until convergence. The importance of this iterative procedure is illustrated via the example of crystalline ammonia. The new HAR method is then applied to X-ray diffraction data of the dipeptide Gly–l-Ala measured at 12, 50, 100, 150, 220 and 295 K, using Hartree–Fock and BLYP density functional theory electron densities and three different basis sets. All positions and anisotropic displacement parameters (ADPs) are freely refined without constraints or restraints – even those for hydrogen atoms. The results are systematically compared with those from neutron diffraction experiments at the temperatures 12, 50, 150 and 295 K. Although non-hydrogen-atom ADPs differ by up to three combined standard uncertainties (csu’s), all other structural parameters agree within less than 2 csu’s. Using our best calculations (BLYP/cc-pVTZ, recommended for organic molecules), the accuracy of determining bond lengths involving hydrogen atoms from HAR is better than 0.009 Å for temperatures of 150 K or below; for hydrogen-atom ADPs it is better than 0.006 Å2 as judged from the mean absolute X-ray minus neutron differences. These results are among the best ever obtained. Remarkably, the precision of determining bond lengths and ADPs for the hydrogen atoms from the HAR procedure is comparable with that from the neutron measurements – an outcome which is obtained with a routinely achievable resolution of the X-ray data of 0.65 Å. PMID:25295177
Spherical Harmonic Decomposition of Gravitational Waves Across Mesh Refinement Boundaries
NASA Technical Reports Server (NTRS)
Fiske, David R.; Baker, John; vanMeter, James R.; Centrella, Joan M.
2005-01-01
We evolve a linearized (Teukolsky) solution of the Einstein equations with a non-linear Einstein solver. Using this testbed, we are able to show that such gravitational waves, defined by the Weyl scalars in the Newman-Penrose formalism, propagate faithfully across mesh refinement boundaries, and use, for the first time to our knowledge, a novel algorithm due to Misner to compute spherical harmonic components of our waveforms. We show that the algorithm performs extremely well, even when the extraction sphere intersects refinement boundaries.
NASA Astrophysics Data System (ADS)
Kolev, Mikhail K.
2016-07-01
Over the last decades the collaboration between scientists from biology, medicine and pharmacology on one side and scholars from mathematics, physics, mechanics and computer science on the other has led to better understanding of the properties of living systems, the mechanisms of their functioning and interactions with the environment and to the development of new therapies for various disorders and diseases. The target paper [1] by Ben Amar and Bianca presents a detailed description of the research methods and techniques used by biomathematicians, bioinformaticians, biomechanicians and biophysicists for studying biological systems, and in particular in the context of pathological fibrosis.
Replacement, reduction and refinement.
Flecknell, Paul
2002-01-01
In 1959, William Russell and Rex Burch published "The Principles of Humane Experimental Technique". They proposed that if animals were to be used in experiments, every effort should be made to Replace them with non-sentient alternatives, to Reduce to a minimum the number of animals used, and to Refine experiments which used animals so that they caused the minimum pain and distress. These guiding principles, the "3 Rs" of animal research, were initially given little attention. Gradually, however, they have become established as essential considerations when animals are used in research. They have influenced new legislation aimed at controlling the use of experimental animals, and in the United Kingdom they have been formally incorporated into the Animals (Scientific Procedures) Act. The three principles of Replacement, Reduction and Refinement have also proven to be an area of common ground for research workers who use animals and those who oppose their use. Scientists who accept the need to use animals in some experiments would also agree that it would be preferable not to use animals. If animals are to be used, as few as possible should be used and they should experience a minimum of pain or distress. Many of those who oppose animal experimentation would also agree that, until animal experimentation is stopped, Russell and Burch's 3Rs provide a means to improve animal welfare. It has also been recognised that adoption of the 3Rs can improve the quality of science. Appropriately designed experiments that minimise variation, provide standardised optimum conditions of animal care and minimise unnecessary stress or pain often yield better, more reliable data. Despite the progress made as a result of attention to these principles, several major problems have been identified. When replacing animals with alternative methods, it has often proven difficult to formally validate the alternative. This has proven a particular problem in regulatory toxicology.
Thailand: refining cultural values.
Ratanakul, P
1990-01-01
In the second of a set of three articles concerned with "bioethics on the Pacific Rim," Ratanakul, director of a research center for Southeast Asian cultures in Thailand, provides an overview of bioethical issues in his country. He focuses on four issues: health care allocation, AIDS, determination of death, and euthanasia. The introduction of Western medicine into Thailand has brought with it a multitude of ethical problems created in part by tension between Western and Buddhist values. For this reason, Ratanakul concludes that "bioethical enquiry in Thailand must not only examine ethical dilemmas that arise in the actual practice of medicine and research in the life sciences, but must also deal with the refinement and clarification of applicable Thai cultural and moral values." PMID:2318624
Towards automated crystallographic structure refinement with phenix.refine.
Afonine, Pavel V; Grosse-Kunstleve, Ralf W; Echols, Nathaniel; Headd, Jeffrey J; Moriarty, Nigel W; Mustyakimov, Marat; Terwilliger, Thomas C; Urzhumtsev, Alexandre; Zwart, Peter H; Adams, Paul D
2012-04-01
phenix.refine is a program within the PHENIX package that supports crystallographic structure refinement against experimental data with a wide range of upper resolution limits using a large repertoire of model parameterizations. It has several automation features and is also highly flexible. Several hundred parameters enable extensive customizations for complex use cases. Multiple user-defined refinement strategies can be applied to specific parts of the model in a single refinement run. An intuitive graphical user interface is available to guide novice users and to assist advanced users in managing refinement projects. X-ray or neutron diffraction data can be used separately or jointly in refinement. phenix.refine is tightly integrated into the PHENIX suite, where it serves as a critical component in automated model building, final structure refinement, structure validation and deposition to the wwPDB. This paper presents an overview of the major phenix.refine features, with extensive literature references for readers interested in more detailed discussions of the methods. PMID:22505256
Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Modiano, David; Colella, Phillip
1995-01-01
To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for a transonic flow over a NACA0012 airfoil (AGARD-03 test case) and a reflection of a shock over a double wedge.
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Ellis, J. S.; Sullivan, T. J.; Baskett, R. L.
1998-06-01
The Atmospheric Release Advisory Capability (ARAC), located at the Lawrence Livermore National Laboratory, has been involved since the late 1970s in assessing consequences from nuclear and other hazardous material releases into the atmosphere. ARAC's primary role has been emergency response. However, after the emergency phase, there is still a significant role for dispersion modeling. This work usually involves refining the source term and, hence, the dose to the populations affected, as additional information becomes available in the form of source term estimates (release rates, mix of material, and release geometry) and any measurements from passage of the plume and deposition on the ground. Many of the ARAC responses have been documented elsewhere [1]. Some of the more notable radiological releases in which ARAC has participated in the post-emergency phase have been the 1979 Three Mile Island nuclear power plant (NPP) accident outside Harrisburg, PA, the 1986 Chernobyl NPP accident in the Ukraine, and the 1996 Japan Tokai nuclear processing plant explosion. ARAC has also done post-emergency phase analyses for the 1978 Russian satellite COSMOS 954 reentry and subsequent partial burn-up of its onboard nuclear reactor, which deposited radioactive materials on the ground in Canada; the 1986 uranium hexafluoride spill in Gore, OK; the 1993 Russian Tomsk-7 nuclear waste tank explosion; and lesser releases of mostly tritium. In addition, ARAC has performed a key role in the contingency planning for possible accidental releases during the launch of spacecraft with radioisotope thermoelectric generators (RTGs) on board (i.e., Galileo, Ulysses, Mars Pathfinder, and Cassini), and routinely exercises with the Federal Radiological Monitoring and Assessment Center (FRMAC) in preparation for offsite consequences of radiological releases from NPPs and nuclear weapon accidents or incidents. Several accident post-emergency phase assessments are discussed in this paper in order to illustrate
Refinement of protein dynamic structure: normal mode refinement.
Kidera, A; Go, N
1990-01-01
An x-ray crystallographic refinement method, referred to as the normal mode refinement, is proposed. The Debye-Waller factor is expanded in terms of the effective normal modes whose amplitudes and eigenvectors are experimentally determined by the crystallographic refinement. In contrast to the conventional method, the atomic motions are treated generally as anisotropic and concerted. This method is assessed by using the simulated x-ray data given by a Monte Carlo simulation of human lysozyme. In this article, we refine the dynamic structure by fixing the average static structure to exact coordinates. It is found that the normal mode refinement, using a smaller number of variables, gives a better R factor and more information on the dynamics (anisotropy and collectivity in the motion). PMID:2339115
Dynamic grid refinement for partial differential equations on parallel computers
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.
Gradualness facilitates knowledge refinement.
Rada, R
1985-05-01
To facilitate knowledge refinement, a system should be designed so that small changes in the knowledge correspond to small changes in the function or performance of the system. Two sets of experiments show the value of small, heuristically guided changes in a weighted rule base. In the first set, the ordering among numbers (reflecting certainties) makes their manipulation more straightforward than the manipulation of relationships. A simple credit assignment and weight adjustment strategy for improving numbers in a weighted, rule-based expert system is presented. In the second set, the rearrangement of predicates benefits from additional knowledge about the "ordering" among predicates. A third set of experiments indicates the importance of the proper level of granularity when augmenting a knowledge base. Augmentation of one knowledge base by analogical reasoning from another knowledge base did not work with only binary relationships, but did succeed with ternary relationships. To obtain a small improvement in the knowledge base, a substantial amount of structure had to be treated as a unit. PMID:21869290
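A toy version of the kind of credit-assignment and weight-adjustment loop this abstract describes: every rule that fired on a case shares the blame or credit, and certainties move in small steps. The update rule and names here are illustrative assumptions, not Rada's actual scheme:

```python
def adjust_weights(weights, fired_rules, outcome_correct, step=0.1):
    """Credit assignment: nudge the certainty of each rule that fired,
    up on a correct conclusion and down on a wrong one, clamping the
    weights to the interval [0, 1]."""
    delta = step if outcome_correct else -step
    for rule in fired_rules:
        weights[rule] = min(1.0, max(0.0, weights[rule] + delta))
    return weights

weights = {"r1": 0.5, "r2": 0.8, "r3": 0.2}
adjust_weights(weights, ["r1", "r2"], outcome_correct=False)  # blame r1 and r2
adjust_weights(weights, ["r3"], outcome_correct=True)         # credit r3
```

The gradualness argument of the abstract corresponds to keeping `step` small: each adjustment changes system behavior only slightly, which makes the refinement process easy to steer.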
High resolution single particle refinement in EMAN2.1.
Bell, James M; Chen, Muyuan; Baldwin, Philip R; Ludtke, Steven J
2016-05-01
EMAN2.1 is a complete image processing suite for quantitative analysis of grayscale images, with a primary focus on transmission electron microscopy, with complete workflows for performing high resolution single particle reconstruction, 2-D and 3-D heterogeneity analysis, random conical tilt reconstruction and subtomogram averaging, among other tasks. In this manuscript we provide the first detailed description of the high resolution single particle analysis pipeline and the philosophy behind its approach to the reconstruction problem. High resolution refinement is a fully automated process, and involves an advanced set of heuristics to select optimal algorithms for each specific refinement task. A gold standard FSC is produced automatically as part of refinement, providing a robust resolution estimate for the final map, and this is used to optimally filter the final CTF phase and amplitude corrected structure. Additional methods are in-place to reduce model bias during refinement, and to permit cross-validation using other computational methods. PMID:26931650
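The gold standard FSC mentioned above compares two independently refined half-maps shell by shell in Fourier space. A minimal Fourier shell correlation sketch follows; the integer-radius shell binning and the function name are simplifying assumptions of this illustration, not EMAN2.1 code:

```python
import numpy as np

def fsc(map1, map2, n_shells=None):
    """Fourier shell correlation between two equally sized 3-D maps:
    normalized cross-correlation of their Fourier coefficients, computed
    per spherical shell of spatial frequency."""
    F1, F2 = np.fft.fftn(map1), np.fft.fftn(map2)
    # Integer radial frequency index for each voxel
    freqs = [np.fft.fftfreq(n) * n for n in map1.shape]
    grid = np.meshgrid(*freqs, indexing="ij")
    r = np.sqrt(sum(g**2 for g in grid)).astype(int)
    n_shells = n_shells or (min(map1.shape) // 2)
    curve = np.empty(n_shells)
    for s in range(n_shells):
        mask = (r == s)
        num = np.sum(F1[mask] * np.conj(F2[mask]))
        den = np.sqrt(np.sum(np.abs(F1[mask])**2) * np.sum(np.abs(F2[mask])**2))
        curve[s] = (num / den).real if den > 0 else 0.0
    return curve

m = np.random.rand(16, 16, 16)
print(np.allclose(fsc(m, m), 1.0))  # True: identical maps correlate perfectly in every shell
```

In a gold standard protocol the two inputs are half-maps refined from disjoint particle subsets, and the frequency at which the curve drops below a threshold (commonly 0.143) gives the resolution estimate used to filter the final structure.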
Crystal structure refinement with SHELXL
Sheldrick, George M.
2015-01-01
New features added to the refinement program SHELXL since 2008 are described and explained. The improvements in the crystal structure refinement program SHELXL have been closely coupled with the development and increasing importance of the CIF (Crystallographic Information Framework) format for validating and archiving crystal structures. An important simplification is that now only one file in CIF format (for convenience, referred to simply as ‘a CIF’) containing embedded reflection data and SHELXL instructions is needed for a complete structure archive; the program SHREDCIF can be used to extract the .hkl and .ins files required for further refinement with SHELXL. Recent developments in SHELXL facilitate refinement against neutron diffraction data, the treatment of H atoms, the determination of absolute structure, the input of partial structure factors and the refinement of twinned and disordered structures. SHELXL is available free to academics for the Windows, Linux and Mac OS X operating systems, and is particularly suitable for multiple-core processors.
Madani, Safoura; Coors, Anja; Haddioui, Abdelmajid; Ksibi, Mohamed; Pereira, Ruth; Paulo Sousa, José; Römbke, Jörg
2015-09-01
Mining activity is an important economic activity in several North Atlantic Treaty Organization (NATO) and North African countries. Within their territory, derelict or active mining explorations represent risks to surrounding ecosystems, but engineering-based remediation processes are usually too expensive to be an option for the reclamation of these areas. A project funded by NATO was performed with the aim of finding a more eco-friendly solution for reclamation of these areas. As part of an overall risk assessment, the risk of contaminated soils to selected soil organisms was evaluated. The main question addressed was: do the metal-contaminated soils from a former iron mine located at Ait Amar (Morocco), which was abandoned in the mid-sixties, affect the reproduction of enchytraeids (Enchytraeus bigeminus) and predatory mites (Hypoaspis aculeifer)? Soil samples were taken at 20 plots along four transects covering the mine area and at a reference site about 15 km away from the mine. The soils were characterized pedologically and chemically, which showed a heterogeneous pattern of metal contamination (mainly cadmium, copper, and chromium, sometimes at concentrations higher than European soil trigger values). The reproduction of enchytraeids (Enchytraeus bigeminus) and predatory mites (Hypoaspis aculeifer) was studied using standard laboratory tests according to OECD guidelines 220 (2004) and 226 (2008). The number of juveniles of E. bigeminus was reduced at several plots with high concentrations of Cd or Cu (the latter in combination with low pH values). There was nearly no effect of the metal-contaminated soils on the reproduction of H. aculeifer. The overall lack of toxicity at the majority of the studied plots is probably caused by the low availability of the metals in these soils unless soil pH was very low. Different exposure pathways are likely responsible for the different reactions of mites and enchytraeids (hard-bodied versus soft-bodied organisms). The
Deformable complex network for refining low-resolution X-ray structures
Zhang, Chong; Wang, Qinghua; Ma, Jianpeng
2015-10-27
A new refinement algorithm called the deformable complex network that combines a novel angular network-based restraint with a deformable elastic network model in the target function has been developed to aid in structural refinement in macromolecular X-ray crystallography. In macromolecular X-ray crystallography, building more accurate atomic models based on lower resolution experimental diffraction data remains a great challenge. Previous studies have used a deformable elastic network (DEN) model to aid in low-resolution structural refinement. In this study, the development of a new refinement algorithm called the deformable complex network (DCN) is reported that combines a novel angular network-based restraint with the DEN model in the target function. Testing of DCN on a wide range of low-resolution structures demonstrated that it consistently leads to significantly improved structural models as judged by multiple refinement criteria, thus representing a new effective refinement tool for low-resolution structural determination.
Toward a consistent framework for high order mesh refinement schemes in numerical relativity
NASA Astrophysics Data System (ADS)
Mongwane, Bishop
2015-05-01
It has now become customary in the field of numerical relativity to couple high order finite difference schemes to mesh refinement algorithms. To this end, different modifications to the standard Berger-Oliger adaptive mesh refinement algorithm have been proposed. In this work we present a fourth-order stable mesh refinement scheme with sub-cycling in time for numerical relativity. We do not use buffer zones to deal with refinement boundaries but explicitly specify boundary data for refined grids. We argue that the incompatibility of the standard mesh refinement algorithm with higher-order Runge-Kutta methods is a manifestation of order reduction phenomena, caused by inconsistent application of boundary data in the refined grids. Our scheme also addresses the problem of spurious reflections that are generated when propagating waves cross mesh refinement boundaries. We introduce a transition zone on refined levels within which the phase velocity of propagating modes is allowed to decelerate in order to smoothly match the phase velocity of coarser grids. We apply the method to test problems involving propagating waves and show a significant reduction in spurious reflections.
Error bounds from extra precise iterative refinement
Demmel, James; Hida, Yozo; Kahan, William; Li, Xiaoye S.; Mukherjee, Soni; Riedy, E. Jason
2005-02-07
We present the design and testing of an algorithm for iterative refinement of the solution of linear equations, where the residual is computed with extra precision. This algorithm was originally proposed in the 1960s [6, 22] as a means to compute very accurate solutions to all but the most ill-conditioned linear systems of equations. However, two obstacles have until now prevented its adoption in standard subroutine libraries like LAPACK: (1) there was no standard way to access the higher precision arithmetic needed to compute residuals, and (2) it was unclear how to compute a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard [5] has recently removed the first obstacle. To overcome the second obstacle, we show how a single application of iterative refinement can be used to compute an error bound in any norm at small cost, and use this to compute both an error bound in the usual infinity norm, and a componentwise relative error bound. We report extensive test results on over 6.2 million matrices of dimension 5, 10, 100, and 1000. As long as a normwise (resp. componentwise) condition number computed by the algorithm is less than 1/(max{10, √n}·ε_w), the computed normwise (resp. componentwise) error bound is at most 2·max{10, √n}·ε_w, and indeed bounds the true error. Here, n is the matrix dimension and ε_w is the working (single) precision roundoff error. For worse conditioned problems, we get similarly small correct error bounds in over 89.4% of cases.
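The refinement loop the abstract describes is classical; below is a minimal sketch, using exact rational arithmetic from Python's `fractions` module as a stand-in for the extra-precision residual (the paper itself relies on extended-precision BLAS, and `lu_solve` here is a naive Gaussian elimination, not LAPACK):

```python
from fractions import Fraction

def lu_solve(A, b):
    # naive Gaussian elimination with partial pivoting, all in working precision
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def refine(A, b, iters=3):
    # iterative refinement: solve once, then repeatedly correct with a
    # residual r = b - A x computed in higher precision (exact rationals
    # here stand in for the extended-precision BLAS of the paper)
    x = lu_solve(A, b)
    for _ in range(iters):
        r = [float(Fraction(bi) - sum(Fraction(aij) * Fraction(xj)
                                      for aij, xj in zip(row, x)))
             for row, bi in zip(A, b)]
        d = lu_solve(A, r)                 # correction from the same factorization path
        x = [xi + di for xi, di in zip(x, d)]
    return x
```

Each pass solves for a correction against the high-precision residual, so the computed solution converges toward the exact one as long as the working-precision solve is stable enough to reduce the error.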
Crystal structure refinement with SHELXL
Sheldrick, George M.
2015-01-01
The improvements in the crystal structure refinement program SHELXL have been closely coupled with the development and increasing importance of the CIF (Crystallographic Information Framework) format for validating and archiving crystal structures. An important simplification is that now only one file in CIF format (for convenience, referred to simply as ‘a CIF’) containing embedded reflection data and SHELXL instructions is needed for a complete structure archive; the program SHREDCIF can be used to extract the .hkl and .ins files required for further refinement with SHELXL. Recent developments in SHELXL facilitate refinement against neutron diffraction data, the treatment of H atoms, the determination of absolute structure, the input of partial structure factors and the refinement of twinned and disordered structures. SHELXL is available free to academics for the Windows, Linux and Mac OS X operating systems, and is particularly suitable for multiple-core processors. PMID:25567568
Adaptive Mesh Refinement in CTH
Crawford, David
1999-05-04
This paper reports progress on implementing a new capability of adaptive mesh refinement in the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based, with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor and massively parallel platforms. An approximate factor of three in memory and performance improvements over comparable-resolution non-adaptive calculations has been demonstrated for a number of problems.
Refining the shifted topological vertex
Drissi, L. B.; Jehjouh, H.; Saidi, E. H.
2009-01-15
We study aspects of the refining and shifting properties of the 3d MacMahon function C_3(q) used in topological string theory and BKP hierarchy. We derive the explicit expressions of the shifted topological vertex S_{λμν}(q) and its refined version T_{λμν}(q, t). These vertices complete results in the literature.
Ideal Downward Refinement in the EL Description Logic
NASA Astrophysics Data System (ADS)
Lehmann, Jens; Haase, Christoph
With the proliferation of the Semantic Web, there has been a rapidly rising interest in description logics, which form the logical foundation of the W3C standard ontology language OWL. While the number of OWL knowledge bases grows, there is an increasing demand for tools assisting knowledge engineers in building up and maintaining their structure. For this purpose, concept learning algorithms based on refinement operators have been investigated. In this paper, we provide an ideal refinement operator for the description logic EL and show that it is computationally feasible on large knowledge bases.
NASA Astrophysics Data System (ADS)
Wu, Min
2016-07-01
The development of anti-fibrotic therapies for a variety of diseases, such as pulmonary, renal and liver fibrosis [1,2] as well as malignant tumor growth [3], has become increasingly urgent. As reviewed by Ben Amar and Bianca [4], various theoretical, experimental and in-silico models have been developed to understand the fibrosis process, and their implications for therapeutic strategies have also been frequently demonstrated (e.g., [5-7]). In [4], these models are analyzed and sorted according to their approaches, and at the end of [4] a unified multi-scale approach is proposed to understand fibrosis. Since one of the major purposes of extensive modeling of fibrosis is to shed light on therapeutic strategies, theoretical, experimental and in-silico studies of anti-fibrosis therapies should be conducted more intensively.
NASA Astrophysics Data System (ADS)
Kachapova, Farida
2016-07-01
Mathematical and computational models in biology and medicine help to improve diagnostics and medical treatments. Modeling of pathological fibrosis is reviewed by M. Ben Amar and C. Bianca in [4]. Pathological fibrosis is the process in which excessive fibrous tissue is deposited in an organ or tissue during wound healing and can obliterate its normal function. In [4] the phenomena of fibrosis are briefly explained, including the causes, mechanisms and management; research models of pathological fibrosis are described, compared and critically analyzed. Different models are suitable at different levels: molecular, cellular and tissue. The main goal of mathematical modeling of fibrosis is to predict the long-term behavior of the system depending on bifurcation parameters; there are two main trends: inhibition of fibrosis due to an active immune system and growth of fibrosis because of a weak immune system.
NASA Astrophysics Data System (ADS)
Guerrini, Luca
2016-07-01
Martine Ben Amar and Carlo Bianca have written a valuable paper [1], a timely review of the different theoretical tools existing in the literature for the modeling of physiological and pathological fibrosis. The review [1] is written with clarity and in a simple way, which makes it understandable to a wide audience. The authors present an exhaustive exposition of the interplay among the different scholars working on the modeling of fibrosis diseases and a survey of the main theoretical approaches, among others ODE-based models, PDE-based models, models with internal structure, continuum-mechanics approaches, and agent-based models. A critical analysis discusses their applicability, including advantages and disadvantages.
NASA Astrophysics Data System (ADS)
Pappalardo, Francesco; Pennisi, Marzio
2016-07-01
Fibrosis represents a process in which excessive tissue formation in an organ follows the failure of a physiological reparative or reactive process. Mathematical and computational techniques may be used to improve the understanding of the mechanisms that lead to the disease and to test potential new treatments that may directly or indirectly have positive effects against fibrosis [1]. In this scenario, Ben Amar and Bianca [2] give us a broad picture of the existing mathematical and computational tools that have been used to model fibrotic processes at the molecular, cellular, and tissue levels. Among such techniques, agent-based models (ABMs) can make a valuable contribution to the understanding and better management of fibrotic diseases.
Tactical Synthesis Of Efficient Global Search Algorithms
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2009-01-01
Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus, that is, suggesting possible ways to attack the problem.
Model Refinement Using Eigensystem Assignment
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.
2000-01-01
A novel approach for the refinement of finite-element-based analytical models of flexible structures is presented. The proposed approach models the possible refinements in the mass, damping, and stiffness matrices of the finite element model in the form of a constant gain feedback with acceleration, velocity, and displacement measurements, respectively. Once the free elements of the structural matrices have been defined, the problem of model refinement reduces to obtaining position, velocity, and acceleration gain matrices with appropriate sparsity that reassign a desired subset of the eigenvalues of the model, along with partial mode shapes, from their baseline values to those obtained from system identification test data. A sequential procedure is used to assign one conjugate pair of eigenvalues at each step using symmetric output feedback gain matrices, and the eigenvectors are partially assigned, while ensuring that the eigenvalues assigned in the previous steps are not disturbed. The procedure can also impose that gain matrices be dissipative to guarantee the stability of the refined model. A numerical example, involving finite element model refinement for a structural testbed at NASA Langley Research Center (Controls-Structures-Interaction Evolutionary model) is presented to demonstrate the feasibility of the proposed approach.
Zone refining of plutonium metal
Blau, M.S.
1994-08-01
The zone refining process was applied to Pu metal containing known amounts of impurities. Rod specimens of plutonium metal were melted into and contained in tantalum boats, each of which was passed horizontally through a three-turn, high-frequency coil in such a manner as to cause a narrow molten zone to pass through the Pu metal rod 10 times. The impurity elements Co, Cr, Fe, Ni, Np, U were found to move in the same direction as the molten zone as predicted by binary phase diagrams. The elements Al, Am, and Ga moved in the opposite direction of the molten zone as predicted by binary phase diagrams. As the impurity alloy was zone refined, δ-phase plutonium metal crystals were produced. The first few zone refining passes were more effective than each later pass because an oxide layer formed on the rod surface. There was no clear evidence of better impurity movement at the slower zone refining speed. Also, constant or variable coil power appeared to have no effect on impurity movement during a single run (10 passes). This experiment was the first step to developing a zone refining process for plutonium metal.
Evolutionary Optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: genetic algorithms and differential evolution to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
Evolutionary optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, Patrick V.; Tinker, Michael L.; Dozier, Gerry
2005-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection and stress on a benchmark cantilever truss problem. Predominantly traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This paper will present a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: Genetic Algorithms and Differential Evolution to successfully optimize a benchmark structural optimization problem. A non-traditional solution to the benchmark problem is presented in this paper, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
A parallel algorithm for the non-symmetric eigenvalue problem
Dongarra, J.; Sidani, M. (Dept. of Computer Science; Oak Ridge National Lab., TN)
1991-12-01
This paper describes a parallel algorithm for computing the eigenvalues and eigenvectors of a non-symmetric matrix. The algorithm is based on a divide-and-conquer procedure and uses an iterative refinement technique.
Bauxite Mining and Alumina Refining
Frisch, Neale; Olney, David
2014-01-01
Objective: To describe bauxite mining and alumina refining processes and to outline the relevant physical, chemical, biological, ergonomic, and psychosocial health risks. Methods: Review article. Results: The most important risks relate to noise, ergonomics, trauma, and caustic soda splashes to the skin/eyes. Other risks of note relate to fatigue, heat, and solar ultraviolet, and for some operations tropical diseases, venomous/dangerous animals, and remote locations. Exposures to bauxite dust, alumina dust, and caustic mist in contemporary best-practice bauxite mining and alumina refining operations have not been demonstrated to be associated with clinically significant decrements in lung function. Exposures to bauxite dust and alumina dust at such operations are also not associated with the incidence of cancer. Conclusions: A range of occupational health risks in bauxite mining and alumina refining require the maintenance of effective control measures. PMID:24806720
Elliptic Solvers with Adaptive Mesh Refinement on Complex Geometries
Phillip, B.
2000-07-24
Adaptive Mesh Refinement (AMR) is a numerical technique for locally tailoring the resolution of computational grids. Multilevel algorithms for solving elliptic problems on adaptive grids include the Fast Adaptive Composite grid method (FAC) and its parallel variants (AFAC and AFACx). Theory confirming that the convergence rates of FAC and AFAC are independent of the number of refinement levels exists under certain ellipticity and approximation property conditions. Similar theory needs to be developed for AFACx. The effectiveness of multigrid-based elliptic solvers such as FAC, AFAC, and AFACx on adaptively refined overlapping grids is not clearly understood. Finally, a non-trivial eye model problem will be solved by combining the power of using overlapping grids for complex moving geometries, AMR, and multilevel elliptic solvers.
Multigrid for locally refined meshes
Shapira, Y.
1999-12-01
A multilevel method for the solution of finite element schemes on locally refined meshes is introduced. For isotropic diffusion problems, the condition number of the two-level method is bounded independently of the mesh size and the discontinuities in the diffusion coefficient. The curves of discontinuity need not be aligned with the coarse mesh. Indeed, numerical applications with 10 levels of local refinement yield a rapid convergence of the corresponding 10-level, multigrid V-cycle and other multigrid cycles which are more suitable for parallelism even when the discontinuities are invisible on most of the coarse meshes.
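As a toy illustration of the multilevel idea (not the locally refined finite element setting of this paper), a two-grid cycle for the 1D Poisson equation with weighted-Jacobi smoothing and a direct tridiagonal coarse solve might look like:

```python
def residual(u, f, h):
    # r = f - A u for the 1D Poisson operator -u'' with zero Dirichlet BCs
    n = len(u)
    r = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r.append(f[i] - (2.0 * u[i] - left - right) / h ** 2)
    return r

def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
    # weighted-Jacobi smoother: damps the high-frequency error components
    n = len(u)
    for _ in range(sweeps):
        new = []
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            new.append((1 - w) * u[i] + w * 0.5 * (left + right + h * h * f[i]))
        u = new
    return u

def restrict(r):
    # full weighting: fine interior points -> coarse interior points
    return [0.25 * r[2 * j] + 0.5 * r[2 * j + 1] + 0.25 * r[2 * j + 2]
            for j in range((len(r) - 1) // 2)]

def prolong(ec, n):
    # linear interpolation of the coarse correction back to the fine grid
    e = [0.0] * n
    for j, v in enumerate(ec):
        e[2 * j + 1] = v
    for i in range(0, n, 2):
        left = e[i - 1] if i > 0 else 0.0
        right = e[i + 1] if i < n - 1 else 0.0
        e[i] = 0.5 * (left + right)
    return e

def coarse_solve(f, H):
    # direct tridiagonal (Thomas) solve of the coarse Poisson system
    n = len(f)
    a, bd, c = -1.0 / H ** 2, 2.0 / H ** 2, -1.0 / H ** 2
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c / bd, f[0] / bd
    for i in range(1, n):
        m = bd - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (f[i] - a * dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

def two_grid_cycle(u, f, h):
    u = jacobi(u, f, h)                      # pre-smooth
    rc = restrict(residual(u, f, h))         # restrict residual
    ec = coarse_solve(rc, 2.0 * h)           # coarse-grid correction
    u = [ui + ei for ui, ei in zip(u, prolong(ec, len(u)))]
    return jacobi(u, f, h)                   # post-smooth
```

With f = 1 the discrete solution coincides with u(x) = x(1-x)/2 at the nodes, so convergence is easy to check; the smoother kills high-frequency error while the coarse solve removes the smooth part, which is why the convergence rate is independent of the mesh size.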
Conformal refinement of unstructured quadrilateral meshes
Garimella, Rao
2009-01-01
We present a multilevel adaptive refinement technique for unstructured quadrilateral meshes in which the mesh is kept conformal at all times. This means that the refined mesh, like the original, is formed of only quadrilateral elements that intersect strictly along edges or at vertices, i.e., vertices of one quadrilateral element do not lie in an edge of another quadrilateral. Elements are refined using templates based on 1:3 refinement of edges. We demonstrate that by careful design of the refinement and coarsening strategy, we can maintain high quality elements in the refined mesh. We demonstrate the method on a number of examples with dynamically changing refinement regions.
Structured adaptive mesh refinement on the connection machine
Berger, M.J. (Courant Inst. of Mathematical Sciences); Saltzman, J.S.
1993-01-01
Adaptive mesh refinement has proven itself to be a useful tool in a large collection of applications. By refining only a small portion of the computational domain, computational savings of up to a factor of 80 in 3-dimensional calculations have been obtained on serial machines. A natural question is, can this algorithm be used on massively parallel machines and still achieve the same efficiencies? We have designed a data layout scheme for mapping grid points to processors that preserves locality and minimizes global communication for the CM-200. The effect of the data layout scheme is that at the finest level nearby grid points from adjacent grids in physical space are in adjacent memory locations. Furthermore, coarse grid points are arranged in memory to be near their associated fine grid points. We show applications of the algorithm to inviscid compressible fluid flow in two space dimensions.
Method for refining contaminated iridium
Heshmatpour, B.; Heestand, R.L.
1982-08-31
Contaminated iridium is refined by alloying it with an alloying agent selected from the group consisting of manganese and an alloy of manganese and copper, and then dissolving the alloying agent from the formed alloy to provide a purified iridium powder.
Method for refining contaminated iridium
Heshmatpour, Bahman; Heestand, Richard L.
1983-01-01
Contaminated iridium is refined by alloying it with an alloying agent selected from the group consisting of manganese and an alloy of manganese and copper, and then dissolving the alloying agent from the formed alloy to provide a purified iridium powder.
ERIC Educational Resources Information Center
Hazelton, Alexander E.; And Others
Through joint planning with a number of school districts and the Region X Title I Technical Assistance Center, and with the help of a Title I Refinement grant, Alaska has developed a system of data storage and retrieval using microcomputers that assists small school districts in the evaluation and reporting of their Title I programs. Although this…
Vacuum Refining of Molten Silicon
NASA Astrophysics Data System (ADS)
Safarian, Jafar; Tangstad, Merete
2012-12-01
Metallurgical fundamentals for vacuum refining of molten silicon and the behavior of different impurities in this process are studied. A novel mass transfer model for the removal of volatile impurities from silicon in vacuum induction refining is developed. The boundary conditions for the vacuum refining system, namely the equilibrium partial pressures of the dissolved elements and their actual partial pressures under vacuum, are determined through thermodynamic and kinetic approaches. It is shown that the removal kinetics differ among the impurities, controlled by one, two, or all three sequential reaction mechanisms: mass transfer in a melt boundary layer, chemical evaporation at the melt surface, and mass transfer in the gas phase. Vacuum refining experimental results from this study and literature data are used to validate the model. The model provides reliable results and agrees with the experimental data for many volatile elements. The kinetics of phosphorus removal, an important impurity in the production of solar-grade silicon, is properly predicted by the model, and phosphorus elimination from silicon increases significantly with increasing process temperature.
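The three sequential mechanisms combine like resistances in series in a standard film-model reduction; the sketch below uses made-up coefficients for illustration and is not the paper's full mass transfer model:

```python
import math

def overall_coefficient(k_melt, k_evap, k_gas):
    # sequential steps combine like resistances in series:
    # 1/k_tot = 1/k_melt + 1/k_evap + 1/k_gas
    # so the slowest step controls the overall removal rate
    return 1.0 / (1.0 / k_melt + 1.0 / k_evap + 1.0 / k_gas)

def remaining_fraction(k_tot, area, volume, t):
    # first-order removal of a volatile solute from a stirred melt:
    # C(t)/C0 = exp(-k_tot * (A/V) * t)
    return math.exp(-k_tot * area / volume * t)
```

If one coefficient is, say, ten times smaller than the others, the overall coefficient is close to it, which mirrors the abstract's point that removal can be controlled by any one of the three mechanisms.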
GRAIN REFINEMENT OF URANIUM BILLETS
Lewis, L.
1964-02-25
A method of refining the grain structure of massive uranium billets without resort to forging is described. The method consists in the steps of beta-quenching the billets, annealing the quenched billets in the upper alpha temperature range, and extrusion upset of the billets to an extent sufficient to increase the cross-sectional area by at least 5 per cent. (AEC)
Multigrid for refined triangle meshes
Shapira, Yair
1997-02-01
A two-level preconditioning method for the solution of (locally) refined finite element schemes using triangle meshes is introduced. In the isotropic SPD case, it is shown that the condition number of the preconditioned stiffness matrix is bounded uniformly for all sufficiently regular triangulations. This is also verified numerically for an isotropic diffusion problem with highly discontinuous coefficients.
Bayesian ensemble refinement by replica simulations and reweighting.
Hummer, Gerhard; Köfinger, Jürgen
2015-12-28
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations. PMID:26723635
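The maximum-entropy reweighting underlying the EROS formulation can be illustrated for a single ensemble-averaged observable: choose weights w_i proportional to exp(-λ·s_i) and solve for the multiplier λ so the weighted average matches the measurement. A toy one-observable sketch (bisection on λ; not the full Bayesian replica machinery):

```python
import math

def maxent_weights(obs, target, iters=200):
    # weights w_i proportional to exp(-lam * obs_i); the weighted average of
    # obs is monotone decreasing in lam (its derivative is minus the weighted
    # variance), so bisection on lam finds the multiplier
    def weighted_avg(lam):
        ws = [math.exp(-lam * (o - obs[0])) for o in obs]  # shift for stability
        z = sum(ws)
        return sum(w * o for w, o in zip(ws, obs)) / z

    lo, hi = -50.0, 50.0        # assumes the target lies strictly inside (min, max)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if weighted_avg(mid) > target:
            lo = mid             # average too high: need a larger multiplier
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    ws = [math.exp(-lam * (o - obs[0])) for o in obs]
    z = sum(ws)
    return [w / z for w in ws]
```

The exponential form is the minimally biased (maximum-entropy) correction to the prior ensemble; with many observables the scalar λ becomes a vector of multipliers and bisection is replaced by a multidimensional optimization.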
Bayesian ensemble refinement by replica simulations and reweighting
NASA Astrophysics Data System (ADS)
Hummer, Gerhard; Köfinger, Jürgen
2015-12-01
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
Refining image segmentation by integration of edge and region data
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline; Tilton, James C.
1992-01-01
An iterative parallel region growing (IPRG) algorithm previously developed by Tilton (1989) produces hierarchical segmentations of images from finer to coarser resolution. An ideal segmentation does not always correspond to one single iteration but to several different ones, each one producing the 'best' result for a separate part of the image. With the goal of finding this ideal segmentation, the results of the IPRG algorithm are refined by utilizing some additional information, such as edge features, and by interpreting the tree of hierarchical regions.
40 CFR 80.235 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2010 CFR
2010-07-01
... a small refiner? 80.235 Section 80.235 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Provisions § 80.235 How does a refiner obtain approval as a small refiner? (a) Applications for small refiner....225(d), which must be submitted by June 1, 2002. (b) Applications for small refiner status must...
40 CFR 80.235 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2011 CFR
2011-07-01
... a small refiner? 80.235 Section 80.235 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Provisions § 80.235 How does a refiner obtain approval as a small refiner? (a) Applications for small refiner....225(d), which must be submitted by June 1, 2002. (b) Applications for small refiner status must...
Time Critical Isosurface Refinement and Smoothing
Pascucci, V.; Bajaj, C.L.
2000-07-10
Multi-resolution data structures and algorithms are key in visualization for achieving real-time interaction with large data sets. Research has primarily focused on the off-line construction of such representations, mostly using decimation schemes. Drawbacks of this class of approaches include: (i) the inability to maintain interactivity when the displayed surface changes frequently, and (ii) the inability to control the global geometry of the embedding (no self-intersections) of any approximated level of detail of the output surface. In this paper we introduce a technique for on-line construction and smoothing of progressive isosurfaces. Our hybrid approach combines the flexibility of a progressive multi-resolution representation with the advantages of a recursive subdivision scheme. Our main contributions are: (i) a progressive algorithm that builds a multi-resolution surface by successive refinements, so that a coarse representation of the output is generated as soon as a coarse representation of the input is provided; (ii) application of the same scheme to smooth the surface by means of a 3D recursive subdivision rule; and (iii) a multi-resolution representation where any adaptively selected level-of-detail surface is guaranteed to be free of self-intersections.
CONSTRAINED-TRANSPORT MAGNETOHYDRODYNAMICS WITH ADAPTIVE MESH REFINEMENT IN CHARM
Miniati, Francesco; Martin, Daniel F. E-mail: DFMartin@lbl.gov
2011-07-01
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.
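The divergence-preserving property of the constrained-transport update described above can be checked in a few lines. A minimal 2D sketch, assuming a toy grid, a vector-potential initialization, and a random edge EMF standing in for the Riemann-solver output (none of this is CHARM's actual implementation): updating face-centered B from edge-centered EMFs via the discrete Stokes theorem leaves the discrete div B at machine zero.

```python
# Minimal 2D constrained-transport (CT) sketch. Grid sizes, the
# vector-potential initialization, and the random EMF are illustrative
# assumptions, not CHARM's actual scheme.
import random

random.seed(0)
nx, ny, dx, dy, dt = 4, 4, 1.0, 1.0, 0.1

# Vector potential Az at cell corners -> exactly divergence-free initial B.
Az = [[random.random() for _ in range(ny + 1)] for _ in range(nx + 1)]
bx = [[(Az[i][j + 1] - Az[i][j]) / dy for j in range(ny)] for i in range(nx + 1)]
by = [[-(Az[i + 1][j] - Az[i][j]) / dx for j in range(ny + 1)] for i in range(nx)]

def divergence():
    """Discrete div B per cell from the face-centered fields."""
    return [[(bx[i + 1][j] - bx[i][j]) / dx + (by[i][j + 1] - by[i][j]) / dy
             for j in range(ny)] for i in range(nx)]

# One CT update: an arbitrary edge-centered EMF Ez (standing in for the
# Riemann-solver output); the curl structure guarantees div B is unchanged.
Ez = [[random.random() for _ in range(ny + 1)] for _ in range(nx + 1)]
for i in range(nx + 1):
    for j in range(ny):
        bx[i][j] -= dt * (Ez[i][j + 1] - Ez[i][j]) / dy
for i in range(nx):
    for j in range(ny + 1):
        by[i][j] += dt * (Ez[i + 1][j] - Ez[i][j]) / dx

max_div = max(abs(d) for row in divergence() for d in row)
print(max_div)  # stays at machine zero
```

The key point is structural: the update of each face is a difference of edge values, so the per-cell divergence telescopes to zero regardless of what the EMF is.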
Parallel Adaptive Mesh Refinement Library
NASA Technical Reports Server (NTRS)
Mac-Neice, Peter; Olson, Kevin
2005-01-01
Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
Entitlements exemptions for new refiners
Not Available
1980-02-29
The practice of exempting start-up inventories from entitlement requirements for new refiners has been called into question by the Office of Hearings and Appeals and other responsible Departmental officials. ERA, with the assistance of the Office of General Counsel, considered resolving the matter through rulemaking; however, by October 26, 1979 no rulemaking had been published. Because of the absence of published standards for use in granting these entitlements to new refineries, undue reliance was placed on individual judgments that could result in inequities to applicants and increase the potential for fraud and abuse. Recommendations are given as follows: (1) if the program for granting entitlements exemptions to new refiners is continued, the Administrator, ERA, should promptly take action to adopt an appropriate regulation to formalize the program by establishing standards and controls that will assure consistent and equitable application; in addition, files containing adjustments given to new refiners should be made complete to support benefits already allowed; and (2) whether the program is continued or discontinued, the General Counsel and the Administrator, ERA, should coordinate on how to evaluate the propriety of inventory adjustments previously granted to new refineries.
Reformulated Gasoline Market Affected Refiners Differently, 1995
1996-01-01
This article focuses on the costs of producing reformulated gasoline (RFG) as experienced by different types of refiners and on how these refiners fared this past summer, given the prices for RFG at the refinery gate.
A Refined Cauchy-Schwarz Inequality
ERIC Educational Resources Information Center
Mercer, Peter R.
2007-01-01
The author presents a refinement of the Cauchy-Schwarz inequality. He shows computations in which refinements of the triangle inequality and its reverse are obtained for nonzero x and y in a normed linear space.
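For a concrete feel for results of this kind, the following sketch numerically checks a published refinement of the triangle inequality from the same family (Maligranda's; it is not claimed to be Mercer's exact result): for nonzero x and y, the gap ||x|| + ||y|| - ||x+y|| is bounded below by (2 - ||x/||x|| + y/||y||||)·min(||x||, ||y||) and above by the same factor times the max.

```python
# Numeric sanity check of a triangle-inequality refinement of the type the
# note above discusses. The inequality checked is Maligranda's (a closely
# related published refinement), used as an illustrative stand-in.
import math
import random

def norm(v):
    return math.sqrt(sum(c * c for c in v))

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(3)]
    y = [random.uniform(-1, 1) for _ in range(3)]
    nx_, ny_ = norm(x), norm(y)
    if nx_ < 1e-9 or ny_ < 1e-9:
        continue                               # refinement needs nonzero x, y
    u = [a / nx_ + b / ny_ for a, b in zip(x, y)]  # sum of the unit vectors
    lower = (2 - norm(u)) * min(nx_, ny_)      # refinement term
    upper = (2 - norm(u)) * max(nx_, ny_)      # reverse-refinement term
    gap = nx_ + ny_ - norm([a + b for a, b in zip(x, y)])
    assert lower - 1e-12 <= gap <= upper + 1e-12
print("ok")
```

Note that when ||x|| = ||y|| the two bounds coincide with the gap itself, which is a quick way to see the inequality is sharp.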
Fast transport simulation with an adaptive grid refinement.
Haefner, Frieder; Boy, Siegrun
2003-01-01
One of the main difficulties in transport modeling and calibration is the extraordinarily long computing times necessary for simulation runs. Improved execution time is a prerequisite for calibration in transport modeling. In this paper we investigate the problem of code acceleration using an adaptive grid refinement, neglecting subdomains, and devising a method by which the Courant condition can be ignored while maintaining accurate solutions. Grid refinement is based on dividing selected cells into regular subcells and including the balance equations of subcells in the equation system. The connection of coarse and refined cells satisfies the mass balance with an interpolation scheme that is implicitly included in the equation system. The refined subdomain can move with the average transport velocity of the subdomain. Very small time steps are required on a fine or a refined grid, because of the combined effect of the Courant and Peclet conditions. Therefore, we have developed a special upwind technique in small grid cells with high velocities (velocity suppression). We have neglected grid subdomains with very small concentration gradients (zero suppression). The resulting software, MODCALIF, is a three-dimensional, modularly constructed FORTRAN code. For convenience, the package names used by the well-known MODFLOW and MT3D computer programs are adopted, and the same input file structure and format is used, but the program presented here is separate and independent. Also, MODCALIF includes algorithms for variable density modeling and model calibration. The method is tested by comparison with an analytical solution, and illustrated by means of a two-dimensional theoretical example and three-dimensional simulations of the variable-density Cape Cod and SALTPOOL experiments. Crossing from fine to coarse grid produces numerical dispersion when the whole subdomain of interest is refined; however, we show that accurate solutions can be obtained using a fraction of the
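The Courant restriction that motivates the paper's special upwind treatment can be illustrated with a minimal 1D explicit upwind scheme (illustrative grid, velocity, and pulse; this is not MODCALIF's algorithm): the explicit step is stable only when v·dt/dx ≤ 1, and the conservative update preserves total mass away from the boundaries.

```python
# Minimal 1D first-order upwind advection sketch illustrating the Courant
# (CFL) time-step restriction: stable only for v*dt/dx <= 1. All values
# are illustrative assumptions.
n, dx, v = 100, 1.0, 0.5
dt = 0.9 * dx / v          # satisfies the Courant condition v*dt/dx <= 1
c = [0.0] * n
c[10] = 1.0                # unit-mass pulse

for _ in range(50):
    cn = c[:]
    for i in range(1, n):  # first-order upwind difference (v > 0)
        cn[i] = c[i] - v * dt / dx * (c[i] - c[i - 1])
    c = cn

total = sum(c) * dx
print(total)               # mass is conserved while the pulse is interior
```

Doubling dt past the Courant limit makes the same loop blow up, which is exactly the constraint the paper's velocity-suppression technique is designed to work around.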
Vortex-dominated conical-flow computations using unstructured adaptively-refined meshes
NASA Technical Reports Server (NTRS)
Batina, John T.
1989-01-01
A conical Euler/Navier-Stokes algorithm is presented for the computation of vortex-dominated flows. The flow solver involves a multistage Runge-Kutta time-stepping scheme which uses a finite-volume spatial discretization on an unstructured grid made up of triangles. The algorithm also employs an adaptive mesh refinement procedure which enriches the mesh locally to more accurately resolve the vortical flow features. Results are presented for several highly swept delta wing and circular cone cases at high angles of attack and at supersonic freestream flow conditions. Accurate solutions were obtained more efficiently when adaptive mesh refinement was used than when the grid was refined globally. The paper presents descriptions of the conical Euler/Navier-Stokes flow solver and adaptive mesh refinement procedures, along with results which demonstrate the capability.
Firing of pulverized solvent refined coal
Derbidge, T. Craig; Mulholland, James A.; Foster, Edward P.
1986-01-01
An air-purged burner for the firing of pulverized solvent refined coal is constructed and operated such that the solvent refined coal can be fired without the coking thereof on the burner components. The air-purged burner is designed for the firing of pulverized solvent refined coal in a tangentially fired boiler.
Grain Refinement of Deoxidized Copper
NASA Astrophysics Data System (ADS)
Balart, María José; Patel, Jayesh B.; Gao, Feng; Fan, Zhongyun
2016-08-01
This study reports the current status of grain refinement of copper accompanied in particular by a critical appraisal of grain refinement of phosphorus-deoxidized, high residual P (DHP) copper microalloyed with 150 ppm Ag. Some deviations exist in terms of the growth restriction factor (Q) framework, on the basis of empirical evidence reported in the literature for grain size measurements of copper with individual additions of 0.05, 0.1, and 0.5 wt pct of Mo, In, Sn, Bi, Sb, Pb, and Se, cast under a protective atmosphere of pure Ar and water quenching. The columnar-to-equiaxed transition (CET) has been observed in copper, with an individual addition of 0.4B and with combined additions of 0.4Zr-0.04P and 0.4Zr-0.04P-0.015Ag and, in a previous study, with combined additions of 0.1Ag-0.069P (in wt pct). CETs in these B- and Zr-treated casts have been ascribed to changes in the morphology and chemistry of particles, concurrently in association with free solute type and availability. No further grain-refining action was observed due to microalloying additions of B, Mg, Ca, Zr, Ti, Mn, In, Fe, and Zn (~0.1 wt pct) with respect to DHP-Cu microalloyed with Ag, and therefore are no longer relevant for the casting conditions studied. The critical microalloying element for grain size control in deoxidized copper and in particular DHP-Cu is Ag.
Solvent refined coal (SRC) process
Not Available
1980-12-01
This report summarizes the progress of the Solvent Refined Coal (SRC) project by The Pittsburg and Midway Coal Mining Co. at the SRC Pilot Plant in Fort Lewis, Washington and the Gulf Science and Technology Company Process Development Unit (P-99) in Harmarville, Pennsylvania, for the Department of Energy during the month of October, 1980. The Fort Lewis Pilot Plant was shut down the entire month of October, 1980 for inspection and maintenance. PDU P-99 completed two runs during October investigating potential start-up modes for the Demonstration Plant.
Winter, V.L.; Berg, R.S.; Dalton, L.J.
1998-06-01
When designing a high consequence system, considerable care should be taken to ensure that the system cannot easily be placed into a high consequence failure state. A formal system design process should include a model that explicitly shows the complete state space of the system (including failure states) as well as those events (e.g., abnormal environmental conditions, component failures, etc.) that can cause a system to enter a failure state. In this paper the authors present such a model and formally develop a notion of risk-based refinement with respect to the model.
Fully Threaded Tree for Adaptive Refinement Fluid Dynamics Simulations
NASA Technical Reports Server (NTRS)
Khokhlov, A. M.
1997-01-01
A fully threaded tree (FTT) for adaptive refinement of regular meshes is described. By using a tree threaded at all levels, tree traversals for finding nearest neighbors are avoided. All operations on the tree, including tree modifications, are O(N), where N is the number of cells, and are performed in parallel. An efficient implementation of the tree is described that requires 2N words of memory. A filtering algorithm for removing high-frequency noise during mesh refinement is described. An FTT can be used in various numerical applications. In this paper, it is applied to the integration of the Euler equations of fluid dynamics. An adaptive mesh time-stepping algorithm is described in which different time steps are used at different levels of the tree. Time stepping and mesh refinement are interleaved to avoid the extensive buffer layers of fine mesh that were otherwise required ahead of moving shocks. Test examples are presented, and FTT performance is evaluated. A three-dimensional simulation of the interaction of a shock wave and a spherical bubble is carried out, showing the development of azimuthal perturbations on the bubble surface.
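The threading idea can be sketched in a 1D binary-tree analogue (the paper's FTT is an octree, and all names here are illustrative): each cell stores direct neighbor references that are locally rethreaded on refinement, so a neighbor lookup costs O(1) with no tree traversal.

```python
# Sketch of the "fully threaded" idea: cells carry direct neighbor pointers.
# 1D binary-tree analogue of the paper's octree; names are illustrative.
class Cell:
    def __init__(self, level):
        self.level = level
        self.left = self.right = None     # neighbor threads (O(1) access)
        self.children = None

def refine(cell):
    """Split a cell in two and rethread neighbor pointers locally."""
    a, b = Cell(cell.level + 1), Cell(cell.level + 1)
    a.right, b.left = b, a
    ln, rn = cell.left, cell.right
    # Children thread to the neighbor's children if present, else the neighbor.
    a.left = ln.children[1] if (ln and ln.children) else ln
    b.right = rn.children[0] if (rn and rn.children) else rn
    if a.left:
        a.left.right = a
    if b.right:
        b.right.left = b
    cell.children = (a, b)

root_l, root_r = Cell(0), Cell(0)
root_l.right, root_r.left = root_r, root_l
refine(root_l)
refine(root_r)
# Neighbor of root_l's right child is root_r's left child, found in O(1):
print(root_l.children[1].right is root_r.children[0])  # True
```

Because rethreading touches only the refined cell and its immediate neighbors, refinement stays a local O(1) operation per cell, consistent with the O(N) totals quoted above.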
Three-dimensional Hybrid Continuum-Atomistic Simulations for Multiscale Hydrodynamics
Wijesinghe, S; Hornung, R; Garcia, A; Hadjiconstantinou, N
2004-04-15
We present an adaptive mesh and algorithmic refinement (AMAR) scheme for modeling multi-scale hydrodynamics. The AMAR approach extends standard conservative adaptive mesh refinement (AMR) algorithms by providing a robust flux-based method for coupling an atomistic fluid representation to a continuum model. The atomistic model is applied locally in regions where the continuum description is invalid or inaccurate, such as near strong flow gradients and at fluid interfaces, or when the continuum grid is refined to the molecular scale. The need for such "hybrid" methods arises from the fact that hydrodynamics modeled by continuum representations are often under-resolved or inaccurate while solutions generated using molecular resolution globally are not feasible. In the implementation described herein, Direct Simulation Monte Carlo (DSMC) provides an atomistic description of the flow and the compressible two-fluid Euler equations serve as our continuum-scale model. The AMR methodology provides local grid refinement while the algorithm refinement feature allows the transition to DSMC where needed. The continuum and atomistic representations are coupled by matching fluxes at the continuum-atomistic interfaces and by proper averaging and interpolation of data between scales. Our AMAR application code is implemented in C++ and is built upon the SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) framework developed at Lawrence Livermore National Laboratory. SAMRAI provides the parallel adaptive gridding algorithm and enables the coupling between the continuum and atomistic methods.
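The flux-based coupling can be caricatured in one dimension. In this deliberately trivial sketch (free-streaming particles and a passive continuum mass bin; all names are assumptions, not the AMAR/SAMRAI implementation), mass moves between the two representations only through a tallied interface flux, so the hybrid conserves mass exactly.

```python
# Toy sketch of conservative particle-continuum flux coupling: particles
# crossing the interface are tallied as a mass flux into the continuum
# region. Physics is deliberately trivial; names are illustrative.
import random

random.seed(1)
dt, x_int = 0.1, 1.0                     # time step, interface position
m_p = 0.01                               # mass per particle
particles = [(random.uniform(0, 1), random.uniform(0, 2)) for _ in range(500)]
continuum_mass = 5.0                     # mass in the continuum region

total0 = m_p * len(particles) + continuum_mass

for _ in range(20):
    flux = 0.0
    survivors = []
    for x, v in particles:
        x += v * dt
        if x > x_int:                    # particle crosses the interface:
            flux += m_p                  # tally it as a mass flux...
        else:
            survivors.append((x, v))
    particles = survivors
    continuum_mass += flux               # ...and add it to the continuum side

total1 = m_p * len(particles) + continuum_mass
print(abs(total1 - total0) < 1e-12)     # True: the coupling is conservative
```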
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2011 CFR
2011-07-01
... a small refiner? 80.1340 Section 80.1340 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1340 How does a refiner obtain approval as a small refiner? (a) Applications for...
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2010 CFR
2010-07-01
... a small refiner? 80.1340 Section 80.1340 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1340 How does a refiner obtain approval as a small refiner? (a) Applications for...
Introducing robustness to maximum-likelihood refinement of electron-microscopy data
Scheres, Sjors H. W.; Carazo, José-María
2009-07-01
An expectation-maximization algorithm for maximum-likelihood refinement of electron-microscopy images is presented that is based on fitting mixtures of multivariate t-distributions. Compared with the conventionally employed Gaussian mixture model, the t-distribution provides robustness against outliers in the data. The novel algorithm has intrinsic characteristics for providing robustness against atypical observations in the data, which is illustrated using an experimental test set with artificially generated outliers. Tests on experimental data revealed only minor differences in two-dimensional classifications, while three-dimensional classification with the new algorithm gave stronger elongation factor G density in the corresponding class of a structurally heterogeneous ribosome data set than the conventional algorithm for Gaussian mixtures.
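The robustness mechanism is easy to see in one dimension: the EM update for the location of a Student-t downweights points with large residuals by w = (ν+1)/(ν + r²), whereas the Gaussian maximum-likelihood estimate (the plain mean) weights every point equally. A toy sketch with assumed data and a fixed scale, not the paper's image-refinement setting:

```python
# Illustrative EM iteration for the location of a univariate Student-t,
# showing why the t-distribution is robust to outliers. Data, nu, and the
# fixed scale are assumptions for illustration only.
data = [0.1, -0.2, 0.05, 0.15, -0.1, 0.0, 0.2, -0.05, 50.0, 60.0]  # 2 outliers
nu, sigma = 4.0, 1.0

gaussian_mu = sum(data) / len(data)      # Gaussian ML estimate (the mean)
mu = gaussian_mu
for _ in range(100):                     # EM updates for the t location
    w = [(nu + 1.0) / (nu + ((x - mu) / sigma) ** 2) for x in data]
    mu = sum(wi * x for wi, x in zip(w, data)) / sum(w)

# The mean is dragged far from zero by the outliers; the t location is not.
print(gaussian_mu, mu)
```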
Zone refining of plutonium metal
1997-05-01
The purpose of this study was to investigate zone refining techniques for the purification of plutonium metal. The redistribution of 10 impurity elements from zone melting was examined. Four tantalum boats were loaded with plutonium impurity alloy, placed in a vacuum furnace, heated to 700°C, and held at temperature for one hour. Ten passes were made with each boat. Metallographic and chemical analyses performed on the plutonium rods showed that, after 10 passes, moderate movement of certain elements was achieved. Molten zone speeds of 1 or 2 inches per hour had no effect on impurity element movement. Likewise, the application of constant or variable power had no effect on impurity movement. The study implies that development of a zone refining process to purify plutonium is feasible. Development of a process will be hampered by two factors: (1) the effect on impurity element redistribution of the oxide layer formed on the exposed surface of the material is not understood, and (2) the tantalum container material is not inert in the presence of plutonium. Cold boat studies are planned, with higher temperature and vacuum levels, to determine the effect of these factors. 5 refs., 1 tab., 5 figs.
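The redistribution being measured is described by the standard ideal model of single-pass zone melting (Pfann's equation): C(x) = C0·(1 − (1−k)·exp(−kx/l)) for a zone of length l and distribution coefficient k < 1. The sketch below uses illustrative parameter values, not the plutonium data from the report.

```python
# Pfann's ideal single-pass zone-melting profile. Parameter values are
# illustrative assumptions, not the report's plutonium measurements.
import math

def single_pass(x, c0=1.0, k=0.1, l=1.0):
    """Impurity concentration after one molten-zone pass (ingot start x=0).

    Valid away from the final zone length at the far end of the ingot.
    """
    return c0 * (1.0 - (1.0 - k) * math.exp(-k * x / l))

start = single_pass(0.0)    # leading end is purified down to k*C0
far = single_pass(100.0)    # far from the start, C returns to C0
print(start, far)
```

Repeated passes sweep impurities with k < 1 toward the trailing end, which is the "moderate movement of certain elements" the metallographic analyses look for.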
Elliptic Solvers for Adaptive Mesh Refinement Grids
Quinlan, D.J.; Dendy, J.E., Jr.; Shapira, Y.
1999-06-03
We are developing multigrid methods that will efficiently solve elliptic problems with anisotropic and discontinuous coefficients on adaptive grids. The final product will be a library that provides for the simplified solution of such problems. This library will directly benefit the efforts of other Laboratory groups. The focus of this work is research on serial and parallel elliptic algorithms and the inclusion of our black-box multigrid techniques into this new setting. The approach applies the Los Alamos object-oriented class libraries that greatly simplify the development of serial and parallel adaptive mesh refinement applications. In the final year of this LDRD, we focused on putting the software together; in particular, we completed the final AMR++ library, we wrote tutorials and manuals, and we built example applications. We implemented the Fast Adaptive Composite Grid method as the principal elliptic solver. We presented results at the Overset Grid Conference and other, more AMR-specific conferences. We worked on optimization of serial and parallel performance and published several papers on the details of this work. Performance remains an important issue and is the subject of continuing research work.
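The core solver idea can be sketched with the textbook 1D multigrid V-cycle for the Poisson equation. This is the classical algorithm, not AMR++ or the black-box multigrid techniques themselves; grid size and right-hand side are illustrative.

```python
# Textbook 1D multigrid V-cycle for -u'' = f with Dirichlet BCs, shown as a
# sketch of the solver family discussed above (illustrative, not AMR++).
def smooth(u, f, h, sweeps=3):
    """Gauss-Seidel relaxation for -u'' = f."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])

def residual(u, f, h):
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] + (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (h * h)
    return r

def vcycle(u, f, h):
    if len(u) == 3:
        smooth(u, f, h, sweeps=1)        # one sweep is exact for one unknown
        return
    smooth(u, f, h)                      # pre-smooth
    r = residual(u, f, h)
    nc = (len(u) + 1) // 2
    rc = [0.0] * nc
    for i in range(1, nc - 1):           # full-weighting restriction
        rc[i] = 0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
    ec = [0.0] * nc
    vcycle(ec, rc, 2.0 * h)              # coarse-grid correction, recursively
    for i in range(nc - 1):              # linear prolongation + correction
        u[2 * i] += ec[i]
        u[2 * i + 1] += 0.5 * (ec[i] + ec[i + 1])
    smooth(u, f, h)                      # post-smooth

n = 65                                   # 2^6 + 1 points
h = 1.0 / (n - 1)
f = [1.0] * n
u = [0.0] * n
r0 = max(abs(x) for x in residual(u, f, h))
for _ in range(10):
    vcycle(u, f, h)
r1 = max(abs(x) for x in residual(u, f, h))
print(r1 < 1e-6 * r0)   # residual reduced by many orders of magnitude
```

The Fast Adaptive Composite Grid method mentioned above extends this coarse-fine correction cycle to locally refined (composite) grids rather than a single uniform hierarchy.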
Workshop on algorithms for macromolecular modeling. Final project report, June 1, 1994--May 31, 1995
Leimkuhler, B.; Hermans, J.; Skeel, R.D.
1995-07-01
A workshop was held on algorithms and parallel implementations for macromolecular dynamics, protein folding, and structural refinement. This document contains abstracts and brief reports from that workshop.
NASA Technical Reports Server (NTRS)
Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.
1995-01-01
Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods has been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of boundary geometry is important. The complex geometry is represented by using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.
Materials refining on the Moon
NASA Astrophysics Data System (ADS)
Landis, Geoffrey A.
2007-05-01
Oxygen, metals, silicon, and glass are raw materials that will be required for long-term habitation and production of structural materials and solar arrays on the Moon. A process sequence is proposed for refining these materials from lunar regolith, consisting of separating the required materials from lunar rock with fluorine. The fluorine is brought to the Moon in the form of potassium fluoride, and is liberated from the salt by electrolysis in a eutectic salt melt. Tetrafluorosilane produced by this process is reduced to silicon by a plasma reduction stage; the fluorine salts are reduced to metals by reaction with metallic potassium. Fluorine is recovered from residual MgF and CaF2 by reaction with K2O.
Adaptive mesh refinement in Titanium
Colella, Phillip; Wen, Tong
2005-01-21
In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study in which we implement a subset of Chombo's functionality in Titanium. Chombo is a software package applying the Adaptive Mesh Refinement methodology to numerical Partial Differential Equations at the production level. Chombo takes a library approach to parallel programming (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation based on two grid configurations from a real application. Also provided are counts of lines of code from both sides.
Block-structured adaptive mesh refinement - theory, implementation and application
Deiterding, Ralf
2011-01-01
Structured adaptive mesh refinement (SAMR) techniques can enable cutting-edge simulations of problems governed by conservation laws. Focusing on the strictly hyperbolic case, these notes explain all algorithmic and mathematical details of a technically relevant implementation tailored for distributed memory computers. An overview of the background of commonly used finite volume discretizations for gas dynamics is included and typical benchmarks to quantify accuracy and performance of the dynamically adaptive code are discussed. Large-scale simulations of shock-induced realistic combustion in non-Cartesian geometry and shock-driven fluid-structure interaction with fully coupled dynamic boundary motion demonstrate the applicability of the discussed techniques for complex scenarios.
Silicon refinement by chemical vapor transport
NASA Technical Reports Server (NTRS)
Olson, J.
1984-01-01
Silicon refinement by chemical vapor transport is discussed. The operating characteristics of the purification process, including factors affecting the rate, purification efficiency and photovoltaic quality of the refined silicon were studied. The casting of large alloy plates was accomplished. A larger research scale reactor is characterized, and it is shown that a refined silicon product yields solar cells with near state of the art conversion efficiencies.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
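A minimal generational genetic algorithm of the kind described above, maximizing the classic OneMax fitness (count of 1-bits); population size, rates, and the tournament selection are illustrative choices, not the tool's actual design.

```python
# Minimal generational genetic algorithm on OneMax. All parameter choices
# (population, generations, rates) are illustrative assumptions.
import random

random.seed(0)
BITS, POP, GENS = 32, 40, 60

def fitness(ind):
    return sum(ind)                               # OneMax: count of 1-bits

def tournament(pop):
    a, b = random.choice(pop), random.choice(pop)
    return a if fitness(a) >= fitness(b) else b   # binary tournament selection

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    for _ in range(POP):
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, BITS)           # one-point crossover
        child = p1[:cut] + p2[cut:]
        for i in range(BITS):                     # bit-flip mutation
            if random.random() < 1.0 / BITS:
                child[i] ^= 1
        nxt.append(child)
    pop = nxt

best = max(fitness(ind) for ind in pop)
print(best)   # close to the optimum of 32
```

The three operators in the loop (selection, crossover, mutation) are the "processes of natural genetics" the abstract refers to.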
Refining the shallow slip deficit
NASA Astrophysics Data System (ADS)
Xu, Xiaohua; Tong, Xiaopeng; Sandwell, David T.; Milliner, Christopher W. D.; Dolan, James F.; Hollingsworth, James; Leprince, Sebastien; Ayoub, Francois
2016-03-01
Geodetic slip inversions for three major (Mw > 7) strike-slip earthquakes (1992 Landers, 1999 Hector Mine and 2010 El Mayor-Cucapah) show a 15-60 per cent reduction in slip near the surface (depth < 2 km) relative to the slip at deeper depths (4-6 km). This significant difference between surface coseismic slip and slip at depth has been termed the shallow slip deficit (SSD). The large magnitude of this deficit has been an enigma since it cannot be explained by shallow creep during the interseismic period or by triggered slip from nearby earthquakes. One potential explanation for the SSD is that the previous geodetic inversions lack data coverage close to surface rupture, such that the shallow portions of the slip models are poorly resolved and generally underestimated. In this study, we improve the static coseismic slip inversion for these three earthquakes, especially at shallow depths, by: (1) including data capturing the near-fault deformation from optical imagery and SAR azimuth offsets; (2) refining the interferometric synthetic aperture radar processing with non-boxcar phase filtering, model-dependent range corrections, more complete phase unwrapping by SNAPHU (Statistical Non-linear Approach for Phase Unwrapping) assuming a maximum discontinuity and an on-fault correlation mask; (3) using more detailed, geologically constrained fault geometries and (4) incorporating additional campaign global positioning system (GPS) data. The refined slip models result in much smaller SSDs of 3-19 per cent. We suspect that the remaining minor SSD for these earthquakes likely reflects a combination of our elastic model's inability to fully account for near-surface deformation, which will render our estimates of shallow slip minima, and potentially small amounts of interseismic fault creep or triggered slip, which could 'make up' a small percentage of the coseismic SSD during the interseismic period. Our results indicate that it is imperative that slip inversions include
Adaptive Mesh Refinement for Microelectronic Device Design
NASA Technical Reports Server (NTRS)
Cwik, Tom; Lou, John; Norton, Charles
1999-01-01
Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchal elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smoothen and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of
Three-dimensional unstructured grid refinement and optimization using edge-swapping
NASA Technical Reports Server (NTRS)
Gandhi, Amar; Barth, Timothy
1993-01-01
This paper presents a three-dimensional (3-D) edge-swapping method based on local transformations. This method extends Lawson's edge-swapping algorithm into 3-D. The 3-D edge-swapping algorithm is employed for the purpose of refining and optimizing unstructured meshes according to arbitrary mesh-quality measures. Several criteria, including Delaunay triangulations, are examined. Extensions from two to three dimensions of several known properties of Delaunay triangulations are also discussed.
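In 2D, Lawson's criterion swaps the edge shared by two triangles when the vertex opposite the edge lies inside the other triangle's circumcircle, which the standard incircle determinant detects. A sketch with illustrative coordinates (the paper's contribution, the 3-D extension, is not shown here):

```python
# 2D Delaunay edge-swapping criterion via the standard incircle determinant.
# Coordinates are illustrative; the paper extends this test to 3-D.
def incircle(a, b, c, d):
    """> 0 iff d is strictly inside the circumcircle of ccw triangle abc."""
    m = [[a[0] - d[0], a[1] - d[1], (a[0] - d[0]) ** 2 + (a[1] - d[1]) ** 2],
         [b[0] - d[0], b[1] - d[1], (b[0] - d[0]) ** 2 + (b[1] - d[1]) ** 2],
         [c[0] - d[0], c[1] - d[1], (c[0] - d[0]) ** 2 + (c[1] - d[1]) ** 2]]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

a, b, c = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)   # ccw triangle
inside = (0.4, 0.4)      # inside the circumcircle -> edge should be swapped
outside = (2.0, 2.0)     # outside -> configuration is locally Delaunay
print(incircle(a, b, c, inside) > 0, incircle(a, b, c, outside) > 0)
```

In 3-D the analogous test uses the insphere of a tetrahedron, and a single swap reconfigures the set of tetrahedra sharing a face or an edge rather than a single shared edge.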
Automated knowledge-base refinement
NASA Technical Reports Server (NTRS)
Mooney, Raymond J.
1994-01-01
Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.
i3Drefine Software for Protein 3D Structure Refinement and Its Assessment in CASP10
Bhattacharya, Debswapna; Cheng, Jianlin
2013-01-01
Protein structure refinement refers to the process of improving the quality of protein structures during structure modeling to bring them closer to their native states. Structure refinement has been drawing increasing attention in the community-wide Critical Assessment of techniques for Protein Structure prediction (CASP) experiments since its addition in the 8th CASP experiment. During the 9th and the recently concluded 10th CASP experiments, a consistent growth in the number of refinement targets and participating groups has been witnessed. Yet protein structure refinement remains a largely unsolved problem, with the majority of participating groups in the CASP refinement category failing to consistently improve the quality of structures issued for refinement. To address this need, we developed a completely automated and computationally efficient protein 3D structure refinement method, i3Drefine, based on an iterative and highly convergent energy minimization algorithm with a powerful all-atom composite physics- and knowledge-based force field and hydrogen bonding (HB) network optimization technique. In the recent community-wide blind experiment, CASP10, i3Drefine (as 'MULTICOM-CONSTRUCT') was ranked as the best method in the server section as per the official assessment of the CASP10 experiment. Here we provide the community with free access to the i3Drefine software and systematically analyse the performance of i3Drefine in strict blind mode on the refinement targets issued in the CASP10 refinement category, comparing it with other state-of-the-art refinement methods participating in CASP10. Our analysis demonstrates that i3Drefine is the only fully automated server participating in CASP10 exhibiting consistent improvement over the initial structures in both global and local structural quality metrics. An executable version of i3Drefine is freely available at http://protein.rnet.missouri.edu/i3drefine/. PMID:23894517
Passive microwave algorithm development and evaluation
NASA Technical Reports Server (NTRS)
Petty, Grant W.
1995-01-01
The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.
Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics
Lomov, I; Pember, R; Greenough, J; Liu, B
2005-10-18
We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses the hierarchical, structured grid approach first developed by Berger and Oliger (1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors introduced during the separate advances. The methodology presented here is based on a single-grid algorithm developed for multimaterial gas dynamics by Colella et al. (1993), refined by Greenough et al. (1995), and extended to the solution of solid mechanics problems with significant strength by Lomov and Rubin (2003). The single-grid algorithm uses a second-order Godunov scheme with an approximate single-fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified because highly nonlinear behavior of shear stresses is rare. The algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion is solely concerned with hydrodynamics modeling. Results from a number of simulations of flows with and without strength will be presented.
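The recursive advance-and-synchronize cycle described above can be sketched in a few lines. The `Level` class, the placeholder integrator, and the averaging `synchronize` step below are illustrative stand-ins (assuming a refinement ratio of 2), not the Geodyn/Raptor implementation:

```python
# Hedged sketch of recursive AMR time integration: advance the coarse level
# one step, subcycle the finer level to reach the same time, then synchronize.
class Level:
    def __init__(self, depth, data=0.0):
        self.depth = depth
        self.data = data          # stand-in for the grid state
        self.finer = None         # next finer level, if any
        self.steps_taken = 0

    def advance(self, dt, ratio=2):
        # Advance this level one step of size dt (placeholder integrator).
        self.data += dt
        self.steps_taken += 1
        if self.finer is not None:
            # Subcycle the finer level with ratio steps of dt/ratio...
            for _ in range(ratio):
                self.finer.advance(dt / ratio, ratio)
            # ...then synchronize to remove conservation errors.
            self.synchronize()

    def synchronize(self):
        # Placeholder sync: blend coarse and fine states.
        self.data = 0.5 * (self.data + self.finer.data)

root = Level(0)
root.finer = Level(1)
root.finer.finer = Level(2)
root.advance(1.0)   # one coarse step drives 2 mid-level and 4 fine steps
```

The recursion means each refinement level automatically takes `ratio` times as many (smaller) steps as the level above it.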
Anomalies in the refinement of isoleucine
Berntsen, Karen R. M.; Vriend, Gert
2014-04-01
The side-chain torsion angles of isoleucines in X-ray protein structures are a function of resolution, secondary structure and refinement software. Detailing the standard torsion angles used in refinement software can improve protein structure refinement. A study of isoleucines in protein structures solved using X-ray crystallography revealed a series of systematic trends for the two side-chain torsion angles χ1 and χ2 dependent on the resolution, secondary structure and refinement software used. The average torsion angles for the nine rotamers were similar in high-resolution structures solved using either the REFMAC, CNS or PHENIX software. However, at low resolution these programs often refine towards somewhat different χ1 and χ2 values. Small systematic differences can be observed between refinement software that uses molecular dynamics-type energy terms (for example CNS) and software that does not use these terms (for example REFMAC). Detailing the standard torsion angles used in refinement software can improve the refinement of protein structures. The target values in the molecular dynamics-type energy functions can also be improved.
Pneumatic conveying of pulverized solvent refined coal
Lennon, Dennis R.
1984-11-06
A method for pneumatically conveying solvent refined coal to a burner under conditions of dilute-phase pneumatic flow, preventing saltation of the solvent refined coal in the transport line by maintaining the transport fluid velocity above approximately 95 ft/sec.
Adaptive particle refinement and derefinement applied to the smoothed particle hydrodynamics method
NASA Astrophysics Data System (ADS)
Barcarolo, D. A.; Le Touzé, D.; Oger, G.; de Vuyst, F.
2014-09-01
SPH simulations are usually performed with a uniform particle distribution. New techniques have recently been proposed to enable the use of spatially varying particle distributions, which encouraged the development of automatic adaptivity and particle refinement/derefinement algorithms. All these efforts resulted in very interesting and promising procedures leading to more efficient and faster SPH simulations. In this article, a family of particle refinement techniques is reviewed and a new derefinement technique is proposed and validated through several test cases involving both free-surface and viscous flows. In addition, this new procedure allows higher resolutions in the regions requiring increased accuracy. Moreover, several levels of refinement can be used with this new technique, as is often done in adaptive mesh refinement techniques for mesh-based methods.
Anomalies in the refinement of isoleucine
Berntsen, Karen R. M.; Vriend, Gert
2014-01-01
A study of isoleucines in protein structures solved using X-ray crystallography revealed a series of systematic trends for the two side-chain torsion angles χ1 and χ2 dependent on the resolution, secondary structure and refinement software used. The average torsion angles for the nine rotamers were similar in high-resolution structures solved using either the REFMAC, CNS or PHENIX software. However, at low resolution these programs often refine towards somewhat different χ1 and χ2 values. Small systematic differences can be observed between refinement software that uses molecular dynamics-type energy terms (for example CNS) and software that does not use these terms (for example REFMAC). Detailing the standard torsion angles used in refinement software can improve the refinement of protein structures. The target values in the molecular dynamics-type energy functions can also be improved. PMID:24699648
Improving Flow Response of a Variable-rate Aerial Application System by Interactive Refinement
Technology Transfer Automated Retrieval System (TEKTRAN)
Experiments were conducted to evaluate response of a variable-rate aerial application controller to changing flow rates and to improve its response at correspondingly varying system pressures. System improvements have been made by refinement of the control algorithms over time in collaboration with ...
North Dakota Refining Capacity Study
Dennis Hill; Kurt Swenson; Carl Tuura; Jim Simon; Robert Vermette; Gilberto Marcha; Steve Kelly; David Wells; Ed Palmer; Kuo Yu; Tram Nguyen; Juliam Migliavacca
2011-01-05
According to a 2008 report issued by the United States Geological Survey, North Dakota and Montana have an estimated 3.0 to 4.3 billion barrels of undiscovered, technically recoverable oil in an area known as the Bakken Formation. Given the size and remoteness of the discovery, the question became: can a business case be made for increasing refining capacity in North Dakota, and, if so, what is the impact on existing players in the region? To answer the question, a study committee composed of leaders in the region's petroleum industry was brought together to define the scope of the study, hire a consulting firm and oversee the study. The study committee met frequently to provide input on the findings and modify the course of the study, as needed. The study concluded that Petroleum Administration for Defense District II (PADD II) has an oversupply of gasoline. With that in mind, a niche market, naphtha, was identified. Naphtha is used as a diluent for pipelining bitumen (heavy crude) from Canada to crude markets. The study predicted there will continue to be an increase in the demand for naphtha through 2030. The study estimated the optimal configuration for the refinery at 34,000 barrels per day (BPD), producing 15,000 BPD of naphtha with a 52 percent refinery charge for jet and diesel yield. The financial modeling assumed the sponsor of a refinery would invest its own capital to pay for construction costs. With this assumption, the internal rate of return is 9.2 percent, which is not sufficient to attract traditional investment given the risk factor of the project. Accordingly, those interested in pursuing this niche market will need to identify incentives to improve the rate of return.
Refinement of boards' role required.
Umbdenstock, R J
1987-01-01
The governing board's role in health care is not changing, but new competitive forces necessitate a refinement of the board's approach to fulfilling its role. In a free-standing, community, not-for-profit hospital, the board functions as though it were the "owner." Although it does not truly own the facility in the legal sense, the board does have legal, fiduciary, and financial responsibilities conferred on it by the state. In a religious-sponsored facility, the board fulfills these same obligations on behalf of the sponsoring institute, subject to the institute's reserved powers. In multi-institutional systems, the hospital board's power and authority depend on the role granted it by the system. Boards in all types of facilities are currently faced with the following challenges: Fulfilling their basic responsibilities, such as legal requirements, financial duties, and obligations for the quality of care. Encouraging management and the board itself to "think strategically" in attacking new competitive market forces while protecting the organization's traditional mission and values. Assessing recommended strategies in light of consequences if constituencies think the organization is abandoning its commitments. Boards can take several steps to match their mode of operation with the challenges of the new environment. Boards must rededicate themselves to the hospital's mission. Trustees must expand their understanding of health care trends and issues and their effect on the organization. Boards must evaluate and help strengthen management's performance, rather than acting as a "watchdog" in an adversarial position. Boards must think strategically, rather than focusing solely on operational details. Boards must evaluate the methods they use for conducting business. PMID:10280356
An automatic and fast centerline extraction algorithm for virtual colonoscopy.
Jiang, Guangxiang; Gu, Lixu
2005-01-01
This paper introduces a new refined centerline extraction algorithm, which is based on and significantly improved from distance mapping algorithms. The new approach includes three major parts: a colon segmentation method; a fast Euclidean distance transform algorithm; and a boundary voxel cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the overall performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm is more efficient and accurate than existing algorithms. PMID:17281406
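The role of the distance map in Dijkstra-based centerline extraction can be shown with a toy sketch: each step is penalized by closeness to the boundary, so the cheapest path hugs the centre of the lumen. The grid, cost function, and endpoints below are invented for illustration and are not the paper's BVC algorithm:

```python
# Dijkstra on a voxel grid weighted by a distance-from-boundary map.
import heapq

def dijkstra(grid, start, goal):
    # grid[y][x]: distance-from-boundary value (0 = wall, larger = more interior)
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == goal:
            return d
        if d > dist.get((y, x), float("inf")):
            continue  # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols and grid[ny][nx] > 0:
                # step cost favours voxels far from the boundary
                nd = d + 1.0 / grid[ny][nx]
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    heapq.heappush(heap, (nd, (ny, nx)))
    return float("inf")

# toy lumen cross-section: 0 = wall, larger values = interior
grid = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 2, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
cost = dijkstra(grid, (1, 1), (3, 3))   # cheapest route passes the interior voxel
```

The BVC idea in the paper speeds this search up by pruning boundary voxels before Dijkstra runs; the sketch above shows only the baseline search.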
Firing of pulverized solvent refined coal
Lennon, Dennis R.; Snedden, Richard B.; Foster, Edward P.; Bellas, George T.
1990-05-15
A burner for the firing of pulverized solvent refined coal is constructed and operated such that the solvent refined coal can be fired successfully without any performance limitations and without the coking of the solvent refined coal on the burner components. The burner is provided with a tangential inlet of primary air and pulverized fuel, a vaned diffusion swirler for the mixture of primary air and fuel, a center water-cooled conical diffuser shielding the incoming fuel from the heat radiation of the flame and deflecting the primary air and fuel stream into the secondary air, and a water-cooled annulus located between the primary air and secondary air flows.
Strategies for hp-adaptive Refinement
Mitchell, William F.
2008-09-01
In the hp-adaptive version of the finite element method for solving partial differential equations, the grid is adaptively refined in both h, the size of the elements, and p, the degree of the piecewise polynomial approximation over the element. The selection of which elements to refine is determined by a local a posteriori error indicator, and is well established. But the determination of whether the element should be refined by h or p is still open. In this paper, we describe several strategies that have been proposed for making this determination. A numerical example to illustrate the effectiveness of these strategies will be presented.
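One widely used family of h-vs-p decision strategies estimates local smoothness from the decay of the element solution's Legendre coefficients: smooth solutions reward raising p, while kinks or shocks call for splitting in h. The sketch below is a generic example of that idea with an illustrative threshold `sigma`, not one of the paper's specific strategies:

```python
# Smoothness-based h-vs-p decision from Legendre coefficient decay.
import numpy as np

def hp_decision(x, u, degree=4, sigma=0.5):
    # Fit a Legendre series on the element and examine coefficient decay.
    coeffs = np.polynomial.legendre.legfit(x, u, degree)
    mags = np.abs(coeffs) + 1e-30
    # mean geometric decay rate of successive coefficient magnitudes
    decay = np.mean(mags[1:] / mags[:-1])
    # fast decay -> solution is smooth -> raise the polynomial degree;
    # slow decay -> non-smooth feature -> split the element instead
    return "p-refine" if decay < sigma else "h-refine"

x = np.linspace(-1, 1, 50)
smooth = hp_decision(x, np.exp(x))   # analytic: coefficients decay fast
kinked = hp_decision(x, np.abs(x))   # kink at 0: coefficients decay slowly
```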
Refining of metallurgical-grade silicon
NASA Technical Reports Server (NTRS)
Dietl, J.
1986-01-01
A basic requirement of large-scale solar cell fabrication is to provide low-cost base material. Unconventional refining of metallurgical-grade silicon represents one of the most promising ways of silicon meltstock processing. The refining concept is based on an optimized combination of metallurgical treatments. Commercially available crude silicon, in this sequence, requires a first pyrometallurgical step by slagging or, alternatively, solvent extraction by aluminum. After grinding and leaching, high-purity quality is gained as an advanced stage of refinement. To reach solar-grade quality a final pyrometallurgical step is needed: liquid-gas extraction.
A Selective Refinement Approach for Computing the Distance Functions of Curves
Laney, D A; Duchaineau, M A; Max, N L
2000-12-01
We present an adaptive signed distance transform algorithm for curves in the plane. A hierarchy of bounding boxes is required for the input curves. We demonstrate the algorithm on the isocontours of a turbulence simulation. The algorithm provides guaranteed error bounds with a selective refinement approach. The domain over which the signed distance function is desired is adaptively triangulated, and piecewise discontinuous linear approximations are constructed within each triangle. The resulting transform performs work only where requested and does not rely on a preset sampling rate or other constraints.
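The selective-refinement idea, subdividing only where a guaranteed error bound on the linear approximation is exceeded, can be shown in one dimension; the real algorithm works on adaptively triangulated 2-D domains with bounding-box hierarchies, so this is only a schematic analogue with an illustrative tolerance:

```python
# 1-D selective refinement of a piecewise-linear distance approximation:
# split an interval only where the linear interpolant's midpoint error
# exceeds the tolerance, so work concentrates near the curve (the kink).
def refine(f, a, b, tol, pieces):
    m = 0.5 * (a + b)
    # midpoint error of the linear interpolant as the refinement criterion
    err = abs(f(m) - 0.5 * (f(a) + f(b)))
    if err <= tol or b - a < 1e-6:
        pieces.append((a, b))
    else:
        refine(f, a, m, tol, pieces)
        refine(f, m, b, tol, pieces)

dist = lambda x: abs(x - 0.3)   # distance to the "curve" {0.3}
pieces = []
refine(dist, 0.0, 1.0, 1e-3, pieces)
# intervals away from 0.3 stop immediately (the function is exactly linear
# there); only intervals straddling the kink keep subdividing
```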
A Novel Admixture-Based Pharmacogenetic Approach to Refine Warfarin Dosing in Caribbean Hispanics
Claudio-Campos, Karla; Rivera-Miranda, Giselle; Bermúdez-Bosch, Luis; Renta, Jessicca Y.; Cadilla, Carmen L.; Cruz, Iadelisse; Feliu, Juan F.; Vergara, Cunegundo; Ruaño, Gualberto
2016-01-01
Aim This study is aimed at developing a novel admixture-adjusted pharmacogenomic approach to individually refine warfarin dosing in Caribbean Hispanic patients. Patients & Methods A multiple linear regression analysis of effective warfarin doses versus relevant genotypes, admixture, clinical and demographic factors was performed in 255 patients and further validated externally in another cohort of 55 individuals. Results The admixture-adjusted, genotype-guided warfarin dosing refinement algorithm developed in Caribbean Hispanics showed better predictability (R2 = 0.70, MAE = 0.72 mg/day) than a clinical algorithm that excluded genotypes and admixture (R2 = 0.60, MAE = 0.99 mg/day), and outperformed two prior pharmacogenetic algorithms in predicting effective dose in this population. For patients at the highest risk of adverse events, 45.5% of the dose predictions using the developed pharmacogenetic model resulted in an ideal dose, as compared with only 29% when using the clinical non-genetic algorithm (p<0.001). The admixture-driven pharmacogenetic algorithm predicted 58% of warfarin dose variance when externally validated in 55 individuals from an independent validation cohort (MAE = 0.89 mg/day, 24% mean bias). Conclusions The results supported our rationale to incorporate individuals' genotypes and unique admixture metrics into pharmacogenetic refinement models in order to increase predictability when expanding them to admixed populations like Caribbean Hispanics. Trial Registration ClinicalTrials.gov NCT01318057 PMID:26745506
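The core of such a dosing model is an ordinary multiple linear regression of effective dose on genotype, admixture, and clinical covariates, scored by R² and mean absolute error as quoted above. The sketch below uses synthetic data and invented coefficients purely to illustrate the fitting and metrics; it is not the published algorithm:

```python
# Multiple linear regression of dose on genotype + admixture + clinical factors.
import numpy as np

rng = np.random.default_rng(0)
n = 255
genotype = rng.integers(0, 3, n)      # 0/1/2 variant-allele count (hypothetical)
admixture = rng.uniform(0, 1, n)      # fraction of one ancestral component
age = rng.uniform(20, 80, n)

# synthetic "true" dose model plus noise (coefficients are invented)
dose = 5.0 - 1.2 * genotype + 2.0 * admixture - 0.02 * age + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), genotype, admixture, age])
beta, *_ = np.linalg.lstsq(X, dose, rcond=None)

pred = X @ beta
ss_res = np.sum((dose - pred) ** 2)
ss_tot = np.sum((dose - dose.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot            # explained dose variance
mae = np.mean(np.abs(dose - pred))    # mean absolute error, mg/day
```

External validation, as in the study, would apply the fitted `beta` to a held-out cohort and recompute R² and MAE there.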
Refined Phenotyping of Modic Changes
Määttä, Juhani H.; Karppinen, Jaro; Paananen, Markus; Bow, Cora; Luk, Keith D.K.; Cheung, Kenneth M.C.; Samartzis, Dino
2016-01-01
… The strength of the associations increased with the number of MC. This large-scale study is the first to definitively note MC types and specific morphologies to be independently associated with prolonged severe LBP and back-related disability. This proposed refined MC phenotype may have direct implications in clinical decision-making as to the development and management of LBP. Understanding of these imaging biomarkers can lead to new preventative and personalized therapeutics related to LBP. PMID:27258491
Adaptive mesh refinement with spectral accuracy for magnetohydrodynamics in two space dimensions
NASA Astrophysics Data System (ADS)
Rosenberg, D.; Pouquet, A.; Mininni, P. D.
2007-08-01
We examine the accuracy of high-order spectral element methods, with or without adaptive mesh refinement (AMR), in the context of a classical configuration of magnetic reconnection in two space dimensions, the so-called Orszag-Tang (OT) vortex made up of a magnetic X-point centred on a stagnation point of the velocity. A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code is applied to simulate this problem. The MHD solver is explicit, and uses the Elsässer formulation on high-order elements. It automatically takes advantage of the adaptive grid mechanics that have been described elsewhere in the fluid context (Rosenberg et al 2006 J. Comput. Phys. 215 59-80); the code allows both statically refined and dynamically refined grids. Tests of the algorithm using analytic solutions are described, and comparisons of the OT solutions with pseudo-spectral computations are performed. We demonstrate for moderate Reynolds numbers that the algorithms using both static and refined grids reproduce the pseudo-spectral solutions quite well. We show that low-order truncation, even with a comparable number of global degrees of freedom, fails to correctly model some strong (sup-norm) quantities in this problem, even though it adequately satisfies the weak (integrated) balance diagnostics.
U.S. Refining Capacity Utilization
1995-01-01
This article briefly reviews recent trends in domestic refining capacity utilization and examines in detail the differences in reported crude oil distillation capacities and utilization rates among different classes of refineries.
1991 worldwide refining and gas processing directory
Not Available
1990-01-01
This book is an authority for immediate information on the industry. You can use it to find new business, analyze market trends, and stay in touch with existing contacts while making new ones. The possibilities for business applications are numerous. Arranged by country, all listings in the directory include address, phone, fax and telex numbers, a description of the company's activities, names of key personnel and their titles, corporate headquarters, branch offices and plant sites. This newly revised edition lists more than 2000 companies and nearly 3000 branch offices and plant locations. This easy-to-use reference also includes several of the most vital and informative surveys of the industry, including the U.S. Refining Survey; the Worldwide Construction Survey in Refining, Sulfur, Gas Processing and Related Fuels; the Worldwide Refining and Gas Processing Survey; the Worldwide Catalyst Report; and the U.S. and Canadian Lube and Wax Capacities Report from the National Petroleum Refiners Association.
Refiners to the front: Unsung heroes revisited
Not Available
1989-09-29
Crude-oil purchasing and finished-product selling can be likened to a constant volley, with two potentially deadly pricing games going on simultaneously. Nothing new to refiners, who are often viewed by those upstream and downstream of them as a necessary evil mid-point between the wellhead and the retail pump. Recent comparative stability in the margins refiners achieve on a barrel of crude oil, however, benefits producers and product marketers alike. This issue editorializes against taking refiners for granted. This issue also presents the following: (a) ED refining netback data series for the US Gulf and West Coasts, Rotterdam, and Singapore as of September 22, 1989; and (b) ED fuel price/tax series for countries of the Eastern Hemisphere, September 1989 edition. 6 fig., 5 tabs.
Arbitrary Lagrangian Eulerian Adaptive Mesh Refinement
Koniges, A.; Eder, D.; Masters, N.; Fisher, A.; Anderson, R.; Gunney, B.; Wang, P.; Benson, D.; Dixit, P.
2009-09-29
This is a simulation code involving an ALE (arbitrary Lagrangian-Eulerian) hydrocode with AMR (adaptive mesh refinement) and pluggable physics packages for material strength, heat conduction, radiation diffusion, and laser ray tracing, developed at LLNL, UCSD, and Berkeley Lab. The code is an extension of the open source SAMRAI (Structured Adaptive Mesh Refinement Application Interface) code/library. The code can be used in laser facilities such as the National Ignition Facility. The code is also being applied to slurry flow (landslides).
Structure refinement from precession electron diffraction data.
Palatinus, Lukáš; Jacob, Damien; Cuvillier, Priscille; Klementová, Mariana; Sinkler, Wharton; Marks, Laurence D
2013-03-01
Electron diffraction is a unique tool for analysing the crystal structures of very small crystals. In particular, precession electron diffraction has been shown to be a useful method for ab initio structure solution. In this work it is demonstrated that precession electron diffraction data can also be successfully used for structure refinement, if the dynamical theory of diffraction is used for the calculation of diffracted intensities. The method is demonstrated on data from three materials - silicon, orthopyroxene (Mg,Fe)2Si2O6 and gallium-indium tin oxide (Ga,In)4Sn2O10. In particular, it is shown that atomic occupancies of mixed crystallographic sites can be refined to an accuracy approaching X-ray or neutron diffraction methods. In comparison with conventional electron diffraction data, the refinement against precession diffraction data yields significantly lower figures of merit, higher accuracy of refined parameters, much broader radii of convergence, especially for the thickness and orientation of the sample, and significantly reduced correlations between the structure parameters. The full dynamical refinement is compared with refinement using kinematical and two-beam approximations, and is shown to be superior to the latter two. PMID:23403968
Some observations on mesh refinement schemes applied to shock wave phenomena
NASA Technical Reports Server (NTRS)
Quirk, James J.
1995-01-01
This workshop's double-wedge test problem is taken from one of a sequence of experiments which were performed in order to classify the various canonical interactions between a planar shock wave and a double wedge. Therefore to build up a reasonably broad picture of the performance of our mesh refinement algorithm we have simulated three of these experiments and not just the workshop case. Here, using the results from these simulations together with their experimental counterparts, we make some general observations concerning the development of mesh refinement schemes for shock wave phenomena.
Advances in Patch-Based Adaptive Mesh Refinement Scalability
Gunney, Brian T.N.; Anderson, Robert W.
2015-12-18
Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.
Refining a relativistic, hydrodynamic solver: Admitting ultra-relativistic flows
NASA Astrophysics Data System (ADS)
Bernstein, J. P.; Hughes, P. A.
2009-09-01
We have undertaken the simulation of hydrodynamic flows with bulk Lorentz factors in the range 10^2-10^6. We discuss the application of an existing relativistic, hydrodynamic primitive variable recovery algorithm to a study of pulsar winds and, in particular, the refinement made to admit such ultra-relativistic flows. We show that an iterative quartic root finder breaks down for Lorentz factors above 10^2 and employ an analytic root finder as a solution. We find that the former, which is known to be robust for Lorentz factors up to at least 50, offers a 24% speed advantage. We demonstrate the existence of a simple diagnostic allowing for a hybrid primitives recovery algorithm that includes an automatic, real-time toggle between the iterative and analytical methods. We further determine the accuracy of the iterative and hybrid algorithms for a comprehensive selection of input parameters and demonstrate the latter's capability to elucidate the internal structure of ultra-relativistic plasmas. In particular, we discuss simulations showing that the interaction of a light, ultra-relativistic pulsar wind with a slow, dense ambient medium can give rise to asymmetry reminiscent of the Guitar nebula, leading to the formation of a relativistic backflow harboring a series of internal shockwaves. The shockwaves provide thermalized energy that is available for the continued inflation of the PWN bubble. In turn, the bubble enhances the asymmetry, thereby providing positive feedback to the backflow.
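The hybrid toggle described above can be sketched generically: iterate Newton's method on the quartic at moderate Lorentz factors, and switch to a direct (analytic-style) root computation when a diagnostic indicates an ultra-relativistic state. The quartic, the threshold, and the root-selection rule below are illustrative, not the authors' equations:

```python
# Hybrid quartic root finding with a diagnostic-driven toggle.
import numpy as np

def newton_quartic(coeffs, x0, tol=1e-12, max_iter=50):
    # coeffs in highest-degree-first order (same convention as np.roots)
    p = np.polynomial.Polynomial(coeffs[::-1])
    dp = p.deriv()
    x = x0
    for _ in range(max_iter):
        step = p(x) / dp(x)
        x -= step
        if abs(step) < tol * max(1.0, abs(x)):
            break
    return x

def direct_quartic(coeffs):
    # non-iterative root computation; keep the largest real root as a
    # stand-in for selecting the physical branch
    roots = np.roots(coeffs)
    return roots[np.abs(roots.imag) < 1e-9].real.max()

def solve_quartic(coeffs, gamma_estimate, x0):
    # the toggle: iterate at moderate Lorentz factors, go direct above ~1e2
    # where the iteration is assumed unreliable
    if gamma_estimate < 1e2:
        return newton_quartic(coeffs, x0)
    return direct_quartic(coeffs)

# toy quartic with real roots 2 and 0.5 (plus a complex pair)
quartic = [1.0, -2.5, 2.0, -2.5, 1.0]
w_moderate = solve_quartic(quartic, 50.0, x0=3.0)
w_ultra = solve_quartic(quartic, 1e4, x0=3.0)
```

Both branches should agree on well-conditioned inputs; the point of the toggle is that only the direct branch is trusted in the regime where Newton iteration loses accuracy.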
Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g. inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
Software for Refining or Coarsening Computational Grids
NASA Technical Reports Server (NTRS)
Daines, Russell; Woods, Jody
2002-01-01
A computer program performs calculations for refinement or coarsening of computational grids of the type called "structured" (signifying that they are geometrically regular and/or are specified by relatively simple algebraic expressions). This program is designed to facilitate analysis of the numerical effects of changing structured grids utilized in computational fluid dynamics (CFD) software. Unlike prior grid-refinement and -coarsening programs, this program is not limited to doubling or halving: the user can specify any refinement or coarsening ratio, which can have a noninteger value. In addition to this ratio, the program accepts, as input, a grid file and the associated restart file, which is basically a file containing the most recent iteration of flow-field variables computed on the grid. The program then refines or coarsens the grid as specified, while maintaining the geometry and the stretching characteristics of the original grid. The program can interpolate from the input restart file to create a restart file for the refined or coarsened grid. The program provides a graphical user interface that facilitates the entry of input data for the grid-generation and restart-interpolation routines.
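In one dimension the refine/coarsen-by-arbitrary-ratio idea reduces to resampling the grid and interpolating the restart data onto it. The sketch below ignores the stretching-preservation and multi-block aspects the program handles, and all names are illustrative:

```python
# Resample a 1-D structured grid by an arbitrary (possibly noninteger) ratio
# and linearly interpolate the "restart" field onto the new grid.
import numpy as np

def resample_grid(x, q, ratio):
    # x: node coordinates, q: field values at nodes, ratio: e.g. 1.5 or 0.75
    n_new = max(2, round((len(x) - 1) * ratio) + 1)
    x_new = np.linspace(x[0], x[-1], n_new)
    q_new = np.interp(x_new, x, q)   # restart-file interpolation step
    return x_new, q_new

x = np.linspace(0.0, 1.0, 11)        # 10 cells
q = x ** 2                           # stand-in for a flow-field variable
x_f, q_f = resample_grid(x, q, 1.5)  # refine: 15 cells
x_c, q_c = resample_grid(x, q, 0.5)  # coarsen: 5 cells
```

The real program additionally preserves the geometry and stretching of the original grid rather than resampling uniformly as done here.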
Software for Refining or Coarsening Computational Grids
NASA Technical Reports Server (NTRS)
Daines, Russell; Woods, Jody
2003-01-01
A computer program performs calculations for refinement or coarsening of computational grids of the type called structured (signifying that they are geometrically regular and/or are specified by relatively simple algebraic expressions). This program is designed to facilitate analysis of the numerical effects of changing structured grids utilized in computational fluid dynamics (CFD) software. Unlike prior grid-refinement and -coarsening programs, this program is not limited to doubling or halving: the user can specify any refinement or coarsening ratio, which can have a noninteger value. In addition to this ratio, the program accepts, as input, a grid file and the associated restart file, which is basically a file containing the most recent iteration of flow-field variables computed on the grid. The program then refines or coarsens the grid as specified, while maintaining the geometry and the stretching characteristics of the original grid. The program can interpolate from the input restart file to create a restart file for the refined or coarsened grid. The program provides a graphical user interface that facilitates the entry of input data for the grid-generation and restart-interpolation routines.
Zeolites as catalysts in oil refining.
Primo, Ana; Garcia, Hermenegildo
2014-11-21
Oil is nowadays the main energy source, and this prevalent position will most probably continue in the coming decades. This situation is largely due to the degree of maturity that has been achieved in oil refining and petrochemistry as a consequence of the large effort in research and innovation. The remarkable efficiency of oil refining is largely based on the use of zeolites as catalysts. The use of zeolites as catalysts in refining and petrochemistry has been considered one of the major accomplishments in the chemistry of the 20th century. In this tutorial review, the introductory part describes the main features of zeolites in connection with their use as solid acids. The main body of the review describes important refining processes in which zeolites are used, including light naphtha isomerization, olefin alkylation, reforming, cracking and hydrocracking. The final section contains our view on future developments in the field, such as increases in the quality of transportation fuels and the co-processing of increasing percentages of biofuels together with oil streams. This review is intended to provide the rudiments of zeolite science applied to refining catalysis. PMID:24671148
Tsunami modelling with adaptively refined finite volume methods
LeVeque, R.J.; George, D.L.; Berger, M.J.
2011-01-01
Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
NASA Astrophysics Data System (ADS)
Abrams, Daniel S.
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
A Cartesian grid approach with hierarchical refinement for compressible flows
NASA Technical Reports Server (NTRS)
Quirk, James J.
1994-01-01
Many numerical studies of flows that involve complex geometries are limited by the difficulties in generating suitable grids. We present a Cartesian boundary scheme for two-dimensional, compressible flows that is unfettered by the need to generate a computational grid and so it may be used, routinely, even for the most awkward of geometries. In essence, an arbitrary-shaped body is allowed to blank out some region of a background Cartesian mesh and the resultant cut-cells are singled out for special treatment. This is done within a finite-volume framework and so, in principle, any explicit flux-based integration scheme can take advantage of this method for enforcing solid boundary conditions. For best effect, the present Cartesian boundary scheme has been combined with a sophisticated, local mesh refinement scheme, and a number of examples are shown in order to demonstrate the efficacy of the combined algorithm for simulations of shock interaction phenomena.
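The cut-cell idea in the abstract above — an arbitrary body blanking out part of a background Cartesian mesh — can be sketched with a simple cell classifier. This is an illustration only, not the paper's scheme: the function names are hypothetical, and cells are flagged by sampling their corners against an arbitrary point-in-body predicate:

```python
import numpy as np

def classify_cells(nx, ny, inside):
    """Flag each unit cell of a background Cartesian mesh as 'solid',
    'fluid', or 'cut' by testing its four corners against the body
    ('inside' is any point-in-body predicate supplied by the caller)."""
    flags = np.empty((nx, ny), dtype=object)
    for i in range(nx):
        for j in range(ny):
            corners = [inside(i + di, j + dj) for di in (0, 1) for dj in (0, 1)]
            flags[i, j] = ("solid" if all(corners)
                           else "fluid" if not any(corners) else "cut")
    return flags

# A circular body of radius 2 centred at (5, 5) blanking a 10x10 mesh
inside_circle = lambda x, y: (x - 5) ** 2 + (y - 5) ** 2 < 4.0
flags = classify_cells(10, 10, inside_circle)
```

In a finite-volume code, only the 'cut' cells would then be singled out for the special flux treatment the abstract describes.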
On-Orbit Model Refinement for Controller Redesign
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Calise, Anthony J.
1998-01-01
High performance control design for a flexible space structure is challenging since high fidelity plant models are difficult to obtain a priori. Uncertainty in the control design models typically requires a very robust, low performance control design which must be tuned on-orbit to achieve the required performance. A new procedure for refining a multivariable open loop plant model based on closed-loop response data is presented. Using a minimal representation of the state space dynamics, a least squares prediction error method is employed to estimate the plant parameters. This control-relevant system identification procedure stresses the joint nature of the system identification and control design problem by seeking to obtain a model that minimizes the difference between the predicted and actual closed-loop performance. This paper presents an algorithm for iterative closed-loop system identification and controller redesign, along with illustrative examples.
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2...
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2...
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false How does a refiner obtain approval as... refiner status must be submitted to EPA by December 31, 2007. (b) For U.S. Postal delivery, applications...), for the period January 1, 2005 through December 31, 2005. (ii) The information submitted to EIA...
Minimally refined biomass fuels: an economic shortcut
Pearson, R.K.; Hirschfeld, T.B.
1980-07-01
An economic shortcut can be realized if the sugars from which ethanol is made are utilized directly, as concentrated aqueous solutions, for fuels, rather than being refined further through fermentation and distillation steps. Simple evaporation of carbohydrate solutions from sugar cane or sweet sorghum, or from hydrolysis of the starch or cellulose content of many plants, yields potential liquid fuels with energy contents (on a volume basis) comparable to highly refined liquid fuels like methanol and ethanol. The potential utilization of such minimally refined biomass-derived fuels is discussed, and the burning of sucrose-ethanol-water solutions in a small modified domestic burner is demonstrated. Other potential uses of sugar solutions or emulsions and microemulsions in fuel oils for use in diesel or turbine engines are proposed and discussed.
Terahertz spectroscopy for quantifying refined oil mixtures.
Li, Yi-nan; Li, Jian; Zeng, Zhou-mo; Li, Jie; Tian, Zhen; Wang, Wei-kui
2012-08-20
In this paper, the absorption coefficient spectra of samples prepared as mixtures of gasoline and diesel in different proportions are obtained by terahertz time-domain spectroscopy. To quantify the components of refined oil mixtures, a method is proposed to evaluate the best frequency band for regression analysis. With the data in this frequency band, dualistic linear regression fitting is used to determine the volume fractions of gasoline and diesel in the mixture based on the Beer-Lambert law. The minimum regression R-squared is 0.99967, and the mean error of the fitted volume fraction of 97# gasoline is 4.3%. Results show that refined oil mixtures can be quantitatively analyzed through absorption coefficient spectra at terahertz frequencies, which has bright application prospects in the storage and transportation field for refined oil. PMID:22907017
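The Beer-Lambert regression described above reduces to a two-parameter least-squares fit: the mixture spectrum is modelled as a linear combination of the pure-component spectra weighted by volume fraction. A minimal sketch with entirely synthetic spectra (the numbers below are illustrative, not measured THz data):

```python
import numpy as np

# Hypothetical absorption-coefficient spectra over the chosen band:
# pure gasoline and pure diesel (illustrative straight lines only).
freqs = np.linspace(0.2, 1.2, 50)          # THz
a_gas = 1.0 + 2.0 * freqs                  # pure-component spectra
a_die = 3.0 - 1.0 * freqs

def fit_volume_fractions(a_mix, a_gas, a_die):
    """Beer-Lambert: the mixture spectrum is, to first order, a linear
    combination of the pure spectra weighted by volume fraction.
    Solve the two-parameter least-squares problem over the band."""
    A = np.column_stack([a_gas, a_die])
    (v_gas, v_die), *_ = np.linalg.lstsq(A, a_mix, rcond=None)
    return v_gas, v_die

# A synthetic 70/30 mixture recovers its composition
a_mix = 0.7 * a_gas + 0.3 * a_die
v_g, v_d = fit_volume_fractions(a_mix, a_gas, a_die)
```

On real spectra the fit quality depends on choosing a band where the two pure spectra are well separated, which is precisely the band-selection step the paper proposes.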
Quantum algebraic approach to refined topological vertex
NASA Astrophysics Data System (ADS)
Awata, H.; Feigin, B.; Shiraishi, J.
2012-03-01
We establish the equivalence between the refined topological vertex of Iqbal-Kozcaz-Vafa and a certain representation theory of the quantum algebra of type W_{1+∞} introduced by Miki. Our construction involves trivalent intertwining operators Φ and Φ* associated with triples of the bosonic Fock modules. Resembling the topological vertex, a triple of vectors in ℤ² is attached to each intertwining operator, which satisfy the Calabi-Yau and smoothness conditions. It is shown that certain matrix elements of Φ and Φ* give the refined topological vertex C_{λμν}(t, q) of Iqbal-Kozcaz-Vafa. With another choice of basis, we recover the refined topological vertex C_{λμν}(q, t) of Awata-Kanno. The gluing factors appear correctly when we consider any compositions of Φ and Φ*. The spectral parameters attached to the Fock spaces play the role of the Kähler parameters.
Refining Linear Fuzzy Rules by Reinforcement Learning
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Khedkar, Pratap S.; Malkani, Anil
1996-01-01
Linear fuzzy rules are increasingly being used in the development of fuzzy logic systems. Radial basis functions have also been used in the antecedents of the rules for clustering in product space which can automatically generate a set of linear fuzzy rules from an input/output data set. Manual methods are usually used in refining these rules. This paper presents a method for refining the parameters of these rules using reinforcement learning which can be applied in domains where supervised input-output data is not available and reinforcements are received only after a long sequence of actions. This is shown for a generalization of radial basis functions. The formation of fuzzy rules from data and their automatic refinement is an important step in closing the gap between the application of reinforcement learning methods in the domains where only some limited input-output data is available.
Increasing levels of assistance in refinement of knowledge-based retrieval systems
NASA Technical Reports Server (NTRS)
Baudin, Catherine; Kedar, Smadar; Pell, Barney
1994-01-01
The task of incrementally acquiring and refining the knowledge and algorithms of a knowledge-based system in order to improve its performance over time is discussed. In particular, the design of DE-KART, a tool whose goal is to provide increasing levels of assistance in acquiring and refining indexing and retrieval knowledge for a knowledge-based retrieval system, is presented. DE-KART starts with knowledge that was entered manually, and increases its level of assistance in acquiring and refining that knowledge, both in terms of the increased level of automation in interacting with users, and in terms of the increased generality of the knowledge. DE-KART is at the intersection of machine learning and knowledge acquisition: it is a first step towards a system which moves along a continuum from interactive knowledge acquisition to increasingly automated machine learning as it acquires more knowledge and experience.
Using supercritical fluids to refine hydrocarbons
Yarbro, Stephen Lee
2015-06-09
A system and method for reactively refining hydrocarbons, such as heavy oils with API gravities of less than 20 degrees and bitumen-like hydrocarbons with viscosities greater than 1000 cp at standard temperature and pressure, using a selected fluid at supercritical conditions. A reaction portion of the system and method delivers lightweight, volatile hydrocarbons to an associated contacting unit which operates in mixed subcritical/supercritical or supercritical modes. Using thermal diffusion, multiphase contact, or a momentum generating pressure gradient, the contacting unit separates the reaction products into portions that are viable for use or sale without further conventional refining and hydro-processing techniques.
Arbitrary Lagrangian Eulerian Adaptive Mesh Refinement
Energy Science and Technology Software Center (ESTSC)
2009-09-29
This is a simulation code involving an ALE (arbitrary Lagrangian-Eulerian) hydrocode with AMR (adaptive mesh refinement) and pluggable physics packages for material strength, heat conduction, radiation diffusion, and laser ray tracing, developed at LLNL, UCSD, and Berkeley Lab. The code is an extension of the open-source SAMRAI (Structured Adaptive Mesh Refinement Application Interface) code/library. The code can be used in laser facilities such as the National Ignition Facility. The code is also being applied to slurry flow (landslides).
Image segmentation by background extraction refinements
NASA Technical Reports Server (NTRS)
Rodriguez, Arturo A.; Mitchell, O. Robert
1990-01-01
An image segmentation method refining background extraction in two phases is presented. In the first phase, the method detects homogeneous-background blocks and estimates the local background to be extracted throughout the image. A block is classified homogeneous if its left and right standard deviations are small. The second phase of the method refines background extraction in nonhomogeneous blocks by recomputing the shoulder thresholds. Rules that predict the final background extraction are derived by observing the behavior of successive background statistical measurements in the regions under the presence of dark and/or bright object pixels. Good results are shown for a number of outdoor scenes.
Sobel, E.; Lange, K.; O'Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
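Of the four reconstruction algorithms mentioned above, combinatorial optimization by simulated annealing is the most general. A generic sketch follows; the cost function here is a toy (adjacent disagreements in a binary vector, a stand-in for counting recombinations), not the genetic likelihood the paper uses, and all names are hypothetical:

```python
import math
import random

def simulated_annealing(cost, state, n_steps=5000, t0=2.0, seed=0):
    """Generic simulated annealing over binary vectors: propose a single
    bit flip, always accept improvements, and accept worsening moves
    with probability exp(-delta/T) under a linear cooling schedule."""
    rng = random.Random(seed)
    best = state[:]
    for k in range(n_steps):
        t = t0 * (1.0 - k / n_steps) + 1e-9    # linear cooling schedule
        cand = state[:]
        cand[rng.randrange(len(cand))] ^= 1    # flip one "phase" bit
        delta = cost(cand) - cost(state)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            state = cand
        if cost(state) < cost(best):
            best = state[:]
    return best

# Toy cost: count of adjacent disagreements ("recombinations")
cost = lambda s: sum(a != b for a, b in zip(s, s[1:]))
best = simulated_annealing(cost, [0, 1, 0, 1, 1, 0, 1, 0])
```

For a real pedigree the state would be a vector of phase assignments and the cost a negative log-likelihood, but the annealing loop is unchanged.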
CORDIC Algorithms: Theory And Extensions
NASA Astrophysics Data System (ADS)
Delosme, Jean-Marc
1989-11-01
Optimum algorithms for signal processing are notoriously costly to implement since they usually require intensive linear algebra operations to be performed at very high rates. In these cases a cost-effective solution is to design a pipelined or parallel architecture with special-purpose VLSI processors. One may often lower the hardware cost of such a dedicated architecture by using processors that implement CORDIC-like arithmetic algorithms. Indeed, with CORDIC algorithms, the evaluation and the application of an operation, such as determining a rotation that brings a vector onto another one and rotating other vectors by that amount, require the same time on identical processors and can be fully overlapped in most cases, thus leading to highly efficient implementations. We have shown earlier that a necessary condition for a CORDIC-type algorithm to exist is that the function to be implemented can be represented in terms of a matrix exponential. This paper refines this condition to the ability to represent the desired function in terms of a rational representation of a matrix exponential. This insight gives us a powerful tool for the design of new CORDIC algorithms. This is demonstrated by rederiving classical CORDIC algorithms and introducing several new ones, for Jacobi rotations, three and higher dimensional rotations, etc.
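The classical circular-mode CORDIC rotation that the paper rederives can be sketched in a few lines. Each micro-rotation uses only shifts and adds (the `2**-i` scalings), and the accumulated gain of the micro-rotations is compensated by the constant K:

```python
import math

def cordic_rotate(x, y, theta, n_iter=32):
    """Classical circular-mode CORDIC: rotate (x, y) by theta using a
    sequence of micro-rotations by atan(2**-i).  Each micro-rotation
    scales the vector by sqrt(1 + 2**-2i); the product of the inverse
    scalings is the constant gain correction K."""
    K = 1.0
    for i in range(n_iter):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = theta                      # residual angle still to rotate
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return K * x, K * y

cx, cy = cordic_rotate(1.0, 0.0, math.pi / 3)   # ≈ (cos 60°, sin 60°)
```

In hardware the `atan(2**-i)` values are a small lookup table and the multiplies by `2**-i` are wired shifts, which is what makes the evaluation and application of a rotation take the same time on identical processors.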
Terwilliger, Thomas; Terwilliger, T.C.; Grosse-Kunstleve, Ralf Wilhelm; Afonine, P.V.; Moriarty, N.W.; Zwart, P.H.; Hung, L.-W.; Read, R.J.; Adams, P.D.; Los Alamos National Laboratory, Mailstop M888, Los Alamos, NM 87545, USA; Lawrence Berkeley National Laboratory, One Cyclotron Road, Building 64R0121, Berkeley, CA 94720, USA; Department of Haematology, University of Cambridge, Cambridge CB2 0XY, England
2007-04-29
The PHENIX AutoBuild Wizard is a highly automated tool for iterative model-building, structure refinement and density modification using RESOLVE or TEXTAL model-building, RESOLVE statistical density modification, and phenix.refine structure refinement. Recent advances in the AutoBuild Wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model completion algorithms, and automated solvent molecule picking. Model completion algorithms in the AutoBuild Wizard include loop-building, crossovers between chains in different models of a structure, and side-chain optimization. The AutoBuild Wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 Å to 3.2 Å, resulting in a mean R-factor of 0.24 and a mean free R-factor of 0.29. The R-factor of the final model is dependent on the quality of the starting electron density, and relatively independent of resolution.
40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
... a small refiner under this subpart? 80.551 Section 80.551 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship Provisions § 80.551 How does a refiner obtain approval as a small refiner under this subpart?...
40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... a small refiner under this subpart? 80.551 Section 80.551 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship Provisions § 80.551 How does a refiner obtain approval as a small refiner under this subpart?...
Robust Refinement as Implemented in TOPAS
Stone, K.; Stephens, P
2010-01-01
A robust refinement procedure is implemented in the program TOPAS through an iterative reweighting of the data. Examples are given of the procedure as applied to fitting partially overlapped peaks by full and partial models, and also of the structures of ibuprofen and acetaminophen in the presence of unmodeled impurity contributions.
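The iterative-reweighting idea above is the classic iteratively reweighted least squares (IRLS) pattern: after each fit, points with large residuals (such as unmodeled impurity peaks) are down-weighted so they stop biasing the refined parameters. A minimal sketch on a straight-line model, not the TOPAS implementation; the weight function and names are illustrative choices:

```python
import numpy as np

def robust_line_fit(x, y, n_iter=20, c=2.0):
    """Iteratively reweighted least squares with Cauchy-type weights:
    fit, compute residuals, estimate a robust scale from the median
    absolute residual (MAD), reweight, and refit."""
    A = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)
    for _ in range(n_iter):
        Aw = A * w[:, None]
        coef, *_ = np.linalg.lstsq(Aw, w * y, rcond=None)
        r = y - A @ coef
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        w = 1.0 / (1.0 + (r / (c * s)) ** 2)         # down-weight outliers
    return coef

x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0
y[10] += 100.0                  # one gross outlier ("impurity" point)
slope, intercept = robust_line_fit(x, y)
```

The outlier's weight collapses toward zero within a few iterations, so the fit recovers the underlying line almost exactly, which is the behavior the robust Rietveld refinement exploits.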
27 CFR 21.127 - Shellac (refined).
Code of Federal Regulations, 2012 CFR
2012-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Shellac (refined). 21.127 Section 21.127 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS FORMULAS FOR DENATURED ALCOHOL AND RUM Specifications for Denaturants §...
27 CFR 21.127 - Shellac (refined).
Code of Federal Regulations, 2014 CFR
2014-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Shellac (refined). 21.127 Section 21.127 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY ALCOHOL FORMULAS FOR DENATURED ALCOHOL AND RUM Specifications for Denaturants §...
27 CFR 21.127 - Shellac (refined).
Code of Federal Regulations, 2013 CFR
2013-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Shellac (refined). 21.127 Section 21.127 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY ALCOHOL FORMULAS FOR DENATURED ALCOHOL AND RUM Specifications for Denaturants §...
Gravitational Collapse With Distributed Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Liebling, Steven; Lehner, Luis; Motl, Patrick; Neilsen, David; Rahman, Tanvir; Reula, Oscar
2006-04-01
Gravitational collapse is studied using distributed adaptive mesh refinement (AMR). The AMR infrastructure includes a novel treatment of adaptive boundaries which allows for high orders of accuracy. Results of the collapse of Brill waves to black holes are presented. Combining both vertex centered and cell centered fields in the same evolution is discussed.
Refiners respond to strategic driving forces
Gonzalez, R.G.
1996-05-01
Better days should lie ahead for the international refining industry. While political unrest, lingering uncertainty regarding environmental policies, slowing world economic growth, overcapacity and poor image will continue to plague the industry, margins in most areas appear to have bottomed out. Current margins, and even modestly improved margins, do not cover the cost of capital on certain equipment nor provide the returns necessary to achieve reinvestment economics. Refiners must determine how to improve the financial performance of their assets given this reality. Low margins and returns are generally characteristic of mature industries. Many of the business strategies employed by emerging businesses are no longer viable for refiners. The cost-cutting programs of the '90s have mainly been realized, leaving little to be gained from further reduction. Consequently, refiners will have to concentrate on increasing efficiency and delivering higher value products to survive. Rather than focusing solely on their competition, companies will emphasize substantial improvements in their own operations to achieve financial targets. This trend is clearly shown by the growing reliance on benchmarking services.
Energy Bandwidth for Petroleum Refining Processes
none,
2006-10-01
The petroleum refining energy bandwidth report analyzes the most energy-intensive unit operations used in U.S. refineries: crude oil distillation, fluid catalytic cracking, catalytic hydrotreating, catalytic reforming, and alkylation. The "bandwidth" provides a snapshot of the energy losses that can potentially be recovered through best practices and technology R&D.
Laser furnace technology for zone refining
NASA Technical Reports Server (NTRS)
Griner, D. B.
1984-01-01
A carbon dioxide laser experiment facility is constructed to investigate the problems in using a laser beam to zone refine semiconductor and metal crystals. The hardware includes a computer to control scan mirrors and stepper motors to provide a variety of melt zone patterns. The equipment and its operating procedures are described.
Extended query refinement for medical image retrieval.
Deserno, Thomas M; Güld, Mark O; Plodowski, Bartosz; Spitzer, Klaus; Wein, Berthold B; Schubert, Henning; Ney, Hermann; Seidl, Thomas
2008-09-01
The impact of image pattern recognition on accessing large databases of medical images has recently been explored, and content-based image retrieval (CBIR) in medical applications (IRMA) is researched. At the present, however, the impact of image retrieval on diagnosis is limited, and practical applications are scarce. One reason is the lack of suitable mechanisms for query refinement, in particular, the ability to (1) restore previous session states, (2) combine individual queries by Boolean operators, and (3) provide continuous-valued query refinement. This paper presents a powerful user interface for CBIR that provides all three mechanisms for extended query refinement. The various mechanisms of man-machine interaction during a retrieval session are grouped into four classes: (1) output modules, (2) parameter modules, (3) transaction modules, and (4) process modules, all of which are controlled by a detailed query logging. The query logging is linked to a relational database. Nested loops for interaction provide a maximum of flexibility within a minimum of complexity, as the entire data flow is still controlled within a single Web page. Our approach is implemented to support various modalities, orientations, and body regions using global features that model gray scale, texture, structure, and global shape characteristics. The resulting extended query refinement has a significant impact for medical CBIR applications. PMID:17497197
Constrained-Transport Magnetohydrodynamics with Adaptive-Mesh-Refinement in CHARM
Miniatii, Francesco; Martin, Daniel
2011-05-24
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit Corner-Transport-Upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the Piecewise-Parabolic Method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a Constrained-Transport (CT) method. The so-called "multidimensional MHD source terms" required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout. Subject headings: cosmology: theory - methods: numerical
Intelligent perturbation algorithms for space scheduling optimization
NASA Technical Reports Server (NTRS)
Kurtzman, Clifford R.
1991-01-01
Intelligent perturbation algorithms for space scheduling optimization are presented in the form of viewgraphs. The following subject areas are covered: optimization of planning, scheduling, and manifesting; searching a discrete configuration space; heuristic algorithms used for optimization; use of heuristic methods on a sample scheduling problem; intelligent perturbation algorithms as iterative refinement techniques; properties of a good iterative search operator; dispatching examples of intelligent perturbation algorithm and perturbation operator attributes; scheduling implementations using intelligent perturbation algorithms; major advances in scheduling capabilities; the prototype ISF (Industrial Space Facility) experiment scheduler; optimized schedule (max revenue); multi-variable optimization; Space Station design reference mission scheduling; ISF-TDRSS command scheduling demonstration; and example task - communications check.
A parallel algorithm for mesh smoothing
Freitag, L.; Jones, M.; Plassmann, P.
1999-07-01
Maintaining good mesh quality during the generation and refinement of unstructured meshes in finite-element applications is an important aspect in obtaining accurate discretizations and well-conditioned linear systems. In this article, the authors present a mesh-smoothing algorithm based on nonsmooth optimization techniques and a scalable implementation of this algorithm. They prove that the parallel algorithm has a provably fast runtime bound and executes correctly for a parallel random access machine (PRAM) computational model. They extend the PRAM algorithm to distributed memory computers and report results for two- and three-dimensional simplicial meshes that demonstrate the efficiency and scalability of this approach for a number of different test cases. They also examine the effect of different architectures on the parallel algorithm and present results for the IBM SP supercomputer and an ATM-connected network of SPARC Ultras.
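For orientation, the simplest form of mesh smoothing is the classical Laplacian smoother: each free vertex is moved toward the centroid of its neighbors. The paper above uses a nonsmooth-optimization smoother instead, so the sketch below is only the baseline such methods improve upon; the names and the tiny example mesh are hypothetical:

```python
import numpy as np

def laplacian_smooth(points, neighbors, n_iter=50, fixed=()):
    """Classical Laplacian smoothing: repeatedly move each free vertex
    to the centroid of its neighbors, holding boundary vertices fixed.
    'neighbors' maps a vertex index to the indices adjacent to it."""
    pts = np.array(points, dtype=float)
    fixed = set(fixed)
    for _ in range(n_iter):
        new = pts.copy()
        for v, nbrs in neighbors.items():
            if v not in fixed:
                new[v] = pts[list(nbrs)].mean(axis=0)
        pts = new
    return pts

# A distorted interior vertex inside a fixed unit square relaxes to the centre
pts = [(0, 0), (1, 0), (1, 1), (0, 1), (0.9, 0.9)]
nbr = {4: [0, 1, 2, 3]}
smoothed = laplacian_smooth(pts, nbr, fixed=[0, 1, 2, 3])
```

Laplacian smoothing can invert elements near concave boundaries, which is one motivation for the optimization-based formulation the authors analyze.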
Heo, Lim; Lee, Hasup; Seok, Chaok
2016-01-01
Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex. PMID:27535582
Using supercritical fluids to refine hydrocarbons
Yarbro, Stephen Lee
2014-11-25
This is a method for reactively refining hydrocarbons, such as heavy oils with API gravities of less than 20° and bitumen-like hydrocarbons with viscosities greater than 1000 cP at standard temperature and pressure, using a selected fluid at supercritical conditions. The reaction portion of the method delivers lighter-weight, more volatile hydrocarbons to an attached contacting device that operates in mixed subcritical or supercritical modes. This separates the reaction products into portions that are viable for use or sale without further conventional refining and hydro-processing techniques. The method produces valuable products with fewer processing steps and lower costs, increases worker safety through less processing and handling, allows greater opportunity for new oil field development and subsequent positive economic impact, and reduces the carbon dioxide emissions and wastes typical of conventional refineries.
Dinosaurs can fly -- High performance refining
Treat, J.E.
1995-09-01
High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the products markets one serves; and the most likely drivers and direction of future market forces. The author discusses all three points, then describes how to measure the company's performance. Becoming a true high performance refiner often involves redesigning the organization as well as the business processes; the author discusses such redesign. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.
Research Burnout: a refined multidimensional scale.
Singh, Surendra N; Dalal, Nikunj; Mishra, Sanjay
2004-12-01
In a prevailing academic climate where there are high expectations for faculty to publish and generate grants, the exploration of Research Burnout among higher education faculty has become increasingly important. Unfortunately, it is a topic that has not been well researched empirically. In 1997 Singh and Bush developed a unidimensional scale to measure Research Burnout. A closer inspection of the definition of this construct and the composition of its items suggests, however, that the construct may be multidimensional and analogous to Maslach's Psychological Burnout Scale. In this paper, we propose a refined, multidimensional Research Burnout scale and test its factorial validity using confirmatory factor analysis. The nomological validity of this refined scale is established by examining hypothesized relationships between Research Burnout and other constructs such as Intrinsic Motivation for doing research, Extrinsic Pressures to do research, and Knowledge Obsolescence. PMID:15762409
Substance abuse in the refining industry
Little, A. Jr.; Ross, J.K.; Lavorerio, R.; Richards, T.A.
1989-01-01
In order to provide some background for the NPRA Annual Meeting Management Session panel discussion on Substance Abuse in the Refining and Petrochemical Industries, NPRA distributed a questionnaire to member companies requesting information regarding the status of their individual substance abuse policies. The questionnaire was designed to identify general trends in the industry. The aggregate responses to the survey are summarized in this paper, as background for the Substance Abuse panel discussions.
Structured Adaptive Mesh Refinement Application Infrastructure
Energy Science and Technology Software Center (ESTSC)
2010-07-15
SAMRAI is an object-oriented support library for structured adaptive mesh refinement (SAMR) simulation of computational science problems, modeled by systems of partial differential equations (PDEs). SAMRAI is developed and maintained in the Center for Applied Scientific Computing (CASC) under ASCI ITS and PSE support. SAMRAI is used in a variety of application research efforts at LLNL and in academia. These applications are developed in collaboration with SAMRAI development team members.
Humanoid Mobile Manipulation Using Controller Refinement
NASA Technical Reports Server (NTRS)
Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric; Brock, Oliver
2006-01-01
An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. This paper proposes that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Arctic Storms in a Regionally Refined Atmospheric General Circulation Model
NASA Astrophysics Data System (ADS)
Roesler, E. L.; Taylor, M.; Boslough, M.; Sullivan, S.
2014-12-01
Regional refinement in an atmospheric general circulation model is a new tool in atmospheric modeling. As global climate models gain the ability to resolve smaller spatial scales, regional refinement provides a high-resolution solution over a region of interest without the computational cost of running a globally high-resolution simulation. Previous work has shown that high-resolution (i.e., 1/8-degree) simulations and variable-resolution utilities resolve more fine-scale structure and mesoscale storms in the atmosphere than their low-resolution counterparts. We describe an experiment designed to identify and study Arctic storms at two model resolutions. We used the Community Atmosphere Model, version 5, with the Spectral Element dynamical core at 1/8-degree and 1-degree horizontal resolutions to simulate the climatological year of 1850. Storms were detected using a low-pressure-minima and vorticity-maxima finding algorithm. It was found that the high-resolution 1/8-degree simulation had more storms in the Northern Hemisphere than the low-resolution 1-degree simulation. A variable-resolution simulation with a global low resolution of 1 degree and a high-resolution refined region of 1/8 degree over the Arctic is planned. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND NO. 2014-16460A
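The detection step described in the abstract (local pressure minima coinciding with vorticity maxima) can be sketched as a simple neighborhood scan. This is an illustrative reconstruction, not the authors' code; the 3×3 neighborhood and the threshold values are assumptions.

```python
import numpy as np

def detect_storms(pressure, vorticity, p_thresh, v_thresh):
    """Flag grid cells that are local pressure minima below p_thresh
    and coincide with vorticity above v_thresh (illustrative sketch)."""
    ny, nx = pressure.shape
    storms = []
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            window = pressure[j - 1:j + 2, i - 1:i + 2]
            is_min = pressure[j, i] == window.min() and pressure[j, i] < p_thresh
            if is_min and vorticity[j, i] > v_thresh:
                storms.append((j, i))
    return storms
```

A real detector would also track storms across time steps and merge nearby candidates; this sketch only performs the per-snapshot identification.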
TIRS stray light correction: algorithms and performance
NASA Astrophysics Data System (ADS)
Gerace, Aaron; Montanaro, Matthew; Beckmann, Tim; Tyrrell, Kaitlin; Cozzo, Alexandra; Carney, Trevor; Ngan, Vicki
2015-09-01
The Thermal Infrared Sensor (TIRS) onboard Landsat 8 was tasked with continuing thermal band measurements of the Earth as part of the Landsat program. From first light in early 2013, there were obvious indications that stray light was contaminating the thermal image data collected from the instrument. Traditional calibration techniques did not perform adequately as non-uniform banding was evident in the corrected data and error in absolute estimates of temperature over trusted buoys sites varied seasonally and, in worst cases, exceeded 9 K error. The development of an operational technique to remove the effects of the stray light has become a high priority to enhance the utility of the TIRS data. This paper introduces the current algorithm being tested by Landsat's calibration and validation team to remove stray light from TIRS image data. The integration of the algorithm into the EROS test system is discussed with strategies for operationalizing the method emphasized. Techniques for assessing the methodologies used are presented and potential refinements to the algorithm are suggested. Initial results indicate that the proposed algorithm significantly removes stray light artifacts from the image data. Specifically, visual and quantitative evidence suggests that the algorithm practically eliminates banding in the image data. Additionally, the seasonal variation in absolute errors is flattened and, in the worst case, errors of over 9 K are reduced to within 2 K. Future work focuses on refining the algorithm based on these findings and applying traditional calibration techniques to enhance the final image product.
Refining a triangulation of a planar straight-line graph to eliminate large angles
Mitchell, S.A.
1993-05-13
Triangulations without large angles have a number of applications in numerical analysis and computer graphics. In particular, the convergence of a finite element calculation depends on the largest angle of the triangulation. Also, the running time of a finite element calculation depends on the triangulation size, so having a triangulation with few Steiner points is also important. Bern, Dobkin and Eppstein pose as an open problem the existence of an algorithm to triangulate a planar straight-line graph (PSLG) without large angles using a polynomial number of Steiner points. We solve this problem by showing that any PSLG with v vertices can be triangulated with no angle larger than 7π/8 by adding O(v² log v) Steiner points in O(v² log² v) time. We first triangulate the PSLG with an arbitrary constrained triangulation and then refine that triangulation by adding additional vertices and edges. Some PSLGs require Ω(v²) Steiner points in any triangulation achieving any largest-angle bound less than π. Hence the number of Steiner points added by our algorithm is within a log v factor of worst-case optimal. We note that our refinement algorithm works on arbitrary triangulations: given any triangulation, we show how to refine it so that no angle is larger than 7π/8. Our construction adds O(nm + np log m) vertices and runs in time O((nm + np log m) log(m + p)), where n is the number of edges, m is one plus the number of obtuse angles, and p is one plus the number of holes and interior vertices in the original triangulation. A previously considered problem is refining a constrained triangulation of a simple polygon, where p = 1. For this problem we add O(v²) Steiner points, which is within a constant factor of worst-case optimal.
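The 7π/8 angle bound can be checked per triangle with the law of cosines. A minimal sketch of the checking step only (not the paper's refinement algorithm, which inserts Steiner points to repair flagged triangles):

```python
import math

def largest_angle(a, b, c):
    """Largest interior angle of triangle (a, b, c) via the law of cosines."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    la, lb, lc = dist(b, c), dist(a, c), dist(a, b)  # side lengths
    angles = []
    for opp, s1, s2 in ((la, lb, lc), (lb, la, lc), (lc, la, lb)):
        # angle opposite side `opp`, between sides s1 and s2
        angles.append(math.acos((s1 * s1 + s2 * s2 - opp * opp) / (2 * s1 * s2)))
    return max(angles)

BOUND = 7 * math.pi / 8  # the paper's largest-angle guarantee

def needs_refinement(tri):
    """True if the triangle violates the 7*pi/8 bound."""
    return largest_angle(*tri) > BOUND
```

For example, a right triangle passes the check, while a nearly degenerate sliver triangle is flagged for refinement.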
Henshaw, W; Schwendeman, D
2007-11-15
This paper describes an approach for the numerical solution of time-dependent partial differential equations in complex three-dimensional domains. The domains are represented by overlapping structured grids, and block-structured adaptive mesh refinement (AMR) is employed to locally increase the grid resolution. In addition, the numerical method is implemented on parallel distributed-memory computers using a domain-decomposition approach. The implementation is flexible so that each base grid within the overlapping grid structure and its associated refinement grids can be independently partitioned over a chosen set of processors. A modified bin-packing algorithm is used to specify the partition for each grid so that the computational work is evenly distributed amongst the processors. All components of the AMR algorithm such as error estimation, regridding, and interpolation are performed in parallel. The parallel time-stepping algorithm is illustrated for initial-boundary-value problems involving a linear advection-diffusion equation and the (nonlinear) reactive Euler equations. Numerical results are presented for both equations to demonstrate the accuracy and correctness of the parallel approach. Exact solutions of the advection-diffusion equation are constructed, and these are used to check the corresponding numerical solutions for a variety of tests involving different overlapping grids, different numbers of refinement levels and refinement ratios, and different numbers of processors. The problem of planar shock diffraction by a sphere is considered as an illustration of the numerical approach for the Euler equations, and a problem involving the initiation of a detonation from a hot spot in a T-shaped pipe is considered to demonstrate the numerical approach for the reactive case. For both problems, the solutions are shown to be well resolved on the finest grid. The parallel performance of the approach is examined in detail for the shock diffraction problem.
Grain Refinement of Permanent Mold Cast Copper Base Alloys
M.Sadayappan; J.P.Thomson; M.Elboujdaini; G.Ping Gu; M. Sahoo
2005-04-01
Grain refinement is a well established process for many cast and wrought alloys. The mechanical properties of various alloys could be enhanced by reducing the grain size. Refinement is also known to improve casting characteristics such as fluidity and hot tearing. Grain refinement of copper-base alloys is not widely used, especially in sand casting process. However, in permanent mold casting of copper alloys it is now common to use grain refinement to counteract the problem of severe hot tearing which also improves the pressure tightness of plumbing components. The mechanism of grain refinement in copper-base alloys is not well understood. The issues to be studied include the effect of minor alloy additions on the microstructure, their interaction with the grain refiner, effect of cooling rate, and loss of grain refinement (fading). In this investigation, efforts were made to explore and understand grain refinement of copper alloys, especially in permanent mold casting conditions.
California refining in balance as Phase 2 deadline draws near
Adler, K.
1996-01-01
The impact of California's 1996 RFG program on US markets and its implications for refiners worldwide is analyzed. The preparations in the last few months before refiners must produce California Phase 2 RFG are addressed. Subsequent articles will consider the process improvements made by refiners, the early implementation of the program, and what has been learned about refining, gasoline distribution, environmental benefits and consumer acceptance that can be replicated around the world.
Coloured Petri Net Refinement Specification and Correctness Proof with Coq
NASA Technical Reports Server (NTRS)
Choppy, Christine; Mayero, Micaela; Petrucci, Laure
2009-01-01
In this work, we address the formalisation in COQ of refinement for symmetric nets, a subclass of coloured Petri nets. We first provide a formalisation of the net models and of their type refinement in COQ. Then the COQ proof assistant is used to prove the refinement correctness lemma. An example adapted from a protocol illustrates our work.
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....
The blind leading the blind: Mutual refinement of approximate theories
NASA Technical Reports Server (NTRS)
Kedar, Smadar T.; Bresina, John L.; Dent, C. Lisa
1991-01-01
The mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models which cause the failures. The approach focuses on using one approximate model to refine another.
Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
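The recursive Cartesian subdivision described above can be sketched with a small tree class; since the paper works in two dimensions, each refined cell splits into four children (a quadtree), with a caller-supplied flag standing in for the paper's geometry- and solution-based refinement criteria.

```python
class Cell:
    """Node of a 2-D tree of Cartesian cells (sketch; the paper stores
    its grid in a binary tree with analogous parent/child links)."""

    def __init__(self, x0, y0, size):
        self.x0, self.y0, self.size = x0, y0, size
        self.children = []

    def refine(self, flag, max_depth, depth=0):
        """Recursively subdivide while flag(cell) requests refinement."""
        if depth >= max_depth or not flag(self):
            return
        h = self.size / 2
        for dx in (0.0, h):
            for dy in (0.0, h):
                self.children.append(Cell(self.x0 + dx, self.y0 + dy, h))
        for child in self.children:
            child.refine(flag, max_depth, depth + 1)

    def leaves(self):
        """Leaf cells, i.e. the active computational cells."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]
```

Refining only the cells that contain a point of interest reproduces the characteristic clustering of fine cells near geometry: starting from a unit root cell and refining two levels toward one point yields 7 leaves (3 coarse siblings plus 4 fine cells).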
Refining and defining the Program Dependence Web
Campbell, P.L.; Krishna, K.; Ballance, R.A.
1993-05-01
The Program Dependence Web (PDW) is an intermediate representation for a computer program, which can be interpreted under control-driven, data-driven or demand-driven disciplines. This document completes the definition for the PDW. This includes operational definitions for the nodes and arcs and a description of how PDWs are interpreted. The general structure for conditionals and loops is shown, accompanied by examples. The definition provided here is a refinement of the original one: a new node, the "β node," replaces the μ node, and the η^Τ node is eliminated.
Adaptive refinement tools for tetrahedral unstructured grids
NASA Technical Reports Server (NTRS)
Pao, S. Paul (Inventor); Abdol-Hamid, Khaled S. (Inventor)
2011-01-01
An exemplary embodiment providing one or more improvements includes software which is robust, efficient, and has a very fast run time for user directed grid enrichment and flow solution adaptive grid refinement. All user selectable options (e.g., the choice of functions, the choice of thresholds, etc.), other than a pre-marked cell list, can be entered on the command line. The ease of application is an asset for flow physics research and preliminary design CFD analysis where fast grid modification is often needed to deal with unanticipated development of flow details.
Refinement Of Hexahedral Cells In Euler Flow Computations
NASA Technical Reports Server (NTRS)
Melton, John E.; Cappuccio, Gelsomina; Thomas, Scott D.
1996-01-01
Topologically Independent Grid, Euler Refinement (TIGER) computer program solves Euler equations of three-dimensional, unsteady flow of inviscid, compressible fluid by numerical integration on unstructured hexahedral coordinate grid refined where necessary to resolve shocks and other details. Hexahedral cells subdivided, each into eight smaller cells, as needed to refine computational grid in regions of high flow gradients. Grid Interactive Refinement and Flow-Field Examination (GIRAFFE) computer program written in conjunction with TIGER program to display computed flow-field data and to assist researcher in verifying specified boundary conditions and refining grid.
Empirical Analysis and Refinement of Expert System Knowledge Bases
Weiss, Sholom M.; Politakis, Peter; Ginsberg, Allen
1986-01-01
Recent progress in knowledge base refinement for expert systems is reviewed. Knowledge base refinement is characterized by the constrained modification of rule components in an existing knowledge base. The goals are to localize specific weaknesses in a knowledge base and to improve an expert system's performance. Systems that automate some aspects of knowledge base refinement can have a significant impact on the related problems of knowledge base acquisition, maintenance, verification, and learning from experience. The SEEK empirical analysis and refinement system is reviewed and its successor system, SEEK2, is introduced. Important areas for future research in knowledge base refinement are described.
A refined orbit for the satellite of asteroid (107) Camilla
NASA Astrophysics Data System (ADS)
Pajuelo, Myriam Virginia; Carry, Benoit; Vachier, Frederic; Berthier, Jerome; Descamp, Pascal; Merline, William J.; Tamblyn, Peter M.; Conrad, Al; Storrs, Alex; Margot, Jean-Luc; Marchis, Frank; Kervella, Pierre; Girard, Julien H.
2015-11-01
The satellite of the Cybele asteroid (107) Camilla was discovered in March 2001 using the Hubble Space Telescope (Storrs et al., 2001, IAUC 7599). From a set of 23 positions derived from adaptive optics observations obtained over three years with the ESO VLT, Keck-II and Gemini-North telescopes, Marchis et al. (2008, Icarus 196) determined its orbit to be nearly circular. In the new work reported here, we compiled, reduced, and analyzed observations at 39 epochs (including the 23 positions previously analyzed) by adding observations taken from data archives: HST in 2001; Keck in 2002, 2003, and 2009; Gemini in 2010; and VLT in 2011. The present dataset hence contains twice as many epochs as the prior analysis and covers a time span that is three times longer (more than a decade). We use our orbit determination algorithm Genoid (GENetic Orbit IDentification), a genetic-based algorithm that relies on a metaheuristic method and a dynamical model of the Solar System (Vachier et al., 2012, A&A 543). The method uses two models: a simple Keplerian model to minimize the search time for an orbital solution, exploring a wide space of solutions; and a full N-body problem that includes the gravitational field of the primary asteroid up to 4th order. The orbit we derive fits all 39 observed positions of the satellite with an RMS residual of only milli-arcseconds, which corresponds to sub-pixel accuracy. We found the orbit of the satellite to be circular and roughly aligned with the equatorial plane of Camilla. The refined mass of the system is (12 ± 1) × 10^18 kg, for an orbital period of 3.71 days. We will present this improved orbital solution of the satellite of Camilla, as well as predictions for upcoming stellar occultation events.
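The reported system mass follows from the fitted orbit via Kepler's third law. A sketch, using the abstract's 3.71-day period and an illustrative semi-major axis of about 1.28 × 10^6 m; the abstract does not state the separation, so that value is back-solved here purely so the numbers close with the reported (12 ± 1) × 10^18 kg.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def system_mass(a_m, period_s):
    """Total system mass from Kepler's third law: M = 4*pi^2 * a^3 / (G * T^2)."""
    return 4 * math.pi ** 2 * a_m ** 3 / (G * period_s ** 2)

T = 3.71 * 86400  # orbital period from the abstract, in seconds
a = 1.28e6        # assumed semi-major axis in meters (~1280 km), illustrative
mass = system_mass(a, T)  # ~1.2e19 kg, consistent with (12 +/- 1) x 10^18 kg
```

The same relation, run in reverse inside an orbit fitter such as Genoid, is what turns astrometric positions into a mass estimate; the full N-body model adds the primary's non-spherical gravity field on top of this Keplerian baseline.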
Deformable elastic network refinement for low-resolution macromolecular crystallography
Schröder, Gunnar F.; Levitt, Michael; Brunger, Axel T.
2014-09-01
An overview of applications of the deformable elastic network (DEN) refinement method is presented together with recommendations for its optimal usage. Crystals of membrane proteins and protein complexes often diffract to low resolution owing to their intrinsic molecular flexibility, heterogeneity or the mosaic spread of micro-domains. At low resolution, the building and refinement of atomic models is a more challenging task. The deformable elastic network (DEN) refinement method developed previously has been instrumental in the determination of several structures at low resolution. Here, DEN refinement is reviewed, recommendations for its optimal usage are provided and its limitations are discussed. Representative examples of the application of DEN refinement to challenging cases of refinement at low resolution are presented. These cases include soluble as well as membrane proteins determined at limiting resolutions ranging from 3 to 7 Å. Potential extensions of the DEN refinement technique and future perspectives for the interpretation of low-resolution crystal structures are also discussed.
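At its core, a DEN restraint is a harmonic network over selected atom pairs whose equilibrium distances are allowed to drift ("deform") toward the refined model. A schematic sketch of that idea only; the functional form, force constant, and the blending parameter gamma are simplified stand-ins for the published method's parameters.

```python
import numpy as np

def den_energy(coords, pairs, d_eq, k=1.0):
    """Harmonic network restraint energy E = k * sum (d_ij - d_eq_ij)^2
    over selected atom pairs (schematic of the DEN restraint form)."""
    e = 0.0
    for (i, j), d0 in zip(pairs, d_eq):
        d = np.linalg.norm(coords[i] - coords[j])
        e += k * (d - d0) ** 2
    return e

def update_equilibrium(d_eq, d_current, gamma=0.5):
    """'Deform' the network: move equilibrium distances toward the current
    model by a factor gamma (gamma=0 pins the start model, gamma=1 lets
    the network follow the refined model freely)."""
    return [(1 - gamma) * d0 + gamma * d for d0, d in zip(d_eq, d_current)]
```

Alternating energy minimization against `den_energy` with periodic `update_equilibrium` calls mimics the deform-and-refine cycle; the balance set by gamma is precisely the kind of usage parameter the paper gives recommendations for.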
Rapid Glass Refiner Development Program, Final report
1995-02-20
A rapid glass refiner (RGR) technology which could be applied to both conventional and advanced glass melting systems would significantly enhance the productivity and competitiveness of the glass industry in the United States. Therefore, Vortec Corporation, with the support of the US Department of Energy (US DOE) under Cooperative Agreement No. DE-FC07-90ID12911, conducted a research and development program for a unique and innovative approach to rapid glass refining. To provide focus for this research effort, container glass was the primary target from among the principal glass types, based on its market size and potential for significant energy savings. Container glass products represent the largest segment of the total glass industry, accounting for 60% of the tonnage produced and over 40% of the annual energy consumption of 232 trillion Btu/yr. Projections of energy consumption and the market penetration of advanced melting and fining into the container glass industry yield a potential energy savings of 7.9 trillion Btu/yr by the year 2020.
HEATR project: ATR algorithm parallelization
NASA Astrophysics Data System (ADS)
Deardorf, Catherine E.
1998-09-01
High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPC's for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model- based and training-based (template-based) arena in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.
Cosmos++: Relativistic Magnetohydrodynamics on Unstructured Grids with Local Adaptive Refinement
Anninos, P; Fragile, P C; Salmonson, J D
2005-05-06
A new code and methodology are introduced for solving the fully general relativistic magnetohydrodynamic (GRMHD) equations using time-explicit, finite-volume discretization. The code has options for solving the GRMHD equations using traditional artificial-viscosity (AV) or non-oscillatory central difference (NOCD) methods, or a new extended AV (eAV) scheme using artificial viscosity together with a dual energy-flux-conserving formulation. The dual energy approach allows for accurate modeling of highly relativistic flows at boost factors well beyond what has been achieved to date by standard artificial-viscosity methods. It provides the benefit of Godunov methods in capturing highly Lorentz-boosted flows, but without complicated Riemann solvers, and the advantages of traditional artificial-viscosity methods in their speed and flexibility. Additionally, the GRMHD equations are solved on an unstructured grid that supports local adaptive mesh refinement using a fully threaded oct-tree (in three dimensions) network to traverse the grid hierarchy across levels and immediate neighbors. A number of tests are presented to demonstrate the robustness of the numerical algorithms and adaptive mesh framework over a wide spectrum of problems, boosts, and astrophysical applications, including relativistic shock tubes, shock collisions, magnetosonic shocks, Alfven wave propagation, blast waves, magnetized Bondi flow, and the magneto-rotational instability in Kerr black hole spacetimes.
Refinement of ground reference data with segmented image data
NASA Technical Reports Server (NTRS)
Robinson, Jon W.; Tilton, James C.
1991-01-01
One of the ways to determine ground reference data (GRD) for satellite remote sensing data is to photo-interpret low altitude aerial photographs and then digitize the cover types on a digitized tablet and register them to 7.5 minute U.S.G.S. maps (that were themselves digitized). The resulting GRD can be registered to the satellite image or, vice versa. Unfortunately, there are many opportunities for error when using digitizing tablet and the resolution of the edges for the GRD depends on the spacing of the points selected on the digitizing tablet. One of the consequences of this is that when overlaid on the image, errors and missed detail in the GRD become evident. An approach is discussed for correcting these errors and adding detail to the GRD through the use of a highly interactive, visually oriented process. This process involves the use of overlaid visual displays of the satellite image data, the GRD, and a segmentation of the satellite image data. Several prototype programs were implemented which provide means of taking a segmented image and using the edges from the reference data to mask out these segment edges that are beyond a certain distance from the reference data edges. Then using the reference data edges as a guide, those segment edges that remain and that are judged not to be image versions of the reference edges are manually marked and removed. The prototype programs that were developed and the algorithmic refinements that facilitate execution of this task are described.
Dimensional reduction as a tool for mesh refinement and tracking singularities of PDEs
Stinis, Panagiotis
2007-06-10
We present a collection of algorithms which utilize dimensional reduction to perform mesh refinement and study possibly singular solutions of time-dependent partial differential equations. The algorithms are inspired by constructions used in statistical mechanics to evaluate the properties of a system near a critical point. The first algorithm allows the accurate determination of the time of occurrence of a possible singularity. The second algorithm is an adaptive mesh refinement scheme which can be used to approach efficiently the possible singularity. Finally, the third algorithm uses the second algorithm until the available resolution is exhausted (as we approach the possible singularity) and then switches to a dimensionally reduced model which, when accurate, can follow faithfully the solution beyond the time of occurrence of the purported singularity. An accurate dimensionally reduced model should dissipate energy at the right rate. We construct two variants of each algorithm. The first variant assumes that we have actual knowledge of the reduced model. The second variant assumes that we know the form of the reduced model, i.e., the terms appearing in the reduced model, but not necessarily their coefficients. In this case, we also provide a way of determining the coefficients. We present numerical results for the Burgers equation with zero and nonzero viscosity to illustrate the use of the algorithms.
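The adaptive refinement step in schemes like the one above typically flags cells for refinement where the solution steepens. A minimal, hedged sketch of such a gradient-based flagging criterion (the threshold and profile are illustrative, not taken from the paper):

```python
import numpy as np

# Illustrative gradient-based refinement flag for a steepening profile,
# in the spirit of an adaptive scheme approaching a possible singularity.
def flag_for_refinement(x, u, tol=5.0):
    """Return indices of cells whose local gradient magnitude exceeds tol."""
    du = np.abs(np.diff(u) / np.diff(x))
    return np.where(du > tol)[0]

x = np.linspace(0.0, 1.0, 101)
u = np.tanh(20 * (0.5 - x))        # steep front near x = 0.5
flags = flag_for_refinement(x, u)
# flagged cells cluster around the steep front at x ~ 0.5
```

In an actual AMR driver the flagged cells would be subdivided and the solution interpolated onto the finer cells before the next time step.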
Application of adaptive mesh refinement to particle-in-cell simulations of plasmas and beams
Vay, J.-L.; Colella, P.; Kwan, J.W.; McCorquodale, P.; Serafini, D.B.; Friedman, A.; Grote, D.P.; Westenskow, G.; Adam, J.-C.; Heron, A.; Haber, I.
2003-11-04
Plasma simulations are often rendered challenging by the disparity of scales in time and in space which must be resolved. When these disparities are in distinctive zones of the simulation domain, a method which has proven to be effective in other areas (e.g. fluid dynamics simulations) is the mesh refinement technique. We briefly discuss the challenges posed by coupling this technique with plasma Particle-In-Cell simulations, and present examples of application in Heavy Ion Fusion and related fields which illustrate the effectiveness of the approach. We also report on the status of a collaboration under way at Lawrence Berkeley National Laboratory between the Applied Numerical Algorithms Group (ANAG) and the Heavy Ion Fusion group to upgrade ANAG's mesh refinement library Chombo to include the tools needed by Particle-In-Cell simulation codes.
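At the core of any Particle-In-Cell step is the deposition of particle quantities onto the mesh; coupling this with mesh refinement (particles crossing coarse/fine boundaries) is precisely the challenge the abstract discusses. A hedged sketch of standard cloud-in-cell (CIC) deposition on a single periodic 1-D grid (grid size and particle data are illustrative):

```python
import numpy as np

# Sketch of cloud-in-cell (CIC) charge deposition, the particle-to-mesh
# step of a PIC code, on a single-level periodic 1-D grid. Illustrative only.
def deposit_cic(positions, weights, nx, dx):
    rho = np.zeros(nx)
    for x, w in zip(positions, weights):
        i = int(x / dx)                # cell containing the particle
        f = x / dx - i                 # fractional offset within the cell
        rho[i % nx] += w * (1 - f)     # share charge between the two
        rho[(i + 1) % nx] += w * f     # nearest grid points (periodic)
    return rho / dx                    # convert to charge density

rho = deposit_cic([0.25], [1.0], nx=4, dx=0.25)
# the particle sits exactly on node 1, so all charge lands there
```

With mesh refinement, the same deposition must be done consistently on each level, with care taken so total charge is conserved across coarse/fine interfaces.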
Hornung, R.D.
1996-12-31
An adaptive local mesh refinement (AMR) algorithm originally developed for unsteady gas dynamics is extended to multi-phase flow in porous media. Within the AMR framework, we combine specialized numerical methods to treat the different aspects of the partial differential equations. Multi-level iteration and domain decomposition techniques are incorporated to accommodate elliptic/parabolic behavior. High-resolution shock capturing schemes are used in the time integration of the hyperbolic mass conservation equations. When combined with AMR, these numerical schemes provide high resolution locally in a more efficient manner than if they were applied on a uniformly fine computational mesh. We will discuss the interplay of physical, mathematical, and numerical concerns in the application of adaptive mesh refinement to flow in porous media problems of practical interest.
40 CFR 80.235 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2014 CFR
2014-07-01
...) The total corporate crude oil capacity of each refinery as reported to the Energy Information... and had an average crude oil capacity less than or equal to 155,000 bpcd. Where appropriate, the employee and crude oil capacity criteria for such refiners will be based on the most recent 12 months...
40 CFR 80.235 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2013 CFR
2013-07-01
...) The total corporate crude oil capacity of each refinery as reported to the Energy Information... and had an average crude oil capacity less than or equal to 155,000 bpcd. Where appropriate, the employee and crude oil capacity criteria for such refiners will be based on the most recent 12 months...
Global path planning of mobile robots using a memetic algorithm
NASA Astrophysics Data System (ADS)
Zhu, Zexuan; Wang, Fangxiao; He, Shan; Sun, Yiwen
2015-08-01
In this paper, a memetic algorithm for global path planning (MAGPP) of mobile robots is proposed. MAGPP is a synergy of genetic algorithm (GA) based global path planning and a local path refinement. Particularly, candidate path solutions are represented as GA individuals and evolved with evolutionary operators. In each GA generation, the local path refinement is applied to the GA individuals to rectify and improve the encoded paths. MAGPP is characterised by a flexible path encoding scheme, which is introduced to encode the obstacles bypassed by a path. Both path length and smoothness are considered as fitness evaluation criteria. MAGPP is tested on simulated maps and compared with counterpart algorithms. The experimental results demonstrate the efficiency of MAGPP, which obtains better solutions than the compared algorithms.
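The memetic structure described above (a GA loop whose individuals are improved by a local search each generation) can be sketched generically. This is not MAGPP's path encoding; the toy fitness (1-D function minimization) merely stands in for path length/smoothness, and all parameters are illustrative.

```python
import random

# Generic memetic-algorithm skeleton: GA evolution plus per-generation
# local refinement of individuals, as in the MAGPP design. Illustrative only.
def fitness(x):
    return (x - 3.0) ** 2           # toy objective, optimum at x = 3

def local_refine(x, step=0.1):
    """Hill-climb: keep the best of small perturbations (the 'meme')."""
    return min([x, x - step, x + step], key=fitness)

def memetic(pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop = [local_refine(x) for x in pop]       # local refinement step
        pop.sort(key=fitness)                      # selection
        parents = pop[: pop_size // 2]
        children = [rng.choice(parents) + rng.gauss(0, 0.5)  # mutation
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)

best = memetic()
# best converges toward the optimum x = 3
```

The design point is the interleaving: global exploration by the GA, local exploitation by the refinement, each generation.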
Proving refinement transformations for deriving high-assurance software
Winter, V.L.; Boyle, J.M.
1996-05-01
The construction of a high-assurance system requires some evidence, ideally a proof, that the system as implemented will behave as required. Direct proofs of implementations do not scale up well as systems become more complex and therefore are of limited value. In recent years, refinement-based approaches have been investigated as a means to manage the complexity inherent in the verification process. In a refinement-based approach, a high-level specification is converted into an implementation through a number of refinement steps. The hope is that the proofs of the individual refinement steps will be easier than a direct proof of the implementation. However, if stepwise refinement is performed manually, the number of steps is severely limited, implying that the size of each step is large. If refinement steps are large, then proofs of their correctness will not be much easier than a direct proof of the implementation. The authors describe an approach to refinement-based software development that is based on automatic application of refinements, expressed as program transformations. This automation has the desirable effect that the refinement steps can be extremely small and, thus, easy to prove correct. They give an overview of the TAMPR transformation system that they use for automated refinement. They then focus on some aspects of the semantic framework that they have been developing to enable proofs that TAMPR transformations are correctness preserving. With this framework, proofs of correctness for transformations can be obtained with the assistance of an automated reasoning system.
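A correctness-preserving transformation of the kind discussed above can be illustrated on a tiny expression language: a rewrite rule is applied mechanically, and preservation of semantics is checked against an evaluator. This is a hedged toy example, not TAMPR itself; the language and rule are invented for illustration.

```python
# Toy illustration of a correctness-preserving refinement transformation:
# a mechanical rewrite rule on a small expression language, checked
# against the original semantics. Not TAMPR; names are illustrative.
def evaluate(expr, env):
    op = expr[0]
    if op == "num":  return expr[1]
    if op == "var":  return env[expr[1]]
    if op == "add":  return evaluate(expr[1], env) + evaluate(expr[2], env)
    if op == "mul":  return evaluate(expr[1], env) * evaluate(expr[2], env)
    raise ValueError(op)

def refine(expr):
    """Rewrite e * 2 into e + e: one small 'refinement step'."""
    if expr[0] in ("add", "mul"):
        a, b = refine(expr[1]), refine(expr[2])
        if expr[0] == "mul" and b == ("num", 2):
            return ("add", a, a)
        return (expr[0], a, b)
    return expr

e = ("mul", ("add", ("var", "x"), ("num", 1)), ("num", 2))
env = {"x": 4}
assert evaluate(refine(e), env) == evaluate(e, env)   # semantics preserved
```

Because the rule is small and purely syntactic, its correctness proof reduces to showing `e * 2 == e + e` under the evaluator's semantics, which is exactly the "small steps are easy to prove" point the abstract makes.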