Adaptive mesh and algorithm refinement using direct simulation Monte Carlo
Garcia, A.L.; Bell, J.B.; Crutchfield, W.Y.; Alder, B.J.
1999-09-01
Adaptive mesh and algorithm refinement (AMAR) embeds a particle method within a continuum method at the finest level of an adaptive mesh refinement (AMR) hierarchy. The coupling between the particle region and the overlaying continuum grid is algorithmically equivalent to that between the fine and coarse levels of AMR. Direct simulation Monte Carlo (DSMC) is used as the particle algorithm embedded within a Godunov-type compressible Navier-Stokes solver. Several examples are presented and compared with purely continuum calculations.
Algorithm refinement for fluctuating hydrodynamics
Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.
2007-07-03
This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently-developed solver for the LLNS equations based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.
Algorithm refinement for the stochastic Burgers' equation
Bell, John B.; Foo, Jasmine; Garcia, Alejandro L. (E-mail: algarcia@algarcia.org)
2007-04-10
In this paper, we develop an algorithm refinement (AR) scheme for an excluded random walk model whose mean field behavior is given by the viscous Burgers' equation. AR hybrids use the adaptive mesh refinement framework to model a system using a molecular algorithm where desired while allowing a computationally faster continuum representation to be used in the remainder of the domain. The focus in this paper is the role of fluctuations on the dynamics. In particular, we demonstrate that it is necessary to include a stochastic forcing term in Burgers' equation to accurately capture the correct behavior of the system. The conclusion we draw from this study is that the fidelity of multiscale methods that couple disparate algorithms depends on the consistent modeling of fluctuations in each algorithm and on a coupling, such as algorithm refinement, that preserves this consistency.
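The abstract's central point, that a stochastic forcing term must be added to Burgers' equation to capture the correct fluctuating behavior, can be illustrated with a minimal finite-volume sketch. This is a generic fluctuating Burgers scheme, not the paper's excluded-random-walk/AR hybrid; the noise amplitude, grid, and time step are arbitrary illustrative choices:

```python
import numpy as np

def stochastic_burgers_step(u, dx, dt, nu, noise_amp, rng):
    """One explicit conservative step of viscous Burgers' equation with a
    stochastic flux: u_t + (u^2/2)_x = nu*u_xx + (noise)_x, periodic domain.
    Illustrative only; noise_amp is an arbitrary amplitude."""
    f_face = 0.5 * (0.5 * u**2 + 0.5 * np.roll(u, -1)**2)  # centered flux
    # independent white-noise flux sample per cell face, scaled for the cell
    s_face = noise_amp * rng.standard_normal(u.size) / np.sqrt(dx * dt)
    flux = f_face + s_face
    return (u
            - dt / dx * (flux - np.roll(flux, 1))
            + nu * dt / dx**2 * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)))

# drive it: because the noise enters as a flux difference, total "mass" is conserved
rng = np.random.default_rng(0)
n = 64
dx, dt = 1.0 / n, 1.0e-4
u = np.sin(2.0 * np.pi * dx * np.arange(n))
mass0 = u.sum()
for _ in range(100):
    u = stochastic_burgers_step(u, dx, dt, nu=0.01, noise_amp=0.005, rng=rng)
```

Writing the noise as the divergence of a face flux keeps the update conservative, which is exactly the kind of consistency an AR coupling between particle and continuum regions must preserve.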
Performance of a streaming mesh refinement algorithm.
Thompson, David C.; Pebay, Philippe Pierre
2004-08-01
In SAND report 2004-1617, we outline a method for edge-based tetrahedral subdivision that does not rely on saving state or communication to produce compatible tetrahedralizations. This report analyzes the performance of the technique by characterizing (a) mesh quality, (b) execution time, and (c) traits of the algorithm that could affect quality or execution time differently for different meshes. It also details the method used to debug the several hundred subdivision templates that the algorithm relies upon. Mesh quality is on par with other similar refinement schemes and throughput on modern hardware can exceed 600,000 output tetrahedra per second. But if you want to understand the traits of the algorithm, you have to read the report!
Fully implicit adaptive mesh refinement MHD algorithm
NASA Astrophysics Data System (ADS)
Philip, Bobby
2005-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite (FAC) algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations will be presented on a variety of problems.
A parallel adaptive mesh refinement algorithm
NASA Technical Reports Server (NTRS)
Quirk, James J.; Hanebutte, Ulf R.
1993-01-01
Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.
An adaptive mesh refinement algorithm for the discrete ordinates method
Jessee, J.P.; Fiveland, W.A.; Howell, L.H.; Colella, P.; Pember, R.B.
1996-03-01
The discrete ordinates form of the radiative transport equation (RTE) is spatially discretized and solved using an adaptive mesh refinement (AMR) algorithm. This technique permits the local grid refinement to minimize spatial discretization error of the RTE. An error estimator is applied to define regions for local grid refinement; overlapping refined grids are recursively placed in these regions; and the RTE is then solved over the entire domain. The procedure continues until the spatial discretization error has been reduced to a sufficient level. The following aspects of the algorithm are discussed: error estimation, grid generation, communication between refined levels, and solution sequencing. This initial formulation employs the step scheme, and is valid for absorbing and isotropically scattering media in two-dimensional enclosures. The utility of the algorithm is tested by comparing the convergence characteristics and accuracy to those of the standard single-grid algorithm for several benchmark cases. The AMR algorithm provides a reduction in memory requirements and maintains the convergence characteristics of the standard single-grid algorithm; however, the cases illustrate that efficiency gains of the AMR algorithm will not be fully realized until three-dimensional geometries are considered.
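The estimate-flag-refine cycle described above can be sketched in one dimension. The second-difference flagging below is a generic AMR error estimator, not the paper's RTE-specific one, and the one-shot midpoint insertion stands in for the recursive placement of overlapping refined grids:

```python
import numpy as np

def refine_flags(q, tol):
    """Flag cells whose undivided second difference exceeds tol (a common
    generic AMR error estimator; the paper applies its own RTE-based one)."""
    flags = np.zeros(q.size, dtype=bool)
    flags[1:-1] = np.abs(q[2:] - 2.0 * q[1:-1] + q[:-2]) > tol
    return flags

def refine_grid(x, flags):
    """One-shot local refinement of a 1-D grid: insert a midpoint to the
    right of every flagged node."""
    out = []
    for i in range(x.size - 1):
        out.append(x[i])
        if flags[i]:
            out.append(0.5 * (x[i] + x[i + 1]))
    out.append(x[-1])
    return np.array(out)

# a solution with a kink triggers refinement only near the kink
x = np.linspace(0.0, 1.0, 11)
q = np.abs(x - 0.5)
refined = refine_grid(x, refine_flags(q, tol=0.05))
```

In a full AMR code this loop repeats, re-estimating the error on the refined grid, until the estimator falls below tolerance everywhere.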
Incremental refinement of a multi-user-detection algorithm (II)
NASA Astrophysics Data System (ADS)
Vollmer, M.; Götze, J.
2003-05-01
Multi-user detection is a technique proposed for mobile radio systems based on the CDMA principle, such as the upcoming UMTS. While offering an elegant solution to problems such as intra-cell interference, it demands very significant computational resources. In this paper, we present a high-level approach for reducing the required resources for performing multi-user detection in a 3GPP TDD multi-user system. This approach is based on a displacement representation of the parameters that describe the transmission system, and a generalized Schur algorithm that works on this representation. The Schur algorithm naturally leads to a highly parallel hardware implementation using CORDIC cells. It is shown that this hardware architecture can also be used to compute the initial displacement representation. It is very beneficial to introduce incremental refinement structures into the solution process, both at the algorithmic level and in the individual cells of the hardware architecture. We detail these approximations and present simulation results that confirm their effectiveness.
Fully implicit adaptive mesh refinement algorithm for reduced MHD
NASA Astrophysics Data System (ADS)
Philip, Bobby; Pernice, Michael; Chacon, Luis
2006-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006)]
MISR research-aerosol-algorithm refinements for dark water retrievals
NASA Astrophysics Data System (ADS)
Limbacher, J. A.; Kahn, R. A.
2014-11-01
We explore systematically the cumulative effect of many assumptions made in the Multi-angle Imaging SpectroRadiometer (MISR) research aerosol retrieval algorithm with the aim of quantifying the main sources of uncertainty over ocean, and correcting them to the extent possible. A total of 1129 coincident, surface-based sun photometer spectral aerosol optical depth (AOD) measurements are used for validation. Based on comparisons between these data and our baseline case (similar to the MISR standard algorithm, but without the "modified linear mixing" approximation), for 558 nm AOD < 0.10, a high bias of 0.024 is reduced by about one-third when (1) ocean surface under-light is included and the assumed whitecap reflectance at 672 nm is increased, (2) physically based adjustments in particle microphysical properties and mixtures are made, (3) an adaptive pixel selection method is used, (4) spectral reflectance uncertainty is estimated from vicarious calibration, and (5) minor radiometric calibration changes are made for the 672 and 866 nm channels. Applying (6) more stringent cloud screening (setting the maximum fraction not-clear to 0.50) brings all median spectral biases to about 0.01. When all adjustments except more stringent cloud screening are applied, and a modified acceptance criterion is used, the Root-Mean-Square-Error (RMSE) decreases for all wavelengths by 8-27% for the research algorithm relative to the baseline, and is 12-36% lower than the RMSE for the Version 22 MISR standard algorithm (SA, with no adjustments applied). At 558 nm, 87% of AOD data falls within the greater of 0.05 or 20% of validation values; 62% of the 446 nm AOD data, and > 68% of 558, 672, and 866 nm AOD values fall within the greater of 0.03 or 10%. For the Ångström exponent (ANG), 67% of 1119 validation cases for AOD > 0.01 fall within 0.275 of the sun photometer values, compared to 49% for the SA. ANG RMSE decreases by 17% compared to the SA, and the median absolute error drops by
Using Small-Step Refinement for Algorithm Verification in Computer Science Education
ERIC Educational Resources Information Center
Simic, Danijela
2015-01-01
Stepwise program refinement techniques can be used to simplify program verification. Programs are better understood since their main properties are clearly stated, and verification of rather complex algorithms is reduced to proving simple statements connecting successive program specifications. Additionally, it is easy to analyse similar…
NASA Astrophysics Data System (ADS)
Lau, Erin-Ee-Lin; Chung, Wan-Young
A novel RSSI (Received Signal Strength Indication) refinement algorithm is proposed to enhance the resolution of indoor and outdoor real-time location tracking systems. The proposed refinement algorithm is implemented in two separate phases. During the first phase, called the pre-processing step, RSSI values at different static locations are collected and processed to build a calibrated model for each reference node. Different measurement campaigns pertinent to each parameter in the model are carried out to analyze the sensitivity of RSSI. The propagation models constructed for each reference node are needed by the second phase. During this next phase, called the runtime process, real-time tracking is performed. A smoothing algorithm is proposed to minimize the dynamic fluctuation of the radio signal received from each reference node while the mobile target is moving. Filtered RSSI values are converted to distances using the formulas calibrated in the first phase. Finally, an iterative trilateration algorithm is used for position estimation. Experiments relevant to the optimization algorithm were carried out in both indoor and outdoor environments, and the results validate the feasibility of the proposed algorithm in reducing the dynamic fluctuation for more accurate position estimation.
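The runtime pipeline described above (smooth the RSSI stream, convert to distance via the calibrated model, then trilaterate) can be sketched as follows. The log-distance path-loss constants and the smoothing factor are hypothetical placeholders, not values from the paper, where they come from the per-node calibration phase:

```python
import numpy as np

def rssi_to_distance(rssi, rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model RSSI = A - 10*n*log10(d).
    A (rssi_at_1m) and n (path_loss_exp) are hypothetical calibration
    constants standing in for the paper's per-reference-node models."""
    return 10.0 ** ((rssi_at_1m - rssi) / (10.0 * path_loss_exp))

def smooth_rssi(samples, alpha=0.25):
    """Exponential smoothing of a raw RSSI stream -- one simple stand-in
    for the paper's smoothing algorithm."""
    out, s = [], samples[0]
    for r in samples:
        s = alpha * r + (1.0 - alpha) * s
        out.append(s)
    return out

def trilaterate(anchors, dists):
    """Least-squares position from >= 3 reference-node positions and
    ranges: subtract the last circle equation from the others to obtain a
    linear system, then solve it in the least-squares sense."""
    anchors = np.asarray(anchors, float)
    dists = np.asarray(dists, float)
    A = 2.0 * (anchors[-1] - anchors[:-1])
    b = (dists[:-1]**2 - dists[-1]**2
         - np.sum(anchors[:-1]**2, axis=1) + np.sum(anchors[-1]**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

For example, ranges of √2, √10 and √5 from anchors at (0,0), (4,0) and (0,3) recover the position (1,1); with noisy ranges the least-squares solve plays the role of the paper's iterative refinement.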
Improvement and Refinement of the GPS/MET Data Analysis Algorithm
NASA Technical Reports Server (NTRS)
Herman, Benjamin M.
2003-01-01
The GPS/MET project was a satellite-to-satellite active microwave atmospheric limb sounder using the Global Positioning System transmitters as signal sources. Despite its remarkable success, GPS/MET could not independently sense atmospheric water vapor and ozone. Additionally the GPS/MET data retrieval algorithm needs to be further improved and refined to enhance the retrieval accuracies in the lower tropospheric region and the upper stratospheric region. The objectives of this proposal were to address these 3 problem areas.
Efficient modularity optimization by multistep greedy algorithm and vertex mover refinement.
Schuetz, Philipp; Caflisch, Amedeo
2008-04-01
Identifying strongly connected substructures in large networks provides insight into their coarse-grained organization. Several approaches based on the optimization of a quality function, e.g., the modularity, have been proposed. We present here a multistep extension of the greedy algorithm (MSG) that allows the merging of more than one pair of communities at each iteration step. The essential idea is to prevent the premature condensation into few large communities. Upon convergence of the MSG a simple refinement procedure called "vertex mover" (VM) is used for reassigning vertices to neighboring communities to improve the final modularity value. With an appropriate choice of the step width, the combined MSG-VM algorithm is able to find solutions of higher modularity than those reported previously. The multistep extension does not alter the scaling of computational cost of the greedy algorithm. PMID:18517695
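The vertex-mover refinement step is easy to state: sweep the vertices and move each one to the neighbouring community that most raises modularity, until a sweep makes no move. The toy version below recomputes modularity from scratch at each trial for clarity; a practical implementation (and the paper's) would use incremental gain formulas, and the multistep greedy stage that precedes it is not reproduced here:

```python
from collections import defaultdict

def modularity(adj, comm):
    """Newman modularity of a partition.  adj: node -> set of neighbours
    (undirected graph); comm: node -> community id."""
    m2 = float(sum(len(nbrs) for nbrs in adj.values()))   # 2m edge ends
    e = defaultdict(float)   # fraction of edge ends internal to a community
    a = defaultdict(float)   # fraction of edge ends attached to a community
    for u, nbrs in adj.items():
        for v in nbrs:
            a[comm[u]] += 1.0 / m2
            if comm[u] == comm[v]:
                e[comm[u]] += 1.0 / m2
    return sum(e[c] - a[c] ** 2 for c in a)

def vertex_mover(adj, comm, max_sweeps=20):
    """Greedy sweeps: move each vertex to the neighbouring community that
    most improves modularity; stop when a full sweep makes no move."""
    for _ in range(max_sweeps):
        moved = False
        for u in adj:
            old = comm[u]
            best, best_q = old, modularity(adj, comm)
            for c in {comm[v] for v in adj[u]}:
                comm[u] = c
                q = modularity(adj, comm)
                if q > best_q + 1e-12:
                    best, best_q = c, q
            comm[u] = best
            moved |= best != old
        if not moved:
            break
    return comm

# two triangles bridged by one edge; start with vertex 2 misassigned
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
comm = vertex_mover(adj, {0: 0, 1: 0, 2: 1, 3: 1, 4: 1, 5: 1})
```

On this graph a single sweep moves vertex 2 back into its triangle, raising the modularity from about 0.12 to about 0.36.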
NASA Astrophysics Data System (ADS)
Lloyd, Lewis John
This work focused on developing a novel method for solving the nonlinear partial differential equations associated with thermal-hydraulic safety analysis software. Traditional methods involve solving large systems of nonlinear equations. One class of methods linearizes the nonlinear equations and attempts to minimize the nonlinear truncation error with timestep size selection. These linearized methods are characterized by low computational cost but reduced accuracy. Another class resolves those nonlinearities by using an iterative nonlinear refinement technique. However, these iterative methods are computationally expensive when multiple iterates are required to resolve the nonlinearities. These two paradigms stand at the opposite ends of a spectrum, and the middle ground had yet to be investigated. This research sought to find that middle ground, a balance between the competing incentives of computational cost and accuracy, by creating a hybrid method: a spatially-selective, nonlinear refinement (SNR) algorithm. As part of this work, the two-phase, three-field software COBRA was converted from a linearized semi-implicit solver to a nonlinearly convergent solver; an operator-based scaling that provides a physically meaningful convergence measure was developed and implemented; and the SNR algorithm was developed to enable a subdomain of the simulation to be subjected to multiple nonlinear iterates while maintaining global consistency. By selecting those areas of the computational domain where nonlinearities are expected to be high and subjecting only them to multiple nonlinear iterations, the accuracy of the nonlinear solver may be obtained without its associated computational cost.
NASA Astrophysics Data System (ADS)
Northrup, Scott A.
A new parallel implicit adaptive mesh refinement (AMR) algorithm is developed for the prediction of unsteady behaviour of laminar flames. The scheme is applied to the solution of the system of partial-differential equations governing time-dependent, two- and three-dimensional, compressible laminar flows for reactive thermally perfect gaseous mixtures. A high-resolution finite-volume spatial discretization procedure is used to solve the conservation form of these equations on body-fitted multi-block hexahedral meshes. A local preconditioning technique is used to remove numerical stiffness and maintain solution accuracy for low-Mach-number, nearly incompressible flows. A flexible block-based octree data structure has been developed and is used to facilitate automatic solution-directed mesh adaptation according to physics-based refinement criteria. The data structure also enables an efficient and scalable parallel implementation via domain decomposition. The parallel implicit formulation makes use of a dual-time-stepping like approach with an implicit second-order backward discretization of the physical time, in which a Jacobian-free inexact Newton method with a preconditioned generalized minimal residual (GMRES) algorithm is used to solve the system of nonlinear algebraic equations arising from the temporal and spatial discretization procedures. An additive Schwarz global preconditioner is used in conjunction with block incomplete LU type local preconditioners for each sub-domain. The Schwarz preconditioning and block-based data structure readily allow efficient and scalable parallel implementations of the implicit AMR approach on distributed-memory multi-processor architectures. The scheme was applied to solutions of steady and unsteady laminar diffusion and premixed methane-air combustion and was found to accurately predict key flame characteristics. For a premixed flame under terrestrial gravity, the scheme accurately predicted the frequency of the natural
2014-01-01
Background Developing suitable methods for the identification of protein complexes remains an active research area. It is important since it allows better understanding of cellular functions as well as malfunctions and it consequently leads to producing more effective cures for diseases. In this context, various computational approaches were introduced to complement high-throughput experimental methods which typically involve large datasets, are expensive in terms of time and cost, and are usually subject to spurious interactions. Results In this paper, we propose ProRank+, a method which detects protein complexes in protein interaction networks. The presented approach is mainly based on a ranking algorithm which sorts proteins according to their importance in the interaction network, and a merging procedure which refines the detected complexes in terms of their protein members. ProRank + was compared to several state-of-the-art approaches in order to show its effectiveness. It was able to detect more protein complexes with higher quality scores. Conclusions The experimental results achieved by ProRank + show its ability to detect protein complexes in protein interaction networks. Eventually, the method could potentially identify previously-undiscovered protein complexes. The datasets and source codes are freely available for academic purposes at http://faculty.uaeu.ac.ae/nzaki/Research.htm. PMID:24944073
NASA Astrophysics Data System (ADS)
Shaaban, Khaled M.; Schalkoff, Robert J.
1995-06-01
Most image processing and feature extraction algorithms consist of a composite sequence of operations to achieve a specific task. Overall algorithm capability depends upon the individual performance of each of these operations. This performance, in turn, is usually controlled by a set of a priori known (or estimated) algorithm parameters. The overall design of an image processing algorithm involves both the selection of the sub-algorithm sequence and the required operating parameters, and is done using the best available knowledge of the problem and the experience of the algorithm designer. This paper presents a dynamic and adaptive image processing algorithm development structure. The implementation of the dynamic algorithm structure requires solving a classification problem at the decision nodes of an algorithm graph, A. The number of required classifiers equals the number of decision nodes. There are several learning techniques that could be used to implement any of these classifiers. Each of these techniques, in turn, requires a training set. This training set can be generated using a modified form of the dynamic algorithm in which a human operator interface replaces all of the decision nodes. An optimization procedure (Nelder-Mead) is employed to assist the operator in finding the best parameter values. Examples of the approach using real-world imagery are shown.
NASA Astrophysics Data System (ADS)
Li, Lin; Kuai, Xi
2014-11-01
Generating a triangulated irregular network (TIN) from contour maps is the most commonly used approach to build Digital Elevation Models (DEMs) for geo-databases. A well-known problem when building a TIN is that many pan slope triangles (or PSTs) may emerge from the vertices of contour lines. Those triangles should be eliminated from the TIN by adding additional terrain points when refining the local TIN. There are many methods and algorithms available for eliminating PSTs in a TIN, but their performance may not satisfy the requirements of some applications where efficiency rather than completeness is critical. This paper investigates commonly-used processes for eliminating PSTs and puts forward a new algorithm, referred to as the 'dichotomizing' interpolation algorithm, to achieve a higher efficiency than the conventional 'skeleton' extraction algorithm. Its better performance comes from reducing the number of additional interpolated points to only those that are sufficient and necessary for eliminating PSTs. This goal is reached by dichotomizing PST polygons iteratively and locating additional points at the geometric centers of the polygons. This study verifies, both theoretically and experimentally, the higher efficiency of this new dichotomizing algorithm and also demonstrates its reliability for building DEMs in terms of accuracy for estimating terrain surface elevation.
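The core idea, inserting interpolated points inside flat (pan slope) triangles so that the refined TIN has no horizontal facets, can be sketched as below. The single-triangle centroid split and the fixed elevation offset `dz` are simplifications of my own; the paper instead dichotomizes whole PST polygons iteratively and derives elevations from the neighbouring contours:

```python
def is_flat(tri, z, eps=1e-9):
    """A pan slope triangle (PST): all three vertices share one elevation."""
    za, zb, zc = (z[v] for v in tri)
    return abs(za - zb) < eps and abs(zb - zc) < eps

def eliminate_flat_triangles(points, z, triangles, dz):
    """Split every flat triangle at its centroid, giving the new point an
    elevation offset dz (a hypothetical rule standing in for the paper's
    polygon-dichotomizing interpolation)."""
    points, z, out = list(points), list(z), []
    for tri in triangles:
        if is_flat(tri, z):
            a, b, c = tri
            # interpolated point at the geometric center of the triangle
            points.append(tuple((points[a][k] + points[b][k] + points[c][k]) / 3.0
                                for k in (0, 1)))
            z.append(z[a] + dz)
            m = len(points) - 1
            out += [(a, b, m), (b, c, m), (c, a, m)]
        else:
            out.append(tri)
    return points, z, out

pts, zz, tris = eliminate_flat_triangles(
    [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], [10.0, 10.0, 10.0], [(0, 1, 2)], dz=0.5)
```

One flat triangle becomes three sloped ones around a single new point, which is the efficiency argument in miniature: the fewer points inserted, the cheaper the refinement.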
A 3-D adaptive mesh refinement algorithm for multimaterial gas dynamics
Puckett, E.G.; Saltzman, J.S.
1991-08-12
Adaptive Mesh Refinement (AMR) in conjunction with high order upwind finite difference methods has been used effectively on a variety of problems. In this paper we discuss an implementation of an AMR finite difference method that solves the equations of gas dynamics with two material species in three dimensions. An equation for the evolution of volume fractions augments the gas dynamics system. The material interface is preserved and tracked from the volume fractions using a piecewise linear reconstruction technique. 14 refs., 4 figs.
Refinement-Cut: User-Guided Segmentation Algorithm for Translational Science
Egger, Jan
2014-01-01
In this contribution, a semi-automatic segmentation algorithm for (medical) image analysis is presented. More precisely, the approach belongs to the category of interactive contouring algorithms, which provide real-time feedback of the segmentation result. However, even with interactive real-time contouring approaches there are always cases where the user cannot find a satisfying segmentation, e.g. due to homogeneous appearance between the object and the background, or noise inside the object. For these difficult cases the algorithm still needs additional user support. However, this additional user support should be intuitive and rapidly integrated into the segmentation process, without breaking the interactive real-time segmentation feedback. I propose a solution in which the user can support the algorithm with an easy and fast placement of one or more seed points to guide it to a satisfying segmentation result even in difficult cases. These additional seeds restrict the calculation of the segmentation, while still allowing the interactive real-time feedback segmentation to continue. For a practical and genuine application in translational science, the approach has been tested on medical data from the clinical routine in 2D and 3D. PMID:24893650
Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process
NASA Astrophysics Data System (ADS)
Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh
2016-06-01
Layered manufacturing machines use the stereolithography (STL) file format to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, geometrical distortion and chordal error result. Parts manufactured with this file might not satisfy geometric dimensioning and tolerancing requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is the most suitable for the geometry under consideration. Only the wheel cap part is then manufactured on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.
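Of the three schemes compared, triangular midpoint subdivision is the simplest to sketch: each facet is split into four by joining its edge midpoints, which leaves a planar region's surface area (and enclosed volume) unchanged while quadrupling the facet count. A minimal version over parsed STL facets, with an area helper for checking:

```python
def midpoint_subdivide(facets):
    """One level of triangular midpoint subdivision: each facet becomes
    four by joining edge midpoints.  Facets are triples of 3-D points, as
    obtained after parsing an STL file."""
    def mid(p, q):
        return tuple(0.5 * (a + b) for a, b in zip(p, q))
    out = []
    for a, b, c in facets:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

def tri_area(a, b, c):
    """Area of a 3-D triangle via the cross product."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    return 0.5 * sum(x * x for x in w) ** 0.5
```

Unlike the butterfly and Loop schemes, pure midpoint subdivision never moves vertices, so it refines the mesh without smoothing the CAD-approximated shape, one plausible reason it suited the part studied here.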
A node-centered local refinement algorithm for poisson's equation in complex geometries
McCorquodale, Peter; Colella, Phillip; Grote, David P.; Vay, Jean-Luc
2004-05-04
This paper presents a method for solving Poisson's equation with Dirichlet boundary conditions on an irregular bounded three-dimensional region. The method uses a nodal-point discretization and adaptive mesh refinement (AMR) on Cartesian grids, and the AMR multigrid solver of Almgren. The discrete Laplacian operator at internal boundaries comes from either linear or quadratic (Shortley-Weller) extrapolation, and the two methods are compared. It is shown that either way, solution error is second order in the mesh spacing. Error in the gradient of the solution is first order with linear extrapolation, but second order with Shortley-Weller. Examples are given with comparison with the exact solution. The method is also applied to a heavy-ion fusion accelerator problem, showing the advantage of adaptivity.
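The difference between the two boundary treatments comes down to the stencil used at nodes adjacent to an irregular boundary. Below is a one-dimensional sketch of the Shortley-Weller discretization (a three-point second-derivative stencil on unequal spacings) and its use in a small Poisson solve; the grid, problem, and boundary location are illustrative choices, not the paper's 3-D setup:

```python
import numpy as np

def shortley_weller_d2(u_l, u_c, u_r, h_l, h_r):
    """Second derivative on unequal spacings h_l, h_r (Shortley-Weller)."""
    return 2.0 * (u_l / (h_l * (h_l + h_r))
                  - u_c / (h_l * h_r)
                  + u_r / (h_r * (h_l + h_r)))

def solve_poisson_irregular(h, b, f, g):
    """Solve -u'' = f on (0, b) with u(0) = g(0), u(b) = g(b), on a uniform
    grid of spacing h whose last interior node lies a distance b - n*h < h
    from the boundary; that node gets the Shortley-Weller stencil.
    Assumes b does not coincide with a grid node."""
    n = int(b / h)                      # index of the last interior node
    x = h * np.arange(1, n + 1)
    hr = b - n * h                      # shortened spacing at the boundary
    A = np.zeros((n, n))
    rhs = np.asarray(f(x), float).copy()
    for i in range(n - 1):              # regular three-point rows
        A[i, i] = 2.0 / h**2
        if i > 0:
            A[i, i - 1] = -1.0 / h**2
        A[i, i + 1] = -1.0 / h**2
    A[n - 1, n - 1] = 2.0 / (h * hr)    # irregular last row
    A[n - 1, n - 2] = -2.0 / (h * (h + hr))
    rhs[0] += g(0.0) / h**2
    rhs[n - 1] += 2.0 * g(b) / (hr * (h + hr))
    return x, np.linalg.solve(A, rhs)
```

The stencil is exact for quadratics, which is what lets the solution stay second-order accurate even though the local truncation error at the irregular node is only first order, the behavior reported in the abstract.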
NASA Astrophysics Data System (ADS)
Pioldi, Fabio; Ferrari, Rosalba; Rizzi, Egidio
2016-02-01
The present paper deals with the seismic modal dynamic identification of frame structures by a refined Frequency Domain Decomposition (rFDD) algorithm, autonomously formulated and implemented within MATLAB. First, the output-only identification technique is outlined analytically and then employed to characterize all modal properties. Synthetic response signals generated prior to the dynamic identification are adopted as input channels, in view of assessing a necessary condition for the procedure's efficiency. Initially, the algorithm is verified on canonical input from random excitation. Then, modal identification has been attempted successfully for given seismic input, taken as base excitation, including both strong motion data and single and multiple input ground motions. Rather than following previous attempts that investigate the role of seismic response signals in the Time Domain, this paper considers the identification analysis in the Frequency Domain. Results turn out to be very consistent with the target values, with quite limited errors in the modal estimates, including the damping ratios, which range from values in the order of 1% to 10%. Seismic excitation and high damping values, which prove critical even in the case of well-spaced modes, do not fulfill traditional FDD assumptions: this shows the robustness of the developed algorithm. Through original strategies and arrangements, the paper shows that a comprehensive rFDD modal dynamic identification of frames under seismic input is feasible, even with concomitant high damping.
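The classical FDD machinery underlying the paper's refined variant is compact: average a cross-spectral density matrix of the output channels over windows, then take an SVD at each frequency line and read modal frequencies off the peaks of the first singular value. A plain-vanilla sketch (the rFDD refinements for seismic input and high damping are not reproduced here, and the Welch parameters are arbitrary):

```python
import numpy as np

def fdd_first_singular_values(signals, fs, nseg=256):
    """Basic Frequency Domain Decomposition: Welch-style averaged
    cross-spectral matrix of the output channels, then one SVD per
    frequency line.  Peaks of the first singular value indicate modal
    frequencies."""
    signals = np.asarray(signals, float)           # (channels, samples)
    nch, ns = signals.shape
    nwin = ns // nseg
    freqs = np.fft.rfftfreq(nseg, d=1.0 / fs)
    G = np.zeros((freqs.size, nch, nch), dtype=complex)
    win = np.hanning(nseg)
    for k in range(nwin):                          # average the periodograms
        seg = signals[:, k * nseg:(k + 1) * nseg] * win
        F = np.fft.rfft(seg, axis=1)               # (channels, freq lines)
        G += np.einsum('if,jf->fij', F, np.conj(F)) / nwin
    s1 = np.array([np.linalg.svd(Gf, compute_uv=False)[0] for Gf in G])
    return freqs, s1
```

On synthetic two-channel data dominated by a single mode, the first-singular-value spectrum peaks at that mode's frequency; damping estimation (via the enhanced-FDD correlation of the SDOF bell) is a further step not shown here.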
NASA Technical Reports Server (NTRS)
Wang, Menghua
2003-01-01
The primary focus of this proposed research is atmospheric correction algorithm evaluation and development, and satellite sensor calibration and characterization. It is well known that atmospheric correction, which removes more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the derived water-leaving signals from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address issues of (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of this research is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has gone into (a) understanding and correcting the artifacts that appeared in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color remote sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) vs. the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Color Coordinating Group (IOCCG) atmospheric correction working group. In this report, I briefly present and discuss these and some other research activities.
Genetic refinement of cloud-masking algorithms for the multi-spectral thermal imager (MTI)
Hirsch, K. L.; Davis, A. B.; Harvey, N. R.; Rohde, C. A.; Brumby, Steven P.
2001-01-01
The Multi-spectral Thermal Imager (MTI) is a high-performance remote-sensing satellite designed, owned and operated by the U.S. Department of Energy, with a dual mission in environmental studies and in nonproliferation. It has enhanced spatial and radiometric resolutions and state-of-the-art calibration capabilities. This instrumental development puts a new burden on retrieval algorithm developers to pass this accuracy on to the inferred geophysical parameters. In particular, the atmospheric correction scheme assumes the intervening atmosphere will be modeled as a plane-parallel horizontally-homogeneous medium. A single dense-enough cloud in view of the ground target can easily offset reality from the calculations, hence the need for a reliable cloud-masking algorithm. Pixel-scale cloud detection relies on the simple facts that clouds are generally whiter, brighter, and colder than the ground below; spatially, dense clouds are generally large on some scale. This is a good basis for searching multispectral datacubes for cloud signatures. However, the resulting cloud mask can be very sensitive to the choice of thresholds in whiteness, brightness, temperature, and connectivity. We have used a genetic algorithm trained on (MODIS Airborne Simulator-based) simulated MTI data to design a cloud-mask. Its performance is compared quantitatively to hand-drawn training data and to the EOS/Terra MODIS cloud mask.
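The pixel-scale tests described above (clouds are whiter, brighter, and colder than the ground) amount to a small set of thresholded predicates whose cut-offs a genetic algorithm can tune. A hypothetical toy version follows; the band layout, threshold names, and decision rule are illustrative assumptions, not the actual MTI algorithm.

```python
import numpy as np

def cloud_mask(bands, brightness_thr, whiteness_thr, temp_thr):
    """Toy pixel-scale cloud test (illustrative thresholds, not MTI's).

    bands: dict with 'vis', a stack of visible-band reflectances of
    shape (k, H, W), and 'tir', a thermal brightness temperature map
    of shape (H, W). A pixel is flagged cloudy when it is bright
    (high mean reflectance), spectrally white (low variance across
    visible bands) and cold (low brightness temperature).
    """
    vis, tir = bands["vis"], bands["tir"]
    brightness = vis.mean(axis=0)   # bright: high mean reflectance
    whiteness = vis.std(axis=0)     # white: low spectral variance
    return (brightness > brightness_thr) & \
           (whiteness < whiteness_thr) & \
           (tir < temp_thr)
```

In a GA setting, the three thresholds would form the chromosome, with fitness scored against a hand-drawn truth mask such as the one used for training here.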
Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.
2009-06-15
A description is given of the algorithms implemented in the AstroBEAR adaptive mesh-refinement code for ideal magnetohydrodynamics. The code provides several high-resolution shock-capturing schemes which are constructed to maintain conserved quantities of the flow in a finite-volume sense. Divergence-free magnetic field topologies are maintained to machine precision by collating the components of the magnetic field on a cell-interface staggered grid and utilizing the constrained transport approach for integrating the induction equations. The maintenance of magnetic field topologies on adaptive grids is achieved using prolongation and restriction operators which preserve the divergence and curl of the magnetic field across collocated grids of different resolutions. The robustness and correctness of the code is demonstrated by comparing the numerical solution of various tests with analytical solutions or previously published numerical solutions obtained by other codes.
NASA Technical Reports Server (NTRS)
Davis, M. W.
1984-01-01
A Real-Time Self-Adaptive (RTSA) active vibration controller was used as the framework in developing a computer program for a generic controller that can be used to alleviate helicopter vibration. Based upon on-line identification of system parameters, the generic controller minimizes vibration in the fuselage by closed-loop implementation of higher harmonic control in the main rotor system. The new generic controller incorporates a set of improved algorithms that gives the capability to readily define many different configurations by selecting one of three different controller types (deterministic, cautious, and dual), one of two linear system models (local and global), and one or more of several methods of applying limits on control inputs (external and/or internal limits on higher harmonic pitch amplitude and rate). A helicopter rotor simulation analysis was used to evaluate the algorithms associated with the alternative controller types as applied to the four-bladed H-34 rotor mounted on the NASA Ames Rotor Test Apparatus (RTA) which represents the fuselage. After proper tuning, all three controllers provide more effective vibration reduction and converge more quickly and smoothly with smaller control inputs than the initial RTSA controller (deterministic with external pitch-rate limiting). It is demonstrated that internal limiting of the control inputs significantly improves the overall performance of the deterministic controller.
Refined Upper Tropospheric Water Vapor Retrieval Algorithm for GOES-8 Imagery
NASA Astrophysics Data System (ADS)
Molnar, G. I.; McMillan, W. W.; Lightner, K.; McCourt, M.
2002-05-01
Water vapor is the most important greenhouse gas, yet there is still large uncertainty about how it will affect global climate change. It is not well known, for example, whether global warming will initiate an overall moistening or drying of the tropical upper troposphere. Unfortunately, longer-term, reliable observations of upper tropospheric humidity (UTH), which has significant control on outgoing longwave radiation, are few and far between. On one hand, the older radiosonde observations are very unreliable in the upper troposphere. On the other hand, satellite-observation-based UTH retrievals are still in their infancy. Development of satellite-based UTH retrieval schemes requires reliable "ground truth" measurements and accurate radiative transfer calculations. Here, we extend and update the Soden and Bretherton [1993] UTH-retrieval method for GOES-8 to correspond more accurately with recent radiosonde measurements, using line-by-line radiative transfer calculations to model the satellite-observed radiances. We make use of the high-quality UTH profiles obtained during the CAMEX-4 measurement campaign over the northwestern Caribbean during Aug. 16-Sept. 24, 2001. Co-located GOES-8 6.7 micron and 11 micron channel radiances are then used to fine-tune the satellite-based UTH retrieval algorithm. The satellite radiances are also modeled using the KCARTA line-by-line radiative transfer code developed at UMBC. Finally, we update the GOES-8 UTH-retrieval scheme coefficients to reflect the use of better "ground truth" and improved radiative transfer calculations, as well as the potential deterioration of the (uncalibrated) satellite radiances.
A revised partiality model and post-refinement algorithm for X-ray free-electron laser data
Ginn, Helen Mary; Brewster, Aaron S.; Hattne, Johan; Evans, Gwyndaf; Wagner, Armin; Grimes, Jonathan M.; Sauter, Nicholas K.; Sutton, Geoff; Stuart, David Ian
2015-05-23
An updated partiality model and post-refinement algorithm for XFEL snapshot diffraction data is presented and confirmed by observing anomalous density for S atoms at an X-ray wavelength of 1.3 Å. Research towards using X-ray free-electron laser (XFEL) data to solve structures using experimental phasing methods such as sulfur single-wavelength anomalous dispersion (SAD) has been hampered by shortcomings in the diffraction models for X-ray diffraction from FELs. Owing to errors in the orientation matrix and overly simple partiality models, researchers have required large numbers of images to converge to reliable estimates for the structure-factor amplitudes, which may not be feasible for all biological systems. Here, data for cytoplasmic polyhedrosis virus type 17 (CPV17) collected at 1.3 Å wavelength at the Linac Coherent Light Source (LCLS) are revisited. A previously published definition of a partiality model for reflections illuminated by self-amplified spontaneous emission (SASE) pulses is built upon, which defines a fraction between 0 and 1 based on the intersection of a reflection with a spread of Ewald spheres modelled by a super-Gaussian wavelength distribution in the X-ray beam. A method of post-refinement to refine the parameters of this model is suggested. This has generated a merged data set with an overall discrepancy (by calculating the Rsplit value) of 3.15% to 1.46 Å resolution from a 7225-image data set. The atomic numbers of C, N and O atoms in the structure are distinguishable in the electron-density map. There are 13 S atoms within the 237 residues of CPV17, excluding the initial disordered methionine. These only possess 0.42 anomalous scattering electrons each at 1.3 Å wavelength, but the 12 that have single predominant positions are easily detectable in the anomalous difference Fourier map. It is hoped that these improvements will lead towards XFEL experimental phase determination and structure determination by sulfur SAD and will generally increase the utility of the method for difficult cases.
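The super-Gaussian spectral envelope at the heart of the partiality model can be illustrated with a hypothetical weight function: the reflection's partiality falls off super-Gaussianly with the distance between the wavelength that would place it exactly on the Ewald sphere and the centre of the pulse's spectrum. The function name, exponent, and normalization below are illustrative assumptions, not the published implementation.

```python
import numpy as np

def partiality(lam_hit, lam0, sigma, exponent=1.5):
    """Super-Gaussian partiality weight in [0, 1] (illustrative sketch).

    lam_hit: wavelength at which the reflection would sit exactly on
    the Ewald sphere; lam0, sigma: centre and width of the SASE
    pulse's spectral envelope; exponent > 1 flattens the top of the
    profile relative to a plain Gaussian. The weight is 1 at the
    spectral centre and decays away from it.
    """
    z = ((lam_hit - lam0) ** 2) / (2.0 * sigma ** 2)
    return float(np.exp(-(z ** exponent)))
```

Post-refinement would then adjust parameters such as `lam0`, `sigma`, and the orientation matrix per image to maximize agreement of partiality-corrected intensities across the data set.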
Commentary to "Multiple Grammars and Second Language Representation," by Luiz Amaral and Tom Roeper
ERIC Educational Resources Information Center
Pérez-Leroux, Ana T.
2014-01-01
In this commentary, the author defends the Multiple Grammars (MG) theory proposed by Luiz Amaral and Tom Roeper (A&R) in the present issue. Topics discussed include second language acquisition, the concept of developmental optionality, and the idea that structural decisions involve the lexical dimension. The author states that A&R's…
Omnivorous Representation Might Lead to Indigestion: Commentary on Amaral and Roeper
ERIC Educational Resources Information Center
Slabakova, Roumyana
2014-01-01
This article offers commentary that the Multiple Grammar (MG) language acquisition theory proposed by Luiz Amaral and Tom Roeper (A&R) in the present issue lacks elaboration of the psychological mechanisms at work in second language acquisition. Topics discussed include optionality in a speaker's grammar and the rules of verb position in…
Wake Up, It Is 2013! Commentary on Luiz Amaral and Tom Roeper's Article
ERIC Educational Resources Information Center
Muysken, Pieter
2014-01-01
This article examines the Multiple Grammars (MG) theory proposed by Luiz Amaral and Tom Roeper in the present issue and presents a critique of the research that went into the theory. Topics discussed include the allegation that the bilinguals and second language learners in the original article are primarily students in an academic setting, Amaral…
Low-thrust orbit transfer optimization with refined Q-law and multi-objective genetic algorithm
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Petropoulos, Anastassios E.; von Allmen, Paul
2005-01-01
An optimization method for low-thrust orbit transfers around a central body is developed using the Q-law and a multi-objective genetic algorithm. In the hybrid method, the Q-law generates candidate orbit transfers, and the multi-objective genetic algorithm optimizes the Q-law control parameters in order to simultaneously minimize both the consumed propellant mass and the flight time of the orbit transfer. This paper addresses the problem of finding optimal orbit transfers for low-thrust spacecraft.
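The selection step such a multi-objective genetic algorithm relies on keeps only candidates that are not dominated in both objectives (propellant mass and flight time). A generic non-dominated filter can be sketched as follows; this is a plain Pareto filter for illustration, not the authors' Q-law code.

```python
def pareto_front(solutions):
    """Return the non-dominated subset of (mass, time) objective pairs.

    A candidate b dominates a when b is no worse in both objectives
    and strictly better in at least one (both objectives minimized).
    Non-dominated candidates survive to the next GA generation.
    """
    front = []
    for i, a in enumerate(solutions):
        dominated = any(
            (b[0] <= a[0] and b[1] <= a[1]) and (b[0] < a[0] or b[1] < a[1])
            for j, b in enumerate(solutions) if j != i
        )
        if not dominated:
            front.append(a)
    return front
```

For example, among the pairs (1, 5), (2, 4), (3, 3), (2, 6), (4, 4), the last two are dominated and drop out, leaving the trade-off curve between the two objectives.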
NASA Astrophysics Data System (ADS)
Bay, Annick; Mayer, Alexandre
2014-09-01
The efficiency of light-emitting diodes (LED) has increased significantly over the past few years, but the overall efficiency is still limited by total internal reflections due to the high dielectric-constant contrast between the incident and emergent media. The bioluminescent organ of fireflies gave incentive for light-extraction enhancement studies. A specific factory-roof shaped structure was shown, by means of light-propagation simulations and measurements, to enhance light extraction significantly. In order to achieve a similar effect for light-emitting diodes, the structure needs to be adapted to the specific set-up of LEDs. In this context simulations were carried out to determine the best geometrical parameters. In the present work, the search for a geometry that maximizes the extraction of light has been conducted by using a genetic algorithm. The idealized structure considered previously was generalized to a broader variety of shapes. The genetic algorithm makes it possible to search simultaneously over a wider range of parameters. It is also significantly less time-consuming than the previous approach that was based on a systematic scan on parameters. The results of the genetic algorithm show that (1) the calculations can be performed in a smaller amount of time and (2) the light extraction can be enhanced even more significantly by using optimal parameters determined by the genetic algorithm for the generalized structure. The combination of the genetic algorithm with the Rigorous Coupled Waves Analysis method constitutes a strong simulation tool, which provides us with adapted designs for enhancing light extraction from light-emitting diodes.
NASA Astrophysics Data System (ADS)
Hamimi, Z.; Kassem, O. M. K.; El-Sabrouty, M. N.
2015-09-01
The rotation of rigid objects within a flowing viscous medium is a function of several factors, including the degree of non-coaxiality. The relationship between the orientation of such objects and their aspect ratio can be used in vorticity analyses in a variety of geological settings. A method for quantitative estimation of the kinematic vorticity number (Wm) has been applied using rotated rigid objects, such as quartz and feldspar objects. The kinematic vorticity number determined for the high-temperature mylonitic Abt schist in the Al Amar area, extreme eastern Arabian Shield, ranges from ~0.8 to 0.9. Results obtained from vorticity and strain analyses indicate that deformation in the area deviated from simple shear. It is concluded that nappe stacking occurred early, during an earlier thrusting event, probably by brittle imbrication. Ductile strain was superimposed on the nappe structure at high pressure, as revealed by a penetrative subhorizontal foliation developed subparallel to tectonic contacts with the underlying and overlying nappes. Accumulation of ductile strain during underplating was not by simple shear but involved a component of vertical shortening, which caused the subhorizontal foliation in the Al Amar area. In most cases, this foliation formed concurrently with thrust-sheet imbrication, indicating that nappe stacking was associated with vertical shortening.
Vellieux, F M
1998-01-01
A comparison has been made of two methods for electron-density map improvement by the introduction of atomicity, namely the iterative skeletonization procedure of the CCP4 program DM [Cowtan & Main (1993). Acta Cryst. D49, 148-157] and the pseudo-atom introduction followed by the refinement protocol in the program suite DEMON/ANGEL [Vellieux, Hunt, Roy & Read (1995). J. Appl. Cryst. 28, 347-351]. Tests carried out using the 3.0 Å resolution electron density resulting from iterative 12-fold non-crystallographic symmetry averaging and solvent flattening for the Pseudomonas aeruginosa ornithine transcarbamoylase [Villeret, Tricot, Stalon & Dideberg (1995). Proc. Natl Acad. Sci. USA, 92, 10762-10766] indicate that pseudo-atom introduction followed by refinement performs much better than iterative skeletonization: with the former method, a phase improvement of 15.3 degrees is obtained with respect to the initial density modification phases. With iterative skeletonization a phase degradation of 0.4 degrees is obtained. Consequently, the electron-density maps obtained using pseudo-atom phases or pseudo-atom phases combined with density-modification phases are much easier to interpret. These tests also show that for ornithine transcarbamoylase, where 12-fold non-crystallographic symmetry is present in the P1 crystals, G-function coupling leads to the simultaneous decrease of the conventional R factor and of the free R factor, a phenomenon which is not observed when non-crystallographic symmetry is absent from the crystal. The method is far less effective in such a case, and the results obtained suggest that the map sorting followed by refinement stage should be by-passed to obtain interpretable electron-density distributions. PMID:9761819
NASA Astrophysics Data System (ADS)
Ragusa, Maria Alessandra; Russo, Giulia
2016-07-01
Ben Amar and Bianca provide a valuable review of the state of the art in fibrosis modeling [1]. Each paragraph identifies and examines a specific theoretical tool according to its scale level (molecular, cellular or tissue). For each, the area of application is shown, along with a clear description of strong and weak points. This critical analysis highlights the need to develop a more suitable and original multiscale approach in the future [2].
Chadha, N; Jasuja, H; Kaur, M; Singh Bahia, M; Silakari, O
2014-01-01
Phosphoinositide 3-kinase alpha (PI3Kα) is a lipid kinase involved in several cellular functions such as cell growth, proliferation, differentiation and survival, and its anomalous regulation leads to cancerous conditions. PI3Kα inhibition completely blocks the cancer signalling pathway, hence it can be explored as an important therapeutic target for cancer treatment. In the present study, docking analysis of 49 selective imidazo[1,2-a]pyrazine inhibitors of PI3Kα was carried out using the QM-Polarized ligand docking (QPLD) program of the Schrödinger software, followed by the refinement of receptor-ligand conformations using the Hybrid Monte Carlo algorithm in the Liaison program, and alignment of refined conformations of inhibitors was utilized for the development of an atom-based 3D-QSAR model in the PHASE program. Among the five generated models, the best model was selected corresponding to PLS factor 2, displaying the highest value of Q(2)test (0.650). The selected model also displayed high values of r(2)train (0.917), F-value (166.5) and Pearson-r (0.877) and a low value of SD (0.265). The contour plots generated for the selected 3D-QSAR model were correlated with the results of docking simulations. Finally, this combined information generated from 3D-QSAR and docking analysis was used to design new congeners. PMID:24601789
Parallel adaptive mesh refinement within the PUMAA3D Project
NASA Technical Reports Server (NTRS)
Freitag, Lori; Jones, Mark; Plassmann, Paul
1995-01-01
To enable the solution of large-scale applications on distributed memory architectures, we are designing and implementing parallel algorithms for the fundamental tasks of unstructured mesh computation. In this paper, we discuss efficient algorithms developed for two of these tasks: parallel adaptive mesh refinement and mesh partitioning. The algorithms are discussed in the context of two-dimensional finite element solution on triangular meshes, but are suitable for use with a variety of element types and with h- or p-refinement. Results demonstrating the scalability and efficiency of the refinement algorithm and the quality of the mesh partitioning are presented for several test problems on the Intel DELTA.
Mead, T.C.; Sequeira, A.J.; Smith, B.F.
1981-10-13
An improved process is described for solvent refining lubricating oil base stocks from petroleum fractions containing both aromatic and nonaromatic constituents. The process utilizes n-methyl-2-pyrrolidone as a selective solvent for aromatic hydrocarbons wherein the refined oil fraction and the extract fraction are freed of final traces of solvent by stripping with gaseous ammonia. The process has several advantages over conventional processes including a savings in energy required for the solvent refining process, and reduced corrosion of the process equipment.
Parametric Rietveld refinement
Stinton, Graham W.; Evans, John S. O.
2007-01-01
In this paper the method of parametric Rietveld refinement is described, in which an ensemble of diffraction data collected as a function of time, temperature, pressure or any other variable are fitted to a single evolving structural model. Parametric refinement offers a number of potential benefits over independent or sequential analysis. It can lead to higher precision of refined parameters, offers the possibility of applying physically realistic models during data analysis, allows the refinement of ‘non-crystallographic’ quantities such as temperature or rate constants directly from diffraction data, and can help avoid false minima. PMID:19461841
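The essence of parametric refinement, constraining many datasets with a few global, physically meaningful parameters rather than refining each dataset independently, can be illustrated with a deliberately simplified fit: one Gaussian "peak" whose centre shifts linearly with temperature across all patterns. This is a toy sketch under assumed names and a made-up model, not a Rietveld engine.

```python
import numpy as np
from scipy.optimize import least_squares

def parametric_fit(temps, x, ys):
    """Fit an ensemble of patterns at once with one evolving model.

    Each pattern is a single Gaussian peak whose centre follows
    c(T) = c0 + alpha*T (e.g. thermal expansion). Instead of refining
    a separate centre per dataset, every dataset constrains the two
    global parameters c0 and alpha, plus a shared width w and
    amplitude A; this is the parametric (surface) refinement idea.
    Returns the refined [c0, alpha, w, A].
    """
    def residuals(p):
        c0, alpha, w, A = p
        res = []
        for T, y in zip(temps, ys):
            model = A * np.exp(-0.5 * ((x - (c0 + alpha * T)) / w) ** 2)
            res.append(model - y)
        return np.concatenate(res)

    return least_squares(residuals, x0=[x.mean(), 0.0, 1.0, 1.0]).x
```

Because all patterns share `alpha`, its precision exceeds what any single-temperature refinement could deliver, which mirrors the precision benefit claimed for the parametric approach.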
Refining quadrilateral and brick element meshes
Schneiders, R.; Debye, J.
1995-12-31
We consider the problem of refining unstructured quadrilateral and brick element meshes. We present an algorithm which is a generalization of an algorithm developed by Cheng et al. for structured quadrilateral element meshes. The problem is solved for the two-dimensional case. In three dimensions, we present a solution for some special cases and a general solution that introduces tetrahedral and pyramidal transition elements.
Parallel Adaptive Mesh Refinement
Diachin, L; Hornung, R; Plassmann, P; WIssink, A
2005-03-04
As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the
Orthogonal polynomials for refinable linear functionals
NASA Astrophysics Data System (ADS)
Laurie, Dirk; de Villiers, Johan
2006-12-01
A refinable linear functional is one that can be expressed as a convex combination and defined by a finite number of mask coefficients of certain stretched and shifted replicas of itself. The notion generalizes an integral weighted by a refinable function. The key to calculating a Gaussian quadrature formula for such a functional is to find the three-term recursion coefficients for the polynomials orthogonal with respect to that functional. We show how to obtain the recursion coefficients by using only the mask coefficients, and without the aid of modified moments. Our result implies the existence of the corresponding refinable functional whenever the mask coefficients are nonnegative, even when the same mask does not define a refinable function. The algorithm requires O(n^2) rational operations and, thus, can in principle deliver exact results. Numerical evidence suggests that it is also effective in floating-point arithmetic.
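Once the three-term recursion coefficients are in hand, the Gaussian quadrature nodes and weights follow from the standard Golub-Welsch eigenvalue construction. The sketch below shows only that final, generic step; the paper's actual contribution, computing the recursion coefficients directly from the mask coefficients, is not reproduced here.

```python
import numpy as np

def golub_welsch(alpha, beta, mu0):
    """Gaussian quadrature from three-term recursion coefficients.

    Assumes the monic recursion
        p_{k+1}(x) = (x - alpha[k]) p_k(x) - beta[k] p_{k-1}(x),
    with beta[0] unused, and mu0 the total mass of the functional.
    Nodes are the eigenvalues of the symmetric Jacobi matrix; weights
    are mu0 times the squared first components of its eigenvectors.
    """
    n = len(alpha)
    J = np.diag(np.asarray(alpha, dtype=float))
    off = np.sqrt(np.asarray(beta[1:n], dtype=float))
    J += np.diag(off, 1) + np.diag(off, -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = mu0 * vecs[0] ** 2
    return nodes, weights
```

As a check, the Legendre coefficients (alpha_k = 0, beta_k = k^2/(4k^2-1), mu0 = 2) reproduce the two-point Gauss-Legendre rule with nodes ±1/√3 and unit weights.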
Woodle, R.A.
1982-04-20
A dual solvent refining process is claimed for solvent refining petroleum based lubricating oil stocks with n-methyl-2-pyrrolidone as selective solvent for aromatic oils wherein a highly paraffinic oil having a narrow boiling range approximating the boiling point of n-methyl-2-pyrrolidone is employed as a backwash solvent. The process of the invention results in an increased yield of refined lubricating oil stock of a predetermined quality and simplifies separation of the solvents from the extract and raffinate oil fractions.
Mesh quality control for multiply-refined tetrahedral grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger
1994-01-01
A new algorithm for controlling the quality of multiply-refined tetrahedral meshes is presented in this paper. The basic dynamic mesh adaption procedure allows localized grid refinement and coarsening to efficiently capture aerodynamic flow features in computational fluid dynamics problems; however, repeated application of the procedure may significantly deteriorate the quality of the mesh. Results presented show the effectiveness of this mesh quality algorithm and its potential in the area of helicopter aerodynamics and acoustics.
NASA Astrophysics Data System (ADS)
Napoli, Gaetano
2016-07-01
The term fibrosis refers to the development of fibrous connective tissue, in an organ or in a tissue, as a reparative response to injury or damage. The review article by Ben Amar and Bianca [1] proposes a unified multiscale approach for the modeling of fibrosis, accounting for phenomena occurring at different spatial scales (molecular, cellular and macroscopic). The main aim is to define a general unified framework able to describe the mechanisms, not yet completely understood, that trigger physiological and pathological fibrosis.
NASA Astrophysics Data System (ADS)
Copur, Yalcin
This study compares the modified kraft process, polysulfide pulping, one of the methods used to obtain higher pulp yield, with the conventional kraft method. More specifically, the study focuses on the refining behavior of polysulfide pulp, an area with limited literature. Physical, mechanical and chemical properties of kraft and polysulfide pulps (4% elemental sulfur addition to the cooking digester) cooked under the same conditions were studied with regard to their behavior under various levels of PFI refining (0, 3000, 6000, 9000 revs.). Polysulfide (PS) pulping, compared to the kraft method, resulted in higher pulp yield and higher pulp kappa number. Polysulfide also gave pulp having higher tensile and burst indices. However, the tear index of polysulfide pulp at a constant tensile index was found to be 15% lower than that of the kraft pulp. Refining studies showed that the moisture-holding ability of chemical pulps depends mostly on the chemical nature of the pulp. Refining effects such as fibrillation and fines content did not have a significant effect on the hygroscopic behavior of chemical pulp.
REFINE WETLAND REGULATORY PROGRAM
The Tribes will work toward refining a regulatory program by taking a draft wetland conservation code with permitting incorporated to TEB for review. Progress will then proceed in developing a permit tracking system that will track both Tribal and fee land sites within reservati...
Choices, Frameworks and Refinement
NASA Technical Reports Server (NTRS)
Campbell, Roy H.; Islam, Nayeem; Johnson, Ralph; Kougiouris, Panos; Madany, Peter
1991-01-01
In this paper we present a method for designing operating systems using object-oriented frameworks. A framework can be refined into subframeworks. Constraints specify the interactions between the subframeworks. We describe how we used object-oriented frameworks to design Choices, an object-oriented operating system.
Parallel tetrahedral mesh refinement with MOAB.
Thompson, David C.; Pebay, Philippe Pierre
2008-12-01
In this report, we present the novel functionality of parallel, edge-based tetrahedral mesh refinement which we have implemented in MOAB. The theoretical basis for this work is contained in [PT04, PT05, TP06], while information on design, performance, and operation specific to MOAB is contained herein. As MOAB is intended mainly for use in pre-processing and simulation (as opposed to the post-processing bent of previous papers), the primary use case is different: rather than refining elements with non-linear basis functions, the goal is to increase the number of degrees of freedom in some region in order to more accurately represent the solution to some system of equations that cannot be solved analytically. Also, MOAB has a unique mesh representation which impacts the algorithm. This introduction contains a brief review of streaming edge-based tetrahedral refinement. The remainder of the report is broken into three sections: design and implementation, performance, and conclusions. Appendix A contains instructions for end users (simulation authors) on how to employ the refiner.
Number systems, α-splines and refinement
NASA Astrophysics Data System (ADS)
Zube, Severinas
2004-12-01
This paper is concerned with smooth refinable functions on the plane relative to a complex scaling factor α. Characteristic functions of certain self-affine tiles related to a given scaling factor are the simplest examples of such refinable functions. We study the smooth refinable functions obtained as convolution powers of such characteristic functions. Dahlke, Dahmen, and Latour obtained some explicit estimates for the smoothness of the resulting convolution products. In the case α=1+i, we prove better results. We introduce α-splines in two variables, which are linear combinations of shifted basic functions. We derive basic properties of α-splines and proceed with a detailed presentation of refinement methods. We illustrate the application of α-splines to subdivision with several examples. It turns out that α-splines produce well-known subdivision algorithms which are based on box splines: Doo-Sabin, Catmull-Clark, Loop, Midedge and some √2-subdivision schemes with good continuity. The main geometric ingredient in the definition of α-splines is the fundamental domain (a fractal set or a self-affine tile). The properties of the fractal obtained in number theory are important and necessary in order to determine two basic properties of α-splines: partition of unity and the refinement equation.
Issues in adaptive mesh refinement
Dai, William Wenlong
2009-01-01
In this paper, we present an approach for patch-based adaptive mesh refinement (AMR) for multi-physics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, and management of patches. Among the special features of this patch-based AMR are symmetry preserving, efficiency of refinement, special implementation of flux correction, and patch management in parallel computing environments. Here, higher efficiency of refinement means fewer unnecessarily refined cells for a given set of cells to be refined. To demonstrate the capability of the AMR framework, hydrodynamics simulations with many levels of refinement are shown in both two and three dimensions.
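The notion of refinement efficiency used above can be made concrete; a minimal sketch scoring a single enclosing patch (an assumed measure for illustration, not the paper's clustering algorithm):

```python
import numpy as np

def patch_efficiency(flags):
    """Efficiency of a single bounding-box patch around flagged cells.

    Efficiency = flagged cells / total cells in the patch; a value near
    1 means few unnecessarily refined cells. Real patch-based AMR
    clusterers (e.g. Berger-Rigoutsos) keep splitting boxes until each
    patch exceeds an efficiency threshold; this sketch only scores the
    one box enclosing all flagged cells.
    """
    idx = np.argwhere(flags)
    lo, hi = idx.min(axis=0), idx.max(axis=0) + 1
    patch = flags[tuple(slice(l, h) for l, h in zip(lo, hi))]
    return patch.sum() / patch.size
```

A compact cluster of flagged cells scores 1.0; one distant stray cell drags the enclosing box's efficiency down sharply, which is exactly what motivates splitting into multiple patches.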
Stacey, J.S.; Stoeser, D.B.; Greenwood, W.R.; Fischer, L.B.
1984-01-01
U/Pb zircon model ages for 11 major units from this region indicate three stages of evolution: 1) plate convergence, 2) plate collision and 3) post-orogenic intracratonic activity. Convergence occurred between the western Afif and eastern Ar Rayn plates that were separated by oceanic crust. Remnants of crust now comprise the ophiolitic complexes of the Urd group; the oldest plutonic unit studied is from one such complex, and gave an age of 694-698 m.y., while detrital zircons from an intercalated sedimentary formation were derived from source rocks with a mean age of 710 m.y. Plate convergence was terminated by collision of the two plates during the Al Amar orogeny which began at ~670 m.y.; during collision, the Urd group rocks were deformed and in part obducted on to one or other of the plates. Synorogenic granitic rocks were intruded from 670 to 640 m.y., followed from 640 to 630 m.y. by unfoliated dioritic plutons emplaced in the Ar Rayn block. -R.A.H.
Worldwide refining and gas processing directory
1999-11-01
Statistics are presented on the following: US refining; Canada refining; Europe refining; Africa refining; Asia refining; Latin American refining; Middle East refining; catalyst manufacturers; consulting firms; engineering and construction; US gas processing; international gas processing; plant maintenance providers; process control and simulation systems; and trade associations.
Minimally refined biomass fuel
Pearson, Richard K.; Hirschfeld, Tomas B.
1984-01-01
A minimally refined fluid composition, suitable as a fuel mixture and derived from biomass material, is comprised of one or more water-soluble carbohydrates such as sucrose, one or more alcohols having less than four carbons, and water. The carbohydrate provides the fuel source; water solubilizes the carbohydrates; and the alcohol aids in the combustion of the carbohydrate and reduces the viscosity of the carbohydrate/water solution. Because less energy is required to obtain the carbohydrate from the raw biomass than to produce alcohol, an overall energy savings is realized compared to fuels employing alcohol as the primary fuel.
Parallel object-oriented adaptive mesh refinement
Balsara, D.; Quinlan, D.J.
1997-04-01
In this paper we study adaptive mesh refinement (AMR) for elliptic and hyperbolic systems. We use the Asynchronous Fast Adaptive Composite Grid Method (AFACX), a parallel algorithm based upon the Fast Adaptive Composite Grid Method (FAC), as a test case of an adaptive elliptic solver. For our hyperbolic system example we use TVD and ENO schemes for solving the Euler and MHD equations. We use the structured grid load balancer MLB as a tool for obtaining a load balanced distribution in a parallel environment. Parallel adaptive mesh refinement poses difficulties in expressing the basic single grid solver, whether elliptic or hyperbolic, in a fashion that parallelizes seamlessly. It also requires that these basic solvers work together within the adaptive mesh refinement algorithm, which uses the single grid solvers as one part of its adaptive solution process. We show that use of AMR++, an object-oriented library within the OVERTURE Framework, simplifies the development of AMR applications. Parallel support is provided and abstracted through the use of the P++ parallel array class.
Using Induction to Refine Information Retrieval Strategies
NASA Technical Reports Server (NTRS)
Baudin, Catherine; Pell, Barney; Kedar, Smadar
1994-01-01
Conceptual information retrieval systems use structured document indices, domain knowledge and a set of heuristic retrieval strategies to match user queries with a set of indices describing the document's content. Such retrieval strategies increase the set of relevant documents retrieved (increase recall), but at the expense of returning additional irrelevant documents (decrease precision). Usually in conceptual information retrieval systems this tradeoff is managed by hand and with difficulty. This paper discusses ways of managing this tradeoff by the application of standard induction algorithms to refine the retrieval strategies in an engineering design domain. We gathered examples of query/retrieval pairs during the system's operation using feedback from a user on the retrieved information. We then fed these examples to the induction algorithm and generated decision trees that refine the existing set of retrieval strategies. We found that (1) induction improved the precision on a set of queries generated by another user, without a significant loss in recall, and (2) in an interactive mode, the decision trees pointed out flaws in the retrieval and indexing knowledge and suggested ways to refine the retrieval strategies.
Adaptive mesh refinement for stochastic reaction-diffusion processes
Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros
2011-01-01
We present an algorithm for adaptive mesh refinement applied to mesoscopic stochastic simulations of spatially evolving reaction-diffusion processes. The transition rates for the diffusion process are derived on adaptive, locally refined structured meshes. Convergence of the diffusion process is presented and the fluctuations of the stochastic process are verified. Furthermore, a refinement criterion is proposed for the evolution of the adaptive mesh. The method is validated in simulations of reaction-diffusion processes as described by the Fisher-Kolmogorov and Gray-Scott equations.
Refinery Efficiency Improvement
WRI
2002-05-15
Refinery processes that convert heavy oils to lighter distillate fuels require heating for distillation, hydrogen addition or carbon rejection (coking). Efficiency is limited by the formation of insoluble carbon-rich coke deposits. Heat exchangers and other refinery units must be shut down for mechanical coke removal, resulting in a significant loss of output and revenue. When a residuum is heated above the temperature at which pyrolysis occurs (340 °C, 650 °F), there is typically an induction period before coke formation begins (Magaril and Aksenova 1968, Wiehe 1993). To avoid fouling, refiners often stop heating a residuum before coke formation begins, using arbitrary criteria. In many cases, this heating is stopped sooner than need be, resulting in less than maximum product yield. Western Research Institute (WRI) has developed innovative Coking Index concepts (patent pending) which can be used for process control by refiners to heat residua to the threshold, but not beyond the point at which coke formation begins, when petroleum residua materials are heated at pyrolysis temperatures (Schabron et al. 2001). The development of this universal predictor solves a long-standing problem in petroleum refining. These Coking Indexes have great potential value in improving the efficiency of distillation processes. The Coking Indexes were found to apply to residua in a universal manner, and the theoretical basis for the indexes has been established (Schabron et al. 2001a, 2001b, 2001c). For the first time, a few simple measurements indicate how close undesired coke formation is on the coke formation induction time line. The Coking Indexes can lead to new process controls that can improve refinery distillation efficiency by several percentage points. Petroleum residua consist of an ordered continuum of solvated polar materials usually referred to as asphaltenes dispersed in a lower polarity solvent phase held together by intermediate polarity materials usually referred to as
Capelli, Silvia C.; Bürgi, Hans-Beat; Dittrich, Birger; Grabowsky, Simon; Jayatilaka, Dylan
2014-01-01
Hirshfeld atom refinement (HAR) is a method which determines structural parameters from single-crystal X-ray diffraction data by using an aspherical atom partitioning of tailor-made ab initio quantum mechanical molecular electron densities without any further approximation. Here the original HAR method is extended by implementing an iterative procedure of successive cycles of electron density calculations, Hirshfeld atom scattering factor calculations and structural least-squares refinements, repeated until convergence. The importance of this iterative procedure is illustrated via the example of crystalline ammonia. The new HAR method is then applied to X-ray diffraction data of the dipeptide Gly–l-Ala measured at 12, 50, 100, 150, 220 and 295 K, using Hartree–Fock and BLYP density functional theory electron densities and three different basis sets. All positions and anisotropic displacement parameters (ADPs) are freely refined without constraints or restraints – even those for hydrogen atoms. The results are systematically compared with those from neutron diffraction experiments at the temperatures 12, 50, 150 and 295 K. Although non-hydrogen-atom ADPs differ by up to three combined standard uncertainties (csu’s), all other structural parameters agree within less than 2 csu’s. Using our best calculations (BLYP/cc-pVTZ, recommended for organic molecules), the accuracy of determining bond lengths involving hydrogen atoms from HAR is better than 0.009 Å for temperatures of 150 K or below; for hydrogen-atom ADPs it is better than 0.006 Å2 as judged from the mean absolute X-ray minus neutron differences. These results are among the best ever obtained. Remarkably, the precision of determining bond lengths and ADPs for the hydrogen atoms from the HAR procedure is comparable with that from the neutron measurements – an outcome which is obtained with a routinely achievable resolution of the X-ray data of 0.65 Å. PMID:25295177
Spherical Harmonic Decomposition of Gravitational Waves Across Mesh Refinement Boundaries
NASA Technical Reports Server (NTRS)
Fiske, David R.; Baker, John; vanMeter, James R.; Centrella, Joan M.
2005-01-01
We evolve a linearized (Teukolsky) solution of the Einstein equations with a non-linear Einstein solver. Using this testbed, we are able to show that such gravitational waves, defined by the Weyl scalars in the Newman-Penrose formalism, propagate faithfully across mesh refinement boundaries, and use, for the first time to our knowledge, a novel algorithm due to Misner to compute spherical harmonic components of our waveforms. We show that the algorithm performs extremely well, even when the extraction sphere intersects refinement boundaries.
NASA Astrophysics Data System (ADS)
Kolev, Mikhail K.
2016-07-01
Over the last decades the collaboration between scientists from biology, medicine and pharmacology on one side and scholars from mathematics, physics, mechanics and computer science on the other has led to better understanding of the properties of living systems, the mechanisms of their functioning and interactions with the environment and to the development of new therapies for various disorders and diseases. The target paper [1] by Ben Amar and Bianca presents a detailed description of the research methods and techniques used by biomathematicians, bioinformaticians, biomechanicians and biophysicists for studying biological systems, and in particular in the context of pathological fibrosis.
Replacement, reduction and refinement.
Flecknell, Paul
2002-01-01
In 1959, William Russell and Rex Burch published "The Principles of Humane Experimental Technique". They proposed that if animals were to be used in experiments, every effort should be made to Replace them with non-sentient alternatives, to Reduce to a minimum the number of animals used, and to Refine experiments which used animals so that they caused the minimum pain and distress. These guiding principles, the "3 Rs" of animal research, were initially given little attention. Gradually, however, they have become established as essential considerations when animals are used in research. They have influenced new legislation aimed at controlling the use of experimental animals, and in the United Kingdom they have become formally incorporated into the Animals (Scientific Procedures) Act. The three principles, of Replacement, Reduction and Refinement, have also proven to be an area of common ground for research workers who use animals and those who oppose their use. Scientists, who accept the need to use animals in some experiments, would also agree that it would be preferable not to use animals. If animals were to be used, as few as possible should be used and they should experience a minimum of pain or distress. Many of those who oppose animal experimentation would also agree that until animal experimentation is stopped, Russell and Burch's 3Rs provide a means to improve animal welfare. It has also been recognised that adoption of the 3Rs can improve the quality of science. Appropriately designed experiments that minimise variation, provide standardised optimum conditions of animal care and minimise unnecessary stress or pain often yield better, more reliable data. Despite the progress made as a result of attention to these principles, several major problems have been identified. When replacing animals with alternative methods, it has often proven difficult to formally validate the alternative. This has proven a particular problem in regulatory toxicology
Thailand: refining cultural values.
Ratanakul, P
1990-01-01
In the second of a set of three articles concerned with "bioethics on the Pacific Rim," Ratanakul, director of a research center for Southeast Asian cultures in Thailand, provides an overview of bioethical issues in his country. He focuses on four issues: health care allocation, AIDS, determination of death, and euthanasia. The introduction of Western medicine into Thailand has brought with it a multitude of ethical problems created in part by tension between Western and Buddhist values. For this reason, Ratanakul concludes that "bioethical enquiry in Thailand must not only examine ethical dilemmas that arise in the actual practice of medicine and research in the life sciences, but must also deal with the refinement and clarification of applicable Thai cultural and moral values." PMID:2318624
Towards automated crystallographic structure refinement with phenix.refine.
Afonine, Pavel V; Grosse-Kunstleve, Ralf W; Echols, Nathaniel; Headd, Jeffrey J; Moriarty, Nigel W; Mustyakimov, Marat; Terwilliger, Thomas C; Urzhumtsev, Alexandre; Zwart, Peter H; Adams, Paul D
2012-04-01
phenix.refine is a program within the PHENIX package that supports crystallographic structure refinement against experimental data with a wide range of upper resolution limits using a large repertoire of model parameterizations. It has several automation features and is also highly flexible. Several hundred parameters enable extensive customizations for complex use cases. Multiple user-defined refinement strategies can be applied to specific parts of the model in a single refinement run. An intuitive graphical user interface is available to guide novice users and to assist advanced users in managing refinement projects. X-ray or neutron diffraction data can be used separately or jointly in refinement. phenix.refine is tightly integrated into the PHENIX suite, where it serves as a critical component in automated model building, final structure refinement, structure validation and deposition to the wwPDB. This paper presents an overview of the major phenix.refine features, with extensive literature references for readers interested in more detailed discussions of the methods. PMID:22505256
Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Modiano, David; Colella, Phillip
1995-01-01
To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for a transonic flow over a NACA0012 airfoil (AGARD-03 test case) and a reflection of a shock over a double wedge.
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Ellis, J. S.; Sullivan, T. J.; Baskett, R. L.
1998-06-01
The Atmospheric Release Advisory Capability (ARAC), located at the Lawrence Livermore National Laboratory, has since the late 1970s been involved in assessing consequences from nuclear and other hazardous material releases into the atmosphere. ARAC's primary role has been emergency response. However, after the emergency phase, there is still a significant role for dispersion modeling. This work usually involves refining the source term and, hence, the dose to the populations affected, as additional information becomes available in the form of source term estimates (release rates, mix of material, and release geometry) and any measurements from passage of the plume and deposition on the ground. Many of the ARAC responses have been documented elsewhere [1]. Some of the more notable radiological releases in which ARAC has participated in the post-emergency phase have been the 1979 Three Mile Island nuclear power plant (NPP) accident outside Harrisburg, PA, the 1986 Chernobyl NPP accident in the Ukraine, and the 1996 Japan Tokai nuclear processing plant explosion. ARAC has also done post-emergency phase analyses for the 1978 Russian satellite COSMOS 954 reentry and subsequent partial burn-up of its on-board nuclear reactor, which deposited radioactive materials on the ground in Canada; the 1986 uranium hexafluoride spill in Gore, OK; the 1993 Russian Tomsk-7 nuclear waste tank explosion; and lesser releases of mostly tritium. In addition, ARAC has performed a key role in the contingency planning for possible accidental releases during the launch of spacecraft with radioisotope thermoelectric generators (RTGs) on board (i.e. Galileo, Ulysses, Mars-Pathfinder, and Cassini), and routinely exercises with the Federal Radiological Monitoring and Assessment Center (FRMAC) in preparation for offsite consequences of radiological releases from NPPs and nuclear weapon accidents or incidents. Several accident post-emergency phase assessments are discussed in this paper in order to illustrate
Refinement of protein dynamic structure: normal mode refinement.
Kidera, A; Go, N
1990-01-01
An x-ray crystallographic refinement method, referred to as the normal mode refinement, is proposed. The Debye-Waller factor is expanded in terms of the effective normal modes whose amplitudes and eigenvectors are experimentally determined by the crystallographic refinement. In contrast to the conventional method, the atomic motions are treated generally as anisotropic and concerted. This method is assessed by using the simulated x-ray data given by a Monte Carlo simulation of human lysozyme. In this article, we refine the dynamic structure by fixing the average static structure to exact coordinates. It is found that the normal mode refinement, using a smaller number of variables, gives a better R factor and more information on the dynamics (anisotropy and collectivity in the motion). PMID:2339115
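A hedged reconstruction of the central quantity (symbols assumed, following the standard anisotropic Debye-Waller convention rather than the paper's exact notation): each atom's anisotropic displacement tensor is expanded over a small number of effective normal modes with refinable amplitudes.

```latex
% Assumed form: atom i's displacement tensor U_i built from M effective
% modes e_{ik} with refinable amplitudes sigma_k, entering the
% Debye-Waller attenuation T_i(q) of the structure factor.
\[
  \mathbf{U}_i \;=\; \sum_{k=1}^{M} \sigma_k^{2}\,
      \mathbf{e}_{ik}\,\mathbf{e}_{ik}^{\mathsf T},
  \qquad
  T_i(\mathbf{q}) \;=\; \exp\!\bigl(-2\pi^{2}\,
      \mathbf{q}^{\mathsf T}\mathbf{U}_i\,\mathbf{q}\bigr).
\]
```

Because all atoms share the same few mode amplitudes σ_k, the number of refined variables is far smaller than one independent anisotropic tensor per atom, which is the economy the abstract points to.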
Dynamic grid refinement for partial differential equations on parallel computers
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.
Gradualness facilitates knowledge refinement.
Rada, R
1985-05-01
To facilitate knowledge refinement, a system should be designed so that small changes in the knowledge correspond to small changes in the function or performance of the system. Two sets of experiments show the value of small, heuristically guided changes in a weighted rule base. In the first set, the ordering among numbers (reflecting certainties) makes their manipulation more straightforward than the manipulation of relationships. A simple credit assignment and weight adjustment strategy for improving numbers in a weighted, rule-based expert system is presented. In the second set, the rearrangement of predicates benefits from additional knowledge about the "ordering" among predicates. A third set of experiments indicates the importance of the proper level of granularity when augmenting a knowledge base. Augmentation of one knowledge base by analogical reasoning from another knowledge base did not work with only binary relationships, but did succeed with ternary relationships. To obtain a small improvement in the knowledge base, a substantial amount of structure had to be treated as a unit. PMID:21869290
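A minimal sketch of the kind of credit-assignment and weight-adjustment step the abstract describes (a generic illustration of "small changes in knowledge, small changes in performance", not Rada's exact procedure):

```python
def adjust_weights(weights, fired, error, rate=0.1):
    """One step of a simple credit-assignment scheme for a weighted
    rule base: every rule that fired on a case shares blame or credit
    for the output error, and its weight is nudged accordingly.

    weights : per-rule certainty weights
    fired   : per-rule booleans, True if the rule fired on this case
    error   : signed output error (positive -> fired rules over-weighted)
    """
    return [w - rate * error if f else w
            for w, f in zip(weights, fired)]

# Two of three rules fired and the output overshot by 0.2:
w = adjust_weights([0.5, 0.5, 0.5], [True, False, True], error=0.2)
# fired rules are down-weighted by rate * error = 0.02; the idle rule is untouched
```

Because each update is small and local, repeated application walks the rule base gradually toward better performance, which is the "gradualness" the title refers to.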
High resolution single particle refinement in EMAN2.1.
Bell, James M; Chen, Muyuan; Baldwin, Philip R; Ludtke, Steven J
2016-05-01
EMAN2.1 is a complete image processing suite for quantitative analysis of grayscale images, with a primary focus on transmission electron microscopy, with complete workflows for performing high resolution single particle reconstruction, 2-D and 3-D heterogeneity analysis, random conical tilt reconstruction and subtomogram averaging, among other tasks. In this manuscript we provide the first detailed description of the high resolution single particle analysis pipeline and the philosophy behind its approach to the reconstruction problem. High resolution refinement is a fully automated process, and involves an advanced set of heuristics to select optimal algorithms for each specific refinement task. A gold standard FSC is produced automatically as part of refinement, providing a robust resolution estimate for the final map, and this is used to optimally filter the final CTF phase and amplitude corrected structure. Additional methods are in place to reduce model bias during refinement, and to permit cross-validation using other computational methods. PMID:26931650
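The gold standard FSC mentioned above has a standard definition that can be sketched directly (the textbook formula, not EMAN2's actual implementation):

```python
import numpy as np

def fsc(map1, map2, nshells=16):
    """Fourier shell correlation between two equal-sized 3-D maps.

    Standard definition: for each spherical shell s of spatial frequency,
    FSC(s) = Re(sum F1 F2*) / sqrt(sum |F1|^2 * sum |F2|^2).
    Gold-standard refinement evaluates this between two independently
    refined half-maps to obtain an unbiased resolution estimate.
    """
    f1, f2 = np.fft.fftn(map1), np.fft.fftn(map2)
    freq = np.sqrt(sum(np.meshgrid(
        *[np.fft.fftfreq(n) ** 2 for n in map1.shape], indexing="ij")))
    shells = np.minimum((freq / freq.max() * nshells).astype(int), nshells - 1)
    out = np.empty(nshells)
    for s in range(nshells):
        m = shells == s
        num = np.real(np.sum(f1[m] * np.conj(f2[m])))
        den = np.sqrt(np.sum(np.abs(f1[m]) ** 2) * np.sum(np.abs(f2[m]) ** 2))
        out[s] = num / den
    return out
```

Identical maps give FSC = 1 in every shell; as noise grows relative to signal at high frequency, the curve falls off, and the crossing of a fixed threshold (commonly 0.143 for half-maps) is reported as the resolution.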
Crystal structure refinement with SHELXL
Sheldrick, George M.
2015-01-01
New features added to the refinement program SHELXL since 2008 are described and explained. The improvements in the crystal structure refinement program SHELXL have been closely coupled with the development and increasing importance of the CIF (Crystallographic Information Framework) format for validating and archiving crystal structures. An important simplification is that now only one file in CIF format (for convenience, referred to simply as ‘a CIF’) containing embedded reflection data and SHELXL instructions is needed for a complete structure archive; the program SHREDCIF can be used to extract the .hkl and .ins files required for further refinement with SHELXL. Recent developments in SHELXL facilitate refinement against neutron diffraction data, the treatment of H atoms, the determination of absolute structure, the input of partial structure factors and the refinement of twinned and disordered structures. SHELXL is available free to academics for the Windows, Linux and Mac OS X operating systems, and is particularly suitable for multiple-core processors.
Madani, Safoura; Coors, Anja; Haddioui, Abdelmajid; Ksibi, Mohamed; Pereira, Ruth; Paulo Sousa, José; Römbke, Jörg
2015-09-01
Mining activity is an important economic activity in several North Atlantic Treaty Organization (NATO) and North African countries. Within their territories, derelict or active mining explorations represent risks to surrounding ecosystems, but engineered-based remediation processes are usually too expensive to be an option for the reclamation of these areas. A project funded by NATO was performed with the aim of finding a more eco-friendly solution for reclamation of these areas. As part of an overall risk assessment, the risk of contaminated soils to selected soil organisms was evaluated. The main question addressed was: Do the metal-contaminated soils from a former iron mine located at Ait Amar (Morocco), which was abandoned in the mid-Sixties, affect the reproduction of enchytraeids (Enchytraeus bigeminus) and predatory mites (Hypoaspis aculeifer)? Soil samples were taken at 20 plots along four transects covering the mine area and at a reference site about 15 km away from the mine. The soils were characterized pedologically and chemically, which showed a heterogeneous pattern of metal contamination (mainly cadmium, copper, and chromium, sometimes at concentrations higher than European soil trigger values). The reproduction of enchytraeids (Enchytraeus bigeminus) and predatory mites (Hypoaspis aculeifer) was studied using standard laboratory tests according to OECD guidelines 220 (2004) and 226 (2008). The number of juveniles of E. bigeminus was reduced at several plots with high concentrations of Cd or Cu (the latter in combination with low pH values). There was nearly no effect of the metal contaminated soils on the reproduction of H. aculeifer. The overall lack of toxicity at the majority of the studied plots is probably caused by the low availability of the metals in these soils unless soil pH was very low. Different exposure pathways are likely responsible for the different reaction of mites and enchytraeids (hard-bodied versus soft-bodied organisms). The
Deformable complex network for refining low-resolution X-ray structures
Zhang, Chong; Wang, Qinghua; Ma, Jianpeng
2015-10-27
A new refinement algorithm called the deformable complex network that combines a novel angular network-based restraint with a deformable elastic network model in the target function has been developed to aid in structural refinement in macromolecular X-ray crystallography. In macromolecular X-ray crystallography, building more accurate atomic models based on lower resolution experimental diffraction data remains a great challenge. Previous studies have used a deformable elastic network (DEN) model to aid in low-resolution structural refinement. In this study, the development of a new refinement algorithm called the deformable complex network (DCN) is reported that combines a novel angular network-based restraint with the DEN model in the target function. Testing of DCN on a wide range of low-resolution structures demonstrated that it consistently leads to significantly improved structural models as judged by multiple refinement criteria, thus representing a new effective refinement tool for low-resolution structure determination.
Toward a consistent framework for high order mesh refinement schemes in numerical relativity
NASA Astrophysics Data System (ADS)
Mongwane, Bishop
2015-05-01
It has now become customary in the field of numerical relativity to couple high order finite difference schemes to mesh refinement algorithms. To this end, different modifications to the standard Berger-Oliger adaptive mesh refinement algorithm have been proposed. In this work we present a fourth order stable mesh refinement scheme with sub-cycling in time for numerical relativity. We do not use buffer zones to deal with refinement boundaries but explicitly specify boundary data for refined grids. We argue that the incompatibility of the standard mesh refinement algorithm with higher order Runge Kutta methods is a manifestation of order reduction phenomena, caused by inconsistent application of boundary data in the refined grids. Our scheme also addresses the problem of spurious reflections that are generated when propagating waves cross mesh refinement boundaries. We introduce a transition zone on refined levels within which the phase velocity of propagating modes is allowed to decelerate in order to smoothly match the phase velocity of coarser grids. We apply the method to test problems involving propagating waves and show a significant reduction in spurious reflections.
Error bounds from extra precise iterative refinement
Demmel, James; Hida, Yozo; Kahan, William; Li, Xiaoye S.; Mukherjee, Soni; Riedy, E. Jason
2005-02-07
We present the design and testing of an algorithm for iterative refinement of the solution of linear equations, where the residual is computed with extra precision. This algorithm was originally proposed in the 1960s [6, 22] as a means to compute very accurate solutions to all but the most ill-conditioned linear systems of equations. However, two obstacles have until now prevented its adoption in standard subroutine libraries like LAPACK: (1) there was no standard way to access the higher precision arithmetic needed to compute residuals, and (2) it was unclear how to compute a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard [5] has recently removed the first obstacle. To overcome the second obstacle, we show how a single application of iterative refinement can be used to compute an error bound in any norm at small cost, and use this to compute both an error bound in the usual infinity norm and a componentwise relative error bound. We report extensive test results on over 6.2 million matrices of dimension 5, 10, 100, and 1000. As long as a normwise (resp. componentwise) condition number computed by the algorithm is less than 1 / (max{10, √n} · ε_w), the computed normwise (resp. componentwise) error bound is at most 2 · max{10, √n} · ε_w, and indeed bounds the true error. Here, n is the matrix dimension and ε_w is the single precision roundoff error. For worse conditioned problems, we get similarly small correct error bounds in over 89.4% of cases.
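The core loop of iterative refinement with an extra-precise residual can be sketched in a few lines. The sketch below is a minimal illustration, not the LAPACK implementation: ordinary Python floats stand in for working precision, and exact `Fraction` arithmetic stands in for the doubled-precision residual computation; all function names are made up for the example.

```python
from fractions import Fraction

def lu_solve(A, b):
    """Naive Gaussian elimination with partial pivoting, in working precision."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def refine(A, b, iters=3):
    """Iterative refinement: residual in extra precision, correction in working precision."""
    x = lu_solve(A, b)
    for _ in range(iters):
        # Residual r = b - A x, computed exactly (stand-in for extra precision).
        r = [Fraction(bi) - sum(Fraction(aij) * Fraction(xj)
                                for aij, xj in zip(row, x))
             for row, bi in zip(A, b)]
        d = lu_solve(A, [float(ri) for ri in r])  # solve A d = r
        x = [xi + di for xi, di in zip(x, d)]     # update the solution
    return x
```

The size of the correction `d` relative to `x` is exactly the quantity the paper turns into a cheap, reliable error bound.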
Crystal structure refinement with SHELXL
Sheldrick, George M.
2015-01-01
The improvements in the crystal structure refinement program SHELXL have been closely coupled with the development and increasing importance of the CIF (Crystallographic Information Framework) format for validating and archiving crystal structures. An important simplification is that now only one file in CIF format (for convenience, referred to simply as ‘a CIF’) containing embedded reflection data and SHELXL instructions is needed for a complete structure archive; the program SHREDCIF can be used to extract the .hkl and .ins files required for further refinement with SHELXL. Recent developments in SHELXL facilitate refinement against neutron diffraction data, the treatment of H atoms, the determination of absolute structure, the input of partial structure factors and the refinement of twinned and disordered structures. SHELXL is available free to academics for the Windows, Linux and Mac OS X operating systems, and is particularly suitable for multiple-core processors. PMID:25567568
Adaptive Mesh Refinement in CTH
Crawford, David
1999-05-04
This paper reports progress on implementing a new capability of adaptive mesh refinement into the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor and massively parallel platforms. An approximate factor of three in memory and performance improvements over comparable resolution non-adaptive calculations has been demonstrated for a number of problems.
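The flag-and-refine cycle at the heart of such schemes can be illustrated with a toy 1D version. This is a generic sketch with an invented gradient criterion, nothing specific to CTH's block-based implementation:

```python
def refine_flags(u, dx, tol):
    """Flag cells whose centered gradient exceeds tol (a toy refinement criterion)."""
    flags = [False] * len(u)
    for i in range(1, len(u) - 1):
        grad = abs(u[i + 1] - u[i - 1]) / (2 * dx)
        if grad > tol:
            flags[i] = True
    return flags

def refine_2to1(cells, flags):
    """Split each flagged cell (x0, x1) into two equal children: 2:1 refinement."""
    out = []
    for (x0, x1), flagged in zip(cells, flags):
        if flagged:
            xm = 0.5 * (x0 + x1)
            out.extend([(x0, xm), (xm, x1)])
        else:
            out.append((x0, x1))
    return out
```

Unrefinement is the inverse step: merge sibling cells whose flags have cleared, so resolution follows the moving shock rather than being paid for everywhere.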
Refining the shifted topological vertex
Drissi, L. B.; Jehjouh, H.; Saidi, E. H.
2009-01-15
We study aspects of the refining and shifting properties of the 3d MacMahon function C_3(q) used in topological string theory and BKP hierarchy. We derive the explicit expressions of the shifted topological vertex S_{λμν}(q) and its refined version T_{λμν}(q, t). These vertices complete results in the literature.
Ideal Downward Refinement in the EL Description Logic
NASA Astrophysics Data System (ADS)
Lehmann, Jens; Haase, Christoph
With the proliferation of the Semantic Web, there has been a rapidly rising interest in description logics, which form the logical foundation of the W3C standard ontology language OWL. While the number of OWL knowledge bases grows, there is an increasing demand for tools assisting knowledge engineers in building up and maintaining their structure. For this purpose, concept learning algorithms based on refinement operators have been investigated. In this paper, we provide an ideal refinement operator for the description logic EL and show that it is computationally feasible on large knowledge bases.
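A downward refinement operator maps a concept to a set of more specific concepts. As a toy illustration (conjunctions of named concepts only, with invented helper names; the paper's ideal operator for EL also handles existential restrictions, which is where the difficulty lies):

```python
def downward_refinements(concept, atoms, children):
    """Toy downward refinement for a conjunction of atomic concepts.

    Two specialization moves: (i) add an atom not already in the conjunction,
    (ii) replace an atom by one of its direct subclasses in the hierarchy."""
    out = set()
    for a in atoms:                      # move (i): lengthen the conjunction
        if a not in concept:
            out.add(frozenset(concept | {a}))
    for a in concept:                    # move (ii): descend the hierarchy
        for c in children.get(a, []):
            out.add(frozenset(concept - {a} | {c}))
    return out
```

Each refinement is subsumed by its parent, so repeatedly applying the operator walks the concept downward through the subsumption order, which is exactly how refinement-based concept learners search for class definitions.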
NASA Astrophysics Data System (ADS)
Wu, Min
2016-07-01
The development of anti-fibrotic therapies for a diversity of diseases, such as pulmonary, renal and liver fibrosis [1,2] as well as malignant tumor growth [3], has recently become increasingly urgent. As reviewed by Ben Amar and Bianca [4], various theoretical, experimental and in-silico models have been developed to understand the fibrosis process, and their implications for therapeutic strategies have also been frequently demonstrated (e.g., [5-7]). In [4], these models are analyzed and sorted according to their approaches, and at the end of [4] a unified multi-scale approach is proposed to understand fibrosis. Since one of the major purposes of the extensive modeling of fibrosis is to shed light on therapeutic strategies, theoretical, experimental and in-silico studies of anti-fibrosis therapies should be conducted more intensively.
NASA Astrophysics Data System (ADS)
Kachapova, Farida
2016-07-01
Mathematical and computational models in biology and medicine help to improve diagnostics and medical treatments. The modeling of pathological fibrosis is reviewed by M. Ben Amar and C. Bianca in [4]. Pathological fibrosis is the process whereby excessive fibrous tissue is deposited on an organ or tissue during wound healing and can obliterate its normal function. In [4] the phenomena of fibrosis are briefly explained, including the causes, mechanism and management; research models of pathological fibrosis are described, compared and critically analyzed. Different models are suitable at different levels: molecular, cellular and tissue. The main goal of mathematical modeling of fibrosis is to predict the long-term behavior of the system depending on bifurcation parameters; there are two main trends: inhibition of fibrosis due to an active immune system and swelling of fibrosis because of a weak immune system.
NASA Astrophysics Data System (ADS)
Guerrini, Luca
2016-07-01
Martine Ben Amar and Carlo Bianca have written a valuable paper [1], a timely review of the different theoretical tools existing in the literature for the modeling of physiological and pathological fibrosis. The review [1] is written with clarity and in a simple way, which makes it understandable to a wide audience. The authors present an exhaustive exposition of the interplay among the different scholars working on the modeling of fibrosis diseases and a survey of the main theoretical approaches, among others ODE-based models, PDE-based models, models with internal structure, continuum mechanics approaches, and agent-based models. A critical analysis discusses their applicability, including advantages and disadvantages.
NASA Astrophysics Data System (ADS)
Pappalardo, Francesco; Pennisi, Marzio
2016-07-01
Fibrosis is a process in which excessive tissue formation in an organ follows the failure of a physiological reparative or reactive process. Mathematical and computational techniques may be used to improve the understanding of the mechanisms that lead to the disease and to test potential new treatments that may directly or indirectly have positive effects against fibrosis [1]. In this scenario, Ben Amar and Bianca [2] give us a broad picture of the existing mathematical and computational tools that have been used to model fibrotic processes at the molecular, cellular, and tissue levels. Among such techniques, agent-based models (ABMs) can make a valuable contribution to the understanding and better management of fibrotic diseases.
Tactical Synthesis Of Efficient Global Search Algorithms
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2009-01-01
Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a purpose similar to that of tactics used for determining indefinite integrals in calculus: they suggest possible ways to attack the problem.
Model Refinement Using Eigensystem Assignment
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.
2000-01-01
A novel approach for the refinement of finite-element-based analytical models of flexible structures is presented. The proposed approach models the possible refinements in the mass, damping, and stiffness matrices of the finite element model in the form of a constant gain feedback with acceleration, velocity, and displacement measurements, respectively. Once the free elements of the structural matrices have been defined, the problem of model refinement reduces to obtaining position, velocity, and acceleration gain matrices with appropriate sparsity that reassign a desired subset of the eigenvalues of the model, along with partial mode shapes, from their baseline values to those obtained from system identification test data. A sequential procedure is used to assign one conjugate pair of eigenvalues at each step using symmetric output feedback gain matrices, and the eigenvectors are partially assigned, while ensuring that the eigenvalues assigned in the previous steps are not disturbed. The procedure can also impose that gain matrices be dissipative to guarantee the stability of the refined model. A numerical example, involving finite element model refinement for a structural testbed at NASA Langley Research Center (Controls-Structures-Interaction Evolutionary model), is presented to demonstrate the feasibility of the proposed approach.
Zone refining of plutonium metal
Blau, M.S.
1994-08-01
The zone refining process was applied to Pu metal containing known amounts of impurities. Rod specimens of plutonium metal were melted into and contained in tantalum boats, each of which was passed horizontally through a three-turn, high-frequency coil in such a manner as to cause a narrow molten zone to pass through the Pu metal rod 10 times. The impurity elements Co, Cr, Fe, Ni, Np, U were found to move in the same direction as the molten zone as predicted by binary phase diagrams. The elements Al, Am, and Ga moved in the opposite direction of the molten zone as predicted by binary phase diagrams. As the impurity alloy was zone refined, δ-phase plutonium metal crystals were produced. The first few zone refining passes were more effective than each later pass because an oxide layer formed on the rod surface. There was no clear evidence of better impurity movement at the slower zone refining speed. Also, constant or variable coil power appeared to have no effect on impurity movement during a single run (10 passes). This experiment was the first step to developing a zone refining process for plutonium metal.
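The direction of impurity travel reported here is governed by the effective distribution coefficient k between solid and melt. As a concrete illustration, Pfann's classical single-pass relation gives the concentration profile left behind by one molten-zone pass; this is the textbook idealization (uniform zone, constant k), not the model fitted in this experiment:

```python
import math

def single_pass_profile(c0, k, zone_len, positions):
    """Impurity concentration C(x) after one molten-zone pass (Pfann's relation).

    c0: initial uniform concentration; k: effective distribution coefficient
    (k < 1: impurity travels with the zone; k > 1: opposite direction).
    Valid for x up to rod_length - zone_len."""
    return [c0 * (1.0 - (1.0 - k) * math.exp(-k * x / zone_len))
            for x in positions]
```

For k = 0.1 the leading end of the rod starts out at concentration k·c0 and the swept-along impurity piles up toward the far end, which is why repeated passes (10 here) progressively purify the bulk of the rod.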
Evolutionary Optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: genetic algorithms and differential evolution to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
Evolutionary optimization of a Genetically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, Patrick V.; Tinker, Michael L.; Dozier, Gerry
2005-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection and stress on a benchmark cantilever truss problem. Predominantly traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This paper will present a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: Genetic Algorithms and Differential Evolution to successfully optimize a benchmark structural optimization problem. A non-traditional solution to the benchmark problem is presented in this paper, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
A parallel algorithm for the non-symmetric eigenvalue problem
Dongarra, J.; Sidani, M. (Dept. of Computer Science, Oak Ridge National Lab., TN)
1991-12-01
This paper describes a parallel algorithm for computing the eigenvalues and eigenvectors of a non-symmetric matrix. The algorithm is based on a divide-and-conquer procedure and uses an iterative refinement technique.
Bauxite Mining and Alumina Refining
Frisch, Neale; Olney, David
2014-01-01
Objective: To describe bauxite mining and alumina refining processes and to outline the relevant physical, chemical, biological, ergonomic, and psychosocial health risks. Methods: Review article. Results: The most important risks relate to noise, ergonomics, trauma, and caustic soda splashes of the skin/eyes. Other risks of note relate to fatigue, heat, and solar ultraviolet and, for some operations, tropical diseases, venomous/dangerous animals, and remote locations. Exposures to bauxite dust, alumina dust, and caustic mist in contemporary best-practice bauxite mining and alumina refining operations have not been demonstrated to be associated with clinically significant decrements in lung function. Exposures to bauxite dust and alumina dust at such operations are also not associated with the incidence of cancer. Conclusions: A range of occupational health risks in bauxite mining and alumina refining require the maintenance of effective control measures. PMID:24806720
Elliptic Solvers with Adaptive Mesh Refinement on Complex Geometries
Phillip, B.
2000-07-24
Adaptive Mesh Refinement (AMR) is a numerical technique for locally tailoring the resolution of computational grids. Multilevel algorithms for solving elliptic problems on adaptive grids include the Fast Adaptive Composite grid method (FAC) and its parallel variants (AFAC and AFACx). Theory that confirms the independence of the convergence rates of FAC and AFAC from the number of refinement levels exists under certain ellipticity and approximation property conditions. Similar theory needs to be developed for AFACx. The effectiveness of multigrid-based elliptic solvers such as FAC, AFAC, and AFACx on adaptively refined overlapping grids is not clearly understood. Finally, a non-trivial eye model problem will be solved by combining the power of using overlapping grids for complex moving geometries, AMR, and multilevel elliptic solvers.
Multigrid for locally refined meshes
Shapira, Y.
1999-12-01
A multilevel method for the solution of finite element schemes on locally refined meshes is introduced. For isotropic diffusion problems, the condition number of the two-level method is bounded independently of the mesh size and the discontinuities in the diffusion coefficient. The curves of discontinuity need not be aligned with the coarse mesh. Indeed, numerical applications with 10 levels of local refinement yield a rapid convergence of the corresponding 10-level, multigrid V-cycle and other multigrid cycles which are more suitable for parallelism even when the discontinuities are invisible on most of the coarse meshes.
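The flavor of such a multilevel cycle can be sketched for the simplest possible setting: 1D Poisson (-u'' = f, zero Dirichlet boundaries) with weighted Jacobi smoothing and one coarse level. This is a generic two-level toy with invented names, not the paper's method for locally refined meshes with discontinuous coefficients:

```python
def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted Jacobi smoothing for -u'' = f with u = 0 at both ends."""
    for _ in range(sweeps):
        u = [0.0] + [(1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
                     for i in range(1, len(u) - 1)] + [0.0]
    return u

def two_level(u, f, h, cycles):
    """Two-level cycle: smooth, restrict the residual, solve coarsely, correct."""
    n = len(u) - 1                      # number of fine intervals (even)
    for _ in range(cycles):
        u = jacobi(u, f, h, 3)          # pre-smooth
        # residual r = f - A u of the 3-point stencil
        r = [0.0] + [f[i] + (u[i - 1] - 2 * u[i] + u[i + 1]) / (h * h)
                     for i in range(1, n)] + [0.0]
        # full-weighting restriction to the coarse grid
        rc = [0.0] + [0.25 * (r[2 * i - 1] + 2 * r[2 * i] + r[2 * i + 1])
                      for i in range(1, n // 2)] + [0.0]
        # coarse solve: many sweeps stand in for a recursive multilevel call
        ec = jacobi([0.0] * (n // 2 + 1), rc, 2 * h, 400)
        for i in range(n // 2):         # linear-interpolation prolongation + correction
            u[2 * i] += ec[i]
            u[2 * i + 1] += 0.5 * (ec[i] + ec[i + 1])
        u = jacobi(u, f, h, 3)          # post-smooth
    return u
```

Replacing the brute-force coarse solve with a recursive call to the same routine turns this into the V-cycle; the paper's point is that on locally refined meshes the analogous cycle keeps a condition number independent of mesh size and coefficient jumps.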
Conformal refinement of unstructured quadrilateral meshes
Garimella, Rao
2009-01-01
We present a multilevel adaptive refinement technique for unstructured quadrilateral meshes in which the mesh is kept conformal at all times. This means that the refined mesh, like the original, is formed of only quadrilateral elements that intersect strictly along edges or at vertices, i.e., vertices of one quadrilateral element do not lie in an edge of another quadrilateral. Elements are refined using templates based on 1:3 refinement of edges. We demonstrate that by careful design of the refinement and coarsening strategy, we can maintain high quality elements in the refined mesh. We demonstrate the method on a number of examples with dynamically changing refinement regions.
Structured adaptive mesh refinement on the connection machine
Berger, M.J. (Courant Inst. of Mathematical Sciences); Saltzman, J.S.
1993-01-01
Adaptive mesh refinement has proven itself to be a useful tool in a large collection of applications. By refining only a small portion of the computational domain, computational savings of up to a factor of 80 in 3-dimensional calculations have been obtained on serial machines. A natural question is, can this algorithm be used on massively parallel machines and still achieve the same efficiencies? We have designed a data layout scheme for mapping grid points to processors that preserves locality and minimizes global communication for the CM-200. The effect of the data layout scheme is that at the finest level nearby grid points from adjacent grids in physical space are in adjacent memory locations. Furthermore, coarse grid points are arranged in memory to be near their associated fine grid points. We show applications of the algorithm to inviscid compressible fluid flow in two space dimensions.
Method for refining contaminated iridium
Heshmatpour, B.; Heestand, R.L.
1982-08-31
Contaminated iridium is refined by alloying it with an alloying agent selected from the group consisting of manganese and an alloy of manganese and copper, and then dissolving the alloying agent from the formed alloy to provide a purified iridium powder.
Method for refining contaminated iridium
Heshmatpour, Bahman; Heestand, Richard L.
1983-01-01
Contaminated iridium is refined by alloying it with an alloying agent selected from the group consisting of manganese and an alloy of manganese and copper, and then dissolving the alloying agent from the formed alloy to provide a purified iridium powder.
ERIC Educational Resources Information Center
Hazelton, Alexander E.; And Others
Through joint planning with a number of school districts and the Region X Title I Technical Assistance Center, and with the help of a Title I Refinement grant, Alaska has developed a system of data storage and retrieval using microcomputers that assists small school districts in the evaluation and reporting of their Title I programs. Although this…
Vacuum Refining of Molten Silicon
NASA Astrophysics Data System (ADS)
Safarian, Jafar; Tangstad, Merete
2012-12-01
Metallurgical fundamentals for vacuum refining of molten silicon and the behavior of different impurities in this process are studied. A novel mass transfer model for the removal of volatile impurities from silicon in vacuum induction refining is developed. The boundary conditions for the vacuum refining system, the equilibrium partial pressures of the dissolved elements and their actual partial pressures under vacuum, are determined through thermodynamic and kinetic approaches. It is shown that the vacuum removal kinetics differs among the impurities and is controlled by one, two, or all three of the sequential reaction mechanisms: mass transfer in a melt boundary layer, chemical evaporation at the melt surface, and mass transfer in the gas phase. Vacuum refining experimental results from this study and literature data are used to validate the model. The model provides reliable results and correlates well with the experimental data for many volatile elements. The kinetics of removal of phosphorus, an important impurity in the production of solar-grade silicon, is properly predicted by the model, and it is observed that phosphorus elimination from silicon increases significantly with increasing process temperature.
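The "sequential mechanisms" structure described here is the standard resistances-in-series form for mass transfer: the overall coefficient is dominated by the slowest step. A minimal sketch with illustrative coefficients (generic textbook form, not the paper's fitted model):

```python
import math

def overall_coefficient(k_melt, k_evap, k_gas):
    """Resistances in series: 1/k_tot = 1/k_melt + 1/k_evap + 1/k_gas.
    The smallest individual coefficient (slowest step) dominates."""
    return 1.0 / (1.0 / k_melt + 1.0 / k_evap + 1.0 / k_gas)

def impurity_fraction(k_tot, area, volume, t):
    """First-order removal from a well-mixed melt: C(t)/C0 = exp(-k_tot * (A/V) * t)."""
    return math.exp(-k_tot * area / volume * t)
```

Raising the temperature mainly raises the evaporation coefficient, which is consistent with the reported acceleration of phosphorus removal at higher process temperatures.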
GRAIN REFINEMENT OF URANIUM BILLETS
Lewis, L.
1964-02-25
A method of refining the grain structure of massive uranium billets without resort to forging is described. The method consists in the steps of beta- quenching the billets, annealing the quenched billets in the upper alpha temperature range, and extrusion upset of the billets to an extent sufficient to increase the cross sectional area by at least 5 per cent. (AEC)
Multigrid for refined triangle meshes
Shapira, Yair
1997-02-01
A two-level preconditioning method for the solution of (locally) refined finite element schemes using triangle meshes is introduced. In the isotropic SPD case, it is shown that the condition number of the preconditioned stiffness matrix is bounded uniformly for all sufficiently regular triangulations. This is also verified numerically for an isotropic diffusion problem with highly discontinuous coefficients.
Bayesian ensemble refinement by replica simulations and reweighting.
Hummer, Gerhard; Köfinger, Jürgen
2015-12-28
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations. PMID:26723635
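For discrete configurations and a single ensemble-averaged observable, the maximum-entropy limit described above reduces to a one-parameter reweighting problem: weights of the form w_i ∝ exp(-λ O_i), with the Lagrange multiplier λ chosen so the reweighted average hits the experimental value. A minimal sketch (hypothetical function names, bisection on λ; not the EROS or BioEn code):

```python
import math

def maxent_weights(obs, target, lam_lo=-50.0, lam_hi=50.0, tol=1e-10):
    """Maximum-entropy reweighting of an ensemble to match one target average.

    obs: observable value O_i for each configuration; target: desired average.
    Returns normalized weights w_i proportional to exp(-lam * O_i)."""
    def avg(lam):
        w = [math.exp(-lam * o) for o in obs]
        return sum(wi * oi for wi, oi in zip(w, obs)) / sum(w)

    # avg(lam) decreases monotonically in lam, so bisection converges.
    for _ in range(200):
        mid = 0.5 * (lam_lo + lam_hi)
        if avg(mid) > target:
            lam_lo = mid
        else:
            lam_hi = mid
        if lam_hi - lam_lo < tol:
            break
    lam = 0.5 * (lam_lo + lam_hi)
    w = [math.exp(-lam * o) for o in obs]
    s = sum(w)
    return [wi / s for wi in w]
```

The replica formulation in the paper reaches the same optimum dynamically: restraining the replica-averaged observable with a strength that scales linearly in the number of replicas drives the sampled ensemble toward exactly this reweighted distribution.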
Bayesian ensemble refinement by replica simulations and reweighting
NASA Astrophysics Data System (ADS)
Hummer, Gerhard; Köfinger, Jürgen
2015-12-01
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
Refining image segmentation by integration of edge and region data
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline; Tilton, James C.
1992-01-01
An iterative parallel region growing (IPRG) algorithm previously developed by Tilton (1989) produces hierarchical segmentations of images from finer to coarser resolution. An ideal segmentation does not always correspond to one single iteration but to several different ones, each one producing the 'best' result for a separate part of the image. With the goal of finding this ideal segmentation, the results of the IPRG algorithm are refined by utilizing some additional information, such as edge features, and by interpreting the tree of hierarchical regions.
40 CFR 80.235 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2010 CFR
2010-07-01
... a small refiner? 80.235 Section 80.235 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Provisions § 80.235 How does a refiner obtain approval as a small refiner? (a) Applications for small refiner....225(d), which must be submitted by June 1, 2002. (b) Applications for small refiner status must...
40 CFR 80.235 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2011 CFR
2011-07-01
... a small refiner? 80.235 Section 80.235 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Provisions § 80.235 How does a refiner obtain approval as a small refiner? (a) Applications for small refiner....225(d), which must be submitted by June 1, 2002. (b) Applications for small refiner status must...
Time Critical Isosurface Refinement and Smoothing
Pascucci, V.; Bajaj, C.L.
2000-07-10
Multi-resolution data structures and algorithms are key in visualization to achieve real-time interaction with large data sets. Research has been primarily focused on the off-line construction of such representations, mostly using decimation schemes. Drawbacks of this class of approaches include: (i) the inability to maintain interactivity when the displayed surface changes frequently, and (ii) the inability to control the global geometry of the embedding (no self-intersections) of any approximated level of detail of the output surface. In this paper we introduce a technique for on-line construction and smoothing of progressive isosurfaces. Our hybrid approach combines the flexibility of a progressive multi-resolution representation with the advantages of a recursive subdivision scheme. Our main contributions are: (i) a progressive algorithm that builds a multi-resolution surface by successive refinements so that a coarse representation of the output is generated as soon as a coarse representation of the input is provided, (ii) application of the same scheme to smooth the surface by means of a 3D recursive subdivision rule, and (iii) a multi-resolution representation where any adaptively selected level-of-detail surface is guaranteed to be free of self-intersections.
CONSTRAINED-TRANSPORT MAGNETOHYDRODYNAMICS WITH ADAPTIVE MESH REFINEMENT IN CHARM
Miniati, Francesco; Martin, Daniel F. E-mail: DFMartin@lbl.gov
2011-07-01
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.
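The divergence-free property that constrained transport maintains can be demonstrated in a few lines. The following is an illustrative Python sketch (our own, not the CHARM implementation): a face-centered field built from a node-centered vector potential has a staggered-grid divergence that vanishes to round-off, which is the invariant the reflux-curl operation preserves across refinement boundaries.

```python
def face_fields_from_potential(Az, dx, dy):
    """Build face-centered (bx, by) from a node-centered vector potential Az
    (a list of lists, (nx+1) x (ny+1)). bx = dAz/dy on x-faces and
    by = -dAz/dx on y-faces, the 2-D constrained-transport construction."""
    nx = len(Az) - 1
    ny = len(Az[0]) - 1
    bx = [[(Az[i][j + 1] - Az[i][j]) / dy for j in range(ny)] for i in range(nx + 1)]
    by = [[-(Az[i + 1][j] - Az[i][j]) / dx for j in range(ny + 1)] for i in range(nx)]
    return bx, by

def max_divergence(bx, by, dx, dy):
    """Max |div B| over cells, using the matching staggered differences;
    for fields built from a potential this is round-off, not truncation."""
    nx, ny = len(by), len(bx[0])
    return max(
        abs((bx[i + 1][j] - bx[i][j]) / dx + (by[i][j + 1] - by[i][j]) / dy)
        for i in range(nx) for j in range(ny)
    )
```

Any node-centered potential whatsoever yields a field whose discrete divergence is at machine precision, which is why CT schemes update the field through edge-centered electric fields (a discrete curl) rather than cell-by-cell.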
Constrained-transport Magnetohydrodynamics with Adaptive Mesh Refinement in CHARM
NASA Astrophysics Data System (ADS)
Miniati, Francesco; Martin, Daniel F.
2011-07-01
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.
Parallel Adaptive Mesh Refinement Library
NASA Technical Reports Server (NTRS)
MacNeice, Peter; Olson, Kevin
2005-01-01
Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
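A minimal sketch of the kind of block tree PARAMESH manages (hypothetical Python for the 2-D quad-tree case; the class and function names are ours, not the library's Fortran 90 API):

```python
class Block:
    """One node of a block-structured refinement tree: a logically Cartesian
    sub-grid that is either a leaf holding data or has 4 children (quad-tree)."""
    def __init__(self, x0, y0, size, level=0):
        self.x0, self.y0, self.size, self.level = x0, y0, size, level
        self.children = []

    def refine(self):
        """Split this block into 4 half-size children, one level deeper."""
        h = self.size / 2
        self.children = [Block(self.x0 + i * h, self.y0 + j * h, h, self.level + 1)
                         for j in (0, 1) for i in (0, 1)]

    def leaves(self):
        """Yield the leaf blocks, which together tile the domain."""
        if not self.children:
            yield self
        else:
            for c in self.children:
                yield from c.leaves()

def refine_where(block, needs_refinement, max_level):
    """Recursively refine leaves flagged by the application's own criterion,
    so resolution varies to satisfy the demands of the application."""
    if block.children:
        for c in block.children:
            refine_where(c, needs_refinement, max_level)
    elif block.level < max_level and needs_refinement(block):
        block.refine()
        for c in block.children:
            refine_where(c, needs_refinement, max_level)
```

Refining twice toward a single point leaves most of the domain covered by a few coarse leaf blocks, which is the memory saving AMR buys over uniform refinement.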
Entitlements exemptions for new refiners
Not Available
1980-02-29
The practice of exempting start-up inventories from entitlement requirements for new refiners has been called into question by the Office of Hearings and Appeals and other responsible Departmental officials. ERA, with the assistance of the Office of General Counsel, is considering resolving the matter through rulemaking; however, by October 26, 1979 no rulemaking had been published. Because of the absence of published standards for use in granting these entitlements to new refineries, undue reliance was placed on individual judgements that could result in inequities to applicants and increase the potential for fraud and abuse. Recommendations are given as follows: (1) if the program for granting entitlements exemptions to new refiners is continued, the Administrator, ERA, should promptly take action to adopt an appropriate regulation to formalize the program by establishing standards and controls that will assure consistent and equitable application; in addition, files containing adjustments given to new refiners should be made complete to support benefits already allowed; and (2) whether the program is continued or discontinued, the General Counsel and the Administrator, ERA, should coordinate on how to evaluate the propriety of inventory adjustments previously granted to new refineries.
Reformulated Gasoline Market Affected Refiners Differently, 1995
1996-01-01
This article focuses on the costs of producing reformulated gasoline (RFG) as experienced by different types of refiners and on how these refiners fared this past summer, given the prices for RFG at the refinery gate.
A Refined Cauchy-Schwarz Inequality
ERIC Educational Resources Information Center
Mercer, Peter R.
2007-01-01
The author presents a refinement of the Cauchy-Schwarz inequality. He shows his computations in which refinements of the triangle inequality and its reverse inequality are obtained for nonzero x and y in a normed linear space.
Fast transport simulation with an adaptive grid refinement.
Haefner, Frieder; Boy, Siegrun
2003-01-01
One of the main difficulties in transport modeling and calibration is the extraordinarily long computing times necessary for simulation runs. Improved execution time is a prerequisite for calibration in transport modeling. In this paper we investigate the problem of code acceleration using an adaptive grid refinement, neglecting subdomains, and devising a method by which the Courant condition can be ignored while maintaining accurate solutions. Grid refinement is based on dividing selected cells into regular subcells and including the balance equations of subcells in the equation system. The connection of coarse and refined cells satisfies the mass balance with an interpolation scheme that is implicitly included in the equation system. The refined subdomain can move with the average transport velocity of the subdomain. Very small time steps are required on a fine or a refined grid, because of the combined effect of the Courant and Peclet conditions. Therefore, we have developed a special upwind technique in small grid cells with high velocities (velocity suppression). We have neglected grid subdomains with very small concentration gradients (zero suppression). The resulting software, MODCALIF, is a three-dimensional, modularly constructed FORTRAN code. For convenience, the package names used by the well-known MODFLOW and MT3D computer programs are adopted, and the same input file structure and format is used, but the program presented here is separate and independent. Also, MODCALIF includes algorithms for variable density modeling and model calibration. The method is tested by comparison with an analytical solution, and illustrated by means of a two-dimensional theoretical example and three-dimensional simulations of the variable-density Cape Cod and SALTPOOL experiments. Crossing from fine to coarse grid produces numerical dispersion when the whole subdomain of interest is refined; however, we show that accurate solutions can be obtained using a fraction of the
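The time-step restriction that motivates the paper's velocity suppression can be made concrete with a small sketch (illustrative Python; names are ours, not MODCALIF's): an explicit transport step must satisfy both the Courant limit and a diffusive/dispersive limit, and both shrink as cells are refined.

```python
def stable_time_step(dx, velocity, dispersion, courant=1.0):
    """Largest explicit step honoring the Courant condition
    (dt <= C * dx / |v|) and the dispersive limit (dt <= dx^2 / (2*D)).
    On refined subcells dx shrinks, so dt collapses -- the combined
    Courant/Peclet cost that special upwinding in fast cells avoids."""
    limits = []
    if velocity:
        limits.append(courant * dx / abs(velocity))
    if dispersion:
        limits.append(dx * dx / (2.0 * dispersion))
    if not limits:
        raise ValueError("need a nonzero velocity or dispersion")
    return min(limits)
```

Halving dx halves the advective limit but quarters the dispersive one, so refined subdomains quickly become the bottleneck of the whole simulation.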
Vortex-dominated conical-flow computations using unstructured adaptively-refined meshes
NASA Technical Reports Server (NTRS)
Batina, John T.
1989-01-01
A conical Euler/Navier-Stokes algorithm is presented for the computation of vortex-dominated flows. The flow solver involves a multistage Runge-Kutta time stepping scheme which uses a finite-volume spatial discretization on an unstructured grid made up of triangles. The algorithm also employs an adaptive mesh refinement procedure which enriches the mesh locally to more accurately resolve the vortical flow features. Results are presented for several highly-swept delta wing and circular cone cases at high angles of attack and at supersonic freestream flow conditions. Accurate solutions were obtained more efficiently when adaptive mesh refinement was used in contrast with refining the grid globally. The paper presents descriptions of the conical Euler/Navier-Stokes flow solver and adaptive mesh refinement procedures along with results which demonstrate the capability.
Firing of pulverized solvent refined coal
Derbidge, T. Craig; Mulholland, James A.; Foster, Edward P.
1986-01-01
An air-purged burner for the firing of pulverized solvent refined coal is constructed and operated such that the solvent refined coal can be fired without the coking thereof on the burner components. The air-purged burner is designed for the firing of pulverized solvent refined coal in a tangentially fired boiler.
Grain Refinement of Deoxidized Copper
NASA Astrophysics Data System (ADS)
Balart, María José; Patel, Jayesh B.; Gao, Feng; Fan, Zhongyun
2016-08-01
This study reports the current status of grain refinement of copper accompanied in particular by a critical appraisal of grain refinement of phosphorus-deoxidized, high residual P (DHP) copper microalloyed with 150 ppm Ag. Some deviations exist in terms of the growth restriction factor (Q) framework, on the basis of empirical evidence reported in the literature for grain size measurements of copper with individual additions of 0.05, 0.1, and 0.5 wt pct of Mo, In, Sn, Bi, Sb, Pb, and Se, cast under a protective atmosphere of pure Ar and water quenching. The columnar-to-equiaxed transition (CET) has been observed in copper, with an individual addition of 0.4B and with combined additions of 0.4Zr-0.04P and 0.4Zr-0.04P-0.015Ag and, in a previous study, with combined additions of 0.1Ag-0.069P (in wt pct). CETs in these B- and Zr-treated casts have been ascribed to changes in the morphology and chemistry of particles, concurrently in association with free solute type and availability. No further grain-refining action was observed due to microalloying additions of B, Mg, Ca, Zr, Ti, Mn, In, Fe, and Zn (~0.1 wt pct) with respect to DHP-Cu microalloyed with Ag, and therefore are no longer relevant for the casting conditions studied. The critical microalloying element for grain size control in deoxidized copper and in particular DHP-Cu is Ag.
Solvent refined coal (SRC) process
Not Available
1980-12-01
This report summarizes the progress of the Solvent Refined Coal (SRC) project by The Pittsburg and Midway Coal Mining Co. at the SRC Pilot Plant in Fort Lewis, Washington and the Gulf Science and Technology Company Process Development Unit (P-99) in Harmarville, Pennsylvania, for the Department of Energy during the month of October, 1980. The Fort Lewis Pilot Plant was shut down the entire month of October, 1980 for inspection and maintenance. PDU P-99 completed two runs during October investigating potential start-up modes for the Demonstration Plant.
Winter, V.L.; Berg, R.S.; Dalton, L.J.
1998-06-01
When designing a high consequence system, considerable care should be taken to ensure that the system cannot easily be placed into a high consequence failure state. A formal system design process should include a model that explicitly shows the complete state space of the system (including failure states) as well as those events (e.g., abnormal environmental conditions, component failures, etc.) that can cause a system to enter a failure state. In this paper the authors present such a model and formally develop a notion of risk-based refinement with respect to the model.
Fully Threaded Tree for Adaptive Refinement Fluid Dynamics Simulations
NASA Technical Reports Server (NTRS)
Khokhlov, A. M.
1997-01-01
A fully threaded tree (FTT) for adaptive refinement of regular meshes is described. By using a tree threaded at all levels, tree traversals for finding nearest neighbors are avoided. All operations on a tree, including tree modifications, are O(N), where N is the number of cells, and are performed in parallel. An efficient implementation of the tree is described that requires 2N words of memory. A filtering algorithm for removing high frequency noise during mesh refinement is described. A FTT can be used in various numerical applications. In this paper, it is applied to the integration of the Euler equations of fluid dynamics. An adaptive mesh time stepping algorithm is described in which different time steps are used at different levels of the tree. Time stepping and mesh refinement are interleaved to avoid the extensive buffer layers of fine mesh which were otherwise required ahead of moving shocks. Test examples are presented, and the FTT performance is evaluated. A three-dimensional simulation of the interaction of a shock wave and a spherical bubble is carried out that shows the development of azimuthal perturbations on the bubble surface.
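The interleaving of time steps across tree levels can be sketched as a short recursion (our simplified illustration, not the FTT code): each finer level takes two half-size steps per coarse step, so every level reaches the same physical time before the hierarchy is synchronized and possibly re-refined.

```python
def advance(level, dt, max_level, log):
    """Subcycled advance of a refinement hierarchy: advance this level by dt,
    then recursively take two dt/2 steps on the next finer level. `log`
    records (level, dt) pairs in the order the advances would occur."""
    log.append((level, dt))          # advance all cells on this level by dt
    if level < max_level:
        advance(level + 1, dt / 2, max_level, log)
        advance(level + 1, dt / 2, max_level, log)
```

With L levels below the root, one root step triggers 2^L fine-level steps, which is why interleaving refinement with stepping (rather than pre-refining wide buffers) pays off ahead of moving shocks.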
Three-dimensional Hybrid Continuum-Atomistic Simulations for Multiscale Hydrodynamics
Wijesinghe, S; Hornung, R; Garcia, A; Hadjiconstantinou, N
2004-04-15
We present an adaptive mesh and algorithmic refinement (AMAR) scheme for modeling multi-scale hydrodynamics. The AMAR approach extends standard conservative adaptive mesh refinement (AMR) algorithms by providing a robust flux-based method for coupling an atomistic fluid representation to a continuum model. The atomistic model is applied locally in regions where the continuum description is invalid or inaccurate, such as near strong flow gradients and at fluid interfaces, or when the continuum grid is refined to the molecular scale. The need for such ''hybrid'' methods arises from the fact that hydrodynamics modeled by continuum representations are often under-resolved or inaccurate while solutions generated using molecular resolution globally are not feasible. In the implementation described herein, Direct Simulation Monte Carlo (DSMC) provides an atomistic description of the flow and the compressible two-fluid Euler equations serve as our continuum-scale model. The AMR methodology provides local grid refinement while the algorithm refinement feature allows the transition to DSMC where needed. The continuum and atomistic representations are coupled by matching fluxes at the continuum-atomistic interfaces and by proper averaging and interpolation of data between scales. Our AMAR application code is implemented in C++ and is built upon the SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) framework developed at Lawrence Livermore National Laboratory. SAMRAI provides the parallel adaptive gridding algorithm and enables the coupling between the continuum and atomistic methods.
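The flux-matching coupling can be illustrated with a 1-D toy (our own sketch with an assumed sign convention, not the SAMRAI-based implementation): the continuum cell bordering the particle region receives a correction so that the amount of conserved quantity it exchanged agrees with the time-averaged flux actually carried by particles crossing the interface.

```python
def interface_correction(continuum_flux, particle_fluxes, dt, dx):
    """Conservative coupling at a continuum-particle interface (1-D sketch).
    `particle_fluxes` are flux samples measured from particle crossings
    during the step; their average replaces the continuum solver's estimate.
    Sign convention (ours): flux is positive into the continuum cell.
    Returns the increment to add to that cell's conserved density."""
    particle_avg = sum(particle_fluxes) / len(particle_fluxes)
    return (dt / dx) * (particle_avg - continuum_flux)
```

When the particle average matches the continuum flux the correction vanishes, so the hybrid reduces to the plain continuum update away from any mismatch, mirroring how AMR refluxing reduces to nothing when coarse and fine fluxes agree.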
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2011 CFR
2011-07-01
... a small refiner? 80.1340 Section 80.1340 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1340 How does a refiner obtain approval as a small refiner? (a) Applications for...
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2010 CFR
2010-07-01
... a small refiner? 80.1340 Section 80.1340 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner Provisions § 80.1340 How does a refiner obtain approval as a small refiner? (a) Applications for...
Introducing robustness to maximum-likelihood refinement of electron-microscopy data
Scheres, Sjors H. W.; Carazo, José-María
2009-07-01
An expectation-maximization algorithm for maximum-likelihood refinement of electron-microscopy images is presented that is based on fitting mixtures of multivariate t-distributions. Compared with the conventionally employed Gaussian mixture model, the t-distribution provides robustness against outliers in the data. The novel algorithm has intrinsic characteristics for providing robustness against atypical observations in the data, which is illustrated using an experimental test set with artificially generated outliers. Tests on experimental data revealed only minor differences in two-dimensional classifications, while three-dimensional classification with the new algorithm gave stronger elongation factor G density in the corresponding class of a structurally heterogeneous ribosome data set than the conventional algorithm for Gaussian mixtures.
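The down-weighting that gives the t-distribution its robustness is easy to see in one dimension. Below is a toy EM location estimate (our sketch, not the electron-microscopy implementation): each observation gets a weight (nu + 1)/(nu + z^2), so far-out observations contribute little, whereas a Gaussian model would weight every point equally.

```python
def robust_mean(xs, nu=4.0, iters=50):
    """EM estimate of the location of a Student-t with fixed dof `nu`.
    E-step: weight w_i = (nu + 1) / (nu + z_i^2) with z_i the standardized
    residual; M-step: weighted mean and a scale update. Outliers receive
    small w_i, so they barely move the estimate."""
    mu = sum(xs) / len(xs)
    sigma2 = sum((x - mu) ** 2 for x in xs) / len(xs) or 1.0
    for _ in range(iters):
        w = [(nu + 1.0) / (nu + (x - mu) ** 2 / sigma2) for x in xs]
        mu = sum(wi * x for wi, x in zip(w, xs)) / sum(w)
        sigma2 = sum(wi * (x - mu) ** 2 for wi, x in zip(w, xs)) / len(xs) or sigma2
    return mu
```

On five points near zero plus one at 100, the plain mean lands near 17 while the t-based estimate stays near zero, which is the behavior exploited for atypical particle images.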
Zone refining of plutonium metal
1997-05-01
The purpose of this study was to investigate zone refining techniques for the purification of plutonium metal. The redistribution of 10 impurity elements from zone melting was examined. Four tantalum boats were loaded with plutonium impurity alloy, placed in a vacuum furnace, heated to 700°C, and held at temperature for one hour. Ten passes were made with each boat. Metallographic and chemical analyses performed on the plutonium rods showed that, after 10 passes, moderate movement of certain elements was achieved. Molten zone speeds of 1 or 2 inches per hour had no effect on impurity element movement. Likewise, the application of constant or variable power had no effect on impurity movement. The study implies that development of a zone refining process to purify plutonium is feasible. Development of a process will be hampered by two factors: (1) the effect on impurity element redistribution of the oxide layer formed on the exposed surface of the material is not understood, and (2) the tantalum container material is not inert in the presence of plutonium. Cold boat studies are planned, with higher temperature and vacuum levels, to determine the effect of these factors. 5 refs., 1 tab., 5 figs.
Elliptic Solvers for Adaptive Mesh Refinement Grids
Quinlan, D.J.; Dendy, J.E., Jr.; Shapira, Y.
1999-06-03
We are developing multigrid methods that will efficiently solve elliptic problems with anisotropic and discontinuous coefficients on adaptive grids. The final product will be a library that provides for the simplified solution of such problems. This library will directly benefit the efforts of other Laboratory groups. The focus of this work is research on serial and parallel elliptic algorithms and the inclusion of our black-box multigrid techniques into this new setting. The approach applies the Los Alamos object-oriented class libraries that greatly simplify the development of serial and parallel adaptive mesh refinement applications. In the final year of this LDRD, we focused on putting the software together; in particular we completed the final AMR++ library, we wrote tutorials and manuals, and we built example applications. We implemented the Fast Adaptive Composite Grid method as the principal elliptic solver. We presented results at the Overset Grid Conference and other more AMR specific conferences. We worked on optimization of serial and parallel performance and published several papers on the details of this work. Performance remains an important issue and is the subject of continuing research work.
Workshop on algorithms for macromolecular modeling. Final project report, June 1, 1994--May 31, 1995
Leimkuhler, B.; Hermans, J.; Skeel, R.D.
1995-07-01
A workshop was held on algorithms and parallel implementations for macromolecular dynamics, protein folding, and structural refinement. This document contains abstracts and brief reports from that workshop.
NASA Technical Reports Server (NTRS)
Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.
1995-01-01
Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods has been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of boundary geometry is important. The complex geometry is represented by using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.
Materials refining on the Moon
NASA Astrophysics Data System (ADS)
Landis, Geoffrey A.
2007-05-01
Oxygen, metals, silicon, and glass are raw materials that will be required for long-term habitation and production of structural materials and solar arrays on the Moon. A process sequence is proposed for refining these materials from lunar regolith, consisting of separating the required materials from lunar rock with fluorine. The fluorine is brought to the Moon in the form of potassium fluoride, and is liberated from the salt by electrolysis in a eutectic salt melt. Tetrafluorosilane produced by this process is reduced to silicon by a plasma reduction stage; the fluorine salts are reduced to metals by reaction with metallic potassium. Fluorine is recovered from residual MgF2 and CaF2 by reaction with K2O.
Adaptive mesh refinement in titanium
Colella, Phillip; Wen, Tong
2005-01-21
In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study in which we implement a subset of Chombo's functionality in Titanium. Chombo is a software package applying the Adaptive Mesh Refinement methodology to numerical Partial Differential Equations at the production level. Chombo takes a library approach to parallel programming (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation based on two grid configurations from a real application. Also provided are the counts of lines of code from both sides.
Block-structured adaptive mesh refinement - theory, implementation and application
Deiterding, Ralf
2011-01-01
Structured adaptive mesh refinement (SAMR) techniques can enable cutting-edge simulations of problems governed by conservation laws. Focusing on the strictly hyperbolic case, these notes explain all algorithmic and mathematical details of a technically relevant implementation tailored for distributed memory computers. An overview of the background of commonly used finite volume discretizations for gas dynamics is included and typical benchmarks to quantify accuracy and performance of the dynamically adaptive code are discussed. Large-scale simulations of shock-induced realistic combustion in non-Cartesian geometry and shock-driven fluid-structure interaction with fully coupled dynamic boundary motion demonstrate the applicability of the discussed techniques for complex scenarios.
Silicon refinement by chemical vapor transport
NASA Technical Reports Server (NTRS)
Olson, J.
1984-01-01
Silicon refinement by chemical vapor transport is discussed. The operating characteristics of the purification process, including factors affecting the rate, purification efficiency and photovoltaic quality of the refined silicon were studied. The casting of large alloy plates was accomplished. A larger research scale reactor is characterized, and it is shown that a refined silicon product yields solar cells with near state of the art conversion efficiencies.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
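The basic operators the overview introduces (selection, crossover, mutation) fit in a short sketch (illustrative Python on our own toy one-max problem, not the software tool the report describes):

```python
import random

def evolve(fitness, n_bits=16, pop_size=30, generations=60, seed=1):
    """Minimal generational GA: tournament selection of two parents,
    one-point crossover, occasional bit-flip mutation. `fitness` maps a
    bit list to a score to maximize."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                       # tournament of two: fitter survives
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]   # one-point crossover
            if rng.random() < 0.05:       # rare bit-flip mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# One-max: the fitness of a bit string is simply its number of 1 bits.
best = evolve(sum)
```

The highly parallel character mentioned above comes from the fact that every fitness evaluation within a generation is independent, so populations map naturally onto parallel hardware.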
Refining the shallow slip deficit
NASA Astrophysics Data System (ADS)
Xu, Xiaohua; Tong, Xiaopeng; Sandwell, David T.; Milliner, Christopher W. D.; Dolan, James F.; Hollingsworth, James; Leprince, Sebastien; Ayoub, Francois
2016-03-01
Geodetic slip inversions for three major (Mw > 7) strike-slip earthquakes (1992 Landers, 1999 Hector Mine and 2010 El Mayor-Cucapah) show a 15-60 per cent reduction in slip near the surface (depth < 2 km) relative to the slip at deeper depths (4-6 km). This significant difference between surface coseismic slip and slip at depth has been termed the shallow slip deficit (SSD). The large magnitude of this deficit has been an enigma since it cannot be explained by shallow creep during the interseismic period or by triggered slip from nearby earthquakes. One potential explanation for the SSD is that the previous geodetic inversions lack data coverage close to the surface rupture, such that the shallow portions of the slip models are poorly resolved and generally underestimated. In this study, we improve the static coseismic slip inversion for these three earthquakes, especially at shallow depths, by: (1) including data capturing the near-fault deformation from optical imagery and SAR azimuth offsets; (2) refining the interferometric synthetic aperture radar processing with non-boxcar phase filtering, model-dependent range corrections, more complete phase unwrapping by SNAPHU (Statistical Non-linear Approach for Phase Unwrapping) assuming a maximum discontinuity and an on-fault correlation mask; (3) using more detailed, geologically constrained fault geometries and (4) incorporating additional campaign global positioning system (GPS) data. The refined slip models result in much smaller SSDs of 3-19 per cent. We suspect that the remaining minor SSD for these earthquakes likely reflects a combination of our elastic model's inability to fully account for near-surface deformation, which renders our estimates of shallow slip minima, and potentially small amounts of interseismic fault creep or triggered slip, which could 'make up' a small percentage of the coseismic SSD during the interseismic period. Our results indicate that it is imperative that slip inversions include
Adaptive Mesh Refinement for Microelectronic Device Design
NASA Technical Reports Server (NTRS)
Cwik, Tom; Lou, John; Norton, Charles
1999-01-01
Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchal elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smoothen and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of
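The selective h-refinement strategy described above can be sketched in a few lines (hypothetical Python; the error estimator is supplied by the application, and the names are ours): estimate a local error per element and flag only the worst fraction for subdivision, leaving smooth regions coarse.

```python
def flag_for_refinement(cells, estimator, fraction=0.3):
    """A posteriori-driven h-refinement flagging: `estimator(cell)` returns
    a local error estimate; the indices of the worst `fraction` of cells
    are returned for subdivision. Ties are broken by index order."""
    errs = sorted((estimator(c), i) for i, c in enumerate(cells))
    n_flag = max(1, int(round(fraction * len(cells))))
    return sorted(i for _, i in errs[-n_flag:])
```

A p-refinement variant would raise the polynomial order of the flagged elements instead of splitting them; h-p strategies choose between the two per element based on the estimated solution smoothness.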
Three-dimensional unstructured grid refinement and optimization using edge-swapping
NASA Technical Reports Server (NTRS)
Gandhi, Amar; Barth, Timothy
1993-01-01
This paper presents a three-dimensional (3-D) edge-swapping method based on local transformations. This method extends Lawson's edge-swapping algorithm into 3-D. The 3-D edge-swapping algorithm is employed for the purpose of refining and optimizing unstructured meshes according to arbitrary mesh-quality measures. Several criteria including Delaunay triangulations are examined. Extensions from two to three dimensions of several known properties of Delaunay triangulations are also discussed.
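The Delaunay criterion examined here rests on the incircle predicate: a shared edge is swapped when the vertex opposite it lies inside the circumcircle of the adjacent triangle. A 2-D sketch follows (illustrative Python; the 3-D extension tests circumspheres analogously):

```python
def in_circumcircle(a, b, c, d):
    """True if point d lies strictly inside the circumcircle of triangle
    (a, b, c), given in counter-clockwise order. Uses the standard 3x3
    incircle determinant; a positive sign means 'inside' for CCW input,
    and an edge violating this test is a candidate for swapping."""
    m = [[a[0] - d[0], a[1] - d[1], (a[0] - d[0]) ** 2 + (a[1] - d[1]) ** 2],
         [b[0] - d[0], b[1] - d[1], (b[0] - d[0]) ** 2 + (b[1] - d[1]) ** 2],
         [c[0] - d[0], c[1] - d[1], (c[0] - d[0]) ** 2 + (c[1] - d[1]) ** 2]]
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
           - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
           + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return det > 0
```

In floating point this determinant can misclassify nearly cocircular points, which is why production mesh codes pair it with exact or adaptive-precision arithmetic.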
Automated knowledge-base refinement
NASA Technical Reports Server (NTRS)
Mooney, Raymond J.
1994-01-01
Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.
i3Drefine Software for Protein 3D Structure Refinement and Its Assessment in CASP10
Bhattacharya, Debswapna; Cheng, Jianlin
2013-01-01
Protein structure refinement refers to the process of improving the quality of protein structures during modeling to bring them closer to their native states. Structure refinement has been drawing increasing attention in the community-wide Critical Assessment of techniques for Protein Structure prediction (CASP) experiments since its addition in the 8th CASP experiment. During the 9th and the recently concluded 10th CASP experiments, a consistent growth in the number of refinement targets and participating groups has been witnessed. Yet protein structure refinement remains a largely unsolved problem, with the majority of participating groups in the CASP refinement category failing to consistently improve the quality of structures issued for refinement. To address this need, we developed a completely automated and computationally efficient protein 3D structure refinement method, i3Drefine, based on an iterative and highly convergent energy minimization algorithm with a powerful all-atom composite physics- and knowledge-based force field and a hydrogen bonding (HB) network optimization technique. In the recent community-wide blind experiment, CASP10, i3Drefine (as 'MULTICOM-CONSTRUCT') was ranked as the best method in the server section as per the official assessment of the CASP10 experiment. Here we provide the community with free access to the i3Drefine software and systematically analyse the performance of i3Drefine in strict blind mode on the refinement targets issued in the CASP10 refinement category, comparing it with other state-of-the-art refinement methods participating in CASP10. Our analysis demonstrates that i3Drefine was the only fully automated server participating in CASP10 exhibiting consistent improvement over the initial structures in both global and local structural quality metrics. An executable version of i3Drefine is freely available at http://protein.rnet.missouri.edu/i3drefine/. PMID:23894517
Passive microwave algorithm development and evaluation
NASA Technical Reports Server (NTRS)
Petty, Grant W.
1995-01-01
The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.
Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics
Lomov, I; Pember, R; Greenough, J; Liu, B
2005-10-18
We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses the hierarchical, structured grid approach first developed by Berger and Oliger (1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, the fine grids are then advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors introduced during the separate advances. The methodology presented here is based on a single-grid algorithm developed for multimaterial gas dynamics by Colella et al. (1993), refined by Greenough et al. (1995), and extended to the solution of solid mechanics problems with significant strength by Lomov and Rubin (2003). The single-grid algorithm uses a second-order Godunov scheme with an approximate single-fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified because highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion is solely concerned with hydrodynamics modeling. Results from a number of simulations for flows with and without strength will be presented.
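The recursive advance described above (step the coarse grid, take multiple substeps on the fine grid to reach the same time, then synchronize) can be sketched as follows. The `Level` class and the stubbed `step`/`sync_from` methods are illustrative assumptions, not the Geodyn or Raptor implementation; in a real code `step` would be a Godunov update and `sync_from` a coarse-fine flux correction and averaging pass.

```python
class Level:
    """Toy stand-in for one AMR level in a nested grid hierarchy."""
    def __init__(self, ratio=1):
        self.ratio = ratio   # refinement ratio relative to the next-coarser level
        self.steps = 0       # number of time steps taken on this level
        self.time = 0.0

    def step(self, dt):
        # Stand-in for a real single-grid (e.g. Godunov) update.
        self.steps += 1
        self.time += dt

    def sync_from(self, fine):
        # Stand-in for coarse-fine synchronization: average fine data onto
        # the coarse grid and correct coarse fluxes at the interface.
        pass


def advance_hierarchy(levels, lev, dt):
    """Advance level `lev` by dt, recursively advancing each finer level
    with `ratio` substeps so all levels reach the same time, then sync."""
    levels[lev].step(dt)
    if lev + 1 < len(levels):
        r = levels[lev + 1].ratio
        for _ in range(r):
            advance_hierarchy(levels, lev + 1, dt / r)
        levels[lev].sync_from(levels[lev + 1])
```

With a refinement ratio of 2 per level, one coarse step of size dt triggers two fine steps of dt/2, four steps of dt/4 on the next level, and so on, with all levels meeting at the same simulation time before synchronization.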
Anomalies in the refinement of isoleucine
Berntsen, Karen R. M.; Vriend, Gert
2014-04-01
The side-chain torsion angles of isoleucines in X-ray protein structures are a function of resolution, secondary structure and refinement software. Detailing the standard torsion angles used in refinement software can improve protein structure refinement. A study of isoleucines in protein structures solved using X-ray crystallography revealed a series of systematic trends for the two side-chain torsion angles χ1 and χ2 dependent on the resolution, secondary structure and refinement software used. The average torsion angles for the nine rotamers were similar in high-resolution structures solved using either the REFMAC, CNS or PHENIX software. However, at low resolution these programs often refine towards somewhat different χ1 and χ2 values. Small systematic differences can be observed between refinement software that uses molecular dynamics-type energy terms (for example CNS) and software that does not use these terms (for example REFMAC). Detailing the standard torsion angles used in refinement software can improve the refinement of protein structures. The target values in the molecular dynamics-type energy functions can also be improved.
Pneumatic conveying of pulverized solvent refined coal
Lennon, Dennis R.
1984-11-06
A method for pneumatically conveying solvent refined coal to a burner under conditions of dilute phase pneumatic flow so as to prevent saltation of the solvent refined coal in the transport line by maintaining the transport fluid velocity above approximately 95 ft/sec.
Adaptive particle refinement and derefinement applied to the smoothed particle hydrodynamics method
NASA Astrophysics Data System (ADS)
Barcarolo, D. A.; Le Touzé, D.; Oger, G.; de Vuyst, F.
2014-09-01
SPH simulations are usually performed with a uniform particle distribution. New techniques have been recently proposed to enable the use of spatially varying particle distributions, which encouraged the development of automatic adaptivity and particle refinement/derefinement algorithms. All these efforts resulted in very interesting and promising procedures leading to more efficient and faster SPH simulations. In this article, a family of particle refinement techniques is reviewed and a new derefinement technique is proposed and validated through several test cases involving both free-surface and viscous flows. In addition, this new procedure allows higher resolutions in the regions requiring increased accuracy. Moreover, several levels of refinement can be used with this new technique, as often encountered in adaptive mesh refinement techniques in mesh-based methods.
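A common building block of particle refinement is splitting one particle into several daughters while conserving mass and shrinking the smoothing length. The sketch below is a generic 2-D illustration of that idea under stated assumptions (ring placement, equal mass sharing, an h-reduction rule of 1/sqrt(n)); it is not the specific refinement pattern of this article.

```python
import math

def refine_particle(p, n_daughters=4, eps=0.4):
    """Split one SPH particle (dict with x, y, m, h) into daughters placed
    on a ring of radius eps*h around it. Mass is shared equally, so total
    mass is conserved exactly; the smoothing length is reduced so the
    daughters resolve finer scales. Placement and factors are illustrative."""
    m_d = p["m"] / n_daughters
    h_d = p["h"] / math.sqrt(n_daughters)   # assumed 2-D reduction rule
    daughters = []
    for k in range(n_daughters):
        ang = 2.0 * math.pi * k / n_daughters
        daughters.append({
            "x": p["x"] + eps * p["h"] * math.cos(ang),
            "y": p["y"] + eps * p["h"] * math.sin(ang),
            "m": m_d,
            "h": h_d,
        })
    return daughters
```

Derefinement is the inverse operation: daughters inside a coarsening region are merged back into a single particle carrying their summed mass and a correspondingly larger smoothing length.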
Improving Flow Response of a Variable-rate Aerial Application System by Interactive Refinement
Technology Transfer Automated Retrieval System (TEKTRAN)
Experiments were conducted to evaluate response of a variable-rate aerial application controller to changing flow rates and to improve its response at correspondingly varying system pressures. System improvements have been made by refinement of the control algorithms over time in collaboration with ...
North Dakota Refining Capacity Study
Dennis Hill; Kurt Swenson; Carl Tuura; Jim Simon; Robert Vermette; Gilberto Marcha; Steve Kelly; David Wells; Ed Palmer; Kuo Yu; Tram Nguyen; Juliam Migliavacca
2011-01-05
According to a 2008 report issued by the United States Geological Survey, North Dakota and Montana have an estimated 3.0 to 4.3 billion barrels of undiscovered, technically recoverable oil in an area known as the Bakken Formation. With the size and remoteness of the discovery, the question became: can a business case be made for increasing refining capacity in North Dakota, and, if so, what is the impact on existing players in the region? To answer the question, a study committee composed of leaders in the region's petroleum industry was brought together to define the scope of the study, hire a consulting firm and oversee the study. The study committee met frequently to provide input on the findings and modify the course of the study as needed. The study concluded that Petroleum Administration for Defense District II (PADD II) has an oversupply of gasoline. With that in mind, a niche market, naphtha, was identified. Naphtha is used as a diluent for pipelining bitumen (heavy crude) from Canada to crude markets. The study predicted there will continue to be an increase in the demand for naphtha through 2030. The study estimated the optimal configuration for the refinery at 34,000 barrels per day (BPD), producing 15,000 BPD of naphtha, with a 52 percent refinery charge for jet and diesel yield. The financial modeling assumed the sponsor of a refinery would invest its own capital to pay for construction costs. With this assumption, the internal rate of return is 9.2 percent, which is not sufficient to attract traditional investment given the risk factor of the project. With that in mind, those interested in pursuing this niche market will need to identify incentives to improve the rate of return.
Refinement of boards' role required.
Umbdenstock, R J
1987-01-01
The governing board's role in health care is not changing, but new competitive forces necessitate a refinement of the board's approach to fulfilling its role. In a free-standing, community, not-for-profit hospital, the board functions as though it were the "owner." Although it does not truly own the facility in the legal sense, the board does have legal, fiduciary, and financial responsibilities conferred on it by the state. In a religious-sponsored facility, the board fulfills these same obligations on behalf of the sponsoring institute, subject to the institute's reserved powers. In multi-institutional systems, the hospital board's power and authority depend on the role granted it by the system. Boards in all types of facilities are currently faced with the following challenges: Fulfilling their basic responsibilities, such as legal requirements, financial duties, and obligations for the quality of care. Encouraging management and the board itself to "think strategically" in attacking new competitive market forces while protecting the organization's traditional mission and values. Assessing recommended strategies in light of consequences if constituencies think the organization is abandoning its commitments. Boards can take several steps to match their mode of operation with the challenges of the new environment. Boards must rededicate themselves to the hospital's mission. Trustees must expand their understanding of health care trends and issues and their effect on the organization. Boards must evaluate and help strengthen management's performance, rather than acting as a "watchdog" in an adversarial position. Boards must think strategically, rather than focusing solely on operational details. Boards must evaluate the methods they use for conducting business. PMID:10280356
An automatic and fast centerline extraction algorithm for virtual colonoscopy.
Jiang, Guangxiang; Gu, Lixu
2005-01-01
This paper introduces a new refined centerline extraction algorithm, based on and significantly improved from distance mapping algorithms. The new approach includes three major parts: employing a colon segmentation method; designing and realizing a fast Euclidean distance transform algorithm; and introducing a boundary voxel cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the overall performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm is more efficient and accurate compared with existing algorithms. PMID:17281406
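The distance-mapping core of such methods is a Dijkstra pass over the segmented voxel volume. A minimal 2-D, 4-connected sketch of that stage is below; the `free` mask stands in for the segmented colon lumen, and pruning boundary voxels from it (the BVC idea) simply shrinks the set Dijkstra has to visit. This is a generic illustration, not the paper's implementation.

```python
import heapq

def dijkstra_grid(free, start):
    """Shortest-path distances from `start` over free voxels of a 2-D grid
    (4-connected, unit edge cost). `free` is a list of rows of booleans;
    `start` is a (row, col) tuple assumed to lie in the free region."""
    rows, cols = len(free), len(free[0])
    INF = float("inf")
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist.get((r, c), INF):
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and free[nr][nc]:
                nd = d + 1.0
                if nd < dist.get((nr, nc), INF):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist
```

A centerline can then be read off by backtracking from the farthest voxel along decreasing distance values.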
Firing of pulverized solvent refined coal
Lennon, Dennis R.; Snedden, Richard B.; Foster, Edward P.; Bellas, George T.
1990-05-15
A burner for the firing of pulverized solvent refined coal is constructed and operated such that the solvent refined coal can be fired successfully without any performance limitations and without the coking of the solvent refined coal on the burner components. The burner is provided with a tangential inlet of primary air and pulverized fuel, a vaned diffusion swirler for the mixture of primary air and fuel, a center water-cooled conical diffuser shielding the incoming fuel from the heat radiation from the flame and deflecting the primary air and fuel stream into the secondary air, and a water-cooled annulus located between the primary air and secondary air flows.
Strategies for hp-adaptive Refinement
Mitchell, William F.
2008-09-01
In the hp-adaptive version of the finite element method for solving partial differential equations, the grid is adaptively refined in both h, the size of the elements, and p, the degree of the piecewise polynomial approximation over the element. The selection of which elements to refine is determined by a local a posteriori error indicator, and is well established. But the determination of whether the element should be refined by h or p is still open. In this paper, we describe several strategies that have been proposed for making this determination. A numerical example to illustrate the effectiveness of these strategies will be presented.
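The h-versus-p decision described above can be caricatured in a few lines: flag elements whose a posteriori error indicator exceeds a tolerance, then choose p-refinement where the local solution is estimated to be smooth and h-refinement otherwise. The smoothness estimate and the 0.5 threshold below are assumptions standing in for one of the several strategies the paper surveys, not a specific published criterion.

```python
def hp_decide(elements, tol, smooth_cut=0.5):
    """Toy hp-refinement pass. Each element is a dict with an `id`, an a
    posteriori error indicator `err`, and a `smoothness` estimate in [0, 1]
    (e.g. from the decay rate of local expansion coefficients). Returns a
    map from element id to 'h' or 'p' for the flagged elements."""
    plan = {}
    for e in elements:
        if e["err"] <= tol:
            continue                      # accurate enough; leave alone
        # Smooth solution: raising p converges fastest. Rough solution
        # (shock, corner singularity): subdividing in h is safer.
        plan[e["id"]] = "p" if e["smoothness"] > smooth_cut else "h"
    return plan
```

Real strategies differ mainly in how `smoothness` is estimated (coefficient decay, reference solutions, a priori knowledge of singularities), which is exactly the open question the paper addresses.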
Refining of metallurgical-grade silicon
NASA Technical Reports Server (NTRS)
Dietl, J.
1986-01-01
A basic requirement of large-scale solar cell fabrication is to provide low-cost base material. Unconventional refining of metallurgical-grade silicon represents one of the most promising routes for silicon meltstock processing. The refining concept is based on an optimized combination of metallurgical treatments. Commercially available crude silicon, in this sequence, requires a first pyrometallurgical step of slagging or, alternatively, solvent extraction by aluminum. After grinding and leaching, high purity is gained as an advanced stage of refinement. To reach solar-grade quality, a final pyrometallurgical step is needed: liquid-gas extraction.
A Selective Refinement Approach for Computing the Distance Functions of Curves
Laney, D A; Duchaineau, M A; Max, N L
2000-12-01
We present an adaptive signed distance transform algorithm for curves in the plane. A hierarchy of bounding boxes is required for the input curves. We demonstrate the algorithm on the isocontours of a turbulence simulation. The algorithm provides guaranteed error bounds with a selective refinement approach. The domain over which the signed distance function is desired is adaptively triangulated, and piecewise discontinuous linear approximations are constructed within each triangle. The resulting transform performs work only where requested and does not rely on a preset sampling rate or other constraints.
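The selective-refinement idea (subdivide only where a piecewise-linear approximation of the distance function is not yet accurate enough) can be shown in one dimension: recursively bisect an interval until the linear interpolant matches the function at the midpoint to within a tolerance. The midpoint test below is a crude stand-in for the paper's guaranteed error bounds, and the 1-D setting replaces its adaptive triangulation; both are simplifying assumptions.

```python
def adapt_segments(f, a, b, tol, depth=0, max_depth=12):
    """Recursively bisect [a, b] until the linear interpolant of f agrees
    with f at the interval midpoint to within `tol` (or max_depth is hit).
    Returns the sorted list of breakpoints of the adaptive approximation."""
    mid = 0.5 * (a + b)
    lin = 0.5 * (f(a) + f(b))        # linear interpolant value at mid
    if depth >= max_depth or abs(lin - f(mid)) <= tol:
        return [a, b]
    left = adapt_segments(f, a, mid, tol, depth + 1, max_depth)
    right = adapt_segments(f, mid, b, tol, depth + 1, max_depth)
    return left[:-1] + right         # drop duplicated midpoint
```

For a distance function, which is piecewise smooth with kinks on the medial axis, this concentrates breakpoints near the kinks and leaves smooth regions coarse, mirroring the work-only-where-requested behavior of the 2-D transform.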
A Novel Admixture-Based Pharmacogenetic Approach to Refine Warfarin Dosing in Caribbean Hispanics
Claudio-Campos, Karla; Rivera-Miranda, Giselle; Bermúdez-Bosch, Luis; Renta, Jessicca Y.; Cadilla, Carmen L.; Cruz, Iadelisse; Feliu, Juan F.; Vergara, Cunegundo; Ruaño, Gualberto
2016-01-01
Aim: This study is aimed at developing a novel admixture-adjusted pharmacogenomic approach to individually refine warfarin dosing in Caribbean Hispanic patients. Patients & Methods: A multiple linear regression analysis of effective warfarin doses versus relevant genotypes, admixture, clinical and demographic factors was performed in 255 patients and further validated externally in another cohort of 55 individuals. Results: The admixture-adjusted, genotype-guided warfarin dosing refinement algorithm developed in Caribbean Hispanics showed better predictability (R2 = 0.70, MAE = 0.72 mg/day) than a clinical algorithm that excluded genotypes and admixture (R2 = 0.60, MAE = 0.99 mg/day), and outperformed two prior pharmacogenetic algorithms in predicting effective dose in this population. For patients at the highest risk of adverse events, 45.5% of the dose predictions using the developed pharmacogenetic model resulted in an ideal dose, as compared with only 29% when using the clinical non-genetic algorithm (p<0.001). The admixture-driven pharmacogenetic algorithm predicted 58% of warfarin dose variance when externally validated in 55 individuals from an independent validation cohort (MAE = 0.89 mg/day, 24% mean bias). Conclusions: Results supported our rationale to incorporate individuals' genotypes and unique admixture metrics into pharmacogenetic refinement models in order to increase predictability when expanding them to admixed populations like Caribbean Hispanics. Trial Registration: ClinicalTrials.gov NCT01318057 PMID:26745506
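The model class here is a multiple linear regression of effective dose on genotype, admixture and clinical predictors. The sketch below fits such a model by ordinary least squares via the normal equations; the design matrix, predictor names and coefficients are synthetic illustrations of the model form only, not the published Caribbean Hispanic algorithm or its coefficients.

```python
def ols(X, y):
    """Least-squares coefficients for y ≈ X @ beta via the normal
    equations, solved with Gaussian elimination and partial pivoting.
    Suitable only for small, well-conditioned design matrices."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)]
         for j in range(p)]                       # A = X^T X
    b = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]  # X^T y
    for col in range(p):                          # forward elimination
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for k in range(col, p):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    beta = [0.0] * p                              # back substitution
    for r in range(p - 1, -1, -1):
        s = sum(A[r][k] * beta[k] for k in range(r + 1, p))
        beta[r] = (b[r] - s) / A[r][r]
    return beta
```

In a dosing-refinement setting each row of X would hold an intercept plus coded genotype counts, an admixture fraction and clinical covariates, and y the observed effective doses; the fitted beta then predicts doses for new patients.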
Refined Phenotyping of Modic Changes
Määttä, Juhani H.; Karppinen, Jaro; Paananen, Markus; Bow, Cora; Luk, Keith D.K.; Cheung, Kenneth M.C.; Samartzis, Dino
2016-01-01
The strength of the associations increased with the number of MC. This large-scale study is the first to definitively note MC types and specific morphologies to be independently associated with prolonged severe LBP and back-related disability. This proposed refined MC phenotype may have direct implications in clinical decision-making as to the development and management of LBP. Understanding of these imaging biomarkers can lead to new preventative and personalized therapeutics related to LBP. PMID:27258491
Adaptive mesh refinement with spectral accuracy for magnetohydrodynamics in two space dimensions
NASA Astrophysics Data System (ADS)
Rosenberg, D.; Pouquet, A.; Mininni, P. D.
2007-08-01
We examine the effect of the accuracy of high-order spectral element methods, with or without adaptive mesh refinement (AMR), in the context of a classical configuration of magnetic reconnection in two space dimensions, the so-called Orszag-Tang (OT) vortex, made up of a magnetic X-point centred on a stagnation point of the velocity. A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code is applied to simulate this problem. The MHD solver is explicit and uses the Elsässer formulation on high-order elements. It automatically takes advantage of the adaptive grid mechanics that have been described elsewhere in the fluid context (Rosenberg et al 2006 J. Comput. Phys. 215 59-80); the code allows both statically refined and dynamically refined grids. Tests of the algorithm using analytic solutions are described, and comparisons of the OT solutions with pseudo-spectral computations are performed. We demonstrate for moderate Reynolds numbers that the algorithms using both static and refined grids reproduce the pseudo-spectral solutions quite well. We show that low-order truncation, even with a comparable number of global degrees of freedom, fails to correctly model some strong (sup-norm) quantities in this problem, even though it adequately satisfies the weak (integrated) balance diagnostics.
U.S. Refining Capacity Utilization
1995-01-01
This article briefly reviews recent trends in domestic refining capacity utilization and examines in detail the differences in reported crude oil distillation capacities and utilization rates among different classes of refineries.
1991 worldwide refining and gas processing directory
Not Available
1990-01-01
This book is an authority for immediate information on the industry. You can use it to find new business, analyze market trends, and stay in touch with existing contacts while making new ones. The possibilities for business applications are numerous. Arranged by country, all listings in the directory include address, phone, fax and telex numbers, a description of the company's activities, names of key personnel and their titles, corporate headquarters, branch offices and plant sites. This newly revised edition lists more than 2000 companies and nearly 3000 branch offices and plant locations. This easy-to-use reference also includes several of the most vital and informative surveys of the industry, including the U.S. Refining Survey; the Worldwide Construction Survey in Refining, Sulfur, Gas Processing and Related Fuels; the Worldwide Refining and Gas Processing Survey; the Worldwide Catalyst Report; and the U.S. and Canadian Lube and Wax Capacities Report from the National Petroleum Refiners' Association.
Refiners to the front: Unsung heroes revisited
Not Available
1989-09-29
Crude-oil purchasing and finished-product selling can be likened to a constant volley, with two potentially deadly pricing games going on simultaneously. This is nothing new to refiners, who are often viewed by those upstream and downstream of them as a necessary evil at the mid-point between the wellhead and the retail pump. Recent comparative stability in the margins refiners achieve on a barrel of crude oil, however, confers good things on producers and product marketers. This issue editorializes against taking refiners for granted. This issue also presents the following: (a) ED refining netback data series for the US Gulf and West Coasts, Rotterdam, and Singapore as of September 22, 1989; and (b) ED fuel price/tax series for countries of the Eastern Hemisphere, September 1989 edition. 6 fig., 5 tabs.
Arbitrary Lagrangian Eulerian Adaptive Mesh Refinement
Koniges, A.; Eder, D.; Masters, N.; Fisher, A.; Anderson, R.; Gunney, B.; Wang, P.; Benson, D.; Dixit, P.
2009-09-29
This is a simulation code involving an ALE (arbitrary Lagrangian-Eulerian) hydrocode with AMR (adaptive mesh refinement) and pluggable physics packages for material strength, heat conduction, radiation diffusion, and laser ray tracing, developed at LLNL, UCSD, and Berkeley Lab. The code is an extension of the open source SAMRAI (Structured Adaptive Mesh Refinement Application Interface) code/library. The code can be used in laser facilities such as the National Ignition Facility. The code is also being applied to slurry flow (landslides).
Structure refinement from precession electron diffraction data.
Palatinus, Lukáš; Jacob, Damien; Cuvillier, Priscille; Klementová, Mariana; Sinkler, Wharton; Marks, Laurence D
2013-03-01
Electron diffraction is a unique tool for analysing the crystal structures of very small crystals. In particular, precession electron diffraction has been shown to be a useful method for ab initio structure solution. In this work it is demonstrated that precession electron diffraction data can also be successfully used for structure refinement, if the dynamical theory of diffraction is used for the calculation of diffracted intensities. The method is demonstrated on data from three materials - silicon, orthopyroxene (Mg,Fe)2Si2O6 and gallium-indium tin oxide (Ga,In)4Sn2O10. In particular, it is shown that atomic occupancies of mixed crystallographic sites can be refined to an accuracy approaching X-ray or neutron diffraction methods. In comparison with conventional electron diffraction data, the refinement against precession diffraction data yields significantly lower figures of merit, higher accuracy of refined parameters, much broader radii of convergence, especially for the thickness and orientation of the sample, and significantly reduced correlations between the structure parameters. The full dynamical refinement is compared with refinement using kinematical and two-beam approximations, and is shown to be superior to the latter two. PMID:23403968
Some observations on mesh refinement schemes applied to shock wave phenomena
NASA Technical Reports Server (NTRS)
Quirk, James J.
1995-01-01
This workshop's double-wedge test problem is taken from one of a sequence of experiments which were performed in order to classify the various canonical interactions between a planar shock wave and a double wedge. Therefore to build up a reasonably broad picture of the performance of our mesh refinement algorithm we have simulated three of these experiments and not just the workshop case. Here, using the results from these simulations together with their experimental counterparts, we make some general observations concerning the development of mesh refinement schemes for shock wave phenomena.
Advances in Patch-Based Adaptive Mesh Refinement Scalability
Gunney, Brian T.N.; Anderson, Robert W.
2015-12-18
Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress toward SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.
Refining a relativistic, hydrodynamic solver: Admitting ultra-relativistic flows
NASA Astrophysics Data System (ADS)
Bernstein, J. P.; Hughes, P. A.
2009-09-01
We have undertaken the simulation of hydrodynamic flows with bulk Lorentz factors in the range 10^2-10^6. We discuss the application of an existing relativistic, hydrodynamic primitive variable recovery algorithm to a study of pulsar winds, and, in particular, the refinement made to admit such ultra-relativistic flows. We show that an iterative quartic root finder breaks down for Lorentz factors above 10^2 and employ an analytic root finder as a solution. We find that the former, which is known to be robust for Lorentz factors up to at least 50, offers a 24% speed advantage. We demonstrate the existence of a simple diagnostic allowing for a hybrid primitives recovery algorithm that includes an automatic, real-time toggle between the iterative and analytical methods. We further determine the accuracy of the iterative and hybrid algorithms for a comprehensive selection of input parameters and demonstrate the latter's capability to elucidate the internal structure of ultra-relativistic plasmas. In particular, we discuss simulations showing that the interaction of a light, ultra-relativistic pulsar wind with a slow, dense ambient medium can give rise to asymmetry reminiscent of the Guitar nebula, leading to the formation of a relativistic backflow harboring a series of internal shockwaves. The shockwaves provide thermalized energy that is available for the continued inflation of the PWN bubble. In turn, the bubble enhances the asymmetry, thereby providing positive feedback to the backflow.
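The hybrid-toggle pattern described above (use the fast iterative root finder while a diagnostic says it is safe, fall back to the robust analytic solver otherwise) can be sketched generically. The quadratic test function, the Newton solver and the switch threshold of 10^2 below are illustrative assumptions; the paper's actual root-finding target is the quartic arising in relativistic primitive variable recovery.

```python
def newton_root(f, df, x0, tol=1e-12, max_iter=50):
    """Plain Newton iteration; returns None if it fails to converge,
    which the hybrid driver treats as a signal to fall back."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    return None


def hybrid_solve(f, df, x0, analytic, gamma_estimate, gamma_switch=100.0):
    """Toggle between the fast iterative solver and a robust analytic one,
    keyed on an estimated bulk Lorentz factor. gamma_switch = 100 echoes
    the breakdown near 10^2 reported above; everything else is generic."""
    if gamma_estimate < gamma_switch:
        x = newton_root(f, df, x0)
        if x is not None:
            return x          # iterative path: cheaper when it works
    return analytic()          # analytic path: robust for extreme flows
```

The point of the real-time toggle is that the ~24% speed advantage of the iterative path is kept in the bulk of the domain, while cells flagged as ultra-relativistic are routed to the analytic solver.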
Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data, with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
Software for Refining or Coarsening Computational Grids
NASA Technical Reports Server (NTRS)
Daines, Russell; Woods, Jody
2002-01-01
A computer program performs calculations for refinement or coarsening of computational grids of the type called "structured" (signifying that they are geometrically regular and/or are specified by relatively simple algebraic expressions). This program is designed to facilitate analysis of the numerical effects of changing structured grids utilized in computational fluid dynamics (CFD) software. Unlike prior grid-refinement and -coarsening programs, this program is not limited to doubling or halving: the user can specify any refinement or coarsening ratio, which can have a noninteger value. In addition to this ratio, the program accepts, as input, a grid file and the associated restart file, which is basically a file containing the most recent iteration of flow-field variables computed on the grid. The program then refines or coarsens the grid as specified, while maintaining the geometry and the stretching characteristics of the original grid. The program can interpolate from the input restart file to create a restart file for the refined or coarsened grid. The program provides a graphical user interface that facilitates the entry of input data for the grid-generation and restart-interpolation routines.
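The distinguishing feature described above is resampling by an arbitrary, possibly noninteger ratio while preserving the original grid's extent. A 1-D sketch of that operation using linear interpolation in index space is below; it is a toy analogue of the program's structured-grid resampling, not its actual routine, and the same interpolation applied to nodal flow-field values illustrates how a restart file could be carried to the new grid.

```python
def refine_grid(x, ratio):
    """Resample the 1-D nodal coordinate (or field) list `x` by an
    arbitrary refinement ratio: ratio > 1 refines, ratio < 1 coarsens,
    and noninteger values are allowed. Endpoints are preserved exactly;
    interior points come from linear interpolation in index space."""
    n_new = max(2, int(round((len(x) - 1) * ratio)) + 1)
    out = []
    for i in range(n_new):
        t = i * (len(x) - 1) / (n_new - 1)   # fractional index on old grid
        j = min(int(t), len(x) - 2)          # left neighbor (clamped)
        f = t - j                            # interpolation weight
        out.append((1.0 - f) * x[j] + f * x[j + 1])
    return out
```

Because the function operates on any nodal values, calling it once on the coordinate array and once per flow variable mimics the paired grid-file/restart-file treatment, though a production tool would also preserve stretching characteristics rather than interpolating uniformly in index space.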
Zeolites as catalysts in oil refining.
Primo, Ana; Garcia, Hermenegildo
2014-11-21
Oil is currently the main energy source, and it will most probably retain this prevalent position over the coming decades. This situation is largely due to the degree of maturity that oil refining and petrochemistry have achieved as a consequence of a large effort in research and innovation. The remarkable efficiency of oil refining is largely based on the use of zeolites as catalysts. The use of zeolites as catalysts in refining and petrochemistry has been considered one of the major accomplishments in the chemistry of the 20th century. In this tutorial review, the introductory part describes the main features of zeolites in connection with their use as solid acids. The main body of the review describes important refining processes in which zeolites are used, including light naphtha isomerization, olefin alkylation, reforming, cracking, and hydrocracking. The final section contains our view on future developments in the field, such as increases in the quality of transportation fuels and the co-processing of an increasing percentage of biofuels together with oil streams. This review is intended to provide the rudiments of zeolite science applied to refining catalysis. PMID:24671148
Tsunami modelling with adaptively refined finite volume methods
LeVeque, R.J.; George, D.L.; Berger, M.J.
2011-01-01
Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
NASA Astrophysics Data System (ADS)
Abrams, Daniel S.
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
A Cartesian grid approach with hierarchical refinement for compressible flows
NASA Technical Reports Server (NTRS)
Quirk, James J.
1994-01-01
Many numerical studies of flows that involve complex geometries are limited by the difficulties in generating suitable grids. We present a Cartesian boundary scheme for two-dimensional, compressible flows that is unfettered by the need to generate a computational grid and so it may be used, routinely, even for the most awkward of geometries. In essence, an arbitrary-shaped body is allowed to blank out some region of a background Cartesian mesh and the resultant cut-cells are singled out for special treatment. This is done within a finite-volume framework and so, in principle, any explicit flux-based integration scheme can take advantage of this method for enforcing solid boundary conditions. For best effect, the present Cartesian boundary scheme has been combined with a sophisticated, local mesh refinement scheme, and a number of examples are shown in order to demonstrate the efficacy of the combined algorithm for simulations of shock interaction phenomena.
On-Orbit Model Refinement for Controller Redesign
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Calise, Anthony J.
1998-01-01
High performance control design for a flexible space structure is challenging since high fidelity plant models are difficult to obtain a priori. Uncertainty in the control design models typically requires a very robust, low performance control design which must be tuned on-orbit to achieve the required performance. A new procedure for refining a multivariable open loop plant model based on closed-loop response data is presented. Using a minimal representation of the state space dynamics, a least squares prediction error method is employed to estimate the plant parameters. This control-relevant system identification procedure stresses the joint nature of the system identification and control design problem by seeking to obtain a model that minimizes the difference between the predicted and actual closed-loop performance. This paper presents an algorithm for iterative closed-loop system identification and controller redesign along with illustrative examples.
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2...
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2...
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false How does a refiner obtain approval as... refiner status must be submitted to EPA by December 31, 2007. (b) For U.S. Postal delivery, applications...), for the period January 1, 2005 through December 31, 2005. (ii) The information submitted to EIA...
Minimally refined biomass fuels: an economic shortcut
Pearson, R.K.; Hirschfeld, T.B.
1980-07-01
An economic shortcut can be realized if the sugars from which ethanol is made are utilized directly as concentrated aqueous solutions for fuels, rather than being further refined through fermentation and distillation steps. Simple evaporation of carbohydrate solutions from sugar cane or sweet sorghum, or from hydrolysis of the starch or cellulose content of many plants, yields potential liquid fuels with energy contents (on a volume basis) comparable to highly refined liquid fuels like methanol and ethanol. The potential utilization of such minimally refined biomass-derived fuels is discussed, and the burning of sucrose-ethanol-water solutions in a small modified domestic burner is demonstrated. Other potential uses of sugar solutions, or of emulsions and microemulsions in fuel oils for use in diesel or turbine engines, are proposed and discussed.
Terahertz spectroscopy for quantifying refined oil mixtures.
Li, Yi-nan; Li, Jian; Zeng, Zhou-mo; Li, Jie; Tian, Zhen; Wang, Wei-kui
2012-08-20
In this paper, the absorption coefficient spectra of samples prepared as mixtures of gasoline and diesel in different proportions are obtained by terahertz time-domain spectroscopy. To quantify the components of refined oil mixtures, a method is proposed to evaluate the best frequency band for regression analysis. With the data in this frequency band, dualistic linear regression fitting is used to determine the volume fractions of gasoline and diesel in the mixture based on the Beer-Lambert law. The minimum regression-fitting R-square is 0.99967, and the mean error of the fitted volume fraction of 97# gasoline is 4.3%. Results show that refined oil mixtures can be quantitatively analyzed through absorption coefficient spectra at terahertz frequencies, which has bright application prospects in the storage and transportation field for refined oil. PMID:22907017
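Under the Beer-Lambert law, the mixture's absorption coefficient at each frequency is approximately a volume-weighted sum of the two component spectra, so the two volume fractions fall out of a two-variable ("dualistic") least-squares fit. A minimal sketch of that fit, with synthetic spectra and a direct 2x2 normal-equation solve standing in for whatever regression package the authors used:

```python
def fit_volume_fractions(alpha_mix, alpha_a, alpha_b):
    """Least-squares fit of alpha_mix ~ va*alpha_a + vb*alpha_b over a
    frequency band, solving the 2x2 normal equations directly."""
    saa = sum(a * a for a in alpha_a)
    sbb = sum(b * b for b in alpha_b)
    sab = sum(a * b for a, b in zip(alpha_a, alpha_b))
    sam = sum(a * m for a, m in zip(alpha_a, alpha_mix))
    sbm = sum(b * m for b, m in zip(alpha_b, alpha_mix))
    det = saa * sbb - sab * sab          # nonzero if spectra differ in shape
    va = (sam * sbb - sbm * sab) / det
    vb = (sbm * saa - sam * sab) / det
    return va, vb
```

In practice the fit is only as good as the chosen frequency band, which is why the paper's band-selection step matters: bands where the two component spectra are nearly proportional make the normal equations ill-conditioned.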
Quantum algebraic approach to refined topological vertex
NASA Astrophysics Data System (ADS)
Awata, H.; Feigin, B.; Shiraishi, J.
2012-03-01
We establish the equivalence between the refined topological vertex of Iqbal-Kozcaz-Vafa and a certain representation theory of the quantum algebra of type W 1+∞ introduced by Miki. Our construction involves trivalent intertwining operators Φ and Φ* associated with triples of the bosonic Fock modules. Resembling the topological vertex, a triple of vectors in ℤ² is attached to each intertwining operator, satisfying the Calabi-Yau and smoothness conditions. It is shown that certain matrix elements of Φ and Φ* give the refined topological vertex C λμν ( t, q) of Iqbal-Kozcaz-Vafa. With another choice of basis, we recover the refined topological vertex C λμν ( q, t) of Awata-Kanno. The gluing factors appear correctly when we consider any compositions of Φ and Φ*. The spectral parameters attached to the Fock spaces play the role of the Kähler parameters.
Refining Linear Fuzzy Rules by Reinforcement Learning
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Khedkar, Pratap S.; Malkani, Anil
1996-01-01
Linear fuzzy rules are increasingly being used in the development of fuzzy logic systems. Radial basis functions have also been used in the antecedents of the rules for clustering in product space which can automatically generate a set of linear fuzzy rules from an input/output data set. Manual methods are usually used in refining these rules. This paper presents a method for refining the parameters of these rules using reinforcement learning which can be applied in domains where supervised input-output data is not available and reinforcements are received only after a long sequence of actions. This is shown for a generalization of radial basis functions. The formation of fuzzy rules from data and their automatic refinement is an important step in closing the gap between the application of reinforcement learning methods in the domains where only some limited input-output data is available.
Increasing levels of assistance in refinement of knowledge-based retrieval systems
NASA Technical Reports Server (NTRS)
Baudin, Catherine; Kedar, Smadar; Pell, Barney
1994-01-01
The task of incrementally acquiring and refining the knowledge and algorithms of a knowledge-based system in order to improve its performance over time is discussed. In particular, the design of DE-KART, a tool whose goal is to provide increasing levels of assistance in acquiring and refining indexing and retrieval knowledge for a knowledge-based retrieval system, is presented. DE-KART starts with knowledge that was entered manually, and increases its level of assistance in acquiring and refining that knowledge, both in terms of the increased level of automation in interacting with users, and in terms of the increased generality of the knowledge. DE-KART is at the intersection of machine learning and knowledge acquisition: it is a first step towards a system which moves along a continuum from interactive knowledge acquisition to increasingly automated machine learning as it acquires more knowledge and experience.
Using supercritical fluids to refine hydrocarbons
Yarbro, Stephen Lee
2015-06-09
A system and method for reactively refining hydrocarbons, such as heavy oils with API gravities of less than 20 degrees and bitumen-like hydrocarbons with viscosities greater than 1000 cp at standard temperature and pressure, using a selected fluid at supercritical conditions. A reaction portion of the system and method delivers lightweight, volatile hydrocarbons to an associated contacting unit which operates in mixed subcritical/supercritical or supercritical modes. Using thermal diffusion, multiphase contact, or a momentum generating pressure gradient, the contacting unit separates the reaction products into portions that are viable for use or sale without further conventional refining and hydro-processing techniques.
Arbitrary Lagrangian Eulerian Adaptive Mesh Refinement
2009-09-29
This is a simulation code involving an ALE (arbitrary Lagrangian-Eulerian) hydrocode with AMR (adaptive mesh refinement) and pluggable physics packages for material strength, heat conduction, radiation diffusion, and laser ray tracing, developed at LLNL, UCSD, and Berkeley Lab. The code is an extension of the open source SAMRAI (Structured Adaptive Mesh Refinement Application Interface) code/library. The code can be used in laser facilities such as the National Ignition Facility. The code is also being applied to slurry flow (landslides).
Image segmentation by background extraction refinements
NASA Technical Reports Server (NTRS)
Rodriguez, Arturo A.; Mitchell, O. Robert
1990-01-01
An image segmentation method refining background extraction in two phases is presented. In the first phase, the method detects homogeneous-background blocks and estimates the local background to be extracted throughout the image. A block is classified homogeneous if its left and right standard deviations are small. The second phase of the method refines background extraction in nonhomogeneous blocks by recomputing the shoulder thresholds. Rules that predict the final background extraction are derived by observing the behavior of successive background statistical measurements in the regions under the presence of dark and/or bright object pixels. Good results are shown for a number of outdoor scenes.
Sobel, E.; Lange, K.; O'Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
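One of the four reconstruction algorithms the abstract mentions is combinatorial optimization by simulated annealing. Below is a generic annealing loop of that kind on a toy one-dimensional objective; the haplotype application would replace the state, neighbor move, and cost with a haplotype vector, a local recombination change, and a pedigree-likelihood penalty (all names here are illustrative).

```python
import math
import random

def simulated_annealing(cost, neighbor, state, t0=1.0, cooling=0.995,
                        n_steps=5000, seed=0):
    """Minimize `cost` by simulated annealing: propose a local change,
    always accept improvements, and accept uphill moves with probability
    exp(-delta/T), where T decays geometrically."""
    rng = random.Random(seed)
    best, best_cost = state, cost(state)
    cur, cur_cost, t = state, best_cost, t0
    for _ in range(n_steps):
        cand = neighbor(cur, rng)
        dc = cost(cand) - cur_cost
        if dc <= 0 or rng.random() < math.exp(-dc / t):
            cur, cur_cost = cand, cur_cost + dc
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
        t *= cooling                 # geometric cooling schedule
    return best, best_cost
```

The appeal for haplotyping is that exhaustive enumeration of haplotype vectors grows exponentially with pedigree size, while annealing only ever holds one candidate vector and its local perturbations.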
CORDIC Algorithms: Theory And Extensions
NASA Astrophysics Data System (ADS)
Delosme, Jean-Marc
1989-11-01
Optimum algorithms for signal processing are notoriously costly to implement since they usually require intensive linear algebra operations to be performed at very high rates. In these cases a cost-effective solution is to design a pipelined or parallel architecture with special-purpose VLSI processors. One may often lower the hardware cost of such a dedicated architecture by using processors that implement CORDIC-like arithmetic algorithms. Indeed, with CORDIC algorithms, the evaluation and the application of an operation, such as determining a rotation that brings a vector onto another one and rotating other vectors by that amount, require the same time on identical processors and can be fully overlapped in most cases, thus leading to highly efficient implementations. We have shown earlier that a necessary condition for a CORDIC-type algorithm to exist is that the function to be implemented can be represented in terms of a matrix exponential. This paper refines this condition to the ability to represent the desired function in terms of a rational representation of a matrix exponential. This insight gives us a powerful tool for the design of new CORDIC algorithms. This is demonstrated by rederiving classical CORDIC algorithms and introducing several new ones, for Jacobi rotations, three and higher dimensional rotations, etc.
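For reference, the classical circular CORDIC rotation that the paper rederives works by decomposing the target angle into micro-rotations of atan(2^-i), so that the loop body needs only additions and scalings by powers of two (shifts, in hardware). A floating-point sketch:

```python
import math

def cordic_rotate(x, y, theta, n=32):
    """Rotate the vector (x, y) by theta radians with the circular CORDIC
    iteration.  Valid for |theta| below ~1.743 rad, the sum of all the
    micro-rotation angles atan(2**-i)."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    gain = 1.0
    for a in angles:
        gain *= math.cos(a)          # accumulated CORDIC gain, undone at the end
    z = theta                        # residual angle still to rotate by
    for i in range(n):
        d = 1.0 if z >= 0.0 else -1.0
        # Micro-rotation: only adds and power-of-two scalings (shifts in HW).
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * gain, y * gain
```

The paper's point is that this scheme is one instance of a family: any function expressible as a rational representation of a matrix exponential admits an analogous shift-and-add iteration.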
Los Alamos National Laboratory, Mailstop M888, Los Alamos, NM 87545, USA; Lawrence Berkeley National Laboratory, One Cyclotron Road, Building 64R0121, Berkeley, CA 94720, USA; Department of Haematology, University of Cambridge, Cambridge CB2 0XY, England; Terwilliger, Thomas; Terwilliger, T.C.; Grosse-Kunstleve, Ralf Wilhelm; Afonine, P.V.; Moriarty, N.W.; Zwart, P.H.; Hung, L.-W.; Read, R.J.; Adams, P.D.
2007-04-29
The PHENIX AutoBuild Wizard is a highly automated tool for iterative model-building, structure refinement and density modification using RESOLVE or TEXTAL model-building, RESOLVE statistical density modification, and phenix.refine structure refinement. Recent advances in the AutoBuild Wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model completion algorithms, and automated solvent molecule picking. Model completion algorithms in the AutoBuild Wizard include loop-building, crossovers between chains in different models of a structure, and side-chain optimization. The AutoBuild Wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 Å to 3.2 Å, resulting in a mean R-factor of 0.24 and a mean free R-factor of 0.29. The R-factor of the final model is dependent on the quality of the starting electron density, and relatively independent of resolution.
40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
... a small refiner under this subpart? 80.551 Section 80.551 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship Provisions § 80.551 How does a refiner obtain approval as a small refiner under this subpart?...
40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... a small refiner under this subpart? 80.551 Section 80.551 Protection of Environment ENVIRONMENTAL... Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel Small Refiner Hardship Provisions § 80.551 How does a refiner obtain approval as a small refiner under this subpart?...
Robust Refinement as Implemented in TOPAS
Stone, K.; Stephens, P
2010-01-01
A robust refinement procedure is implemented in the program TOPAS through an iterative reweighting of the data. Examples are given of the procedure as applied to fitting partially overlapped peaks by full and partial models, and also of the structures of ibuprofen and acetaminophen in the presence of unmodeled impurity contributions.
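The core idea of robust refinement by iterative reweighting can be shown on the simplest possible fit, a location estimate. This is a generic sketch, not the TOPAS weighting scheme: the Huber-style weights and the median-absolute-deviation scale are our assumptions, but the mechanism is the same, and points far from the model (the analogue of unmodeled impurity peaks) stop dominating the least-squares sum.

```python
def robust_mean(data, n_iter=25, c=1.5):
    """Location estimate by iteratively reweighted least squares.

    Each pass recomputes a robust scale (median absolute deviation) and
    assigns Huber-style weights below 1 to points far from the current
    estimate, so outliers are progressively downweighted."""
    mu = sum(data) / len(data)
    for _ in range(n_iter):
        dev = sorted(abs(d - mu) for d in data)
        scale = dev[len(dev) // 2] or 1e-12     # median absolute deviation
        w = [min(1.0, c * scale / (abs(d - mu) or 1e-12)) for d in data]
        mu = sum(wi * di for wi, di in zip(w, data)) / sum(w)
    return mu
```

With a single gross outlier the plain mean is pulled far off, while the reweighted estimate settles near the bulk of the data.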
27 CFR 21.127 - Shellac (refined).
Code of Federal Regulations, 2012 CFR
2012-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Shellac (refined). 21.127 Section 21.127 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY LIQUORS FORMULAS FOR DENATURED ALCOHOL AND RUM Specifications for Denaturants §...
27 CFR 21.127 - Shellac (refined).
Code of Federal Regulations, 2014 CFR
2014-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Shellac (refined). 21.127 Section 21.127 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY ALCOHOL FORMULAS FOR DENATURED ALCOHOL AND RUM Specifications for Denaturants §...
27 CFR 21.127 - Shellac (refined).
Code of Federal Regulations, 2013 CFR
2013-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Shellac (refined). 21.127 Section 21.127 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO TAX AND TRADE BUREAU, DEPARTMENT OF THE TREASURY ALCOHOL FORMULAS FOR DENATURED ALCOHOL AND RUM Specifications for Denaturants §...
Gravitational Collapse With Distributed Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Liebling, Steven; Lehner, Luis; Motl, Patrick; Neilsen, David; Rahman, Tanvir; Reula, Oscar
2006-04-01
Gravitational collapse is studied using distributed adaptive mesh refinement (AMR). The AMR infrastructure includes a novel treatment of adaptive boundaries which allows for high orders of accuracy. Results of the collapse of Brill waves to black holes are presented. Combining both vertex centered and cell centered fields in the same evolution is discussed.
Refiners respond to strategic driving forces
Gonzalez, R.G.
1996-05-01
Better days should lie ahead for the international refining industry. While political unrest, lingering uncertainty regarding environmental policies, slowing world economic growth, over capacity and poor image will continue to plague the industry, margins in most areas appear to have bottomed out. Current margins, and even modestly improved margins, do not cover the cost of capital on certain equipment nor provide the returns necessary to achieve reinvestment economics. Refiners must determine how to improve the financial performance of their assets given this reality. Low margins and returns are generally characteristic of mature industries. Many of the business strategies employed by emerging businesses are no longer viable for refiners. The cost-cutting programs of the '90s have mainly been realized, leaving little to be gained from further reduction. Consequently, refiners will have to concentrate on increasing efficiency and delivering higher value products to survive. Rather than focusing solely on their competition, companies will emphasize substantial improvements in their own operations to achieve financial targets. This trend is clearly shown by the growing reliance on benchmarking services.
Energy Bandwidth for Petroleum Refining Processes
none,
2006-10-01
The petroleum refining energy bandwidth report analyzes the most energy-intensive unit operations used in U.S. refineries: crude oil distillation, fluid catalytic cracking, catalytic hydrotreating, catalytic reforming, and alkylation. The "bandwidth" provides a snapshot of the energy losses that can potentially be recovered through best practices and technology R&D.
Laser furnace technology for zone refining
NASA Technical Reports Server (NTRS)
Griner, D. B.
1984-01-01
A carbon dioxide laser experiment facility is constructed to investigate the problems in using a laser beam to zone refine semiconductor and metal crystals. The hardware includes a computer to control scan mirrors and stepper motors to provide a variety of melt zone patterns. The equipment and its operating procedures are described.
Extended query refinement for medical image retrieval.
Deserno, Thomas M; Güld, Mark O; Plodowski, Bartosz; Spitzer, Klaus; Wein, Berthold B; Schubert, Henning; Ney, Hermann; Seidl, Thomas
2008-09-01
The impact of image pattern recognition on accessing large databases of medical images has recently been explored, and content-based image retrieval (CBIR) in medical applications (IRMA) is an active area of research. At present, however, the impact of image retrieval on diagnosis is limited, and practical applications are scarce. One reason is the lack of suitable mechanisms for query refinement, in particular, the ability to (1) restore previous session states, (2) combine individual queries by Boolean operators, and (3) provide continuous-valued query refinement. This paper presents a powerful user interface for CBIR that provides all three mechanisms for extended query refinement. The various mechanisms of man-machine interaction during a retrieval session are grouped into four classes: (1) output modules, (2) parameter modules, (3) transaction modules, and (4) process modules, all of which are controlled by a detailed query logging. The query logging is linked to a relational database. Nested loops for interaction provide a maximum of flexibility within a minimum of complexity, as the entire data flow is still controlled within a single Web page. Our approach is implemented to support various modalities, orientations, and body regions using global features that model gray scale, texture, structure, and global shape characteristics. The resulting extended query refinement has a significant impact for medical CBIR applications. PMID:17497197
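Mechanisms (2) and (3) above, Boolean combination of queries and continuous-valued refinement, can be sketched over scored result sets. This is an illustrative model only, not the IRMA implementation: the min/max combination rule and the blending parameter `alpha` are our assumptions.

```python
def combine_and(a, b):
    """Boolean AND of two scored result sets: an image survives only if
    both queries returned it, and keeps the weaker (minimum) score."""
    return {k: min(a[k], b[k]) for k in a.keys() & b.keys()}

def combine_or(a, b):
    """Boolean OR: the union of results, each with its stronger score."""
    return {k: max(a.get(k, 0.0), b.get(k, 0.0)) for k in a.keys() | b.keys()}

def refine_scores(prev, new, alpha=0.5):
    """Continuous-valued refinement: blend a restored session's scores
    with a follow-up query; alpha steers how strongly the new query
    overrides the previous state."""
    keys = prev.keys() | new.keys()
    return {k: (1 - alpha) * prev.get(k, 0.0) + alpha * new.get(k, 0.0)
            for k in keys}
```

Restoring a previous session state (mechanism 1) then amounts to reloading the `prev` dictionary from the logged query history.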
Constrained-Transport Magnetohydrodynamics with Adaptive-Mesh-Refinement in CHARM
Miniatii, Francesco; Martin, Daniel
2011-05-24
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit Corner-Transport-Upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the Piecewise-Parabolic-Method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a Constrained-Transport (CT) method. The so-called "multidimensional MHD source terms" required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout. Subject headings: cosmology: theory - methods: numerical
Intelligent perturbation algorithms for space scheduling optimization
NASA Technical Reports Server (NTRS)
Kurtzman, Clifford R.
1991-01-01
Intelligent perturbation algorithms for space scheduling optimization are presented in the form of viewgraphs. The following subject areas are covered: optimization of planning, scheduling, and manifesting; searching a discrete configuration space; heuristic algorithms used for optimization; use of heuristic methods on a sample scheduling problem; intelligent perturbation algorithms as iterative refinement techniques; properties of a good iterative search operator; dispatching examples of intelligent perturbation algorithm and perturbation operator attributes; scheduling implementations using intelligent perturbation algorithms; major advances in scheduling capabilities; the prototype ISF (Industrial Space Facility) experiment scheduler; optimized schedule (max revenue); multi-variable optimization; Space Station design reference mission scheduling; ISF-TDRSS command scheduling demonstration; and example task - communications check.
A parallel algorithm for mesh smoothing
Freitag, L.; Jones, M.; Plassmann, P.
1999-07-01
Maintaining good mesh quality during the generation and refinement of unstructured meshes in finite-element applications is an important aspect in obtaining accurate discretizations and well-conditioned linear systems. In this article, the authors present a mesh-smoothing algorithm based on nonsmooth optimization techniques and a scalable implementation of this algorithm. They prove that the parallel algorithm has a provably fast runtime bound and executes correctly for a parallel random access machine (PRAM) computational model. They extend the PRAM algorithm to distributed memory computers and report results for two-and three-dimensional simplicial meshes that demonstrate the efficiency and scalability of this approach for a number of different test cases. They also examine the effect of different architectures on the parallel algorithm and present results for the IBM SP supercomputer and an ATM-connected network of SPARC Ultras.
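The paper's smoothing is optimization-based, but the sweep structure it parallelizes is the same as in the classical Laplacian baseline: visit each free vertex and relocate it using only its neighbors, so independent vertices can be processed concurrently. A minimal serial sketch of that baseline (the vertex/adjacency representation here is an illustrative assumption, and the paper replaces the neighbor averaging with a nonsmooth local optimization):

```python
def laplacian_smooth(coords, adjacency, fixed, n_iter=10):
    """Classical Laplacian mesh smoothing: move every free vertex to the
    centroid of its neighbors, keeping boundary vertices fixed.

    coords:    {vertex: (x, y, ...)} positions
    adjacency: {vertex: [neighbor, ...]} mesh connectivity
    fixed:     set of vertices that must not move (e.g. the boundary)
    """
    coords = dict(coords)
    for _ in range(n_iter):
        new = {}
        for v, nbrs in adjacency.items():
            if v in fixed or not nbrs:
                new[v] = coords[v]
            else:
                # Centroid of the neighboring vertices, dimension by dimension.
                new[v] = tuple(sum(coords[u][i] for u in nbrs) / len(nbrs)
                               for i in range(len(coords[v])))
        coords = new                 # Jacobi-style sweep: update all at once
    return coords
```

Because each vertex update reads only old positions (a Jacobi sweep), the loop over vertices is embarrassingly parallel; the PRAM analysis in the paper concerns exactly this kind of independent-set scheduling, with a better local objective than the centroid.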
Heo, Lim; Lee, Hasup; Seok, Chaok
2016-01-01
Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex. PMID:27535582
Using supercritical fluids to refine hydrocarbons
Yarbro, Stephen Lee
2014-11-25
This is a method to reactively refine hydrocarbons, such as heavy oils with API gravities of less than 20° and bitumen-like hydrocarbons with viscosities greater than 1000 cP at standard temperature and pressure, using a selected fluid at supercritical conditions. The reaction portion of the method delivers lighter-weight, more volatile hydrocarbons to an attached contacting device that operates in mixed subcritical or supercritical modes. This separates the reaction products into portions that are viable for use or sale without further conventional refining and hydro-processing techniques. This method produces valuable products with fewer processing steps and lower costs; increases worker safety through less processing and handling; allows greater opportunity for new oil field development, with subsequent positive economic impact; and reduces the carbon dioxide emissions and wastes typical of conventional refineries.
Dinosaurs can fly -- High performance refining
Treat, J.E.
1995-09-01
High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the products markets one serves; and the most likely drivers and direction of future market forces. The author discussed all three points, then described measuring performance of the company. To become a true high performance refiner often involves redesigning the organization as well as the business processes. The author discusses such redesigning. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.
Research Burnout: a refined multidimensional scale.
Singh, Surendra N; Dalal, Nikunj; Mishra, Sanjay
2004-12-01
In a prevailing academic climate where there are high expectations for faculty to publish and generate grants, the exploration of Research Burnout among higher education faculty has become increasingly important. Unfortunately, it is a topic that has not been well researched empirically. In 1997 Singh and Bush developed a unidimensional scale to measure Research Burnout. A closer inspection of the definition of this construct and the composition of its items suggests, however, that the construct may be multidimensional and analogous to Maslach's Psychological Burnout Scale. In this paper, we propose a refined, multidimensional Research Burnout scale and test its factorial validity using confirmatory factor analysis. The nomological validity of this refined scale is established by examining hypothesized relationships between Research Burnout and other constructs such as Intrinsic Motivation for doing research, Extrinsic Pressures to do research, and Knowledge Obsolescence. PMID:15762409
Substance abuse in the refining industry
Little, A. Jr.; Ross, J.K.; Lavorerio, R.; Richards, T.A.
1989-01-01
In order to provide some background for the NPRA Annual Meeting Management Session panel discussion on Substance Abuse in the Refining and Petrochemical Industries, NPRA distributed a questionnaire to member companies requesting information regarding the status of their individual substance abuse policies. The questionnaire was designed to identify general trends in the industry. The aggregate responses to the survey are summarized in this paper, as background for the Substance Abuse panel discussions.
Structured Adaptive Mesh Refinement Application Infrastructure
2010-07-15
SAMRAI is an object-oriented support library for structured adaptive mesh refinement (SAMR) simulation of computational science problems, modeled by systems of partial differential equations (PDEs). SAMRAI is developed and maintained in the Center for Applied Scientific Computing (CASC) under ASCI ITS and PSE support. SAMRAI is used in a variety of application research efforts at LLNL and in academia. These applications are developed in collaboration with SAMRAI development team members.
Humanoid Mobile Manipulation Using Controller Refinement
NASA Technical Reports Server (NTRS)
Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric; Brock, Oliver
2006-01-01
An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. This paper proposes that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Arctic Storms in a Regionally Refined Atmospheric General Circulation Model
NASA Astrophysics Data System (ADS)
Roesler, E. L.; Taylor, M.; Boslough, M.; Sullivan, S.
2014-12-01
Regional refinement in an atmospheric general circulation model is a new tool in atmospheric modeling. As global climate models gain the ability to resolve smaller spatial scales, a regional high-resolution solution can be obtained without the computational cost of running a global high-resolution simulation. Previous work has shown that high-resolution (i.e., 1/8-degree) simulations and variable-resolution utilities resolve more fine-scale structure and mesoscale storms in the atmosphere than their low-resolution counterparts. We describe an experiment designed to identify and study Arctic storms at two model resolutions. We used the Community Atmosphere Model, version 5, with the Spectral Element dynamical core at 1/8-degree and 1-degree horizontal resolutions to simulate the climatological year of 1850. Storms were detected using an algorithm that finds low-pressure minima and vorticity maxima. It was found that the high-resolution 1/8-degree simulation had more storms in the Northern Hemisphere than the low-resolution 1-degree simulation. A variable-resolution simulation with a global low resolution of 1 degree and a high-resolution refined region of 1/8 degree over the Arctic is planned. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND NO. 2014-16460A
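The detection step described in this abstract can be illustrated with a minimal sketch: flag grid cells that are strict local minima of sea-level pressure and also exceed a vorticity threshold. The 3x3 neighborhood, the threshold value, and the synthetic fields below are illustrative assumptions, not the authors' actual storm-tracking algorithm.

```python
import numpy as np

def detect_storms(slp, vort, vort_thresh):
    """Return (j, i) cells that are strict local minima of sea-level
    pressure (slp) in a 3x3 neighborhood and whose vorticity exceeds
    vort_thresh. Illustrative stand-in for a pressure-minima /
    vorticity-maxima storm detector."""
    hits = []
    ny, nx = slp.shape
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            nbhd = slp[j - 1:j + 2, i - 1:i + 2].copy()
            center = nbhd[1, 1]
            nbhd[1, 1] = np.inf  # compare the center against neighbors only
            if center < nbhd.min() and vort[j, i] > vort_thresh:
                hits.append((j, i))
    return hits

# Synthetic example: one deep low collocated with a vorticity maximum.
slp = np.full((5, 5), 1012.0)   # background pressure, hPa
slp[2, 2] = 990.0               # a single deep low
vort = np.zeros((5, 5))
vort[2, 2] = 5e-5               # cyclonic vorticity at the low, 1/s
print(detect_storms(slp, vort, vort_thresh=1e-5))  # -> [(2, 2)]
```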
TIRS stray light correction: algorithms and performance
NASA Astrophysics Data System (ADS)
Gerace, Aaron; Montanaro, Matthew; Beckmann, Tim; Tyrrell, Kaitlin; Cozzo, Alexandra; Carney, Trevor; Ngan, Vicki
2015-09-01
The Thermal Infrared Sensor (TIRS) onboard Landsat 8 was tasked with continuing thermal band measurements of the Earth as part of the Landsat program. From first light in early 2013, there were obvious indications that stray light was contaminating the thermal image data collected from the instrument. Traditional calibration techniques did not perform adequately as non-uniform banding was evident in the corrected data and error in absolute estimates of temperature over trusted buoy sites varied seasonally and, in worst cases, exceeded 9 K error. The development of an operational technique to remove the effects of the stray light has become a high priority to enhance the utility of the TIRS data. This paper introduces the current algorithm being tested by Landsat's calibration and validation team to remove stray light from TIRS image data. The integration of the algorithm into the EROS test system is discussed with strategies for operationalizing the method emphasized. Techniques for assessing the methodologies used are presented and potential refinements to the algorithm are suggested. Initial results indicate that the proposed algorithm significantly removes stray light artifacts from the image data. Specifically, visual and quantitative evidence suggests that the algorithm practically eliminates banding in the image data. Additionally, the seasonal variation in absolute errors is flattened and, in the worst case, errors of over 9 K are reduced to within 2 K. Future work focuses on refining the algorithm based on these findings and applying traditional calibration techniques to enhance the final image product.
Refining a triangulation of a planar straight-line graph to eliminate large angles
Mitchell, S.A.
1993-05-13
Triangulations without large angles have a number of applications in numerical analysis and computer graphics. In particular, the convergence of a finite element calculation depends on the largest angle of the triangulation. Also, the running time of a finite element calculation is dependent on the triangulation size, so having a triangulation with few Steiner points is also important. Bern, Dobkin and Eppstein pose as an open problem the existence of an algorithm to triangulate a planar straight-line graph (PSLG) without large angles using a polynomial number of Steiner points. We solve this problem by showing that any PSLG with υ vertices can be triangulated with no angle larger than 7π/8 by adding O(υ² log υ) Steiner points in O(υ² log² υ) time. We first triangulate the PSLG with an arbitrary constrained triangulation and then refine that triangulation by adding additional vertices and edges. Some PSLGs require Ω(υ²) Steiner points in any triangulation achieving any largest-angle bound less than π. Hence the number of Steiner points added by our algorithm is within a log υ factor of worst-case optimal. We note that our refinement algorithm works on arbitrary triangulations: given any triangulation, we show how to refine it so that no angle is larger than 7π/8. Our construction adds O(nm + np log m) vertices and runs in time O((nm + np log m) log(m + p)), where n is the number of edges, m is one plus the number of obtuse angles, and p is one plus the number of holes and interior vertices in the original triangulation. A previously considered problem is refining a constrained triangulation of a simple polygon, where p = 1. For this problem we add O(υ²) Steiner points, which is within a constant factor of worst-case optimal.
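The 7π/8 largest-angle criterion that drives this refinement can be checked directly. The sketch below computes each triangle's largest interior angle via the law of cosines and flags violators; it is a minimal illustration of the angle test only, not the paper's Steiner-point construction.

```python
import math

def largest_angle(tri):
    """Largest interior angle (radians) of a triangle given as three
    (x, y) vertices, computed via the law of cosines at each vertex."""
    angles = []
    for k in range(3):
        a, b, c = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - a[0], c[1] - a[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        angles.append(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))
    return max(angles)

def needs_refinement(triangulation, bound=7 * math.pi / 8):
    """Triangles violating the largest-angle bound from the abstract."""
    return [t for t in triangulation if largest_angle(t) > bound]

# A nearly degenerate (very flat) triangle violates the bound;
# an equilateral one does not.
skinny = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.01)]
equilateral = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
print(len(needs_refinement([skinny, equilateral])))  # -> 1
```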
Henshaw, W; Schwendeman, D
2007-11-15
This paper describes an approach for the numerical solution of time-dependent partial differential equations in complex three-dimensional domains. The domains are represented by overlapping structured grids, and block-structured adaptive mesh refinement (AMR) is employed to locally increase the grid resolution. In addition, the numerical method is implemented on parallel distributed-memory computers using a domain-decomposition approach. The implementation is flexible so that each base grid within the overlapping grid structure and its associated refinement grids can be independently partitioned over a chosen set of processors. A modified bin-packing algorithm is used to specify the partition for each grid so that the computational work is evenly distributed amongst the processors. All components of the AMR algorithm such as error estimation, regridding, and interpolation are performed in parallel. The parallel time-stepping algorithm is illustrated for initial-boundary-value problems involving a linear advection-diffusion equation and the (nonlinear) reactive Euler equations. Numerical results are presented for both equations to demonstrate the accuracy and correctness of the parallel approach. Exact solutions of the advection-diffusion equation are constructed, and these are used to check the corresponding numerical solutions for a variety of tests involving different overlapping grids, different numbers of refinement levels and refinement ratios, and different numbers of processors. The problem of planar shock diffraction by a sphere is considered as an illustration of the numerical approach for the Euler equations, and a problem involving the initiation of a detonation from a hot spot in a T-shaped pipe is considered to demonstrate the numerical approach for the reactive case. For both problems, the solutions are shown to be well resolved on the finest grid. The parallel performance of the approach is examined in detail for the shock diffraction problem.
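The load-balancing step in the abstract above can be illustrated with a simple greedy heuristic: assign grids, largest workload first, to the currently least-loaded processor. This is a minimal sketch of the idea under that assumption; the paper's modified bin-packing algorithm for overlapping AMR grids is more elaborate.

```python
import heapq

def partition_grids(work, nproc):
    """Greedy longest-processing-time assignment: repeatedly give the
    largest remaining grid workload to the least-loaded processor.
    Returns a dict mapping grid index -> processor index."""
    heap = [(0, p) for p in range(nproc)]   # (current load, processor id)
    heapq.heapify(heap)
    assignment = {}
    for gid, w in sorted(enumerate(work), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)       # least-loaded processor
        assignment[gid] = p
        heapq.heappush(heap, (load + w, p))
    return assignment

# Six grids with uneven workloads, two processors.
loads = [40, 30, 20, 10, 10, 10]
assignment = partition_grids(loads, 2)
per_proc = [0, 0]
for gid, p in assignment.items():
    per_proc[p] += loads[gid]
print(per_proc)  # -> [60, 60]
```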
Grain Refinement of Permanent Mold Cast Copper Base Alloys
M. Sadayappan; J.P. Thomson; M. Elboujdaini; G. Ping Gu; M. Sahoo
2005-04-01
Grain refinement is a well established process for many cast and wrought alloys. The mechanical properties of various alloys could be enhanced by reducing the grain size. Refinement is also known to improve casting characteristics such as fluidity and hot tearing. Grain refinement of copper-base alloys is not widely used, especially in sand casting process. However, in permanent mold casting of copper alloys it is now common to use grain refinement to counteract the problem of severe hot tearing which also improves the pressure tightness of plumbing components. The mechanism of grain refinement in copper-base alloys is not well understood. The issues to be studied include the effect of minor alloy additions on the microstructure, their interaction with the grain refiner, effect of cooling rate, and loss of grain refinement (fading). In this investigation, efforts were made to explore and understand grain refinement of copper alloys, especially in permanent mold casting conditions.
California refining in balance as Phase 2 deadline draws near
Adler, K.
1996-01-01
The impact of California's 1996 RFG program on US markets and its implications for refiners worldwide is analyzed. The preparations in the last few months before refiners must produce California Phase 2 RFG are addressed. Subsequent articles will consider the process improvements made by refiners, the early implementation of the program, and what has been learned about refining, gasoline distribution, environmental benefits and consumer acceptance that can be replicated around the world.
Coloured Petri Net Refinement Specification and Correctness Proof with Coq
NASA Technical Reports Server (NTRS)
Choppy, Christine; Mayero, Micaela; Petrucci, Laure
2009-01-01
In this work, we address the formalisation in COQ of the refinement of symmetric nets, a subclass of coloured Petri nets. We first provide a formalisation of the net models, and of their type refinement, in COQ. Then the COQ proof assistant is used to prove the refinement correctness lemma. An example adapted from a protocol illustrates our work.
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....
The blind leading the blind: Mutual refinement of approximate theories
NASA Technical Reports Server (NTRS)
Kedar, Smadar T.; Bresina, John L.; Dent, C. Lisa
1991-01-01
The mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models which cause the failures. The approach focuses on using one approximate model to refine another.
Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively refined solutions were compared to accepted computational, experimental and theoretical results.
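The recursive Cartesian subdivision described above can be sketched in two dimensions with a quadtree. This is an illustrative assumption-laden toy: the paper stores the analogous hierarchy in a binary tree and additionally cuts cells against body geometry, both of which are omitted here, and the `needs_split` predicate stands in for a real refinement criterion.

```python
class Cell:
    """Node of a quadtree over [x0,x1] x [y0,y1]; children is None for leaves."""
    def __init__(self, x0, y0, x1, y1):
        self.x0, self.y0, self.x1, self.y1 = x0, y0, x1, y1
        self.children = None

def refine(cell, needs_split, depth, max_depth):
    """Recursively subdivide every cell flagged by needs_split(cell),
    stopping at max_depth, mirroring recursive Cartesian subdivision."""
    if depth >= max_depth or not needs_split(cell):
        return
    xm = 0.5 * (cell.x0 + cell.x1)
    ym = 0.5 * (cell.y0 + cell.y1)
    cell.children = [Cell(cell.x0, cell.y0, xm, ym),
                     Cell(xm, cell.y0, cell.x1, ym),
                     Cell(cell.x0, ym, xm, cell.y1),
                     Cell(xm, ym, cell.x1, cell.y1)]
    for child in cell.children:
        refine(child, needs_split, depth + 1, max_depth)

def leaves(cell):
    """All leaf cells of the tree rooted at cell."""
    if cell.children is None:
        return [cell]
    return [leaf for c in cell.children for leaf in leaves(c)]

# Refine toward a point feature at (0.3, 0.3): split any cell containing it.
contains = lambda c: c.x0 <= 0.3 <= c.x1 and c.y0 <= 0.3 <= c.y1
root = Cell(0.0, 0.0, 1.0, 1.0)
refine(root, contains, depth=0, max_depth=3)
print(len(leaves(root)))  # -> 10
```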
Refining and defining the Program Dependence Web
Campbell, P.L.; Krishna, K.; Ballance, R.A.
1993-05-01
The Program Dependence Web (PDW) is an intermediate representation for a computer program, which can be interpreted under control-driven, data-driven or demand-driven disciplines. This document completes the definition for the PDW. This includes operational definitions for the nodes and arcs and a description of how PDWs are interpreted. The general structure for conditionals and loops is shown, accompanied by examples. The definition provided here is a refinement of the original one: a new node, the β node, replaces the μ node, and the η^Τ node is eliminated.
Adaptive refinement tools for tetrahedral unstructured grids
NASA Technical Reports Server (NTRS)
Pao, S. Paul (Inventor); Abdol-Hamid, Khaled S. (Inventor)
2011-01-01
An exemplary embodiment providing one or more improvements includes software which is robust, efficient, and has a very fast run time for user directed grid enrichment and flow solution adaptive grid refinement. All user selectable options (e.g., the choice of functions, the choice of thresholds, etc.), other than a pre-marked cell list, can be entered on the command line. The ease of application is an asset for flow physics research and preliminary design CFD analysis where fast grid modification is often needed to deal with unanticipated development of flow details.
Refinement Of Hexahedral Cells In Euler Flow Computations
NASA Technical Reports Server (NTRS)
Melton, John E.; Cappuccio, Gelsomina; Thomas, Scott D.
1996-01-01
Topologically Independent Grid, Euler Refinement (TIGER) computer program solves Euler equations of three-dimensional, unsteady flow of inviscid, compressible fluid by numerical integration on unstructured hexahedral coordinate grid refined where necessary to resolve shocks and other details. Hexahedral cells subdivided, each into eight smaller cells, as needed to refine computational grid in regions of high flow gradients. Grid Interactive Refinement and Flow-Field Examination (GIRAFFE) computer program written in conjunction with TIGER program to display computed flow-field data and to assist researcher in verifying specified boundary conditions and refining grid.
Empirical Analysis and Refinement of Expert System Knowledge Bases
Weiss, Sholom M.; Politakis, Peter; Ginsberg, Allen
1986-01-01
Recent progress in knowledge base refinement for expert systems is reviewed. Knowledge base refinement is characterized by the constrained modification of rule-components in an existing knowledge base. The goals are to localize specific weaknesses in a knowledge base and to improve an expert system's performance. Systems that automate some aspects of knowledge base refinement can have a significant impact on the related problems of knowledge base acquisition, maintenance, verification, and learning from experience. The SEEK empirical analysis and refinement system is reviewed and its successor system, SEEK2, is introduced. Important areas for future research in knowledge base refinement are described.
A refined orbit for the satellite of asteroid (107) Camilla
NASA Astrophysics Data System (ADS)
Pajuelo, Myriam Virginia; Carry, Benoit; Vachier, Frederic; Berthier, Jerome; Descamp, Pascal; Merline, William J.; Tamblyn, Peter M.; Conrad, Al; Storrs, Alex; Margot, Jean-Luc; Marchis, Frank; Kervella, Pierre; Girard, Julien H.
2015-11-01
The satellite of the Cybele asteroid (107) Camilla was discovered in March 2001 using the Hubble Space Telescope (Storrs et al., 2001, IAUC 7599). From a set of 23 positions derived from adaptive optics observations obtained over three years with the ESO VLT, Keck-II and Gemini-North telescopes, Marchis et al. (2008, Icarus 196) determined its orbit to be nearly circular. In the new work reported here, we compiled, reduced, and analyzed observations at 39 epochs (including the 23 positions previously analyzed) by adding additional observations taken from data archives: HST in 2001; Keck in 2002, 2003, and 2009; Gemini in 2010; and VLT in 2011. The present dataset hence contains twice as many epochs as the prior analysis and covers a time span that is three times longer (more than a decade). We use our orbit determination algorithm Genoid (GENetic Orbit IDentification), a genetic-based algorithm that relies on a metaheuristic method and a dynamical model of the Solar System (Vachier et al., 2012, A&A 543). The method uses two models: a simple Keplerian model to minimize the search time for an orbital solution, exploring a wide space of solutions; and a full N-body problem that includes the gravitational field of the primary asteroid up to 4th order. The orbit we derive fits all 39 observed positions of the satellite with an RMS residual of only milli-arcseconds, which corresponds to sub-pixel accuracy. We found the orbit of the satellite to be circular and roughly aligned with the equatorial plane of Camilla. The refined mass of the system is (12 ± 1) x 10^18 kg, for an orbital period of 3.71 days. We will present this improved orbital solution of the satellite of Camilla, as well as predictions for upcoming stellar occultation events.
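The quoted system mass and orbital period determine the orbit's size through Kepler's third law. As a quick consistency sketch (assuming, for illustration only, a negligible satellite mass and a two-body Keplerian orbit rather than the paper's full N-body model):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 12e18              # system mass reported in the abstract, kg
T = 3.71 * 86400.0     # orbital period reported in the abstract, s

# Kepler's third law for a test-mass satellite: a^3 = G M T^2 / (4 pi^2)
a = (G * M * T ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
print(f"semi-major axis ~ {a / 1e3:.0f} km")
```

With these inputs the semi-major axis comes out on the order of 1300 km, a few times Camilla's radius, which is consistent with an adaptive-optics-resolvable companion.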
Deformable elastic network refinement for low-resolution macromolecular crystallography
Schröder, Gunnar F.; Levitt, Michael; Brunger, Axel T.
2014-09-01
An overview of applications of the deformable elastic network (DEN) refinement method is presented together with recommendations for its optimal usage. Crystals of membrane proteins and protein complexes often diffract to low resolution owing to their intrinsic molecular flexibility, heterogeneity or the mosaic spread of micro-domains. At low resolution, the building and refinement of atomic models is a more challenging task. The deformable elastic network (DEN) refinement method developed previously has been instrumental in the determination of several structures at low resolution. Here, DEN refinement is reviewed, recommendations for its optimal usage are provided and its limitations are discussed. Representative examples of the application of DEN refinement to challenging cases of refinement at low resolution are presented. These cases include soluble as well as membrane proteins determined at limiting resolutions ranging from 3 to 7 Å. Potential extensions of the DEN refinement technique and future perspectives for the interpretation of low-resolution crystal structures are also discussed.
Rapid Glass Refiner Development Program, Final report
1995-02-20
A rapid glass refiner (RGR) technology which could be applied to both conventional and advanced glass melting systems would significantly enhance the productivity and the competitiveness of the glass industry in the United States. Therefore, Vortec Corporation, with the support of the US Department of Energy (US DOE) under Cooperative Agreement No. DE-FC07-90ID12911, conducted a research and development program for a unique and innovative approach to rapid glass refining. To provide focus for this research effort, container glass was the primary target from among the principal glass types based on its market size and potential for significant energy savings. Container glass products represent the largest segment of the total glass industry, accounting for 60% of the tonnage produced and over 40% of the annual energy consumption of 232 trillion Btu/yr. Projections of energy consumption and the market penetration of advanced melting and fining into the container glass industry yield a potential energy savings of 7.9 trillion Btu/yr by the year 2020.
HEATR project: ATR algorithm parallelization
NASA Astrophysics Data System (ADS)
Deardorf, Catherine E.
1998-09-01
High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPC's for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model- based and training-based (template-based) arena in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.
Cosmos++: Relativistic Magnetohydrodynamics on Unstructured Grids with Local Adaptive Refinement
Anninos, P; Fragile, P C; Salmonson, J D
2005-05-06
A new code and methodology are introduced for solving the fully general relativistic magnetohydrodynamic (GRMHD) equations using time-explicit, finite-volume discretization. The code has options for solving the GRMHD equations using traditional artificial-viscosity (AV) or non-oscillatory central difference (NOCD) methods, or a new extended AV (eAV) scheme using artificial viscosity together with a dual energy-flux-conserving formulation. The dual energy approach allows for accurate modeling of highly relativistic flows at boost factors well beyond what has been achieved to date by standard artificial viscosity methods. It provides the benefit of Godunov methods in capturing high Lorentz boosted flows but without complicated Riemann solvers, and the advantages of traditional artificial viscosity methods in their speed and flexibility. Additionally, the GRMHD equations are solved on an unstructured grid that supports local adaptive mesh refinement using a fully threaded oct-tree (in three dimensions) network to traverse the grid hierarchy across levels and immediate neighbors. A number of tests are presented to demonstrate robustness of the numerical algorithms and adaptive mesh framework over a wide spectrum of problems, boosts, and astrophysical applications, including relativistic shock tubes, shock collisions, magnetosonic shocks, Alfven wave propagation, blast waves, magnetized Bondi flow, and the magneto-rotational instability in Kerr black hole spacetimes.
Refinement of ground reference data with segmented image data
NASA Technical Reports Server (NTRS)
Robinson, Jon W.; Tilton, James C.
1991-01-01
One of the ways to determine ground reference data (GRD) for satellite remote sensing data is to photo-interpret low-altitude aerial photographs, digitize the cover types on a digitizing tablet, and register them to 7.5-minute U.S.G.S. maps (that were themselves digitized). The resulting GRD can be registered to the satellite image, or vice versa. Unfortunately, there are many opportunities for error when using a digitizing tablet, and the resolution of the edges of the GRD depends on the spacing of the points selected on the tablet. One consequence is that when the GRD is overlaid on the image, errors and missed detail become evident. An approach is discussed for correcting these errors and adding detail to the GRD through a highly interactive, visually oriented process. This process involves overlaid visual displays of the satellite image data, the GRD, and a segmentation of the satellite image data. Several prototype programs were implemented which provide a means of taking a segmented image and using the edges from the reference data to mask out those segment edges that are beyond a certain distance from the reference data edges. Then, using the reference data edges as a guide, those remaining segment edges that are judged not to be image versions of the reference edges are manually marked and removed. The prototype programs that were developed and the algorithmic refinements that facilitate execution of this task are described.
Dimensional reduction as a tool for mesh refinement and tracking singularities of PDEs
Stinis, Panagiotis
2007-06-10
We present a collection of algorithms which utilize dimensional reduction to perform mesh refinement and study possibly singular solutions of time-dependent partial differential equations. The algorithms are inspired by constructions used in statistical mechanics to evaluate the properties of a system near a critical point. The first algorithm allows the accurate determination of the time of occurrence of a possible singularity. The second algorithm is an adaptive mesh refinement scheme which can be used to approach the possible singularity efficiently. Finally, the third algorithm uses the second algorithm until the available resolution is exhausted (as we approach the possible singularity) and then switches to a dimensionally reduced model which, when accurate, can follow the solution faithfully beyond the time of occurrence of the purported singularity. An accurate dimensionally reduced model should dissipate energy at the right rate. We construct two variants of each algorithm. The first variant assumes that we have actual knowledge of the reduced model. The second variant assumes that we know the form of the reduced model, i.e., the terms appearing in the reduced model, but not necessarily their coefficients. In this case, we also provide a way of determining the coefficients. We present numerical results for the Burgers equation with zero and nonzero viscosity to illustrate the use of the algorithms.
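The adaptive mesh refinement idea in the second algorithm can be illustrated with a minimal 1-D sketch. This is not the authors' method: the `refine_grid` name, the jump-based refinement criterion, and the tanh test profile (a viscous-shock-like front, as in a Burgers solution) are all assumptions for illustration.

```python
import math

def refine_grid(xs, f, tol, max_level=8):
    # Bisect any interval across which the sampled solution jumps by more
    # than `tol`; stop when no interval needs refining or max_level is hit.
    for _ in range(max_level):
        new, refined = [xs[0]], False
        for a, b in zip(xs, xs[1:]):
            if abs(f(b) - f(a)) > tol:
                new.append((a + b) / 2)   # extra point in the steep region
                refined = True
            new.append(b)
        xs = new
        if not refined:
            break
    return xs

# A steep profile, tanh(x / eps), concentrates grid points near x = 0.
grid = refine_grid([i / 10 for i in range(-10, 11)],
                   lambda x: math.tanh(x / 0.01), tol=0.2)
```

With this criterion the grid stays coarse away from the front and is bisected repeatedly where the solution varies rapidly, which is the basic economy that refinement schemes of this kind exploit.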
Application of adaptive mesh refinement to particle-in-cell simulations of plasmas and beams
Vay, J.-L.; Colella, P.; Kwan, J.W.; McCorquodale, P.; Serafini, D.B.; Friedman, A.; Grote, D.P.; Westenskow, G.; Adam, J.-C.; Heron, A.; Haber, I.
2003-11-04
Plasma simulations are often rendered challenging by the disparity of scales in time and in space which must be resolved. When these disparities are in distinctive zones of the simulation domain, a method which has proven to be effective in other areas (e.g. fluid dynamics simulations) is the mesh refinement technique. We briefly discuss the challenges posed by coupling this technique with plasma Particle-In-Cell simulations, and present examples of application in Heavy Ion Fusion and related fields which illustrate the effectiveness of the approach. We also report on the status of a collaboration under way at Lawrence Berkeley National Laboratory between the Applied Numerical Algorithms Group (ANAG) and the Heavy Ion Fusion group to upgrade ANAG's mesh refinement library Chombo to include the tools needed by Particle-In-Cell simulation codes.
Hornung, R.D.
1996-12-31
An adaptive local mesh refinement (AMR) algorithm originally developed for unsteady gas dynamics is extended to multi-phase flow in porous media. Within the AMR framework, we combine specialized numerical methods to treat the different aspects of the partial differential equations. Multi-level iteration and domain decomposition techniques are incorporated to accommodate elliptic/parabolic behavior. High-resolution shock capturing schemes are used in the time integration of the hyperbolic mass conservation equations. When combined with AMR, these numerical schemes provide high resolution locally in a more efficient manner than if they were applied on a uniformly fine computational mesh. We will discuss the interplay of physical, mathematical, and numerical concerns in the application of adaptive mesh refinement to flow in porous media problems of practical interest.
40 CFR 80.235 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2014 CFR
2014-07-01
...) The total corporate crude oil capacity of each refinery as reported to the Energy Information... and had an average crude oil capacity less than or equal to 155,000 bpcd. Where appropriate, the employee and crude oil capacity criteria for such refiners will be based on the most recent 12 months...
40 CFR 80.235 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2013 CFR
2013-07-01
...) The total corporate crude oil capacity of each refinery as reported to the Energy Information... and had an average crude oil capacity less than or equal to 155,000 bpcd. Where appropriate, the employee and crude oil capacity criteria for such refiners will be based on the most recent 12 months...
Global path planning of mobile robots using a memetic algorithm
NASA Astrophysics Data System (ADS)
Zhu, Zexuan; Wang, Fangxiao; He, Shan; Sun, Yiwen
2015-08-01
In this paper, a memetic algorithm for global path planning (MAGPP) of mobile robots is proposed. MAGPP is a synergy of genetic algorithm (GA) based global path planning and a local path refinement. In particular, candidate path solutions are represented as GA individuals and evolved with evolutionary operators. In each GA generation, the local path refinement is applied to the GA individuals to rectify and improve the paths they encode. MAGPP is characterised by a flexible path encoding scheme, introduced to encode the obstacles bypassed by a path. Both path length and smoothness are considered as fitness evaluation criteria. MAGPP is tested on simulated maps and compared with counterpart algorithms. The experimental results demonstrate the efficiency of MAGPP, which is shown to obtain better solutions than the other algorithms compared.
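The GA-plus-local-refinement loop described above can be sketched as a toy in an obstacle-free workspace. This is not the MAGPP encoding: the waypoint representation, the fitness weights, and the midpoint-nudge refinement step are assumptions chosen only to show how a local search slots into each GA generation.

```python
import math
import random

def path_length(path):
    # Euclidean length of a polyline of (x, y) waypoints.
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def smoothness_penalty(path):
    # Sum of turning angles at interior waypoints (smaller is smoother).
    total = 0.0
    for a, b, c in zip(path, path[1:], path[2:]):
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 and n2:
            cosang = max(-1.0, min(1.0, (v1[0]*v2[0] + v1[1]*v2[1]) / (n1*n2)))
            total += math.acos(cosang)
    return total

def fitness(path):
    # Both criteria from the abstract: length plus a smoothness weight.
    return path_length(path) + 0.5 * smoothness_penalty(path)

def local_refine(path):
    # Local refinement: nudge each interior waypoint toward the midpoint
    # of its neighbours, keeping the move only if fitness improves.
    path = list(path)
    for i in range(1, len(path) - 1):
        a, c = path[i - 1], path[i + 1]
        cand = path[:i] + [((a[0]+c[0]) / 2, (a[1]+c[1]) / 2)] + path[i+1:]
        if fitness(cand) < fitness(path):
            path = cand
    return path

def memetic_plan(start, goal, n_way=4, pop=20, gens=40, seed=0):
    rng = random.Random(seed)
    def rand_path():
        return [start] + [(rng.uniform(0, 10), rng.uniform(0, 10))
                          for _ in range(n_way)] + [goal]
    population = [rand_path() for _ in range(pop)]
    for _ in range(gens):
        population = [local_refine(p) for p in population]   # memetic step
        population.sort(key=fitness)
        parents = population[:pop // 2]
        children = []
        while len(children) < pop - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, len(p1) - 1)
            child = p1[:cut] + p2[cut:]                      # crossover
            j = rng.randrange(1, len(child) - 1)             # mutation
            child[j] = (child[j][0] + rng.gauss(0, 0.5),
                        child[j][1] + rng.gauss(0, 0.5))
            children.append(child)
        population = parents + children
    return min(population, key=fitness)

best = memetic_plan((0, 0), (10, 10))
```

Without obstacles the refined population collapses toward the straight segment between start and goal; the point of the sketch is only the structure (evolve, then locally refine every individual each generation).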
Proving refinement transformations for deriving high-assurance software
Winter, V.L.; Boyle, J.M.
1996-05-01
The construction of a high-assurance system requires some evidence, ideally a proof, that the system as implemented will behave as required. Direct proofs of implementations do not scale up well as systems become more complex and therefore are of limited value. In recent years, refinement-based approaches have been investigated as a means to manage the complexity inherent in the verification process. In a refinement-based approach, a high-level specification is converted into an implementation through a number of refinement steps. The hope is that the proofs of the individual refinement steps will be easier than a direct proof of the implementation. However, if stepwise refinement is performed manually, the number of steps is severely limited, implying that the size of each step is large. If refinement steps are large, then proofs of their correctness will not be much easier than a direct proof of the implementation. The authors describe an approach to refinement-based software development that is based on automatic application of refinements, expressed as program transformations. This automation has the desirable effect that the refinement steps can be extremely small and, thus, easy to prove correct. They give an overview of the TAMPR transformation system that they use for automated refinement. They then focus on some aspects of the semantic framework that they have been developing to enable proofs that TAMPR transformations are correctness preserving. With this framework, proofs of correctness for transformations can be obtained with the assistance of an automated reasoning system.
Level 5: user refinement to aid the fusion process
NASA Astrophysics Data System (ADS)
Blasch, Erik P.; Plano, Susan
2003-04-01
The revised JDL Fusion model Level 4 process refinement covers a broad spectrum of actions such as sensor management and control. A limitation of Level 4 is the
The evolution and refinements of varicocele surgery
Marmar, Joel L
2016-01-01
Varicoceles had been recognized in clinical practice for over a century. Originally, these procedures were utilized for the management of pain but, since 1952, the repairs had been mostly for the treatment of male infertility. However, the diagnosis and treatment of varicoceles were controversial, because the pathophysiology was not clear, the entry criteria of the studies varied among centers, and there were few randomized clinical trials. Nevertheless, clinicians continued developing techniques for the correction of varicoceles, basic scientists continued investigations on the pathophysiology of varicoceles, and new outcome data from prospective randomized trials have appeared in the world's literature. Therefore, this special edition of the Asian Journal of Andrology was proposed to report much of the new information related to varicoceles and, as a specific part of this project, the present article was developed as a comprehensive review of the evolution and refinements of the corrective procedures. PMID:26732111
Formal language theory: refining the Chomsky hierarchy
Jäger, Gerhard; Rogers, James
2012-01-01
The first part of this article gives a brief overview of the four levels of the Chomsky hierarchy, with a special emphasis on context-free and regular languages. It then recapitulates the arguments why neither regular nor context-free grammar is sufficiently expressive to capture all phenomena in the natural language syntax. In the second part, two refinements of the Chomsky hierarchy are reviewed, which are both relevant to the extant research in cognitive science: the mildly context-sensitive languages (which are located between context-free and context-sensitive languages), and the sub-regular hierarchy (which distinguishes several levels of complexity within the class of regular languages). PMID:22688632
Adaptive Mesh Refinement Simulations of Relativistic Binaries
NASA Astrophysics Data System (ADS)
Motl, Patrick M.; Anderson, M.; Lehner, L.; Olabarrieta, I.; Tohline, J. E.; Liebling, S. L.; Rahman, T.; Hirschman, E.; Neilsen, D.
2006-09-01
We present recent results from our efforts to evolve relativistic binaries composed of compact objects. We simultaneously solve the general relativistic hydrodynamics equations to evolve the material components of the binary and Einstein's equations to evolve the space-time. These two codes are coupled through an adaptive mesh refinement driver (had). One of the ultimate goals of this project is to address the merger of a neutron star and black hole and assess the possible observational signature of such systems as gamma ray bursts. This work has been supported in part by NSF grants AST 04-07070 and PHY 03-26311 and in part through NASA's ATP program grant NAG5-13430. The computations were performed primarily at NCSA through grant MCA98N043 and at LSU's Center for Computation & Technology.
Visualization Tools for Adaptive Mesh Refinement Data
Weber, Gunther H.; Beckner, Vincent E.; Childs, Hank; Ligocki, Terry J.; Miller, Mark C.; Van Straalen, Brian; Bethel, E. Wes
2007-05-09
Adaptive Mesh Refinement (AMR) is a highly effective method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations that must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR as a first-class data type, and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, an open source visualization tool, to accommodate AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR visualization research and tools and describe how VisIt currently handles AMR data.
Visualization of Scalar Adaptive Mesh Refinement Data
VACET; Weber, Gunther H.; Beckner, Vince E.; Childs, Hank; Ligocki, Terry J.; Miller, Mark C.; Van Straalen, Brian; Bethel, E. Wes
2007-12-06
Adaptive Mesh Refinement (AMR) is a highly effective computation method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations, which must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR grids as a first-class data type, and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, an open source visualization tool, to accommodate AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR scalar data visualization research.
GRChombo: Numerical relativity with adaptive mesh refinement
NASA Astrophysics Data System (ADS)
Clough, Katy; Figueras, Pau; Finkel, Hal; Kunesch, Markus; Lim, Eugene A.; Tunyasuvunakool, Saran
2015-12-01
In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block-structured Berger-Rigoutsos grid generation. The code supports non-trivial 'many-boxes-in-many-boxes' mesh hierarchies and massive parallelism through the Message Passing Interface. GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3 + 1 setting, while also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.
GC-directed control improves refining
Hail, G.F.
1991-02-01
The role of refinery product quality control is increasingly significant. Driven not only by product specification and economic goals, refiners must also satisfy new purchaser demands. That is, the emphasis on monitoring product quality on-line in an accurate, timely manner is greater now than ever, due largely to the expanding use of statistical methods (SQC/SPC) in analyzing and manipulating process operation. Consequently, reliable composition control is essential in maintaining refinery prosperity. Process gas chromatographs are frequently used to monitor the performance of distillation, absorption, and stripping towers by providing near-real-time information on stream composition, particular component concentrations, or calculated parameters (Rvp, Btu content, etc.). This paper reports that appreciably greater benefit can be achieved when process gas chromatographs (GCs) provide on-line feedback data to process control schemes.
Essays on refining markets and environmental policy
NASA Astrophysics Data System (ADS)
Oladunjoye, Olusegun Akintunde
This thesis is comprised of three essays. The first two essays examine empirically the relationship between crude oil price and wholesale gasoline prices in the U.S. petroleum refining industry while the third essay determines the optimal combination of emissions tax and environmental research and development (ER&D) subsidy when firms organize ER&D either competitively or as a research joint venture (RJV). In the first essay, we estimate an error correction model to determine the effects of market structure on the speed of adjustment of wholesale gasoline prices, to crude oil price changes. The results indicate that market structure does not have a strong effect on the dynamics of price adjustment in the three regional markets examined. In the second essay, we allow for inventories to affect the relationship between crude oil and wholesale gasoline prices by allowing them to affect the probability of regime change in a Markov-switching model of the refining margin. We find that low gasoline inventory increases the probability of switching from the low margin regime to the high margin regime and also increases the probability of staying in the high margin regime. This is consistent with the predictions of the competitive storage theory. In the third essay, we extend the Industrial Organization R&D theory to the determination of optimal environmental policies. We find that RJV is socially desirable. In comparison to competitive ER&D, we suggest that regulators should encourage RJV with a lower emissions tax and higher subsidy as these will lead to the coordination of ER&D activities and eliminate duplication of efforts while firms internalize their technological spillover externality.
Implementation of modified SPIHT algorithm for Compression of images
NASA Astrophysics Data System (ADS)
Kurume, A. V.; Yana, D. M.
2011-12-01
We present a throughput-efficient FPGA implementation of the Set Partitioning in Hierarchical Trees (SPIHT) algorithm for compression of images. SPIHT exploits the inherent redundancy among wavelet coefficients and is suited for both grey-scale and color images. The basic SPIHT algorithm uses a dynamic data structure which hinders hardware realization. We have modified basic SPIHT in two ways: one by using static (fixed) mappings which represent significant information, and the other by interchanging the sorting and refinement passes.
A deterministic algorithm for constrained enumeration of transmembrane protein folds.
Brown, William Michael; Young, Malin M.; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Schoeniger, Joseph S.
2004-07-01
A deterministic algorithm for enumeration of transmembrane protein folds is presented. Using a set of sparse pairwise atomic distance constraints (such as those obtained from chemical cross-linking, FRET, or dipolar EPR experiments), the algorithm performs an exhaustive search of secondary structure element packing conformations distributed throughout the entire conformational space. The end result is a set of distinct protein conformations, which can be scored and refined as part of a process designed for computational elucidation of transmembrane protein structures.
40 CFR 409.30 - Applicability; description of the liquid cane sugar refining subcategory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... liquid cane sugar refining subcategory. 409.30 Section 409.30 Protection of Environment ENVIRONMENTAL... Cane Sugar Refining Subcategory § 409.30 Applicability; description of the liquid cane sugar refining... cane sugar into liquid refined sugar....
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Modiano, David; Colella, Phillip
1994-01-01
A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.
On macromolecular refinement at subatomic resolution with interatomic scatterers
Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.; Lunin, Vladimir Y.; Urzhumtsev, Alexandre
2007-11-09
A study of the accurate electron density distribution in molecular crystals at subatomic resolution, better than approximately 1.0 Å, requires more detailed models than those based on independent spherical atoms. A tool conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 Å, the amount of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark datasets gave results comparable in quality with results of multipolar refinement and superior to those for conventional models. Applications to several datasets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
Parallel adaptive mesh refinement for electronic structure calculations
Kohn, S.; Weare, J.; Ong, E.; Baden, S.
1996-12-01
We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradients with multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.
High Speed Photography Of Wood Pulping In A Disc Refiner
NASA Astrophysics Data System (ADS)
Atack, D.; Clayton, D. L.; Quinn, A. E.; Stationwala, M. I.
1985-02-01
Some of the mechanisms involved in the reduction of wood chips to papermaking pulp in a commercial disc refiner have been determined by high speed photography. Flow patterns of pulp through the refiner, including an unexpected recirculation pattern, have been recorded. Cine-photography was also employed to show how wood chips are transported by a ribbon screw feeder into the refiner. Some aspects of photographing in a hostile environment are described. The following salient observations have been made during these studies. Chips and dilution water fall to the base of the feeder housing and are fed along it to the refiner eye, where the chips are reduced to coarse pulp. This coarse pulp proceeds through the breaker bars into the refining zone. Some pulp in the inner part of the refining zone flows back to the breaker bars along grooves of the stationary plates, giving rise to considerable recirculation. Pulp in the outer part of the refining zone moves radially outwards. For a short fraction of its passage through the refiner, most of the fibrous material is constrained to move in the direction of rotation of the moving plates. Some of this material is stapled momentarily in a tangential orientation across the bars of both sets of plates. The immobilized fibres are then subjected to the refining action between the relatively moving bars before being disgorged into the adjacent grooves.
New Process for Grain Refinement of Aluminum. Final Report
Dr. Joseph A. Megy
2000-09-22
A new method of grain refining aluminum, involving in-situ formation of boride nuclei in molten aluminum just prior to casting, has been developed in the subject DOE program over the last thirty months by a team consisting of JDC, Inc., Alcoa Technical Center, GRAS, Inc., Touchstone Labs, and GKS Engineering Services. The manufacturing process to make boron trichloride for grain refining is much simpler than preparing conventional grain refiners, with attendant environmental, capital, and energy savings. The manufacture of boride grain-refining nuclei using the fy-Gem process avoids the clusters, salt, and oxide inclusions that cause quality problems in aluminum today.
Improved ligand geometries in crystallographic refinement using AFITT in PHENIX.
Janowski, Pawel A; Moriarty, Nigel W; Kelley, Brian P; Case, David A; York, Darrin M; Adams, Paul D; Warren, Gregory L
2016-09-01
Modern crystal structure refinement programs rely on geometry restraints to overcome the challenge of a low data-to-parameter ratio. While the classical Engh and Huber restraints work well for standard amino-acid residues, the chemical complexity of small-molecule ligands presents a particular challenge. Most current approaches either limit ligand restraints to those that can be readily described in the Crystallographic Information File (CIF) format, thus sacrificing chemical flexibility and energetic accuracy, or they employ protocols that substantially lengthen the refinement time, potentially hindering rapid automated refinement workflows. PHENIX-AFITT refinement uses a full molecular-mechanics force field for user-selected small-molecule ligands during refinement, eliminating the potentially difficult problem of finding or generating high-quality geometry restraints. It is fully integrated with a standard refinement protocol and requires practically no additional steps from the user, making it ideal for high-throughput workflows. PHENIX-AFITT refinements also handle multiple ligands in a single model, alternate conformations and covalently bound ligands. Here, the results of combining AFITT and the PHENIX software suite on a data set of 189 protein-ligand PDB structures are presented. Refinements using PHENIX-AFITT significantly reduce ligand conformational energy and lead to improved geometries without detriment to the fit to the experimental data. For the data presented, PHENIX-AFITT refinements result in more chemically accurate models for small-molecule ligands. PMID:27599738
Refiners react to changes in the pipeline infrastructure
Giles, K.A.
1997-06-01
Petroleum pipelines have long been a critical component in the distribution of crude and refined products in the U.S. Pipelines are typically the most cost efficient mode of transportation for reasonably consistent flow rates. For obvious reasons, inland refineries and consumers are much more dependent on petroleum pipelines to provide supplies of crude and refined products than refineries and consumers located on the coasts. Significant changes in U.S. distribution patterns for crude and refined products are reshaping the pipeline infrastructure and presenting challenges and opportunities for domestic refiners. These changes are discussed.
Adaptive h-refinement for reduced-order models
Carlberg, Kevin T.
2014-11-05
Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by 'splitting' a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
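The basis-splitting step can be sketched in a deliberately simplified form. The 1-D state coordinates, plain Lloyd iterations, and the `split_basis_vector` name are illustrative assumptions, not the paper's implementation; the sketch only shows the key invariant, that the child vectors have disjoint support and sum back to the parent.

```python
def split_basis_vector(v, coords, k=2, iters=50):
    # Split one basis vector v into k vectors with disjoint support by
    # 1-D k-means clustering of the state-variable coordinates.
    centers = [coords[i * (len(coords) - 1) // (k - 1)] for i in range(k)]
    labels = [0] * len(coords)
    for _ in range(iters):  # plain Lloyd iterations
        labels = [min(range(k), key=lambda j: abs(x - centers[j]))
                  for x in coords]
        for j in range(k):
            members = [coords[i] for i, lab in enumerate(labels) if lab == j]
            if members:
                centers[j] = sum(members) / len(members)
    # Children partition the support of v: summed entrywise they recover v,
    # so replacing v by its children can only enlarge the spanned subspace.
    return [[vi if lab == j else 0.0 for vi, lab in zip(v, labels)]
            for j in range(k)]

children = split_basis_vector([1.0] * 10, list(range(10)), k=2)
```

Repeating the split on each child yields the tree of subspaces; fully expanded, the basis contains a unit-like vector per state variable, which is why complete refinement recovers the full-order model.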
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
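A toy version of the fixed-size "function gas" can be sketched as follows. This is a drastic simplification of the model: plain Python unary functions stand in for lambda-calculus expressions, and the interaction rule is bare composition with a random replacement to keep the ensemble size constant.

```python
import random

def turing_gas(initial, steps=10, seed=0):
    # A fixed-size ensemble of unary functions; at each step two randomly
    # chosen members interact by composition, and the product replaces a
    # randomly chosen member, so the ensemble size never changes.
    rng = random.Random(seed)
    pool = list(initial)
    for _ in range(steps):
        f, g = rng.choice(pool), rng.choice(pool)
        h = lambda x, f=f, g=g: f(g(x))      # interaction: f o g
        pool[rng.randrange(len(pool))] = h
    return pool

pool = turing_gas([lambda x: x + 1, lambda x: 2 * x])
```

In the paper's language the products are normalized expressions that can be compared and graphed; here they remain opaque closures, so the sketch captures only the self-referential loop (objects interacting to produce new objects in the same pool), not the emergent organizations.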
Nonlinear Global Optimization Using Curdling Algorithm
1996-03-01
An algorithm for performing curdling optimization, a derivative-free, grid-refinement approach to nonlinear optimization, was developed and implemented in software. This approach overcomes a number of deficiencies in existing approaches; most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to four dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. Constraints are handled as being initially fuzzy, but become tighter with each iteration.
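The grid-refinement idea can be sketched in one dimension. This is an illustrative reconstruction, not the released software: the `curdle_minimize` name, the cell-keeping fraction, and the per-cell sampling density are assumed parameters. Keeping a fraction of the best cells, rather than the single best sample, is what lets the method report extremal regions and find several extrema in one pass.

```python
def curdle_minimize(f, lo, hi, levels=6, keep_frac=0.3, n=16):
    # Derivative-free grid refinement on an interval: sample each surviving
    # cell on a uniform grid, keep the best fraction of subcells, and refine
    # only those. No derivatives, gradients, or vectors are computed.
    cells = [(lo, hi)]
    for _ in range(levels):
        samples = []
        for a, b in cells:
            w = (b - a) / (2 * n)   # half-width of a subcell
            xs = [a + (b - a) * (i + 0.5) / n for i in range(n)]
            samples += [((x - w, x + w), f(x)) for x in xs]
        samples.sort(key=lambda s: s[1])   # best (lowest) values first
        cells = [c for c, _ in samples[:max(1, int(keep_frac * len(samples)))]]
    # Centres of surviving cells delimit the extremal regions.
    return sorted((a + b) / 2 for a, b in cells)

# (x^2 - 1)^2 has two global minima, x = -1 and x = +1; both survive.
centers = curdle_minimize(lambda x: (x * x - 1) ** 2, -2.0, 2.0)
```

Because cells around every competitive value are retained at each level, both minima of the test function are refined simultaneously, mirroring the abstract's claim of finding all roots or extremal regions in one pass.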
An algorithm for segmenting polarimetric SAR imagery
NASA Astrophysics Data System (ADS)
Geaga, Jorge V.
2015-05-01
We have developed an algorithm for segmenting fully polarimetric single look TerraSAR-X, multilook SIR-C and 7 band Landsat 5 imagery using neural nets. The algorithm uses a feedforward neural net with one hidden layer to segment different surface classes. The weights are refined through an iterative filtering process characteristic of a relaxation process. Features selected from studies of fully polarimetric complex single look TerraSAR-X data and multilook SIR-C data are used as input to the net. The seven bands from Landsat 5 data are used as input for the Landsat neural net. The Cloude-Pottier incoherent decomposition is used to investigate the physical basis of the polarimetric SAR data segmentation. The segmentation of a SIR-C ocean surface scene into four classes is presented. This segmentation algorithm could be a very useful tool for investigating complex polarimetric SAR phenomena.
An efficient parallel algorithm for mesh smoothing
Freitag, L.; Plassmann, P.; Jones, M.
1995-12-31
Automatic mesh generation and adaptive refinement methods have proven to be very successful tools for the efficient solution of complex finite element applications. A problem with these methods is that they can produce poorly shaped elements; such elements are undesirable because they introduce numerical difficulties in the solution process. However, the shape of the elements can be improved through the determination of new geometric locations for mesh vertices by using a mesh smoothing algorithm. In this paper the authors present a new parallel algorithm for mesh smoothing that has a fast parallel runtime both in theory and in practice. The authors present an efficient implementation of the algorithm that uses non-smooth optimization techniques to find the new location of each vertex. Finally, they present experimental results obtained on the IBM SP system demonstrating the efficiency of this approach.
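The paper's smoothing step solves a non-smooth optimization problem per vertex; as a far simpler stand-in, the basic structure of a smoothing sweep (boundary vertices fixed, interior vertices relaxed toward their neighbors) can be illustrated with Laplacian smoothing. All names and parameters below are illustrative, not from the paper.

```python
import numpy as np

def laplacian_smooth(verts, adjacency, fixed, iters=20, alpha=0.5):
    """Move each free vertex toward the centroid of its neighbors.

    Each sweep computes all updates from the previous positions, so
    vertices can be relaxed independently -- the property a parallel
    smoother exploits. Boundary vertices in `fixed` never move.
    """
    verts = verts.astype(float).copy()
    for _ in range(iters):
        new = verts.copy()
        for v, nbrs in adjacency.items():
            if v in fixed:
                continue
            centroid = verts[list(nbrs)].mean(axis=0)
            new[v] = (1 - alpha) * verts[v] + alpha * centroid
        verts = new
    return verts

# Unit square with a badly placed interior vertex (index 4).
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.9, 0.9]])
adj = {4: [0, 1, 2, 3]}
smoothed = laplacian_smooth(pts, adj, fixed={0, 1, 2, 3})
```

The interior vertex relaxes toward the centroid of its neighbors, improving element shape; the non-smooth optimization of the paper replaces this centroid update with one that directly maximizes the worst element quality around each vertex.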
NASA Technical Reports Server (NTRS)
Schultz, Christopher J.; Carey, Larry; Cecil, Dan; Bateman, Monte; Stano, Geoffrey; Goodman, Steve
2012-01-01
The objective of this project is to refine, adapt and demonstrate the Lightning Jump Algorithm (LJA) for transition to GOES-R GLM (Geostationary Lightning Mapper) readiness, and to establish a path to operations. Ongoing work includes reducing risk in the GLM lightning proxy, cell tracking, LJA algorithm automation, and data fusion (e.g., radar + lightning).
Refined seismic stratigraphy in prograding carbonates
Pomar, L.
1991-03-01
Complete exposure of the upper Miocene Reef Complex in the sea cliffs of Mallorca (Spain) allows for a more refined interpretation of seismic lines with similar progradational patterns. A 6 km long high-resolution cross section in the direction of reef progradation displays four hierarchical orders of accretional units. Although these units are of higher order, they all exhibit characteristics similar to a third-order depositional sequence and can likewise be interpreted as the result of high-order sea-level cycles. The accretional units are composed of lagoonal horizontal beds, reefal sigmoids and gently dipping slope deposits. They are bounded by erosion surfaces at the top and basinwards by their correlative conformities. These architectural patterns are similar to progradational sequences seen on seismic lines. On seismic lines, the progradational pattern often shows the following geometrical details: (1) discontinuous climbing high-energy reflectors, (2) truncation of clinoforms by these high-energy reflectors with seaward dips, and (3) transparent areas intercalated between clinoforms. Based on facies distribution in the outcrops of Mallorca, the high-energy reflectors are interpreted as sectors where the erosion surfaces truncated the reef wall and are overlain by lagoonal sediments deposited during the following sea-level rise. The more transparent zones seem to correspond with areas of superposition of undifferentiated lagoonal beds. Offlapping geometries can also be detected in the highest quality seismic lines. The comparison between seismic and outcrop data provides a more accurate prediction of lithologies, facies distribution, and reservoir properties on seismic profiles.
Electron beam cold hearth refining in Vallejo
Lowe, J.H.C.
1994-12-31
The Electron Beam Cold Hearth Refining (EBCHR) furnace in Vallejo, California, is alive, well, and girding itself for developing new markets. A brief review is given of the twelve years' experience with EBCHR in Vallejo. Acquisition of the Vallejo facility by Axel Johnson Metals, Inc. paves the way for the development of new products and markets. Some of the new opportunities for the advancement of EBCHR technology are discussed. Advantages of the EBCHR process include: extended surface area of molten metal exposed to higher vacuum; liberation of insoluble oxide particles to the surface of the melt; higher temperatures that allow coarse solid particles like carbides and carbonitrides to be suspended in the fluid metal as fine micro-segregates; and enhanced removal of volatile trace impurities like lead, bismuth and cadmium. Future work for the company includes the continued recycling of alloys and also fabricating stainless steel for the piping of chip assembly plants, to prevent 'killer defects' that ruin a memory chip.
Spatially Refined Aerosol Direct Radiative Forcing Efficiencies
NASA Technical Reports Server (NTRS)
Henze, Daven K.; Shindell, Drew Todd; Akhtar, Farhan; Spurr, Robert J. D.; Pinder, Robert W.; Loughlin, Dan; Kopacz, Monika; Singh, Kumaresh; Shim, Changsub
2012-01-01
Global aerosol direct radiative forcing (DRF) is an important metric for assessing potential climate impacts of future emissions changes. However, the radiative consequences of emissions perturbations are not readily quantified nor well understood at the level of detail necessary to assess realistic policy options. To address this challenge, here we show how adjoint model sensitivities can be used to provide highly spatially resolved estimates of the DRF from emissions of black carbon (BC), primary organic carbon (OC), sulfur dioxide (SO2), and ammonia (NH3), using the example of emissions from each sector and country following multiple Representative Concentration Pathways (RCPs). The radiative forcing efficiencies of many individual emissions are found to differ considerably from regional or sectoral averages for NH3, SO2 from the power sector, and BC from domestic, industrial, transportation and biomass burning sources. Consequently, the amount of emissions controls required to attain a specific DRF varies at intracontinental scales by up to a factor of 4. These results thus demonstrate both a need and means for incorporating spatially refined aerosol DRF into analysis of future emissions scenarios and the design of air quality and climate change mitigation policies.
Refining clinical diagnosis with likelihood ratios.
Grimes, David A; Schulz, Kenneth F
Likelihood ratios can refine clinical diagnosis on the basis of signs and symptoms; however, they are underused for patients' care. A likelihood ratio is the percentage of ill people with a given test result divided by the percentage of well individuals with the same result. Ideally, abnormal test results should be much more typical in ill individuals than in those who are well (high likelihood ratio) and normal test results should be more frequent in well people than in sick people (low likelihood ratio). Likelihood ratios near unity have little effect on decision-making; by contrast, high or low ratios can greatly shift the clinician's estimate of the probability of disease. Likelihood ratios can be calculated not only for dichotomous (positive or negative) tests but also for tests with multiple levels of results, such as creatine kinase or ventilation-perfusion scans. When combined with an accurate clinical diagnosis, likelihood ratios from ancillary tests improve diagnostic accuracy in a synergistic manner. PMID:15850636
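The arithmetic described in this abstract is simple enough to state as code. A minimal sketch using the standard formulas for a dichotomous test characterized by sensitivity and specificity (textbook definitions, not anything specific to this paper):

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec);  LR- = (1 - sens) / spec."""
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

def post_test_probability(pre_test_prob, lr):
    """Convert probability to odds, multiply by the LR, convert back."""
    odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = odds * lr
    return post_odds / (1 + post_odds)

# Example: a test with 90% sensitivity and 95% specificity.
lr_pos, lr_neg = likelihood_ratios(0.90, 0.95)   # LR+ = 18, LR- ~ 0.105
p = post_test_probability(0.20, lr_pos)          # pre-test 20% -> ~82%
```

This shows the shift the abstract describes: a high LR+ of 18 moves a 20% pre-test probability to roughly 82%, whereas an LR near unity would leave it almost unchanged.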
Transhiatal Esophagectomy: Clinical Experience and Refinements
Orringer, Mark B.; Marshall, Becky; Iannettoni, Mark D.
1999-01-01
Objective To review the authors’ clinical experience with transhiatal esophagectomy (THE) and the refinements in this procedure that have evolved. Background Increased use of THE during the past two decades has generated controversy about the merits and safety of this approach compared with transthoracic esophageal resection. The authors’ large THE experience provides a valuable basis for benchmarking data regarding the procedure. Methods The results of THE were analyzed retrospectively using the authors’ prospectively established esophageal resection database and follow-up information on these patients. Results From 1976 to 1998, THE was performed in 1085 patients, 26% with benign disease and 74% with cancer. The procedure was possible in 98.6% of cases. Stomach was the esophageal substitute in 96%. The hospital mortality rate was 4%. Blood loss averaged 689 cc. Major complications were anastomotic leak (13%), atelectasis/pneumonia (2%), intrathoracic hemorrhage, recurrent laryngeal nerve paralysis, chylothorax, and tracheal laceration (<1% each). Actuarial survival of patients with carcinoma equaled or exceeded that reported after transthoracic esophagectomy. Late functional results were good or excellent in 70%. With preoperative pulmonary and physical conditioning, a side-to-side stapled cervical esophagogastric anastomosis (<3% incidence of leak), and postoperative epidural anesthesia, the need for an intensive care unit stay has been eliminated and the length of stay reduced to 7 days. Conclusion THE is possible in most patients requiring esophageal resection and can be performed with greater safety and fewer complications than the traditional transthoracic approaches. PMID:10493486
Refining and blending of aviation turbine fuels.
White, R D
1999-02-01
Aviation turbine fuels (jet fuels) are similar to other petroleum products that have a boiling range of approximately 300°F to 550°F. Kerosene and No. 1 grades of fuel oil, diesel fuel, and gas turbine oil share many similar physical and chemical properties with jet fuel. The similarity among these products should allow toxicology data on one material to be extrapolated to the others. Refineries in the USA manufacture jet fuel to meet industry standard specifications. Civilian aircraft primarily use Jet A or Jet A-1 fuel as defined by ASTM D 1655. Military aircraft use JP-5 or JP-8 fuel as defined by MIL-T-5624R or MIL-T-83133D respectively. The freezing point and flash point are the principal differences between the finished fuels. Common refinery processes that produce jet fuel include distillation, caustic treatment, hydrotreating, and hydrocracking. Each of these refining processes may be the final step to produce jet fuel. Sometimes blending of two or more of these refinery process streams is needed to produce jet fuel that meets the desired specifications. Chemical additives allowed for use in jet fuel are also defined in the product specifications. In many cases, the customer rather than the refinery will put additives into the fuel to meet their specific storage or flight condition requirements. PMID:10189575
Astrocytes refine cortical connectivity at dendritic spines
Risher, W Christopher; Patel, Sagar; Kim, Il Hwan; Uezu, Akiyoshi; Bhagat, Srishti; Wilton, Daniel K; Pilaz, Louis-Jan; Singh Alvarado, Jonnathan; Calhan, Osman Y; Silver, Debra L; Stevens, Beth; Calakos, Nicole; Soderling, Scott H; Eroglu, Cagla
2014-01-01
During cortical synaptic development, thalamic axons must establish synaptic connections despite the presence of the more abundant intracortical projections. How thalamocortical synapses are formed and maintained in this competitive environment is unknown. Here, we show that the astrocyte-secreted protein hevin is required for normal thalamocortical synaptic connectivity in the mouse cortex. Absence of hevin results in a profound, long-lasting reduction in thalamocortical synapses accompanied by a transient increase in intracortical excitatory connections. Three-dimensional reconstructions of cortical neurons from serial section electron microscopy (ssEM) revealed that, during early postnatal development, dendritic spines often receive multiple excitatory inputs. Immuno-EM and confocal analyses revealed that the majority of the spines with multiple excitatory contacts (SMECs) receive simultaneous thalamic and cortical inputs. The proportion of SMECs diminishes as the brain develops, but SMECs remain abundant in Hevin-null mice. These findings reveal that, through secretion of hevin, astrocytes control an important developmental synaptic refinement process at dendritic spines. DOI: http://dx.doi.org/10.7554/eLife.04047.001 PMID:25517933
Steel refining with an electrochemical cell
Blander, Milton; Cook, Glenn M.
1988-01-01
Apparatus for processing a metallic fluid containing iron oxide, container for a molten metal including an electrically conductive refractory disposed for contact with the molten metal which contains iron oxide, an electrolyte in the form of a basic slag on top of the molten metal, an electrode in the container in contact with the slag electrically separated from the refractory, and means for establishing a voltage across the refractory and the electrode to reduce iron oxide to iron at the surface of the refractory in contact with the iron oxide containing fluid. A process is disclosed for refining an iron product containing not more than about 10% by weight oxygen and not more than about 10% by weight sulfur, comprising providing an electrolyte of a slag containing one or more of calcium oxide, magnesium oxide, silica or alumina, providing a cathode of the iron product in contact with the electrolyte, providing an anode in contact with the electrolyte electrically separated from the cathode, and operating an electrochemical cell formed by the anode, the cathode and the electrolyte to separate oxygen or sulfur present in the iron product therefrom.
Steel refining with an electrochemical cell
Blander, M.; Cook, G.M.
1988-05-17
Apparatus is described for processing a metallic fluid containing iron oxide, container for a molten metal including an electrically conductive refractory disposed for contact with the molten metal which contains iron oxide, an electrolyte in the form of a basic slag on top of the molten metal, an electrode in the container in contact with the slag electrically separated from the refractory, and means for establishing a voltage across the refractory and the electrode to reduce iron oxide to iron at the surface of the refractory in contact with the iron oxide containing fluid. A process is disclosed for refining an iron product containing not more than about 10% by weight oxygen and not more than about 10% by weight sulfur, comprising providing an electrolyte of a slag containing one or more of calcium oxide, magnesium oxide, silica or alumina, providing a cathode of the iron product in contact with the electrolyte, providing an anode in contact with the electrolyte electrically separated from the cathode, and operating an electrochemical cell formed by the anode, the cathode and the electrolyte to separate oxygen or sulfur present in the iron product therefrom. 2 figs.
Steel refining with an electrochemical cell
Blander, M.; Cook, G.M.
1985-05-21
Disclosed is an apparatus for processing a metallic fluid containing iron oxide, container for a molten metal including an electrically conductive refractory disposed for contact with the molten metal which contains iron oxide, an electrolyte in the form of a basic slag on top of the molten metal, an electrode in the container in contact with the slag electrically separated from the refractory, and means for establishing a voltage across the refractory and the electrode to reduce iron oxide to iron at the surface of the refractory in contact with the iron oxide containing fluid. A process is disclosed for refining an iron product containing not more than about 10% by weight sulfur, comprising providing an electrolyte of a slag containing one or more of calcium oxide, magnesium oxide, silica or alumina, providing a cathode of the iron product in contact with the electrolyte, providing an anode in contact with the electrolyte electrically separated from the cathode, and operating an electrochemical cell formed by the anode, the cathode and the electrolyte to separate oxygen or sulfur present in the iron product therefrom.
Two Improved Algorithms for Envelope and Wavefront Reduction
NASA Technical Reports Server (NTRS)
Kumfert, Gary; Pothen, Alex
1997-01-01
Two algorithms for reordering sparse, symmetric matrices or undirected graphs to reduce envelope and wavefront are considered. The first is a combinatorial algorithm introduced by Sloan and further developed by Duff, Reid, and Scott; we describe enhancements to the Sloan algorithm that improve its quality and reduce its run time. Our test problems fall into two classes with differing asymptotic behavior of their envelope parameters as a function of the weights in the Sloan algorithm. We describe an efficient O(n log n + m) time implementation of the Sloan algorithm, where n is the number of rows (vertices), and m is the number of nonzeros (edges). On a collection of test problems, the improved Sloan algorithm required, on the average, only twice the time required by the simpler Reverse Cuthill-McKee algorithm while improving the mean square wavefront by a factor of three. The second algorithm is a hybrid that combines a spectral algorithm for envelope and wavefront reduction with a refinement step that uses a modified Sloan algorithm. The hybrid algorithm reduces the envelope size and mean square wavefront obtained from the Sloan algorithm at the cost of greater running times. We illustrate how these reductions translate into tangible benefits for frontal Cholesky factorization and incomplete factorization preconditioning.
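As a toy illustration of the quantities involved, the sketch below computes an envelope-style measure for a symmetric adjacency structure and a plain, unweighted Reverse Cuthill-McKee ordering; the production algorithms, and the paper's weighted Sloan variant in particular, are considerably more refined than this.

```python
from collections import deque

def envelope_size(adj, order):
    """Sum over rows of (position of v - smallest position among v and
    its neighbors) under a given vertex ordering -- the quantity the
    reordering algorithms try to shrink."""
    pos = {v: i for i, v in enumerate(order)}
    total = 0
    for v in order:
        cols = [pos[u] for u in adj[v]] + [pos[v]]
        total += pos[v] - min(cols)
    return total

def reverse_cuthill_mckee(adj):
    """BFS ordering from a minimum-degree start vertex, visiting
    neighbors in increasing-degree order, then reversed -- the
    baseline the improved Sloan algorithm is compared against."""
    start = min(adj, key=lambda v: len(adj[v]))
    order, seen, q = [], {start}, deque([start])
    while q:
        v = q.popleft()
        order.append(v)
        for u in sorted(adj[v], key=lambda u: len(adj[u])):
            if u not in seen:
                seen.add(u)
                q.append(u)
    return order[::-1]

# Path graph 0-1-2-3-4, first in a scrambled order, then reordered.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
scrambled = [2, 0, 4, 1, 3]
rcm = reverse_cuthill_mckee(adj)
```

On this path graph the scrambled ordering has an envelope of 7 while the RCM ordering reduces it to 4; the Sloan algorithm adds a weighted priority that also accounts for distance to an end vertex, which is what its envelope parameters tune.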
Block structured adaptive mesh and time refinement for hybrid, hyperbolic + N-body systems
NASA Astrophysics Data System (ADS)
Miniati, Francesco; Colella, Phillip
2007-11-01
We present a new numerical algorithm for the solution of coupled collisional and collisionless systems, based on the block structured adaptive mesh and time refinement strategy (AMR). We describe the issues associated with the discretization of the system equations and the synchronization of the numerical solution on the hierarchy of grid levels. We implement a code based on a higher order, conservative and directionally unsplit Godunov method for hydrodynamics; a symmetric, time centered modified symplectic scheme for the collisionless component; and a multilevel, multigrid relaxation algorithm for the elliptic equation coupling the two components. Numerical results that illustrate the accuracy of the code and the relative merit of various implemented schemes are also presented.
A feature refinement approach for statistical interior CT reconstruction
NASA Astrophysics Data System (ADS)
Hu, Zhanli; Zhang, Yunwan; Liu, Jianbo; Ma, Jianhua; Zheng, Hairong; Liang, Dong
2016-07-01
Interior tomography is clinically desired to reduce the radiation dose rendered to patients. In this work, a new statistical interior tomography approach for computed tomography is proposed. The developed design focuses on taking into account the statistical nature of local projection data and recovering fine structures which are lost in the conventional total-variation (TV)-minimization reconstruction. The proposed method falls within the compressed sensing framework of TV minimization, which only assumes that the interior ROI is piecewise constant or polynomial and does not need any additional prior knowledge. To integrate the statistical distribution property of projection data, the objective function is built under the criteria of penalized weighted least-squares (PWLS-TV). In the implementation of the proposed method, the interior projection extrapolation based FBP reconstruction is first used as the initial guess to mitigate truncation artifacts and also provide an extended field-of-view. Moreover, an interior feature refinement step, as an important processing operation, is performed after each iteration of PWLS-TV to recover the desired structure information which is lost during the TV minimization. Here, a feature descriptor is specifically designed and employed to distinguish structure from noise and noise-like artifacts. A modified steepest descent algorithm is adopted to minimize the associated objective function. The proposed method is applied to both digital phantom and in vivo Micro-CT datasets, and compared to FBP, ART-TV and PWLS-TV. The reconstruction results demonstrate that the proposed method performs better than other conventional methods in suppressing noise, reducing truncated and streak artifacts, and preserving features. The proposed approach demonstrates its potential usefulness for feature preservation of interior tomography under truncated projection measurements.
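The core PWLS-TV idea, a data-fidelity term plus a total-variation penalty minimized by gradient-type descent, can be illustrated in one dimension. The sketch below uses identity weighting, no projection operator, and a smoothed absolute value, so it is a simplified stand-in for, not a reproduction of, the paper's objective; all names and parameters are chosen for the example.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, step=0.01, iters=2000, eps=1e-3):
    """Gradient descent on  0.5*||x - y||^2 + lam * sum sqrt(diff^2 + eps).

    The smoothed TV term pulls neighboring values together inside
    flat regions while leaving large jumps (edges) mostly intact --
    the behavior the TV penalty is used for in reconstruction.
    """
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)      # gradient of the smoothed |diff|
        grad_tv = np.zeros_like(x)
        grad_tv[:-1] -= g
        grad_tv[1:] += g
        x -= step * ((x - y) + lam * grad_tv)
    return x

# Noisy step signal: a flat region near 0, then a flat region near 2.
noisy = np.array([0.1, -0.1, 0.05, 1.9, 2.1, 1.95])
clean = tv_denoise_1d(noisy)
```

Within each half the values flatten while the large step between the halves survives, which is the edge-preserving property the paper's feature refinement step is designed to complement.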
Effects of adaptive refinement on the inverse EEG solution
NASA Astrophysics Data System (ADS)
Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.
1995-10-01
One of the fundamental problems in electroencephalography can be characterized by an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of electric and potential fields within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.
A feature refinement approach for statistical interior CT reconstruction.
Hu, Zhanli; Zhang, Yunwan; Liu, Jianbo; Ma, Jianhua; Zheng, Hairong; Liang, Dong
2016-07-21
Interior tomography is clinically desired to reduce the radiation dose rendered to patients. In this work, a new statistical interior tomography approach for computed tomography is proposed. The developed design focuses on taking into account the statistical nature of local projection data and recovering fine structures which are lost in the conventional total-variation (TV)-minimization reconstruction. The proposed method falls within the compressed sensing framework of TV minimization, which only assumes that the interior ROI is piecewise constant or polynomial and does not need any additional prior knowledge. To integrate the statistical distribution property of projection data, the objective function is built under the criteria of penalized weighted least-squares (PWLS-TV). In the implementation of the proposed method, the interior projection extrapolation based FBP reconstruction is first used as the initial guess to mitigate truncation artifacts and also provide an extended field-of-view. Moreover, an interior feature refinement step, as an important processing operation, is performed after each iteration of PWLS-TV to recover the desired structure information which is lost during the TV minimization. Here, a feature descriptor is specifically designed and employed to distinguish structure from noise and noise-like artifacts. A modified steepest descent algorithm is adopted to minimize the associated objective function. The proposed method is applied to both digital phantom and in vivo Micro-CT datasets, and compared to FBP, ART-TV and PWLS-TV. The reconstruction results demonstrate that the proposed method performs better than other conventional methods in suppressing noise, reducing truncated and streak artifacts, and preserving features. The proposed approach demonstrates its potential usefulness for feature preservation of interior tomography under truncated projection measurements. PMID:27362527
Locally Refined Splines Representation for Geospatial Big Data
NASA Astrophysics Data System (ADS)
Dokken, T.; Skytt, V.; Barrowclough, O.
2015-08-01
When viewed from a distance, large parts of the topography of landmasses and the bathymetry of the sea and ocean floor can be regarded as a smooth background with local features. Consequently a digital elevation model combining a compact smooth representation of the background with locally added features has the potential of providing a compact and accurate representation for topography and bathymetry. The recent introduction of Locally Refined B-Splines (LR B-splines) allows the granularity of spline representations to be locally adapted to the complexity of the smooth shape approximated. This allows few degrees of freedom to be used in areas with little variation, while adding extra degrees of freedom in areas in need of more modelling flexibility. In the EU FP7 Integrating Project IQmulus we exploit LR B-splines for approximating large point clouds representing bathymetry of the smooth sea and ocean floor. A drastic reduction in the bulk of the data representation is demonstrated, compared to the size of the input point clouds. The representation is very well suited for exploiting the power of GPUs for visualization, as the spline format is transferred to the GPU and the triangulation needed for the visualization is generated on the GPU according to the viewing parameters. The LR B-splines are interoperable with other elevation model representations such as LIDAR data, raster representations and triangulated irregular networks, as these can be used as input to the LR B-spline approximation algorithms. Output to these formats can be generated from the LR B-spline applications according to the resolution criteria required. The spline models are well suited for change detection, as new sensor data can efficiently be compared to the compact LR B-spline representation.
Refined Freeman-Durden for Harvest Detection using POLSAR data
NASA Astrophysics Data System (ADS)
Taghvakish, Sina
To keep up with an ever increasing human population, providing food is one of the main challenges of the current century. Harvest detection, as an input for decision making, is an important task for food management. Traditional harvest detection methods that rely on field observations need intensive labor, time and money. Therefore, since their introduction in the early 60s, optical remote sensing enhanced the process dramatically. But given weaknesses such as cloud cover and temporal resolution, alternative methods were always welcomed. Synthetic Aperture Radar (SAR), on the other hand, with its ability to penetrate cloud cover and the addition of full polarimetric observations, could be a good source of data for exploration in agricultural studies. SAR has been used successfully for harvest detection in rice paddy fields. However, harvest detection for other crops without a smooth underlying water surface is much more difficult. The objective of this project is to find a fully-automated algorithm to perform harvest detection using POLSAR image data for soybean and corn. The proposed method is a fusion of the Freeman-Durden and H/A/alpha decompositions. The Freeman-Durden algorithm is a decomposition based on a three-component physical scattering model. The H/A/alpha parameters, by contrast, are mathematical parameters used to define a three-dimensional space that may be subdivided with scattering mechanism interpretations. The Freeman-Durden model has a symmetric formulation for two of its three scattering mechanisms, and its surface scattering component is only applicable to Bragg surface scattering fields, which are not the dominant case in agricultural fields. H/A/alpha can contribute to both of these issues. Based on the incidence angle of the RADARSAT-2 images, our field-based refined Freeman-Durden model and a proposed roughness measure aim to discriminate harvested from senesced crops. We achieved 99.08 percent overall
Evaluation of the tool "Reg Refine" for user-guided deformable image registration.
Johnson, Perry B; Padgett, Kyle R; Chen, Kuan L; Dogan, Nesrin
2016-01-01
"Reg Refine" is a tool available in the MIM Maestro v6.4.5 platform (www.mim-software.com) that allows the user to actively participate in the deformable image registration process. The purpose of this work was to evaluate the efficacy of this tool and investigate strategies for how to apply it effectively. This was done by performing DIR on two publicly available ground-truth models, the Pixel-based Breathing Thorax Model (POPI) for lung, and the Deformable Image Registration Evaluation Project (DIREP) for head and neck. Image noise matched in both magnitude and texture to clinical CBCT scans was also added to each model to simulate the use case of CBCT-CT alignment. For lung, the results showed Reg Refine effective at improving registration accuracy when controlled by an expert user within the context of large lung deformation. CBCT noise was also shown to have no effect on DIR performance while using the MIM algorithm for this site. For head and neck, the results showed CBCT noise to have a large effect on the accuracy of registration, specifically for low-contrast structures such as the brain-stem and parotid glands. In these cases, the Reg Refine tool was able to improve the registration accuracy when controlled by an expert user. Several strategies for how to achieve these results have been outlined to assist other users and provide feedback for developers of similar tools. PMID:27167273
Refining image segmentation by polygon skeletonization
NASA Technical Reports Server (NTRS)
Clarke, Keith C.
1987-01-01
A skeletonization algorithm was encoded and applied to a test data set of land-use polygons taken from a USGS digital land use dataset at 1:250,000. The distance transform produced by this method was instrumental in the description of the shape, size, and level of generalization of the outlines of the polygons. A comparison of the topology of skeletons for forested wetlands and lakes indicated that some distinction based solely upon the shape properties of the areas is possible, and may be of use in an intelligent automated land cover classification system.
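The distance transform that drives this kind of skeletonization can be illustrated discretely with a multi-source breadth-first search: every background cell is a seed, and each interior cell gets its 4-connected distance to the nearest background cell. This is a generic grid-based sketch, not the paper's encoding, and the names below are invented for the example.

```python
from collections import deque

def distance_transform(grid):
    """4-connected distance from each interior (1) cell to the nearest
    background (0) cell, via multi-source BFS. Ridges of locally
    maximal distance are what a skeletonization traces."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    q = deque()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0:          # every background cell seeds the BFS
                dist[r][c] = 0
                q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

# 5x5 solid polygon surrounded by background in a 7x7 grid.
grid = [[0] * 7] + [[0] + [1] * 5 + [0] for _ in range(5)] + [[0] * 7]
d = distance_transform(grid)
```

For a compact polygon the distances peak at the center, while an elongated polygon produces a ridge line; comparing such ridge topologies is the basis for the shape discrimination the abstract describes.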
Trends in catalysis research to meet future refining needs
Absi-Halabi, M.; Stanislaus, A.; Qabazard, H.
1997-02-01
The main emphasis of petroleum refining during the '70s and early '80s was to maximize conversion of heavy oils to gasoline and middle distillate products. While this objective is still important, the current focus that began in the late '80s is to develop cleaner products. This is a result of strict environmental constraints to reduce emissions from both the products and refineries. Developing catalysts with improved activity, selectivity and stability for use in processes producing such environmentally acceptable fuels is the most economical and effective route for refiners. Novel technologies such as biocatalysis and catalytic membranes are examples of current successful laboratory-scale attempts to resolve anticipated future industry problems. Since catalysts play a key role in refining processes, it is important to examine the challenges facing catalysis research to meet future refining developments. The paper discusses the factors influencing refining, advancements in refining technology and catalysis, short-term future trends in refining catalysts research, and long-term trends in refining catalysts. 56 refs.
21. INTERIOR VIEW OF REFINING MILL, SHOWING LOCATIONS OF NOS. ...
21. INTERIOR VIEW OF REFINING MILL, SHOWING LOCATIONS OF NOS. 1, 2, AND 3 MILLS, LOOKING SOUTH. SOME OF THE MACHINERY IN THIS SECTION HAS BEEN REMOVED. - Clay Spur Bentonite Plant & Camp, Refining Mill, Clay Spur Siding on Burlington Northern Railroad, Osage, Weston County, WY
Solvent refining of lube oils the MP advantage
Jahnke, F.C.
1986-01-01
The current trend in lube oil solvent refining is towards the increased use of MP (N-methyl-2-pyrrolidone) as the solvent. This paper explains why by providing an economic analysis of using MP versus furfural refining. Included are a grassroots comparison; an analysis of converting an existing unit to MP; and a brief review of why MP provides an advantage.
Refining and End Use Study of Coal Liquids
1997-10-01
This report summarizes revisions to the design basis for the linear programming refining model that is being used in the Refining and End Use Study of Coal Liquids. This revision primarily reflects the addition of data for the upgrading of direct coal liquids.
Development, refinement, and testing of a short term solar flare prediction algorithm
NASA Technical Reports Server (NTRS)
Smith, Jesse B., Jr.
1993-01-01
During the period covered by this report, time and effort were devoted primarily to calibration and analysis of selected data sets, in support of the tasks and goals set forth in the two-year research grant proposal. The heliographic limits of 30 degrees from central meridian were retained. As previously reported, all analyses are interactive and are performed by the Principal Investigator. The analysis time available to the Principal Investigator during this reporting period was limited, partly due to illness and partly due to other uncontrollable factors. The calibration technique (developed by MSFC solar scientists) incorporates sets of constants that vary with the wavelength of the observation data set. One input constant is then varied interactively to correct for observing conditions and to yield a maximum magnetic field strength (in the calibrated data) based on a separate analysis. There is some uncertainty in the methodology and in the selection of variables needed to give the most self-consistent results across variable maximum field strengths and variable observing/atmospheric conditions. Several data sets were analyzed using differing constant sets and separate analyses to differing maximum field strengths, toward standardizing the methodology and technique for the most self-consistent results over the large number of cases. It may be necessary to recalibrate some of the analyses, but the sc analyses are retained on the optical disks and can still be used with recalibration where necessary; only the extracted parameters will be changed.
Refining primary lead by granulation-leaching-electrowinning
NASA Astrophysics Data System (ADS)
Ojebuoboh, F.; Wang, S.; Maccagni, M.
2003-04-01
This article describes the development of a new process in which lead bullion obtained from smelting concentrates is refined by leaching-electrowinning. In the last half century, the challenge to treat and refine lead in order to minimize emissions of lead and lead compounds has intensified. Within the primary lead industry, the treatment aspect has transformed from the sinter-blast furnace model to direct smelting, creating gains in hygiene, environmental control, and efficiency. The refining aspect has remained based on kettle refining or, to a lesser extent, on Betts electrolytic refining. In the mid-1990s, Asarco investigated a concept based on granulating the lead bullion from the blast furnace. The granular material was fed into the Engitec Fluobor process. This work resulted in the operation of a 45-kg/d pilot plant that could produce lead sheets of 99.9% purity.
Image denoising filter based on patch-based difference refinement
NASA Astrophysics Data System (ADS)
Park, Sang Wook; Kang, Moon Gi
2012-06-01
In the denoising literature, much research has built on the nonlocal means (NLM) filter, with many variations and improvements regarding the weight function and parameter optimization. Here, an NLM filter with patch-based difference (PBD) refinement is presented. PBD refinement, which is the weighted average of the PBD values, is performed with respect to the difference images of all the locations in a refinement kernel. With refined and denoised PBD values, a pattern-adaptive smoothing threshold and noise-suppressed NLM filter weights are calculated. Owing to the refinement of the PBD values, the patterns are divided into flat regions and texture regions by comparing the sorted values in the PBD domain to a threshold value that includes the noise standard deviation. Two different smoothing thresholds are then utilized for denoising each region, and the NLM filter is applied finally. Experimental results of the proposed scheme are shown in comparison with several state-of-the-art NLM-based denoising methods.
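The nonlocal-means core that the PBD refinement builds on is compact enough to sketch. The following one-dimensional NLM filter is a generic textbook version, not the authors' refined filter; the patch radius and filtering parameter h are arbitrary illustrative choices.

```python
import math

def nlm_1d(signal, patch_radius=1, h=0.5):
    """Minimal 1-D nonlocal means: each sample is replaced by a weighted
    average of all samples, weighted by patch similarity exp(-d2 / h^2)."""
    n = len(signal)

    def patch(i):
        # Clamp indices at the borders so every sample has a full patch.
        return [signal[min(max(i + k, 0), n - 1)]
                for k in range(-patch_radius, patch_radius + 1)]

    out = []
    for i in range(n):
        pi = patch(i)
        weights, total = [], 0.0
        for j in range(n):
            pj = patch(j)
            d2 = sum((a - b) ** 2 for a, b in zip(pi, pj))
            w = math.exp(-d2 / h ** 2)
            weights.append(w)
            total += w
        out.append(sum(w * s for w, s in zip(weights, signal)) / total)
    return out

signal = [1.0, 1.2, 0.8, 1.1, 0.9, 1.0]
denoised = nlm_1d(signal)
```

Each output sample is a similarity-weighted average over the whole signal, which is what distinguishes NLM from purely local smoothing; the PBD refinement in the paper then denoises the patch differences that feed these weights.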
Rietveld refinement study of PLZT ceramics
Kumar, Rakesh; Bavbande, D. V.; Bafna, V. H.; Mohan, D.; Kothiyal, G. P.; Mishra, R.
2013-02-05
PLZT ceramics of composition Pb{sub 0.93}La{sub 0.07}(Zr{sub 0.60}Ti{sub 0.40})O{sub 3} were prepared by a solid-state synthesis route with milling times of 6 h and 24 h; the 6-h and 24-h milled samples are denoted PLZT-6 and PLZT-24, respectively. X-ray diffraction (XRD) patterns were recorded at room temperature and analyzed by the Rietveld refinement method. Phase identification shows that all peaks observed in the PLZT-6 and PLZT-24 ceramics could be indexed to the P4mm space group with tetragonal symmetry. The unit cell parameters are a=b=4.0781(5)A and c=4.0938(7)A for PLZT-6, and a=b=4.0679(4)A and c=4.1010(5)A for PLZT-24. The axial ratio c/a and unit cell volume of PLZT-6 are 1.0038 and 68.09(2)A{sup 3}, respectively. For PLZT-24, the axial ratio c/a is 1.0080, slightly larger than that of PLZT-6, whereas the unit cell volume decreases to 67.88(1)A{sup 3}. The average crystallite size was estimated using Scherrer's formula. Dielectric properties were obtained by measuring the capacitance and tan δ loss using a Stanford LCR meter.
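The Scherrer estimate mentioned in the abstract is a one-line calculation. The sketch below uses illustrative inputs (shape factor K = 0.9, Cu Kα wavelength 1.5406 Å, and a hypothetical peak width), not the paper's measured data.

```python
import math

def scherrer_size(wavelength_A, fwhm_deg, two_theta_deg, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)), in angstroms.

    wavelength_A : X-ray wavelength in angstroms
    fwhm_deg     : peak full width at half maximum, in degrees 2-theta
    two_theta_deg: peak position in degrees 2-theta
    """
    beta = math.radians(fwhm_deg)            # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle = half of 2-theta
    return K * wavelength_A / (beta * math.cos(theta))

# Example: Cu K-alpha radiation, a 0.2-degree-wide peak at 2theta = 38.5 deg
size = scherrer_size(1.5406, 0.2, 38.5)
```

Note that β must be the FWHM in radians and θ is half the 2θ peak position; mixing degrees and radians is the usual source of wildly wrong size estimates.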
Mapped Landmark Algorithm for Precision Landing
NASA Technical Reports Server (NTRS)
Johnson, Andrew; Ansar, Adnan; Matthies, Larry
2007-01-01
A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel matches. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching (see figure), which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
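The spatial-domain matching step (Mapped Landmark Refinement) reduces to normalized cross-correlation of a small template against candidate windows. A minimal one-dimensional sketch, with invented data rather than anything from the flight code:

```python
import math

def ncc(template, window):
    """Normalized cross-correlation of two equal-sized patches (flat lists)."""
    n = len(template)
    mt = sum(template) / n
    mw = sum(window) / n
    num = sum((t - mt) * (w - mw) for t, w in zip(template, window))
    den = math.sqrt(sum((t - mt) ** 2 for t in template) *
                    sum((w - mw) ** 2 for w in window))
    return num / den if den else 0.0

def best_match(image, template, tw):
    """Slide a tw-wide template over a 1-D image strip, return best offset."""
    best, best_score = 0, -2.0
    for off in range(len(image) - tw + 1):
        score = ncc(template, image[off:off + tw])
        if score > best_score:
            best, best_score = off, score
    return best

# A landmark at offset 3 is recovered despite a gain change in the template.
image = [0, 1, 0, 2, 9, 4, 0, 1, 0]
template = [4, 18, 8]   # exactly 2x the intensities of image[3:6]
offset = best_match(image, template, 3)
```

Because NCC subtracts the means and divides by the standard deviations, the match score is invariant to gain and offset changes, which is why the template is recovered here despite the brightness difference.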
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.
2000-01-01
Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
Effective soft-decision demosaicking using directional filtering and embedded artifact refinement
NASA Astrophysics Data System (ADS)
Huang, Wen-Tsung; Chen, Wen-Jan; Tai, Shen-Chuan
2009-04-01
Demosaicking is an interpolation process that transforms a color filter array (CFA) image into a full-color image in a single-sensor imaging pipeline. In all demosaicking techniques, the interpolation of the green components plays a central role in dictating the visual quality of reconstructed images because green light is of maximum sensitivity in the human visual system. Guided by this point, we propose a new soft-decision demosaicking algorithm using directional filtering and embedded artifact refinement. The novelty of this approach is twofold. First, we lift the constraint of the Bayer CFA that results in the absence of diagonal neighboring green color values for directionally recovering diagonal edges. The developed directional interpolation method is fairly robust in dealing with the four edge features, namely, vertical, horizontal, 45-deg diagonal, and 135-deg diagonal. In addition, the proposed embedded refinement scheme provides an efficient way for soft-decision-based algorithms to achieve improved results with fewer computations. We have compared this new approach to six state-of-the-art methods; it preserves more edge details and handles fine textures well, without requiring a high computational cost.
Stability of Bareiss algorithm
NASA Astrophysics Data System (ADS)
Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.
1991-12-01
In this paper, we present a numerical stability analysis of the Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare the Bareiss algorithm with the Levinson algorithm and conclude that the former has superior numerical properties.
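For context, the Levinson algorithm that the paper compares against can be sketched in a few lines for a symmetric positive definite Toeplitz system T x = y; this is a textbook O(n²) version without the safeguards a stability analysis would examine.

```python
def levinson_solve(r, y):
    """Solve T x = y where T[i][j] = r[abs(i - j)] is symmetric Toeplitz.

    Classic Levinson recursion: maintain the forward vector f with
    T_m f = e_1 (by symmetry, the backward vector is f reversed) and
    grow the solution x one row at a time.
    """
    n = len(y)
    f = [1.0 / r[0]]        # forward vector for the 1x1 system
    x = [y[0] / r[0]]       # solution of the 1x1 system
    for m in range(1, n):
        # Reflection coefficient: last row of T_{m+1} against [f, 0].
        eps_f = sum(r[m - i] * f[i] for i in range(m))
        denom = 1.0 - eps_f * eps_f
        b_ext = [0.0] + f[::-1]            # backward vector, extended
        f = [(fi - eps_f * bi) / denom
             for fi, bi in zip(f + [0.0], b_ext)]
        # Grow the solution and correct its last-row residual.
        eps_x = sum(r[m - i] * x[i] for i in range(m))
        b = f[::-1]
        x = [xi + (y[m] - eps_x) * bi for xi, bi in zip(x + [0.0], b)]
    return x

# T = [[4,2,1],[2,4,2],[1,2,4]], y = [1,2,3]  ->  x = [0, 1/6, 2/3]
x = levinson_solve([4.0, 2.0, 1.0], [1.0, 2.0, 3.0])
```

The Bareiss algorithm solves the same system by a different elimination scheme; the paper's conclusion is that Bareiss has the better numerical behavior of the two.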
Mesh refinement in finite element analysis by minimization of the stiffness matrix trace
NASA Technical Reports Server (NTRS)
Kittur, Madan G.; Huston, Ronald L.
1989-01-01
Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the generated mesh is appropriate for the problem at hand. Since the accuracy of the finite element results is mesh dependent, mesh selection forms a very important step in the analysis. Indeed, in accurate analyses, meshes need to be refined or rezoned until the solution converges to a value such that the error is below a predetermined tolerance. A posteriori methods use error indicators, developed using interpolation and approximation theory, to guide mesh refinement. Some use other criteria, such as strain energy density variation and stress contours, to obtain near-optimal meshes. Although these methods are adaptive, they are expensive. By contrast, previously available a priori methods use geometric parameters, such as element aspect ratio, and are therefore not adaptive by nature. Here, an adaptive a priori method is developed. The criterion is that minimizing the trace of the stiffness matrix with respect to the nodal coordinates minimizes the potential energy and, as a consequence, provides a good starting mesh. In a few examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to development of computer algorithms. When the procedure is used in conjunction with a posteriori methods of grid refinement, it is shown that fewer refinement iterations and fewer degrees of freedom are required for convergence than when the procedure is not used. The mesh obtained is shown to have uniform distribution of stiffness among the nodes and elements, which, as a consequence, leads to uniform error distribution. Thus the mesh obtained meets the optimality criterion of uniform error distribution.
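The trace-minimization criterion can be seen on a toy problem: for a two-element bar with constant EA, each element of length L contributes 2EA/L to the stiffness trace, so scanning the interior node position shows the trace is minimized by the uniform mesh. A hedged illustration, not the paper's general formulation:

```python
def stiffness_trace(node, EA=1.0, length=1.0):
    """Trace of the assembled stiffness matrix for a 2-element bar.

    Each bar element of length L has stiffness matrix (EA/L)[[1,-1],[-1,1]],
    contributing 2*EA/L to the trace, so trace = 2EA/L1 + 2EA/L2.
    """
    L1, L2 = node, length - node
    return 2 * EA / L1 + 2 * EA / L2

# Scan interior node positions and pick the trace-minimizing mesh.
positions = [i / 100.0 for i in range(1, 100)]
best = min(positions, key=stiffness_trace)   # uniform mesh: node at 0.5
```

With spatially varying properties the same criterion redistributes nodes nonuniformly; the paper's claim is that this trace-minimizing mesh is a good starting point for subsequent a posteriori refinement.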
Pimentel, Samuel D.; Kelz, Rachel R.; Silber, Jeffrey H.; Rosenbaum, Paul R.
2015-01-01
Every newly trained surgeon performs her first unsupervised operation. How do the health outcomes of her patients compare with the patients of experienced surgeons? Using data from 498 hospitals, we compare 1252 pairs, each consisting of a new surgeon and an experienced surgeon working at the same hospital. We introduce a new form of matching that matches patients of each new surgeon to patients of an otherwise similar experienced surgeon at the same hospital, perfectly balancing 176 surgical procedures and closely balancing a total of 2.9 million categories of patients; additionally, the individual patient pairs are as close as possible. A new goal for matching is introduced, called “refined covariate balance,” in which a sequence of nested, ever more refined, nominal covariates is balanced as closely as possible, emphasizing the first or coarsest covariate in that sequence. A new algorithm for matching is proposed and the main new results prove that the algorithm finds the closest match in terms of the total within-pair covariate distances among all matches that achieve refined covariate balance. Unlike previous approaches to forcing balance on covariates, the new algorithm creates multiple paths to a match in a network, where paths that introduce imbalances are penalized and hence avoided to the extent possible. The algorithm exploits a sparse network to quickly optimize a match that is about two orders of magnitude larger than is typical in statistical matching problems, thereby permitting much more extensive use of fine and near-fine balance constraints. The match was constructed in a few minutes using a network optimization algorithm implemented in R. An R package called rcbalance implementing the method is available from CRAN. PMID:26273117
DT-REFinD: Diffusion Tensor Registration With Exact Finite-Strain Differential
Vercauteren, Tom; Fillard, Pierre; Peyrat, Jean-Marc; Pennec, Xavier; Golland, Polina; Ayache, Nicholas; Clatz, Olivier
2014-01-01
In this paper, we propose the DT-REFinD algorithm for the diffeomorphic nonlinear registration of diffusion tensor images. Unlike scalar images, deforming tensor images requires choosing both a reorientation strategy and an interpolation scheme. Current diffusion tensor registration algorithms that use full tensor information face difficulties in computing the differential of the tensor reorientation strategy and consequently, these methods often approximate the gradient of the objective function. In the case of the finite-strain (FS) reorientation strategy, we borrow results from the pose estimation literature in computer vision to derive an analytical gradient of the registration objective function. By utilizing the closed-form gradient and the velocity field representation of one parameter subgroups of diffeomorphisms, the resulting registration algorithm is diffeomorphic and fast. We contrast the algorithm with a traditional FS alternative that ignores the reorientation in the gradient computation. We show that the exact gradient leads to significantly better registration at the cost of computation time. Independently of the choice of Euclidean or Log-Euclidean interpolation and sum of squared differences dissimilarity measure, the exact gradient achieves better alignment over an entire spectrum of deformation penalties. Alignment quality is assessed with a battery of metrics including tensor overlap, fractional anisotropy, inverse consistency and closeness to synthetic warps. The improvements persist even when a different reorientation scheme, preservation of principal directions, is used to apply the final deformations. PMID:19556193
REFMAC5 for the refinement of macromolecular crystal structures
Murshudov, Garib N.; Skubák, Pavol; Lebedev, Andrey A.; Pannu, Navraj S.; Steiner, Roberto A.; Nicholls, Robert A.; Winn, Martyn D.; Long, Fei; Vagin, Alexei A.
2011-01-01
This paper describes various components of the macromolecular crystallographic refinement program REFMAC5, which is distributed as part of the CCP4 suite. REFMAC5 utilizes different likelihood functions depending on the diffraction data employed (amplitudes or intensities), the presence of twinning and the availability of SAD/SIRAS experimental diffraction data. To ensure chemical and structural integrity of the refined model, REFMAC5 offers several classes of restraints and choices of model parameterization. Reliable models at resolutions at least as low as 4 Å can be achieved thanks to low-resolution refinement tools such as secondary-structure restraints, restraints to known homologous structures, automatic global and local NCS restraints, ‘jelly-body’ restraints and the use of novel long-range restraints on atomic displacement parameters (ADPs) based on the Kullback–Leibler divergence. REFMAC5 additionally offers TLS parameterization and, when high-resolution data are available, fast refinement of anisotropic ADPs. Refinement in the presence of twinning is performed in a fully automated fashion. REFMAC5 is a flexible and highly optimized refinement package that is ideally suited for refinement across the entire resolution spectrum encountered in macromolecular crystallography. PMID:21460454
Querying genomic databases: refining the connectivity map.
Segal, Mark R; Xiong, Hao; Bengtsson, Henrik; Bourgon, Richard; Gentleman, Robert
2012-01-01
constitutes an ordered list. These involve metrics proposed for analyzing partially ranked data, which are of interest in their own right and not widely used. Secondly, we advance an alternate inferential approach based on generating empirical null distributions that exploit the scope, and capture the dependencies, embodied by the database. Using these refinements we undertake a comprehensive re-evaluation of Connectivity Map findings that, in general terms, reveal that accommodating ordered queries is less critical than the mode of inference. PMID:22499690
Refined numerical solution of the transonic flow past a wedge
NASA Technical Reports Server (NTRS)
Liang, S.-M.; Fung, K.-Y.
1985-01-01
A numerical procedure combining the ideas of solving a modified difference equation and of adaptive mesh refinement is introduced. The numerical solution on a fixed grid is improved by using better approximations of the truncation error computed from local subdomain grid refinements. This technique is used to obtain refined solutions of steady, inviscid, transonic flow past a wedge. The effects of truncation error on the pressure distribution, wave drag, sonic line, and shock position are investigated. By comparing the pressure drag on the wedge and wave drag due to the shocks, a supersonic-to-supersonic shock originating from the wedge shoulder is confirmed.
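The idea of correcting a fixed-grid solution with a truncation error estimate from local refinement can be shown on a scalar derivative. This Richardson-style sketch is only an illustration of the principle, not the authors' transonic-flow procedure:

```python
import math

def central_diff(f, x, h):
    """Second-order central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def refined_estimate(f, x, h):
    """Use a locally refined (h/2) evaluation to estimate the leading
    truncation error of the coarse estimate and subtract it off
    (Richardson extrapolation for an O(h^2) scheme)."""
    coarse = central_diff(f, x, h)
    fine = central_diff(f, x, h / 2)
    # For an O(h^2) scheme: coarse = f' + C h^2, fine = f' + C h^2 / 4.
    return (4 * fine - coarse) / 3

exact = math.cos(1.0)
coarse_err = abs(central_diff(math.sin, 1.0, 0.1) - exact)
refined_err = abs(refined_estimate(math.sin, 1.0, 0.1) - exact)
```

The same mechanism, comparing coarse and locally refined discretizations to estimate and subtract the leading truncation error, is what the abstract applies to the transonic flow equations.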
Library of Continuation Algorithms
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Brown, David A.
New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied, and a methodology is given for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
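The continuation idea can be illustrated on a scalar problem with a Newton homotopy H(x, λ) = F(x) − (1 − λ)F(x₀): at λ = 0 the starting guess is a root, at λ = 1 the original problem is recovered, and each λ step is corrected by a few Newton iterations warm-started from the previous step. A toy sketch; the homotopy choice, step counts, and test function are illustrative, not the thesis' flow-solver algorithms:

```python
import math

def continuation_solve(F, dF, x0, steps=20, newton_iters=5):
    """Solve F(x) = 0 by the Newton homotopy H(x, lam) = F(x) - (1-lam)*F(x0).

    At lam = 0 the root is x0; at lam = 1 the root solves F(x) = 0. Each
    lambda step uses the previous root as the predictor and corrects it
    with a few Newton iterations.
    """
    Fx0 = F(x0)
    x = x0
    for k in range(1, steps + 1):
        lam = k / steps
        target = (1.0 - lam) * Fx0
        for _ in range(newton_iters):
            x -= (F(x) - target) / dF(x)
    return x

# Find the root of F(x) = cos(x) - x, starting far from it.
root = continuation_solve(lambda x: math.cos(x) - x,
                          lambda x: -math.sin(x) - 1.0,
                          x0=3.0)
```

A monolithic algorithm in the thesis' sense would fold the predictor and corrector into a single update per step rather than running a fixed corrector loop as done here.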
Interactive multigrid refinement for deformable image registration.
Zhou, Wu; Xie, Yaoqin
2013-01-01
Deformable image registration is the spatial mapping of corresponding locations between images and can be used for important applications in radiotherapy. Although numerous methods have attempted to register deformable medical images automatically, such as salient-feature-based registration (SFBR), free-form deformation (FFD), and demons, no automatic registration method is perfect, and no generic automatic algorithm has been shown to work properly for clinical applications, because the deformation field is often complex and cannot be estimated well by current automatic deformable registration methods. This paper focuses on how to revise registration results interactively for deformable image registration. We can manually revise the transformed image locally in a hierarchical multigrid manner to make the transformed image register well with the reference image. The proposed method is based on multilevel B-splines to interactively revise the deformable transformation in the overlapping region between the reference image and the transformed image. The resulting deformation controls the shape of the transformed image and produces a good registration or improves the registration results of other registration methods. Experimental results on clinical medical images for adaptive radiotherapy demonstrated the effectiveness of the proposed method. PMID:24232828
Refining Pathways: A Model Comparison Approach
Moffa, Giusi; Erdmann, Gerrit; Voloshanenko, Oksana; Hundsrucker, Christian; Sadeh, Mohammad J.; Boutros, Michael; Spang, Rainer
2016-01-01
Cellular signalling pathways consolidate multiple molecular interactions into working models of signal propagation, amplification, and modulation. They are described and visualized as networks. Adjusting network topologies to experimental data is a key goal of systems biology. While network reconstruction algorithms like nested effects models are well established tools of computational biology, their data requirements can be prohibitive for their practical use. In this paper we suggest focussing on well defined aspects of a pathway and develop the computational tools to do so. We adapt the framework of nested effect models to focus on a specific aspect of activated Wnt signalling in HCT116 colon cancer cells: Does the activation of Wnt target genes depend on the secretion of Wnt ligands, or do mutations in the signalling molecule β-catenin make this activation independent of them? We framed this question as two competing classes of models: models that depend on Wnt ligand secretion versus those that do not. The model classes translate into restrictions of the pathways in the network topology. Wnt-dependent models are more flexible than Wnt-independent models. Bayes factors are the standard Bayesian tool to compare different models fairly on the data evidence. In our analysis, the Bayes factors depend on the number of potential Wnt signalling target genes included in the models. Stability analysis with respect to this number showed that the data strongly favour Wnt-ligand-dependent models for all realistic numbers of target genes. PMID:27248690
Geist, G.A.; Howell, G.W.; Watkins, D.S.
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
GRAIL Refinements to Lunar Seismic Structure
NASA Technical Reports Server (NTRS)
Weber, Renee; Gernero, Edward; Lin, Pei-Ying; Thorne, Michael; Schmerr, Nicholas; Han, Shin-Chan
2012-01-01
such as moonquake location, timing errors, and potential seismic heterogeneities. In addition, the modeled velocities may vary with a 1-to-1 trade-off with the modeled reflector depth. The GRAIL (Gravity Recovery and Interior Laboratory) mission, launched in Sept. 2011, placed two nearly identical spacecraft in lunar orbit. The two satellites make extremely high-resolution measurements of the lunar gravity field, which can be used to constrain the interior structure of the Moon using a "crust to core" approach. GRAIL's constraints on crustal thickness, mantle structure, core radius and stratification, and core state (solid vs. molten) will complement seismic investigations in several ways. Here we present a progress report on our efforts to advance our knowledge of the Moon's internal structure using joint gravity and seismic analyses. We will focus on methodology, including 1) refinements to the seismic core constraint accomplished through array processing of Apollo seismic data, made by applying a set of travel time corrections based on GRAIL structure estimates local to each Apollo seismic station; 2) modeling deep lunar structure through synthetic seismograms, to test whether the seismic core model can reproduce the core reflections observed in the Apollo seismograms; and 3) a joint seismic and gravity inversion in which we attempt to fit a family of seismic structure models with the gravity constraints from GRAIL, resulting in maps of seismic velocities and densities that vary from a nominal model both laterally and with depth.
Rack gasoline and refining margins - wanted: a summer romance
Not Available
1988-04-13
For the first time since late 1987, apparent refining margins on the US benchmark crude oil (based on spot purchase prices) are virtually zero. This felicitous bit of news comes loaded with possibilities of positive (maybe even good) margins in coming months, if the differential between crude buying prices and the value of the refined barrel continues to improve. What refiners in the US market are watching most closely right now are motorists. This issue also contains the following: (1) ED refining netback data for the US Gulf and Western Coasts, Rotterdam, and Singapore, prices for early April 1988; and (2) ED fuel price/tax series for countries of the Western Hemisphere, April 1988 edition. 5 figures, 5 tables.
Finite element mesh refinement criteria for stress analysis
NASA Technical Reports Server (NTRS)
Kittur, Madan G.; Huston, Ronald L.
1990-01-01
This paper discusses procedures for finite-element mesh selection and refinement. The objective is to improve accuracy. The procedures are based on (1) the minimization of the stiffness matrix trace (optimizing node location); (2) the use of h-version refinement (rezoning, element size reduction, and increasing the number of elements); and (3) the use of p-version refinement (increasing the order of polynomial approximation of the elements). A step-by-step procedure of mesh selection, improvement, and refinement is presented. The criteria for 'goodness' of a mesh are based on strain energy, displacement, and stress values at selected critical points of a structure. An analysis of an aircraft lug problem is presented as an example.
ENVIRONMENTAL ASSESSMENT REPORT: SOLVENT REFINED COAL (SRC) SYSTEMS
The report is an integrated evaluation of air emissions, water effluents, solid wastes, toxic substances, control/disposal alternatives, environmental regulatory requirements, and environmental effects associated with solvent refined coal (SRC) systems. It considers the SRC-I(sol...
QM/MM X-ray Refinement of Zinc Metalloenzymes
Li, Xue; Hayik, Seth A.; Merz, Kenneth M.
2010-01-01
Zinc metalloenzymes play an important role in biology. However, due to the limitation of the molecular force field energy restraints used in X-ray refinement at medium or low resolutions, the precise geometry of the zinc coordination environment can be difficult to distinguish from ambiguous electron density maps. Due to the difficulties involved in defining accurate force fields for metal ions, the QM/MM (Quantum-Mechanical/Molecular-Mechanical) method provides an attractive and more general alternative for the study and refinement of metalloprotein active sites. Herein we present three examples indicating that QM/MM-based refinement yields a superior description of the crystal structure based on R and Rfree values and on inspection of the zinc coordination environment. It is concluded that QM/MM refinement is a useful general tool for the improvement of the metal coordination sphere in metalloenzyme active sites. PMID:20116858
Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.; Van Meter, James R.
2005-01-01
A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
Fate of oxidized triglycerides during refining of seed oils.
Gomes, Tommaso; Caponio, Francesco; Delcuratolo, Debora
2003-07-30
The evolution of oxidized triglycerides (ox-TG) during industrial refining was studied in soybean, sunflower, peanut, and corn oils. The analytical techniques used were silica gel column chromatography and high-performance size exclusion chromatography. The decrease in ox-TG during refining (42.3% on average) was accompanied by an increase in triglyceride oligopolymers (TGP). The inverse correlation between the two lipid groups suggests that the decrease in ox-TG during refining was due in part to the occurrence of polymerization reactions. An inverse correlation was also found between the percentage sum of ox-TG + TGP and percent TGP, indicating that a part of the ox-TG also underwent degradation or transformation reactions. On average, almost 58% of the ox-TG remained unchanged during refining and, of the rest, about half was involved in polymerization reactions and half in degradation or transformation reactions. PMID:14705891
Parallel Clustering Algorithms for Structured AMR
Gunney, B T; Wissink, A M; Hysom, D A
2005-10-26
We compare several different parallel implementation approaches for the clustering operations performed during adaptive gridding operations in patch-based structured adaptive mesh refinement (SAMR) applications. Specifically, we target the clustering algorithm of Berger and Rigoutsos (BR91), which is commonly used in many SAMR applications. The baseline for comparison is a simplistic parallel extension of the original algorithm that works well for up to O(10^2) processors. Our goal is a clustering algorithm for machines of up to O(10^5) processors, such as the 64K-processor IBM BlueGene/Light system. We first present an algorithm that avoids the unneeded communications of the simplistic approach to improve the clustering speed by up to an order of magnitude. We then present a new task-parallel implementation to further reduce communication wait time, adding another order of magnitude of improvement. The new algorithms also exhibit more favorable scaling behavior for our test problems. Performance is evaluated on a number of large scale parallel computer systems, including a 16K-processor BlueGene/Light system.
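The Berger-Rigoutsos clustering targeted above cuts patches at zeros of "signatures", the per-row and per-column counts of flagged cells, and accepts a box once it is sufficiently filled. A minimal serial 2-D sketch under assumptions: the efficiency threshold, the bisection fallback, and all names are illustrative, and the inflection-point cut of the full BR91 algorithm is omitted.

```python
def signatures(flags):
    """Per-row and per-column counts of flagged cells in a 2-D 0/1 grid."""
    nx, ny = len(flags), len(flags[0])
    sx = [sum(flags[i][j] for j in range(ny)) for i in range(nx)]
    sy = [sum(flags[i][j] for i in range(nx)) for j in range(ny)]
    return sx, sy

def cluster(flags, x0=0, y0=0, min_eff=0.7):
    """Cover the flagged cells with boxes (i0, i1, j0, j1), splitting at
    zeros of the signatures and bisecting when no hole exists."""
    nx, ny = len(flags), len(flags[0])
    total = sum(sum(row) for row in flags)
    if total == 0:
        return []
    if total / (nx * ny) >= min_eff:        # box filled well enough: accept it
        return [(x0, x0 + nx, y0, y0 + ny)]
    sx, sy = signatures(flags)
    for i in range(1, nx - 1):              # cut at an empty row, if any
        if sx[i] == 0:
            return (cluster(flags[:i], x0, y0, min_eff)
                    + cluster(flags[i + 1:], x0 + i + 1, y0, min_eff))
    for j in range(1, ny - 1):              # cut at an empty column, if any
        if sy[j] == 0:
            return (cluster([row[:j] for row in flags], x0, y0, min_eff)
                    + cluster([row[j + 1:] for row in flags],
                              x0, y0 + j + 1, min_eff))
    if nx >= ny and nx > 1:                 # no hole: bisect the longer axis
        m = nx // 2
        return (cluster(flags[:m], x0, y0, min_eff)
                + cluster(flags[m:], x0 + m, y0, min_eff))
    if ny > 1:
        m = ny // 2
        return (cluster([row[:m] for row in flags], x0, y0, min_eff)
                + cluster([row[m:] for row in flags], x0, y0 + m, min_eff))
    return [(x0, x0 + nx, y0, y0 + ny)]
```

The parallel variants compared in the abstract distribute essentially this recursion, which is why reducing its communication cost matters at large processor counts.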
VIEW OF RBC (REFINED BICARBONATE) BUILDING LOOKING NORTHEAST. DEMOLITION IN ...
VIEW OF RBC (REFINED BICARBONATE) BUILDING LOOKING NORTHEAST. DEMOLITION IN PROGRESS. "ARM & HAMMER BAKING SODA WAS MADE HERE FOR OVER 50 YEARS AND THEN SHIPPED ACROSS THE STREET TO THE CHURCH & DWIGHT PLANT ON WILLIS AVE. (ON THE RIGHT IN THIS PHOTO). LAYING ON THE GROUND IN FRONT OF C&D BUILDING IS PART OF AN RBC DRYING TOWER. - Solvay Process Company, Refined Bicarbonate Building, Between Willis & Milton Avenues, Solvay, Onondaga County, NY
The refinement of dose assessment of the THOR BNCT beam.
Lin, Yi-Chun; Liu, Yuan-Hao; Jiang, Shiang-Huei; Liu, Hong-Ming; Chou, Wen-Tsae
2011-12-01
A refined dose assessment method is now used at the THOR BNCT facility; it takes into account more detailed corrections, carefully handled calibration factors, and the spectrum- and kerma-weighted k(t) value. The refined method solved the previous problem of a negative derived neutron dose at deeper positions in the phantom. With the improved dose assessment, the calculated and measured gamma-ray dose rates match perfectly in a 15×15×15 cm³ PMMA phantom. PMID:21377883
Refiners match Rvp reduction measures to operating problems
Musumeci, J.
1997-02-03
Reductions in gasoline vapor pressure specifications have created operational challenges for many refiners. Removal of butanes from gasoline blendstocks has become more critical to meeting product vapor pressure requirements. Some refiners have made major unit modifications, such as adding alkylation capacity for butane conversion. Others have debottlenecked existing fractionation equipment, thus increasing butane removal. Three case studies illustrate vapor pressure reduction solutions: changing unit operating targets, maintaining existing equipment, and debottlenecking minor equipment.
Adaptive mesh refinement and adjoint methods in geophysics simulations
NASA Astrophysics Data System (ADS)
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times
Ultrasonic Sensor to Characterize Wood Pulp During Refining
Greenwood, Margaret S.; Panetta, Paul D.; Bond, Leonard J.; McCaw, M. W.
2006-12-22
A novel sensor concept has been developed for measuring the consistency, the degree of refining, and the water retention value (WRV) of wood pulp during the refining process. The measurement time is less than 5 minutes, and the sensor can operate in a slip-stream of the process line or as an at-line instrument. The consistency is obtained from a calibration in which the attenuation of ultrasound through the pulp suspension is measured as a function of the solids weight percentage. The degree of refining and the WRV are determined from settling measurements. The settling of a pulp suspension (consistency less than 0.5 wt%) is observed after the mixer that keeps the pulp uniformly distributed is turned off. The attenuation of ultrasound as a function of time is recorded, and these data show a peak, after a certain delay, defined as the "peak time." The degree of refining increases with the peak time, as demonstrated by measuring pulp samples with different degrees of refining. The WRV can be determined using the relative peak time, defined as the ratio T2/T1, where T1 is an initial value of the peak time and T2 is the value after additional refining. This method offers an additional WRV test for the industry, because the freeness test is not specific to the WRV.
Effect of refining on quality and composition of sunflower oil.
Pal, U S; Patra, R K; Sahoo, N R; Bakhara, C K; Panda, M K
2015-07-01
An experimental oil-refining unit has been developed and tested for sunflower oil. Crude pressed sunflower oil obtained from a local oil mill was refined chemically by degumming, neutralization, bleaching and dewaxing. The quality and composition of the crude and refined oils were analysed and compared. From crude to refined oil, reductions were observed in phosphorus content from 6.15 ppm to 0 ppm, FFA content from 1.1 to 0.24 % (as oleic acid), peroxide value from 22.5 to 7.9 meq/kg, wax content from 1,420 to 200 ppm, and colour absorbance from 0.149 to 0.079 (measured spectrophotometrically at 460 nm). Refining did not have a significant effect on fatty acid composition, as judged from the percentage peak areas in the GC-MS chromatogram. The percentage of unsaturated fatty acids in both oils was about 95 %, comprising 9-octadecenoic acid (oleic acid) and 11,14-eicosadienoic acid (an elongated form of linoleic acid). These results will be useful to small entrepreneurs and farmers refining sunflower oil for better marketability. PMID:26139933
Ultrasonic sensor to characterize wood pulp during refining.
Greenwood, M S; Panetta, P D; Bond, L J; McCaw, M W
2006-12-22
A novel sensor concept has been developed for measuring the degree of refining, the water retention value (WRV), and the weight percentage of wood pulp during the refining process. The measurement time is less than 5 min, and the sensor can operate in a slip-stream of the process line or as an at-line instrument. The degree of refining and the WRV are determined from settling measurements. The settling of a pulp suspension (with a weight percentage less than 0.5 wt%) is observed after the mixer, which keeps the pulp uniformly distributed, is turned off. The attenuation of ultrasound as a function of time is recorded, and these data show a peak at a time designated as the "peak time." The peak time T increases with the degree of refining, as demonstrated by measuring pulp samples with known degrees of refining. The WRV can be determined using the relative peak time, defined as the ratio T2/T1, where T1 is an initial peak time and T2 is the value after additional refining. This method offers the industry an alternative to the current time-consuming WRV test. PMID:16920173
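The peak-time quantities defined in this record reduce to simple computations; a minimal sketch, with illustrative function names:

```python
def peak_time(times, attenuation):
    """Time at which the recorded attenuation-vs-time curve peaks."""
    i = max(range(len(attenuation)), key=attenuation.__getitem__)
    return times[i]

def relative_peak_time(t1, t2):
    """Relative peak time T2/T1: the peak time after additional refining
    divided by the initial peak time, per the definition in the abstract."""
    return t2 / t1
```

In practice the attenuation curve would be the sampled ultrasonic signal from the settling measurement described above.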
Local time-space mesh refinement for simulation of elastic wave propagation in multi-scale media
NASA Astrophysics Data System (ADS)
Kostin, Victor; Lisitsa, Vadim; Reshetova, Galina; Tcheverda, Vladimir
2015-01-01
This paper presents an original approach to local time-space grid refinement for the numerical simulation of wave propagation in models with localized clusters of micro-heterogeneities. The main features of the algorithm are the application of temporal and spatial refinement on two different surfaces; the use of the embedded-stencil technique for the refinement of grid step with respect to time; the use of the Fast Fourier Transform (FFT)-based interpolation to couple variables for spatial mesh refinement. The latter makes it possible to perform filtration of high spatial frequencies, which provides stability in the proposed finite-difference schemes. In the present work, the technique is implemented for the finite-difference simulation of seismic wave propagation and the interaction of such waves with fluid-filled fractures and cavities of carbonate reservoirs. However, this approach is easy to adapt and/or combine with other numerical techniques, such as finite elements, discontinuous Galerkin method, or finite volumes used for approximation of various types of linear and nonlinear hyperbolic equations.
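The idea of FFT-based interpolation with filtering of high spatial frequencies can be illustrated in one dimension: zero-padding the spectrum of a periodic signal refines the grid, and keeping only the coarse-grid frequencies is the low-pass filtering step that aids stability. A minimal sketch, not the authors' implementation:

```python
import numpy as np

def fft_refine(u, r):
    """Interpolate periodic samples u onto a grid r times finer via the FFT.

    Only the coarse-grid frequencies are retained (the low-pass filter);
    signals with power at the coarse Nyquist frequency would need special
    handling of that bin, which this sketch omits.
    """
    n = len(u)
    U = np.fft.rfft(u)
    Uf = np.zeros(r * n // 2 + 1, dtype=complex)  # fine-grid spectrum
    Uf[: len(U)] = U                              # zero-pad above coarse band
    # factor r compensates for irfft's 1/N normalization on the finer grid
    return np.fft.irfft(Uf, n=r * n) * r
```

For band-limited data this reproduces the underlying function exactly on the fine grid, which is what makes spectral interpolation attractive for coupling coarse and fine meshes.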
CrystTwiV: a webserver for automated phase extension and refinement in X-ray crystallography
Thireou, Trias; Atlamazoglou, Vassilis; Levakis, Manolis; Eliopoulos, Elias; Hountas, Athanassios; Tsoucaris, George; Bethanis, Kostas
2007-01-01
An important stage in macromolecular crystallography is that of phase extension and refinement when initial phase estimates are available from isomorphous replacement or anomalous scattering or other methods. For this purpose, an alternative method called the twin variables (TwiV) method has been proposed. The algorithm is based on alternately transferring the phase information between the twin variable sets. The phase extension and refinement is evaluated with the crystallographic symmetry test by deliberately sacrificing the space-group symmetry in the starting set, then using its re-appearance as a criterion for correctness. Here we present a software program (CrysTwiV) that runs on the web (freely available at: http://btweb.aua.gr/crystwiv/) implementing the above-mentioned method. PMID:17488848
Grishaev, Alexander; Ying, Jinfa; Canny, Marella D.; Pardi, Arthur; Bax, Ad
2008-01-01
A procedure is presented for refinement of a homology model of E. coli tRNAVal, originally based on the X-ray structure of yeast tRNAPhe, using experimental residual dipolar coupling (RDC) and small angle X-ray scattering (SAXS) data. A spherical sampling algorithm is described for refinement against SAXS data that does not require a globbic approximation, which is particularly important for nucleic acids, where such approximations are less appropriate. The substantially higher speed of the algorithm also makes its application favorable for proteins. In addition to the SAXS data, the structure refinement employed a sparse set of NMR data consisting of 24 imino N-HN RDCs measured with Pf1 phage alignment, and 20 imino N-HN RDCs obtained from magnetic field dependent alignment of tRNAVal. The refinement strategy aims to largely retain the local geometry of the 58% identical tRNAPhe by ensuring that the atomic coordinates for short, overlapping segments of the ribose-phosphate backbone and the conserved base pairs remain close to those of the starting model. Local coordinate restraints are enforced using the non-crystallographic symmetry (NCS) term in the XPLOR-NIH or CNS software package, while still permitting modest movements of adjacent segments. The RDCs mainly drive the relative orientation of the helical arms, whereas the SAXS restraints ensure an overall molecular shape compatible with experimental scattering data. The resulting structure exhibits good cross-validation statistics (jack-knifed Qfree = 14% for the Pf1 RDCs, compared to 25% for the starting model) and exhibits a larger angle between the two helical arms than observed in the X-ray structure of tRNAPhe, in agreement with previous NMR-based tRNAVal models. PMID:18787959
Reasoning about systolic algorithms
Purushothaman, S.
1986-01-01
Systolic algorithms are a class of parallel algorithms with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on representing an algorithm as recurrence equations and solving them. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.
Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
Algorithm-development activities at USF are continuing. Our current priority is the algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data.
Production and Refining of Magnesium Metal from Turkey Originating Dolomite
NASA Astrophysics Data System (ADS)
Demiray, Yeliz; Yücel, Onuralp
2012-06-01
In this study, crown magnesium produced from Turkish calcined dolomite by the Pidgeon process was refined, and corrosion tests were applied. Using the FactSage thermodynamic program, the metallothermic reduction behavior of magnesium oxide and the silicate structures formed during this reaction were investigated. After the thermodynamic studies were completed, the calcination of dolomite, its metallothermic reduction at temperatures of 1473 K and 1523 K under vacuum (varied from 20 to 200 Pa), and the refining of crown magnesium were studied. Different flux compositions consisting of MgCl2, KCl, CaCl2, MgO, CaF2, NaCl, and SiO2, with and without B2O3 additions, were selected for the refining process. These tests were carried out at 963 K for settling times of 15, 30 and 45 minutes. A considerable amount of iron was transferred into the sludge phase, and the iron content decreased from 0.08% to 0.027%. The refined magnesium was suitable for the production of various magnesium alloys. As a result of the decreased iron content, a minimum corrosion rate of 2.35 g/m²/day was obtained for the refined magnesium. The results are compared with previous studies.
Refinement performance and mechanism of an Al-50Si alloy
Dai, H.S.; Liu, X.F.
2008-11-15
The microstructure of primary silicon particles and the melt structure of an Al-50%Si (wt.%) alloy have been investigated by optical microscopy, scanning electron microscopy, electron probe micro-analysis and high-temperature X-ray diffraction. The results show that the Al-50Si alloy can be effectively refined by a newly developed Si-20P master alloy, and that the melting temperature is crucial to the refinement process. The minimal overheating degree ΔT_min (the difference between the minimal overheating temperature T_min and the liquidus temperature T_L) for good refinement is about 260 °C. Primary silicon particles can be refined by adding 0.2 wt.% phosphorus at sufficient temperature, their average size decreasing from 2-4 mm to about 30 μm. The X-ray diffraction data for the Al-50Si melt demonstrate that a structural change occurs as the melting temperature varies from 1100 °C to 1300 °C. Additionally, the relationship between the refinement mechanism and the melt structure is discussed.
Refined BPS State Counting from Nekrasov's Formula and Macdonald Functions
NASA Astrophysics Data System (ADS)
Awata, Hidetoshi; Kanno, Hiroaki
It has been argued that Nekrasov's partition function gives the generating function of refined BPS state counting in the compactification of M theory on local Calabi-Yau spaces. We show that a refined version of the topological vertex we previously proposed (arXiv:hep-th/0502061) is a building block of Nekrasov's partition function with two equivariant parameters. Compared with another refined topological vertex by Iqbal, Kozcaz and Vafa (arXiv:hep-th/0701156), our refined vertex is expressed entirely in terms of a specialization of the Macdonald symmetric functions which is related to the equivariant character of the Hilbert scheme of points on ℂ². We provide diagrammatic rules for computing the partition function from the web diagrams appearing in geometric engineering of Yang-Mills theory with eight supercharges. Our refined vertex has a simple transformation law under the flop operation of the diagram, which suggests that homological invariants of the Hopf link are related to the Macdonald functions.
Refined food addiction: a classic substance use disorder.
Ifland, J R; Preuss, H G; Marcus, M T; Rourke, K M; Taylor, W C; Burau, K; Jacobs, W S; Kadish, W; Manso, G
2009-05-01
Overeating in industrial societies is a significant problem, linked to an increasing incidence of overweight and obesity, and the resultant adverse health consequences. We advance the hypothesis that a possible explanation for overeating is that processed foods with high concentrations of sugar and other refined sweeteners, refined carbohydrates, fat, salt, and caffeine are addictive substances. Therefore, many people lose control over their ability to regulate their consumption of such foods. The loss of control over these foods could account for the global epidemic of obesity and other metabolic disorders. We assert that overeating can be described as an addiction to refined foods that conforms to the DSM-IV criteria for substance use disorders. To examine the hypothesis, we relied on experience with self-identified refined foods addicts, as well as critical reading of the literature on obesity, eating behavior, and drug addiction. Reports by self-identified food addicts illustrate behaviors that conform to the 7 DSM-IV criteria for substance use disorders. The literature also supports use of the DSM-IV criteria to describe overeating as a substance use disorder. The observational and empirical data strengthen the hypothesis that certain refined food consumption behaviors meet the criteria for substance use disorders, not unlike tobacco and alcohol. This hypothesis could lead to a new diagnostic category, as well as therapeutic approaches to changing overeating behaviors. PMID:19223127
Single-pass GPU-raycasting for structured adaptive mesh refinement data
NASA Astrophysics Data System (ADS)
Kaehler, Ralf; Abel, Tom
2013-01-01
Structured Adaptive Mesh Refinement (SAMR) is a popular numerical technique to study processes with high spatial and temporal dynamic range. It reduces computational requirements by adapting the lattice on which the underlying differential equations are solved to most efficiently represent the solution. Particularly in astrophysics and cosmology, such simulations can now capture spatial scales ten orders of magnitude apart and more. The irregular locations and extents of the refined regions in the SAMR scheme, and the fact that different resolution levels partially overlap, pose a challenge for GPU-based direct volume rendering methods. kD-trees have proven advantageous for subdividing the data domain into non-overlapping blocks of equally sized cells, optimal for the texture units of current graphics hardware, but previous GPU-supported raycasting approaches for SAMR data using this data structure required a separate rendering pass for each node, preventing the application of many advanced lighting schemes that require simultaneous access to more than one block of cells. In this paper we present the first single-pass GPU-raycasting algorithm for SAMR data that is based on a kD-tree. The tree is efficiently encoded by a set of 3D textures, which allows complete rays to be sampled adaptively, entirely on the GPU, without any CPU interaction. We discuss two different data storage strategies for accessing the grid data on the GPU and apply them to several datasets to demonstrate the benefits of the proposed method.
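The core decomposition idea, splitting a domain whose resolution levels partially overlap into non-overlapping leaves each owned by its finest covering level, can be illustrated in one dimension. A much-simplified sketch under assumptions (the paper works with 3-D grids, kD-trees and texture encodings; all names here are illustrative):

```python
def kd_leaves(domain, boxes):
    """Split a 1-D domain [a, b) at refined-interval boundaries so that every
    leaf is owned by exactly one resolution level (the finest one covering it).

    `boxes` lists refined intervals ordered coarse to fine, so a box's level
    is its index + 1; level 0 is the base grid.
    """
    a, b = domain
    cuts = sorted({a, b, *[x for lo, hi in boxes for x in (lo, hi) if a < x < b]})
    leaves = []
    for lo, hi in zip(cuts, cuts[1:]):
        # finest level whose box fully covers this leaf
        level = max((lvl for lvl, (blo, bhi) in enumerate(boxes, start=1)
                     if blo <= lo and hi <= bhi), default=0)
        leaves.append((lo, hi, level))
    return leaves
```

Because the resulting leaves never overlap, a ray can be marched through them in a single pass, which is the property the single-pass GPU raycaster exploits.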
NASA Astrophysics Data System (ADS)
Li, Hui; Tang, Yunwei; Liu, Qingjie; Ding, Haifeng; Chen, Yu; Jing, Linhai
2014-11-01
Image segmentation is the basis of object-based information extraction from remote sensing imagery. Image segmentation based on multiple features, multiple scales, and spatial context is a current research focus. The scale parameter selected for segmentation strongly affects the average size of the segments obtained by a multi-scale segmentation method, such as the Fractal Net Evolution Approach (FNEA) employed in the eCognition software. It is important for the FNEA method to select an appropriate scale parameter that causes neither over- nor under-segmentation. A method for scale parameter selection and segment refinement is proposed in this paper by modifying a method proposed by Johnson. In tests on two images, the segmentation maps obtained using the proposed method contain less under- and over-segmentation than those generated by Johnson's method. This demonstrates that the proposed method is effective for scale parameter selection and segment refinement in multi-scale segmentation algorithms such as FNEA.
Kalburgi, P B; Jha, R; Ojha, C S P; Deshannavar, U B
2015-01-01
Stream re-aeration is an extremely important process for enhancing the self-purification capacity of streams. To estimate the dissolved oxygen (DO) present in a river, estimation of the re-aeration coefficient is mandatory. Normally, the re-aeration coefficient is expressed as a function of several stream variables, such as mean stream velocity, shear velocity, bed slope, flow depth and Froude number, and many empirical equations have been developed over the years. In this work, the 13 most popular empirical re-aeration equations were tested for their applicability to the Ghataprabha River system, Karnataka, India, at various locations. Extensive field data were collected from March 2008 to February 2009 at seven different sites along the river, and the re-aeration coefficient was observed using a mass balance approach. The performance of the re-aeration equations was evaluated using various error estimates, namely the standard error (SE), mean multiplicative error (MME), normalized mean error (NME) and correlation statistics. The results show that the predictive equation developed by Jha et al. (Refinement of predictive re-aeration equations for a typical Indian river. Hydrological Processes. 2001;15(6):1047-1060) for a typical Indian river yielded the best agreement in terms of SE, MME, NME and the correlation coefficient r. Furthermore, a refined predictive equation was developed for the Ghataprabha River using a least-squares algorithm that minimizes the error estimates. PMID:25409586
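The error statistics named above are standard, but their exact definitions vary between papers; one common set of definitions, assumed here rather than taken from this study, can be computed as:

```python
import math

def error_stats(pred, obs):
    """SE, MME, NME and Pearson correlation r for predicted vs. observed values."""
    n = len(pred)
    # standard error of prediction
    se = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / n)
    # mean multiplicative error: geometric-mean factor by which predictions miss
    mme = math.exp(sum(abs(math.log(p / o)) for p, o in zip(pred, obs)) / n)
    # normalized mean error: net bias relative to the observed total
    nme = sum(p - o for p, o in zip(pred, obs)) / sum(obs)
    # Pearson correlation coefficient
    mp, mo = sum(pred) / n, sum(obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    r = cov / math.sqrt(sum((p - mp) ** 2 for p in pred)
                        * sum((o - mo) ** 2 for o in obs))
    return se, mme, nme, r
```

A perfect predictor gives SE = 0, MME = 1, NME = 0 and r = 1, which is the benchmark against which the 13 equations would be ranked.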
NASA Astrophysics Data System (ADS)
Xu, C.; Sui, H. G.; Li, D. R.; Sun, K. M.; Liu, J. Y.
2016-06-01
Automatic image registration is a vital yet challenging task, particularly for multi-sensor remote sensing images. Given the diversity of the data, it is unlikely that a single registration algorithm or a single image feature will work satisfactorily for all applications. Focusing on this issue, the main contribution of this paper is an automatic optical-to-SAR image registration method using a multi-level, coarse-to-fine refinement model. Firstly, a multi-level coarse-to-fine registration strategy is presented: visual saliency features are used for coarse registration, specific area and line features are then used to refine the result, and finally sub-pixel matching is applied using a KNN graph. Secondly, an iterative strategy with adaptive parameter adjustment for re-extracting and re-matching features is presented. Since almost all feature-based registration methods rely on feature extraction results, this iterative strategy improves the robustness of feature matching, and all parameters are adjusted automatically and adaptively in the iterative procedure. Thirdly, a uniform level-set segmentation model for optical and SAR images is presented to segment conjugate features, and a Voronoi diagram is introduced into Spectral Point Matching (VSPM) to further enhance the matching accuracy between the two sets of matching points. Experimental results show that the proposed method can effectively and robustly generate sufficient, reliable point pairs and provide accurate registration.
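Feature matching of the kind used in the refinement stage can be caricatured by mutual nearest-neighbour matching of descriptors. This is a much-simplified stand-in, not the paper's VSPM method (which additionally uses a KNN graph, Voronoi diagrams and spectral matching); all names are illustrative:

```python
def mutual_nearest(desc_a, desc_b):
    """Match two descriptor sets by mutual nearest neighbours
    (squared Euclidean distance), keeping only reciprocal pairs."""
    def nn(q, pool):
        return min(range(len(pool)),
                   key=lambda j: sum((x - y) ** 2 for x, y in zip(q, pool[j])))
    matches = []
    for i, d in enumerate(desc_a):
        j = nn(d, desc_b)
        if nn(desc_b[j], desc_a) == i:   # reciprocity check rejects ambiguous matches
            matches.append((i, j))
    return matches
```

The reciprocity requirement is a cheap robustness filter, in the same spirit as the iterative re-matching strategy: unreliable one-sided matches are discarded rather than propagated.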
Molecular dynamics force-field refinement against quasi-elastic neutron scattering data
Borreguero Calvo, Jose M.; Lynch, Vickie E.
2015-11-23
Quasi-elastic neutron scattering (QENS) is one of the experimental techniques of choice for probing dynamics at length and time scales that are also in the realm of full-atom molecular dynamics (MD) simulations. This overlap enables extension of current fitting methods that use time-independent equilibrium measurements to new methods that fit against dynamics data. We present an algorithm that fits simulation-derived incoherent dynamical structure factors against QENS data probing the diffusive dynamics of the system. We showcase the difficulties inherent to this type of fitting problem, namely the disparity between the simulation and experiment environments, as well as limitations in the simulation due to incomplete sampling of phase space. We discuss a methodology to overcome these difficulties and apply it to a set of full-atom MD simulations for the purpose of refining the force-field parameter governing the activation energy of methyl rotation in the octa-methyl polyhedral oligomeric silsesquioxane molecule. Our optimal simulated activation energy agrees with the experimentally derived value to within a 5% difference, well within experimental error. We believe the method will find applicability to other types of diffusive motions and other representations of the system, such as coarse-grain models where empirical fitting is essential. In addition, the refinement method can be extended to the coherent dynamic structure factor with no additional effort.
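The fitting loop described, choosing the force-field parameter whose simulated structure factor best matches the QENS spectrum, can be caricatured as a one-parameter grid search. All names and the Lorentzian line shape are illustrative assumptions, not the authors' code:

```python
import math

def lorentzian(w, gamma):
    """Normalized Lorentzian line shape, a common model for quasi-elastic broadening."""
    return gamma / (math.pi * (w * w + gamma * gamma))

def refine_parameter(omega, s_exp, candidates, width_of):
    """Return the candidate parameter value (e.g. an activation energy) whose
    model spectrum best matches the measured spectrum in a least-squares sense.

    `width_of` maps a candidate value to a quasi-elastic linewidth; for an
    activation energy this could be an Arrhenius law.
    """
    def cost(p):
        g = width_of(p)
        return sum((lorentzian(w, g) - s) ** 2 for w, s in zip(omega, s_exp))
    return min(candidates, key=cost)
```

In the actual method each candidate's spectrum comes from an MD simulation rather than an analytic line shape, which is what makes the incomplete phase-space sampling issue discussed above relevant.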
NASA Astrophysics Data System (ADS)
Fromang, S.; Hennebelle, P.; Teyssier, R.
2006-10-01
Aims. In this paper, we present a new method to perform numerical simulations of astrophysical MHD flows using the Adaptive Mesh Refinement framework and Constrained Transport. Methods. The algorithm is based on previous work in which the MUSCL-Hancock scheme was used to evolve the induction equation. In this paper, we detail the extension of this scheme to the full MHD equations and discuss its properties. Results. Through a series of test problems, we illustrate the performance of this new code using two different MHD Riemann solvers (Lax-Friedrichs and Roe) and the need for Adaptive Mesh Refinement capabilities in some cases. Finally, we show its versatility by applying it to two completely different astrophysical situations well studied in past years: the growth of the magnetorotational instability in the shearing box and the collapse of magnetized cloud cores. Conclusions. We have implemented a new Godunov scheme to solve the ideal MHD equations in the AMR code RAMSES. We have shown that it results in a powerful tool that can be applied to a great variety of astrophysical problems, ranging from galaxy formation in the early universe to high-resolution studies of molecular cloud collapse in our galaxy.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
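The construction of such stencils can be illustrated by solving the Taylor moment conditions for finite-difference weights. This is a generic sketch, not the paper's specific algorithms; the function name and interface are hypothetical.

```python
import math
import numpy as np

def fd_weights(offsets, deriv):
    """Weights w such that sum_j w[j]*f(x + offsets[j]*h) ~ h**deriv * f^(deriv)(x),
    obtained from the moment conditions sum_j w[j]*offsets[j]**k = k! * delta(k, deriv)."""
    offsets = np.asarray(offsets, dtype=float)
    n = len(offsets)
    A = np.vander(offsets, n, increasing=True).T  # A[k, j] = offsets[j]**k
    b = np.zeros(n)
    b[deriv] = math.factorial(deriv)
    return np.linalg.solve(A, b)

# Classic fourth-order central stencil for the first derivative:
w = fd_weights([-2, -1, 0, 1, 2], deriv=1)
```

Widening the offset list raises the order of accuracy, which is how stencils up to eleventh order can be generated with the same few lines.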
Pillowing doublets: Refining a mesh to ensure that faces share at most one edge
Mitchell, S.A.; Tautges, T.J.
1995-11-01
Occasionally one may be confronted by a hexahedral or quadrilateral mesh containing doublets, two faces sharing two edges. In this case, no amount of smoothing will produce a mesh with agreeable element quality: in the planar case, one of these two faces will always have an angle of at least 180 degrees between the two edges. The authors describe a robust scheme for refining a hexahedral or quadrilateral mesh to separate such faces, so that any two faces share at most one edge. Note that this also ensures that two hexahedra share at most one face in the three dimensional case. The authors have implemented this algorithm and incorporated it into the CUBIT mesh generation environment developed at Sandia National Laboratories.
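The doublet condition is easy to test directly. Below is a minimal illustrative sketch, not the CUBIT implementation, that flags pairs of quadrilateral faces sharing two or more edges:

```python
from collections import defaultdict
from itertools import combinations

def find_doublets(faces):
    """Return pairs of face indices that share two or more edges.
    Each face is a tuple of vertex ids in cyclic order."""
    edge_faces = defaultdict(list)
    for fi, face in enumerate(faces):
        n = len(face)
        for i in range(n):
            edge = frozenset((face[i], face[(i + 1) % n]))
            edge_faces[edge].append(fi)
    shared = defaultdict(int)
    for flist in edge_faces.values():
        for a, b in combinations(flist, 2):
            shared[(a, b)] += 1
    return [pair for pair, count in shared.items() if count >= 2]

# Faces 0 and 1 share edges {1,2} and {2,3}: a doublet.
quads = [(0, 1, 2, 3), (1, 2, 3, 4), (2, 5, 6, 3)]
doublets = find_doublets(quads)
```

A refinement scheme such as pillowing would then insert a layer of elements around one of the flagged faces so that no pair shares more than one edge.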
Generalization and refinement of an automatic landing system capable of curved trajectories
NASA Technical Reports Server (NTRS)
Sherman, W. L.
1976-01-01
Refinements in the lateral and longitudinal guidance for an automatic landing system capable of curved trajectories were studied. Wing flaps or drag flaps (speed brakes) were found to provide faster and more precise speed control than autothrottles. In the case of the lateral control it is shown that the use of the integral of the roll error in the roll command over the first 30 to 40 seconds of flight reduces the sensitivity of the lateral guidance to the gain on the azimuth guidance angle error in the roll command. Also, changes to the guidance algorithm are given that permit pi-radian approaches and constrain the airplane to fly in a specified plane defined by the position of the airplane at the start of letdown and the flare point.
Toward parallel, adaptive mesh refinement for chemically reacting flow simulations
Devine, K.D.; Shadid, J.N.; Salinger, A.G.; Hutchinson, S.A.; Hennigan, G.L.
1997-12-01
Adaptive numerical methods offer greater efficiency than traditional numerical methods by concentrating computational effort in regions of the problem domain where the solution is difficult to obtain. In this paper, the authors describe progress toward adding mesh refinement to MPSalsa, a computer program developed at Sandia National Laboratories to solve coupled three-dimensional fluid flow and detailed reaction chemistry systems for modeling chemically reacting flow on large-scale parallel computers. Data structures that support refinement and dynamic load-balancing are discussed. Results using uniform refinement with mesh sequencing to improve convergence to steady-state solutions are also presented. Three examples are presented: a lid-driven cavity, a thermal convection flow, and a tilted chemical vapor deposition reactor.
RNA Structure Refinement using the ERRASER-Phenix pipeline
Chou, Fang-Chieh; Echols, Nathaniel; Terwilliger, Thomas C.; Das, Rhiju
2015-01-01
The final step of RNA crystallography involves the fitting of coordinates into electron density maps. The large number of backbone atoms in RNA presents a difficult and tedious challenge, particularly when experimental density is poor. The ERRASER-Phenix pipeline can improve an initial set of RNA coordinates automatically based on a physically realistic model of atomic-level RNA interactions. The pipeline couples diffraction-based refinement in Phenix with the Rosetta-based real-space refinement protocol ERRASER (Enumerative Real-Space Refinement ASsisted by Electron density under Rosetta). The combination of ERRASER and Phenix can improve the geometrical quality of RNA crystallographic models while maintaining or improving the fit to the diffraction data (as measured by Rfree). Here we present a complete tutorial for running ERRASER-Phenix through the Phenix GUI, from the command-line, and via an application in the Rosetta On-line Server that Includes Everyone (ROSIE). PMID:26227049
Lessons Learned From Using Focus Groups to Refine Digital Interventions
Avis, Jillian LS; van Mierlo, Trevor; Fournier, Rachel
2015-01-01
There is growing interest in applying novel eHealth approaches for the prevention and management of various health conditions, with the ultimate goal of increasing positive patient outcomes and improving the effectiveness and efficiency of health services delivery. Coupled with the use of innovative approaches is the possibility for adverse outcomes, highlighting the need to strategically refine digital practices prior to implementation with patients. One appropriate method for modification purposes includes focus groups. Although it is a well-established method in qualitative research, there is a lack of guidance regarding the use of focus groups for digital intervention refinement. To address this gap, the purpose of our paper is to highlight several lessons our research team has learned in using focus groups to help refine digital interventions prior to use with patients. PMID:26232313
Construction and Application of a Refined Hospital Management Chain.
Lihua, Yi
2016-01-01
Large-scale development was quite common in the later period of hospital industrialization in China. Today, Chinese hospital management faces such problems as service inefficiency, high human resources cost, and low rate of capital use. This study analyzes the refined management chain of Wuxi No. 2 People's Hospital. This consists of six gears, namely "organizational structure, clinical practice, outpatient service, medical technology, nursing care, and logistics." The gears are based on "flat management system targets, chief of medical staff, centralized outpatient service, intensified medical examinations, vertical nursing management, and socialized logistics." The core concepts of refined hospital management are optimizing flow process, reducing waste, improving efficiency, saving costs, and, most important, taking good care of patients. Keywords: hospital, refined, management chain. PMID:27180468
Automated Assume-Guarantee Reasoning by Abstraction Refinement
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Giannakopoulou, Dimitra
2008-01-01
Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.
Optical measurement of pulp quantity in a rotating disc refiner
NASA Astrophysics Data System (ADS)
Alahautala, Taito; Lassila, Erkki; Hernberg, Rolf; Härkönen, Esko; Vuorio, Petteri
2004-11-01
An optical method based on light extinction was used in measuring pulp quantity in the plate gap of a 10 MW thermomechanical pulping refiner for the first time. The relationship between pulp quantity and light extinction was determined by empirical laboratory experiments. The empirical relationship was then applied to interpret the image data obtained from field measurements. The results show the local distribution of pulp in the refiner plate gap for different rotor plate positions and refiner operation points. The maximum relative uncertainty in the measured pulp quantity was 50%. Relative pulp distributions were measured at higher accuracy. The measurements have influenced the development of a laser-based optical diagnostic method that can be applied to the quantitative visualization of technically demanding industrial processes.
Alloy performance in high temperature oil refining environments
Sorell, G.; Humphries, M.J.; McLaughlin, J.E.
1995-12-31
The performance of steels and alloys in high temperature petroleum refining applications is strongly influenced by detrimental interactions with aggressive process environments. These are encountered in conventional refining processes and especially in processing schemes for fuels conversion and upgrading. Metal-environment interactions can shorten equipment life and cause impairment of mechanical properties, metallurgical stability and weldability. Corrosion and other high temperature attack modes discussed are sulfidation, hydrogen attack, carburization, and metal dusting. Sulfidation is characterized by bulky scales that are generally ineffective corrosion barriers. Metal loss is often accompanied by sub-surface sulfide penetration. Hydrogen attack and carburization proceed without metal loss and are detectable only by metallographic examination. In advanced stages, these deterioration modes cause severe impairment of mechanical properties. Harmful metal-environment interactions are characterized and illustrated with data drawn from test exposures and plant experience. Alloys employed for high temperature oil refining equipment are identified, including some promising newcomers.
The US petroleum refining industry in the 1980's
Not Available
1990-10-11
As part of the EIA program on petroleum, The US Petroleum Refining Industry in the 1980's presents a historical analysis of the changes that took place in the US petroleum refining industry during the 1980's. It is intended to be of interest to analysts in the petroleum industry, state and federal government officials, Congress, and the general public. The report consists of six chapters and four appendices. Included is a detailed description of the major events and factors that affected the domestic refining industry during this period. Some of the changes that took place in the 1980's are the result of events that started in the 1970's. The impact of these events on US refinery configuration, operations, economics, and company ownership is examined. 23 figs., 11 tabs.
Survey shows over 1,000 refining catalysts
Rhodes, A.K.
1991-10-14
The Journal's latest survey of worldwide refining catalysts reveals that there are more than 1,040 unique catalyst designations in commercial use in 19 processing categories - an increase of some 140 since the compilation of refining catalysts was last published. As a matter of interest, some 700 catalysts were identified in the first survey. The processing categories surveyed in this paper are: catalytic naphtha reforming, dimerization, isomerization (C4), isomerization (C5 and C6), isomerization (xylenes), fluid catalytic cracking (FCC), hydrocracking, mild hydrocracking, hydrotreating/hydrogenation/saturation, hydrorefining, polymerization, sulfur (elemental) recovery, steam hydrocarbon reforming, sweetening, Claus unit tail gas treatment, oxygenates, combustion promoters (FCC), sulfur oxides reduction (FCC), and other refining processes.
Refiners discuss fluid catalytic cracking at technology meeting
1995-04-24
At the National Petroleum Refiners Association's question and answer session on refining and petrochemical technology, engineers and technical specialists from around the world gather each year to exchange experience and information on refining and petrochemical issues. Fluid catalytic cracking (FCC) catalysts were a topic of great interest at the most recent NPRA Q&A session, held Oct. 11-13, 1994, in Washington, DC. The discussions of FCC catalysts included questions about: reduction of olefins in FCC naphtha; tolerance of FCC catalysts to oxygen enrichment; and use of mild hydrocracking catalyst in an FCC feed hydrotreater. At this renowned meeting, a panel of industry representatives answers presubmitted questions. Moderator and NPRA technical director Terrence S. Higgins then invites audience members to respond or ask additional questions on the subjects under discussion. This paper presents the discussions of the above three topics.
Segregation Coefficients of Impurities in Selenium by Zone Refining
NASA Technical Reports Server (NTRS)
Su, Ching-Hua; Sha, Yi-Gao
1998-01-01
The purification of Se by the zone refining process was studied. The impurity solute levels along the length of a zone-refined Se sample were measured by spark source mass spectrographic analysis. By comparing the experimental concentration levels with theoretical curves, the segregation coefficient, defined as the ratio of the equilibrium concentration of a given solute in the solid to that in the liquid, k = x_s/x_l, is found to be close to unity for most of the impurities in Se, i.e., between 0.85 and 1.15, with the k value for Si, Zn, Fe, Na and Al greater than 1 and that for S, Cl, Ca, P, As, Mn and Cr less than 1. This implies that a large number of passes is needed for the successful implementation of zone refining in the purification of Se.
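For context, the theoretical curve for a single zone pass follows Pfann's classical zone-melting relation C_s(x) = C_0 [1 - (1 - k) e^{-kx/l}]. A small sketch of that standard textbook formula (not the paper's fitting code; names are illustrative):

```python
import numpy as np

def single_pass_profile(x, k, zone_length, c0=1.0):
    """Solute concentration after one zone pass (Pfann's relation),
    valid away from the final zone at the end of the ingot:
    C_s(x) = c0 * (1 - (1 - k) * exp(-k * x / zone_length))."""
    return c0 * (1.0 - (1.0 - k) * np.exp(-k * np.asarray(x) / zone_length))

x = np.linspace(0.0, 10.0, 6)                     # position in zone lengths
profile = single_pass_profile(x, k=0.9, zone_length=1.0)
# With k near unity the profile stays close to c0, so a single pass
# removes little impurity -- hence the need for many passes noted above.
```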
NASA Technical Reports Server (NTRS)
Tsiveriotis, K.; Brown, R. A.
1993-01-01
A new method is presented for the solution of free-boundary problems using Lagrangian finite element approximations defined on locally refined grids. The formulation allows for direct transition from coarse to fine grids without introducing non-conforming basis functions. The calculation of elemental stiffness matrices and residual vectors is unaffected by changes in the refinement level, which are accounted for in the loading of elemental data to the global stiffness matrix and residual vector. This technique for local mesh refinement is combined with recently developed mapping methods and Newton's method to form an efficient algorithm for the solution of free-boundary problems, as demonstrated here by sample calculations of cellular interfacial microstructure during directional solidification of a binary alloy.
An Automatic Registration Algorithm for 3D Maxillofacial Model
NASA Astrophysics Data System (ADS)
Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng
2016-09-01
3D image registration aims at aligning two 3D data sets in a common coordinate system, which has been widely used in computer vision, pattern recognition and computer assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for 3D maxillofacial model registration including facial surface models and skull models. Our proposed registration algorithm can achieve a good alignment between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
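The ICP refinement in step (3) alternates nearest-neighbor correspondence search with a closed-form rigid update. That closed-form step, the SVD-based Kabsch solution, can be sketched as follows (an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares R, t with R @ src[i] + t ~ dst[i] (Kabsch/SVD solution),
    the closed-form step performed inside each ICP iteration."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

# Recover a known rigid motion from noiseless correspondences.
rng = np.random.default_rng(0)
src = rng.standard_normal((20, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R, t = best_rigid_transform(src, src @ R_true.T + t_true)
```

Full ICP would re-estimate correspondences between the partial and whole models after each such update until the alignment error stops decreasing.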
Optimization Algorithms in Optimal Predictions of Atomistic Properties by Kriging.
Di Pasquale, Nicodemo; Davie, Stuart J; Popelier, Paul L A
2016-04-12
The machine learning method kriging is an attractive tool to construct next-generation force fields. Kriging can accurately predict atomistic properties, which involves optimization of the so-called concentrated log-likelihood function (i.e., fitness function). The difficulty of this optimization problem quickly escalates in response to an increase in either the number of dimensions of the system considered or the size of the training set. In this article, we demonstrate and compare the use of two search algorithms, namely, particle swarm optimization (PSO) and differential evolution (DE), to rapidly obtain the maximum of this fitness function. The ability of these two algorithms to find a stationary point is assessed by using the first derivative of the fitness function. Finally, the converged position obtained by PSO and DE is refined through the limited-memory Broyden-Fletcher-Goldfarb-Shanno bounded (L-BFGS-B) algorithm, which belongs to the class of quasi-Newton algorithms. We show that both PSO and DE are able to come close to the stationary point, even in high-dimensional problems. They do so in a reasonable amount of time, compared to that with the Newton and quasi-Newton algorithms, regardless of the starting position in the search space of kriging hyperparameters. The refinement through L-BFGS-B is able to give the position of the maximum to whatever precision is desired. PMID:26930135
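The two-stage strategy (stochastic global search, then bounded quasi-Newton polish) can be sketched with SciPy on a stand-in objective; the surface below is hypothetical and merely plays the role of the negated concentrated log-likelihood:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Stand-in for the negated fitness function: smooth but mildly multimodal.
def neg_fitness(theta):
    x, y = theta
    return (x - 1.0) ** 2 + (y + 2.0) ** 2 + 0.3 * np.sin(5.0 * x) ** 2

bounds = [(-5.0, 5.0), (-5.0, 5.0)]

# Stage 1: global stochastic search with DE (PSO would play the same role).
de = differential_evolution(neg_fitness, bounds, seed=1, polish=False)

# Stage 2: refine the converged position with bounded quasi-Newton (L-BFGS-B).
res = minimize(neg_fitness, de.x, method="L-BFGS-B", bounds=bounds)
```

The polish step cannot make the result worse than the DE output, and it sharpens the stationary point to the solver's gradient tolerance, mirroring the division of labor described in the abstract.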
NASA Astrophysics Data System (ADS)
Zhu, Teng; Yu, Jie; Li, Xiaojuan; Yang, Jie
2015-01-01
To solve the problem that the H/α-Wishart unsupervised classification algorithm can generate only inflexible clusters due to arbitrarily fixed zone boundaries in the clustering process, a refined fuzzy-logic-based classification scheme called the H/α-Wishart fuzzy clustering algorithm is proposed in this paper. A fuzzy membership function was developed for the degree to which pixels belong to each class, instead of an arbitrary boundary. To devise a unified fuzzy function, a normalized Wishart distance is proposed for the clustering step in the new algorithm. The degree of membership is then computed to implement fuzzy clustering. After an iterative procedure, the algorithm yields a classification result. The new classification scheme is applied to two L-band polarimetric synthetic aperture radar (PolSAR) images and an X-band high-resolution PolSAR image of a field in Lingshui, Hainan Province, China. Experimental results show that the classification precision of the refined algorithm is greater than that of the H/α-Wishart algorithm and that the refined algorithm performs well in differentiating shadows from water areas.
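The fuzzy membership step can be illustrated with the standard fuzzy c-means membership rule; the sketch below substitutes generic pixel-to-class distances for the paper's normalized Wishart distance:

```python
import numpy as np

def fuzzy_memberships(dists, m=2.0):
    """Membership degrees u[i, c] from distances dists[i, c], via the
    standard fuzzy c-means rule
        u[i, c] = (1/d[i, c])**(2/(m-1)) / sum_k (1/d[i, k])**(2/(m-1)).
    The paper uses a normalized Wishart distance; plain distances are
    used here purely for illustration."""
    inv = (1.0 / np.maximum(dists, 1e-300)) ** (2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

d = np.array([[1.0, 2.0, 4.0],     # pixel clearly nearest class 0
              [3.0, 3.0, 3.0]])    # pixel equidistant from all classes
u = fuzzy_memberships(d)
# Rows sum to one; the nearest class receives the largest membership.
```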
Diffraction-geometry refinement in the DIALS framework.
Waterman, David G; Winter, Graeme; Gildea, Richard J; Parkhurst, James M; Brewster, Aaron S; Sauter, Nicholas K; Evans, Gwyndaf
2016-04-01
Rapid data collection and modern computing resources provide the opportunity to revisit the task of optimizing the model of diffraction geometry prior to integration. A comprehensive description is given of new software that builds upon established methods by performing a single global refinement procedure, utilizing a smoothly varying model of the crystal lattice where appropriate. This global refinement technique extends to multiple data sets, providing useful constraints to handle the problem of correlated parameters, particularly for small wedges of data. Examples of advanced uses of the software are given and the design is explained in detail, with particular emphasis on the flexibility and extensibility it entails. PMID:27050135
Laser furnace and method for zone refining of semiconductor wafers
NASA Technical Reports Server (NTRS)
Griner, Donald B. (Inventor); zur Burg, Frederick W. (Inventor); Penn, Wayne M. (Inventor)
1988-01-01
A method of zone refining a crystal wafer (116, FIG. 1) comprising the steps of focusing a laser beam to a small spot (120) of selectable size on the surface of the crystal wafer (116) to melt a spot on the crystal wafer, scanning the small laser beam spot back and forth across the surface of the crystal wafer (116) at a constant velocity, and moving the scanning laser beam across a predetermined zone of the surface of the crystal wafer (116) in a direction normal to the laser beam scanning direction and at a selectable velocity to melt and refine the entire crystal wafer (116).
NASA Technical Reports Server (NTRS)
Thompson, C. P.; Leaf, G. K.; Vanrosendale, J.
1991-01-01
An algorithm is described for the solution of the laminar, incompressible Navier-Stokes equations. The basic algorithm is a multigrid based on a robust, box-based smoothing step. Its most important feature is the incorporation of automatic, dynamic mesh refinement. This algorithm supports generalized simple domains. The program is based on a standard staggered-grid formulation of the Navier-Stokes equations for robustness and efficiency. Special grid transfer operators were introduced at grid interfaces in the multigrid algorithm to ensure discrete mass conservation. Results are presented for three models: the driven-cavity, a backward-facing step, and a sudden expansion/contraction.
Vay, J.-L.; Adam, J.-C.; Heron, A.
2003-09-24
We present an extension of the Berenger Perfectly Matched Layer with additional terms and tunable coefficients which introduce some asymmetry in the absorption rate. We show that the discretized version of the new PML offers absorption rates superior to those of the discretized standard PML under a plane wave analysis. Taking advantage of the high absorption rates of the new PML, we have devised a new strategy for introducing the technique of mesh refinement into electromagnetic Particle-In-Cell plasma simulations. We present the details of the algorithm as well as a 2-D example of its application to laser-plasma interaction in the context of fast ignition.
Numerical relativity simulations of neutron star merger remnants using conservative mesh refinement
NASA Astrophysics Data System (ADS)
Dietrich, Tim; Bernuzzi, Sebastiano; Ujevic, Maximiliano; Brügmann, Bernd
2015-06-01
We study equal- and unequal-mass neutron star mergers by means of new numerical relativity simulations in which the general relativistic hydrodynamics solver employs an algorithm that guarantees mass conservation across the refinement levels of the computational mesh. We consider eight binary configurations with total mass M = 2.7 M⊙, mass ratios q = 1 and q = 1.16, four different equations of state (EOSs), and one configuration with a stiff EOS, M = 2.5 M⊙ and q = 1.5, which is one of the largest mass ratios simulated in numerical relativity to date. We focus on the postmerger dynamics and study the merger remnant, the dynamical ejecta, and the postmerger gravitational wave spectrum. Although most of the merger remnants are a hypermassive neutron star collapsing to a black hole + disk system on dynamical time scales, stiff EOSs can eventually produce a stable massive neutron star. During the merger process and on very short time scales, about ~10^-3-10^-2 M⊙ of material becomes unbound with kinetic energies ~10^50 erg. Ejecta are mostly emitted around the orbital plane and favored by large mass ratios and softer EOSs. The postmerger wave spectrum is mainly characterized by the nonaxisymmetric oscillations of the remnant neutron star. The stiff-EOS configuration consisting of a 1.5 M⊙ and a 1.0 M⊙ neutron star, simulated here for the first time, shows rather peculiar dynamics. During merger the companion star is strongly deformed; about ~0.03 M⊙ of the rest mass becomes unbound from the tidal tail due to the torque generated by the two-core inner structure. The merger remnant is a stable neutron star surrounded by a massive accretion disk of rest mass ~0.3 M⊙. This and similar configurations might be particularly interesting for electromagnetic counterparts. Comparing results obtained with and without the conservative mesh refinement algorithm, we find that postmerger simulations can be affected by systematic errors if mass conservation is not enforced in the
Friends-of-friends galaxy group finder with membership refinement. Application to the local Universe
NASA Astrophysics Data System (ADS)
Tempel, E.; Kipper, R.; Tamm, A.; Gramann, M.; Einasto, M.; Sepp, T.; Tuvikene, T.
2016-04-01
Context. Groups form the most abundant class of galaxy systems. They act as the principal drivers of galaxy evolution and can be used as tracers of the large-scale structure and the underlying cosmology. However, the detection of galaxy groups from galaxy redshift survey data is hampered by several observational limitations. Aims: We improve the widely used friends-of-friends (FoF) group finding algorithm with membership refinement procedures and apply the method to a combined dataset of galaxies in the local Universe. A major aim of the refinement is to detect subgroups within the FoF groups, enabling a more reliable suppression of the fingers-of-God effect. Methods: The FoF algorithm is often suspected of leaving subsystems of groups and clusters undetected. We used a galaxy sample built of the 2MRS, CF2, and 2M++ survey data comprising nearly 80 000 galaxies within the local volume of 430 Mpc radius to detect FoF groups. We conducted a multimodality check on the detected groups in search for subgroups. We furthermore refined group membership using the group virial radius and escape velocity to expose unbound galaxies. We used the virial theorem to estimate group masses. Results: The analysis results in a catalogue of 6282 galaxy groups in the 2MRS sample with two or more members, together with their mass estimates. About half of the initial FoF groups with ten or more members were split into smaller systems with the multimodality check. An interesting comparison to our detected groups is provided by another group catalogue that is based on similar data but a completely different methodology. Two thirds of the groups are identical or very similar. Differences mostly concern the smallest and largest of these other groups, the former sometimes missing and the latter being divided into subsystems in our catalogue. The catalogues are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc
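The core FoF step links every pair of galaxies closer than a linking length and extracts the connected components. A minimal sketch with a k-d tree and union-find (the paper's membership refinement procedures, subgroup detection, and mass estimation are not reproduced):

```python
import numpy as np
from scipy.spatial import cKDTree

def fof_groups(points, linking_length):
    """Friends-of-friends: connected components of the graph that links
    every pair of points separated by less than the linking length."""
    parent = list(range(len(points)))

    def find(i):                       # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    tree = cKDTree(points)
    for i, j in tree.query_pairs(linking_length):
        parent[find(i)] = find(j)      # union the two components

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1],   # one chain of friends
                [5.0, 5.0], [5.1, 5.0]])               # a separate pair
```

On redshift survey data the linking length is usually anisotropic (larger along the line of sight), which is precisely what makes unrefined FoF prone to the fingers-of-God effect the paper addresses.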
Code of Federal Regulations, 2010 CFR
2010-07-01
40 CFR 80.555, Marine Fuel Small Refiner Hardship Provisions: What provisions are available to a large refiner that acquires a small refiner or one or more of its refineries?
Copps, Kevin D.; Carnes, Brian R.
2008-04-01
We examine algorithms for the finite element approximation of thermal contact models. We focus on the implementation of thermal contact algorithms in SIERRA Mechanics. Following the mathematical formulation of models for tied contact and resistance contact, we present three numerical algorithms: (1) the multi-point constraint (MPC) algorithm, (2) a resistance algorithm, and (3) a new generalized algorithm. We compare and contrast both the correctness and performance of the algorithms in three test problems. We tabulate the convergence rates of global norms of the temperature solution on sequentially refined meshes. We present the results of a parameter study of the effect of contact search tolerances. We outline best practices in using the software for predictive simulations, and suggest future improvements to the implementation.
Semioptimal practicable algorithmic cooling
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-04-15
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
The Levitational Zone Refining (LZR) of photovoltaic silicon
NASA Astrophysics Data System (ADS)
Hukin, D. A.
1990-07-01
The horizontal zone refining of silicon by induction heating within a water-cooled segmented copper boat has produced material with average solar efficiencies of 11.3% (max. 14%). The LZR process is totally non-contaminating and produces ingots 125 mm square and up to 2 m long, with low carbon and undetectable oxygen content.
Computations of Aerodynamic Performance Databases Using Output-Based Refinement
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2009-01-01
Objectives: handle complex geometry problems; control discretization errors via solution-adaptive mesh refinement; focus on aerodynamic databases for parametric and optimization studies, with (1) accuracy: satisfy prescribed error bounds; (2) robustness and speed: may require over 10^5 mesh generations; (3) automation: avoid user supervision. Obtain "expert meshes" independent of user skill, and run every case adaptively in production settings.
Examining the Aging Semantic Differential: Suggestions for Refinement.
ERIC Educational Resources Information Center
Polizzi, Kenneth G.; Steitz, Jean A.
1998-01-01
Review of studies using the Aging Semantic Differential to measure attitudes toward the elderly identified problems: familiarity and variety of objects, men-only design, and age of the instrument. Ways to refine it include updating adjectives and their positions, identifying attitudinal objects, and accounting for gender differences. (SK)
Use of intensity quotients and differences in absolute structure refinement
Parsons, Simon; Flack, Howard D.; Wagner, Trixie
2013-01-01
Several methods for absolute structure refinement were tested using single-crystal X-ray diffraction data collected using Cu Kα radiation for 23 crystals with no element heavier than oxygen: conventional refinement using an inversion twin model, estimation using intensity quotients in SHELXL2012, estimation using Bayesian methods in PLATON, estimation using restraints consisting of numerical intensity differences in CRYSTALS and estimation using differences and quotients in TOPAS-Academic where both quantities were coded in terms of other structural parameters and implemented as restraints. The conventional refinement approach yielded accurate values of the Flack parameter, but with standard uncertainties ranging from 0.15 to 0.77. The other methods also yielded accurate values of the Flack parameter, but with much higher precision. Absolute structure was established in all cases, even for a hydrocarbon. The procedures in which restraints are coded explicitly in terms of other structural parameters enable the Flack parameter to correlate with these other parameters, so that it is determined along with those parameters during refinement. PMID:23719469
Some refinements of the theory of the viscous screw pump.
NASA Technical Reports Server (NTRS)
Elrod, H. G.
1972-01-01
Recently performed analysis for herringbone thrust bearings has been incorporated into the theory of the viscous screw pump for Newtonian fluids. In addition, certain earlier corrections for sidewall and channel curvature effects have been simplified. The result is a single, refined formula for the prediction of the pressure-flow relation for these pumps.
Lactation and neonatal nutrition: Defining and refining the critical questions
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper resulted from a conference entitled "Lactation and Milk: Defining and Refining the Critical Questions" held at the University of Colorado School of Medicine from January 18-20, 2012. The mission of the conference was to identify unresolved questions and set future goals for research into ...
40 CFR 80.1620 - Small refiner definition.
Code of Federal Regulations, 2014 CFR
2014-07-01
... companies, and all joint venture partners. (3) Had a corporate-average crude oil capacity less than or equal... “refiner” shall include foreign refiners. (c) The number of employees and crude oil capacity under... and crude oil capacity of any subsidiary companies, any parent company and subsidiaries of the...
Refinement of a Chemistry Attitude Measure for College Students
ERIC Educational Resources Information Center
Xu, Xiaoying; Lewis, Jennifer E.
2011-01-01
This work presents the evaluation and refinement of a chemistry attitude measure, Attitude toward the Subject of Chemistry Inventory (ASCI), for college students. The original 20-item and revised 8-item versions of ASCI (V1 and V2) were administered to different samples. The evaluation for ASCI had two main foci: reliability and validity. This…
Crisis and Survival in Western European Oil Refining.
ERIC Educational Resources Information Center
Pinder, David A.
1986-01-01
In recent years, oil refining in Western Europe has experienced a period of intense contraction. Discussed are the nature of the crisis, defensive strategies that have been adopted, the spatial consequences of the strategies, and how effective they have been in combatting the root causes of crises. (RM)
Refining King and Baxter Magolda's Model of Intercultural Maturity
ERIC Educational Resources Information Center
Perez, Rosemary J.; Shim, Woojeong; King, Patricia M.; Baxter Magolda, Marcia B.
2015-01-01
This study examined 110 intercultural experiences from 82 students attending six colleges and universities to explore how students' interpretations of their intercultural experiences reflected their developmental capacities for intercultural maturity. Our analysis of students' experiences confirmed as well as refined and expanded King and Baxter…
AIR EMISSIONS FROM COMBUSTION OF SOLVENT REFINED COAL
The report gives details of a Solvent Refined Coal (SRC) combustion test at Georgia Power Company's Plant Mitchell, March, May, and June 1977. Flue gas samples were collected for modified EPA Level 1 analysis; analytical results are reported. Air emissions from the combustion of ...
Energy Efficiency Improvement in the Petroleum Refining Industry
Worrell, Ernst; Galitsky, Christina
2005-05-01
Information has proven to be an important barrier in industrial energy efficiency improvement. Voluntary government programs aim to assist industry to improve energy efficiency by supplying information on opportunities. ENERGY STAR(R) supports the development of strong strategic corporate energy management programs, by providing energy management information tools and strategies. This paper summarizes ENERGY STAR research conducted to develop an Energy Guide for the Petroleum Refining industry. Petroleum refining in the United States is the largest in the world, providing inputs to virtually every economic sector, including the transport sector and the chemical industry. Refineries typically spend 50 percent of the cash operating costs (e.g., excluding capital costs and depreciation) on energy, making energy a major cost factor and also an important opportunity for cost reduction. The petroleum refining industry consumes about 3.1 Quads of primary energy, making it the single largest industrial energy user in the United States. Typically, refineries can economically improve energy efficiency by 20 percent. The findings suggest that given available resources and technology, there are substantial opportunities to reduce energy consumption cost-effectively in the petroleum refining industry while maintaining the quality of the products manufactured.
SPINVERT: a program for refinement of paramagnetic diffuse scattering data.
Paddison, Joseph A M; Stewart, J Ross; Goodwin, Andrew L
2013-11-13
We present a program (spinvert; http://spinvert.chem.ox.ac.uk) for refinement of magnetic diffuse scattering data for frustrated magnets, spin liquids, spin glasses, and other magnetically disordered materials. The approach uses reverse Monte Carlo refinement to fit a large configuration of spins to experimental powder neutron diffraction data. Despite fitting to spherically averaged data, this approach allows the recovery of the three-dimensional magnetic diffuse scattering pattern and the spin-pair correlation function. We illustrate the use of the spinvert program with two case studies. First, we use simulated powder data for the canonical antiferromagnetic Heisenberg model on the kagome lattice to discuss the sensitivity of spinvert refinement to both pairwise and higher-order spin correlations. The effect of limited experimental data on the results is also considered. Second, we re-analyse published experimental data on the frustrated system Y0.5Ca0.5BaCo4O7. The results from spinvert refinement indicate similarities between Y0.5Ca0.5BaCo4O7 and its parent compound YBaCo4O7, which were overlooked in previous analyses using powder data. PMID:24140881
Proof of concept test and evaluation, Lasentec refining sensor
Anderson, J.E.
1991-07-01
The Scanning Laser Microscopes (SLM) LAB-TEC 150 and PAR-TEC 200 were evaluated as instruments for monitoring fiber development during refining. The LAB-TEC 150 did not produce repeatable results which could be related to fiber development, as measured by Canadian Standard Freeness or hand sheet physical strength properties. The PAR-TEC 200 was found to correlate to strength development (Burst and Tensile Indices) during the first stages of laboratory Valley beating of bleached hardwood and softwood pulps. Preliminary testing of the PAR-TEC 200 in a pilot scale refining circuit was inconclusive. The influence of several process variables on instrument readings was investigated, including flow rate, probe position and consistency. It is likely that a dual sensor system would be required in a commercial mill environment to eliminate the effect of process variables. The next phase of the evaluation and development program will include two investigations: (1) a more scientific evaluation of which changes in fiber morphology the sensor is detecting during refining, and (2) a continuation of the in-line development, with a goal of eliminating process flow variables and more accurately monitoring fiber development by the use of two sensors, one before and one after refining.
Assimilating Remote Ammonia Observations with a Refined Aerosol Thermodynamics Adjoint
Ammonia emissions parameters in North America can be refined in order to improve the evaluation of modeled concentrations against observations. Here, we seek to do so by developing and applying the GEOS-Chem adjoint nested over North America to conduct assimilation of observations...
A Refined Item Digraph Analysis of a Proportional Reasoning Test.
ERIC Educational Resources Information Center
Bart, William M.; Williams-Morris, Ruth
1990-01-01
Refined item digraph analysis (RIDA) is a way of studying diagnostic and prescriptive testing. It permits assessment of a test item's diagnostic value by examining the extent to which the item has properties of ideal items. RIDA is illustrated with the Orange Juice Test, which assesses the proportionality concept. (TJH)
AGRICULTURAL RUNOFF MANAGEMENT (ARM) MODEL VERSION II: REFINEMENT AND TESTING
The Agricultural Runoff Management (ARM) Model has been refined and tested on small agricultural watersheds in Georgia and Michigan. The ARM Model simulates the hydrologic, sediment production, pesticide, and nutrient processes on the land surface and in the soil profile that det...
Melting of uranium-contaminated metal cylinders by electroslag refining
Uda, T.; Ozawa, Y.; Iba, H.
1987-12-01
Melt refining as a means of uranium decontamination of metallic wastes by electroslag refining was examined. Electroslag refining was selected because it is easy to scale up to the necessary industrial levels. Various thicknesses of iron and aluminum cylinders with uranium concentrations close to actual metallic wastes were melted by adding fluxes effective for decontamination. Thin-walled iron and aluminum cylinders with a fill ratio (electrode/mold cross-section ratio) of 0.05 could be melted, and the energy efficiency obtained was 16 to 25%. The ingot uranium concentration of the iron obtained was 0.01 to 0.015 ppm, which was close to the contamination level of the as-received specimen, while for aluminum it was 3 to 5 ppm, a few times higher than the as-received specimen contamination level of ~0.9 ppm. To melt a thin aluminum cylinder in a steady state with this fill ratio of 0.05, instantaneous electrode driving response control was desired. Electroslag refining gave better decontamination and energy economization results than a resistance furnace.
Process for electroslag refining of uranium and uranium alloys
Lewis, P.S. Jr.; Agee, W.A.; Bullock, J.S. IV; Condon, J.B.
1975-07-22
A process is described for electroslag refining of uranium and uranium alloys wherein molten uranium and uranium alloys are melted in a molten layer of a fluoride slag containing up to about 8 weight percent calcium metal. The calcium metal reduces oxides in the uranium and uranium alloys to provide them with an oxygen content of less than 100 parts per million. (auth)
Nucleation mechanisms of refined alpha microstructure in beta titanium alloys
NASA Astrophysics Data System (ADS)
Zheng, Yufeng
Due to a great combination of physical and mechanical properties, beta titanium alloys have become promising candidates in the chemical industry, aerospace, and biomedical materials. The microstructure of beta titanium alloys is the governing factor that determines their properties and performance, especially the size scale, distribution, and volume fraction of the precipitate phase in the parent phase matrix. Therefore, in order to enhance the performance of beta titanium alloys, it is critical to obtain a thorough understanding of microstructural evolution in beta titanium alloys under various thermal and/or mechanical processes. The present work focuses on the nucleation mechanisms of refined alpha microstructure and super-refined alpha microstructure in beta titanium alloys, in order to study the influence of instabilities within the parent phase matrix on precipitate nucleation, including compositional and/or structural instabilities. The study is primarily conducted in Ti-5Al-5Mo-5V-3Cr (wt%, Ti-5553), a commercial material for aerospace applications. Refined and super-refined precipitate microstructures in Ti-5553 are obtained under accurately temperature-controlled heat treatments. The characteristics of each microstructure are investigated in detail using various characterization techniques, such as SEM, TEM, STEM, HRSTEM, and 3D atom probe, to describe the morphology, distribution, structure, and composition of the microstructure. Nucleation mechanisms for the refined and super-refined precipitates are proposed to explain the features of the different precipitate microstructures in Ti-5553. The necessary thermodynamic conditions and the detailed process of the phase transformations are introduced. To verify the proposed nucleation mechanisms, thermodynamic calculations and phase-field simulations are performed using a database for the simple binary Ti-Mo system.
Procedures and computer programs for telescopic mesh refinement using MODFLOW
Leake, Stanley A.; Claar, David V.
1999-01-01
Ground-water models are commonly used to evaluate flow systems in areas that are small relative to entire aquifer systems. In many of these analyses, simulation of the entire flow system is not desirable or will not allow sufficient detail in the area of interest. The procedure of telescopic mesh refinement allows use of a small, detailed model in the area of interest by taking boundary conditions from a larger model that encompasses the model in the area of interest. Some previous studies have used telescopic mesh refinement; however, better procedures are needed in carrying out telescopic mesh refinement using the U.S. Geological Survey ground-water flow model, referred to as MODFLOW. This report presents general procedures and three computer programs for use in telescopic mesh refinement with MODFLOW. The first computer program, MODTMR, constructs MODFLOW data sets for a local or embedded model using MODFLOW data sets and simulation results from a regional or encompassing model. The second computer program, TMRDIFF, provides a means of comparing head or drawdown in the local model with head or drawdown in the corresponding area of the regional model. The third program, RIVGRID, provides a means of constructing data sets for the River Package, Drain Package, General-Head Boundary Package, and Stream Package for regional and local models using grid-independent data specifying locations of these features. RIVGRID may be needed in some applications of telescopic mesh refinement because regional-model data sets do not contain enough information on locations of head-dependent flow features to properly locate the features in local models. The program is a general utility program that can be used in constructing data sets for head-dependent flow packages for any MODFLOW model under construction.
The use of Fourier reverse transforms in crystallographic phase refinement
Ringrose, S.
1997-10-08
Often a crystallographer obtains an electron density map which shows only part of the structure. In such cases, the phasing of the trial model is poor enough that the electron density map may show peaks in some of the atomic positions, but other atomic positions are not visible. There may also be extraneous peaks present which are not due to atomic positions. A method has been developed for the determination of crystal structures that have resisted solution through normal crystallographic methods. PHASER is a series of FORTRAN programs which aids in the structure solution of poorly phased electron density maps by refining the crystallographic phases. It facilitates the refinement of such poorly phased electron density maps for difficult structures which might otherwise not be solvable. The trial model, which serves as the starting point for the phase refinement, may be acquired by several routes such as direct methods or Patterson methods. Modifications are made to the reverse transform process based on several assumptions. First, the starting electron density map is modified based on the fact that physically the electron density must be non-negative at all points; in practice a small positive cutoff is used. A reverse Fourier transform is computed based on the modified electron density map. Second, the authors assume that a better electron density map will result from using the observed magnitudes of the structure factors combined with the phases calculated in the reverse transform. After convergence has been reached, more atomic positions and fewer extraneous peaks are observed in the refined electron density map. The starting model need not be very large to achieve success with PHASER; successful phase refinement has been achieved with a starting model that consists of only 5% of the total scattering power of the full molecule. The second part of the thesis discusses three crystal structure determinations.
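The cycle described in this abstract (clip the map to be non-negative, reverse-transform it, then recombine the observed magnitudes with the newly calculated phases) can be sketched in a few lines. This is a toy 1-D illustration, not the actual PHASER FORTRAN; the function and variable names are invented:

```python
import numpy as np

def refine_phases(f_obs, rho0, n_iter=50, cutoff=0.0):
    """Toy 1-D density-modification loop in the spirit of the abstract:
    enforce a non-negative map, reverse-transform it, then keep the
    observed structure-factor magnitudes with the calculated phases."""
    rho = rho0.astype(float).copy()
    for _ in range(n_iter):
        rho = np.where(rho > cutoff, rho, 0.0)   # map must be non-negative
        f_calc = np.fft.fft(rho)                 # reverse Fourier transform
        phases = np.angle(f_calc)                # keep only the phases
        rho = np.fft.ifft(f_obs * np.exp(1j * phases)).real  # improved map
    return rho
```

In a real refinement the transforms are 3-D, symmetry-aware, and the cutoff is a small positive value rather than zero; the loop structure is the same.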
AMR++: A design for parallel object-oriented adaptive mesh refinement
Quinlan, D.
1997-11-01
Adaptive mesh refinement computations are complicated by their dynamic nature. In the serial environment they require substantial infrastructure to support the regridding processes, intergrid operations, and local bookkeeping of the positions of grids relative to one another. In the parallel environment the dynamic behavior is more problematic because it requires dynamic distribution support and load balancing. Parallel AMR is further complicated by substantial task parallelism, in addition to the obvious data parallelism; this task parallelism requires additional infrastructure to support efficiently. The degree of parallelism is typically dependent upon the algorithms in use and the equations being solved. Different algorithms involve significant compromises between computation and communication, and substantial research work is often required to define efficient methods and suitable infrastructure. The purpose of this paper is to introduce AMR++, an object-oriented library which forms a part of the OVERTURE framework, a much larger object-oriented numerical framework developed and supported at Los Alamos National Laboratory and distributed on the Web for the last several years.
A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model
Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A
2009-03-03
Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR that relaxes the requirement that material boundaries must be along mesh boundaries.
Refinement, Validation and Application of Cloud-Radiation Parameterization in a GCM
Dr. Graeme L. Stephens
2009-04-30
The research performed under this award was conducted along three related fronts: (1) refinement and assessment of parameterizations of sub-grid scale radiative transport in GCMs; (2) diagnostic studies that use ARM observations of clouds and convection in an effort to understand the effects of moist convection on its environment, including how convection influences clouds and radiation (this aspect focuses on developing and testing methodologies designed to use ARM data more effectively in atmospheric models, both at the cloud-resolving model scale and the global climate model scale); and (3) use of (1) and (2) in combination with both models and observations of varying complexity to study key radiation feedbacks. Our work toward these objectives involved three corresponding efforts. First, novel diagnostic techniques were developed and applied to ARM observations to understand and characterize the effects of moist convection on the dynamical and thermodynamical environment in which it occurs. Second, an in-house GCM radiative transfer algorithm (BUGSrad) was employed along with an optimal estimation cloud retrieval algorithm to evaluate the ability to reproduce cloudy-sky radiative flux observations. Assessments using a range of GCMs with various moist convective parameterizations, to evaluate the fidelity with which the parameterizations reproduce key observable features of the environment, were also started in the final year of this award. The third study area involved cloud-radiation feedbacks, which we examined in both cloud-resolving and global climate models.
2D photonic crystal complete band gap search using a cyclic cellular automaton refination
NASA Astrophysics Data System (ADS)
González-García, R.; Castañón, G.; Hernández-Figueroa, H. E.
2014-11-01
We present a refination method based on a cyclic cellular automaton (CCA) that simulates a crystallization-like process, aided with a heuristic evolutionary method called differential evolution (DE) used to perform an ordered search of full photonic band gaps (FPBGs) in a 2D photonic crystal (PC). The solution is proposed as a combinatorial optimization of the elements in a binary array. These elements represent the existence or absence of a dielectric material surrounded by air, thus representing a general geometry whose search space is defined by the number of elements in such array. A block-iterative frequency-domain method was used to compute the FPBGs on a PC, when present. DE has proved to be useful in combinatorial problems and we also present an implementation feature that takes advantage of the periodic nature of PCs to enhance the convergence of this algorithm. Finally, we used this methodology to find a PC structure with a 19% bandgap-to-midgap ratio without requiring previous information of suboptimal configurations and we made a statistical study of how it is affected by disorder in the borders of the structure compared with a previous work that uses a genetic algorithm.
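The differential evolution heuristic named in this abstract has a simple textbook continuous form (DE/rand/1/bin). The sketch below is that generic form, not the authors' combinatorial binary-array variant; population size, F, CR, and all names are illustrative defaults:

```python
import numpy as np

def de_rand_1_bin(obj, bounds, pop_size=20, f=0.7, cr=0.9, gens=200, seed=0):
    """Textbook DE/rand/1/bin minimizer (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, len(lo)))
    cost = np.array([obj(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: combine three distinct members other than i
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + f * (b - c), lo, hi)
            # binomial crossover, forcing at least one mutant coordinate
            cross = rng.random(len(lo)) < cr
            cross[rng.integers(len(lo))] = True
            trial = np.where(cross, mutant, pop[i])
            # greedy selection
            tc = obj(trial)
            if tc <= cost[i]:
                pop[i], cost[i] = trial, tc
    best = cost.argmin()
    return pop[best], cost[best]
```

On a smooth low-dimensional objective such as the sphere function, this converges well below 1e-2 within a few hundred generations.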
Reasoning about systolic algorithms
Purushothaman, S.; Subrahmanyam, P.A.
1988-12-01
The authors present a methodology for verifying the correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify the correctness of systolic algorithms using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1994-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: a gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
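The recursive subdivision and tree storage described in this abstract can be illustrated with a minimal quadtree sketch. All names are invented, and the cut-cell clipping and flow solution are omitted; only the refine-and-traverse skeleton is shown:

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    # lower-left corner and edge length of a square Cartesian cell
    x: float
    y: float
    size: float
    children: list = field(default_factory=list)

def refine(cell, needs_refinement, max_depth, depth=0):
    """Recursively subdivide a cell into four equal children wherever the
    supplied predicate flags it, down to max_depth (illustrative only)."""
    if depth == max_depth or not needs_refinement(cell):
        return
    h = cell.size / 2
    for dx in (0.0, h):
        for dy in (0.0, h):
            child = Cell(cell.x + dx, cell.y + dy, h)
            cell.children.append(child)
            refine(child, needs_refinement, max_depth, depth + 1)

def leaves(cell):
    # the leaf cells of the tree form the computational mesh
    if not cell.children:
        return [cell]
    return [l for c in cell.children for l in leaves(c)]
```

In the actual scheme the predicate would be a solution-adaptive error indicator rather than a geometric test, and the tree also stores cell-to-cell connectivity for the flux computation.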
Competing Sudakov veto algorithms
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Verheyen, Rob
2016-07-01
We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
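The basic veto algorithm that the paper's formalism analyzes is standard in parton showers: draw trial scales from an easy overestimate and accept with probability f/g. A minimal sketch with a constant overestimate c >= f(t) (invented names, no competition between channels) is:

```python
import math
import random

def sudakov_veto(f, c, rng, t_max=float("inf")):
    """Draw a scale t distributed as f(t) * exp(-int_0^t f(s) ds), using a
    constant overestimate c >= f(t) (standard veto algorithm; sketch only)."""
    t = 0.0
    while True:
        # trial scale from the overestimate's Sudakov factor, by inversion
        t += -math.log(rng.random()) / c
        if t > t_max:
            return None          # no emission before the cutoff
        if rng.random() < f(t) / c:
            return t             # accept; otherwise continue from t
```

When f is itself constant and equal to c, every trial is accepted and the result reduces to an ordinary exponential distribution, which makes the sketch easy to check.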
Unsupervised classification algorithm based on EM method for polarimetric SAR images
NASA Astrophysics Data System (ADS)
Fernández-Michelli, J. I.; Hurtado, M.; Areta, J. A.; Muravchik, C. H.
2016-07-01
In this work we develop an iterative classification algorithm using complex Gaussian mixture models for polarimetric complex SAR data. It is an unsupervised algorithm which does not require training data or an initial set of classes. Additionally, it determines the model order from the data, which allows representing the data structure with minimum complexity. The algorithm consists of four steps: initialization, model selection, refinement and smoothing. After a simple initialization stage, the EM algorithm is iteratively applied in the model selection step to compute the model order and an initial classification for the refinement step. The refinement step uses Classification EM (CEM) to reach the final classification, and the smoothing stage improves the results by means of non-linear filtering. The algorithm is applied to both simulated and real Single Look Complex data of the EMISAR mission and compared with the Wishart classification method. We use the confusion matrix and kappa statistic to make the comparison for simulated data whose ground truth is known, and apply the Davies-Bouldin index to compare both classifications for real data. The results obtained for both types of data validate our algorithm and show that its performance is comparable to Wishart's in terms of classification quality.
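The E-step/M-step iteration at the core of such a scheme can be sketched for a plain 1-D Gaussian mixture. This is not the complex Gaussian models used for polarimetric SAR, and the initialization and names are invented; it only illustrates the EM loop itself:

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=100):
    """Minimal EM for a 1-D Gaussian mixture (illustrative sketch).
    Initializes means at data quantiles; floors variances to avoid collapse."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
              / np.sqrt(2.0 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-9)
    return w, mu, var, r.argmax(axis=1)
```

The hard-assignment variant (CEM) used in the refinement step replaces the soft responsibilities with their argmax before the M-step.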
Algorithmic improvements to the real-time implementation of a synthetic aperture sonar beam former
NASA Astrophysics Data System (ADS)
Freeman, Douglas K.
1997-07-01
Coastal Systems Station has translated its synthetic aperture sonar beamformer from linear processing to parallel processing. The initial implementation included many linear processes delegated to individual processors and neglected algorithmic refinements available to parallel processing. The steps taken to achieve increased computational speed for real-time beam forming are presented.
Crane, N K; Parsons, I D; Hjelmstad, K D
2002-03-21
Adaptive mesh refinement selectively subdivides the elements of a coarse user supplied mesh to produce a fine mesh with reduced discretization error. Effective use of adaptive mesh refinement coupled with an a posteriori error estimator can produce a mesh that solves a problem to a given discretization error using far fewer elements than uniform refinement. A geometric multigrid solver uses increasingly finer discretizations of the same geometry to produce a very fast and numerically scalable solution to a set of linear equations. Adaptive mesh refinement is a natural method for creating the different meshes required by the multigrid solver. This paper describes the implementation of a scalable adaptive multigrid method on a distributed memory parallel computer. Results are presented that demonstrate the parallel performance of the methodology by solving a linear elastic rocket fuel deformation problem on an SGI Origin 3000. Two challenges must be met when implementing adaptive multigrid algorithms on massively parallel computing platforms. First, although the fine mesh for which the solution is desired may be large and scaled to the number of processors, the multigrid algorithm must also operate on much smaller fixed-size data sets on the coarse levels. Second, the mesh must be repartitioned as it is adapted to maintain good load balancing. In an adaptive multigrid algorithm, separate mesh levels may require separate partitioning, further complicating the load balance problem. This paper shows that, when the proper optimizations are made, parallel adaptive multigrid algorithms perform well on machines with several hundreds of processors.
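The geometric multigrid idea described above can be conveyed by a serial 1-D V-cycle for the Poisson problem -u'' = f with zero Dirichlet boundaries. This is a teaching sketch with invented names (injection restriction, linear-interpolation prolongation), not the paper's parallel 3-D elasticity solver:

```python
import numpy as np

def smooth(u, f, h, sweeps):
    # Gauss-Seidel relaxation for -u'' = f, zero Dirichlet boundaries
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def v_cycle(u, f, h, n_pre=2, n_post=2):
    """One geometric multigrid V-cycle on a grid of 2**k + 1 points."""
    u = smooth(u, f, h, n_pre)
    if len(u) <= 3:
        return u                     # coarsest grid: smoothing solves exactly
    r = np.zeros_like(u)             # residual r = f - A u
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
    # restrict the residual to the coarse grid and solve for the error there
    ec = v_cycle(np.zeros_like(r[::2]), r[::2].copy(), 2.0 * h, n_pre, n_post)
    # prolongate the coarse-grid correction by linear interpolation
    u += np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)
    return smooth(u, f, h, n_post)
```

Each coarse level works on the same geometry at half the resolution, which is exactly the hierarchy that adaptive mesh refinement produces for free.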
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively for a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
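A toy version of the shift-and-mask search can be sketched as follows. This is a simplification for illustration only; the function names and search bounds are ours, not the actual subalgorithms:

```python
def synthesize_hash(keys, max_shift=24, max_bits=12):
    # Search shift/mask combinations; keep the collision-free mapping whose
    # mapped values span the smallest range (i.e., the fewest gaps).
    best = None
    for shift in range(max_shift):
        for bits in range(1, max_bits + 1):
            mask = (1 << bits) - 1
            mapped = [(k >> shift) & mask for k in keys]
            if len(set(mapped)) == len(keys):          # no collisions
                span = max(mapped) - min(mapped) + 1
                if best is None or span < best[0]:
                    best = (span, shift, mask)
    return best                                        # (table size, shift, mask)

def build_membership(keys):
    span, shift, mask = synthesize_hash(keys)
    base = min((k >> shift) & mask for k in keys)
    table = [None] * span
    for k in keys:
        table[((k >> shift) & mask) - base] = k
    def member(x):
        # Constant time: one shift, one mask, one compare; no probing.
        i = ((x >> shift) & mask) - base
        return 0 <= i < span and table[i] == x
    return member
```

Because the table stores the original keys, the final compare rejects non-members, so no secondary hashing or probing is ever needed, matching the guarantee described above.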
Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.
2013-01-01
Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations, utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows natural, interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The experiments showed a significant increase in performance in terms of mean absolute surface distance errors (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting a real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation
Parallel scheduling algorithms
Dekel, E.; Sahni, S.
1983-01-01
Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.
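For context, one of the listed problems — minimizing the number of tardy jobs on a single machine — has a classical O(n log n) serial solution (the Moore-Hodgson algorithm), sketched below; the paper's contribution is fast parallel algorithms for such problems, not this serial version:

```python
import heapq

def min_tardy_jobs(jobs):
    # Moore-Hodgson: sort by due date; whenever the running completion time
    # exceeds a due date, discard the longest job scheduled so far.
    jobs = sorted(jobs, key=lambda job: job[1])   # jobs are (processing_time, due_date)
    scheduled, t = [], 0                          # max-heap via negated times
    for p, d in jobs:
        heapq.heappush(scheduled, -p)
        t += p
        if t > d:
            t += heapq.heappop(scheduled)         # removes the largest processing time
    return len(jobs) - len(scheduled)             # number of tardy jobs
```

The heap pop guarantees that, among schedules keeping the current on-time set feasible, the one retained finishes earliest, which is what makes the greedy choice optimal.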
Developmental Algorithms Have Meaning!
ERIC Educational Resources Information Center
Green, John
1997-01-01
Adapts Stanic and McKillip's ideas for the use of developmental algorithms to propose that the present emphasis on symbolic manipulation should be tempered with an emphasis on the conceptual understanding of the mathematics underlying the algorithm. Uses examples from the areas of numeric computation, algebraic manipulation, and equation solving…
Grain refinement during primary breakdown of alloy 718
Mataya, M.C.; Robinson, M.L.; Chang, D.; Weis, M.J.; Edwards, G.R.; Matlock, D.K.
1987-01-01
Grain refinement during primary breakdown of a production size alloy 718 ingot was investigated by compression testing of cylindrical specimens taken from various locations in the ingot cross section. Deformation was applied over a temperature range of 900 to 1150°C, at strain rates from 0.1 s⁻¹ to 1.0 s⁻¹, and to strains of 0.25 to 1.0. Variations in grain morphology and orientation had a pronounced effect on the deformed sample geometry but little effect on recrystallization. Recrystallization was observed to nucleate at primary particles and high angle grain boundaries. Static recrystallization rather than dynamic recrystallization was the mechanism responsible for refinement of the coarse cast structure. The cycle time for one pass was identified as the critical processing variable with respect to microstructural evolution for a set of deformation conditions typically used during breakdown by radial forging. 10 refs., 18 figs.
Chemical and physical aspects of refining coal liquids
NASA Astrophysics Data System (ADS)
Shah, Y. T.; Stiegel, G. J.; Krishnamurthy, S.
1981-02-01
Increasing costs and declining reserves of petroleum are forcing oil importing countries to develop alternate energy sources. The direct liquefaction of coal is currently being investigated as a viable means of producing substitute liquid fuels. The coal liquids derived from such processes are typically high in nitrogen, oxygen and sulfur besides having a high aromatic and metals content. It is therefore envisaged that modifications to existing petroleum refining technology will be necessary in order to economically upgrade coal liquids. In this review, compositional data for various coal liquids are presented and compared with those for petroleum fuels. Studies reported on the stability of coal liquids are discussed. The feasibility of processing blends of coal liquids with petroleum feedstocks in existing refineries is evaluated. The chemistry of hydroprocessing is discussed through kinetic and mechanistic studies using compounds which are commonly detected in coal liquids. The pros and cons of using conventional petroleum refining catalysts for upgrading coal liquids are discussed.
Element Distribution in Silicon Refining: Thermodynamic Model and Industrial Measurements
NASA Astrophysics Data System (ADS)
Næss, Mari K.; Kero, Ida; Tranell, Gabriella; Tang, Kai; Tveit, Halvard
2014-11-01
To establish an overview of impurity elemental distribution among silicon, slag, and gas/fume in the refining process of metallurgical grade silicon (MG-Si), an industrial measurement campaign was performed at the Elkem Salten MG-Si plant in Norway. Samples of in- and outgoing mass streams, i.e., tapped Si, flux and cooling materials, refined Si, slag, and fume, were analyzed by high-resolution inductively coupled plasma mass spectrometry (HR-ICP-MS), with respect to 62 elements. The elemental distributions were calculated and the experimental data compared with equilibrium estimations based on commercial and proprietary, published databases and carried out using the ChemSheet software. The results are discussed in terms of boiling temperatures, vapor pressures, redox potentials, and activities of the elements. These model calculations indicate a need for expanded databases with more and reliable thermodynamic data for trace elements in general and fume constituents in particular.
Facade model refinement by fusing terrestrial laser data and image
NASA Astrophysics Data System (ADS)
Liu, Yawen; Qin, Sushun
2015-12-01
The building facade model is one of the main landscapes of a city and part of the basic data of city geographic information. It is widely used in accurate path planning, realistic navigation through the urban environment, location-based applications, etc. In this paper, a method of facade model refinement by fusing terrestrial laser data and images is presented. It matches model edges against image lines, combined with laser data verification, and effectively refines the facade geometry model reconstructed from laser data. The laser data of geometric structures on the building facade, such as windows, balconies and doors, are segmented and used as a constraint for further selecting the optimal model edges located at the boundary between point data and data gaps. The results demonstrate that deviations of model edges caused by the laser sampling interval can be removed by the proposed method.
Assume-Guarantee Abstraction Refinement Meets Hybrid Systems
NASA Technical Reports Server (NTRS)
Bogomolov, Sergiy; Frehse, Goran; Greitschus, Marius; Grosu, Radu; Pasareanu, Corina S.; Podelski, Andreas; Strump, Thomas
2014-01-01
Compositional verification techniques in the assume-guarantee style have been successfully applied to transition systems to efficiently reduce the search space by leveraging the compositional nature of the systems under consideration. We adapt these techniques to the domain of hybrid systems with affine dynamics. To build assumptions we introduce an abstraction based on location merging. We integrate the assume-guarantee style analysis with automatic abstraction refinement. We have implemented our approach in the symbolic hybrid model checker SpaceEx. The evaluation shows its practical potential. To the best of our knowledge, this is the first work combining assume-guarantee reasoning with automatic abstraction refinement in the context of hybrid automata.
Contactless heater floating zone refining and crystal growth
NASA Technical Reports Server (NTRS)
Kou, Sindo (Inventor); Lan, Chung-Wen (Inventor)
1993-01-01
Floating zone refining or crystal growth is carried out by providing rapid relative rotation of a feed rod and finish rod while providing heat to the junction between the two rods so that significant forced convection occurs in the melt zone between the two rods. The forced convection distributes heat in the melt zone to allow the rods to be melted through with a much shorter melt zone length than possible utilizing conventional floating zone processes. One of the rods can be rotated with respect to the other, or both rods can be counter-rotated, with typical relative rotational speeds of the rods ranging from 200 revolutions per minute (RPM) to 400 RPM or greater. Zone refining or crystal growth is carried out by traversing the melt zone through the feed rod.
Refining the Nasal Dorsum with Free Diced Cartilage.
Hoehne, Julius; Gubisch, Wolfgang; Kreutzer, Christian; Haack, Sebastian
2016-08-01
Refining the nasal dorsum has become a major challenge in modern rhinoplasty as irregularities of the nasal dorsum account for a significant number of revision surgeries. In our department, free diced cartilage is now routinely applied for smoothening of the nasal dorsum. In this retrospective study, the outcomes with regard to irregularities or contour deficits of the nasal dorsum of 431 rhinoplasty cases operated by a single surgeon between July 2013 and June 2015, using free diced cartilage, are compared with 327 cases operated by the same surgeon between January 2007 and December 2008, before the introduction of the free diced cartilage technique. A decrease in early revision surgeries (i.e., revision within the 2-year period evaluated) due to dorsal irregularities or contour deficits is seen. Being a quick, easy, and highly cost-effective procedure, we feel that free diced cartilage is currently the ideal technique for refinements of the nasal dorsum. PMID:27494578
Grain refinement of high strength steels to improve cryogenic toughness
NASA Technical Reports Server (NTRS)
Rush, H. F.
1985-01-01
Grain-refining techniques using multistep heat treatments to reduce the grain size of five commercial high-strength steels were investigated. The goal of this investigation was to improve the low-temperature toughness as measured by Charpy V-notch impact test without a significant loss in tensile strength. The grain size of four of five alloys investigated was successfully reduced up to 1/10 of original size or smaller with increases in Charpy impact energy of 50 to 180 percent at -320 F. Tensile properties were reduced from 0 to 25 percent for the various alloys tested. An unexpected but highly beneficial side effect from grain refining was improved machinability.
PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit
NASA Technical Reports Server (NTRS)
MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles
1999-01-01
In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.
A novel application of theory refinement to student modeling
Baffes, P.T.; Mooney, R.J.
1996-12-31
Theory refinement systems developed in machine learning automatically modify a knowledge base to render it consistent with a set of classified training examples. We illustrate a novel application of these techniques to the problem of constructing a student model for an intelligent tutoring system (ITS). Our approach is implemented in an ITS authoring system called ASSERT which uses theory refinement to introduce errors into an initially correct knowledge base so that it models incorrect student behavior. The efficacy of the approach has been demonstrated by evaluating a tutor developed with ASSERT with 75 students tested on a classification task covering concepts from an introductory course on the C++ programming language. The system produced reasonably accurate models and students who received feedback based on these models performed significantly better on a post test than students who received simple reteaching.
Venezuela's stake in US refining may grow: xenophobia addressed
Not Available
1987-09-23
Is this an invasion of U.S. oil industry sovereignty, or a happy marriage of upstream and downstream between US and foreign interests? Venezuela, a founding member of the Organization of Petroleum Exporting Countries which has also been a chief supplier to the US during times of peace and war, now owns half of two important US refining and marketing organizations. Many US marketers have felt uneasy about this foreign penetration of their turf. In this issue, for the sake of public information, the entire policy statement from the leader of that Venezuelan market strategy is provided. This issue also contains the following: (1) ED refining netback data for the US Gulf and West Coasts, Rotterdam, and Singapore as of late September, 1987; and (2) ED fuel price/tax series for countries of the Eastern Hemisphere, Sept. 19 edition. 4 figures, 6 tables.
Evolving a Puncture Black Hole with Fixed Mesh Refinement
NASA Technical Reports Server (NTRS)
Imbiriba, Breno; Baker, John; Choi, Dae-Il; Centrella, Joan; Fiske, David R.; Brown, J. David; van Meter, James R.; Olson, Kevin
2004-01-01
We present a detailed study of the effects of mesh refinement boundaries on the convergence and stability of simulations of black hole spacetimes. We find no technical problems. In our applications of this technique to the evolution of puncture initial data, we demonstrate that it is possible to simultaneously maintain second order convergence near the puncture and extend the outer boundary beyond 100M, thereby approaching the asymptotically flat region in which boundary condition problems are less difficult.
JT9D ceramic outer air seal system refinement program
NASA Technical Reports Server (NTRS)
Gaffin, W. O.
1982-01-01
The abradability and durability characteristics of the plasma sprayed system were improved by refinement and optimization of the plasma spray process and the metal substrate design. The acceptability of the final seal system for engine testing was demonstrated by an extensive rig test program which included thermal shock tolerance, thermal gradient, thermal cycle, erosion, and abradability tests. An interim seal system design was also subjected to 2500 endurance test cycles in a JT9D-7 engine.
REFINING AND END USE STUDY OF COAL LIQUIDS
Unknown
2002-01-01
This document summarizes all of the work conducted as part of the Refining and End Use Study of Coal Liquids. There were several distinct objectives set, as the study developed over time: (1) Demonstration of a Refinery Accepting Coal Liquids; (2) Emissions Screening of Indirect Diesel; (3) Biomass Gasification F-T Modeling; and (4) Updated Gas to Liquids (GTL) Baseline Design/Economic Study.
Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Leng, W.; Zhong, S.
2008-12-01
In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. New techniques, adaptive mesh refinement (AMR), allow local mesh refinement wherever high resolution is needed, while leaving other regions with relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement the AMR techniques into 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method into the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results in van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to precisely trace their evolution. Our AMR code is thus well suited to thermal-chemical convection problems which need high resolution to resolve the evolution of chemical boundaries, such as entrainment problems [Sleep, 1988].
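The octree-based local refinement strategy can be illustrated with a 2-D quadtree sketch (our own simplified code, not the authors'): cells split recursively wherever an error indicator, here the variation of a sharp interface function across the cell, exceeds a tolerance:

```python
import math

class Cell:
    # A quadtree cell; the paper's code is octree-based, so this 2-D sketch
    # only illustrates the refine-where-needed idea.
    def __init__(self, x0, x1, y0, y1, depth=0):
        self.bounds, self.depth, self.children = (x0, x1, y0, y1), depth, []

    def refine_where(self, indicator, tol, max_depth):
        # Recursively split cells whose error indicator exceeds tol.
        x0, x1, y0, y1 = self.bounds
        if self.depth < max_depth and indicator(x0, x1, y0, y1) > tol:
            xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
            self.children = [Cell(a, b, c, d, self.depth + 1)
                             for a, b in ((x0, xm), (xm, x1))
                             for c, d in ((y0, ym), (ym, y1))]
            for child in self.children:
                child.refine_where(indicator, tol, max_depth)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]
```

Run with an indicator that measures the jump of a tanh "chemical boundary" across each cell, the leaves concentrate around the interface while smooth regions stay coarse, which is qualitatively the element-count saving quoted above.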
Decadal climate prediction with a refined anomaly initialisation approach
NASA Astrophysics Data System (ADS)
Volpi, Danila; Guemas, Virginie; Doblas-Reyes, Francisco J.; Hawkins, Ed; Nichols, Nancy K.
2016-06-01
In decadal prediction, the objective is to exploit both the sources of predictability from the external radiative forcings and from the internal variability to provide the best possible climate information for the next decade. Predicting the climate system internal variability relies on initialising the climate model from observational estimates. We present a refined method of anomaly initialisation (AI) applied to the ocean and sea ice components of the global climate forecast model EC-Earth, with the following key innovations: (1) the use of a weight applied to the observed anomalies, in order to avoid the risk of introducing anomalies recorded in the observed climate, whose amplitude does not fit in the range of the internal variability generated by the model; (2) the AI of the ocean density, instead of calculating it from the anomaly initialised state of temperature and salinity. An experiment initialised with this refined AI method has been compared with a full field and standard AI experiment. Results show that the use of such refinements enhances the surface temperature skill over part of the North and South Atlantic, part of the South Pacific and the Mediterranean Sea for the first forecast year. However, part of such improvement is lost in the following forecast years. For the tropical Pacific surface temperature, the full field initialised experiment performs the best. The prediction of the Arctic sea-ice volume is improved by the refined AI method for the first three forecast years and the skill of the Atlantic multidecadal oscillation is significantly increased compared to a non-initialised forecast, along the whole forecast time.
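A minimal sketch of the weighted anomaly initialisation of innovation (1) might look like the following. This is our own plausible form of the weighting, labeled as an assumption; the paper's exact formula may differ:

```python
import numpy as np

def anomaly_init(obs, obs_clim, model_clim, sigma_obs, sigma_model):
    # Add the observed anomaly to the model climatology, scaled by the
    # ratio of model-to-observed variability (capped at 1), so that the
    # introduced anomaly cannot exceed the model's own variability range.
    w = np.minimum(1.0, sigma_model / sigma_obs)
    return model_clim + w * (obs - obs_clim)
```

When the model's internal variability is smaller than the observed variability, the weight damps the anomaly; otherwise the full observed anomaly is used.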
A Precision Recursive Estimate for Ephemeris Refinement (PREFER)
NASA Technical Reports Server (NTRS)
Gibbs, B.
1980-01-01
A recursive filter/smoother orbit determination program was developed to refine the ephemerides produced by a batch orbit determination program (e.g., CELEST, GEODYN). The program PREFER can handle a variety of ground and satellite-to-satellite tracking types as well as satellite altimetry. It was tested on simulated data that contained significant modeling errors, and the results clearly demonstrate the superiority of the program compared to batch estimation.
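The recursive filter/smoother structure can be illustrated with a generic forward Kalman filter followed by a Rauch-Tung-Striebel backward smoother. This is a standard textbook scheme on a toy 1-D constant-velocity state, not PREFER's actual dynamics or measurement types:

```python
import numpy as np

def kalman_rts(zs, dt=1.0, q=1e-3, r=0.25):
    # Forward Kalman filter + RTS backward smoother for a state
    # [position, velocity] observed through noisy position measurements zs.
    F = np.array([[1.0, dt], [0.0, 1.0]])             # state transition
    H = np.array([[1.0, 0.0]])                        # position is observed
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])               # process noise
    x, P = np.zeros(2), 10.0 * np.eye(2)
    xf, Pf, xp, Pp = [], [], [], []                   # filtered / predicted
    for z in zs:
        xpred, Ppred = F @ x, F @ P @ F.T + Q         # predict
        K = Ppred @ H.T / (H @ Ppred @ H.T + r)       # Kalman gain
        x = xpred + (K * (z - H @ xpred)).ravel()     # measurement update
        P = (np.eye(2) - K @ H) @ Ppred
        xf.append(x); Pf.append(P); xp.append(xpred); Pp.append(Ppred)
    xs = xf[-1]
    out = [xs]
    for k in range(len(zs) - 2, -1, -1):              # backward pass
        C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])    # smoother gain
        xs = xf[k] + C @ (xs - xp[k + 1])
        out.append(xs)
    return np.array(out[::-1])                        # smoothed states
```

The backward pass is what distinguishes a smoother from a filter: each refined state uses measurements both before and after it, analogous to refining a batch ephemeris over the whole arc.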
Refinement of Atomic Structures Against cryo-EM Maps.
Murshudov, G N
2016-01-01
This review describes some of the methods for atomic structure refinement (fitting) against medium/high-resolution single-particle cryo-EM reconstructed maps. Some of the tools developed for macromolecular X-ray crystal structure analysis, especially those encapsulating prior chemical and structural information can be transferred directly for fitting into cryo-EM maps. However, despite the similarities, there are significant differences between data produced by these two techniques; therefore, different likelihood functions linking the data and model must be used in cryo-EM and crystallographic refinement. Although tools described in this review are mostly designed for medium/high-resolution maps, if maps have sufficiently good quality, then these tools can also be used at moderately low resolution, as shown in one example. In addition, the use of several popular crystallographic methods is strongly discouraged in cryo-EM refinement, such as 2Fo-Fc maps, solvent flattening, and feature-enhanced maps (FEMs) for visualization and model (re)building. Two problems in the cryo-EM field are overclaiming resolution and severe map oversharpening. Both of these should be avoided; if data of higher resolution than the signal are used, then overfitting of model parameters into the noise is unavoidable, and if maps are oversharpened, then at least parts of the maps might become very noisy and ultimately uninterpretable. Both of these may result in suboptimal and even misleading atomic models. PMID:27572731
Shading-based DEM refinement under a comprehensive imaging model
NASA Astrophysics Data System (ADS)
Peng, Jianwei; Zhang, Yi; Shan, Jie
2015-12-01
This paper introduces an approach to refine coarse digital elevation models (DEMs) based on the shape-from-shading (SfS) technique using a single image. Different from previous studies, this approach is designed for heterogeneous terrain and derived from a comprehensive (extended) imaging model accounting for the combined effect of atmosphere, reflectance, and shading. To solve this intrinsically ill-posed problem, the least squares method and a subsequent optimization procedure are applied to estimate the shading component, from which the terrain gradient is recovered with a modified optimization method. Integrating the resultant gradients then yields a refined DEM at the same resolution as the input image. The proposed SfS method is evaluated using 30 m Landsat-8 OLI multispectral images and 30 m SRTM DEMs. As demonstrated in this paper, the proposed approach is able to reproduce terrain structures with higher fidelity and, at medium to large up-scale ratios, can achieve elevation accuracy 20-30% better than conventional interpolation methods. Further, this property is shown to be stable and independent of topographic complexity. With the ever-increasing public availability of satellite images and DEMs, the developed technique is meaningful for global or local DEM product refinement.
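The gradient-integration step described above is commonly done as a least-squares integration in the Fourier domain (Frankot-Chellappa style); the sketch below illustrates only that generic step, not the paper's modified optimization method:

```python
import numpy as np

def integrate_gradients(p, q):
    # Least-squares integration of a gradient field p = dz/dx, q = dz/dy
    # into a surface z via the FFT (assumes periodic boundaries).
    h, w = p.shape
    u, v = np.meshgrid(2 * np.pi * np.fft.fftfreq(w),
                       2 * np.pi * np.fft.fftfreq(h))
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0                  # the surface mean is unrecoverable
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0                      # pin the mean to zero
    return np.real(np.fft.ifft2(Z))
```

For an exactly integrable, band-limited gradient field this recovers the surface up to its mean; real SfS gradients are noisy and non-integrable, which is why the least-squares projection matters.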
Tracking-refinement modeling for solar-collector control
Biggs, F.
1980-01-01
A closed-loop sun-tracking control used in conjunction with an open-loop system can utilize the unique features of both methods to obtain an improved sun-tracking capability. The open-loop part of the system uses a computer with clock and ephemeris input to acquire the sun at startup, to provide alignment during cloud passage, and to give an approximate sun-tracking capability throughout the day. The closed-loop portion of the system refines this alignment in order to maximize the collected solar power. For a parabolic trough that utilizes a tube along its focal line to collect energy in a fluid, a resistance wire attached to the tube can provide the sensor for the closed-loop part of the control. This kind of tracking refinement helps to compensate for such time-dependent effects as sag of the absorber tube and deformation of the concentrator surface from gravity or wind loading, temperature gradients, and manufacturing tolerances. A model is developed to explain the behavior of a resistance wire which is wrapped around the absorber tube of a parabolic-trough concentrator and used as a sensor in a tracking-refinement control.
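The closed-loop trim can be caricatured as a simple hill-climbing loop on the pointing offset, using the sensed absorber power as feedback. This is a hypothetical minimal controller for illustration, not the resistance-wire model developed in the paper:

```python
def refine_tracking(measure_power, step=0.05, iters=40):
    # Hill-climbing trim loop: perturb the open-loop pointing offset and
    # keep any move that increases the power sensed on the absorber tube;
    # otherwise reverse direction and shrink the step.
    offset, best, direction = 0.0, measure_power(0.0), 1.0
    for _ in range(iters):
        trial = offset + direction * step
        power = measure_power(trial)
        if power > best:
            offset, best = trial, power
        else:
            direction = -direction
            step *= 0.7
    return offset
```

Starting from the open-loop (ephemeris) alignment, such a loop converges to the offset that maximizes collected power, absorbing slow effects like absorber-tube sag or concentrator deformation.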
Mesh refinement for uncertainty quantification through model reduction
NASA Astrophysics Data System (ADS)
Li, Jing; Stinis, Panos
2015-01-01
We present a novel way of deciding when and where to refine a mesh in probability space in order to facilitate uncertainty quantification in the presence of discontinuities in random space. A discontinuity in random space makes the application of generalized polynomial chaos expansion techniques prohibitively expensive. The reason is that for discontinuous problems, the expansion converges very slowly. An alternative to using higher terms in the expansion is to divide the random space into smaller elements where a lower degree polynomial is adequate to describe the randomness. In general, the partition of the random space is a dynamic process since some areas of the random space, particularly around the discontinuity, need more refinement than others as time evolves. In the current work we propose a way to decide when and where to refine the random space mesh based on the use of a reduced model. The idea is that a good reduced model can monitor accurately, within a random space element, the cascade of activity to higher degree terms in the chaos expansion. In turn, this facilitates the efficient allocation of computational resources to the areas of random space where they are more needed. For the Kraichnan-Orszag system, the prototypical system to study discontinuities in random space, we present theoretical results which show why the proposed method is sound and numerical results which corroborate the theory.
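A minimal sketch of refine-where-the-expansion-cascades: bisect a random-space element whenever the energy in its highest-degree Legendre coefficients exceeds a tolerance. Here a local polynomial fit stands in, crudely, for the reduced-model monitor proposed in the paper; the function names and thresholds are ours:

```python
import numpy as np

def adapt_random_space(f, lo=-1.0, hi=1.0, deg=4, tol=1e-3, max_depth=8):
    # Recursively bisect random-space elements wherever the energy in the
    # highest-degree Legendre coefficients (a stand-in for the monitored
    # "cascade of activity to higher degree terms") exceeds tol.
    nodes = np.polynomial.legendre.leggauss(deg + 1)[0]   # nodes on [-1, 1]
    def split(a, b, depth):
        pts = 0.5 * (b - a) * nodes + 0.5 * (a + b)       # map into [a, b]
        coef = np.polynomial.legendre.legfit(nodes, f(pts), deg)
        if depth < max_depth and np.abs(coef[-2:]).sum() > tol:
            m = 0.5 * (a + b)
            return split(a, m, depth + 1) + split(m, b, depth + 1)
        return [(a, b)]
    return split(lo, hi, 0)
```

Applied to a step function (the caricature of a random-space discontinuity), the elements cluster around the jump while smooth regions keep a single low-degree element.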
Evaluation of predictions in the CASP10 model refinement category
Nugent, Timothy; Cozzetto, Domenico; Jones, David T
2014-01-01
Here we report on the assessment results of the third experiment to evaluate the state of the art in protein model refinement, where participants were invited to improve the accuracy of initial protein models for 27 targets. Using an array of complementary evaluation measures, we find that five groups performed better than the naïve (null) method—a marked improvement over CASP9, although only three were significantly better. The leading groups also demonstrated the ability to consistently improve both backbone and side chain positioning, while other groups reliably enhanced other aspects of protein physicality. The top-ranked group succeeded in improving the backbone conformation in almost 90% of targets, suggesting a strategy that for the first time in CASP refinement is successful in a clear majority of cases. A number of issues remain unsolved: the majority of groups still fail to improve the quality of the starting models; even successful groups are only able to make modest improvements; and no prediction is more similar to the native structure than to the starting model. Successful refinement attempts also often go unrecognized, as suggested by the relatively larger improvements when predictions not submitted as model 1 are also considered. Proteins 2014; 82(Suppl 2):98–111. PMID:23900810
Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units
Beckingsale, D. A.; Gaudin, W. P.; Hornung, R. D.; Gunney, B. T.; Gamblin, T.; Herdman, J. A.; Jarvis, S. A.
2014-11-17
Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy, adding complex communication and data movement requirements to a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory’s Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.
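The coarsen and refine operators described above have simple serial analogues; the numpy sketch below shows only the core idea on a single patch (the library's actual data-parallel operators also handle ghost zones, higher-order interpolation, and GPU kernels):

```python
import numpy as np

def coarsen(fine):
    # Conservative coarsening: each coarse zone is the mean of its 2x2 children.
    return 0.25 * (fine[0::2, 0::2] + fine[1::2, 0::2] +
                   fine[0::2, 1::2] + fine[1::2, 1::2])

def refine(coarse):
    # Piecewise-constant refinement: copy each coarse zone to its 2x2 children.
    return np.kron(coarse, np.ones((2, 2)))
```

Both operators are elementwise over independent 2x2 blocks, which is exactly the structure that maps well onto the data-parallel GPU kernels the paper describes.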
The oculoauriculovertebral spectrum: Refining the estimate of birth prevalence
Gabbett, Michael T.
2012-01-01
The oculoauriculovertebral spectrum (OAVS) is a well-described pattern of congenital malformations primarily characterized by hemifacial microsomia and/or auricular dysplasia. However, the birth prevalence of OAVS is poorly characterized. Figures ranging from 1 in 150,000 through to 1 in 5,600 can be found in the literature – the latter figure being the most frequently quoted. This study aims to evaluate the reasons behind such discrepant figures and to refine the estimated birth prevalence of OAVS. Published reports on the incidence and prevalence of OAVS were systematically sought and critically reviewed. Data from appropriate studies were amalgamated to refine the estimate of the birth prevalence for OAVS. Two main reasons were identified for the highly discrepant birth prevalence figures for OAVS: differing methods of case ascertainment and the lack of a formal definition for OAVS. This study refines the estimated birth prevalence of OAVS to between 1 in 40,000 and 1 in 30,000. This number needs to be confirmed in a large, well-designed prospective study using a formally agreed-upon definition of OAVS.
The state of animal welfare in the context of refinement.
Zurlo, Joanne; Hutchinson, Eric
2014-01-01
The ultimate goal of the Three Rs is the full replacement of animals used in biomedical research and testing. However, replacement is unlikely to occur in the near future; therefore the scientific community as a whole must continue to devote considerable effort to ensure optimal animal welfare for the benefit of the science and the animals, i.e., the R of refinement. Laws governing the care and use of laboratory animals have recently been revised in Europe and the US and these place greater emphasis on promoting the well-being of the animals in addition to minimizing pain and distress. Social housing for social species is now the default condition, which can present a challenge in certain experimental settings and for certain species. The practice of positive reinforcement training of laboratory animals, particularly non-human primates, is gathering momentum but is not yet universally employed. Enhanced consideration of refinement extends to rodents, particularly mice, whose use is still increasing as more genetically modified models are generated. The wastage of extraneous mice and the method of their euthanasia are refinement issues that still need to be addressed. An international, concerted effort into defining the needs of laboratory animals is still necessary to improve the quality of the animal models used as well as their welfare. PMID:24448759
Decontamination of steel by melt refining: A literature review
Ozturk, B.; Fruehan, R.J.
1994-12-31
It has been reported that a large amount of metal waste is produced annually by nuclear fuel processing and nuclear power plants. These metal wastes are contaminated with radioactive elements, such as uranium and plutonium. Current Department of Energy guidelines require retrievable storage of all metallic wastes containing transuranic elements above a certain level. Because of the high cost, it is important to develop an effective decontamination and volume reduction method for low-level contaminated metals. It has been shown by some investigators that a melt refining technique can be used for the processing of contaminated metal wastes. In this process, contaminated metal is melted with a suitable flux. The radioactive elements are oxidized and transferred to a slag phase. In order to develop a commercial process it is important to have information on the thermodynamics and kinetics of the removal. Therefore, a literature search was carried out to evaluate the available information on the decontamination of uranium- and transuranic-contaminated plain steel, copper, and stainless steel by a melt refining technique. Emphasis was given to the thermodynamics and kinetics of the removal. Data published in the literature indicate that it is possible to reduce the concentration of radioactive elements to a very low level by the melt refining method. 20 refs.
Mesh refinement for uncertainty quantification through model reduction
Li, Jing; Stinis, Panos
2015-01-01
We present a novel way of deciding when and where to refine a mesh in probability space in order to facilitate uncertainty quantification in the presence of discontinuities in random space. A discontinuity in random space makes the application of generalized polynomial chaos expansion techniques prohibitively expensive because, for discontinuous problems, the expansion converges very slowly. An alternative to using higher-order terms in the expansion is to divide the random space into smaller elements where a lower-degree polynomial is adequate to describe the randomness. In general, the partition of the random space is a dynamic process, since some areas of the random space, particularly around the discontinuity, need more refinement than others as time evolves. In the current work we propose a way to decide when and where to refine the random space mesh based on the use of a reduced model. The idea is that a good reduced model can monitor accurately, within a random space element, the cascade of activity to higher-degree terms in the chaos expansion. In turn, this facilitates the efficient allocation of computational resources to the areas of random space where they are most needed. For the Kraichnan–Orszag system, the prototypical system for studying discontinuities in random space, we present theoretical results which show why the proposed method is sound and numerical results which corroborate the theory.
Nonlinear Global Optimization Using Curdling Algorithm in Mathematica Environment
1997-08-05
An algorithm for performing optimization which is a derivative-free, grid-refinement approach to nonlinear optimization was developed and implemented in software as OPTIMIZE. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to two (and potentially three) dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. OPTIMIZE-M is a modification of OPTIMIZE designed for use within the Mathematica environment created by Wolfram Research.
Locally-adaptive and memetic evolutionary pattern search algorithms.
Hart, William E
2003-01-01
Recent convergence analyses of evolutionary pattern search algorithms (EPSAs) have shown that these methods have a weak stationary point convergence theory for a broad class of unconstrained and linearly constrained problems. This paper describes how the convergence theory for EPSAs can be adapted to allow each individual in a population to have its own mutation step length (similar to the design of evolutionary programming and evolution strategies algorithms). These are called locally-adaptive EPSAs (LA-EPSAs) since each individual's mutation step length is independently adapted in different local neighborhoods. The paper also describes a variety of standard formulations of evolutionary algorithms that can be used for LA-EPSAs. Further, it is shown how this convergence theory can be applied to memetic EPSAs, which use local search to refine points within each iteration. PMID:12804096
A parallel sparse algorithm targeting arterial fluid mechanics computations
NASA Astrophysics Data System (ADS)
Manguoglu, Murat; Takizawa, Kenji; Sameh, Ahmed H.; Tezduyar, Tayfun E.
2011-09-01
Iterative solution of large sparse nonsymmetric linear equation systems is one of the numerical challenges in arterial fluid-structure interaction computations. This is because the fluid mechanics parts of the fluid + structure block of the equation system that needs to be solved at every nonlinear iteration of each time step corresponds to incompressible flow, the computational domains include slender parts, and accurate wall shear stress calculations require boundary layer mesh refinement near the arterial walls. We propose a hybrid parallel sparse algorithm, domain-decomposing parallel solver (DDPS), to address this challenge. As the test case, we use a fluid mechanics equation system generated by starting with an arterial shape and flow field coming from an FSI computation and performing two time steps of fluid mechanics computation with a prescribed arterial shape change, also coming from the FSI computation. We show how the DDPS algorithm performs in solving the equation system and demonstrate the scalability of the algorithm.
A flexible unstructured mesh generation algorithm suitable for block partitioning
Karamete, B.K.
1996-12-31
This paper describes the logic of a dynamic algorithm for an arbitrarily prescribed geometry. The generated meshes show the Delaunay property in both 2D and 3D. The algorithm requires minimal surface information in 3D; the surface triangles appear as a direct consequence of the interior tetrahedralization. The adopted successive refinement scheme results in a node distribution such that boundary conformity need not be checked. Further computational saving is provided by using a special binary tree (ADT). The generating front cannot be determined a priori, as opposed to moving-front techniques. This feature can effectively be used to partition the geometry into blocks of equal element size while generating the mesh for parallel computing purposes. The algorithm shows the flexibility to split the geometry into blocks at mesh generation time.
A Deterministic Approximation Algorithm for Maximum 2-Path Packing
NASA Astrophysics Data System (ADS)
Tanahashi, Ruka; Chen, Zhi-Zhong
This paper deals with the maximum-weight 2-path packing problem (M2PP), which is the problem of computing a set of vertex-disjoint paths of length 2 in a given edge-weighted complete graph so that the total weight of edges in the paths is maximized. Previously, Hassin and Rubinstein gave a randomized cubic-time approximation algorithm for M2PP which achieves an expected ratio of 35/67 - ε ≈ 0.5223 - ε for any constant ε > 0. We refine their algorithm and derandomize it to obtain a deterministic cubic-time approximation algorithm for the problem which achieves a better ratio (namely, 0.5265 - ε for any constant ε > 0).
A new adaptive mesh refinement data structure with an application to detonation
NASA Astrophysics Data System (ADS)
Ji, Hua; Lien, Fue-Sang; Yee, Eugene
2010-11-01
A new Cell-based Structured Adaptive Mesh Refinement (CSAMR) data structure is developed. In our CSAMR data structure, Cartesian-like indices are used to identify each cell. With these stored indices, the information on the parent, children and neighbors of a given cell can be accessed simply and efficiently. Owing to the use of these indices, the computer memory required for storage of the proposed AMR data structure is only 5/8 of a word per cell, in contrast to the conventional oct-tree [P. MacNeice, K.M. Olson, C. Mobarry, R. de Fainchtein, C. Packer, PARAMESH: a parallel adaptive mesh refinement community toolkit, Comput. Phys. Commun. 126 (2000) 330] and the fully threaded tree (FTT) [A.M. Khokhlov, Fully threaded tree algorithms for adaptive mesh fluid dynamics simulations, J. Comput. Phys. 143 (1998) 519] data structures, which require, respectively, 19 and 2 3/8 words per cell for storage of the connectivity information. Because the connectivity information (e.g., parent, children and neighbors) of a cell in our proposed AMR data structure can be accessed using only the cell indices, a tree structure, which was required in previous approaches for the organization of the AMR data, is no longer needed. Instead, a much simpler hash table structure is used to maintain the AMR data, with the entry keys in the hash table obtained directly from the explicitly stored cell indices. The proposed AMR data structure simplifies the implementation and parallelization of an AMR code. Two three-dimensional test cases are used to illustrate and evaluate the computational performance of the new CSAMR data structure.
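The index arithmetic behind such a structure can be illustrated with a short sketch (a hypothetical 2-D version with refinement ratio 2; the `(level, i, j)` tuple layout is an assumption, not the paper's encoding): parent, children, and face neighbors all follow from integer operations on a cell's index, so no tree needs to be stored, and a plain hash table (a `dict` in Python) keyed by these tuples can hold the cell data.

```python
def parent(cell):
    """Index of the coarser cell covering this one."""
    level, i, j = cell
    return (level - 1, i // 2, j // 2)

def children(cell):
    """Indices of the four finer cells covering this one."""
    level, i, j = cell
    return [(level + 1, 2 * i + di, 2 * j + dj)
            for di in (0, 1) for dj in (0, 1)]

def neighbors(cell):
    """Indices of the four same-level face neighbors."""
    level, i, j = cell
    return [(level, i + di, j + dj)
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))]
```

A lookup such as `data.get(parent(cell))` then replaces the pointer chasing that a tree-based AMR structure would require.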
An Improved Snake Model for Refinement of Lidar-Derived Building Roof Contours Using Aerial Images
NASA Astrophysics Data System (ADS)
Chen, Qi; Wang, Shugen; Liu, Xiuguo
2016-06-01
Building roof contours are considered very important geometric data and have been widely applied in many fields, including but not limited to urban planning, land investigation, change detection and military reconnaissance. Currently, the demand for building contours at a finer scale (especially in urban areas) has been raised in a growing number of studies such as urban environment quality assessment, urban sprawl monitoring and urban air pollution modelling. LiDAR is known as an effective means of acquiring 3D roof points with high elevation accuracy. However, the precision of the building contour obtained from LiDAR data is restricted by its relatively low scanning resolution. With the use of the texture information from high-resolution imagery, the precision can be improved. In this study, an improved snake model is proposed to refine the initial building contours extracted from LiDAR. First, an improved snake model is constructed with the constraints of the deviation angle, image gradient, and area. Then, the nodes of the contour are moved within a certain range to find the best optimized result using a greedy algorithm. Considering both precision and efficiency, the candidate shift positions of the contour nodes are constrained, and the searching strategy for the candidate nodes is explicitly designed. The experiments on three datasets indicate that the proposed method for building contour refinement is effective and feasible. The average quality index is improved from 91.66% to 93.34%. The statistics of the evaluation results for every single building show that 77.0% of the contours are updated to a higher quality index.
F-8C adaptive control law refinement and software development
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Stein, G.
1981-01-01
An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.
Navigation Algorithms for the SeaWiFS Mission
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Patt, Frederick S.; McClain, Charles R. (Technical Monitor)
2002-01-01
The navigation algorithms for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) were designed to meet the requirement of 1-pixel accuracy, i.e., a standard deviation (sigma) of 2. The objective has been to extract the best possible accuracy from the spacecraft telemetry and avoid the need for costly manual renavigation or geometric rectification. The requirement is addressed by postprocessing of both the Global Positioning System (GPS) receiver and Attitude Control System (ACS) data in the spacecraft telemetry stream. The navigation algorithms described are separated into four areas: orbit processing, attitude sensor processing, attitude determination, and final navigation processing. There has been substantial modification during the mission of the attitude determination and attitude sensor processing algorithms. For the former, the basic approach was completely changed during the first year of the mission, from a single-frame deterministic method to a Kalman smoother. This was done for several reasons: a) to improve the overall accuracy of the attitude determination, particularly near the sub-solar point; b) to reduce discontinuities; c) to support the single-ACS-string spacecraft operation that was started after the first mission year, which causes gaps in attitude sensor coverage; and d) to handle data quality problems (which became evident after launch) in the direct-broadcast data. The changes to the attitude sensor processing algorithms primarily involved the development of a model for the Earth horizon height, also needed for single-string operation; the incorporation of improved sensor calibration data; and improved data quality checking and smoothing to handle the data quality issues. The attitude sensor alignments have also been revised multiple times, generally in conjunction with the other changes. The orbit and final navigation processing algorithms have remained largely unchanged during the mission, aside from refinements to data quality checking.
40 CFR 80.1339 - Who is not eligible for the provisions for small refiners?
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... section, the refiner may not generate gasoline benzene credits under § 80.1275(b)(3) for any of...
40 CFR 80.1339 - Who is not eligible for the provisions for small refiners?
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... section, the refiner may not generate gasoline benzene credits under § 80.1275(b)(3) for any of...
40 CFR 80.1339 - Who is not eligible for the provisions for small refiners?
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... section, the refiner may not generate gasoline benzene credits under § 80.1275(b)(3) for any of...
40 CFR 80.1342 - What compliance options are available to small refiners under this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Benzene Small Refiner Provisions § 80.1342 What compliance options are available to small refiners under... this section must comply with the applicable benzene standards at § 80.1230 beginning with the...
40 CFR 80.1342 - What compliance options are available to small refiners under this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Benzene Small Refiner Provisions § 80.1342 What compliance options are available to small refiners under... this section must comply with the applicable benzene standards at § 80.1230 beginning with the...
40 CFR 80.1342 - What compliance options are available to small refiners under this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Benzene Small Refiner Provisions § 80.1342 What compliance options are available to small refiners under... this section must comply with the applicable benzene standards at § 80.1230 beginning with the...
The birth and growth of the Grozny petroleum refining and petrochemical industry
Dorogochinskii, A.Z.
1994-07-01
The first oil gushers were struck in Grozny in 1893, the year that marks the start of rapid development of the Grozny petroleum refining industry. This report describes the operation and growth of the refining industry.
NASA Astrophysics Data System (ADS)
Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.
2013-01-01
A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
Rempp, Florian; Mahler, Guenter; Michel, Mathias
2007-09-15
We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qubit may repeatedly be cooled without adding additional qubits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.
Parallel algorithms and architectures
Albrecht, A.; Jung, H.; Mehlhorn, K.
1987-01-01
Contents of this book are the following: Preparata: Deterministic simulation of idealized parallel computers on more realistic ones; Convex hull of randomly chosen points from a polytope; Dataflow computing; Parallel in sequence; Towards the architecture of an elementary cortical processor; Parallel algorithms and static analysis of parallel programs; Parallel processing of combinatorial search; Communications; An O(n log n) cost parallel algorithm for the single function coarsest partition problem; Systolic algorithms for computing the visibility polygon and triangulation of a polygonal region; and RELACS - A recursive layout computing system. Parallel linear conflict-free subtree access.
The Algorithm Selection Problem
NASA Technical Reports Server (NTRS)
Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)
1994-01-01
Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.
Learning Cue Phrase Patterns from Radiology Reports Using a Genetic Algorithm
Patton, Robert M; Beckerman, Barbara G; Potok, Thomas E
2009-01-01
Various computer-assisted technologies have been developed to assist radiologists in detecting cancer; however, the algorithms still lack high degrees of sensitivity and specificity, and must undergo machine learning against a training set with known pathologies in order to be refined toward a higher validity of truth. This work describes an approach to learning cue phrase patterns in radiology reports that utilizes a genetic algorithm (GA) as the learning method. The approach described here successfully learned cue phrase patterns for two distinct classes of radiology reports. These patterns can then be used as a basis for automatically categorizing, clustering, or retrieving relevant data for the user.
a Fast and Robust Algorithm for Road Edges Extraction from LIDAR Data
NASA Astrophysics Data System (ADS)
Qiu, Kaijin; Sun, Kai; Ding, Kou; Shu, Zhen
2016-06-01
Fast mapping of roads plays an important role in many geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance. How to extract various road edges quickly and robustly is a challenging task. In this paper, we present a fast and robust algorithm for automatic road edge extraction from terrestrial mobile LiDAR data. The algorithm is based on a key observation: most roads have a difference in elevation around their edges, and road edges with pavement lie in two different planes. In our algorithm, we first extract a rough plane based on the RANSAC algorithm, and then multiple refined planes which contain only pavement are extracted from the rough plane. The road edges are extracted based on these refined planes. In practice, there is a serious problem in that the rough and refined planes are usually extracted badly due to rough roads and varying density of the point cloud. To eliminate the influence of rough roads, a technique similar to taking the difference of a DSM (digital surface model) and DTM (digital terrain model) is used, and we also propose a method which adjusts the point clouds to a similar density to eliminate the influence of varying density. Experiments show the validity of the proposed method with multiple datasets (e.g. urban road, highway, and some rural roads). We use the same parameters throughout the experiments and our algorithm can achieve real-time processing speeds.
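The rough-plane step above relies on RANSAC. For brevity, the sketch below shows the RANSAC idea on a 2-D line rather than a 3-D plane (a hypothetical minimal version, not the authors' implementation): repeatedly fit a model to a random minimal sample and keep the model supported by the most inliers.

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=1):
    """Fit y = a*x + b by RANSAC: fit a line through two randomly
    chosen points and keep the model with the most inliers
    (points within tol of the line, measured vertically)."""
    rng = random.Random(seed)
    best_model, best_inliers = (0.0, 0.0), []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                      # degenerate vertical sample: skip
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

For a plane the minimal sample is three non-collinear points and the residual is the point-to-plane distance, but the sample-score-keep loop is identical.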
Totally parallel multilevel algorithms for sparse elliptic systems
NASA Technical Reports Server (NTRS)
Frederickson, Paul O.
1989-01-01
The fastest known algorithms for the solution of a large elliptic boundary value problem on a massively parallel hypercube all require O(log(n)) floating point operations and O(log(n)) distance-1 communications, if massively parallel is defined to mean a number of processors proportional to the size n of the problem. The Totally Parallel Multilevel Algorithm (TPMA), which has four of these fast algorithms as special cases, is described. These four algorithms are Parallel Superconvergent Multigrid (PSMG), Robust Multigrid, the Fast Fourier Transformation (FFT) based Spectral Algorithm, and Parallel Cyclic Reduction. The algorithm TPMA, when described recursively, has four steps: (1) project to a collection of interlaced, coarser problems at the next lower level; (2) apply TPMA, recursively, to each of these lower level problems, solving directly at the lowest level; (3) interpolate these approximate solutions to the finer grid, and average them to form an approximate solution on this grid; and (4) refine this approximate solution with a defect-correction step, using a local approximate inverse. Choice of the projection operator (P), the interpolation operator (Q), and the smoother (S) determines the class of problems on which TPMA is most effective. There are special cases in which the first three steps produce an exact solution, and the smoother is not needed (e.g., constant coefficient operators).
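The project / recurse / interpolate / defect-correct structure can be made concrete with a conventional V-cycle on a 1-D Poisson problem -u'' = f. This is a minimal sketch in the spirit of the multigrid special cases, not the TPMA itself: weighted Jacobi smoothing, full-weighting restriction, and linear-interpolation prolongation are my assumptions.

```python
def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted-Jacobi sweeps on (2u[i] - u[i-1] - u[i+1]) / h^2 = f[i],
    with zero Dirichlet boundary values."""
    n = len(u)
    for _ in range(sweeps):
        new = list(u)
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            new[i] = (1 - w) * u[i] + w * (h * h * f[i] + left + right) / 2.0
        u = new
    return u

def residual(u, f, h):
    n = len(u)
    r = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r.append(f[i] - (2.0 * u[i] - left - right) / (h * h))
    return r

def vcycle(u, f, h):
    """One V-cycle for -u'' = f on (0,1) with n = 2**k - 1 interior points:
    (1) project the residual to a coarser grid, (2) recurse, (3) interpolate
    the coarse correction up, (4) defect-correct by post-smoothing."""
    n = len(u)
    if n == 1:                                 # coarsest level: solve directly
        return [f[0] * h * h / 2.0]
    u = smooth(u, f, h)                        # pre-smoothing
    r = residual(u, f, h)
    rc = [0.25 * r[2 * i] + 0.5 * r[2 * i + 1] + 0.25 * r[2 * i + 2]
          for i in range((n - 1) // 2)]        # full-weighting projection
    ec = vcycle([0.0] * len(rc), rc, 2.0 * h)  # recurse on the coarse problem
    e = [0.0] * n
    for j in range(n):                         # linear interpolation up
        if j % 2 == 1:
            e[j] = ec[j // 2]
        else:
            left = ec[j // 2 - 1] if j // 2 - 1 >= 0 else 0.0
            right = ec[j // 2] if j // 2 < len(ec) else 0.0
            e[j] = 0.5 * (left + right)
    u = [ui + ei for ui, ei in zip(u, e)]
    return smooth(u, f, h)                     # defect-correction smoothing
```

Starting from a zero guess, a handful of cycles drives the error down to machine/discretization level; the parallel variants cited above perform the same steps on interlaced coarse problems.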
A novel highly parallel algorithm for linearly unmixing hyperspectral images
NASA Astrophysics Data System (ADS)
Guerra, Raúl; López, Sebastián.; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto
2014-10-01
Endmember extraction and abundance calculation represent critical steps within the process of linearly unmixing a given hyperspectral image, for two main reasons. The first is the need to compute a set of accurate endmembers in order to further obtain confident abundance maps. The second refers to the huge number of operations involved in these time-consuming processes. This work proposes an algorithm to estimate the endmembers of a hyperspectral image under analysis and its abundances at the same time. The main advantage of this algorithm is its high degree of parallelization and the mathematical simplicity of the operations implemented. This algorithm estimates the endmembers as virtual pixels. In particular, the proposed algorithm performs gradient descent to iteratively refine the endmembers and the abundances, reducing the mean square error in accordance with the linear unmixing model. Some mathematical restrictions must be added so the method converges to a unique and realistic solution. Given the nature of the algorithm, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
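The iterative refinement can be sketched for the abundances alone (a hypothetical serial fragment, not the authors' parallel algorithm): gradient descent on the squared error of the linear mixing model y ≈ E·a, with a crude projection standing in for the usual nonnegativity and sum-to-one restrictions the abstract alludes to.

```python
def unmix_abundances(y, E, steps=5000, lr=0.05):
    """Refine the abundance vector a of the linear mixing model y ~ E a
    by gradient descent on the squared error, clipping to a >= 0 and
    renormalizing to sum(a) = 1 after every step (a crude projection)."""
    nb, m = len(y), len(E[0])          # number of bands, endmembers
    a = [1.0 / m] * m                  # start from a uniform mixture
    for _ in range(steps):
        # residual r = E a - y, gradient g = E^T r
        r = [sum(E[b][k] * a[k] for k in range(m)) - y[b] for b in range(nb)]
        g = [sum(E[b][k] * r[b] for b in range(nb)) for k in range(m)]
        a = [max(0.0, a[k] - lr * g[k]) for k in range(m)]
        s = sum(a) or 1.0              # project back onto the simplex
        a = [ak / s for ak in a]
    return a
```

In the full scheme the endmembers E are refined by an analogous gradient step at the same time; each pixel's update is independent, which is the source of the parallelism the abstract highlights.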
A Task-parallel Clustering Algorithm for Structured AMR
Gunney, B N; Wissink, A M
2004-11-02
A new parallel algorithm, based on the Berger-Rigoutsos algorithm for clustering grid points into logically rectangular regions, is presented. The clustering operation is frequently performed in the dynamic gridding steps of structured adaptive mesh refinement (SAMR) calculations. A previous study revealed that although the cost of clustering is generally insignificant for smaller problems run on relatively few processors, the algorithm scaled inefficiently in parallel and its cost grows with problem size. Hence, it can become significant for large scale problems run on very large parallel machines, such as the new BlueGene system (which has O(10^4) processors). We propose a new task-parallel algorithm designed to reduce communication wait times. Performance was assessed using dynamic SAMR re-gridding operations on up to 16K processors of currently available computers at Lawrence Livermore National Laboratory. The new algorithm was shown to be up to an order of magnitude faster than the baseline algorithm and had better scaling trends.
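The underlying Berger-Rigoutsos clustering can be sketched serially (a simplified version: zero-gap signature cuts plus midpoint bisection, omitting the Laplacian inflection-point cuts of the full algorithm and all of the task-parallel machinery the paper contributes): take the bounding box of the tagged cells, accept it if it is filled densely enough, and otherwise cut it where a row/column signature vanishes and recurse.

```python
def bounding_box(tags, i0, i1, j0, j1):
    """Smallest box (inclusive bounds) containing all tagged cells, or None."""
    rows = [i for i in range(i0, i1 + 1)
            if any(tags[i][j] for j in range(j0, j1 + 1))]
    cols = [j for j in range(j0, j1 + 1)
            if any(tags[i][j] for i in range(i0, i1 + 1))]
    if not rows:
        return None
    return (min(rows), max(rows), min(cols), max(cols))

def zero_cut(sig):
    """Index of an interior zero ('hole') in a signature, if any."""
    for k in range(1, len(sig) - 1):
        if sig[k] == 0:
            return k
    return None

def cluster(tags, i0=None, i1=None, j0=None, j1=None, min_eff=0.8):
    """Cover the tagged cells with boxes whose fill ratio is >= min_eff."""
    if i0 is None:
        i0, i1, j0, j1 = 0, len(tags) - 1, 0, len(tags[0]) - 1
    box = bounding_box(tags, i0, i1, j0, j1)
    if box is None:
        return []
    i0, i1, j0, j1 = box
    ntag = sum(tags[i][j] for i in range(i0, i1 + 1) for j in range(j0, j1 + 1))
    if ntag / ((i1 - i0 + 1) * (j1 - j0 + 1)) >= min_eff:
        return [box]
    # Signatures: tagged-cell counts per row and per column of the box.
    sig_i = [sum(tags[i][j] for j in range(j0, j1 + 1)) for i in range(i0, i1 + 1)]
    sig_j = [sum(tags[i][j] for i in range(i0, i1 + 1)) for j in range(j0, j1 + 1)]
    cut = zero_cut(sig_i)
    if cut is not None and i1 > i0:
        return (cluster(tags, i0, i0 + cut, j0, j1, min_eff) +
                cluster(tags, i0 + cut + 1, i1, j0, j1, min_eff))
    cut = zero_cut(sig_j)
    if cut is not None and j1 > j0:
        return (cluster(tags, i0, i1, j0, j0 + cut, min_eff) +
                cluster(tags, i0, i1, j0 + cut + 1, j1, min_eff))
    # No hole found: bisect the longer side.
    if i1 - i0 >= j1 - j0 and i1 > i0:
        mid = (i0 + i1) // 2
        return (cluster(tags, i0, mid, j0, j1, min_eff) +
                cluster(tags, mid + 1, i1, j0, j1, min_eff))
    if j1 > j0:
        mid = (j0 + j1) // 2
        return (cluster(tags, i0, i1, j0, mid, min_eff) +
                cluster(tags, i0, i1, mid + 1, j1, min_eff))
    return [box]
```

The communication cost the paper targets arises when the signature sums and cut decisions above are computed collectively across processors.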
A Simple Calculator Algorithm.
ERIC Educational Resources Information Center
Cook, Lyle; McWilliam, James
1983-01-01
The problem of finding cube roots when limited to a calculator with only square root capability is discussed. An algorithm is demonstrated and explained which should always produce a good approximation within a few iterations. (MP)
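One classic square-root-only scheme for cube roots (whether it is the exact iteration of the cited note is not stated in the record) iterates x ← sqrt(sqrt(n·x)): the fixed point satisfies x^4 = n·x, i.e. x^3 = n, and each iteration quarters the error in log space, so a few dozen presses of the square-root key give a good approximation.

```python
import math

def cube_root(n, iters=40):
    """Approximate n ** (1/3) for n > 0 using only multiplication and
    square roots: the fixed point of x -> sqrt(sqrt(n * x)) satisfies
    x**4 = n * x, hence x**3 = n."""
    x = 1.0
    for _ in range(iters):
        x = math.sqrt(math.sqrt(n * x))
    return x
```

On a calculator the loop is simply: multiply the display by n, press the square-root key twice, repeat.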
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
NASA Astrophysics Data System (ADS)
Feigin, G.; Ben-Yosef, N.
1983-10-01
A thinning algorithm, of the banana-peel type, is presented. In each iteration pixels are attacked from all directions (there are no sub-iterations), and the deletion criteria depend on the 24 nearest neighbours.
Diagnostic Algorithm Benchmarking
NASA Technical Reports Server (NTRS)
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Algorithmically specialized parallel computers
Snyder, L.; Jamieson, L.H.; Gannon, D.B.; Siegel, H.J.
1985-01-01
This book is based on a workshop which dealt with array processors. Topics considered include algorithmic specialization using VLSI, innovative architectures, signal processing, speech recognition, image processing, specialized architectures for numerical computations, and general-purpose computers.
40 CFR 80.1442 - What are the provisions for small refiners under the RFS program?
Code of Federal Regulations, 2014 CFR
2014-07-01
... joint venture partners. (iii) The refiner had a corporate-average crude oil capacity less than or equal... “refiner” shall include foreign refiners. (3) Refiners who qualified as small under 40 CFR 80.1142 do not... government employees. (vi) The total corporate crude oil capacity of each refinery as reported to the...
40 CFR 80.1442 - What are the provisions for small refiners under the RFS program?
Code of Federal Regulations, 2013 CFR
2013-07-01
... joint venture partners. (iii) The refiner had a corporate-average crude oil capacity less than or equal... “refiner” shall include foreign refiners. (3) Refiners who qualified as small under 40 CFR 80.1142 do not... government employees. (vi) The total corporate crude oil capacity of each refinery as reported to the...
40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.
Code of Federal Regulations, 2012 CFR
2012-07-01
... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply...
40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.
Code of Federal Regulations, 2013 CFR
2013-07-01
... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply...
40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.
Code of Federal Regulations, 2014 CFR
2014-07-01
... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply...
40 CFR 80.1442 - What are the provisions for small refiners under the RFS program?
Code of Federal Regulations, 2011 CFR
2011-07-01
... “refiner” shall include foreign refiners. (3) Refiners who qualified as small under 40 CFR 80.1142 do not... 40 Protection of Environment 16 2011-07-01 2011-07-01 false What are the provisions for small... Standard § 80.1442 What are the provisions for small refiners under the RFS program? (a)(1) To qualify as...
40 CFR 80.1142 - What are the provisions for small refiners under the RFS program?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false What are the provisions for small... Standard § 80.1142 What are the provisions for small refiners under the RFS program? (a)(1) Gasoline... refiner or foreign refiner does not meet the definition of a small refinery under § 80.1101(g) but...
40 CFR 80.1142 - What are the provisions for small refiners under the RFS program?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false What are the provisions for small... Standard § 80.1142 What are the provisions for small refiners under the RFS program? (a)(1) Gasoline... refiner or foreign refiner does not meet the definition of a small refinery under § 80.1101(g) but...
40 CFR 80.1442 - What are the provisions for small refiners under the RFS program?
Code of Federal Regulations, 2010 CFR
2010-07-01
... “refiner” shall include foreign refiners. (3) Refiners who qualified as small under 40 CFR 80.1142 do not... 40 Protection of Environment 16 2010-07-01 2010-07-01 false What are the provisions for small... Standard § 80.1442 What are the provisions for small refiners under the RFS program? (a)(1) To qualify as...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-25
... been given in the Federal Register (76 FR 10329, 2-24-2011) and the application has been processed... Foreign-Trade Zones Board Grant of Authority for Subzone Status, Valero Refining Company-- California... special-purpose subzone at the oil refining facilities of Valero Refining Company--California, located...
40 CFR 421.10 - Applicability; description of the bauxite refining subcategory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... bauxite refining subcategory. 421.10 Section 421.10 Protection of Environment ENVIRONMENTAL PROTECTION... CATEGORY Bauxite Refining Subcategory § 421.10 Applicability; description of the bauxite refining... bauxite to alumina by the Bayer process or by the combination process....
40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply to... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of...
40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.
Code of Federal Regulations, 2011 CFR
2011-07-01
... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply to... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of...
78 FR 25415 - Waivers Under the Refined Sugar Re-Export Program
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-01
... 30, 2013. (2) USDA is temporarily increasing the license limit for raw cane sugar refiners from 50... Office of the Secretary Waivers Under the Refined Sugar Re-Export Program AGENCY: Office of the Secretary... waiving certain provisions in the Refined Sugar Re-Export Program, effective today. These actions...
40 CFR 409.20 - Applicability; description of the crystalline cane sugar refining subcategory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... crystalline cane sugar refining subcategory. 409.20 Section 409.20 Protection of Environment ENVIRONMENTAL... Crystalline Cane Sugar Refining Subcategory § 409.20 Applicability; description of the crystalline cane sugar... processing of raw cane sugar into crystalline refined sugar....
40 CFR 80.1343 - What hardship relief provisions are available only to small refiners?
Code of Federal Regulations, 2010 CFR
2010-07-01
... available only to small refiners? 80.1343 Section 80.1343 Protection of Environment ENVIRONMENTAL PROTECTION... Refiner Provisions § 80.1343 What hardship relief provisions are available only to small refiners? (a)(1... § 80.1230(a) would be feasible only through the purchase of credits, but for whom purchase of...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-28
... From the Petroleum Refining Industry Processed in a Gasification System To Produce Synthesis Gas..., ``Regulation of Oil-Bearing ] Hazardous Secondary Materials from the Petroleum Refining Industry Processed in a... Refining Industry Processed in a Gasification System to Produce Synthesis Gas'' (Gasification Rule)....
NASA Astrophysics Data System (ADS)
Watanabe, Yoshimi; Hamada, Takayuki; Sato, Hisashi
2016-01-01
Grain refinement plays a vital role in cast and wrought aluminum alloys. The grain refiners introduce particles that heterogeneously nucleate the primary alpha-aluminum. It is well known that Al3Ti particles are commonly used as heterogeneous nucleants for aluminum alloy casting. If a substance with a smaller misfit for aluminum is used as a heterogeneous nucleant, such as L12 intermetallic compounds, it should be possible to achieve a better grain refining performance. In this study, the optimum condition for a grain refiner using Al2.7Fe0.3Ti intermetallic compound particles with the L12 structure was investigated.
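The misfit argument in this abstract can be made concrete with a small calculation. The sketch below computes the fractional lattice misfit between fcc aluminium (a ≈ 4.0495 Å, the accepted room-temperature value) and a candidate nucleant; the nucleant lattice parameters used are illustrative placeholders, not measured values for Al2.7Fe0.3Ti.

```python
# Linear lattice misfit between a candidate nucleant and fcc aluminium.
# A_AL is the accepted room-temperature lattice parameter of Al; the
# nucleant values below are ILLUSTRATIVE, not data for Al2.7Fe0.3Ti.

A_AL = 4.0495  # Angstrom, fcc aluminium

def lattice_misfit(a_nucleant, a_matrix=A_AL):
    """Fractional linear misfit |a_n - a_m| / a_m (dimensionless)."""
    return abs(a_nucleant - a_matrix) / a_matrix

# A nucleant whose lattice parameter is closer to that of Al gives a
# smaller misfit, which the abstract argues improves refining potency.
for a in (3.98, 4.02, 4.04):
    print(f"a = {a:.3f} A -> misfit = {lattice_misfit(a):.4f}")
```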
2013-07-29
The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most
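The iterative method named here is Laguerre's. A minimal, unmodified Laguerre iteration for polynomial root finding is sketched below (the lambert2 solver applies it to Godal's time equation with robustness modifications not shown here); coefficients are in descending powers.

```python
import cmath

def p_dp_ddp(coeffs, x):
    """Horner evaluation of p, p', p'' (coeffs in descending powers)."""
    p = dp = hddp = 0j
    for c in coeffs:
        hddp = hddp * x + dp      # accumulates p''/2
        dp = dp * x + p
        p = p * x + c
    return p, dp, 2 * hddp

def laguerre(coeffs, x0=0j, tol=1e-12, maxit=200):
    """Find one (possibly complex) root of the polynomial by Laguerre's method."""
    n = len(coeffs) - 1
    x = complex(x0)
    for _ in range(maxit):
        p, dp, ddp = p_dp_ddp(coeffs, x)
        if abs(p) < tol:
            break
        G = dp / p
        H = G * G - ddp / p
        sq = cmath.sqrt((n - 1) * (n * H - G * G))
        # Choose the sign that maximizes the denominator magnitude.
        denom = G + sq if abs(G + sq) >= abs(G - sq) else G - sq
        if denom == 0:            # rare stall: nudge the iterate and retry
            x += 1e-8
            continue
        x -= n / denom
    return x

root = laguerre([1, 0, -1], 0.5)   # x^2 - 1 from a nearby guess -> 1
```

Laguerre's method converges cubically for simple roots and, unlike Newton's method, is nearly globally convergent for polynomials, which is presumably why it was chosen as the base iteration.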
Highly Scalable Matching Pursuit Signal Decomposition Algorithm
NASA Technical Reports Server (NTRS)
Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.
2009-01-01
Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis of accuracy or computational efficiency. The prominence of the key signal features required for the proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce considerable performance gains while extracting only slightly less energy than the
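The greedy core of matching pursuit is compact. The sketch below is a generic matching-pursuit loop over a toy dictionary in R^4 (not the MPD++ code; its pruning and Coarse-Fine Grids refinements are omitted): each iteration picks the atom with the largest correlation to the residual and subtracts that atom's contribution.

```python
import math

def normalize(v):
    """Scale a vector to unit Euclidean norm (atoms must be unit-norm)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def matching_pursuit(signal, atoms, n_iter=3):
    """Greedy MP: returns [(atom_index, coefficient), ...] and the residual."""
    residual = list(signal)
    picks = []
    for _ in range(n_iter):
        # Cross-correlate the residual with every dictionary atom.
        corr = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        best = max(range(len(atoms)), key=lambda i: abs(corr[i]))
        c = corr[best]
        # Subtract the selected atom's contribution and record the pick.
        residual = [r - c * a for r, a in zip(residual, atoms[best])]
        picks.append((best, c))
    return picks, residual

# Toy dictionary: two orthonormal atoms plus a correlated distractor.
a0 = normalize([1, 1, 0, 0])
a1 = normalize([0, 0, 1, 1])
a2 = normalize([1, 0, 0, 1])
sig = [3 * x + 2 * y for x, y in zip(a0, a1)]   # exact two-atom signal
picks, res = matching_pursuit(sig, [a0, a1, a2], n_iter=2)
```

On this exact two-atom signal the loop recovers the two generating atoms with their coefficients and leaves a residual at floating-point round-off.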
Implicit adaptive mesh refinement for 2D reduced resistive magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Philip, Bobby; Chacón, Luis; Pernice, Michael
2008-10-01
An implicit structured adaptive mesh refinement (SAMR) solver for 2D reduced magnetohydrodynamics (MHD) is described. The time-implicit discretization is able to step over fast normal modes, while the spatial adaptivity resolves thin, dynamically evolving features. A Jacobian-free Newton-Krylov method is used for the nonlinear solver engine. For preconditioning, we have extended the optimal "physics-based" approach developed in [L. Chacón, D.A. Knoll, J.M. Finn, An implicit, nonlinear reduced resistive MHD solver, J. Comput. Phys. 178 (2002) 15-36] (which employed multigrid solver technology in the preconditioner for scalability) to SAMR grids using the well-known Fast Adaptive Composite grid (FAC) method [S. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1989]. A grid convergence study demonstrates that the solver performance is independent of the number of grid levels and only depends on the finest resolution considered, and that it scales well with grid refinement. The study of error generation and propagation in our SAMR implementation demonstrates that high-order (cubic) interpolation during regridding, combined with a robustly damping second-order temporal scheme such as BDF2, is required to minimize impact of grid errors at coarse-fine interfaces on the overall error of the computation for this MHD application. We also demonstrate that our implementation features the desired property that the overall numerical error is dependent only on the finest resolution level considered, and not on the base-grid resolution or on the number of refinement levels present during the simulation. We demonstrate the effectiveness of the tool on several challenging problems.
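The Jacobian-free idea the abstract relies on is that Newton's linear solve never needs an assembled Jacobian, only products Jv, which a directional finite difference of the residual supplies. The sketch below shows that product on a toy 2-variable system; for brevity the columns it generates are fed to a direct 2x2 solve rather than to a Krylov method such as GMRES, and the preconditioning central to the paper is omitted entirely.

```python
def F(u):
    """Small nonlinear test system F(u) = 0 with a root at (1, 1)."""
    x, y = u
    return [x * x + y * y - 2.0, x - y]

def jv(F, u, v, eps=1e-7):
    """Matrix-free Jacobian-vector product: Jv ~ (F(u + eps v) - F(u)) / eps."""
    fu = F(u)
    fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(fp, fu)]

def solve2x2(A, b):
    """Direct solve of a 2x2 linear system (stand-in for a Krylov solver)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (b[1] * A[0][0] - b[0] * A[1][0]) / det]

def newton_jfnk(F, u, iters=20):
    n = len(u)
    for _ in range(iters):
        fu = F(u)
        if max(abs(f) for f in fu) < 1e-10:
            break
        # Assemble J columnwise from matrix-free products with unit vectors.
        # A true JFNK code instead hands `jv` directly to the Krylov solver.
        J = [[0.0] * n for _ in range(n)]
        for j in range(n):
            e = [1.0 if k == j else 0.0 for k in range(n)]
            col = jv(F, u, e)
            for i in range(n):
                J[i][j] = col[i]
        delta = solve2x2(J, [-f for f in fu])
        u = [ui + di for ui, di in zip(u, delta)]
    return u

u_star = newton_jfnk(F, [2.0, 0.5])
```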
Comparison of local grid refinement methods for MODFLOW
Mehl, S.; Hill, M.C.; Leake, S.A.
2006-01-01
Many ground water modeling efforts use a finite-difference method to solve the ground water flow equation, and many of these models require a relatively fine-grid discretization to accurately represent the selected process in limited areas of interest. Use of a fine grid over the entire domain can be computationally prohibitive; using a variably spaced grid can lead to cells with a large aspect ratio and refinement in areas where detail is not needed. One solution is to use local-grid refinement (LGR) whereby the grid is only refined in the area of interest. This work reviews some LGR methods and identifies advantages and drawbacks in test cases using MODFLOW-2000. The first test case is two dimensional and heterogeneous; the second is three dimensional and includes interaction with a meandering river. Results include simulations using a uniform fine grid, a variably spaced grid, a traditional method of LGR without feedback, and a new shared node method with feedback. Discrepancies from the solution obtained with the uniform fine grid are investigated. For the models tested, the traditional one-way coupled approaches produced discrepancies in head up to 6.8% and discrepancies in cell-to-cell fluxes up to 7.1%, while the new method has head and cell-to-cell flux discrepancies of 0.089% and 0.14%, respectively. Additional results highlight the accuracy, flexibility, and CPU time trade-off of these methods and demonstrate how the new method can be successfully implemented to model surface water-ground water interactions. Copyright © 2006 The Author(s).
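The discrepancies reported for one-way coupling are easy to reproduce in one dimension. The sketch below (a generic finite-difference Poisson problem, not MODFLOW) solves -u'' = f on a coarse grid, then refines a local patch whose boundary values are taken from the coarse solution with no feedback: the patch error stays at the level of the inherited coarse boundary error rather than at the fine grid's own, much smaller, truncation error.

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a=sub-, b=main, c=super-diagonal, d=rhs."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def poisson(xl, xr, ul, ur, n, f):
    """-u'' = f on (xl, xr) with Dirichlet values ul, ur; n interior nodes."""
    h = (xr - xl) / (n + 1)
    xs = [xl + (i + 1) * h for i in range(n)]
    d = [h * h * f(x) for x in xs]
    d[0] += ul
    d[-1] += ur
    return xs, thomas([-1.0] * n, [2.0] * n, [-1.0] * n, d)

f = lambda x: math.pi ** 2 * math.sin(math.pi * x)   # exact u = sin(pi x)

# Coarse solve on the whole domain (h = 0.1; interior nodes at 0.1 .. 0.9).
xc, uc = poisson(0.0, 1.0, 0.0, 0.0, 9, f)
coarse_err = max(abs(u - math.sin(math.pi * x)) for x, u in zip(xc, uc))

# One-way LGR: refine [0.3, 0.7] (h = 0.01), taking patch boundary values
# from coarse nodes 2 and 6 (x = 0.3, 0.7), with no feedback to the coarse grid.
xf, uf = poisson(0.3, 0.7, uc[2], uc[6], 39, f)
fine_err = max(abs(u - math.sin(math.pi * x)) for x, u in zip(xf, uf))

print(f"coarse max error {coarse_err:.2e}, one-way patch max error {fine_err:.2e}")
```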
Optical CD metrology model evaluation and refining for manufacturing
NASA Astrophysics Data System (ADS)
Wang, S.-B.; Huang, C. L.; Chiu, Y. H.; Tao, H. J.; Mii, Y. J.
2009-03-01
Optical critical dimension (OCD) metrology has been well accepted as a standard inline metrology tool in semiconductor manufacturing since the 65 nm technology node for its non-destructive and versatile advantages. Many geometry parameters can be obtained in a single measurement with good accuracy if the model is well established and calibrated by transmission electron microscopy (TEM). However, from the viewpoint of manufacturing, there is no effective index for model quality and, based on that, for model refining. Moreover, as device structures become more complicated, as in strained silicon technology, more parameters must be determined in each subsequent measurement. The model therefore requires more attention to ensure inline metrology reliability. GOF (goodness of fit), one model index given by a commercial OCD metrology tool, for example, is not sensitive enough, while the correlation and sensitivity coefficients, the other two indexes, are evaluated under metrology tool noise only and are not directly related to inline production measurement uncertainty. In this article, we propose a sensitivity matrix for measurement uncertainty estimation in which each entry is defined as the correlation coefficient between the corresponding two floating parameters and is obtained by linearization. The uncertainty is estimated in combination with production-line variation and is found, for the first time, to be much larger than that from metrology tool noise alone, which indicates that model quality control is critical for nanometer device production control. The uncertainty, in comparison with the production requirement, also serves as an index for model refining, either by grid-size rescaling or by structure model modification. This method is verified by TEM measurement and, finally, a flow chart for model refining is proposed.
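The flavor of such a linearized uncertainty estimate can be shown with the standard least-squares formula: given a Jacobian J of the modeled spectrum with respect to the floating parameters and measurement noise sigma, the parameter covariance is approximately sigma^2 (J^T J)^-1, and its off-diagonal entries yield parameter-parameter correlation coefficients. The numbers below are invented for illustration and are not taken from any OCD tool.

```python
import math

# Illustrative Jacobian: 3 spectral points x 2 floating parameters
# (values are made up; a real OCD model supplies these by linearization).
J = [[1.0, 0.8],
     [0.9, 1.0],
     [0.5, 0.2]]
sigma = 0.01  # assumed measurement noise, arbitrary units

# Normal matrix J^T J (2x2) and the covariance sigma^2 (J^T J)^-1.
jtj = [[sum(J[k][i] * J[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
det = jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]
cov = [[ sigma**2 * jtj[1][1] / det, -sigma**2 * jtj[0][1] / det],
       [-sigma**2 * jtj[1][0] / det,  sigma**2 * jtj[0][0] / det]]

# Per-parameter uncertainty and the parameter-parameter correlation.
sig0 = math.sqrt(cov[0][0])
sig1 = math.sqrt(cov[1][1])
rho = cov[0][1] / (sig0 * sig1)
print(f"sigma_p0 = {sig0:.4f}, sigma_p1 = {sig1:.4f}, rho = {rho:.3f}")
```

Because the two columns of J are nearly parallel, the parameters are strongly anticorrelated and each parameter's uncertainty is far larger than the raw noise level — the same effect the abstract reports for correlated floating parameters.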
Environmental control technology for mining, milling, and refining thorium
Weakley, S.A.; Blahnik, D.E.; Young, J.K.; Bloomster, C.H.
1980-02-01
The purpose of this report is to evaluate, in terms of cost and effectiveness, the various environmental control technologies that would be used to control the radioactive wastes generated in the mining, milling, and refining of thorium from domestic resources. The technologies, in order to be considered for study, had to reduce the radioactivity in the waste streams to meet Atomic Energy Commission (10 CFR 20) standards for natural thorium's maximum permissible concentration (MPC) in air and water. Further regulatory standards or licensing requirements, either federal, state, or local, were not examined. The availability and cost of producing thorium from domestic resources is addressed in a companion volume. The objectives of this study were: (1) to identify the major waste streams generated during the mining, milling, and refining of reactor-grade thorium oxide from domestic resources; and (2) to determine the cost and levels of control of existing and advanced environmental control technologies for these waste streams. Six potential domestic deposits of thorium oxide, in addition to stockpiled thorium sludges, are discussed in this report. A summary of the location and characteristics of the potential domestic thorium resources and the mining, milling, and refining processes that will be needed to produce reactor-grade thorium oxide is presented in Section 2. The wastes from existing and potential domestic thorium oxide mines, mills, and refineries are identified in Section 3. Section 3 also presents the state-of-the-art technology and the costs associated with controlling the wastes from the mines, mills, and refineries. In Section 4, the available environmental control technologies for mines, mills, and refineries are assessed. Section 5 presents the cost and effectiveness estimates for the various environmental control technologies applicable to the mine, mill, and refinery for each domestic resource.
Growth of CZT using additionally zone-refined raw materials
NASA Astrophysics Data System (ADS)
Knuteson, David J.; Berghmans, Andre; Kahler, David; Wagner, Brian; King, Matthew; Mclaughlin, Sean; Bolotnikov, Aleksey; James, Ralph; Singh, Narsingh B.
2012-10-01
Results will be presented for the growth of CdZnTe by the low-pressure Bridgman growth technique. To decrease deep-level trapping and improve detector performance, high-purity commercial raw materials will be further zone refined to reduce impurities. The purified materials will then be compounded into a charge for crystal growth. The crystals will be grown in the programmable multi-zone furnace (PMZF), which was designed and built at Northrop Grumman's Bethpage facility to grow CZT on Space Shuttle missions. Results of the purification and crystal growth will be presented as well as characterization of crystal quality and detector performance.
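The rationale for additional zone-refining passes follows from Pfann's classic single-pass relation: for an impurity with distribution coefficient k < 1 and molten-zone length L, one pass leaves a concentration C(x) = C0 [1 - (1 - k) exp(-k x / L)] over most of the ingot, so the leading end is depleted to roughly k C0. A sketch (the k and L values here are arbitrary illustrative choices, not CZT data):

```python
import math

def single_pass_profile(x, k, zone_len, c0=1.0):
    """Pfann's single-pass zone-refining profile C(x).

    Valid away from the final zone length of the ingot; k is the
    impurity distribution coefficient, zone_len the molten-zone length.
    """
    return c0 * (1.0 - (1.0 - k) * math.exp(-k * x / zone_len))

# For k = 0.1 the leading end is depleted to ~0.1 of the feed
# concentration; farther down the ingot C(x) recovers toward c0.
for x in (0.0, 2.0, 5.0, 10.0):
    print(f"x = {x:4.1f}  C/C0 = {single_pass_profile(x, 0.1, 1.0):.4f}")
```

Repeating the pass pushes the profile closer to the ultimate (multi-pass) distribution, which is why "additionally zone-refined" feedstock carries lower residual impurity levels.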
Refinement of Phobos Ephemeris Using Mars Orbiter Laser Altimeter Radiometry
NASA Technical Reports Server (NTRS)
Neumann, G. A.; Bills, B. G.; Smith, D. E.; Zuber, M. T.
2004-01-01
Radiometric observations from the Mars Orbiter Laser Altimeter (MOLA) can be used to improve the ephemeris of Phobos, with particular interest in refining estimates of the secular acceleration due to tidal dissipation within Mars. We have searched the Mars Orbiter Laser Altimeter (MOLA) radiometry data for shadows cast by the moon Phobos, finding 7 such profiles during the Mapping and Extended Mission phases, and 5 during the last two years of radiometry operations. Preliminary data suggest that the motion of Phobos has advanced by one or more seconds beyond that predicted by the current ephemerides, and the advance has increased over the 5 years of Mars Global Surveyor (MGS) operations.
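A second of along-track timing advance corresponds to a kilometers-scale shift in Phobos' position, which is why shadow timings are such a sensitive ephemeris constraint. A rough back-of-envelope under a circular-orbit approximation (the GM and orbit radius below are nominal published values):

```python
import math

GM_MARS = 4.2828e4   # km^3/s^2, gravitational parameter of Mars
A_PHOBOS = 9376.0    # km, approximate mean orbital radius of Phobos

# Circular-orbit speed v = sqrt(GM/a); an along-track timing advance of
# dt seconds then maps to roughly v*dt kilometers of position offset.
v = math.sqrt(GM_MARS / A_PHOBOS)
dt = 1.0
print(f"{dt:.0f} s of advance ~ {v * dt:.2f} km along-track")
```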
Crystal chemistry and structure refinement of five hydrated calcium borates
Clark, J.R.; Appleman, D.E.; Christ, C.L.
1964-01-01
The crystal structures of the five known members of the series Ca2B6O11·xH2O (x = 1, 5, 7, 9, and 13) have been refined by full-matrix least-squares techniques, yielding bond distances and angles with standard errors of less than 0.01 Å and 0.5°, respectively. The results illustrate the crystal chemical principles that govern the structures of hydrated borate compounds. The importance of hydrogen bonding in the ferroelectric transition of colemanite is confirmed by more accurate proton assignments. © 1964.
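The standard errors quoted here come from the usual full-matrix least-squares machinery: the parameter covariance is the residual variance times the inverse normal matrix. A miniature version of the same calculation for a straight-line fit (synthetic data, invented for illustration):

```python
import math

# Fit y = b0 + b1*x by normal equations and report standard errors from
# the inverse normal matrix -- full-matrix least squares in miniature.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 0.9, 2.1, 2.9]

n = len(xs)
Sx = sum(xs)
Sxx = sum(x * x for x in xs)
Sy = sum(ys)
Sxy = sum(x * y for x, y in zip(xs, ys))

det = n * Sxx - Sx * Sx                      # determinant of the normal matrix
b0 = (Sy * Sxx - Sx * Sxy) / det
b1 = (n * Sxy - Sx * Sy) / det

resid = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
s2 = sum(r * r for r in resid) / (n - 2)     # residual variance
se_b0 = math.sqrt(s2 * Sxx / det)            # standard error of intercept
se_b1 = math.sqrt(s2 * n / det)              # standard error of slope

print(f"b0 = {b0:.3f} +/- {se_b0:.3f}, b1 = {b1:.3f} +/- {se_b1:.3f}")
```

A crystallographic refinement does exactly this with hundreds of positional and thermal parameters, then propagates the parameter standard errors into the quoted bond-distance and bond-angle errors.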