Algorithm refinement for fluctuating hydrodynamics
Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.
2007-07-03
This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently developed solver for the LLNS equations based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second-moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.
Algorithm refinement for stochastic partial differential equations.
Alexander, F. J.; Garcia, Alejandro L.; Tartakovsky, D. M.
2001-01-01
A hybrid particle/continuum algorithm is formulated for Fickian diffusion in the fluctuating hydrodynamic limit. The particles are taken as independent random walkers; the fluctuating diffusion equation is solved by finite differences with deterministic and white-noise fluxes. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass conservation. This methodology is an extension of Adaptive Mesh and Algorithm Refinement to stochastic partial differential equations. A variety of numerical experiments were performed for both steady and time-dependent scenarios. In all cases the mean and variance of density are captured correctly by the stochastic hybrid algorithm. For a non-stochastic version (i.e., using only deterministic continuum fluxes) the mean density is correct, but the variance is reduced except within the particle region, far from the interface. Extensions of the methodology to fluid mechanics applications are discussed.
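The conservative flux-form update at the heart of the continuum half of this hybrid can be sketched in a few lines. The following minimal 1-D illustration (our own sketch, not the authors' code) combines a deterministic Fickian flux with a white-noise flux; the noise amplitude sqrt(2*D*rho/(dx*dt)) and all parameter values are assumptions of the sketch:

```python
import numpy as np

def fluctuating_diffusion_step(rho, D, dx, dt, rng):
    """One conservative update of the 1-D fluctuating diffusion equation.

    Face fluxes combine the deterministic Fickian part with a white-noise
    part; writing the update in flux form makes mass conservation exact,
    mirroring the flux-matching idea of the hybrid.
    """
    # deterministic Fickian flux at interior faces i+1/2
    flux = -D * (rho[1:] - rho[:-1]) / dx
    # stochastic flux with assumed amplitude sqrt(2*D*rho/(dx*dt)) at the face
    rho_face = 0.5 * (rho[1:] + rho[:-1])
    flux += (np.sqrt(np.maximum(2.0 * D * rho_face, 0.0) / (dx * dt))
             * rng.standard_normal(rho.size - 1))
    rho_new = rho.copy()
    rho_new[:-1] -= dt / dx * flux   # flux leaving cell i to the right
    rho_new[1:] += dt / dx * flux    # the same flux entering cell i+1
    return rho_new

rng = np.random.default_rng(0)
dx, dt, D = 1.0, 0.1, 1.0
rho = np.full(64, 10.0)
mass0 = rho.sum() * dx
for _ in range(1000):
    rho = fluctuating_diffusion_step(rho, D, dx, dt, rng)
print(abs(rho.sum() * dx - mass0) < 1e-8)  # True: mass conserved to round-off
```

Because every face flux is subtracted from one cell and added to its neighbour, total mass is conserved to round-off regardless of the noise, which is the property the paper's flux matching at the particle-continuum interface preserves exactly.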
Performance of a streaming mesh refinement algorithm.
Thompson, David C.; Pebay, Philippe Pierre
2004-08-01
In SAND report 2004-1617, we outline a method for edge-based tetrahedral subdivision that does not rely on saving state or communication to produce compatible tetrahedralizations. This report analyzes the performance of the technique by characterizing (a) mesh quality, (b) execution time, and (c) traits of the algorithm that could affect quality or execution time differently for different meshes. It also details the method used to debug the several hundred subdivision templates that the algorithm relies upon. Mesh quality is on par with other similar refinement schemes, and throughput on modern hardware can exceed 600,000 output tetrahedra per second; the traits of the algorithm that drive these results are detailed in the full report.
Fully implicit adaptive mesh refinement MHD algorithm
NASA Astrophysics Data System (ADS)
Philip, Bobby
2005-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite (FAC) algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations will be presented on a variety of problems.
A parallel adaptive mesh refinement algorithm
NASA Technical Reports Server (NTRS)
Quirk, James J.; Hanebutte, Ulf R.
1993-01-01
Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.
An adaptive mesh refinement algorithm for the discrete ordinates method
Jessee, J.P.; Fiveland, W.A.; Howell, L.H.; Colella, P.; Pember, R.B.
1996-03-01
The discrete ordinates form of the radiative transport equation (RTE) is spatially discretized and solved using an adaptive mesh refinement (AMR) algorithm. This technique permits the local grid refinement to minimize spatial discretization error of the RTE. An error estimator is applied to define regions for local grid refinement; overlapping refined grids are recursively placed in these regions; and the RTE is then solved over the entire domain. The procedure continues until the spatial discretization error has been reduced to a sufficient level. The following aspects of the algorithm are discussed: error estimation, grid generation, communication between refined levels, and solution sequencing. This initial formulation employs the step scheme, and is valid for absorbing and isotropically scattering media in two-dimensional enclosures. The utility of the algorithm is tested by comparing the convergence characteristics and accuracy to those of the standard single-grid algorithm for several benchmark cases. The AMR algorithm provides a reduction in memory requirements and maintains the convergence characteristics of the standard single-grid algorithm; however, the cases illustrate that efficiency gains of the AMR algorithm will not be fully realized until three-dimensional geometries are considered.
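The solve/estimate/refine cycle described in this abstract has a simple generic shape. The toy 1-D interpolation example below (a hypothetical illustration, unrelated to the RTE solver itself) inserts midpoints only where an error estimator exceeds a tolerance, the same loop structure the abstract outlines:

```python
import numpy as np

def adapt(f, grid, tol, max_iter=20):
    """Toy version of the solve/estimate/refine cycle: estimate the
    interpolation error on each interval and insert midpoints only where
    the estimate exceeds tol, mimicking local grid refinement."""
    grid = np.asarray(grid, dtype=float)
    for _ in range(max_iter):
        mids = 0.5 * (grid[:-1] + grid[1:])
        # error estimator: deviation of f from its linear interpolant
        err = np.abs(f(mids) - 0.5 * (f(grid[:-1]) + f(grid[1:])))
        if err.max() <= tol:
            break
        grid = np.sort(np.concatenate([grid, mids[err > tol]]))
    return grid

f = lambda x: np.tanh(10 * (x - 0.5))          # steep feature near x = 0.5
grid = adapt(f, np.linspace(0.0, 1.0, 5), tol=1e-3)
print(grid.size < 200)  # True: points cluster only near the steep region
```

A uniform grid meeting the same tolerance would need roughly the fine spacing everywhere; the adaptive loop spends points only where the estimator flags them, which is the memory saving the abstract reports.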
Operational algorithm development and refinement approaches
NASA Astrophysics Data System (ADS)
Ardanuy, Philip E.
2003-11-01
Next-generation polar and geostationary systems, such as the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Geostationary Operational Environmental Satellite (GOES)-R, will deploy new generations of electro-optical reflective and emissive capabilities. These will include low-radiometric-noise, improved spatial resolution multi-spectral and hyperspectral imagers and sounders. To achieve specified performances (e.g., measurement accuracy, precision, uncertainty, and stability), and best utilize the advanced space-borne sensing capabilities, a new generation of retrieval algorithms will be implemented. In most cases, these advanced algorithms benefit from ongoing testing and validation using heritage research mission algorithms and data [e.g., the Earth Observing System (EOS)] Moderate-resolution Imaging Spectroradiometer (MODIS) and Shuttle Ozone Limb Scattering Experiment (SOLSE)/Limb Ozone Retreival Experiment (LORE). In these instances, an algorithm's theoretical basis is not static, but rather improves with time. Once frozen, an operational algorithm can "lose ground" relative to research analogs. Cost/benefit analyses provide a basis for change management. The challenge is in reconciling and balancing the stability, and "comfort," that today"s generation of operational platforms provide (well-characterized, known, sensors and algorithms) with the greatly improved quality, opportunities, and risks, that the next generation of operational sensors and algorithms offer. By using the best practices and lessons learned from heritage/groundbreaking activities, it is possible to implement an agile process that enables change, while managing change. This approach combines a "known-risk" frozen baseline with preset completion schedules with insertion opportunities for algorithm advances as ongoing validation activities identify and repair areas of weak performance. This paper describes an objective, adaptive implementation roadmap that
Adaptive Mesh Refinement Algorithms for Parallel Unstructured Finite Element Codes
Parsons, I D; Solberg, J M
2006-02-03
This project produced algorithms for and software implementations of adaptive mesh refinement (AMR) methods for solving practical solid and thermal mechanics problems on multiprocessor parallel computers using unstructured finite element meshes. The overall goal is to provide computational solutions that are accurate to some prescribed tolerance, and adaptivity is the correct path toward this goal. These new tools will enable analysts to conduct more reliable simulations at reduced cost, both in terms of analyst and computer time. Previous academic research in the field of adaptive mesh refinement has produced a voluminous literature focused on error estimators and demonstration problems; relatively little progress has been made on producing efficient implementations suitable for large-scale problem solving on state-of-the-art computer systems. Research issues that were considered include: effective error estimators for nonlinear structural mechanics; local meshing at irregular geometric boundaries; and constructing efficient software for parallel computing environments.
Fully implicit adaptive mesh refinement algorithm for reduced MHD
NASA Astrophysics Data System (ADS)
Philip, Bobby; Pernice, Michael; Chacón, Luis
2006-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006)]
An adaptive grid-based all hexahedral meshing algorithm based on 2-refinement.
Edgel, Jared; Benzley, Steven E.; Owen, Steven James
2010-08-01
Most adaptive mesh generation algorithms employ a 3-refinement method. This method, although easy to employ, provides a mesh that is often too coarse in some areas and over-refined in others. Because it generates 27 new hexes in place of a single hex, it offers little control over mesh density. This paper presents an adaptive all-hexahedral grid-based meshing algorithm that employs a 2-refinement method, in which each hex to be refined is divided into eight new hexes. This allows much finer control of mesh density than a 3-refinement procedure, yielding a mesh that is efficient for analysis by providing high element density in specific locations and reduced density elsewhere. In addition, this tool can be effectively used for inside-out hexahedral grid-based schemes, using Cartesian structured grids for the base mesh, which have shown great promise in accommodating automatic all-hexahedral algorithms. The algorithm uses a two-layer transition zone to increase element quality and keep transitions from lower to higher mesh densities smooth, and templates were introduced to allow both convex and concave refinement.
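The basic 1-to-8 split underlying 2-refinement is easy to sketch directly. This minimal illustration (our own construction; the paper's transition templates and quality logic are omitted) splits one hexahedron via trilinear interpolation on a 3x3x3 lattice:

```python
import numpy as np

def refine_hex_2(corners):
    """1-to-8 split of a single hexahedron (the core of 2-refinement).

    corners has shape (2, 2, 2, 3): vertex (i, j, k) of the hex. New edge,
    face and center points come from trilinear interpolation on a 3x3x3
    lattice, so neighbouring children share faces exactly.
    """
    t = (0.0, 0.5, 1.0)
    lat = np.zeros((3, 3, 3, 3))
    for a, u in enumerate(t):
        for b, v in enumerate(t):
            for c, w in enumerate(t):
                p = np.zeros(3)
                for i in (0, 1):
                    for j in (0, 1):
                        for k in (0, 1):
                            wt = (1 - u, u)[i] * (1 - v, v)[j] * (1 - w, w)[k]
                            p += wt * corners[i, j, k]
                lat[a, b, c] = p
    # carve the 8 children out of the lattice
    return [lat[a:a + 2, b:b + 2, c:c + 2].copy()
            for a in (0, 1) for b in (0, 1) for c in (0, 1)]

cube = np.array([[[[i, j, k] for k in (0, 1)] for j in (0, 1)]
                 for i in (0, 1)], dtype=float)
kids = refine_hex_2(cube)
print(len(kids))  # 8 child hexes in place of one
```

Eight children per refined hex, versus 27 for 3-refinement, is exactly the density-control advantage the abstract describes.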
MISR research-aerosol-algorithm refinements for dark water retrievals
NASA Astrophysics Data System (ADS)
Limbacher, J. A.; Kahn, R. A.
2014-11-01
We explore systematically the cumulative effect of many assumptions made in the Multi-angle Imaging SpectroRadiometer (MISR) research aerosol retrieval algorithm with the aim of quantifying the main sources of uncertainty over ocean, and correcting them to the extent possible. A total of 1129 coincident, surface-based sun photometer spectral aerosol optical depth (AOD) measurements are used for validation. Based on comparisons between these data and our baseline case (similar to the MISR standard algorithm, but without the "modified linear mixing" approximation), for 558 nm AOD < 0.10, a high bias of 0.024 is reduced by about one-third when (1) ocean surface under-light is included and the assumed whitecap reflectance at 672 nm is increased, (2) physically based adjustments in particle microphysical properties and mixtures are made, (3) an adaptive pixel selection method is used, (4) spectral reflectance uncertainty is estimated from vicarious calibration, and (5) minor radiometric calibration changes are made for the 672 and 866 nm channels. Applying (6) more stringent cloud screening (setting the maximum fraction not-clear to 0.50) brings all median spectral biases to about 0.01. When all adjustments except more stringent cloud screening are applied, and a modified acceptance criterion is used, the Root-Mean-Square-Error (RMSE) decreases for all wavelengths by 8-27% for the research algorithm relative to the baseline, and is 12-36% lower than the RMSE for the Version 22 MISR standard algorithm (SA, with no adjustments applied). At 558 nm, 87% of AOD data falls within the greater of 0.05 or 20% of validation values; 62% of the 446 nm AOD data, and > 68% of 558, 672, and 866 nm AOD values fall within the greater of 0.03 or 10%. For the Ångström exponent (ANG), 67% of 1119 validation cases for AOD > 0.01 fall within 0.275 of the sun photometer values, compared to 49% for the SA. ANG RMSE decreases by 17% compared to the SA, and the median absolute error drops by
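The acceptance envelope used in these validation statistics is easy to state in code. Below is a small helper (toy numbers, not MISR data) computing the fraction of retrievals that fall within the greater of an absolute and a relative tolerance of the validation value:

```python
import numpy as np

def fraction_within_envelope(retrieved, truth, abs_tol=0.05, rel_tol=0.20):
    """Fraction of retrievals within max(abs_tol, rel_tol*|truth|) of the
    validation value -- the envelope behind statements such as "87% of
    AOD data fall within the greater of 0.05 or 20%"."""
    retrieved = np.asarray(retrieved, dtype=float)
    truth = np.asarray(truth, dtype=float)
    envelope = np.maximum(abs_tol, rel_tol * np.abs(truth))
    return float(np.mean(np.abs(retrieved - truth) <= envelope))

# toy numbers: only the first pair meets its envelope
frac = fraction_within_envelope([0.10, 0.30, 0.90], [0.08, 0.20, 0.60])
print(round(frac, 4))  # 0.3333
```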
A Refined Algorithm On The Estimation Of Residual Motion Errors In Airborne SAR Images
NASA Astrophysics Data System (ADS)
Zhong, Xuelian; Xiang, Maosheng; Yue, Huanyin; Guo, Huadong
2010-10-01
Due to the lack of accuracy in the navigation system, residual motion errors (RMEs) frequently appear in airborne SAR images. For very high resolution SAR imaging and repeat-pass SAR interferometry, these errors must be estimated and compensated. We previously proposed an algorithm to estimate the residual motion errors for an individual SAR image. It exploits point-like targets distributed along the azimuth direction, and not only corrects the phase but also improves the azimuth focusing. However, the required point targets are selected by hand, which is time- and labor-consuming, and the algorithm is sensitive to noise. In this paper, a refined algorithm is proposed that addresses these two shortcomings. With real X-band airborne SAR data, the feasibility and accuracy of the refined algorithm are demonstrated.
Using Small-Step Refinement for Algorithm Verification in Computer Science Education
ERIC Educational Resources Information Center
Simic, Danijela
2015-01-01
Stepwise program refinement techniques can be used to simplify program verification. Programs are better understood since their main properties are clearly stated, and verification of rather complex algorithms is reduced to proving simple statements connecting successive program specifications. Additionally, it is easy to analyse similar…
Refinement of Optical Imaging Spectroscopy Algorithms using concurrent BOLD and CBV fMRI
Kennerley, Aneurin J; Berwick, Jason; Martindale, John; Johnston, David; Zheng, Ying; Mayhew, John E
2009-01-01
We describe the use of the three dimensional characteristics of the functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) and cerebral blood volume (CBV) MRI signal changes to refine a two dimensional optical imaging spectroscopy (OIS) algorithm. The cortical depth profiles of the BOLD and CBV changes following neural activation were used to parameterise a 5-layer heterogeneous tissue model used in the Monte Carlo simulations (MCS) of light transport through tissue in the OIS analysis algorithm. To transform the fMRI BOLD and CBV measurements into deoxy-haemoglobin (Hbr) profiles we inverted an MCS of extravascular MR signal attenuation under the assumption that the extra-/intravascular ratio is 2:1 at a magnetic field strength of 3T. The significant improvement in the quantitative accuracy of haemodynamic measurements using the new heterogeneous tissue model over the original homogeneous tissue model OIS algorithm was demonstrated on new concurrent OIS and fMRI data covering a range of stimulus durations. PMID:19505581
NASA Astrophysics Data System (ADS)
Eaves, Nick A.; Zhang, Qingan; Liu, Fengshan; Guo, Hongsheng; Dworkin, Seth B.; Thomson, Murray J.
2016-10-01
Mitigation of soot emissions from combustion devices is a global concern. For example, recent EURO 6 regulations for vehicles have placed stringent limits on soot emissions. In order to allow design engineers to achieve the goal of reduced soot emissions, they must have the tools to do so. Due to the complex nature of soot formation, which includes growth and oxidation, detailed numerical models are required to gain fundamental insights into the mechanisms of soot formation. A detailed description of the CoFlame FORTRAN code, which models sooting laminar coflow diffusion flames, is given. The code solves axial and radial velocity, temperature, species conservation, and soot aggregate and primary particle number density equations. The sectional particle dynamics model includes nucleation, PAH condensation and HACA surface growth, surface oxidation, coagulation, fragmentation, particle diffusion, and thermophoresis. The code utilizes a distributed memory parallelization scheme with strip-domain decomposition. The CoFlame code, which has been refined in terms of coding structure, is publicly released to the research community with this paper. CoFlame is validated against experimental data for reattachment length in an axi-symmetric pipe with a sudden expansion, and against ethylene-air and methane-air diffusion flames for multiple soot morphological parameters and gas-phase species. Finally, the parallel performance and computational costs of the code are investigated. Catalogue identifier: AFAU_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFAU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 94964 No. of bytes in distributed program, including test data, etc.: 6242986 Distribution format: tar.gz Programming language: Fortran 90, MPI. (Requires an Intel compiler). Computer: Workstations
NASA Astrophysics Data System (ADS)
Guo, Z.; Xiong, S. M.
2015-05-01
An algorithm comprising adaptive mesh refinement (AMR) and parallel computing capabilities was developed to efficiently solve the coupled phase field equations in 3-D. The AMR was achieved based on a gradient criterion and the point clustering algorithm introduced by Berger (1991). To reduce the time for mesh generation, a dynamic regridding approach was developed based on the magnitude of the maximum phase advancing velocity. Local data at each computing process were then constructed, and parallel computation was realized based on the hierarchical grid structure created during the AMR. Numerical tests and simulations of single and multi-dendrite growth were performed; the results show that the proposed algorithm shortens the computing time of 3-D phase field simulations by about two orders of magnitude and enables one to gain much more insight into the underlying physics of dendrite growth during solidification.
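A gradient refinement criterion like the one used here can be sketched as a flagging step; the clustering of flags into patches (Berger's point clustering) is left out. The field and threshold choices below are purely illustrative:

```python
import numpy as np

def flag_cells(phi, dx, threshold):
    """Gradient-criterion flagging: mark cells whose centered-difference
    gradient magnitude of the field phi exceeds threshold. A clustering
    step would then group the flagged cells into refinement patches."""
    gx, gy = np.gradient(phi, dx)
    return np.hypot(gx, gy) > threshold

x = np.linspace(-1.0, 1.0, 65)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.tanh((np.hypot(X, Y) - 0.5) / 0.05)   # sharp circular front
flags = flag_cells(phi, x[1] - x[0], threshold=5.0)
print(bool(0 < flags.mean() < 0.5))  # True: only a thin annulus is flagged
```

Only the narrow band around the moving front is flagged, which is why AMR pays off so strongly for dendrite-growth problems where the interface occupies a tiny fraction of the domain.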
A 3-D adaptive mesh refinement algorithm for multimaterial gas dynamics
Puckett, E. G.; Saltzman, J. S.
1991-08-12
Adaptive Mesh Refinement (AMR) in conjunction with high order upwind finite difference methods has been used effectively on a variety of problems. In this paper we discuss an implementation of an AMR finite difference method that solves the equations of gas dynamics with two material species in three dimensions. An equation for the evolution of volume fractions augments the gas dynamics system. The material interface is preserved and tracked from the volume fractions using a piecewise linear reconstruction technique. 14 refs., 4 figs.
NASA Astrophysics Data System (ADS)
Suvorov, A. S.; Sokov, E. M.; V'yushkina, I. A.
2016-09-01
A new method is presented for the automatic refinement of finite element models of complex mechanical-acoustic systems using the results of experimental studies. The method is based on control of the spectral characteristics via selection of the optimal distribution of adjustments to the stiffness of a finite element mesh. The results of testing the method are given to show the possibility of its use to significantly increase the simulation accuracy of vibration characteristics of bodies with arbitrary spatial configuration.
Refinement-Cut: User-Guided Segmentation Algorithm for Translational Science
Egger, Jan
2014-01-01
In this contribution, a semi-automatic segmentation algorithm for (medical) image analysis is presented. More precisely, the approach belongs to the category of interactive contouring algorithms, which provide real-time feedback of the segmentation result. However, even with interactive real-time contouring approaches there are always cases where the user cannot find a satisfying segmentation, e.g. due to homogeneous appearance between the object and the background, or noise inside the object. For these difficult cases the algorithm still needs additional user support. However, this additional user support should be intuitive and rapidly integrated into the segmentation process, without breaking the interactive real-time segmentation feedback. I propose a solution where the user can support the algorithm by an easy and fast placement of one or more seed points to guide the algorithm to a satisfying segmentation result in difficult cases. These additional seeds restrict the calculation of the segmentation for the algorithm, but at the same time still allow the interactive real-time feedback segmentation to continue. For a practical and genuine application in translational science, the approach has been tested on medical data from the clinical routine in 2D and 3D. PMID:24893650
Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process
NASA Astrophysics Data System (ADS)
Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh
2016-06-01
Layered manufacturing machines use the stereolithography (STL) file format to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, the result suffers geometrical distortion and chordal error. Parts manufactured from such a file might not satisfy geometric dimensioning and tolerance requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered are the modified butterfly subdivision technique, the Loop subdivision technique, and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is more suitable for the geometry under consideration. Only the wheel cap part is then manufactured on a Stratasys MOJO FDM machine, and the surface roughness of the part is measured on a Talysurf surface roughness tester.
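The general triangular midpoint subdivision evaluated above is the simplest of the three schemes. A minimal sketch of one 1-to-4 pass (our own illustration, ignoring STL I/O and shared-vertex bookkeeping):

```python
import numpy as np

def midpoint_subdivide(triangles):
    """One pass of general triangular midpoint (1-to-4) subdivision: each
    facet is replaced by four children built from its edge midpoints.
    Since children lie in the facet's plane, flat regions stay flat and
    the enclosed volume of a closed mesh is unchanged."""
    out = []
    for tri in triangles:
        a, b, c = tri
        ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
        out += [np.array([a, ab, ca]), np.array([ab, b, bc]),
                np.array([ca, bc, c]), np.array([ab, bc, ca])]
    return out

tris = [np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])]
refined = midpoint_subdivide(tris)
print(len(refined))  # 4 facets replace the single input facet
```

Unlike Loop or modified butterfly subdivision, the midpoint scheme does not reposition vertices, which is consistent with it preserving the part volume in the comparison above.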
A node-centered local refinement algorithm for poisson's equation in complex geometries
McCorquodale, Peter; Colella, Phillip; Grote, David P.; Vay, Jean-Luc
2004-05-04
This paper presents a method for solving Poisson's equation with Dirichlet boundary conditions on an irregular bounded three-dimensional region. The method uses a nodal-point discretization and adaptive mesh refinement (AMR) on Cartesian grids, and the AMR multigrid solver of Almgren. The discrete Laplacian operator at internal boundaries comes from either linear or quadratic (Shortley-Weller) extrapolation, and the two methods are compared. It is shown that either way, solution error is second order in the mesh spacing. Error in the gradient of the solution is first order with linear extrapolation, but second order with Shortley-Weller. Examples are given with comparison with the exact solution. The method is also applied to a heavy-ion fusion accelerator problem, showing the advantage of adaptivity.
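The Shortley-Weller stencil at unequal arm lengths, central to the comparison above, is compact enough to sketch in one function (1-D form for clarity; our own illustration):

```python
def sw_second_derivative(uL, u0, uR, h1, h2):
    """Shortley-Weller 3-point second derivative with unequal arms h1
    (to the left neighbour or boundary point) and h2 (to the right), as
    arises at nodes next to an irregular boundary. It is exact for
    quadratics; although its local truncation error is only first order,
    solutions computed with it are second-order accurate."""
    return 2.0 * ((uR - u0) / h2 - (u0 - uL) / h1) / (h1 + h2)

# u(x) = x**2 sampled with uneven arms recovers u'' = 2 to round-off
h1, h2 = 0.3, 0.7
d2 = sw_second_derivative(h1**2, 0.0, h2**2, h1, h2)
print(abs(d2 - 2.0) < 1e-12)  # True
```

Linear extrapolation to the boundary, by contrast, is exact only for linear functions, which is consistent with the paper's finding that the gradient of the solution is first order with linear extrapolation but second order with Shortley-Weller.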
NASA Technical Reports Server (NTRS)
Wang, Menghua
2003-01-01
The primary focus of this proposed research is atmospheric correction algorithm evaluation and development, and satellite sensor calibration and characterization. It is well known that the atmospheric correction, which removes more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the derived water-leaving signals from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address issues of (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of this research is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has gone into (a) understanding and correcting the artifacts that appeared in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color remote sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) vs. the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Color Coordinating Group (IOCCG) atmospheric correction working group. In this report, I briefly present and discuss these and some other research activities.
Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.
2009-06-15
A description is given of the algorithms implemented in the AstroBEAR adaptive mesh-refinement code for ideal magnetohydrodynamics. The code provides several high-resolution shock-capturing schemes which are constructed to maintain conserved quantities of the flow in a finite-volume sense. Divergence-free magnetic field topologies are maintained to machine precision by collating the components of the magnetic field on a cell-interface staggered grid and utilizing the constrained transport approach for integrating the induction equations. The maintenance of magnetic field topologies on adaptive grids is achieved using prolongation and restriction operators which preserve the divergence and curl of the magnetic field across collocated grids of different resolutions. The robustness and correctness of the code is demonstrated by comparing the numerical solution of various tests with analytical solutions or previously published numerical solutions obtained by other codes.
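The divergence-preserving property of constrained transport can be demonstrated on a small staggered grid. This 2-D sketch (our own toy setup, not AstroBEAR code) applies one induction update with an arbitrary corner EMF and checks that the per-cell divergence of B stays at round-off:

```python
import numpy as np

def ct_induction_step(Bx, By, Ez, dx, dy, dt):
    """Constrained-transport induction update on a 2-D staggered grid:
    Bx on x-faces (nx+1, ny), By on y-faces (nx, ny+1), the EMF Ez on
    cell corners (nx+1, ny+1). Each corner EMF enters the four touching
    faces with cancelling signs, so the cell divergence of B is
    preserved to machine precision."""
    Bx = Bx - dt / dy * (Ez[:, 1:] - Ez[:, :-1])   # dBx/dt = -dEz/dy
    By = By + dt / dx * (Ez[1:, :] - Ez[:-1, :])   # dBy/dt = +dEz/dx
    return Bx, By

def cell_divergence(Bx, By, dx, dy):
    return (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy

rng = np.random.default_rng(1)
nx = ny = 16
dx = dy = dt = 1.0
Bx, By = np.zeros((nx + 1, ny)), np.zeros((nx, ny + 1))  # div B = 0
Ez = rng.standard_normal((nx + 1, ny + 1))               # arbitrary EMF
Bx, By = ct_induction_step(Bx, By, Ez, dx, dy, dt)
print(bool(np.abs(cell_divergence(Bx, By, dx, dy)).max() < 1e-12))  # True
```

The harder part on adaptive grids, as the abstract notes, is building prolongation and restriction operators that carry this discrete divergence-free property across resolution levels.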
Refined algorithms for star-based monitoring of GOES Imager visible-channel responsivities
NASA Astrophysics Data System (ADS)
Chang, I.-Lok; Dean, Charles; Li, Zhenping; Weinreb, Michael; Wu, Xiangqian; Swamy, P. A. V. B.
2012-09-01
Monitoring the responsivities of the visible channels of the Imagers on GOES satellites is a continuing effort at the National Environmental Satellite, Data and Information Service of NOAA. At this point, a large part of the initial processing of the star data depends on the operational GOES Sensor Processing System (SPS) and GOES Orbit and Attitude Tracking System (OATS) for detecting the presence of stars and computing the amplitudes of the star signals. However, the algorithms of the SPS and the OATS are not optimized for calculating the amplitudes of the star signals, as they had been developed to determine pixel location and observation time of a star, not amplitude. Motivated by our wish to be independent of the SPS and the OATS for data processing and to improve the accuracy of the computed star signals, we have developed our own methods for such computations. We describe the principal algorithms and discuss their implementation. Next we show our monitoring statistics derived from star observations by the Imagers aboard GOES-8, -10, -11, -12 and -13. We give a brief introduction to a new class of time series that have improved the stability and reliability of our degradation estimates.
NASA Technical Reports Server (NTRS)
Davis, M. W.
1984-01-01
A Real-Time Self-Adaptive (RTSA) active vibration controller was used as the framework in developing a computer program for a generic controller that can be used to alleviate helicopter vibration. Based upon on-line identification of system parameters, the generic controller minimizes vibration in the fuselage by closed-loop implementation of higher harmonic control in the main rotor system. The new generic controller incorporates a set of improved algorithms that gives the capability to readily define many different configurations by selecting one of three different controller types (deterministic, cautious, and dual), one of two linear system models (local and global), and one or more of several methods of applying limits on control inputs (external and/or internal limits on higher harmonic pitch amplitude and rate). A helicopter rotor simulation analysis was used to evaluate the algorithms associated with the alternative controller types as applied to the four-bladed H-34 rotor mounted on the NASA Ames Rotor Test Apparatus (RTA), which represents the fuselage. After proper tuning, all three controllers provide more effective vibration reduction and converge more quickly and smoothly with smaller control inputs than the initial RTSA controller (deterministic with external pitch-rate limiting). It is demonstrated that internal limiting of the control inputs significantly improves the overall performance of the deterministic controller.
A revised partiality model and post-refinement algorithm for X-ray free-electron laser data
Ginn, Helen Mary; Brewster, Aaron S.; Hattne, Johan; Evans, Gwyndaf; Wagner, Armin; Grimes, Jonathan M.; Sauter, Nicholas K.; Sutton, Geoff; Stuart, David Ian
2015-05-23
An updated partiality model and post-refinement algorithm for XFEL snapshot diffraction data is presented and confirmed by observing anomalous density for S atoms at an X-ray wavelength of 1.3 Å. Research towards using X-ray free-electron laser (XFEL) data to solve structures using experimental phasing methods such as sulfur single-wavelength anomalous dispersion (SAD) has been hampered by shortcomings in the diffraction models for X-ray diffraction from FELs. Owing to errors in the orientation matrix and overly simple partiality models, researchers have required large numbers of images to converge to reliable estimates for the structure-factor amplitudes, which may not be feasible for all biological systems. Here, data for cytoplasmic polyhedrosis virus type 17 (CPV17) collected at 1.3 Å wavelength at the Linac Coherent Light Source (LCLS) are revisited. A previously published definition of a partiality model for reflections illuminated by self-amplified spontaneous emission (SASE) pulses is built upon, which defines a fraction between 0 and 1 based on the intersection of a reflection with a spread of Ewald spheres modelled by a super-Gaussian wavelength distribution in the X-ray beam. A method of post-refinement to refine the parameters of this model is suggested. This has generated a merged data set with an overall discrepancy (by calculating the Rsplit value) of 3.15% to 1.46 Å resolution from a 7225-image data set. The atomic numbers of C, N and O atoms in the structure are distinguishable in the electron-density map. There are 13 S atoms within the 237 residues of CPV17, excluding the initial disordered methionine. These only possess 0.42 anomalous scattering electrons each at 1.3 Å wavelength, but the 12 that have single predominant positions are easily detectable in the anomalous difference Fourier map. It is hoped that these improvements will lead towards XFEL experimental phase determination and structure determination by sulfur SAD and will generally increase the utility of the method for difficult cases.
Gisdon, Florian J; Culka, Martin; Ullmann, G Matthias
2016-10-01
Conjugate peak refinement (CPR) is a powerful and robust method to search for transition states on a molecular potential energy surface. Nevertheless, the method had, to the best of our knowledge, so far only been implemented in CHARMM. In this paper, we present PyCPR, a new Python-based implementation of the CPR algorithm within the pDynamo framework. We provide a detailed description of the theory underlying our implementation and discuss the different parts of the implementation. The method is applied to two different problems. First, we illustrate the method by analyzing the gauche to anti-periplanar transition of butane using a semiempirical QM method. Second, we reanalyze the mechanism of a glycyl-radical enzyme, namely of 4-hydroxyphenylacetate decarboxylase (HPD), using QM/MM calculations. In the end, we suggest a strategy for using our implementation of the CPR algorithm. The integration of PyCPR into the pDynamo framework allows the combination of CPR with the large variety of methods implemented in pDynamo. PyCPR can be used in combination with quantum mechanical and molecular mechanical methods (and hybrid methods) implemented directly in pDynamo, but also in combination with external programs such as ORCA, using pDynamo as an interface. PyCPR is distributed as free, open source software and can be downloaded from http://www.bisb.uni-bayreuth.de/index.php?page=downloads . Graphical Abstract PyCPR is a search tool for finding saddle points on the potential energy landscape of a molecular system. PMID:27651280
Commentary to "Multiple Grammars and Second Language Representation," by Luiz Amaral and Tom Roeper
ERIC Educational Resources Information Center
Pérez-Leroux, Ana T.
2014-01-01
In this commentary, the author defends the Multiple Grammars (MG) theory proposed by Luiz Amaral and Tom Roeper (A&R) in the present issue. Topics discussed include second language acquisition, the concept of developmental optionality, and the idea that structural decisions involve the lexical dimension. The author states that A&R's…
Omnivorous Representation Might Lead to Indigestion: Commentary on Amaral and Roeper
ERIC Educational Resources Information Center
Slabakova, Roumyana
2014-01-01
This article offers commentary that the Multiple Grammar (MG) language acquisition theory proposed by Luiz Amaral and Tom Roeper (A&R) in the present issue lacks elaboration of the psychological mechanisms at work in second language acquisition. Topics discussed include optionality in a speaker's grammar and the rules of verb position in…
Shan, Hong; Wang, Zihao; Zhang, Fa; Xiong, Yong; Yin, Chang-Cheng; Sun, Fei
2016-01-01
Single particle analysis, which can be regarded as an average of signals from thousands or even millions of particle projections, is an efficient method to study the three-dimensional structures of biological macromolecules. An intrinsic assumption in single particle analysis is that all the analyzed particles must have identical composition and conformation. Thus specimen heterogeneity in either composition or conformation has raised great challenges for high-resolution analysis. For particles with multiple conformations, inaccurate alignments and orientation parameters will yield an averaged map with diminished resolution and smeared density. Besides extensive classification approaches, here, based on the assumption that the macromolecular complex is made up of multiple rigid modules whose relative orientations and positions fluctuate slightly around equilibrium, we propose a new method called local optimization refinement to address this conformational heterogeneity for improved resolution. The key idea is to optimize the orientation and shift parameters of each rigid module and then reconstruct their three-dimensional structures individually. Using simulated data of 80S/70S ribosomes with relative fluctuations between the large (60S/50S) and the small (40S/30S) subunits, we tested this algorithm and found that the resolutions of both subunits are significantly improved. Our method provides a proof-of-principle solution for high-resolution single particle analysis of macromolecular complexes with dynamic conformations.
Low-thrust orbit transfer optimization with refined Q-law and multi-objective genetic algorithm
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Petropoulos, Anastassios E.; von Allmen, Paul
2005-01-01
An optimization method for low-thrust orbit transfers around a central body is developed using the Q-law and a multi-objective genetic algorithm. In the hybrid method, the Q-law generates candidate orbit transfers, and the multi-objective genetic algorithm optimizes the Q-law control parameters in order to simultaneously minimize both the consumed propellant mass and flight time of the orbit transfer. This paper addresses the problem of finding optimal orbit transfers for low-thrust spacecraft.
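The key multi-objective step described above is keeping only the transfers that are not dominated in both objectives (propellant mass and flight time). A minimal sketch of that Pareto-front selection, with hypothetical candidate values (the function name and numbers are illustrative, not from the paper):

```python
# Hypothetical illustration: selecting the Pareto-optimal (non-dominated)
# orbit transfers from a candidate set, each scored by propellant mass [kg]
# and flight time [days], both to be minimized.

def pareto_front(candidates):
    """Return candidates not dominated by any other candidate.

    Candidate a dominates b if a is no worse in both objectives and,
    being a distinct point, strictly better in at least one.
    """
    front = []
    for a in candidates:
        dominated = any(
            b[0] <= a[0] and b[1] <= a[1] and b != a
            for b in candidates
        )
        if not dominated:
            front.append(a)
    return front

transfers = [(120.0, 30.0), (100.0, 45.0), (110.0, 50.0), (130.0, 25.0)]
print(pareto_front(transfers))  # → [(120.0, 30.0), (100.0, 45.0), (130.0, 25.0)]
```

Here (110.0, 50.0) is dropped because (100.0, 45.0) uses less propellant in less time; the rest form the trade-off curve a mission designer would choose from.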
NASA Astrophysics Data System (ADS)
Bay, Annick; Mayer, Alexandre
2014-09-01
The efficiency of light-emitting diodes (LED) has increased significantly over the past few years, but the overall efficiency is still limited by total internal reflections due to the high dielectric-constant contrast between the incident and emergent media. The bioluminescent organ of fireflies gave incentive for light-extraction enhancement studies. A specific factory-roof shaped structure was shown, by means of light-propagation simulations and measurements, to enhance light extraction significantly. In order to achieve a similar effect for light-emitting diodes, the structure needs to be adapted to the specific set-up of LEDs. In this context simulations were carried out to determine the best geometrical parameters. In the present work, the search for a geometry that maximizes the extraction of light has been conducted by using a genetic algorithm. The idealized structure considered previously was generalized to a broader variety of shapes. The genetic algorithm makes it possible to search simultaneously over a wider range of parameters. It is also significantly less time-consuming than the previous approach that was based on a systematic scan on parameters. The results of the genetic algorithm show that (1) the calculations can be performed in a smaller amount of time and (2) the light extraction can be enhanced even more significantly by using optimal parameters determined by the genetic algorithm for the generalized structure. The combination of the genetic algorithm with the Rigorous Coupled Waves Analysis method constitutes a strong simulation tool, which provides us with adapted designs for enhancing light extraction from light-emitting diodes.
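The genetic search described above can be sketched in a few lines: a population of geometry parameter sets is repeatedly selected, recombined and mutated toward higher fitness. This is a minimal, hypothetical sketch; the quadratic fitness function is a stand-in for the Rigorous Coupled Waves Analysis simulation the paper actually couples to, and the parameter names are assumed:

```python
# Minimal real-coded genetic algorithm sketch (assumed details, not the
# paper's implementation): evolve two geometric parameters of a
# light-extracting structure toward maximum fitness.
import random

random.seed(0)

def fitness(params):
    # Stand-in for an RCWA light-extraction computation: a smooth peak
    # at (height, period) = (1.0, 0.6) in arbitrary units.
    h, p = params
    return -((h - 1.0) ** 2 + (p - 0.6) ** 2)

def evolve(pop, generations=50, mutation=0.05):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]          # truncation selection
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            # blend crossover plus Gaussian mutation
            child = [(x + y) / 2 + random.gauss(0, mutation)
                     for x, y in zip(a, b)]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

pop = [[random.uniform(0, 2), random.uniform(0, 2)] for _ in range(20)]
best = evolve(pop)
print(best)  # parameters near the stand-in optimum (1.0, 0.6)
```

Replacing the stand-in `fitness` with an electromagnetic solver call is what turns this toy loop into the kind of design tool the abstract describes.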
NASA Astrophysics Data System (ADS)
Sides, Scott; Jamroz, Ben; Crockett, Robert; Pletzer, Alexander
2012-02-01
Self-consistent field theory (SCFT) for dense polymer melts has been highly successful in describing complex morphologies in block copolymers. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. The modified diffusion equations that arise as a consequence of the coarse-graining procedure in the SCF theory can be efficiently solved with a pseudo-spectral (PS) method that uses fast Fourier transforms on uniform Cartesian grids. However, PS methods can be difficult to apply in many block copolymer SCFT simulations (e.g. confinement, interface adsorption) in which small spatial regions might require finer resolution than most of the simulation grid. Progress on using new solver algorithms to address these problems will be presented. The Tech-X Chompst project aims at marrying the best of adaptive mesh refinement with linear matrix solver algorithms. The Tech-X code PolySwift++ is an SCFT simulation platform that leverages ongoing development in coupling Chombo, a package for solving PDEs via block-structured AMR calculations and embedded boundaries, with PETSc, a toolkit that includes a large assortment of sparse linear solvers.
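The pseudo-spectral update mentioned above is commonly written with operator splitting: a half-step in the potential field, a full diffusion step applied in Fourier space, then another half-step in the potential. A minimal 1-D sketch under assumed details (the grid, field `w` and step size are illustrative, not from the paper):

```python
# Sketch of one pseudo-spectral step for the modified diffusion equation
# dq/ds = ∇²q − w q on a uniform periodic 1-D grid, via operator splitting.
import numpy as np

def ps_step(q, w, ds, L):
    """Half potential step, full diffusion step in Fourier space, half potential step."""
    n = q.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)              # wavenumbers
    q = q * np.exp(-0.5 * ds * w)                             # potential half-step
    q = np.fft.ifft(np.exp(-ds * k**2) * np.fft.fft(q)).real  # exact diffusion per mode
    q = q * np.exp(-0.5 * ds * w)                             # potential half-step
    return q

L = 2.0 * np.pi
x = np.linspace(0.0, L, 64, endpoint=False)
q = np.ones_like(x)                 # initial propagator
w = 0.5 * (1.0 + np.cos(x))         # stand-in chemical potential field
for _ in range(100):
    q = ps_step(q, w, ds=0.01, L=L)
```

Because each diffusion mode is advanced exactly by `exp(-ds k²)`, the scheme is unconditionally stable in the diffusion part, which is what makes the uniform-grid FFT approach so efficient, and also why non-uniform (AMR) grids require different solvers.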
Adaptive mesh refinement techniques for electrical impedance tomography.
Molinari, M; Cox, S J; Blott, B H; Daniell, G J
2001-02-01
Adaptive mesh refinement techniques can be applied to increase the efficiency of electrical impedance tomography reconstruction algorithms by reducing computational and storage cost as well as providing problem-dependent solution structures. A self-adaptive refinement algorithm based on an a posteriori error estimate has been developed and its results are shown in comparison with uniform mesh refinement for a simple head model.
NASA Astrophysics Data System (ADS)
Frey, F. A.; Walker, N.; Stakes, D.; Hart, S. R.; Nielsen, R.
1993-03-01
The axial valley of the Mid-Atlantic Ridge from 36° to 37°N was intensively sampled by submersible during the FAMOUS and AMAR projects. Our research focussed on the compositional and isotopic characteristics of basaltic glasses from the AMAR valley and the NARROWGATE region of the FAMOUS valley. These basaltic glasses are characterized by: (1) major element abundance trends that are consistent with control by multiphase fractionation (olivine, plagioclase and clinopyroxene) and magma mixing, (2) near isotopic homogeneity (δ¹⁸O = 5.2 to 6.4, ⁸⁷Sr/⁸⁶Sr = 0.70288 to 0.70299 and ²⁰⁶Pb/²⁰⁴Pb = 18.57 to 18.84), and (3) a wide range of incompatible element abundance ratios; e.g., within the AMAR valley chondrite-normalized La/Sm ranges from 0.7 to 1.5 and La/Yb from 0.6 to 1.6. These ratios increase with decreasing MgO content. Because of the limited variations in isotopic ratios of Sr, Nd and Pb, it is plausible that these compositional variations reflect post-melting magmatic processes. However, it is not possible to explain the correlated variation in MgO content and incompatible element abundance ratios, such as La/Sm and Zr/Nb, by fractional crystallization or more complex processes such as boundary layer fractionation. These geochemical trends can be explained by mixing of parental magmas that formed by very different extents of melting. In particular, the factor of three variation in Ce content in samples with ~2.1% Na₂O and 8% MgO requires a component derived by < 1% melting. If the large variations in abundance ratios of incompatible elements reflect the melting process, a large, long-lived magma chamber was not present during eruption of these AMAR lavas. The geological characteristics of the AMAR valley and the compositions of AMAR lavas are consistent with episodic volcanism; i.e., periods of magma eruption were followed by extensive periods of tectonism with little or no magmatism.
NASA Astrophysics Data System (ADS)
Ragusa, Maria Alessandra; Russo, Giulia
2016-07-01
Ben Amar and Bianca provide a valuable review of the state of the art in fibrosis modeling [1]. Each paragraph identifies and examines a specific theoretical tool according to its scale level (molecular, cellular or tissue). For each, the area of application is shown, along with a clear description of strong and weak points. This critical analysis highlights the need to develop a more suitable and original multiscale approach in the future [2].
NASA Astrophysics Data System (ADS)
Reba, M. L.; Marks, D.; Link, T.; Pomeroy, J.; Winstral, A.
2007-12-01
Energy balance models use physically based principles to simulate snow cover accumulation and melt. Snobal, a snow cover energy balance model, uses a flux-profile approach to calculating the turbulent flux (sensible and latent heat flux) components of the energy balance. Historically, validation data for turbulent flux simulations have been difficult to obtain at snow dominated sites characterized by complex terrain and heterogeneous vegetation. Currently, eddy covariance (EC) is the most defensible method available to measure turbulent flux and hence to validate this component of an energy balance model. EC was used to measure sensible and latent heat flux at two sites over three winter seasons (2004, 2005, and 2006). Both sites are located in Reynolds Creek Experimental Watershed in southwestern Idaho, USA and are characterized as semi-arid rangeland. One site is on a wind-exposed ridge with small shrubs and the other is in a wind-protected area in a small aspen stand. EC data were post-processed from 10 Hz measurements. The first objective of this work was to compare EC-measured sensible and latent heat flux and sublimation/condensation to Snobal-simulated values. Comparisons were made on several temporal scales, including inter-annual, seasonal and diurnal. The flux-profile method used in Snobal assumes equal roughness lengths for moisture and temperature, and roughness lengths are constant and not a function of stability. Furthermore, there has been extensive work on improving profile function constants that is not considered in the current version of Snobal. Therefore, the second objective of this work was to modify the turbulent flux algorithm in Snobal. Modifications were made to calculate roughness lengths as a function of stability and separately for moisture and temperature. Also, more recent formulations of the profile function constants were incorporated. The third objective was to compare EC-measured sensible and latent heat flux and sublimation
Bachevalier, Jocelyne; Málková, Ludise
2006-08-01
Nonhuman primate studies, using selective amygdala lesions that spare cortical areas and fibers of passage, have helped to clarify the amygdala's specific contribution to social and emotional behavior. M. D. Bauman, J. E. Toscano, W. A. Mason, P. Lavenex, and D. G. Amaral (2006) reported that macaque monkeys (Macaca mulatta) with neonatal neurotoxic amygdala lesions displayed lower rank in social dominance status, reduced aggressive gestures, and enhanced fearful reactions to social cues compared with normal controls and those with neonatal hippocampal lesions when tested as juveniles in a group of peers. These results are discussed in light of a recent study (C. J. Machado & J. Bachevalier, 2006) showing that the same selective amygdala damage in adolescent monkeys did not alter presurgical social dominance status. This variability in behavioral changes after selective amygdala lesions underscores the significant interplay between timing of the lesion, genetic traits, and environmental factors and suggests that the amygdala is not the generator of specific emotional responses, but acts as a modulator to ensure that emotional responses are appropriate to the external stimuli and social context.
Adaptive mesh refinement for storm surge
NASA Astrophysics Data System (ADS)
Mandli, Kyle T.; Dawson, Clint N.
2014-03-01
An approach to utilizing adaptive mesh refinement algorithms for storm surge modeling is proposed. Currently numerical models exist that can resolve the details of coastal regions but are often too costly to be run in an ensemble forecasting framework without significant computing resources. The application of adaptive mesh refinement algorithms substantially lowers the computational cost of a storm surge model run while retaining much of the desired coastal resolution. The approach presented is implemented in the GEOCLAW framework and compared to ADCIRC for Hurricane Ike along with observed tide gauge data and the computational cost of each model run.
Borole, Abhijeet P; Ramirez-Corredores, M. M.
2007-01-01
Biocatalysis in Oil Refining focuses on petroleum refining bioprocesses, establishing a connection between science and technology. The microorganisms and biomolecules examined for biocatalytic purposes for oil refining processes are thoroughly detailed. Terminology used by biologists, chemists and engineers is brought into a common language, aiding the understanding of complex biological-chemical-engineering issues. Problems to be addressed by the future R&D activities and by new technologies are described and summarized in the last chapter.
Parallel Adaptive Mesh Refinement
Diachin, L; Hornung, R; Plassmann, P; WIssink, A
2005-03-04
As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the
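The core of any AMR cycle is the flagging step: marking the cells that sit on a localized feature (such as a flame front) so that only those regions receive a finer grid. A minimal, hypothetical 1-D sketch using a gradient-based criterion (the threshold and test profile are illustrative assumptions):

```python
# Hypothetical AMR flagging sketch: mark cells whose gradient magnitude
# exceeds a threshold, so refinement is concentrated at sharp features.
import numpy as np

def flag_cells(u, dx, threshold):
    """Flag cells whose local gradient magnitude exceeds `threshold`."""
    grad = np.abs(np.gradient(u, dx))
    return grad > threshold

# A smooth background with one sharp front: only cells near the front flag.
x = np.linspace(0.0, 1.0, 200)
u = np.tanh((x - 0.5) / 0.01)          # sharp front at x = 0.5
flags = flag_cells(u, x[1] - x[0], threshold=10.0)
print(flags.sum(), "of", flags.size, "cells flagged")
```

In a full AMR code the flagged cells would then be clustered into patches and covered with a grid of, say, half the spacing; the point of the sketch is that only a handful of the 200 cells near the front are marked, which is exactly the economy the paragraph describes.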
Recent developments in phasing and structure refinement for macromolecular crystallography
Adams, Paul D.; Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Read, Randy J.; Richardson, Jane S.; Richardson, David C.; Terwilliger, Thomas C.
2009-01-01
Central to crystallographic structure solution is obtaining accurate phases in order to build a molecular model, ultimately followed by refinement of that model to optimize its fit to the experimental diffraction data and prior chemical knowledge. Recent advances in phasing and model refinement and validation algorithms make it possible to arrive at better electron density maps and more accurate models. PMID:19700309
Adaptive mesh refinement techniques for 3-D skin electrode modeling.
Sawicki, Bartosz; Okoniewski, Michal
2010-03-01
In this paper, we develop a 3-D adaptive mesh refinement technique. The algorithm is constructed with an electric impedance tomography forward problem and the finite-element method in mind, but is applicable to a much wider class of problems. We use the method to evaluate the distribution of currents injected into a model of a human body through skin contact electrodes. We demonstrate that the technique leads to a significantly improved solution, particularly near the electrodes. We discuss error estimation, efficiency, and quality of the refinement algorithm and methods that allow for preserving mesh attributes in the refinement process.
A multiblock/multilevel mesh refinement procedure for CFD computations
NASA Astrophysics Data System (ADS)
Teigland, Rune; Eliassen, Inge K.
2001-07-01
A multiblock/multilevel algorithm with local refinement for general two- and three-dimensional fluid flow is presented. The patch-based local refinement procedure is presented in detail and algorithmic implementations are also presented. The multiblock implementation is essentially block-unstructured, i.e. each block has its own local curvilinear co-ordinate system. Refined grid patches can be put anywhere in the computational domain and can extend across block boundaries. To simplify the implementation, while still maintaining sufficient generality, the refinement is restricted to successively halving the grid size within a selected patch. The multiblock approach is implemented within the framework of the well-known SIMPLE solution strategy. Computational experiments showing the effect of using the multilevel solution procedure are presented for a sample elliptic problem and a few benchmark problems of computational fluid dynamics (CFD).
NASA Technical Reports Server (NTRS)
Flemings, M. C.; Szekely, J.
1982-01-01
The relationship between fluid flow phenomena, nucleation, and grain refinement in solidifying metals, both in the presence and in the absence of a gravitational field, was investigated. The reduction of grain size in hard-to-process melts; the effects of undercooling on structure in solidification processes, including rapid solidification processing; and control of this undercooling to improve structures of solidified melts are considered. Grain refining and supercooling, thermal modeling of the solidification process, and heat and fluid flow phenomena in levitated metal droplets are described.
NASA Astrophysics Data System (ADS)
Napoli, Gaetano
2016-07-01
The term fibrosis refers to the development of fibrous connective tissue, in an organ or in a tissue, as a reparative response to injury or damage. The review article by Ben Amar and Bianca [1] proposes a unified multiscale approach for the modeling of fibrosis, accounting for phenomena occurring at different spatial scales (molecular, cellular and macroscopic). The main aim is to define a general unified framework able to describe the mechanisms, not yet completely understood, that trigger physiological and pathological fibrosis.
NASA Technical Reports Server (NTRS)
Griner, D. B.
1986-01-01
System developed for studying use of laser beam for zone-refining semiconductors and metals. Specimen scanned with focused CO2 laser beam in such a way that thin zone of molten material moves along specimen and sweeps impurities with it. Zone-melting system comprises microcomputer, laser, electromechanical and optical components for beam control, vacuum chamber that holds specimen, and sensor for determining specimen temperature.
Choices, Frameworks and Refinement
NASA Technical Reports Server (NTRS)
Campbell, Roy H.; Islam, Nayeem; Johnson, Ralph; Kougiouris, Panos; Madany, Peter
1991-01-01
In this paper we present a method for designing operating systems using object-oriented frameworks. A framework can be refined into subframeworks. Constraints specify the interactions between the subframeworks. We describe how we used object-oriented frameworks to design Choices, an object-oriented operating system.
Parallel tetrahedral mesh refinement with MOAB.
Thompson, David C.; Pebay, Philippe Pierre
2008-12-01
In this report, we present the novel functionality of parallel tetrahedral mesh refinement which we have implemented in MOAB. This report details work done to implement parallel, edge-based, tetrahedral refinement into MOAB. The theoretical basis for this work is contained in [PT04, PT05, TP06] while information on design, performance, and operation specific to MOAB are contained herein. As MOAB is intended mainly for use in pre-processing and simulation (as opposed to the post-processing bent of previous papers), the primary use case is different: rather than refining elements with non-linear basis functions, the goal is to increase the number of degrees of freedom in some region in order to more accurately represent the solution to some system of equations that cannot be solved analytically. Also, MOAB has a unique mesh representation which impacts the algorithm. This introduction contains a brief review of streaming edge-based tetrahedral refinement. The remainder of the report is broken into three sections: design and implementation, performance, and conclusions. Appendix A contains instructions for end users (simulation authors) on how to employ the refiner.
Issues in adaptive mesh refinement
Dai, William Wenlong
2009-01-01
In this paper, we present an approach for a patch-based adaptive mesh refinement (AMR) for multi-physics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, and management of patches. Among the special features of this patch-based AMR are symmetry preserving, efficiency of refinement, special implementation of flux correction, and patch management in parallel computing environments. Here, higher efficiency of refinement means fewer unnecessarily refined cells for a given set of cells to be refined. To demonstrate the capability of the AMR framework, hydrodynamics simulations with many levels of refinement are shown in both two- and three-dimensions.
Capelli, Silvia C; Bürgi, Hans-Beat; Dittrich, Birger; Grabowsky, Simon; Jayatilaka, Dylan
2014-09-01
Hirshfeld atom refinement (HAR) is a method which determines structural parameters from single-crystal X-ray diffraction data by using an aspherical atom partitioning of tailor-made ab initio quantum mechanical molecular electron densities without any further approximation. Here the original HAR method is extended by implementing an iterative procedure of successive cycles of electron density calculations, Hirshfeld atom scattering factor calculations and structural least-squares refinements, repeated until convergence. The importance of this iterative procedure is illustrated via the example of crystalline ammonia. The new HAR method is then applied to X-ray diffraction data of the dipeptide Gly-l-Ala measured at 12, 50, 100, 150, 220 and 295 K, using Hartree-Fock and BLYP density functional theory electron densities and three different basis sets. All positions and anisotropic displacement parameters (ADPs) are freely refined without constraints or restraints - even those for hydrogen atoms. The results are systematically compared with those from neutron diffraction experiments at the temperatures 12, 50, 150 and 295 K. Although non-hydrogen-atom ADPs differ by up to three combined standard uncertainties (csu's), all other structural parameters agree within less than 2 csu's. Using our best calculations (BLYP/cc-pVTZ, recommended for organic molecules), the accuracy of determining bond lengths involving hydrogen atoms from HAR is better than 0.009 Å for temperatures of 150 K or below; for hydrogen-atom ADPs it is better than 0.006 Å² as judged from the mean absolute X-ray minus neutron differences. These results are among the best ever obtained. Remarkably, the precision of determining bond lengths and ADPs for the hydrogen atoms from the HAR procedure is comparable with that from the neutron measurements - an outcome which is obtained with a routinely achievable resolution of the X-ray data of 0.65 Å.
Worldwide refining and gas processing directory
1999-11-01
Statistics are presented on the following: US refining; Canada refining; Europe refining; Africa refining; Asia refining; Latin American refining; Middle East refining; catalyst manufacturers; consulting firms; engineering and construction; US gas processing; international gas processing; plant maintenance providers; process control and simulation systems; and trade associations.
Using Induction to Refine Information Retrieval Strategies
NASA Technical Reports Server (NTRS)
Baudin, Catherine; Pell, Barney; Kedar, Smadar
1994-01-01
Conceptual information retrieval systems use structured document indices, domain knowledge and a set of heuristic retrieval strategies to match user queries with a set of indices describing the document's content. Such retrieval strategies increase the set of relevant documents retrieved (increase recall), but at the expense of returning additional irrelevant documents (decrease precision). Usually in conceptual information retrieval systems this tradeoff is managed by hand and with difficulty. This paper discusses ways of managing this tradeoff by the application of standard induction algorithms to refine the retrieval strategies in an engineering design domain. We gathered examples of query/retrieval pairs during the system's operation using feedback from a user on the retrieved information. We then fed these examples to the induction algorithm and generated decision trees that refine the existing set of retrieval strategies. We found that (1) induction improved the precision on a set of queries generated by another user, without a significant loss in recall, and (2) in an interactive mode, the decision trees pointed out flaws in the retrieval and indexing knowledge and suggested ways to refine the retrieval strategies.
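For concreteness, the precision/recall tradeoff that the induced decision trees manage can be computed as follows. This is a generic illustration with hypothetical document identifiers; the paper's heuristic retrieval strategies are not reproduced here:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall of a retrieved document set against relevance judgments."""
    retrieved, relevant = set(retrieved), set(relevant)
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

# A loose strategy retrieves 4 documents, of which 2 are relevant;
# one relevant document ('d5') is missed entirely.
p, r = precision_recall(['d1', 'd2', 'd3', 'd4'], ['d1', 'd2', 'd5'])
```

A broadened retrieval strategy raises recall at the cost of precision; induction refines the strategy by pruning exactly those retrievals that add irrelevant documents without recovering relevant ones.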
Improved Crystallographic Structures using Extensive Combinatorial Refinement
Nwachukwu, Jerome C.; Southern, Mark R.; Kiefer, James R.; Afonine, Pavel V.; Adams, Paul D.; Terwilliger, Thomas C.; Nettles, Kendall W.
2013-01-01
Identifying errors and alternate conformers, and modeling multiple main-chain conformers in poorly ordered regions, are overarching problems in crystallographic structure determination that have limited automation efforts and structure quality. Here, we show that implementation of a full factorial designed set of standard refinement approaches, which we call ExCoR (Extensive Combinatorial Refinement), significantly improves structural models compared to the traditional linear tree approach, in which individual algorithms are tested linearly and only incorporated if the model improves. ExCoR markedly improved maps and models, and revealed building errors and alternate conformations that were masked by traditional refinement approaches. Surprisingly, an individual algorithm that renders a model worse in isolation could still be necessary to produce the best overall model, suggesting that model distortion allows escape from local minima of the optimization target function, here shown to be a hallmark limitation of the traditional approach. ExCoR thus provides a simple approach to improving structure determination. PMID:24076406
Parallel object-oriented adaptive mesh refinement
Balsara, D.; Quinlan, D.J.
1997-04-01
In this paper we study adaptive mesh refinement (AMR) for elliptic and hyperbolic systems. We use the Asynchronous Fast Adaptive Composite Grid Method (AFACX), a parallel algorithm based upon the Fast Adaptive Composite Grid Method (FAC), as a test case of an adaptive elliptic solver. For our hyperbolic system example we use TVD and ENO schemes for solving the Euler and MHD equations. We use the structured grid load balancer MLB as a tool for obtaining a load balanced distribution in a parallel environment. Parallel adaptive mesh refinement poses difficulties in expressing the basic single grid solver, whether elliptic or hyperbolic, in a fashion that parallelizes seamlessly. It also requires that these basic solvers work together within the adaptive mesh refinement algorithm, which uses the single grid solvers as one part of its adaptive solution process. We show that use of AMR++, an object-oriented library within the OVERTURE Framework, simplifies the development of AMR applications. Parallel support is provided and abstracted through the use of the P++ parallel array class.
Minimally refined biomass fuel
Pearson, Richard K.; Hirschfeld, Tomas B.
1984-01-01
A minimally refined fluid composition, suitable as a fuel mixture and derived from biomass material, is comprised of one or more water-soluble carbohydrates such as sucrose, one or more alcohols having less than four carbons, and water. The carbohydrate provides the fuel source; water solubilizes the carbohydrates; and the alcohol aids in the combustion of the carbohydrate and reduces the viscosity of the carbohydrate/water solution. Because less energy is required to obtain the carbohydrate from the raw biomass than to obtain alcohol, an overall energy savings is realized compared to fuels employing alcohol as the primary fuel.
Adaptive mesh refinement for stochastic reaction-diffusion processes
Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros
2011-01-01
We present an algorithm for adaptive mesh refinement applied to mesoscopic stochastic simulations of spatially evolving reaction-diffusion processes. The transition rates for the diffusion process are derived on adaptive, locally refined structured meshes. Convergence of the diffusion process is presented and the fluctuations of the stochastic process are verified. Furthermore, a refinement criterion is proposed for the evolution of the adaptive mesh. The method is validated in simulations of reaction-diffusion processes as described by the Fisher-Kolmogorov and Gray-Scott equations.
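A refinement criterion of the general kind proposed can be sketched in one dimension. The gradient-threshold flag below is a hypothetical stand-in, not the criterion of the paper, and the function names are assumptions:

```python
def flag_cells(u, dx, threshold):
    # Flag each cell interface where the discrete gradient of the field is steep.
    return [abs(u[i + 1] - u[i]) / dx > threshold for i in range(len(u) - 1)]

def refine(x, flags):
    # Insert a midpoint node wherever a flagged interface indicates steep variation.
    refined = []
    for i, xi in enumerate(x[:-1]):
        refined.append(xi)
        if flags[i]:
            refined.append(0.5 * (xi + x[i + 1]))
    refined.append(x[-1])
    return refined

# A step-like concentration profile is refined only near the front.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
u = [0.0, 0.0, 1.0, 1.0, 1.0]
print(refine(x, flag_cells(u, dx=1.0, threshold=0.5)))  # → [0.0, 1.0, 1.5, 2.0, 3.0, 4.0]
```

For mesoscopic stochastic simulations the diffusion transition rates must also be rescaled on the finer cells, which this sketch omits.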
Model Checking Linearizability via Refinement
NASA Astrophysics Data System (ADS)
Liu, Yang; Chen, Wei; Liu, Yanhong A.; Sun, Jun
Linearizability is an important correctness criterion for implementations of concurrent objects. Automatic checking of linearizability is challenging because it requires checking that 1) all executions of concurrent operations be serializable, and 2) the serialized executions be correct with respect to the sequential semantics. This paper describes a new method to automatically check linearizability based on refinement relations from abstract specifications to concrete implementations. Our method avoids the often difficult task of determining linearization points in implementations, but can also take advantage of linearization points if they are given. The method exploits model checking of finite state systems specified as concurrent processes with shared variables. Partial order reduction is used to effectively reduce the search space. The approach is built into a toolset that supports a rich set of concurrent operators. The tool has been used to automatically check a variety of implementations of concurrent objects, including the first algorithms for the mailbox problem and scalable NonZero indicators. Our system was able to find all known and injected bugs in these implementations.
Spherical Harmonic Decomposition of Gravitational Waves Across Mesh Refinement Boundaries
NASA Technical Reports Server (NTRS)
Fiske, David R.; Baker, John; vanMeter, James R.; Centrella, Joan M.
2005-01-01
We evolve a linearized (Teukolsky) solution of the Einstein equations with a non-linear Einstein solver. Using this testbed, we are able to show that such gravitational waves, defined by the Weyl scalars in the Newman-Penrose formalism, propagate faithfully across mesh refinement boundaries, and use, for the first time to our knowledge, a novel algorithm due to Misner to compute spherical harmonic components of our waveforms. We show that the algorithm performs extremely well, even when the extraction sphere intersects refinement boundaries.
Refinery Efficiency Improvement
WRI
2002-05-15
Refinery processes that convert heavy oils to lighter distillate fuels require heating for distillation, hydrogen addition or carbon rejection (coking). Efficiency is limited by the formation of insoluble carbon-rich coke deposits. Heat exchangers and other refinery units must be shut down for mechanical coke removal, resulting in a significant loss of output and revenue. When a residuum is heated above the temperature at which pyrolysis occurs (340 °C, 650 °F), there is typically an induction period before coke formation begins (Magaril and Aksenova 1968, Wiehe 1993). To avoid fouling, refiners often stop heating a residuum before coke formation begins, using arbitrary criteria. In many cases, this heating is stopped sooner than need be, resulting in less than maximum product yield. Western Research Institute (WRI) has developed innovative Coking Index concepts (patent pending) which can be used for process control by refiners to heat residua to the threshold, but not beyond the point at which coke formation begins when petroleum residua materials are heated at pyrolysis temperatures (Schabron et al. 2001). The development of this universal predictor solves a long-standing problem in petroleum refining. These Coking Indexes have great potential value in improving the efficiency of distillation processes. The Coking Indexes were found to apply to residua in a universal manner, and the theoretical basis for the indexes has been established (Schabron et al. 2001a, 2001b, 2001c). For the first time, a few simple measurements indicate how close undesired coke formation is on the coke formation induction time line. The Coking Indexes can lead to new process controls that can improve refinery distillation efficiency by several percentage points. Petroleum residua consist of an ordered continuum of solvated polar materials usually referred to as asphaltenes dispersed in a lower polarity solvent phase held together by intermediate polarity materials usually referred to as
Refining Radchem Detectors: Iridium
NASA Astrophysics Data System (ADS)
Arnold, C. W.; Bredeweg, T. A.; Vieira, D. J.; Bond, E. M.; Jandel, M.; Rusev, G.; Moody, W. A.; Ullmann, J. L.; Couture, A. J.; Mosby, S.; O'Donnell, J. M.; Haight, R. C.
2013-10-01
Accurate determination of neutron fluence is an important diagnostic of nuclear device performance, whether the device is a commercial reactor, a critical assembly or an explosive device. One important method for neutron fluence determination, generally referred to as dosimetry, is based on exploiting various threshold reactions of elements such as iridium. It is possible to infer details about the integrated neutron energy spectrum to which the dosimetry sample or "radiochemical detector" was exposed by measuring specific activation products post-irradiation. The ability of radchem detectors like iridium to give accurate neutron fluence measurements is limited by the precision of the cross-sections in the production/destruction network (189Ir-193Ir). The Detector for Advanced Neutron Capture Experiments (DANCE) located at LANSCE is ideal for refining neutron capture cross sections of iridium isotopes. Recent results from a measurement of neutron capture on 193Ir are promising. Plans to measure other iridium isotopes are underway.
Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Modiano, David; Colella, Phillip
1995-01-01
To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of unstructured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for a transonic flow over a NACA0012 airfoil (AGARD-03 test case) and a reflection of a shock over a double wedge.
Grain refinement in undercooled metals
Xiao, J.Z.; Yang, H.; Kui, H.W.
1998-12-31
Recently, it was demonstrated that grain refinement in metals can take place through two mechanisms, namely, dynamic nucleation and remelting of initially formed dendrites. In this study, it was found that Ni99.45B0.55 undergoes grain refinement, either by dynamic nucleation or by remelting, depending on the initial bulk undercooling just before crystallization. The nature of the grain refinement process is confirmed by microstructural analysis of the undercooled specimens.
NASA Astrophysics Data System (ADS)
Kolev, Mikhail K.
2016-07-01
Over the last decades the collaboration between scientists from biology, medicine and pharmacology on one side and scholars from mathematics, physics, mechanics and computer science on the other has led to better understanding of the properties of living systems, the mechanisms of their functioning and interactions with the environment and to the development of new therapies for various disorders and diseases. The target paper [1] by Ben Amar and Bianca presents a detailed description of the research methods and techniques used by biomathematicians, bioinformaticians, biomechanicians and biophysicists for studying biological systems, and in particular in the context of pathological fibrosis.
Ellis, J. S.; Sullivan, T. J.; Baskett, R. L.
1998-06-01
The Atmospheric Release Advisory Capability (ARAC), located at the Lawrence Livermore National Laboratory, has since the late 1970s been involved in assessing consequences from nuclear and other hazardous material releases into the atmosphere. ARAC's primary role has been emergency response. However, after the emergency phase, there is still a significant role for dispersion modeling. This work usually involves refining the source term and, hence, the dose to the populations affected as additional information becomes available in the form of source term estimates (release rates, mix of material, and release geometry) and any measurements from passage of the plume and deposition on the ground. Many of the ARAC responses have been documented elsewhere. Some of the more notable radiological releases for which ARAC has participated in the post-emergency phase have been the 1979 Three Mile Island nuclear power plant (NPP) accident outside Harrisburg, PA, the 1986 Chernobyl NPP accident in the Ukraine, and the 1996 Japan Tokai nuclear processing plant explosion. ARAC has also done post-emergency phase analyses for the 1978 Russian satellite COSMOS 954 re-entry and subsequent partial burn-up of its on-board nuclear reactor, depositing radioactive materials on the ground in Canada, the 1986 uranium hexafluoride spill in Gore, OK, the 1993 Russian Tomsk-7 nuclear waste tank explosion, and lesser releases of mostly tritium. In addition, ARAC has performed a key role in the contingency planning for possible accidental releases during the launch of spacecraft with radioisotope thermoelectric generators (RTGs) on board (i.e. Galileo, Ulysses, Mars-Pathfinder, and Cassini), and routinely exercises with the Federal Radiological Monitoring and Assessment Center (FRMAC) in preparation for offsite consequences of radiological releases from NPPs and nuclear weapon accidents or incidents. Several accident post-emergency phase assessments are discussed in this paper in order to illustrate
Dynamic grid refinement for partial differential equations on parallel computers
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.
High resolution single particle refinement in EMAN2.1.
Bell, James M; Chen, Muyuan; Baldwin, Philip R; Ludtke, Steven J
2016-05-01
EMAN2.1 is a complete image processing suite for quantitative analysis of grayscale images, with a primary focus on transmission electron microscopy, and with complete workflows for performing high resolution single particle reconstruction, 2-D and 3-D heterogeneity analysis, random conical tilt reconstruction and subtomogram averaging, among other tasks. In this manuscript we provide the first detailed description of the high resolution single particle analysis pipeline and the philosophy behind its approach to the reconstruction problem. High resolution refinement is a fully automated process, and involves an advanced set of heuristics to select optimal algorithms for each specific refinement task. A gold standard FSC is produced automatically as part of refinement, providing a robust resolution estimate for the final map, and this is used to optimally filter the final CTF phase and amplitude corrected structure. Additional methods are in place to reduce model bias during refinement, and to permit cross-validation using other computational methods.
Modern refining and petrochemical equipment
Pugach, V.V.
1995-07-01
Petroleum refining and petroleum chemistry are characterized by a whole set of manufacturing processes and methods, whose application depends on the initial raw material and the final products. Therefore, refining and petrochemical equipment has many different operational principles, design solutions, and materials. The activities of the Russian Petroleum Industry are discussed.
Crystal structure refinement with SHELXL
Sheldrick, George M.
2015-01-01
New features added to the refinement program SHELXL since 2008 are described and explained. The improvements in the crystal structure refinement program SHELXL have been closely coupled with the development and increasing importance of the CIF (Crystallographic Information Framework) format for validating and archiving crystal structures. An important simplification is that now only one file in CIF format (for convenience, referred to simply as ‘a CIF’) containing embedded reflection data and SHELXL instructions is needed for a complete structure archive; the program SHREDCIF can be used to extract the .hkl and .ins files required for further refinement with SHELXL. Recent developments in SHELXL facilitate refinement against neutron diffraction data, the treatment of H atoms, the determination of absolute structure, the input of partial structure factors and the refinement of twinned and disordered structures. SHELXL is available free to academics for the Windows, Linux and Mac OS X operating systems, and is particularly suitable for multiple-core processors.
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Deformable complex network for refining low-resolution X-ray structures
Zhang, Chong; Wang, Qinghua; Ma, Jianpeng
2015-10-27
In macromolecular X-ray crystallography, building more accurate atomic models based on lower resolution experimental diffraction data remains a great challenge. Previous studies have used a deformable elastic network (DEN) model to aid in low-resolution structural refinement. In this study, the development of a new refinement algorithm called the deformable complex network (DCN) is reported that combines a novel angular network-based restraint with the DEN model in the target function. Testing of DCN on a wide range of low-resolution structures demonstrated that it consistently leads to significantly improved structural models as judged by multiple refinement criteria, thus representing a new effective refinement tool for low-resolution structure determination.
Madani, Safoura; Coors, Anja; Haddioui, Abdelmajid; Ksibi, Mohamed; Pereira, Ruth; Paulo Sousa, José; Römbke, Jörg
2015-09-01
Mining activity is an important economic activity in several North Atlantic Treaty Organization (NATO) and North African countries. Within their territory derelict or active mining explorations represent risks to surrounding ecosystems, but engineered-based remediation processes are usually too expensive to be an option for the reclamation of these areas. A project funded by NATO was performed, with the aim of finding a more eco-friendly solution for reclamation of these areas. As part of an overall risk assessment, the risk of contaminated soils to selected soil organisms was evaluated. The main question addressed was: Do the metal-contaminated soils from a former iron mine located at Ait Amar (Morocco), which was abandoned in the mid-sixties, affect the reproduction of enchytraeids (Enchytraeus bigeminus) and predatory mites (Hypoaspis aculeifer)? Soil samples were taken at 20 plots along four transects covering the mine area and at a reference site about 15 km away from the mine. The soils were characterized pedologically and chemically, which showed a heterogeneous pattern of metal contamination (mainly cadmium, copper, and chromium, sometimes at concentrations higher than European soil trigger values). The reproduction of enchytraeids (Enchytraeus bigeminus) and predatory mites (Hypoaspis aculeifer) was studied using standard laboratory tests according to OECD guidelines 220 (2004) and 226 (2008). The number of juveniles of E. bigeminus was reduced at several plots with high concentrations of Cd or Cu (the latter in combination with low pH values). There was nearly no effect of the metal-contaminated soils on the reproduction of H. aculeifer. The overall lack of toxicity at the majority of the studied plots is probably caused by the low availability of the metals in these soils unless soil pH was very low. Different exposure pathways are likely responsible for the different reaction of mites and enchytraeids (hard-bodied versus soft-bodied organisms). The
Toward a consistent framework for high order mesh refinement schemes in numerical relativity
NASA Astrophysics Data System (ADS)
Mongwane, Bishop
2015-05-01
It has now become customary in the field of numerical relativity to couple high order finite difference schemes to mesh refinement algorithms. To this end, different modifications to the standard Berger-Oliger adaptive mesh refinement algorithm have been proposed. In this work we present a fourth order stable mesh refinement scheme with sub-cycling in time for numerical relativity. We do not use buffer zones to deal with refinement boundaries but explicitly specify boundary data for refined grids. We argue that the incompatibility of the standard mesh refinement algorithm with higher order Runge-Kutta methods is a manifestation of order reduction phenomena, caused by inconsistent application of boundary data in the refined grids. Our scheme also addresses the problem of spurious reflections that are generated when propagating waves cross mesh refinement boundaries. We introduce a transition zone on refined levels within which the phase velocity of propagating modes is allowed to decelerate in order to smoothly match the phase velocity of coarser grids. We apply the method to test problems involving propagating waves and show a significant reduction in spurious reflections.
Error bounds from extra precise iterative refinement
Demmel, James; Hida, Yozo; Kahan, William; Li, Xiaoye S.; Mukherjee, Soni; Riedy, E. Jason
2005-02-07
We present the design and testing of an algorithm for iterative refinement of the solution of linear equations, where the residual is computed with extra precision. This algorithm was originally proposed in the 1960s [6, 22] as a means to compute very accurate solutions to all but the most ill-conditioned linear systems of equations. However, two obstacles have until now prevented its adoption in standard subroutine libraries like LAPACK: (1) there was no standard way to access the higher precision arithmetic needed to compute residuals, and (2) it was unclear how to compute a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard [5] has recently removed the first obstacle. To overcome the second obstacle, we show how a single application of iterative refinement can be used to compute an error bound in any norm at small cost, and use this to compute both an error bound in the usual infinity norm, and a componentwise relative error bound. We report extensive test results on over 6.2 million matrices of dimension 5, 10, 100, and 1000. As long as a normwise (resp. componentwise) condition number computed by the algorithm is less than 1/(max{10, √n} · ε_w), the computed normwise (resp. componentwise) error bound is at most 2 · max{10, √n} · ε_w, and indeed bounds the true error. Here, n is the matrix dimension and ε_w is the single precision roundoff error. For worse conditioned problems, we get similarly small correct error bounds in over 89.4% of cases.
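The core loop of iterative refinement with an extra-precise residual can be sketched in NumPy (a hedged illustration, not the LAPACK routine: single precision stands in for the working precision and double precision for the extra-precise residual; the function name is invented):

```python
import numpy as np

def refine(A, b, iters=5):
    """Solve Ax = b in working (single) precision, then iteratively
    refine using residuals accumulated in extra (double) precision."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                        # residual in double precision
        d = np.linalg.solve(A32, r.astype(np.float32))  # working-precision correction
        x = x + d.astype(np.float64)
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = refine(A, b)                             # converges toward [1/11, 7/11]
```

For a well-conditioned system like this one, a few iterations drive the residual down to near double-precision roundoff even though every solve is done in single precision.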
Monitoring, Controlling, Refining Communication Processes
ERIC Educational Resources Information Center
Spiess, John
1975-01-01
Because internal communications are essential to school system success, monitoring, controlling, and refining communicative processes have become essential activities for the chief school administrator. (Available from Buckeye Association of School Administrators, 750 Brooksedge Blvd., Westerville, Ohio 43081) (Author/IRT)
Refining the shifted topological vertex
Drissi, L. B.; Jehjouh, H.; Saidi, E. H.
2009-01-15
We study aspects of the refining and shifting properties of the 3d MacMahon function C₃(q) used in topological string theory and the BKP hierarchy. We derive the explicit expressions of the shifted topological vertex S_{λμν}(q) and its refined version T_{λμν}(q, t). These vertices complete results in the literature.
Adaptive Mesh Refinement in CTH
Crawford, David
1999-05-04
This paper reports progress on implementing a new capability of adaptive mesh refinement into the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based, with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor, and massively parallel platforms. An approximate factor of three in memory and performance improvements over comparable resolution non-adaptive calculations has been demonstrated for a number of problems.
Zone refining of plutonium metal
Blau, M.S.
1994-08-01
The zone refining process was applied to Pu metal containing known amounts of impurities. Rod specimens of plutonium metal were melted into and contained in tantalum boats, each of which was passed horizontally through a three-turn, high-frequency coil in such a manner as to cause a narrow molten zone to pass through the Pu metal rod 10 times. The impurity elements Co, Cr, Fe, Ni, Np, and U were found to move in the same direction as the molten zone, as predicted by binary phase diagrams. The elements Al, Am, and Ga moved in the opposite direction of the molten zone, as predicted by binary phase diagrams. As the impurity alloy was zone refined, δ-phase plutonium metal crystals were produced. The first few zone refining passes were more effective than each later pass because an oxide layer formed on the rod surface. There was no clear evidence of better impurity movement at the slower zone refining speed. Also, constant or variable coil power appeared to have no effect on impurity movement during a single run (10 passes). This experiment was the first step in developing a zone refining process for plutonium metal.
Evolutionary Optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: genetic algorithms and differential evolution to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
Tactical Synthesis Of Efficient Global Search Algorithms
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2009-01-01
Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus; that is, they suggest possible ways to attack the problem.
Bauxite Mining and Alumina Refining
Frisch, Neale; Olney, David
2014-01-01
Objective: To describe bauxite mining and alumina refining processes and to outline the relevant physical, chemical, biological, ergonomic, and psychosocial health risks. Methods: Review article. Results: The most important risks relate to noise, ergonomics, trauma, and caustic soda splashes of the skin/eyes. Other risks of note relate to fatigue, heat, and solar ultraviolet and for some operations tropical diseases, venomous/dangerous animals, and remote locations. Exposures to bauxite dust, alumina dust, and caustic mist in contemporary best-practice bauxite mining and alumina refining operations have not been demonstrated to be associated with clinically significant decrements in lung function. Exposures to bauxite dust and alumina dust at such operations are also not associated with the incidence of cancer. Conclusions: A range of occupational health risks in bauxite mining and alumina refining require the maintenance of effective control measures. PMID:24806720
NASA Astrophysics Data System (ADS)
Pappalardo, Francesco; Pennisi, Marzio
2016-07-01
Fibrosis represents a process where an excessive tissue formation in an organ follows the failure of a physiological reparative or reactive process. Mathematical and computational techniques may be used to improve the understanding of the mechanisms that lead to the disease and to test potential new treatments that may directly or indirectly have positive effects against fibrosis [1]. In this scenario, Ben Amar and Bianca [2] give us a broad picture of the existing mathematical and computational tools that have been used to model fibrotic processes at the molecular, cellular, and tissue levels. Among such techniques, agent based models (ABM) can give a valuable contribution in the understanding and better management of fibrotic diseases.
NASA Astrophysics Data System (ADS)
Wu, Min
2016-07-01
The development of anti-fibrotic therapies for a diversity of diseases, such as pulmonary, renal, and liver fibrosis [1,2], as well as malignant tumor growths [3], has recently become more and more urgent. As reviewed by Ben Amar and Bianca [4], various theoretical, experimental, and in-silico models have been developed to understand the fibrosis process, and their implications for therapeutic strategies have also been frequently demonstrated (e.g., [5-7]). In [4], these models are analyzed and sorted according to their approaches, and at the end of [4], a unified multi-scale approach is proposed to understand fibrosis. Since one of the major purposes of extensive modeling of fibrosis is to shed light on therapeutic strategies, theoretical, experimental, and in-silico studies of anti-fibrosis therapies should be conducted more intensively.
NASA Astrophysics Data System (ADS)
Guerrini, Luca
2016-07-01
Martine Ben Amar and Carlo Bianca have written a valuable paper [1], which is a timely review of the different theoretical tools existing in the literature for the modeling of physiological and pathological fibrosis. The review [1] is written with clarity and in a simple way, which makes it understandable to a wide audience. The authors present an exhaustive exposition of the interplay between the different scholars working on the modeling of fibrosis diseases and a survey of the main theoretical approaches, among others ODE-based models, PDE-based models, models with internal structure, continuum mechanics approaches, and agent-based models. A critical analysis discusses their applicability, including advantages and disadvantages.
NASA Astrophysics Data System (ADS)
Kachapova, Farida
2016-07-01
Mathematical and computational models in biology and medicine help to improve diagnostics and medical treatments. Modeling of pathological fibrosis is reviewed by M. Ben Amar and C. Bianca in [4]. Pathological fibrosis is the process in which excessive fibrous tissue is deposited on an organ or tissue during wound healing and can obliterate its normal function. In [4] the phenomena of fibrosis are briefly explained, including the causes, mechanism, and management; research models of pathological fibrosis are described, compared, and critically analyzed. Different models are suitable at different levels: molecular, cellular, and tissue. The main goal of mathematical modeling of fibrosis is to predict the long-term behavior of the system depending on bifurcation parameters; there are two main trends: inhibition of fibrosis due to an active immune system and swelling of fibrosis because of a weak immune system.
Elliptic Solvers with Adaptive Mesh Refinement on Complex Geometries
Phillip, B.
2000-07-24
Adaptive Mesh Refinement (AMR) is a numerical technique for locally tailoring the resolution of computational grids. Multilevel algorithms for solving elliptic problems on adaptive grids include the Fast Adaptive Composite grid method (FAC) and its parallel variants (AFAC and AFACx). Theory that confirms the independence of the convergence rates of FAC and AFAC from the number of refinement levels exists under certain ellipticity and approximation property conditions. Similar theory needs to be developed for AFACx. The effectiveness of multigrid-based elliptic solvers such as FAC, AFAC, and AFACx on adaptively refined overlapping grids is not clearly understood. Finally, a non-trivial eye model problem will be solved by combining the power of using overlapping grids for complex moving geometries, AMR, and multilevel elliptic solvers.
Successive refinement lattice vector quantization.
Mukherjee, Debargha; Mitra, Sanjit K
2002-01-01
Lattice vector quantization (LVQ) solves the complexity problem of LBG-based vector quantizers, yielding very general codebooks. However, a single-stage LVQ, when applied to high resolution quantization of a vector, may result in very large and unwieldy indices, making it unsuitable for applications requiring successive refinement. The goal of this work is to develop a unified framework for progressive uniform quantization of vectors without having to sacrifice the mean-squared-error advantage of lattice quantization. A successive refinement uniform vector quantization methodology is developed, where the codebooks in successive stages are all lattice codebooks, each in the shape of the Voronoi regions of the lattice at the previous stage. Such Voronoi-shaped geometric lattice codebooks are named Voronoi lattice VQs (VLVQ). Measures of efficiency of successive refinement are developed based on the entropy of the indices transmitted by the VLVQs. Additionally, a constructive method for asymptotically optimal uniform quantization is developed using tree-structured subset VLVQs in conjunction with entropy coding. The methodology developed here essentially yields the optimal vector counterpart of scalar "bitplane-wise" refinement. Unfortunately, it is not as trivial to implement as in the scalar case. Furthermore, the benefits of asymptotic optimality in tree-structured subset VLVQs remain elusive in practical nonasymptotic situations. Nevertheless, because scalar bitplane-wise refinement is extensively used in modern wavelet image coders, we have applied the VLVQ techniques to successively refine vectors of wavelet coefficients in the vector set-partitioning (VSPIHT) framework. The results are compared against SPIHT and the previous successive approximation wavelet vector quantization (SA-W-VQ) results of Sampson, da Silva and Ghanbari.
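A two-stage toy version of successive refinement on the integer lattice Zⁿ (a simple stand-in for the Voronoi lattice VQs of the abstract; the step sizes and data here are invented for illustration) shows how each stage quantizes the previous stage's residual on a finer lattice:

```python
import numpy as np

# Stage step sizes: stage 2 refines stage 1 by re-quantizing its residual
# on a finer scaled integer lattice (values are illustrative assumptions).
s1, s2 = 1.0, 0.25
x = np.array([3.7, -1.2, 0.45])

c1 = np.round(x / s1) * s1                # coarse-stage codeword
c2 = c1 + np.round((x - c1) / s2) * s2    # refined-stage codeword

err1 = np.max(np.abs(x - c1))             # bounded by s1 / 2
err2 = np.max(np.abs(x - c2))             # bounded by s2 / 2
```

Each stage transmits only the small index of the residual within the previous cell, which is the progressive-refinement property the abstract generalizes to arbitrary lattices and Voronoi-shaped codebooks.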
A parallel algorithm for the non-symmetric eigenvalue problem
Dongarra, J.; Sidani, M.
1991-12-01
This paper describes a parallel algorithm for computing the eigenvalues and eigenvectors of a non-symmetric matrix. The algorithm is based on a divide-and-conquer procedure and uses an iterative refinement technique.
Conformal refinement of unstructured quadrilateral meshes
Garmella, Rao
2009-01-01
We present a multilevel adaptive refinement technique for unstructured quadrilateral meshes in which the mesh is kept conformal at all times. This means that the refined mesh, like the original, is formed of only quadrilateral elements that intersect strictly along edges or at vertices, i.e., vertices of one quadrilateral element do not lie in an edge of another quadrilateral. Elements are refined using templates based on 1:3 refinement of edges. We demonstrate that by careful design of the refinement and coarsening strategy, we can maintain high quality elements in the refined mesh. We demonstrate the method on a number of examples with dynamically changing refinement regions.
Structured adaptive mesh refinement on the connection machine
Berger, M.J. (Courant Inst. of Mathematical Sciences); Saltzman, J.S.
1993-01-01
Adaptive mesh refinement has proven itself to be a useful tool in a large collection of applications. By refining only a small portion of the computational domain, computational savings of up to a factor of 80 in 3-dimensional calculations have been obtained on serial machines. A natural question is: can this algorithm be used on massively parallel machines and still achieve the same efficiencies? We have designed a data layout scheme for mapping grid points to processors that preserves locality and minimizes global communication for the CM-200. The effect of the data layout scheme is that at the finest level nearby grid points from adjacent grids in physical space are in adjacent memory locations. Furthermore, coarse grid points are arranged in memory to be near their associated fine grid points. We show applications of the algorithm to inviscid compressible fluid flow in two space dimensions.
ERIC Educational Resources Information Center
Hazelton, Alexander E.; And Others
Through joint planning with a number of school districts and the Region X Title I Technical Assistance Center, and with the help of a Title I Refinement grant, Alaska has developed a system of data storage and retrieval using microcomputers that assists small school districts in the evaluation and reporting of their Title I programs. Although this…
Multigrid for refined triangle meshes
Shapira, Yair
1997-02-01
A two-level preconditioning method for the solution of (locally) refined finite element schemes using triangle meshes is introduced. In the isotropic SPD case, it is shown that the condition number of the preconditioned stiffness matrix is bounded uniformly for all sufficiently regular triangulations. This is also verified numerically for an isotropic diffusion problem with highly discontinuous coefficients.
Vacuum Refining of Molten Silicon
NASA Astrophysics Data System (ADS)
Safarian, Jafar; Tangstad, Merete
2012-12-01
Metallurgical fundamentals for vacuum refining of molten silicon and the behavior of different impurities in this process are studied. A novel mass transfer model for the removal of volatile impurities from silicon in vacuum induction refining is developed. The boundary conditions for the vacuum refining system—the equilibrium partial pressures of the dissolved elements and their actual partial pressures under vacuum—are determined through thermodynamic and kinetic approaches. It is shown that the vacuum removal kinetics of the impurities differ, being controlled by one, two, or all three of the successive reaction mechanisms—mass transfer in a melt boundary layer, chemical evaporation on the melt surface, and mass transfer in the gas phase. Vacuum refining experimental results of this study and literature data are used to validate the model. The model provides reliable results and shows correlation with the experimental data for many volatile elements. The removal kinetics of phosphorus, an important impurity in the production of solar-grade silicon, is properly predicted by the model, and it is observed that phosphorus elimination from silicon increases significantly with increasing process temperature.
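The series-resistance structure of such a mass transfer model can be sketched numerically; all coefficient values below are invented placeholders for illustration, not the authors' fitted parameters:

```python
import math

# Three successive mechanisms act as resistances in series: melt boundary
# layer, surface evaporation, and gas-phase transport (coefficients in m/s;
# assumed values, for illustration only).
k_melt, k_evap, k_gas = 2e-5, 5e-5, 1e-4
k_total = 1.0 / (1.0 / k_melt + 1.0 / k_evap + 1.0 / k_gas)

# First-order removal of a volatile impurity such as phosphorus:
#   C(t) = C0 * exp(-k_total * (A/V) * t)
A_over_V = 10.0        # melt surface-area-to-volume ratio, 1/m (assumed)
C0, t = 100.0, 3600.0  # initial concentration (ppm) and time (s), assumed
C = C0 * math.exp(-k_total * A_over_V * t)
```

The slowest mechanism dominates: the overall coefficient is always smaller than the smallest individual one, which is why identifying the rate-controlling step matters for predicting removal rates.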
Method for refining contaminated iridium
Heshmatpour, B.; Heestand, R.L.
1982-08-31
Contaminated iridium is refined by alloying it with an alloying agent selected from the group consisting of manganese and an alloy of manganese and copper, and then dissolving the alloying agent from the formed alloy to provide a purified iridium powder.
Method for refining contaminated iridium
Heshmatpour, Bahman; Heestand, Richard L.
1983-01-01
Contaminated iridium is refined by alloying it with an alloying agent selected from the group consisting of manganese and an alloy of manganese and copper, and then dissolving the alloying agent from the formed alloy to provide a purified iridium powder.
Bayesian ensemble refinement by replica simulations and reweighting.
Hummer, Gerhard; Köfinger, Jürgen
2015-12-28
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations. PMID:26723635
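A toy maximum-entropy reweighting in the spirit of the EROS-style refinement described above can be sketched as follows (not the authors' code; the observable values and target are synthetic assumptions). Given per-configuration predictions y_i of an observable and a target ensemble average y_exp, the weights minimizing relative entropy to uniform subject to the average restraint have the form w_i ∝ exp(λ·y_i), with λ found here by bisection:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=1000)     # synthetic per-configuration observable values
y_exp = 0.5                   # target ensemble average (assumed)

def weighted_avg(lam):
    """Ensemble average under maximum-entropy weights exp(lam * y)."""
    w = np.exp(lam * y)
    return (w / w.sum()) @ y

lo, hi = -50.0, 50.0          # bracket; weighted_avg is monotone in lam
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if weighted_avg(mid) < y_exp:
        lo = mid
    else:
        hi = mid

w = np.exp(mid * y)
w /= w.sum()                  # final refined ensemble weights
```

The monotonicity used by the bisection follows because the derivative of the weighted average with respect to λ is the (non-negative) weighted variance of y.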
Bayesian ensemble refinement by replica simulations and reweighting
NASA Astrophysics Data System (ADS)
Hummer, Gerhard; Köfinger, Jürgen
2015-12-01
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
40 CFR 80.235 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2010 CFR
2010-07-01
... a small refiner? 80.235 Section 80.235 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Provisions § 80.235 How does a refiner obtain approval as a small refiner? (a) Applications for small refiner....225(d), which must be submitted by June 1, 2002. (b) Applications for small refiner status must...
CONSTRAINED-TRANSPORT MAGNETOHYDRODYNAMICS WITH ADAPTIVE MESH REFINEMENT IN CHARM
Miniati, Francesco; Martin, Daniel F.
2011-07-01
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.
Parallel Adaptive Mesh Refinement Library
NASA Technical Reports Server (NTRS)
Mac-Neice, Peter; Olson, Kevin
2005-01-01
Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
Entitlements exemptions for new refiners
Not Available
1980-02-29
The practice of exempting start-up inventories from entitlement requirements for new refiners has been called into question by the Office of Hearings and Appeals and other responsible Departmental officials. ERA, with the assistance of the Office of General Counsel, was considering resolving the matter through rulemaking; however, by October 26, 1979 no rulemaking had been published. Because of the absence of published standards for use in granting these entitlements to new refineries, undue reliance was placed on individual judgements that could result in inequities to applicants and increase the potential for fraud and abuse. Recommendations are given as follows: (1) if the program for granting entitlements exemptions to new refiners is continued, the Administrator, ERA, should promptly take action to adopt an appropriate regulation to formalize the program by establishing standards and controls that will assure consistent and equitable application; in addition, files containing adjustments given to new refiners should be made complete to support benefits already allowed; and (2) whether the program is continued or discontinued, the General Counsel and the Administrator, ERA, should coordinate on how to evaluate the propriety of inventory adjustments previously granted to new refineries.
A Refined Cauchy-Schwarz Inequality
ERIC Educational Resources Information Center
Mercer, Peter R.
2007-01-01
The author presents a refinement of the Cauchy-Schwarz inequality. He shows his computations in which refinements of the triangle inequality and its reverse inequality are obtained for nonzero x and y in a normed linear space.
Reformulated Gasoline Market Affected Refiners Differently, 1995
1996-01-01
This article focuses on the costs of producing reformulated gasoline (RFG) as experienced by different types of refiners and on how these refiners fared this past summer, given the prices for RFG at the refinery gate.
Firing of pulverized solvent refined coal
Derbidge, T. Craig; Mulholland, James A.; Foster, Edward P.
1986-01-01
An air-purged burner for the firing of pulverized solvent refined coal is constructed and operated such that the solvent refined coal can be fired without the coking thereof on the burner components. The air-purged burner is designed for the firing of pulverized solvent refined coal in a tangentially fired boiler.
Grain Refinement of Deoxidized Copper
NASA Astrophysics Data System (ADS)
Balart, María José; Patel, Jayesh B.; Gao, Feng; Fan, Zhongyun
2016-08-01
This study reports the current status of grain refinement of copper accompanied in particular by a critical appraisal of grain refinement of phosphorus-deoxidized, high residual P (DHP) copper microalloyed with 150 ppm Ag. Some deviations exist in terms of the growth restriction factor (Q) framework, on the basis of empirical evidence reported in the literature for grain size measurements of copper with individual additions of 0.05, 0.1, and 0.5 wt pct of Mo, In, Sn, Bi, Sb, Pb, and Se, cast under a protective atmosphere of pure Ar and water quenching. The columnar-to-equiaxed transition (CET) has been observed in copper, with an individual addition of 0.4B and with combined additions of 0.4Zr-0.04P and 0.4Zr-0.04P-0.015Ag and, in a previous study, with combined additions of 0.1Ag-0.069P (in wt pct). CETs in these B- and Zr-treated casts have been ascribed to changes in the morphology and chemistry of particles, concurrently in association with free solute type and availability. No further grain-refining action was observed due to microalloying additions of B, Mg, Ca, Zr, Ti, Mn, In, Fe, and Zn (~0.1 wt pct) with respect to DHP-Cu microalloyed with Ag, and therefore are no longer relevant for the casting conditions studied. The critical microalloying element for grain size control in deoxidized copper and in particular DHP-Cu is Ag.
Grain Refinement of Deoxidized Copper
NASA Astrophysics Data System (ADS)
Balart, María José; Patel, Jayesh B.; Gao, Feng; Fan, Zhongyun
2016-10-01
This study reports the current status of grain refinement of copper accompanied in particular by a critical appraisal of grain refinement of phosphorus-deoxidized, high residual P (DHP) copper microalloyed with 150 ppm Ag. Some deviations exist in terms of the growth restriction factor ( Q) framework, on the basis of empirical evidence reported in the literature for grain size measurements of copper with individual additions of 0.05, 0.1, and 0.5 wt pct of Mo, In, Sn, Bi, Sb, Pb, and Se, cast under a protective atmosphere of pure Ar and water quenching. The columnar-to-equiaxed transition (CET) has been observed in copper, with an individual addition of 0.4B and with combined additions of 0.4Zr-0.04P and 0.4Zr-0.04P-0.015Ag and, in a previous study, with combined additions of 0.1Ag-0.069P (in wt pct). CETs in these B- and Zr-treated casts have been ascribed to changes in the morphology and chemistry of particles, concurrently in association with free solute type and availability. No further grain-refining action was observed due to microalloying additions of B, Mg, Ca, Zr, Ti, Mn, In, Fe, and Zn (~0.1 wt pct) with respect to DHP-Cu microalloyed with Ag, and therefore are no longer relevant for the casting conditions studied. The critical microalloying element for grain size control in deoxidized copper and in particular DHP-Cu is Ag.
Fully Threaded Tree for Adaptive Refinement Fluid Dynamics Simulations
NASA Technical Reports Server (NTRS)
Khokhlov, A. M.
1997-01-01
A fully threaded tree (FTT) for adaptive refinement of regular meshes is described. By using a tree threaded at all levels, tree traversals for finding nearest neighbors are avoided. All operations on a tree, including tree modifications, are O(N), where N is the number of cells, and are performed in parallel. An efficient implementation of the tree is described that requires 2N words of memory. A filtering algorithm for removing high-frequency noise during mesh refinement is described. An FTT can be used in various numerical applications. In this paper, it is applied to the integration of the Euler equations of fluid dynamics. An adaptive mesh time-stepping algorithm is described in which different time steps are used at different levels of the tree. Time stepping and mesh refinement are interleaved to avoid the extensive buffer layers of fine mesh that were otherwise required ahead of moving shocks. Test examples are presented, and the FTT performance is evaluated. A three-dimensional simulation of the interaction of a shock wave and a spherical bubble is carried out, showing the development of azimuthal perturbations on the bubble surface.
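The key property of the threaded tree can be illustrated with a minimal 1-D analogue (a hypothetical sketch, not Khokhlov's actual FTT data structure): each leaf cell stores direct neighbour pointers that are rethreaded locally on refinement, so neighbour lookup is O(1) and a full sweep over leaves never walks the tree recursively.

```python
# Hypothetical 1-D sketch of a threaded refinement tree: leaves carry
# direct left/right neighbour links, updated locally when a cell splits.

class Cell:
    def __init__(self, lo, hi, level=0):
        self.lo, self.hi, self.level = lo, hi, level
        self.children = None           # None for a leaf
        self.left = self.right = None  # threaded neighbour links (leaves)

    def refine(self):
        """Split a leaf in two, rethreading neighbour links locally."""
        mid = 0.5 * (self.lo + self.hi)
        a = Cell(self.lo, mid, self.level + 1)
        b = Cell(mid, self.hi, self.level + 1)
        a.left, a.right = self.left, b
        b.left, b.right = a, self.right
        if self.left:
            self.left.right = a
        if self.right:
            self.right.left = b
        self.children = (a, b)
        return a, b

def leaves(root):
    """Walk the thread from the leftmost leaf: O(N), no recursion."""
    c = root
    while c.children:
        c = c.children[0]
    while c:
        yield c
        c = c.right

root = Cell(0.0, 1.0)
a, b = root.refine()
b.refine()
print([c.lo for c in leaves(root)])  # -> [0.0, 0.5, 0.75]
```

Neighbour access from any leaf is a single pointer dereference, which is the property that lets FTT avoid the tree traversals mentioned in the abstract.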
Winter, V.L.; Berg, R.S.; Dalton, L.J.
1998-06-01
When designing a high consequence system, considerable care should be taken to ensure that the system cannot easily be placed into a high consequence failure state. A formal system design process should include a model that explicitly shows the complete state space of the system (including failure states) as well as those events (e.g., abnormal environmental conditions, component failures, etc.) that can cause a system to enter a failure state. In this paper the authors present such a model and formally develop a notion of risk-based refinement with respect to the model.
Gaseous Refining of Anode Copper
NASA Astrophysics Data System (ADS)
Goyal, Pradeep; Themelis, N. J.; Zanchuk, Walter A.
1982-12-01
The refining of blister copper prior to casting into anodes consists of oxidizing the copper melt to remove sulfur and then reducing its oxygen content. The age-old "wood poling" technique for deoxidation is gradually being replaced by the injection of reducing gases through one or two tuyeres. Thermodynamic and mass transfer analysis as well as laboratory tests have shown that the operating efficiency of gas injection can be improved considerably by enhancing mixing and gas-liquid mass transfer conditions within the copper bath. The injection of inert gas through porous plugs offers a viable industrial means for effecting such an improvement.
Introducing robustness to maximum-likelihood refinement of electron-microscopy data
Scheres, Sjors H. W.; Carazo, José-María
2009-07-01
An expectation-maximization algorithm for maximum-likelihood refinement of electron-microscopy images is presented that is based on fitting mixtures of multivariate t-distributions. Compared with the conventionally employed Gaussian mixture model, the t-distribution provides robustness against outliers in the data, which is illustrated using an experimental test set with artificially generated outliers. Tests on experimental data revealed only minor differences in two-dimensional classifications, while three-dimensional classification with the new algorithm gave stronger elongation factor G density in the corresponding class of a structurally heterogeneous ribosome data set than the conventional algorithm for Gaussian mixtures.
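The mechanism behind this robustness can be sketched in one dimension (an illustrative toy, not the authors' image-refinement code): in the E-step of EM for a t-distribution, each observation receives a weight u_i = (ν + d)/(ν + δ_i²), where δ_i² is its squared Mahalanobis distance, so gross outliers are automatically downweighted in the M-step.

```python
# Toy 1-D illustration of robust EM with a t-distribution (nu degrees of
# freedom): outliers get small weights u_i and barely affect the mean.

def t_em_mean(x, nu=3.0, iters=50):
    mu, sig2 = sum(x) / len(x), 1.0
    for _ in range(iters):
        # E-step: robustness weights u_i = (nu + d)/(nu + delta_i^2), d = 1
        u = [(nu + 1.0) / (nu + (xi - mu) ** 2 / sig2) for xi in x]
        # M-step: weighted updates of location and scale
        mu = sum(ui * xi for ui, xi in zip(u, x)) / sum(u)
        sig2 = sum(ui * (xi - mu) ** 2 for ui, xi in zip(u, x)) / len(x)
    return mu

data = [0.9, 1.1, 1.0, 0.95, 1.05, 50.0]  # one gross outlier
print(t_em_mean(data))  # close to 1.0, unlike the plain mean (~9.2)
```

With a Gaussian model (u_i = 1 for all i) the same update would return the heavily contaminated arithmetic mean.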
Voronoi-Based Point-Placement for Three-Dimensional Delaunay-Refinement
NASA Technical Reports Server (NTRS)
Engwirda, Darren
2015-01-01
An extension of the restricted Delaunay-refinement algorithm for three-dimensional tetrahedral mesh generation is described, in which an off-centre type point-placement scheme is utilised. It is shown that the use of generalised Steiner points, positioned along edges in the associated Voronoi complex, typically leads to improvements in the overall size, quality and grading of the resulting tetrahedral meshes. The new algorithm can be viewed as a Frontal-Delaunay approach - a hybridisation of conventional Delaunay-refinement and advancing-front techniques, in which new vertices are positioned to satisfy both element-size and shape constraints. The new method is shown to inherit many of the best features of classical Delaunay-refinement and advancing-front type algorithms, combining good practical performance with theoretical robustness. Experimental comparisons show that
Zone refining of plutonium metal
1997-05-01
The purpose of this study was to investigate zone refining techniques for the purification of plutonium metal. The redistribution of 10 impurity elements from zone melting was examined. Four tantalum boats were loaded with plutonium impurity alloy, placed in a vacuum furnace, heated to 700°C, and held at temperature for one hour. Ten passes were made with each boat. Metallographic and chemical analyses performed on the plutonium rods showed that, after 10 passes, moderate movement of certain elements was achieved. Molten zone speeds of 1 or 2 inches per hour had no effect on impurity element movement. Likewise, the application of constant or variable power had no effect on impurity movement. The study implies that development of a zone refining process to purify plutonium is feasible. Development of a process will be hampered by two factors: (1) the effect on impurity element redistribution of the oxide layer formed on the exposed surface of the material is not understood, and (2) the tantalum container material is not inert in the presence of plutonium. Cold boat studies are planned, with higher temperature and vacuum levels, to determine the effect of these factors. 5 refs., 1 tab., 5 figs.
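The redistribution mechanism exploited here can be made concrete with the standard single-pass zone-melting relation (a textbook formula, not a result of this experimental study): for distribution coefficient k < 1 and zone length L, the solute profile after one pass is C(x) = C0·(1 − (1 − k)·exp(−k·x/L)), so impurities with k < 1 are swept toward the far end of the bar.

```python
# Illustrative evaluation of Pfann's single-pass zone-melting equation;
# parameters C0, k, L below are arbitrary example values.
import math

def pfann_single_pass(C0, k, L, x):
    """Solute concentration at position x after one zone pass (k < 1)."""
    return C0 * (1.0 - (1.0 - k) * math.exp(-k * x / L))

C0, k, L = 1.0, 0.5, 1.0
profile = [round(pfann_single_pass(C0, k, L, x), 3) for x in (0.0, 1.0, 4.0)]
print(profile)  # -> [0.5, 0.697, 0.932]: starts at C0*k, rises toward C0
```

Repeated passes sharpen this profile further, which is why multiple passes (ten in the study above) are used in practice.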
Grain refinement in undercooled nickel
Leung, K.K.; Chiu, C.P.; Kui, H.W.
1995-05-15
In this paper, the microstructures of undercooled Ni that solidified at various initial bulk undercoolings are examined in detail in order to understand the mechanism of grain refinement in metallic systems. Molten Ni contracts on solidification. In the experiment, since it was covered by molten glass flux, upon crystallization cavities had to form to accommodate the rapid volume change if the undercooled specimen remained in contact with the glass flux, which could not flow so readily. The adhesiveness between Ni and glass flux was confirmed by removing them from a furnace after the whole system had been cooled down to room temperature. Furthermore, it was clear from the micrographs that after a cavity had formed, it did not collapse. Smaller grains were found to concentrate near the void along the minor axis. At still higher undercoolings, the effect was so violent that the voids took irregular shapes. Again, the grains near the cavity are somewhat smaller than those further away. Accordingly, the authors conclude that grain refinement in undercooled Ni was brought about by dynamic nucleation as the cavities formed.
Elliptic Solvers for Adaptive Mesh Refinement Grids
Quinlan, D.J.; Dendy, J.E., Jr.; Shapira, Y.
1999-06-03
We are developing multigrid methods that will efficiently solve elliptic problems with anisotropic and discontinuous coefficients on adaptive grids. The final product will be a library that provides for the simplified solution of such problems. This library will directly benefit the efforts of other Laboratory groups. The focus of this work is research on serial and parallel elliptic algorithms and the inclusion of our black-box multigrid techniques into this new setting. The approach applies the Los Alamos object-oriented class libraries that greatly simplify the development of serial and parallel adaptive mesh refinement applications. In the final year of this LDRD, we focused on putting the software together; in particular we completed the final AMR++ library, we wrote tutorials and manuals, and we built example applications. We implemented the Fast Adaptive Composite Grid method as the principal elliptic solver. We presented results at the Overset Grid Conference and other more AMR specific conferences. We worked on optimization of serial and parallel performance and published several papers on the details of this work. Performance remains an important issue and is the subject of continuing research work.
Three-dimensional Hybrid Continuum-Atomistic Simulations for Multiscale Hydrodynamics
Wijesinghe, S; Hornung, R; Garcia, A; Hadjiconstantinou, N
2004-04-15
We present an adaptive mesh and algorithmic refinement (AMAR) scheme for modeling multi-scale hydrodynamics. The AMAR approach extends standard conservative adaptive mesh refinement (AMR) algorithms by providing a robust flux-based method for coupling an atomistic fluid representation to a continuum model. The atomistic model is applied locally in regions where the continuum description is invalid or inaccurate, such as near strong flow gradients and at fluid interfaces, or when the continuum grid is refined to the molecular scale. The need for such "hybrid" methods arises from the fact that hydrodynamics modeled by continuum representations are often under-resolved or inaccurate while solutions generated using molecular resolution globally are not feasible. In the implementation described herein, Direct Simulation Monte Carlo (DSMC) provides an atomistic description of the flow and the compressible two-fluid Euler equations serve as our continuum-scale model. The AMR methodology provides local grid refinement while the algorithm refinement feature allows the transition to DSMC where needed. The continuum and atomistic representations are coupled by matching fluxes at the continuum-atomistic interfaces and by proper averaging and interpolation of data between scales. Our AMAR application code is implemented in C++ and is built upon the SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) framework developed at Lawrence Livermore National Laboratory. SAMRAI provides the parallel adaptive gridding algorithm and enables the coupling between the continuum and atomistic methods.
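The flux-matching idea can be reduced to a toy 1-D bookkeeping exercise (a hedged sketch, not the SAMRAI/AMAR implementation; all names below are invented for illustration): particles crossing the continuum-atomistic interface deposit their exact mass into the adjacent continuum cell, so the hybrid conserves mass by construction.

```python
# Toy sketch of flux-based hybrid coupling: the continuum cell adjacent
# to the interface receives exactly the mass carried by particles that
# cross it, so no mass is created or destroyed at the interface.

def couple_interface(rho_last, particles, x_int, v, dt, dx, m_p):
    """Advance particles ballistically; those ending left of x_int are
    absorbed and their mass is added to the last continuum cell."""
    kept, crossed = [], 0
    for x in particles:
        xn = x + v * dt
        if xn < x_int:
            crossed += 1            # absorbed by the continuum region
        else:
            kept.append(xn)
    rho_last += crossed * m_p / dx  # flux matching: exact mass transfer
    return rho_last, kept

rho, parts = 1.0, [0.1, 0.5, 1.0]
total_before = rho * 0.2 + len(parts) * 0.01
rho, parts = couple_interface(rho, parts, x_int=0.0, v=-0.6, dt=1.0,
                              dx=0.2, m_p=0.01)
total_after = rho * 0.2 + len(parts) * 0.01
print(abs(total_before - total_after) < 1e-12)  # -> True: mass conserved
```

The real scheme matches full hydrodynamic fluxes (mass, momentum, energy) in both directions, but the conservation argument is the same.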
NASA Technical Reports Server (NTRS)
Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.
1995-01-01
Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods have been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of boundary geometry is important. The complex geometry is represented by using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.
Workshop on algorithms for macromolecular modeling. Final project report, June 1, 1994--May 31, 1995
Leimkuhler, B.; Hermans, J.; Skeel, R.D.
1995-07-01
A workshop was held on algorithms and parallel implementations for macromolecular dynamics, protein folding, and structural refinement. This document contains abstracts and brief reports from that workshop.
Adaptive mesh refinement in titanium
Colella, Phillip; Wen, Tong
2005-01-21
In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study, where we implement a subset of Chombo's functionality in Titanium. Chombo is a software package applying the Adaptive Mesh Refinement methodology to numerical Partial Differential Equations at the production level. Chombo takes a library approach to parallel programming (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation based on two grid configurations from a real application. Also provided are the counts of lines of code from both sides.
Using Adaptive Mesh Refinement to Simulate Storm Surge
NASA Astrophysics Data System (ADS)
Mandli, K. T.; Dawson, C.
2012-12-01
Coastal hazards related to strong storms such as hurricanes and typhoons are among the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the subsequent rise in sea level during these storms. Unfortunately these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in design of infrastructure or forecasting with ensembles of probable storms. One solution to address the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectation of the dynamics of the flow, as well as in particular regions of interest such as harbors. Simulations of many different applications have only been made possible by AMR-type algorithms, which allow otherwise impractical computations to be performed at far lower expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.
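The kind of refinement flagging described above can be sketched in 1-D (a hedged toy, not GeoClaw's actual criteria, which are considerably richer): a cell is flagged when the local surface-elevation gradient is steep, or unconditionally when it lies inside a user-specified region of interest such as a harbor.

```python
# Toy 1-D refinement-flagging criterion: flag steep gradients in the
# surface elevation eta, plus any cell inside a forced-refinement region.

def flag_cells(eta, xs, dx, grad_tol, regions):
    """Return sorted indices of cells to refine."""
    flags = set()
    for i in range(1, len(eta) - 1):
        grad = abs(eta[i + 1] - eta[i - 1]) / (2 * dx)  # centred difference
        if grad > grad_tol:
            flags.add(i)
        if any(lo <= xs[i] <= hi for lo, hi in regions):
            flags.add(i)  # always resolve harbors / gauge locations
    return sorted(flags)

eta = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]   # a step: an idealised surge front
xs = [float(i) for i in range(len(eta))]
print(flag_cells(eta, xs, dx=1.0, grad_tol=0.25, regions=[(5.0, 6.0)]))
# -> [2, 3, 5]: the front, plus the cell inside the region of interest
```

Cells flagged this way would then be clustered into rectangular patches and refined, following the usual patch-based AMR workflow.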
Bayesian refinement of protein functional site matching
Mardia, Kanti V; Nyirongo, Vysaul B; Green, Peter J; Gold, Nicola D; Westhead, David R
2007-01-01
Background Matching functional sites is a key problem for the understanding of protein function and evolution. The commonly used graph theoretic approach, and other related approaches, require adjustment of a matching distance threshold a priori according to the noise in atomic positions. This is difficult to pre-determine when matching sites related by varying evolutionary distances and crystallographic precision. Furthermore, sometimes the graph method is unable to identify alternative but important solutions in the neighbourhood of the distance based solution because of strict distance constraints. We consider the Bayesian approach to improve graph based solutions. In principle this approach applies to other methods with strict distance matching constraints. The Bayesian method can flexibly incorporate all types of prior information on specific binding sites (e.g. amino acid types) in contrast to combinatorial formulations. Results We present a new meta-algorithm for matching protein functional sites (active sites and ligand binding sites) based on an initial graph matching followed by refinement using a Markov chain Monte Carlo (MCMC) procedure. This procedure is an innovative extension to our recent work. The method accounts for the 3-dimensional structure of the site as well as the physico-chemical properties of the constituent amino acids. The MCMC procedure can lead to a significant increase in the number of significant matches compared to the graph method as measured independently by rigorously derived p-values. Conclusion MCMC refinement step is able to significantly improve graph based matches. We apply the method to matching NAD(P)(H) binding sites within single Rossmann fold families, between different families in the same superfamily, and in different folds. Within families sites are often well conserved, but there are examples where significant shape based matches do not retain similar amino acid chemistry, indicating that even within families the
Block-structured adaptive mesh refinement - theory, implementation and application
Deiterding, Ralf
2011-01-01
Structured adaptive mesh refinement (SAMR) techniques can enable cutting-edge simulations of problems governed by conservation laws. Focusing on the strictly hyperbolic case, these notes explain all algorithmic and mathematical details of a technically relevant implementation tailored for distributed memory computers. An overview of the background of commonly used finite volume discretizations for gas dynamics is included and typical benchmarks to quantify accuracy and performance of the dynamically adaptive code are discussed. Large-scale simulations of shock-induced realistic combustion in non-Cartesian geometry and shock-driven fluid-structure interaction with fully coupled dynamic boundary motion demonstrate the applicability of the discussed techniques for complex scenarios.
Evolving a puncture black hole with fixed mesh refinement
Imbiriba, Breno; Baker, John; Centrella, Joan; Meter, James R. van; Choi, Dae-Il; Fiske, David R.; Brown, J. David; Olson, Kevin
2004-12-15
We present an algorithm for treating mesh refinement interfaces in numerical relativity. We discuss the behavior of the solution near such interfaces located in the strong-field regions of dynamical black hole spacetimes, with particular attention to the convergence properties of the simulations. In our applications of this technique to the evolution of puncture initial data with vanishing shift, we demonstrate that it is possible to simultaneously maintain second order convergence near the puncture and extend the outer boundary beyond 100M, thereby approaching the asymptotically flat region in which boundary condition problems are less difficult and wave extraction is meaningful.
Silicon refinement by chemical vapor transport
NASA Technical Reports Server (NTRS)
Olson, J.
1984-01-01
Silicon refinement by chemical vapor transport is discussed. The operating characteristics of the purification process, including factors affecting the rate, purification efficiency and photovoltaic quality of the refined silicon were studied. The casting of large alloy plates was accomplished. A larger research scale reactor is characterized, and it is shown that a refined silicon product yields solar cells with near state of the art conversion efficiencies.
Refining the shallow slip deficit
NASA Astrophysics Data System (ADS)
Xu, Xiaohua; Tong, Xiaopeng; Sandwell, David T.; Milliner, Christopher W. D.; Dolan, James F.; Hollingsworth, James; Leprince, Sebastien; Ayoub, Francois
2016-03-01
Geodetic slip inversions for three major (Mw > 7) strike-slip earthquakes (1992 Landers, 1999 Hector Mine and 2010 El Mayor-Cucapah) show a 15-60 per cent reduction in slip near the surface (depth < 2 km) relative to the slip at deeper depths (4-6 km). This significant difference between surface coseismic slip and slip at depth has been termed the shallow slip deficit (SSD). The large magnitude of this deficit has been an enigma since it cannot be explained by shallow creep during the interseismic period or by triggered slip from nearby earthquakes. One potential explanation for the SSD is that the previous geodetic inversions lack data coverage close to the surface rupture, so that the shallow portions of the slip models are poorly resolved and generally underestimated. In this study, we improve the static coseismic slip inversion for these three earthquakes, especially at shallow depths, by: (1) including data capturing the near-fault deformation from optical imagery and SAR azimuth offsets; (2) refining the interferometric synthetic aperture radar processing with non-boxcar phase filtering, model-dependent range corrections, more complete phase unwrapping by SNAPHU (Statistical Non-linear Approach for Phase Unwrapping) assuming a maximum discontinuity and an on-fault correlation mask; (3) using more detailed, geologically constrained fault geometries and (4) incorporating additional campaign global positioning system (GPS) data. The refined slip models result in much smaller SSDs of 3-19 per cent. We suspect that the remaining minor SSD for these earthquakes likely reflects a combination of our elastic model's inability to fully account for near-surface deformation, which will render our estimates of shallow slip minima, and potentially small amounts of interseismic fault creep or triggered slip, which could 'make up' a small percentage of the coseismic SSD during the interseismic period. Our results indicate that it is imperative that slip inversions include
Three-dimensional unstructured grid refinement and optimization using edge-swapping
NASA Technical Reports Server (NTRS)
Gandhi, Amar; Barth, Timothy
1993-01-01
This paper presents a three-dimensional (3-D) edge-swapping method based on local transformations. This method extends Lawson's edge-swapping algorithm into 3-D. The 3-D edge-swapping algorithm is employed for the purpose of refining and optimizing unstructured meshes according to arbitrary mesh-quality measures. Several criteria including Delaunay triangulations are examined. Extensions from two to three dimensions of several known properties of Delaunay triangulations are also discussed.
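The 2-D Lawson criterion that this work extends to 3-D can be stated compactly (an illustrative sketch, not the paper's 3-D algorithm): an edge shared by triangles (a, b, c) and (a, b, d) is flipped when d lies strictly inside the circumcircle of (a, b, c), tested with the standard incircle determinant.

```python
# 2-D Lawson flip test via the standard incircle determinant.

def incircle(a, b, c, d):
    """> 0 iff d is strictly inside the circumcircle of ccw triangle abc."""
    m = []
    for p in (a, b, c):
        dx, dy = p[0] - d[0], p[1] - d[1]
        m.append((dx, dy, dx * dx + dy * dy))
    return (m[0][0] * (m[1][1] * m[2][2] - m[2][1] * m[1][2])
            - m[0][1] * (m[1][0] * m[2][2] - m[2][0] * m[1][2])
            + m[0][2] * (m[1][0] * m[2][1] - m[2][0] * m[1][1]))

def should_flip(a, b, c, d):
    """Lawson criterion: flip the edge if d violates the Delaunay property."""
    return incircle(a, b, c, d) > 0

# d inside the circumcircle of the right triangle: flip the shared edge
print(should_flip((0, 0), (1, 0), (0, 1), (0.4, 0.4)))  # -> True
# d well outside: keep the edge
print(should_flip((0, 0), (1, 0), (0, 1), (2, 2)))      # -> False
```

Iterating this local test to convergence yields the Delaunay triangulation in 2-D; in 3-D the analogous local transformations (2-3 and 3-2 flips) no longer guarantee a global Delaunay mesh, which is part of what the paper examines.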
Adaptive Mesh Refinement for Microelectronic Device Design
NASA Technical Reports Server (NTRS)
Cwik, Tom; Lou, John; Norton, Charles
1999-01-01
Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchal elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smoothen and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of
Automated knowledge-base refinement
NASA Technical Reports Server (NTRS)
Mooney, Raymond J.
1994-01-01
Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.
40 CFR 80.235 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2011 CFR
2011-07-01
... knowledge. (4) Name, address, phone number, facsimile number and E-mail address (if available) of a... disapproved, the refiner must comply with the standards in § 80.195. (h) If EPA finds that a refiner...
High Quality Visual Hull Reconstruction by Delaunay Refinement
NASA Astrophysics Data System (ADS)
Liu, Xin; Gavrilova, Marina L.
In this paper, we employ Delaunay triangulation techniques to reconstruct high quality visual hulls. From a set of calibrated images, the algorithm first computes a sparse set of initial points with a dandelion model and builds a Delaunay triangulation restricted to the visual hull surface. It then iteratively refines the triangulation by inserting new sampling points, which are the intersections between the visual hull surface and the Voronoi edges dual to the triangulation's facets, until certain criteria are satisfied. The intersections are computed by cutting line segments with the visual hull, which is then converted to the problem of intersecting a line segment with polygonal contours in 2D. A barrel-grid structure is developed to quickly pick out possibly intersecting contour segments and thus accelerate the process of intersecting in 2D. Our algorithm is robust, fast, fully adaptive, and it produces precise and smooth mesh models composed of well-shaped triangles.
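The 2-D core operation that the sampling step reduces to, intersecting a line segment with polygonal contour edges, can be sketched with the standard parametric segment-segment test (an illustrative sketch; the barrel-grid acceleration structure from the paper is omitted).

```python
# Parametric segment-segment intersection: solve p + t*r = a + u*s
# for t, u in [0, 1]; the cross-product denominator detects parallels.

def seg_intersect(p, q, a, b):
    """Return the intersection point of segments pq and ab, or None."""
    rx, ry = q[0] - p[0], q[1] - p[1]
    sx, sy = b[0] - a[0], b[1] - a[1]
    den = rx * sy - ry * sx
    if den == 0:
        return None  # parallel or collinear: no single crossing point
    t = ((a[0] - p[0]) * sy - (a[1] - p[1]) * sx) / den
    u = ((a[0] - p[0]) * ry - (a[1] - p[1]) * rx) / den
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return (p[0] + t * rx, p[1] + t * ry)
    return None

print(seg_intersect((0, 0), (2, 2), (0, 2), (2, 0)))  # -> (1.0, 1.0)
print(seg_intersect((0, 0), (1, 0), (0, 2), (2, 2)))  # -> None
```

In the reconstruction pipeline this test is applied per camera: the 3-D Voronoi-edge segment is projected into each image and cut against the silhouette contours in 2-D.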
Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics
Lomov, I; Pember, R; Greenough, J; Liu, B
2005-10-18
We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses the hierarchical, structured grid approach first developed by (Berger and Oliger 1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors during the separate advances. The methodology presented here is based on a single grid algorithm developed for multimaterial gas dynamics by (Colella et al. 1993), refined by (Greenough et al. 1995), and extended to the solution of solid mechanics problems with significant strength by (Lomov and Rubin 2003). The single grid algorithm uses a second-order Godunov scheme with an approximate single fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified because highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion will be solely concerned with hydrodynamics modeling. Results from a number of simulations for flows with and without strength will be presented.
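The recursive advance-and-synchronize procedure described above can be sketched as follows (a hedged skeleton with the grid data and integrator stubbed out; only the control flow of Berger-Oliger-style subcycling is shown): the coarse level takes one step, each finer level takes `ratio` smaller steps to reach the same time, and the levels are then synchronized.

```python
# Skeleton of recursive Berger-Oliger time subcycling: every level-l
# step triggers `ratio` steps on level l+1, followed by a sync
# (refluxing/averaging) that restores conservation across levels.

def advance(level, t, dt, max_level, ratio=2, log=None):
    if log is None:
        log = []
    log.append((level, t, dt))  # stand-in for the real grid update
    if level < max_level:
        fine_dt = dt / ratio
        for k in range(ratio):  # subcycle the finer level
            advance(level + 1, t + k * fine_dt, fine_dt,
                    max_level, ratio, log)
        log.append(("sync", level))  # refluxing/averaging would go here
    return log

for entry in advance(0, 0.0, 1.0, max_level=2):
    print(entry)
```

With two refinement levels and a ratio of 2, one coarse step triggers two medium and four fine steps, with a sync after each family of fine substeps, which is exactly the "advance, subcycle, synchronize" pattern of the text.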
Adaptive particle refinement and derefinement applied to the smoothed particle hydrodynamics method
NASA Astrophysics Data System (ADS)
Barcarolo, D. A.; Le Touzé, D.; Oger, G.; de Vuyst, F.
2014-09-01
SPH simulations are usually performed with a uniform particle distribution. New techniques have been recently proposed to enable the use of spatially varying particle distributions, which encouraged the development of automatic adaptivity and particle refinement/derefinement algorithms. All these efforts resulted in very interesting and promising procedures leading to more efficient and faster SPH simulations. In this article, a family of particle refinement techniques is reviewed and a new derefinement technique is proposed and validated through several test cases involving both free-surface and viscous flows. Besides, this new procedure allows higher resolutions in the regions requiring increased accuracy. Moreover, several levels of refinement can be used with this new technique, as often encountered in adaptive mesh refinement techniques in mesh-based methods.
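The bookkeeping behind particle refinement and derefinement can be shown in a minimal 1-D sketch (an illustration of the conservation constraints only; daughter placement, smoothing-length scaling and the paper's specific scheme are not reproduced): splitting shares mass equally among daughters, and merging sums mass while mass-averaging position and velocity.

```python
# Toy 1-D particle split/merge conserving mass and momentum.
# A particle is a (mass, position, velocity) tuple; eps is an
# arbitrary daughter-separation parameter for illustration.

def refine(p, n=2, eps=0.05):
    """Split one particle into n equal-mass daughters straddling it."""
    m, x, v = p
    return [(m / n, x + eps * (2 * k - (n - 1)), v) for k in range(n)]

def derefine(pa, pb):
    """Merge two particles: total mass, mass-weighted position/velocity."""
    (ma, xa, va), (mb, xb, vb) = pa, pb
    m = ma + mb
    return (m, (ma * xa + mb * xb) / m, (ma * va + mb * vb) / m)

p = (1.0, 0.5, 2.0)
daughters = refine(p)
merged = derefine(*daughters)
print(daughters)  # two half-mass particles straddling x = 0.5
print(merged)     # total mass, centre of mass and momentum recovered
```

Real schemes additionally adjust smoothing lengths and choose daughter positions to minimise the density error introduced by the split, which is where the validation cases in the paper come in.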
Anomalies in the refinement of isoleucine
Berntsen, Karen R. M.; Vriend, Gert
2014-04-01
A study of isoleucines in protein structures solved using X-ray crystallography revealed a series of systematic trends for the two side-chain torsion angles χ₁ and χ₂ dependent on the resolution, secondary structure and refinement software used. The average torsion angles for the nine rotamers were similar in high-resolution structures solved using either the REFMAC, CNS or PHENIX software. However, at low resolution these programs often refine towards somewhat different χ₁ and χ₂ values. Small systematic differences can be observed between refinement software that uses molecular dynamics-type energy terms (for example CNS) and software that does not use these terms (for example REFMAC). Detailing the standard torsion angles used in refinement software can improve the refinement of protein structures. The target values in the molecular dynamics-type energy functions can also be improved.
Pneumatic conveying of pulverized solvent refined coal
Lennon, Dennis R.
1984-11-06
A method for pneumatically conveying solvent refined coal to a burner under conditions of dilute phase pneumatic flow so as to prevent saltation of the solvent refined coal in the transport line by maintaining the transport fluid velocity above approximately 95 ft/sec.
27 CFR 21.127 - Shellac (refined).
Code of Federal Regulations, 2012 CFR
2012-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Shellac (refined). 21.127....127 Shellac (refined). (a) Arsenic content. Not more than 1.4 parts per million as determined by the... petroleum ether and mix thoroughly. Add approximately 2 liters of water and separate a portion of the...
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
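The basic loop the abstract refers to, selection of the fittest, crossover and mutation on bit strings, can be illustrated minimally (a generic textbook GA, not the NASA tool described above), here maximising the number of ones in a 20-bit string.

```python
# Minimal generational GA on the OneMax problem: tournament selection,
# one-point crossover, and per-bit mutation.
import random

random.seed(1)

def fitness(bits):
    return sum(bits)

def evolve(pop_size=30, n_bits=20, gens=60, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(gens):
        # tournament selection: the fitter of two random individuals wins
        parents = [max(random.sample(pop, 2), key=fitness)
                   for _ in range(pop_size)]
        nxt = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            cut = random.randrange(1, n_bits)  # one-point crossover
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                nxt.append([bit ^ (random.random() < p_mut)
                            for bit in child])  # bit-flip mutation
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically at or near the optimum of 20
```

The same skeleton carries over to real applications by swapping the bit string for a problem-specific encoding and `fitness` for the objective being optimised.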
Improving Flow Response of a Variable-rate Aerial Application System by Interactive Refinement
Technology Transfer Automated Retrieval System (TEKTRAN)
Experiments were conducted to evaluate response of a variable-rate aerial application controller to changing flow rates and to improve its response at correspondingly varying system pressures. System improvements have been made by refinement of the control algorithms over time in collaboration with ...
North Dakota Refining Capacity Study
Dennis Hill; Kurt Swenson; Carl Tuura; Jim Simon; Robert Vermette; Gilberto Marcha; Steve Kelly; David Wells; Ed Palmer; Kuo Yu; Tram Nguyen; Juliam Migliavacca
2011-01-05
According to a 2008 report issued by the United States Geological Survey, North Dakota and Montana have an estimated 3.0 to 4.3 billion barrels of undiscovered, technically recoverable oil in an area known as the Bakken Formation. Given the size and remoteness of the discovery, the question became: can a business case be made for increasing refining capacity in North Dakota, and, if so, what is the impact on existing players in the region? To answer the question, a study committee composed of leaders in the region's petroleum industry was brought together to define the scope of the study, hire a consulting firm and oversee the study. The study committee met frequently to provide input on the findings and modify the course of the study as needed. The study concluded that Petroleum Administration for Defense District II (PADD II) has an oversupply of gasoline. With that in mind, a niche market, naphtha, was identified. Naphtha is used as a diluent for pipelining bitumen (heavy crude) from Canada to crude markets. The study predicted there will continue to be an increase in the demand for naphtha through 2030. The study estimated the optimal configuration for the refinery at 34,000 barrels per day (BPD), producing 15,000 BPD of naphtha with a 52 percent refinery charge for jet and diesel yield. The financial modeling assumed the sponsor of a refinery would invest its own capital to pay for construction costs. Under this assumption, the internal rate of return is 9.2 percent, which is not sufficient to attract traditional investment given the risk factors of the project. Those interested in pursuing this niche market will therefore need to identify incentives to improve the rate of return.
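The 9.2 percent figure cited in this study is an internal rate of return (IRR): the discount rate at which the project's net present value is zero. A minimal sketch of that computation via bisection; the cash flows in the test are invented for illustration, not the study's data:

```python
def npv(rate, cashflows):
    """Net present value of a cash-flow series; index 0 is today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-8):
    """Bisection on NPV(rate); assumes a single sign change in [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid  # root lies in the lower half
        else:
            lo = mid  # root lies in the upper half
        if hi - lo < tol:
            break
    return (lo + hi) / 2
```

For example, paying 100 today for 110 in one year yields an IRR of 10 percent.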
Passive microwave algorithm development and evaluation
NASA Technical Reports Server (NTRS)
Petty, Grant W.
1995-01-01
The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three-year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.
Protein NMR structures refined without NOE data.
Ryu, Hyojung; Kim, Tae-Rae; Ahn, SeonJoo; Ji, Sunyoung; Lee, Jinhyuk
2014-01-01
The refinement of low-quality structures is an important challenge in protein structure prediction. Many studies have been conducted on protein structure refinement; the refinement of structures derived from NMR spectroscopy has been especially intensively studied. In this study, we generated a flat-bottom distance potential instead of using NOE data, because NOE data have ambiguity and uncertainty. The potential was derived from distance information in the given structures and prevented structural dislocation during the refinement process. A simulated annealing protocol was used to minimize the potential energy of the structure. The protocol was tested on 134 NMR structures in the Protein Data Bank (PDB) that also have X-ray structures. Among them, 50 structures were used as a training set to find the optimal "width" parameter in the flat-bottom distance potential functions. In the validation set (the other 84 structures), most of the 12 quality assessment scores of the refined structures were significantly improved (total score increased from 1.215 to 2.044). Moreover, the secondary structure similarity of the refined structure was improved over that of the original structure. Finally, we demonstrate that the combination of two energy potentials, statistical torsion angle potential (STAP) and the flat-bottom distance potential, can drive the refinement of NMR structures.
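A flat-bottom distance potential penalizes a distance only once it leaves a tolerance window around its target value, which is what prevents large dislocations while leaving small motions free. The quadratic wall and the parameter names below are illustrative assumptions, not the paper's exact functional form:

```python
def flat_bottom_potential(d, d0, width, k=1.0):
    """Zero penalty while |d - d0| <= width; quadratic wall outside."""
    excess = abs(d - d0) - width
    return k * excess ** 2 if excess > 0.0 else 0.0
```

A distance 0.2 Å from its 5.0 Å target with a 0.5 Å window costs nothing; a distance 2.0 Å away is penalized on the 1.5 Å excess only.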
Firing of pulverized solvent refined coal
Lennon, Dennis R.; Snedden, Richard B.; Foster, Edward P.; Bellas, George T.
1990-05-15
A burner for the firing of pulverized solvent refined coal is constructed and operated such that the solvent refined coal can be fired successfully without any performance limitations and without the coking of the solvent refined coal on the burner components. The burner is provided with a tangential inlet of primary air and pulverized fuel, a vaned diffusion swirler for the mixture of primary air and fuel, a center water-cooled conical diffuser shielding the incoming fuel from the heat radiation from the flame and deflecting the primary air and fuel stream into the secondary air, and a water-cooled annulus located between the primary air and secondary air flows.
Refining of metallurgical-grade silicon
NASA Technical Reports Server (NTRS)
Dietl, J.
1986-01-01
A basic requirement of large-scale solar cell fabrication is to provide low-cost base material. Unconventional refining of metallurgical-grade silicon represents one of the most promising routes for silicon meltstock processing. The refining concept is based on an optimized combination of metallurgical treatments. Commercially available crude silicon, in this sequence, requires a first pyrometallurgical step by slagging or, alternatively, solvent extraction by aluminum. After grinding and leaching, high-purity quality is gained as an advanced stage of refinement. To reach solar-grade quality, a final pyrometallurgical step is needed: liquid-gas extraction.
A Selective Refinement Approach for Computing the Distance Functions of Curves
Laney, D A; Duchaineau, M A; Max, N L
2000-12-01
We present an adaptive signed distance transform algorithm for curves in the plane. A hierarchy of bounding boxes is required for the input curves. We demonstrate the algorithm on the isocontours of a turbulence simulation. The algorithm provides guaranteed error bounds with a selective refinement approach. The domain over which the signed distance function is desired is adaptively triangulated, and piecewise discontinuous linear approximations are constructed within each triangle. The resulting transform performs work only where requested and does not rely on a preset sampling rate or other constraints.
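The primitive underneath any distance transform for curves is the exact point-to-curve distance, sketched here for a polyline approximation of the input curve; the selective refinement machinery (bounding-box hierarchy, adaptive triangulation, sign assignment) is omitted, and all names are illustrative:

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the segment a-b, via projection and clamping."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    cx, cy = ax + t * dx, ay + t * dy  # closest point on the segment
    return math.hypot(px - cx, py - cy)

def distance_to_polyline(p, pts):
    """Unsigned distance from p to a curve given as a polyline of vertices."""
    return min(point_segment_distance(p, pts[i], pts[i + 1])
               for i in range(len(pts) - 1))
```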
A Novel Admixture-Based Pharmacogenetic Approach to Refine Warfarin Dosing in Caribbean Hispanics
Claudio-Campos, Karla; Rivera-Miranda, Giselle; Bermúdez-Bosch, Luis; Renta, Jessicca Y.; Cadilla, Carmen L.; Cruz, Iadelisse; Feliu, Juan F.; Vergara, Cunegundo; Ruaño, Gualberto
2016-01-01
Aim: This study is aimed at developing a novel admixture-adjusted pharmacogenomic approach to individually refine warfarin dosing in Caribbean Hispanic patients. Patients & Methods: A multiple linear regression analysis of effective warfarin doses versus relevant genotypes, admixture, clinical and demographic factors was performed in 255 patients and further validated externally in another cohort of 55 individuals. Results: The admixture-adjusted, genotype-guided warfarin dosing refinement algorithm developed in Caribbean Hispanics showed better predictability (R² = 0.70, MAE = 0.72 mg/day) than a clinical algorithm that excluded genotypes and admixture (R² = 0.60, MAE = 0.99 mg/day), and outperformed two prior pharmacogenetic algorithms in predicting effective dose in this population. For patients at the highest risk of adverse events, 45.5% of the dose predictions using the developed pharmacogenetic model resulted in ideal dose, as compared with only 29% when using the clinical non-genetic algorithm (p<0.001). The admixture-driven pharmacogenetic algorithm predicted 58% of warfarin dose variance when externally validated in 55 individuals from an independent validation cohort (MAE = 0.89 mg/day, 24% mean bias). Conclusions: Results supported our rationale to incorporate individuals' genotypes and unique admixture metrics into pharmacogenetic refinement models in order to increase predictability when expanding them to admixed populations such as Caribbean Hispanics. Trial Registration: ClinicalTrials.gov NCT01318057 PMID:26745506
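The R² and MAE figures reported above are standard regression fit metrics. A minimal sketch of how they are computed from observed and predicted doses; the vectors in the test are toy values, not the study's data:

```python
def mae(y_true, y_pred):
    """Mean absolute error between observed and predicted values."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot
```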
An automatic and fast centerline extraction algorithm for virtual colonoscopy.
Jiang, Guangxiang; Gu, Lixu
2005-01-01
This paper introduces a new refined centerline extraction algorithm, which is based on and significantly improved from distance mapping algorithms. The new approach includes three major parts: employing a colon segmentation method; designing and realizing a fast Euclidean distance transform algorithm; and introducing a boundary voxel cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the overall performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm is more efficient and accurate compared with existing algorithms. PMID:17281406
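The Dijkstra step referred to above is the standard shortest-path search over a weighted voxel graph; BVC accelerates it by pruning boundary voxels before the search runs. A generic sketch of the Dijkstra core only, with an adjacency-list format assumed for illustration:

```python
import heapq

def dijkstra(adj, start):
    """Shortest-path distances from start over a graph given as
    adj[node] = [(neighbor, edge_weight), ...]."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

Removing voxels before the search shrinks both `adj` and the heap traffic, which is the intuition behind the reported BVC speedup.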
Refined Phenotyping of Modic Changes
Määttä, Juhani H.; Karppinen, Jaro; Paananen, Markus; Bow, Cora; Luk, Keith D.K.; Cheung, Kenneth M.C.; Samartzis, Dino
2016-01-01
The strength of the associations increased with the number of MC. This large-scale study is the first to definitively note MC types and specific morphologies to be independently associated with prolonged severe LBP and back-related disability. This proposed refined MC phenotype may have direct implications in clinical decision-making as to the development and management of LBP. Understanding of these imaging biomarkers can lead to new preventative and personalized therapeutics related to LBP. PMID:27258491
Rice, L M; Brünger, A T
1994-08-01
A reduced variable conformational sampling strategy for macromolecules based on molecular dynamics in torsion angle space is evaluated using crystallographic refinement as a prototypical search problem. Bae and Haug's algorithm for constrained dynamics [Bae, D.S., Haug, E.J. A recursive formulation for constrained mechanical system dynamics. Mech. Struct. Mach. 15:359-382, 1987], originally developed for robotics, was used. Their formulation solves the equations of motion exactly for arbitrary holonomic constraints, and hence differs from commonly used approximation algorithms. It uses gradients calculated in Cartesian coordinates, and thus also differs from internal coordinate formulations. Molecular dynamics can be carried out at significantly higher temperatures due to the elimination of the high-frequency bond and angle vibrations. The sampling strategy presented here combines high-temperature torsion angle dynamics with repeated trajectories using different initial velocities. The best solutions can be identified by the free R value, or by the R value if experimental phase information is appropriately included in the refinement. Applications to crystallographic refinement show a significantly increased radius of convergence over conventional techniques. For a test system with diffraction data to 2 Å resolution, slow-cooling protocols fail to converge if the backbone atom root mean square (rms) coordinate deviation from the crystal structure is greater than 1.25 Å, but torsion angle refinement can correct backbone atom rms coordinate deviations up to approximately 1.7 Å.
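The R value used to rank solutions above (and the free R, which is the same statistic computed on a held-out set of reflections) measures agreement between observed and calculated structure-factor amplitudes. A minimal sketch of the standard formula R = Σ||Fo| − |Fc|| / Σ|Fo|; the amplitude lists in the test are illustrative, not real data:

```python
def r_factor(f_obs, f_calc):
    """Crystallographic R factor over paired reflection amplitudes."""
    num = sum(abs(abs(fo) - abs(fc)) for fo, fc in zip(f_obs, f_calc))
    den = sum(abs(fo) for fo in f_obs)
    return num / den
```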
U.S. Refining Capacity Utilization
1995-01-01
This article briefly reviews recent trends in domestic refining capacity utilization and examines in detail the differences in reported crude oil distillation capacities and utilization rates among different classes of refineries.
1991 worldwide refining and gas processing directory
Not Available
1990-01-01
This book is an authority for immediate information on the industry. You can use it to find new business, analyze market trends, and to stay in touch with existing contacts while making new ones. The possibilities for business applications are numerous. Arranged by country, all listings in the directory include address, phone, fax and telex numbers, a description of the company's activities, names of key personnel and their titles, corporate headquarters, branch offices and plant sites. This newly revised edition lists more than 2000 companies and nearly 3000 branch offices and plant locations. This easy-to-use reference also includes several of the most vital and informative surveys of the industry, including the U.S. Refining Survey, the Worldwide Construction Survey in Refining, Sulfur, Gas Processing and Related Fuels, the Worldwide Refining and Gas Processing Survey, the Worldwide Catalyst Report, and the U.S. and Canadian Lube and Wax Capacities Report from the National Petroleum Refiners Association.
Arbitrary Lagrangian Eulerian Adaptive Mesh Refinement
Koniges, A.; Eder, D.; Masters, N.; Fisher, A.; Anderson, R.; Gunney, B.; Wang, P.; Benson, D.; Dixit, P.
2009-09-29
This is a simulation code involving an ALE (arbitrary Lagrangian-Eulerian) hydrocode with AMR (adaptive mesh refinement) and pluggable physics packages for material strength, heat conduction, radiation diffusion, and laser ray tracing, developed at LLNL, UCSD, and Berkeley Lab. The code is an extension of the open source SAMRAI (Structured Adaptive Mesh Refinement Application Interface) code/library. The code can be used in laser facilities such as the National Ignition Facility. The code is also being applied to slurry flow (landslides).
Refining a relativistic, hydrodynamic solver: Admitting ultra-relativistic flows
NASA Astrophysics Data System (ADS)
Bernstein, J. P.; Hughes, P. A.
2009-09-01
We have undertaken the simulation of hydrodynamic flows with bulk Lorentz factors in the range 10²-10⁶. We discuss the application of an existing relativistic, hydrodynamic primitive variable recovery algorithm to a study of pulsar winds and, in particular, the refinement made to admit such ultra-relativistic flows. We show that an iterative quartic root finder breaks down for Lorentz factors above 10² and employ an analytic root finder as a solution. We find that the former, which is known to be robust for Lorentz factors up to at least 50, offers a 24% speed advantage. We demonstrate the existence of a simple diagnostic allowing for a hybrid primitives recovery algorithm that includes an automatic, real-time toggle between the iterative and analytical methods. We further determine the accuracy of the iterative and hybrid algorithms for a comprehensive selection of input parameters and demonstrate the latter's capability to elucidate the internal structure of ultra-relativistic plasmas. In particular, we discuss simulations showing that the interaction of a light, ultra-relativistic pulsar wind with a slow, dense ambient medium can give rise to asymmetry reminiscent of the Guitar nebula, leading to the formation of a relativistic backflow harboring a series of internal shockwaves. The shockwaves provide thermalized energy that is available for the continued inflation of the pulsar wind nebula (PWN) bubble. In turn, the bubble enhances the asymmetry, thereby providing positive feedback to the backflow.
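The hybrid structure described above, an iterative root finder with an automatic toggle to an exact solver, can be sketched generically: run Newton's method, and fall back when the iteration stalls or its derivative degenerates. This is a schematic of the pattern only, not the paper's actual quartic or diagnostic; all names are illustrative:

```python
def hybrid_root(f, df, x0, solve_exact, tol=1e-12, max_iter=50):
    """Newton iteration on f with derivative df; toggle to the exact
    (analytic) solver when the iteration fails to converge."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x          # iterative branch succeeded
        d = df(x)
        if d == 0.0:
            break             # diagnostic tripped: derivative degenerate
        x = x - fx / d
    return solve_exact()      # analytic fallback
```

Here the "diagnostic" is simply non-convergence; the paper's toggle is a physical criterion on the flow state.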
Advances in Patch-Based Adaptive Mesh Refinement Scalability
Gunney, Brian T.N.; Anderson, Robert W.
2015-12-18
Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress toward SAMR scalability, but early algorithms still had trouble scaling past the regime of 10⁵ MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.
Software for Refining or Coarsening Computational Grids
NASA Technical Reports Server (NTRS)
Daines, Russell; Woods, Jody
2003-01-01
A computer program performs calculations for refinement or coarsening of computational grids of the type called structured (signifying that they are geometrically regular and/or are specified by relatively simple algebraic expressions). This program is designed to facilitate analysis of the numerical effects of changing structured grids utilized in computational fluid dynamics (CFD) software. Unlike prior grid-refinement and -coarsening programs, this program is not limited to doubling or halving: the user can specify any refinement or coarsening ratio, which can have a noninteger value. In addition to this ratio, the program accepts, as input, a grid file and the associated restart file, which is basically a file containing the most recent iteration of flow-field variables computed on the grid. The program then refines or coarsens the grid as specified, while maintaining the geometry and the stretching characteristics of the original grid. The program can interpolate from the input restart file to create a restart file for the refined or coarsened grid. The program provides a graphical user interface that facilitates the entry of input data for the grid-generation and restart-interpolation routines.
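The distinguishing feature described above is refinement by an arbitrary, possibly noninteger ratio while preserving the original grid's stretching. A minimal 1D sketch of that idea, assuming linear interpolation in index space so that node spacing scales with the local spacing of the input grid; the function name and approach are illustrative, not the program's actual routine:

```python
def refine_grid(x, ratio):
    """Resample a 1D grid to round(ratio * (len(x)-1)) cells, interpolating
    linearly in fractional index space to preserve the original stretching."""
    n_old = len(x) - 1
    n_new = max(1, round(ratio * n_old))
    out = []
    for j in range(n_new + 1):
        s = j * n_old / n_new            # fractional index into the old grid
        i = min(int(s), n_old - 1)       # bracketing old interval
        t = s - i
        out.append((1 - t) * x[i] + t * x[i + 1])
    return out
```

A ratio of 1.5 turns a 2-cell grid into a 3-cell grid with the same endpoints; the same interpolation applied to a flow-field array would play the role of the restart-file interpolation.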
Tsunami modelling with adaptively refined finite volume methods
LeVeque, R.J.; George, D.L.; Berger, M.J.
2011-01-01
Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows.
On-Orbit Model Refinement for Controller Redesign
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Calise, Anthony J.
1998-01-01
High performance control design for a flexible space structure is challenging since high fidelity plant models are difficult to obtain a priori. Uncertainty in the control design models typically requires a very robust, low-performance control design which must be tuned on-orbit to achieve the required performance. A new procedure for refining a multivariable open loop plant model based on closed-loop response data is presented. Using a minimal representation of the state space dynamics, a least squares prediction error method is employed to estimate the plant parameters. This control-relevant system identification procedure stresses the joint nature of the system identification and control design problem by seeking to obtain a model that minimizes the difference between the predicted and actual closed-loop performance. This paper presents an algorithm for iterative closed-loop system identification and controller redesign along with illustrative examples.
ENZO: AN ADAPTIVE MESH REFINEMENT CODE FOR ASTROPHYSICS
Bryan, Greg L.; Turk, Matthew J.; Norman, Michael L.; Bordner, James; Xu, Hao; Kritsuk, Alexei G.; O'Shea, Brian W.; Smith, Britton; Abel, Tom; Wang, Peng; Skillman, Samuel W.; Wise, John H.; Reynolds, Daniel R.; Collins, David C.; Harkness, Robert P.; Kim, Ji-hoon; Kuhlen, Michael; Goldbaum, Nathan; Hummels, Cameron; Collaboration: Enzo Collaboration; and others
2014-04-01
This paper describes the open-source code Enzo, which uses block-structured adaptive mesh refinement to provide high spatial and temporal resolution for modeling astrophysical fluid flows. The code is Cartesian, can be run in one, two, and three dimensions, and supports a wide variety of physics including hydrodynamics, ideal and non-ideal magnetohydrodynamics, N-body dynamics (and, more broadly, self-gravity of fluids and particles), primordial gas chemistry, optically thin radiative cooling of primordial and metal-enriched plasmas (as well as some optically-thick cooling models), radiation transport, cosmological expansion, and models for star formation and feedback in a cosmological context. In addition to explaining the algorithms implemented, we present solutions for a wide range of test problems, demonstrate the code's parallel performance, and discuss the Enzo collaboration's code development methodology.
A Cartesian grid approach with hierarchical refinement for compressible flows
NASA Technical Reports Server (NTRS)
Quirk, James J.
1994-01-01
Many numerical studies of flows that involve complex geometries are limited by the difficulties in generating suitable grids. We present a Cartesian boundary scheme for two-dimensional, compressible flows that is unfettered by the need to generate a computational grid and so it may be used, routinely, even for the most awkward of geometries. In essence, an arbitrary-shaped body is allowed to blank out some region of a background Cartesian mesh and the resultant cut-cells are singled out for special treatment. This is done within a finite-volume framework and so, in principle, any explicit flux-based integration scheme can take advantage of this method for enforcing solid boundary conditions. For best effect, the present Cartesian boundary scheme has been combined with a sophisticated, local mesh refinement scheme, and a number of examples are shown in order to demonstrate the efficacy of the combined algorithm for simulations of shock interaction phenomena.
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2...
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2...
Refining Linear Fuzzy Rules by Reinforcement Learning
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Khedkar, Pratap S.; Malkani, Anil
1996-01-01
Linear fuzzy rules are increasingly being used in the development of fuzzy logic systems. Radial basis functions have also been used in the antecedents of the rules for clustering in product space, which can automatically generate a set of linear fuzzy rules from an input/output data set. Manual methods are usually used in refining these rules. This paper presents a method for refining the parameters of these rules using reinforcement learning, which can be applied in domains where supervised input-output data are not available and reinforcements are received only after a long sequence of actions. This is shown for a generalization of radial basis functions. The formation of fuzzy rules from data and their automatic refinement is an important step in closing the gap in applying reinforcement learning methods to domains where only limited input-output data are available.
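A rule base of this kind can be sketched as Gaussian RBF antecedents paired with linear consequents, with the rule outputs blended by membership weight; the refinement step would then adjust each rule's (center, width, slope, offset) parameters from reinforcement signals. The rule format and names here are illustrative assumptions, not the paper's architecture:

```python
import math

def fuzzy_output(x, rules):
    """Each rule is (center, width, a, b): a Gaussian RBF membership
    weighting the linear consequent y = a*x + b. Output is the
    membership-weighted average of the consequents."""
    weights, outputs = [], []
    for center, width, a, b in rules:
        w = math.exp(-((x - center) / width) ** 2)
        weights.append(w)
        outputs.append(a * x + b)
    return sum(w * y for w, y in zip(weights, outputs)) / sum(weights)
```

With a single rule, the blend reduces to that rule's linear consequent, which makes the weighting easy to check.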
Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
Increasing levels of assistance in refinement of knowledge-based retrieval systems
NASA Technical Reports Server (NTRS)
Baudin, Catherine; Kedar, Smadar; Pell, Barney
1994-01-01
The task of incrementally acquiring and refining the knowledge and algorithms of a knowledge-based system in order to improve its performance over time is discussed. In particular, the design of DE-KART, a tool whose goal is to provide increasing levels of assistance in acquiring and refining indexing and retrieval knowledge for a knowledge-based retrieval system, is presented. DE-KART starts with knowledge that was entered manually, and increases its level of assistance in acquiring and refining that knowledge, both in terms of the increased level of automation in interacting with users, and in terms of the increased generality of the knowledge. DE-KART is at the intersection of machine learning and knowledge acquisition: it is a first step towards a system which moves along a continuum from interactive knowledge acquisition to increasingly automated machine learning as it acquires more knowledge and experience.
Arbitrary Lagrangian Eulerian Adaptive Mesh Refinement
2009-09-29
This is a simulation code involving an ALE (arbitrary Lagrangian-Eulerian) hydrocode with AMR (adaptive mesh refinement) and pluggable physics packages for material strength, heat conduction, radiation diffusion, and laser ray tracing developed at LLNL, UCSD, and Berkeley Lab. The code is an extension of the open source SAMRAI (Structured Adaptive Mesh Refinement Application Interface) code/library. The code can be used in laser facilities such as the National Ignition Facility. The code is also being applied to slurry flow (landslides).
Using supercritical fluids to refine hydrocarbons
Yarbro, Stephen Lee
2015-06-09
A system and method for reactively refining hydrocarbons, such as heavy oils with API gravities of less than 20 degrees and bitumen-like hydrocarbons with viscosities greater than 1000 cp at standard temperature and pressure, using a selected fluid at supercritical conditions. A reaction portion of the system and method delivers lightweight, volatile hydrocarbons to an associated contacting unit which operates in mixed subcritical/supercritical or supercritical modes. Using thermal diffusion, multiphase contact, or a momentum generating pressure gradient, the contacting unit separates the reaction products into portions that are viable for use or sale without further conventional refining and hydro-processing techniques.
Terwilliger, T.C.; Grosse-Kunstleve, Ralf Wilhelm; Afonine, P.V.; Moriarty, N.W.; Zwart, P.H.; Hung, L.-W.; Read, R.J.; Adams, P.D. (Los Alamos National Laboratory, Mailstop M888, Los Alamos, NM 87545, USA; Lawrence Berkeley National Laboratory, One Cyclotron Road, Building 64R0121, Berkeley, CA 94720, USA; Department of Haematology, University of Cambridge, Cambridge CB2 0XY, England)
2007-04-29
The PHENIX AutoBuild Wizard is a highly automated tool for iterative model-building, structure refinement and density modification using RESOLVE or TEXTAL model-building, RESOLVE statistical density modification, and phenix.refine structure refinement. Recent advances in the AutoBuild Wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model completion algorithms, and automated solvent molecule picking. Model completion algorithms in the AutoBuild Wizard include loop-building, crossovers between chains in different models of a structure, and side-chain optimization. The AutoBuild Wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 Å to 3.2 Å, resulting in a mean R-factor of 0.24 and a mean free R-factor of 0.29. The R-factor of the final model is dependent on the quality of the starting electron density, and relatively independent of resolution.
Robust Refinement as Implemented in TOPAS
Stone, K.; Stephens, P.
2010-01-01
A robust refinement procedure is implemented in the program TOPAS through an iterative reweighting of the data. Examples are given of the procedure as applied to fitting partially overlapped peaks by full and partial models, and also of the structures of ibuprofen and acetaminophen in the presence of unmodeled impurity contributions.
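Iterative reweighting can be illustrated with a minimal robust line fit. This sketch uses a generic Cauchy-style weight function and a median-based scale estimate; the loop structure conveys the idea, but it is not the weighting scheme actually implemented in TOPAS.

```python
import numpy as np

# Minimal sketch of robust refinement by iterative reweighting: fit,
# compute residuals, downweight points with large residuals, refit.
# Weight function and scale estimate are generic choices, not TOPAS's.
def robust_line_fit(x, y, n_iter=20, c=2.0):
    w = np.ones_like(y)
    for _ in range(n_iter):
        A = np.vstack([x, np.ones_like(x)]).T
        W = np.diag(w)
        beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)  # weighted LS
        r = y - A @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale (MAD)
        w = 1.0 / (1.0 + (r / (c * s)) ** 2)       # downweight outliers
    return beta  # (slope, intercept)

x = np.arange(10.0)
y = 3.0 * x + 1.0
y[7] += 50.0  # an unmodeled "impurity" contribution at one point
slope, intercept = robust_line_fit(x, y)
```

An ordinary least-squares fit would be pulled toward the contaminated point; the reweighted fit recovers the underlying line because that point's weight shrinks toward zero.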
Refiners boost crude capacity; Petrochemical production up
Corbett, R.A.
1988-03-21
Continuing demand strength in refined products and petrochemical markets caused refiners to boost crude-charging capacity slightly again last year, and petrochemical producers to increase production worldwide. Product demand strength is, in large part, due to stable product prices resulting from a stabilization of crude oil prices. Crude prices strengthened somewhat in 1987. That, coupled with fierce product competition, unfortunately drove refining margins negative in many regions of the U.S. during the last half of 1987. But with continued strong demand for gasoline, and an increased demand for higher octane gasoline, margins could turn positive by 1989 and remain so for a few years. U.S. refiners also had to have facilities in place to meet the final requirements of the U.S. Environmental Protection Agency's lead phase-down rules on Jan. 1, 1988. In petrochemicals, plastics demand kept basic petrochemical plants at good utilization levels worldwide. U.S. production of basics such as ethylene and propylene showed solid increases. Many of the derivatives of the basic petrochemical products also showed good production gains. Increased petrochemical production and high plant utilization rates didn't spur plant construction projects, however. Worldwide petrochemical plant projects declined slightly from 1986 figures.
Refiners respond to strategic driving forces
Gonzalez, R.G.
1996-05-01
Better days should lie ahead for the international refining industry. While political unrest, lingering uncertainty regarding environmental policies, slowing world economic growth, overcapacity, and poor image will continue to plague the industry, margins in most areas appear to have bottomed out. Current margins, and even modestly improved margins, do not cover the cost of capital on certain equipment nor provide the returns necessary to achieve reinvestment economics. Refiners must determine how to improve the financial performance of their assets given this reality. Low margins and returns are generally characteristic of mature industries. Many of the business strategies employed by emerging businesses are no longer viable for refiners. The cost-cutting programs of the '90s have mainly been realized, leaving little to be gained from further reduction. Consequently, refiners will have to concentrate on increasing efficiency and delivering higher value products to survive. Rather than focusing solely on their competition, companies will emphasize substantial improvements in their own operations to achieve financial targets. This trend is clearly shown by the growing reliance on benchmarking services.
Laser furnace technology for zone refining
NASA Technical Reports Server (NTRS)
Griner, D. B.
1984-01-01
A carbon dioxide laser experiment facility is constructed to investigate the problems in using a laser beam to zone refine semiconductor and metal crystals. The hardware includes a computer to control scan mirrors and stepper motors to provide a variety of melt zone patterns. The equipment and its operating procedures are described.
Energy Bandwidth for Petroleum Refining Processes
none,
2006-10-01
The petroleum refining energy bandwidth report analyzes the most energy-intensive unit operations used in U.S. refineries: crude oil distillation, fluid catalytic cracking, catalytic hydrotreating, catalytic reforming, and alkylation. The "bandwidth" provides a snapshot of the energy losses that can potentially be recovered through best practices and technology R&D.
Refining aggregate exposure: example using parabens.
Cowan-Ellsberry, Christina E; Robison, Steven H
2009-12-01
The need to understand and estimate quantitatively the aggregate exposure to ingredients used broadly in a variety of product types continues to grow. Currently aggregate exposure is most commonly estimated by using a very simplistic approach of adding or summing the exposures from all the individual product types in which the chemical is used. However, the more broadly the ingredient is used in related consumer products, the more likely this summation will result in an unrealistic estimate of exposure because individuals in the population vary in their patterns of product use, including co-use and non-use. Furthermore, the ingredient may not be used in all products of a given type. An approach is described for refining this aggregate exposure using data on (1) co-use and non-use patterns of product use, (2) extent of products in which the ingredient is used, and (3) dermal penetration and metabolism. This approach and the relative refinement in the aggregate exposure from incorporating these data is illustrated using methyl, n-propyl, n-butyl and ethyl parabens, the most widely used preservative system in personal care and cosmetic products. When these refining factors were used, the aggregate exposure compared to the simple addition approach was reduced by 51%, 58%, 90% and 92% for methyl, n-propyl, n-butyl and ethyl parabens, respectively. Since biomonitoring integrates all sources and routes of exposure, the estimates using this approach were compared to available paraben biomonitoring data. Comparison to the 95th percentile of these data showed that these refined estimates were still conservative by factors of 2-92. All of our refined estimates of aggregate exposure are less than the ADI of 10 mg/kg/day for parabens.
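The arithmetic of the refinement can be made concrete with a toy calculation. The product types and factor values below are invented for illustration; they are not the paraben data from the study.

```python
# Illustrative arithmetic only: compare a naive summed exposure against
# one refined by co-use, market-share, and absorption factors.
# All numbers below are hypothetical.
def refined_aggregate(exposures, co_use, market_share, absorption):
    """Refine a simple summed exposure estimate (mg/kg/day).

    exposures:    per-product-type exposure assuming everyone uses it
    co_use:       fraction of the population actually using each product
    market_share: fraction of products of that type containing the ingredient
    absorption:   fraction dermally absorbed and not metabolized
    """
    naive = sum(exposures.values())
    refined = sum(e * co_use[p] * market_share[p] * absorption
                  for p, e in exposures.items())
    return naive, refined

naive, refined = refined_aggregate(
    {"lotion": 0.10, "shampoo": 0.05, "deodorant": 0.02},
    co_use={"lotion": 0.8, "shampoo": 0.9, "deodorant": 0.6},
    market_share={"lotion": 0.5, "shampoo": 0.4, "deodorant": 0.3},
    absorption=0.5,
)
```

With these made-up factors the refined estimate is several-fold below the naive sum, mirroring the direction (though not the magnitude) of the reductions reported in the abstract.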
Refining industry trends: Europe and surroundings
Guariguata, U.G.
1997-05-01
The European refining industry, along with its counterparts, is struggling with low profitability due to excess primary and conversion capacity, high operating costs and impending decisions of stringent environmental regulations that will require significant investments with hard to justify returns. This region was also faced in the early 1980s with excess capacity on the order of 4 MMb/d and satisfying the "at that point" demand by operating at very low utilization rates (60%). As was the case in the US, the rebalancing of the capacity led to the closure of some 51 refineries. Since the early 1990s, the increase in demand growth has essentially balanced the capacity threshold and utilization rates are settled around the 90% range. During the last two decades, the major oil companies have reduced their presence in the European refining sector, giving some state oil companies and producing countries the opportunity to gain access to the consumer market through the purchase of refining capacity in various countries: specifically, Kuwait in Italy; Libya and Venezuela in Germany; and Norway in other areas of Scandinavia. Although the market share for this new cast of characters remains small (4%) relative to participation by the majors (35%), their involvement in the European refining business set the foundation whereby US independent refiners relinquished control over assets that could not be operated profitably as part of a previous vertically integrated structure, unless access to the crude was ensured. The passage of time still seems to render this model valid.
Constrained-Transport Magnetohydrodynamics with Adaptive-Mesh-Refinement in CHARM
Miniatii, Francesco; Martin, Daniel
2011-05-24
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit Corner-Transport-Upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the Piecewise-Parabolic Method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a Constrained-Transport (CT) method. The so-called "multidimensional MHD source terms" required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout. Subject headings: cosmology: theory - methods: numerical
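The reason constrained transport keeps the field solenoidal can be shown in a small 2-D toy update: when both face-centered field components are updated from the same corner-centered EMF via Stokes' theorem, the discrete divergence telescopes to zero exactly. This is generic textbook CT on a single uniform grid, not the CHARM implementation (which additionally needs the AMR reflux-curl synchronization described above).

```python
import numpy as np

# Toy 2-D constrained-transport update: face-centered (bx, by) evolved
# from a corner-centered EMF ez. The discrete div(B) stays exactly zero.
def divergence(bx, by, dx, dy):
    # bx on x-faces, shape (nx+1, ny); by on y-faces, shape (nx, ny+1)
    return (bx[1:, :] - bx[:-1, :]) / dx + (by[:, 1:] - by[:, :-1]) / dy

def ct_update(bx, by, ez, dt, dx, dy):
    # ez on cell corners, shape (nx+1, ny+1); dB/dt = -curl(E) per face
    bx2 = bx - dt / dy * (ez[:, 1:] - ez[:, :-1])
    by2 = by + dt / dx * (ez[1:, :] - ez[:-1, :])
    return bx2, by2

rng = np.random.default_rng(0)
nx = ny = 8
dx = dy = 1.0
bx = np.zeros((nx + 1, ny))
by = np.zeros((nx, ny + 1))
ez = rng.standard_normal((nx + 1, ny + 1))  # arbitrary EMF
bx, by = ct_update(bx, by, ez, 0.1, dx, dy)
max_div = np.abs(divergence(bx, by, dx, dy)).max()
```

Because each corner EMF contributes with equal and opposite signs to the four faces surrounding it, `max_div` is zero to round-off regardless of the EMF values.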
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2010 CFR
2010-07-01
..., phone number, facsimile number, and e-mail address of a corporate contact person. (d) Approval of a...) beginning with the averaging period beginning July 1, 2012. (f) If EPA finds that a refiner provided...
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2011 CFR
2011-07-01
..., phone number, facsimile number, and e-mail address of a corporate contact person. (d) Approval of a...) beginning with the averaging period beginning July 1, 2012. (f) If EPA finds that a refiner provided...
Heo, Lim; Lee, Hasup; Seok, Chaok
2016-01-01
Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex. PMID:27535582
Dinosaurs can fly -- High performance refining
Treat, J.E.
1995-09-01
High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the products markets one serves; and the most likely drivers and direction of future market forces. The author discussed all three points, then described measuring performance of the company. To become a true high performance refiner often involves redesigning the organization as well as the business processes. The author discusses such redesigning. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.
Crystallization in lactose refining-a review.
Wong, Shin Yee; Hartel, Richard W
2014-03-01
In the dairy industry, crystallization is an important separation process used in the refining of lactose from whey solutions. In the refining operation, lactose crystals are separated from the whey solution through nucleation, growth, and/or aggregation. The rate of crystallization is determined by the combined effect of crystallizer design, processing parameters, and impurities on the kinetics of the process. This review summarizes studies on lactose crystallization, including the mechanism, theory of crystallization, and the impact of various factors affecting the crystallization kinetics. In addition, an overview of the industrial crystallization operation highlights the problems faced by the lactose manufacturer. The approaches that are beneficial to the lactose manufacturer for process optimization or improvement are summarized in this review. Over the years, much knowledge has been acquired through extensive research. However, the industrial crystallization process is still far from optimized. Therefore, future effort should focus on transferring the new knowledge and technology to the dairy industry.
The indirect electrochemical refining of lunar ores
NASA Technical Reports Server (NTRS)
Semkow, Krystyna W.; Sammells, Anthony F.
1987-01-01
Recent work performed on an electrolytic cell is reported which addresses the implicit limitations in various approaches to refining lunar ores. The cell uses an oxygen vacancy conducting stabilized zirconia solid electrolyte to effect separation between a molten salt catholyte compartment where alkali metals are deposited, and an oxygen-evolving anode of composition La(0.89)Sr(0.1)MnO3. The cell configuration is shown and discussed along with a polarization curve and a steady-state current-voltage curve. In a practical cell, cathodically deposited liquid lithium would be continuously removed from the electrolytic cell and used as a valuable reducing agent for ore refining under lunar conditions. Oxygen would be indirectly electrochemically extracted from lunar ores for breathing purposes.
Improve corrosion control in refining processes
Kane, R.D.; Cayard, M.S.
1995-11-01
New guidelines show how to control corrosion and environmental cracking of process equipment when processing feedstocks containing sulfur and/or naphthenic acids. To be cost competitive refiners must be able to process crudes of opportunity. These feedstocks when processed under high temperatures and pressures and alkaline conditions can cause brittle cracks and blisters in susceptible steel-fabricated equipment. Even with advances in steel metallurgy, wet H2S cracking continues to be a problem. New research data shows that process conditions such as temperature, pH and flowrate are key factors in the corrosion process. Before selecting equipment material, operators must understand the corrosion mechanisms present within process conditions. Several case histories investigate the corrosion reactions found when refining naphthenic crudes and operating amine gas-sweetening systems. These examples show how to use process controls, inhibitors and/or metallurgy to control corrosion and environmental cracking, to improve material selection and to extend equipment service life.
Using supercritical fluids to refine hydrocarbons
Yarbro, Stephen Lee
2014-11-25
This is a method to reactively refine hydrocarbons, such as heavy oils with API gravities of less than 20 degrees and bitumen-like hydrocarbons with viscosities greater than 1000 cp at standard temperature and pressure, using a selected fluid at supercritical conditions. The reaction portion of the method delivers lighter weight, more volatile hydrocarbons to an attached contacting device that operates in mixed subcritical or supercritical modes. This separates the reaction products into portions that are viable for use or sale without further conventional refining and hydro-processing techniques. This method produces valuable products with fewer processing steps and lower costs, increases worker safety through reduced processing and handling, allows greater opportunity for new oil field development and subsequent positive economic impact, and reduces the carbon dioxide emissions and wastes typical of conventional refineries.
Structured Adaptive Mesh Refinement Application Infrastructure
2010-07-15
SAMRAI is an object-oriented support library for structured adaptive mesh refinement (SAMR) simulation of computational science problems, modeled by systems of partial differential equations (PDEs). SAMRAI is developed and maintained in the Center for Applied Scientific Computing (CASC) under ASCI ITS and PSE support. SAMRAI is used in a variety of application research efforts at LLNL and in academia. These applications are developed in collaboration with SAMRAI development team members.
Substance abuse in the refining industry
Little, A. Jr.; Ross, J.K.; Lavorerio, R.; Richards, T.A.
1989-01-01
In order to provide some background for the NPRA Annual Meeting Management Session panel discussion on Substance Abuse in the Refining and Petrochemical Industries, NPRA distributed a questionnaire to member companies requesting information regarding the status of their individual substance abuse policies. The questionnaire was designed to identify general trends in the industry. The aggregate responses to the survey are summarized in this paper, as background for the Substance Abuse panel discussions.
Humanoid Mobile Manipulation Using Controller Refinement
NASA Technical Reports Server (NTRS)
Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric
2006-01-01
An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. In this paper, it is proposed that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.
Humanoid Mobile Manipulation Using Controller Refinement
NASA Technical Reports Server (NTRS)
Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric; Brock, Oliver
2006-01-01
An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. This paper proposes that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.
Protectionism and the US refining industry
Brossard, E.B.
1985-01-01
Almost unnoticed in the US press is the entrance of the US into the international market as a major exporter of oil products. The author describes his views on protective tariffs, particularly with regard to the US refinery industry. He concludes that the new demands for protectionism by some refiners, if enacted into legislation by Congress, would not only raise the cost to all energy consumers but would also adversely affect American industry, commencing with US exporting refiners that have recently entered the international products market. There would be retaliation by other countries and massive defaults by countries like Mexico. It is not in the national interest for the US to engage in oil tariffs or quotas that may harm the economies of our friendly trading partners - partners upon whom the US is dependent for one-third of its oil consumption and whom the US will need in time of crisis. Discussed are the US oil industry, OPEC, Venezuela, shutdowns, modernization, exports, imports, spot market, Western European refiners, and internationalization vs protectionism. 19 tabs. (DMC)
Problems persist for French refining sector
Not Available
1992-07-27
This paper reports that France's refiners face a continuing shortfall of middle distillate capacity and a persistent surplus of heavy fuel oil. That's the main conclusion of the official Hydrocarbon Directorate's report on how France's refining sector performed in 1991. Imports up: The directorate noted that although net production of refined products in French refineries rose to 1.534 million b/d in 1991 from 1.48 million b/d in 1990, products imports jumped 9.7% to 602,000 b/d in the period. The glut of heavy fuel oil eased to some extent last year because French nuclear power capacity, heavily dependent on ample water supplies, was crimped by drought. That spawned fuel switching. The most noteworthy increase in imports was for motor diesel, climbing to 176,000 b/d from 148,000 b/d in 1990. Tax credits are spurring French consumption of that fuel. For the first time, consumption of motor diesel in 1991 outstripped that of gasoline, at 374,000 b/d and 356,000 b/d respectively.
Arctic Storms in a Regionally Refined Atmospheric General Circulation Model
NASA Astrophysics Data System (ADS)
Roesler, E. L.; Taylor, M.; Boslough, M.; Sullivan, S.
2014-12-01
Regional refinement in an atmospheric general circulation model is a new tool in atmospheric modeling. A regional high-resolution solution can be obtained without the computational cost of running a global high-resolution simulation as global climate models have increasing ability to resolve smaller spatial scales. Previous work has shown that high-resolution simulations, i.e. 1/8 degree, and variable resolution utilities have resolved more fine-scale structure and mesoscale storms in the atmosphere than their low-resolution counterparts. We will describe an experiment designed to identify and study Arctic storms at two model resolutions. We used the Community Atmosphere Model, version 5, with the Spectral Element dynamical core at 1/8-degree and 1-degree horizontal resolutions to simulate the climatological year of 1850. Storms were detected using an algorithm that finds low-pressure minima and vorticity maxima. It was found the high-resolution 1/8-degree simulation had more storms in the Northern Hemisphere than the low-resolution 1-degree simulation. A variable resolution simulation with a global low resolution of 1 degree and a high-resolution refined region of 1/8 degree over a region in the Arctic is planned. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND NO. 2014-16460A
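The pressure-minima half of such a detector reduces to a local-minimum search on a gridded field. The sketch below is a toy version under that assumption: it only flags sea-level-pressure minima below a threshold, whereas the study's algorithm also uses vorticity maxima and presumably tracks storms in time.

```python
import numpy as np

# Toy storm detector: flag interior grid points that are strict local
# minima of sea-level pressure (hPa) below a cutoff. Vorticity checks
# and temporal tracking, used in the actual study, are omitted.
def find_pressure_minima(slp, threshold=1000.0):
    minima = []
    nx, ny = slp.shape
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            window = slp[i - 1:i + 2, j - 1:j + 2]
            if slp[i, j] < threshold and (window > slp[i, j]).sum() == 8:
                minima.append((i, j))  # strictly below all 8 neighbors
    return minima

slp = np.full((10, 10), 1015.0)   # quiescent background field
slp[4, 5] = 990.0                 # synthetic storm center
slp[4, 4] = 1002.0                # part of the surrounding gradient
storms = find_pressure_minima(slp)
```

On real model output one would vectorize this scan and add a minimum-depth criterion so that weak noise minima are not counted as storms.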
Intelligent perturbation algorithms for space scheduling optimization
NASA Technical Reports Server (NTRS)
Kurtzman, Clifford R.
1991-01-01
Intelligent perturbation algorithms for space scheduling optimization are presented in the form of viewgraphs. The following subject areas are covered: optimization of planning, scheduling, and manifesting; searching a discrete configuration space; heuristic algorithms used for optimization; use of heuristic methods on a sample scheduling problem; intelligent perturbation algorithms as iterative refinement techniques; properties of a good iterative search operator; dispatching examples of intelligent perturbation algorithms and perturbation operator attributes; scheduling implementations using intelligent perturbation algorithms; major advances in scheduling capabilities; the prototype ISF (Industrial Space Facility) experiment scheduler; optimized schedule (max revenue); multi-variable optimization; Space Station design reference mission scheduling; ISF-TDRSS command scheduling demonstration; and example task - communications check.
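The iterative-refinement idea behind perturbation scheduling can be sketched with a tiny revenue-maximization problem. The task data, the swap operator, and the greedy packing rule below are all invented for illustration; they stand in for the viewgraphs' "good iterative search operator," not for the ISF scheduler itself.

```python
import random

# Sketch of perturbation search: repeatedly perturb a task ordering
# (swap two tasks), re-evaluate the packed schedule's revenue, and keep
# the perturbation when it does not make things worse.
def pack(order, durations, horizon):
    """Greedy packing: schedule tasks in order until the horizon is full."""
    t, scheduled = 0, []
    for task in order:
        if t + durations[task] <= horizon:
            scheduled.append(task)
            t += durations[task]
    return scheduled

def perturbation_search(durations, revenue, horizon, iters=500, seed=1):
    rng = random.Random(seed)
    order = list(durations)
    best = sum(revenue[t] for t in pack(order, durations, horizon))
    for _ in range(iters):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]      # perturb
        value = sum(revenue[t] for t in pack(order, durations, horizon))
        if value >= best:
            best = value                             # keep the refinement
        else:
            order[i], order[j] = order[j], order[i]  # undo
    return best

durations = {"a": 3, "b": 2, "c": 2, "d": 4}  # hypothetical tasks
revenue = {"a": 5, "b": 9, "c": 8, "d": 4}
best = perturbation_search(durations, revenue, horizon=5)
```

Accepting equal-revenue swaps lets the search walk across plateaus instead of stalling, which is one of the "properties of a good iterative search operator" the viewgraphs allude to.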
Henshaw, W; Schwendeman, D
2007-11-15
This paper describes an approach for the numerical solution of time-dependent partial differential equations in complex three-dimensional domains. The domains are represented by overlapping structured grids, and block-structured adaptive mesh refinement (AMR) is employed to locally increase the grid resolution. In addition, the numerical method is implemented on parallel distributed-memory computers using a domain-decomposition approach. The implementation is flexible so that each base grid within the overlapping grid structure and its associated refinement grids can be independently partitioned over a chosen set of processors. A modified bin-packing algorithm is used to specify the partition for each grid so that the computational work is evenly distributed amongst the processors. All components of the AMR algorithm such as error estimation, regridding, and interpolation are performed in parallel. The parallel time-stepping algorithm is illustrated for initial-boundary-value problems involving a linear advection-diffusion equation and the (nonlinear) reactive Euler equations. Numerical results are presented for both equations to demonstrate the accuracy and correctness of the parallel approach. Exact solutions of the advection-diffusion equation are constructed, and these are used to check the corresponding numerical solutions for a variety of tests involving different overlapping grids, different numbers of refinement levels and refinement ratios, and different numbers of processors. The problem of planar shock diffraction by a sphere is considered as an illustration of the numerical approach for the Euler equations, and a problem involving the initiation of a detonation from a hot spot in a T-shaped pipe is considered to demonstrate the numerical approach for the reactive case. For both problems, the solutions are shown to be well resolved on the finest grid. The parallel performance of the approach is examined in detail for the shock diffraction problem.
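The load-balancing step can be illustrated with the simplest greedy bin-packing heuristic: assign each grid's estimated work to the currently least-loaded processor. This is a generic sketch of the idea, not the modified bin-packing algorithm the paper actually uses; grid names and costs are invented.

```python
import heapq

# Greedy load balancing: give each grid (largest work first) to the
# processor with the smallest current load, tracked in a min-heap.
def partition(work, n_procs):
    heap = [(0.0, p, []) for p in range(n_procs)]
    heapq.heapify(heap)
    for grid, cost in sorted(work.items(), key=lambda kv: -kv[1]):
        load, p, grids = heapq.heappop(heap)   # least-loaded processor
        grids.append(grid)
        heapq.heappush(heap, (load + cost, p, grids))
    return {p: (load, grids) for load, p, grids in heap}

work = {"g0": 8.0, "g1": 7.0, "g2": 6.0, "g3": 5.0, "g4": 4.0}
parts = partition(work, 2)
loads = sorted(load for load, _ in parts.values())
```

Note the greedy result here (loads 13 and 17) is not the optimal 15/15 split (g0+g1 vs g2+g3+g4), which is one reason a production AMR code would refine the heuristic rather than use it verbatim.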
A Breeder Algorithm for Stellarator Optimization
NASA Astrophysics Data System (ADS)
Wang, S.; Ware, A. S.; Hirshman, S. P.; Spong, D. A.
2003-10-01
An optimization algorithm that combines the global parameter space search properties of a genetic algorithm (GA) with the local parameter search properties of a Levenberg-Marquardt (LM) algorithm is described. Optimization algorithms used in the design of stellarator configurations are often classified as either global (such as GA and differential evolution algorithm) or local (such as LM). While nonlinear least-squares methods such as LM are effective at minimizing a cost-function based on desirable plasma properties such as quasi-symmetry and ballooning stability, whether or not this is a local or global minimum is unknown. The advantage of evolutionary algorithms such as GA is that they search a wider range of parameter space and are not susceptible to getting stuck in a local minimum of the cost function. Their disadvantage is that in some cases the evolutionary algorithms are ineffective at finding a minimum state. Here, we describe the initial development of the Breeder Algorithm (BA). BA consists of a genetic algorithm outer loop with an inner loop in which each generation is refined using a LM step. Initial results for a quasi-poloidal stellarator optimization will be presented, along with a comparison to existing optimization algorithms.
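The outer-loop/inner-loop structure of the Breeder Algorithm can be sketched on a toy problem. Here plain gradient descent stands in for the Levenberg-Marquardt inner step, and the 2-D quadratic cost is invented; the real BA minimizes a stellarator cost function built from quasi-symmetry and stability targets.

```python
import random

# Toy breeder: a genetic outer loop (selection, crossover, mutation)
# whose offspring are each polished by a local refinement step.
def cost(x):
    return (x[0] - 1.0) ** 2 + 4.0 * (x[1] + 2.0) ** 2  # minimum at (1, -2)

def local_refine(x, steps=25, lr=0.1):
    """Gradient descent as a stand-in for the Levenberg-Marquardt step."""
    for _ in range(steps):
        g = (2.0 * (x[0] - 1.0), 8.0 * (x[1] + 2.0))  # analytic gradient
        x = (x[0] - lr * g[0], x[1] - lr * g[1])
    return x

def breeder(pop_size=10, gens=15, seed=0):
    rng = random.Random(seed)
    pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]           # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = ((a[0] + b[0]) / 2 + rng.gauss(0, 0.3),  # crossover
                     (a[1] + b[1]) / 2 + rng.gauss(0, 0.3))  # + mutation
            children.append(local_refine(child))  # inner refinement step
        pop = parents + children
    return min(pop, key=cost)

best = breeder()
```

The genetic loop alone would wander near the minimum without settling; the local step drives each offspring into the nearest basin, which is exactly the division of labor the BA exploits.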
14. INTERIOR VIEW OF REFINING MILL, SHOWING CONVEYOR BELT IN ...
14. INTERIOR VIEW OF REFINING MILL, SHOWING CONVEYOR BELT IN PULVERIZING AND PACKING PLANT, LOOKING NORTH - Clay Spur Bentonite Plant & Camp, Refining Mill, Clay Spur Siding on Burlington Northern Railroad, Osage, Weston County, WY
8. VIEW OF CRUDE CRUSHING AND DRYING PLANT AT REFINING ...
8. VIEW OF CRUDE CRUSHING AND DRYING PLANT AT REFINING MILL, LOOKING NORTHEAST - Clay Spur Bentonite Plant & Camp, Refining Mill, Clay Spur Siding on Burlington Northern Railroad, Osage, Weston County, WY
Grain Refinement of Permanent Mold Cast Copper Base Alloys
M.Sadayappan; J.P.Thomson; M.Elboujdaini; G.Ping Gu; M. Sahoo
2005-04-01
Grain refinement is a well established process for many cast and wrought alloys. The mechanical properties of various alloys could be enhanced by reducing the grain size. Refinement is also known to improve casting characteristics such as fluidity and hot tearing. Grain refinement of copper-base alloys is not widely used, especially in sand casting process. However, in permanent mold casting of copper alloys it is now common to use grain refinement to counteract the problem of severe hot tearing which also improves the pressure tightness of plumbing components. The mechanism of grain refinement in copper-base alloys is not well understood. The issues to be studied include the effect of minor alloy additions on the microstructure, their interaction with the grain refiner, effect of cooling rate, and loss of grain refinement (fading). In this investigation, efforts were made to explore and understand grain refinement of copper alloys, especially in permanent mold casting conditions.
California refining in balance as Phase 2 deadline draws near
Adler, K.
1996-01-01
The impact of California's 1996 RFG program on US markets and its implications for refiners worldwide is analyzed. The preparations in the last few months before refiners must produce California Phase 2 RFG are addressed. Subsequent articles will consider the process improvements made by refiners, the early implementation of the program, and what has been learned about refining, gasoline distribution, environmental benefits and consumer acceptance that can be replicated around the world.
TIRS stray light correction: algorithms and performance
NASA Astrophysics Data System (ADS)
Gerace, Aaron; Montanaro, Matthew; Beckmann, Tim; Tyrrell, Kaitlin; Cozzo, Alexandra; Carney, Trevor; Ngan, Vicki
2015-09-01
The Thermal Infrared Sensor (TIRS) onboard Landsat 8 was tasked with continuing thermal band measurements of the Earth as part of the Landsat program. From first light in early 2013, there were obvious indications that stray light was contaminating the thermal image data collected from the instrument. Traditional calibration techniques did not perform adequately as non-uniform banding was evident in the corrected data and error in absolute estimates of temperature over trusted buoy sites varied seasonally and, in worst cases, exceeded 9 K. The development of an operational technique to remove the effects of the stray light has become a high priority to enhance the utility of the TIRS data. This paper introduces the current algorithm being tested by Landsat's calibration and validation team to remove stray light from TIRS image data. The integration of the algorithm into the EROS test system is discussed with strategies for operationalizing the method emphasized. Techniques for assessing the methodologies used are presented and potential refinements to the algorithm are suggested. Initial results indicate that the proposed algorithm significantly removes stray light artifacts from the image data. Specifically, visual and quantitative evidence suggests that the algorithm practically eliminates banding in the image data. Additionally, the seasonal variation in absolute errors is flattened and, in the worst case, errors of over 9 K are reduced to within 2 K. Future work focuses on refining the algorithm based on these findings and applying traditional calibration techniques to enhance the final image product.
Coloured Petri Net Refinement Specification and Correctness Proof with Coq
NASA Technical Reports Server (NTRS)
Choppy, Christine; Mayero, Micaela; Petrucci, Laure
2009-01-01
In this work, we address the formalisation in Coq of refinement for symmetric nets, a subclass of coloured Petri nets. We first provide a formalisation of the net models and of their type refinement in Coq. Then the Coq proof assistant is used to prove the refinement correctness lemma. An example adapted from a protocol illustrates our work.
The blind leading the blind: Mutual refinement of approximate theories
NASA Technical Reports Server (NTRS)
Kedar, Smadar T.; Bresina, John L.; Dent, C. Lisa
1991-01-01
The mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models which cause the failures. The approach focuses on using one approximate model to refine another.
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 3 2013-10-01 2013-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 3 2014-10-01 2014-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 3 2012-10-01 2012-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Refined precious metals... Government-Owned Precious Metals 208.7304 Refined precious metals. See PGI 208.7304 for a list of refined precious metals managed by DSCP....
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
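The recursive subdivision and tree storage described above can be sketched with a minimal quadtree of Cartesian cells. This is an illustrative two-dimensional sketch only; the `Cell` class, the refinement flag, and the point-based refinement criterion are invented for the example and are not the paper's code:

```python
# Minimal quadtree of Cartesian cells, refined wherever a flag function
# fires, echoing the binary-tree storage and recursive subdivision of a
# single root cell described in the abstract. Hypothetical sketch only.

class Cell:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size   # lower-left corner, edge length
        self.children = []                        # empty list => leaf cell

    def refine(self, needs_refinement, max_depth):
        if max_depth == 0 or not needs_refinement(self):
            return
        h = self.size / 2.0
        self.children = [Cell(self.x + dx, self.y + dy, h)
                         for dx in (0.0, h) for dy in (0.0, h)]
        for c in self.children:
            c.refine(needs_refinement, max_depth - 1)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Refine cells containing the point (0.1, 0.1), e.g. a feature near a body.
root = Cell(0.0, 0.0, 1.0)
flag = lambda c: (c.x <= 0.1 < c.x + c.size) and (c.y <= 0.1 < c.y + c.size)
root.refine(flag, max_depth=3)
```

Three levels of refinement along one path of the tree yield ten leaf cells, with the smallest (size 1/8) clustered around the flagged point.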
h-Refinement for simple corner balance scheme of SN transport equation on distorted meshes
NASA Astrophysics Data System (ADS)
Yang, Rong; Yuan, Guangwei
2016-11-01
The transport sweep algorithm is a common method for solving the discrete ordinate transport equation, but it breaks down once a concave cell appears in the spatial mesh. To deal with this issue, a local h-refinement for the simple corner balance (SCB) scheme of the SN transport equation on arbitrary quadrilateral meshes is presented in this paper, using a new subcell partition. It follows that a hybrid mesh with both triangle and quadrilateral cells is generated, and the geometric quality of these cells improves; in particular, all cells are guaranteed to be convex. Combined with the original SCB scheme, an adaptive transfer algorithm based on the hybrid mesh is constructed. Numerical experiments are presented to verify the utility and accuracy of the new algorithm, especially for application problems such as radiation transport coupled with Lagrangian hydrodynamic flow. The results show that it performs well on extremely distorted meshes with concave cells, on which the original SCB scheme does not work.
Adaptive refinement tools for tetrahedral unstructured grids
NASA Technical Reports Server (NTRS)
Pao, S. Paul (Inventor); Abdol-Hamid, Khaled S. (Inventor)
2011-01-01
An exemplary embodiment providing one or more improvements includes software which is robust, efficient, and has a very fast run time for user directed grid enrichment and flow solution adaptive grid refinement. All user selectable options (e.g., the choice of functions, the choice of thresholds, etc.), other than a pre-marked cell list, can be entered on the command line. The ease of application is an asset for flow physics research and preliminary design CFD analysis where fast grid modification is often needed to deal with unanticipated development of flow details.
Surface biotechnology for refining cochlear implants.
Tan, Fei; Walshe, Peter; Viani, Laura; Al-Rubeai, Mohamed
2013-12-01
The advent of the cochlear implant is phenomenal because it is the first surgical prosthesis that is capable of restoring one of the senses. The subsequent rapid evolution of cochlear implants through increasing complexity and functionality has been synchronized with the recent advancements in biotechnology. Surface biotechnology has refined cochlear implants by directly influencing the implant–tissue interface. Emerging surface biotechnology strategies are exemplified by nanofibrous polymeric materials, topographical surface modification, conducting polymer coatings, and neurotrophin-eluting implants. Although these novel developments have received individual attention in the recent literature, the time has come to investigate their collective applications to cochlear implants to restore lost hearing. PMID:24404581
Formal language theory: refining the Chomsky hierarchy.
Jäger, Gerhard; Rogers, James
2012-07-19
The first part of this article gives a brief overview of the four levels of the Chomsky hierarchy, with a special emphasis on context-free and regular languages. It then recapitulates the arguments why neither regular nor context-free grammar is sufficiently expressive to capture all phenomena in natural language syntax. In the second part, two refinements of the Chomsky hierarchy are reviewed, both relevant to current research in cognitive science: the mildly context-sensitive languages (which are located between context-free and context-sensitive languages), and the sub-regular hierarchy (which distinguishes several levels of complexity within the class of regular languages).
Empirical Analysis and Refinement of Expert System Knowledge Bases
Weiss, Sholom M.; Politakis, Peter; Ginsberg, Allen
1986-01-01
Recent progress in knowledge base refinement for expert systems is reviewed. Knowledge base refinement is characterized by the constrained modification of rule-components in an existing knowledge base. The goals are to localize specific weaknesses in a knowledge base and to improve an expert system's performance. Systems that automate some aspects of knowledge base refinement can have a significant impact on the related problems of knowledge base acquisition, maintenance, verification, and learning from experience. The SEEK empirical analysis and refinement system is reviewed and its successor system, SEEK2, is introduced. Important areas for future research in knowledge base refinement are described.
Refinement Of Hexahedral Cells In Euler Flow Computations
NASA Technical Reports Server (NTRS)
Melton, John E.; Cappuccio, Gelsomina; Thomas, Scott D.
1996-01-01
Topologically Independent Grid, Euler Refinement (TIGER) computer program solves Euler equations of three-dimensional, unsteady flow of inviscid, compressible fluid by numerical integration on unstructured hexahedral coordinate grid refined where necessary to resolve shocks and other details. Hexahedral cells subdivided, each into eight smaller cells, as needed to refine computational grid in regions of high flow gradients. Grid Interactive Refinement and Flow-Field Examination (GIRAFFE) computer program written in conjunction with TIGER program to display computed flow-field data and to assist researcher in verifying specified boundary conditions and refining grid.
Increased delignification by white rot fungi after pressure refining Miscanthus.
Baker, Paul W; Charlton, Adam; Hale, Mike D C
2015-01-01
Pressure refining, a pulp making process to separate fibres of lignocellulosic materials, deposits lignin granules on the surface of the fibres that could enable increased access to lignin degrading enzymes. Three different white rot fungi were grown on pressure refined (at 6 bar and 8 bar) and milled Miscanthus. Growth after 28 days showed highest biomass losses on milled Miscanthus compared to pressure refined Miscanthus. Ceriporiopsis subvermispora caused a significantly higher proportion of lignin removal when grown on 6 bar pressure refined Miscanthus compared to growth on 8 bar pressure refined Miscanthus and milled Miscanthus. RM22b followed a similar trend but Phlebiopsis gigantea SPLog6 did not. Conversely, C. subvermispora growing on pressure refined Miscanthus revealed that the proportion of cellulose increased. These results show that two of the three white rot fungi used in this study showed higher delignification on pressure refined Miscanthus than milled Miscanthus.
Deformable elastic network refinement for low-resolution macromolecular crystallography
Schröder, Gunnar F.; Levitt, Michael; Brunger, Axel T.
2014-09-01
An overview of applications of the deformable elastic network (DEN) refinement method is presented together with recommendations for its optimal usage. Crystals of membrane proteins and protein complexes often diffract to low resolution owing to their intrinsic molecular flexibility, heterogeneity or the mosaic spread of micro-domains. At low resolution, the building and refinement of atomic models is a more challenging task. The deformable elastic network (DEN) refinement method developed previously has been instrumental in the determination of several structures at low resolution. Here, DEN refinement is reviewed, recommendations for its optimal usage are provided and its limitations are discussed. Representative examples of the application of DEN refinement to challenging cases of refinement at low resolution are presented. These cases include soluble as well as membrane proteins determined at limiting resolutions ranging from 3 to 7 Å. Potential extensions of the DEN refinement technique and future perspectives for the interpretation of low-resolution crystal structures are also discussed.
Adaptive Hybrid Mesh Refinement for Multiphysics Applications
Khamayseh, Ahmed K; de Almeida, Valmor F
2007-01-01
The accuracy and convergence of computational solutions of mesh-based methods is strongly dependent on the quality of the mesh used. We have developed methods for optimizing meshes that are comprised of elements of arbitrary polygonal and polyhedral type. We present in this research the development of r-h hybrid adaptive meshing technology tailored to application areas relevant to multi-physics modeling and simulation. Solution-based adaptation methods are used to reposition mesh nodes (r-adaptation) or to refine the mesh cells (h-adaptation) to minimize solution error. The numerical methods perform either the r-adaptive mesh optimization or the h-adaptive mesh refinement method on the initial isotropic or anisotropic meshes to maximize the equidistribution of a weighted geometric and/or solution function. We have successfully introduced r-h adaptivity to a least-squares method with spherical harmonics basis functions for the solution of the spherical shallow atmosphere model used in climate forecasting. In addition, application of this technology also covers a wide range of disciplines in computational sciences, most notably, time-dependent multi-physics, multi-scale modeling and simulation.
Rapid Glass Refiner Development Program, Final report
1995-02-20
A rapid glass refiner (RGR) technology which could be applied to both conventional and advanced glass melting systems would significantly enhance the productivity and the competitiveness of the glass industry in the United States. Therefore, Vortec Corporation, with the support of the US Department of Energy (US DOE) under Cooperative Agreement No. DE-FC07-90ID12911, conducted a research and development program for a unique and innovative approach to rapid glass refining. To provide focus for this research effort, container glass was the primary target from among the principal glass types based on its market size and potential for significant energy savings. Container glass products represent the largest segment of the total glass industry, accounting for 60% of the tonnage produced and over 40% of the annual energy consumption of 232 trillion Btu/yr. Projections of energy consumption and the market penetration of advanced melting and fining into the container glass industry yield a potential energy savings of 7.9 trillion Btu/yr by the year 2020.
Parallel adaptive mesh refinement techniques for plasticity problems
NASA Technical Reports Server (NTRS)
Barry, W. J.; Jones, M. T.; Plassmann, P. E.
1997-01-01
The accurate modeling of the nonlinear properties of materials can be computationally expensive. Parallel computing offers an attractive way for solving such problems; however, the efficient use of these systems requires the vertical integration of a number of very different software components. We explore the solution of two- and three-dimensional, small-strain plasticity problems. We consider a finite-element formulation of the problem with adaptive refinement of an unstructured mesh to accurately model plastic transition zones. We present a framework for the parallel implementation of such complex algorithms. This framework, using libraries from the SUMAA3d project, allows a user to build a parallel finite-element application without writing any parallel code. To demonstrate the effectiveness of this approach on widely varying parallel architectures, we present experimental results from an IBM SP parallel computer and an ATM-connected network of Sun UltraSparc workstations. The results detail the parallel performance of the computational phases of the application as the material is incrementally loaded.
Cosmos++: Relativistic Magnetohydrodynamics on Unstructured Grids with Local Adaptive Refinement
Anninos, P; Fragile, P C; Salmonson, J D
2005-05-06
A new code and methodology are introduced for solving the fully general relativistic magnetohydrodynamic (GRMHD) equations using time-explicit, finite-volume discretization. The code has options for solving the GRMHD equations using traditional artificial-viscosity (AV) or non-oscillatory central difference (NOCD) methods, or a new extended AV (eAV) scheme using artificial viscosity together with a dual energy-flux-conserving formulation. The dual energy approach allows for accurate modeling of highly relativistic flows at boost factors well beyond what has been achieved to date by standard artificial viscosity methods. It provides the benefits of Godunov methods in capturing high Lorentz-boosted flows but without complicated Riemann solvers, and the advantages of traditional artificial viscosity methods in their speed and flexibility. Additionally, the GRMHD equations are solved on an unstructured grid that supports local adaptive mesh refinement using a fully threaded oct-tree (in three dimensions) network to traverse the grid hierarchy across levels and immediate neighbors. A number of tests are presented to demonstrate robustness of the numerical algorithms and adaptive mesh framework over a wide spectrum of problems, boosts, and astrophysical applications, including relativistic shock tubes, shock collisions, magnetosonic shocks, Alfven wave propagation, blast waves, magnetized Bondi flow, and the magneto-rotational instability in Kerr black hole spacetimes.
Dimensional reduction as a tool for mesh refinement and tracking singularities of PDEs
Stinis, Panagiotis
2007-06-10
We present a collection of algorithms which utilize dimensional reduction to perform mesh refinement and study possibly singular solutions of time-dependent partial differential equations. The algorithms are inspired by constructions used in statistical mechanics to evaluate the properties of a system near a critical point. The first algorithm allows the accurate determination of the time of occurrence of a possible singularity. The second algorithm is an adaptive mesh refinement scheme which can be used to approach efficiently the possible singularity. Finally, the third algorithm uses the second algorithm until the available resolution is exhausted (as we approach the possible singularity) and then switches to a dimensionally reduced model which, when accurate, can follow faithfully the solution beyond the time of occurrence of the purported singularity. An accurate dimensionally reduced model should dissipate energy at the right rate. We construct two variants of each algorithm. The first variant assumes that we have actual knowledge of the reduced model. The second variant assumes that we know the form of the reduced model, i.e., the terms appearing in the reduced model, but not necessarily their coefficients. In this case, we also provide a way of determining the coefficients. We present numerical results for the Burgers equation with zero and nonzero viscosity to illustrate the use of the algorithms.
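A one-dimensional flavour of the adaptive mesh refinement idea in this abstract: insert grid points wherever the solution jump between neighbouring nodes exceeds a tolerance, concentrating resolution near a steepening, Burgers-like front. The `refine_grid` function and the tanh test profile are invented for illustration; the paper's algorithms are considerably more elaborate:

```python
import math

# Sketch of gradient-based h-refinement in 1-D: bisect every interval whose
# solution jump exceeds a tolerance, so points cluster near a steep front.

def refine_grid(xs, u, tol):
    """Return a refined grid: midpoints added where |u[i+1] - u[i]| > tol."""
    new_xs = [xs[0]]
    for i in range(len(xs) - 1):
        if abs(u[i + 1] - u[i]) > tol:
            new_xs.append(0.5 * (xs[i] + xs[i + 1]))   # bisect steep interval
        new_xs.append(xs[i + 1])
    return new_xs

# A steep tanh front near x = 0.5 sampled on a coarse 11-point grid.
xs = [i / 10.0 for i in range(11)]
u = [math.tanh((x - 0.5) * 20.0) for x in xs]
fine = refine_grid(xs, u, tol=0.5)
```

Only the two intervals straddling the front are bisected; the smooth regions keep their coarse spacing. Applied repeatedly, this is the "approach the possible singularity" loop in miniature.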
Application of adaptive mesh refinement to particle-in-cell simulations of plasmas and beams
Vay, J.-L.; Colella, P.; Kwan, J.W.; McCorquodale, P.; Serafini, D.B.; Friedman, A.; Grote, D.P.; Westenskow, G.; Adam, J.-C.; Heron, A.; Haber, I.
2003-11-04
Plasma simulations are often rendered challenging by the disparity of scales in time and in space which must be resolved. When these disparities are in distinctive zones of the simulation domain, a method which has proven to be effective in other areas (e.g. fluid dynamics simulations) is the mesh refinement technique. We briefly discuss the challenges posed by coupling this technique with plasma Particle-In-Cell simulations, and present examples of application in Heavy Ion Fusion and related fields which illustrate the effectiveness of the approach. We also report on the status of a collaboration under way at Lawrence Berkeley National Laboratory between the Applied Numerical Algorithms Group (ANAG) and the Heavy Ion Fusion group to upgrade ANAG's mesh refinement library Chombo to include the tools needed by Particle-In-Cell simulation codes.
Hornung, R.D.
1996-12-31
An adaptive local mesh refinement (AMR) algorithm originally developed for unsteady gas dynamics is extended to multi-phase flow in porous media. Within the AMR framework, we combine specialized numerical methods to treat the different aspects of the partial differential equations. Multi-level iteration and domain decomposition techniques are incorporated to accommodate elliptic/parabolic behavior. High-resolution shock capturing schemes are used in the time integration of the hyperbolic mass conservation equations. When combined with AMR, these numerical schemes provide high resolution locally in a more efficient manner than if they were applied on a uniformly fine computational mesh. We will discuss the interplay of physical, mathematical, and numerical concerns in the application of adaptive mesh refinement to flow in porous media problems of practical interest.
40 CFR 80.235 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2014 CFR
2014-07-01
...) The total corporate crude oil capacity of each refinery as reported to the Energy Information... and had an average crude oil capacity less than or equal to 155,000 bpcd. Where appropriate, the employee and crude oil capacity criteria for such refiners will be based on the most recent 12 months...
40 CFR 80.235 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2013 CFR
2013-07-01
...) The total corporate crude oil capacity of each refinery as reported to the Energy Information... and had an average crude oil capacity less than or equal to 155,000 bpcd. Where appropriate, the employee and crude oil capacity criteria for such refiners will be based on the most recent 12 months...
NASA Astrophysics Data System (ADS)
Bieringer, Paul E.; Rodriguez, Luna M.; Sykes, Ian; Hurst, Jonathan; Vandenberghe, Francois; Weil, Jeffrey; Bieberbach, George, Jr.; Parker, Steve; Cabell, Ryan
2011-05-01
Chemical and biological (CB) agent detection and effective use of these observations in hazard assessment models are key elements of our nation's CB defense program that seeks to ensure that Department of Defense (DoD) operations are minimally affected by a CB attack. Accurate hazard assessments rely heavily on the source term parameters necessary to characterize the release in the transport and dispersion (T&D) simulation. Unfortunately, these source parameters are often not known and based on rudimentary assumptions. In this presentation we describe an algorithm that utilizes variational data assimilation techniques to fuse CB and meteorological observations to characterize agent release source parameters and provide a refined hazard assessment. The underlying algorithm consists of a combination of modeling systems, including the Second order Closure Integrated PUFF model (SCIPUFF), its corresponding Source Term Estimation (STE) model, a hybrid Lagrangian-Eulerian Plume Model (LEPM), its formal adjoint, and the software infrastructure necessary to link them. SCIPUFF and its STE model are used to calculate a "first guess" source estimate. The LEPM and corresponding adjoint are then used to iteratively refine this release source estimate using variational data assimilation techniques. This algorithm has undergone preliminary testing using virtual "single realization" plume release data sets from the Virtual THreat Response Emulation and Analysis Testbed (VTHREAT) and data from the FUSION Field Trials 2007 (FFT07). The end-to-end prototype of this system that has been developed to illustrate its use within the United States (US) Joint Effects Model (JEM) will be demonstrated.
Proving refinement transformations for deriving high-assurance software
Winter, V.L.; Boyle, J.M.
1996-05-01
The construction of a high-assurance system requires some evidence, ideally a proof, that the system as implemented will behave as required. Direct proofs of implementations do not scale up well as systems become more complex and therefore are of limited value. In recent years, refinement-based approaches have been investigated as a means to manage the complexity inherent in the verification process. In a refinement-based approach, a high-level specification is converted into an implementation through a number of refinement steps. The hope is that the proofs of the individual refinement steps will be easier than a direct proof of the implementation. However, if stepwise refinement is performed manually, the number of steps is severely limited, implying that the size of each step is large. If refinement steps are large, then proofs of their correctness will not be much easier than a direct proof of the implementation. The authors describe an approach to refinement-based software development that is based on automatic application of refinements, expressed as program transformations. This automation has the desirable effect that the refinement steps can be extremely small and, thus, easy to prove correct. They give an overview of the TAMPR transformation system that they use for automated refinement. They then focus on some aspects of the semantic framework that they have been developing to enable proofs that TAMPR transformations are correctness preserving. With this framework, proofs of correctness for transformations can be obtained with the assistance of an automated reasoning system.
Level 5: user refinement to aid the fusion process
NASA Astrophysics Data System (ADS)
Blasch, Erik P.; Plano, Susan
2003-04-01
The revised JDL Fusion model Level 4 process refinement covers a broad spectrum of actions such as sensor management and control. A limitation of Level 4 is the
HEATR project: ATR algorithm parallelization
NASA Astrophysics Data System (ADS)
Deardorf, Catherine E.
1998-09-01
High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPC's for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model- based and training-based (template-based) arena in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.
Efficiency considerations in triangular adaptive mesh refinement.
Behrens, Jörn; Bader, Michael
2009-11-28
Locally or adaptively refined meshes have been successfully applied to simulation applications involving multi-scale phenomena in the geosciences. In particular, for situations with complex geometries or domain boundaries, meshes with triangular or tetrahedral cells demonstrate their superior ability to accurately represent relevant realistic features. On the other hand, these methods require more complex data structures and are therefore less easily implemented, maintained and optimized. Acceptance in the Earth-system modelling community is still low. One of the major drawbacks is posed by indirect addressing due to unstructured or dynamically changing data structures and correspondingly lower efficiency of the related computations. In this paper, we will derive several strategies to circumvent the mentioned efficiency constraint. In particular, we will apply recent computational sciences methods in combination with results of classical mathematics (space-filling curves) in order to linearize the complex data and access structure.
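The linearization strategy described above orders mesh cells along a space-filling curve so that indirect addressing is replaced by locality-friendly sequential access. The paper concerns triangular meshes (typically traversed via a Sierpinski curve); as a simplified stand-in, this sketch computes the standard Hilbert-curve index for cells of a square grid, so that cells adjacent along the curve are also adjacent in space. The function name and grid setup are illustrative, not from the paper:

```python
def hilbert_index(order, x, y):
    """Distance along a Hilbert curve of cell (x, y) in a
    2**order x 2**order grid (standard xy-to-index conversion)."""
    d = 0
    s = 2 ** (order - 1)
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                  # rotate the quadrant so the
            if rx == 1:              # sub-curve is oriented correctly
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d
```

Sorting the cell list by this index yields a traversal in which consecutive cells are grid neighbors, which is the cache-locality property the authors exploit.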
COSMOLOGICAL ADAPTIVE MESH REFINEMENT MAGNETOHYDRODYNAMICS WITH ENZO
Collins, David C.; Xu Hao; Norman, Michael L.; Li Hui; Li Shengtai
2010-02-01
In this work, we present EnzoMHD, the extension of the cosmological code Enzo to include the effects of magnetic fields through the ideal magnetohydrodynamics approximation. We use a higher order Godunov method for the computation of interface fluxes. We use two constrained transport methods to compute the electric field from those interface fluxes, which simultaneously advances the induction equation and maintains the divergence of the magnetic field. A second-order divergence-free reconstruction technique is used to interpolate the magnetic fields in the block-structured adaptive mesh refinement framework already extant in Enzo. This reconstruction also preserves the divergence of the magnetic field to machine precision. We use operator splitting to include gravity and cosmological expansion. We then present a series of cosmological and non-cosmological test problems to demonstrate the quality of solution resulting from this combination of solvers.
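To make the constrained-transport idea concrete, here is a minimal 2D sketch (an illustration of the general technique, not EnzoMHD's actual implementation; array names and shapes are assumptions): an edge-centered electric field updates the face-centered magnetic field components so that the cell-centered discrete divergence is unchanged to machine precision.

```python
import numpy as np

def ct_update(Bx, By, Ez, dt, dx, dy):
    """One constrained-transport step in 2D.

    Bx lives on x-faces, shape (nx+1, ny); By on y-faces, shape (nx, ny+1);
    Ez sits at cell corners, shape (nx+1, ny+1).  Discretely, dB/dt = -curl(E).
    """
    Bx = Bx - dt * (Ez[:, 1:] - Ez[:, :-1]) / dy   # dBx/dt = -dEz/dy
    By = By + dt * (Ez[1:, :] - Ez[:-1, :]) / dx   # dBy/dt = +dEz/dx
    return Bx, By

def divB(Bx, By, dx, dy):
    """Cell-centered discrete divergence of the face-centered field."""
    return (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy
```

Because every corner value of Ez enters the Bx and By updates of each cell with cancelling signs, div B is conserved exactly regardless of how Ez was computed.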
Visualization of Scalar Adaptive Mesh Refinement Data
VACET; Weber, Gunther; Weber, Gunther H.; Beckner, Vince E.; Childs, Hank; Ligocki, Terry J.; Miller, Mark C.; Van Straalen, Brian; Bethel, E. Wes
2007-12-06
Adaptive Mesh Refinement (AMR) is a highly effective computational method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations, which must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR grids as a first-class data type, and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, an open source visualization tool, to accommodate AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR scalar data visualization research.
Visualization of adaptive mesh refinement data
NASA Astrophysics Data System (ADS)
Weber, Gunther H.; Hagen, Hans; Hamann, Bernd; Joy, Kenneth I.; Ligocki, Terry J.; Ma, Kwan-Liu; Shalf, John M.
2001-05-01
The complexity of physical phenomena often varies substantially over space and time. There can be regions where a physical phenomenon/quantity varies very little over a large extent. At the same time, there can be small regions where the same quantity exhibits highly complex variations. Adaptive mesh refinement (AMR) is a technique used in computational fluid dynamics to simulate phenomena with drastically varying scales concerning the complexity of the simulated variables. Using multiple nested grids of different resolutions, AMR combines the topological simplicity of structured-rectilinear grids, permitting efficient computational and storage, with the possibility to adapt grid resolutions in regions of complex behavior. We present methods for direct volume rendering of AMR data. Our methods utilize AMR grids directly for efficiency of the visualization process. We apply a hardware-accelerated rendering method to AMR data supporting interactive manipulation of color-transfer functions and viewing parameters. We also present a cell-projection-based rendering technique for AMR data.
Formal language theory: refining the Chomsky hierarchy
Jäger, Gerhard; Rogers, James
2012-01-01
The first part of this article gives a brief overview of the four levels of the Chomsky hierarchy, with a special emphasis on context-free and regular languages. It then recapitulates the arguments why neither regular nor context-free grammar is sufficiently expressive to capture all phenomena in the natural language syntax. In the second part, two refinements of the Chomsky hierarchy are reviewed, which are both relevant to the extant research in cognitive science: the mildly context-sensitive languages (which are located between context-free and context-sensitive languages), and the sub-regular hierarchy (which distinguishes several levels of complexity within the class of regular languages). PMID:22688632
GRChombo: Numerical relativity with adaptive mesh refinement
NASA Astrophysics Data System (ADS)
Clough, Katy; Figueras, Pau; Finkel, Hal; Kunesch, Markus; Lim, Eugene A.; Tunyasuvunakool, Saran
2015-12-01
In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block-structured Berger-Rigoutsos grid generation. The code supports non-trivial ‘many-boxes-in-many-boxes’ mesh hierarchies and massive parallelism through the Message Passing Interface. GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3+1 setting, while also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.
Crystal structure refinement from electron diffraction data
Dudka, A. P.; Avilov, A. S.; Lepeshov, G. G.
2008-05-15
A procedure of crystal structure refinement from electron diffraction data is described. The electron diffraction data on polycrystalline films are processed taking into account possible overlap of reflections and two-beam interaction. The diffraction from individual single crystals in an electron microscope equipped with a precession attachment is described using the Bloch-wave method, which takes into account multibeam scattering, and a special approach taking into consideration the specific features of the diffraction geometry in the precession technique. Investigations were performed on LiF, NaF, CaF2, and Si crystals. A method for reducing experimental data, which allows joint electron and X-ray diffraction study, is proposed.
The evolution and refinements of varicocele surgery
Marmar, Joel L
2016-01-01
Varicoceles had been recognized in clinical practice for over a century. Originally, these procedures were utilized for the management of pain but, since 1952, the repairs had been mostly for the treatment of male infertility. However, the diagnosis and treatment of varicoceles were controversial, because the pathophysiology was not clear, the entry criteria of the studies varied among centers, and there were few randomized clinical trials. Nevertheless, clinicians continued developing techniques for the correction of varicoceles, basic scientists continued investigations on the pathophysiology of varicoceles, and new outcome data from prospective randomized trials have appeared in the world's literature. Therefore, this special edition of the Asian Journal of Andrology was proposed to report much of the new information related to varicoceles and, as a specific part of this project, the present article was developed as a comprehensive review of the evolution and refinements of the corrective procedures. PMID:26732111
Visualization Tools for Adaptive Mesh Refinement Data
Weber, Gunther H.; Beckner, Vincent E.; Childs, Hank; Ligocki,Terry J.; Miller, Mark C.; Van Straalen, Brian; Bethel, E. Wes
2007-05-09
Adaptive Mesh Refinement (AMR) is a highly effective method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations that must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR as a first-class data type, and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, an open source visualization tool, to accommodate AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR visualization research and tools and describe how VisIt currently handles AMR data.
Adaptive Mesh Refinement Simulations of Relativistic Binaries
NASA Astrophysics Data System (ADS)
Motl, Patrick M.; Anderson, M.; Lehner, L.; Olabarrieta, I.; Tohline, J. E.; Liebling, S. L.; Rahman, T.; Hirschman, E.; Neilsen, D.
2006-09-01
We present recent results from our efforts to evolve relativistic binaries composed of compact objects. We simultaneously solve the general relativistic hydrodynamics equations to evolve the material components of the binary and Einstein's equations to evolve the space-time. These two codes are coupled through an adaptive mesh refinement driver (had). One of the ultimate goals of this project is to address the merger of a neutron star and black hole and assess the possible observational signature of such systems as gamma ray bursts. This work has been supported in part by NSF grants AST 04-07070 and PHY 03-26311 and in part through NASA's ATP program grant NAG5-13430. The computations were performed primarily at NCSA through grant MCA98N043 and at LSU's Center for Computation & Technology.
GC-directed control improves refining
Hail, G.F.
1991-02-01
Refinery product quality control plays an increasingly significant role. Driven not only by product specifications and economic goals, refiners must also satisfy new purchaser demands. That is, the emphasis on monitoring product quality on-line in an accurate, timely manner is greater now than ever, due largely to the expanding use of statistical methods (SQC/SPC) in analyzing and manipulating process operation. Consequently, reliable composition control is essential to maintaining refinery prosperity. Process gas chromatographs are frequently used to monitor the performance of distillation, absorption and stripping towers by providing near-real-time information on stream composition, particular component concentrations, or calculated parameters (Rvp, Btu content, etc.). This paper reports that appreciably greater benefit can be achieved when process gas chromatographs (GCs) provide on-line feedback data to process control schemes.
Essays on refining markets and environmental policy
NASA Astrophysics Data System (ADS)
Oladunjoye, Olusegun Akintunde
This thesis comprises three essays. The first two essays examine empirically the relationship between crude oil price and wholesale gasoline prices in the U.S. petroleum refining industry, while the third essay determines the optimal combination of emissions tax and environmental research and development (ER&D) subsidy when firms organize ER&D either competitively or as a research joint venture (RJV). In the first essay, we estimate an error correction model to determine the effects of market structure on the speed of adjustment of wholesale gasoline prices to crude oil price changes. The results indicate that market structure does not have a strong effect on the dynamics of price adjustment in the three regional markets examined. In the second essay, we allow for inventories to affect the relationship between crude oil and wholesale gasoline prices by allowing them to affect the probability of regime change in a Markov-switching model of the refining margin. We find that low gasoline inventory increases the probability of switching from the low margin regime to the high margin regime and also increases the probability of staying in the high margin regime. This is consistent with the predictions of the competitive storage theory. In the third essay, we extend Industrial Organization R&D theory to the determination of optimal environmental policies. We find that RJV is socially desirable. In comparison to competitive ER&D, we suggest that regulators should encourage RJV with a lower emissions tax and higher subsidy, as these will lead to the coordination of ER&D activities and eliminate duplication of efforts while firms internalize their technological spillover externality.
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Modiano, David; Colella, Phillip
1994-01-01
A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology mean that a high degree of optimization can be achieved on computers with vector processors.
Global path planning of mobile robots using a memetic algorithm
NASA Astrophysics Data System (ADS)
Zhu, Zexuan; Wang, Fangxiao; He, Shan; Sun, Yiwen
2015-08-01
In this paper, a memetic algorithm for global path planning (MAGPP) of mobile robots is proposed. MAGPP is a synergy of genetic algorithm (GA) based global path planning and a local path refinement. Particularly, candidate path solutions are represented as GA individuals and evolved with evolutionary operators. In each GA generation, the local path refinement is applied to the GA individuals to rectify and improve the paths encoded. MAGPP is characterised by a flexible path encoding scheme, which is introduced to encode the obstacles bypassed by a path. Both path length and smoothness are considered as fitness evaluation criteria. MAGPP is tested on simulated maps and compared with other counterpart algorithms. The experimental results demonstrate the efficiency of MAGPP and it is shown to obtain better solutions than the other compared algorithms.
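The memetic template described above (GA global search with a local refinement applied to each offspring) can be sketched on a toy continuous minimization problem. This is a generic illustration of the algorithm class, not the MAGPP path encoding; all names and parameter values are assumptions:

```python
import random

def memetic_minimize(f, dim, pop_size=20, gens=40, seed=0):
    """Memetic minimization: GA global search + per-child hill-climb.

    Illustrative sketch of the GA + local-refinement structure; individuals
    here are real vectors, not encoded robot paths."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]

    def local_refine(x):
        # Coordinate-wise hill climb: accept any step that improves f.
        for i in range(dim):
            for step in (0.5, -0.5, 0.1, -0.1):
                y = x[:]
                y[i] += step
                if f(y) < f(x):
                    x = y
        return x

    for _ in range(gens):
        pop.sort(key=f)                       # rank by fitness
        elite = pop[:pop_size // 2]           # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)       # parent selection
            cut = rng.randrange(1, dim) if dim > 1 else 0
            child = a[:cut] + b[cut:]         # one-point crossover
            if rng.random() < 0.3:            # occasional mutation
                j = rng.randrange(dim)
                child[j] += rng.gauss(0, 0.5)
            children.append(local_refine(child))  # the "memetic" step
        pop = elite + children
    return min(pop, key=f)
```

In MAGPP the same pattern applies, but the individuals encode obstacle-bypassing paths and the local refinement rectifies and shortens the encoded path.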
On macromolecular refinement at subatomic resolution withinteratomic scatterers
Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.; Lunin, Vladimir Y.; Urzhumtsev, Alexandre
2007-11-09
A study of the accurate electron density distribution in molecular crystals at subatomic resolution, better than ~1.0 Å, requires more detailed models than those based on independent spherical atoms. A tool conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 Å, the amount of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark datasets gave results comparable in quality with results of multipolar refinement and superior to those for conventional models. Applications to several datasets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
Deformable elastic network refinement for low-resolution macromolecular crystallography.
Schröder, Gunnar F; Levitt, Michael; Brunger, Axel T
2014-09-01
Crystals of membrane proteins and protein complexes often diffract to low resolution owing to their intrinsic molecular flexibility, heterogeneity or the mosaic spread of micro-domains. At low resolution, the building and refinement of atomic models is a more challenging task. The deformable elastic network (DEN) refinement method developed previously has been instrumental in the determination of several structures at low resolution. Here, DEN refinement is reviewed, recommendations for its optimal usage are provided and its limitations are discussed. Representative examples of the application of DEN refinement to challenging cases of refinement at low resolution are presented. These cases include soluble as well as membrane proteins determined at limiting resolutions ranging from 3 to 7 Å. Potential extensions of the DEN refinement technique and future perspectives for the interpretation of low-resolution crystal structures are also discussed.
Refinement of herpesvirus B-capsid structure on parallel supercomputers.
Zhou, Z H; Chiu, W; Haskell, K; Spears, H; Jakana, J; Rixon, F J; Scott, L R
1998-01-01
Electron cryomicroscopy and icosahedral reconstruction are used to obtain the three-dimensional structure of the 1250-A-diameter herpesvirus B-capsid. The centers and orientations of particles in focal pairs of 400-kV, spot-scan micrographs are determined and iteratively refined by common-lines-based local and global refinement procedures. We describe the rationale behind choosing shared-memory multiprocessor computers for executing the global refinement, which is the most computationally intensive step in the reconstruction procedure. This refinement has been implemented on three different shared-memory supercomputers. The speedup and efficiency are evaluated by using test data sets with different numbers of particles and processors. Using this parallel refinement program, we refine the herpesvirus B-capsid from 355-particle images to 13-A resolution. The map shows new structural features and interactions of the protein subunits in the three distinct morphological units: penton, hexon, and triplex of this T = 16 icosahedral particle.
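The speedup and efficiency metrics used above to evaluate the parallel global refinement have the standard definitions S = T1/Tp and E = S/p, and Amdahl's law bounds the achievable speedup given a fixed serial fraction. A minimal sketch (generic formulas, not code from the paper):

```python
def speedup_efficiency(t_serial, t_parallel, n_procs):
    """Speedup S = T1/Tp and parallel efficiency E = S/p."""
    s = t_serial / t_parallel
    return s, s / n_procs

def amdahl_speedup(serial_fraction, n_procs):
    """Amdahl's law: upper bound on speedup when a fraction of the
    work is inherently serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)
```

For example, a run that takes 100 s serially and 25 s on 8 processors has speedup 4 and efficiency 0.5, and a 10% serial fraction caps the 10-processor speedup at about 5.3.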
Parallel adaptive mesh refinement for electronic structure calculations
Kohn, S.; Weare, J.; Ong, E.; Baden, S.
1996-12-01
We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradients with multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.
A deterministic algorithm for constrained enumeration of transmembrane protein folds.
Brown, William Michael; Young, Malin M.; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Schoeniger, Joseph S.
2004-07-01
A deterministic algorithm for enumeration of transmembrane protein folds is presented. Using a set of sparse pairwise atomic distance constraints (such as those obtained from chemical cross-linking, FRET, or dipolar EPR experiments), the algorithm performs an exhaustive search of secondary structure element packing conformations distributed throughout the entire conformational space. The end result is a set of distinct protein conformations, which can be scored and refined as part of a process designed for computational elucidation of transmembrane protein structures.
New Process for Grain Refinement of Aluminum. Final Report
Dr. Joseph A. Megy
2000-09-22
A new method of grain refining aluminum involving in-situ formation of boride nuclei in molten aluminum just prior to casting has been developed in the subject DOE program over the last thirty months by a team consisting of JDC, Inc., Alcoa Technical Center, GRAS, Inc., Touchstone Labs, and GKS Engineering Services. The manufacturing process to make boron trichloride for grain refining is much simpler than preparing conventional grain refiners, with attendant environmental, capital, and energy savings. The manufacture of boride grain refining nuclei using the fy-Gem process avoids clusters, salt and oxide inclusions that cause quality problems in aluminum today.
Improved ligand geometries in crystallographic refinement using AFITT in PHENIX.
Janowski, Pawel A; Moriarty, Nigel W; Kelley, Brian P; Case, David A; York, Darrin M; Adams, Paul D; Warren, Gregory L
2016-09-01
Modern crystal structure refinement programs rely on geometry restraints to overcome the challenge of a low data-to-parameter ratio. While the classical Engh and Huber restraints work well for standard amino-acid residues, the chemical complexity of small-molecule ligands presents a particular challenge. Most current approaches either limit ligand restraints to those that can be readily described in the Crystallographic Information File (CIF) format, thus sacrificing chemical flexibility and energetic accuracy, or they employ protocols that substantially lengthen the refinement time, potentially hindering rapid automated refinement workflows. PHENIX-AFITT refinement uses a full molecular-mechanics force field for user-selected small-molecule ligands during refinement, eliminating the potentially difficult problem of finding or generating high-quality geometry restraints. It is fully integrated with a standard refinement protocol and requires practically no additional steps from the user, making it ideal for high-throughput workflows. PHENIX-AFITT refinements also handle multiple ligands in a single model, alternate conformations and covalently bound ligands. Here, the results of combining AFITT and the PHENIX software suite on a data set of 189 protein-ligand PDB structures are presented. Refinements using PHENIX-AFITT significantly reduce ligand conformational energy and lead to improved geometries without detriment to the fit to the experimental data. For the data presented, PHENIX-AFITT refinements result in more chemically accurate models for small-molecule ligands. PMID:27599738
Refiners react to changes in the pipeline infrastructure
Giles, K.A.
1997-06-01
Petroleum pipelines have long been a critical component in the distribution of crude and refined products in the U.S. Pipelines are typically the most cost efficient mode of transportation for reasonably consistent flow rates. For obvious reasons, inland refineries and consumers are much more dependent on petroleum pipelines to provide supplies of crude and refined products than refineries and consumers located on the coasts. Significant changes in U.S. distribution patterns for crude and refined products are reshaping the pipeline infrastructure and presenting challenges and opportunities for domestic refiners. These changes are discussed.
US refiners choosing variety of routes to produce clean fuels
Ragsdale, R.
1994-03-21
Passage of the Clean Air Act Amendments of 1990 has prompted US refiners to install new facilities to comply with stricter specifications for gasoline and diesel fuel. Refiners are choosing a number of routes to produce these clean fuels. A roundup of the types of new facilities being built will provide a reference for those refiners who have not yet begun such projects, and an overview of the difficulties U.S. refiners are facing. Only those processing options known to be in design, construction, or operation will be presented.
Adaptive h-refinement for reduced-order models
Carlberg, Kevin T.
2014-11-05
Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
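The basis-splitting step can be sketched as follows (a simplified illustration of the idea, not the paper's code; the fixed k = 2 and all names are assumptions): the support of one basis vector is partitioned by 2-means clustering of the corresponding snapshot rows, yielding child vectors with disjoint support that sum back to the parent, exactly as required for the tree-structured refinement.

```python
import numpy as np

def split_basis_vector(phi, snapshots, iters=20, seed=0):
    """Split a reduced-basis vector into two children with disjoint support.

    phi: basis vector, shape (n,); snapshots: shape (n, n_snap).
    State indices in phi's support are clustered by their snapshot rows.
    """
    idx = np.flatnonzero(phi)                # indices where phi is supported
    rows = snapshots[idx].astype(float)
    rng = np.random.default_rng(seed)
    centers = rows[rng.choice(len(idx), 2, replace=False)].copy()
    labels = np.zeros(len(idx), dtype=int)
    for _ in range(iters):                   # plain Lloyd iterations, k = 2
        d = np.linalg.norm(rows[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = rows[labels == k].mean(axis=0)
    children = []
    for k in range(2):                       # children partition phi's support
        child = np.zeros_like(phi, dtype=float)
        sel = idx[labels == k]
        child[sel] = phi[sel]
        children.append(child)
    return children
```

Applied recursively, this produces the offline tree whose leaves the online dual-weighted-residual indicator chooses among.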
Refined solution structure of human profilin I.
Metzler, W. J.; Farmer, B. T.; Constantine, K. L.; Friedrichs, M. S.; Lavoie, T.; Mueller, L.
1995-01-01
Profilin is a ubiquitous eukaryotic protein that binds to both cytosolic actin and the phospholipid phosphatidylinositol-4,5-bisphosphate. These dual competitive binding capabilities of profilin suggest that profilin serves as a link between the phosphatidyl inositol cycle and actin polymerization, and thus profilin may be an essential component in the signaling pathway leading to cytoskeletal rearrangement. The refined three-dimensional solution structure of human profilin I has been determined using multidimensional heteronuclear NMR spectroscopy. Twenty structures were selected to represent the solution conformational ensemble. This ensemble of structures has root-mean-square distance deviations from the mean structure of 0.58 Å for the backbone atoms and 0.98 Å for all non-hydrogen atoms. Comparison of the solution structure of human profilin to the crystal structure of bovine profilin reveals that, although profilin adopts essentially identical conformations in both states, the solution structure is more compact than the crystal structure. Interestingly, the regions that show the most structural diversity are located at or near the actin-binding site of profilin. We suggest that structural differences are reflective of dynamical properties of profilin that facilitate favorable interactions with actin. The global folding pattern of human profilin also closely resembles that of Acanthamoeba profilin I, reflective of the 22% sequence identity and approximately 45% sequence similarity between these two proteins. PMID:7795529
Steel refining with an electrochemical cell
Blander, Milton; Cook, Glenn M.
1988-01-01
Apparatus is described for processing a metallic fluid containing iron oxide: a container for a molten metal including an electrically conductive refractory disposed for contact with the molten metal which contains iron oxide, an electrolyte in the form of a basic slag on top of the molten metal, an electrode in the container in contact with the slag and electrically separated from the refractory, and means for establishing a voltage across the refractory and the electrode to reduce iron oxide to iron at the surface of the refractory in contact with the iron-oxide-containing fluid. A process is also disclosed for refining an iron product containing not more than about 10% by weight oxygen and not more than about 10% by weight sulfur, comprising providing an electrolyte of a slag containing one or more of calcium oxide, magnesium oxide, silica or alumina, providing a cathode of the iron product in contact with the electrolyte, providing an anode in contact with the electrolyte electrically separated from the cathode, and operating an electrochemical cell formed by the anode, the cathode and the electrolyte to separate oxygen or sulfur present in the iron product therefrom.
Refining and blending of aviation turbine fuels.
White, R D
1999-02-01
Aviation turbine fuels (jet fuels) are similar to other petroleum products that have a boiling range of approximately 300°F to 550°F. Kerosene and the No. 1 grades of fuel oil, diesel fuel, and gas turbine oil share many similar physical and chemical properties with jet fuel. The similarity among these products should allow toxicology data on one material to be extrapolated to the others. Refineries in the USA manufacture jet fuel to meet industry standard specifications. Civilian aircraft primarily use Jet A or Jet A-1 fuel as defined by ASTM D 1655. Military aircraft use JP-5 or JP-8 fuel as defined by MIL-T-5624R or MIL-T-83133D, respectively. The freezing point and flash point are the principal differences between the finished fuels. Common refinery processes that produce jet fuel include distillation, caustic treatment, hydrotreating, and hydrocracking. Each of these refining processes may be the final step to produce jet fuel. Sometimes blending of two or more of these refinery process streams is needed to produce jet fuel that meets the desired specifications. Chemical additives allowed for use in jet fuel are also defined in the product specifications. In many cases, the customer rather than the refinery will put additives into the fuel to meet their specific storage or flight condition requirements.
Electron beam cold hearth refining in Vallejo
Lowe, J.H.C.
1994-12-31
The Electron Beam Cold Hearth Refining (EBCHR) furnace in Vallejo, California is alive, well, and girding itself to develop new markets. This paper gives a brief review of the twelve years' experience with EBCHR in Vallejo. Acquisition of the Vallejo facility by Axel Johnson Metals, Inc. paves the way for the development of new products and markets, and some of the new opportunities for the advancement of EBCHR technology are discussed. Advantages of the EBCHR process include: extended surface area of molten metal exposed to higher vacuum; liberation of insoluble oxide particles to the surface of the melt; higher temperatures that allow coarse solid particles such as carbides and carbonitrides to be suspended in the fluid metal as fine micro-segregates; and enhanced removal of volatile trace impurities such as lead, bismuth and cadmium. Future work for the company includes the continued recycling of alloys and the fabrication of stainless steel piping for chip assembly plants, to prevent "killer defects" that ruin memory chips.
Rietveld refinement study of PLZT ceramics
NASA Astrophysics Data System (ADS)
Kumar, Rakesh; Bavbande, D. V.; Mishra, R.; Bafna, V. H.; Mohan, D.; Kothiyal, G. P.
2013-02-01
PLZT ceramics of composition Pb0.93La0.07(Zr0.60Ti0.40)O3 were prepared by a solid-state synthesis route after milling for 6 h and 24 h; the 6 h and 24 h milled samples are denoted PLZT-6 and PLZT-24, respectively. X-ray diffraction (XRD) patterns were recorded at room temperature and analyzed by the Rietveld refinement method. Phase identification shows that all peaks observed in the PLZT-6 and PLZT-24 ceramics can be indexed to the P4mm space group with tetragonal symmetry. The unit cell parameters are a=b=4.0781(5) Å and c=4.0938(7) Å for PLZT-6, and a=b=4.0679(4) Å and c=4.1010(5) Å for PLZT-24. The axial ratio c/a and unit cell volume of PLZT-6 are 1.0038 and 68.09(2) Å3, respectively. For PLZT-24, the axial ratio c/a is 1.0080, slightly larger than that of PLZT-6, whereas the unit cell volume decreases to 67.88(1) Å3. The average crystallite size was estimated using Scherrer's formula. Dielectric properties were obtained by measuring the capacitance and tan δ loss with a Stanford LCR meter.
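As a quick consistency check on the cell metrics quoted above: for a tetragonal cell the axial ratio is c/a and the volume is V = a²c, and the crystallite size follows from Scherrer's formula D = Kλ/(β cos θ). The sketch below uses only these textbook relations; the peak width and angle passed to the Scherrer function are illustrative values, not data from the paper.

```python
import math

def tetragonal_metrics(a, c):
    """Axial ratio c/a and unit-cell volume V = a^2 * c for a tetragonal cell."""
    return c / a, a * a * c

def scherrer_size(wavelength, beta, theta, k=0.9):
    """Scherrer crystallite size D = K*lambda / (beta * cos(theta)),
    with beta (FWHM) and theta in radians; D is in the wavelength's unit."""
    return k * wavelength / (beta * math.cos(theta))

# Reported lattice parameters (angstroms) for the 6 h and 24 h milled samples
r6, v6 = tetragonal_metrics(4.0781, 4.0938)    # c/a ~ 1.0038, V ~ 68.08 A^3
r24, v24 = tetragonal_metrics(4.0679, 4.1010)  # c/a ~ 1.0081, V ~ 67.86 A^3

# Illustrative Scherrer estimate: Cu K-alpha (1.5406 A), 0.002 rad FWHM, theta = 15 deg
d = scherrer_size(1.5406, 0.002, math.radians(15.0))
```

The computed ratios and volumes agree with the published 1.0038 / 68.09(2) Å³ and 1.0080 / 67.88(1) Å³ to within rounding of the refined parameters.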
Astrocytes refine cortical connectivity at dendritic spines
Risher, W Christopher; Patel, Sagar; Kim, Il Hwan; Uezu, Akiyoshi; Bhagat, Srishti; Wilton, Daniel K; Pilaz, Louis-Jan; Singh Alvarado, Jonnathan; Calhan, Osman Y; Silver, Debra L; Stevens, Beth; Calakos, Nicole; Soderling, Scott H; Eroglu, Cagla
2014-01-01
During cortical synaptic development, thalamic axons must establish synaptic connections despite the presence of the more abundant intracortical projections. How thalamocortical synapses are formed and maintained in this competitive environment is unknown. Here, we show that astrocyte-secreted protein hevin is required for normal thalamocortical synaptic connectivity in the mouse cortex. Absence of hevin results in a profound, long-lasting reduction in thalamocortical synapses accompanied by a transient increase in intracortical excitatory connections. Three-dimensional reconstructions of cortical neurons from serial section electron microscopy (ssEM) revealed that, during early postnatal development, dendritic spines often receive multiple excitatory inputs. Immuno-EM and confocal analyses revealed that the majority of the spines with multiple excitatory contacts (SMECs) receive simultaneous thalamic and cortical inputs. The proportion of SMECs diminishes as the brain develops, but SMECs remain abundant in Hevin-null mice. These findings reveal that, through secretion of hevin, astrocytes control an important developmental synaptic refinement process at dendritic spines. DOI: http://dx.doi.org/10.7554/eLife.04047.001 PMID:25517933
Refined phase diagram of boron nitride
Solozhenko, V.; Turkevich, V.Z.; Holzapfel, W.B.
1999-04-15
The equilibrium phase diagram of boron nitride thermodynamically calculated by Solozhenko in 1988 has now been refined on the basis of new experimental data on BN melting and extrapolation of the heat capacities of BN polymorphs into the high-temperature region using the adapted pseudo-Debye model. As compared with the above diagram, the hBN ⇌ cBN equilibrium line is displaced by 60 K toward higher temperatures. The hBN-cBN-L triple point has been calculated to be at 3480 ± 10 K and 5.9 ± 0.1 GPa, while the hBN-L-V triple point is at T = 3400 ± 20 K and p = 400 ± 20 Pa, which indicates that the region of thermodynamic stability of vapor in the BN phase diagram is extremely small. It has been found that the slope of the cBN melting curve is positive, whereas the slope of the hBN melting curve varies from positive between ambient pressure and 3.4 GPa to negative at higher pressures.
Steel refining with an electrochemical cell
Blander, M.; Cook, G.M.
1988-05-17
Apparatus is described for processing a metallic fluid containing iron oxide, container for a molten metal including an electrically conductive refractory disposed for contact with the molten metal which contains iron oxide, an electrolyte in the form of a basic slag on top of the molten metal, an electrode in the container in contact with the slag electrically separated from the refractory, and means for establishing a voltage across the refractory and the electrode to reduce iron oxide to iron at the surface of the refractory in contact with the iron oxide containing fluid. A process is disclosed for refining an iron product containing not more than about 10% by weight oxygen and not more than about 10% by weight sulfur, comprising providing an electrolyte of a slag containing one or more of calcium oxide, magnesium oxide, silica or alumina, providing a cathode of the iron product in contact with the electrolyte, providing an anode in contact with the electrolyte electrically separated from the cathode, and operating an electrochemical cell formed by the anode, the cathode and the electrolyte to separate oxygen or sulfur present in the iron product therefrom. 2 figs.
Catalysts for upgrading solvent refined lignite
Kim, N.K.
1982-01-01
The solvent refined lignite (SRL), made at the University of North Dakota Process Development Unit, was a solid having a nominal melting point of 160 °C. The SRL was pulverized and mixed with a donor solvent, tetralin. An SRL-to-tetralin ratio of 1:1 was selected for pretreatment in a high-pressure, high-temperature reactor. The optimized reactor conditions were a reaction temperature of 475 °C, an initial hydrogen pressure of 2000 psig, and a retention time of 40 minutes. Under these conditions approximately 97% of the SRL was dissolved in tetralin. The resulting solution was used to test the 27 developmental catalysts. The catalysts were developed by impregnating the γ-alumina with the 3 active metals, MoO₃, CoO, and WO₃, each at 3 levels. The effect of these factors on upgrading of the SRL was evaluated in terms of denitrogenation, desulfurization, and hydrocracking. The multiple linear regression analysis showed that the metal compositions for the best overall catalytic performance were 9.5% MoO₃, 4.3% CoO, and 4% WO₃ (% of carrier weight). A model was developed based on the results of scanning electron micrographs to explain some of the physical characteristics of the catalysts. The disadvantage of the incipient wetness method used in metal impregnation was explained, and the preferable pore structure and distribution were suggested.
Steel refining with an electrochemical cell
Blander, M.; Cook, G.M.
1985-05-21
Disclosed is an apparatus for processing a metallic fluid containing iron oxide, container for a molten metal including an electrically conductive refractory disposed for contact with the molten metal which contains iron oxide, an electrolyte in the form of a basic slag on top of the molten metal, an electrode in the container in contact with the slag electrically separated from the refractory, and means for establishing a voltage across the refractory and the electrode to reduce iron oxide to iron at the surface of the refractory in contact with the iron oxide containing fluid. A process is disclosed for refining an iron product containing not more than about 10% by weight sulfur, comprising providing an electrolyte of a slag containing one or more of calcium oxide, magnesium oxide, silica or alumina, providing a cathode of the iron product in contact with the electrolyte, providing an anode in contact with the electrolyte electrically separated from the cathode, and operating an electrochemical cell formed by the anode, the cathode and the electrolyte to separate oxygen or sulfur present in the iron product therefrom.
Spatially Refined Aerosol Direct Radiative Forcing Efficiencies
NASA Technical Reports Server (NTRS)
Henze, Daven K.; Shindell, Drew Todd; Akhtar, Farhan; Spurr, Robert J. D.; Pinder, Robert W.; Loughlin, Dan; Kopacz, Monika; Singh, Kumaresh; Shim, Changsub
2012-01-01
Global aerosol direct radiative forcing (DRF) is an important metric for assessing potential climate impacts of future emissions changes. However, the radiative consequences of emissions perturbations are not readily quantified nor well understood at the level of detail necessary to assess realistic policy options. To address this challenge, here we show how adjoint model sensitivities can be used to provide highly spatially resolved estimates of the DRF from emissions of black carbon (BC), primary organic carbon (OC), sulfur dioxide (SO2), and ammonia (NH3), using the example of emissions from each sector and country following multiple Representative Concentration Pathways (RCPs). The radiative forcing efficiencies of many individual emissions are found to differ considerably from regional or sectoral averages for NH3, SO2 from the power sector, and BC from domestic, industrial, transportation and biomass burning sources. Consequently, the amount of emissions controls required to attain a specific DRF varies at intracontinental scales by up to a factor of 4. These results thus demonstrate both a need and means for incorporating spatially refined aerosol DRF into analysis of future emissions scenarios and the design of air quality and climate change mitigation policies.
An Application of the Mesh Generation and Refinement Tool to Mobile Bay, Alabama, USA
NASA Astrophysics Data System (ADS)
Aziz, Wali; Alarcon, Vladimir J.; McAnally, William; Martin, James; Cartwright, John
2009-08-01
A grid generation tool, called the Mesh Generation and Refinement Tool (MGRT), has been developed using Qt4. Qt4 is a comprehensive C++ application framework which includes GUI and container class-libraries and tools for cross-platform development. MGRT is capable of using several types of algorithms for grid generation. This paper presents an application of the MGRT grid generation tool for creating an unstructured grid of Mobile Bay (Alabama, USA) that will be used for hydrodynamics modeling. The algorithm used in this particular application is the Advancing-Front/Local-Reconnection (AFLR) [1] [2]. This research shows results of grids created with MGRT and compares them to grids (for the same geographical domain) created using other grid generation tools. The superior quality of the grids generated by MGRT is shown.
A feature refinement approach for statistical interior CT reconstruction.
Hu, Zhanli; Zhang, Yunwan; Liu, Jianbo; Ma, Jianhua; Zheng, Hairong; Liang, Dong
2016-07-21
Interior tomography is clinically desired to reduce the radiation dose delivered to patients. In this work, a new statistical interior tomography approach for computed tomography is proposed. The developed design focuses on taking into account the statistical nature of local projection data and recovering fine structures which are lost in conventional total-variation (TV)-minimization reconstruction. The proposed method falls within the compressed sensing framework of TV minimization, which only assumes that the interior ROI is piecewise constant or polynomial and does not need any additional prior knowledge. To integrate the statistical distribution property of the projection data, the objective function is built under the penalized weighted least-squares criterion (PWLS-TV). In the implementation of the proposed method, an interior projection extrapolation-based FBP reconstruction is first used as the initial guess to mitigate truncation artifacts and also provide an extended field-of-view. Moreover, an interior feature refinement step, as an important processing operation, is performed after each iteration of PWLS-TV to recover the desired structure information which is lost during the TV minimization. Here, a feature descriptor is specifically designed and employed to distinguish structure from noise and noise-like artifacts. A modified steepest descent algorithm is adopted to minimize the associated objective function. The proposed method is applied to both digital phantom and in vivo micro-CT datasets, and compared to FBP, ART-TV and PWLS-TV. The reconstruction results demonstrate that the proposed method performs better than the other conventional methods in suppressing noise, reducing truncation and streak artifacts, and preserving features. The proposed approach demonstrates its potential usefulness for feature preservation in interior tomography under truncated projection measurements. PMID:27362527
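The TV-minimization core of such reconstructions can be sketched compactly. The toy below does steepest descent on a PWLS-TV objective with the identity as the forward operator (i.e., weighted denoising rather than reconstruction from projections), so it is a generic illustration of the PWLS-TV idea, not the authors' algorithm; the weights, β, and step size are illustrative.

```python
import numpy as np

def tv_gradient(u, eps=1e-8):
    """Gradient of a smoothed isotropic TV seminorm: -div(grad u / |grad u|).
    The periodic wrap at one border via np.roll is an approximation kept for brevity."""
    ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences, 0 at edge
    uy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(ux**2 + uy**2 + eps)
    px, py = ux / mag, uy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def pwls_tv_denoise(y, w, beta=0.15, step=0.2, iters=100):
    """Steepest descent on 0.5*sum(w*(u - y)**2) + beta*TV(u); the identity
    forward model stands in for the CT projector."""
    u = y.copy()
    for _ in range(iters):
        u -= step * (w * (u - y) + beta * tv_gradient(u))
    return u
```

On a noisy piecewise-constant phantom this pulls down the total variation while the weighted data term keeps the estimate anchored to the measurements; replacing the identity with a projector pair (A, Aᵀ) gives the usual PWLS-TV iteration.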
Effects of adaptive refinement on the inverse EEG solution
NASA Astrophysics Data System (ADS)
Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.
1995-10-01
One of the fundamental problems in electroencephalography can be characterized as an inverse problem: given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically, the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of the electric currents and potential fields within the brain through an inverse procedure. To test these methods, we constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.
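The ill-posedness described above can be reproduced in one dimension with finite differences (not the paper's finite element head models): invert a discrete Poisson operator naively and tiny measurement noise is amplified enormously, while a Tikhonov-regularized inverse stays stable. The noise level and λ below are illustrative.

```python
import numpy as np

n = 60
h = 1.0 / (n + 1)
# 1-D discrete Laplacian with Dirichlet boundaries; Poisson's equation: L @ u = f
L = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
A = np.linalg.inv(L)  # forward map: source term f -> potential u

x = np.linspace(h, 1 - h, n)
f_true = np.exp(-((x - 0.5) ** 2) / 0.01)            # smooth source
rng = np.random.default_rng(0)
u_meas = A @ f_true + 1e-4 * rng.standard_normal(n)  # tiny measurement noise

f_naive = L @ u_meas                                 # unregularized inverse
lam = 1e-6                                           # Tikhonov parameter
f_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ u_meas)

err_naive = np.linalg.norm(f_naive - f_true)  # blows up: L amplifies the noise
err_tik = np.linalg.norm(f_tik - f_true)      # stays small
```

The naive inverse applies the second-difference operator (scale 1/h²) to the noise, so its error dwarfs the regularized one; adaptive refinement, as in the paper, attacks the complementary discretization error in the forward problem.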
Refined Freeman-Durden for Harvest Detection using POLSAR data
NASA Astrophysics Data System (ADS)
Taghvakish, Sina
To keep up with an ever increasing human population, providing food is one of the main challenges of the current century. Harvest detection, as an input for decision making, is an important task for food management. Traditional harvest detection methods that rely on field observations require intensive labor, time, and money. Since its introduction in the early '60s, optical remote sensing has therefore enhanced the process dramatically. But because of weaknesses such as cloud cover and limited temporal resolution, alternative methods were always welcome. Synthetic Aperture Radar (SAR), on the other hand, with its ability to penetrate cloud cover and the availability of full polarimetric observations, can be a good source of data for exploration in agricultural studies. SAR has been used successfully for harvest detection in rice paddy fields. However, harvest detection for other crops without a smooth underlying water surface is much more difficult. The objective of this project is to find a fully-automated algorithm to perform harvest detection using POLSAR image data for soybean and corn. The proposed method is a fusion of the Freeman-Durden and H/A/alpha decompositions. The Freeman-Durden algorithm is a decomposition based on a three-component physical scattering model. The H/A/alpha parameters, in contrast, are mathematical parameters used to define a three-dimensional space that may be subdivided with scattering mechanism interpretations. The Freeman-Durden model has a symmetric formulation for two of its three scattering mechanisms, and its surface scattering component is only applicable to Bragg surface scattering fields, which are not the dominant case in agricultural fields. H/A/alpha can contribute to both of these issues. Based on the RADARSAT-2 image incidence angle, our field-based refined Freeman-Durden model and a proposed roughness measure aim to discriminate harvested from senesced crops. We achieved 99.08 percent overall
Evaluation of the tool "Reg Refine" for user-guided deformable image registration.
Johnson, Perry B; Padgett, Kyle R; Chen, Kuan L; Dogan, Nesrin
2016-01-01
"Reg Refine" is a tool available in the MIM Maestro v6.4.5 platform (www.mim-software.com) that allows the user to actively participate in the deformable image registration process. The purpose of this work was to evaluate the efficacy of this tool and investigate strategies for how to apply it effectively. This was done by performing DIR on two publicly available ground-truth models, the Pixel-based Breathing Thorax Model (POPI) for lung, and the Deformable Image Registration Evaluation Project (DIREP) for head and neck. Image noise matched in both magnitude and texture to clinical CBCT scans was also added to each model to simulate the use case of CBCT-CT alignment. For lung, the results showed Reg Refine effective at improving registration accuracy when controlled by an expert user within the context of large lung deformation. CBCT noise was also shown to have no effect on DIR performance while using the MIM algorithm for this site. For head and neck, the results showed CBCT noise to have a large effect on the accuracy of registration, specifically for low-contrast structures such as the brain-stem and parotid glands. In these cases, the Reg Refine tool was able to improve the registration accuracy when controlled by an expert user. Several strategies for how to achieve these results have been outlined to assist other users and provide feedback for developers of similar tools. PMID:27167273
Schnieders, Michael J; Fenn, Timothy D; Pande, Vijay S
2011-04-12
Refinement of macromolecular models from X-ray crystallography experiments benefits from prior chemical knowledge at all resolutions. As the quality of the prior chemical knowledge from quantum or classical molecular physics improves, in principle so will the resulting structural models. Due to limitations in computer performance and electrostatic algorithms, commonly used macromolecular X-ray crystallography refinement protocols have had limited support for rigorous molecular physics in the past. For example, electrostatics is often neglected in favor of nonbonded interactions based on a purely repulsive van der Waals potential. In this work we present advanced algorithms for desktop workstations that open the door to X-ray refinement of even the most challenging macromolecular data sets using state-of-the-art classical molecular physics. First we describe theory for particle mesh Ewald (PME) summation that consistently handles the symmetry of all 230 space groups, replicates of the unit cell (such that the minimum image convention can be used with a real-space cutoff of any size), and the combination of space group symmetry with replicates. An implementation of symmetry-accelerated PME for the polarizable atomic multipole optimized energetics for biomolecular applications (AMOEBA) force field is presented. Relative to a single CPU core performing calculations on a P1 unit cell, our AMOEBA engine called Force Field X (FFX) accelerates energy evaluations by more than a factor of 24 on an 8-core workstation with a Tesla GPU coprocessor for 30 structures that contain 240,000 atoms on average in the unit cell. The benefit of AMOEBA electrostatics evaluated with PME for macromolecular X-ray crystallography refinement is demonstrated via rerefinement of 10 crystallographic data sets that range in resolution from 1.7 to 4.5 Å. Beginning from structures obtained by local optimization without electrostatics, further optimization using AMOEBA with PME electrostatics improved
Optimization of Refining Craft for Vegetable Insulating Oil
NASA Astrophysics Data System (ADS)
Zhou, Zhu-Jun; Hu, Ting; Cheng, Lin; Tian, Kai; Wang, Xuan; Yang, Jun; Kong, Hai-Yang; Fang, Fu-Xin; Qian, Hang; Fu, Guang-Pan
2016-05-01
Vegetable insulating oil, because of its environmental friendliness, is considered an ideal material to replace mineral oil for the insulation and cooling of transformers. The main steps of the traditional refining process are alkali refining, bleaching, and distillation. This refining process gives satisfactory results for small batches of insulating oil, but it cannot be applied to a large-capacity reaction kettle. In this paper, using rapeseed oil as the crude oil, the refining process has been optimized for a large-capacity reaction kettle. The optimized refining process adds an acid degumming step. The alkali compound adds sodium silicate in the alkali refining step, and the ratio of each component is optimized. Activated clay and activated carbon are added in a 10:1 ratio in the decolorization step, which effectively reduces the acid value and dielectric loss of the oil. Using vacuum degassing instead of the distillation step further reduces the acid value. Compared with mineral insulating oil on some performance parameters, the dielectric loss of the refined vegetable insulating oil is still high, and further optimization measures will be needed in the future.
40 CFR 80.103 - Registration of refiners and importers.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Registration of refiners and importers. 80.103 Section 80.103 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... and importers. Any refiner or importer of conventional gasoline must register with the...
Refining and End Use Study of Coal Liquids
1997-10-01
This report summarizes revisions to the design basis for the linear programming refining model that is being used in the Refining and End Use Study of Coal Liquids. This revision primarily reflects the addition of data for the upgrading of direct coal liquids.
Trends in catalysis research to meet future refining needs
Absi-Halabi, M.; Stanislaus, A.; Qabazard, H.
1997-02-01
The main emphasis of petroleum refining during the '70s and early '80s was to maximize conversion of heavy oils to gasoline and middle distillate products. While this objective is still important, the current focus that began in the late '80s is to develop cleaner products. This is a result of strict environmental constraints to reduce emissions from both the products and refineries. Developing catalysts with improved activity, selectivity and stability for use in processes producing such environmentally acceptable fuels is the most economical and effective route for refiners. Novel technologies such as biocatalysis and catalytic membranes are examples of current successful laboratory-scale attempts to resolve anticipated future industry problems. Since catalysts play a key role in refining processes, it is important to examine the challenges facing catalysis research to meet future refining developments. The paper discusses the factors influencing refining, advancements in refining technology and catalysis, short-term future trends in refining catalysts research, and long-term trends in refining catalysts. 56 refs.
NASA Technical Reports Server (NTRS)
Schultz, Christopher J.; Carey, Larry; Cecil, Dan; Bateman, Monte; Stano, Geoffrey; Goodman, Steve
2012-01-01
The objective of this project is to refine, adapt, and demonstrate the Lightning Jump Algorithm (LJA) for transition to GOES-R GLM (Geostationary Lightning Mapper) readiness and to establish a path to operations. Ongoing work includes reducing risk in the GLM lightning proxy, cell tracking, LJA algorithm automation, and data fusion (e.g., radar + lightning).
Refining primary lead by granulation-leaching-electrowinning
NASA Astrophysics Data System (ADS)
Ojebuoboh, F.; Wang, S.; Maccagni, M.
2003-04-01
This article describes the development of a new process in which lead bullion obtained from smelting concentrates is refined by leaching-electrowinning. In the last half century, the challenge to treat and refine lead in order to minimize emissions of lead and lead compounds has intensified. Within the primary lead industry, the treatment aspect has transformed from the sinter-blast furnace model to direct smelting, creating gains in hygiene, environmental control, and efficiency. The refining aspect has remained based on kettle refining, or to a lesser extent, the Betts electrolytic refining. In the mid-1990s, Asarco investigated a concept based on granulating the lead bullion from the blast furnace. The granular material was fed into the Engitec Fluobor process. This work resulted in the operation of a 45 kg/d pilot plant that could produce lead sheets of 99.9% purity.
Protein structure refinement with adaptively restrained homologous replicas.
Della Corte, Dennis; Wildberg, André; Schröder, Gunnar F
2016-09-01
A novel protein refinement protocol is presented which utilizes molecular dynamics (MD) simulations of an ensemble of adaptively restrained homologous replicas. This approach adds evolutionary information to the force field and reduces random conformational fluctuations by coupling of several replicas. It is shown that this protocol refines the majority of models from the CASP11 refinement category and that larger conformational changes of the starting structure are possible than with current state of the art methods. The performance of this protocol in the CASP11 experiment is discussed. We found that the quality of the refined model is correlated with the structural variance of the coupled replicas, which therefore provides a good estimator of model quality. Furthermore, some remarkable refinement results are discussed in detail. Proteins 2016; 84(Suppl 1):302-313. © 2015 Wiley Periodicals, Inc. PMID:26441154
Image denoising filter based on patch-based difference refinement
NASA Astrophysics Data System (ADS)
Park, Sang Wook; Kang, Moon Gi
2012-06-01
In the denoising literature, research based on the nonlocal means (NLM) filter has been done and there have been many variations and improvements regarding the weight function and parameter optimization. Here, an NLM filter with patch-based difference (PBD) refinement is presented. PBD refinement, which is the weighted average of the PBD values, is performed with respect to the difference images of all the locations in a refinement kernel. With refined and denoised PBD values, a pattern-adaptive smoothing threshold and noise-suppressed NLM filter weights are calculated. Owing to the refinement of the PBD values, the patterns are divided into flat regions and texture regions by comparing the sorted values in the PBD domain to a threshold value that includes the noise standard deviation. Then two different smoothing thresholds are utilized for denoising each region, and finally the NLM filter is applied. Experimental results of the proposed scheme are shown in comparison with several state-of-the-art NLM-based denoising methods.
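For reference, plain nonlocal means (the baseline this abstract refines) can be written directly from its definition: each pixel becomes a weighted average of search-window pixels, with weights decaying in the patch-based difference. This sketch uses small illustrative patch/search sizes and filtering parameter h, and does not implement the paper's PBD refinement or adaptive thresholds.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.2):
    """Minimal nonlocal means for a 2-D float image. For every pixel, compare
    the surrounding patch with patches across a search window; similar patches
    (small mean squared patch difference d2) get weight exp(-d2 / h^2)."""
    pr, sr = patch // 2, search // 2
    pad = np.pad(img, pr + sr, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ic, jc = i + pr + sr, j + pr + sr          # center in padded coords
            ref = pad[ic - pr:ic + pr + 1, jc - pr:jc + pr + 1]
            weights, values = [], []
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    cand = pad[ic + di - pr:ic + di + pr + 1,
                               jc + dj - pr:jc + dj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)    # patch-based difference
                    weights.append(np.exp(-d2 / h**2))
                    values.append(pad[ic + di, jc + dj])
            w = np.array(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out
```

Because patch differences across an edge are large, edge pixels average almost only with their own side, which is why NLM smooths flat regions without blurring structure; the PBD refinement above then denoises the d2 values themselves before the weights are formed.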
NASA Astrophysics Data System (ADS)
Feng, F.; Zhu, J.; Zhang, A.
2005-07-01
The structural parameters of La0.67Ca0.33MnO3 were refined using one-dimensional HOLZ intensities by the QCBED method. It is feasible to obtain reliable structure information by this method combined with the global optimization algorithm.
Nonlinear Global Optimization Using Curdling Algorithm
1996-03-01
An algorithm for performing curdling optimization, a derivative-free, grid-refinement approach to nonlinear optimization, was developed and implemented in software. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to four dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. Constraints are handled as being initially fuzzy, but become tighter with each iteration.
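The grid-refinement idea can be sketched in one dimension: sample a grid, keep the cells around the best samples, and refine only those on the next pass. This is a loose, hypothetical reading of the "curdling" approach, not the program described in the abstract; all parameters (`levels`, `pts`, `keep`) are made up for illustration.

```python
import numpy as np

def grid_refine_min(f, lo, hi, levels=6, pts=11, keep=3):
    """Derivative-free minimisation by repeated grid refinement:
    sample each interval, keep the `keep` best samples, and shrink
    the grid around them for the next pass. Tracking several cells
    (not just the single best point) hints at how extremal *regions*
    rather than lone extremal points can be retained."""
    intervals = [(lo, hi)]
    samples = []
    for _ in range(levels):
        samples = []
        for a, b in intervals:
            for x in np.linspace(a, b, pts):
                samples.append((f(x), x, b - a))
        samples.sort(key=lambda t: t[0])
        # refine only the most promising cells (no derivatives needed)
        intervals = [(x - w / pts, x + w / pts) for _, x, w in samples[:keep]]
    return samples[0][1]

xmin = grid_refine_min(lambda x: (x - 1.3) ** 2, -5.0, 5.0)
```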
An algorithm for segmenting polarimetric SAR imagery
NASA Astrophysics Data System (ADS)
Geaga, Jorge V.
2015-05-01
We have developed an algorithm for segmenting fully polarimetric single-look TerraSAR-X, multilook SIR-C and 7-band Landsat 5 imagery using neural nets. The algorithm uses a feedforward neural net with one hidden layer to segment different surface classes. The weights are refined through an iterative filtering process characteristic of a relaxation process. Features selected from studies of fully polarimetric complex single-look TerraSAR-X data and multilook SIR-C data are used as input to the net. The seven bands from Landsat 5 data are used as input for the Landsat neural net. The Cloude-Pottier incoherent decomposition is used to investigate the physical basis of the polarimetric SAR data segmentation. The segmentation of a SIR-C ocean surface scene into four classes is presented. This segmentation algorithm could be a very useful tool for investigating complex polarimetric SAR phenomena.
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
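A toy analogue of the "function gas" can be built from objects that both encode and apply functions. Purely as an assumption for illustration, the objects below are affine maps modulo a prime rather than lambda-calculus terms; the interaction is function composition, and the product replaces a random ensemble member so the gas size stays fixed, as in the iterated map described above.

```python
import random

def compose(f, g, m=97):
    """Compose two affine maps x -> a*x + b (mod m):
    (f o g)(x) = a_f*(a_g*x + b_g) + b_f."""
    (af, bf), (ag, bg) = f, g
    return ((af * ag) % m, (af * bg + bf) % m)

def gas_step(gas, rng, m=97):
    """Two randomly chosen 'functions' interact by composition; the
    resulting new function replaces a random member, keeping the
    ensemble size fixed as in the Turing gas."""
    f, g = rng.choice(gas), rng.choice(gas)
    gas[rng.randrange(len(gas))] = compose(f, g, m)

rng = random.Random(0)
gas = [(rng.randrange(1, 97), rng.randrange(97)) for _ in range(50)]
for _ in range(2000):
    gas_step(gas, rng)
```

Watching which maps come to dominate the ensemble is a crude stand-in for observing the emergence of self-replicating organizations.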
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.
2000-01-01
Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
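The space-filling-curve decomposition can be sketched with a Z-order (Morton) key: sorting cells along the curve turns the 2-D (or 3-D) mesh into a 1-D ordering that can be cut into contiguous, nearly equal chunks, one per processor. This is a generic illustration of the technique, not the solver's actual partitioner.

```python
def morton_index(x, y, bits=10):
    """Interleave the bits of (x, y) to get the Z-order (Morton) key;
    nearby keys correspond to spatially nearby cells."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

def partition(cells, nproc):
    """Sort cells along the space-filling curve, then cut the 1-D
    ordering into `nproc` contiguous, nearly equal chunks."""
    order = sorted(cells, key=lambda c: morton_index(*c))
    n = len(order)
    bounds = [round(k * n / nproc) for k in range(nproc + 1)]
    return [order[bounds[k]:bounds[k + 1]] for k in range(nproc)]

cells = [(x, y) for x in range(8) for y in range(8)]
parts = partition(cells, 4)
```

For an 8x8 grid cut four ways, each chunk is one quadrant, which is why curve-based partitions tend to be spatially compact.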
Development, refinement, and testing of a short term solar flare prediction algorithm
NASA Technical Reports Server (NTRS)
Smith, Jesse B., Jr.
1993-01-01
Progress toward performing the tasks and accomplishing the goals set forth in the two-year Research Grant included primarily the analysis of digital data sets and the determination of methodology associated with the analysis of the very large, unique, and complex collection of digital solar magnetic field data. The treatment of each magnetogram as a unique set of data requiring special treatment was found to be necessary. It was determined that a person familiar with the data, the analysis system, and the logical, coherent outcome of the analysis must conduct each analysis, and must interact with the analysis program(s) substantially, sometimes over many iterations, for successful calibration and analysis of the data set. With this interaction, the data sets yield valuable, coherent analyses. During this period, it was also decided that only data sets taken inside heliographic longitudes (Central Meridian Distance) East and West 30 degrees (within 30 degrees of the Central Meridian of the Sun) would be analyzed. If the total data set is then found to be numerically inadequate for the final analysis, 30-45 degrees Central Meridian Distance data will then be analyzed. The Optical Data storage system (MSFC observatory) was found appropriate for use both in intermediate storage of the data (preliminary to analysis) and for storage of the analyzed data sets for later parametric extraction.
REFMAC5 for the refinement of macromolecular crystal structures
Murshudov, Garib N.; Skubák, Pavol; Lebedev, Andrey A.; Pannu, Navraj S.; Steiner, Roberto A.; Nicholls, Robert A.; Winn, Martyn D.; Long, Fei; Vagin, Alexei A.
2011-01-01
This paper describes various components of the macromolecular crystallographic refinement program REFMAC5, which is distributed as part of the CCP4 suite. REFMAC5 utilizes different likelihood functions depending on the diffraction data employed (amplitudes or intensities), the presence of twinning and the availability of SAD/SIRAS experimental diffraction data. To ensure chemical and structural integrity of the refined model, REFMAC5 offers several classes of restraints and choices of model parameterization. Reliable models at resolutions at least as low as 4 Å can be achieved thanks to low-resolution refinement tools such as secondary-structure restraints, restraints to known homologous structures, automatic global and local NCS restraints, ‘jelly-body’ restraints and the use of novel long-range restraints on atomic displacement parameters (ADPs) based on the Kullback–Leibler divergence. REFMAC5 additionally offers TLS parameterization and, when high-resolution data are available, fast refinement of anisotropic ADPs. Refinement in the presence of twinning is performed in a fully automated fashion. REFMAC5 is a flexible and highly optimized refinement package that is ideally suited for refinement across the entire resolution spectrum encountered in macromolecular crystallography. PMID:21460454
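The Kullback-Leibler restraint idea mentioned above can be illustrated in its simplest form. This is a one-dimensional toy (zero-mean Gaussians standing in for atomic displacement distributions), not REFMAC5's actual ADP restraint term; it only shows why a KL divergence is a natural penalty pulling related parameters toward agreement.

```python
import numpy as np

def kl_gaussian(var_p, var_q):
    """KL divergence D(p||q) between zero-mean 1-D Gaussians with
    variances var_p and var_q. It is zero iff the variances agree,
    so it can serve as a restraint that penalises disagreement
    between displacement parameters of related atoms."""
    return 0.5 * (var_p / var_q - 1.0 + np.log(var_q / var_p))
```

Note the asymmetry (D(p||q) != D(q||p) in general), which is one reason real restraint terms often symmetrise or otherwise adapt the divergence.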
Mapped Landmark Algorithm for Precision Landing
NASA Technical Reports Server (NTRS)
Johnson, Andrew; Ansar, Adnan; Matthies, Larry
2007-01-01
A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel matches. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching (see figure), which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
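The frequency-domain matching step rests on the standard trick of computing correlation via the FFT. Below is a minimal sketch under simplifying assumptions (zero-mean template, circular correlation, no normalization or sub-pixel refinement), not the flight algorithm itself:

```python
import numpy as np

def fft_match(map_img, template):
    """Locate `template` inside `map_img` by cross-correlation computed
    in the frequency domain: multiply the map's spectrum by the
    conjugate spectrum of the zero-padded, zero-mean template, then
    take the inverse transform and find the correlation peak."""
    t = template - template.mean()          # zero-mean to suppress bias
    T = np.fft.fft2(t, s=map_img.shape)     # zero-padded template spectrum
    M = np.fft.fft2(map_img)
    corr = np.fft.ifft2(M * np.conj(T)).real
    return np.unravel_index(np.argmax(corr), corr.shape)

# toy usage: recover a known template location in a random "map"
rng = np.random.default_rng(1)
big = rng.random((64, 64))
tmpl = big[20:36, 33:49]
row, col = fft_match(big, tmpl)
```

For an N-pixel map this costs O(N log N), versus O(N * patch) for direct spatial correlation, which is why the large single-template stage is done in the frequency domain.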
Querying genomic databases: refining the connectivity map.
Segal, Mark R; Xiong, Hao; Bengtsson, Henrik; Bourgon, Richard; Gentleman, Robert
2012-01-01
constitutes an ordered list. These involve using metrics proposed for analyzing partially ranked data, these being of interest in their own right and not widely used. Secondly, we advance an alternate inferential approach based on generating empirical null distributions that exploit the scope, and capture dependencies, embodied by the database. Using these refinements we undertake a comprehensive re-evaluation of Connectivity Map findings that, in general terms, reveal that accommodating ordered queries is less critical than the mode of inference. PMID:22499690
Unsupervised motion-based object segmentation refined by color
NASA Astrophysics Data System (ADS)
Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris
2003-06-01
chance of the wrong position producing a good match. Consequently, a number of methods exist which combine motion and colour segmentation. These methods use colour segmentation as a base for the motion segmentation and estimation, or perform an independent colour segmentation in parallel which is in some way combined with the motion segmentation. The presented method uses both techniques to complement each other by first segmenting on motion cues and then refining the segmentation with colour. To our knowledge few methods exist which adopt this approach. One example is the method cited in the original as 'meshrefine'. This method uses an irregular mesh, which hinders its efficient implementation in consumer electronics devices. Furthermore, the method produces a foreground/background segmentation, while our applications call for the segmentation of multiple objects. NEW METHOD: As mentioned above, we start with motion segmentation and refine the edges of this segmentation with a pixel-resolution colour segmentation method afterwards. There are several reasons for this approach: + Motion segmentation does not produce the oversegmentation which colour segmentation methods normally produce, because objects are more likely to have colour discontinuities than motion discontinuities. In this way, the colour segmentation only has to be done at the edges of segments, confining the colour segmentation to a smaller part of the image. In such a part, it is more likely that the colour of an object is homogeneous. + This approach restricts the computationally expensive pixel-resolution colour segmentation to a subset of the image. Together with the very efficient 3DRS motion estimation algorithm, this helps to reduce the computational complexity. + The motion cue alone is often enough to reliably distinguish objects from one another and the background. To obtain the motion vector fields, a variant of the 3DRS block-based motion estimator which analyses three frames of input was used.
The 3DRS motion estimator is known
GRAIL Refinements to Lunar Seismic Structure
NASA Technical Reports Server (NTRS)
Weber, Renee; Gernero, Edward; Lin, Pei-Ying; Thorne, Michael; Schmerr, Nicholas; Han, Shin-Chan
2012-01-01
such as moonquake location, timing errors, and potential seismic heterogeneities. In addition, the modeled velocities may vary with a 1-to-1 trade-off with the modeled reflector depth. The GRAIL (Gravity Recovery and Interior Laboratory) mission, launched in Sept. 2011, placed two nearly identical spacecraft in lunar orbit. The two satellites make extremely high-resolution measurements of the lunar gravity field, which can be used to constrain the interior structure of the Moon using a "crust to core" approach. GRAIL's constraints on crustal thickness, mantle structure, core radius and stratification, and core state (solid vs. molten) will complement seismic investigations in several ways. Here we present a progress report on our efforts to advance our knowledge of the Moon's internal structure using joint gravity and seismic analyses. We will focus on methodology, including 1) refinements to the seismic core constraint accomplished through array processing of Apollo seismic data, made by applying a set of travel time corrections based on GRAIL structure estimates local to each Apollo seismic station; 2) modeling deep lunar structure through synthetic seismograms, to test whether the seismic core model can reproduce the core reflections observed in the Apollo seismograms; and 3) a joint seismic and gravity inversion in which we attempt to fit a family of seismic structure models with the gravity constraints from GRAIL, resulting in maps of seismic velocities and densities that vary from a nominal model both laterally and with depth.
Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.; Van Meter, James R.
2005-01-01
A leading approach to improving the accuracy on numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh- refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
Rack gasoline and refining margins - wanted: a summer romance
Not Available
1988-04-13
For the first time since late 1987, apparent refining margins on the US benchmark crude oil (based on spot purchase prices) are virtually zero. This felicitous bit of news comes loaded with possibilities of positive (maybe even good!) margins in coming months, if the differential between crude buying prices and the value of the refined barrel continues to improve. What refiners in the US market are watching most closely right now are motorists. This issue also contains the following: (1) ED refining netback data for the US Gulf and Western Coasts, Rotterdam, and Singapore, prices for early April 1988; and (2) ED fuel price/tax series for countries of the Western Hemisphere, April 1988 edition. 5 figures, 5 tables.
QM/MM X-ray Refinement of Zinc Metalloenzymes
Li, Xue; Hayik, Seth A.; Merz, Kenneth M.
2010-01-01
Zinc metalloenzymes play an important role in biology. However, due to the limitation of molecular force field energy restraints used in X-ray refinement at medium or low resolutions, the precise geometry of the zinc coordination environment can be difficult to distinguish from ambiguous electron density maps. Due to the difficulties involved in defining accurate force fields for metal ions, the QM/MM (Quantum-Mechanical /Molecular-Mechanical) method provides an attractive and more general alternative for the study and refinement of metalloprotein active sites. Herein we present three examples that indicate that QM/MM based refinement yields a superior description of the crystal structure based on R and Rfree values and on the inspection of the zinc coordination environment. It is concluded that QM/MM refinement is a useful general tool for the improvement of the metal coordination sphere in metalloenzyme active sites. PMID:20116858
Gulf Coast refiners gain access to more California crudes
Vautrain, J.H.; Sanderson, W.J.
1988-07-11
Refiners east of the Rockies, particularly Gulf Coast refiners, have gained access to easter and central California crudes with the opening of Celeron Corp.'s All American Pipeline (AAPL). Currently, AAPL is carrying a blend of California crudes with properties similar to Alaskan North Slope (ANS). Although the blend is moderate gravity and sulfur content, it is comprised of crudes from several fields in California that display wide variations in quality. Future deliveries east from California will be from regions with even more extremes of quality. To familiarize refiners with the crudes that will become available, some of the properties of these California crudes are discussed, along with some of the problems refiners may encounter in processing these materials.
A Novel Method to Achieve Grain Refinement in Aluminum
NASA Astrophysics Data System (ADS)
Wang, Kui; Jiang, Haiyan; Wang, QuDong; Ye, Bing; Ding, Wenjiang
2016-10-01
A significant grain refinement of pure aluminum is achieved upon addition of TiCN nanoparticles (NPs). Unlike the conventional inoculation, NPs can induce the physical growth restriction through the formation of NP layer on the growing grain surface. An analytical model is developed to quantitatively account for the NP effects on grain growth. The NP-induced growth control can overcome the inherent limitations of inoculation and shed light on a potential method to achieve grain refinement.
VIEW OF RBC (REFINED BICARBONATE) BUILDING LOOKING NORTHEAST. DEMOLITION IN ...
VIEW OF RBC (REFINED BICARBONATE) BUILDING LOOKING NORTHEAST. DEMOLITION IN PROGRESS. "ARM & HAMMER BAKING SODA WAS MADE HERE FOR OVER 50 YEARS AND THEN SHIPPED ACROSS THE STREET TO THE CHURCH & DWIGHT PLANT ON WILLIS AVE. (ON THE RIGHT IN THIS PHOTO). LAYING ON THE GROUND IN FRONT OF C&D BUILDING IS PART OF AN RBC DRYING TOWER. - Solvay Process Company, Refined Bicarbonate Building, Between Willis & Milton Avenues, Solvay, Onondaga County, NY
Interface Conditions for Wave Propagation Through Mesh Refinement Boundaries
NASA Technical Reports Server (NTRS)
Choi, Dae-II; Brown, J. David; Imbiriba, Breno; Centrella, Joan; MacNeice, Peter
2002-01-01
We study the propagation of waves across fixed mesh refinement boundaries in linear and nonlinear model equations in 1-D and 2-D, and in the 3-D Einstein equations of general relativity. We demonstrate that using linear interpolation to set the data in guard cells leads to the production of reflected waves at the refinement boundaries. Implementing quadratic interpolation to fill the guard cells eliminates these spurious signals.
Interface conditions for wave propagation through mesh refinement boundaries
NASA Astrophysics Data System (ADS)
Choi, Dae-Il; David Brown, J.; Imbiriba, Breno; Centrella, Joan; MacNeice, Peter
2004-01-01
We study the propagation of waves across fixed mesh refinement boundaries in linear and nonlinear model equations in 1-D and 2-D, and in the 3-D Einstein equations of general relativity. We demonstrate that using linear interpolation to set the data in guard cells leads to the production of reflected waves at the refinement boundaries. Implementing quadratic interpolation to fill the guard cells suppresses these spurious signals.
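The interpolation orders compared in the two entries above are easy to contrast on a 1-D example. The sketch below assumes a smooth coarse-grid profile: quadratic (3-point Lagrange) interpolation reproduces a quadratic exactly, while linear interpolation leaves an O(h^2) error of the kind that seeds spurious reflected waves at a refinement boundary.

```python
import numpy as np

def guard_linear(u, x_guard, xs):
    """Fill a guard value by linear interpolation from two coarse points."""
    i = np.searchsorted(xs, x_guard) - 1
    t = (x_guard - xs[i]) / (xs[i + 1] - xs[i])
    return (1 - t) * u[i] + t * u[i + 1]

def guard_quadratic(u, x_guard, xs):
    """Quadratic (3-point Lagrange) interpolation, the order that was
    found to suppress spurious reflections at refinement boundaries."""
    i = min(max(np.searchsorted(xs, x_guard) - 1, 0), len(xs) - 3)
    x0, x1, x2 = xs[i], xs[i + 1], xs[i + 2]
    L0 = (x_guard - x1) * (x_guard - x2) / ((x0 - x1) * (x0 - x2))
    L1 = (x_guard - x0) * (x_guard - x2) / ((x1 - x0) * (x1 - x2))
    L2 = (x_guard - x0) * (x_guard - x1) / ((x2 - x0) * (x2 - x1))
    return L0 * u[i] + L1 * u[i + 1] + L2 * u[i + 2]

xs = np.linspace(0.0, 1.0, 11)   # coarse grid
u = xs ** 2                      # smooth profile crossing the interface
xg = 0.37                        # guard-cell location on the fine grid
err_lin = abs(guard_linear(u, xg, xs) - xg ** 2)
err_quad = abs(guard_quadratic(u, xg, xs) - xg ** 2)
```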
Adaptive mesh refinement and adjoint methods in geophysics simulations
NASA Astrophysics Data System (ADS)
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times
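The 8-to-16x figure quoted above is just a product of factors of two; a one-liner makes the bookkeeping explicit (attributing the extra factor to a CFL-style halving of the time step is an assumption consistent with the abstract's "possibly combined with time stepping schemes").

```python
def refine_cost_factor(dim, time_stepping=True):
    """Cost multiplier for halving the mesh spacing: a factor of 2 per
    space dimension, plus one more factor of 2 if a stability-limited
    time step must be halved along with the spacing."""
    return 2 ** (dim + (1 if time_stepping else 0))
```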
Local time-space mesh refinement for simulation of elastic wave propagation in multi-scale media
NASA Astrophysics Data System (ADS)
Kostin, Victor; Lisitsa, Vadim; Reshetova, Galina; Tcheverda, Vladimir
2015-01-01
This paper presents an original approach to local time-space grid refinement for the numerical simulation of wave propagation in models with localized clusters of micro-heterogeneities. The main features of the algorithm are the application of temporal and spatial refinement on two different surfaces; the use of the embedded-stencil technique for the refinement of grid step with respect to time; the use of the Fast Fourier Transform (FFT)-based interpolation to couple variables for spatial mesh refinement. The latter makes it possible to perform filtration of high spatial frequencies, which provides stability in the proposed finite-difference schemes. In the present work, the technique is implemented for the finite-difference simulation of seismic wave propagation and the interaction of such waves with fluid-filled fractures and cavities of carbonate reservoirs. However, this approach is easy to adapt and/or combine with other numerical techniques, such as finite elements, discontinuous Galerkin method, or finite volumes used for approximation of various types of linear and nonlinear hyperbolic equations.
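The FFT-based coupling step can be sketched for a periodic 1-D trace: zero-padding the spectrum both interpolates onto the fine grid and leaves every frequency the coarse grid cannot represent exactly zero, which is the built-in high-frequency filtering the authors rely on for stability. The function below is a generic sketch, not the paper's scheme (even-length input assumed, Nyquist bin handled crudely).

```python
import numpy as np

def fft_interpolate(u, factor):
    """Refine a periodic grid function by zero-padding its Fourier
    spectrum. Frequencies absent on the coarse grid stay exactly
    zero, so the result is automatically low-pass filtered."""
    n = len(u)                      # even length assumed
    U = np.fft.fft(u)
    V = np.zeros(n * factor, dtype=complex)
    half = n // 2
    V[:half] = U[:half]             # non-negative frequencies
    V[-half:] = U[-half:]           # negative frequencies (incl. Nyquist)
    return np.fft.ifft(V).real * factor   # rescale for the longer IFFT

# usage: a coarse sine is reproduced exactly on the fine grid
coarse_x = 2 * np.pi * np.arange(16) / 16
fine_x = 2 * np.pi * np.arange(64) / 64
fine = fft_interpolate(np.sin(coarse_x), 4)
```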
Local time–space mesh refinement for simulation of elastic wave propagation in multi-scale media
Kostin, Victor; Lisitsa, Vadim; Reshetova, Galina; Tcheverda, Vladimir
2015-01-15
This paper presents an original approach to local time–space grid refinement for the numerical simulation of wave propagation in models with localized clusters of micro-heterogeneities. The main features of the algorithm are: the application of temporal and spatial refinement on two different surfaces; the use of the embedded-stencil technique for the refinement of grid step with respect to time; and the use of the Fast Fourier Transform (FFT)-based interpolation to couple variables for spatial mesh refinement. The latter makes it possible to perform filtration of high spatial frequencies, which provides stability in the proposed finite-difference schemes. In the present work, the technique is implemented for the finite-difference simulation of seismic wave propagation and the interaction of such waves with fluid-filled fractures and cavities of carbonate reservoirs. However, this approach is easy to adapt and/or combine with other numerical techniques, such as finite elements, discontinuous Galerkin method, or finite volumes used for approximation of various types of linear and nonlinear hyperbolic equations.
Lietzke, S. E.; Scavetta, R. D.; Yoder, M. D.; Jurnak, F.
1996-01-01
The crystal structure of pectate lyase E (PelE; EC 4.2.2.2) from the enterobacterium Erwinia chrysanthemi has been refined by molecular dynamics techniques to a resolution of 2.2 Å and an R factor (an agreement factor between observed and calculated structure factor amplitudes) of 16.1%. The final model consists of all 355 amino acids and 157 water molecules. The root-mean-square deviation from ideality is 0.009 Å for bond lengths and 1.721° for bond angles. The structure of PelE bound to a lanthanum ion, which inhibits the enzymatic activity, has also been refined and compared to the metal-free protein. In addition, the structures of pectate lyase C (PelC) in the presence and absence of a lutetium ion have been refined further using an improved algorithm for identifying waters and other solvent molecules. The two putative active site regions of PelE have been compared to those in the refined structure of PelC. The analysis of the atomic details of PelE and PelC in the presence and absence of lanthanide ions provides insight into the enzymatic mechanism of pectate lyases. PMID:12226275
NASA Astrophysics Data System (ADS)
Areias, P.; Rabczuk, T.; de Sá, J. César
2016-09-01
We propose an alternative crack propagation algorithm which effectively circumvents the variable transfer procedure adopted with classical mesh adaptation algorithms. The present alternative consists of two stages: a mesh-creation stage where a local damage model is employed with the objective of defining a crack-conforming mesh, and a subsequent analysis stage with a localization limiter in the form of a modified screened Poisson equation which is exempt from crack path calculations. In the second stage, the crack naturally occurs within the refined region. A staggered scheme for standard equilibrium and screened Poisson equations is used in this second stage. Element subdivision is based on edge split operations using a constitutive quantity (damage). To assess the robustness and accuracy of this algorithm, we use five quasi-brittle benchmarks, all successfully solved.
Effect of refining on quality and composition of sunflower oil.
Pal, U S; Patra, R K; Sahoo, N R; Bakhara, C K; Panda, M K
2015-07-01
An experimental oil refining unit has been developed and tested for sunflower oil. Crude pressed sunflower oil obtained from a local oil mill was refined using a chemical method by degumming, neutralization, bleaching and dewaxing. The quality and composition of crude and refined oil were analysed and compared. Reduction in phosphorus content from 6.15 ppm to 0, FFA content from 1.1 to 0.24 % (oleic acid), peroxide value from 22.5 to 7.9 meq/kg, wax content from 1,420 to 200 ppm and colour absorbance value from 0.149 to 0.079 (in spectrophotometer at 460 nm) were observed from crude to refined oil. It was observed that refining did not have significant effect on fatty acid compositions as found in the percentage peak area in the GC-MS chromatogram. The percentage of unsaturated fatty acid in both the oils were recorded to be about 95 % containing 9-Octadecenoic acid (Oleic acid) and 11,14-Eicosadienoic acid (elongated form of linoleic acid). The research results will be useful to small entrepreneurs and farmers for refining of sunflower oil for better marketability.
Refinement of herpesvirus B-capsid structure on parallel supercomputers.
Zhou, Z H; Chiu, W; Haskell, K; Spears, H; Jakana, J; Rixon, F J; Scott, L R
1998-01-01
Electron cryomicroscopy and icosahedral reconstruction are used to obtain the three-dimensional structure of the 1250-A-diameter herpesvirus B-capsid. The centers and orientations of particles in focal pairs of 400-kV, spot-scan micrographs are determined and iteratively refined by common-lines-based local and global refinement procedures. We describe the rationale behind choosing shared-memory multiprocessor computers for executing the global refinement, which is the most computationally intensive step in the reconstruction procedure. This refinement has been implemented on three different shared-memory supercomputers. The speedup and efficiency are evaluated by using test data sets with different numbers of particles and processors. Using this parallel refinement program, we refine the herpesvirus B-capsid from 355-particle images to 13-A resolution. The map shows new structural features and interactions of the protein subunits in the three distinct morphological units: penton, hexon, and triplex of this T = 16 icosahedral particle. PMID:9449358
Ultrasonic Sensor to Characterize Wood Pulp During Refining
Greenwood, Margaret S.; Panetta, Paul D.; Bond, Leonard J.; McCaw, M. W.
2006-12-22
A novel sensor concept has been developed for measuring the consistency, the degree of refining, and the water retention value (WRV) of wood pulp during the refining process. The measurement time is less than 5 minutes and the sensor can operate in a slip-stream of the process line or as an at-line instrument. The consistency is obtained from a calibration, in which the attenuation of ultrasound through the pulp suspension is measured as a function of the solids weight percentage. The degree of refining and the WRV are determined from settling measurements. The settling of a pulp suspension (consistency less than 0.5 wt%) is observed after the mixer that keeps the pulp uniformly distributed is turned off. The attenuation of ultrasound as a function of time is recorded and these data show a peak, after a certain delay, defined as the "peak time." The degree of refining increases with the peak time, as demonstrated by measuring pulp samples with different degrees of refining. The WRV can be determined using the relative peak time, defined as the ratio T2/T1, where T1 is an initial value of the peak time and T2 is the value after additional refining. This method offers an additional WRV test for the industry, because the freeness test is not specific for the WRV.
Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Brown, David A.
New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied, and a methodology is given for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding the wasted effort that results from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
Library of Continuation Algorithms
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
Parallel Clustering Algorithms for Structured AMR
Gunney, B T; Wissink, A M; Hysom, D A
2005-10-26
We compare several different parallel implementation approaches for the clustering operations performed during adaptive gridding operations in patch-based structured adaptive mesh refinement (SAMR) applications. Specifically, we target the clustering algorithm of Berger and Rigoutsos (BR91), which is commonly used in many SAMR applications. The baseline for comparison is a simplistic parallel extension of the original algorithm that works well for up to O(10{sup 2}) processors. Our goal is a clustering algorithm for machines of up to O(10{sup 5}) processors, such as the 64K-processor IBM BlueGene/Light system. We first present an algorithm that avoids the unneeded communications of the simplistic approach to improve the clustering speed by up to an order of magnitude. We then present a new task-parallel implementation to further reduce communication wait time, adding another order of magnitude of improvement. The new algorithms also exhibit more favorable scaling behavior for our test problems. Performance is evaluated on a number of large scale parallel computer systems, including a 16K-processor BlueGene/Light system.
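For readers unfamiliar with the Berger-Rigoutsos scheme being parallelized above, its serial core can be sketched as recursive, signature-based bisection. The sketch below is a simplification of BR91 (it only splits at zero-signature holes and omits the Laplacian-inflection cut of the full algorithm; all names are illustrative):

```python
import numpy as np

def bounding_box(flags):
    """Smallest (r0, r1, c0, c1) half-open box containing all flags."""
    rows = np.flatnonzero(flags.any(axis=1))
    cols = np.flatnonzero(flags.any(axis=0))
    return int(rows[0]), int(rows[-1]) + 1, int(cols[0]), int(cols[-1]) + 1

def cluster(flags, efficiency=0.8):
    """Cover flagged cells with boxes: accept the bounding box if its
    fill ratio meets `efficiency`, otherwise split at an interior zero
    of the row/column signature and recurse on each half."""
    if not flags.any():
        return []
    r0, r1, c0, c1 = bounding_box(flags)
    sub = flags[r0:r1, c0:c1]
    if sub.mean() >= efficiency:
        return [(r0, r1, c0, c1)]
    # signatures: number of flagged cells in each row / column of the box
    for axis, sig in ((0, sub.sum(axis=1)), (1, sub.sum(axis=0))):
        holes = np.flatnonzero(sig[1:-1] == 0)
        if holes.size:
            cut = int(holes[0]) + 1
            a, b = np.zeros_like(flags), np.zeros_like(flags)
            if axis == 0:
                a[r0:r0 + cut, c0:c1] = sub[:cut]
                b[r0 + cut:r1, c0:c1] = sub[cut:]
            else:
                a[r0:r1, c0:c0 + cut] = sub[:, :cut]
                b[r0:r1, c0 + cut:c1] = sub[:, cut:]
            return cluster(a, efficiency) + cluster(b, efficiency)
    return [(r0, r1, c0, c1)]  # no zero-signature cut: accept as-is

flags = np.zeros((8, 8), dtype=bool)
flags[0:2, 0:2] = True          # two well-separated flagged regions
flags[5:7, 5:7] = True
boxes = cluster(flags)
```

The parallelization question studied in the paper is how to distribute exactly this kind of recursion, since each split spawns independent subproblems whose flag data may live on different processors.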
Cherry, Elizabeth M; Greenside, Henry S; Henriquez, Craig S
2003-09-01
A recently developed space-time adaptive mesh refinement algorithm (AMRA) for simulating isotropic one- and two-dimensional excitable media is generalized to simulate three-dimensional anisotropic media. The accuracy and efficiency of the algorithm are investigated for anisotropic and inhomogeneous 2D and 3D domains using the Luo-Rudy 1 (LR1) and FitzHugh-Nagumo models. For a propagating wave in a 3D slab of tissue with LR1 membrane kinetics and rotational anisotropy comparable to that found in the human heart, factors of 50 and 30 are found, respectively, for the speedup and for the savings in memory compared to an algorithm using a uniform space-time mesh at the finest resolution of the AMRA method. For anisotropic 2D and 3D media, we find no reduction in accuracy compared to a uniform space-time mesh. These results suggest that the AMRA will be able to simulate the 3D electrical dynamics of canine ventricles quantitatively for 1 s using 32 1-GHz Alpha processors in approximately 9 h.
Generation of multi-million element meshes for solid model-based geometries: The Dicer algorithm
Melander, D.J.; Benzley, S.E.; Tautges, T.J.
1997-06-01
The Dicer algorithm generates a fine mesh by refining each element in a coarse all-hexahedral mesh generated by any existing all-hexahedral mesh generation algorithm. The fine mesh is geometry-conforming. Using existing all-hexahedral meshing algorithms to define the initial coarse mesh simplifies the overall meshing process and allows dicing to take advantage of improvements in other meshing algorithms immediately. The Dicer algorithm will be used to generate large meshes in support of the ASCI program. The authors also plan to use dicing as the basis for parallel mesh generation. Dicing strikes a careful balance between the interactive mesh generation and multi-million element mesh generation processes for complex 3D geometries, providing an efficient means for producing meshes of varying refinement once the coarse mesh is obtained.
Projection of Discontinuous Galerkin Variable Distributions During Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Ballesteros, Carlos; Herrmann, Marcus
2012-11-01
Adaptive mesh refinement (AMR) methods decrease the computational expense of CFD simulations by increasing the density of solution cells only in areas of the computational domain that are of interest in that particular simulation. In particular, unstructured Cartesian AMR has several advantages over other AMR approaches, as it does not require the creation of numerous guard-cell blocks, neighboring cell lookups become straightforward, and the hexahedral nature of the mesh cells greatly simplifies the refinement and coarsening operations. The h-refinement from this AMR approach can be leveraged by making use of highly-accurate, but computationally costly methods, such as the Discontinuous Galerkin (DG) numerical method. DG methods are capable of high orders of accuracy while retaining stencil locality--a property critical to AMR using unstructured meshes. However, the use of DG methods with AMR requires the use of special flux and projection operators during refinement and coarsening operations in order to retain the high order of accuracy. The flux and projection operators needed for refinement and coarsening of unstructured Cartesian adaptive meshes using Legendre polynomial test functions will be discussed, and their performance will be shown using standard test cases.
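The refinement projection referred to above can be illustrated in one dimension: an L2 projection of a parent cell's Legendre expansion onto its two half-size children. This sketch assumes a 1D Legendre modal basis and is not the authors' code:

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

def refine_coeffs(c_parent, n_quad=8):
    """L2-project a Legendre expansion on a parent cell onto its two
    half-size children; exact for polynomial data.  Child coefficient
    c_k = (2k+1)/2 * integral of u(parent coord) * P_k(child coord)."""
    n = len(c_parent)
    xq, wq = leggauss(n_quad)                 # Gauss points/weights on [-1, 1]
    Pk = np.stack([legval(xq, np.eye(n)[i]) for i in range(n)])
    k = np.arange(n)
    children = []
    for shift in (-0.5, 0.5):                 # left child, then right child
        xp = 0.5 * xq + shift                 # child quad points in parent coords
        u = legval(xp, c_parent)              # parent solution at those points
        children.append((2 * k + 1) / 2 * (Pk * (wq * u)).sum(axis=1))
    return children

c_parent = np.array([1.0, 2.0, 0.5])          # a quadratic on the parent cell
c_left, c_right = refine_coeffs(c_parent)
```

Coarsening is the adjoint operation: projecting the two children back onto the parent basis discards information above the parent's resolution and yields the L2-optimal coarse representation.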
Production and Refining of Magnesium Metal from Turkey Originating Dolomite
NASA Astrophysics Data System (ADS)
Demiray, Yeliz; Yücel, Onuralp
2012-06-01
In this study, crown magnesium produced from Turkish calcined dolomite by the Pidgeon process was refined and subjected to corrosion tests. Using the FactSage thermodynamic program, the metallothermic reduction behavior of magnesium oxide and the silicate structures formed during this reaction were investigated. After the thermodynamic studies were completed, the calcination of dolomite, its metallothermic reduction at temperatures of 1473 K and 1523 K under vacuum (varied from 20 to 200 Pa), and the refining of the crown magnesium were studied. Different flux compositions consisting of MgCl2, KCl, CaCl2, MgO, CaF2, NaCl, and SiO2, with and without B2O3 additions, were selected for the refining process. These tests were carried out at 963 K with settling times of 15, 30 and 45 minutes. A considerable amount of iron was transferred into the sludge phase, and the iron content decreased from 0.08% to 0.027%. The refined magnesium was suitable for the production of various magnesium alloys. As a result of the decreased iron content, a minimum corrosion rate of 2.35 g/m2/day was obtained for the refined magnesium. The results are compared with previous studies.
Refinement performance and mechanism of an Al-50Si alloy
Dai, H.S.; Liu, X.F.
2008-11-15
The microstructure and melt structure of primary silicon particles in an Al-50%Si (wt.%) alloy have been investigated by optical microscopy, scanning electron microscopy, electron probe micro-analysis and high-temperature X-ray diffractometry. The results show that the Al-50Si alloy can be effectively refined by a newly developed Si-20P master alloy, and that the melting temperature is crucial to the refinement process. The minimal overheating degree ΔT_min (the difference between the minimal overheating temperature T_min and the liquidus temperature T_L) for good refinement is about 260 °C. Primary silicon particles can be refined by adding 0.2 wt.% phosphorus at sufficient temperature, and their average size decreases from 2-4 mm to about 30 µm. X-ray diffraction data for the Al-50Si melt demonstrate that a structural change occurs as the melting temperature varies from 1100 °C to 1300 °C. Additionally, the relationship between the refinement mechanism and the melt structure is discussed.
Adaptive mesh refinement for shocks and material interfaces
Dai, William Wenlong
2010-01-01
There are three kinds of adaptive mesh refinement (AMR) in structured meshes. Block-based AMR sometimes over-refines meshes. Cell-based AMR treats cells one by one and thus loses the advantages inherent in structured meshes. Patch-based AMR is intended to combine the advantages of block- and cell-based AMR, i.e., the nature of structured meshes and sharp regions of refinement. But patch-based AMR has its own difficulties; for example, it typically cannot preserve symmetries of physics problems. In this paper, we present an approach to patch-based AMR for hydrodynamics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, management of patches, and load balance. The special features of this patch-based AMR include symmetry preservation, efficiency of refinement across shock fronts and material interfaces, special implementation of flux correction, and patch management in parallel computing environments. To demonstrate the capability of the AMR framework, we show both two- and three-dimensional hydrodynamics simulations with many levels of refinement.
Refined food addiction: a classic substance use disorder.
Ifland, J R; Preuss, H G; Marcus, M T; Rourke, K M; Taylor, W C; Burau, K; Jacobs, W S; Kadish, W; Manso, G
2009-05-01
Overeating in industrial societies is a significant problem, linked to an increasing incidence of overweight and obesity, and the resultant adverse health consequences. We advance the hypothesis that a possible explanation for overeating is that processed foods with high concentrations of sugar and other refined sweeteners, refined carbohydrates, fat, salt, and caffeine are addictive substances. Therefore, many people lose control over their ability to regulate their consumption of such foods. The loss of control over these foods could account for the global epidemic of obesity and other metabolic disorders. We assert that overeating can be described as an addiction to refined foods that conforms to the DSM-IV criteria for substance use disorders. To examine the hypothesis, we relied on experience with self-identified refined foods addicts, as well as critical reading of the literature on obesity, eating behavior, and drug addiction. Reports by self-identified food addicts illustrate behaviors that conform to the 7 DSM-IV criteria for substance use disorders. The literature also supports use of the DSM-IV criteria to describe overeating as a substance use disorder. The observational and empirical data strengthen the hypothesis that certain refined food consumption behaviors meet the criteria for substance use disorders, not unlike tobacco and alcohol. This hypothesis could lead to a new diagnostic category, as well as therapeutic approaches to changing overeating behaviors. PMID:19223127
Single-pass GPU-raycasting for structured adaptive mesh refinement data
NASA Astrophysics Data System (ADS)
Kaehler, Ralf; Abel, Tom
2013-01-01
Structured Adaptive Mesh Refinement (SAMR) is a popular numerical technique for studying processes with high spatial and temporal dynamic range. It reduces computational requirements by adapting the lattice on which the underlying differential equations are solved to represent the solution most efficiently. Particularly in astrophysics and cosmology, such simulations can now capture spatial scales ten or more orders of magnitude apart. The irregular locations and extents of the refined regions in the SAMR scheme, and the fact that different resolution levels partially overlap, pose a challenge for GPU-based direct volume rendering methods. kD-trees have proven advantageous for subdividing the data domain into non-overlapping blocks of equally sized cells, optimal for the texture units of current graphics hardware, but previous GPU-supported raycasting approaches for SAMR data using this data structure required a separate rendering pass for each node, preventing the application of many advanced lighting schemes that require simultaneous access to more than one block of cells. In this paper we present the first single-pass GPU-raycasting algorithm for SAMR data that is based on a kD-tree. The tree is efficiently encoded by a set of 3D textures, which allows complete rays to be adaptively sampled entirely on the GPU without any CPU interaction. We discuss two different data storage strategies for accessing the grid data on the GPU and apply them to several datasets to demonstrate the benefits of the proposed method.
Molecular dynamics force-field refinement against quasi-elastic neutron scattering data
Borreguero Calvo, Jose M.; Lynch, Vickie E.
2015-11-23
Quasi-elastic neutron scattering (QENS) is one of the experimental techniques of choice for probing the dynamics at length and time scales that are also in the realm of full-atom molecular dynamics (MD) simulations. This overlap enables extension of current fitting methods that use time-independent equilibrium measurements to new methods fitting against dynamics data. We present an algorithm that fits simulation-derived incoherent dynamical structure factors against QENS data probing the diffusive dynamics of the system. We showcase the difficulties inherent to this type of fitting problem, namely, the disparity between simulation and experiment environment, as well as limitations in the simulation due to incomplete sampling of phase space. We discuss a methodology to overcome these difficulties and apply it to a set of full-atom MD simulations for the purpose of refining the force-field parameter governing the activation energy of methyl rotation in the octa-methyl polyhedral oligomeric silsesquioxane molecule. Our optimal simulated activation energy agrees with the experimentally derived value up to a 5% difference, well within experimental error. We believe the method will find applicability to other types of diffusive motions and other representation of the systems such as coarse-grain models where empirical fitting is essential. In addition, the refinement method can be extended to the coherent dynamic structure factor with no additional effort.
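The fitting loop described above can be caricatured with a one-parameter toy model: a Lorentzian quasi-elastic line whose width follows an Arrhenius law in the activation energy, refined by least squares against synthetic "experimental" data. All names and numbers below are illustrative assumptions, not the authors' force field or data:

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant, eV/K

def lorentzian(w, gamma):
    """Normalized Lorentzian line of half-width gamma."""
    return gamma / np.pi / (w**2 + gamma**2)

def misfit(ea, w, s_exp, t=300.0, gamma0=10.0):
    """Chi-square between the toy model and data; the line width
    narrows as gamma0 * exp(-Ea / kT) (Arrhenius activation)."""
    gamma = gamma0 * np.exp(-ea / (KB * t))
    return np.sum((lorentzian(w, gamma) - s_exp) ** 2)

def refine_activation_energy(w, s_exp, grid=None):
    """Grid-search least-squares refinement of the activation energy."""
    if grid is None:
        grid = np.linspace(0.01, 0.5, 491)  # eV, 1 meV spacing
    chi2 = [misfit(ea, w, s_exp) for ea in grid]
    return float(grid[int(np.argmin(chi2))])

# synthetic "experiment" generated with Ea = 0.12 eV
w = np.linspace(-5.0, 5.0, 201)
s_exp = lorentzian(w, 10.0 * np.exp(-0.12 / (KB * 300.0)))
ea_fit = refine_activation_energy(w, s_exp)
```

The real pipeline replaces the closed-form model with structure factors computed from MD trajectories, which is where the sampling and environment-mismatch difficulties described in the abstract enter.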
NASA Astrophysics Data System (ADS)
Xu, C.; Sui, H. G.; Li, D. R.; Sun, K. M.; Liu, J. Y.
2016-06-01
Automatic image registration is a vital yet challenging task, particularly for multi-sensor remote sensing images. Given the diversity of the data, it is unlikely that a single registration algorithm or a single image feature will work satisfactorily for all applications. Focusing on this issue, the main contribution of this paper is an automatic optical-to-SAR image registration method using a multi-level and refinement model. Firstly, a multi-level coarse-to-fine registration strategy is presented: visual saliency features are used to obtain a coarse registration, specific area and line features are then used to refine the result, and finally sub-pixel matching is applied using a KNN graph. Secondly, an iterative strategy involving adaptive parameter adjustment for re-extracting and re-matching features is presented. Since almost all feature-based registration methods rely on feature extraction results, this iterative strategy improves the robustness of feature matching, and all parameters are adjusted automatically and adaptively during the iterative procedure. Thirdly, a uniform level set segmentation model for optical and SAR images is presented to segment conjugate features, and the Voronoi diagram is introduced into Spectral Point Matching (VSPM) to further enhance the matching accuracy between the two sets of matching points. Experimental results show that the proposed method can effectively and robustly generate sufficient, reliable point pairs and provide accurate registration.
Kalburgi, P B; Jha, R; Ojha, C S P; Deshannavar, U B
2015-01-01
Stream re-aeration is an extremely important component of the self-purification capacity of streams. To estimate the dissolved oxygen (DO) present in a river, estimation of the re-aeration coefficient is mandatory. Normally, the re-aeration coefficient is expressed as a function of several stream variables, such as mean stream velocity, shear stress velocity, bed slope, flow depth and Froude number, and many empirical equations have been developed over the years. In this work, the 13 most popular empirical re-aeration equations were tested for their applicability to the Ghataprabha River system, Karnataka, India, at various locations. Extensive field data were collected during the period March 2008 to February 2009 from seven different sites on the river, and the re-aeration coefficient was observed using a mass balance approach. The performance of the re-aeration equations has been evaluated using various error estimates, namely the standard error (SE), mean multiplicative error (MME), normalized mean error (NME) and correlation statistics. The results show that the predictive equation developed by Jha et al. (Refinement of predictive re-aeration equations for a typical Indian river. Hydrological Process. 2001;15(6):1047-1060) for a typical Indian river yielded the best agreement in terms of SE, MME, NME and the correlation coefficient r. Furthermore, a refined predictive equation has been developed for the Ghataprabha River using a least-squares algorithm that minimizes the error estimates.
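Predictive re-aeration equations of this family typically take the power-law form K2 = a * V**b * H**c (velocity V, depth H), which a least-squares algorithm can fit after a log transform. The sketch below uses synthetic data and made-up coefficients, not those of Jha et al.:

```python
import numpy as np

def fit_reaeration(v, h, k2):
    """Fit K2 = a * V**b * H**c by linear least squares on the
    log-transformed model log(K2) = log(a) + b*log(V) + c*log(H)."""
    design = np.column_stack([np.ones_like(v), np.log(v), np.log(h)])
    coef, *_ = np.linalg.lstsq(design, np.log(k2), rcond=None)
    return float(np.exp(coef[0])), float(coef[1]), float(coef[2])

# synthetic measurements from a known law (a=5.0, b=0.8, c=-1.2)
rng = np.random.default_rng(0)
v = rng.uniform(0.2, 1.5, 30)    # mean stream velocity, m/s
h = rng.uniform(0.5, 3.0, 30)    # flow depth, m
k2 = 5.0 * v**0.8 * h**-1.2      # re-aeration coefficient, 1/day
a, b, c = fit_reaeration(v, h, k2)
```

With noise-free synthetic data the fit recovers the generating coefficients exactly; with field data the residual of this regression is what the SE/MME/NME statistics in the abstract quantify.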
NASA Astrophysics Data System (ADS)
Anderson, Robert; Pember, Richard; Elliott, Noah
2001-11-01
We present a method, ALE-AMR, for modeling unsteady compressible flow that combines a staggered grid arbitrary Lagrangian-Eulerian (ALE) scheme with structured local adaptive mesh refinement (AMR). The ALE method is a three step scheme on a staggered grid of quadrilateral cells: Lagrangian advance, mesh relaxation, and remap. The AMR scheme uses a mesh hierarchy that is dynamic in time and is composed of nested structured grids of varying resolution. The integration algorithm on the hierarchy is a recursive procedure in which the coarse grids are advanced a single time step, the fine grids are advanced to the same time, and the coarse and fine grid solutions are synchronized. The novel details of ALE-AMR are primarily motivated by the need to reconcile and extend AMR techniques typically employed for stationary rectangular meshes with cell-centered quantities to the moving quadrilateral meshes with staggered quantities used in the ALE scheme. Solutions of several test problems are discussed.
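The recursive integration procedure described above (advance the coarse level, substep the fine level to the same time, synchronize) has a very small skeleton. The sketch below replaces the actual ALE advance with a time-stamp update, purely to show the recursion structure:

```python
def advance(level, dt, hierarchy, ratio=2):
    """Berger-Oliger-style recursive advance: step this level by dt,
    take `ratio` substeps of dt/ratio on the next finer level, then
    synchronize (here just a check that the times line up)."""
    grid = hierarchy[level]
    grid["t"] += dt              # stand-in for Lagrangian advance/relax/remap
    grid["steps"] += 1
    if level + 1 < len(hierarchy):
        for _ in range(ratio):
            advance(level + 1, dt / ratio, hierarchy, ratio)
        # synchronization point: coarse and fine solutions coexist at grid["t"]
        assert abs(hierarchy[level + 1]["t"] - grid["t"]) < 1e-12

hierarchy = [{"t": 0.0, "steps": 0} for _ in range(3)]  # 3 nested levels
advance(0, 1.0, hierarchy)
```

In the real scheme the synchronization step also performs flux correction and replaces coarse data by averaged fine data where the levels overlap; those operations are where the staggered-grid ALE quantities complicate the standard cell-centered AMR machinery.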
Pillowing doublets: Refining a mesh to ensure that faces share at most one edge
Mitchell, S.A.; Tautges, T.J.
1995-11-01
Occasionally one may be confronted by a hexahedral or quadrilateral mesh containing doublets, two faces sharing two edges. In this case, no amount of smoothing will produce a mesh with agreeable element quality: in the planar case, one of these two faces will always have an angle of at least 180 degrees between the two edges. The authors describe a robust scheme for refining a hexahedral or quadrilateral mesh to separate such faces, so that any two faces share at most one edge. Note that this also ensures that two hexahedra share at most one face in the three dimensional case. The authors have implemented this algorithm and incorporated it into the CUBIT mesh generation environment developed at Sandia National Laboratories.
The US petroleum refining industry in the 1980's
Not Available
1990-10-11
As part of the EIA program on petroleum, The US Petroleum Refining Industry in the 1980's presents a historical analysis of the changes that took place in the US petroleum refining industry during the 1980's. It is intended to be of interest to analysts in the petroleum industry, state and federal government officials, Congress, and the general public. The report consists of six chapters and four appendices, including a detailed description of the major events and factors that affected the domestic refining industry during this period. Some of the changes that took place in the 1980's are the result of events that started in the 1970's. The impact of these events on US refinery configuration, operations, economics, and company ownership is examined. 23 figs., 11 tabs.
Optical measurement of pulp quantity in a rotating disc refiner
NASA Astrophysics Data System (ADS)
Alahautala, Taito; Lassila, Erkki; Hernberg, Rolf; Härkönen, Esko; Vuorio, Petteri
2004-11-01
An optical method based on light extinction was used in measuring pulp quantity in the plate gap of a 10 MW thermomechanical pulping refiner for the first time. The relationship between pulp quantity and light extinction was determined by empirical laboratory experiments. The empirical relationship was then applied to interpret the image data obtained from field measurements. The results show the local distribution of pulp in the refiner plate gap for different rotor plate positions and refiner operation points. The maximum relative uncertainty in the measured pulp quantity was 50%. Relative pulp distributions were measured at higher accuracy. The measurements have influenced the development of a laser-based optical diagnostic method that can be applied to the quantitative visualization of technically demanding industrial processes.
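Extinction-based measurements of this kind usually start from a Beer-Lambert-type relationship inverted for the quantity of interest. The paper's actual pulp-quantity relationship was determined empirically in the laboratory, so the functional form and coefficient below are purely illustrative assumptions:

```python
import math

def pulp_from_extinction(i, i0, eps, path_length):
    """Invert a Beer-Lambert form I = I0 * exp(-eps * c * L) for the
    pulp quantity c.  eps is an empirically calibrated extinction
    coefficient; the numbers used below are made-up placeholders."""
    return -math.log(i / i0) / (eps * path_length)

# transmitted/incident intensity ratio of 0.5 across a 10 mm plate gap
c = pulp_from_extinction(i=0.5, i0=1.0, eps=2.0, path_length=0.01)
```

The 50% relative uncertainty quoted in the abstract reflects how strongly the inferred quantity depends on the empirical calibration of the extinction coefficient under refiner conditions.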
Construction and Application of a Refined Hospital Management Chain.
Lihua, Yi
2016-01-01
Large scale development was quite common in the later period of hospital industrialization in China. Today, Chinese hospital management faces such problems as service inefficiency, high human resource costs, and a low rate of capital use. This study analyzes the refined management chain of Wuxi No.2 People's Hospital, which consists of six gears, namely "organizational structure, clinical practice, outpatient service, medical technology, and nursing care and logistics." The gears are based on "flat management system targets, chief of medical staff, centralized outpatient service, intensified medical examinations, vertical nursing management and socialized logistics." The core concepts of refined hospital management are optimizing process flow, reducing waste, improving efficiency, saving costs, and above all taking good care of patients. Keywords: Hospital, Refined, Management chain PMID:27180468
RNA Structure Refinement using the ERRASER-Phenix pipeline
Chou, Fang-Chieh; Echols, Nathaniel; Terwilliger, Thomas C.; Das, Rhiju
2015-01-01
The final step of RNA crystallography involves the fitting of coordinates into electron density maps. The large number of backbone atoms in RNA presents a difficult and tedious challenge, particularly when experimental density is poor. The ERRASER-Phenix pipeline can improve an initial set of RNA coordinates automatically based on a physically realistic model of atomic-level RNA interactions. The pipeline couples diffraction-based refinement in Phenix with the Rosetta-based real-space refinement protocol ERRASER (Enumerative Real-Space Refinement ASsisted by Electron density under Rosetta). The combination of ERRASER and Phenix can improve the geometrical quality of RNA crystallographic models while maintaining or improving the fit to the diffraction data (as measured by Rfree). Here we present a complete tutorial for running ERRASER-Phenix through the Phenix GUI, from the command-line, and via an application in the Rosetta On-line Server that Includes Everyone (ROSIE). PMID:26227049
Steam refining as an alternative to steam explosion.
Schütt, Fokko; Westereng, Bjørge; Horn, Svein J; Puls, Jürgen; Saake, Bodo
2012-05-01
In steam pretreatment the defibration is usually achieved by an explosion at the end of the treatment, but can also be carried out in a subsequent refiner step. A steam explosion and a steam refining unit were compared by using the same raw material and pretreatment conditions, i.e. temperature and time. Smaller particle size was needed for the steam explosion unit to obtain homogenous slurries without considerable amounts of solid chips. A higher amount of volatiles could be condensed from the vapour phase after steam refining. The results from enzymatic hydrolysis showed no significant differences. It could be shown that, beside the chemical changes in the cell wall, the decrease of the particle size is the decisive factor to enhance the enzymatic accessibility while the explosion effect is not required.
Progressive refinement: more than a means to overcome limited bandwidth
NASA Astrophysics Data System (ADS)
Rosenbaum, René; Schumann, Heidrun
2009-01-01
Progressive refinement is commonly understood as a means of overcoming limits imposed by scarce system resources. In this publication, we apply this technology as a novel approach to information presentation and device adaptation. Progressive refinement can handle different kinds of data and comprises innovative ideas for overcoming the many issues posed by large data volumes. The key feature is the systematic use of multiple incremental previews of the data. This temporally deskews the information to be presented and provides a causal flow in the form of a tour through the data. Such a presentation is scalable, leading to significantly simplified adaptation to the available resources, short response times, and reduced visual clutter. Given its beneficial properties and the feedback we received from first implementations, we argue that progressive refinement has high potential far beyond its currently addressed application context.
Segregation Coefficients of Impurities in Selenium by Zone Refining
NASA Technical Reports Server (NTRS)
Su, Ching-Hua; Sha, Yi-Gao
1998-01-01
The purification of Se by a zone refining process was studied. The impurity solute levels along the length of a zone-refined Se sample were measured by spark source mass spectrographic analysis. By comparing the experimental concentration levels with theoretical curves, the segregation coefficient, defined as the ratio of the equilibrium concentration of a given solute in the solid to that in the liquid, k = x_s/x_l, is found to be close to unity for most of the impurities in Se, i.e., between 0.85 and 1.15, with k greater than 1 for Si, Zn, Fe, Na and Al and less than 1 for S, Cl, Ca, P, As, Mn and Cr. This implies that a large number of passes is needed for the successful implementation of zone refining in the purification of Se.
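The conclusion that k near unity demands many passes follows directly from Pfann's classical single-pass zone-refining profile, C(x) = C0 * [1 - (1 - k) * exp(-k x / L)] for zone length L, valid away from the final zone. A minimal numeric check (values illustrative):

```python
import math

def single_pass_profile(x, c0, k, zone_length):
    """Pfann's solute profile after one molten-zone pass through a bar
    of uniform initial concentration c0 (valid except within the final
    zone length): C(x) = c0 * (1 - (1 - k) * exp(-k * x / L))."""
    return c0 * (1.0 - (1.0 - k) * math.exp(-k * x / zone_length))

# strong segregation (k = 0.5) halves the impurity level at the bar start;
# near-unity segregation (k = 0.95) barely purifies at all per pass
c_strong = single_pass_profile(0.0, 1.0, 0.5, 1.0)
c_weak = single_pass_profile(0.0, 1.0, 0.95, 1.0)
```

Since C(0) = c0 * k for a single pass, impurities with k between 0.85 and 1.15 are reduced (or enriched) by at most ~15% per pass, which is why the abstract notes that many passes are required for Se.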
Automated Assume-Guarantee Reasoning by Abstraction Refinement
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Giannakopoulou, Dimitra
2008-01-01
Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.
Zone refining of sintered, microwave-derived YBCO superconductors
Warrier, K.G.K.; Varma, H.K.; Mani, T.V.; Damodaran, A.D.; Balachandran, U.
1993-07-01
Post-sintering treatments such as zone melting under a thermal gradient have been conducted on sintered YBCO tape-cast films. YBCO precursor powder was derived through decomposition of a mixture of nitrates of the cations in a microwave oven for ~4 min. The resulting powder was characterized and made into thin sheets by tape casting, then sintered at 945 C for 5 h. The sintered tapes were subjected to repeated zone refining operations at relatively high speeds of ~30 mm/h. A microstructure having uniformly oriented grains in the a-b plane throughout the bulk of the sample was obtained by three repeated zone refining operations. Details of precursor preparation, microwave processing and its advantages, zone refining conditions, and microstructural features are presented in this paper.
Automated protein model building combined with iterative structure refinement.
Perrakis, A; Morris, R; Lamzin, V S
1999-05-01
In protein crystallography, much time and effort are often required to trace an initial model from an interpretable electron density map and to refine it until it best agrees with the crystallographic data. Here, we present a method to build and refine a protein model automatically and without user intervention, starting from diffraction data extending to resolution higher than 2.3 A and reasonable estimates of crystallographic phases. The method is based on an iterative procedure that describes the electron density map as a set of unconnected atoms and then searches for protein-like patterns. Automatic pattern recognition (model building) combined with refinement, allows a structural model to be obtained reliably within a few CPU hours. We demonstrate the power of the method with examples of a few recently solved structures.
Alloy performance in high temperature oil refining environments
Sorell, G.; Humphries, M.J.; McLaughlin, J.E.
1995-12-31
The performance of steels and alloys in high temperature petroleum refining applications is strongly influenced by detrimental interactions with aggressive process environments. These are encountered in conventional refining processes and especially in processing schemes for fuels conversion and upgrading. Metal-environment interactions can shorten equipment life and cause impairment of mechanical properties, metallurgical stability and weldability. Corrosion and other high temperature attack modes discussed are sulfidation, hydrogen attack, carburization, and metal dusting. Sulfidation is characterized by bulky scales that are generally ineffective corrosion barriers. Metal loss is often accompanied by sub-surface sulfide penetration. Hydrogen attack and carburization proceed without metal loss and are detectable only by metallographic examination. In advanced stages, these deterioration modes cause severe impairment of mechanical properties. Harmful metal-environment interactions are characterized and illustrated with data drawn from test exposures and plant experience. Alloys employed for high temperature oil refining equipment are identified, including some promising newcomers.
NASA Technical Reports Server (NTRS)
Tsiveriotis, K.; Brown, R. A.
1993-01-01
A new method is presented for the solution of free-boundary problems using Lagrangian finite element approximations defined on locally refined grids. The formulation allows for direct transition from coarse to fine grids without introducing non-conforming basis functions. The calculation of elemental stiffness matrices and residual vectors is unaffected by changes in the refinement level, which are accounted for in the loading of elemental data into the global stiffness matrix and residual vector. This technique for local mesh refinement is combined with recently developed mapping methods and Newton's method to form an efficient algorithm for the solution of free-boundary problems, as demonstrated here by sample calculations of cellular interfacial microstructure during directional solidification of a binary alloy.
Trends in heavy oil production and refining in California
Olsen, D.K.; Ramzel, E.B.; Pendergrass, R.A. II.
1992-07-01
This report is one of a series of publications assessing the feasibility of increasing domestic heavy oil production and is part of a study being conducted for the US Department of Energy. This report summarizes trends in oil production and refining in California. Heavy oil (10{degrees} to 20{degrees} API gravity) production in California has increased from 20% of the state's total oil production in the early 1940s to 70% in the late 1980s. In each of the three principal petroleum producing districts (Los Angeles Basin, Coastal Basin, and San Joaquin Valley) oil production has peaked then declined at different times throughout the past 30 years. Thermal production of heavy oil has contributed to making California the largest producer of oil by enhanced oil recovery processes, in spite of low prices for heavy oil and stringent environmental regulation. Opening of the Naval Petroleum Reserve No. 1, Elk Hills (CA) field in 1976 brought a major new source of light oil at a time when light oil production had greatly declined. Although California is a major petroleum-consuming state (in 1989 it used 13.3 billion gallons of gasoline, or 11.5% of US demand), it contributes substantially to the Nation's energy production and refining capability. California receives and refines most of Alaska's 1.7 million barrel per day oil production. With California production, Alaskan oil, and imports brought into California for refining, California has an excess of oil and refined products and is a net exporter to other states. The local surplus of oil inhibits exploitation of California heavy oil resources even though the heavy oil resources exist. Transportation, refining, and competition in the market limit full development of California heavy oil resources.
Diffraction-geometry refinement in the DIALS framework.
Waterman, David G; Winter, Graeme; Gildea, Richard J; Parkhurst, James M; Brewster, Aaron S; Sauter, Nicholas K; Evans, Gwyndaf
2016-04-01
Rapid data collection and modern computing resources provide the opportunity to revisit the task of optimizing the model of diffraction geometry prior to integration. A comprehensive description is given of new software that builds upon established methods by performing a single global refinement procedure, utilizing a smoothly varying model of the crystal lattice where appropriate. This global refinement technique extends to multiple data sets, providing useful constraints to handle the problem of correlated parameters, particularly for small wedges of data. Examples of advanced uses of the software are given and the design is explained in detail, with particular emphasis on the flexibility and extensibility it entails. PMID:27050135
Laser furnace and method for zone refining of semiconductor wafers
NASA Technical Reports Server (NTRS)
Griner, Donald B. (Inventor); zur Burg, Frederick W. (Inventor); Penn, Wayne M. (Inventor)
1988-01-01
A method of zone refining a crystal wafer (116, FIG. 1) comprising the steps of: focusing a laser beam to a small spot (120) of selectable size on the surface of the crystal wafer (116) to melt a spot on the crystal wafer; scanning the small laser beam spot back and forth across the surface of the crystal wafer (116) at a constant velocity; and moving the scanning laser beam across a predetermined zone of the surface of the crystal wafer (116), in a direction normal to the laser beam scanning direction and at a selectable velocity, to melt and refine the entire crystal wafer (116).
40 CFR 80.1442 - What are the provisions for small refiners under the RFS program?
Code of Federal Regulations, 2011 CFR
2011-07-01
... “refiner” shall include foreign refiners. (3) Refiners who qualified as small under 40 CFR 80.1142 do not... 31, 2006. (viii) Name, address, phone number, facsimile number, and e-mail address of a corporate... approved small refiners. (f) If EPA finds that a refiner provided false or inaccurate information in...
19 CFR 19.21 - Smelting and refining in separate establishments.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 19 Customs Duties 1 2013-04-01 2013-04-01 false Smelting and refining in separate establishments... THEREIN Smelting and Refining Warehouses § 19.21 Smelting and refining in separate establishments. (a) If the operations of smelting and refining are not carried on in the same establishment, the smelted...
19 CFR 19.21 - Smelting and refining in separate establishments.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Smelting and refining in separate establishments... THEREIN Smelting and Refining Warehouses § 19.21 Smelting and refining in separate establishments. (a) If the operations of smelting and refining are not carried on in the same establishment, the smelted...
19 CFR 19.21 - Smelting and refining in separate establishments.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 19 Customs Duties 1 2012-04-01 2012-04-01 false Smelting and refining in separate establishments... THEREIN Smelting and Refining Warehouses § 19.21 Smelting and refining in separate establishments. (a) If the operations of smelting and refining are not carried on in the same establishment, the smelted...
19 CFR 19.18 - Smelting and refining; allowance for wastage; withdrawal for consumption.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 19 Customs Duties 1 2014-04-01 2014-04-01 false Smelting and refining; allowance for wastage... OF MERCHANDISE THEREIN Smelting and Refining Warehouses § 19.18 Smelting and refining; allowance for... dutiable metals entirely lost in smelting or refining, or both), shall constitute the quantity of...
19 CFR 19.18 - Smelting and refining; allowance for wastage; withdrawal for consumption.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 19 Customs Duties 1 2012-04-01 2012-04-01 false Smelting and refining; allowance for wastage... OF MERCHANDISE THEREIN Smelting and Refining Warehouses § 19.18 Smelting and refining; allowance for... dutiable metals entirely lost in smelting or refining, or both), shall constitute the quantity of...
19 CFR 19.21 - Smelting and refining in separate establishments.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 19 Customs Duties 1 2014-04-01 2014-04-01 false Smelting and refining in separate establishments... THEREIN Smelting and Refining Warehouses § 19.21 Smelting and refining in separate establishments. (a) If the operations of smelting and refining are not carried on in the same establishment, the smelted...
19 CFR 19.18 - Smelting and refining; allowance for wastage; withdrawal for consumption.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 19 Customs Duties 1 2013-04-01 2013-04-01 false Smelting and refining; allowance for wastage... OF MERCHANDISE THEREIN Smelting and Refining Warehouses § 19.18 Smelting and refining; allowance for... dutiable metals entirely lost in smelting or refining, or both), shall constitute the quantity of...
19 CFR 19.21 - Smelting and refining in separate establishments.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 19 Customs Duties 1 2011-04-01 2011-04-01 false Smelting and refining in separate establishments... THEREIN Smelting and Refining Warehouses § 19.21 Smelting and refining in separate establishments. (a) If the operations of smelting and refining are not carried on in the same establishment, the smelted...
Reasoning about systolic algorithms
Purushothaman, S.
1986-01-01
Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms, based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.
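The fixed interconnection and cycle-by-cycle data movement that such correctness proofs must capture can be illustrated with a toy software simulation of a 1-D systolic convolution. This is a generic output-stationary schedule, not one of the dissertation's arrays:

```python
def systolic_convolution(x, w):
    """Toy cycle-by-cycle simulation of a 1-D systolic array computing
    y[i] = sum_j w[j] * x[i + j].
    Each output cell i holds an accumulator; the input stream x marches
    past the array, and cell i consumes sample x[t] on cycle t = i + j,
    multiplying it by the stationary weight w[j]."""
    k = len(w)
    n_out = len(x) - k + 1
    acc = [0] * n_out                 # one accumulator per output cell
    for t, xv in enumerate(x):        # one clock cycle per input sample
        for i in range(n_out):        # all cells fire in parallel in hardware
            j = t - i
            if 0 <= j < k:
                acc[i] += w[j] * xv
    return acc
```

Which cell sees which sample on which cycle (here, t = i + j) is exactly the kind of schedule that a recurrence-equation representation makes amenable to proof.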
Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
The task of algorithm development at USF continues. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and the gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.
Friends-of-friends galaxy group finder with membership refinement. Application to the local Universe
NASA Astrophysics Data System (ADS)
Tempel, E.; Kipper, R.; Tamm, A.; Gramann, M.; Einasto, M.; Sepp, T.; Tuvikene, T.
2016-04-01
Context. Groups form the most abundant class of galaxy systems. They act as the principal drivers of galaxy evolution and can be used as tracers of the large-scale structure and the underlying cosmology. However, the detection of galaxy groups from galaxy redshift survey data is hampered by several observational limitations. Aims: We improve the widely used friends-of-friends (FoF) group finding algorithm with membership refinement procedures and apply the method to a combined dataset of galaxies in the local Universe. A major aim of the refinement is to detect subgroups within the FoF groups, enabling a more reliable suppression of the fingers-of-God effect. Methods: The FoF algorithm is often suspected of leaving subsystems of groups and clusters undetected. We used a galaxy sample built of the 2MRS, CF2, and 2M++ survey data comprising nearly 80 000 galaxies within the local volume of 430 Mpc radius to detect FoF groups. We conducted a multimodality check on the detected groups in search for subgroups. We furthermore refined group membership using the group virial radius and escape velocity to expose unbound galaxies. We used the virial theorem to estimate group masses. Results: The analysis results in a catalogue of 6282 galaxy groups in the 2MRS sample with two or more members, together with their mass estimates. About half of the initial FoF groups with ten or more members were split into smaller systems with the multimodality check. An interesting comparison to our detected groups is provided by another group catalogue that is based on similar data but a completely different methodology. Two thirds of the groups are identical or very similar. Differences mostly concern the smallest and largest of these other groups, the former sometimes missing and the latter being divided into subsystems in our catalogue. The catalogues are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc
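The core FoF step that the paper's membership refinement builds on can be sketched with a union-find over pairwise links. A brute-force O(n²) illustration only; the actual group finder works in redshift space with adaptive linking lengths and adds the multimodality and unbinding checks described above:

```python
def friends_of_friends(points, linking_length):
    """Minimal FoF sketch: two points closer than the linking length are
    'friends'; groups are the connected components of the friendship graph,
    found with a union-find structure."""
    n = len(points)
    parent = list(range(n))

    def find(i):                      # root with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for i in range(n):                # brute-force pair scan
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            if d2 <= linking_length ** 2:
                union(i, j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [sorted(g) for g in groups.values()]
```

Because friendship is transitive through chains of links, FoF groups can bridge distinct subsystems, which is precisely why the paper follows this step with a multimodality check.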
Mannina, Luisa; D'Imperio, Marco; Capitani, Donatella; Rezzi, Serge; Guillou, Claude; Mavromoustakos, Thomas; Vilchez, María Dolores Molero; Fernández, Antonio Herrera; Thomas, Freddy; Aparicio, Ramon
2009-12-23
A ¹H NMR analytical protocol for the detection of refined hazelnut oils in admixtures with refined olive oils is reported according to ISO format. The main purpose of this research activity is to suggest a novel analytical methodology easily usable by operators with a basic knowledge of NMR spectroscopy. The protocol, developed on 92 oil samples of different origins within the European MEDEO project, is based on ¹H NMR measurements combined with a suitable statistical analysis. It was developed using a 600 MHz instrument and was tested by two independent laboratories on 600 MHz spectrometers, allowing detection down to 10% adulteration of olive oils with refined hazelnut oils. Finally, the potential and limitations of the protocol applied on spectrometers operating at different magnetic fields, that is, at the proton frequencies of 500 and 400 MHz, were investigated.
INSENS classification algorithm report
Hernandez, J.E.; Frerking, C.J.; Myers, D.W.
1993-07-28
This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
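The payoff of high-order stencils that the abstract quantifies can be demonstrated with a textbook fourth-order central difference (a standard stencil for illustration, not one of the paper's two families):

```python
import math

def d1_central4(f, x, h):
    """Fourth-order central difference for f'(x):
    (f(x-2h) - 8 f(x-h) + 8 f(x+h) - f(x+2h)) / (12 h),
    with truncation error O(h^4)."""
    return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12*h)

# Fourth-order accuracy: halving h should shrink the error by about 2^4 = 16.
err_h  = abs(d1_central4(math.sin, 0.0, 0.1)  - 1.0)   # sin'(0) = 1
err_h2 = abs(d1_central4(math.sin, 0.0, 0.05) - 1.0)
```

The same convergence test, applied with eleventh-order stencils, is what lets such schemes remain accurate over O(10^6) periods of propagation at only eight grid points per wavelength.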
An Automatic Registration Algorithm for 3D Maxillofacial Model
NASA Astrophysics Data System (ADS)
Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng
2016-09-01
3D image registration aims at aligning two 3D data sets in a common coordinate system, and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including the facial surface model and the skull model. Our proposed registration algorithm can achieve a good alignment between a partial and a whole maxillofacial model in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
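Step (3), rigid refinement by ICP, can be sketched in a few lines of NumPy using brute-force nearest neighbours and a Kabsch/SVD rigid fit. This is an illustrative sketch, not the authors' implementation (which starts from the coarse SAC-IA alignment):

```python
import numpy as np

def best_rigid_transform(A, B):
    """Kabsch: least-squares rotation R and translation t mapping A onto B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # correct an improper (reflected) fit
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: alternate nearest-neighbour
    correspondence with a rigid least-squares update."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]   # closest dst point for each src point
        R, t = best_rigid_transform(cur, nn)
        cur = cur @ R.T + t
    return cur
```

Like any local refiner, this converges only from a reasonable initial pose, which is why the pipeline places the feature-based coarse alignment before it.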
A Bayesian Adaptive Basis Algorithm for Single Particle Reconstruction
Kucukelbir, Alp; Sigworth, Fred J.; Tagare, Hemant D.
2012-01-01
Traditional single particle reconstruction methods use either the Fourier or the delta function basis to represent the particle density map. This paper proposes a more flexible algorithm that adaptively chooses the basis based on the data. Because the basis adapts to the data, the reconstruction resolution and signal-to-noise ratio (SNR) are improved compared to a reconstruction with a fixed basis. Moreover, the algorithm automatically masks the particle, thereby separating it from the background. This eliminates the need for ad hoc filtering or masking in the refinement loop. The algorithm is formulated in a Bayesian maximum-a-posteriori framework and uses an efficient optimization algorithm for the maximization. Evaluations using simulated and actual cryogenic electron microscopy data show resolution and SNR improvements as well as the effective masking of the particle from the background. PMID:22564910
NASA Technical Reports Server (NTRS)
Thompson, C. P.; Leaf, G. K.; Vanrosendale, J.
1991-01-01
An algorithm is described for the solution of the laminar, incompressible Navier-Stokes equations. The basic algorithm is a multigrid method based on a robust, box-based smoothing step. Its most important feature is the incorporation of automatic, dynamic mesh refinement. The algorithm supports generalized simple domains. The program is based on a standard staggered-grid formulation of the Navier-Stokes equations for robustness and efficiency. Special grid transfer operators were introduced at grid interfaces in the multigrid algorithm to ensure discrete mass conservation. Results are presented for three models: the driven cavity, a backward-facing step, and a sudden expansion/contraction.
Properties of Canadian re-refined base oils
Strigner, P.L.
1980-11-01
The Fuels and Lubricants Laboratory of NRC (Canada) has, as a service, been examining the properties of base stocks made by Canadian re-refiners for over 10 years. Nineteen samples of acid/clay-processed base stocks from six Canadian re-refiners were examined. When well re-refined, the base stocks have excellent properties, including a good response to anti-oxidants and a high degree of cleanliness. Since traces of additives and/or polar compounds do remain, the quality of the base stocks is judged to be slightly inferior to that of comparable virgin refined base stocks. Some suggested specification limits for various properties and some indication of batch-to-batch consistency were obtained. Any use of the limits should be made with caution (e.g., for sulfur), bearing in mind the rapidly changing crude oil picture and the engine and machine technology leading to oil products of differing compositions. Certainly modifications are in order; it may even be desirable to have several grades of base stocks.
Refinement of a Chemistry Attitude Measure for College Students
ERIC Educational Resources Information Center
Xu, Xiaoying; Lewis, Jennifer E.
2011-01-01
This work presents the evaluation and refinement of a chemistry attitude measure, Attitude toward the Subject of Chemistry Inventory (ASCI), for college students. The original 20-item and revised 8-item versions of ASCI (V1 and V2) were administered to different samples. The evaluation for ASCI had two main foci: reliability and validity. This…
Lactation and neonatal nutrition: Defining and refining the critical questions
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper resulted from a conference entitled "Lactation and Milk: Defining and Refining the Critical Questions" held at the University of Colorado School of Medicine from January 18-20, 2012. The mission of the conference was to identify unresolved questions and set future goals for research into ...
AMR++: Object-Oriented Parallel Adaptive Mesh Refinement
Quinlan, D.; Philip, B.
2000-02-02
Adaptive mesh refinement (AMR) computations are complicated by their dynamic nature. The development of solvers for realistic applications is complicated by both the complexity of the AMR and the geometry of realistic problem domains. The additional complexity of distributed memory parallelism within such AMR applications most commonly exceeds the level of complexity that can be reasonably maintained with traditional approaches to software development. This paper presents the details of our object-oriented work on simplifying the use of adaptive mesh refinement in applications with complex geometries, for both serial and distributed memory parallel computation. We present an independent set of object-oriented abstractions (C++ libraries) well suited to the development of such seemingly intractable scientific computations. As an example of the use of this object-oriented approach, we present recent results of an application modeling fluid flow in the eye. Within this example, the geometry is too complicated for a single curvilinear coordinate grid, and so a set of overlapping curvilinear coordinate grids is used. Adaptive mesh refinement and the grid generation work required to support the refinement process are coupled together in the solution of essentially elliptic equations within this domain. This paper focuses on the management of complexity within the development of the AMR++ library, which forms part of the Overture object-oriented framework for the solution of partial differential equations within scientific computing.
40 CFR 80.1620 - Small refiner definition.
Code of Federal Regulations, 2014 CFR
2014-07-01
... companies, and all joint venture partners. (3) Had a corporate-average crude oil capacity less than or equal... “refiner” shall include foreign refiners. (c) The number of employees and crude oil capacity under... and crude oil capacity of any subsidiary companies, any parent company and subsidiaries of the...
Assimilating Remote Ammonia Observations with a Refined Aerosol Thermodynamics Adjoint
Ammonia emissions parameters in North America can be refined in order to improve the evaluation of modeled concentrations against observations. Here, we seek to do so by developing and applying the GEOS-Chem adjoint nested over North America to conduct assimilation of observations...
Refining the Career Education Concept. Monographs on Career Education.
ERIC Educational Resources Information Center
Hoyt, Kenneth
Six papers prepared within the Office of Career Education during the period 1975-76 are contained in this monograph. The papers are presented in their order of preparation, each intended to make some contribution to refinement of the career education concept. "Career Education: A Crusade for Change" discusses the need for, nature of, and…
A Refined Item Digraph Analysis of a Proportional Reasoning Test.
ERIC Educational Resources Information Center
Bart, William M.; Williams-Morris, Ruth
1990-01-01
Refined item digraph analysis (RIDA) is a way of studying diagnostic and prescriptive testing. It permits assessment of a test item's diagnostic value by examining the extent to which the item has properties of ideal items. RIDA is illustrated with the Orange Juice Test, which assesses the proportionality concept. (TJH)
Refining King and Baxter Magolda's Model of Intercultural Maturity
ERIC Educational Resources Information Center
Perez, Rosemary J.; Shim, Woojeong; King, Patricia M.; Baxter Magolda, Marcia B.
2015-01-01
This study examined 110 intercultural experiences from 82 students attending six colleges and universities to explore how students' interpretations of their intercultural experiences reflected their developmental capacities for intercultural maturity. Our analysis of students' experiences confirmed as well as refined and expanded King and Baxter…
Refinement and Selection of Near-native Protein Structures
NASA Astrophysics Data System (ADS)
Zhang, Jiong; Zhang, Jingfen; Shang, Yi; Xu, Dong; Kosztin, Ioan
2013-03-01
In recent years, in silico protein structure prediction has reached a level where a variety of servers can generate large pools of near-native structures. However, the identification and further refinement of the best structures from the pool of decoys continue to be problematic. To address these issues, we have developed a selective refinement protocol (based on the Rosetta software package) and a molecular dynamics (MD) simulation based ranking method (MDR). The refinement of the selected structures is done by employing Rosetta's relax mode, subject to certain constraints. The selection of the final best models is done with MDR by testing their relative stability against gradual heating during all-atom MD simulations. We have implemented the selective refinement protocol and the MDR method in our fully automated server Mufold-MD. Assessments of the performance of the Mufold-MD server in the CASP10 competition and other tests will be presented. This work was supported by grants from NIH. Computer time was provided by the University of Missouri Bioinformatics Consortium.
Singapore refiners in midst of huge construction campaign
Not Available
1992-07-20
This paper reports that Singapore's downstream capacity continues to mushroom. Singapore refiners, upbeat about long-term prospects for petroleum products demand in the Asia-Pacific region, are pressing plans to boost processing capacity. Their plans go beyond capacity expansions: they are proceeding with projects to upgrade refineries to emphasize production of higher-value products and to further integrate refining capabilities with the region's petrochemical industry. Planned expansion and upgrading projects at Singapore refineries call for outlays of more than $1 billion to boost total capacity to about 1.1 million b/d in 1993 and 1.27 million b/d by 1995. That would be the highest level since the mid-1980s, when refiners such as Shell Singapore cut capacity amid an oil glut. Singapore refineries currently are running at an effective full capacity of 1.04 million b/d. Meanwhile, Singapore refiners are aggressively courting customers on the Indochina subcontinent, where long-isolated centrally planned economies are turning gradually to free markets.
Computations of Aerodynamic Performance Databases Using Output-Based Refinement
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2009-01-01
Objectives: Handle complex geometry problems; control discretization errors via solution-adaptive mesh refinement; focus on aerodynamic databases for parametric and optimization studies: 1. Accuracy: satisfy prescribed error bounds. 2. Robustness and speed: may require over 10^5 mesh generations. 3. Automation: avoid user supervision, obtain "expert meshes" independent of user skill, and run every case adaptively in production settings.
Refining the laparoscopic retroperitoneal lymph node dissection for testicular cancer.
Romero, Frederico R; Wagner, Andrew; Brito, Fabio A; Muntener, Michael; Lima, Guilherme C; Kavoussi, Louis R
2006-01-01
Since its initial description, the laparoscopic retroperitoneal lymph node dissection has evolved considerably, from a purely diagnostic tool performed to stage germ cell testicular cancer to a therapeutic operation that fully duplicates the open technique. Herein, we describe the current technique employed at our institution, along with illustrations of all surgical steps, and delineate the refinements of the technique over time.
Use of intensity quotients and differences in absolute structure refinement.
Parsons, Simon; Flack, Howard D; Wagner, Trixie
2013-06-01
Several methods for absolute structure refinement were tested using single-crystal X-ray diffraction data collected using Cu Kα radiation for 23 crystals with no element heavier than oxygen: conventional refinement using an inversion twin model, estimation using intensity quotients in SHELXL2012, estimation using Bayesian methods in PLATON, estimation using restraints consisting of numerical intensity differences in CRYSTALS and estimation using differences and quotients in TOPAS-Academic where both quantities were coded in terms of other structural parameters and implemented as restraints. The conventional refinement approach yielded accurate values of the Flack parameter, but with standard uncertainties ranging from 0.15 to 0.77. The other methods also yielded accurate values of the Flack parameter, but with much higher precision. Absolute structure was established in all cases, even for a hydrocarbon. The procedures in which restraints are coded explicitly in terms of other structural parameters enable the Flack parameter to correlate with these other parameters, so that it is determined along with those parameters during refinement. PMID:23719469
Research of the thorium purification at monazite refinement processes
NASA Astrophysics Data System (ADS)
Shagalov, V. V.; Sobolev, V. I.; Turinskaya, M. V.; Malin, A. V.
2016-06-01
This paper addresses the purification of thorium in monazite refinement processes. We have investigated different solutions containing thorium with different mixes of rare-earth elements. It was found that cation-exchange resin is well suited for reaching the highest yields in the thorium purification process.
Process for electroslag refining of uranium and uranium alloys
Lewis, P.S. Jr.; Agee, W.A.; Bullock, J.S. IV; Condon, J.B.
1975-07-22
A process is described for electroslag refining of uranium and uranium alloys wherein uranium and uranium alloys are melted in a molten layer of a fluoride slag containing up to about 8 weight percent calcium metal. The calcium metal reduces oxides in the uranium and uranium alloys to provide them with an oxygen content of less than 100 parts per million. (auth)
Crisis and Survival in Western European Oil Refining.
ERIC Educational Resources Information Center
Pinder, David A.
1986-01-01
In recent years, oil refining in Western Europe has experienced a period of intense contraction. Discussed are the nature of the crisis, defensive strategies that have been adopted, the spatial consequences of the strategies, and how effective they have been in combatting the root causes of crises. (RM)
Energy Efficiency Improvement in the Petroleum Refining Industry
Worrell, Ernst; Galitsky, Christina
2005-05-01
Information has proven to be an important barrier in industrial energy efficiency improvement. Voluntary government programs aim to assist industry to improve energy efficiency by supplying information on opportunities. ENERGY STAR(R) supports the development of strong strategic corporate energy management programs, by providing energy management information tools and strategies. This paper summarizes ENERGY STAR research conducted to develop an Energy Guide for the Petroleum Refining industry. Petroleum refining in the United States is the largest in the world, providing inputs to virtually every economic sector, including the transport sector and the chemical industry. Refineries typically spend 50 percent of their cash operating costs (e.g., excluding capital costs and depreciation) on energy, making energy a major cost factor and also an important opportunity for cost reduction. The petroleum refining industry consumes about 3.1 Quads of primary energy, making it the single largest industrial energy user in the United States. Typically, refineries can economically improve energy efficiency by 20 percent. The findings suggest that given available resources and technology, there are substantial opportunities to reduce energy consumption cost-effectively in the petroleum refining industry while maintaining the quality of the products manufactured.
Nucleation mechanisms of refined alpha microstructure in beta titanium alloys
NASA Astrophysics Data System (ADS)
Zheng, Yufeng
Due to a great combination of physical and mechanical properties, beta titanium alloys have become promising candidates in the chemical industry, aerospace and biomedical materials. The microstructure of beta titanium alloys is the governing factor that determines their properties and performance, especially the size scale, distribution and volume fraction of the precipitate phase in the parent phase matrix. Therefore, in order to enhance the performance of beta titanium alloys, it is critical to obtain a thorough understanding of microstructural evolution in these alloys under various thermal and/or mechanical processes. The present work focuses on the nucleation mechanisms of refined and super-refined alpha microstructures in beta titanium alloys, in order to study the influence of instabilities within the parent phase matrix, compositional and/or structural, on precipitate nucleation. The study is primarily conducted in Ti-5Al-5Mo-5V-3Cr (wt%, Ti-5553), a commercial material for aerospace applications. Refined and super-refined precipitate microstructures in Ti-5553 are obtained under precisely temperature-controlled heat treatments. The characteristics of each microstructure are investigated in detail using various characterization techniques, such as SEM, TEM, STEM, HRSTEM and 3D atom probe, to describe the morphology, distribution, structure and composition of the microstructure. Nucleation mechanisms of refined and super-refined precipitates are proposed to fully explain the features of the different precipitate microstructures in Ti-5553. The necessary thermodynamic conditions and the detailed process of the phase transformations are introduced. To verify the reliability of the proposed nucleation mechanisms, thermodynamic calculations and phase field modeling simulations are carried out using a database for the simple binary Ti-Mo system.
Procedures and computer programs for telescopic mesh refinement using MODFLOW
Leake, Stanley A.; Claar, David V.
1999-01-01
Ground-water models are commonly used to evaluate flow systems in areas that are small relative to entire aquifer systems. In many of these analyses, simulation of the entire flow system is not desirable or will not allow sufficient detail in the area of interest. The procedure of telescopic mesh refinement allows use of a small, detailed model in the area of interest by taking boundary conditions from a larger model that encompasses the model in the area of interest. Some previous studies have used telescopic mesh refinement; however, better procedures are needed in carrying out telescopic mesh refinement using the U.S. Geological Survey ground-water flow model, referred to as MODFLOW. This report presents general procedures and three computer programs for use in telescopic mesh refinement with MODFLOW. The first computer program, MODTMR, constructs MODFLOW data sets for a local or embedded model using MODFLOW data sets and simulation results from a regional or encompassing model. The second computer program, TMRDIFF, provides a means of comparing head or drawdown in the local model with head or drawdown in the corresponding area of the regional model. The third program, RIVGRID, provides a means of constructing data sets for the River Package, Drain Package, General-Head Boundary Package, and Stream Package for regional and local models using grid-independent data specifying locations of these features. RIVGRID may be needed in some applications of telescopic mesh refinement because regional-model data sets do not contain enough information on locations of head-dependent flow features to properly locate the features in local models. The program is a general utility program that can be used in constructing data sets for head-dependent flow packages for any MODFLOW model under construction.
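The core of telescopic mesh refinement, as the abstract describes it, is taking boundary conditions for the embedded model from the encompassing model. A minimal sketch of that data flow is shown below: regional heads are bilinearly interpolated onto the local model's boundary-cell centers, which is conceptually what MODTMR automates. The function name and grid conventions are illustrative assumptions, not MODFLOW's actual file formats or API.

```python
import numpy as np

def regional_to_local_bc(head_regional, reg_origin, reg_spacing,
                         local_xy_boundary):
    """Interpolate regional-model heads onto local-model boundary points.

    head_regional    : 2-D array of regional heads (row, col)
    reg_origin       : (x0, y0) of the center of regional cell (0, 0)
    reg_spacing      : regional cell size (assumed uniform and square)
    local_xy_boundary: (n, 2) array of local boundary-cell centers
    Returns specified heads for the local model's perimeter.
    """
    x0, y0 = reg_origin
    heads = []
    for x, y in local_xy_boundary:
        # fractional position of the point in the regional grid
        fc, fr = (x - x0) / reg_spacing, (y - y0) / reg_spacing
        c, r = int(fc), int(fr)
        tc, tr = fc - c, fr - r
        h = head_regional
        # bilinear interpolation between the four surrounding cell centers
        heads.append((1 - tr) * ((1 - tc) * h[r, c] + tc * h[r, c + 1])
                     + tr * ((1 - tc) * h[r + 1, c] + tc * h[r + 1, c + 1]))
    return np.array(heads)
```

A linear regional head field is reproduced exactly by this interpolation, which is a quick sanity check on any such boundary-transfer utility.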
The use of Fourier reverse transforms in crystallographic phase refinement
Ringrose, S.
1997-10-08
Often a crystallographer obtains an electron density map which shows only part of the structure. In such cases, the phasing of the trial model is poor enough that the electron density map may show peaks at some of the atomic positions, but other atomic positions are not visible. There may also be extraneous peaks present which are not due to atomic positions. A method for determination of crystal structures that have resisted solution through normal crystallographic methods has been developed. PHASER is a series of FORTRAN programs which aids in the structure solution of poorly phased electron density maps by refining the crystallographic phases. It facilitates the refinement of such poorly phased electron density maps for difficult structures which might otherwise not be solvable. The trial model, which serves as the starting point for the phase refinement, may be acquired by several routes such as direct methods or Patterson methods. Modifications are made to the reverse transform process based on several assumptions. First, the starting electron density map is modified based on the fact that physically the electron density map must be non-negative at all points. In practice a small positive cutoff is used. A reverse Fourier transform is computed based on the modified electron density map. Second, the authors assume that a better electron density map will result by using the observed magnitudes of the structure factors combined with the phases calculated in the reverse transform. After convergence has been reached, more atomic positions and fewer extraneous peaks are observed in the refined electron density map. The starting model need not be very large to achieve success with PHASER; successful phase refinement has been achieved with a starting model that consists of only 5% of the total scattering power of the full molecule. The second part of the thesis discusses three crystal structure determinations.
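The cycle described above (enforce non-negativity on the map, reverse transform, recombine observed magnitudes with the calculated phases, iterate) is a generic density-modification loop, sketched below with FFTs. This is an illustration of the idea in the spirit of PHASER, not its FORTRAN implementation; function and parameter names are invented for the example.

```python
import numpy as np

def refine_phases(f_obs, phases0, n_cycles=50, cutoff=0.0):
    """Iterative phase refinement by density modification (sketch).

    f_obs   : observed structure-factor magnitudes on the FFT grid
    phases0 : starting phases (radians) from a trial model
    """
    phases = phases0
    for _ in range(n_cycles):
        # structure factors: observed magnitudes + current phases
        sf = f_obs * np.exp(1j * phases)
        # forward transform to an electron-density map
        rho = np.fft.ifftn(sf).real
        # physical constraint: electron density cannot be negative
        # (a small positive cutoff is used in practice)
        rho[rho < cutoff] = 0.0
        # reverse transform of the modified map yields updated phases
        phases = np.angle(np.fft.fftn(rho))
    return phases
```

Note that a strictly non-negative map with its own true phases is a fixed point of this loop, which is the basic consistency property such a scheme must satisfy.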
Hirshfeld atom refinement for modelling strong hydrogen bonds.
Woińska, Magdalena; Jayatilaka, Dylan; Spackman, Mark A; Edwards, Alison J; Dominiak, Paulina M; Woźniak, Krzysztof; Nishibori, Eiji; Sugimoto, Kunihisa; Grabowsky, Simon
2014-09-01
High-resolution low-temperature synchrotron X-ray diffraction data of the salt L-phenylalaninium hydrogen maleate are used to test the new automated iterative Hirshfeld atom refinement (HAR) procedure for the modelling of strong hydrogen bonds. The HAR models used present the first examples of Z' > 1 treatments in the framework of wavefunction-based refinement methods. L-Phenylalaninium hydrogen maleate exhibits several hydrogen bonds in its crystal structure, of which the shortest and the most challenging to model is the O-H...O intramolecular hydrogen bond present in the hydrogen maleate anion (O...O distance is about 2.41 Å). In particular, the reconstruction of the electron density in the hydrogen maleate moiety and the determination of hydrogen-atom properties [positions, bond distances and anisotropic displacement parameters (ADPs)] are the focus of the study. For comparison to the HAR results, different spherical (independent atom model, IAM) and aspherical (free multipole model, MM; transferable aspherical atom model, TAAM) X-ray refinement techniques as well as results from a low-temperature neutron-diffraction experiment are employed. Hydrogen-atom ADPs are furthermore compared to those derived from a TLS/rigid-body (SHADE) treatment of the X-ray structures. The reference neutron-diffraction experiment reveals a truly symmetric hydrogen bond in the hydrogen maleate anion. Only with HAR is it possible to freely refine hydrogen-atom positions and ADPs from the X-ray data, which leads to the best electron-density model and the closest agreement with the structural parameters derived from the neutron-diffraction experiment, e.g. the symmetric hydrogen position can be reproduced. The multipole-based refinement techniques (MM and TAAM) yield slightly asymmetric positions, whereas the IAM yields a significantly asymmetric position.
Copps, Kevin D.; Carnes, Brian R.
2008-04-01
We examine algorithms for the finite element approximation of thermal contact models. We focus on the implementation of thermal contact algorithms in SIERRA Mechanics. Following the mathematical formulation of models for tied contact and resistance contact, we present three numerical algorithms: (1) the multi-point constraint (MPC) algorithm, (2) a resistance algorithm, and (3) a new generalized algorithm. We compare and contrast both the correctness and performance of the algorithms in three test problems. We tabulate the convergence rates of global norms of the temperature solution on sequentially refined meshes. We present the results of a parameter study of the effect of contact search tolerances. We outline best practices in using the software for predictive simulations, and suggest future improvements to the implementation.
Fakhari, Abbas; Lee, Taehun
2014-03-01
An adaptive-mesh-refinement (AMR) algorithm for the finite-difference lattice Boltzmann method (FDLBM) is presented in this study. The idea behind the proposed AMR is to remove the need for a tree-type data structure. Instead, pointer attributes are used to determine the neighbors of a certain block via appropriate adjustment of its children identifications. As a result, the memory and time required for tree traversal are completely eliminated, leaving us with an efficient algorithm that is easier to implement and use on parallel machines. To allow different mesh sizes at separate parts of the computational domain, the Eulerian formulation of the streaming process is invoked. As a result, there is no need for rescaling the distribution functions or using a temporal interpolation at the fine-coarse grid boundaries. The accuracy and efficiency of the proposed FDLBM AMR are extensively assessed by investigating a variety of vorticity-dominated flow fields, including Taylor-Green vortex flow, lid-driven cavity flow, thin shear layer flow, and the flow past a square cylinder.
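The pointer-attribute idea above, replacing tree traversal with direct neighbor links that are rewired when a block is refined, can be illustrated with a minimal 1-D sketch. The class and field names are hypothetical; a real FDLBM AMR code would carry field data, levels in each direction, and 2-D/3-D connectivity.

```python
class Block:
    """A 1-D AMR block storing explicit neighbor pointers, so neighbor
    lookup is O(1) with no tree-type data structure (illustrative sketch)."""
    def __init__(self, level, left=None, right=None):
        self.level = level
        self.left, self.right = left, right   # same- or coarser-level neighbors
        self.children = None

    def refine(self):
        """Split into two children and wire their neighbor pointers directly."""
        lo, hi = Block(self.level + 1), Block(self.level + 1)
        lo.right, hi.left = hi, lo            # siblings point at each other
        # outer neighbors: link to the neighbor's adjacent child if it is
        # refined, otherwise fall back to the coarser neighbor itself
        if self.left is not None:
            lo.left = (self.left.children[1] if self.left.children
                       else self.left)
            if self.left.children:
                self.left.children[1].right = lo
        if self.right is not None:
            hi.right = (self.right.children[0] if self.right.children
                        else self.right)
            if self.right.children:
                self.right.children[0].left = hi
        self.children = (lo, hi)
        return self.children
```

Because every block carries its own neighbor references, streaming at fine-coarse boundaries can locate the adjacent block without walking up and down a tree, which is the memory- and time-saving the abstract points to.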
Refinement, Validation and Application of Cloud-Radiation Parameterization in a GCM
Dr. Graeme L. Stephens
2009-04-30
The research performed under this award was conducted along 3 related fronts: (1) Refinement and assessment of parameterizations of sub-grid scale radiative transport in GCMs. (2) Diagnostic studies that use ARM observations of clouds and convection in an effort to understand the effects of moist convection on its environment, including how convection influences clouds and radiation. This aspect focuses on developing and testing methodologies designed to use ARM data more effectively for use in atmospheric models, both at the cloud resolving model scale and the global climate model scale. (3) Use of (1) and (2) in combination with both models and observations of varying complexity to study key radiation feedbacks. Our work toward these objectives thus involved three corresponding efforts. First, novel diagnostic techniques were developed and applied to ARM observations to understand and characterize the effects of moist convection on the dynamical and thermodynamical environment in which it occurs. Second, an in-house GCM radiative transfer algorithm (BUGSrad) was employed along with an optimal estimation cloud retrieval algorithm to evaluate the ability to reproduce cloudy-sky radiative flux observations. Assessments using a range of GCMs with various moist convective parameterizations to evaluate the fidelity with which the parameterizations reproduce key observable features of the environment were also started in the final year of this award. The third study area involved cloud radiation feedbacks, which we examined in both cloud resolving and global climate models.
Anderson, R W; Pember, R B; Elliot, N S
2000-09-26
A new method for the solution of the unsteady Euler equations has been developed. The method combines staggered grid Lagrangian techniques with structured local adaptive mesh refinement (AMR). This method is a precursor to a more general adaptive arbitrary Lagrangian Eulerian (ALE-AMR) algorithm under development, which will facilitate the solution of problems currently at and beyond the boundary of soluble problems by traditional ALE methods by focusing computational resources where they are required. Many of the core issues involved in the development of the ALE-AMR method hinge upon the integration of AMR with a Lagrange step, which is the focus of the work described here. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. These new algorithmic components are first developed in one dimension and are then generalized to two dimensions. Solutions of several model problems involving shock hydrodynamics are presented and discussed.
A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model
Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A
2009-03-03
Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR that relaxes the requirement that material boundaries must be along mesh boundaries.
2D photonic crystal complete band gap search using a cyclic cellular automaton refination
NASA Astrophysics Data System (ADS)
González-García, R.; Castañón, G.; Hernández-Figueroa, H. E.
2014-11-01
We present a refinement method based on a cyclic cellular automaton (CCA) that simulates a crystallization-like process, aided by a heuristic evolutionary method called differential evolution (DE) used to perform an ordered search for full photonic band gaps (FPBGs) in a 2D photonic crystal (PC). The solution is posed as a combinatorial optimization of the elements in a binary array. These elements represent the existence or absence of a dielectric material surrounded by air, thus representing a general geometry whose search space is defined by the number of elements in the array. A block-iterative frequency-domain method was used to compute the FPBGs of a PC, when present. DE has proved useful in combinatorial problems, and we also present an implementation feature that takes advantage of the periodic nature of PCs to enhance the convergence of this algorithm. Finally, we used this methodology to find a PC structure with a 19% bandgap-to-midgap ratio without requiring prior information on suboptimal configurations, and we made a statistical study of how it is affected by disorder at the borders of the structure, compared with a previous work that uses a genetic algorithm.
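Differential evolution is defined over continuous vectors, so applying it to a binary array as described above requires an encoding step. A common adaptation, sketched below, evolves continuous vectors internally and thresholds them at 0.5 to obtain the binary structure that the objective evaluates; this is a generic illustration, not the authors' exact scheme, and the band-structure objective is replaced here by an arbitrary score function.

```python
import numpy as np

def binary_de(fitness, n_bits, pop_size=20, n_gen=100, F=0.7, CR=0.9, seed=0):
    """Differential evolution over a binary array (illustrative sketch).

    `fitness` scores a boolean array (larger is better); all parameter
    values here are generic defaults, not tuned settings.
    """
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, n_bits))
    scores = np.array([fitness(p > 0.5) for p in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            # rand/1 mutation from three mutually distinct members
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), 0.0, 1.0)
            # binomial crossover with the current member
            cross = rng.random(n_bits) < CR
            trial = np.where(cross, mutant, pop[i])
            s = fitness(trial > 0.5)
            if s >= scores[i]:  # greedy selection (ties accepted)
                pop[i], scores[i] = trial, s
    best = scores.argmax()
    return pop[best] > 0.5, scores[best]
```

In the photonic-crystal setting the fitness would be the bandgap-to-midgap ratio computed by the block-iterative frequency-domain solver, evaluated on the dielectric pattern that the binary array encodes.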
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: A gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment and other accepted computational results for a series of low and moderate Reynolds number flows.
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1994-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: a gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
Headd, Jeffrey J.; Echols, Nathaniel; Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Chen, Vincent B.; Moriarty, Nigel W.; Richardson, David C.; Richardson, Jane S.; Adams, Paul D.
2012-01-01
Traditional methods for macromolecular refinement often have limited success at low resolution (3.0–3.5 Å or worse), producing models that score poorly on crystallographic and geometric validation criteria. To improve low-resolution refinement, knowledge from macromolecular chemistry and homology was used to add three new coordinate-restraint functions to the refinement program phenix.refine. Firstly, a ‘reference-model’ method uses an identical or homologous higher resolution model to add restraints on torsion angles to the geometric target function. Secondly, automatic restraints for common secondary-structure elements in proteins and nucleic acids were implemented that can help to preserve the secondary-structure geometry, which is often distorted at low resolution. Lastly, we have implemented Ramachandran-based restraints on the backbone torsion angles. In this method, a ϕ,ψ term is added to the geometric target function to minimize a modified Ramachandran landscape that smoothly combines favorable peaks identified from nonredundant high-quality data with unfavorable peaks calculated using a clash-based pseudo-energy function. All three methods show improved MolProbity validation statistics, typically complemented by a lowered Rfree and a decreased gap between Rwork and Rfree. PMID:22505258
Semioptimal practicable algorithmic cooling
NASA Astrophysics Data System (ADS)
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-04-01
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon’s entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R
2013-01-01
Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows a natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The performed experiments exhibited significant increase in performance in terms of mean absolute surface distance errors (2.54±0.75 mm prior to refinement vs. 1.11±0.43 mm post-refinement, p≪0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction was about 2 min per case. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the
Unsupervised classification algorithm based on EM method for polarimetric SAR images
NASA Astrophysics Data System (ADS)
Fernández-Michelli, J. I.; Hurtado, M.; Areta, J. A.; Muravchik, C. H.
2016-07-01
In this work we develop an iterative classification algorithm using complex Gaussian mixture models for polarimetric complex SAR data. It is an unsupervised algorithm which does not require training data or an initial set of classes. Additionally, it determines the model order from the data, which allows representing the data structure with minimum complexity. The algorithm consists of four steps: initialization, model selection, refinement and smoothing. After a simple initialization stage, the EM algorithm is iteratively applied in the model selection step to compute the model order and an initial classification for the refinement step. The refinement step uses Classification EM (CEM) to reach the final classification, and the smoothing stage improves the results by means of non-linear filtering. The algorithm is applied to both simulated and real Single Look Complex data of the EMISAR mission and compared with the Wishart classification method. We use the confusion matrix and kappa statistic to make the comparison for simulated data whose ground truth is known. We apply the Davies-Bouldin index to compare both classifications for real data. The results obtained for both types of data validate our algorithm and show that its performance is comparable to Wishart's in terms of classification quality.
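The pipeline above (EM for model-order selection, then a hard-assignment CEM pass) can be sketched for ordinary 1-D real data; the complex Wishart/Gaussian mixture of the paper, the initialization details, and the smoothing stage are omitted, and the BIC criterion, restart counts, and function names below are illustrative assumptions.

```python
import numpy as np

def em_gmm(x, k, n_iter=100, seed=0):
    """Plain EM for a 1-D Gaussian mixture: returns means, variances,
    weights and the log-likelihood from the last E-step."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
            / np.sqrt(2 * np.pi * var)
        r = p / (p.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: re-estimate parameters from the responsibilities
        n = r.sum(axis=0) + 1e-300
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / n, 1e-6)
    return mu, var, w, np.log(p.sum(axis=1) + 1e-300).sum()

def select_and_classify(x, k_max=4):
    """Model-order selection by BIC over k = 1..k_max, then a hard
    (classification-EM style) assignment of each sample."""
    best = None
    for k in range(1, k_max + 1):
        # a few restarts guard against poor local optima
        mu, var, w, ll = max((em_gmm(x, k, seed=s) for s in range(3)),
                             key=lambda t: t[3])
        bic = -2 * ll + (3 * k - 1) * np.log(len(x))  # 3k-1 free parameters
        if best is None or bic < best[0]:
            best = (bic, mu, var, w)
    _, mu, var, w = best
    # hard assignment: most probable component per sample
    p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
        / np.sqrt(2 * np.pi * var)
    return len(mu), p.argmax(axis=1)
```

On well-separated data this recovers both the number of classes and the class labels without any training set, which is the sense in which the method is unsupervised.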
Refinement of the Retinogeniculate Synapse by Bouton Clustering
Hong, Y. Kate; Park, SuHong; Litvina, Elizabeth Y.; Morales, Jose; Sanes, Joshua R.; Chen, Chinfei
2014-01-01
Mammalian sensory circuits become refined over development in an activity-dependent manner. Retinal ganglion cell (RGC) axons from each eye first map to their target in the geniculate and then segregate into eye-specific layers by the removal and addition of axon branches. Once segregation is complete, robust functional remodeling continues as the number of afferent inputs to each geniculate neuron decreases from many to a few. It is widely assumed that large-scale axon retraction underlies this later phase of circuit refinement. On the contrary, RGC axons remain stable during functional pruning. Instead, presynaptic boutons grow in size and cluster during this process. Moreover, they exhibit dynamic spatial reorganization in response to sensory experience. Surprisingly, axon complexity decreases only after the completion of the thalamic critical period. Therefore, dynamic bouton redistribution along a broad axon backbone represents an unappreciated form of plasticity underlying developmental wiring and rewiring in the central nervous system. PMID:25284005
Grain refinement of high strength steels to improve cryogenic toughness
NASA Technical Reports Server (NTRS)
Rush, H. F.
1985-01-01
Grain-refining techniques using multistep heat treatments to reduce the grain size of five commercial high-strength steels were investigated. The goal of this investigation was to improve the low-temperature toughness as measured by Charpy V-notch impact test without a significant loss in tensile strength. The grain size of four of five alloys investigated was successfully reduced up to 1/10 of original size or smaller with increases in Charpy impact energy of 50 to 180 percent at -320 F. Tensile properties were reduced from 0 to 25 percent for the various alloys tested. An unexpected but highly beneficial side effect from grain refining was improved machinability.
Segmental Refinement: A Multigrid Technique for Data Locality
Adams, Mark
2014-10-27
We investigate a technique - segmental refinement (SR) - proposed by Brandt in the 1970s as a low-memory multigrid method. The technique is attractive for modern computer architectures because it provides high data locality, minimizes network communication, is amenable to loop fusion, and is naturally highly parallel and asynchronous. The network communication minimization property was recognized by Brandt and Diskin in 1994; we continue this work by developing a segmental refinement method for a finite volume discretization of the 3D Laplacian on massively parallel computers. The asymptotic complexities required to maintain textbook multigrid efficiency are explored experimentally with a simple SR method. A two-level memory model is developed to compare the asymptotic communication complexity of a proposed SR method with that of traditional parallel multigrid. Performance and scalability are evaluated on a Cray XC30 with up to 64K cores. We achieve a modest improvement in scalability over traditional parallel multigrid with a simple SR implementation.
PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit
NASA Technical Reports Server (NTRS)
MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles
1999-01-01
In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.
Measuring coalition functioning: refining constructs through factor analysis.
Brown, Louis D; Feinberg, Mark E; Greenberg, Mark T
2012-08-01
Internal and external coalition functioning is an important predictor of coalition success that has been linked to perceived coalition effectiveness, coalition goal achievement, coalition ability to support evidence-based programs, and coalition sustainability. Understanding which aspects of coalition functioning best predict coalition success requires the development of valid measures of empirically unique coalition functioning constructs. The goal of the present study is to examine and refine the psychometric properties of coalition functioning constructs in the following six domains: leadership, interpersonal relationships, task focus, participation benefits/costs, sustainability planning, and community support. The authors used factor analysis to identify problematic items in the original measure and then piloted new items and scales to create a more robust, psychometrically sound, multidimensional measure of coalition functioning. Scales displayed good construct validity through correlations with other measures. The discussion considers the strengths and weaknesses of the refined instrument. PMID:22193112
A novel application of theory refinement to student modeling
Baffes, P.T.; Mooney, R.J.
1996-12-31
Theory refinement systems developed in machine learning automatically modify a knowledge base to render it consistent with a set of classified training examples. We illustrate a novel application of these techniques to the problem of constructing a student model for an intelligent tutoring system (ITS). Our approach is implemented in an ITS authoring system called ASSERT, which uses theory refinement to introduce errors into an initially correct knowledge base so that it models incorrect student behavior. The efficacy of the approach has been demonstrated by evaluating a tutor developed with ASSERT with 75 students tested on a classification task covering concepts from an introductory course on the C++ programming language. The system produced reasonably accurate models, and students who received feedback based on these models performed significantly better on a post-test than students who received simple reteaching.
Worldwide refining capacity at 75 million b/d level
Rhodes, A.K.
1991-12-23
While worldwide crude distillation capacity held essentially flat in 1991, there was solid growth in the Asia/Pacific region, both in distillation and in octane and conversion capacity. Refinery-by-refinery surveys of both U.S. and worldwide capacity appear together. The U.S. refining industry is apparently in a holding pattern, according to the survey, but it is still the largest and most complex refining industry in the world. It is clear, however, that Western European refineries are beginning to pay more attention to the quality of gasoline and its production. This paper indirectly reflects the obsolescence of much of the communist bloc's technology, with its weak conversion and upgrading capability.
Assume-Guarantee Abstraction Refinement Meets Hybrid Systems
NASA Technical Reports Server (NTRS)
Bogomolov, Sergiy; Frehse, Goran; Greitschus, Marius; Grosu, Radu; Pasareanu, Corina S.; Podelski, Andreas; Strump, Thomas
2014-01-01
Compositional verification techniques in the assume-guarantee style have been successfully applied to transition systems to efficiently reduce the search space by leveraging the compositional nature of the systems under consideration. We adapt these techniques to the domain of hybrid systems with affine dynamics. To build assumptions we introduce an abstraction based on location merging. We integrate the assume-guarantee style analysis with automatic abstraction refinement. We have implemented our approach in the symbolic hybrid model checker SpaceEx. The evaluation shows its practical potential. To the best of our knowledge, this is the first work combining assume-guarantee reasoning with automatic abstraction refinement in the context of hybrid automata.
Venezuela's stake in US refining may grow: xenophobia addressed
Not Available
1987-09-23
Is this an invasion of U.S. oil industry sovereignty, or a happy marriage of upstream and downstream between US and foreign interests? Venezuela, a founding member of the Organization of Petroleum Exporting Countries which has also been a chief supplier to the US during times of peace and war, now owns half of two important US refining and marketing organizations. Many US marketers have felt uneasy about this foreign penetration of their turf. In this issue, for the sake of public information, the entire policy statement from the leader of that Venezuelan market strategy is provided. This issue also contains the following: (1) ED refining netback data for the US Gulf and West Coasts, Rotterdam, and Singapore as of late September, 1987; and (2) ED fuel price/tax series for countries of the Eastern Hemisphere, Sept. 19 edition. 4 figures, 6 tables.
Post-refinement multiscale method for pin power reconstruction
Collins, B.; Seker, V.; Downar, T.; Xu, Y.
2012-07-01
The ability to accurately predict local pin powers in nuclear reactors is necessary to understand the mechanisms that cause fuel pin failure during steady state and transient operation. In the research presented here, methods are developed to improve the local solution using high order methods with boundary conditions from a low order global solution. Several different core configurations were tested to determine the improvement in the local pin powers compared to the standard techniques based on diffusion theory and pin power reconstruction (PPR). The post-refinement multiscale methods use the global solution to determine boundary conditions for the local solution. The local solution is solved using either a fixed boundary source or an albedo boundary condition; this solution is 'post-refinement' and thus has no impact on the global solution. (authors)
Reasoning about systolic algorithms
Purushothaman, S.; Subrahmanyam, P.A.
1988-12-01
The authors present a methodology for verifying the correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify the correctness of systolic algorithms using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.
Crane, N K; Parsons, I D; Hjelmstad, K D
2002-03-21
Adaptive mesh refinement selectively subdivides the elements of a coarse user-supplied mesh to produce a fine mesh with reduced discretization error. Effective use of adaptive mesh refinement coupled with an a posteriori error estimator can produce a mesh that solves a problem to a given discretization error using far fewer elements than uniform refinement. A geometric multigrid solver uses increasingly finer discretizations of the same geometry to produce a very fast and numerically scalable solution to a set of linear equations. Adaptive mesh refinement is a natural method for creating the different meshes required by the multigrid solver. This paper describes the implementation of a scalable adaptive multigrid method on a distributed memory parallel computer. Results are presented that demonstrate the parallel performance of the methodology by solving a linear elastic rocket fuel deformation problem on an SGI Origin 3000. Two challenges must be met when implementing adaptive multigrid algorithms on massively parallel computing platforms. First, although the fine mesh for which the solution is desired may be large and scaled to the number of processors, the multigrid algorithm must also operate on much smaller fixed-size data sets on the coarse levels. Second, the mesh must be repartitioned as it is adapted to maintain good load balancing. In an adaptive multigrid algorithm, separate mesh levels may require separate partitioning, further complicating the load balance problem. This paper shows that, when the proper optimizations are made, parallel adaptive multigrid algorithms perform well on machines with several hundred processors.
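The geometric multigrid solver at the heart of such a method can be illustrated on a uniform 1-D Poisson problem. The sketch below is a minimal recursive V-cycle (weighted-Jacobi smoothing, injection restriction, linear prolongation), under the assumption of a uniform grid with 2^m + 1 points; it is not the paper's parallel adaptive implementation.

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -u'' = f with u(0) = u(1) = 0."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def v_cycle(u, f, h):
    """Recursive V-cycle on a 1-D grid with 2**m + 1 points:
    smooth, restrict the residual, recurse for the coarse-grid error,
    prolong the correction, smooth again."""
    if len(u) == 3:                      # coarsest grid: one unknown, exact solve
        u[1] = f[1] * h * h / 2.0
        return u
    u = jacobi(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)   # residual
    ec = v_cycle(np.zeros(len(u) // 2 + 1), r[::2].copy(), 2.0 * h)  # coarse error
    e = np.zeros_like(u)
    e[::2] = ec                          # prolongation: copy at coincident points,
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])   # linear interpolation in between
    return jacobi(u + e, f, h)
```

In the adaptive setting described in the abstract, the coarse grids would come from the refinement hierarchy rather than from uniform coarsening.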
JT9D ceramic outer air seal system refinement program
NASA Technical Reports Server (NTRS)
Gaffin, W. O.
1982-01-01
The abradability and durability characteristics of the plasma sprayed system were improved by refinement and optimization of the plasma spray process and the metal substrate design. The acceptability of the final seal system for engine testing was demonstrated by an extensive rig test program which included thermal shock tolerance, thermal gradient, thermal cycle, erosion, and abradability tests. An interim seal system design was also subjected to 2500 endurance test cycles in a JT9D-7 engine.
Application of local mesh refinement in the DSMC method
NASA Astrophysics Data System (ADS)
Wu, J.-S.; Tseng, K.-C.; Kuo, C.-H.
2001-08-01
The implementation of an adaptive mesh embedding (h-refinement) scheme using unstructured grids in the two-dimensional Direct Simulation Monte Carlo (DSMC) method is reported. In this technique, local isotropic refinement is used to introduce new meshes where the local cell Knudsen number is less than some preset value. This simple scheme, however, has several severe consequences affecting the performance of the DSMC method. Thus, we have applied a technique to remove the hanging nodes by introducing anisotropic refinement in the interfacial cells. This is accomplished by simply connecting the hanging node(s) with the other non-hanging node(s) in the non-refined, interfacial cells. This remedy adds a negligible amount of work, yet it removes all the difficulties present in the first scheme with hanging nodes. We have tested the proposed scheme for argon gas, using different types of mesh (triangular, quadrilateral, or mixed), on a high-speed driven cavity flow. The results show an improved flow resolution as compared with that of the unadapted mesh. Finally, we have used triangular adaptive meshes to compute two near-continuum gas flows, a supersonic flow over a cylinder and a supersonic flow over a 35° compression ramp. The results show fairly good agreement with previous studies. In summary, the computational penalties incurred by the proposed adaptive schemes are found to be small compared with the DSMC computation itself, and we conclude that the proposed scheme is superior to the original unadapted scheme in terms of solution accuracy.
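The refinement criterion described above, splitting any cell whose local cell Knudsen number falls below a preset value, can be sketched in one routine. This is a hedged 1-D illustration with an assumed mean-free-path field; it is not the paper's 2-D unstructured implementation, and it omits the anisotropic hanging-node treatment.

```python
def refine_cells(cells, mean_free_path, kn_min=1.0, max_levels=10):
    """Isotropically split 1-D cells whose local cell Knudsen number
    Kn_c = lambda(x) / dx is below kn_min, i.e. the cell is too coarse
    to resolve the local mean free path.  Cells are (centre, width) pairs;
    max_levels bounds the recursion depth."""
    out = []
    for (x, dx) in cells:
        if max_levels > 0 and mean_free_path(x) / dx < kn_min:
            h = dx / 2.0
            children = [(x - h / 2.0, h), (x + h / 2.0, h)]
            out += refine_cells(children, mean_free_path, kn_min, max_levels - 1)
        else:
            out.append((x, dx))
    return out
```

In a real DSMC code the mean free path would be estimated from the sampled local density rather than supplied as a function.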
China's refining/petrochemical industry continues expansion
1995-10-09
China's downstream petroleum industry decreased refinery throughput and increased petrochemical production in 1994, compared to 1993 data. A report titled "China Petroleum Industry '94," issued by China Petroleum Newsletter, a publication of the China Petroleum Information Institute, summarized China's refined products and petrochemical production figures for 1994. The report also listed important construction projects at China's downstream plants. This paper presents data from this report.
Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Leng, W.; Zhong, S.
2008-12-01
In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. New techniques, adaptive mesh refinement (AMR), allow local mesh refinement wherever high resolution is needed, while leaving other regions with relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement the AMR techniques into 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (7,492 AMR elements vs. 65,536 uniform elements) and for temperature-dependent viscosity cases (14,620 AMR elements vs. 65,536 uniform elements). We further implement a tracer method into the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results in van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (7,552 AMR elements vs. 16,384 uniform elements, and ~83,000 tracers vs. ~410,000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to precisely trace their evolution. Our AMR code is thus well suited to thermal-chemical convection problems that need high resolution to resolve the evolution of chemical boundaries, such as entrainment problems [Sleep, 1988].
Evolving a Puncture Black Hole with Fixed Mesh Refinement
NASA Technical Reports Server (NTRS)
Imbiriba, Breno; Baker, John; Choi, Dae-Il; Centrella, Joan; Fiske, David R.; Brown, J. David; van Meter, James R.; Olson, Kevin
2004-01-01
We present a detailed study of the effects of mesh refinement boundaries on the convergence and stability of simulations of black hole spacetimes. We find no technical problems. In our applications of this technique to the evolution of puncture initial data, we demonstrate that it is possible to simultaneously maintain second-order convergence near the puncture and extend the outer boundary beyond 100M, thereby approaching the asymptotically flat region in which boundary condition problems are less difficult.
Decadal climate prediction with a refined anomaly initialisation approach
NASA Astrophysics Data System (ADS)
Volpi, Danila; Guemas, Virginie; Doblas-Reyes, Francisco J.; Hawkins, Ed; Nichols, Nancy K.
2016-06-01
In decadal prediction, the objective is to exploit both the sources of predictability from the external radiative forcings and from the internal variability to provide the best possible climate information for the next decade. Predicting the climate system internal variability relies on initialising the climate model from observational estimates. We present a refined method of anomaly initialisation (AI) applied to the ocean and sea ice components of the global climate forecast model EC-Earth, with the following key innovations: (1) the use of a weight applied to the observed anomalies, in order to avoid the risk of introducing anomalies recorded in the observed climate, whose amplitude does not fit in the range of the internal variability generated by the model; (2) the AI of the ocean density, instead of calculating it from the anomaly initialised state of temperature and salinity. An experiment initialised with this refined AI method has been compared with a full field and standard AI experiment. Results show that the use of such refinements enhances the surface temperature skill over part of the North and South Atlantic, part of the South Pacific and the Mediterranean Sea for the first forecast year. However, part of such improvement is lost in the following forecast years. For the tropical Pacific surface temperature, the full field initialised experiment performs the best. The prediction of the Arctic sea-ice volume is improved by the refined AI method for the first three forecast years and the skill of the Atlantic multidecadal oscillation is significantly increased compared to a non-initialised forecast, along the whole forecast time.
A fourth order accurate adaptive mesh refinement method for Poisson's equation
Barad, Michael; Colella, Phillip
2004-08-20
We present a block-structured adaptive mesh refinement (AMR) method for computing solutions to Poisson's equation in two and three dimensions. It is based on a conservative, finite-volume formulation of the classical Mehrstellen methods. This is combined with finite volume AMR discretizations to obtain a method that is fourth-order accurate in solution error, and with easily verifiable solvability conditions for Neumann and periodic boundary conditions.
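The Mehrstellen construction underlying the method can be checked in a few lines: the compact 9-point operator, paired with a right-hand side corrected by (h²/12) times the 5-point Laplacian of f, leaves an O(h⁴) residual on a smooth solution of Poisson's equation. Below is a minimal uniform-grid sketch of that residual (no AMR, no finite-volume machinery; the function name is ours).

```python
import numpy as np

def mehrstellen_residual(u, f, h):
    """Apply the compact 9-point Mehrstellen operator L_h to u and subtract
    the corrected right-hand side f + (h^2/12) * Delta_5 f.  For the Poisson
    equation Delta u = f this residual is O(h^4) at interior points."""
    c = u[1:-1, 1:-1]
    edge = u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
    corner = u[:-2, :-2] + u[:-2, 2:] + u[2:, :-2] + u[2:, 2:]
    Lu = (4 * edge + corner - 20 * c) / (6 * h * h)       # 9-point operator
    lap5_f = (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2] + f[1:-1, 2:]
              - 4 * f[1:-1, 1:-1]) / (h * h)              # 5-point Laplacian of f
    return Lu - (f[1:-1, 1:-1] + h * h / 12 * lap5_f)
```

Halving h should shrink the maximum residual by roughly a factor of 16, confirming the fourth-order truncation.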
REFINING AND END USE STUDY OF COAL LIQUIDS
Unknown
2002-01-01
This document summarizes all of the work conducted as part of the Refining and End Use Study of Coal Liquids. There were several distinct objectives set, as the study developed over time: (1) Demonstration of a Refinery Accepting Coal Liquids; (2) Emissions Screening of Indirect Diesel; (3) Biomass Gasification F-T Modeling; and (4) Updated Gas to Liquids (GTL) Baseline Design/Economic Study.
A Precision Recursive Estimate for Ephemeris Refinement (PREFER)
NASA Technical Reports Server (NTRS)
Gibbs, B.
1980-01-01
A recursive filter/smoother orbit determination program was developed to refine the ephemerides produced by a batch orbit determination program (e.g., CELEST, GEODYN). The program PREFER can handle a variety of ground and satellite to satellite tracking types as well as satellite altimetry. It was tested on simulated data which contained significant modeling errors and the results clearly demonstrate the superiority of the program compared to batch estimation.
Mosconi, E; Sima, D M; Osorio Garcia, M I; Fontanella, M; Fiorini, S; Van Huffel, S; Marzola, P
2014-04-01
Proton magnetic resonance spectroscopy (MRS) is a sensitive method for investigating the biochemical compounds in a tissue. The interpretation of the data relies on the quantification algorithms applied to MR spectra. Each of these algorithms has certain underlying assumptions and may allow one to incorporate prior knowledge, which could influence the quality of the fit. The most commonly considered types of prior knowledge include the line-shape model (Lorentzian, Gaussian, Voigt), knowledge of the resonating frequencies, modeling of the baseline, constraints on the damping factors and phase, etc. In this article, we study whether the statistical outcome of a biological investigation can be influenced by the quantification method used. We chose to study lipid signals because of their emerging role in the investigation of metabolic disorders. Lipid spectra, in particular, are characterized by peaks that are in most cases not Lorentzian, because measurements are often performed in difficult body locations, e.g. in visceral fats close to peristaltic movements in humans or very small areas close to different tissues in animals. This leads to spectra with several peak distortions. Linear combination of Model spectra (LCModel), Advanced Method for Accurate Robust and Efficient Spectral fitting (AMARES), quantitation based on QUantum ESTimation (QUEST), Automated Quantification of Short Echo-time MRS (AQSES)-Lineshape and Integration were applied to simulated spectra, and area under the curve (AUC) values, which are proportional to the quantity of the resonating molecules in the tissue, were compared with true values. A comparison between techniques was also carried out on lipid signals from obese and lean Zucker rats, for which the polyunsaturation value expressed in white adipose tissue should be statistically different, as confirmed by high-resolution NMR measurements (considered the gold standard) on the same animals. LCModel, AQSES-Lineshape, QUEST and Integration
Mesh refinement for uncertainty quantification through model reduction
Li, Jing; Stinis, Panos
2015-01-01
We present a novel way of deciding when and where to refine a mesh in probability space in order to facilitate uncertainty quantification in the presence of discontinuities in random space. A discontinuity in random space makes the application of generalized polynomial chaos expansion techniques prohibitively expensive. The reason is that for discontinuous problems, the expansion converges very slowly. An alternative to using higher terms in the expansion is to divide the random space in smaller elements where a lower degree polynomial is adequate to describe the randomness. In general, the partition of the random space is a dynamic process since some areas of the random space, particularly around the discontinuity, need more refinement than others as time evolves. In the current work we propose a way to decide when and where to refine the random space mesh based on the use of a reduced model. The idea is that a good reduced model can monitor accurately, within a random space element, the cascade of activity to higher degree terms in the chaos expansion. In turn, this facilitates the efficient allocation of computational resources to the areas of random space where they are more needed. For the Kraichnan–Orszag system, the prototypical system to study discontinuities in random space, we present theoretical results which show why the proposed method is sound and numerical results which corroborate the theory.
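A crude 1-D analogue of this refine-where-needed decision can be sketched by monitoring the tail energy of a local Legendre expansion: when the highest-degree coefficients still carry a noticeable share of the energy, the element is bisected. The tail-energy monitor below is only a stand-in for the paper's reduced-model monitor, and the thresholds are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def refine_random_space(fn, a, b, degree=6, tol=1e-3, max_depth=8):
    """Bisect the random-space element [a, b] while the highest-degree
    Legendre coefficients of fn still carry a noticeable share of the
    energy (slow spectral decay signals unresolved structure, e.g. a
    discontinuity).  Returns a list of (a, b) elements."""
    if max_depth == 0:
        return [(a, b)]
    xg, wg = leggauss(degree + 1)
    fx = fn(0.5 * (b - a) * xg + 0.5 * (a + b))       # sample fn on the element
    coeffs = np.array([(2 * k + 1) / 2.0 * np.sum(wg * fx * Legendre.basis(k)(xg))
                       for k in range(degree + 1)])    # projection estimates
    energy, tail = np.sum(coeffs ** 2), np.sum(coeffs[-2:] ** 2)
    if energy > 0 and tail / energy > tol:             # slow decay => refine
        m = 0.5 * (a + b)
        return (refine_random_space(fn, a, m, degree, tol, max_depth - 1)
                + refine_random_space(fn, m, b, degree, tol, max_depth - 1))
    return [(a, b)]
```

A smooth response leaves the element untouched, while a sharp transition concentrates elements around it, mimicking the adaptive partition of random space described in the abstract.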
The oculoauriculovertebral spectrum: Refining the estimate of birth prevalence.
Gabbett, Michael T
2012-06-01
The oculoauriculovertebral spectrum (OAVS) is a well-described pattern of congenital malformations primarily characterized by hemifacial microsomia and/or auricular dysplasia. However, the birth prevalence of OAVS is poorly characterized. Figures ranging from 1 in 150,000 through to 1 in 5,600 can be found in the literature - the latter figure being the most frequently quoted. This study aims to evaluate the reasons behind such discrepant figures and to refine the estimated birth prevalence of OAVS. Published reports on the incidence and prevalence of OAVS were systematically sought. This evidence was critically reviewed. Data from appropriate studies were amalgamated to refine the estimate of the birth prevalence for OAVS. Two main reasons were identified why birth prevalence figures for OAVS are so highly discrepant: differing methods of case ascertainment and the lack of a formal definition for OAVS. This study refines the estimate of birth prevalence for OAVS to between 1 in 40,000 and 1 in 30,000. This number needs to be confirmed in a large, well-designed prospective study using a formally agreed-upon definition for OAVS. PMID:27625806
Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units
Beckingsale, D. A.; Gaudin, W. P.; Hornung, R. D.; Gunney, B. T.; Gamblin, T.; Herdman, J. A.; Jarvis, S. A.
2014-11-17
Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy, adding complex communication and data movement requirements to a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory’s Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.
Refinement of Atomic Structures Against cryo-EM Maps.
Murshudov, G N
2016-01-01
This review describes some of the methods for atomic structure refinement (fitting) against medium/high-resolution single-particle cryo-EM reconstructed maps. Some of the tools developed for macromolecular X-ray crystal structure analysis, especially those encapsulating prior chemical and structural information can be transferred directly for fitting into cryo-EM maps. However, despite the similarities, there are significant differences between data produced by these two techniques; therefore, different likelihood functions linking the data and model must be used in cryo-EM and crystallographic refinement. Although tools described in this review are mostly designed for medium/high-resolution maps, if maps have sufficiently good quality, then these tools can also be used at moderately low resolution, as shown in one example. In addition, the use of several popular crystallographic methods is strongly discouraged in cryo-EM refinement, such as 2Fo-Fc maps, solvent flattening, and feature-enhanced maps (FEMs) for visualization and model (re)building. Two problems in the cryo-EM field are overclaiming resolution and severe map oversharpening. Both of these should be avoided; if data of higher resolution than the signal are used, then overfitting of model parameters into the noise is unavoidable, and if maps are oversharpened, then at least parts of the maps might become very noisy and ultimately uninterpretable. Both of these may result in suboptimal and even misleading atomic models. PMID:27572731
Tracking-refinement modeling for solar-collector control
Biggs, F.
1980-01-01
A closed-loop sun-tracking control used in conjunction with an open-loop system can utilize the unique features of both methods to obtain an improved sun-tracking capability. The open-loop part of the system uses a computer with clock and ephemeris input to acquire the sun at startup, to provide alignment during cloud passage, and to give an approximate sun-tracking capability throughout the day. The closed-loop portion of the system refines this alignment in order to maximize the collected solar power. For a parabolic trough that utilizes a tube along its focal line to collect energy in a fluid, a resistance wire attached to the tube can provide the sensor for the closed-loop part of the control. This kind of tracking refinement helps to compensate for such time-dependent effects as sag of the absorber tube and deformation of the concentrator surface from gravity or wind loading, temperature gradients, and manufacturing tolerances. A model is developed to explain the behavior of a resistance wire which is wrapped around the absorber tube of a parabolic-trough concentrator and used as a sensor in a tracking-refinement control.
Shading-based DEM refinement under a comprehensive imaging model
NASA Astrophysics Data System (ADS)
Peng, Jianwei; Zhang, Yi; Shan, Jie
2015-12-01
This paper introduces an approach to refine coarse digital elevation models (DEMs) based on the shape-from-shading (SfS) technique using a single image. Different from previous studies, this approach is designed for heterogeneous terrain and derived from a comprehensive (extended) imaging model accounting for the combined effect of atmosphere, reflectance, and shading. To solve this intrinsic ill-posed problem, the least squares method and a subsequent optimization procedure are applied in this approach to estimate the shading component, from which the terrain gradient is recovered with a modified optimization method. Integrating the resultant gradients then yields a refined DEM at the same resolution as the input image. The proposed SfS method is evaluated using 30 m Landsat-8 OLI multispectral images and 30 m SRTM DEMs. As demonstrated in this paper, the proposed approach is able to reproduce terrain structures with a higher fidelity; and at medium to large up-scale ratios, can achieve elevation accuracy 20-30% better than the conventional interpolation methods. Further, this property is shown to be stable and independent of topographic complexity. With the ever-increasing public availability of satellite images and DEMs, the developed technique is meaningful for global or local DEM product refinement.
Decontamination of steel by melt refining: A literature review
Ozturk, B.; Fruehan, R.J.
1994-12-31
It has been reported that a large amount of metal waste is produced annually by nuclear fuel processing and nuclear power plants. These metal wastes are contaminated with radioactive elements, such as uranium and plutonium. Current Department of Energy guidelines require retrievable storage of all metallic wastes containing transuranic elements above a certain level. Because of high cost, it is important to develop an effective decontamination and volume reduction method for low level contaminated metals. It has been shown by some investigators that a melt refining technique can be used for the processing of the contaminated metal wastes. In this process, contaminated metal is melted with a suitable flux. The radioactive elements are oxidized and transferred to a slag phase. In order to develop a commercial process it is important to have information on the thermodynamics and kinetics of the removal. Therefore, a literature search was carried out to evaluate the available information on the decontamination of uranium- and transuranic-contaminated plain steel, copper, and stainless steel by a melt refining technique. Emphasis was given to the thermodynamics and kinetics of the removal. Data published in the literature indicate that it is possible to reduce the concentration of radioactive elements to a very low level by the melt refining method. 20 refs.
Evaluation of predictions in the CASP10 model refinement category
Nugent, Timothy; Cozzetto, Domenico; Jones, David T
2014-01-01
Here we report on the assessment results of the third experiment to evaluate the state of the art in protein model refinement, where participants were invited to improve the accuracy of initial protein models for 27 targets. Using an array of complementary evaluation measures, we find that five groups performed better than the naïve (null) method—a marked improvement over CASP9, although only three were significantly better. The leading groups also demonstrated the ability to consistently improve both backbone and side chain positioning, while other groups reliably enhanced other aspects of protein physicality. The top-ranked group succeeded in improving the backbone conformation in almost 90% of targets, suggesting a strategy that for the first time in CASP refinement is successful in a clear majority of cases. A number of issues remain unsolved: the majority of groups still fail to improve the quality of the starting models; even successful groups are only able to make modest improvements; and no prediction is more similar to the native structure than to the starting model. Successful refinement attempts also often go unrecognized, as suggested by the relatively larger improvements when predictions not submitted as model 1 are also considered. Proteins 2014; 82(Suppl 2):98–111. PMID:23900810
The state of animal welfare in the context of refinement.
Zurlo, Joanne; Hutchinson, Eric
2014-01-01
The ultimate goal of the Three Rs is the full replacement of animals used in biomedical research and testing. However, replacement is unlikely to occur in the near future; therefore the scientific community as a whole must continue to devote considerable effort to ensure optimal animal welfare for the benefit of the science and the animals, i.e., the R of refinement. Laws governing the care and use of laboratory animals have recently been revised in Europe and the US and these place greater emphasis on promoting the well-being of the animals in addition to minimizing pain and distress. Social housing for social species is now the default condition, which can present a challenge in certain experimental settings and for certain species. The practice of positive reinforcement training of laboratory animals, particularly non-human primates, is gathering momentum but is not yet universally employed. Enhanced consideration of refinement extends to rodents, particularly mice, whose use is still increasing as more genetically modified models are generated. The wastage of extraneous mice and the method of their euthanasia are refinement issues that still need to be addressed. An international, concerted effort into defining the needs of laboratory animals is still necessary to improve the quality of the animal models used as well as their welfare.
Competing Sudakov veto algorithms
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Verheyen, Rob
2016-07-01
We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
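As a point of reference, the basic single-channel veto algorithm (without the cutoff subtleties, second variables, or channel competition analyzed in the paper) can be sketched as follows; the densities f and g are illustrative choices, not the paper's.

```python
import random

# Hedged sketch of the Sudakov veto algorithm: sample the next-emission
# scale for a target density f using an overestimate g >= f whose
# primitive is analytically invertible.

def f(t):
    return 1.0 / (t * (1.0 + t))           # target emission density

def g(t):
    return 1.0 / t                         # overestimate, g(t) >= f(t)

def next_emission(t_start, t_cutoff, rng=random.random):
    """Return the accepted emission scale, or None if the evolution
    reaches the cutoff without an emission."""
    t = t_start
    while True:
        # Invert the Sudakov factor of the overestimate: for g(t) = 1/t,
        # exp(-(log t_old - log t_new)) = R gives t_new = R * t_old.
        t *= rng()
        if t < t_cutoff:
            return None                    # no emission above the cutoff
        if rng() < f(t) / g(t):            # veto step restores density f
            return t
```

For these densities the no-emission probability from t_start = 10 down to t_cutoff = 0.1 is exp(-∫ f dt) = 0.1, which a large sample of runs reproduces.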
An Improved Snake Model for Refinement of Lidar-Derived Building Roof Contours Using Aerial Images
NASA Astrophysics Data System (ADS)
Chen, Qi; Wang, Shugen; Liu, Xiuguo
2016-06-01
Building roof contours are considered very important geometric data, which have been widely applied in many fields, including but not limited to urban planning, land investigation, change detection and military reconnaissance. Currently, the demand for building contours at a finer scale (especially in urban areas) has been raised in a growing number of studies such as urban environment quality assessment, urban sprawl monitoring and urban air pollution modelling. LiDAR is known as an effective means of acquiring 3D roof points with high elevation accuracy. However, the precision of the building contour obtained from LiDAR data is restricted by its relatively low scanning resolution. With the use of the texture information from high-resolution imagery, the precision can be improved. In this study, an improved snake model is proposed to refine the initial building contours extracted from LiDAR. First, an improved snake model is constructed with the constraints of the deviation angle, image gradient, and area. Then, the nodes of the contour are moved within a certain range to find the best optimized result using a greedy algorithm. Considering both precision and efficiency, the candidate shift positions of the contour nodes are constrained, and the searching strategy for the candidate nodes is explicitly designed. The experiments on three datasets indicate that the proposed method for building contour refinement is effective and feasible. The average quality index is improved from 91.66% to 93.34%. The statistics of the evaluation results for every single building demonstrate that 77.0% of the contours are updated with a higher quality index.
A new adaptive mesh refinement data structure with an application to detonation
NASA Astrophysics Data System (ADS)
Ji, Hua; Lien, Fue-Sang; Yee, Eugene
2010-11-01
A new Cell-based Structured Adaptive Mesh Refinement (CSAMR) data structure is developed. In our CSAMR data structure, Cartesian-like indices are used to identify each cell. With these stored indices, the information on the parent, children and neighbors of a given cell can be accessed simply and efficiently. Owing to the usage of these indices, the computer memory required for storage of the proposed AMR data structure is only 5/8 word per cell, in contrast to the conventional oct-tree [P. MacNeice, K.M. Olson, C. Mobary, R. deFainchtein, C. Packer, PARAMESH: a parallel adaptive mesh refinement community toolkit, Comput. Phys. Commun. 126 (2000) 330] and the fully threaded tree (FTT) [A.M. Khokhlov, Fully threaded tree algorithms for adaptive mesh fluid dynamics simulations, J. Comput. Phys. 143 (1998) 519] data structures which require, respectively, 19 and 2 3/8 words per cell for storage of the connectivity information. Because the connectivity information (e.g., parent, children and neighbors) of a cell in our proposed AMR data structure can be accessed using only the cell indices, a tree structure which was required in previous approaches for the organization of the AMR data is no longer needed for this new data structure. Instead, a much simpler hash table structure is used to maintain the AMR data, with the entry keys in the hash table obtained directly from the explicitly stored cell indices. The proposed AMR data structure simplifies the implementation and parallelization of an AMR code. Two three-dimensional test cases are used to illustrate and evaluate the computational performance of the new CSAMR data structure.
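The index arithmetic that makes the tree unnecessary can be illustrated with a minimal sketch (names and details are assumptions, not the paper's code): with Cartesian-like indices (level, i, j, k), parent, children, and neighbors follow from integer arithmetic alone, and a hash table keyed by the index holds the cells.

```python
# Illustrative sketch of index-based AMR connectivity on a cell-based
# refinement hierarchy with 2:1 refinement per axis.

def parent(level, i, j, k):
    """Index of the coarser cell containing this one."""
    return (level - 1, i // 2, j // 2, k // 2)

def children(level, i, j, k):
    """Indices of the 8 finer cells produced by refining this cell."""
    return [(level + 1, 2 * i + di, 2 * j + dj, 2 * k + dk)
            for di in (0, 1) for dj in (0, 1) for dk in (0, 1)]

def neighbor(level, i, j, k, axis, step):
    """Same-level neighbor, offset by `step` along `axis` (0, 1, or 2)."""
    idx = [i, j, k]
    idx[axis] += step
    return (level, *idx)

# A hash table keyed by the index replaces the tree entirely.
cells = {}
cells[(0, 0, 0, 0)] = {"refined": True}
for c in children(0, 0, 0, 0):
    cells[c] = {"refined": False}
```

Because every connectivity query is pure arithmetic on the stored index, no parent/child/neighbor pointers need to be stored per cell, which is the source of the memory savings the abstract describes.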
F-8C adaptive control law refinement and software development
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Stein, G.
1981-01-01
An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.
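The parallel-channel idea (fixed filters in parameter space scored by innovation likelihood, avoiding an iterative maximum-likelihood search) can be sketched for a scalar system; the model, noise levels, and candidate grid below are illustrative assumptions, not the F-8C design.

```python
import math
import random

# Hedged sketch of multiple-channel parameter estimation: each candidate
# parameter value defines one channel, scored by the log-likelihood of
# its one-step-prediction innovations.

def innovation_log_likelihood(a_model, ys, r=0.25):
    """Log-likelihood of observations under x_{k+1} = a*x_k + noise,
    y_k = x_k, for one candidate parameter a_model (one channel)."""
    ll, x = 0.0, ys[0]
    for y in ys[1:]:
        innov = y - a_model * x            # innovation of this channel
        ll += -0.5 * (innov * innov / r + math.log(2.0 * math.pi * r))
        x = y
    return ll

def estimate(ys, candidates=(0.5, 0.7, 0.9)):
    """Pick the channel (fixed point in parameter space) that best
    explains the data -- no iterative search required."""
    return max(candidates, key=lambda a: innovation_log_likelihood(a, ys))

# Simulate data with a_true = 0.9 and recover it from the channel bank.
random.seed(1)
ys = [5.0]
for _ in range(200):
    ys.append(0.9 * ys[-1] + random.gauss(0.0, 0.5))
```

The channel whose assumed dynamics match the data accumulates the highest innovation likelihood, which is the essence of running parallel filters at fixed locations in parameter space.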
40 CFR 80.1347 - What are the sampling and testing requirements for refiners and importers?
Code of Federal Regulations, 2011 CFR
2011-07-01
... January 1, 2011; (ii) Beginning January 1, 2015 for small refiners approved under § 80.1340; (iii) Beginning January 1 of the year prior to 2015 in which a small refiner approved under § 80.1340 has...
40 CFR 80.1347 - What are the sampling and testing requirements for refiners and importers?
Code of Federal Regulations, 2010 CFR
2010-07-01
... January 1, 2011; (ii) Beginning January 1, 2015 for small refiners approved under § 80.1340; (iii) Beginning January 1 of the year prior to 2015 in which a small refiner approved under § 80.1340 has...
40 CFR 80.1339 - Who is not eligible for the provisions for small refiners?
Code of Federal Regulations, 2011 CFR
2011-07-01
... eligible for the hardship provisions for small refiners: (a) A refiner with one or more refineries built... employees or crude capacity is due to operational changes at the refinery or a company sale...
40 CFR 80.1142 - What are the provisions for small refiners under the RFS program?
Code of Federal Regulations, 2011 CFR
2011-07-01
... company owned the refinery as of December 31, 2004. (4) Name, address, phone number, facsimile number, and... approved small refiners. (f) If EPA finds that a refiner provided false or inaccurate information in...
40 CFR 80.1339 - Who is not eligible for the provisions for small refiners?
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... section, the refiner may not generate gasoline benzene credits under § 80.1275(b)(3) for any of...
40 CFR 80.1339 - Who is not eligible for the provisions for small refiners?
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... section, the refiner may not generate gasoline benzene credits under § 80.1275(b)(3) for any of...
40 CFR 80.1342 - What compliance options are available to small refiners under this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Benzene Small Refiner Provisions § 80.1342 What compliance options are available to small refiners under... this section must comply with the applicable benzene standards at § 80.1230 beginning with the...
40 CFR 80.1342 - What compliance options are available to small refiners under this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Benzene Small Refiner Provisions § 80.1342 What compliance options are available to small refiners under... this section must comply with the applicable benzene standards at § 80.1230 beginning with the...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-03
... Internal Revenue Service Credit for Renewable Electricity Production, Refined Coal Production, and Indian Coal Production, and Publication of Inflation Adjustment Factors and Reference Prices for Calendar Year... in determining the availability of the credit for renewable electricity production, refined...
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
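The shift-and-mask search can be illustrated with a minimal sketch (the function and the key set are hypothetical, not NASA's code): try mask widths and shift amounts until every key maps to a distinct value, then store each key in its slot so membership is a single constant-time comparison with no secondary hashing or table search.

```python
# Hedged sketch of the shift-and-mask synthesis idea for collision-free
# membership tests over a static key set.

def find_shift_mask(keys, max_shift=16, max_bits=12):
    """Search for (shift, mask) under which the key set maps injectively."""
    for bits in range(1, max_bits + 1):
        mask = (1 << bits) - 1
        for shift in range(max_shift + 1):
            mapped = {(k >> shift) & mask for k in keys}
            if len(mapped) == len(keys):   # injective: a perfect hash
                return shift, mask
    return None                            # no solution in this family

keys = [101, 2048, 777, 31, 4100]
shift, mask = find_shift_mask(keys)
slots = {(k >> shift) & mask: k for k in keys}

def member(x):
    # One hash plus one comparison: no secondary hash, no searching.
    return slots.get((x >> shift) & mask) == x
```

Because the synthesized (shift, mask) pair is chosen so that no two keys collide, `member` runs in constant time for any query, matching the guarantee described in the abstract.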
Nonlinear Global Optimization Using Curdling Algorithm in Mathematica Environment
Craig Loehle, Ph. D.
1997-08-05
A derivative-free, grid-refinement algorithm for nonlinear optimization was developed and implemented in software as OPTIMIZE. This approach overcomes a number of deficiencies in existing approaches; most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to two (and potentially three) dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass, and it is an inherently parallel algorithm. OPTIMIZE-M is a modification of OPTIMIZE designed for use within the Mathematica environment created by Wolfram Research.
Mirjalili, Vahid; Feig, Michael
2013-02-12
A molecular dynamics (MD) simulation based protocol for structure refinement of template-based model predictions is described. The protocol involves the application of restraints, ensemble averaging of selected subsets, interpolation between initial and refined structures, and assessment of refinement success. It is found that sub-microsecond MD-based sampling when combined with ensemble averaging can produce moderate but consistent refinement for most systems in the CASP targets considered here.
Geophysical Inversion through Hierarchical Genetic Algorithm Scheme
NASA Astrophysics Data System (ADS)
Furman, Alex; Huisman, Johan A.
2010-05-01
Geophysical investigation is a powerful tool that allows non-invasive and non-destructive mapping of subsurface states and properties. However, the non-uniqueness of the inversion process prevents these methods from being of more quantitative use. One major research direction is to constrain the inverse problem with hydrological observations and models. An alternative to the commonly used direct inversion methods is global optimization schemes (such as genetic algorithms and Markov chain Monte Carlo methods). However, the major limitation here is the desired high resolution of the tomographic image, which leads to a large number of parameters and an unreasonably high computational effort when using global optimization schemes. One way to overcome these problems is to combine the advantages of both direct and global inversion methods through hierarchical inversion: starting the inversion at a relatively coarse parameter resolution, achieving a good inversion using one of the two inversion schemes (global or direct), and then refining the resolution and applying a combination of global and direct inversion schemes for the whole domain or locally. In this work we explore, through synthetic case studies, the option of using a global optimization scheme for inversion of electrical resistivity tomography data through hierarchical refinement of the model resolution.
Totally parallel multilevel algorithms
NASA Technical Reports Server (NTRS)
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
76 FR 61074 - USDA Increases the Fiscal Year 2011 Tariff-Rate Quota for Refined Sugar
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-03
... Office of the Secretary USDA Increases the Fiscal Year 2011 Tariff-Rate Quota for Refined Sugar AGENCY... increase in the fiscal year (FY) 2011 refined sugar tariff-rate quota (TRQ) of 136,078 metric tons raw... MTRV for sugars, syrups, and molasses (collectively referred to as refined sugar) described...
78 FR 25415 - Waivers Under the Refined Sugar Re-Export Program
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-01
... Office of the Secretary Waivers Under the Refined Sugar Re-Export Program AGENCY: Office of the Secretary... waiving certain provisions in the Refined Sugar Re-Export Program, effective today. These actions are authorized under the waiver authority for the Refined Sugar Re-Export Program regulation at 7 CFR...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 3 2014-04-01 2014-04-01 false Polyhydric alcohol esters of oxidatively refined... SANITIZERS Certain Adjuvants and Production Aids § 178.3770 Polyhydric alcohol esters of oxidatively refined (Gersthofen process) montan wax acids. Polyhydric alcohol esters of oxidatively refined (Gersthofen...
40 CFR 80.1442 - What are the provisions for small refiners under the RFS program?
Code of Federal Regulations, 2014 CFR
2014-07-01
... joint venture partners. (iii) The refiner had a corporate-average crude oil capacity less than or equal... “refiner” shall include foreign refiners. (3) Refiners who qualified as small under 40 CFR 80.1142 do not... government employees. (vi) The total corporate crude oil capacity of each refinery as reported to the...
40 CFR 80.1442 - What are the provisions for small refiners under the RFS program?
Code of Federal Regulations, 2013 CFR
2013-07-01
... joint venture partners. (iii) The refiner had a corporate-average crude oil capacity less than or equal... “refiner” shall include foreign refiners. (3) Refiners who qualified as small under 40 CFR 80.1142 do not... government employees. (vi) The total corporate crude oil capacity of each refinery as reported to the...
40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.
Code of Federal Regulations, 2012 CFR
2012-07-01
... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply...
40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.
Code of Federal Regulations, 2013 CFR
2013-07-01
... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply...
40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.
Code of Federal Regulations, 2014 CFR
2014-07-01
... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply...
40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.
Code of Federal Regulations, 2011 CFR
2011-07-01
... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply...
40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-28
... AGENCY Proposed CERCLA Administrative Settlement Agreement and Order on Consent for the Mercury Refining... ``Settling Parties'') pertaining to the Mercury Refining Superfund Site (``Site'') located in the Towns of... each Settling Party to the EPA Hazardous Substance Superfund Mercury Refining Superfund Site...
Navigation Algorithms for the SeaWiFS Mission
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Patt, Frederick S.; McClain, Charles R. (Technical Monitor)
2002-01-01
The navigation algorithms for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) were designed to meet the requirement of 1-pixel accuracy: a standard deviation (sigma) of 2. The objective has been to extract the best possible accuracy from the spacecraft telemetry and avoid the need for costly manual renavigation or geometric rectification. The requirement is addressed by postprocessing of both the Global Positioning System (GPS) receiver and Attitude Control System (ACS) data in the spacecraft telemetry stream. The navigation algorithms described are separated into four areas: orbit processing, attitude sensor processing, attitude determination, and final navigation processing. There has been substantial modification during the mission of the attitude determination and attitude sensor processing algorithms. For the former, the basic approach was completely changed during the first year of the mission, from a single-frame deterministic method to a Kalman smoother. This was done for several reasons: a) to improve the overall accuracy of the attitude determination, particularly near the sub-solar point; b) to reduce discontinuities; c) to support the single-ACS-string spacecraft operation that was started after the first mission year, which causes gaps in attitude sensor coverage; and d) to handle data quality problems (which became evident after launch) in the direct-broadcast data. The changes to the attitude sensor processing algorithms primarily involved the development of a model for the Earth horizon height, also needed for single-string operation; the incorporation of improved sensor calibration data; and improved data quality checking and smoothing to handle the data quality issues. The attitude sensor alignments have also been revised multiple times, generally in conjunction with the other changes. The orbit and final navigation processing algorithms have remained largely unchanged during the mission, aside from refinements to data quality checking.
Learning Cue Phrase Patterns from Radiology Reports Using a Genetic Algorithm
Patton, Robert M; Beckerman, Barbara G; Potok, Thomas E
2009-01-01
Various computer-assisted technologies have been developed to assist radiologists in detecting cancer; however, the algorithms still lack high sensitivity and specificity and must undergo machine learning against a training set with known pathologies in order to be refined to higher validity. This work describes an approach to learning cue phrase patterns in radiology reports that utilizes a genetic algorithm (GA) as the learning method. The approach described here successfully learned cue phrase patterns for two distinct classes of radiology reports. These patterns can then be used as a basis for automatically categorizing, clustering, or retrieving relevant data for the user.
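A generic genetic algorithm of the kind used for such pattern learning can be sketched as follows; the binary-mask encoding, the truncation selection, and all parameter values are illustrative assumptions, not the report's implementation.

```python
import random

def evolve(fitness, genome_len, pop_size=40, generations=60,
           mut_rate=0.02, rng=random):
    """Evolve a binary genome (e.g., a mask selecting candidate cue
    phrases) toward higher fitness using truncation selection,
    one-point crossover, and per-bit mutation."""
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # keep the better half
        pop = list(survivors)
        while len(pop) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)   # one-point crossover
            child = a[:cut] + b[cut:]
            pop.append([g ^ (rng.random() < mut_rate) for g in child])
    return max(pop, key=fitness)

# Toy fitness: agreement with a known target pattern (a stand-in for
# scoring patterns against labeled radiology reports).
target = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1]

def fitness(genome):
    return sum(1 for a, b in zip(genome, target) if a == b)

random.seed(0)
best = evolve(fitness, genome_len=len(target))
```

In the report's setting the fitness function would score a candidate pattern against a labeled corpus of reports rather than a fixed target vector.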
a Fast and Robust Algorithm for Road Edges Extraction from LIDAR Data
NASA Astrophysics Data System (ADS)
Qiu, Kaijin; Sun, Kai; Ding, Kou; Shu, Zhen
2016-06-01
Fast mapping of roads plays an important role in many geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance. Extracting various road edges quickly and robustly is a challenging task. In this paper, we present a fast and robust algorithm for automatic road edge extraction from terrestrial mobile LiDAR data. The algorithm is based on a key observation: most roads exhibit an elevation difference at their edges, and road edges with pavement lie in two different planes. In our algorithm, we first extract a rough plane based on the RANSAC algorithm, and then multiple refined planes, which contain only pavement, are extracted from the rough plane. The road edges are extracted based on these refined planes. In practice, a serious problem is that the rough and refined planes are often extracted poorly because of rough road surfaces and varying point cloud density. To eliminate the influence of rough roads, a technique similar to differencing a DSM (digital surface model) and a DTM (digital terrain model) is used, and we also propose a method that resamples the point clouds to a similar density. Experiments show the validity of the proposed method on multiple datasets (e.g., urban roads, highways, and some rural roads). We use the same parameters throughout the experiments, and our algorithm achieves real-time processing speeds.
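The first stage, RANSAC plane extraction, can be sketched as follows (a generic implementation under assumed parameters, not the paper's code): repeatedly fit a plane through three random points and keep the plane supported by the most inliers.

```python
import random

def plane_from_points(p1, p2, p3):
    """Unit normal n and offset d of the plane n . x = d through 3 points."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    if norm == 0.0:
        return None                         # degenerate (collinear) sample
    n = tuple(c / norm for c in n)
    return n, sum(n[i] * p1[i] for i in range(3))

def ransac_plane(points, iters=200, tol=0.05, rng=random):
    """Plane with the largest inlier support among random 3-point fits."""
    best, best_inliers = None, []
    for _ in range(iters):
        model = plane_from_points(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) - d) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = model, inliers
    return best, best_inliers

# Synthetic road patch: 100 ground points on z = 0 plus 20 off-plane points.
random.seed(7)
pts = [(random.uniform(-10, 10), random.uniform(-10, 10), 0.0)
       for _ in range(100)]
pts += [(random.uniform(-10, 10), random.uniform(-10, 10),
         random.uniform(1, 5)) for _ in range(20)]
(n, d), inliers = ransac_plane(pts)
```

In the paper's pipeline this rough plane is then split into refined pavement-only planes, between which the road edges are traced.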
Implicit adaptive mesh refinement for 2D reduced resistive magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Philip, Bobby; Chacón, Luis; Pernice, Michael
2008-10-01
An implicit structured adaptive mesh refinement (SAMR) solver for 2D reduced magnetohydrodynamics (MHD) is described. The time-implicit discretization is able to step over fast normal modes, while the spatial adaptivity resolves thin, dynamically evolving features. A Jacobian-free Newton-Krylov method is used for the nonlinear solver engine. For preconditioning, we have extended the optimal "physics-based" approach developed in [L. Chacón, D.A. Knoll, J.M. Finn, An implicit, nonlinear reduced resistive MHD solver, J. Comput. Phys. 178 (2002) 15-36] (which employed multigrid solver technology in the preconditioner for scalability) to SAMR grids using the well-known Fast Adaptive Composite grid (FAC) method [S. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1989]. A grid convergence study demonstrates that the solver performance is independent of the number of grid levels and only depends on the finest resolution considered, and that it scales well with grid refinement. The study of error generation and propagation in our SAMR implementation demonstrates that high-order (cubic) interpolation during regridding, combined with a robustly damping second-order temporal scheme such as BDF2, is required to minimize impact of grid errors at coarse-fine interfaces on the overall error of the computation for this MHD application. We also demonstrate that our implementation features the desired property that the overall numerical error is dependent only on the finest resolution level considered, and not on the base-grid resolution or on the number of refinement levels present during the simulation. We demonstrate the effectiveness of the tool on several challenging problems.
Comparison of local grid refinement methods for MODFLOW
Mehl, S.; Hill, M.C.; Leake, S.A.
2006-01-01
Many ground water modeling efforts use a finite-difference method to solve the ground water flow equation, and many of these models require a relatively fine-grid discretization to accurately represent the selected process in limited areas of interest. Use of a fine grid over the entire domain can be computationally prohibitive; using a variably spaced grid can lead to cells with a large aspect ratio and refinement in areas where detail is not needed. One solution is to use local-grid refinement (LGR) whereby the grid is only refined in the area of interest. This work reviews some LGR methods and identifies advantages and drawbacks in test cases using MODFLOW-2000. The first test case is two dimensional and heterogeneous; the second is three dimensional and includes interaction with a meandering river. Results include simulations using a uniform fine grid, a variably spaced grid, a traditional method of LGR without feedback, and a new shared node method with feedback. Discrepancies from the solution obtained with the uniform fine grid are investigated. For the models tested, the traditional one-way coupled approaches produced discrepancies in head up to 6.8% and discrepancies in cell-to-cell fluxes up to 7.1%, while the new method has head and cell-to-cell flux discrepancies of 0.089% and 0.14%, respectively. Additional results highlight the accuracy, flexibility, and CPU time trade-off of these methods and demonstrate how the new method can be successfully implemented to model surface water-ground water interactions. Copyright © 2006 The Author(s).
Volumetric manifestation of grain refinement in undercooled metal
Xiao, J.Z.; Kui, H.W.
1997-10-01
Experience indicates that when a molten metal crystallizes below its melting temperature T₁ (where T₁ stands for the thermodynamic melting temperature of pure metallic elements or the liquidus of alloys), the microstructure of the undercooled specimen depends on the initial bulk undercooling ΔT of the melt, defined as ΔT = T₁ − T, where T is the kinetic crystallization temperature. For instance, Walker found that the grain size of undercooled Ni drops rapidly, by as much as two orders of magnitude, over the narrow temperature range from ΔT = 155 to 170 K. This phenomenon is termed grain refinement. Xiao et al., on the other hand, demonstrated that grain refinement in undercooled Cu₃₀Ni₇₀ is brought about by re-melting of dendrites. The re-melting mechanism in undercooled Cu₃₀Ni₇₀ is made possible by a substantial composition variation, or equivalently a varying liquidus T₁, along the length of the dendrites at ΔT ≥ ΔT*. Upon solidification, the heat of crystallization can bring the temperature of the system back to T₁, causing re-melting of the inhomogeneous dendrites. It is also noted that, during microstructural analysis, voids are always present in an undercooled Cu₃₀Ni₇₀ specimen, and their distribution varies between specimens of different ΔT. More importantly, the total volume of the voids appears to differ between undercooled specimens of different ΔT. In this paper, the specific volume of undercooled Cu₃₀Ni₇₀ was measured as a function of ΔT in order to establish a correlation between the grain refinement transition and the specific volume.
Refinement of experimental design and conduct in laboratory animal research.
Bailoo, Jeremy D; Reichlin, Thomas S; Würbel, Hanno
2014-01-01
The scientific literature of laboratory animal research is replete with papers reporting poor reproducibility of results as well as failure to translate results to clinical trials in humans. This may stem in part from poor experimental design and conduct of animal experiments. Despite widespread recognition of these problems and implementation of guidelines to attenuate them, a review of the literature suggests that experimental design and conduct of laboratory animal research are still in need of refinement. This paper will review and discuss possible sources of biases, highlight advantages and limitations of strategies proposed to alleviate them, and provide a conceptual framework for improving the reproducibility of laboratory animal research.
AMRA: An Adaptive Mesh Refinement hydrodynamic code for astrophysics
NASA Astrophysics Data System (ADS)
Plewa, T.; Müller, E.
2001-08-01
Implementation details and test cases of a newly developed hydrodynamic code, amra, are presented. The numerical scheme exploits the adaptive mesh refinement technique coupled to modern high-resolution schemes which are suitable for relativistic and non-relativistic flows. Various physical processes are incorporated using the operator splitting approach, and include self-gravity, nuclear burning, physical viscosity, implicit and explicit schemes for conductive transport, simplified photoionization, and radiative losses from an optically thin plasma. Several aspects related to the accuracy and stability of the scheme are discussed in the context of hydrodynamic and astrophysical flows.
Refinement of Representation Theorems for Context-Free Languages
NASA Astrophysics Data System (ADS)
Fujioka, Kaoru
In this paper, we obtain some refinements of representation theorems for context-free languages by using Dyck languages, insertion systems, strictly locally testable languages, and morphisms. For instance, we improve the Chomsky-Schützenberger representation theorem and show that each context-free language L can be represented in the form L = h (D ∩ R), where D is a Dyck language, R is a strictly 3-testable language, and h is a morphism. A similar representation for context-free languages can be obtained using insertion systems of weight (3, 0) and strictly 4-testable languages.
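The shape of a representation L = h(D ∩ R) can be illustrated on the textbook example L = {aⁿbⁿ}: take D to be the Dyck language over one bracket pair, R the regular language [*]*, and h the letter-to-letter morphism [ ↦ a, ] ↦ b. This toy sketch is not from the paper (which uses strictly 3-testable R and insertion systems); it only demonstrates the intersection-plus-morphism mechanism:

```python
def is_dyck(w):
    """Membership in the Dyck language over the single bracket pair [ ]."""
    depth = 0
    for c in w:
        if c == "[":
            depth += 1
        elif c == "]":
            depth -= 1
            if depth < 0:
                return False
        else:
            return False
    return depth == 0

def in_R(w):
    """R = [*]* : every open bracket precedes every close bracket."""
    seen_close = False
    for c in w:
        if c == "]":
            seen_close = True
        elif seen_close:          # an open bracket after a close: reject
            return False
    return True

def h(w):
    """Letter-to-letter morphism [ -> a, ] -> b."""
    return w.replace("[", "a").replace("]", "b")

def in_L(s):
    """Decide s in h(D ∩ R) = { a^n b^n } by inverting the morphism."""
    w = s.replace("a", "[").replace("b", "]")
    return is_dyck(w) and in_R(w)
```

Note that "abab" is rejected even though its preimage "[][]" is in D, because that preimage fails the regular filter R; both constraints are needed.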
Computational relativistic astrophysics with adaptive mesh refinement: Testbeds
Evans, Edwin; Iyer, Sai; Tao Jian; Wolfmeyer, Randy; Zhang Huimin; Schnetter, Erik; Suen, Wai-Mo
2005-04-15
We have carried out numerical simulations of strongly gravitating systems based on the Einstein equations coupled to the relativistic hydrodynamic equations using adaptive mesh refinement (AMR) techniques. We show AMR simulations of NS binary inspiral and coalescence carried out on a workstation having an accuracy equivalent to that of a 1025³ regular unigrid simulation, which is, to the best of our knowledge, larger than all previous simulations of similar NS systems on supercomputers. We believe the capability opens new possibilities in general relativistic simulations.
Crystal chemistry and structure refinement of five hydrated calcium borates
Clark, J.R.; Appleman, D.E.; Christ, C.L.
1964-01-01
The crystal structures of the five known members of the series Ca2B6O11·xH2O (x = 1, 5, 7, 9, and 13) have been refined by full-matrix least-squares techniques, yielding bond distances and angles with standard errors of less than 0.01 Å and 0.5°, respectively. The results illustrate the crystal chemical principles that govern the structures of hydrated borate compounds. The importance of hydrogen bonding in the ferroelectric transition of colemanite is confirmed by more accurate proton assignments. © 1964.
Recent refinements to cranial implants for rhesus macaques (Macaca mulatta).
Johnston, Jessica M; Cohen, Yale E; Shirley, Harry; Tsunada, Joji; Bennur, Sharath; Christison-Lagay, Kate; Veeder, Christin L
2016-05-01
The advent of cranial implants revolutionized primate neurophysiological research because they allow researchers to stably record neural activity from monkeys during active behavior. Cranial implants have improved over the years since their introduction, but chronic implants still increase the risk for medical complications including bacterial contamination and resultant infection, chronic inflammation, bone and tissue loss and complications related to the use of dental acrylic. These complications can lead to implant failure and early termination of study protocols. In an effort to reduce complications, we describe several refinements that have helped us improve cranial implants and the wellbeing of implanted primates. PMID:27096188
Refining the nuclear auxin response pathway through structural biology.
Korasick, David A; Jez, Joseph M; Strader, Lucia C
2015-10-01
Auxin is a key regulator of plant growth and development. Classical molecular and genetic techniques employed over the past 20 years identified the major players in auxin-mediated gene expression and suggest a canonical auxin response pathway. In recent years, structural and biophysical studies clarified the molecular details of auxin perception, the recognition of DNA by auxin transcription factors, and the interaction of auxin transcription factors with repressor proteins. These studies refine the auxin signal transduction model and raise new questions that increase the complexity of auxin signaling.
Minimally refined biomass fuel. [carbohydrate-water-alcohol mixture]
Pearson, R.K.; Hirschfeld, T.B.
1981-03-26
A minimally refined fluid composition, suitable as a fuel mixture and derived from biomass material, is comprised of one or more water-soluble carbohydrates such as sucrose, one or more alcohols having less than four carbons, and water. The carbohydrate provides the fuel source; water-solubilizes the carbohydrate; and the alcohol aids in the combustion of the carbohydrate and reduces the viscosity of the carbohydrate/water solution. Because less energy is required to obtain the carbohydrate from the raw biomass than alcohol, an overall energy savings is realized compared to fuels employing alcohol as the primary fuel.
Cogeneration handbook for the petroleum refining industry. [Contains glossary
Fassbender, L.L.; Garrett-Price, B.A.; Moore, N.L.; Fassbender, A.G.; Eakin, D.E.; Gorges, H.A.
1984-03-01
The decision of whether to cogenerate involves several considerations, including technical, economic, environmental, legal, and regulatory issues. Each of these issues is addressed separately in this handbook. In addition, a chapter is included on preparing a three-phase work statement, which is needed to guide the design of a cogeneration system. In addition, an annotated bibliography and a glossary of terminology are provided. Appendix A provides an energy-use profile of the petroleum refining industry. Appendices B through O provide specific information that will be called out in subsequent chapters.
Corpus-based identification and refinement of semantic classes.
Nazarenko, A.; Zweigenbaum, P.; Bouaud, J.; Habert, B.
1997-01-01
Medical Language Processing (MLP), especially in specific domains, requires fine-grained semantic lexica. We examine whether robust natural language processing tools used on a representative corpus of a domain help in building and refining a semantic categorization. We test this hypothesis with ZELLIG, a corpus analysis tool. The first clusters we obtain are consistent with a model of the domain, as found in the SNOMED nomenclature. They correspond to coarse-grained semantic categories, but isolate as well lexical idiosyncrasies belonging to the clinical sub-language. Moreover, they help categorize additional words. PMID:9357693
Adaptive mesh refinement for 1-dimensional gas dynamics
Hedstrom, G.; Rodrigue, G.; Berger, M.; Oliger, J.
1982-01-01
We consider the solution of the one-dimensional equation of gas-dynamics. Accurate numerical solutions are difficult to obtain on a given spatial mesh because of the existence of physical regions where components of the exact solution are either discontinuous or have large gradient changes. Numerical methods treat these phenomena in a variety of ways. In this paper, the method of adaptive mesh refinement is used. A thorough description of this method for general hyperbolic systems is given elsewhere and only properties of the method pertinent to the system are elaborated.
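The basic adaptive refinement cycle, flagging cells where the solution jumps sharply (shocks, steep gradients) and subdividing the mesh there, can be sketched in one dimension. The threshold test and midpoint interpolation below are illustrative choices, not the scheme of the paper:

```python
def flag_cells(u, threshold):
    """Flag cells whose neighbour-to-neighbour jump exceeds the threshold
    (a simple gradient-based refinement criterion)."""
    flags = [False] * len(u)
    for i in range(1, len(u)):
        if abs(u[i] - u[i - 1]) > threshold:
            flags[i - 1] = flags[i] = True
    return flags

def refine(x, u, flags):
    """Insert a midpoint between each pair of flagged cells; fill the new
    value by linear interpolation."""
    nx, nu = [x[0]], [u[0]]
    for i in range(1, len(x)):
        if flags[i - 1] and flags[i]:
            nx.append(0.5 * (x[i - 1] + x[i]))
            nu.append(0.5 * (u[i - 1] + u[i]))
        nx.append(x[i])
        nu.append(u[i])
    return nx, nu
```

A full AMR code would re-run the solver on the refined patch and repeat the flag/refine cycle recursively as the discontinuity moves.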
ExpertBayes: Automatically refining manually built Bayesian networks
Almeida, Ezilda; Ferreira, Pedro; Vinhoza, Tiago; Dutra, Inês; Li, Jingwei; Wu, Yirong; Burnside, Elizabeth
2015-01-01
Bayesian network structures are usually built using only the data and starting from an empty network or from a naïve Bayes structure. Very often, in some domains, like medicine, a prior structure knowledge is already known. This structure can be automatically or manually refined in search for better performance models. In this work, we take Bayesian networks built by specialists and show that minor perturbations to this original network can yield better classifiers with a very small computational cost, while maintaining most of the intended meaning of the original model. PMID:27066596
Refining and separation of crude tall-oil components
Nogueira, J.M.F.
1996-10-01
Methods for crude tall-oil refining and fractionation, involving research studies of the separation of long-chain fatty and resinic acids, are reviewed. Although several techniques have been applied for industrial purposes since the 1940s, only distillation under high vacuum is economically practicable for crude tall-oil fractionation. Techniques such as adsorption and dissociation extraction seem the most promising for future industrial implementation for separating long-chain fatty and resinic acid fractions with a high purity level at low cost.
Growth of CZT using additionally zone-refined raw materials
NASA Astrophysics Data System (ADS)
Knuteson, David J.; Berghmans, Andre; Kahler, David; Wagner, Brian; King, Matthew; Mclaughlin, Sean; Bolotnikov, Aleksey; James, Ralph; Singh, Narsingh B.
2012-10-01
Results will be presented for the growth of CdZnTe by the low-pressure Bridgman growth technique. To decrease deep-level trapping and improve detector performance, high-purity commercial raw materials will be further zone refined to reduce impurities. The purified materials will then be compounded into a charge for crystal growth. The crystals will be grown in the programmable multi-zone furnace (PMZF), which was designed and built at Northrop Grumman's Bethpage facility to grow CZT on Space Shuttle missions. Results of the purification and crystal growth will be presented, as well as characterization of crystal quality and detector performance.
Refinement of Phobos Ephemeris Using Mars Orbiter Laser Altimeter Radiometry
NASA Technical Reports Server (NTRS)
Neumann, G. A.; Bills, B. G.; Smith, D. E.; Zuber, M. T.
2004-01-01
Radiometric observations from the Mars Orbiter Laser Altimeter (MOLA) can be used to improve the ephemeris of Phobos, with particular interest in refining estimates of the secular acceleration due to tidal dissipation within Mars. We have searched the Mars Orbiter Laser Altimeter (MOLA) radiometry data for shadows cast by the moon Phobos, finding 7 such profiles during the Mapping and Extended Mission phases, and 5 during the last two years of radiometry operations. Preliminary data suggest that the motion of Phobos has advanced by one or more seconds beyond that predicted by the current ephemerides, and the advance has increased over the 5 years of Mars Global Surveyor (MGS) operations.
Petroleum mineral oil refining and evaluation of cancer hazard.
Mackerer, Carl R; Griffis, Larry C; Grabowski Jr, John S; Reitman, Fred A
2003-11-01
Petroleum base oils (petroleum mineral oils) are manufactured from crude oils by vacuum distillation to produce several distillates and a residual oil that are then further refined. Aromatics including alkylated polycyclic aromatic compounds (PAC) are undesirable constituents of base oils because they are deleterious to product performance and are potentially carcinogenic. In modern base oil refining, aromatics are reduced by solvent extraction, catalytic hydrotreating, or hydrocracking. Chronic exposure to poorly refined base oils has the potential to cause skin cancer. A chronic mouse dermal bioassay has been the standard test for estimating carcinogenic potential of mineral oils. The level of alkylated 3-7-ring PAC in raw streams from the vacuum tower must be greatly reduced to render the base oil noncarcinogenic. The processes that can reduce PAC levels are known, but the operating conditions for the processing units (e.g., temperature, pressure, catalyst type, residence time in the unit, unit engineering design, etc.) needed to achieve adequate PAC reduction are refinery specific. Chronic dermal bioassays provide information about whether conditions applied can make a noncarcinogenic oil, but cannot be used to monitor current production for quality control or for conducting research or developing new processes since this test takes at least 78 weeks to conduct. Three short-term, non-animal assays all involving extraction of oil with dimethylsulfoxide (DMSO) have been validated for predicting potential carcinogenic activity of petroleum base oils: a modified Ames assay of a DMSO extract, a gravimetric assay (IP 346) for wt. percent of oil extracted into DMSO, and a GC-FID assay measuring 3-7-ring PAC content in a DMSO extract of oil, expressed as percent of the oil. Extraction with DMSO concentrates PAC in a manner that mimics the extraction method used in the solvent refining of noncarcinogenic oils. The three assays are described, data demonstrating the
A Task-parallel Clustering Algorithm for Structured AMR
Gunney, B N; Wissink, A M
2004-11-02
A new parallel algorithm, based on the Berger-Rigoutsos algorithm for clustering grid points into logically rectangular regions, is presented. The clustering operation is frequently performed in the dynamic gridding steps of structured adaptive mesh refinement (SAMR) calculations. A previous study revealed that although the cost of clustering is generally insignificant for smaller problems run on relatively few processors, the algorithm scaled inefficiently in parallel and its cost grows with problem size. Hence, it can become significant for large scale problems run on very large parallel machines, such as the new BlueGene system (which has O(10⁴) processors). We propose a new task-parallel algorithm designed to reduce communication wait times. Performance was assessed using dynamic SAMR re-gridding operations on up to 16K processors of currently available computers at Lawrence Livermore National Laboratory. The new algorithm was shown to be up to an order of magnitude faster than the baseline algorithm and had better scaling trends.
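The signature-based splitting at the heart of Berger-Rigoutsos clustering can be sketched serially: box the flagged cells, cut where the per-row/per-column count of flags (the "signature") drops to zero, and recurse. This toy version implements only that simplest splitting rule, none of the inflection-point cuts, efficiency thresholds, or parallel machinery the paper is about:

```python
def bounding_box(pts):
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return min(xs), min(ys), max(xs), max(ys)

def cluster(pts):
    """Recursively cut bounding boxes at holes in the x/y signatures.
    `pts` is a list of (x, y) flagged-cell coordinates; returns boxes
    as (x0, y0, x1, y1)."""
    if not pts:
        return []
    x0, y0, x1, y1 = bounding_box(pts)
    for axis, lo, hi in ((0, x0, x1), (1, y0, y1)):
        # signature: number of flagged cells on each grid line along this axis
        sig = {v: 0 for v in range(lo, hi + 1)}
        for p in pts:
            sig[p[axis]] += 1
        for v in range(lo + 1, hi):        # interior hole => cut there
            if sig[v] == 0:
                left = [p for p in pts if p[axis] < v]
                right = [p for p in pts if p[axis] > v]
                return cluster(left) + cluster(right)
    return [(x0, y0, x1, y1)]              # no hole: accept this box
```

Two well-separated blobs of flagged cells thus come out as two tight boxes instead of one large, mostly empty one.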
A novel highly parallel algorithm for linearly unmixing hyperspectral images
NASA Astrophysics Data System (ADS)
Guerra, Raúl; López, Sebastián.; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto
2014-10-01
Endmember extraction and abundance calculation are critical steps in linearly unmixing a given hyperspectral image, for two main reasons. The first is the need to compute a set of accurate endmembers in order to obtain confident abundance maps; the second is the huge number of operations involved in these time-consuming processes. This work proposes an algorithm that estimates the endmembers of a hyperspectral image under analysis and their abundances at the same time. The main advantages of this algorithm are its high degree of parallelism and the mathematical simplicity of the operations involved. The algorithm estimates the endmembers as virtual pixels. In particular, it uses the gradient descent method to iteratively refine the endmembers and the abundances, reducing the mean square error in accordance with the linear unmixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution; given the nature of the algorithm, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm, and the results obtained with the well-known Cuprite dataset corroborate its benefits.
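The gradient-descent refinement under the linear mixing model can be illustrated for the abundances-only case. This sketch fixes the endmembers, minimises the squared reconstruction error with a non-negativity projection, and omits the simultaneous endmember update and sum-to-one constraint of the paper; the learning rate and iteration count are illustrative:

```python
def unmix_abundances(pixel, endmembers, lr=0.01, iters=2000):
    """Projected gradient descent on ||pixel - sum_j a_j * e_j||^2
    with a_j >= 0 (a simplified version of the descent step described
    in the abstract)."""
    m = len(endmembers)          # endmembers: list of spectra (one per row)
    bands = len(pixel)
    a = [1.0 / m] * m            # start from equal abundances
    for _ in range(iters):
        # reconstructed spectrum and residual
        mix = [sum(a[j] * endmembers[j][b] for j in range(m))
               for b in range(bands)]
        r = [mix[b] - pixel[b] for b in range(bands)]
        # gradient w.r.t. a_j is 2 * <e_j, r>; project onto a_j >= 0
        for j in range(m):
            g = 2.0 * sum(endmembers[j][b] * r[b] for b in range(bands))
            a[j] = max(0.0, a[j] - lr * g)
    return a
```

Every pixel's update is independent, which is the source of the high parallelization degree the abstract emphasises.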
Rempp, Florian; Mahler, Guenter; Michel, Mathias
2007-09-15
We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. In this way one qubit may be cooled repeatedly without adding additional qubits to the system. By using a product Liouville space to model the bath contact, we calculate the density matrix of the system after a given number of applications of the algorithm.
NASA Astrophysics Data System (ADS)
Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.
2013-01-01
A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
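The chaotic tuning described above can be illustrated with a toy firefly minimiser in which the base attractiveness follows a logistic-map orbit. Everything here (the choice of logistic map, population size, step sizes, domain) is an illustrative guess rather than the authors' exact scheme:

```python
import math
import random

def logistic_map(x, mu=4.0):
    """One step of the logistic map; its orbit drives the attractiveness."""
    return mu * x * (1.0 - x)

def chaotic_firefly_min(f, dim, n=15, iters=100, seed=1):
    """Minimise f over [-5, 5]^dim with a firefly algorithm whose base
    attractiveness is tuned by a chaotic map each generation. A sketch
    of the idea only; all parameter values are illustrative."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    best = list(min(pop, key=f))             # best-ever solution seen
    chaos, alpha, gamma = 0.7, 0.2, 0.01
    for _ in range(iters):
        chaos = logistic_map(chaos)          # chaotic beta_0 in (0, 1)
        fit = [f(x) for x in pop]
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:          # firefly i moves toward brighter j
                    r2 = sum((pop[i][d] - pop[j][d]) ** 2 for d in range(dim))
                    beta = chaos * math.exp(-gamma * r2)
                    for d in range(dim):
                        pop[i][d] += (beta * (pop[j][d] - pop[i][d])
                                      + alpha * (rng.random() - 0.5))
        alpha *= 0.97                        # shrink the random walk over time
        cand = min(pop, key=f)
        if f(cand) < f(best):
            best = list(cand)
    return best
```

In the standard FA the base attractiveness is a fixed constant; replacing it with a chaotic orbit, as sketched here, is one of the twelve map-based variants the study compares.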
NASA Technical Reports Server (NTRS)
Chan, Hak-Wai; Yan, Tsun-Yee
1989-01-01
Algorithm developed for optimal routing of packets of data along links of multilink, multinode digital communication network. Algorithm iterative and converges to cost-optimal assignment independent of initial assignment. Each node connected to other nodes through links, each containing number of two-way channels. Algorithm assigns channels according to message traffic leaving and arriving at each node. Modified to take account of different priorities among packets belonging to different users by using different delay constraints or imposing additional penalties via cost function.
NASA Astrophysics Data System (ADS)
Ahmed, Yasser A.; Afifi, Hossam; Rubino, Gerardo
1999-05-01
This paper presents a new algorithm for stereo matching. The main idea is to decompose the original problem into independent, hierarchical, and more elementary problems that can be solved faster without any complicated mathematics, using BBD. To achieve that, we use a new image feature called the 'continuity feature' instead of classical noise. This feature can be extracted from any kind of image by a simple process and without using a searching technique. A new matching technique is proposed to match the continuity feature. The new algorithm resolves the main disadvantages of feature-based stereo matching algorithms.
Extraction and refinement of building faces in 3D point clouds
NASA Astrophysics Data System (ADS)
Pohl, Melanie; Meidow, Jochen; Bulatov, Dimitri
2013-10-01
In this paper, we present an approach to generate a 3D model of an urban scene from sensor data. The first milestone on that way is to classify the sensor data into the main parts of a scene, such as ground, vegetation, buildings, and their outlines; this has already been accomplished within our previous work. Now, we propose a four-step algorithm to model the building structure, which is assumed to consist of several dominant planes. First, we extract small elevated objects, like chimneys, using a hot-spot detector and handle the detected regions separately. In order to model the variety of roof structures precisely, we split complex building blocks into parts. Two different approaches are used: when underlying 2D ground polygons are available, we use geometric methods to divide them into sub-polygons; without polygons, we use morphological operations and segmentation methods. In the third step, dominant planes are extracted using either the RANSAC or the J-linkage algorithm. These operate on point clouds of sufficient confidence within the previously separated building parts and give robust results even with noisy, outlier-rich data. Last, we refine the previously determined plane parameters using geometric relations of the building faces. Due to noise, the expected properties of roofs and walls are not fulfilled exactly. Hence, we enforce them as hard constraints and use the previously extracted plane parameters as initial values for an optimization method. To test the proposed workflow, we use several data sets, including noisy data from depth maps and data computed by laser scanning.
2010-01-01
Background: The Medium-chain Dehydrogenases/Reductases (MDR) form a protein superfamily whose size and complexity defeats traditional means of subclassification; it currently has over 15000 members in the databases, the pairwise sequence identity is typically around 25%, there are members from all kingdoms of life, the chain-lengths vary as does the oligomericity, and the members are partaking in a multitude of biological processes. There are profile hidden Markov models (HMMs) available for detecting MDR superfamily members, but none for determining which MDR family each protein belongs to. The current torrential influx of new sequence data enables elucidation of more and more protein families, and at an increasingly fine granularity. However, gathering good quality training data usually requires manual attention by experts and has therefore been the rate limiting step for expanding the number of available models. Results: We have developed an automated algorithm for HMM refinement that produces stable and reliable models for protein families. This algorithm uses relationships found in data to generate confident seed sets. Using this algorithm we have produced HMMs for 86 distinct MDR families and 34 of their subfamilies which can be used in automated annotation of new sequences. We find that MDR forms with 2 Zn2+ ions in general are dehydrogenases, while MDR forms with no Zn2+ in general are reductases. Furthermore, in Bacteria MDRs without Zn2+ are more frequent than those with Zn2+, while the opposite is true for eukaryotic MDRs, indicating that Zn2+ has been recruited into the MDR superfamily after the initial life kingdom separations. We have also developed a web site http://mdr-enzymes.org that provides textual and numeric search against various characterised MDR family properties, as well as sequence scan functions for reliable classification of novel MDR sequences. Conclusions: Our method of refinement can be readily applied to create stable and reliable HMMs.
A robust algorithm for optimizing protein structures with NMR chemical shifts.
Berjanskii, Mark; Arndt, David; Liang, Yongjie; Wishart, David S
2015-11-01
Over the past decade, a number of methods have been developed to determine the approximate structure of proteins using minimal NMR experimental information such as chemical shifts alone, sparse NOEs alone, or a combination of comparative modeling data and chemical shifts. However, there have been relatively few methods that allow these approximate models to be substantively refined or improved using the available NMR chemical shift data. Here, we present a novel method, called Chemical Shift driven Genetic Algorithm for biased Molecular Dynamics (CS-GAMDy), for the robust optimization of protein structures using experimental NMR chemical shifts. The method incorporates knowledge-based scoring functions and structural information derived from NMR chemical shifts via a unique combination of multi-objective MD biasing, a genetic algorithm, and the widely used XPLOR molecular modelling language. Using this approach, we demonstrate that CS-GAMDy is able to refine and/or fold models that are as much as 10 Å (RMSD) away from the correct structure using only NMR chemical shift data. CS-GAMDy is also able to refine a wide range of approximate or mildly erroneous protein structures to more closely match the known/correct structure and the known/correct chemical shifts. We believe CS-GAMDy will allow protein models generated by sparse restraint or chemical-shift-only methods to achieve sufficiently high quality to be considered fully refined and "PDB worthy". The CS-GAMDy algorithm is explained in detail and its performance is compared over a range of refinement scenarios with several commonly used protein structure refinement protocols. The program has been designed to be easy to install and use and is available at http://www.gamdy.ca.