Adaptive Mesh and Algorithm Refinement Using Direct Simulation Monte Carlo
NASA Astrophysics Data System (ADS)
Garcia, Alejandro L.; Bell, John B.; Crutchfield, William Y.; Alder, Berni J.
1999-09-01
Adaptive mesh and algorithm refinement (AMAR) embeds a particle method within a continuum method at the finest level of an adaptive mesh refinement (AMR) hierarchy. The coupling between the particle region and the overlaying continuum grid is algorithmically equivalent to that between the fine and coarse levels of AMR. Direct simulation Monte Carlo (DSMC) is used as the particle algorithm embedded within a Godunov-type compressible Navier-Stokes solver. Several examples are presented and compared with purely continuum calculations.
Algorithm refinement for fluctuating hydrodynamics
Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.
2007-07-03
This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently developed solver for the LLNS equations based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second-moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.
Algorithm refinement for the stochastic Burgers' equation
Bell, John B.; Foo, Jasmine; Garcia, Alejandro L. (E-mail: algarcia@algarcia.org)
2007-04-10
In this paper, we develop an algorithm refinement (AR) scheme for an excluded random walk model whose mean field behavior is given by the viscous Burgers' equation. AR hybrids use the adaptive mesh refinement framework to model a system using a molecular algorithm where desired while allowing a computationally faster continuum representation to be used in the remainder of the domain. The focus in this paper is the role of fluctuations in the dynamics. In particular, we demonstrate that it is necessary to include a stochastic forcing term in Burgers' equation to accurately capture the correct behavior of the system. The conclusion we draw from this study is that the fidelity of multiscale methods that couple disparate algorithms depends on the consistent modeling of fluctuations in each algorithm and on a coupling, such as algorithm refinement, that preserves this consistency.
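The continuum half of such an AR hybrid evolves a stochastically forced viscous Burgers' equation. The sketch below shows a conservative finite-volume step for that equation in 1-D; the discretization choices and the noise amplitude are illustrative assumptions, not the authors' actual scheme (in particular, a physically consistent amplitude would come from fluctuation-dissipation arguments).

```python
import random
import math

def burgers_step(u, dx, dt, nu, noise_amp, rng):
    """One explicit conservative step of the stochastic viscous Burgers' equation,
    u_t + (u^2/2)_x = nu*u_xx + d/dx(noise), on a periodic grid.

    Writing the update in flux form guarantees exact conservation of sum(u)*dx,
    because every face flux is added to one cell and subtracted from its neighbor.
    """
    n = len(u)
    flux = []
    for i in range(n):                               # face between cells i-1 and i
        ul, ur = u[i - 1], u[i]                      # periodic wrap via u[-1]
        convective = 0.25 * (ul * ul + ur * ur)      # centered average of u^2/2
        viscous = nu * (ur - ul) / dx
        # White-noise face flux; amplitude is a free parameter in this sketch,
        # not the fluctuation-dissipation value the paper would prescribe.
        stochastic = noise_amp * rng.gauss(0.0, 1.0) / math.sqrt(dx * dt)
        flux.append(convective - viscous + stochastic)
    return [u[i] - (dt / dx) * (flux[(i + 1) % n] - flux[i]) for i in range(n)]
```

Because the noise enters through the flux divergence, it perturbs the dynamics without destroying conservation, which is the property the hybrid coupling must preserve.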
A parallel adaptive mesh refinement algorithm
NASA Technical Reports Server (NTRS)
Quirk, James J.; Hanebutte, Ulf R.
1993-01-01
Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.
Refined Genetic Algorithms for Polypeptide Structure Prediction.
1996-12-01
designing novel proteins, in decoding the information obtained from the Human Genome Project (91), in designing new drugs, and in trying to...function that assigns fitness values to possible solutions and an encode/decode between the algorithm and problem spaces. Although these methods...genetic algorithms: Introduction and overview of current research. Parallel Genetic Algorithms, pages 5-35, 1993. 22. Bruce S. Duncan. Parallel ev
Experiences with an adaptive mesh refinement algorithm in numerical relativity.
NASA Astrophysics Data System (ADS)
Choptuik, M. W.
An implementation of the Berger/Oliger mesh refinement algorithm for a model problem in numerical relativity is described. The principles of operation of the method are reviewed and its use in conjunction with leap-frog schemes is considered. The performance of the algorithm is illustrated with results from a study of the Einstein/massless scalar field equations in spherical symmetry.
Algorithm Refinement for Stochastic Partial Differential Equations. I. Linear Diffusion
NASA Astrophysics Data System (ADS)
Alexander, Francis J.; Garcia, Alejandro L.; Tartakovsky, Daniel M.
2002-10-01
A hybrid particle/continuum algorithm is formulated for Fickian diffusion in the fluctuating hydrodynamic limit. The particles are taken as independent random walkers; the fluctuating diffusion equation is solved by finite differences with deterministic and white-noise fluxes. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass conservation. This methodology is an extension of Adaptive Mesh and Algorithm Refinement to stochastic partial differential equations. Results from a variety of numerical experiments are presented for both steady and time-dependent scenarios. In all cases the mean and variance of density are captured correctly by the stochastic hybrid algorithm. For a nonstochastic version (i.e., using only deterministic continuum fluxes) the mean density is correct, but the variance is reduced except in particle regions away from the interface. Extensions of the methodology to fluid mechanics applications are discussed.
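A minimal 1-D version of such a particle/continuum diffusion hybrid can be sketched as follows. The function name, the fractional mass buffer, and all parameter choices are illustrative assumptions; the paper's flux-matching machinery is more general, but the sketch shares its key property of exact mass conservation at the interface.

```python
import random

def hybrid_step(particles, cells, buf, dx, dt, D, mp, interface, rng):
    """One step of a particle/continuum hybrid for the diffusion equation.

    Left of `interface`: independent random walkers of mass mp on a lattice.
    Right of `interface`: explicit finite differences on cells of width dx.
    Coupling is by flux matching at the interface face; a fractional mass
    buffer `buf` turns continuum outflow into whole particles, so the total
    mass (particles + cells + buffer) is conserved exactly.
    """
    lam = D * dt / (dx * dx)            # hop probability = diffusion number (<= 1/2)
    # 1) particle phase: each walker hops +-dx with probability lam each way
    survivors = []
    for x in particles:
        r = rng.random()
        if r < lam:
            x += dx
        elif r < 2.0 * lam:
            x -= dx
        if x < 0.0:
            x = -x                       # reflecting wall at x = 0
        if x >= interface:
            cells[0] += mp               # walker crossed: its mass joins cell 0
        else:
            survivors.append(x)
    # 2) continuum phase: explicit diffusion, reflecting right wall
    n = len(cells)
    new_cells = [0.0] * n
    for j in range(n):
        left = cells[j - 1] if j > 0 else 0.0            # interface face, see below
        right = cells[j + 1] if j < n - 1 else cells[j]  # zero-flux right wall
        new_cells[j] = cells[j] + lam * (left + right - 2.0 * cells[j])
    # mass that left cell 0 through the interface face goes into the buffer
    buf += lam * cells[0]
    # 3) flux matching: buffered mass becomes whole particles near the interface
    while buf >= mp:
        buf -= mp
        survivors.append(interface - 0.5 * dx)
    return survivors, new_cells, buf
```

Every operation above only moves mass between the three reservoirs, which is why the conservation in the abstract can be exact rather than statistical.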
Adaptive Mesh Refinement Algorithms for Parallel Unstructured Finite Element Codes
Parsons, I D; Solberg, J M
2006-02-03
This project produced algorithms for and software implementations of adaptive mesh refinement (AMR) methods for solving practical solid and thermal mechanics problems on multiprocessor parallel computers using unstructured finite element meshes. The overall goal is to provide computational solutions that are accurate to some prescribed tolerance, and adaptivity is the correct path toward this goal. These new tools will enable analysts to conduct more reliable simulations at reduced cost, both in terms of analyst and computer time. Previous academic research in the field of adaptive mesh refinement has produced a voluminous literature focused on error estimators and demonstration problems; relatively little progress has been made on producing efficient implementations suitable for large-scale problem solving on state-of-the-art computer systems. Research issues that were considered include: effective error estimators for nonlinear structural mechanics; local meshing at irregular geometric boundaries; and constructing efficient software for parallel computing environments.
Convergence and refinement of the Wang Landau algorithm
NASA Astrophysics Data System (ADS)
Lee, Hwee Kuan; Okabe, Yutaka; Landau, D. P.
2006-07-01
Recently, Wang and Landau proposed a new random walk algorithm that can be applied very efficiently to many problems. Subsequently, there have been numerous studies of the algorithm itself and many proposals for improvements have been put forward. However, fundamental questions, such as what determines the rate of convergence, have not been answered. To understand the mechanism behind the Wang-Landau method, we performed an error analysis and found that a steady state is reached in which the fluctuations in the accumulated energy histogram saturate at values proportional to [ln f]^{-1/2}, where f is the modification factor. This value is closely related to the error corrections to the Wang-Landau method. We also study the rate of convergence using different "tuning" parameters in the algorithm.
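For readers unfamiliar with the method under study, here is a minimal Wang-Landau sketch on a toy system where the density of states is known exactly: the number of heads among n fair coins, with g(k) = C(n, k). The flatness criterion, the step counts, and the halving schedule for the modification factor are conventional illustrative choices, not the refinements analyzed in the paper.

```python
import random
import math

def wang_landau_coins(n_coins, flat=0.8, lnf_final=1e-5, seed=0):
    """Wang-Landau estimate of ln g(k), where k = number of heads among
    n_coins fair coins (exact answer: ln of the binomial coefficients).

    The random walk flips one coin at a time; a move k -> k' is accepted with
    probability min(1, g(k)/g(k')), ln g is incremented by ln f after every
    step, and ln f is halved whenever the energy histogram is flat.
    """
    rng = random.Random(seed)
    state = [0] * n_coins
    k = 0                                   # current "energy": number of heads
    lng = [0.0] * (n_coins + 1)             # running estimate of ln g(k)
    hist = [0] * (n_coins + 1)
    lnf = 1.0
    while lnf > lnf_final:
        for _ in range(1000):
            i = rng.randrange(n_coins)
            knew = k + (1 - 2 * state[i])   # flipping coin i changes k by +-1
            if rng.random() < math.exp(min(0.0, lng[k] - lng[knew])):
                state[i] ^= 1
                k = knew
            lng[k] += lnf                   # update at the current state
            hist[k] += 1
        # flatness check: every bin above `flat` times the mean count
        if min(hist) > flat * (sum(hist) / len(hist)):
            hist = [0] * (n_coins + 1)
            lnf *= 0.5                      # refine the modification factor
    return lng
```

Only differences lng[k] - lng[k'] are meaningful, since the algorithm determines g up to an overall constant.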
Refinements to an Optimized Model-Driven Bathymetry Deduction Algorithm
2001-09-01
bathymetric deduction algorithm, we used the Korteweg-de Vries (KdV) equation (Korteweg and de Vries 1895) as the wave model. Throughout this study, we will be...technique is explained in an appendix of the manuscript. In the interest of brevity, we simply write the matrix equation to be solved, relating the depth update Δh to the surface-elevation change Δη through the matrix T...the wavelength). Bell (1999) used phase speeds calculated from X-band radar imagery and Equation (1) to infer the bathymetry, with favorable
Using Small-Step Refinement for Algorithm Verification in Computer Science Education
ERIC Educational Resources Information Center
Simic, Danijela
2015-01-01
Stepwise program refinement techniques can be used to simplify program verification. Programs are better understood since their main properties are clearly stated, and verification of rather complex algorithms is reduced to proving simple statements connecting successive program specifications. Additionally, it is easy to analyse similar…
Improvement and Refinement of the GPS/MET Data Analysis Algorithm
NASA Technical Reports Server (NTRS)
Herman, Benjamin M.
2003-01-01
The GPS/MET project was a satellite-to-satellite active microwave atmospheric limb sounder using the Global Positioning System transmitters as signal sources. Despite its remarkable success, GPS/MET could not independently sense atmospheric water vapor and ozone. Additionally, the GPS/MET data retrieval algorithm needs to be further improved and refined to enhance the retrieval accuracies in the lower tropospheric region and the upper stratospheric region. The objectives of this proposal were to address these three problem areas.
NASA Astrophysics Data System (ADS)
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for the calculation of the global transformation between two images. However, its hardware implementation is challenging because of the large number of coefficients with different required precisions for fixed-point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and for the rejection of false matches using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for the fixed-point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using the Verilog hardware description language, and the functionality of the design was validated through several experiments. The proposed architecture was synthesized with an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as on a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with a software implementation.
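The RANSAC loop referenced above is easy to state in software. The sketch below uses a 2-D translation model (one-point samples) instead of the paper's eight-parameter projective model (four-point samples) purely to keep the example short; the sample/score/refit structure is the same, and all names and thresholds are illustrative.

```python
import random

def ransac_translation(matches, tol, iters=200, seed=0):
    """RANSAC sketch: estimate a global 2-D translation (dx, dy) between two
    images from point matches [(x1, y1, x2, y2), ...], rejecting false matches.

    Each iteration fits the model to a minimal random sample (one match for a
    translation), scores it by counting inliers within `tol`, and keeps the
    largest consensus set; the final model is refit on that set.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        x1, y1, x2, y2 = rng.choice(matches)        # minimal sample
        dx, dy = x2 - x1, y2 - y1
        inliers = [m for m in matches
                   if abs(m[0] + dx - m[2]) <= tol and abs(m[1] + dy - m[3]) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refit on the consensus set: average the inlier displacements
    dx = sum(m[2] - m[0] for m in best_inliers) / len(best_inliers)
    dy = sum(m[3] - m[1] for m in best_inliers) / len(best_inliers)
    return (dx, dy), best_inliers
```

For the projective case, the minimal sample grows to four matches and the fit becomes a homography solve, but the rejection logic is unchanged, which is why RANSAC maps so cleanly onto a fixed hardware pipeline.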
A solution-adaptive mesh algorithm for dynamic/static refinement of two and three dimensional grids
NASA Technical Reports Server (NTRS)
Benson, Rusty A.; Mcrae, D. S.
1991-01-01
An adaptive grid algorithm has been developed in two and three dimensions that can be used dynamically with a solver or as part of a grid refinement process. The algorithm employs a transformation from the Cartesian coordinate system to a general coordinate space, which is defined as a parallelepiped in three dimensions. A weighting function, independent for each coordinate direction, is developed that will provide the desired refinement criteria in regions of high solution gradient. The adaptation is performed in the general coordinate space and the new grid locations are returned to the Cartesian space via a simple, one-step inverse mapping. The algorithm for relocation of the mesh points in the parametric space is based on the center of mass for distributed weights. Dynamic solution-adaptive results are presented for laminar flows in two and three dimensions.
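A 1-D equidistribution sketch conveys the idea of the weighting function: nodes are redistributed so each cell carries an equal share of a gradient-based weight. This is an illustration of the solution-adaptive principle under assumed choices (piecewise-constant weight, linear inversion); the paper's scheme works in a mapped parametric space with a center-of-mass rule rather than this direct equidistribution.

```python
def adapt_grid_1d(x, u, alpha=5.0):
    """Redistribute 1-D grid nodes toward regions of high solution gradient.

    A weight w = 1 + alpha*|du/dx| is integrated along the grid, and new nodes
    are placed so that each cell carries an equal share of the total weight.
    Endpoints are held fixed and node ordering is preserved.
    """
    n = len(x)
    # piecewise-constant weight on each interval
    w = [1.0 + alpha * abs((u[i + 1] - u[i]) / (x[i + 1] - x[i]))
         for i in range(n - 1)]
    # cumulative weight W(x) at the original nodes
    W = [0.0]
    for i in range(n - 1):
        W.append(W[-1] + w[i] * (x[i + 1] - x[i]))
    # invert W by linear interpolation at equal weight increments
    new_x = [x[0]]
    step = W[-1] / (n - 1)
    j = 0
    for k in range(1, n - 1):
        target = k * step
        while W[j + 1] < target:
            j += 1
        frac = (target - W[j]) / (W[j + 1] - W[j])
        new_x.append(x[j] + frac * (x[j + 1] - x[j]))
    new_x.append(x[-1])
    return new_x
```

Because W is strictly increasing, the inverse map cannot tangle the mesh, which is the 1-D analogue of the well-ordering the paper's parametric-space relocation provides.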
A Comparative Study of the English Verbs "Say"/"Tell" and the Hebrew Words "Amar"/"Siper."
ERIC Educational Resources Information Center
Sopher, H.
1987-01-01
Compares the use of the English verbs "say" and "tell" and the Hebrew verbs "amar" and "siper" and then examines the degree of correspondence between "say" and "amar" and between "tell" and "siper." (CB)
NASA Astrophysics Data System (ADS)
Zhang, A.; Guo, Z.; Xiong, S.-M.
2017-03-01
Eutectic pattern transition under an externally imposed temperature gradient was studied using the phase field method coupled with a novel parallel adaptive-mesh-refinement (Para-AMR) algorithm. Numerical tests revealed that the Para-AMR algorithm could improve the computational efficiency by two orders of magnitude and thus made it possible to perform large-scale simulations without compromising accuracy. Results showed that the direction of the temperature gradient played a crucial role in determining the eutectic patterns during solidification, which agreed well with experimental observations. In particular, the presence of a transverse temperature gradient could tilt the eutectic patterns, and in 3D simulations, the eutectic microstructure would alter from lamellar to rod-like and/or from rod-like to dumbbell-shaped. Furthermore, under a radial temperature gradient, the eutectic would evolve from a dumbbell-shaped or clover-shaped pattern to an isolated rod-like pattern.
1988-10-01
paper acknowledges U.S. Government sponsorship. References to this work should be either to the published version, if any, or in the form "private...referred to as nodal analysis [DES89], which involves applying the Kirchhoff Current Law (KCL) to each node in the circuit, and applying the constitutive...equations one row at a time, guessing values for the z's that have not been computed. This leads to Algorithm 1, the Gauss-Seidel relaxation algorithm, which
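The Gauss-Seidel relaxation named in the fragment above sweeps the equations one row at a time, using each newly computed unknown immediately. A minimal sketch (generic linear system, not the circuit-specific nodal formulation):

```python
def gauss_seidel(A, b, iters=100, x0=None):
    """Gauss-Seidel relaxation for A x = b.

    Sweep the equations one row at a time, solving row i for x[i] while using
    the newest values of the other unknowns as soon as they are available;
    entries not yet updated in this sweep keep their previous-iteration guesses.
    Converges for diagonally dominant systems, such as nodal-analysis matrices
    arising from KCL on circuits with positive conductances.
    """
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

The in-place update of x is what distinguishes Gauss-Seidel from Jacobi iteration, and it typically roughly halves the number of sweeps needed.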
Refinement-cut: user-guided segmentation algorithm for translational science.
Egger, Jan
2014-06-04
In this contribution, a semi-automatic segmentation algorithm for (medical) image analysis is presented. More precisely, the approach belongs to the category of interactive contouring algorithms, which provide real-time feedback of the segmentation result. However, even with interactive real-time contouring approaches there are always cases where the user cannot find a satisfying segmentation, e.g. due to homogeneous appearances between the object and the background, or noise inside the object. For these difficult cases the algorithm still needs additional user support. However, this additional user support should be intuitive and rapidly integrated into the segmentation process, without breaking the interactive real-time segmentation feedback. I propose a solution where the user can support the algorithm by an easy and fast placement of one or more seed points to guide the algorithm to a satisfying segmentation result, even in difficult cases. These additional seeds restrict the calculation of the segmentation for the algorithm but, at the same time, still allow the interactive real-time feedback segmentation to continue. For a practical and genuine application in translational science, the approach has been tested on medical data from the clinical routine in 2D and 3D.
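The role of user-placed seeds can be illustrated with a much simpler algorithm than the paper's interactive graph-based approach: breadth-first region growing, where each seed anchors a region and steers which pixels get labeled. All names and the intensity-tolerance rule below are illustrative assumptions.

```python
from collections import deque

def grow_from_seeds(image, seeds, tol):
    """Seed-guided segmentation sketch via breadth-first region growing.

    Each seed (row, col) anchors one region; a pixel joins the region if it is
    4-connected to it and its intensity differs from the seed's intensity by at
    most `tol`. Extra seeds let a user rescue difficult cases, the idea the
    paper builds into its interactive real-time algorithm.
    """
    rows, cols = len(image), len(image[0])
    label = [[0] * cols for _ in range(rows)]       # 0 = unlabeled
    for mark, (r0, c0) in enumerate(seeds, start=1):
        ref = image[r0][c0]                         # seed intensity as reference
        q = deque([(r0, c0)])
        while q:
            r, c = q.popleft()
            if not (0 <= r < rows and 0 <= c < cols):
                continue
            if label[r][c] or abs(image[r][c] - ref) > tol:
                continue
            label[r][c] = mark
            q.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return label
```

Adding a seed inside an undersegmented structure immediately claims that region on the next pass, which is the interaction pattern the abstract describes.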
Chillag-Talmor, Orly; Giladi, Nir; Linn, Shai; Gurevich, Tanya; El-Ad, Baruch; Silverman, Barbara; Friedman, Nurit; Peretz, Chava
2011-01-01
Estimating rates of Parkinson's disease (PD) is essential for health services planning and studies of disease determinants. However, few PD registries exist. We aimed to estimate annual prevalence and incidence of PD in a large Israeli population over the past decade using computerized drug purchase data. Based on profiles of anti-parkinsonian drugs, age at first purchase, purchase density, and follow-up time, we developed a refined algorithm for PD assessment (definite, probable or possible) and validated it against clinical diagnoses. We used the prescription database of the second largest Health Maintenance Organization in Israel (covering ~25% of the population) for the years 1998-2008. PD rates by age, gender and year were calculated and compared using Poisson models. The algorithm was found to be highly sensitive (96%) for detecting PD cases. We identified 7,134 prevalent cases (67% definite/probable), and 5,288 incident cases (65% definite/probable), with mean age at first purchase 69 ± 13 years. Over the years 2000-2007, the PD incidence rate of 33/100,000 was stable, and the prevalence rate increased from 170/100,000 to 256/100,000. For ages 50+, 60+, 70+, median prevalence rates were 1%, 2%, 3%, respectively. Incidence rates also increased with age (RR = 1.76, 95%CI 1.75-1.77, ages 50+, 5-year interval). For ages 50+, rates were higher among men for both prevalence (RR = 1.38, 95%CI 1.37-1.39) and incidence (RR = 1.45, 95%CI 1.42-1.48). In conclusion, our refined algorithm for PD assessment, based on computerized drug purchase data, may be a reliable tool for population-based studies. The findings indicate a burden of PD in Israel higher than previously assumed.
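The general shape of such a purchase-based case definition is a small decision rule over purchase density, follow-up time, and age at first purchase. The sketch below is a toy in the spirit of the abstract only: every threshold and category cutoff is an invented placeholder, not the study's validated criteria.

```python
def classify_pd(purchases_per_year, followup_years, age_at_first):
    """Toy purchase-based case-definition rule (illustrative thresholds only).

    Combines purchase density, follow-up time, and age at first purchase into
    one of four certainty levels, mirroring the definite/probable/possible
    grading described in the abstract.
    """
    if followup_years <= 0 or purchases_per_year <= 0:
        return "none"
    if purchases_per_year >= 6 and followup_years >= 2 and age_at_first >= 40:
        return "definite"       # dense, sustained purchasing at a typical PD age
    if purchases_per_year >= 3 and followup_years >= 1:
        return "probable"
    return "possible"           # sparse or short-follow-up purchasing
```

In a real registry study, such thresholds would be tuned and then validated against clinical diagnoses, as the authors report doing (96% sensitivity).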
Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process
NASA Astrophysics Data System (ADS)
Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh
2016-06-01
Layered manufacturing machines use the stereolithography (STL) file to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, it results in geometrical distortion and chordal error. Parts manufactured with this file might not satisfy geometric dimensioning and tolerance requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is more suitable for the geometry under consideration. Only the wheel cap part is then manufactured, on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.
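Of the three schemes compared, triangular midpoint subdivision is the simplest to state: each facet is split into four through its edge midpoints. A minimal sketch over STL-style vertex triples (names and data layout are illustrative):

```python
def midpoint_subdivide(triangles):
    """One pass of triangular midpoint subdivision.

    Each flat facet (a, b, c) is split into four smaller triangles through its
    edge midpoints. New vertices stay on the original facet plane, so the
    geometry is unchanged and only the mesh density increases; smoothing
    schemes such as Loop or modified butterfly would instead reposition the
    new vertices to approximate the underlying curved surface.
    """
    def mid(p, q):
        return tuple((a + b) / 2.0 for a, b in zip(p, q))

    out = []
    for a, b, c in triangles:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        out.extend([(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)])
    return out
```

Because midpoint subdivision never moves vertices off the facet, it refines the mesh without altering the part's volume, which is consistent with its use as a conservative baseline in the comparison.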
NASA Astrophysics Data System (ADS)
Pioldi, Fabio; Ferrari, Rosalba; Rizzi, Egidio
2016-02-01
The present paper deals with the seismic modal dynamic identification of frame structures by a refined Frequency Domain Decomposition (rFDD) algorithm, autonomously formulated and implemented within MATLAB. First, the output-only identification technique is outlined analytically and then employed to characterize all modal properties. Synthetic response signals generated prior to the dynamic identification are adopted as input channels, in view of assessing a necessary condition for the procedure's efficiency. Initially, the algorithm is verified on canonical input from random excitation. Then, modal identification has been attempted successfully for given seismic input, taken as base excitation, including both strong-motion data and single and multiple input ground motions. Unlike previous attempts, which investigated the role of seismic response signals in the Time Domain, this paper considers the identification analysis in the Frequency Domain. Results turn out to be very consistent with the target values, with quite limited errors in the modal estimates, including the damping ratios, whose values range on the order of 1% to 10%. Both seismic excitation and high damping values, which prove critical also in the case of well-spaced modes, do not fulfill traditional FDD assumptions: this demonstrates the robustness of the developed algorithm. Through original strategies and arrangements, the paper shows that a comprehensive rFDD modal dynamic identification of frames under seismic input is feasible, also at concomitantly high damping.
Zhang, Yue; Zou, Huanxin; Luo, Tiancheng; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng
2016-01-01
The superpixel segmentation algorithm, as a preprocessing technique, should show good performance in fast segmentation speed, accurate boundary adherence and homogeneous regularity. A fast superpixel segmentation algorithm by iterative edge refinement (IER) works well on optical images. However, it may generate poor superpixels for polarimetric synthetic aperture radar (PolSAR) images due to the influence of strong speckle noise and many small-sized or slim regions. To solve these problems, we utilized a fast revised Wishart distance instead of the Euclidean distance in the local relabeling of unstable pixels, and, in the initialization step, initialized the unstable pixels as all pixels rather than only the initial grid edge pixels. Then, postprocessing with the dissimilarity measure is employed to remove the generated small isolated regions as well as to preserve strong point targets. Finally, the superiority of the proposed algorithm is validated with extensive experiments on four simulated and two real-world PolSAR images from Experimental Synthetic Aperture Radar (ESAR) and Airborne Synthetic Aperture Radar (AirSAR) data sets, which demonstrate that the proposed method shows better performance with respect to several commonly used evaluation measures, even with about nine times higher computational efficiency, as well as fine boundary adherence and strong point target preservation, compared with three state-of-the-art methods. PMID:27754385
NASA Astrophysics Data System (ADS)
Barrett, James
The incorporation of small, privately owned generation operating in parallel with, and supplying power to, the distribution network is becoming more widespread. This method of operation does, however, have problems associated with it. In particular, a loss of the connection to the main utility supply which leaves a portion of the utility load connected to the embedded generator will result in a power island. This situation presents possible dangers to utility personnel and the public, complications for smooth system operation and probable plant damage should the two systems be reconnected out-of-synchronism. Loss of Grid (or Islanding), as this situation is known, is the subject of this thesis. The work begins by detailing the requirements for operation of generation embedded in the utility supply, with particular attention drawn to the requirements for a loss of grid protection scheme. The mathematical basis for a new loss of grid protection algorithm is developed and the inclusion of the algorithm in an integrated generator protection scheme described. A detailed description is given of the implementation of the new algorithm in microprocessor-based relay hardware to allow practical tests on small embedded generation facilities, including an in-house multiple generator test facility. The results obtained from the practical tests are compared with those obtained from simulation studies carried out in previous work and the differences are discussed. The performance of the algorithm is enhanced from the theoretical algorithm developed using the simulation results with simple filtering together with pattern recognition techniques. This provides stability during severe load fluctuations under parallel operation and system fault conditions, and improved performance under normal operating conditions and for loss of grid detection. In addition to operating for a loss of grid connection, the algorithm will respond to load fluctuations which occur within a power island.
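A common quantity monitored by loss-of-grid relays is the rate of change of frequency (ROCOF): when an island forms, the local generation/load imbalance drives the frequency away from nominal faster than normal grid dynamics do. The sketch below shows only that basic check; the thesis's algorithm adds filtering and pattern recognition on top of such raw indicators, and all names and thresholds here are illustrative.

```python
def rocof_trip(freq_samples, dt, threshold, window):
    """Rate-of-change-of-frequency (ROCOF) check, a common loss-of-grid
    (islanding) detection principle.

    For each sample, estimate df/dt over a short trailing window of `window`
    samples spaced dt seconds apart, and flag a trip when |df/dt| exceeds
    `threshold` (in Hz/s). Returns one boolean per evaluated sample.
    """
    trips = []
    for i in range(window, len(freq_samples)):
        rocof = (freq_samples[i] - freq_samples[i - window]) / (window * dt)
        trips.append(abs(rocof) > threshold)
    return trips
```

The windowed slope is a crude low-pass filter; it is exactly the kind of raw decision signal that, as the abstract notes, needs additional stabilization to ride through severe load fluctuations without false trips.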
NASA Technical Reports Server (NTRS)
Wang, Menghua
2003-01-01
The primary focus of this proposed research is atmospheric correction algorithm evaluation and development, and satellite sensor calibration and characterization. It is well known that the atmospheric correction, which removes more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the derived water-leaving signals from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address issues of (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of these research efforts is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has been devoted to (a) understanding and correcting the artifacts that appeared in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color remote sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) vs. the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Color Coordinating Group (IOCCG) atmospheric correction working group. In this report, I briefly present and discuss these and some other research activities.
NASA Astrophysics Data System (ADS)
Eaves, Nick A.; Zhang, Qingan; Liu, Fengshan; Guo, Hongsheng; Dworkin, Seth B.; Thomson, Murray J.
2016-10-01
Mitigation of soot emissions from combustion devices is a global concern. For example, recent EURO 6 regulations for vehicles have placed stringent limits on soot emissions. In order to allow design engineers to achieve the goal of reduced soot emissions, they must have the tools to do so. Due to the complex nature of soot formation, which includes growth and oxidation, detailed numerical models are required to gain fundamental insights into the mechanisms of soot formation. A detailed description of the CoFlame FORTRAN code, which models sooting laminar coflow diffusion flames, is given. The code solves axial and radial velocity, temperature, species conservation, and soot aggregate and primary particle number density equations. The sectional particle dynamics model includes nucleation, PAH condensation and HACA surface growth, surface oxidation, coagulation, fragmentation, particle diffusion, and thermophoresis. The code utilizes a distributed-memory parallelization scheme with strip-domain decomposition. The public release of the CoFlame code, which has been refined in terms of coding structure, to the research community accompanies this paper. CoFlame is validated against experimental data for reattachment length in an axisymmetric pipe with a sudden expansion, and against ethylene-air and methane-air diffusion flames for multiple soot morphological parameters and gas-phase species. Finally, the parallel performance and computational costs of the code are investigated.
NASA Technical Reports Server (NTRS)
Davis, M. W.
1984-01-01
A Real-Time Self-Adaptive (RTSA) active vibration controller was used as the framework in developing a computer program for a generic controller that can be used to alleviate helicopter vibration. Based upon on-line identification of system parameters, the generic controller minimizes vibration in the fuselage by closed-loop implementation of higher harmonic control in the main rotor system. The new generic controller incorporates a set of improved algorithms that gives the capability to readily define many different configurations by selecting one of three different controller types (deterministic, cautious, and dual), one of two linear system models (local and global), and one or more of several methods of applying limits on control inputs (external and/or internal limits on higher harmonic pitch amplitude and rate). A helicopter rotor simulation analysis was used to evaluate the algorithms associated with the alternative controller types as applied to the four-bladed H-34 rotor mounted on the NASA Ames Rotor Test Apparatus (RTA), which represents the fuselage. After proper tuning, all three controllers provide more effective vibration reduction and converge more quickly and smoothly with smaller control inputs than the initial RTSA controller (deterministic with external pitch-rate limiting). It is demonstrated that internal limiting of the control inputs significantly improves the overall performance of the deterministic controller.
Refined algorithms for star-based monitoring of GOES Imager visible-channel responsivities
NASA Astrophysics Data System (ADS)
Chang, I.-Lok; Dean, Charles; Li, Zhenping; Weinreb, Michael; Wu, Xiangqian; Swamy, P. A. V. B.
2012-09-01
Monitoring the responsivities of the visible channels of the Imagers on GOES satellites is a continuing effort at the National Environmental Satellite, Data and Information Service of NOAA. At this point, a large part of the initial processing of the star data depends on the operational GOES Sensor Processing System (SPS) and GOES Orbit and Attitude Tracking System (OATS) for detecting the presence of stars and computing the amplitudes of the star signals. However, the algorithms of the SPS and the OATS are not optimized for calculating the amplitudes of the star signals, as they had been developed to determine pixel location and observation time of a star, not amplitude. Motivated by our wish to be independent of the SPS and the OATS for data processing and to improve the accuracy of the computed star signals, we have developed our own methods for such computations. We describe the principal algorithms and discuss their implementation. Next we show our monitoring statistics derived from star observations by the Imagers aboard GOES-8, -10, -11, -12 and -13. We give a brief introduction to a new class of time series that have improved the stability and reliability of our degradation estimates.
Chen, Brian Y; Fofanov, Viacheslav Y; Bryant, Drew H; Dodson, Bradley D; Kristensen, David M; Lisewski, Andreas M; Kimmel, Marek; Lichtarge, Olivier; Kavraki, Lydia E
2007-01-01
The development of new and effective drugs is strongly affected by the need to identify drug targets and to reduce side effects. Resolving these issues depends partially on a thorough understanding of the biological function of proteins. Unfortunately, the experimental determination of protein function is expensive and time consuming. To support and accelerate the determination of protein functions, algorithms for function prediction are designed to gather evidence indicating functional similarity with well studied proteins. One such approach is the MASH pipeline, described in the first half of this paper. MASH identifies matches of geometric and chemical similarity between motifs, representing known functional sites, and substructures of functionally uncharacterized proteins (targets). Observations from several research groups concur that statistically significant matches can indicate functionally related active sites. One major subproblem is the design of effective motifs, which have many matches to functionally related targets (sensitive motifs), and few matches to functionally unrelated targets (specific motifs). Current techniques select and combine structural, physical, and evolutionary properties to generate motifs that mirror functional characteristics in active sites. This approach ignores incidental similarities that may occur with functionally unrelated proteins. To address this problem, we have developed Geometric Sieving (GS), a parallel distributed algorithm that efficiently refines motifs, designed by existing methods, into optimized motifs with maximal geometric and chemical dissimilarity from all known protein structures. In exhaustive comparison of all possible motifs based on the active sites of 10 well-studied proteins, we observed that optimized motifs were among the most sensitive and specific.
Tango, Fabio; Minin, Luca; Tesauri, Francesco; Montanari, Roberto
2010-03-01
This paper describes the field tests on a driving simulator carried out to validate the algorithms and the correlations of dynamic parameters, specifically driving task demand and drivers' distraction, able to predict drivers' intentions. These parameters belong to the driver's model developed by AIDE (Adaptive Integrated Driver-vehicle InterfacE) European Integrated Project. Drivers' behavioural data have been collected from the simulator tests to model and validate these parameters using machine learning techniques, specifically the adaptive neuro fuzzy inference systems (ANFIS) and the artificial neural network (ANN). Two models of task demand and distraction have been developed, one for each adopted technique. The paper provides an overview of the driver's model, the description of the task demand and distraction modelling and the tests conducted for the validation of these parameters. A test comparing predicted and expected outcomes of the modelled parameters for each machine learning technique has been carried out: for distraction, in particular, promising results (low prediction errors) have been obtained by adopting an artificial neural network.
A revised partiality model and post-refinement algorithm for X-ray free-electron laser data
Ginn, Helen Mary; Brewster, Aaron S.; Hattne, Johan; Evans, Gwyndaf; Wagner, Armin; Grimes, Jonathan M.; Sauter, Nicholas K.; Sutton, Geoff; Stuart, David Ian
2015-05-23
An updated partiality model and post-refinement algorithm for XFEL snapshot diffraction data is presented and confirmed by observing anomalous density for S atoms at an X-ray wavelength of 1.3 Å. Research towards using X-ray free-electron laser (XFEL) data to solve structures using experimental phasing methods such as sulfur single-wavelength anomalous dispersion (SAD) has been hampered by shortcomings in the diffraction models for X-ray diffraction from FELs. Owing to errors in the orientation matrix and overly simple partiality models, researchers have required large numbers of images to converge to reliable estimates for the structure-factor amplitudes, which may not be feasible for all biological systems. Here, data for cytoplasmic polyhedrosis virus type 17 (CPV17) collected at 1.3 Å wavelength at the Linac Coherent Light Source (LCLS) are revisited. A previously published definition of a partiality model for reflections illuminated by self-amplified spontaneous emission (SASE) pulses is built upon, which defines a fraction between 0 and 1 based on the intersection of a reflection with a spread of Ewald spheres modelled by a super-Gaussian wavelength distribution in the X-ray beam. A method of post-refinement to refine the parameters of this model is suggested. This has generated a merged data set with an overall discrepancy (by calculating the Rsplit value) of 3.15% to 1.46 Å resolution from a 7225-image data set. The atomic numbers of C, N and O atoms in the structure are distinguishable in the electron-density map. There are 13 S atoms within the 237 residues of CPV17, excluding the initial disordered methionine. These only possess 0.42 anomalous scattering electrons each at 1.3 Å wavelength, but the 12 that have single predominant positions are easily detectable in the anomalous difference Fourier map. It is hoped that these improvements will lead towards XFEL experimental phase determination and structure determination by sulfur SAD and will generally increase the utility of the method for difficult cases.
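The super-Gaussian partiality idea in the abstract above can be sketched numerically. The weight function below, its exponent and bandwidth values are illustrative assumptions for a minimal demonstration, not the paper's fitted model:

```python
import math

# Illustrative partiality weight in the spirit of the model described above:
# a reflection's recorded fraction (0..1) falls off as a super-Gaussian in
# its offset from the centre of the Ewald-sphere spread. The exponent and
# bandwidth below are arbitrary stand-ins, not fitted parameters.
def partiality(delta, bandwidth=1.0e-3, exponent=4.0):
    """delta: offset of the reflection from the central Ewald sphere,
    in the same (reciprocal-space) units as bandwidth."""
    return math.exp(-0.5 * (abs(delta) / bandwidth) ** exponent)

assert partiality(0.0) == 1.0       # fully recorded at the centre of the spread
assert partiality(5.0e-3) < 0.01    # far off the sphere: essentially unrecorded
assert 0.0 < partiality(1.0e-3) < 1.0
```

A super-Gaussian (exponent > 2) gives a flatter top and sharper shoulders than a plain Gaussian, which is the qualitative behaviour the SASE wavelength spread motivates.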
Commentary to "Multiple Grammars and Second Language Representation," by Luiz Amaral and Tom Roeper
ERIC Educational Resources Information Center
Pérez-Leroux, Ana T.
2014-01-01
In this commentary, the author defends the Multiple Grammars (MG) theory proposed by Luiz Amaral and Tom Roeper (A&R) in the present issue. Topics discussed include second language acquisition, the concept of developmental optionality, and the idea that structural decisions involve the lexical dimension. The author states that A&R's…
Omnivorous Representation Might Lead to Indigestion: Commentary on Amaral and Roeper
ERIC Educational Resources Information Center
Slabakova, Roumyana
2014-01-01
This article offers commentary that the Multiple Grammar (MG) language acquisition theory proposed by Luiz Amaral and Tom Roeper (A&R) in the present issue lacks elaboration of the psychological mechanisms at work in second language acquisition. Topics discussed include optionality in a speaker's grammar and the rules of verb position in…
Ji, Yongchang; Marinescu, Dan C.; Zhang, Wei; Zhang, Xing; Yan, Xiaodong; Baker, Timothy S.
2014-01-01
We present a model-based parallel algorithm for origin and orientation refinement for 3D reconstruction in cryoTEM. The algorithm is based upon the Projection Theorem of the Fourier Transform. Rather than projecting the current 3D model and searching for the best match between an experimental view and the calculated projections, the algorithm computes the Discrete Fourier Transform (DFT) of each projection and searches for the central section (“cut”) of the 3D DFT that best matches the DFT of the projection. Factors that affect the efficiency of a parallel program are first reviewed and then the performance and limitations of the proposed algorithm are discussed. The parallel program that implements this algorithm, called PO2R, has been used for the refinement of several virus structures, including those of the 500 Å diameter dengue virus (to 9.5 Å resolution), the 850 Å mammalian reovirus (to better than 7 Å), and the 1800 Å paramecium bursaria chlorella virus (to 15 Å). PMID:16459100
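The Projection Theorem that PO2R exploits is easy to verify numerically: the 2D DFT of a projection of a volume equals the central section of the volume's 3D DFT perpendicular to the projection direction. A minimal NumPy sketch, with a random volume standing in for a 3D reconstruction:

```python
import numpy as np

# Projection-slice (Fourier Projection) theorem: projecting along axis 0
# and taking the 2D DFT gives the zero-frequency plane of that axis in
# the volume's 3D DFT. PO2R searches over such central sections rather
# than recomputing real-space projections.
rng = np.random.default_rng(0)
volume = rng.random((16, 16, 16))            # stand-in for a 3D density model

projection = volume.sum(axis=0)              # simulated experimental view
dft_of_projection = np.fft.fft2(projection)

central_section = np.fft.fftn(volume)[0, :, :]  # central "cut" of the 3D DFT

assert np.allclose(dft_of_projection, central_section)
```

For arbitrary orientations the central section is an oblique plane through the 3D DFT, obtained by interpolation, but the axis-aligned case above captures the identity the algorithm rests on.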
Low-thrust orbit transfer optimization with refined Q-law and multi-objective genetic algorithm
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Petropoulos, Anastassios E.; von Allmen, Paul
2005-01-01
An optimization method for low-thrust orbit transfers around a central body is developed using the Q-law and a multi-objective genetic algorithm. In the hybrid method, the Q-law generates candidate orbit transfers, and the multi-objective genetic algorithm optimizes the Q-law control parameters in order to simultaneously minimize both the consumed propellant mass and the flight time of the orbit transfer. This paper addresses the problem of finding optimal orbit transfers for low-thrust spacecraft.
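The multi-objective selection step can be illustrated with a minimal Pareto-front sketch. The candidate (propellant, time) pairs below are random stand-ins, not Q-law trajectories:

```python
import random

# Sketch of the Pareto-dominance bookkeeping a multi-objective GA uses to
# trade off propellant mass against flight time. Minimizing both objectives,
# a candidate survives if no other candidate is at least as good in both
# and strictly better in one.
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

random.seed(1)
candidates = [(random.uniform(10, 100),   # propellant mass, kg (toy values)
               random.uniform(1, 50))     # flight time, days (toy values)
              for _ in range(50)]
front = pareto_front(candidates)

# every member of the front is non-dominated; most random candidates are not
assert all(not dominates(o, c) for c in front for o in candidates)
assert 0 < len(front) < len(candidates)
```

A full GA would then apply crossover and mutation biased toward the front; the dominance test above is the piece that replaces a single scalar fitness.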
NASA Astrophysics Data System (ADS)
Bay, Annick; Mayer, Alexandre
2014-09-01
The efficiency of light-emitting diodes (LED) has increased significantly over the past few years, but the overall efficiency is still limited by total internal reflections due to the high dielectric-constant contrast between the incident and emergent media. The bioluminescent organ of fireflies gave incentive for light-extraction enhancement studies. A specific factory-roof shaped structure was shown, by means of light-propagation simulations and measurements, to enhance light extraction significantly. In order to achieve a similar effect for light-emitting diodes, the structure needs to be adapted to the specific set-up of LEDs. In this context simulations were carried out to determine the best geometrical parameters. In the present work, the search for a geometry that maximizes the extraction of light has been conducted by using a genetic algorithm. The idealized structure considered previously was generalized to a broader variety of shapes. The genetic algorithm makes it possible to search simultaneously over a wider range of parameters. It is also significantly less time-consuming than the previous approach, which was based on a systematic scan over parameters. The results of the genetic algorithm show that (1) the calculations can be performed in a smaller amount of time and (2) the light extraction can be enhanced even more significantly by using optimal parameters determined by the genetic algorithm for the generalized structure. The combination of the genetic algorithm with the Rigorous Coupled Waves Analysis method constitutes a strong simulation tool, which provides us with adapted designs for enhancing light extraction from light-emitting diodes.
Kato, Rodrigo B; Silva, Frederico T; Pappa, Gisele L; Belchior, Jadson C
2015-01-28
We report the use of genetic algorithms (GA) as a method to refine force field parameters in order to determine RNA energy. Quantum-mechanical (QM) calculations are carried out for the isolated canonical ribonucleosides (adenosine, guanosine, cytidine and uridine), which are taken as reference data. In this particular study, the dihedral and electrostatic energies are reparametrized in order to test the proposed approach, i.e., GA coupled with QM calculations. Overall, RMSE comparison with recently published results for ribonucleoside energies shows an improvement, on average, of 50%. Finally, the new reparametrized potential energy function is used to determine the spatial structure of RNA (PDB code ) that was not taken into account in the parametrization process. This structure was improved by about 82% compared with previously published results.
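The GA-plus-reference-energy loop described above can be sketched as follows. The dihedral energy form, the synthetic "QM" reference and all GA settings below are illustrative assumptions, not the paper's parametrization:

```python
import math
import random

# Hedged sketch of GA-based force-field refinement: evolve a dihedral force
# constant k so that the model energy E(phi) = k * (1 + cos(3*phi)) matches
# reference energies in the RMSE sense. The reference here is synthetic,
# generated from a known k, standing in for QM single-point energies.
PHIS = [i * 2 * math.pi / 36 for i in range(36)]
K_TRUE = 1.4                                   # pretend QM-derived target
REF = [K_TRUE * (1 + math.cos(3 * p)) for p in PHIS]

def rmse(k):
    return math.sqrt(sum((k * (1 + math.cos(3 * p)) - e) ** 2
                         for p, e in zip(PHIS, REF)) / len(PHIS))

def evolve(pop_size=30, generations=40, sigma=0.1):
    random.seed(2)
    pop = [random.uniform(0.0, 5.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=rmse)                      # fitness = RMSE vs reference
        parents = pop[:pop_size // 2]           # truncation selection (elitist)
        children = [p + random.gauss(0.0, sigma) for p in parents]  # mutation
        pop = parents + children
    return min(pop, key=rmse)

best = evolve()
assert abs(best - K_TRUE) < 0.1                 # recovers the target constant
```

The real problem optimizes many coupled dihedral and electrostatic parameters at once, but the structure of the loop (evaluate RMSE against reference energies, select, mutate) is the same.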
Detection and Tracking Algorithm Refinement.
1981-10-01
A revised processing mode accommodates arbitrary changes in PRF, integrator type, or scan geometry, and a revised output format provides a sorted hierarchical list. (The remainder of this record consists of raw Doppler data-format tables and appendix listings: input parameters, output file word format, radar data formats, glossary, and subroutine listings.)
NASA Astrophysics Data System (ADS)
Hamimi, Z.; Kassem, O. M. K.; El-Sabrouty, M. N.
2015-09-01
The rotation of rigid objects within a flowing viscous medium is a function of several factors, including the degree of non-coaxiality. The relationship between the orientation of such objects and their aspect ratio can be used in vorticity analyses in a variety of geological settings. A method for quantitative estimation of the kinematic vorticity number (Wm) has been applied using rotated rigid objects, such as quartz and feldspar grains. The kinematic vorticity number determined for the high-temperature mylonitic Abt schist in the Al Amar area, extreme eastern Arabian Shield, ranges from ˜0.8 to 0.9. Results from the vorticity and strain analyses indicate that deformation in the area deviated from simple shear. It is concluded that nappe stacking occurred early, during an earlier thrusting event, probably by brittle imbrication. Ductile strain was superimposed on the nappe structure at high pressure, as revealed by a penetrative subhorizontal foliation developed subparallel to the tectonic contacts with the underlying and overlying nappes. Accumulation of ductile strain during underplating was not by simple shear but involved a component of vertical shortening, which caused the subhorizontal foliation in the Al Amar area. In most cases, this foliation formed concurrently with thrust-sheet imbrication, indicating that nappe stacking was associated with vertical shortening.
A multiscale hybrid algorithm for fluctuating hydrodynamics
NASA Astrophysics Data System (ADS)
Williams, Sarah Anne
We develop an algorithmic hybrid for simulating multiscale fluid flow with microscopic fluctuations. Random fluctuations occur in fluids at microscopic scales, and these microscopic fluctuations can lead to macroscopic system effects. For example, in the Rayleigh-Taylor problem, where a relatively heavy gas sits on top of a relatively light gas, spontaneous microscopic fluctuation at the interface of the gases leads to turbulent mixing. Given near-term computational power, the physical and temporal domain on which these systems can be studied using traditional particle simulations is extremely limited. Therefore, we seek algorithmic solutions to increase the effective computing power available to study such problems. We develop an explicit numerical solver for the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics via stochastic fluxes. A major goal is to correctly preserve the influence of the microscopic fluctuations on the behavior of the system. We show that several classical approaches fail to accurately reproduce fluctuations in energy or density, and we introduce a customized conservative centered scheme with a third-order Runge-Kutta temporal integrator that is specifically designed to produce correct fluctuations in all conserved quantities. We then use the adaptive mesh and algorithm refinement (AMAR) paradigm to create a multiscale hybrid method by coupling our LLNS solver with the direct simulation Monte Carlo (DSMC) particle method. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically at non-hydrodynamic time scales. As an extension of
NASA Astrophysics Data System (ADS)
Frey, F. A.; Walker, N.; Stakes, D.; Hart, S. R.; Nielsen, R.
1993-03-01
The axial valley of the Mid-Atlantic Ridge from 36° to 37°N was intensively sampled by submersible during the FAMOUS and AMAR projects. Our research focussed on the compositional and isotopic characteristics of basaltic glasses from the AMAR valley and the NARROWGATE region of the FAMOUS valley. These basaltic glasses are characterized by: (1) major element abundance trends that are consistent with control by multiphase fractionation (olivine, plagioclase and clinopyroxene) and magma mixing, (2) near isotopic homogeneity (δ18O = 5.2 to 6.4, 87Sr/86Sr = 0.70288 to 0.70299 and 206Pb/204Pb = 18.57 to 18.84), and (3) a wide range of incompatible element abundance ratios; e.g., within the AMAR valley chondrite-normalized La/Sm ranges from 0.7 to 1.5 and La/Yb from 0.6 to 1.6. These ratios increase with decreasing MgO content. Because of the limited variations in isotopic ratios of Sr, Nd and Pb, it is plausible that these compositional variations reflect post-melting magmatic processes. However, it is not possible to explain the correlated variation in MgO content and incompatible element abundance ratios, such as La/Sm and Zr/Nb, by fractional crystallization or more complex processes such as boundary layer fractionation. These geochemical trends can be explained by mixing of parental magmas that formed by very different extents of melting. In particular, the factor of three variation in Ce content in samples with ˜2.1% Na2O and 8% MgO requires a component derived by < 1% melting. If the large variations in abundance ratios of incompatible elements reflect the melting process, a large, long-lived magma chamber was not present during eruption of these AMAR lavas. The geological characteristics of the AMAR valley and the compositions of AMAR lavas are consistent with episodic volcanism; i.e., periods of magma eruption were followed by extensive periods of tectonism with little or no magmatism.
NASA Astrophysics Data System (ADS)
Ragusa, Maria Alessandra; Russo, Giulia
2016-07-01
Ben Amar and Bianca valuably reviewed the state of the art of the fibrosis modeling approach scenario [1]. Each paragraph identifies and examines a specific theoretical tool according to its scale level (molecular, cellular or tissue). For each, the area of application is shown, along with a clear description of strong and weak points. This critical analysis denotes the necessity to develop a more suitable and original multiscale approach in the future [2].
NASA Astrophysics Data System (ADS)
Bougault, H.; Aballéa, M.; Radford-Knoery, J.; Charlou, J. L.; Baptiste, P. Jean; Appriou, P.; Needham, H. D.; German, C.; Miranda, M.
1998-09-01
Dynamic hydrocast experiments enabled Mn (TDM), CH4 concentrations and the δ3He ratio to be recorded through vertical cross-sections of hydrothermal plumes along the FAMOUS segment and the southern part of the AMAR segment on the Mid-Atlantic Ridge between 36°N and 37°N. Mn, CH4 and δ3He values all along both segments are well above the seawater background in the open ocean: they are interpreted to be the result of time-integrated hydrothermal discharges dispersed and mixed in a closed basin delineated by the rift valley and the segment ends. Hydrothermal activity along the FAMOUS and AMAR segments appears to be similar. A comparison of the residence times of the three tracers from the dispersed, time-integrated signals is proposed. Although the background values in this closed basin are high, some proximal and (or) large hydrothermal inputs, overprinted on the general time-integrated plume, can be detected (i.e. the Rainbow site south of AMAR). Based on the depth and the location of plumes, hydrothermal activity is not, by far, limited to the neo-volcanic inner floor of the valley and should involve the walls and complex offsets of the rift valley. Considering the Mn and CH4 concentrations in these plumes, two types of ocean-mantle interaction may be represented: hot, focused discharges on ultramafic exposures (Rainbow site) and low-temperature diffuse serpentinisation.
Parallel Adaptive Mesh Refinement
Diachin, L; Hornung, R; Plassmann, P; WIssink, A
2005-03-04
As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35].
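The flag-and-refine decision at the heart of AMR can be sketched in one dimension. The gradient threshold and the synthetic profile below are arbitrary choices for illustration:

```python
import numpy as np

# Sketch of the core AMR decision: flag coarse cells whose local gradient
# exceeds a threshold, then refine only those cells. The solution profile
# (a smoothed step, mimicking a flame front or shock) is synthetic.
x = np.linspace(0.0, 1.0, 65)              # coarse-grid cell edges (64 cells)
u = np.tanh((x - 0.5) / 0.02)              # sharp feature near x = 0.5

grad = np.abs(np.diff(u)) / np.diff(x)     # per-cell gradient magnitude
flagged = grad > 10.0                      # refinement criterion (threshold is arbitrary)

# refine flagged cells by bisection; keep coarse cells elsewhere
fine_edges = []
for left, right, f in zip(x[:-1], x[1:], flagged):
    fine_edges.append(left)
    if f:
        fine_edges.append(0.5 * (left + right))
fine_edges.append(x[-1])

# only the neighborhood of the steep front should be refined
assert 0 < flagged.sum() < flagged.size
assert len(fine_edges) == len(x) + flagged.sum()
```

Real AMR frameworks organize flagged cells into nested patch hierarchies and re-flag dynamically as features move, but the threshold-then-subdivide step above is the basic mechanism.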
NAFTA opportunities: Petroleum refining
Not Available
1993-01-01
The North American Free Trade Agreement (NAFTA) creates a more transparent environment for the sale of refined petroleum products to Mexico, and locks in access to Canada's relatively open market for these products. Canada and Mexico are sizable United States export markets for refined petroleum products, with exports of $556 million and $864 million, respectively, in 1992. These markets represent approximately 24 percent of total U.S. exports of these goods.
Mesh quality control for multiply-refined tetrahedral grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger
1994-01-01
A new algorithm for controlling the quality of multiply-refined tetrahedral meshes is presented in this paper. The basic dynamic mesh adaption procedure allows localized grid refinement and coarsening to efficiently capture aerodynamic flow features in computational fluid dynamics problems; however, repeated application of the procedure may significantly deteriorate the quality of the mesh. Results presented show the effectiveness of this mesh quality algorithm and its potential in the area of helicopter aerodynamics and acoustics.
Wood, A.; Cornitius, T.
1997-06-11
The U.S. refining industry is facing hard times. Slow growth, tough environmental regulations, and fierce competition - especially in retail gasoline - have squeezed margins and prompted a series of mergers and acquisitions. The trend has affected the smallest and largest players, and a series of transactions over the past two years has created a new industry lineup. Among the larger companies, Mobil and Amoco are the latest to consider a refining merger. That follows recent plans by Ashland and Marathon to merge their refining businesses, and the decision by Shell, Texaco, and Saudi Aramco to combine some U.S. operations. Many of the leading independent refiners have increased their scale by acquiring refinery capacity. With refining still in the doldrums, more independents are taking a closer look at boosting production of petrochemicals, which offer high growth and, usually, better margins. That is being helped by the shift to refinery processes that favor the increased production of light olefins for alkylation and the removal of aromatics, providing opportunity to extract these materials for the petrochemical market. 5 figs., 3 tabs.
Not Available
1991-02-28
The U.S. refining sector has been whipped into high-speed decisions since the invasion of Kuwait last summer, and its flexibility has been severely tested -- especially in the area of pricing. This issue shows facets of the roller-coaster ride such as crude oil costs, product values, and resulting margins. This issue also contains the following: (1) the ED Refining Netback Data Series for the U.S. Gulf and West Coasts, Rotterdam, and Singapore as of Feb. 22, 1991; and (2) the ED Fuel Price/Tax Series for countries of the Eastern Hemisphere, Feb. 1991 edition. 4 figs., 5 tabs.
NASA Technical Reports Server (NTRS)
Flemings, M. C.; Szekely, J.
1982-01-01
The relationship between fluid flow phenomena, nucleation, and grain refinement in solidifying metals both in the presence and in the absence of a gravitational field was investigated. The reduction of grain size in hard-to-process melts; the effects of undercooling on structure in solidification processes, including rapid solidification processing; and control of this undercooling to improve structures of solidified melts are considered. Grain refining and supercooling thermal modeling of the solidification process, and heat and fluid flow phenomena in the levitated metal droplets are described.
Choices, Frameworks and Refinement
NASA Technical Reports Server (NTRS)
Campbell, Roy H.; Islam, Nayeem; Johnson, Ralph; Kougiouris, Panos; Madany, Peter
1991-01-01
In this paper we present a method for designing operating systems using object-oriented frameworks. A framework can be refined into subframeworks. Constraints specify the interactions between the subframeworks. We describe how we used object-oriented frameworks to design Choices, an object-oriented operating system.
REFINING FLUORINATED COMPOUNDS
Linch, A.L.
1963-01-01
This invention relates to the method of refining a liquid perfluorinated hydrocarbon oil containing fluorocarbons from 12 to 28 carbon atoms per molecule by distilling between 150 deg C and 300 deg C at 10 mm Hg absolute pressure. The perfluorinated oil is washed with a chlorinated lower aliphatic hydrocarbon, which maintains a separate liquid phase when mixed with the oil. Impurities detrimental to the stability of the oil are extracted by the chlorinated lower aliphatic hydrocarbon. (AEC)
Greco, N.P.
1984-04-17
There is disclosed a process for removing tar bases and neutral oils from the Lurgi tar acids by treating the tar acids with aqueous sodium bisulfate to change the tar bases to salts and to hydrolyze the neutral oils to hydrolysis products and distilling the tar acids to obtain refined tar acid as the distillate while the tar base salts and neutral oil hydrolysis products remain as residue.
NASA Technical Reports Server (NTRS)
Ma, Chopo
2004-01-01
Since the ICRF was generated in 1995, VLBI modeling and estimation, data quality, source position stability analysis, and supporting observational programs have improved markedly. There are developing and potential applications in the areas of space navigation, Earth orientation monitoring, and optical astrometry from space that would benefit from a refined ICRF with enhanced accuracy, stability and spatial distribution. The convergence of analysis, focused observations, and astrometric needs should drive the production of a new realization in the next few years.
Refining retinoids with heteroatoms.
Benbrook, D M
2002-06-01
Retinoids are a group of synthetic compounds designed to refine the numerous biological activities of retinoic acid into pharmaceuticals for several diseases, including cancer. Designs that conformationally restricted the rotation of the structures resulted in arotinoids that were biologically active but had increased toxicity. Incorporation of a heteroatom in one cyclic ring of the arotinoid structures drastically reduced the toxicity while retaining biological activity. Clinical trials of a heteroarotinoid, Tazarotene, confirmed the improved chemotherapeutic ratio (efficacy/toxicity).
Capelli, Silvia C; Bürgi, Hans-Beat; Dittrich, Birger; Grabowsky, Simon; Jayatilaka, Dylan
2014-09-01
Hirshfeld atom refinement (HAR) is a method which determines structural parameters from single-crystal X-ray diffraction data by using an aspherical atom partitioning of tailor-made ab initio quantum mechanical molecular electron densities without any further approximation. Here the original HAR method is extended by implementing an iterative procedure of successive cycles of electron density calculations, Hirshfeld atom scattering factor calculations and structural least-squares refinements, repeated until convergence. The importance of this iterative procedure is illustrated via the example of crystalline ammonia. The new HAR method is then applied to X-ray diffraction data of the dipeptide Gly-l-Ala measured at 12, 50, 100, 150, 220 and 295 K, using Hartree-Fock and BLYP density functional theory electron densities and three different basis sets. All positions and anisotropic displacement parameters (ADPs) are freely refined without constraints or restraints - even those for hydrogen atoms. The results are systematically compared with those from neutron diffraction experiments at the temperatures 12, 50, 150 and 295 K. Although non-hydrogen-atom ADPs differ by up to three combined standard uncertainties (csu's), all other structural parameters agree within less than 2 csu's. Using our best calculations (BLYP/cc-pVTZ, recommended for organic molecules), the accuracy of determining bond lengths involving hydrogen atoms from HAR is better than 0.009 Å for temperatures of 150 K or below; for hydrogen-atom ADPs it is better than 0.006 Å² as judged from the mean absolute X-ray minus neutron differences. These results are among the best ever obtained. Remarkably, the precision of determining bond lengths and ADPs for the hydrogen atoms from the HAR procedure is comparable with that from the neutron measurements - an outcome which is obtained with a routinely achievable resolution of the X-ray data of 0.65 Å.
Minimally refined biomass fuel
Pearson, Richard K.; Hirschfeld, Tomas B.
1984-01-01
A minimally refined fluid composition, suitable as a fuel mixture and derived from biomass material, is comprised of one or more water-soluble carbohydrates such as sucrose, one or more alcohols having fewer than four carbons, and water. The carbohydrate provides the fuel source; water solubilizes the carbohydrates; and the alcohol aids in the combustion of the carbohydrate and reduces the viscosity of the carbohydrate/water solution. Because less energy is required to obtain the carbohydrate from raw biomass than to obtain alcohol, an overall energy savings is realized compared to fuels employing alcohol as the primary fuel.
Using Induction to Refine Information Retrieval Strategies
NASA Technical Reports Server (NTRS)
Baudin, Catherine; Pell, Barney; Kedar, Smadar
1994-01-01
Conceptual information retrieval systems use structured document indices, domain knowledge and a set of heuristic retrieval strategies to match user queries with a set of indices describing the document's content. Such retrieval strategies increase the set of relevant documents retrieved (increase recall), but at the expense of returning additional irrelevant documents (decrease precision). Usually in conceptual information retrieval systems this tradeoff is managed by hand and with difficulty. This paper discusses ways of managing this tradeoff by the application of standard induction algorithms to refine the retrieval strategies in an engineering design domain. We gathered examples of query/retrieval pairs during the system's operation using feedback from a user on the retrieved information. We then fed these examples to the induction algorithm and generated decision trees that refine the existing set of retrieval strategies. We found that (1) induction improved the precision on a set of queries generated by another user, without a significant loss in recall, and (2) in an interactive mode, the decision trees pointed out flaws in the retrieval and indexing knowledge and suggested ways to refine the retrieval strategies.
Adaptive mesh refinement for stochastic reaction-diffusion processes
Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros
2011-01-01
We present an algorithm for adaptive mesh refinement applied to mesoscopic stochastic simulations of spatially evolving reaction-diffusion processes. The transition rates for the diffusion process are derived on adaptive, locally refined structured meshes. Convergence of the diffusion process is presented and the fluctuations of the stochastic process are verified. Furthermore, a refinement criterion is proposed for the evolution of the adaptive mesh. The method is validated in simulations of reaction-diffusion processes as described by the Fisher-Kolmogorov and Gray-Scott equations.
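The abstract above proposes a refinement criterion for evolving the adaptive mesh. As a hedged illustration of the general idea (this is a generic gradient-based flagging rule, not the paper's own fluctuation-aware criterion; the function name is ours), cells of a 1D grid can be marked for refinement wherever the local solution jump exceeds a tolerance:

```python
import numpy as np

def flag_for_refinement(u, tol):
    """Flag cells whose local solution jump exceeds tol.

    A minimal gradient-based refinement criterion: both cells sharing
    a steep face are flagged, so refined regions bracket sharp fronts.
    """
    jump = np.abs(np.diff(u))          # |u[i+1] - u[i]| at each interior face
    flags = np.zeros(u.size, dtype=bool)
    flags[:-1] |= jump > tol           # cell left of a steep face
    flags[1:] |= jump > tol            # cell right of a steep face
    return flags
```

For a step-like profile, only the cells straddling the front are flagged, which is the behavior an AMR driver would use to place a finer patch there.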
Refinery Efficiency Improvement
WRI
2002-05-15
Refinery processes that convert heavy oils to lighter distillate fuels require heating for distillation, hydrogen addition or carbon rejection (coking). Efficiency is limited by the formation of insoluble carbon-rich coke deposits. Heat exchangers and other refinery units must be shut down for mechanical coke removal, resulting in a significant loss of output and revenue. When a residuum is heated above the temperature at which pyrolysis occurs (340 °C, 650 °F), there is typically an induction period before coke formation begins (Magaril and Aksenova 1968, Wiehe 1993). To avoid fouling, refiners often stop heating a residuum before coke formation begins, using arbitrary criteria. In many cases, this heating is stopped sooner than need be, resulting in less than maximum product yield. Western Research Institute (WRI) has developed innovative Coking Index concepts (patent pending) which can be used for process control by refiners to heat residua to the threshold, but not beyond the point, at which coke formation begins when petroleum residua are heated at pyrolysis temperatures (Schabron et al. 2001). The development of this universal predictor solves a long-standing problem in petroleum refining. These Coking Indexes have great potential value in improving the efficiency of distillation processes. The Coking Indexes were found to apply to residua in a universal manner, and the theoretical basis for the indexes has been established (Schabron et al. 2001a, 2001b, 2001c). For the first time, a few simple measurements indicate how close undesired coke formation is on the coke-formation induction time line. The Coking Indexes can lead to new process controls that can improve refinery distillation efficiency by several percentage points. Petroleum residua consist of an ordered continuum of solvated polar materials usually referred to as asphaltenes dispersed in a lower polarity solvent phase held together by intermediate polarity materials usually referred to as
Thailand: refining cultural values.
Ratanakul, P
1990-01-01
In the second of a set of three articles concerned with "bioethics on the Pacific Rim," Ratanakul, director of a research center for Southeast Asian cultures in Thailand, provides an overview of bioethical issues in his country. He focuses on four issues: health care allocation, AIDS, determination of death, and euthanasia. The introduction of Western medicine into Thailand has brought with it a multitude of ethical problems created in part by tension between Western and Buddhist values. For this reason, Ratanakul concludes that "bioethical enquiry in Thailand must not only examine ethical dilemmas that arise in the actual practice of medicine and research in the life sciences, but must also deal with the refinement and clarification of applicable Thai cultural and moral values."
NASA Astrophysics Data System (ADS)
Napoli, Gaetano
2016-07-01
The term fibrosis refers to the development of fibrous connective tissue, in an organ or in a tissue, as a reparative response to injury or damage. The review article by Ben Amar and Bianca [1] proposes a unified multiscale approach for the modeling of fibrosis, accounting for phenomena occurring at different spatial scales (molecular, cellular and macroscopic). The main aim is to define a general unified framework able to describe the mechanisms, not yet completely understood, that trigger physiological and pathological fibrosis.
Spherical Harmonic Decomposition of Gravitational Waves Across Mesh Refinement Boundaries
NASA Technical Reports Server (NTRS)
Fiske, David R.; Baker, John; vanMeter, James R.; Centrella, Joan M.
2005-01-01
We evolve a linearized (Teukolsky) solution of the Einstein equations with a non-linear Einstein solver. Using this testbed, we are able to show that such gravitational waves, defined by the Weyl scalars in the Newman-Penrose formalism, propagate faithfully across mesh refinement boundaries, and use, for the first time to our knowledge, a novel algorithm due to Misner to compute spherical harmonic components of our waveforms. We show that the algorithm performs extremely well, even when the extraction sphere intersects refinement boundaries.
Ellis, J. S.; Sullivan, T. J.; Baskett, R. L.
1998-06-01
The Atmospheric Release Advisory Capability (ARAC), located at the Lawrence Livermore National Laboratory, has since the late 1970's been involved in assessing consequences from nuclear and other hazardous material releases into the atmosphere. ARAC's primary role has been emergency response. However, after the emergency phase, there is still a significant role for dispersion modeling. This work usually involves refining the source term and, hence, the dose to the affected populations as additional information becomes available in the form of source term estimates (release rates, mix of material, and release geometry) and any measurements from passage of the plume and deposition on the ground. Many of the ARAC responses have been documented elsewhere [1]. Some of the more notable radiological releases in which ARAC has participated during the post-emergency phase have been the 1979 Three Mile Island nuclear power plant (NPP) accident outside Harrisburg, PA, the 1986 Chernobyl NPP accident in the Ukraine, and the 1996 Japan Tokai nuclear processing plant explosion. ARAC has also done post-emergency phase analyses for the 1978 Russian satellite COSMOS 954 reentry and subsequent partial burn-up of its on-board nuclear reactor, which deposited radioactive materials on the ground in Canada; the 1986 uranium hexafluoride spill in Gore, OK; the 1993 Russian Tomsk-7 nuclear waste tank explosion; and lesser releases of mostly tritium. In addition, ARAC has performed a key role in the contingency planning for possible accidental releases during the launch of spacecraft with radioisotope thermoelectric generators (RTGs) on board (i.e. Galileo, Ulysses, Mars-Pathfinder, and Cassini), and routinely exercises with the Federal Radiological Monitoring and Assessment Center (FRMAC) in preparation for offsite consequences of radiological releases from NPPs and nuclear weapon accidents or incidents. Several accident post-emergency phase assessments are discussed in this paper in order to illustrate
A template-based approach for parallel hexahedral two-refinement
Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.
2016-10-17
Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where minimum mesh quality is greater than scaled Jacobian 0.3 prior to smoothing.
Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Modiano, David; Colella, Phillip
1995-01-01
To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for transonic flow over a NACA0012 airfoil (AGARD-03 test case) and a reflection of a shock over a double wedge.
Stacey, J.S.; Stoeser, D.B.; Greenwood, W.R.; Fischer, L.B.
1984-01-01
U/Pb zircon model ages for 11 major units from this region indicate three stages of evolution: 1) plate convergence, 2) plate collision and 3) post-orogenic intracratonic activity. Convergence occurred between the western Afif and eastern Ar Rayn plates, which were separated by oceanic crust. Remnants of this crust now comprise the ophiolitic complexes of the Urd group; the oldest plutonic unit studied is from one such complex and gave an age of 694-698 m.y., while detrital zircons from an intercalated sedimentary formation were derived from source rocks with a mean age of 710 m.y. Plate convergence was terminated by collision of the two plates during the Al Amar orogeny, which began at ~670 m.y.; during collision, the Urd group rocks were deformed and in part obducted onto one or other of the plates. Synorogenic granitic rocks were intruded from 670 to 640 m.y., followed from 640 to 630 m.y. by unfoliated dioritic plutons emplaced in the Ar Rayn block. -R.A.H.
Crystal structure refinement with SHELXL
Sheldrick, George M.
2015-01-01
New features added to the refinement program SHELXL since 2008 are described and explained. The improvements in the crystal structure refinement program SHELXL have been closely coupled with the development and increasing importance of the CIF (Crystallographic Information Framework) format for validating and archiving crystal structures. An important simplification is that now only one file in CIF format (for convenience, referred to simply as ‘a CIF’) containing embedded reflection data and SHELXL instructions is needed for a complete structure archive; the program SHREDCIF can be used to extract the .hkl and .ins files required for further refinement with SHELXL. Recent developments in SHELXL facilitate refinement against neutron diffraction data, the treatment of H atoms, the determination of absolute structure, the input of partial structure factors and the refinement of twinned and disordered structures. SHELXL is available free to academics for the Windows, Linux and Mac OS X operating systems, and is particularly suitable for multiple-core processors.
Deformable complex network for refining low-resolution X-ray structures
Zhang, Chong; Wang, Qinghua; Ma, Jianpeng
2015-10-27
A new refinement algorithm called the deformable complex network that combines a novel angular network-based restraint with a deformable elastic network model in the target function has been developed to aid in structural refinement in macromolecular X-ray crystallography. In macromolecular X-ray crystallography, building more accurate atomic models based on lower resolution experimental diffraction data remains a great challenge. Previous studies have used a deformable elastic network (DEN) model to aid in low-resolution structural refinement. In this study, the development of a new refinement algorithm called the deformable complex network (DCN) is reported that combines a novel angular network-based restraint with the DEN model in the target function. Testing of DCN on a wide range of low-resolution structures demonstrated that it constantly leads to significantly improved structural models as judged by multiple refinement criteria, thus representing a new effective refinement tool for low-resolution structural determination.
Monitoring, Controlling, Refining Communication Processes
ERIC Educational Resources Information Center
Spiess, John
1975-01-01
Because internal communications are essential to school system success, monitoring, controlling, and refining communicative processes have become essential activities for the chief school administrator. (Available from Buckeye Association of School Administrators, 750 Brooksedge Blvd., Westerville, Ohio 43081) (Author/IRT)
Adaptive Mesh Refinement in CTH
Crawford, David
1999-05-04
This paper reports progress on implementing a new capability of adaptive mesh refinement in the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based, with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor and massively parallel platforms. An approximate factor of three in memory and performance improvements over comparable-resolution non-adaptive calculations has been demonstrated for a number of problems.
Refining the shifted topological vertex
Drissi, L. B.; Jehjouh, H.; Saidi, E. H.
2009-01-15
We study aspects of the refining and shifting properties of the 3d MacMahon function C_3(q) used in topological string theory and the BKP hierarchy. We derive explicit expressions for the shifted topological vertex S_{λμν}(q) and its refined version T_{λμν}(q, t). These vertices complete results in the literature.
Error bounds from extra precise iterative refinement
Demmel, James; Hida, Yozo; Kahan, William; Li, Xiaoye S.; Mukherjee, Soni; Riedy, E. Jason
2005-02-07
We present the design and testing of an algorithm for iterative refinement of the solution of linear equations, where the residual is computed with extra precision. This algorithm was originally proposed in the 1960s [6, 22] as a means to compute very accurate solutions to all but the most ill-conditioned linear systems of equations. However, two obstacles have until now prevented its adoption in standard subroutine libraries like LAPACK: (1) there was no standard way to access the higher precision arithmetic needed to compute residuals, and (2) it was unclear how to compute a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard [5] has recently removed the first obstacle. To overcome the second obstacle, we show how a single application of iterative refinement can be used to compute an error bound in any norm at small cost, and use this to compute both an error bound in the usual infinity norm and a componentwise relative error bound. We report extensive test results on over 6.2 million matrices of dimension 5, 10, 100, and 1000. As long as a normwise (resp. componentwise) condition number computed by the algorithm is less than 1/(max{10, √n}·ε_w), the computed normwise (resp. componentwise) error bound is at most 2·max{10, √n}·ε_w, and indeed bounds the true error. Here, n is the matrix dimension and ε_w is the single precision roundoff error. For worse conditioned problems, we get similarly small correct error bounds in over 89.4% of cases.
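The core loop described in this abstract is easy to sketch: solve in working (single) precision, compute the residual in higher (double) precision, and apply the correction. A minimal numpy illustration (not the LAPACK implementation; the function name and iteration count are our own choices):

```python
import numpy as np

def refine_solve(A, b, iters=5):
    """Solve Ax = b by single-precision solves plus iterative refinement
    with the residual computed in double precision (the key idea above)."""
    A32 = A.astype(np.float32)
    b32 = b.astype(np.float32)
    x = np.linalg.solve(A32, b32).astype(np.float64)   # working-precision solve
    for _ in range(iters):
        r = b.astype(np.float64) - A.astype(np.float64) @ x  # extra-precision residual
        d = np.linalg.solve(A32, r.astype(np.float32))       # cheap correction solve
        x = x + d.astype(np.float64)
    return x
```

For a well-conditioned system, the refined solution reaches far beyond single-precision accuracy even though every factorization/solve runs in float32; a production version would reuse one LU factorization rather than calling `solve` repeatedly.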
NASA Astrophysics Data System (ADS)
Kolev, Mikhail K.
2016-07-01
Over the last decades the collaboration between scientists from biology, medicine and pharmacology on one side and scholars from mathematics, physics, mechanics and computer science on the other has led to better understanding of the properties of living systems, the mechanisms of their functioning and interactions with the environment and to the development of new therapies for various disorders and diseases. The target paper [1] by Ben Amar and Bianca presents a detailed description of the research methods and techniques used by biomathematicians, bioinformaticians, biomechanicians and biophysicists for studying biological systems, and in particular in the context of pathological fibrosis.
Automatic adaptive grid refinement for the Euler equations
NASA Technical Reports Server (NTRS)
Berger, M. J.; Jameson, A.
1983-01-01
A method of adaptive grid refinement for the solution of the steady Euler equations for transonic flow is presented. The algorithm automatically decides where the coarse grid accuracy is insufficient and creates locally uniform refined grids in these regions. This typically occurs at the leading and trailing edges. The solution is then integrated to steady state using the same integrator (FLO52) in the interior of each grid. The boundary conditions needed on the fine grids are examined, and the importance of treating the fine/coarse grid interface conservatively is discussed. Numerical results are presented.
Model Refinement Using Eigensystem Assignment
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.
2000-01-01
A novel approach for the refinement of finite-element-based analytical models of flexible structures is presented. The proposed approach models the possible refinements in the mass, damping, and stiffness matrices of the finite element model in the form of a constant gain feedback with acceleration, velocity, and displacement measurements, respectively. Once the free elements of the structural matrices have been defined, the problem of model refinement reduces to obtaining position, velocity, and acceleration gain matrices with appropriate sparsity that reassign a desired subset of the eigenvalues of the model, along with partial mode shapes, from their baseline values to those obtained from system identification test data. A sequential procedure is used to assign one conjugate pair of eigenvalues at each step using symmetric output feedback gain matrices, and the eigenvectors are partially assigned, while ensuring that the eigenvalues assigned in the previous steps are not disturbed. The procedure can also impose that gain matrices be dissipative to guarantee the stability of the refined model. A numerical example, involving finite element model refinement for a structural testbed at NASA Langley Research Center (Controls-Structures-Interaction Evolutionary model) is presented to demonstrate the feasibility of the proposed approach.
Evolutionary Optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: genetic algorithms and differential evolution to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
Bauxite Mining and Alumina Refining
Frisch, Neale; Olney, David
2014-01-01
Objective: To describe bauxite mining and alumina refining processes and to outline the relevant physical, chemical, biological, ergonomic, and psychosocial health risks. Methods: Review article. Results: The most important risks relate to noise, ergonomics, trauma, and caustic soda splashes of the skin and eyes. Other risks of note relate to fatigue, heat, and solar ultraviolet and, for some operations, tropical diseases, venomous/dangerous animals, and remote locations. Exposures to bauxite dust, alumina dust, and caustic mist in contemporary best-practice bauxite mining and alumina refining operations have not been demonstrated to be associated with clinically significant decrements in lung function. Exposures to bauxite dust and alumina dust at such operations are also not associated with the incidence of cancer. Conclusions: A range of occupational health risks in bauxite mining and alumina refining requires the maintenance of effective control measures. PMID:24806720
Error sensitivity to refinement: a criterion for optimal grid adaptation
NASA Astrophysics Data System (ADS)
Luchini, Paolo; Giannetti, Flavio; Citro, Vincenzo
2016-11-01
Most indicators used for automatic grid refinement are suboptimal, in the sense that they do not truly minimize the global solution error. This paper concerns a new indicator, related to the sensitivity map of global stability problems, suitable for an optimal grid refinement that minimizes the global solution error. The new criterion is derived from the properties of the adjoint operator and provides a map of the sensitivity of the global error (or its estimate) to a local mesh refinement. Examples are presented both for a scalar partial differential equation and for the system of Navier-Stokes equations. In the latter case, we also present a grid-adaptation algorithm, based on the new estimator and on the FreeFem++ software, that improves the accuracy of the solution by almost two orders of magnitude by redistributing the nodes of the initial computational mesh.
Elliptic Solvers with Adaptive Mesh Refinement on Complex Geometries
Phillip, B.
2000-07-24
Adaptive Mesh Refinement (AMR) is a numerical technique for locally tailoring the resolution of computational grids. Multilevel algorithms for solving elliptic problems on adaptive grids include the Fast Adaptive Composite grid method (FAC) and its parallel variants (AFAC and AFACx). Theory that confirms the independence of the convergence rates of FAC and AFAC from the number of refinement levels exists under certain ellipticity and approximation property conditions. Similar theory needs to be developed for AFACx. The effectiveness of multigrid-based elliptic solvers such as FAC, AFAC, and AFACx on adaptively refined overlapping grids is not clearly understood. Finally, a non-trivial eye model problem will be solved by combining the power of overlapping grids for complex moving geometries, AMR, and multilevel elliptic solvers.
Numerical solution of plasma fluid equations using locally refined grids
Colella, P., LLNL
1997-01-26
This paper describes a numerical method for the solution of plasma fluid equations on block-structured, locally refined grids. The plasma under consideration is typical of those used for the processing of semiconductors. The governing equations consist of a drift-diffusion model of the electrons and an isothermal model of the ions coupled by Poisson's equation. A discretization of the equations is given for a uniform spatial grid, and a time-split integration scheme is developed. The algorithm is then extended to accommodate locally refined grids. This extension involves the advancement of the discrete system on a hierarchy of levels, each of which represents a degree of refinement, together with synchronization steps to ensure consistency across levels. A brief discussion of a software implementation is followed by a presentation of numerical results.
Conformal refinement of unstructured quadrilateral meshes
Garmella, Rao
2009-01-01
We present a multilevel adaptive refinement technique for unstructured quadrilateral meshes in which the mesh is kept conformal at all times. This means that the refined mesh, like the original, is formed of only quadrilateral elements that intersect strictly along edges or at vertices, i.e., vertices of one quadrilateral element do not lie in an edge of another quadrilateral. Elements are refined using templates based on 1:3 refinement of edges. We demonstrate that by careful design of the refinement and coarsening strategy, we can maintain high quality elements in the refined mesh. We demonstrate the method on a number of examples with dynamically changing refinement regions.
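The 1:3 edge refinement underlying these templates is simple to illustrate. As a hedged sketch (our own function, covering only the uniform case in which one quad becomes a 3x3 block of children; the paper's conformal templates also handle transitions to unrefined neighbors):

```python
import numpy as np

def refine_quad_1to3(corners):
    """Split one quadrilateral into a 3x3 block of child quads by
    bilinear interpolation of its four corners, i.e. each edge is
    subdivided 1:3 at parameters 1/3 and 2/3."""
    p00, p10, p11, p01 = np.asarray(corners, dtype=float)  # counter-clockwise

    def bilerp(u, v):
        # Bilinear map from the unit square onto the quad.
        return ((1-u)*(1-v))*p00 + (u*(1-v))*p10 + (u*v)*p11 + ((1-u)*v)*p01

    ts = np.linspace(0.0, 1.0, 4)  # parameter lines at 0, 1/3, 2/3, 1
    quads = [[bilerp(ts[i], ts[j]), bilerp(ts[i+1], ts[j]),
              bilerp(ts[i+1], ts[j+1]), bilerp(ts[i], ts[j+1])]
             for i in range(3) for j in range(3)]
    return np.array(quads)  # shape (9, 4, 2)
```

Because neighboring quads share the same 1/3 and 2/3 edge points, children meet strictly along edges or at vertices, which is the conformality property the abstract emphasizes.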
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Method for refining contaminated iridium
Heshmatpour, B.; Heestand, R.L.
1982-08-31
Contaminated iridium is refined by alloying it with an alloying agent selected from the group consisting of manganese and an alloy of manganese and copper, and then dissolving the alloying agent from the formed alloy to provide a purified iridium powder.
Method for refining contaminated iridium
Heshmatpour, Bahman; Heestand, Richard L.
1983-01-01
Contaminated iridium is refined by alloying it with an alloying agent selected from the group consisting of manganese and an alloy of manganese and copper, and then dissolving the alloying agent from the formed alloy to provide a purified iridium powder.
Multigrid for refined triangle meshes
Shapira, Yair
1997-02-01
A two-level preconditioning method for the solution of (locally) refined finite element schemes using triangle meshes is introduced. In the isotropic SPD case, it is shown that the condition number of the preconditioned stiffness matrix is bounded uniformly for all sufficiently regular triangulations. This is also verified numerically for an isotropic diffusion problem with highly discontinuous coefficients.
Refining analgesia strategies using lasers.
Hampshire, Victoria
2015-08-01
Sound programs for the humane care and use of animals within research facilities incorporate experimental refinements such as multimodal approaches for pain management. These approaches can include non-traditional strategies along with more established ones. The use of lasers for pain relief is growing in popularity among companion animal veterinary practitioners and technologists. Therefore, its application in the research sector warrants closer consideration.
GRAIN REFINEMENT OF URANIUM BILLETS
Lewis, L.
1964-02-25
A method of refining the grain structure of massive uranium billets without resort to forging is described. The method consists of beta-quenching the billets, annealing the quenched billets in the upper alpha temperature range, and extrusion-upsetting the billets sufficiently to increase the cross-sectional area by at least 5 percent. (AEC)
Bayesian ensemble refinement by replica simulations and reweighting
NASA Astrophysics Data System (ADS)
Hummer, Gerhard; Köfinger, Jürgen
2015-12-01
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
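The maximum-entropy reweighting underlying the EROS-style formulation can be illustrated with a single restraint: weights take the exponential form w_i ∝ exp(-λ s_i), with λ chosen so the reweighted average matches the experimental target. This toy (uniform prior, one observable, bisection on λ) is a standard maximum-entropy result, not the authors' full method; the function name is an assumption.

```python
import math

def reweight(obs, target, lam_lo=-50.0, lam_hi=50.0):
    """Find lambda such that the reweighted average of `obs` equals
    `target`, with weights w_i proportional to exp(-lambda * obs_i)
    over a uniform prior ensemble."""
    def avg(lam):
        w = [math.exp(-lam * s) for s in obs]
        z = sum(w)
        return sum(wi * s for wi, s in zip(w, obs)) / z
    # avg(lam) decreases monotonically in lam, so bisect.
    for _ in range(200):
        mid = 0.5 * (lam_lo + lam_hi)
        if avg(mid) > target:
            lam_lo = mid      # lambda too small: average still too high
        else:
            lam_hi = mid
    lam = 0.5 * (lam_lo + lam_hi)
    w = [math.exp(-lam * s) for s in obs]
    z = sum(w)
    return lam, [wi / z for wi in w]

# Four configurations with observable values 0..3; restrain the
# ensemble average to 1.0 (below the uniform average of 1.5).
obs = [0.0, 1.0, 2.0, 3.0]
lam, w = reweight(obs, target=1.0)
```

Because the target is below the unrestrained average, the recovered λ is positive, down-weighting high-observable configurations; in the replica picture discussed above, the same optimum is approached as the number of restrained replicas grows.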
CONSTRAINED-TRANSPORT MAGNETOHYDRODYNAMICS WITH ADAPTIVE MESH REFINEMENT IN CHARM
Miniati, Francesco; Martin, Daniel F. E-mail: DFMartin@lbl.gov
2011-07-01
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.
Parallel Adaptive Mesh Refinement Library
NASA Technical Reports Server (NTRS)
MacNeice, Peter; Olson, Kevin
2005-01-01
Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
A parallel algorithm for the non-symmetric eigenvalue problem
Dongarra, J.; Sidani, M.
1991-12-01
This paper describes a parallel algorithm for computing the eigenvalues and eigenvectors of a non-symmetric matrix. The algorithm is based on a divide-and-conquer procedure and uses an iterative refinement technique.
Tactical Synthesis Of Efficient Global Search Algorithms
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2009-01-01
Algorithm synthesis transforms a formal specification into an efficient algorithm that solves a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining its generic operators with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a purpose similar to that of the tactics used for determining indefinite integrals in calculus; that is, they suggest possible ways to attack the problem.
Coupling Kinetic and Hydrodynamic Models for Simulations of Gas Flows and Weakly Ionized Plasmas
NASA Astrophysics Data System (ADS)
Kolobov, V. I.; Arslanbekov, R. R.
2011-10-01
This paper presents adaptive kinetic/fluid models for simulations of gases and weakly ionized plasmas. We first describe a Unified Flow Solver (UFS), which combines Adaptive Mesh Refinement with automatic selection of kinetic or hydrodynamic models for different parts of flows. This Adaptive Mesh and Algorithm Refinement (AMAR) technique limits expensive atomistic-scale solutions only to the regions where they are needed. We present examples of plasma simulations with fluid models and describe kinetic solvers for electrons which are currently being incorporated into AMAR techniques for plasma simulations.
A Refined Cauchy-Schwarz Inequality
ERIC Educational Resources Information Center
Mercer, Peter R.
2007-01-01
The author presents a refinement of the Cauchy-Schwarz inequality. He shows his computations in which refinements of the triangle inequality and its reverse inequality are obtained for nonzero x and y in a normed linear space.
Reformulated Gasoline Market Affected Refiners Differently, 1995
1996-01-01
This article focuses on the costs of producing reformulated gasoline (RFG) as experienced by different types of refiners and on how these refiners fared this past summer, given the prices for RFG at the refinery gate.
A refined methodology for modeling volume quantification performance in CT
NASA Astrophysics Data System (ADS)
Chen, Baiyu; Wilson, Joshua; Samei, Ehsan
2014-03-01
The utility of the CT lung nodule volume quantification technique depends on the precision of the quantification. To enable the evaluation of quantification precision, we previously developed a mathematical model that related precision to image resolution and noise properties in uniform backgrounds in terms of an estimability index (e'). The e' was shown to predict empirical precision across 54 imaging and reconstruction protocols, but with different correlation qualities for FBP and iterative reconstruction (IR) due to the non-linearity of IR impacted by anatomical structure. To better account for the non-linearity of IR, this study aimed to refine the noise characterization of the model in the presence of textured backgrounds. Repeated scans of an anthropomorphic lung phantom were acquired. Subtracted images were used to measure the image quantum noise, which was then used to adjust the noise component of the e' calculation measured from a uniform region. In addition to the model refinement, the validation of the model was further extended to 2 nodule sizes (5 and 10 mm) and 2 segmentation algorithms. Results showed that the magnitude of IR's quantum noise was significantly higher in structured backgrounds than in uniform backgrounds (ASiR, 30-50%; MBIR, 100-200%). With the refined model, the correlation between e' values and empirical precision no longer depended on the reconstruction algorithm. In conclusion, the model with refined noise characterization reflected the non-linearity of iterative reconstruction in structured backgrounds, and successfully predicted quantification precision across a variety of nodule sizes, dose levels, slice thicknesses, reconstruction algorithms, and segmentation software.
Firing of pulverized solvent refined coal
Derbidge, T. Craig; Mulholland, James A.; Foster, Edward P.
1986-01-01
An air-purged burner for the firing of pulverized solvent refined coal is constructed and operated such that the solvent refined coal can be fired without the coking thereof on the burner components. The air-purged burner is designed for the firing of pulverized solvent refined coal in a tangentially fired boiler.
Solidification Based Grain Refinement in Steels
2010-07-20
… thermodynamics. 2) Experimentally verify the effectiveness of possible nucleating compounds. 3) Extend grain refinement theory and solidification knowledge through experimental data. 4) Determine structure-property relationships for the examined grain refiners. 5) Formulate processing techniques for using grain refiners in the steel casting industry. During Fiscal Year 2010, this project worked on determining structure-property relationships.
Grain Refinement of Deoxidized Copper
NASA Astrophysics Data System (ADS)
Balart, María José; Patel, Jayesh B.; Gao, Feng; Fan, Zhongyun
2016-10-01
This study reports the current status of grain refinement of copper, accompanied in particular by a critical appraisal of grain refinement of phosphorus-deoxidized, high-residual-P (DHP) copper microalloyed with 150 ppm Ag. Some deviations exist in terms of the growth restriction factor (Q) framework, on the basis of empirical evidence reported in the literature for grain size measurements of copper with individual additions of 0.05, 0.1, and 0.5 wt pct of Mo, In, Sn, Bi, Sb, Pb, and Se, cast under a protective atmosphere of pure Ar with water quenching. The columnar-to-equiaxed transition (CET) has been observed in copper with an individual addition of 0.4B, with combined additions of 0.4Zr-0.04P and 0.4Zr-0.04P-0.015Ag and, in a previous study, with combined additions of 0.1Ag-0.069P (in wt pct). CETs in these B- and Zr-treated casts have been ascribed to changes in the morphology and chemistry of particles, in association with free solute type and availability. No further grain-refining action was observed from microalloying additions of B, Mg, Ca, Zr, Ti, Mn, In, Fe, and Zn (~0.1 wt pct) relative to DHP-Cu microalloyed with Ag, so these additions are not relevant for the casting conditions studied. The critical microalloying element for grain size control in deoxidized copper, and in particular in DHP-Cu, is Ag.
NASA Astrophysics Data System (ADS)
Guerrini, Luca
2016-07-01
Martine Ben Amar and Carlo Bianca have written a valuable paper [1], a timely review of the different theoretical tools for the modeling of physiological and pathological fibrosis existing in the literature. The review [1] is written with clarity and in a simple way, which makes it understandable to a wide audience. The authors present an exhaustive exposition of the interplay between the different schools at work in the modeling of fibrosis diseases and a survey of the main theoretical approaches, among others: ODE-based models, PDE-based models, models with internal structure, continuum-mechanics approaches, and agent-based models. A critical analysis discusses their applicability, including advantages and disadvantages.
NASA Astrophysics Data System (ADS)
Pappalardo, Francesco; Pennisi, Marzio
2016-07-01
Fibrosis is a process in which excessive tissue formation in an organ follows the failure of a physiological reparative or reactive process. Mathematical and computational techniques may be used to improve the understanding of the mechanisms that lead to the disease and to test potential new treatments that may directly or indirectly have positive effects against fibrosis [1]. In this scenario, Ben Amar and Bianca [2] give us a broad picture of the existing mathematical and computational tools that have been used to model fibrotic processes at the molecular, cellular, and tissue levels. Among such techniques, agent-based models (ABM) can make a valuable contribution to the understanding and better management of fibrotic diseases.
NASA Astrophysics Data System (ADS)
Kachapova, Farida
2016-07-01
Mathematical and computational models in biology and medicine help to improve diagnostics and medical treatments. Modeling of pathological fibrosis is reviewed by M. Ben Amar and C. Bianca in [4]. Pathological fibrosis is the process in which excessive fibrous tissue is deposited in an organ or tissue during wound healing and can obliterate its normal function. In [4] the phenomena of fibrosis are briefly explained, including their causes, mechanisms, and management; research models of pathological fibrosis are described, compared, and critically analyzed. Different models are suitable at different levels: molecular, cellular, and tissue. The main goal of mathematical modeling of fibrosis is to predict the long-term behavior of the system depending on bifurcation parameters; there are two main trends: inhibition of fibrosis due to an active immune system, and swelling of fibrosis because of a weak immune system.
NASA Astrophysics Data System (ADS)
Wu, Min
2016-07-01
The development of anti-fibrotic therapies for a diversity of diseases has recently become increasingly urgent, for example in pulmonary, renal, and liver fibrosis [1,2], as well as in malignant tumor growth [3]. As reviewed by Ben Amar and Bianca [4], various theoretical, experimental, and in-silico models have been developed to understand the fibrosis process, and their implications for therapeutic strategies have been frequently demonstrated (e.g., [5-7]). In [4], these models are analyzed and sorted according to their approaches, and at the end of [4] a unified multi-scale approach is proposed to understand fibrosis. Since one of the major purposes of extensive modeling of fibrosis is to shed light on therapeutic strategies, theoretical, experimental, and in-silico studies of anti-fibrosis therapies should be conducted more intensively.
Crystallographic refinement of ligand complexes
Kleywegt, Gerard J.
2007-01-01
Model building and refinement of complexes between biomacromolecules and small molecules requires sensible starting coordinates as well as the specification of restraint sets for all but the most common non-macromolecular entities. Here, it is described why this is necessary, how it can be accomplished and what pitfalls need to be avoided in order to produce chemically plausible models of the low-molecular-weight entities. A number of programs, servers, databases and other resources that can be of assistance in the process are also discussed. PMID:17164531
Improved successive refinement for wavelet-based embedded image compression
NASA Astrophysics Data System (ADS)
Creusere, Charles D.
1999-10-01
In this paper we consider a new form of successive coefficient refinement which can be used in conjunction with embedded compression algorithms like Shapiro's EZW (Embedded Zerotree Wavelet) and Said & Pearlman's SPIHT (Set Partitioning in Hierarchical Trees). In the conventional refinement process, the approximation of a coefficient that was earlier determined to be significant is refined by transmitting one of two symbols--an `up' symbol if the actual coefficient value is in the top half of the current uncertainty interval or a `down' symbol if it is in the bottom half. In the modified scheme developed here, we transmit one of three symbols instead--`up', `down', or `exact'. The new `exact' symbol tells the decoder that its current approximation of a wavelet coefficient is exact to the level of precision desired. By applying this scheme in earlier work to lossless embedded compression (also called lossy/lossless compression), we achieved significant reductions in encoder and decoder execution times with no adverse impact on compression efficiency. These excellent results for lossless systems inspired us to adapt this refinement approach to lossy embedded compression. Unfortunately, the results we have achieved thus far for lossy compression are not as good.
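The three-symbol refinement scheme described above can be sketched as an interval-bisection codec. The function names and the stopping rule (emit `exact` once the interval midpoint is within a tolerance of the true value) are illustrative assumptions, not the authors' implementation.

```python
def refine_symbols(value, lo, hi, eps):
    """Emit `up`/`down` bisection symbols for a coefficient known to lie
    in [lo, hi]; emit `exact` and stop once the interval midpoint is
    within eps of the true value (eps must be > 0)."""
    out = []
    while True:
        mid = 0.5 * (lo + hi)
        if abs(value - mid) <= eps:
            out.append("exact")
            return out
        if value >= mid:
            out.append("up")
            lo = mid          # value is in the top half
        else:
            out.append("down")
            hi = mid          # value is in the bottom half

def decode(symbols, lo, hi):
    """Reconstruct the approximation from the symbol stream."""
    for s in symbols:
        mid = 0.5 * (lo + hi)
        if s == "exact":
            return mid        # decoder stops refining this coefficient
        if s == "up":
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

syms = refine_symbols(0.3, 0.0, 1.0, eps=0.01)
approx = decode(syms, 0.0, 1.0)
```

The `exact` symbol is what lets the decoder skip a coefficient in later refinement passes, which is the source of the execution-time savings reported for the lossless case.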
Refinement of the community detection performance by weighted relationship coupling
NASA Astrophysics Data System (ADS)
Min, Dong; Yu, Kai; Li, Hui-Jia
2017-03-01
The complexity of many community detection algorithms usually grows exponentially with network scale, which makes it hard to uncover community structure quickly. Inspired by the ideas of the well-known modularity optimization, in this paper we propose a proper weighting scheme utilizing a novel k-strength relationship which naturally represents the coupling distance between two nodes. Community structure detection using a generalized weighted modularity measure is then refined based on the weighted k-strength matrix. We apply our algorithm to both famous benchmark networks and real networks. Theoretical analysis and experiments show that the weighted algorithm can uncover communities quickly and accurately and can easily be extended to large-scale real networks.
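For reference, the generalized weighted modularity that such refinements build on can be computed directly from a weight matrix. This sketch uses the standard Newman weighted modularity with node strengths; in the paper's setting, the weighted k-strength matrix would supply `w`. The function name is an assumption.

```python
def weighted_modularity(w, comm):
    """Weighted modularity Q = (1/2W) * sum_ij (w_ij - s_i*s_j/(2W))
    over node pairs in the same community, where s_i is the strength
    (total incident weight) of node i. `w` is a symmetric weight matrix
    given as a list of lists, `comm` a community label per node."""
    n = len(w)
    strength = [sum(row) for row in w]
    two_w = sum(strength)              # 2W: every edge weight counted twice
    q = 0.0
    for i in range(n):
        for j in range(n):
            if comm[i] == comm[j]:
                q += w[i][j] - strength[i] * strength[j] / two_w
    return q / two_w

# Two disconnected unit-weight dyads, each its own community.
w = [[0, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]]
q = weighted_modularity(w, [0, 0, 1, 1])   # 0.5 for this partition
```

A weighting scheme such as the paper's k-strength coupling changes the entries of `w`, and thus which partition maximizes Q, without changing this evaluation routine.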
Elliptic Solvers for Adaptive Mesh Refinement Grids
Quinlan, D.J.; Dendy, J.E., Jr.; Shapira, Y.
1999-06-03
We are developing multigrid methods that will efficiently solve elliptic problems with anisotropic and discontinuous coefficients on adaptive grids. The final product will be a library that provides for the simplified solution of such problems. This library will directly benefit the efforts of other Laboratory groups. The focus of this work is research on serial and parallel elliptic algorithms and the inclusion of our black-box multigrid techniques into this new setting. The approach applies the Los Alamos object-oriented class libraries that greatly simplify the development of serial and parallel adaptive mesh refinement applications. In the final year of this LDRD, we focused on putting the software together; in particular we completed the final AMR++ library, we wrote tutorials and manuals, and we built example applications. We implemented the Fast Adaptive Composite Grid method as the principal elliptic solver. We presented results at the Overset Grid Conference and other more AMR-specific conferences. We worked on optimization of serial and parallel performance and published several papers on the details of this work. Performance remains an important issue and is the subject of continuing research work.
Adaptive mesh refinement in titanium
Colella, Phillip; Wen, Tong
2005-01-21
In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study in which we implement a subset of Chombo's functionality in Titanium. Chombo is a software package that applies the Adaptive Mesh Refinement methodology to numerical partial differential equations at the production level. Chombo takes the library approach to parallel programming (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation based on two grid configurations from a real application. Also provided are the counts of lines of code from both sides.
Materials refining on the Moon
NASA Astrophysics Data System (ADS)
Landis, Geoffrey A.
2007-05-01
Oxygen, metals, silicon, and glass are raw materials that will be required for long-term habitation and production of structural materials and solar arrays on the Moon. A process sequence is proposed for refining these materials from lunar regolith, consisting of separating the required materials from lunar rock with fluorine. The fluorine is brought to the Moon in the form of potassium fluoride and is liberated from the salt by electrolysis in a eutectic salt melt. Tetrafluorosilane produced by this process is reduced to silicon in a plasma reduction stage; the fluorine salts are reduced to metals by reaction with metallic potassium. Fluorine is recovered from residual MgF2 and CaF2 by reaction with K2O.
NASA Technical Reports Server (NTRS)
Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.
1995-01-01
Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods have been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of boundary geometry is important. The complex geometry is represented by using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.
Silicon refinement by chemical vapor transport
NASA Technical Reports Server (NTRS)
Olson, J.
1984-01-01
Silicon refinement by chemical vapor transport is discussed. The operating characteristics of the purification process, including factors affecting the rate, purification efficiency and photovoltaic quality of the refined silicon were studied. The casting of large alloy plates was accomplished. A larger research scale reactor is characterized, and it is shown that a refined silicon product yields solar cells with near state of the art conversion efficiencies.
Using Adaptive Mesh Refinement to Simulate Storm Surge
NASA Astrophysics Data System (ADS)
Mandli, K. T.; Dawson, C.
2012-12-01
Coastal hazards related to strong storms such as hurricanes and typhoons are among the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the subsequent rise in sea level during these storms. Unfortunately these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in the design of infrastructure or in forecasting with ensembles of probable storms. One solution to the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectation of the dynamics of the flow, as well as of particular regions of interest such as harbors. Simulations of many different applications have only been made possible by AMR-type algorithms, which have allowed otherwise impractical simulations to be performed for much less computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.
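The flagging step at the heart of AMR can be illustrated with a simple gradient criterion: cells where the solution jumps by more than a tolerance are marked for refinement. This is a generic sketch; GeoClaw's actual refinement criteria (wave-height thresholds, regions of interest) differ, and the function name is an assumption.

```python
def flag_cells(u, tol):
    """Flag cells whose undivided first difference exceeds tol --
    a common gradient-based AMR refinement criterion. Both cells
    adjacent to a large jump are flagged."""
    flags = [False] * len(u)
    for i in range(1, len(u)):
        if abs(u[i] - u[i - 1]) > tol:
            flags[i - 1] = True
            flags[i] = True
    return flags

# A 1D field with a sharp front: only the cells at the front are flagged,
# so refinement (and computational effort) concentrates there.
flags = flag_cells([1.0, 1.0, 1.0, 5.0, 5.0], tol=1.0)
```

Re-running the flagging as the surge evolves is what lets the fine grids track the moving front instead of covering the whole domain.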
1988 worldwide refining and gas processing directory
Not Available
1987-01-01
Innumerable revisions in names, addresses, phone numbers, telex numbers, and cable numbers have been made since the publication of the previous edition. This directory also contains several of the most vital and informative surveys of the petroleum industry, including the U.S. Refining Survey; the Worldwide Construction Survey in Refining, Sulfur, Gas Processing and Related Fuels; the Worldwide Refining and Gas Processing Survey; the Worldwide Catalyst Report; and the U.S. and Canadian Lube and Wax Capacities Report from the National Petroleum Refiners Association.
3D Continuum-Particle Simulations for Multiscale Hydrodynamics
NASA Astrophysics Data System (ADS)
Wijesinghe, Sanith; Hornung, Richard; Garcia, Alejandro; Hadjiconstantinou, Nicolas
2001-06-01
An adaptive mesh and algorithm refinement (AMAR) scheme to model multi-scale, continuum-particle hydrodynamic flows is presented. AMAR ensures the particle description is applied exclusively in regions with high flow gradients and discontinuous material interfaces, i.e. regions where the continuum flow assumptions are typically invalid. Direct Simulation Monte Carlo (DSMC) is used to model the particle regions on the finest grid of the adaptive hierarchy. The continuum flow is modelled using the compressible Euler equations and is solved using a second-order Godunov scheme. Coupling is achieved by conservation of fluxes across the continuum-particle grid boundaries. The AMAR data structures are supported by a C++ object-oriented framework (Structured Adaptive Mesh Refinement Application Infrastructure - SAMRAI) which allows for efficient parallel implementation. The scheme also extends to simulations of gas mixtures. Results for test cases are compared with theory and experiment.
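The flux-coupling idea can be shown in one dimension: if the continuum update uses the particle-measured flux at the interface, the combined scheme conserves the total (mass, momentum, energy) exactly. The names and the scalar finite-volume setting are illustrative assumptions, not SAMRAI's or the paper's API.

```python
def couple_fluxes(cont_flux, part_flux, iface):
    """Replace the continuum flux at interface index `iface` with the
    flux measured by the particle method, so material crossing the
    continuum-particle boundary is counted identically by both sides."""
    fluxes = list(cont_flux)
    fluxes[iface] = part_flux
    return fluxes

def update(u, fluxes, dt_over_dx):
    """Conservative finite-volume update:
    u_i -= (dt/dx) * (F_{i+1/2} - F_{i-1/2})."""
    return [ui - dt_over_dx * (fluxes[i + 1] - fluxes[i])
            for i, ui in enumerate(u)]

# Two cells with zero flux at the outer boundaries; the inner interface
# flux comes from the particle region. Whatever leaves one cell enters
# the other, so the total is unchanged.
fluxes = couple_fluxes([0.0, 0.5, 0.0], part_flux=0.4, iface=1)
u_new = update([1.0, 1.0], fluxes, dt_over_dx=0.1)
```

This is the discrete analogue of the statement that coupling is "achieved by conservation of fluxes across the continuum-particle grid boundaries."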
Block-structured adaptive mesh refinement - theory, implementation and application
Deiterding, Ralf
2011-01-01
Structured adaptive mesh refinement (SAMR) techniques can enable cutting-edge simulations of problems governed by conservation laws. Focusing on the strictly hyperbolic case, these notes explain all algorithmic and mathematical details of a technically relevant implementation tailored for distributed memory computers. An overview of the background of commonly used finite volume discretizations for gas dynamics is included and typical benchmarks to quantify accuracy and performance of the dynamically adaptive code are discussed. Large-scale simulations of shock-induced realistic combustion in non-Cartesian geometry and shock-driven fluid-structure interaction with fully coupled dynamic boundary motion demonstrate the applicability of the discussed techniques for complex scenarios.
Refining the shallow slip deficit
NASA Astrophysics Data System (ADS)
Xu, Xiaohua; Tong, Xiaopeng; Sandwell, David T.; Milliner, Christopher W. D.; Dolan, James F.; Hollingsworth, James; Leprince, Sebastien; Ayoub, Francois
2016-03-01
Geodetic slip inversions for three major (Mw > 7) strike-slip earthquakes (1992 Landers, 1999 Hector Mine and 2010 El Mayor-Cucapah) show a 15-60 per cent reduction in slip near the surface (depth < 2 km) relative to the slip at deeper depths (4-6 km). This significant difference between surface coseismic slip and slip at depth has been termed the shallow slip deficit (SSD). The large magnitude of this deficit has been an enigma, since it cannot be explained by shallow creep during the interseismic period or by triggered slip from nearby earthquakes. One potential explanation for the SSD is that the previous geodetic inversions lack data coverage close to the surface rupture, such that the shallow portions of the slip models are poorly resolved and generally underestimated. In this study, we improve the static coseismic slip inversion for these three earthquakes, especially at shallow depths, by: (1) including data capturing the near-fault deformation from optical imagery and SAR azimuth offsets; (2) refining the interferometric synthetic aperture radar processing with non-boxcar phase filtering, model-dependent range corrections, and more complete phase unwrapping by SNAPHU (Statistical Non-linear Approach for Phase Unwrapping) assuming a maximum discontinuity and an on-fault correlation mask; (3) using more detailed, geologically constrained fault geometries and (4) incorporating additional campaign global positioning system (GPS) data. The refined slip models result in much smaller SSDs of 3-19 per cent. We suspect that the remaining minor SSD for these earthquakes likely reflects a combination of our elastic model's inability to fully account for near-surface deformation, which will render our estimates of shallow slip minima, and potentially small amounts of interseismic fault creep or triggered slip, which could `make up' a small percentage of the coseismic SSD during the interseismic period. Our results indicate that it is imperative that slip inversions include
Adaptive Mesh Refinement for Microelectronic Device Design
NASA Technical Reports Server (NTRS)
Cwik, Tom; Lou, John; Norton, Charles
1999-01-01
Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchal elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smoothen and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of
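The adaptive strategy outlined above, estimate the per-element error, then refine where it is largest, reduces to a simple selection rule. This is a sketch under stated assumptions: real packages use a posteriori error estimators and tuned thresholds, and the function name is illustrative.

```python
def select_for_refinement(errors, fraction=0.2):
    """Return the indices of the elements carrying the largest estimated
    errors, marking the top `fraction` of elements for h-refinement
    (at least one element is always marked)."""
    n_refine = max(1, int(round(fraction * len(errors))))
    ranked = sorted(range(len(errors)),
                    key=lambda i: errors[i], reverse=True)
    return sorted(ranked[:n_refine])

# Five elements with estimated errors; refine the worst 40 percent.
marked = select_for_refinement([0.1, 0.9, 0.05, 0.4, 0.2], fraction=0.4)
```

In an h-refinement loop, the marked elements would be subdivided and the solve repeated until the estimated error stabilizes, which is the convergence check described in the abstract.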
Automated knowledge-base refinement
NASA Technical Reports Server (NTRS)
Mooney, Raymond J.
1994-01-01
Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.
i3Drefine software for protein 3D structure refinement and its assessment in CASP10.
Bhattacharya, Debswapna; Cheng, Jianlin
2013-01-01
Protein structure refinement refers to the process of improving the qualities of protein structures during structure modeling processes to bring them closer to their native states. Structure refinement has been drawing increasing attention in the community-wide Critical Assessment of techniques for Protein Structure prediction (CASP) experiments since its addition in the 8th CASP experiment. During the 9th and recently concluded 10th CASP experiments, a consistent growth in the number of refinement targets and participating groups has been witnessed. Yet, protein structure refinement still remains a largely unsolved problem, with the majority of participating groups in the CASP refinement category failing to consistently improve the quality of structures issued for refinement. To address this need, we developed a completely automated and computationally efficient protein 3D structure refinement method, i3Drefine, based on an iterative and highly convergent energy minimization algorithm with a powerful all-atom composite physics- and knowledge-based force field and a hydrogen bonding (HB) network optimization technique. In the recent community-wide blind experiment, CASP10, i3Drefine (as 'MULTICOM-CONSTRUCT') was ranked as the best method in the server section as per the official assessment of the CASP10 experiment. Here we provide the community with free access to the i3Drefine software, systematically analyse the performance of i3Drefine in strict blind mode on the refinement targets issued in the CASP10 refinement category, and compare it with other state-of-the-art refinement methods participating in CASP10. Our analysis demonstrates that i3Drefine was the only fully automated server participating in CASP10 exhibiting consistent improvement over the initial structures in both global and local structural quality metrics. An executable version of i3Drefine is freely available at http://protein.rnet.missouri.edu/i3drefine/.
Anomalies in the refinement of isoleucine
Berntsen, Karen R. M.; Vriend, Gert
2014-04-01
The side-chain torsion angles of isoleucines in X-ray protein structures are a function of resolution, secondary structure and refinement software. Detailing the standard torsion angles used in refinement software can improve protein structure refinement. A study of isoleucines in protein structures solved using X-ray crystallography revealed a series of systematic trends for the two side-chain torsion angles χ₁ and χ₂ dependent on the resolution, secondary structure and refinement software used. The average torsion angles for the nine rotamers were similar in high-resolution structures solved using either the REFMAC, CNS or PHENIX software. However, at low resolution these programs often refine towards somewhat different χ₁ and χ₂ values. Small systematic differences can be observed between refinement software that uses molecular dynamics-type energy terms (for example CNS) and software that does not use these terms (for example REFMAC). The target values in the molecular dynamics-type energy functions can also be improved.
Nose tip refinement using interdomal suture in caucasian nose
Pasinato, Rogério; Mocelin, Marcos; Berger, Cezar Augusto Sarraf
2012-01-01
Introduction: Refinement of the nose tip can be accomplished by a variety of techniques, but currently, the use of sutures in the nasal tip with conservative resection of the alar cartilage is the most frequently recommended approach. Objective: To classify the nasal tip and to demonstrate the interdomal suture applied to nasal tip refinement in the Caucasian nose, as well as to provide a simple and practical presentation of the surgical steps. Method: Development of a surgical algorithm for nasal tip surgery: 1. Interdomal suture (double binding suture); 2. Interdomal suture with alar cartilage weakening (cross-hatching); 3. Interdomal suture with cephalic removal of the alar cartilage (McIndoe technique), based on the nasal tip type classification. This classification assesses the interdomal distance (angle of domal divergence and intercrural distance), domal arch width, cartilage consistency, and skin type. The interdomal suture is performed through endonasal rhinoplasty by the basic technique without delivery (Converse-Diamond technique) under local anesthesia. Conclusion: This classification is simple and facilitates the approach of surgical treatment of the nasal tip through the interdomal suture, systematizing and standardizing surgical maneuvers for better refinement of the Caucasian nose. PMID:25991963
A refined wideband acoustical holography based on equivalent source method
Ping, Guoli; Chu, Zhigang; Xu, Zhongming; Shen, Linbang
2017-01-01
This paper is concerned with an acoustical engineering and mathematical physics problem: near-field acoustical holography based on the equivalent source method (ESM-based NAH). An important mathematical physics problem in ESM-based NAH is solving for the equivalent source strength, for which there are multiple algorithms, such as Tikhonov regularization ESM (TRESM), iterative weighted ESM (IWESM) and steepest descent iteration ESM (SDIESM). To explore a new algorithm that can achieve better reconstruction performance over a wide frequency band, a refined wideband acoustical holography (RWAH) is proposed. RWAH adopts IWESM below a transition frequency and switches to SDIESM above that transition frequency, and the principal components of the input data in RWAH are truncated. Further, the superiority of RWAH is verified by comparing the comprehensive performance of TRESM, IWESM, SDIESM and RWAH. Finally, experiments are conducted, confirming that RWAH can achieve better reconstruction performance over a wide frequency band. PMID:28266531
A refined wideband acoustical holography based on equivalent source method
NASA Astrophysics Data System (ADS)
Ping, Guoli; Chu, Zhigang; Xu, Zhongming; Shen, Linbang
2017-03-01
This paper is concerned with an acoustical engineering and mathematical physics problem: near-field acoustical holography based on the equivalent source method (ESM-based NAH). An important mathematical physics problem in ESM-based NAH is solving for the equivalent source strength, for which there are multiple algorithms, such as Tikhonov regularization ESM (TRESM), iterative weighted ESM (IWESM) and steepest descent iteration ESM (SDIESM). To explore a new algorithm that can achieve better reconstruction performance over a wide frequency band, a refined wideband acoustical holography (RWAH) is proposed. RWAH adopts IWESM below a transition frequency and switches to SDIESM above that transition frequency, and the principal components of the input data in RWAH are truncated. Further, the superiority of RWAH is verified by comparing the comprehensive performance of TRESM, IWESM, SDIESM and RWAH. Finally, experiments are conducted, confirming that RWAH can achieve better reconstruction performance over a wide frequency band.
Workshop on algorithms for macromolecular modeling. Final project report, June 1, 1994--May 31, 1995
Leimkuhler, B.; Hermans, J.; Skeel, R.D.
1995-07-01
A workshop was held on algorithms and parallel implementations for macromolecular dynamics, protein folding, and structural refinement. This document contains abstracts and brief reports from that workshop.
Three-dimensional unstructured grid refinement and optimization using edge-swapping
NASA Technical Reports Server (NTRS)
Gandhi, Amar; Barth, Timothy
1993-01-01
This paper presents a three-dimensional (3-D) edge-swapping method based on local transformations. This method extends Lawson's edge-swapping algorithm into 3-D. The 3-D edge-swapping algorithm is employed for the purpose of refining and optimizing unstructured meshes according to arbitrary mesh-quality measures. Several criteria, including Delaunay triangulations, are examined. Extensions from two to three dimensions of several known properties of Delaunay triangulations are also discussed.
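In two dimensions, Lawson's criterion swaps the shared edge of a triangle pair whenever the opposite vertex falls inside the neighbouring triangle's circumcircle. A minimal sketch of that 2-D test (the paper's 3-D extension is not reproduced here; names are illustrative):

```python
import numpy as np

def incircle(a, b, c, d):
    # Returns > 0 if point d lies strictly inside the circumcircle of
    # triangle (a, b, c), assuming (a, b, c) is counter-clockwise.
    m = np.array([
        [a[0] - d[0], a[1] - d[1], (a[0] - d[0])**2 + (a[1] - d[1])**2],
        [b[0] - d[0], b[1] - d[1], (b[0] - d[0])**2 + (b[1] - d[1])**2],
        [c[0] - d[0], c[1] - d[1], (c[0] - d[0])**2 + (c[1] - d[1])**2],
    ])
    return np.linalg.det(m)

def should_swap(a, b, c, d):
    # Edge (a, b), shared by triangles (a, b, c) and (b, a, d), violates
    # the Delaunay criterion (and should be swapped to edge (c, d))
    # when d lies inside the circumcircle of (a, b, c).
    return incircle(a, b, c, d) > 0.0
```

For a skinny triangle pair the test fires and the swap replaces the long shared edge with the short diagonal, which is the local transformation the 3-D algorithm generalizes.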
Refined Monte Carlo method for simulating angle-dependent partial frequency redistributions
NASA Technical Reports Server (NTRS)
Lee, J.-S.
1982-01-01
A refined algorithm for generating emission frequencies from angle-dependent partial frequency redistribution functions R sub II and R sub III is described. The improved algorithm has as its basis a 'rejection' technique that, for absorption frequencies x less than 5, involves no approximations. The resulting procedure is found to be essential for effective studies of radiative transfer in optically thick or temperature varying media involving angle-dependent partial frequency redistributions.
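The 'rejection' technique can be illustrated with a generic sampler; the R_II and R_III redistribution functions themselves are not reproduced here, so a simple stand-in density is used:

```python
import random

def rejection_sample(pdf, pdf_max, lo, hi):
    """Draw one sample from an (unnormalized) density on [lo, hi]."""
    while True:
        x = lo + (hi - lo) * random.random()        # uniform candidate
        if random.random() * pdf_max <= pdf(x):     # accept with prob pdf(x)/pdf_max
            return x

# Stand-in density p(x) = 2x on [0, 1]; its mean is 2/3.
random.seed(0)
samples = [rejection_sample(lambda x: 2.0 * x, 2.0, 0.0, 1.0)
           for _ in range(20000)]
```

The key property, exploited in the paper for absorption frequencies x < 5, is that acceptance/rejection samples the target distribution exactly (no approximation) as long as `pdf_max` bounds the density.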
North Dakota Refining Capacity Study
Dennis Hill; Kurt Swenson; Carl Tuura; Jim Simon; Robert Vermette; Gilberto Marcha; Steve Kelly; David Wells; Ed Palmer; Kuo Yu; Tram Nguyen; Juliam Migliavacca
2011-01-05
According to a 2008 report issued by the United States Geological Survey, North Dakota and Montana have an estimated 3.0 to 4.3 billion barrels of undiscovered, technically recoverable oil in an area known as the Bakken Formation. With the size and remoteness of the discovery, the question became 'can a business case be made for increasing refining capacity in North Dakota?' And, if so, what is the impact on existing players in the region? To answer the question, a study committee composed of leaders in the region's petroleum industry was brought together to define the scope of the study, hire a consulting firm, and oversee the study. The study committee met frequently to provide input on the findings and modify the course of the study, as needed. The study concluded that Petroleum Administration for Defense District II (PADD II) has an oversupply of gasoline. With that in mind, a niche market, naphtha, was identified. Naphtha is used as a diluent for pipelining bitumen (heavy crude) from Canada to crude markets. The study predicted there will continue to be an increase in the demand for naphtha through 2030. The study estimated the optimal configuration for the refinery at 34,000 barrels per day (BPD), producing 15,000 BPD of naphtha and a 52 percent refinery charge for jet and diesel yield. The financial modeling assumed the sponsor of a refinery would invest its own capital to pay for construction costs. With this assumption, the internal rate of return is 9.2 percent, which is not sufficient to attract traditional investment given the risk factor of the project. With that in mind, those interested in pursuing this niche market will need to identify incentives to improve the rate of return.
Structure Refinement of Protein Low Resolution Models Using the GNEIMO Constrained Dynamics Method
Park, In-Hee; Gangupomu, Vamshi; Wagner, Jeffrey; Jain, Abhinandan; Vaidehi, Nagarajan
2012-01-01
The challenge in protein structure prediction using homology modeling is the lack of reliable methods to refine low-resolution homology models. Unconstrained all-atom molecular dynamics (MD) does not serve well for structure refinement due to its limited conformational search. We have developed and tested a constrained MD method, based on the Generalized Newton-Euler Inverse Mass Operator (GNEIMO) algorithm, for protein structure refinement. In this method, the high-frequency degrees of freedom are replaced with hard holonomic constraints and a protein is modeled as a collection of rigid-body clusters connected by flexible torsional hinges. This allows larger integration time steps and enhances the conformational search space. In this work, we have demonstrated the use of a constraint-free GNEIMO method for protein structure refinement that starts from low-resolution decoy sets derived from homology methods. For the eight proteins, with three decoys each, we observed an improvement of ~2 Å in the RMSD to the known experimental structures of these proteins. The GNEIMO method also showed enrichment in the population density of native-like conformations. In addition, we demonstrated structural refinement using a “Freeze and Thaw” clustering scheme with the GNEIMO framework as a viable tool for enhancing localized conformational search. We have derived a robust protocol based on the GNEIMO replica exchange method for protein structure refinement that can be readily extended to other proteins and is possibly applicable to high-throughput protein structure refinement. PMID:22260550
GPU-Accelerated Asynchronous Error Correction for Mixed Precision Iterative Refinement
Anzt, Hartwig; Luszczek, Piotr; Dongarra, Jack; Heuveline, Vincent
2011-12-14
In hardware-aware high performance computing, block-asynchronous iteration and mixed precision iterative refinement are two techniques that are applied to leverage the computing power of SIMD accelerators like GPUs. Although they use very different approaches for this purpose, they share the basic idea of compensating for the convergence behaviour of an inferior numerical algorithm with a more efficient usage of the provided computing power. In this paper, we analyze the potential of combining both techniques. To that end, we implement a mixed precision iterative refinement algorithm using a block-asynchronous iteration as the error correction solver, and compare its performance with a pure implementation of a block-asynchronous iteration and an iterative refinement method using double precision for the error correction solver. For matrices from the University of Florida Matrix Collection, we report the convergence behaviour and provide the total solver runtime using different GPU architectures.
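The mixed precision iterative refinement half of this combination can be sketched in a few lines of NumPy. This is a minimal dense-matrix, CPU illustration of the classical scheme, not the paper's block-asynchronous GPU solver:

```python
import numpy as np

def mixed_precision_refine(A, b, iters=5):
    # Solve in float32 (the cheap, low-precision "inner" solver) while
    # accumulating residuals and corrections in float64.
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                               # double-precision residual
        d = np.linalg.solve(A32, r.astype(np.float32))
        x = x + d.astype(np.float64)                # double-precision update
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50)) + 50.0 * np.eye(50)  # well-conditioned test matrix
x_true = rng.standard_normal(50)
b = A @ x_true
x = mixed_precision_refine(A, b)
```

The low-precision solve does the heavy lifting; because each residual is evaluated in double precision, the iteration recovers a double-precision-accurate solution for well-conditioned systems, which is the convergence-compensation idea the abstract describes.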
Protein NMR structures refined without NOE data.
Ryu, Hyojung; Kim, Tae-Rae; Ahn, SeonJoo; Ji, Sunyoung; Lee, Jinhyuk
2014-01-01
The refinement of low-quality structures is an important challenge in protein structure prediction. Many studies have been conducted on protein structure refinement; the refinement of structures derived from NMR spectroscopy has been especially intensively studied. In this study, we generated a flat-bottom distance potential in place of NOE data, because NOE data have ambiguity and uncertainty. The potential was derived from distance information from the given structures and prevented structural dislocation during the refinement process. A simulated annealing protocol was used to minimize the potential energy of the structure. The protocol was tested on 134 NMR structures in the Protein Data Bank (PDB) that also have X-ray structures. Among them, 50 structures were used as a training set to find the optimal "width" parameter in the flat-bottom distance potential functions. In the validation set (the other 84 structures), most of the 12 quality assessment scores of the refined structures were significantly improved (total score increased from 1.215 to 2.044). Moreover, the secondary structure similarity of the refined structures was improved over that of the original structures. Finally, we demonstrate that the combination of two energy potentials, the statistical torsion angle potential (STAP) and the flat-bottom distance potential, can drive the refinement of NMR structures.
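A flat-bottom distance potential of the kind described can be written as follows. The harmonic walls outside the flat region and the function names are illustrative assumptions; `width` plays the role of the parameter tuned on the training set:

```python
def flat_bottom_potential(d, d_ref, width, k=1.0):
    # Zero penalty while distance d stays within `width` of its reference
    # value d_ref (taken from the given structure); harmonic penalty once
    # it leaves the flat bottom, preventing structural dislocation.
    lo, hi = d_ref - width, d_ref + width
    if d < lo:
        return k * (lo - d) ** 2
    if d > hi:
        return k * (d - hi) ** 2
    return 0.0
```

Inside the flat bottom the refinement is free to move atoms (unlike a plain harmonic restraint), while large excursions from the starting distances are penalized.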
Refining of metallurgical-grade silicon
NASA Technical Reports Server (NTRS)
Dietl, J.
1986-01-01
A basic requirement of large-scale solar cell fabrication is to provide low-cost base material. Unconventional refining of metallurgical-grade silicon represents one of the most promising ways of silicon meltstock processing. The refining concept is based on an optimized combination of metallurgical treatments. Commercially available crude silicon, in this sequence, requires a first pyrometallurgical step by slagging or, alternatively, solvent extraction by aluminum. After grinding and leaching, high-purity quality is gained as an advanced stage of refinement. To reach solar-grade quality, a final pyrometallurgical step is needed: liquid-gas extraction.
Firing of pulverized solvent refined coal
Lennon, Dennis R.; Snedden, Richard B.; Foster, Edward P.; Bellas, George T.
1990-05-15
A burner for the firing of pulverized solvent refined coal is constructed and operated such that the solvent refined coal can be fired successfully without any performance limitations and without coking of the solvent refined coal on the burner components. The burner is provided with a tangential inlet of primary air and pulverized fuel, a vaned diffusion swirler for the mixture of primary air and fuel, a center water-cooled conical diffuser shielding the incoming fuel from the heat radiation from the flame and deflecting the primary air and fuel stream into the secondary air, and a water-cooled annulus located between the primary and secondary air flows.
Towards adaptive kinetic-fluid simulations of low-temperature plasmas
NASA Astrophysics Data System (ADS)
Kolobov, Vladimir
2013-09-01
The emergence of new types of gaseous electronics in multi-phase systems calls for computational tools with adaptive kinetic-fluid simulation capabilities. We will present an Adaptive Mesh and Algorithm Refinement (AMAR) methodology for multi-scale simulations of gas flows and discuss current efforts towards extending this methodology for weakly ionized plasmas. The AMAR method combines Adaptive Mesh Refinement (AMR) with automatic selection of kinetic or fluid solvers in different parts of computational domains. This AMAR methodology was implemented in our Unified Flow Solver (UFS) for mixed rarefied and continuum flows. UFS uses a discrete velocity method for solving the Boltzmann kinetic equation under rarefied flow conditions, coupled to fluid (Navier-Stokes) solvers for continuum flow regimes. The main challenge of extending AMAR to plasmas comes from the disparity between the electron and atom masses. We will present multi-fluid, two-temperature plasma models with AMR capabilities for simulations of glow, corona, and streamer discharges. We will briefly discuss specifics of electron kinetics in collisional plasmas, and deterministic methods of solving kinetic equations for different electron groups. Kinetic solvers with Adaptive Mesh in Phase Space (AMPS) will be introduced to solve the Boltzmann equation for electrons in the presence of electric fields and elastic and inelastic collisions with atoms. These kinetic and fluid models are currently being incorporated into the AMAR methodology for multi-scale simulations of low-temperature plasmas in multi-phase systems. Supported by AFOSR, NASA, and DoE.
Refined Phenotyping of Modic Changes
Määttä, Juhani H.; Karppinen, Jaro; Paananen, Markus; Bow, Cora; Luk, Keith D.K.; Cheung, Kenneth M.C.; Samartzis, Dino
2016-01-01
…The strength of the associations increased with the number of Modic changes (MC). This large-scale study is the first to definitively note MC types and specific morphologies to be independently associated with prolonged severe low back pain (LBP) and back-related disability. This proposed refined MC phenotype may have direct implications in clinical decision-making as to the development and management of LBP. Understanding of these imaging biomarkers can lead to new preventative and personalized therapeutics related to LBP. PMID:27258491
1987 worldwide refining and gas processing directory
Not Available
1986-01-01
This book delineates an ever-varying aspect of the industry. Personnel names, plant sites, home office locations, sales and relocations - all have been compiled in this book. Inactive refineries have been updated and listed in a special section as well as active major refining and gas processing and construction projects worldwide. This directory also contains several of the most vital and informative surveys of the petroleum industry. It discusses the worldwide Construction Survey, U.S. Refining Survey, Worldwide Gas Processing Plant Survey, Worldwide Refining Survey, Worldwide Survey of Petroleum Derived Sulfur Production, and Worldwide Catalyst Report. Also included in the directory is the National Petroleum Refiners Association's U.S. and Canadian Lube and Wax Capacities Study.
U.S. Refining Capacity Utilization
1995-01-01
This article briefly reviews recent trends in domestic refining capacity utilization and examines in detail the differences in reported crude oil distillation capacities and utilization rates among different classes of refineries.
Refiners discuss HF alkylation process and issues
Not Available
1992-04-06
Safety and oxygenate operations made HF alkylation a hot topic of discussion at the most recent National Petroleum Refiners Association annual question and answer session on refining and petrochemical technology. This paper provides answers to a variety of questions regarding the mechanical, process, and safety aspects of the HF alkylation process. Among the issues discussed were mitigation techniques, removal of oxygenates from alkylation unit feed, and amylene alkylation.
A Novel Admixture-Based Pharmacogenetic Approach to Refine Warfarin Dosing in Caribbean Hispanics
Claudio-Campos, Karla; Rivera-Miranda, Giselle; Bermúdez-Bosch, Luis; Renta, Jessicca Y.; Cadilla, Carmen L.; Cruz, Iadelisse; Feliu, Juan F.; Vergara, Cunegundo; Ruaño, Gualberto
2016-01-01
Aim: This study is aimed at developing a novel admixture-adjusted pharmacogenomic approach to individually refine warfarin dosing in Caribbean Hispanic patients. Patients & Methods: A multiple linear regression analysis of effective warfarin doses versus relevant genotypes, admixture, clinical and demographic factors was performed in 255 patients and further validated externally in another cohort of 55 individuals. Results: The admixture-adjusted, genotype-guided warfarin dosing refinement algorithm developed in Caribbean Hispanics showed better predictability (R² = 0.70, MAE = 0.72 mg/day) than a clinical algorithm that excluded genotypes and admixture (R² = 0.60, MAE = 0.99 mg/day), and outperformed two prior pharmacogenetic algorithms in predicting effective dose in this population. For patients at the highest risk of adverse events, 45.5% of the dose predictions using the developed pharmacogenetic model resulted in an ideal dose, as compared with only 29% when using the clinical non-genetic algorithm (p<0.001). The admixture-driven pharmacogenetic algorithm predicted 58% of warfarin dose variance when externally validated in 55 individuals from an independent validation cohort (MAE = 0.89 mg/day, 24% mean bias). Conclusions: These results supported our rationale to incorporate individuals' genotypes and unique admixture metrics into pharmacogenetic refinement models in order to increase predictability when expanding them to admixed populations like Caribbean Hispanics. Trial Registration: ClinicalTrials.gov NCT01318057. PMID:26745506
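The model family used here, multiple linear regression of effective dose on genotype, admixture and clinical covariates, can be sketched with synthetic stand-in data. The covariates and coefficients below are hypothetical illustrations, not the study's algorithm:

```python
import numpy as np

# Hypothetical design matrix: intercept, age, VKORC1 variant count,
# CYP2C9 variant count, and an admixture fraction -- stand-ins for the
# study's actual covariates.
rng = np.random.default_rng(7)
n = 255
X = np.column_stack([
    np.ones(n),
    rng.uniform(30, 80, n),      # age (years)
    rng.integers(0, 3, n),       # VKORC1 variant alleles (0-2)
    rng.integers(0, 3, n),       # CYP2C9 variant alleles (0-2)
    rng.uniform(0, 1, n),        # admixture fraction
])
beta_true = np.array([6.0, -0.03, -1.0, -0.8, 1.5])   # invented coefficients
y = X @ beta_true + rng.normal(0.0, 0.3, n)           # "effective dose", mg/day

# Ordinary least squares fit, then the same summary statistics the
# abstract reports: mean absolute error and R-squared.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta
mae = np.mean(np.abs(pred - y))
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Predictability in the study is summarized by exactly these statistics (R² and MAE in mg/day), computed on held-out patients rather than the training cohort.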
Refinement of protein structures in explicit solvent.
Linge, Jens P; Williams, Mark A; Spronk, Christian A E M; Bonvin, Alexandre M J J; Nilges, Michael
2003-02-15
We present a CPU efficient protocol for refinement of protein structures in a thin layer of explicit solvent and energy parameters with completely revised dihedral angle terms. Our approach is suitable for protein structures determined by theoretical (e.g., homology modeling or threading) or experimental methods (e.g., NMR). In contrast to other recently proposed refinement protocols, we put a strong emphasis on consistency with widely accepted covalent parameters and computational efficiency. We illustrate the method for NMR structure calculations of three proteins: interleukin-4, ubiquitin, and crambin. We show a comparison of their structure ensembles before and after refinement in water with and without a force field energy term for the dihedral angles; crambin was also refined in DMSO. Our results demonstrate the significant improvement of structure quality by a short refinement in a thin layer of solvent. Further, they show that a dihedral angle energy term in the force field is beneficial for structure calculation and refinement. We discuss the optimal weight for the energy constant for the backbone angle omega and include an extensive discussion of meaning and relevance of the calculated validation criteria, in particular root mean square Z scores for covalent parameters such as bond lengths.
Structure refinement from precession electron diffraction data.
Palatinus, Lukáš; Jacob, Damien; Cuvillier, Priscille; Klementová, Mariana; Sinkler, Wharton; Marks, Laurence D
2013-03-01
Electron diffraction is a unique tool for analysing the crystal structures of very small crystals. In particular, precession electron diffraction has been shown to be a useful method for ab initio structure solution. In this work it is demonstrated that precession electron diffraction data can also be successfully used for structure refinement, if the dynamical theory of diffraction is used for the calculation of diffracted intensities. The method is demonstrated on data from three materials - silicon, orthopyroxene (Mg,Fe)₂Si₂O₆ and gallium-indium tin oxide (Ga,In)₄Sn₂O₁₀. In particular, it is shown that atomic occupancies of mixed crystallographic sites can be refined to an accuracy approaching X-ray or neutron diffraction methods. In comparison with conventional electron diffraction data, the refinement against precession diffraction data yields significantly lower figures of merit, higher accuracy of refined parameters, much broader radii of convergence, especially for the thickness and orientation of the sample, and significantly reduced correlations between the structure parameters. The full dynamical refinement is compared with refinement using kinematical and two-beam approximations, and is shown to be superior to the latter two.
Some observations on mesh refinement schemes applied to shock wave phenomena
NASA Technical Reports Server (NTRS)
Quirk, James J.
1995-01-01
This workshop's double-wedge test problem is taken from one of a sequence of experiments which were performed in order to classify the various canonical interactions between a planar shock wave and a double wedge. Therefore to build up a reasonably broad picture of the performance of our mesh refinement algorithm we have simulated three of these experiments and not just the workshop case. Here, using the results from these simulations together with their experimental counterparts, we make some general observations concerning the development of mesh refinement schemes for shock wave phenomena.
A User's Guide to AMR1D: An Instructional Adaptive Mesh Refinement Code for Unstructured Grids
NASA Technical Reports Server (NTRS)
deFainchtein, Rosalinda
1996-01-01
This report documents the code AMR1D, which is currently posted on the World Wide Web (http://sdcd.gsfc.nasa.gov/ESS/exchange/contrib/de-fainchtein/adaptive_mesh_refinement.html). AMR1D is a one-dimensional finite element fluid-dynamics solver, capable of adaptive mesh refinement (AMR). It was written as an instructional tool for AMR on unstructured mesh codes. It is meant to illustrate the minimum requirements for AMR in more than one dimension. For that purpose, it uses the same type of data structure that would be necessary on a two-dimensional AMR code (loosely following the algorithm described by Lohner).
Passive microwave algorithm development and evaluation
NASA Technical Reports Server (NTRS)
Petty, Grant W.
1995-01-01
The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three-year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.
Advances in Patch-Based Adaptive Mesh Refinement Scalability
Gunney, Brian T.N.; Anderson, Robert W.
2015-12-18
Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10⁵ MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.
Refining a relativistic, hydrodynamic solver: Admitting ultra-relativistic flows
NASA Astrophysics Data System (ADS)
Bernstein, J. P.; Hughes, P. A.
2009-09-01
We have undertaken the simulation of hydrodynamic flows with bulk Lorentz factors in the range 10²-10⁶. We discuss the application of an existing relativistic, hydrodynamic primitive variable recovery algorithm to a study of pulsar winds and, in particular, the refinement made to admit such ultra-relativistic flows. We show that an iterative quartic root finder breaks down for Lorentz factors above 10² and employ an analytic root finder as a solution. We find that the former, which is known to be robust for Lorentz factors up to at least 50, offers a 24% speed advantage. We demonstrate the existence of a simple diagnostic allowing for a hybrid primitives recovery algorithm that includes an automatic, real-time toggle between the iterative and analytical methods. We further determine the accuracy of the iterative and hybrid algorithms for a comprehensive selection of input parameters and demonstrate the latter's capability to elucidate the internal structure of ultra-relativistic plasmas. In particular, we discuss simulations showing that the interaction of a light, ultra-relativistic pulsar wind with a slow, dense ambient medium can give rise to asymmetry reminiscent of the Guitar nebula, leading to the formation of a relativistic backflow harboring a series of internal shockwaves. The shockwaves provide thermalized energy that is available for the continued inflation of the pulsar wind nebula (PWN) bubble. In turn, the bubble enhances the asymmetry, thereby providing positive feedback to the backflow.
Advances in Patch-Based Adaptive Mesh Refinement Scalability
Gunney, Brian T.N.; Anderson, Robert W.
2015-12-18
Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress toward SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.
Zeolites as catalysts in oil refining.
Primo, Ana; Garcia, Hermenegildo
2014-11-21
Oil is nowadays the main energy source, and this prevalent position will most probably continue over the next decades. This situation is largely due to the degree of maturity achieved in oil refining and petrochemistry as a consequence of a large effort in research and innovation. The remarkable efficiency of oil refining is largely based on the use of zeolites as catalysts. The use of zeolites as catalysts in refining and petrochemistry has been considered one of the major accomplishments in the chemistry of the twentieth century. In this tutorial review, the introductory part describes the main features of zeolites in connection with their use as solid acids. The main body of the review describes important refining processes in which zeolites are used, including light naphtha isomerization, olefin alkylation, reforming, cracking and hydrocracking. The final section contains our view on future developments in the field, such as the increase in the quality of transportation fuels and the co-processing of an increasing percentage of biofuels together with oil streams. This review is intended to provide the rudiments of zeolite science applied to refining catalysis.
Multidataset Refinement, Resonant Diffraction, and Magnetic Structures
Attfield, J. Paul
2004-01-01
The scope of Rietveld and other powder diffraction refinements continues to expand, driven by improvements in instrumentation, methodology and software. This will be illustrated by examples from our research in recent years. Multidataset refinement is now commonplace; the datasets may be from different detectors, e.g., in a time-of-flight experiment, or from separate experiments, such as at several X-ray energies giving resonant information. The complementary use of X-rays and neutrons is exemplified by a recent combined refinement of the monoclinic superstructure of magnetite, Fe3O4, below the 122 K Verwey transition, which reveals evidence for Fe2+/Fe3+ charge ordering. Powder neutron diffraction data continue to be used for the solution and Rietveld refinement of magnetic structures. Time-of-flight instruments on cold neutron sources can produce data that have high intensity and good resolution at high d-spacings. Such profiles have been used to study incommensurate magnetic structures such as FeAsO4 and β-CrPO4. A multiphase, multidataset refinement of the phase-separated perovskite (Pr0.35Y0.07Th0.04Ca0.04Sr0.5)MnO3 has been used to fit three components with different crystal and magnetic structures at low temperatures. PMID:27366599
Software for Refining or Coarsening Computational Grids
NASA Technical Reports Server (NTRS)
Daines, Russell; Woods, Jody
2003-01-01
A computer program performs calculations for refinement or coarsening of computational grids of the type called structured (signifying that they are geometrically regular and/or are specified by relatively simple algebraic expressions). This program is designed to facilitate analysis of the numerical effects of changing structured grids utilized in computational fluid dynamics (CFD) software. Unlike prior grid-refinement and -coarsening programs, this program is not limited to doubling or halving: the user can specify any refinement or coarsening ratio, which can have a noninteger value. In addition to this ratio, the program accepts, as input, a grid file and the associated restart file, which is basically a file containing the most recent iteration of flow-field variables computed on the grid. The program then refines or coarsens the grid as specified, while maintaining the geometry and the stretching characteristics of the original grid. The program can interpolate from the input restart file to create a restart file for the refined or coarsened grid. The program provides a graphical user interface that facilitates the entry of input data for the grid-generation and restart-interpolation routines.
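The core of such a tool, resampling a structured grid by an arbitrary, possibly noninteger ratio while preserving its stretching, can be sketched in one dimension by interpolating coordinates in fractional index space. This is a minimal illustration under our own assumptions: the function names are invented, and the real program also handles multi-block 3-D grids and full restart files.

```python
def resample_grid(x, ratio):
    """Refine (ratio > 1) or coarsen (ratio < 1) a 1-D structured grid.
    Interpolating coordinates in fractional index space preserves the
    stretching of the original grid for any noninteger ratio."""
    n_old = len(x) - 1                       # number of intervals
    n_new = max(1, int(round(n_old * ratio)))
    out = []
    for j in range(n_new + 1):
        s = j * n_old / n_new                # fractional old index
        i = min(int(s), n_old - 1)
        t = s - i
        out.append((1.0 - t) * x[i] + t * x[i + 1])
    return out

def interp_restart(x_old, f_old, x_new):
    """Linearly interpolate a flow-field variable from the old grid onto
    the refined/coarsened grid (stands in for restart-file interpolation)."""
    out = []
    for xn in x_new:
        # index of the interval containing xn
        i = max(0, min(len(x_old) - 2,
                       sum(1 for xo in x_old[1:-1] if xo <= xn)))
        t = (xn - x_old[i]) / (x_old[i + 1] - x_old[i])
        out.append((1.0 - t) * f_old[i] + t * f_old[i + 1])
    return out
```

For example, coarsening the stretched grid [0, 1, 2, 4, 8] by a ratio of 0.5 keeps every other node, while a ratio of 1.5 produces 7 nodes whose spacing still grows toward the right end.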
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2010 CFR
2010-07-01
... carried out at each location. (2) Crude oil capacity. (i) The total corporate crude oil capacity of each... 40 Protection of Environment 16 2010-07-01 2010-07-01 false How does a refiner obtain approval as a small refiner? 80.1340 Section 80.1340 Protection of Environment ENVIRONMENTAL PROTECTION...
40 CFR 80.235 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false How does a refiner obtain approval as a small refiner? 80.235 Section 80.235 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... January 1, 1999; and the type of business activities carried out at each location; or (ii) In the case...
40 CFR 80.235 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false How does a refiner obtain approval as a small refiner? 80.235 Section 80.235 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... January 1, 1999; and the type of business activities carried out at each location; or (ii) In the case...
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2...
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2...
40 CFR 80.1340 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... for small refiner status must be sent to: Attn: MSAT2 Benzene, Mail Stop 6406J, U.S. Environmental Protection Agency, 1200 Pennsylvania Ave., NW., Washington, DC 20460. For commercial delivery: MSAT2...
40 CFR 80.235 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 17 2014-07-01 2014-07-01 false How does a refiner obtain approval as a small refiner? 80.235 Section 80.235 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... reputable source, such as a professional publication or trade journal. The information submitted to EIA...
40 CFR 80.235 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false How does a refiner obtain approval as a small refiner? 80.235 Section 80.235 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... reputable source, such as a professional publication or trade journal. The information submitted to EIA...
40 CFR 80.235 - How does a refiner obtain approval as a small refiner?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 17 2013-07-01 2013-07-01 false How does a refiner obtain approval as a small refiner? 80.235 Section 80.235 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... reputable source, such as a professional publication or trade journal. The information submitted to EIA...
Tsunami modelling with adaptively refined finite volume methods
LeVeque, R.J.; George, D.L.; Berger, M.J.
2011-01-01
Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
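The 'well-balanced' requirement can be made concrete with a 1-D shallow-water sketch. The hydrostatic-reconstruction flux of Audusse et al. (our choice for illustration; GeoClaw itself uses well-balanced Riemann solvers) preserves a lake at rest over arbitrary bathymetry exactly:

```python
import math

G = 9.81  # gravitational acceleration

def swe_flux(h, hu):
    """Physical shallow-water flux (mass, momentum)."""
    u = hu / h if h > 0.0 else 0.0
    return hu, hu * u + 0.5 * G * h * h

def rusanov(hL, huL, hR, huR):
    """Simple Riemann-solver-based (Rusanov) interface flux."""
    fL, fR = swe_flux(hL, huL), swe_flux(hR, huR)
    uL = huL / hL if hL > 0.0 else 0.0
    uR = huR / hR if hR > 0.0 else 0.0
    a = max(abs(uL) + math.sqrt(G * hL), abs(uR) + math.sqrt(G * hR))
    return (0.5 * (fL[0] + fR[0]) - 0.5 * a * (hR - hL),
            0.5 * (fL[1] + fR[1]) - 0.5 * a * (huR - huL))

def step(h, hu, b, dx, dt):
    """One well-balanced forward-Euler step (hydrostatic reconstruction)
    for depths h, momenta hu, bathymetry b; walls at both ends."""
    n = len(h)
    hn, hun = list(h), list(hu)
    for i in range(n - 1):                  # interface between cells i, i+1
        bstar = max(b[i], b[i + 1])
        hL = max(0.0, h[i] + b[i] - bstar)  # reconstructed depths
        hR = max(0.0, h[i + 1] + b[i + 1] - bstar)
        uL = hu[i] / h[i] if h[i] > 0.0 else 0.0
        uR = hu[i + 1] / h[i + 1] if h[i + 1] > 0.0 else 0.0
        fh, fhu = rusanov(hL, hL * uL, hR, hR * uR)
        hn[i] -= dt / dx * fh
        hn[i + 1] += dt / dx * fh
        # pressure corrections restore each cell's true depth
        hun[i] -= dt / dx * (fhu + 0.5 * G * (h[i] ** 2 - hL ** 2))
        hun[i + 1] += dt / dx * (fhu + 0.5 * G * (h[i + 1] ** 2 - hR ** 2))
    # reflective walls contribute a pure pressure flux g*h^2/2
    hun[0] += dt / dx * 0.5 * G * h[0] ** 2
    hun[-1] -= dt / dx * 0.5 * G * h[-1] ** 2
    return hn, hun
```

With h + b constant and zero momentum, the reconstructed interface states coincide and the pressure corrections cancel the flux differences, so the rest state survives to round-off; a naive centered source term would instead generate spurious currents over the varying bathymetry.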
Parallel Cartesian grid refinement for 3D complex flow simulations
NASA Astrophysics Data System (ADS)
Angelidis, Dionysios; Sotiropoulos, Fotis
2013-11-01
A second order accurate method for discretizing the Navier-Stokes equations on 3D unstructured Cartesian grids is presented. Although the grid generator is based on the oct-tree hierarchical method, a fully unstructured data structure is adopted, enabling robust calculations for incompressible flows and avoiding both the need to synchronize the solution between different levels of refinement and the use of prolongation/restriction operators. The current solver implements a hybrid staggered/non-staggered grid layout, employing the implicit fractional step method to satisfy the continuity equation. The pressure-Poisson equation is discretized using a novel second order fully implicit scheme for unstructured Cartesian grids and solved with an efficient Krylov subspace solver. The momentum equation is also discretized with second order accuracy, and the high performance Newton-Krylov method is used to integrate it in time. Neumann and Dirichlet conditions are used to validate the Poisson solver against analytical functions, and grid refinement results in a significant reduction of the solution error. The effectiveness of the fractional step method results in the stability of the overall algorithm and enables accurate multi-resolution, real-life simulations. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482.
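The fractional-step idea used to satisfy continuity can be sketched in one dimension: solve a pressure-Poisson equation for the divergence of the provisional velocity, then subtract the pressure gradient. This is a minimal periodic example with a damped-Jacobi Poisson solve, not the paper's implicit Krylov machinery:

```python
def project(u, dx=1.0, dt=1.0, iters=400):
    """Fractional-step projection on a 1-D periodic staggered grid:
    solve lap(p) = div(u)/dt, then correct u <- u - dt*grad(p) so the
    corrected face velocity is discretely divergence-free."""
    n = len(u)
    # cell divergence; u[i] is the face on the left of cell i
    div = [(u[(i + 1) % n] - u[i]) / dx for i in range(n)]
    p = [0.0] * n
    for _ in range(iters):                  # damped Jacobi sweeps
        q = [0.5 * (p[(i - 1) % n] + p[(i + 1) % n] - dx * dx * div[i] / dt)
             for i in range(n)]
        p = [pi + (2.0 / 3.0) * (qi - pi) for pi, qi in zip(p, q)]
    # subtract the pressure gradient at each face
    return [u[i] - dt * (p[i] - p[(i - 1) % n]) / dx for i in range(n)]
```

In 1-D the only periodic divergence-free field is a constant, so projecting any face velocity returns its mean; in higher dimensions the same two steps enforce continuity after the momentum update.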
FEM electrode refinement for electrical impedance tomography.
Grychtol, Bartlomiej; Adler, Andy
2013-01-01
Electrical Impedance Tomography (EIT) reconstructs images of electrical tissue properties within a body from electrical transfer impedance measurements at surface electrodes. Reconstruction of EIT images requires the solution of an inverse problem in soft field tomography, where a sensitivity matrix, J, of the relationship between internal changes and measurements is calculated, and then a pseudo-inverse of J is used to update the image estimate. It is therefore clear that a precise calculation of J is required for solution accuracy. Since it is generally not possible to use analytic solutions, the finite element method (FEM) is typically used. It has generally been recommended in the EIT literature that FEMs be refined near electrodes, since the electric field and sensitivity is largest there. In this paper we analyze the accuracy requirement for FEM refinement near electrodes in EIT and describe a technique to refine arbitrary FEMs.
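The update step described above, applying a regularized pseudo-inverse of J to the measured voltage change, reduces to one-step Tikhonov-regularized least squares. A minimal dense sketch with toy matrices (in real EIT, J comes from the FEM and reconstruction software adds spatial priors):

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c]
                              for c in range(k + 1, n))) / M[k][k]
    return x

def eit_update(J, dv, lam):
    """One linearized image update: dx = (J^T J + lam*I)^(-1) J^T dv,
    i.e. the Tikhonov-regularized pseudo-inverse of the sensitivity
    matrix J applied to the voltage change dv."""
    Jt = transpose(J)
    A = matmul(Jt, J)
    for i in range(len(A)):
        A[i][i] += lam
    b = [sum(Jt[i][k] * dv[k] for k in range(len(dv)))
         for i in range(len(Jt))]
    return solve(A, b)
```

Because the entries of J are dominated by the near-electrode field, errors in the FEM there propagate directly into this update, which is why the paper's electrode-refinement analysis matters for reconstruction accuracy.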
Refining Linear Fuzzy Rules by Reinforcement Learning
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Khedkar, Pratap S.; Malkani, Anil
1996-01-01
Linear fuzzy rules are increasingly being used in the development of fuzzy logic systems. Radial basis functions have also been used in the antecedents of the rules for clustering in product space which can automatically generate a set of linear fuzzy rules from an input/output data set. Manual methods are usually used in refining these rules. This paper presents a method for refining the parameters of these rules using reinforcement learning which can be applied in domains where supervised input-output data is not available and reinforcements are received only after a long sequence of actions. This is shown for a generalization of radial basis functions. The formation of fuzzy rules from data and their automatic refinement is an important step in closing the gap between the application of reinforcement learning methods in the domains where only some limited input-output data is available.
An interactive medical image segmentation framework using iterative refinement.
Kalshetti, Pratik; Bundele, Manas; Rahangdale, Parag; Jangra, Dinesh; Chattopadhyay, Chiranjoy; Harit, Gaurav; Elhence, Abhay
2017-02-13
Segmentation is often performed on medical images to identify diseases in clinical evaluation, and it has therefore become a major research area. Conventional image segmentation techniques are unable to provide satisfactory results for medical images, which contain irregularities and need to be pre-processed before segmentation. In order to obtain the most suitable method for medical image segmentation, we propose MIST (Medical Image Segmentation Tool), a two-stage algorithm. The first stage automatically generates a binary marker image of the region of interest using mathematical morphology. This marker serves as the mask image for the second stage, which uses GrabCut to yield an efficient segmented result. The obtained result can be further refined by user interaction through the proposed Graphical User Interface (GUI). Experimental results show that the proposed method is accurate and provides satisfactory segmentation results with minimum user interaction on medical as well as natural images.
On-Orbit Model Refinement for Controller Redesign
NASA Technical Reports Server (NTRS)
Whorton, Mark S.; Calise, Anthony J.
1998-01-01
High performance control design for a flexible space structure is challenging since high fidelity plant models are difficult to obtain a priori. Uncertainty in the control design models typically require a very robust, low performance control design which must be tuned on-orbit to achieve the required performance. A new procedure for refining a multivariable open loop plant model based on closed-loop response data is presented. Using a minimal representation of the state space dynamics, a least squares prediction error method is employed to estimate the plant parameters. This control-relevant system identification procedure stresses the joint nature of the system identification and control design problem by seeking to obtain a model that minimizes the difference between the predicted and actual closed-loop performance. This paper presents an algorithm for iterative closed-loop system identification and controller redesign along with illustrative examples.
A Cartesian grid approach with hierarchical refinement for compressible flows
NASA Technical Reports Server (NTRS)
Quirk, James J.
1994-01-01
Many numerical studies of flows that involve complex geometries are limited by the difficulties in generating suitable grids. We present a Cartesian boundary scheme for two-dimensional, compressible flows that is unfettered by the need to generate a computational grid and so it may be used, routinely, even for the most awkward of geometries. In essence, an arbitrary-shaped body is allowed to blank out some region of a background Cartesian mesh and the resultant cut-cells are singled out for special treatment. This is done within a finite-volume framework and so, in principle, any explicit flux-based integration scheme can take advantage of this method for enforcing solid boundary conditions. For best effect, the present Cartesian boundary scheme has been combined with a sophisticated, local mesh refinement scheme, and a number of examples are shown in order to demonstrate the efficacy of the combined algorithm for simulations of shock interaction phenomena.
Using supercritical fluids to refine hydrocarbons
Yarbro, Stephen Lee
2015-06-09
A system and method for reactively refining hydrocarbons, such as heavy oils with API gravities of less than 20 degrees and bitumen-like hydrocarbons with viscosities greater than 1000 cp at standard temperature and pressure, using a selected fluid at supercritical conditions. A reaction portion of the system and method delivers lightweight, volatile hydrocarbons to an associated contacting unit which operates in mixed subcritical/supercritical or supercritical modes. Using thermal diffusion, multiphase contact, or a momentum generating pressure gradient, the contacting unit separates the reaction products into portions that are viable for use or sale without further conventional refining and hydro-processing techniques.
Refinement in reanimation of the lower face.
Sherris, David A
2004-01-01
Both the temporalis muscle transfer and the static sling procedure are techniques that improve deglutition, speech, and aesthetics in patients who are afflicted with paralysis of the lower part of the face. A refinement that is applicable to either of these procedures is described. By bringing the perioral attachment of either the muscle or the static sling exactly to the midline of the upper and lower lips, the surgeon can make the patient's mouth more symmetrical. This simple refinement will improve the results obtained with either procedure and has not been associated with any increased perioperative risks or complications.
Parabolic Refined Invariants and Macdonald Polynomials
NASA Astrophysics Data System (ADS)
Chuang, Wu-yen; Diaconescu, Duiliu-Emanuel; Donagi, Ron; Pantev, Tony
2015-05-01
A string theoretic derivation is given for the conjecture of Hausel, Letellier and Rodriguez-Villegas on the cohomology of character varieties with marked points. Their formula is identified with a refined BPS expansion in the stable pair theory of a local root stack, generalizing previous work of the first two authors in collaboration with Pan. Haiman's geometric construction for Macdonald polynomials is shown to emerge naturally in this context via geometric engineering. In particular this yields a new conjectural relation between Macdonald polynomials and refined local orbifold curve counting invariants. The string theoretic approach also leads to a new spectral cover construction for parabolic Higgs bundles in terms of holomorphic symplectic orbifolds.
California refining: It's all or nothing, now
Not Available
1991-07-18
The State of California has a budget deficit of more than US $14-billion, stringent and costly environmental protection laws, and a giant fiercely competitive market for high-quality gasoline. This issue of Energy Detente examines some of the emerging consequences of this dramatic combination for petroleum refining. This issue also presents the following: (1) the ED Refining Netback Data Series for the US Gulf and West Coasts, Rotterdam, and Singapore as of July 12, 1991; and (2) the ED Fuel Price/Tax Series for countries of the Western Hemisphere, July 1991 edition. 8 figs., 6 tabs.
Structure prediction for CASP7 targets using extensive all-atom refinement with Rosetta@home.
Das, Rhiju; Qian, Bin; Raman, Srivatsan; Vernon, Robert; Thompson, James; Bradley, Philip; Khare, Sagar; Tyka, Michael D; Bhat, Divya; Chivian, Dylan; Kim, David E; Sheffler, William H; Malmström, Lars; Wollacott, Andrew M; Wang, Chu; Andre, Ingemar; Baker, David
2007-01-01
We describe predictions made using the Rosetta structure prediction methodology for both template-based modeling and free modeling categories in the Seventh Critical Assessment of Techniques for Protein Structure Prediction. For the first time, aggressive sampling and all-atom refinement could be carried out for the majority of targets, an advance enabled by the Rosetta@home distributed computing network. Template-based modeling predictions using an iterative refinement algorithm improved over the best existing templates for the majority of proteins with less than 200 residues. Free modeling methods gave near-atomic accuracy predictions for several targets under 100 residues from all secondary structure classes. These results indicate that refinement with an all-atom energy function, although computationally expensive, is a powerful method for obtaining accurate structure predictions.
Increasing levels of assistance in refinement of knowledge-based retrieval systems
NASA Technical Reports Server (NTRS)
Baudin, Catherine; Kedar, Smadar; Pell, Barney
1994-01-01
The task of incrementally acquiring and refining the knowledge and algorithms of a knowledge-based system in order to improve its performance over time is discussed. In particular, the design of DE-KART, a tool whose goal is to provide increasing levels of assistance in acquiring and refining indexing and retrieval knowledge for a knowledge-based retrieval system, is presented. DE-KART starts with knowledge that was entered manually, and increases its level of assistance in acquiring and refining that knowledge, both in terms of the increased level of automation in interacting with users, and in terms of the increased generality of the knowledge. DE-KART is at the intersection of machine learning and knowledge acquisition: it is a first step towards a system which moves along a continuum from interactive knowledge acquisition to increasingly automated machine learning as it acquires more knowledge and experience.
Grain Refining and Microstructural Modification during Solidification.
1984-10-01
[Only fragments of this scanned report are legible.] Some samples were etched with a solution containing 100 ml of distilled water (etchant A) for 5 to 15 seconds; the others were etched with aqua regia (etchant B) for 10 to 25 seconds. Keywords: grain refining, microstructure, solidification, phase diagrams, electromagnetic stirring, Cu-Fe.
Theory of a refined earth model
NASA Technical Reports Server (NTRS)
Krause, H. G. L.
1968-01-01
Refined equations are derived relating the variations of the Earth's gravity and radius as functions of longitude and latitude. In particular, they relate the oblateness coefficients of the odd harmonics to the difference of the polar radii (respectively, ellipticities and polar gravity accelerations) in the Northern and Southern Hemispheres.
Refining the Eye: Dermatology and Visual Literacy
ERIC Educational Resources Information Center
Zimmermann, Corinne; Huang, Jennifer T.; Buzney, Elizabeth A.
2016-01-01
In 2014 the Museum of Fine Arts Boston and Harvard Medical School began a partnership focused on building visual literacy skills for dermatology residents in the Harvard Combined Dermatology Residency Program. "Refining the Eye: Art and Dermatology", a four session workshop, took place in the museum's galleries and utilized the Visual…
Refiners respond to strategic driving forces
Gonzalez, R.G.
1996-05-01
Better days should lie ahead for the international refining industry. While political unrest, lingering uncertainty regarding environmental policies, slowing world economic growth, overcapacity and a poor image will continue to plague the industry, margins in most areas appear to have bottomed out. Current margins, and even modestly improved margins, do not cover the cost of capital on certain equipment nor provide the returns necessary to achieve reinvestment economics. Refiners must determine how to improve the financial performance of their assets given this reality. Low margins and returns are generally characteristic of mature industries. Many of the business strategies employed by emerging businesses are no longer viable for refiners. The cost-cutting programs of the '90s have mainly been realized, leaving little to be gained from further reduction. Consequently, refiners will have to concentrate on increasing efficiency and delivering higher value products to survive. Rather than focusing solely on their competition, companies will emphasize substantial improvements in their own operations to achieve financial targets. This trend is clearly shown by the growing reliance on benchmarking services.
Energy Bandwidth for Petroleum Refining Processes
none,
2006-10-01
The petroleum refining energy bandwidth report analyzes the most energy-intensive unit operations used in U.S. refineries: crude oil distillation, fluid catalytic cracking, catalytic hydrotreating, catalytic reforming, and alkylation. The "bandwidth" provides a snapshot of the energy losses that can potentially be recovered through best practices and technology R&D.
Laser Vacuum Furnace for Zone Refining
NASA Technical Reports Server (NTRS)
Griner, D. B.; Zurburg, F. W.; Penn, W. M.
1986-01-01
Laser beam scanned to produce moving melt zone. Experimental laser vacuum furnace scans crystalline wafer with high-power CO2-laser beam to generate precise melt zone with precise control of temperature gradients around zone. Intended for zone refining of silicon or other semiconductors in low gravity, apparatus used in normal gravity.
Robust Refinement as Implemented in TOPAS
Stone, K.; Stephens, P
2010-01-01
A robust refinement procedure is implemented in the program TOPAS through an iterative reweighting of the data. Examples are given of the procedure as applied to fitting partially overlapped peaks by full and partial models and also of the structures of ibuprofen and acetaminophen in the presence of unmodeled impurity contributions.
Purification of Germanium Crystals by Zone Refining
NASA Astrophysics Data System (ADS)
Kooi, Kyler; Yang, Gang; Mei, Dongming
2016-09-01
Germanium zone refining is one of the most important techniques used to produce high purity germanium (HPGe) single crystals for the fabrication of nuclear radiation detectors. During zone refining the impurities are isolated in different parts of the ingot. In practice, the effective isolation of an impurity depends on many parameters, including molten zone travel speed, the ratio of ingot length to molten zone width, and the number of passes. By studying the theory of these influential factors, perfecting our cleaning and preparation procedures, and analyzing the origin and distribution of our impurities (aluminum, boron, gallium, and phosphorus) identified using photothermal ionization spectroscopy (PTIS), we have optimized these parameters to produce HPGe. We have achieved a net impurity level of 10^10/cm^3 for our zone-refined ingots, measured with van der Pauw and Hall-effect methods. Zone-refined ingots of this purity can be processed into a detector grade HPGe single crystal, which can be used to fabricate detectors for dark matter and neutrinoless double beta decay detection. This project was financially supported by DOE Grant (DE-FG02-10ER46709) and the State Governor's Research Center.
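The isolation of impurities along the ingot follows, to first order, the classical single-pass zone-refining profile of Pfann, C(x) = C0 [1 - (1 - k) exp(-k x / l)], where k is the effective segregation coefficient and l the molten-zone length. This is a textbook sketch, not the group's process model:

```python
import math

def pfann_profile(k, x, zone_len):
    """Relative impurity concentration C(x)/C0 after one zone pass,
    valid for x < L - zone_len. Impurities with segregation
    coefficient k < 1 are swept toward the tail of the ingot."""
    return 1.0 - (1.0 - k) * math.exp(-k * x / zone_len)
```

At the head of the ingot (x = 0) the concentration is reduced to k*C0, and it rises monotonically toward the tail; repeated passes, slower zone travel and a favorable ingot-length-to-zone-width ratio (the parameters tuned in the abstract) drive the head progressively purer.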
Solidification Based Grain Refinement in Steels
2009-07-24
Terwilliger, T.C.; Grosse-Kunstleve, Ralf Wilhelm; Afonine, P.V.; Moriarty, N.W.; Zwart, P.H.; Hung, L.-W.; Read, R.J.; Adams, P.D.
2007-04-29
The PHENIX AutoBuild Wizard is a highly automated tool for iterative model-building, structure refinement and density modification using RESOLVE or TEXTAL model-building, RESOLVE statistical density modification, and phenix.refine structure refinement. Recent advances in the AutoBuild Wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model completion algorithms, and automated solvent molecule picking. Model completion algorithms in the AutoBuild Wizard include loop-building, crossovers between chains in different models of a structure, and side-chain optimization. The AutoBuild Wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 Å to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density, and relatively independent of resolution.
Terwilliger, Thomas C; Grosse-Kunstleve, Ralf W; Afonine, Pavel V; Moriarty, Nigel W; Zwart, Peter H; Hung, Li Wei; Read, Randy J; Adams, Paul D
2008-01-01
The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 A, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution.
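The quality metrics quoted here are the standard crystallographic residuals: R = Σ||Fobs| - |Fcalc|| / Σ|Fobs|, with the free R computed on a held-out set of reflections excluded from refinement as a cross-validation guard against overfitting. A minimal sketch with hypothetical structure-factor lists (not PHENIX's own implementation):

```python
def r_factor(f_obs, f_calc):
    """Crystallographic R factor over matched reflection amplitudes."""
    num = sum(abs(abs(o) - abs(c)) for o, c in zip(f_obs, f_calc))
    return num / sum(abs(o) for o in f_obs)

def r_work_free(f_obs, f_calc, free_flags):
    """Split reflections into a working set (used in refinement) and a
    cross-validation 'free' set, returning (R_work, R_free)."""
    work = [(o, c) for o, c, f in zip(f_obs, f_calc, free_flags) if not f]
    free = [(o, c) for o, c, f in zip(f_obs, f_calc, free_flags) if f]
    return (r_factor([o for o, c in work], [c for o, c in work]),
            r_factor([o for o, c in free], [c for o, c in free]))
```

A free R noticeably above the working R (as in the 0.29 versus 0.24 figures above) is the expected signature of limited data quality rather than gross overfitting.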
Refining aggregate exposure: example using parabens.
Cowan-Ellsberry, Christina E; Robison, Steven H
2009-12-01
The need to understand and quantitatively estimate the aggregate exposure to ingredients used broadly across a variety of product types continues to grow. Currently, aggregate exposure is most commonly estimated with the very simplistic approach of summing the exposures from all the individual product types in which the chemical is used. However, the more broadly the ingredient is used in related consumer products, the more likely this summation is to yield an unrealistic estimate of exposure, because individuals in the population vary in their patterns of product use, including co-use and non-use. Furthermore, the ingredient may not be used in all products of a given type. An approach is described for refining this aggregate exposure using data on (1) co-use and non-use patterns of product use, (2) the extent of products in which the ingredient is used and (3) dermal penetration and metabolism. This approach, and the relative refinement in aggregate exposure obtained by incorporating these data, is illustrated using methyl, n-propyl, n-butyl and ethyl parabens, the most widely used preservative system in personal care and cosmetic products. When these refining factors were used, the aggregate exposure was reduced, relative to the simple addition approach, by 51%, 58%, 90% and 92% for methyl, n-propyl, n-butyl and ethyl parabens, respectively. Since biomonitoring integrates all sources and routes of exposure, the estimates from this approach were compared to available paraben biomonitoring data. Comparison to the 95th percentile of these data showed that the refined estimates were still conservative by factors of 2-92. All of our refined estimates of aggregate exposure are less than the ADI of 10 mg/kg/day for parabens.
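The arithmetic of the refinement is simple: instead of summing worst-case exposures across product types, each term is scaled by co-use prevalence, the fraction of marketed products actually containing the ingredient, and net dermal absorption. The numbers below are illustrative placeholders, not the paper's paraben data:

```python
def simple_aggregate(exposure_by_product):
    """Worst case: sum exposures over all product types (mg/kg/day)."""
    return sum(exposure_by_product.values())

def refined_aggregate(exposure_by_product, co_use, market_fraction,
                      absorption):
    """Scale each product-type exposure by the probability the
    ingredient is actually encountered, and by net dermal absorption."""
    return sum(e * co_use[p] * market_fraction[p] * absorption
               for p, e in exposure_by_product.items())
```

For instance, with hypothetical exposures {"lotion": 2.0, "shampoo": 1.0}, co-use prevalences of 0.8 and 0.5, market fractions of 0.6 and 0.4, and 50% absorption, the refined aggregate drops from 3.0 to 0.58 mg/kg/day, an 81% reduction of the same order as the paper reports for the parabens.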
Refining industry trends: Europe and surroundings
Guariguata, U.G.
1997-05-01
The European refining industry, along with its counterparts, is struggling with low profitability due to excess primary and conversion capacity, high operating costs and impending stringent environmental regulations that will require significant investments with hard-to-justify returns. The region was also faced in the early 1980s with excess capacity on the order of 4 MMb/d, satisfying the "at that point" demand by operating at very low utilization rates (60%). As was the case in the US, the rebalancing of capacity led to the closure of some 51 refineries. Since the early 1990s, demand growth has essentially balanced the capacity threshold, and utilization rates have settled around the 90% range. During the last two decades, the major oil companies have reduced their presence in the European refining sector, giving some state oil companies and producing countries the opportunity to gain access to the consumer market through the purchase of refining capacity in various countries: specifically, Kuwait in Italy; Libya and Venezuela in Germany; and Norway in other areas of Scandinavia. Although the market share of this new cast of characters remains small (4%) relative to participation by the majors (35%), their involvement in the European refining business set the foundation whereby US independent refiners relinquished control over assets that could not be operated profitably as part of a previous vertically integrated structure, unless access to the crude was ensured. The passage of time still seems to render this model valid.
Satellite SAR geocoding with refined RPC model
NASA Astrophysics Data System (ADS)
Zhang, Lu; Balz, Timo; Liao, Mingsheng
2012-04-01
Recent studies have shown that the Rational Polynomial Camera (RPC) model can act as a reliable replacement for the rigorous Range-Doppler (RD) model in the geometric processing of satellite SAR datasets. However, its capability for absolute geolocation of SAR images has not been evaluated quantitatively. Therefore, in this article the problems of error analysis and refinement of the SAR RPC model are investigated with the aim of improving the absolute accuracy of SAR geolocation. Range propagation delay and azimuth timing error are identified as the two major error sources for SAR geolocation. An approach based on SAR image simulation and real-to-simulated image matching is developed to estimate and correct these two errors. A refined RPC model can then be built from the error-corrected RD model and used in satellite SAR geocoding. Three experiments with different settings are designed and conducted to comprehensively evaluate the accuracy of SAR geolocation with both the ordinary and the refined RPC models. All the experimental results demonstrate that with RPC model refinement the absolute location accuracy of geocoded SAR images can be improved significantly, particularly in the easting direction. In another experiment the computational efficiencies of SAR geocoding with the RD and RPC models are compared quantitatively. The results show that using the RPC model improves efficiency by a factor of at least 16. In addition, the problem of DEM data selection for SAR image simulation in RPC model refinement is studied through a comparative experiment. The results reveal that the best choice is a DEM dataset whose spatial resolution is comparable to that of the SAR images.
Refining the In-Parameter-Order Strategy for Constructing Covering Arrays
Forbes, Michael; Lawrence, Jim; Lei, Yu; Kacker, Raghu N.; Kuhn, D. Richard
2008-01-01
Covering arrays are combinatorial structures that compactly represent extremely large input spaces and are used to implement efficient black-box testing of software and hardware. This paper proposes refinements of the In-Parameter-Order strategy (for arbitrary strength t). When constructing homogeneous-alphabet covering arrays, these refinements reduce runtime in nearly all cases by a factor of more than 5, and in some cases by factors as large as 280, with the advantage growing with the number of columns in the covering array. Moreover, the resulting covering arrays are about 5% smaller. Consequently, this new algorithm has constructed many covering arrays that are the smallest in the literature. A heuristic variant of the algorithm sometimes produces comparably sized covering arrays while running significantly faster. PMID:27096128
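The defining property of a strength-t covering array, that every t-way combination of parameter values appears in at least one row, can be checked directly. A sketch, using a classical 9-row orthogonal array over four ternary parameters as the example (the checker is generic; it is not the paper's IPO construction algorithm):

```python
from itertools import combinations, product

def covers_all_t_way(array, t, alphabet_sizes):
    """Check that every t-way value combination appears in some row."""
    k = len(alphabet_sizes)
    for cols in combinations(range(k), t):
        needed = set(product(*(range(alphabet_sizes[c]) for c in cols)))
        seen = {tuple(row[c] for c in cols) for row in array}
        if needed - seen:
            return False
    return True

# A classical 9-row orthogonal array is a pairwise (t=2) covering array
# for four ternary parameters.
oa = [
    (0, 0, 0, 0), (0, 1, 1, 2), (0, 2, 2, 1),
    (1, 0, 1, 1), (1, 1, 2, 0), (1, 2, 0, 2),
    (2, 0, 2, 2), (2, 1, 0, 1), (2, 2, 1, 0),
]
print(covers_all_t_way(oa, 2, [3, 3, 3, 3]))  # → True
```

Exhaustive testing of this space needs 3⁴ = 81 rows; the covering array achieves pairwise coverage with 9.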
Horne, Benjamin D; Lenzini, Petra A; Wadelius, Mia; Jorgensen, Andrea L; Kimmel, Stephen E; Ridker, Paul M; Eriksson, Niclas; Anderson, Jeffrey L; Pirmohamed, Munir; Limdi, Nita A; Pendleton, Robert C; McMillin, Gwendolyn A; Burmester, James K; Kurnik, Daniel; Stein, C Michael; Caldwell, Michael D; Eby, Charles S; Rane, Anders; Lindh, Jonatan D; Shin, Jae-Gook; Kim, Ho-Sook; Angchaisuksiri, Pantep; Glynn, Robert J; Kronquist, Kathryn E; Carlquist, John F; Grice, Gloria R; Barrack, Robert L; Li, Juan; Gage, Brian F
2012-02-01
By guiding the initial warfarin dose, pharmacogenetic (PGx) algorithms may improve the safety of warfarin initiation. However, once the international normalised ratio (INR) response is known, the contribution of PGx to dose refinements is uncertain. This study sought to develop and validate clinical and PGx dosing algorithms for warfarin dose refinement on days 6-11 after therapy initiation. An international sample of 2,022 patients at 13 medical centres on three continents provided clinical, INR, and genetic data at treatment days 6-11 to predict therapeutic warfarin dose. Independent derivation and retrospective validation samples were composed by randomly dividing the population (80%/20%). Prior warfarin doses were weighted by their expected effect on S-warfarin concentrations using an exponential-decay pharmacokinetic model. The INR divided by that "effective" dose constituted a treatment response index. Treatment response index, age, amiodarone, body surface area, warfarin indication, and target INR were associated with dose in the derivation sample. A clinical algorithm based on these factors was remarkably accurate: in the retrospective validation cohort its R² was 61.2% and median absolute error (MAE) was 5.0 mg/week. Accuracy and safety were confirmed in a prospective cohort (N=43). CYP2C9 variants and VKORC1-1639 G→A were significant dose predictors in both the derivation and validation samples. In the retrospective validation cohort, the PGx algorithm had R² = 69.1% (p<0.05 vs. clinical algorithm) and MAE = 4.7 mg/week. In conclusion, a pharmacogenetic warfarin dose-refinement algorithm based on clinical, INR, and genetic factors can explain at least 69.1% of therapeutic warfarin dose variability after about one week of therapy.
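The "effective dose" weighting and treatment response index described above can be sketched as follows. The 40 h half-life and the dose/INR numbers are illustrative assumptions for the sketch, not the study's fitted pharmacokinetic values:

```python
import math

def effective_dose(daily_doses_mg, half_life_h=40.0):
    """Weight prior daily warfarin doses by their expected remaining effect
    on S-warfarin, using a simple exponential-decay model, and return the
    weighted-average "effective" dose. The 40 h half-life is an illustrative
    assumption, not the paper's value."""
    k = math.log(2) / half_life_h
    # Oldest dose first: a dose given d days ago is discounted by exp(-k*24*d).
    weights = [math.exp(-k * 24 * age_days)
               for age_days in range(len(daily_doses_mg) - 1, -1, -1)]
    return sum(w * d for w, d in zip(weights, daily_doses_mg)) / sum(weights)

doses = [5.0, 5.0, 7.5, 5.0, 5.0]   # mg/day over treatment days 1..5 (toy data)
inr = 1.8                           # observed INR (toy value)
response_index = inr / effective_dose(doses)
print(round(response_index, 3))
```

Recent doses dominate the weighted average, which is the intended behaviour: the index measures INR response per unit of drug still pharmacologically active.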
Three Dimensional Hybrid Continuum-Atomistic Simulations for Multiscale Hydrodynamics
NASA Astrophysics Data System (ADS)
Wijesinghe, Sanith; Hornung, Richard; Garcia, Alejandro; Hadjiconstantinou, Nicolas
2002-11-01
An adaptive mesh and algorithm refinement (AMAR) scheme to model multi-scale, compressible continuum-atomistic hydrodynamics is presented. The AMAR technique applies the atomistic description as the finest level of refinement in regions where the continuum description is expected to fail, such as regions of high flow gradients and discontinuous material interfaces. In the current implementation the atomistic description is provided by the direct simulation Monte Carlo (DSMC) method. The continuum flow is modeled using the compressible Euler equations and is solved using a second-order Godunov scheme. Coupling is achieved by conservation of fluxes across the continuum-atomistic grid boundaries. The AMAR data structures are supported by a C++ object-oriented framework (Structured Adaptive Mesh Refinement Application Infrastructure - SAMRAI) which allows for efficient parallel implementation. Current work is focused on extending AMAR to simulations of gas mixtures. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract number W-7405-Eng-48.
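The flux-coupling idea can be illustrated in one dimension: the continuum cell adjacent to the particle region is corrected ("refluxed") so that its update uses the flux actually measured from the particles at the interface, preserving conservation. A minimal sketch with illustrative numbers, not the AMAR implementation:

```python
# 1D sketch of conservative continuum-particle flux coupling: the provisional
# continuum update at the interface used cont_flux; replacing it by the flux
# measured from the particle region requires the correction below so that the
# total conserved quantity matches. Numbers are illustrative only.

def reflux(u_last_cell, cont_flux, particle_flux, dt, dx):
    """Correct the last continuum cell after replacing its provisional
    right-interface flux by the particle-measured flux."""
    # u_new = u - (dt/dx) * (F_right - F_left); swapping F_right from
    # cont_flux to particle_flux shifts u by +(dt/dx)*(cont_flux - particle_flux).
    return u_last_cell + (dt / dx) * (cont_flux - particle_flux)

u_corrected = reflux(1.0, cont_flux=0.30, particle_flux=0.25, dt=0.1, dx=0.5)
print(u_corrected)
```

The same bookkeeping is what AMR refluxing does between coarse and fine grid levels, which is why the paper can reuse the AMR synchronization machinery for the particle-continuum interface.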
On the Factor Refinement Principle and its Implementation on Multicore Architectures
NASA Astrophysics Data System (ADS)
Mohsin Ali, Md; Moreno Maza, Marc; Xie, Yuzhen
2012-10-01
We propose a divide-and-conquer adaptation of the factor refinement algorithm of Bach, Driscoll and Shallit. For an ideal cache of Z words, with L words per block, the original approach suffers O(n²/L) cache misses, whereas our adaptation incurs only O(n²/(ZL)) cache misses. We have realized a multithreaded implementation of the latter using Cilk++ targeting multicores. Our code achieves linear speedup on 16 cores for sufficiently large input data.
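A naive quadratic-time version of factor refinement, computing a pairwise-coprime base for a set of integers, can be sketched as follows. The paper's contribution is a cache-friendly divide-and-conquer organisation of this process, not this simple loop:

```python
from math import gcd

def coprime_base(nums):
    """Return a pairwise-coprime list of integers > 1 such that every input
    is a product of powers of the returned elements (naive refinement)."""
    base = []
    queue = [n for n in nums if n > 1]
    while queue:
        m = queue.pop()
        if m == 1:
            continue
        for i, b in enumerate(base):
            d = gcd(m, b)
            if d > 1:
                # Split both numbers on their common factor and reprocess.
                base.pop(i)
                queue.extend([m // d, b // d, d])
                break
        else:
            base.append(m)   # m is coprime to everything in the base
    return base

print(sorted(coprime_base([12, 18])))  # → [2, 3]
```

Here 12 = 2²·3 and 18 = 2·3², so {2, 3} is the refined base. No factorisation is ever performed; the refinement uses gcds only, which is what makes it cheap for very large integers.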
NASA Technical Reports Server (NTRS)
Abrams, D.; Williams, C.
1999-01-01
This thesis describes several new quantum algorithms. These include a polynomial-time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.
Constrained-Transport Magnetohydrodynamics with Adaptive-Mesh-Refinement in CHARM
Miniatii, Francesco; Martin, Daniel
2011-05-24
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magneto-hydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit Corner-Transport-Upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the Piecewise-Parabolic Method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a Constrained-Transport (CT) method. The so-called "multidimensional MHD source terms" required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout. Subject headings: cosmology: theory - methods: numerical
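The constrained-transport property, that a face-centered field derived from edge (in 2D, corner) potentials has identically vanishing discrete divergence, can be verified in a few lines. This 2D sketch is illustrative only and is not the CHARM implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
nx = ny = 8
dx = dy = 1.0

# Face-centered magnetic field from a corner-centered vector potential Az:
# Bx = dAz/dy on x-faces, By = -dAz/dx on y-faces. By construction the
# discrete cell divergence (the CT/Stokes-theorem analogue) vanishes.
Az = rng.standard_normal((nx + 1, ny + 1))     # Az at cell corners
Bx = (Az[:, 1:] - Az[:, :-1]) / dy             # shape (nx+1, ny)
By = -(Az[1:, :] - Az[:-1, :]) / dx            # shape (nx, ny+1)

# Discrete divergence per cell: difference of face fluxes.
div = (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy
print(np.abs(div).max() < 1e-12)  # → True
```

The two finite differences cancel term by term, so the divergence is zero to round-off regardless of Az; CT schemes preserve this property under the induction update, and the paper's reflux-curl operation preserves it across refinement boundaries.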
Crystallization in lactose refining-a review.
Wong, Shin Yee; Hartel, Richard W
2014-03-01
In the dairy industry, crystallization is an important separation process used in the refining of lactose from whey solutions. In the refining operation, lactose crystals are separated from the whey solution through nucleation, growth, and/or aggregation. The rate of crystallization is determined by the combined effect of crystallizer design, processing parameters, and impurities on the kinetics of the process. This review summarizes studies on lactose crystallization, including the mechanism, theory of crystallization, and the impact of various factors affecting the crystallization kinetics. In addition, an overview of the industrial crystallization operation highlights the problems faced by the lactose manufacturer. The approaches that are beneficial to the lactose manufacturer for process optimization or improvement are summarized in this review. Over the years, much knowledge has been acquired through extensive research. However, the industrial crystallization process is still far from optimized. Therefore, future effort should focus on transferring the new knowledge and technology to the dairy industry.
Dinosaurs can fly -- High performance refining
Treat, J.E.
1995-09-01
High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the products markets one serves; and the most likely drivers and direction of future market forces. The author discusses all three points, then describes how to measure the company's performance. Becoming a true high performance refiner often involves redesigning the organization as well as the business processes, and the author discusses such redesign. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.
The Refinement of Multi-Agent Systems
NASA Astrophysics Data System (ADS)
Aştefănoaei, L.; de Boer, F. S.
This chapter introduces an encompassing theory of refinement which supports a top-down methodology for designing multi-agent systems. We present a general modelling framework in which we identify different abstraction levels of BDI agents. On the one hand, at a higher level of abstraction we introduce the language BUnity as a way to specify “what” an agent can do. On the other hand, at a more concrete layer we introduce the language BUpL as implementing not only what an agent can do but also “when” the agent can do it. At this stage of individual agent design, refinement is understood as trace inclusion. Having the traces of an implementation included in the traces of a given specification means that the implementation is correct with respect to the specification.
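Refinement as trace inclusion can be made concrete on toy transition systems. The states and action names below are invented for illustration and are not BUnity/BUpL syntax:

```python
def traces(lts, init, depth):
    """All action sequences up to length `depth` of a labelled transition
    system given as {state: [(action, next_state), ...]} (toy formulation)."""
    result = {()}
    frontier = {((), init)}
    for _ in range(depth):
        nxt = set()
        for tr, s in frontier:
            for a, t in lts.get(s, []):
                nxt.add((tr + (a,), t))
        result |= {tr for tr, _ in nxt}
        frontier = nxt
    return result

spec = {0: [("a", 1), ("b", 0)], 1: [("b", 0)]}   # "what" the agent can do
impl = {0: [("a", 1)], 1: [("b", 0)]}             # restricts "when" it does it

# Refinement check: every implementation trace is a specification trace.
print(traces(impl, 0, 4) <= traces(spec, 0, 4))  # → True
```

Here the implementation forbids the specification's initial "b" action but introduces no behaviour the specification lacks, so it refines the specification (up to the explored depth; full trace inclusion needs a proper simulation or language-inclusion check).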
Automata Learning with Automated Alphabet Abstraction Refinement
NASA Astrophysics Data System (ADS)
Howar, Falk; Steffen, Bernhard; Merten, Maik
Abstraction is the key when learning behavioral models of realistic systems, but it is also the cause of a major problem: the introduction of non-determinism. In this paper, we introduce a method for refining a given abstraction to automatically regain deterministic behavior on-the-fly during the learning process. Thus control over the abstraction becomes part of the learning process, with the effect that detected non-determinism does not lead to failure but to a dynamic alphabet abstraction refinement. Like automata learning itself, this method is in general neither sound nor complete, but it enjoys similar convergence properties even for infinite systems, as long as the concrete system itself behaves deterministically, as illustrated along a concrete example.
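The refinement step, splitting an abstract symbol class when two of its concrete symbols are observed to produce different outputs, can be sketched as follows. The names and the tiny example system are invented for illustration:

```python
# Minimal sketch of alphabet abstraction refinement: concrete inputs are
# grouped into abstract classes; when two concrete inputs in the same class
# yield different outputs (observed non-determinism), the class is split so
# that the abstraction becomes deterministic again on the observations.

def refine(abstraction, observations):
    """abstraction: concrete symbol -> class id
    observations: (concrete symbol, observed output) pairs seen while learning."""
    out_by_class = {}
    for sym, out in observations:
        out_by_class.setdefault(abstraction[sym], {}).setdefault(out, set()).add(sym)
    refined, next_id = {}, 0
    for cls in sorted(out_by_class):
        for out in sorted(out_by_class[cls]):
            for sym in out_by_class[cls][out]:
                refined[sym] = next_id   # one new class per distinct output
            next_id += 1
    return refined

abstraction = {"a": 0, "b": 0, "c": 0}      # all inputs start in one class
obs = [("a", "ok"), ("b", "ok"), ("c", "err")]
print(refine(abstraction, obs))
```

After refinement, "a" and "b" (which behaved identically) stay together while "c" gets its own class; in the paper this split happens on-the-fly inside the learning loop rather than as a batch step.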
Using supercritical fluids to refine hydrocarbons
Yarbro, Stephen Lee
2014-11-25
This is a method to reactively refine hydrocarbons, such as heavy oils with API gravities of less than 20° and bitumen-like hydrocarbons with viscosities greater than 1000 cP at standard temperature and pressure, using a selected fluid at supercritical conditions. The reaction portion of the method delivers lighter-weight, more volatile hydrocarbons to an attached contacting device that operates in mixed subcritical or supercritical modes. This separates the reaction products into portions that are viable for use or sale without further conventional refining and hydro-processing. The method produces valuable products with fewer processing steps and lower costs, increases worker safety through reduced processing and handling, allows greater opportunity for new oil field development with subsequent positive economic impact, and reduces the carbon dioxide emissions and wastes typical of conventional refineries.
The indirect electrochemical refining of lunar ores
NASA Technical Reports Server (NTRS)
Semkow, Krystyna W.; Sammells, Anthony F.
1987-01-01
Recent work performed on an electrolytic cell is reported which addresses the implicit limitations in various approaches to refining lunar ores. The cell uses an oxygen vacancy conducting stabilized zirconia solid electrolyte to effect separation between a molten salt catholyte compartment where alkali metals are deposited, and an oxygen-evolving anode of composition La(0.89)Sr(0.1)MnO3. The cell configuration is shown and discussed along with a polarization curve and a steady-state current-voltage curve. In a practical cell, cathodically deposited liquid lithium would be continuously removed from the electrolytic cell and used as a valuable reducing agent for ore refining under lunar conditions. Oxygen would be indirectly electrochemically extracted from lunar ores for breathing purposes.
Substance abuse in the refining industry
Little, A. Jr.; Ross, J.K.; Lavorerio, R.; Richards, T.A.
1989-01-01
In order to provide some background for the NPRA Annual Meeting Management Session panel discussion on Substance Abuse in the Refining and Petrochemical Industries, NPRA distributed a questionnaire to member companies requesting information regarding the status of their individual substance abuse policies. The questionnaire was designed to identify general trends in the industry. The aggregate responses to the survey are summarized in this paper, as background for the Substance Abuse panel discussions.
Humanoid Mobile Manipulation Using Controller Refinement
NASA Technical Reports Server (NTRS)
Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric
2006-01-01
An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. In this paper, it is proposed that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.
Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test-data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test-rig data with the ability to augment/modify the data stream (e.g., to inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test-data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
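The data-stream augmentation capability described above, injecting a simulated fault during playback, might look like the following sketch. The channel names, fault model, and values are hypothetical illustrations, not PHALT's actual API:

```python
# Sketch of fault injection during recorded-data playback: replay a sensor
# stream while adding a bias to one channel over a chosen time window.
# Channel names ("egt", "n2") and fault values are invented for illustration.

def inject_fault(stream, channel, bias, t_start, t_end):
    """Return a copy of `stream` with `bias` added to `channel`
    for samples whose timestamp lies in [t_start, t_end]."""
    out = []
    for t, sample in stream:
        sample = dict(sample)            # do not mutate the recorded data
        if t_start <= t <= t_end:
            sample[channel] += bias
        out.append((t, sample))
    return out

recorded = [(t, {"egt": 600.0, "n2": 92.0}) for t in range(5)]
faulty = inject_fault(recorded, "egt", 75.0, t_start=2, t_end=3)
print(faulty[2][1]["egt"])  # → 675.0
```

Feeding the augmented stream to a diagnostic algorithm lets one check that the fault is detected inside the injection window and that no false alarm occurs outside it.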
Grain Refinement of Permanent Mold Cast Copper Base Alloys
M. Sadayappan; J.P. Thomson; M. Elboujdaini; G. Ping Gu; M. Sahoo
2005-04-01
Grain refinement is a well established process for many cast and wrought alloys. The mechanical properties of various alloys could be enhanced by reducing the grain size. Refinement is also known to improve casting characteristics such as fluidity and hot tearing. Grain refinement of copper-base alloys is not widely used, especially in sand casting process. However, in permanent mold casting of copper alloys it is now common to use grain refinement to counteract the problem of severe hot tearing which also improves the pressure tightness of plumbing components. The mechanism of grain refinement in copper-base alloys is not well understood. The issues to be studied include the effect of minor alloy additions on the microstructure, their interaction with the grain refiner, effect of cooling rate, and loss of grain refinement (fading). In this investigation, efforts were made to explore and understand grain refinement of copper alloys, especially in permanent mold casting conditions.
Cartesian-cell based grid generation and adaptive mesh refinement
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1993-01-01
Viewgraphs on Cartesian-cell based grid generation and adaptive mesh refinement are presented. Topics covered include: grid generation; cell cutting; data structures; flow solver formulation; adaptive mesh refinement; and viscous flow.
NASA Astrophysics Data System (ADS)
Henshaw, William D.; Schwendeman, Donald W.
2008-08-01
This paper describes an approach for the numerical solution of time-dependent partial differential equations in complex three-dimensional domains. The domains are represented by overlapping structured grids, and block-structured adaptive mesh refinement (AMR) is employed to locally increase the grid resolution. In addition, the numerical method is implemented on parallel distributed-memory computers using a domain-decomposition approach. The implementation is flexible so that each base grid within the overlapping grid structure and its associated refinement grids can be independently partitioned over a chosen set of processors. A modified bin-packing algorithm is used to specify the partition for each grid so that the computational work is evenly distributed amongst the processors. All components of the AMR algorithm such as error estimation, regridding, and interpolation are performed in parallel. The parallel time-stepping algorithm is illustrated for initial-boundary-value problems involving a linear advection-diffusion equation and the (nonlinear) reactive Euler equations. Numerical results are presented for both equations to demonstrate the accuracy and correctness of the parallel approach. Exact solutions of the advection-diffusion equation are constructed, and these are used to check the corresponding numerical solutions for a variety of tests involving different overlapping grids, different numbers of refinement levels and refinement ratios, and different numbers of processors. The problem of planar shock diffraction by a sphere is considered as an illustration of the numerical approach for the Euler equations, and a problem involving the initiation of a detonation from a hot spot in a T-shaped pipe is considered to demonstrate the numerical approach for the reactive case. For both problems, the accuracy of the numerical solutions is assessed quantitatively through an estimation of the errors from a grid convergence study. The parallel performance of the approach is examined in detail for the shock diffraction problem.
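The load-balancing step can be illustrated with a simple greedy longest-processing-time heuristic, a stand-in for the paper's modified bin-packing algorithm. The grid workloads below are illustrative numbers:

```python
import heapq

def partition_grids(workloads, num_procs):
    """Greedy LPT sketch of load balancing: assign each grid's workload,
    largest first, to the currently least-loaded processor. Returns
    (load, processor id, assigned workloads) triples."""
    heap = [(0.0, p, []) for p in range(num_procs)]
    heapq.heapify(heap)
    for w in sorted(workloads, reverse=True):
        load, p, items = heapq.heappop(heap)   # least-loaded processor
        items.append(w)
        heapq.heappush(heap, (load + w, p, items))
    return sorted(heap)

for load, p, items in partition_grids([8, 7, 6, 5, 4], 2):
    print(f"proc {p}: load {load} from grids {items}")
```

For AMR the twist handled by the paper's version is that each grid (base or refinement) is partitioned independently over its own chosen processor set, so the balance must hold per grid, not just in aggregate.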
Refining a triangulation of a planar straight-line graph to eliminate large angles
Mitchell, S.A.
1993-05-13
Triangulations without large angles have a number of applications in numerical analysis and computer graphics. In particular, the convergence of a finite element calculation depends on the largest angle of the triangulation. Also, the running time of a finite element calculation depends on the triangulation size, so having a triangulation with few Steiner points is also important. Bern, Dobkin and Eppstein pose as an open problem the existence of an algorithm to triangulate a planar straight-line graph (PSLG) without large angles using a polynomial number of Steiner points. We solve this problem by showing that any PSLG with v vertices can be triangulated with no angle larger than 7π/8 by adding O(v² log v) Steiner points in O(v² log² v) time. We first triangulate the PSLG with an arbitrary constrained triangulation and then refine that triangulation by adding additional vertices and edges. Some PSLGs require Ω(v²) Steiner points in any triangulation achieving any largest-angle bound less than π. Hence the number of Steiner points added by our algorithm is within a log v factor of worst-case optimal. We note that our refinement algorithm works on arbitrary triangulations: given any triangulation, we show how to refine it so that no angle is larger than 7π/8. Our construction adds O(nm + np log m) vertices and runs in O((nm + np log m) log(m + p)) time, where n is the number of edges, m is one plus the number of obtuse angles, and p is one plus the number of holes and interior vertices in the original triangulation. A previously considered problem is refining a constrained triangulation of a simple polygon, where p = 1. For this problem we add O(v²) Steiner points, which is within a constant factor of worst-case optimal.
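The refinement trigger, a triangle whose largest angle exceeds the 7π/8 (157.5°) bound, is easy to test. A sketch with a made-up near-degenerate triangle:

```python
import math

def max_angle_deg(tri):
    """Largest interior angle, in degrees, of a triangle given as three (x, y) points."""
    def ang(a, b, c):  # angle at vertex b
        v1 = (a[0] - b[0], a[1] - b[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        n = math.hypot(*v1) * math.hypot(*v2)
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / n))))
    a, b, c = tri
    return max(ang(c, a, b), ang(a, b, c), ang(b, c, a))

BOUND = 7 * 180 / 8  # the paper's 7*pi/8 bound, i.e. 157.5 degrees

skinny = [(0, 0), (10, 0), (5, 0.3)]   # nearly flat triangle (toy input)
print(max_angle_deg(skinny) > BOUND)   # → True: this triangle would be refined
```

A refinement algorithm repeatedly finds such triangles and inserts Steiner points until every angle satisfies the bound; the paper's contribution is doing so with provably few added points.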
The blind leading the blind: Mutual refinement of approximate theories
NASA Technical Reports Server (NTRS)
Kedar, Smadar T.; Bresina, John L.; Dent, C. Lisa
1991-01-01
The mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models which cause the failures. The approach focuses on using one approximate model to refine another.
Coloured Petri Net Refinement Specification and Correctness Proof with Coq
NASA Technical Reports Server (NTRS)
Choppy, Christine; Mayero, Micaela; Petrucci, Laure
2009-01-01
In this work, we address the formalisation in Coq of the refinement of symmetric nets, a subclass of coloured Petri nets. We first provide a formalisation of the net models and of their type refinement in Coq. Then the Coq proof assistant is used to prove the refinement correctness lemma. An example adapted from a protocol illustrates our work.
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2010 CFR
2010-10-01
48 Federal Acquisition Regulations System; Government-Owned Precious Metals; 208.7304 Refined precious metals: see PGI 208.7304 for a list of refined precious metals managed by DSCP.
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2013 CFR
2013-10-01
48 Federal Acquisition Regulations System; Government-Owned Precious Metals; 208.7304 Refined precious metals: see PGI 208.7304 for a list of refined precious metals managed by DSCP.
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2012 CFR
2012-10-01
48 Federal Acquisition Regulations System; Government-Owned Precious Metals; 208.7304 Refined precious metals: see PGI 208.7304 for a list of refined precious metals managed by DSCP.
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2011 CFR
2011-10-01
48 Federal Acquisition Regulations System; Government-Owned Precious Metals; 208.7304 Refined precious metals: see PGI 208.7304 for a list of refined precious metals managed by DSCP.
48 CFR 208.7304 - Refined precious metals.
Code of Federal Regulations, 2014 CFR
2014-10-01
48 Federal Acquisition Regulations System; Government-Owned Precious Metals; 208.7304 Refined precious metals: see PGI 208.7304 for a list of refined precious metals managed by DSCP.
Simulating Nonequilibrium Radiation via Orthogonal Polynomial Refinement
2015-01-07
from data bases which are derived from quantum physics and transmit across the two different coordinates by a nearest neighbor search algorithm [7,8...complex radiative simulation must be built on the interaction of quantum physics, chemical kinetics, aerodynamics, and radiation transfer. The most...created a space partition algorithm for the nearest neighbor search optimization on both structured/unstructured grids and integrated with the high
Refining the asteroid taxonomy by polarimetric observations
NASA Astrophysics Data System (ADS)
Belskaya, I. N.; Fornasier, S.; Tozzi, G. P.; Gil-Hutton, R.; Cellino, A.; Antonyuk, K.; Krugly, Yu. N.; Dovgopol, A. N.; Faggi, S.
2017-03-01
We present new results of polarimetric observations of 15 main belt asteroids of different composition. By merging new and published data we determined polarimetric parameters characterizing individual asteroids and mean values of the same parameters characterizing different taxonomic classes. The majority of asteroids show polarimetric phase curves close to the average curve of the corresponding class. We show that using polarimetric data it is possible to refine asteroid taxonomy and derive a polarimetric classification for 283 main belt asteroids. Polarimetric observations of asteroid (21) Lutetia are found to exhibit possible variations of the position angle of the polarization plane over the surface.
Acoustic Logging Modeling by Refined Biot's Equations
NASA Astrophysics Data System (ADS)
Plyushchenkov, Boris D.; Turchaninov, Victor I.
An explicit, uniform, completely conservative finite difference scheme for the refined Biot's equations is proposed. The system is modified according to the modern theory of dynamic permeability and tortuosity in a fluid-saturated elastic porous medium. Approximate local boundary transparency conditions are constructed. The acoustic logging device is simulated by the choice of appropriate boundary conditions on its external surface. This scheme and these conditions are suitable for exploring borehole acoustic problems in permeable formations in a realistic axially symmetric situation. The developed approach can also be adapted to the nonsymmetric case.
Formal language theory: refining the Chomsky hierarchy.
Jäger, Gerhard; Rogers, James
2012-07-19
The first part of this article gives a brief overview of the four levels of the Chomsky hierarchy, with a special emphasis on context-free and regular languages. It then recapitulates the arguments why neither regular nor context-free grammar is sufficiently expressive to capture all phenomena in natural language syntax. In the second part, two refinements of the Chomsky hierarchy are reviewed, both relevant to current research in cognitive science: the mildly context-sensitive languages (which are located between context-free and context-sensitive languages), and the sub-regular hierarchy (which distinguishes several levels of complexity within the class of regular languages).
Adaptive refinement tools for tetrahedral unstructured grids
NASA Technical Reports Server (NTRS)
Pao, S. Paul (Inventor); Abdol-Hamid, Khaled S. (Inventor)
2011-01-01
An exemplary embodiment providing one or more improvements includes software which is robust, efficient, and has a very fast run time for user directed grid enrichment and flow solution adaptive grid refinement. All user selectable options (e.g., the choice of functions, the choice of thresholds, etc.), other than a pre-marked cell list, can be entered on the command line. The ease of application is an asset for flow physics research and preliminary design CFD analysis where fast grid modification is often needed to deal with unanticipated development of flow details.
WASP-41b: Refined Physical Properties
NASA Astrophysics Data System (ADS)
Vaňko, M.; Pribulla, T.; Tan, T. G.; Parimucha, Š.; Evans, P.; Mašek, M.
2015-07-01
We present the first follow-up study of the transiting system WASP-41 after its discovery in 2011. Our main goal is to refine the physical parameters of the system and to search for possible signs of transit timing variations. The observations used for the analysis were taken from the public archive Exoplanet Transit Database (ETD). The Safronov number and equilibrium temperature of WASP-41b indicate that it belongs to the so-called Class I. No transit timing variations (TTV) were detected.
AMAR: A Computational Model of Autosegmental Phonology
1993-10-01
the 8th International Joint Conference on Artificial Intelligence, 683-5. Koskenniemi, K. 1984. A general computational model for word-form recognition... Massachusetts Institute of Technology Artificial Intelligence Laboratory, AI-TR 1450, 545 Technology Square, Cambridge, Massachusetts 02139 ...to give the reader a feel for the workings of AMAR, this chapter will begin with a very simple example based on an artificial tone language with only t
Refinement Of Hexahedral Cells In Euler Flow Computations
NASA Technical Reports Server (NTRS)
Melton, John E.; Cappuccio, Gelsomina; Thomas, Scott D.
1996-01-01
Topologically Independent Grid, Euler Refinement (TIGER) computer program solves Euler equations of three-dimensional, unsteady flow of inviscid, compressible fluid by numerical integration on unstructured hexahedral coordinate grid refined where necessary to resolve shocks and other details. Hexahedral cells subdivided, each into eight smaller cells, as needed to refine computational grid in regions of high flow gradients. Grid Interactive Refinement and Flow-Field Examination (GIRAFFE) computer program written in conjunction with TIGER program to display computed flow-field data and to assist researcher in verifying specified boundary conditions and refining grid.
Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
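The recursive Cartesian subdivision described above can be illustrated with a minimal 2-D quadtree sketch (the paper stores its grid in a binary tree and handles cut cells; the class, names, and refinement test below are illustrative simplifications, not the authors' code):

```python
# Minimal 2-D quadtree sketch of recursive Cartesian cell subdivision.
# Cell layout and the refinement criterion are illustrative only.

class Cell:
    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []          # empty list => leaf cell

    def refine(self, needs_refinement, max_level=6):
        """Recursively split this cell into four equal quadrants."""
        if self.level >= max_level or not needs_refinement(self):
            return
        half = self.size / 2.0
        self.children = [
            Cell(self.x + dx * half, self.y + dy * half, half, self.level + 1)
            for dx in (0, 1) for dy in (0, 1)
        ]
        for child in self.children:
            child.refine(needs_refinement, max_level)

    def leaves(self):
        """Collect the leaf cells, i.e. the computational grid."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Example: cluster cells near the origin, mimicking solution-adaptive
# refinement around a region of high flow gradients.
root = Cell(0.0, 0.0, 1.0)
root.refine(lambda c: (c.x ** 2 + c.y ** 2) ** 0.5 < 0.25, max_level=4)
grid = root.leaves()
```

The tree structure gives cell-to-cell connectivity for free: a neighbor search is a walk up to a common ancestor and back down, which is what makes solution-adaptive refinement cheap in this setting.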
Towards adaptive kinetic-fluid simulations of weakly ionized plasmas
NASA Astrophysics Data System (ADS)
Kolobov, V. I.; Arslanbekov, R. R.
2012-02-01
This paper describes an Adaptive Mesh and Algorithm Refinement (AMAR) methodology for multi-scale simulations of gas flows and the challenges associated with extending this methodology to simulations of weakly ionized plasmas. The AMAR method combines Adaptive Mesh Refinement (AMR) with automatic selection of kinetic or continuum solvers in different parts of the computational domain. We first review the discrete velocity method for solving the Boltzmann and Wang Chang-Uhlenbeck kinetic equations for rarefied gases. Then, peculiarities of AMR implementation with an octree Cartesian mesh are discussed. A Unified Flow Solver (UFS) uses the AMAR method with an adaptive Cartesian mesh to dynamically introduce kinetic patches for multi-scale simulations of gas flows. We describe fluid plasma models with AMR capabilities and illustrate how physical models affect simulation results for gas discharges, especially in areas where electron kinetics plays an important role. We introduce Eulerian solvers for plasma kinetic equations and illustrate the concept of an adaptive mesh in velocity space. Specifics of electron kinetics in collisional plasmas are described, focusing on deterministic methods of solving kinetic equations for electrons under different conditions. We illustrate the appearance of distinct groups of electrons in the cathode region of DC discharges and discuss the physical models appropriate for each group. These kinetic models are currently being incorporated into the AMAR methodology for multi-scale plasma simulations.
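The automatic kinetic-versus-continuum switching at the heart of AMAR is typically driven by a continuum-breakdown parameter. A minimal sketch follows; the gradient-length Knudsen criterion and the 0.05 threshold are commonly used in the hybrid-solver literature but are illustrative assumptions here, not UFS internals:

```python
# Sketch: pick a kinetic or continuum solver per cell from a
# gradient-length Knudsen number Kn = mfp * |drho/dx| / rho.
# The 0.05 breakdown threshold is an illustrative assumption.

def gradient_knudsen(rho, dx, mean_free_path):
    """Kn in each interior cell of a 1-D density profile."""
    kn = []
    for i in range(1, len(rho) - 1):
        grad = abs(rho[i + 1] - rho[i - 1]) / (2.0 * dx)
        kn.append(mean_free_path * grad / rho[i])
    return kn

def select_solvers(rho, dx, mean_free_path, threshold=0.05):
    """Return 'kinetic' or 'continuum' for each interior cell."""
    return ['kinetic' if k > threshold else 'continuum'
            for k in gradient_knudsen(rho, dx, mean_free_path)]

# Smooth regions stay continuum; the sharp density jump triggers
# a kinetic patch around the steep gradient.
rho = [1.0, 1.0, 1.0, 1.0, 0.4, 0.4, 0.4]
flags = select_solvers(rho, dx=0.01, mean_free_path=0.001)
```

In an AMR setting this test would run per cell each regridding cycle, so kinetic patches appear and disappear as the flow evolves.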
Increased delignification by white rot fungi after pressure refining Miscanthus.
Baker, Paul W; Charlton, Adam; Hale, Mike D C
2015-01-01
Pressure refining, a pulp making process to separate fibres of lignocellulosic materials, deposits lignin granules on the surface of the fibres that could enable increased access to lignin degrading enzymes. Three different white rot fungi were grown on pressure refined (at 6 bar and 8 bar) and milled Miscanthus. Growth after 28 days showed highest biomass losses on milled Miscanthus compared to pressure refined Miscanthus. Ceriporiopsis subvermispora caused a significantly higher proportion of lignin removal when grown on 6 bar pressure refined Miscanthus compared to growth on 8 bar pressure refined Miscanthus and milled Miscanthus. RM22b followed a similar trend but Phlebiopsis gigantea SPLog6 did not. Conversely, C. subvermispora growing on pressure refined Miscanthus revealed that the proportion of cellulose increased. These results show that two of the three white rot fungi used in this study showed higher delignification on pressure refined Miscanthus than milled Miscanthus.
Rapid Glass Refiner Development Program, Final report
1995-02-20
A rapid glass refiner (RGR) technology which could be applied to both conventional and advanced glass melting systems would significantly enhance the productivity and the competitiveness of the glass industry in the United States. Therefore, Vortec Corporation, with the support of the US Department of Energy (US DOE) under Cooperative Agreement No. DE-FC07-90ID12911, conducted a research and development program for a unique and innovative approach to rapid glass refining. To provide focus for this research effort, container glass was the primary target from among the principal glass types, based on its market size and potential for significant energy savings. Container glass products represent the largest segment of the total glass industry, accounting for 60% of the tonnage produced and over 40% of the annual energy consumption of 232 trillion Btu/yr. Projections of energy consumption and the market penetration of advanced melting and fining into the container glass industry yield a potential energy savings of 7.9 trillion Btu/yr by the year 2020.
Adaptive Mesh Refinement for ICF Calculations
NASA Astrophysics Data System (ADS)
Fyfe, David
2005-10-01
This paper describes our use of the package PARAMESH to create an Adaptive Mesh Refinement (AMR) version of NRL's FASTRAD3D code. PARAMESH was designed to create an MPI-based AMR code from a block structured serial code such as FASTRAD3D. FASTRAD3D is a compressible hydrodynamics code containing the physical effects relevant for the simulation of high-temperature plasmas including inertial confinement fusion (ICF) Rayleigh-Taylor unstable direct drive laser targets. These effects include inverse bremsstrahlung laser energy absorption, classical flux-limited Spitzer thermal conduction, real (table look-up) equation-of-state with either separate or identical electron and ion temperatures, multi-group variable Eddington radiation transport, and multi-group alpha particle transport and thermonuclear burn. Numerically, this physics requires an elliptic solver and a ray tracing approach on the AMR grid, which is the main subject of this paper. A sample ICF calculation will be presented. MacNeice et al., ``PARAMESH: A parallel adaptive mesh refinement community tool,'' Computer Physics Communications, 126 (2000), pp. 330-354.
Molecular refinement of gibbon genome rearrangements.
Roberto, Roberta; Capozzi, Oronzo; Wilson, Richard K; Mardis, Elaine R; Lomiento, Mariana; Tuzun, Eray; Cheng, Ze; Mootnick, Alan R; Archidiacono, Nicoletta; Rocchi, Mariano; Eichler, Evan E
2007-02-01
The gibbon karyotype is known to be extensively rearranged when compared to the human and to the ancestral primate karyotype. By combining a bioinformatics (paired-end sequence analysis) approach and a molecular cytogenetics approach, we have refined the synteny block arrangement of the white-cheeked gibbon (Nomascus leucogenys, NLE) with respect to the human genome. We provide the first detailed clone framework map of the gibbon genome and refine the location of 86 evolutionary breakpoints to <1 Mb resolution. An additional 12 breakpoints, mapping primarily to centromeric and telomeric regions, were mapped to approximately 5 Mb resolution. Our combined FISH and BES analysis indicates that we have effectively subcloned 49 of these breakpoints within NLE gibbon BAC clones, mapped to a median resolution of 79.7 kb. Interestingly, many of the intervals associated with translocations were gene-rich, including some genes associated with normal skeletal development. Comparisons of NLE breakpoints with those of other gibbon species reveal variability in the position, suggesting that chromosomal rearrangement has been a longstanding property of this particular ape lineage. Our data emphasize the synergistic effect of combining computational genomics and cytogenetics and provide a framework for ultimate sequence and assembly of the gibbon genome.
Current sheets, reconnection and adaptive mesh refinement
NASA Astrophysics Data System (ADS)
Marliani, Christiane
1998-11-01
Adaptive structured mesh refinement methods have proved to be an appropriate tool for the numerical study of a variety of problems where largely separated length scales are involved, e.g. [R. Grauer, C. Marliani, K. Germaschewski, PRL, 80, 4177, (1998)]. A typical example in plasma physics are the current sheets in magnetohydrodynamic flows. Their dynamics is investigated in the framework of incompressible MHD. We present simulations of the ideal and inviscid dynamics in two and three dimensions. In addition, we show numerical simulations for the resistive case in two dimensions. Specifically, we show simulations for the case of the doubly periodic coalescence instability. At the onset of the reconnection process the kinetic energy rises and drops rapidly and afterwards settles into an oscillatory phase. The timescale of the magnetic reconnection process is not affected by these fast events but consistent with the Sweet-Parker model of stationary reconnection. Taking into account the electron inertia terms in the generalized Ohm's law the electron skin depth is introduced as an additional parameter. The modified equations allow for magnetic reconnection in the collisionless regime. Current density and vorticity concentrate in extremely long and thin sheets. Their dynamics becomes numerically accessible by means of adaptive mesh refinement.
h-Refinement for simple corner balance scheme of SN transport equation on distorted meshes
NASA Astrophysics Data System (ADS)
Yang, Rong; Yuan, Guangwei
2016-11-01
The transport sweep algorithm is a common method for solving the discrete ordinate transport equation, but it breaks down once a concave cell appears in the spatial mesh. To deal with this issue, a local h-refinement for the simple corner balance (SCB) scheme of the SN transport equation on arbitrary quadrilateral meshes is presented in this paper, using a new subcell partition. This generates a hybrid mesh with both triangular and quadrilateral cells and improves their geometric quality; in particular, it ensures that all cells become convex. Combining this with the original SCB scheme, an adaptive transfer algorithm based on the hybrid mesh is constructed. Numerical experiments are presented to verify the utility and accuracy of the new algorithm, especially for application problems such as radiation transport coupled with Lagrangian hydrodynamic flow. The results show that it performs well on extremely distorted meshes with concave cells, on which the original SCB scheme does not work.
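The concave-cell difficulty above reduces to a simple geometric test: a quadrilateral is convex exactly when the cross products of consecutive edges all share one sign. A minimal sketch (not the authors' code; vertex ordering is assumed consistent):

```python
def is_convex(quad):
    """True if a quadrilateral (4 vertices in consistent CCW or CW
    order) is convex: consecutive-edge cross products share a sign."""
    signs = set()
    n = len(quad)
    for i in range(n):
        ax, ay = quad[i]
        bx, by = quad[(i + 1) % n]
        cx, cy = quad[(i + 2) % n]
        # z-component of (b - a) x (c - b)
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0.0:
            signs.add(cross > 0.0)
    return len(signs) <= 1          # mixed signs => a reflex vertex

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
arrow = [(0, 0), (2, 1), (0, 2), (0.5, 1)]   # concave "chevron" cell
```

A distorted Lagrangian mesh would run this test per cell; any cell with a reflex vertex is the kind the subcell partition must split before the sweep can proceed.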
Parallel adaptive mesh refinement techniques for plasticity problems
Barry, W.J.; Jones, M.T. |; Plassmann, P.E.
1997-12-31
The accurate modeling of the nonlinear properties of materials can be computationally expensive. Parallel computing offers an attractive way to solve such problems; however, the efficient use of these systems requires the vertical integration of a number of very different software components. We explore the solution of two- and three-dimensional, small-strain plasticity problems. We consider a finite-element formulation of the problem with adaptive refinement of an unstructured mesh to accurately model plastic transition zones. We present a framework for the parallel implementation of such complex algorithms. This framework, using libraries from the SUMAA3d project, allows a user to build a parallel finite-element application without writing any parallel code. To demonstrate the effectiveness of this approach on widely varying parallel architectures, we present experimental results from an IBM SP parallel computer and an ATM-connected network of Sun UltraSparc workstations. The results detail the parallel performance of the computational phases of the application as the material is incrementally loaded.
Refinement of predictive reaeration equations for a typical Indian river
NASA Astrophysics Data System (ADS)
Jha, Ramakar; Ojha, C. S. P.; Bhatia, K. K. S.
2001-04-01
Dissolved oxygen mass balance has been computed for different reaches of the River Kali in western Uttar Pradesh (India) to obtain the reaeration coefficient (K2). A total of 270 field data sets were collected during the period from March 1999 to February 2000. The eleven most popular predictive equations used for reaeration prediction, utilizing mean stream velocity, bed slope, flow depth, friction velocity and Froude number, have been tested for their applicability to the River Kali using data generated during the field survey. The K2 values computed from these predictive equations have been compared with the K2 values observed from dissolved oxygen balance measurements in the field. The performance of the predictive equations has been evaluated using error estimates, namely standard error (SE), normal mean error (NME), mean multiplicative error (MME), and correlation statistics. The equations developed by Smoot and by Cadwallader and McDonnell showed comparatively better results. Moreover, a refined predictive equation has been developed for the River Kali using a least-squares algorithm; it minimizes the error estimates and improves the correlation between observed and computed reaeration coefficients.
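Refined reaeration equations of this kind typically take a power-law form such as K2 = a·V^b·H^c (velocity V, depth H), which becomes linear in log space and can be fitted by ordinary least squares. A sketch with synthetic data; the functional form, variable names, and coefficients are illustrative, not the paper's fitted equation:

```python
import math

def fit_power_law(V, H, K2):
    """Least-squares fit of K2 = a * V**b * H**c in log space.
    Solves the 3x3 normal equations by Gaussian elimination."""
    X = [[1.0, math.log(v), math.log(h)] for v, h in zip(V, H)]
    y = [math.log(k) for k in K2]
    # Normal equations A w = r with A = X^T X, r = X^T y.
    A = [[sum(row[p] * row[q] for row in X) for q in range(3)]
         for p in range(3)]
    r = [sum(X[i][p] * y[i] for i in range(len(X))) for p in range(3)]
    for p in range(3):                     # forward elimination
        for q in range(p + 1, 3):
            f = A[q][p] / A[p][p]
            A[q] = [aq - f * ap for aq, ap in zip(A[q], A[p])]
            r[q] -= f * r[p]
    w = [0.0, 0.0, 0.0]
    for p in (2, 1, 0):                    # back substitution
        w[p] = (r[p] - sum(A[p][q] * w[q] for q in range(p + 1, 3))) / A[p][p]
    return math.exp(w[0]), w[1], w[2]      # a, b, c

# Synthetic check: noise-free data generated from K2 = 5 * V**0.6 * H**-1.4,
# so the fit should recover the generating coefficients.
V = [0.2, 0.5, 0.8, 1.1, 1.6, 2.0]
H = [0.3, 0.5, 0.9, 1.2, 1.8, 2.5]
K2 = [5.0 * v ** 0.6 * h ** -1.4 for v, h in zip(V, H)]
a, b, c = fit_power_law(V, H, K2)
```

With real field data the fit would of course leave residuals, and the error statistics named in the abstract (SE, NME, MME) are what one would compute on them to compare candidate equations.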
Cosmos++: Relativistic Magnetohydrodynamics on Unstructured Grids with Local Adaptive Refinement
Anninos, P; Fragile, P C; Salmonson, J D
2005-05-06
A new code and methodology are introduced for solving the fully general relativistic magnetohydrodynamic (GRMHD) equations using time-explicit, finite-volume discretization. The code has options for solving the GRMHD equations using traditional artificial-viscosity (AV) or non-oscillatory central difference (NOCD) methods, or a new extended AV (eAV) scheme using artificial viscosity together with a dual energy-flux-conserving formulation. The dual energy approach allows for accurate modeling of highly relativistic flows at boost factors well beyond what has been achieved to date by standard artificial viscosity methods. It provides the benefit of Godunov methods in capturing highly Lorentz boosted flows, but without complicated Riemann solvers, and the advantages of traditional artificial viscosity methods in their speed and flexibility. Additionally, the GRMHD equations are solved on an unstructured grid that supports local adaptive mesh refinement using a fully threaded oct-tree (in three dimensions) network to traverse the grid hierarchy across levels and immediate neighbors. A number of tests are presented to demonstrate the robustness of the numerical algorithms and adaptive mesh framework over a wide spectrum of problems, boosts, and astrophysical applications, including relativistic shock tubes, shock collisions, magnetosonic shocks, Alfven wave propagation, blast waves, magnetized Bondi flow, and the magneto-rotational instability in Kerr black hole spacetimes.
Refinement of ground reference data with segmented image data
NASA Technical Reports Server (NTRS)
Robinson, Jon W.; Tilton, James C.
1991-01-01
One of the ways to determine ground reference data (GRD) for satellite remote sensing data is to photo-interpret low-altitude aerial photographs, digitize the cover types on a digitizing tablet, and register them to 7.5-minute U.S.G.S. maps (that were themselves digitized). The resulting GRD can be registered to the satellite image, or vice versa. Unfortunately, there are many opportunities for error when using a digitizing tablet, and the resolution of the edges of the GRD depends on the spacing of the points selected on the tablet. One consequence is that when the GRD is overlaid on the image, errors and missed detail become evident. An approach is discussed for correcting these errors and adding detail to the GRD through the use of a highly interactive, visually oriented process. This process involves the use of overlaid visual displays of the satellite image data, the GRD, and a segmentation of the satellite image data. Several prototype programs were implemented which provide means of taking a segmented image and using the edges from the reference data to mask out those segment edges that are beyond a certain distance from the reference data edges. Then, using the reference data edges as a guide, those segment edges that remain and that are judged not to be image versions of the reference edges are manually marked and removed. The prototype programs that were developed and the algorithmic refinements that facilitate execution of this task are described.
NASA Astrophysics Data System (ADS)
Wolfe, William J.; Wood, David; Sorensen, Stephen E.
1996-12-01
This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an idealized model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms such as genetic algorithms and simulated annealing have excessive run times and are too complex to be practical.
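A dispatch rule for window-constrained packing can be as simple as earliest-deadline-first on a single shared resource; this is the kind of fast, fairly accurate heuristic the comparison above favors. A minimal sketch (the job format and the EDF rule are illustrative assumptions, not the paper's algorithms):

```python
# Dispatch heuristic sketch for window-constrained scheduling:
# sort jobs by window end (earliest deadline first), then greedily
# place each at the earliest feasible time on one shared resource.

def dispatch(jobs):
    """jobs: list of (name, duration, window_start, window_end).
    Returns {name: start_time} for the jobs that fit."""
    schedule, busy_until = {}, 0.0
    for name, dur, w0, w1 in sorted(jobs, key=lambda j: j[3]):
        start = max(busy_until, w0)        # earliest feasible start
        if start + dur <= w1:              # fits inside its window
            schedule[name] = start
            busy_until = start + dur
    return schedule

jobs = [("a", 2, 0, 5), ("b", 3, 0, 4), ("c", 2, 4, 10)]
plan = dispatch(jobs)
```

Look-ahead and genetic methods spend their extra run time reconsidering jobs a greedy pass like this one would drop.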
Sobel, E.; Lange, K.; O`Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
NASA Astrophysics Data System (ADS)
Soghrati, Soheil; Xiao, Fei; Nagarajan, Anand
2016-12-01
A Conforming to Interface Structured Adaptive Mesh Refinement (CISAMR) technique is introduced for the automated transformation of a structured grid into a conforming mesh with appropriate element aspect ratios. The CISAMR algorithm is composed of three main phases: (i) Structured Adaptive Mesh Refinement (SAMR) of the background grid; (ii) r-adaptivity of the nodes of elements cut by the crack; (iii) sub-triangulation of the elements deformed during the r-adaptivity process and those with hanging nodes generated during the SAMR process. The required considerations for the treatment of crack tips and branching cracks are also discussed in this manuscript. Regardless of the complexity of the problem geometry and without using iterative smoothing or optimization techniques, CISAMR ensures that aspect ratios of conforming elements are lower than three. Multiple numerical examples are presented to demonstrate the application of CISAMR for modeling linear elastic fracture problems with intricate morphologies.
Hornung, R.D.
1996-12-31
An adaptive local mesh refinement (AMR) algorithm originally developed for unsteady gas dynamics is extended to multi-phase flow in porous media. Within the AMR framework, we combine specialized numerical methods to treat the different aspects of the partial differential equations. Multi-level iteration and domain decomposition techniques are incorporated to accommodate elliptic/parabolic behavior. High-resolution shock capturing schemes are used in the time integration of the hyperbolic mass conservation equations. When combined with AMR, these numerical schemes provide high resolution locally in a more efficient manner than if they were applied on a uniformly fine computational mesh. We will discuss the interplay of physical, mathematical, and numerical concerns in the application of adaptive mesh refinement to flow in porous media problems of practical interest.
A MEDLINE categorization algorithm
Darmoni, Stefan J; Névéol, Aurelie; Renard, Jean-Marie; Gehanno, Jean-Francois; Soualmia, Lina F; Dahamna, Badisse; Thirion, Benoit
2006-01-01
Background Categorization is designed to enhance resource description by organizing content description so that the reader can quickly and easily grasp the main topics discussed. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered as super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA) is based on semantic links existing between MeSH terms and metaterms on the one hand and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file in decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site. It can run on any MEDLINE file in a batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE, are: information science, organization and administration, and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm in order to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources with MeSH (terms
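The inference step can be sketched as a lookup-and-rank over a manually curated link table: map each article's MeSH terms to metaterms, then rank metaterms by how many articles support them. The link table and scoring below are a toy illustration, not CISMeF's actual data or weights:

```python
# Sketch of the categorization step: infer metaterms (medical
# specialties) from MeSH indexing via a curated link table, then
# rank specialties by decreasing number of supporting articles.

LINKS = {                       # MeSH term -> linked metaterms (toy data)
    "Decision Support Systems": ["medical informatics"],
    "Information Storage and Retrieval": ["information science",
                                          "medical informatics"],
    "Practice Management": ["organization and administration"],
}

def categorize(articles):
    """articles: list of MeSH-term lists. Returns specialties ranked
    by decreasing number of supporting articles (ties alphabetical)."""
    counts = {}
    for mesh_terms in articles:
        metaterms = {m for t in mesh_terms for m in LINKS.get(t, [])}
        for m in metaterms:     # count each specialty once per article
            counts[m] = counts.get(m, 0) + 1
    return sorted(counts, key=lambda m: (-counts[m], m))

articles = [
    ["Information Storage and Retrieval", "Decision Support Systems"],
    ["Information Storage and Retrieval"],
    ["Practice Management"],
]
ranking = categorize(articles)
```

The real MCA also exploits MeSH subheadings and librarian-assigned link quality, but the core remains this inference from indexing to a ranked specialty list.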
Proving refinement transformations for deriving high-assurance software
Winter, V.L.; Boyle, J.M.
1996-05-01
The construction of a high-assurance system requires some evidence, ideally a proof, that the system as implemented will behave as required. Direct proofs of implementations do not scale up well as systems become more complex and therefore are of limited value. In recent years, refinement-based approaches have been investigated as a means to manage the complexity inherent in the verification process. In a refinement-based approach, a high-level specification is converted into an implementation through a number of refinement steps. The hope is that the proofs of the individual refinement steps will be easier than a direct proof of the implementation. However, if stepwise refinement is performed manually, the number of steps is severely limited, implying that the size of each step is large. If refinement steps are large, then proofs of their correctness will not be much easier than a direct proof of the implementation. The authors describe an approach to refinement-based software development that is based on automatic application of refinements, expressed as program transformations. This automation has the desirable effect that the refinement steps can be extremely small and, thus, easy to prove correct. They give an overview of the TAMPR transformation system that they use for automated refinement. They then focus on some aspects of the semantic framework that they have been developing to enable proofs that TAMPR transformations are correctness preserving. With this framework, proofs of correctness for transformations can be obtained with the assistance of an automated reasoning system.
Level 5: user refinement to aid the fusion process
NASA Astrophysics Data System (ADS)
Blasch, Erik P.; Plano, Susan
2003-04-01
The revised JDL Fusion model Level 4 process refinement covers a broad spectrum of actions such as sensor management and control. A limitation of Level 4 is the
Dimensional reduction as a tool for mesh refinement and tracking singularities of PDEs
Stinis, Panagiotis
2007-06-10
We present a collection of algorithms which utilize dimensional reduction to perform mesh refinement and study possibly singular solutions of time-dependent partial differential equations. The algorithms are inspired by constructions used in statistical mechanics to evaluate the properties of a system near a critical point. The first algorithm allows the accurate determination of the time of occurrence of a possible singularity. The second algorithm is an adaptive mesh refinement scheme which can be used to approach efficiently the possible singularity. Finally, the third algorithm uses the second algorithm until the available resolution is exhausted (as we approach the possible singularity) and then switches to a dimensionally reduced model which, when accurate, can follow faithfully the solution beyond the time of occurrence of the purported singularity. An accurate dimensionally reduced model should dissipate energy at the right rate. We construct two variants of each algorithm. The first variant assumes that we have actual knowledge of the reduced model. The second variant assumes that we know the form of the reduced model, i.e., the terms appearing in the reduced model, but not necessarily their coefficients. In this case, we also provide a way of determining the coefficients. We present numerical results for the Burgers equation with zero and nonzero viscosity to illustrate the use of the algorithms.
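The basic step that any such adaptive refinement scheme builds on, concentrating resolution where the solution steepens toward a possible singularity, can be sketched minimally. The gradient-threshold criterion below is our illustration, not the paper's algorithm:

```python
import numpy as np

def refine_mesh(x, u, threshold):
    """Return a refined 1-D grid: the midpoint of each cell where the
    discrete slope |du/dx| exceeds `threshold` is inserted, so resolution
    concentrates near steep features (e.g. a forming Burgers shock)."""
    new_x = [x[0]]
    for i in range(len(x) - 1):
        slope = abs(u[i + 1] - u[i]) / (x[i + 1] - x[i])
        if slope > threshold:
            new_x.append(0.5 * (x[i] + x[i + 1]))  # split the steep cell
        new_x.append(x[i + 1])
    return np.array(new_x)
```

Iterating this (re-solving, re-flagging) approaches the singular region until, as the abstract notes, the available resolution is exhausted.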
The evolution and refinements of varicocele surgery
Marmar, Joel L
2016-01-01
Varicoceles had been recognized in clinical practice for over a century. Originally, these procedures were utilized for the management of pain but, since 1952, the repairs had been mostly for the treatment of male infertility. However, the diagnosis and treatment of varicoceles were controversial, because the pathophysiology was not clear, the entry criteria of the studies varied among centers, and there were few randomized clinical trials. Nevertheless, clinicians continued developing techniques for the correction of varicoceles, basic scientists continued investigations on the pathophysiology of varicoceles, and new outcome data from prospective randomized trials have appeared in the world's literature. Therefore, this special edition of the Asian Journal of Andrology was proposed to report much of the new information related to varicoceles and, as a specific part of this project, the present article was developed as a comprehensive review of the evolution and refinements of the corrective procedures. PMID:26732111
Seasat orbit refinement for altimetry application
NASA Astrophysics Data System (ADS)
Mohan, S. N.; Hamata, N. E.; Stavert, R. L.; Bierman, G. J.
1980-12-01
This paper describes the use of stochastic differential correction models in refining the Seasat orbit based on post-flight analysis of tracking data. The objective is to obtain orbital-height precision that is commensurate with the inherent Seasat altimetry data precision level of 10 cm. Local corrections to a mean ballistic arc, perturbed principally by atmospheric drag variations and local gravitational anomalies, are obtained by the introduction of stochastic dynamical models in conjunction with optimal estimation/smoothing techniques. Assessment of the resulting orbit with 'ground truth' provided by Seasat altimetry data shows that the orbital height precision is improved by 32% when compared to a conventional least-squares solution using the same data set. The orbital height precision realized by employing stochastic differential correction models is in the range of 73 cm to 208 cm RMS.
Formal language theory: refining the Chomsky hierarchy
Jäger, Gerhard; Rogers, James
2012-01-01
The first part of this article gives a brief overview of the four levels of the Chomsky hierarchy, with a special emphasis on context-free and regular languages. It then recapitulates the arguments why neither regular nor context-free grammar is sufficiently expressive to capture all phenomena in the natural language syntax. In the second part, two refinements of the Chomsky hierarchy are reviewed, which are both relevant to the extant research in cognitive science: the mildly context-sensitive languages (which are located between context-free and context-sensitive languages), and the sub-regular hierarchy (which distinguishes several levels of complexity within the class of regular languages). PMID:22688632
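A standard illustration of the regular/context-free boundary discussed in the article: the language {aⁿbⁿ | n ≥ 1} is context-free but not regular, because recognizing it requires unbounded counting, which no finite automaton can do. A single counter (a degenerate stack) suffices, as this small sketch shows:

```python
# Recognizer for {a^n b^n | n >= 1}. The counter n plays the role of a
# pushdown stack holding only one symbol type: push per 'a', pop per 'b'.
# No finite-state machine can do this, since n is unbounded.
def is_anbn(s):
    n = 0
    i = 0
    while i < len(s) and s[i] == "a":  # push phase
        n += 1
        i += 1
    while i < len(s) and s[i] == "b":  # pop phase
        n -= 1
        i += 1
    # accept iff the whole string was consumed and the counter balanced
    return i == len(s) and n == 0 and len(s) > 0
```

The mildly context-sensitive classes reviewed in the article extend this idea to patterns like aⁿbⁿcⁿ, which even a single stack cannot handle.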
GRChombo: Numerical relativity with adaptive mesh refinement
NASA Astrophysics Data System (ADS)
Clough, Katy; Figueras, Pau; Finkel, Hal; Kunesch, Markus; Lim, Eugene A.; Tunyasuvunakool, Saran
2015-12-01
In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block-structured Berger-Rigoutsos grid generation. The code supports non-trivial ‘many-boxes-in-many-boxes’ mesh hierarchies and massive parallelism through the Message Passing Interface. GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3 + 1 setting, while also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.
Visualization of Scalar Adaptive Mesh Refinement Data
VACET; Weber, Gunther; Weber, Gunther H.; Beckner, Vince E.; Childs, Hank; Ligocki, Terry J.; Miller, Mark C.; Van Straalen, Brian; Bethel, E. Wes
2007-12-06
Adaptive Mesh Refinement (AMR) is a highly effective computation method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations, which must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR grids as a first-class data type, and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, an open source visualization tool, to accommodate AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR scalar data visualization research.
Technical Considerations for Filler and Neuromodulator Refinements
Wilson, Anthony J.; Chang, Brian L.; Percec, Ivona
2016-01-01
Background: The toolbox for cosmetic practitioners is growing at an unprecedented rate. There are novel products every year and expanding off-label indications for neurotoxin and soft-tissue filler applications. Consequently, aesthetic physicians are increasingly challenged by the task of selecting the most appropriate products and techniques to achieve optimal patient outcomes. Methods: We employed a PubMed literature search of facial injectables from the past 10 years (2005–2015), with emphasis on those articles embracing evidence-based medicine. We evaluated the scientific background of every product and the physicochemical properties that make each one ideal for specific indications. The 2 senior authors provide commentary regarding their clinical experience with specific technical refinements of neuromodulators and soft-tissue fillers. Results: Neurotoxins and fillers are characterized by unique physical characteristics that distinguish each product. This results in subtle but important differences in their clinical applications. Specific indications and recommendations for the use of the various neurotoxins and soft-tissue fillers are reviewed. The discussion highlights refinements in combination treatments and product physical modifications, according to specific treatment zones. Conclusions: The field of facial aesthetics has evolved dramatically, mostly secondary to our increased understanding of 3-dimensional structural volume restoration. Our work reviews Food and Drug Administration–approved injectables. In addition, we describe how to modify products to fulfill specific indications such as treatment of the mid face, décolletage, hands, and periorbital regions. Although we cannot directly evaluate the duration or exact physical properties of blended products, we argue that “product customization” is safe and provides natural results with excellent patient outcomes. PMID:28018778
Essays on refining markets and environmental policy
NASA Astrophysics Data System (ADS)
Oladunjoye, Olusegun Akintunde
This thesis comprises three essays. The first two essays examine empirically the relationship between crude oil price and wholesale gasoline prices in the U.S. petroleum refining industry, while the third essay determines the optimal combination of emissions tax and environmental research and development (ER&D) subsidy when firms organize ER&D either competitively or as a research joint venture (RJV). In the first essay, we estimate an error correction model to determine the effects of market structure on the speed of adjustment of wholesale gasoline prices to crude oil price changes. The results indicate that market structure does not have a strong effect on the dynamics of price adjustment in the three regional markets examined. In the second essay, we allow for inventories to affect the relationship between crude oil and wholesale gasoline prices by allowing them to affect the probability of regime change in a Markov-switching model of the refining margin. We find that low gasoline inventory increases the probability of switching from the low margin regime to the high margin regime and also increases the probability of staying in the high margin regime. This is consistent with the predictions of the competitive storage theory. In the third essay, we extend Industrial Organization R&D theory to the determination of optimal environmental policies. We find that the RJV is socially desirable. In comparison to competitive ER&D, we suggest that regulators should encourage RJVs with a lower emissions tax and higher subsidy, as these will lead to the coordination of ER&D activities and eliminate duplication of efforts while firms internalize their technological spillover externality.
AMRSim: an object-oriented performance simulator for parallel adaptive mesh refinement
Miller, B; Philip, B; Quinlan, D; Wissink, A
2001-01-08
Adaptive mesh refinement is complicated by both the algorithms and the dynamic nature of the computations. In parallel, the complexity of getting good performance is dependent upon the architecture and the application. Most attempts to address the complexity of AMR have led to library solutions, most of them object-oriented libraries or frameworks. All attempts to date have made numerous and sometimes conflicting assumptions, which make the evaluation of the performance of AMR across different applications and architectures difficult or impracticable. The evaluation of different approaches can alternatively be accomplished through simulation of the different AMR processes. In this paper we outline our research work to simulate the processing of adaptive mesh refinement grids using a distributed array class library (P++). This paper presents a combined analytic and empirical approach, since details of the algorithms can be readily predicted (separated into specific phases), while the performance associated with the dynamic behavior must be studied empirically. The result, AMRSim, provides a simple way to develop bounds on the expected performance of AMR calculations subject to constraints given by the algorithms, frameworks, and architecture.
On macromolecular refinement at subatomic resolution withinteratomic scatterers
Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.; Lunin, Vladimir Y.; Urzhumtsev, Alexandre
2007-11-09
A study of the accurate electron density distribution in molecular crystals at subatomic resolution, better than ~1.0 Å, requires more detailed models than those based on independent spherical atoms. A tool conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8-1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark datasets gave results comparable in quality with results of multipolar refinement and superior to those for conventional models. Applications to several datasets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Modiano, David; Colella, Phillip
1994-01-01
A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.
Parallel adaptive mesh refinement for electronic structure calculations
Kohn, S.; Weare, J.; Ong, E.; Baden, S.
1996-12-01
We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradients with multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.
Efficient Grammar Induction Algorithm with Parse Forests from Real Corpora
NASA Astrophysics Data System (ADS)
Kurihara, Kenichi; Kameya, Yoshitaka; Sato, Taisuke
The task of inducing grammar structures has received a great deal of attention. Researchers have studied it for different reasons: to use grammar induction as the first stage in building large treebanks, or to build better language models. However, grammar induction has inherent computational complexity. To overcome it, some grammar induction algorithms add new production rules incrementally; they refine the grammar while keeping their computational complexity low. In this paper, we propose a new efficient grammar induction algorithm. Although our algorithm is similar to algorithms which learn a grammar incrementally, our algorithm uses the graphical EM algorithm instead of the Inside-Outside algorithm. We report results of learning experiments in terms of learning speed. The results show that our algorithm learns a grammar in constant time regardless of the size of the grammar. Since our algorithm decreases syntactic ambiguities in each step, it reduces the time required for learning. This constant-time learning considerably affects learning time for larger grammars. We also report results of an evaluation of criteria for choosing nonterminals. Our algorithm refines a grammar based on a nonterminal in each step; since there can be several criteria to decide which nonterminal is best, we evaluate them by learning experiments.
Improved ligand geometries in crystallographic refinement using AFITT in PHENIX
Janowski, Pawel A.; Moriarty, Nigel W.; Kelley, Brian P.; Case, David A.; York, Darrin M.; Adams, Paul D.; Warren, Gregory L.
2016-01-01
Modern crystal structure refinement programs rely on geometry restraints to overcome the challenge of a low data-to-parameter ratio. While the classical Engh and Huber restraints work well for standard amino-acid residues, the chemical complexity of small-molecule ligands presents a particular challenge. Most current approaches either limit ligand restraints to those that can be readily described in the Crystallographic Information File (CIF) format, thus sacrificing chemical flexibility and energetic accuracy, or they employ protocols that substantially lengthen the refinement time, potentially hindering rapid automated refinement workflows. PHENIX–AFITT refinement uses a full molecular-mechanics force field for user-selected small-molecule ligands during refinement, eliminating the potentially difficult problem of finding or generating high-quality geometry restraints. It is fully integrated with a standard refinement protocol and requires practically no additional steps from the user, making it ideal for high-throughput workflows. PHENIX–AFITT refinements also handle multiple ligands in a single model, alternate conformations and covalently bound ligands. Here, the results of combining AFITT and the PHENIX software suite on a data set of 189 protein–ligand PDB structures are presented. Refinements using PHENIX–AFITT significantly reduce ligand conformational energy and lead to improved geometries without detriment to the fit to the experimental data. For the data presented, PHENIX–AFITT refinements result in more chemically accurate models for small-molecule ligands. PMID:27599738
New Process for Grain Refinement of Aluminum. Final Report
Dr. Joseph A. Megy
2000-09-22
A new method of grain refining aluminum involving in-situ formation of boride nuclei in molten aluminum just prior to casting has been developed in the subject DOE program over the last thirty months by a team consisting of JDC, Inc., Alcoa Technical Center, GRAS, Inc., Touchstone Labs, and GKS Engineering Services. The manufacturing process to make boron trichloride for grain refining is much simpler than preparing conventional grain refiners, with attendant environmental, capital, and energy savings. The manufacture of boride grain refining nuclei using the fy-Gem process avoids clusters, salt, and oxide inclusions that cause quality problems in aluminum today.
Refiners react to changes in the pipeline infrastructure
Giles, K.A.
1997-06-01
Petroleum pipelines have long been a critical component in the distribution of crude and refined products in the U.S. Pipelines are typically the most cost efficient mode of transportation for reasonably consistent flow rates. For obvious reasons, inland refineries and consumers are much more dependent on petroleum pipelines to provide supplies of crude and refined products than refineries and consumers located on the coasts. Significant changes in U.S. distribution patterns for crude and refined products are reshaping the pipeline infrastructure and presenting challenges and opportunities for domestic refiners. These changes are discussed.
Adaptive h-refinement for reduced-order models
Carlberg, Kevin T.
2014-11-05
Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
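The splitting step itself can be sketched compactly, taking the offline cluster labels as given. This is a minimal illustration of the disjoint-support construction (each child equals the parent on one cluster of state indices and is zero elsewhere, so the children span the parent), not the paper's implementation:

```python
import numpy as np

def split_basis_vector(phi, labels):
    """Split basis vector `phi` into children with disjoint support.

    `labels[i]` is the cluster of state index i (from the offline recursive
    k-means tree). Child k equals phi on cluster k and is zero elsewhere,
    so the children sum to phi and enrich the reduced-basis space.
    """
    children = []
    for k in np.unique(labels):
        child = np.where(labels == k, phi, 0.0)  # zero outside cluster k
        children.append(child)
    return children
```

Because the children reproduce the parent exactly (their sum is phi), replacing the parent by its children can only enlarge the reduced space, which is what lets refinement drive the error below any prescribed tolerance.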
Refined seismic stratigraphy in prograding carbonates
Pomar, L. )
1991-03-01
Complete exposure of the upper Miocene Reef Complex in the sea cliffs of Mallorca (Spain) allows for a more refined interpretation of seismic lines with similar progradational patterns. A 6 km long high-resolution cross section in the direction of reef progradation displays four hierarchical orders of accretional units. Although all these units are of higher order, they all exhibit characteristics similar to a third-order depositional sequence and can likewise be interpreted as the result of high-order sea-level cycles. The accretional units are composed of lagoonal horizontal beds, reefal sigmoids, and gently dipping slope deposits. They are bounded by erosion surfaces at the top and basinwards by their correlative conformities. These architectural patterns are similar to progradational sequences seen on seismic lines. On seismic lines, the progradational pattern often shows the following geometrical details: (1) discontinuous climbing high-energy reflectors, (2) truncation of clinoforms by these high-energy reflectors with seaward dips, (3) transparent areas intercalated between clinoforms. Based on facies distribution in the outcrops of Mallorca, the high-energy reflectors are interpreted as sectors where the erosion surfaces truncated the reef wall and are overlain by lagoonal sediments deposited during the following sea-level rise. The more transparent zones seem to correspond with areas of superposition of undifferentiated lagoonal beds. Offlapping geometries can also be detected in the highest quality seismic lines. The comparison between seismic and outcrop data provides a more accurate prediction of lithologies, facies distribution, and reservoir properties on seismic profiles.
Rietveld refinement study of PLZT ceramics
NASA Astrophysics Data System (ADS)
Kumar, Rakesh; Bavbande, D. V.; Mishra, R.; Bafna, V. H.; Mohan, D.; Kothiyal, G. P.
2013-02-01
PLZT ceramics of composition Pb0.93La0.07(Zr0.60Ti0.40)O3 were prepared by a solid-state synthesis route after milling for 6 h and 24 h; these samples are denoted PLZT-6 and PLZT-24, respectively. X-ray diffraction (XRD) patterns were recorded at room temperature and analyzed by the Rietveld refinement method. Phase identification shows that all peaks observed in the PLZT-6 and PLZT-24 ceramics can be indexed to the P4mm space group with tetragonal symmetry. The unit cell parameters of the 6 h milled PLZT ceramics are a=b=4.0781(5)Å and c=4.0938(7)Å, and for the 24 h milled ceramics the unit cell parameters are a=b=4.0679(4)Å and c=4.1010(5)Å. The axial ratio c/a and unit cell volume of PLZT-6 are 1.0038 and 68.09(2)Å3, respectively. In the PLZT-24 samples, the axial ratio c/a is 1.0080, slightly larger than that of the 6 h milled sample, whereas the unit cell volume decreases to 67.88(1)Å3. The average crystallite size was estimated using Scherrer's formula. Dielectric properties were obtained by measuring the capacitance and tan δ loss using a Stanford LCR meter.
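Scherrer's formula cited in the abstract, D = Kλ / (β cos θ), gives the average crystallite size from the broadening of a diffraction peak. A direct sketch follows; the numerical values in the test are illustrative (typical Cu Kα measurements), not the paper's data:

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Crystallite size D (nm) from Scherrer's formula D = K*lambda/(beta*cos(theta)).

    wavelength_nm : X-ray wavelength (e.g. 0.15406 nm for Cu K-alpha)
    fwhm_deg      : peak full width at half maximum, in degrees 2-theta
    two_theta_deg : peak position, in degrees 2-theta
    K             : dimensionless shape factor, conventionally ~0.9
    """
    beta = math.radians(fwhm_deg)            # peak FWHM in radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle theta
    return K * wavelength_nm / (beta * math.cos(theta))
```

Note that instrumental broadening should be subtracted from the measured FWHM before applying the formula; the sketch omits that correction.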
Refining and blending of aviation turbine fuels.
White, R D
1999-02-01
Aviation turbine fuels (jet fuels) are similar to other petroleum products that have a boiling range of approximately 300°F to 550°F. Kerosene and the No. 1 grades of fuel oil, diesel fuel, and gas turbine oil share many similar physical and chemical properties with jet fuel. The similarity among these products should allow toxicology data on one material to be extrapolated to the others. Refineries in the USA manufacture jet fuel to meet industry standard specifications. Civilian aircraft primarily use Jet A or Jet A-1 fuel as defined by ASTM D 1655. Military aircraft use JP-5 or JP-8 fuel as defined by MIL-T-5624R or MIL-T-83133D, respectively. The freezing point and flash point are the principal differences between the finished fuels. Common refinery processes that produce jet fuel include distillation, caustic treatment, hydrotreating, and hydrocracking. Each of these refining processes may be the final step to produce jet fuel. Sometimes blending of two or more of these refinery process streams is needed to produce jet fuel that meets the desired specifications. Chemical additives allowed for use in jet fuel are also defined in the product specifications. In many cases, the customer rather than the refinery will put additives into the fuel to meet their specific storage or flight condition requirements.
Astrocytes refine cortical connectivity at dendritic spines
Risher, W Christopher; Patel, Sagar; Kim, Il Hwan; Uezu, Akiyoshi; Bhagat, Srishti; Wilton, Daniel K; Pilaz, Louis-Jan; Singh Alvarado, Jonnathan; Calhan, Osman Y; Silver, Debra L; Stevens, Beth; Calakos, Nicole; Soderling, Scott H; Eroglu, Cagla
2014-01-01
During cortical synaptic development, thalamic axons must establish synaptic connections despite the presence of the more abundant intracortical projections. How thalamocortical synapses are formed and maintained in this competitive environment is unknown. Here, we show that the astrocyte-secreted protein hevin is required for normal thalamocortical synaptic connectivity in the mouse cortex. Absence of hevin results in a profound, long-lasting reduction in thalamocortical synapses accompanied by a transient increase in intracortical excitatory connections. Three-dimensional reconstructions of cortical neurons from serial section electron microscopy (ssEM) revealed that, during early postnatal development, dendritic spines often receive multiple excitatory inputs. Immuno-EM and confocal analyses revealed that the majority of the spines with multiple excitatory contacts (SMECs) receive simultaneous thalamic and cortical inputs. The proportion of SMECs diminishes as the brain develops, but SMECs remain abundant in Hevin-null mice. These findings reveal that, through secretion of hevin, astrocytes control an important developmental synaptic refinement process at dendritic spines. DOI: http://dx.doi.org/10.7554/eLife.04047.001 PMID:25517933
Steel refining with an electrochemical cell
Blander, M.; Cook, G.M.
1988-05-17
Apparatus is described for processing a metallic fluid containing iron oxide, container for a molten metal including an electrically conductive refractory disposed for contact with the molten metal which contains iron oxide, an electrolyte in the form of a basic slag on top of the molten metal, an electrode in the container in contact with the slag electrically separated from the refractory, and means for establishing a voltage across the refractory and the electrode to reduce iron oxide to iron at the surface of the refractory in contact with the iron oxide containing fluid. A process is disclosed for refining an iron product containing not more than about 10% by weight oxygen and not more than about 10% by weight sulfur, comprising providing an electrolyte of a slag containing one or more of calcium oxide, magnesium oxide, silica or alumina, providing a cathode of the iron product in contact with the electrolyte, providing an anode in contact with the electrolyte electrically separated from the cathode, and operating an electrochemical cell formed by the anode, the cathode and the electrolyte to separate oxygen or sulfur present in the iron product therefrom. 2 figs.
Steel refining with an electrochemical cell
Blander, Milton; Cook, Glenn M.
1988-01-01
Apparatus for processing a metallic fluid containing iron oxide, container for a molten metal including an electrically conductive refractory disposed for contact with the molten metal which contains iron oxide, an electrolyte in the form of a basic slag on top of the molten metal, an electrode in the container in contact with the slag electrically separated from the refractory, and means for establishing a voltage across the refractory and the electrode to reduce iron oxide to iron at the surface of the refractory in contact with the iron oxide containing fluid. A process is disclosed for refining an iron product containing not more than about 10% by weight oxygen and not more than about 10% by weight sulfur, comprising providing an electrolyte of a slag containing one or more of calcium oxide, magnesium oxide, silica or alumina, providing a cathode of the iron product in contact with the electrolyte, providing an anode in contact with the electrolyte electrically separated from the cathode, and operating an electrochemical cell formed by the anode, the cathode and the electrolyte to separate oxygen or sulfur present in the iron product therefrom.
Steel refining with an electrochemical cell
Blander, M.; Cook, G.M.
1985-05-21
Disclosed is an apparatus for processing a metallic fluid containing iron oxide, container for a molten metal including an electrically conductive refractory disposed for contact with the molten metal which contains iron oxide, an electrolyte in the form of a basic slag on top of the molten metal, an electrode in the container in contact with the slag electrically separated from the refractory, and means for establishing a voltage across the refractory and the electrode to reduce iron oxide to iron at the surface of the refractory in contact with the iron oxide containing fluid. A process is disclosed for refining an iron product containing not more than about 10% by weight sulfur, comprising providing an electrolyte of a slag containing one or more of calcium oxide, magnesium oxide, silica or alumina, providing a cathode of the iron product in contact with the electrolyte, providing an anode in contact with the electrolyte electrically separated from the cathode, and operating an electrochemical cell formed by the anode, the cathode and the electrolyte to separate oxygen or sulfur present in the iron product therefrom.
Refined Pichia pastoris reference genome sequence.
Sturmberger, Lukas; Chappell, Thomas; Geier, Martina; Krainer, Florian; Day, Kasey J; Vide, Ursa; Trstenjak, Sara; Schiefer, Anja; Richardson, Toby; Soriaga, Leah; Darnhofer, Barbara; Birner-Gruenberger, Ruth; Glick, Benjamin S; Tolstorukov, Ilya; Cregg, James; Madden, Knut; Glieder, Anton
2016-10-10
Strains of the species Komagataella phaffii are the most frequently used "Pichia pastoris" strains employed for recombinant protein production as well as studies on peroxisome biogenesis, autophagy and secretory pathway analyses. Genome sequencing of several different P. pastoris strains has provided the foundation for understanding these cellular functions in recent genomics, transcriptomics and proteomics experiments. This experimentation has identified mistakes, gaps and incorrectly annotated open reading frames in the previously published draft genome sequences. Here, a refined reference genome is presented, generated with genome and transcriptome sequencing data from multiple P. pastoris strains. Twelve major sequence gaps from 20 to 6000 base pairs were closed and 5111 out of 5256 putative open reading frames were manually curated and confirmed by RNA-seq and published LC-MS/MS data, including the addition of new open reading frames (ORFs) and a reduction in the number of spliced genes from 797 to 571. One chromosomal fragment of 76 kbp between two previous gaps on chromosome 1 and another 134 kbp fragment at the end of chromosome 4, as well as several shorter fragments, needed re-orientation. In total more than 500 positions in the genome have been corrected. This reference genome is presented with new chromosomal numbering, positioning ribosomal repeats at the distal ends of the four chromosomes, and includes predicted chromosomal centromeres as well as the sequence of two linear cytoplasmic plasmids of 13.1 and 9.5 kbp found in some strains of P. pastoris.
Spatially Refined Aerosol Direct Radiative Forcing Efficiencies
NASA Technical Reports Server (NTRS)
Henze, Daven K.; Shindell, Drew Todd; Akhtar, Farhan; Spurr, Robert J. D.; Pinder, Robert W.; Loughlin, Dan; Kopacz, Monika; Singh, Kumaresh; Shim, Changsub
2012-01-01
Global aerosol direct radiative forcing (DRF) is an important metric for assessing the potential climate impacts of future emissions changes. However, the radiative consequences of emissions perturbations are not readily quantified, nor well understood at the level of detail necessary to assess realistic policy options. To address this challenge, here we show how adjoint model sensitivities can be used to provide highly spatially resolved estimates of the DRF from emissions of black carbon (BC), primary organic carbon (OC), sulfur dioxide (SO2), and ammonia (NH3), using the example of emissions from each sector and country following multiple Representative Concentration Pathways (RCPs). The radiative forcing efficiencies of many individual emissions are found to differ considerably from regional or sectoral averages for NH3, SO2 from the power sector, and BC from domestic, industrial, transportation and biomass burning sources. Consequently, the amount of emissions controls required to attain a specific DRF varies at intracontinental scales by up to a factor of 4. These results thus demonstrate both a need and a means for incorporating spatially refined aerosol DRF into the analysis of future emissions scenarios and the design of air quality and climate change mitigation policies.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.
1997-01-01
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that quickly produce solutions that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of the limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
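The notion of a provable performance guarantee can be made concrete with the textbook 2-approximation for minimum vertex cover (an illustrative classic, not an algorithm from the abstract above):

```python
def vertex_cover_2approx(edges):
    # Classic example of a performance guarantee: take both endpoints of
    # each still-uncovered edge. The chosen edges form a matching, and any
    # optimal cover must contain at least one endpoint of every matching
    # edge, so the returned cover is at most twice the optimal size.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

path = [(1, 2), (2, 3), (3, 4)]
cover = vertex_cover_2approx(path)   # OPT here is {2, 3}, of size 2
```

The guarantee holds for every input, which is exactly what distinguishes approximation algorithms from ad hoc heuristics of unguaranteed quality.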
Effects of adaptive refinement on the inverse EEG solution
NASA Astrophysics Data System (ADS)
Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.
1995-10-01
One of the fundamental problems in electroencephalography can be characterized as an inverse problem: given a subset of electrostatic potentials measured on the surface of the scalp, and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically, the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution; furthermore, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of the electric and potential fields within the brain through an inverse procedure. To test these methods, we constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.
A feature refinement approach for statistical interior CT reconstruction.
Hu, Zhanli; Zhang, Yunwan; Liu, Jianbo; Ma, Jianhua; Zheng, Hairong; Liang, Dong
2016-07-21
Interior tomography is clinically desired to reduce the radiation dose rendered to patients. In this work, a new statistical interior tomography approach for computed tomography is proposed. The developed design focuses on taking into account the statistical nature of local projection data and recovering fine structures which are lost in the conventional total-variation (TV)-minimization reconstruction. The proposed method falls within the compressed sensing framework of TV minimization, which only assumes that the interior ROI is piecewise constant or polynomial and does not need any additional prior knowledge. To integrate the statistical distribution property of the projection data, the objective function is built under the criterion of penalized weighted least-squares (PWLS-TV). In the implementation of the proposed method, an interior-projection-extrapolation-based FBP reconstruction is first used as the initial guess to mitigate truncation artifacts and also provide an extended field-of-view. Moreover, an interior feature refinement step, as an important processing operation, is performed after each iteration of PWLS-TV to recover the desired structure information which is lost during the TV minimization. Here, a feature descriptor is specifically designed and employed to distinguish structure from noise and noise-like artifacts. A modified steepest descent algorithm is adopted to minimize the associated objective function. The proposed method is applied to both digital phantom and in vivo Micro-CT datasets, and compared to FBP, ART-TV and PWLS-TV. The reconstruction results demonstrate that the proposed method performs better than other conventional methods in suppressing noise, reducing truncation and streak artifacts, and preserving features. The proposed approach demonstrates its potential usefulness for feature preservation of interior tomography under truncated projection measurements.
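The interplay of a weighted data-fidelity term with TV regularization, minimized by steepest descent, can be illustrated in one dimension (a minimal sketch under stated assumptions, not the authors' PWLS-TV CT code; the smoothed TV and all parameter values are illustrative choices):

```python
import numpy as np

def pwls_tv_denoise(y, w, lam=1.0, eps=1e-2, step=0.02, iters=1000):
    # 1-D illustration of the penalized-weighted-least-squares + TV idea:
    # minimize  w * ||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps)
    # by plain gradient descent. eps smooths the TV term so its gradient is
    # defined everywhere; this stands in for the paper's modified steepest
    # descent on the full tomographic objective.
    x = y.astype(float).copy()
    for _ in range(iters):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)   # gradient of the smoothed TV term
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= g
        tv_grad[1:] += g
        x -= step * (2.0 * w * (x - y) + lam * tv_grad)
    return x

# Noisy step signal: TV pulls toward piecewise-constant structure.
rng = np.random.default_rng(1)
noisy = np.where(np.arange(100) < 50, 0.0, 1.0) + 0.1 * rng.normal(size=100)
denoised = pwls_tv_denoise(noisy, w=1.0)
```

The weight `w` plays the role of the statistical (inverse-variance) weighting in PWLS; in the paper it varies per measurement rather than being a scalar.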
Refining and End Use Study of Coal Liquids
1997-10-01
This report summarizes revisions to the design basis for the linear programming refining model that is being used in the Refining and End Use Study of Coal Liquids. This revision primarily reflects the addition of data for the upgrading of direct coal liquids.
Carpet: Adaptive Mesh Refinement for the Cactus Framework
NASA Astrophysics Data System (ADS)
Schnetter, Erik; Hawley, Scott; Hawke, Ian
2016-11-01
Carpet is an adaptive mesh refinement and multi-patch driver for the Cactus Framework (ascl:1102.013). Cactus is a software framework for solving time-dependent partial differential equations on block-structured grids, and Carpet acts as a driver layer providing adaptive mesh refinement and multi-patch capability, as well as parallelization and efficient I/O.
Optimization of Refining Craft for Vegetable Insulating Oil
NASA Astrophysics Data System (ADS)
Zhou, Zhu-Jun; Hu, Ting; Cheng, Lin; Tian, Kai; Wang, Xuan; Yang, Jun; Kong, Hai-Yang; Fang, Fu-Xin; Qian, Hang; Fu, Guang-Pan
2016-05-01
Vegetable insulating oil, because of its environmental friendliness, is considered an ideal substitute for mineral oil in transformer insulation and cooling. The main steps of the traditional refining process are alkali refining, bleaching and distillation. This refining process gives satisfactory results for small batches of insulating oil but cannot be applied to a large-capacity reaction kettle. In this paper, using rapeseed oil as the crude oil, the refining process is optimized for a large-capacity reaction kettle. The optimized refining process adds an acid degumming step; the alkali compound adds a sodium silicate component in the alkali refining step, and the ratio of each component is optimized. Activated clay and activated carbon are added in a 10:1 proportion in the decolorization step, which effectively reduces the acid value and dielectric loss of the oil. Using vacuum degassing instead of the distillation step further reduces the acid value. Comparing some performance parameters of the refined oil with mineral insulating oil, the dielectric loss of the vegetable insulating oil is still high, and further optimization measures will need to be taken in the future.
Refined Freeman-Durden for Harvest Detection using POLSAR data
NASA Astrophysics Data System (ADS)
Taghvakish, Sina
To keep up with an ever-increasing human population, providing food is one of the main challenges of the current century. Harvest detection, as an input for decision making, is an important task for food management. Traditional harvest detection methods that rely on field observations require intensive labor, time and money. Therefore, since its introduction in the early 60s, optical remote sensing has enhanced the process dramatically. But given weaknesses such as cloud cover and limited temporal resolution, alternative methods were always welcome. Synthetic Aperture Radar (SAR), on the other hand, with its ability to penetrate cloud cover and the availability of fully polarimetric observations, can be a good source of data for exploration in agricultural studies. SAR has been used successfully for harvest detection in rice paddy fields. However, harvest detection for other crops without a smooth underlying water surface is much more difficult. The objective of this project is to find a fully automated algorithm to perform harvest detection using POLSAR image data for soybean and corn. The proposed method is a fusion of the Freeman-Durden and H/A/alpha decompositions. The Freeman-Durden algorithm is a decomposition based on a three-component physical scattering model. The H/A/alpha parameters, on the other hand, are mathematical parameters used to define a three-dimensional space that may be subdivided with scattering mechanism interpretations. The Freeman-Durden model has a symmetric formulation for two of its three scattering mechanisms, and its surface scattering component is only applicable to Bragg surface scattering, which is not the dominant case in agricultural fields. H/A/alpha can contribute to both of these issues. Based on the incidence angle of the RADARSAT-2 images, our field-based refined Freeman-Durden model and a proposed roughness measure aim to discriminate harvested from senesced crops. We achieved 99.08 percent overall accuracy.
Evaluation of the tool "Reg Refine" for user-guided deformable image registration.
Johnson, Perry B; Padgett, Kyle R; Chen, Kuan L; Dogan, Nesrin
2016-05-01
"Reg Refine" is a tool available in the MIM Maestro v6.4.5 platform (www.mimsoftware.com) that allows the user to actively participate in the deformable image registration (DIR) process. The purpose of this work was to evaluate the efficacy of this tool and investigate strategies for applying it effectively. This was done by performing DIR on two publicly available ground-truth models, the Pixel-based Breathing Thorax Model (POPI) for lung and the Deformable Image Registration Evaluation Project (DIREP) for head and neck. Image noise matched in both magnitude and texture to clinical CBCT scans was also added to each model to simulate the use case of CBCT-CT alignment. For lung, the results showed Reg Refine to be effective at improving registration accuracy when controlled by an expert user within the context of large lung deformation. CBCT noise was also shown to have no effect on DIR performance while using the MIM algorithm for this site. For head and neck, the results showed CBCT noise to have a large effect on the accuracy of registration, specifically for low-contrast structures such as the brainstem and parotid glands. In these cases, the Reg Refine tool was able to improve the registration accuracy when controlled by an expert user. Several strategies for achieving these results have been outlined to assist other users and provide feedback for developers of similar tools. PACS number(s): 87.44.Qr, 87.57.nj, 87.57.c.
Schröder, Gunnar F.; Brunger, Axel T.; Levitt, Michael
2008-01-01
Structural studies of large proteins and protein assemblies are a difficult and pressing challenge in molecular biology. Experiments often yield only low-resolution or sparse data that are not sufficient to fully determine atomistic structures. We have developed a general geometry-based algorithm that efficiently samples conformational space under constraints imposed by low-resolution density maps obtained from electron microscopy or X-ray crystallography experiments. A deformable elastic network (DEN) is used to restrain the sampling to prior knowledge of an approximate structure. The DEN restraints dramatically reduce over-fitting, especially at low resolution. Cross-validation is used to optimally weight the structural information and experimental data. Our algorithm is robust even for noise-added density maps and has a large radius of convergence for our test case. The DEN restraints can also be used to enhance reciprocal-space simulated annealing refinement. PMID:18073112
Refining primary lead by granulation-leaching-electrowinning
NASA Astrophysics Data System (ADS)
Ojebuoboh, F.; Wang, S.; Maccagni, M.
2003-04-01
This article describes the development of a new process in which lead bullion obtained from smelting concentrates is refined by leaching-electrowinning. In the last half century, the challenge to treat and refine lead in order to minimize emissions of lead and lead compounds has intensified. Within the primary lead industry, the treatment aspect has transformed from the sinter-blast furnace model to direct smelting, creating gains in hygiene, environmental control, and efficiency. The refining aspect has remained based on kettle refining, or to a lesser extent, the Betts electrolytic refining. In the mid-1990s, Asarco investigated a concept based on granulating the lead bullion from the blast furnace. The granular material was fed into the Engitec Fluobor process. This work resulted in the operation of a 45 kg/d pilot plant that could produce lead sheets of 99.9% purity.
Two-stage band selection algorithm for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Velez-Reyes, Miguel; Linares, Daphnia M.; Jimenez-Rodriguez, Luis O.
2002-08-01
This paper presents a two-stage optimal band selection algorithm for hyperspectral imagery. The algorithm seeks the subset of bands closest to the principal components in the canonical correlation sense. The first stage computes an initial guess for the closest bands using matrix-factorization-based band subset selection. The second stage refines the subset of bands using a steepest ascent algorithm. Experimental results using AVIRIS imagery from the Cuprite Mining District are presented.
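The two-stage idea can be sketched as follows (a hypothetical reconstruction, not the authors' code): greedy column pivoting stands in for the matrix-factorization initial guess, and a greedy swap search stands in for the steepest-ascent refinement, scoring a subset by its canonical correlations with the leading principal components:

```python
import numpy as np

def pivoted_columns(A, k):
    # Greedy column pivoting (the factorization behind pivoted QR): pick
    # the column with the largest residual norm, deflate the remaining
    # columns against it, and repeat k times.
    R = A.copy()
    picked = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        picked.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(q, q @ R)
    return picked

def subspace_score(X, bands, Pk):
    # Sum of canonical correlations (cosines of the principal angles)
    # between the span of the selected band columns and the leading-PC
    # subspace Pk; the maximum possible value is k.
    Qs, _ = np.linalg.qr(X[:, bands])
    return np.linalg.svd(Qs.T @ Pk, compute_uv=False).sum()

def two_stage_band_selection(X, k):
    Xc = X - X.mean(axis=0)                 # pixels x bands, centered
    U = np.linalg.svd(Xc, full_matrices=False)[0]
    Pk = U[:, :k]                           # leading principal components
    bands = pivoted_columns(Xc, k)          # stage 1: initial guess
    best = subspace_score(Xc, bands, Pk)
    improved = True                         # stage 2: greedy ascent
    while improved:
        improved = False
        for i in range(k):
            for cand in range(Xc.shape[1]):
                if cand in bands:
                    continue
                trial = bands.copy()
                trial[i] = cand
                sc = subspace_score(Xc, trial, Pk)
                if sc > best + 1e-12:
                    bands, best, improved = trial, sc, True
    return sorted(bands), best

rng = np.random.default_rng(0)
bands, score = two_stage_band_selection(rng.normal(size=(200, 30)), 3)
```

The swap search only ever accepts strict improvements of a bounded score, so it terminates; the paper's steepest-ascent refinement is presumably more structured than this brute-force variant.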
Refined structures of mouse P-glycoprotein
Li, Jingzhi; Jaimes, Kimberly F; Aller, Stephen G
2014-01-01
The recently determined C. elegans P-glycoprotein (Pgp) structure revealed significant deviations compared to the original mouse Pgp structure, which suggested possible misinterpretations in the latter model. To address this concern, we generated an experimental electron density map from single-wavelength anomalous dispersion phasing of an original mouse Pgp dataset to 3.8 Å resolution. The map exhibited significantly more detail compared to the original MAD map and revealed several regions of the structure that required de novo model building. The improved drug-free structure was refined to 3.8 Å resolution with a 9.4 and 8.1% decrease in Rwork and Rfree, respectively (Rwork = 21.2%, Rfree = 26.6%), and a significant improvement in protein geometry. The improved mouse Pgp model contains ∼95% of residues in the favorable Ramachandran region, compared to only 57% for the original model. The registry of six transmembrane helices was corrected, revealing amino acid residues involved in drug binding that were previously unrecognized. Registry shifts (rotations and translations) for transmembrane helices TM4 and TM5 and the addition of three N-terminal residues were necessary, and were validated with new mercury labeling and anomalous Fourier density. The corrected position of TM4, which forms the frame of a portal for drug entry, had backbone atoms shifted >6 Å from their original positions. The drug translocation pathway of mouse Pgp is 96% identical to human Pgp and is enriched in aromatic residues that likely play a collective role in allowing a high degree of polyspecific substrate recognition. PMID:24155053
40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false How does a refiner obtain approval as a small refiner under this subpart? 80.551 Section 80.551 Protection of Environment ENVIRONMENTAL... preceding January 1, 2000; and the type of business activities carried out at each location; or (ii) In...
40 CFR 80.551 - How does a refiner obtain approval as a small refiner under this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false How does a refiner obtain approval as a small refiner under this subpart? 80.551 Section 80.551 Protection of Environment ENVIRONMENTAL... preceding January 1, 2000; and the type of business activities carried out at each location; or (ii) In...
NASA Astrophysics Data System (ADS)
Feng, F.; Zhu, J.; Zhang, A.
2005-07-01
The structural parameters of La0.67Ca0.33MnO3 were refined from one-dimensional HOLZ intensities using the QCBED method. It is feasible to obtain reliable structural information with this method combined with a global optimization algorithm.
Andrews, Erik; Wang, Yue; Xia, Tian; Cheng, Wenqing; Cheng, Chao
2017-01-01
Gene expression regulators, such as transcription factors (TFs) and microRNAs (miRNAs), have varying regulatory targets based on the tissue and physiological state (context) within which they are expressed. While the emergence of regulator-characterizing experiments has inferred the target genes of many regulators across many contexts, methods for transferring regulator target genes across contexts are lacking. Further, regulator target gene lists frequently are not curated or have permissive inclusion criteria, impairing their use. Here, we present a method called iterative Contextual Transcriptional Activity Inference of Regulators (icTAIR) to resolve these issues. icTAIR takes a regulator’s previously-identified target gene list and combines it with gene expression data from a context, quantifying that regulator’s activity for that context. It then calculates the correlation between each listed target gene’s expression and the quantitative score of regulatory activity, removes the uncorrelated genes from the list, and iterates the process until it derives a stable list of refined target genes. To validate and demonstrate icTAIR’s power, we use it to refine the MSigDB c3 database of TF, miRNA and unclassified motif target gene lists for breast cancer. We then use its output for survival analysis with clinicopathological multivariable adjustment in 7 independent breast cancer datasets covering 3,430 patients. We uncover many novel prognostic regulators that were obscured prior to refinement, in particular NFY, and offer a detailed look at the composition and relationships among the breast cancer prognostic regulome. We anticipate icTAIR will be of general use in contextually refining regulator target genes for discoveries across many contexts. The icTAIR algorithm can be downloaded from https://github.com/icTAIR. PMID:28103241
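The iterative loop described above (score activity, correlate, prune, repeat until stable) can be sketched as follows. This is a hedged reconstruction, not the authors' icTAIR code: mean expression of the current targets is used as a stand-in for the paper's activity inference, and all names and thresholds are illustrative:

```python
import numpy as np

def refine_targets(expr, targets, thresh=0.4, max_iter=50):
    # `expr` maps gene -> expression vector across samples. The regulator's
    # activity is approximated by the mean expression of the current target
    # list; targets whose expression correlates weakly with the activity
    # score are dropped, and the process repeats until the list is stable.
    targets = list(targets)
    for _ in range(max_iter):
        activity = np.mean([expr[g] for g in targets], axis=0)
        kept = [g for g in targets
                if abs(np.corrcoef(expr[g], activity)[0, 1]) >= thresh]
        if kept == targets or not kept:
            return kept
        targets = kept
    return targets

# Toy data: two genes driven by a shared latent signal, two pure-noise genes.
rng = np.random.default_rng(0)
s = rng.normal(size=500)
expr = {"g1": 2 * s + 0.1 * rng.normal(size=500),
        "g2": 2 * s + 0.1 * rng.normal(size=500),
        "n1": rng.normal(size=500),
        "n2": rng.normal(size=500)}
refined = refine_targets(expr, ["g1", "g2", "n1", "n2"])
```

On this toy input the noise genes correlate with the pooled activity only through their own small contribution to it, so they fall below the threshold and are pruned, after which the list stabilizes.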
Structural refinement of Pbnm-type perovskite films from analysis of half-order diffraction peaks
NASA Astrophysics Data System (ADS)
Brahlek, M.; Choquette, A. K.; Smith, C. R.; Engel-Herbert, R.; May, S. J.
2017-01-01
Engineering structural modifications of epitaxial perovskite thin films is an effective route to induce new functionalities or enhance existing properties, owing to the close relation of the electronic ground state to the local bonding environment. As such, there is a need to systematically refine and precisely quantify these structural displacements, particularly those of the oxygen octahedra, which is a challenge due to the weak scattering factor of oxygen and the small diffraction volume of thin films. Here, we present an optimized algorithm to refine the octahedral rotation angles using specific unit-cell-doubling half-order diffraction peaks for the a⁻a⁻c⁺ Pbnm structure. The oxygen and A-site positions can be obtained by minimizing the squared error between calculated and experimentally determined peak intensities, using the (1/2 1/2 3/2) and (1/2 1/2 5/2) reflections to determine the rotation angle α about the in-plane axes and the (1/2 5/2 1), (1/2 3/2 1), and (1/2 3/2 2) reflections to determine the rotation angle γ about the out-of-plane axis, whereas the convoluting A-site displacements associated with the octahedral rotation pattern can be determined using the (1 1 1/2) and (1/2 1/2 1/2) reflections to independently determine the A-site positions. The validity of the approach is confirmed by applying the refinement procedure to determine the A-site and oxygen displacements in a NdGaO3 single crystal. The ability to refine both the oxygen and A-site displacements relative to the undistorted perovskite structure enables a deeper understanding of how structural modifications alter functional properties in epitaxial films exhibiting this commonly occurring crystal structure.
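The squared-error minimization over the two tilt angles can be caricatured as follows. This is a deliberately simplified sketch, not the authors' refinement code: the forward model here merely assumes each half-order intensity grows with the square of the tilt angle it senses (a small-angle stand-in for the full structure-factor calculation), and the sensitivity matrix `S` is entirely hypothetical:

```python
import numpy as np

# Hypothetical sensitivities: rows are the five reflections named above,
# columns weight alpha^2 and gamma^2 in the toy intensity model.
S = np.array([[1.0, 0.0],   # (1/2 1/2 3/2): senses in-plane tilt alpha
              [0.8, 0.0],   # (1/2 1/2 5/2)
              [0.0, 1.0],   # (1/2 5/2 1): senses out-of-plane tilt gamma
              [0.0, 0.9],   # (1/2 3/2 1)
              [0.0, 0.7]])  # (1/2 3/2 2)

def intensities(alpha, gamma):
    return S @ np.array([alpha**2, gamma**2])

I_obs = intensities(6.0, 9.0)        # synthetic "measured" intensities

# Minimize the squared error between modeled and measured intensities over
# a grid of candidate tilt angles (degrees), as in least-squares refinement.
angles = np.linspace(0.0, 15.0, 151)
A, G = np.meshgrid(angles, angles, indexing="ij")
pred = S[:, 0, None, None] * A**2 + S[:, 1, None, None] * G**2
sse = ((pred - I_obs[:, None, None]) ** 2).sum(axis=0)
i, j = np.unravel_index(np.argmin(sse), sse.shape)
alpha_hat, gamma_hat = angles[i], angles[j]
```

Because the alpha-sensitive and gamma-sensitive reflections decouple in this toy model, the grid search recovers the two angles independently, mirroring the paper's use of separate reflection sets for α and γ.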
Mesh refinement in finite element analysis by minimization of the stiffness matrix trace
NASA Technical Reports Server (NTRS)
Kittur, Madan G.; Huston, Ronald L.
1989-01-01
Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the generated mesh is appropriate for the problem at hand. Since the accuracy of the finite element results is mesh dependent, mesh selection forms a very important step in the analysis. Indeed, in accurate analyses, meshes need to be refined or rezoned until the solution converges to a value such that the error is below a predetermined tolerance. A posteriori methods use error indicators, developed using interpolation and approximation theory, for mesh refinement. Some use other criteria, such as strain energy density variation and stress contours, to obtain near-optimal meshes. Although these methods are adaptive, they are expensive. Alternatively, the a priori methods available until now use geometrical parameters, for example the element aspect ratio; therefore, they are not adaptive by nature. Here, an adaptive a priori method is developed. The criterion is that minimization of the trace of the stiffness matrix with respect to the nodal coordinates leads to a minimization of the potential energy and, as a consequence, provides a good starting mesh. In a few examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to the development of computer algorithms. When the procedure is used in conjunction with a posteriori methods of grid refinement, it is shown that fewer refinement iterations and fewer degrees of freedom are required for convergence than when the procedure is not used. The mesh obtained is shown to have a uniform distribution of stiffness among the nodes and elements which, as a consequence, leads to uniform error distribution. Thus the mesh obtained meets the optimality criterion of uniform error distribution.
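The trace-minimization criterion can be illustrated on the simplest possible mesh (a hypothetical 1-D example, not from the paper):

```python
import numpy as np

# A bar of length L = 1 and stiffness EA, meshed with two linear elements
# that meet at an interior node x. Each element of length h contributes
# EA/h to each of its two diagonal stiffness entries, so
#     trace(K) = 2 * EA * (1/x + 1/(L - x)).
# Minimizing the trace over the nodal coordinate recovers the uniform
# mesh, the natural optimum for a homogeneous bar.
def stiffness_trace(x, L=1.0, EA=1.0):
    return 2.0 * EA * (1.0 / x + 1.0 / (L - x))

xs = np.linspace(0.01, 0.99, 981)          # candidate node positions
best_x = xs[np.argmin(stiffness_trace(xs))]
# best_x is (to grid resolution) 0.5: the interior node sits at mid-span
```

With varying cross-section or material properties the minimizer shifts away from mid-span, concentrating nodes where the element stiffness is largest, which is the adaptive behavior the criterion is meant to deliver.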
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.
2000-01-01
Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
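Space-filling-curve domain decomposition of the kind mentioned above rests on an ordering such as the Z-order (Morton) curve; a minimal sketch of the key computation (illustrative only, not the solver's actual partitioner):

```python
def morton2d(x, y, bits=16):
    # Interleave the bits of the integer cell coordinates (x, y) into a
    # Z-order (Morton) key. Cells that are close along the resulting 1-D
    # key order tend to be close in 2-D, which is why cutting the sorted
    # key list into equal-length chunks yields reasonably compact
    # on-the-fly partitions.
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

# Partition a 4x4 grid of cells between 2 ranks by cutting the curve.
cells = sorted(((i, j) for i in range(4) for j in range(4)),
               key=lambda c: morton2d(*c))
parts = [cells[:8], cells[8:]]
```

Because the key is computed locally from each cell's coordinates, the decomposition can be rebuilt cheaply whenever adaptive refinement changes the cell population, which is what makes it attractive for on-the-fly load balancing.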
Polytomy refinement for the correction of dubious duplications in gene trees
Lafond, Manuel; El-Mabrouk, Nadia
2014-01-01
Motivation: Large-scale methods for inferring gene trees are error-prone. Correcting gene trees for weakly supported features often results in non-binary trees, i.e. trees with polytomies, thus raising the natural question of refining such polytomies into binary trees. A feature pointing toward potential errors in gene trees are duplications that are not supported by the presence of multiple gene copies. Results: We introduce the problem of refining polytomies in a gene tree while minimizing the number of created non-apparent duplications in the resulting tree. We show that this problem can be described as a graph-theoretical optimization problem. We provide a bounded heuristic with guaranteed optimality for well-characterized instances. We apply our algorithm to a set of ray-finned fish gene trees from the Ensembl database to illustrate its ability to correct dubious duplications. Availability and implementation: The C++ source code for the algorithms and simulations described in the article are available at http://www-ens.iro.umontreal.ca/~lafonman/software.php. Contact: lafonman@iro.umontreal.ca or mabrouk@iro.umontreal.ca Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25161242
ADER-WENO finite volume schemes with space-time adaptive mesh refinement
NASA Astrophysics Data System (ADS)
Dumbser, Michael; Zanotti, Olindo; Hidalgo, Arturo; Balsara, Dinshaw S.
2013-09-01
We present the first high order one-step ADER-WENO finite volume scheme with adaptive mesh refinement (AMR) in multiple space dimensions. High order spatial accuracy is obtained through a WENO reconstruction, while a high order one-step time discretization is achieved using a local space-time discontinuous Galerkin predictor method. Due to the one-step nature of the underlying scheme, the resulting algorithm is particularly well suited for an AMR strategy on space-time adaptive meshes, i.e. with time-accurate local time stepping. The AMR property has been implemented 'cell-by-cell', with a standard tree-type algorithm, while the scheme has been parallelized via the message passing interface (MPI) paradigm. The new scheme has been tested over a wide range of examples for nonlinear systems of hyperbolic conservation laws, including the classical Euler equations of compressible gas dynamics and the equations of magnetohydrodynamics (MHD). High order in space and time has been confirmed via a numerical convergence study, and a detailed analysis of the computational speed-up with respect to highly refined uniform meshes is also presented. We also show test problems where the presented high order AMR scheme behaves clearly better than traditional second order AMR methods. The proposed scheme that combines for the first time high order ADER methods with space-time adaptive grids in two and three space dimensions is likely to become a useful tool in several fields of computational physics, applied mathematics and mechanics.
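'Cell-by-cell' tree-type refinement can be sketched in one dimension (an illustrative toy, not the ADER-WENO code): flag a cell when the solution jump across it exceeds a tolerance, split it at the midpoint, and recurse up to a maximum level:

```python
def refine(cells, f, tol, max_level=5):
    # Each cell is a (left, right, level) tuple; flagged cells are split
    # recursively, so resolution concentrates where the indicator fires.
    out = []
    for a, b, lev in cells:
        if lev < max_level and abs(f(b) - f(a)) > tol:
            m = 0.5 * (a + b)
            out += refine([(a, m, lev + 1), (m, b, lev + 1)],
                          f, tol, max_level)
        else:
            out.append((a, b, lev))
    return out

step_fn = lambda x: 0.0 if x < 0.37 else 1.0     # a stationary "shock"
grid = refine([(i / 8, (i + 1) / 8, 0) for i in range(8)], step_fn, 0.5)
# Cells cluster around the discontinuity at x = 0.37; the rest of the
# domain keeps the coarse level-0 resolution.
```

In a real space-time AMR code the indicator acts on the evolving numerical solution and the refined cells additionally take smaller, level-dependent time steps (the time-accurate local time stepping mentioned above).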
Robust ego-motion estimation and 3-D model refinement using surface parallax.
Agrawal, Amit; Chellappa, Rama
2006-05-01
We present an iterative algorithm for robustly estimating the ego-motion and refining and updating a coarse depth map using parametric surface parallax models and brightness derivatives extracted from an image pair. Given a coarse depth map acquired by a range-finder or extracted from a digital elevation map (DEM), ego-motion is estimated by combining a global ego-motion constraint and a local brightness constancy constraint. Using the estimated camera motion and the available depth estimate, motion of the three-dimensional (3-D) points is compensated. We utilize the fact that the resulting surface parallax field is an epipolar field, and knowing its direction from the previous motion estimates, estimate its magnitude and use it to refine the depth map estimate. The parallax magnitude is estimated using a constant parallax model (CPM) which assumes a smooth parallax field and a depth based parallax model (DBPM), which models the parallax magnitude using the given depth map. We obtain confidence measures for determining the accuracy of the estimated depth values which are used to remove regions with potentially incorrect depth estimates for robustly estimating ego-motion in subsequent iterations. Experimental results using both synthetic and real data (both indoor and outdoor sequences) illustrate the effectiveness of the proposed algorithm.
Effective soft-decision demosaicking using directional filtering and embedded artifact refinement
NASA Astrophysics Data System (ADS)
Huang, Wen-Tsung; Chen, Wen-Jan; Tai, Shen-Chuan
2009-04-01
Demosaicking is an interpolation process that transforms a color filter array (CFA) image into a full-color image in a single-sensor imaging pipeline. In all demosaicking techniques, the interpolation of the green components plays a central role in dictating the visual quality of reconstructed images, because the human visual system is most sensitive to green light. Guided by this observation, we propose a new soft-decision demosaicking algorithm using directional filtering and embedded artifact refinement. The novelty of this approach is twofold. First, we lift the constraint of the Bayer CFA that results in the absence of diagonal neighboring green color values for directionally recovering diagonal edges. The developed directional interpolation method is fairly robust in dealing with the four edge features, namely, vertical, horizontal, 45-deg diagonal, and 135-deg diagonal. In addition, the proposed embedded refinement scheme provides an efficient way for soft-decision-based algorithms to achieve improved results with fewer computations. We have compared this new approach to six state-of-the-art methods; it preserves edge details and fine textures markedly better without requiring a high computational cost.
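The gradient-guided choice among interpolation directions that underlies such schemes can be sketched for a single pixel as follows (a textbook-style toy, not the proposed algorithm, which additionally handles the two diagonal directions and soft decisions):

```python
import numpy as np

def interp_green(cfa, i, j):
    """Estimate the missing green value at a red/blue site of a Bayer CFA by
    interpolating along the direction with the smaller gradient (illustrative
    sketch only; function name and tie-break rule are our assumptions)."""
    dh = abs(cfa[i, j - 1] - cfa[i, j + 1])   # horizontal gradient
    dv = abs(cfa[i - 1, j] - cfa[i + 1, j])   # vertical gradient
    if dh < dv:                               # horizontal edge: average row-wise
        return 0.5 * (cfa[i, j - 1] + cfa[i, j + 1])
    if dv < dh:                               # vertical edge: average column-wise
        return 0.5 * (cfa[i - 1, j] + cfa[i + 1, j])
    return 0.25 * (cfa[i, j - 1] + cfa[i, j + 1] + cfa[i - 1, j] + cfa[i + 1, j])

# vertical edge: bright columns on the left, dark on the right
cfa = np.array([[9.0, 9.0, 1.0, 1.0],
                [9.0, 9.0, 1.0, 1.0],
                [9.0, 9.0, 1.0, 1.0]])
g = interp_green(cfa, 1, 1)   # interpolates vertically, along the edge
```

Interpolating along rather than across the edge is what avoids the zipper artifacts that non-directional bilinear interpolation produces.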
NASA Astrophysics Data System (ADS)
Bottegoni, Giovanni; Kufareva, Irina; Totrov, Maxim; Abagyan, Ruben
2008-05-01
Protein binding sites undergo ligand-specific conformational changes upon ligand binding. However, most docking protocols rely on a fixed conformation of the receptor, on prior knowledge of multiple conformations representing the variation of the pocket, or on a known bounding box for the ligand. Here we describe a general induced-fit docking protocol that requires only one initial pocket conformation and identifies most of the correct ligand positions as the lowest score. We expanded a previously used diverse "cross-docking" benchmark to thirty ligand-protein pairs extracted from different crystal structures. The algorithm systematically scans pairs of neighbouring side chains, replaces them by alanines, and docks the ligand to each 'gapped' version of the pocket. All docked positions are scored, refined with the original side chains and flexible backbone, and re-scored. In the optimal version of the protocol, pairs of residues were replaced by alanines and only the one best-scoring conformation was selected from each 'gapped' pocket for refinement. The optimal SCARE (SCan Alanines and REfine) protocol identifies a near-native conformation (under 2 Å RMSD) as the lowest rank for 80% of pairs if the docking bounding box is defined by the predicted pocket envelope, and for as many as 90% of the pairs if the bounding box is derived from the known answer with a ˜5 Å margin as used in most previous publications. The presented fully automated algorithm takes about 2 h of single-processor time per pose, requires only one pocket structure, and needs no prior knowledge of the binding site location. Furthermore, the results for conformationally conserved pockets do not deteriorate despite the substantial increase in pocket variability.
Adjoint Methods for Guiding Adaptive Mesh Refinement in Tsunami Modeling
NASA Astrophysics Data System (ADS)
Davis, B. N.; LeVeque, R. J.
2016-12-01
One difficulty in developing numerical methods for tsunami modeling is the fact that solutions contain time-varying regions where much higher resolution is required than elsewhere in the domain, particularly when tracking a tsunami propagating across the ocean. The open source GeoClaw software deals with this issue by using block-structured adaptive mesh refinement to selectively refine around propagating waves. For problems where only a target area of the total solution is of interest (e.g., one coastal community), a method that allows identifying and refining the grid only in regions that influence this target area would significantly reduce the computational cost of finding a solution. In this work, we show that solving the time-dependent adjoint equation and using a suitable inner product with the forward solution allows more precise refinement of the relevant waves. We present the adjoint methodology first in one space dimension for illustration and in a broad context since it could also be used in other adaptive software, and potentially for other tsunami applications beyond adaptive refinement. We then show how this adjoint method has been integrated into the adaptive mesh refinement strategy of the open source GeoClaw software and present tsunami modeling results showing that the accuracy of the solution is maintained and the computational time required is significantly reduced through the integration of the adjoint method into adaptive mesh refinement.
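The adjoint-weighted flagging idea can be sketched in a few lines (a conceptual toy, not GeoClaw's API; variable names and values are illustrative assumptions):

```python
import numpy as np

# Cells are refined only where the (pointwise, single-variable) product of the
# forward solution q and the adjoint solution q_hat is large, i.e. where a
# wave actually influences the target area at some later time.
def adjoint_flag(q, q_hat, tol):
    influence = np.abs(q * q_hat)
    return influence > tol

q     = np.array([0.0, 0.8, 0.9, 0.0, 0.5])   # forward wave amplitudes
q_hat = np.array([0.0, 1.0, 0.0, 0.0, 1.0])   # adjoint: sensitivity of target
flags = adjoint_flag(q, q_hat, 0.1)
# the wave in cell 2 is large but cannot influence the target, so it is not flagged
```

Compared with flagging on wave amplitude alone, the inner product suppresses refinement of waves heading away from the coastal community of interest, which is where the reported savings come from.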
A deterministic algorithm for constrained enumeration of transmembrane protein folds.
Brown, William Michael; Young, Malin M.; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Schoeniger, Joseph S.
2004-07-01
A deterministic algorithm for enumeration of transmembrane protein folds is presented. Using a set of sparse pairwise atomic distance constraints (such as those obtained from chemical cross-linking, FRET, or dipolar EPR experiments), the algorithm performs an exhaustive search of secondary structure element packing conformations distributed throughout the entire conformational space. The end result is a set of distinct protein conformations, which can be scored and refined as part of a process designed for computational elucidation of transmembrane protein structures.
GAMER: GPU-accelerated Adaptive MEsh Refinement code
NASA Astrophysics Data System (ADS)
Schive, Hsi-Yu; Tsai, Yu-Chih; Chiueh, Tzihong
2016-12-01
GAMER (GPU-accelerated Adaptive MEsh Refinement) is a general-purpose AMR + GPU framework. The code supports adaptive mesh refinement (AMR), hydrodynamics with self-gravity, and a variety of GPU-accelerated hydrodynamic and Poisson solvers. It also supports hybrid OpenMP/MPI/GPU parallelization, concurrent CPU/GPU execution for performance optimization, and a Hilbert space-filling curve for load balancing. Although the code is designed for simulating galaxy formation, it can easily be modified to solve a variety of applications with different governing equations. All optimization strategies implemented in the code can be inherited straightforwardly.
Adaptive mesh refinement strategies in isogeometric analysis— A computational comparison
NASA Astrophysics Data System (ADS)
Hennig, Paul; Kästner, Markus; Morgenstern, Philipp; Peterseim, Daniel
2017-04-01
We explain four variants of an adaptive finite element method with cubic splines and compare their performance in simple elliptic model problems. The methods in comparison are Truncated Hierarchical B-splines with two different refinement strategies, T-splines with the refinement strategy introduced by Scott et al. in 2012, and T-splines with an alternative refinement strategy introduced by some of the authors. In four examples, including singular and non-singular problems of linear elasticity and the Poisson problem, the H1-errors of the discrete solutions, the number of degrees of freedom as well as sparsity patterns and condition numbers of the discretized problem are compared.
Refined numerical solution of the transonic flow past a wedge
NASA Technical Reports Server (NTRS)
Liang, S.-M.; Fung, K.-Y.
1985-01-01
A numerical procedure combining the ideas of solving a modified difference equation and of adaptive mesh refinement is introduced. The numerical solution on a fixed grid is improved by using better approximations of the truncation error computed from local subdomain grid refinements. This technique is used to obtain refined solutions of steady, inviscid, transonic flow past a wedge. The effects of truncation error on the pressure distribution, wave drag, sonic line, and shock position are investigated. By comparing the pressure drag on the wedge and wave drag due to the shocks, a supersonic-to-supersonic shock originating from the wedge shoulder is confirmed.
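The idea of improving a fixed-grid solution with truncation-error estimates computed from local subdomain refinements can be illustrated by a classical Richardson-type tau estimate (a generic sketch assuming a second-order operator, not the paper's exact procedure):

```python
import numpy as np

def second_diff(u, dx):
    # centered second-difference approximation of u'' at interior nodes
    return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2

# Estimate the truncation error of the coarse-grid operator from a locally
# refined (halved-spacing) evaluation; the Richardson factor 4/3 assumes a
# second-order scheme. Test function and grid sizes are illustrative.
x_c = np.linspace(0.0, 1.0, 41)
x_f = np.linspace(0.0, 1.0, 81)
u_c, u_f = np.sin(np.pi * x_c), np.sin(np.pi * x_f)
Lc = second_diff(u_c, x_c[1] - x_c[0])
Lf = second_diff(u_f, x_f[1] - x_f[0])[1::2]   # fine values at coarse nodes
tau = 4.0 / 3.0 * (Lf - Lc)                    # truncation-error estimate
u_xx = -np.pi**2 * np.sin(np.pi * x_c[1:-1])   # exact second derivative
# Lc + tau approximates u_xx markedly better than Lc alone
```

Adding the estimated tau as a source term on the fixed grid is what "solving a modified difference equation" amounts to in this setting.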
DT-REFinD: diffusion tensor registration with exact finite-strain differential.
Yeo, B T Thomas; Vercauteren, Tom; Fillard, Pierre; Peyrat, Jean-Marc; Pennec, Xavier; Golland, Polina; Ayache, Nicholas; Clatz, Olivier
2009-12-01
In this paper, we propose the DT-REFinD algorithm for the diffeomorphic nonlinear registration of diffusion tensor images. Unlike scalar images, deforming tensor images requires choosing both a reorientation strategy and an interpolation scheme. Current diffusion tensor registration algorithms that use full tensor information face difficulties in computing the differential of the tensor reorientation strategy and consequently, these methods often approximate the gradient of the objective function. In the case of the finite-strain (FS) reorientation strategy, we borrow results from the pose estimation literature in computer vision to derive an analytical gradient of the registration objective function. By utilizing the closed-form gradient and the velocity field representation of one parameter subgroups of diffeomorphisms, the resulting registration algorithm is diffeomorphic and fast. We contrast the algorithm with a traditional FS alternative that ignores the reorientation in the gradient computation. We show that the exact gradient leads to significantly better registration at the cost of computation time. Independently of the choice of Euclidean or Log-Euclidean interpolation and sum of squared differences dissimilarity measure, the exact gradient achieves better alignment over an entire spectrum of deformation penalties. Alignment quality is assessed with a battery of metrics including tensor overlap, fractional anisotropy, inverse consistency and closeness to synthetic warps. The improvements persist even when a different reorientation scheme, preservation of principal directions, is used to apply the final deformations.
General shot refinement technique on fracturing of curvilinear shape for VSB mask writer
NASA Astrophysics Data System (ADS)
Tao, Takuya; Takahashi, Nobuyasu; Hamaji, Masakazu; Park, Jisoong; Lee, Sukho; Park, Sunghoon
2014-10-01
The increasing complexity of RET solutions has increased the shot count for advanced photomasks. In particular, the introduction of the inverse lithography technique (ILT) brings a significant increase in mask complexity, and conventional fracturing algorithms generate many more shots because they are not optimized for curvilinear shapes. Several methods have been proposed to reduce shot count for ILT photomasks. One of the stronger approaches is model-based fracturing, which utilizes precise dose control, shot overlaps, and many other techniques. However, it requires much more computation resources and upgrades to the EB mask writer to support user-level dose modulation and shot overlaps. We previously proposed an efficient, geometry-based algorithm to fracture curvy shapes into VSB shots [5]. The algorithm achieved better EPE and reasonable process time compared with a conventional fracturing algorithm, but its fracturing quality can degrade for curvilinear ILT patterns with relatively rough contours. In this paper, we present a couple of general techniques to refine a set of VSB shots to reduce edge placement error (EPE) relative to an original curvy contour, with experimental results.
RETRACTED ARTICLE: Microstructural evolution of AA7449 aerospace alloy refined by intensive shearing
NASA Astrophysics Data System (ADS)
Haghayeghi, R.; Nastac, L.
2012-10-01
Many aerospace alloys are sensitive to their composition and thus cannot be chemically grain refined; moreover, only about 1% of added grain-refiner particles act as nuclei for refining the structure. In this paper, physical refinement by intensive shearing above the liquidus is investigated as an alternative technique for the AA7449 aerospace alloy. The results could open a new gateway for the aerospace industry to refine its microstructures.
Unsupervised motion-based object segmentation refined by color
NASA Astrophysics Data System (ADS)
Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris
2003-06-01
chance of the wrong position producing a good match. Consequently, a number of methods exist which combine motion and colour segmentation. These methods use colour segmentation as a base for the motion segmentation and estimation, or perform an independent colour segmentation in parallel which is in some way combined with the motion segmentation. The presented method uses both techniques to complement each other, by first segmenting on motion cues and then refining the segmentation with colour. To our knowledge, few methods adopt this approach. One example uses an irregular mesh, which hinders its efficient implementation in consumer electronics devices; furthermore, it produces a foreground/background segmentation, while our applications call for the segmentation of multiple objects. NEW METHOD: As mentioned above, we start with motion segmentation and afterwards refine the edges of this segmentation with a pixel-resolution colour segmentation method. There are several reasons for this approach: (1) Motion segmentation does not produce the oversegmentation which colour segmentation methods normally produce, because objects are more likely to have colour discontinuities than motion discontinuities. Consequently, the colour segmentation only has to be done at the edges of segments, confining it to a smaller part of the image, in which the colour of an object is more likely to be homogeneous. (2) This approach restricts the computationally expensive pixel-resolution colour segmentation to a subset of the image; together with the very efficient 3DRS motion estimation algorithm, this helps to reduce the computational complexity. (3) The motion cue alone is often enough to reliably distinguish objects from one another and from the background. To obtain the motion vector fields, a variant of the 3DRS block-based motion estimator which analyses three frames of input was used.
The 3DRS motion estimator is known
GRAIL Refinements to Lunar Seismic Structure
NASA Technical Reports Server (NTRS)
Weber, Renee; Gernero, Edward; Lin, Pei-Ying; Thorne, Michael; Schmerr, Nicholas; Han, Shin-Chan
2012-01-01
such as moonquake location, timing errors, and potential seismic heterogeneities. In addition, the modeled velocities may vary with a 1-to-1 trade-off with the modeled reflector depth. The GRAIL (Gravity Recovery and Interior Laboratory) mission, launched in Sept. 2011, placed two nearly identical spacecraft in lunar orbit. The two satellites make extremely high-resolution measurements of the lunar gravity field, which can be used to constrain the interior structure of the Moon using a "crust to core" approach. GRAIL's constraints on crustal thickness, mantle structure, core radius and stratification, and core state (solid vs. molten) will complement seismic investigations in several ways. Here we present a progress report on our efforts to advance our knowledge of the Moon's internal structure using joint gravity and seismic analyses. We will focus on methodology, including 1) refinements to the seismic core constraint accomplished through array processing of Apollo seismic data, made by applying a set of travel time corrections based on GRAIL structure estimates local to each Apollo seismic station; 2) modeling deep lunar structure through synthetic seismograms, to test whether the seismic core model can reproduce the core reflections observed in the Apollo seismograms; and 3) a joint seismic and gravity inversion in which we attempt to fit a family of seismic structure models with the gravity constraints from GRAIL, resulting in maps of seismic velocities and densities that vary from a nominal model both laterally and with depth.
Refined contour analysis of giant unilamellar vesicles
NASA Astrophysics Data System (ADS)
Pécréaux, J.; Döbereiner, H.-G.; Prost, J.; Joanny, J.-F.; Bassereau, P.
2004-03-01
The fluctuation spectrum of giant unilamellar vesicles is measured using a high-resolution contour detection technique. An analysis at higher q vectors than previously achievable is now possible due to technical improvements of the experimental setup and of the detection algorithm. The global fluctuation spectrum is directly fitted to deduce the membrane tension and the bending modulus of lipid membranes. Moreover, we show that the planar analysis of fluctuations is valid for spherical objects, even at low wave vectors. Corrections due to the integration time of the video camera and to the section of a 3D object by the observation plane are introduced. A precise calculation of the error bars has been done in order to provide reliable error estimates. Finally, using this technique, we have measured bending moduli for EPC, SOPC and SOPC:CHOL membranes, confirming previously published values. An interesting application of this technique can be the measurement of fluctuation spectra for non-equilibrium membranes, such as “active membranes”.
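The global spectrum being fitted is of the planar Helfrich form ⟨|h_q|²⟩ = kT/(σq² + κq⁴), up to the camera-integration and sectioning corrections discussed above. A dimensionless sketch of such a fit (synthetic data; units, values, and names are illustrative assumptions, not the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def spectrum(q, sigma, kappa):
    # planar Helfrich fluctuation spectrum in units where kT = 1
    return 1.0 / (sigma * q**2 + kappa * q**4)

q = np.linspace(1.0, 20.0, 50)          # dimensionless wave vectors
data = spectrum(q, 1.0, 0.01)           # synthetic "measured" spectrum
(sigma_fit, kappa_fit), _ = curve_fit(spectrum, q, data, p0=[0.5, 0.02])
# tension and bending modulus are recovered jointly from the global fit
```

The tension term dominates at low q and the bending term at high q, which is why extending the accessible q range improves the conditioning of the two-parameter fit.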
Risk Assessment: Perchloroethylene Dry Cleaners Refined Human Health Risk Characterization
This November 2005 memo and appendices describe the methods by which EPA conducted its refined risk assessment of the Major Source and Area Source facilities within the perchloroethylene (perc) dry cleaners source category.
CITGO Petroleum Corporation and PDV Midwest Refining, LLC Settlement
CITGO Petroleum Corporation and PDV Midwest Refining, LLC (collectively, CITGO) have agreed to pay a $1,955,000 civil penalty, perform environmental projects totaling more than $2 million, and spend an estimated $42 million in injunctive relief to resolve.
Rack gasoline and refining margins - wanted: a summer romance
Not Available
1988-04-13
For the first time since late 1987, apparent refining margins on the US benchmark crude oil (based on spot purchase prices) are virtually zero. This felicitous bit of news comes loaded with possibilities of positive (maybe even good) margins in coming months, if the differential between crude buying prices and the value of the refined barrel continues to improve. What refiners in the US market are watching most closely right now are motorists. This issue also contains the following: (1) ED refining netback data for the US Gulf and Western Coasts, Rotterdam, and Singapore, prices for early April 1988; and (2) ED fuel price/tax series for countries of the Western Hemisphere, April 1988 edition. 5 figures, 5 tables.
QM/MM X-ray Refinement of Zinc Metalloenzymes
Li, Xue; Hayik, Seth A.; Merz, Kenneth M.
2010-01-01
Zinc metalloenzymes play an important role in biology. However, due to the limitation of molecular force field energy restraints used in X-ray refinement at medium or low resolutions, the precise geometry of the zinc coordination environment can be difficult to distinguish from ambiguous electron density maps. Due to the difficulties involved in defining accurate force fields for metal ions, the QM/MM (Quantum-Mechanical/Molecular-Mechanical) method provides an attractive and more general alternative for the study and refinement of metalloprotein active sites. Herein we present three examples that indicate that QM/MM-based refinement yields a superior description of the crystal structure based on R and Rfree values and on inspection of the zinc coordination environment. It is concluded that QM/MM refinement is a useful general tool for the improvement of the metal coordination sphere in metalloenzyme active sites. PMID:20116858
Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.; Van Meter, James R.
2005-01-01
A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
Local block refinement with a multigrid flow solver
NASA Astrophysics Data System (ADS)
Lange, C. F.; Schäfer, M.; Durst, F.
2002-01-01
A local block refinement procedure for the efficient computation of transient incompressible flows with heat transfer is presented. The procedure uses patched structured grids for the blockwise refinement and a parallel multigrid finite volume method with colocated primitive variables to solve the Navier-Stokes equations. No restriction is imposed on the value of the refinement rate and non-integer rates may also be used. The procedure is analysed with respect to its sensitivity to the refinement rate and to the corresponding accuracy. Several applications exemplify the advantages of the method in comparison with a common block structured grid approach. The results show that it is possible to achieve an improvement in accuracy with simultaneous significant savings in computing time and memory requirements.
Refiners match Rvp reduction measures to operating problems
Musumeci, J.
1997-02-03
Reductions in gasoline vapor pressure specifications have created operational challenges for many refiners. Removal of butanes from gasoline blendstocks has become more critical to meeting product vapor pressure requirements. Some refiners have made major unit modifications, such as adding alkylation capacity for butane conversion. Others have debottlenecked existing fractionation equipment, thus increasing butane removal. Three case studies illustrate vapor pressure reduction solutions: changing unit operating targets, maintaining existing equipment, and debottlenecking minor equipment.
Adaptive Local Grid Refinement in Computational Fluid Mechanics.
1987-11-01
Adaptive mesh refinements in reservoir simulation applications (R.E. Ewing), Proceedings Intl. Conference on Accuracy Est. and Adaptive Refine...; reservoir simulation (R.E. Ewing and J.V. Koebbe), Innovative Numerical Methods in Engineering (R.P. Shaw, J. Periaux, A. Chaudouet, J. Wu...); Universities, Cheyenne, Wyoming, February 21, 1986; 9. Finite element techniques for reservoir simulation, Fourth International Symposium on Numerical
Mesh Generation via Local Bisection Refinement of Triangulated Grids
2015-06-01
Jason R. Looker, Joint and Operations Analysis Division, Defence Science and Technology Organisation, DSTO–TR–3095. ABSTRACT: This report provides a comprehensive implementation of an unstructured mesh generation method... relatively simple to implement, has the capacity to quickly generate a refined mesh with triangles that rapidly change size over a short distance, and does
VIEW OF RBC (REFINED BICARBONATE) BUILDING LOOKING NORTHEAST. DEMOLITION IN ...
VIEW OF RBC (REFINED BICARBONATE) BUILDING LOOKING NORTHEAST. DEMOLITION IN PROGRESS. "ARM & HAMMER BAKING SODA WAS MADE HERE FOR OVER 50 YEARS AND THEN SHIPPED ACROSS THE STREET TO THE CHURCH & DWIGHT PLANT ON WILLIS AVE. (ON THE RIGHT IN THIS PHOTO). LAYING ON THE GROUND IN FRONT OF C&D BUILDING IS PART OF AN RBC DRYING TOWER. - Solvay Process Company, Refined Bicarbonate Building, Between Willis & Milton Avenues, Solvay, Onondaga County, NY
NASA Technical Reports Server (NTRS)
Schultz, Christopher J.; Carey, Larry; Cecil, Dan; Bateman, Monte; Stano, Geoffrey; Goodman, Steve
2012-01-01
The objective of this project is to refine, adapt, and demonstrate the Lightning Jump Algorithm (LJA) for transition to GOES-R GLM (Geostationary Lightning Mapper) readiness and to establish a path to operations. Ongoing work: reducing risk in the GLM lightning proxy, cell tracking, LJA algorithm automation, and data fusion (e.g., radar + lightning).
Adaptive mesh refinement and adjoint methods in geophysics simulations
NASA Astrophysics Data System (ADS)
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times
Lietzke, S. E.; Scavetta, R. D.; Yoder, M. D.; Jurnak, F.
1996-01-01
The crystal structure of pectate lyase E (PelE; EC 4.2.2.2) from the enterobacteria Erwinia chrysanthemi has been refined by molecular dynamics techniques to a resolution of 2.2 Å and an R factor (an agreement factor between observed and calculated structure factor amplitudes) of 16.1%. The final model consists of all 355 amino acids and 157 water molecules. The root-mean-square deviation from ideality is 0.009 Å for bond lengths and 1.721° for bond angles. The structure of PelE bound to a lanthanum ion, which inhibits the enzymatic activity, has also been refined and compared to the metal-free protein. In addition, the structures of pectate lyase C (PelC) in the presence and absence of a lutetium ion have been refined further using an improved algorithm for identifying waters and other solvent molecules. The two putative active site regions of PelE have been compared to those in the refined structure of PelC. The analysis of the atomic details of PelE and PelC in the presence and absence of lanthanide ions provides insight into the enzymatic mechanism of pectate lyases. PMID:12226275
Local time-space mesh refinement for simulation of elastic wave propagation in multi-scale media
NASA Astrophysics Data System (ADS)
Kostin, Victor; Lisitsa, Vadim; Reshetova, Galina; Tcheverda, Vladimir
2015-01-01
This paper presents an original approach to local time-space grid refinement for the numerical simulation of wave propagation in models with localized clusters of micro-heterogeneities. The main features of the algorithm are the application of temporal and spatial refinement on two different surfaces; the use of the embedded-stencil technique for the refinement of grid step with respect to time; the use of the Fast Fourier Transform (FFT)-based interpolation to couple variables for spatial mesh refinement. The latter makes it possible to perform filtration of high spatial frequencies, which provides stability in the proposed finite-difference schemes. In the present work, the technique is implemented for the finite-difference simulation of seismic wave propagation and the interaction of such waves with fluid-filled fractures and cavities of carbonate reservoirs. However, this approach is easy to adapt and/or combine with other numerical techniques, such as finite elements, discontinuous Galerkin method, or finite volumes used for approximation of various types of linear and nonlinear hyperbolic equations.
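The FFT-based interpolation used to couple the coarse and fine meshes can be illustrated, in much simplified form, by spectral zero padding for periodic data (a generic sketch, not the authors' code; Nyquist-coefficient handling is omitted):

```python
import numpy as np

def fft_interpolate(u_coarse, ratio):
    # Interpolate periodic data onto a mesh `ratio` times finer by
    # zero-padding the real FFT spectrum; truncating the spectrum instead
    # gives the corresponding low-pass filtered restriction, which is the
    # filtration of high spatial frequencies mentioned above.
    n = u_coarse.size
    U = np.fft.rfft(u_coarse)
    U_pad = np.zeros(n * ratio // 2 + 1, dtype=complex)
    U_pad[:U.size] = U
    return np.fft.irfft(U_pad, n * ratio) * ratio

x = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
u = np.sin(3.0 * x)                 # band-limited signal on the coarse mesh
u_fine = fft_interpolate(u, 2)      # exact for band-limited periodic data
```

Because the padded spectrum contains no frequencies beyond the coarse Nyquist limit, the interpolant cannot inject unresolved high-frequency content into the fine region, which is the stability property exploited by the scheme.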
NASA Astrophysics Data System (ADS)
Areias, P.; Rabczuk, T.; de Sá, J. César
2016-12-01
We propose an alternative crack propagation algorithm which effectively circumvents the variable transfer procedure adopted with classical mesh adaptation algorithms. The present alternative consists of two stages: a mesh-creation stage where a local damage model is employed with the objective of defining a crack-conforming mesh, and a subsequent analysis stage with a localization limiter in the form of a modified screened Poisson equation which does not require crack path calculations. In the second stage, the crack naturally occurs within the refined region. A staggered scheme for the standard equilibrium and screened Poisson equations is used in this second stage. Element subdivision is based on edge split operations using a constitutive quantity (damage). To assess the robustness and accuracy of this algorithm, we use five quasi-brittle benchmarks, all successfully solved.
Multigroup radiation hydrodynamics with flux-limited diffusion and adaptive mesh refinement
NASA Astrophysics Data System (ADS)
González, M.; Vaytet, N.; Commerçon, B.; Masson, J.
2015-06-01
Context. Radiative transfer plays a crucial role in the star formation process. Because of the high computational cost, radiation-hydrodynamics simulations performed up to now have mainly been carried out in the grey approximation. In recent years, multifrequency radiation-hydrodynamics models have started to be developed in an attempt to better account for the large variations in opacities as a function of frequency. Aims: We wish to develop an efficient multigroup algorithm for the adaptive mesh refinement code RAMSES which is suited to heavy proto-stellar collapse calculations. Methods: Because of the prohibitive timestep constraints of an explicit radiative transfer method, we constructed a time-implicit solver based on a stabilized bi-conjugate gradient algorithm, and implemented it in RAMSES under the flux-limited diffusion approximation. Results: We present a series of tests that demonstrate the high performance of our scheme in dealing with frequency-dependent radiation-hydrodynamic flows. We also present a preliminary simulation of a 3D proto-stellar collapse using 20 frequency groups. Differences between grey and multigroup results are briefly discussed, and the large amount of information this new method brings us is also illustrated. Conclusions: We have implemented a multigroup flux-limited diffusion algorithm in the RAMSES code. The method performed well against standard radiation-hydrodynamics tests, and was also shown to be ripe for exploitation in the computational star formation context.
NASA Astrophysics Data System (ADS)
Sifounakis, Adamandios; Lee, Sangseung; You, Donghyun
2016-12-01
A second-order-accurate finite-volume method is developed for the solution of incompressible Navier-Stokes equations on locally refined nested Cartesian grids. Numerical accuracy and stability on locally refined nested Cartesian grids are achieved using a finite-volume discretization of the incompressible Navier-Stokes equations based on higher-order conservation principles - i.e., in addition to mass and momentum conservation, kinetic energy conservation in the inviscid limit is used to guide the selection of the discrete operators and solution algorithms. Hanging nodes at the interface are virtually slanted to improve the pressure-velocity projection, while the other parts of the grid maintain an orthogonal Cartesian grid topology. The present method is straight-forward to implement and shows superior conservation of mass, momentum, and kinetic energy compared to the conventional methods employing interpolation at the interface between coarse and fine grids.
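The role of discrete conservation in guiding operator choice can be shown with a tiny sketch: a skew-symmetric central-difference advection operator on a periodic grid satisfies u·(Du) = 0, so the semi-discrete kinetic energy is conserved in the inviscid limit (a 1-D illustration of the principle, not the authors' finite-volume scheme).

```python
import numpy as np

def rhs(u, c=1.0):
    """Skew-symmetric central-difference advection -c*D u on a periodic
    grid; D is antisymmetric, so u . (D u) = 0 and the discrete kinetic
    energy sum(u**2)/2 is constant in the semi-discrete, inviscid limit."""
    return -c * (np.roll(u, -1) - np.roll(u, 1)) / 2.0

u = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
energy_rate = float(u @ rhs(u))  # zero up to roundoff: energy-conserving
```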
Shi, Kuangyu; Fürst, Sebastian; Sun, Liang; Lukas, Mathias; Navab, Nassir; Förster, Stefan; Ziegler, Sibylle I
2016-11-19
PET/MR is an emerging hybrid imaging modality. However, attenuation correction (AC) remains challenging for hybrid PET/MR in generating accurate PET images. Segmentation-based methods on special MR sequences are most widely recommended by vendors. However, their accuracy is usually not high. Individual refinement of available certified attenuation maps may be helpful for further clinical applications. In this study, we proposed a multi-resolution regional learning (MRRL) scheme to utilize the internal consistency of the patient data. The anatomical and AC MR sequences of the same subject were employed to guide the refinement of the provided AC maps. The developed algorithm was tested on 9 patients scanned consecutively with PET/MR and PET/CT (7 [(18)F]FDG and 2 [(18)F]FET). The preliminary results showed that MRRL can improve the accuracy of segmented attenuation maps and consequently the accuracy of PET reconstructions.
Refinement performance and mechanism of an Al-50Si alloy
Dai, H.S.; Liu, X.F.
2008-11-15
The microstructure and melt structure of primary silicon particles in an Al-50%Si (wt.%) alloy have been investigated by optical microscopy, scanning electron microscopy, electron probe micro-analysis and high-temperature X-ray diffractometry. The results show that the Al-50Si alloy can be effectively refined by a newly developed Si-20P master alloy, and that the melting temperature is crucial to the refinement process. The minimal overheating degree ΔT_min (the difference between the minimal overheating temperature T_min and the liquidus temperature T_L) for good refinement is about 260 °C. Primary silicon particles can be refined by adding 0.2 wt.% phosphorus at sufficient temperature, and their average size decreases from 2-4 mm to about 30 µm. X-ray diffraction data for the Al-50Si melt demonstrate that a structural change occurs when the melting temperature varies from 1100 °C to 1300 °C. Additionally, the relationship between the refinement mechanism and the melt structure is discussed.
Production and Refining of Magnesium Metal from Turkey Originating Dolomite
NASA Astrophysics Data System (ADS)
Demiray, Yeliz; Yücel, Onuralp
2012-06-01
In this study, crown magnesium produced from Turkish calcined dolomite by the Pidgeon process was refined and subjected to corrosion tests. Using the FactSage thermodynamic program, the metallothermic reduction behavior of magnesium oxide and the silicate structure formed during this reaction were investigated. After the thermodynamic studies were completed, the calcination of dolomite, its metallothermic reduction at temperatures of 1473 K and 1523 K under vacuum (varied from 20 to 200 Pa), and the refining of crown magnesium were studied. Different flux compositions consisting of MgCl2, KCl, CaCl2, MgO, CaF2, NaCl, and SiO2, with and without B2O3 additions, were selected for the refining process. These tests were carried out at 963 K for settling times of 15, 30, and 45 minutes. A considerable amount of iron was transferred into the sludge phase, and the iron content decreased from 0.08% to 0.027%. The refined magnesium was suitable for the production of various magnesium alloys. As a result of the decreased iron content, a minimum corrosion rate of 2.35 g/m2/day was obtained for the refined magnesium. The results are compared with previous studies.
Adaptive mesh refinement for shocks and material interfaces
Dai, William Wenlong
2010-01-01
There are three kinds of adaptive mesh refinement (AMR) in structured meshes. Block-based AMR sometimes over-refines meshes. Cell-based AMR treats each cell individually and thus loses the advantages inherent in structured meshes. Patch-based AMR is intended to combine the advantages of block- and cell-based AMR, i.e., the nature of structured meshes and sharp regions of refinement. But patch-based AMR has its own difficulties. For example, patch-based AMR typically cannot preserve the symmetries of physics problems. In this paper, we present an approach for patch-based AMR for hydrodynamics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, management of patches, and load balance. The special features of this patch-based AMR include symmetry preserving, efficiency of refinement across shock fronts and material interfaces, special implementation of flux correction, and patch management in parallel computing environments. To demonstrate the capability of the AMR framework, we show both two- and three-dimensional hydrodynamics simulations with many levels of refinement.
Refined food addiction: a classic substance use disorder.
Ifland, J R; Preuss, H G; Marcus, M T; Rourke, K M; Taylor, W C; Burau, K; Jacobs, W S; Kadish, W; Manso, G
2009-05-01
Overeating in industrial societies is a significant problem, linked to an increasing incidence of overweight and obesity, and the resultant adverse health consequences. We advance the hypothesis that a possible explanation for overeating is that processed foods with high concentrations of sugar and other refined sweeteners, refined carbohydrates, fat, salt, and caffeine are addictive substances. Therefore, many people lose control over their ability to regulate their consumption of such foods. The loss of control over these foods could account for the global epidemic of obesity and other metabolic disorders. We assert that overeating can be described as an addiction to refined foods that conforms to the DSM-IV criteria for substance use disorders. To examine the hypothesis, we relied on experience with self-identified refined foods addicts, as well as critical reading of the literature on obesity, eating behavior, and drug addiction. Reports by self-identified food addicts illustrate behaviors that conform to the 7 DSM-IV criteria for substance use disorders. The literature also supports use of the DSM-IV criteria to describe overeating as a substance use disorder. The observational and empirical data strengthen the hypothesis that certain refined food consumption behaviors meet the criteria for substance use disorders, not unlike tobacco and alcohol. This hypothesis could lead to a new diagnostic category, as well as therapeutic approaches to changing overeating behaviors.
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Mapped Landmark Algorithm for Precision Landing
NASA Technical Reports Server (NTRS)
Johnson, Andrew; Ansar, Adnan; Matthies, Larry
2007-01-01
A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel matches. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching (see figure), which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
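The coarse frequency-domain stage can be sketched with standard circular cross-correlation via the FFT, zero-meaning both signals as a cheap stand-in for full normalized correlation (a generic textbook illustration, not the DIMES flight code).

```python
import numpy as np

def fft_match(image, template):
    """Locate a template in an image by circular cross-correlation computed
    in the frequency domain (correlation theorem); returns the (row, col)
    offset of the correlation peak."""
    im = image - image.mean()
    tp = np.zeros_like(im)
    tp[: template.shape[0], : template.shape[1]] = template - template.mean()
    corr = np.fft.irfft2(np.fft.rfft2(im) * np.conj(np.fft.rfft2(tp)),
                         s=im.shape)
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(0)
scene = rng.random((64, 64))
offset = fft_match(scene, scene[20:28, 33:41])  # recover the patch origin
```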
Molecular dynamics force-field refinement against quasi-elastic neutron scattering data
Borreguero Calvo, Jose M.; Lynch, Vickie E.
2015-11-23
Quasi-elastic neutron scattering (QENS) is one of the experimental techniques of choice for probing dynamics at length and time scales that are also in the realm of full-atom molecular dynamics (MD) simulations. This overlap enables extension of current fitting methods that use time-independent equilibrium measurements to new methods fitting against dynamics data. We present an algorithm that fits simulation-derived incoherent dynamical structure factors against QENS data probing the diffusive dynamics of the system. We showcase the difficulties inherent to this type of fitting problem, namely, the disparity between the simulation and experimental environments, as well as limitations in the simulation due to incomplete sampling of phase space. We discuss a methodology to overcome these difficulties and apply it to a set of full-atom MD simulations for the purpose of refining the force-field parameter governing the activation energy of methyl rotation in the octa-methyl polyhedral oligomeric silsesquioxane molecule. Our optimal simulated activation energy agrees with the experimentally derived value to within 5%, well within experimental error. We believe the method will find applicability to other types of diffusive motions and other representations of the system, such as coarse-grained models where empirical fitting is essential. In addition, the refinement method can be extended to the coherent dynamic structure factor with no additional effort.
The changing face of U. S. refining: Ominous notes
Not Available
1992-01-31
As environmental protection comes of age in the US, a complex series of structural changes is also expected - in enforcement bureaucracy, manufacturing, and in energy consumption. It is already quite obvious in the petroleum refining industry. A side effect may be the export of jobs. Buyouts and closures are expected, as is increased refined product import dependency. This issue updates expected changes in gasoline and distillate product requirements in the US, and reports some ominous statements from some of the oil industry's affected parties. This issue also presented the following: (1) the ED Refining Netback Data Series for the US Gulf and West Coasts, Rotterdam, and Singapore as of Jan. 24, 1992; and (2) the ED Fuel Price Tax Series for countries of the Eastern Hemisphere, Jan. 1992 edition.
Survey shows over 1,000 refining catalysts
Rhodes, A.K.
1991-10-14
The Journal's latest survey of worldwide refining catalysts reveals that there are more than 1,040 unique catalyst designations in commercial use in 19 processing categories - an increase of some 140 since the compilation of refining catalysts was last published. As a matter of interest, some 700 catalysts were identified in the first survey. The processing categories surveyed in this paper are: catalytic naphtha reforming, dimerization, isomerization (C4), isomerization (C5 and C6), isomerization (xylenes), fluid catalytic cracking (FCC), hydrocracking, mild hydrocracking, hydrotreating/hydrogenation/saturation, hydrorefining, polymerization, sulfur (elemental) recovery, steam hydrocarbon reforming, sweetening, Claus unit tail gas treatment, oxygenates, combustion promoters (FCC), sulfur oxides reduction (FCC), and other refining processes.
Steam refining as an alternative to steam explosion.
Schütt, Fokko; Westereng, Bjørge; Horn, Svein J; Puls, Jürgen; Saake, Bodo
2012-05-01
In steam pretreatment the defibration is usually achieved by an explosion at the end of the treatment, but can also be carried out in a subsequent refiner step. A steam explosion and a steam refining unit were compared by using the same raw material and pretreatment conditions, i.e. temperature and time. Smaller particle size was needed for the steam explosion unit to obtain homogenous slurries without considerable amounts of solid chips. A higher amount of volatiles could be condensed from the vapour phase after steam refining. The results from enzymatic hydrolysis showed no significant differences. It could be shown that, beside the chemical changes in the cell wall, the decrease of the particle size is the decisive factor to enhance the enzymatic accessibility while the explosion effect is not required.
Automated Assume-Guarantee Reasoning by Abstraction Refinement
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Giannakopoulou, Dimitra
2008-01-01
Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.
The US petroleum refining industry in the 1980's
Not Available
1990-10-11
As part of the EIA program on petroleum, The US Petroleum Refining Industry in the 1980's presents a historical analysis of the changes that took place in the US petroleum refining industry during the 1980's. It is intended to be of interest to analysts in the petroleum industry, state and federal government officials, Congress, and the general public. The report consists of six chapters and four appendices. Included is a detailed description of the major events and factors that affected the domestic refining industry during this period. Some of the changes that took place in the 1980's are the result of events that started in the 1970's. The impact of these events on US refinery configuration, operations, economics, and company ownership is examined. 23 figs., 11 tabs.
Segregation Coefficients of Impurities in Selenium by Zone Refining
NASA Technical Reports Server (NTRS)
Su, Ching-Hua; Sha, Yi-Gao
1998-01-01
The purification of Se by the zone refining process was studied. The impurity solute levels along the length of a zone-refined Se sample were measured by spark source mass spectrographic analysis. By comparing the experimental concentration levels with theoretical curves, the segregation coefficient, defined as the ratio of the equilibrium concentration of a given solute in the solid to that in the liquid, k = x_s/x_l, is found to be close to unity for most of the impurities in Se, i.e., between 0.85 and 1.15, with the k value for Si, Zn, Fe, Na and Al greater than 1 and that for S, Cl, Ca, P, As, Mn and Cr less than 1. This implies that a large number of passes is needed for the successful implementation of zone refining in the purification of Se.
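The single-pass behavior behind this conclusion can be sketched with Pfann's ideal zone-refining relation, C(x)/C0 = 1 - (1 - k)·exp(-k·x/l), valid before the final zone length of the bar: with k near unity the profile barely departs from the starting composition, which is why many passes are required (a textbook model, not the paper's analysis).

```python
import math

def single_pass_profile(k, zone_len, x):
    """Relative solute concentration C(x)/C0 after one molten-zone pass
    (ideal Pfann model, uniform initial composition C0, zone length l)."""
    return 1.0 - (1.0 - k) * math.exp(-k * x / zone_len)

# k = 0.9 (typical of the impurities measured here) leaves the bar start
# near 0.9*C0, while a strongly segregating solute with k = 0.1 drops it
# to about 0.1*C0 in a single pass.
near_unity = single_pass_profile(0.9, 1.0, 0.0)
strong = single_pass_profile(0.1, 1.0, 0.0)
```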
Fu, Zheng; Li, Xue; Miao, Yipu; Merz, Kenneth M
2013-12-03
The recognition and association of donepezil with acetylcholinesterase (AChE) has been extensively studied in the past several decades because of the former's use as a palliative treatment for mild Alzheimer disease. Herein we examine the conformational properties of donepezil and we re-examine the donepezil-AChE crystal structure using combined quantum mechanical/molecular mechanical (QM/MM) X-ray refinement tools. Donepezil's conformational energy surface was explored using the M06 suite of density functionals and with the MP2/complete basis set (CBS) method using the aug-cc-pVXZ (X = D and T) basis sets. The donepezil-AChE complex (PDB 1EVE) was also re-refined through a parallel QM/MM X-ray refinement approach based on an in-house ab initio code QUICK, which uses the message passing interface (MPI) in a distributed SCF algorithm to accelerate the calculation via parallelization. In the QM/MM re-refined donepezil structure, coordinate errors that previously existed in the PDB deposited geometry were improved leading to an improvement of the modeling of the interaction between donepezil and the aromatic side chains located in the AChE active site gorge. As a result of the re-refinement there was a 93% reduction in the donepezil conformational strain energy versus the original PDB structure. The results of the present effort offer further detailed structural and biochemical inhibitor-AChE information for the continued development of more effective and palliative treatments of Alzheimer disease.
NASA Astrophysics Data System (ADS)
Chilton, Sven; Colella, Phillip
2010-11-01
Adaptive mesh refinement (AMR) is an efficient technique for solving systems of partial differential equations numerically. The underlying algorithm determines where and when a base spatial and temporal grid must be resolved further in order to achieve the desired precision and accuracy in the numerical solution. However, propagating wave solutions prove problematic for AMR. In systems with low degrees of dissipation (e.g. the Maxwell-Vlasov system) a wave traveling from a finely resolved region into a coarsely resolved region encounters a numerical impedance mismatch, resulting in spurious reflections off of the coarse-fine grid boundary. These reflected waves then become trapped inside the fine region. Here, we present a scheme for damping these spurious reflections. We demonstrate its application to the scalar wave equation and an implementation for Maxwell's Equations. We also discuss a possible extension to the Maxwell-Vlasov system.
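One common ingredient of such schemes can be sketched as a sponge layer that smoothly attenuates the solution over the cells adjacent to a grid boundary, so the damping itself introduces no sharp feature that could reflect waves (a generic illustration of near-interface damping, not the specific scheme presented by the authors).

```python
import numpy as np

def apply_sponge(u, width=16, strength=0.5):
    """Attenuate the last `width` cells of a 1-D field with a smooth
    sin^2 ramp from full amplitude down to (1 - strength)."""
    ramp = np.ones(u.size)
    i = np.arange(width)
    ramp[u.size - width:] = 1.0 - strength * np.sin(
        0.5 * np.pi * i / (width - 1)) ** 2
    return u * ramp

damped = apply_sponge(np.ones(64))  # interior untouched, edge halved
```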
Pillowing doublets: Refining a mesh to ensure that faces share at most one edge
Mitchell, S.A.; Tautges, T.J.
1995-11-01
Occasionally one may be confronted by a hexahedral or quadrilateral mesh containing doublets, two faces sharing two edges. In this case, no amount of smoothing will produce a mesh with agreeable element quality: in the planar case, one of these two faces will always have an angle of at least 180 degrees between the two edges. The authors describe a robust scheme for refining a hexahedral or quadrilateral mesh to separate such faces, so that any two faces share at most one edge. Note that this also ensures that two hexahedra share at most one face in the three dimensional case. The authors have implemented this algorithm and incorporated it into the CUBIT mesh generation environment developed at Sandia National Laboratories.
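The defect being separated here is easy to detect programmatically: two quadrilaterals form a doublet exactly when their edge sets share at least two members (a minimal detection sketch; the refinement itself, via pillowing, is beyond a few lines).

```python
from itertools import combinations

def quad_edges(face):
    """Undirected edges of a quadrilateral given as an ordered vertex tuple."""
    return {frozenset((face[i], face[(i + 1) % 4])) for i in range(4)}

def find_doublets(faces):
    """Return index pairs of faces sharing two (or more) edges; these are
    the doublets that no amount of smoothing can fix."""
    return [(i, j) for i, j in combinations(range(len(faces)), 2)
            if len(quad_edges(faces[i]) & quad_edges(faces[j])) >= 2]

# Faces 0 and 1 share edges (1,2) and (2,3), so they form a doublet.
faces = [(0, 1, 2, 3), (1, 4, 3, 2), (3, 4, 5, 6)]
doublets = find_doublets(faces)
```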
Kimura, S Roy; Tebben, Andrew J; Langley, David R
2008-06-01
Homology modeling of G protein-coupled receptors is becoming a widely used tool in drug discovery. However, unrefined models built using the bovine rhodopsin crystal structure as the template, often have binding sites that are too small to accommodate known ligands. Here, we present a novel systematic method to refine model active sites based on a pressure-guided molecular dynamics simulation. A distinct advantage of this approach is the ability to introduce systematic perturbations in model backbone atoms in addition to side chain adjustments. The method is validated on two test cases: (1) docking of retinal into an MD-relaxed structure of opsin and (2) docking of known ligands into a homology model of the CCR2 receptor. In both cases, we show that the MD expansion algorithm makes it possible to dock the ligands in poses that agree with the crystal structure or mutagenesis data.
Trajectory refinement of three-body orbits in the real solar system model
NASA Astrophysics Data System (ADS)
Dei Tos, Diogene A.; Topputo, Francesco
2017-04-01
In this paper, an automatic algorithm for the correction of orbits in the real solar system model is described. The differential equations governing the dynamics of a massless particle in the n-body problem are written as perturbation of the circular restricted three-body problem in a non-uniformly rotating, pulsating frame by using a Lagrangian formalism. The refinement is carried out by means of a modified multiple shooting technique, and the problem is solved for a finite number of trajectory states at several time instants. The analysis involves computing the dynamical substitutes of the collinear points, as well as several Lagrange point orbits, for the Sun-Earth, Sun-Jupiter, and Earth-Moon gravitational systems.
Grid-refinement study of hypersonic laminar flow over a 2-D ramp
NASA Technical Reports Server (NTRS)
Thomas, James L.; Rudy, David H.; Kumar, Ajay; Van Leer, Bram
1991-01-01
Computations were made for those test cases of Problem 3 which were designated as laminar flows, viz., test cases 3.1, 3.2, 3.4, and 3.5. These test cases corresponded to flows over a flat plate and a compression ramp at high Mach number and at high Reynolds number. The computations over the compression ramps indicate a substantial streamwise extent of separation. Based on previous experience with separated laminar flows at high Mach numbers which indicated a substantial effect with spatial grid refinement, a series of computations with different grid sizes were performed. Also, for the flat plate, comparisons of the results for two different algorithms were made.
Generalization and refinement of an automatic landing system capable of curved trajectories
NASA Technical Reports Server (NTRS)
Sherman, W. L.
1976-01-01
Refinements in the lateral and longitudinal guidance for an automatic landing system capable of curved trajectories were studied. Wing flaps or drag flaps (speed brakes) were found to provide faster and more precise speed control than autothrottles. In the case of the lateral control it is shown that the use of the integral of the roll error in the roll command over the first 30 to 40 seconds of flight reduces the sensitivity of the lateral guidance to the gain on the azimuth guidance angle error in the roll command. Also, changes to the guidance algorithm are given that permit pi-radian approaches and constrain the airplane to fly in a specified plane defined by the position of the airplane at the start of letdown and the flare point.
MPI parallelization of full PIC simulation code with Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Matsui, Tatsuki; Nunami, Masanori; Usui, Hideyuki; Moritaka, Toseo
2010-11-01
A new parallelization technique developed for the PIC method with adaptive mesh refinement (AMR) is introduced. In the AMR technique, the complicated cell arrangements are organized and managed as interconnected pointers with multiple resolution levels, forming a fully threaded tree structure as a whole. In order to retain this tree structure distributed over multiple processes, remote memory access, an extended feature of the MPI-2 standard, is employed. Another important feature of the present simulation technique is domain decomposition according to a modified Morton ordering. This ordering groups together roughly equal numbers of particle calculation loops per process, which allows for better load balance. Using this advanced simulation code, preliminary results for basic physical problems are exhibited as a validity check, together with benchmarks testing performance and scalability.
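The Morton ordering mentioned above builds on the standard Z-order key obtained by interleaving coordinate bits; sorting cells by this key yields a space-filling curve that can be cut into contiguous, load-balanced chunks, one per rank (a plain 2-D Morton sketch, without the authors' modification).

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of (x, y) into a single Morton (Z-order) key."""
    code = 0
    for b in range(bits):
        code |= ((x >> b) & 1) << (2 * b)      # x bits go to even positions
        code |= ((y >> b) & 1) << (2 * b + 1)  # y bits go to odd positions
    return code

# Cells sorted along the Z-curve form contiguous runs that can be split
# into chunks of roughly equal particle work, one chunk per MPI rank.
cells = [(x, y) for x in range(4) for y in range(4)]
z_order = sorted(cells, key=lambda c: morton2d(*c))
```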
Refinable C(1) spline elements for irregular quad layout.
Nguyen, Thien; Peters, Jörg
2016-03-01
Building on a result of U. Reif on removable singularities, we construct C(1) bi-3 splines that may include irregular points where fewer or more than four tensor-product patches meet. The resulting space complements PHT splines, is refinable, and the refined spaces are nested, preserving, for example, surfaces constructed from the splines. As in the regular case, each quadrilateral has four degrees of freedom, each associated with one spline, and the splines are linearly independent. Examples of use for surface construction and isogeometric analysis are provided.
Refined Estimation Of Thermal Tensile Stresses In Bolts
NASA Technical Reports Server (NTRS)
Rash, Larry C.
1994-01-01
Thermal changes in tensile stresses and strains in bolt and in corresponding compressive stresses and strains in bolted material estimated more accurately by use of equations incorporating two refinements over previous equations. Elasticity of bolted material and radial thermal expansion also taken into account. Refined equations improve design and analysis of bolted joints assembled at one temperature (e.g., room temperature) and in which specified minimum tension must be maintained (and/or specified maximum tension not exceeded) at higher or lower operational temperature.
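The underlying joint mechanics can be sketched with the standard series-stiffness model: the mismatch in free thermal expansion between bolt and clamped member is divided by their series compliance to give the change in bolt tension (a textbook first-order sketch with hypothetical numbers; the report's refined equations additionally account for member elasticity and radial thermal expansion).

```python
def thermal_load_change(alpha_bolt, alpha_member, dT, k_bolt, k_member, grip):
    """Change in bolt tension for a uniform temperature change dT: the
    differential free expansion over the grip length, divided by the
    series compliance of bolt and clamped member (stiffnesses in N/m)."""
    mismatch = (alpha_member - alpha_bolt) * dT * grip
    return mismatch / (1.0 / k_bolt + 1.0 / k_member)

# Hypothetical example: steel bolt (12e-6 /K) in an aluminum flange
# (23e-6 /K), 100 K rise, equal stiffnesses, 50 mm grip length.
dF = thermal_load_change(12e-6, 23e-6, 100.0, 2.0e8, 2.0e8, 0.05)
```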
19 CFR 19.21 - Smelting and refining in separate establishments.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 19 Customs Duties 1 2011-04-01 2011-04-01 false Smelting and refining in separate establishments... THEREIN Smelting and Refining Warehouses § 19.21 Smelting and refining in separate establishments. (a) If the operations of smelting and refining are not carried on in the same establishment, the smelted...
19 CFR 19.18 - Smelting and refining; allowance for wastage; withdrawal for consumption.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 19 Customs Duties 1 2011-04-01 2011-04-01 false Smelting and refining; allowance for wastage... OF MERCHANDISE THEREIN Smelting and Refining Warehouses § 19.18 Smelting and refining; allowance for... dutiable metals entirely lost in smelting or refining, or both), shall constitute the quantity of...
Algorithm Animation with Galant.
Stallmann, Matthias F
2017-01-01
Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.
Code of Federal Regulations, 2010 CFR
2010-07-01
... FUELS AND FUEL ADDITIVES Motor Vehicle Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA... approved small refiner status who acquires a refinery from a refiner with approved status as a motor... May 31, 2010 for a refinery acquired from a motor vehicle diesel fuel small refiner or beyond...
Efficient Homotopy Continuation Algorithms with Application to Computational Fluid Dynamics
NASA Astrophysics Data System (ADS)
Brown, David A.
New homotopy continuation algorithms are developed and applied to a parallel implicit finite-difference Newton-Krylov-Schur external aerodynamic flow solver for the compressible Euler, Navier-Stokes, and Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras one-equation turbulence model. Many new analysis tools, calculations, and numerical algorithms are presented for the study and design of efficient and robust homotopy continuation algorithms applicable to solving very large and sparse nonlinear systems of equations. Several specific homotopies are presented and studied, and a methodology is presented for assessing the suitability of specific homotopies for homotopy continuation. A new class of homotopy continuation algorithms, referred to as monolithic homotopy continuation algorithms, is developed. These algorithms differ from classical predictor-corrector algorithms by combining the predictor and corrector stages into a single update, significantly reducing the amount of computation and avoiding wasted computational effort resulting from over-solving in the corrector phase. The new algorithms are also simpler from a user perspective, with fewer input parameters, which also improves the user's ability to choose effective parameters on the first flow solve attempt. Conditional convergence is proved analytically and studied numerically for the new algorithms. The performance of a fully-implicit monolithic homotopy continuation algorithm is evaluated for several inviscid, laminar, and turbulent flows over NACA 0012 airfoils and ONERA M6 wings. The monolithic algorithm is demonstrated to be more efficient than the predictor-corrector algorithm for all applications investigated. It is also demonstrated to be more efficient than the widely-used pseudo-transient continuation algorithm for all inviscid and laminar cases investigated, and good performance scaling with grid refinement is demonstrated for the inviscid cases. Performance is also demonstrated
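The classical predictor-corrector scheme that the monolithic algorithms are contrasted with can be illustrated on a toy scalar problem. The sketch below is an assumption-laden simplification (not the thesis's flow-solver implementation): it traces the convex homotopy H(x, lam) = lam*F(x) + (1 - lam)*(x - x0) from the trivial root at lam = 0 to a root of F at lam = 1, using an Euler predictor and a Newton corrector.

```python
def homotopy_solve(F, dF, x0, steps=50, newton_iters=5):
    """Predictor-corrector continuation for F(x) = 0 via the convex
    homotopy H(x, lam) = lam*F(x) + (1 - lam)*(x - x0)."""
    x = x0
    for k in range(1, steps + 1):
        lam0, lam1 = (k - 1) / steps, k / steps
        # Predictor: Euler step along dx/dlam = -(dH/dlam) / (dH/dx)
        Hx = lam0 * dF(x) + (1 - lam0)
        Hlam = F(x) - (x - x0)
        x = x - (lam1 - lam0) * Hlam / Hx
        # Corrector: Newton iterations on H(x, lam1) = 0
        for _ in range(newton_iters):
            H = lam1 * F(x) + (1 - lam1) * (x - x0)
            Hx = lam1 * dF(x) + (1 - lam1)
            x = x - H / Hx
    return x

F = lambda x: x**3 - 2 * x - 5   # classic Newton test equation
dF = lambda x: 3 * x**2 - 2
root = homotopy_solve(F, dF, x0=1.0)
```

A monolithic variant, as the abstract describes, would fold the predictor and corrector into one update per step rather than over-solving the corrector at each intermediate lam.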
Friends-of-friends galaxy group finder with membership refinement. Application to the local Universe
NASA Astrophysics Data System (ADS)
Tempel, E.; Kipper, R.; Tamm, A.; Gramann, M.; Einasto, M.; Sepp, T.; Tuvikene, T.
2016-04-01
Context. Groups form the most abundant class of galaxy systems. They act as the principal drivers of galaxy evolution and can be used as tracers of the large-scale structure and the underlying cosmology. However, the detection of galaxy groups from galaxy redshift survey data is hampered by several observational limitations. Aims: We improve the widely used friends-of-friends (FoF) group finding algorithm with membership refinement procedures and apply the method to a combined dataset of galaxies in the local Universe. A major aim of the refinement is to detect subgroups within the FoF groups, enabling a more reliable suppression of the fingers-of-God effect. Methods: The FoF algorithm is often suspected of leaving subsystems of groups and clusters undetected. We used a galaxy sample built of the 2MRS, CF2, and 2M++ survey data comprising nearly 80 000 galaxies within the local volume of 430 Mpc radius to detect FoF groups. We conducted a multimodality check on the detected groups in search for subgroups. We furthermore refined group membership using the group virial radius and escape velocity to expose unbound galaxies. We used the virial theorem to estimate group masses. Results: The analysis results in a catalogue of 6282 galaxy groups in the 2MRS sample with two or more members, together with their mass estimates. About half of the initial FoF groups with ten or more members were split into smaller systems with the multimodality check. An interesting comparison to our detected groups is provided by another group catalogue that is based on similar data but a completely different methodology. Two thirds of the groups are identical or very similar. Differences mostly concern the smallest and largest of these other groups, the former sometimes missing and the latter being divided into subsystems in our catalogue. The catalogues are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc
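As background, the basic friends-of-friends step that this paper refines can be sketched as a union-find pass over all particle pairs: any two galaxies closer than the linking length are "friends", and groups are the connected components. This is a minimal, hypothetical version; the paper's method adds multimodality checks, subgroup splitting, and virial-radius/escape-velocity membership refinement on top of it.

```python
def fof_groups(points, linking_length):
    """Friends-of-friends: link every pair closer than the linking
    length, then return connected components via union-find."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    b2 = linking_length ** 2
    for i in range(n):                 # O(n^2); real finders use trees
        for j in range(i + 1, n):
            d2 = sum((a - c) ** 2 for a, c in zip(points[i], points[j]))
            if d2 <= b2:
                union(i, j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values(), key=min)

pts = [(0.0, 0.0), (0.4, 0.0), (0.8, 0.1), (5.0, 5.0), (5.3, 5.2)]
print(fof_groups(pts, 0.5))  # -> [[0, 1, 2], [3, 4]]
```

Note the transitivity that motivates the paper's refinement: galaxies 0 and 2 are grouped through galaxy 1 even though they are not direct friends, which is exactly how FoF can chain distinct subsystems into one group.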
AMR++: Object-Oriented Parallel Adaptive Mesh Refinement
Quinlan, D.; Philip, B.
2000-02-02
Adaptive mesh refinement (AMR) computations are complicated by their dynamic nature. The development of solvers for realistic applications is complicated by both the complexity of the AMR and the geometry of realistic problem domains. The additional complexity of distributed memory parallelism within such AMR applications most commonly exceeds the level of complexity that can be reasonably maintained with traditional approaches toward software development. This paper will present the details of our object-oriented work on the simplification of the use of adaptive mesh refinement on applications with complex geometries for both serial and distributed memory parallel computation. We will present an independent set of object-oriented abstractions (C++ libraries) well suited to the development of such seemingly intractable scientific computations. As an example of the use of this object-oriented approach we will present recent results of an application modeling fluid flow in the eye. Within this example, the geometry is too complicated for a single curvilinear coordinate grid and so a set of overlapping curvilinear coordinate grids is used. Adaptive mesh refinement and the required grid generation work to support the refinement process are coupled together in the solution of essentially elliptic equations within this domain. This paper will focus on the management of complexity within development of the AMR++ library which forms a part of the Overture object-oriented framework for the solution of partial differential equations within scientific computing.
Advances in multi-domain lattice Boltzmann grid refinement
NASA Astrophysics Data System (ADS)
Lagrava, D.; Malaspinas, O.; Latt, J.; Chopard, B.
2012-05-01
Grid refinement has been addressed by different authors in the lattice Boltzmann method community. The communication and reconstruction of information at grid transitions is of crucial importance from the accuracy and numerical stability point of view. While a decimation is performed when going from the fine to the coarse grid, a reconstruction must be performed to pass from the coarse to the fine grid. In this context, we introduce a decimation technique for the copy from the fine to the coarse grid based on a filtering operation. We show this operation to be extremely important, because a simple copy of the information is not sufficient to guarantee the stability of the numerical scheme at high Reynolds numbers. Then we demonstrate that to reconstruct the information, a local cubic interpolation scheme is mandatory in order to get a precision compatible with the order of accuracy of the lattice Boltzmann method. These two fundamental extra steps are validated on two classical 2D benchmarks, the 2D circular cylinder and the 2D dipole-wall collision. The latter is especially challenging from the numerical point of view since we allow strong gradients to cross the refinement interfaces at a relatively high Reynolds number of 5000. A very good agreement is found between the single grid and the refined grid cases. The proposed grid refinement strategy has been implemented in the parallel open-source library Palabos.
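The two coupling operations the abstract identifies, filtered decimation going fine-to-coarse and cubic reconstruction going coarse-to-fine, can be sketched in 1D. The 1-2-1 filter and the (-1/16, 9/16, 9/16, -1/16) midpoint stencil below are illustrative choices, not necessarily the paper's exact operators.

```python
import numpy as np

def fine_to_coarse(f):
    """Filtered decimation: smooth the fine-grid values with a 1-2-1
    filter, then copy every second node to the coarse grid.
    (np.convolve zero-pads, so boundary nodes are not meaningful.)"""
    g = np.convolve(f, [0.25, 0.5, 0.25], mode="same")
    return g[::2]

def coarse_to_fine(c):
    """Cubic reconstruction: coarse nodes are copied to the coincident
    (even) fine nodes; intermediate (odd) fine nodes are filled by
    4-point cubic Lagrange interpolation with midpoint weights
    (-1/16, 9/16, 9/16, -1/16)."""
    n = len(c)
    f = np.empty(2 * n - 1)
    f[::2] = c
    for i in range(n - 1):
        if 1 <= i <= n - 3:
            f[2 * i + 1] = (-c[i - 1] + 9 * c[i]
                            + 9 * c[i + 1] - c[i + 2]) / 16.0
        else:
            # boundary midpoints: in a real coupling these would use
            # data from the neighbouring grid, not a linear fallback
            f[2 * i + 1] = 0.5 * (c[i] + c[i + 1])
    return f
```

The cubic stencil is exact for cubic polynomials on interior midpoints, which is why a simple linear average is not enough to match the second-order accuracy of the lattice Boltzmann scheme.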
Lactation and neonatal nutrition: Defining and refining the critical questions
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper resulted from a conference entitled "Lactation and Milk: Defining and Refining the Critical Questions" held at the University of Colorado School of Medicine from January 18-20, 2012. The mission of the conference was to identify unresolved questions and set future goals for research into ...
Process for electroslag refining of uranium and uranium alloys
Lewis, P.S. Jr.; Agee, W.A.; Bullock, J.S. IV; Condon, J.B.
1975-07-22
A process is described for electroslag refining of uranium and uranium alloys wherein molten uranium and uranium alloys are melted in a molten layer of a fluoride slag containing up to about 8 weight percent calcium metal. The calcium metal reduces oxides in the uranium and uranium alloys to provide them with an oxygen content of less than 100 parts per million. (auth)
Some refinements of the theory of the viscous screw pump.
NASA Technical Reports Server (NTRS)
Elrod, H. G.
1972-01-01
Recently performed analysis for herringbone thrust bearings has been incorporated into the theory of the viscous screw pump for Newtonian fluids. In addition, certain earlier corrections for sidewall and channel curvature effects have been simplified. The result is a single, refined formula for the prediction of the pressure-flow relation for these pumps.
Refining King and Baxter Magolda's Model of Intercultural Maturity
ERIC Educational Resources Information Center
Perez, Rosemary J.; Shim, Woojeong; King, Patricia M.; Baxter Magolda, Marcia B.
2015-01-01
This study examined 110 intercultural experiences from 82 students attending six colleges and universities to explore how students' interpretations of their intercultural experiences reflected their developmental capacities for intercultural maturity. Our analysis of students' experiences confirmed as well as refined and expanded King and Baxter…
Crisis and Survival in Western European Oil Refining.
ERIC Educational Resources Information Center
Pinder, David A.
1986-01-01
In recent years, oil refining in Western Europe has experienced a period of intense contraction. Discussed are the nature of the crisis, defensive strategies that have been adopted, the spatial consequences of the strategies, and how effective they have been in combatting the root causes of crises. (RM)
Computations of Aerodynamic Performance Databases Using Output-Based Refinement
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2009-01-01
Objectives: handle complex geometry problems; control discretization errors via solution-adaptive mesh refinement; focus on aerodynamic databases of parametric and optimization studies: (1) accuracy: satisfy prescribed error bounds; (2) robustness and speed: may require over 10^5 mesh generations; (3) automation: avoid user supervision, obtain "expert meshes" independent of user skill, and run every case adaptively in production settings.
Comprehensive Data Collected from the Petroleum Refining Sector
On April 1, 2011 EPA sent a comprehensive industry-wide information collection request (ICR) to all facilities in the U.S. petroleum refining industry. EPA has received this ICR data and compiled these data into databases and spreadsheets for the web
Energy Efficiency Improvement in the Petroleum Refining Industry
Worrell, Ernst; Galitsky, Christina
2005-05-01
Information has proven to be an important barrier in industrial energy efficiency improvement. Voluntary government programs aim to assist industry to improve energy efficiency by supplying information on opportunities. ENERGY STAR(R) supports the development of strong strategic corporate energy management programs, by providing energy management information tools and strategies. This paper summarizes ENERGY STAR research conducted to develop an Energy Guide for the Petroleum Refining industry. Petroleum refining in the United States is the largest in the world, providing inputs to virtually every economic sector, including the transport sector and the chemical industry. Refineries typically spend 50 percent of cash operating costs (e.g., excluding capital costs and depreciation) on energy, making energy a major cost factor and also an important opportunity for cost reduction. The petroleum refining industry consumes about 3.1 Quads of primary energy, making it the single largest industrial energy user in the United States. Typically, refineries can economically improve energy efficiency by 20 percent. The findings suggest that given available resources and technology, there are substantial opportunities to reduce energy consumption cost-effectively in the petroleum refining industry while maintaining the quality of the products manufactured.
Shakeout gathers momentum in Europe's refining sector
Knott, D.
1996-03-25
This paper reviews the decline and restructuring of the petroleum refining industry in Europe which is facing increased competition from foreign operators and more stringent environmental compliance laws. The excess production capacity has forced mergers between companies and consolidation of plants. The paper reviews the production and capacity of each of the major European petroleum producing countries.
Evaluating and Refining High Throughput Tools for Toxicokinetics
This poster summarizes efforts of the Chemical Safety for Sustainability's Rapid Exposure and Dosimetry (RED) team to facilitate the development and refinement of toxicokinetics (TK) tools to be used in conjunction with the high throughput toxicity testing data generated as a par...
Optimization of Melt Treatment for Austenitic Steel Grain Refinement
NASA Astrophysics Data System (ADS)
Lekakh, Simon N.; Ge, Jun; Richards, Von; O'Malley, Ron; TerBush, Jessica R.
2017-02-01
Refinement of the as-cast grain structure of austenitic steels requires the presence of active solid nuclei during solidification. These nuclei can be formed in situ in the liquid alloy by promoting reactions between transition metals (Ti, Zr, Nb, and Hf) and metalloid elements (C, S, O, and N) dissolved in the melt. Using thermodynamic simulations, experiments were designed to evaluate the effectiveness of a predicted sequence of reactions targeted to form precipitates that could act as active nuclei for grain refinement in austenitic steel castings. Melt additions performed to promote the sequential precipitation of titanium nitride (TiN) onto previously formed spinel (Al2MgO4) inclusions in the melt resulted in a significant refinement of the as-cast grain structure in heavy section Cr-Ni-Mo stainless steel castings. A refined as-cast structure consisting of an inner fine-equiaxed grain structure and outer columnar dendrite zone structure of limited length was achieved in experimental castings. The sequential precipitation of TiN onto Al2MgO4 was confirmed using automated SEM/EDX and TEM analyses.
Refinement and Selection of Near-native Protein Structures
NASA Astrophysics Data System (ADS)
Zhang, Jiong; Zhang, Jingfen; Shang, Yi; Xu, Dong; Kosztin, Ioan
2013-03-01
In recent years in silico protein structure prediction reached a level where a variety of servers can generate large pools of near-native structures. However, the identification and further refinement of the best structures from the pool of decoys continue to be problematic. To address these issues, we have developed a selective refinement protocol (based on the Rosetta software package), and a molecular dynamics (MD) simulation based ranking method (MDR). The refinement of the selected structures is done by employing Rosetta's relax mode, subject to certain constraints. The selection of the final best models is done with MDR by testing their relative stability against gradual heating during all atom MD simulations. We have implemented the selective refinement protocol and the MDR method in our fully automated server Mufold-MD. Assessments of the performance of the Mufold-MD server in the CASP10 competition and other tests will be presented. This work was supported by grants from NIH. Computer time was provided by the University of Missouri Bioinformatics Consortium.
Assimilating Remote Ammonia Observations with a Refined Aerosol Thermodynamics Adjoint
Ammonia emissions parameters in North America can be refined in order to improve the evaluation of modeled concentrations against observations. Here, we seek to do so by developing and applying the GEOS-Chem adjoint nested over North America to conduct assimilation of observations...
Properties of Canadian re-refined base oils
Strigner, P.L.
1980-11-01
The Fuels and Lubricants Laboratory of NRC (Canada) has been examining for over 10 years, as a service, the properties of base stocks made by Canadian re-refiners. Nineteen samples of acid/clay processed base stocks from six Canadian re-refiners were examined. When well re-refined, the base stocks have excellent properties including a good response to anti-oxidants and a high degree of cleanliness. Since traces of additives and/or polar compounds do remain, the quality of the base stocks is judged to be slightly inferior to that of comparable virgin refined base stocks. Some suggested specification limits for various properties and some indication of batch-to-batch consistency were obtained. Any usage of the limits should be done with caution, e.g., sulfur, bearing in mind the rapidly changing crude oil picture and engine and machine technology leading to oil products of differing compositions. Certainly modifications are in order; it may even be desirable to have grades of base stocks.
Refinement of a Chemistry Attitude Measure for College Students
ERIC Educational Resources Information Center
Xu, Xiaoying; Lewis, Jennifer E.
2011-01-01
This work presents the evaluation and refinement of a chemistry attitude measure, Attitude toward the Subject of Chemistry Inventory (ASCI), for college students. The original 20-item and revised 8-item versions of ASCI (V1 and V2) were administered to different samples. The evaluation for ASCI had two main foci: reliability and validity. This…
Nucleation mechanisms of refined alpha microstructure in beta titanium alloys
NASA Astrophysics Data System (ADS)
Zheng, Yufeng
Due to a great combination of physical and mechanical properties, beta titanium alloys have become promising candidates in the chemical industry, aerospace, and biomedical materials. The microstructure of beta titanium alloys is the governing factor that determines their properties and performance, especially the size scale, distribution, and volume fraction of the precipitate phase in the parent phase matrix. Therefore, in order to enhance the performance of beta titanium alloys, it is critical to obtain a thorough understanding of microstructural evolution in these alloys upon various thermal and/or mechanical processes. The present work focuses on the nucleation mechanisms of refined and super-refined alpha microstructures in beta titanium alloys, in order to study the influence of instabilities within the parent phase matrix on precipitate nucleation, including compositional and/or structural instabilities. The study is primarily conducted in Ti-5Al-5Mo-5V-3Cr (wt%, Ti-5553), a commercial material for aerospace application. Refined and super-refined precipitate microstructures in Ti-5553 are obtained under accurately temperature-controlled heat treatments. The characteristics of either microstructure are investigated in detail using various characterization techniques, such as SEM, TEM, STEM, HRSTEM, and 3D atom probe, to describe the microstructure in terms of morphology, distribution, structure, and composition. Nucleation mechanisms for the refined and super-refined precipitates are proposed to fully explain the features of the different precipitate microstructures in Ti-5553. The necessary thermodynamic conditions and the detailed process of the phase transformations are introduced. To verify the reliability of the proposed nucleation mechanisms, thermodynamic calculations and phase-field simulations are performed using the database of the simple binary Ti-Mo system
Procedures and computer programs for telescopic mesh refinement using MODFLOW
Leake, Stanley A.; Claar, David V.
1999-01-01
Ground-water models are commonly used to evaluate flow systems in areas that are small relative to entire aquifer systems. In many of these analyses, simulation of the entire flow system is not desirable or will not allow sufficient detail in the area of interest. The procedure of telescopic mesh refinement allows use of a small, detailed model in the area of interest by taking boundary conditions from a larger model that encompasses the model in the area of interest. Some previous studies have used telescopic mesh refinement; however, better procedures are needed in carrying out telescopic mesh refinement using the U.S. Geological Survey ground-water flow model, referred to as MODFLOW. This report presents general procedures and three computer programs for use in telescopic mesh refinement with MODFLOW. The first computer program, MODTMR, constructs MODFLOW data sets for a local or embedded model using MODFLOW data sets and simulation results from a regional or encompassing model. The second computer program, TMRDIFF, provides a means of comparing head or drawdown in the local model with head or drawdown in the corresponding area of the regional model. The third program, RIVGRID, provides a means of constructing data sets for the River Package, Drain Package, General-Head Boundary Package, and Stream Package for regional and local models using grid-independent data specifying locations of these features. RIVGRID may be needed in some applications of telescopic mesh refinement because regional-model data sets do not contain enough information on locations of head-dependent flow features to properly locate the features in local models. The program is a general utility program that can be used in constructing data sets for head-dependent flow packages for any MODFLOW model under construction.
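The core transfer that MODTMR performs, sampling regional-model results at the boundary of the embedded local model, amounts to interpolating the regional head array at local boundary locations. A minimal sketch follows (the function name and bilinear interpolation are illustrative assumptions; the actual program reads and writes full MODFLOW data sets):

```python
import numpy as np

def boundary_heads(regional_head, origin, spacing, local_xy):
    """Bilinear interpolation of regional-model heads at the cell
    centers of a local (embedded) model boundary -- the kind of
    transfer used to build specified-head boundaries for a local grid.
    regional_head[i, j]: head at row i (y), column j (x)."""
    x0, y0 = origin
    dx, dy = spacing
    nrow, ncol = regional_head.shape
    out = []
    for (x, y) in local_xy:
        fx, fy = (x - x0) / dx, (y - y0) / dy
        j = min(max(int(fx), 0), ncol - 2)   # clamp to grid interior
        i = min(max(int(fy), 0), nrow - 2)
        tx, ty = fx - j, fy - i
        h = ((1 - tx) * (1 - ty) * regional_head[i, j]
             + tx * (1 - ty) * regional_head[i, j + 1]
             + (1 - tx) * ty * regional_head[i + 1, j]
             + tx * ty * regional_head[i + 1, j + 1])
        out.append(h)
    return out
```

Bilinear interpolation reproduces a linear head field exactly, which is the minimal consistency one would want between the regional solution and the local boundary condition.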
Hirshfeld atom refinement for modelling strong hydrogen bonds.
Woińska, Magdalena; Jayatilaka, Dylan; Spackman, Mark A; Edwards, Alison J; Dominiak, Paulina M; Woźniak, Krzysztof; Nishibori, Eiji; Sugimoto, Kunihisa; Grabowsky, Simon
2014-09-01
High-resolution low-temperature synchrotron X-ray diffraction data of the salt L-phenylalaninium hydrogen maleate are used to test the new automated iterative Hirshfeld atom refinement (HAR) procedure for the modelling of strong hydrogen bonds. The HAR models used present the first examples of Z' > 1 treatments in the framework of wavefunction-based refinement methods. L-Phenylalaninium hydrogen maleate exhibits several hydrogen bonds in its crystal structure, of which the shortest and the most challenging to model is the O-H...O intramolecular hydrogen bond present in the hydrogen maleate anion (O...O distance is about 2.41 Å). In particular, the reconstruction of the electron density in the hydrogen maleate moiety and the determination of hydrogen-atom properties [positions, bond distances and anisotropic displacement parameters (ADPs)] are the focus of the study. For comparison to the HAR results, different spherical (independent atom model, IAM) and aspherical (free multipole model, MM; transferable aspherical atom model, TAAM) X-ray refinement techniques as well as results from a low-temperature neutron-diffraction experiment are employed. Hydrogen-atom ADPs are furthermore compared to those derived from a TLS/rigid-body (SHADE) treatment of the X-ray structures. The reference neutron-diffraction experiment reveals a truly symmetric hydrogen bond in the hydrogen maleate anion. Only with HAR is it possible to freely refine hydrogen-atom positions and ADPs from the X-ray data, which leads to the best electron-density model and the closest agreement with the structural parameters derived from the neutron-diffraction experiment, e.g. the symmetric hydrogen position can be reproduced. The multipole-based refinement techniques (MM and TAAM) yield slightly asymmetric positions, whereas the IAM yields a significantly asymmetric position.
Decontamination of transuranic waste metal by melt refining
Heshmatpour, B.; Copeland, G.L.; Heestand, R.L.
1981-12-01
Melt refining of transuranic- (TRU-) contaminated metals has been proposed as a decontamination process with the potential advantages of reclaiming metal and simplifying analytical problems. Demonstrating that the 10 nCi/g (approx. 0.1 ppM) decontamination level can be routinely achieved by melt refining would allow scrap metal to be removed from the TRU waste classification. To demonstrate this feasibility, mild steel, stainless steel, nickel, and copper were contaminated with 500 ppM PuO2 and melted with various fluxes. Four different fluxes, borosilicate glass, blast furnace slag, high silica slag, and artificial basalt, were used in these studies. The solidified slags and metals were analyzed for their plutonium contents by a combination of wet chemical and alpha-activity counting techniques. Partition ratios were calculated for plutonium using the analytical results of each experiment. Some metals were double refined to study the effect of secondary slag treatment. The initial weight of the slags was also varied to investigate its effect on plutonium removal. The results indicated that the use of proper slags is necessary for effective removal of plutonium. All four slags were effective in removing plutonium from the metals. Values of less than 1 ppM Pu (approx. 100 nCi/g) could be obtained in all cases. The double-refined samples were cleaned to less than 0.1 ppM Pu (approx. nCi/g), which is the goal. Variation in the slag weight did not change the results significantly. Double refining of the metal with small primary and secondary slag volumes can be an effective process for removal of TRU contaminants from metals.
Parallel Clustering Algorithms for Structured AMR
Gunney, B T; Wissink, A M; Hysom, D A
2005-10-26
We compare several different parallel implementation approaches for the clustering operations performed during adaptive gridding operations in patch-based structured adaptive mesh refinement (SAMR) applications. Specifically, we target the clustering algorithm of Berger and Rigoutsos (BR91), which is commonly used in many SAMR applications. The baseline for comparison is a simplistic parallel extension of the original algorithm that works well for up to O(10^2) processors. Our goal is a clustering algorithm for machines of up to O(10^5) processors, such as the 64K-processor IBM BlueGene/Light system. We first present an algorithm that avoids the unneeded communications of the simplistic approach to improve the clustering speed by up to an order of magnitude. We then present a new task-parallel implementation to further reduce communication wait time, adding another order of magnitude of improvement. The new algorithms also exhibit more favorable scaling behavior for our test problems. Performance is evaluated on a number of large scale parallel computer systems, including a 16K-processor BlueGene/Light system.
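The Berger-Rigoutsos algorithm referenced above covers tagged cells with rectangular patches by recursive, signature-based splitting. The serial sketch below is deliberately simplified: it cuts only at holes (zeros) in the row/column signatures or at the midpoint of the longer side, whereas the full algorithm also cuts at inflection points of the signature Laplacian, and the papers above parallelize these reductions.

```python
import numpy as np

def br_cluster(tags, efficiency=0.8):
    """Simplified Berger-Rigoutsos clustering over a 2-D boolean tag
    array. Returns boxes (i0, i1, j0, j1) covering all tagged cells;
    a box is accepted once its fill ratio meets the efficiency goal."""
    def shrink(i0, i1, j0, j1):
        # Tighten the box to the bounding box of its tagged cells.
        sub = np.argwhere(tags[i0:i1, j0:j1])
        (a, b), (c, d) = sub.min(0), sub.max(0) + 1
        return i0 + a, i0 + c, j0 + b, j0 + d

    def cluster(i0, i1, j0, j1):
        if not tags[i0:i1, j0:j1].any():
            return []
        i0, i1, j0, j1 = shrink(i0, i1, j0, j1)
        box = tags[i0:i1, j0:j1]
        if box.sum() / box.size >= efficiency:
            return [(i0, i1, j0, j1)]
        sig_i, sig_j = box.sum(axis=1), box.sum(axis=0)
        zi, zj = np.where(sig_i == 0)[0], np.where(sig_j == 0)[0]
        if zi.size:                       # cut along an empty row
            c = i0 + zi[0]
            return cluster(i0, c, j0, j1) + cluster(c, i1, j0, j1)
        if zj.size:                       # cut along an empty column
            c = j0 + zj[0]
            return cluster(i0, i1, j0, c) + cluster(i0, i1, c, j1)
        if i1 - i0 >= j1 - j0:            # no hole: bisect longer side
            c = (i0 + i1) // 2
            return cluster(i0, c, j0, j1) + cluster(c, i1, j0, j1)
        c = (j0 + j1) // 2
        return cluster(i0, i1, j0, c) + cluster(i0, i1, c, j1)

    return cluster(0, tags.shape[0], 0, tags.shape[1])
```

On two well-separated tagged blobs, the empty rows between them show up as zeros in the row signature, so the first cut cleanly splits the domain into one tight patch per blob.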
NASA Astrophysics Data System (ADS)
Gereben, Orsolya; Pusztai, László
2011-08-01
The invariant environment refinement technique, as applied to reverse Monte Carlo modelling [invariant environment refinement technique + reverse Monte Carlo (INVERT + RMC); M. J. Cliffe, M. T. Dove, D. A. Drabold, and A. L. Goodwin, Phys. Rev. Lett. 104, 125501 (2010), 10.1103/PhysRevLett.104.125501], is extended so that it is now applicable for interpreting the structure factor (instead of the pair distribution function). The new algorithm, called the local invariance calculation, is presented by the examples of amorphous silicon, phosphorus, and liquid argon. As a measure of the effectiveness of the new algorithm, the ratio of exactly fourfold coordinated Si atoms was larger than obtained previously by the INVERT-RMC scheme.
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1994-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: a gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
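The recursive subdivision described here can be sketched with a small quadtree. This is a toy illustration, not the paper's code: the paper stores its 2-D grid in a binary tree, a quadtree is used below for brevity, and the refinement criterion is a stand-in for the real geometry- and solution-based one.

```python
class Cell:
    """Quadtree cell: recursive subdivision of a single Cartesian cell
    covering the domain, with the leaves forming the computational grid."""
    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

    def refine(self, needs_refinement, max_level):
        # Split into four half-size children and recurse while the
        # criterion holds and the level cap is not reached.
        if self.level < max_level and needs_refinement(self):
            h = self.size / 2
            self.children = [Cell(self.x + i * h, self.y + j * h, h,
                                  self.level + 1)
                             for j in (0, 1) for i in (0, 1)]
            for c in self.children:
                c.refine(needs_refinement, max_level)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Hypothetical criterion: refine toward a feature at the origin, i.e.
# only cells whose lower-left corner touches (0, 0) subdivide.
root = Cell(0.0, 0.0, 1.0)
root.refine(lambda c: c.x == 0.0 and c.y == 0.0, max_level=3)
print(len(root.leaves()))  # -> 10
```

The tree structure gives cell-to-cell connectivity for free (parent/child links), which is the property the abstract highlights for carrying out solution-adaptive refinement.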
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: A gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment and other accepted computational results for a series of low and moderate Reynolds number flows.
A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model
Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A
2009-03-03
Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during the remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR, which relaxes the requirement that material boundaries must lie along mesh boundaries.
Automated segmentation refinement of small lung nodules in CT scans by local shape analysis.
Diciotti, Stefano; Lombardo, Simone; Falchini, Massimo; Picozzi, Giulia; Mascalchi, Mario
2011-12-01
One of the most important problems in the segmentation of lung nodules in CT imaging arises from possible attachments occurring between nodules and other lung structures, such as vessels or pleura. In this report, we address the problem of vessel attachments by proposing an automated correction method applied to an initial rough segmentation of the lung nodule. The method is based on a local shape analysis of the initial segmentation making use of 3-D geodesic distance map representations. The correction method has the advantage that it locally refines the nodule segmentation along recognized vessel attachments only, without modifying the nodule boundary elsewhere. The method was tested using a simple initial rough segmentation, obtained by a fixed image thresholding. The validation of the complete segmentation algorithm was carried out on small lung nodules identified in the ITALUNG screening trial and on small nodules of the lung image database consortium (LIDC) dataset. In fully automated mode, 217/256 (84.8%) lung nodules of ITALUNG and 139/157 (88.5%) individual marks of lung nodules of LIDC were correctly outlined, and excellent reproducibility was also observed. By using an additional interactive mode, based on a controlled manual interaction, 233/256 (91.0%) lung nodules of ITALUNG and 144/157 (91.7%) individual marks of lung nodules of LIDC were overall correctly segmented. The proposed correction method could also be usefully applied to any existing nodule segmentation algorithm for improving the segmentation quality of juxta-vascular nodules.
Refinement, Validation and Application of Cloud-Radiation Parameterization in a GCM
Dr. Graeme L. Stephens
2009-04-30
The research performed under this award was conducted along 3 related fronts: (1) Refinement and assessment of parameterizations of sub-grid scale radiative transport in GCMs. (2) Diagnostic studies that use ARM observations of clouds and convection in an effort to understand the effects of moist convection on its environment, including how convection influences clouds and radiation. This aspect focuses on developing and testing methodologies designed to use ARM data more effectively for use in atmospheric models, both at the cloud resolving model scale and the global climate model scale. (3) Use of (1) and (2), in combination with both models and observations of varying complexity, to study key radiation feedbacks. Our work toward these objectives thus involved three corresponding efforts. First, novel diagnostic techniques were developed and applied to ARM observations to understand and characterize the effects of moist convection on the dynamical and thermodynamical environment in which it occurs. Second, an in-house GCM radiative transfer algorithm (BUGSrad) was employed along with an optimal estimation cloud retrieval algorithm to evaluate the ability to reproduce cloudy-sky radiative flux observations. Assessments using a range of GCMs with various moist convective parameterizations to evaluate the fidelity with which the parameterizations reproduce key observable features of the environment were also started in the final year of this award. The third study area involved cloud radiation feedbacks, which we examined in both cloud resolving and global climate models.
Anderson, R W; Pember, R B; Elliot, N S
2000-09-26
A new method for the solution of the unsteady Euler equations has been developed. The method combines staggered grid Lagrangian techniques with structured local adaptive mesh refinement (AMR). This method is a precursor to a more general adaptive arbitrary Lagrangian Eulerian (ALE-AMR) algorithm under development, which will facilitate the solution of problems currently at and beyond the limits of traditional ALE methods by focusing computational resources where they are required. Many of the core issues involved in the development of the ALE-AMR method hinge upon the integration of AMR with a Lagrange step, which is the focus of the work described here. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. These new algorithmic components are first developed in one dimension and are then generalized to two dimensions. Solutions of several model problems involving shock hydrodynamics are presented and discussed.
2D photonic crystal complete band gap search using a cyclic cellular automaton refination
NASA Astrophysics Data System (ADS)
González-García, R.; Castañón, G.; Hernández-Figueroa, H. E.
2014-11-01
We present a refination method based on a cyclic cellular automaton (CCA) that simulates a crystallization-like process, aided with a heuristic evolutionary method called differential evolution (DE) used to perform an ordered search of full photonic band gaps (FPBGs) in a 2D photonic crystal (PC). The solution is proposed as a combinatorial optimization of the elements in a binary array. These elements represent the existence or absence of a dielectric material surrounded by air, thus representing a general geometry whose search space is defined by the number of elements in such array. A block-iterative frequency-domain method was used to compute the FPBGs on a PC, when present. DE has proved to be useful in combinatorial problems and we also present an implementation feature that takes advantage of the periodic nature of PCs to enhance the convergence of this algorithm. Finally, we used this methodology to find a PC structure with a 19% bandgap-to-midgap ratio without requiring previous information of suboptimal configurations and we made a statistical study of how it is affected by disorder in the borders of the structure compared with a previous work that uses a genetic algorithm.
Fakhari, Abbas; Lee, Taehun
2014-03-01
An adaptive-mesh-refinement (AMR) algorithm for the finite-difference lattice Boltzmann method (FDLBM) is presented in this study. The idea behind the proposed AMR is to remove the need for a tree-type data structure. Instead, pointer attributes are used to determine the neighbors of a certain block via appropriate adjustment of its children identifications. As a result, the memory and time required for tree traversal are completely eliminated, leaving us with an efficient algorithm that is easier to implement and use on parallel machines. To allow different mesh sizes at separate parts of the computational domain, the Eulerian formulation of the streaming process is invoked. As a result, there is no need for rescaling the distribution functions or using a temporal interpolation at the fine-coarse grid boundaries. The accuracy and efficiency of the proposed FDLBM AMR are extensively assessed by investigating a variety of vorticity-dominated flow fields, including Taylor-Green vortex flow, lid-driven cavity flow, thin shear layer flow, and the flow past a square cylinder.
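The pointer-attribute idea in the abstract above (O(1) neighbor lookup with no tree traversal) can be illustrated with a heavily simplified 1-D sketch. This is not the authors' data structure: the `Block` class, the two-child refinement, and the neighbor-patching rule are assumptions made only to show how refinement can update neighbor links directly.

```python
class Block:
    """A 1-D mesh block holding direct neighbor pointers (illustrative sketch)."""
    def __init__(self, level, left=None, right=None):
        self.level = level
        self.left, self.right = left, right   # direct neighbor pointers
        self.children = None

    def refine(self):
        """Split into two children and patch neighbor links in place."""
        lo = Block(self.level + 1, left=self.left)
        hi = Block(self.level + 1, right=self.right)
        lo.right, hi.left = hi, lo            # siblings point at each other
        # Patch the outer neighbors so their pointers skip the refined parent.
        if self.left is not None:
            self.left.right = lo
        if self.right is not None:
            self.right.left = hi
        self.children = (lo, hi)
        return lo, hi

# Three coarse blocks A-B-C; refine B, then query neighbors in O(1).
a, b, c = Block(0), Block(0), Block(0)
a.right, b.left = b, a
b.right, c.left = c, b
lo, hi = b.refine()
print(a.right is lo, hi.right is c)           # prints: True True
```

Because every block carries its neighbors as attributes, a neighbor query never walks the refinement hierarchy, which is the memory- and time-saving property the abstract claims for the tree-free design.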
Basic effects of pulp refining on fiber properties--a review.
Gharehkhani, Samira; Sadeghinezhad, Emad; Kazi, Salim Newaz; Yarmand, Hooman; Badarudin, Ahmad; Safaei, Mohammad Reza; Zubir, Mohd Nashrul Mohd
2015-01-22
The requirement for high-quality pulps, which are widely used in the paper industry, has increased the demand for the pulp refining (beating) process. Pulp refining is a promising approach to improving pulp quality by changing the fiber characteristics. The diversity of research on the effect of refining on fiber properties, which stems from differences in pulp sources, pulp consistency and refining equipment, has motivated us to provide a review of the studies over the last decade. In this article, the influence of pulp refining on structural properties, i.e., fibrillations, fine formation, fiber length, fiber curl, crystallinity and the distribution of surface chemical compositions, is reviewed. The effect of pulp refining on the electrokinetic properties of fiber, e.g., the surface and total charges of pulps, is discussed. In addition, an overview of different refining theories and refiners, as well as some tests for assessing pulp refining, is presented.
NASA Astrophysics Data System (ADS)
Hassane, Mamadou Maina F. Z.; Ackerer, P.
2017-02-01
In the context of parameter identification by inverse methods, an optimized adaptive downscaling parameterization is described in this work. The adaptive downscaling parameterization consists of (i) defining a parameter mesh for each parameter, independent of the flow model mesh, (ii) optimizing the parameters set related to the parameter mesh, and (iii) if the match between observed and computed heads is not accurate enough, creating a new parameter mesh via refinement (downscaling) and performing a new optimization of the parameters. Refinement and coarsening indicators are defined to optimize the parameter mesh refinement. The robustness of the refinement and coarsening indicators was tested by comparing the results of inversions using refinement without indicators, refinement with only refinement indicators and refinement with coarsening and refinement indicators. These examples showed that the indicators significantly reduce the number of degrees of freedom necessary to solve the inverse problem without a loss of accuracy. They, therefore, limit over-parameterization.
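The refine-where-the-misfit-is-largest loop described above can be sketched in a toy 1-D setting. This is an illustrative assumption-laden sketch, not the authors' parameterization: the piecewise-constant zonation, the per-zone misfit used as the refinement indicator, and the stopping threshold are all stand-ins, and the coarsening indicators are omitted.

```python
import numpy as np

def fit_zones(x, data, edges):
    """Least-squares constant per zone, plus each zone's residual misfit."""
    params, misfit = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x < hi)
        p = data[mask].mean()
        params.append(p)
        misfit.append(float(((data[mask] - p) ** 2).sum()))
    return np.array(params), np.array(misfit)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 400, endpoint=False)
truth = np.where(x < 0.25, 1.0, np.where(x < 0.75, 4.0, 2.0))
data = truth + 0.05 * rng.standard_normal(x.size)

edges = [0.0, 1.0]                        # parameter mesh: start with one zone
for _ in range(12):
    params, misfit = fit_zones(x, data, edges)
    if misfit.sum() < 2.0:                # match "accurate enough": stop
        break
    k = int(np.argmax(misfit))            # refinement indicator: worst zone
    edges.insert(k + 1, 0.5 * (edges[k] + edges[k + 1]))

print(len(edges) - 1)                     # prints: 4
```

The loop stops with only four zones because refinement is targeted at the zones that contribute most to the misfit, which is the over-parameterization-limiting behavior the abstract describes.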
Geist, G.A.; Howell, G.W.; Watkins, D.S.
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
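The BR algorithm itself is not available in standard libraries; as a point of reference only, the sketch below constructs the kind of narrowband, nearly tridiagonal upper Hessenberg matrix described above and computes its eigenvalues with the conventional dense QR-based solver (NumPy/LAPACK) that BR is benchmarked against. The matrix size and bandwidth are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, band = 200, 3                       # a few superdiagonals: "narrowband"
A = np.zeros((n, n))
for k in range(-1, band + 1):          # k = -1 subdiagonal => upper Hessenberg
    A += np.diag(rng.standard_normal(n - abs(k)), k)

eigenvalues = np.linalg.eigvals(A)     # dense QR iteration via LAPACK
print(eigenvalues.shape)               # prints: (200,)
```

The dense solver stores and works on the full n-by-n matrix; the abstract's storage and timing advantages for BR come from exploiting exactly this band structure, which the QR iteration does not preserve.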
Venezuela's stake in US refining may grow: xenophobia addressed
Not Available
1987-09-23
Is this an invasion of U.S. oil industry sovereignty, or a happy marriage of upstream and downstream between US and foreign interests? Venezuela, a founding member of the Organization of Petroleum Exporting Countries and a chief supplier to the US during times of peace and war, now owns half of two important US refining and marketing organizations. Many US marketers have felt uneasy about this foreign penetration of their turf. In this issue, for the sake of public information, the entire policy statement from the leader of that Venezuelan market strategy is provided. This issue also contains the following: (1) ED refining netback data for the US Gulf and West Coasts, Rotterdam, and Singapore as of late September, 1987; and (2) ED fuel price/tax series for countries of the Eastern Hemisphere, Sept. 19 edition. 4 figures, 6 tables.
Assume-Guarantee Abstraction Refinement Meets Hybrid Systems
NASA Technical Reports Server (NTRS)
Bogomolov, Sergiy; Frehse, Goran; Greitschus, Marius; Grosu, Radu; Pasareanu, Corina S.; Podelski, Andreas; Strump, Thomas
2014-01-01
Compositional verification techniques in the assume-guarantee style have been successfully applied to transition systems to efficiently reduce the search space by leveraging the compositional nature of the systems under consideration. We adapt these techniques to the domain of hybrid systems with affine dynamics. To build assumptions we introduce an abstraction based on location merging. We integrate the assume-guarantee style analysis with automatic abstraction refinement. We have implemented our approach in the symbolic hybrid model checker SpaceEx. The evaluation shows its practical potential. To the best of our knowledge, this is the first work combining assume-guarantee reasoning with automatic abstraction refinement in the context of hybrid automata.
Post-refinement multiscale method for pin power reconstruction
Collins, B.; Seker, V.; Downar, T.; Xu, Y.
2012-07-01
The ability to accurately predict local pin powers in nuclear reactors is necessary to understand the mechanisms that cause fuel pin failure during steady state and transient operation. In the research presented here, methods are developed to improve the local solution using high order methods with boundary conditions from a low order global solution. Several different core configurations were tested to determine the improvement in the local pin powers compared to the standard techniques based on diffusion theory and pin power reconstruction (PPR). The post-refinement multiscale methods use the global solution to determine boundary conditions for the local solution. The local solution is solved using either a fixed boundary source or an albedo boundary condition; this solution is 'post-refinement' and thus has no impact on the global solution. (authors)
Segmental Refinement: A Multigrid Technique for Data Locality
Adams, Mark
2014-10-27
We investigate a technique - segmental refinement (SR) - proposed by Brandt in the 1970s as a low memory multigrid method. The technique is attractive for modern computer architectures because it provides high data locality, minimizes network communication, is amenable to loop fusion, and is naturally highly parallel and asynchronous. The network communication minimization property was recognized by Brandt and Diskin in 1994; we continue this work by developing a segmental refinement method for a finite volume discretization of the 3D Laplacian on massively parallel computers. The asymptotic complexities required to maintain textbook multigrid efficiency are explored experimentally with a simple SR method. A two-level memory model is developed to compare the asymptotic communication complexity of a proposed SR method with traditional parallel multigrid. Performance and scalability are evaluated on a Cray XC30 with up to 64K cores. We achieve modest improvement in scalability over traditional parallel multigrid with a simple SR implementation.
PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit
NASA Technical Reports Server (NTRS)
MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles
1999-01-01
In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.
Measuring coalition functioning: refining constructs through factor analysis.
Brown, Louis D; Feinberg, Mark E; Greenberg, Mark T
2012-08-01
Internal and external coalition functioning is an important predictor of coalition success that has been linked to perceived coalition effectiveness, coalition goal achievement, coalition ability to support evidence-based programs, and coalition sustainability. Understanding which aspects of coalition functioning best predict coalition success requires the development of valid measures of empirically unique coalition functioning constructs. The goal of the present study is to examine and refine the psychometric properties of coalition functioning constructs in the following six domains: leadership, interpersonal relationships, task focus, participation benefits/costs, sustainability planning, and community support. The authors used factor analysis to identify problematic items in our original measure and then piloted new items and scales to create a more robust, psychometrically sound, multidimensional measure of coalition functioning. Scales displayed good construct validity through correlations with other measures. Discussion considers the strengths and weaknesses of the refined instrument.
Contactless heater floating zone refining and crystal growth
NASA Technical Reports Server (NTRS)
Kou, Sindo (Inventor); Lan, Chung-Wen (Inventor)
1993-01-01
Floating zone refining or crystal growth is carried out by providing rapid relative rotation of a feed rod and finish rod while providing heat to the junction between the two rods so that significant forced convection occurs in the melt zone between the two rods. The forced convection distributes heat in the melt zone to allow the rods to be melted through with a much shorter melt zone length than possible utilizing conventional floating zone processes. One of the rods can be rotated with respect to the other, or both rods can be counter-rotated, with typical relative rotational speeds of the rods ranging from 200 revolutions per minute (RPM) to 400 RPM or greater. Zone refining or crystal growth is carried out by traversing the melt zone through the feed rod.
Prioritization and Refinement of Clinical Data Elements within EHR Systems
Collins, Sarah A; Gesner, Emily; Mar, Perry L.; Colburn, Doreen M.; Rocha, Roberto A.
2016-01-01
Standardization of clinical data element (CDE) definitions is foundational to track, interpret, and analyze patient states, populations, and costs across providers, settings and time – critical activities to achieve the Triple Aim: improving the experience of care, improving the health of populations, and reducing per capita healthcare costs. We defined and implemented two analytical methods to prioritize and refine CDE definitions within electronic health records (EHRs), taking into account resource restrictions to carry out the analysis and configuration changes: 1) analysis of downstream data needs to identify high priority clinical topics, and 2) gap analysis of EHR CDEs when compared to reference models for the same clinical topics. We present use cases for six clinical topics. Pain Assessment and Skin Alteration Assessment were topics with the highest regulatory and non-regulatory downstream data needs and with significant gaps across documentation artifacts in our system, confirming that these topics should be refined first. PMID:28269837
Measuring Coalition Functioning: Refining Constructs through Factor Analysis
Brown, Louis D.; Feinberg, Mark E.; Greenberg, Mark T.
2013-01-01
Internal and external coalition functioning is an important predictor of coalition success that has been linked to perceived coalition effectiveness, coalition goal achievement, coalition ability to support evidence-based programs, and coalition sustainability. Understanding which aspects of coalition functioning best predict coalition success requires the development of valid measures of empirically unique coalition functioning constructs. The goal of the present study is to examine and refine the psychometric properties of coalition functioning constructs in the following six domains: leadership, interpersonal relationships, task focus, participation benefits/costs, sustainability planning, and community support. We used factor analysis to identify problematic items in our original measure and then piloted new items and scales to create a more robust, psychometrically sound, multidimensional measure of coalition functioning. Scales displayed good construct validity through correlations with other measures. Discussion considers the strengths and weaknesses of the refined instrument. PMID:22193112
Facade model refinement by fusing terrestrial laser data and image
NASA Astrophysics Data System (ADS)
Liu, Yawen; Qin, Sushun
2015-12-01
The building facade model is one of the main landscapes of a city and basic data of city geographic information. It is widely used in accurate path planning, real navigation through the urban environment, location-based applications, etc. In this paper, a method of facade model refinement by fusing terrestrial laser data and image is presented. It uses the matching of model edges and image lines, combined with laser data verification, and effectively refines the facade geometry model reconstructed from laser data. The laser data of geometric structures on the building facade, such as windows, balconies and doors, are segmented and used as a constraint for further selecting the optical model edges located at the boundary between point data and no data. The results demonstrate that the deviation of model edges caused by the laser sampling interval can be removed by the proposed method.
China expands refining sector to handle booming oil demand
Not Available
1993-05-10
China's refining sector is in the midst of a major expansion and reorganization in response to booming domestic demand for petroleum products. Plans call for hiking crude processing capacity to 3.9 million b/d in 1995 from the current 3.085 million b/d. Much of that 26% increase will come where the products demand growth is the strongest: China's coastal provinces, notably those in the southeast. Despite the demand surge, China's refineries operated at only 74% of capacity in 1991, and projections for 1992 weren't much better. Domestic crude supply is limited because of Beijing's insistence on maintaining crude export levels, a major source of hard currency foreign exchange. The paper discusses the superheated demand; exports and imports; the refining infrastructure; the Shenzhen refinery; Hong Kong demand; southeast coast demand; 1993 plans; and foreign investment.
Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.
2013-01-01
Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows natural and interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The performed experiments exhibited a significant increase in performance in terms of mean absolute surface distance errors (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation
Decadal climate prediction with a refined anomaly initialisation approach
NASA Astrophysics Data System (ADS)
Volpi, Danila; Guemas, Virginie; Doblas-Reyes, Francisco J.; Hawkins, Ed; Nichols, Nancy K.
2017-03-01
In decadal prediction, the objective is to exploit both the sources of predictability from the external radiative forcings and from the internal variability to provide the best possible climate information for the next decade. Predicting the climate system internal variability relies on initialising the climate model from observational estimates. We present a refined method of anomaly initialisation (AI) applied to the ocean and sea ice components of the global climate forecast model EC-Earth, with the following key innovations: (1) the use of a weight applied to the observed anomalies, in order to avoid the risk of introducing anomalies recorded in the observed climate, whose amplitude does not fit in the range of the internal variability generated by the model; (2) the AI of the ocean density, instead of calculating it from the anomaly initialised state of temperature and salinity. An experiment initialised with this refined AI method has been compared with a full field and standard AI experiment. Results show that the use of such refinements enhances the surface temperature skill over part of the North and South Atlantic, part of the South Pacific and the Mediterranean Sea for the first forecast year. However, part of such improvement is lost in the following forecast years. For the tropical Pacific surface temperature, the full field initialised experiment performs the best. The prediction of the Arctic sea-ice volume is improved by the refined AI method for the first three forecast years and the skill of the Atlantic multidecadal oscillation is significantly increased compared to a non-initialised forecast, along the whole forecast time.
Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Leng, W.; Zhong, S.
2008-12-01
In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. New techniques, adaptive mesh refinement (AMR), allow local mesh refinement wherever high resolution is needed, while leaving other regions with relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement the AMR techniques into 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method into the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results in van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can be easily refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to precisely trace their evolution. Our AMR code is thus well suited to thermal-chemical convection problems that require high resolution to resolve the evolution of chemical boundaries, such as entrainment problems [Sleep, 1988].
REFINING AND END USE STUDY OF COAL LIQUIDS
Unknown
2002-01-01
This document summarizes all of the work conducted as part of the Refining and End Use Study of Coal Liquids. There were several distinct objectives set, as the study developed over time: (1) Demonstration of a Refinery Accepting Coal Liquids; (2) Emissions Screening of Indirect Diesel; (3) Biomass Gasification F-T Modeling; and (4) Updated Gas to Liquids (GTL) Baseline Design/Economic Study.
French refiners grappling with new octane specs, environmental rules
Not Available
1991-11-18
After emerging from the doldrums of the 1980s, France's refining industry faces new challenges to meet tightening gasoline and diesel specifications and greater environmental pressures. This paper reports on a three-stage investment program under way to adapt the pared-down and restructured plant network to satisfy a changing products market. Current outlays, involving the first stage of investments, are geared to the rapidly developing unleaded gasoline market to meet quantity and quality requirements.
Dietary Refinements in a Sensitive Fish Liver Tumor Model
1991-12-20
Composition of the Purified Casein (PC) diet for medaka (percent composition): vitamin-free casein 31.0, wheat gluten 15.0, dextrin 27.2, refined soy... and plankton of unknown source and variability and may contribute confounding environmental contaminants. Since these diets are flaked, attempts to add... with a given dose of initiating carcinogen. Furthermore, since dietary contaminants are important modulators of carcinogenesis, a highly purified...
A Precision Recursive Estimate for Ephemeris Refinement (PREFER)
NASA Technical Reports Server (NTRS)
Gibbs, B.
1980-01-01
A recursive filter/smoother orbit determination program was developed to refine the ephemerides produced by a batch orbit determination program (e.g., CELEST, GEODYN). The program PREFER can handle a variety of ground and satellite to satellite tracking types as well as satellite altimetry. It was tested on simulated data which contained significant modeling errors and the results clearly demonstrate the superiority of the program compared to batch estimation.
Decadal climate prediction with a refined anomaly initialisation approach
NASA Astrophysics Data System (ADS)
Volpi, Danila; Guemas, Virginie; Doblas-Reyes, Francisco J.; Hawkins, Ed; Nichols, Nancy K.
2016-06-01
In decadal prediction, the objective is to exploit both the sources of predictability from the external radiative forcings and from the internal variability to provide the best possible climate information for the next decade. Predicting the climate system internal variability relies on initialising the climate model from observational estimates. We present a refined method of anomaly initialisation (AI) applied to the ocean and sea ice components of the global climate forecast model EC-Earth, with the following key innovations: (1) the use of a weight applied to the observed anomalies, in order to avoid the risk of introducing anomalies recorded in the observed climate, whose amplitude does not fit in the range of the internal variability generated by the model; (2) the AI of the ocean density, instead of calculating it from the anomaly initialised state of temperature and salinity. An experiment initialised with this refined AI method has been compared with a full field and standard AI experiment. Results show that the use of such refinements enhances the surface temperature skill over part of the North and South Atlantic, part of the South Pacific and the Mediterranean Sea for the first forecast year. However, part of such improvement is lost in the following forecast years. For the tropical Pacific surface temperature, the full field initialised experiment performs the best. The prediction of the Arctic sea-ice volume is improved by the refined AI method for the first three forecast years and the skill of the Atlantic multidecadal oscillation is significantly increased compared to a non-initialised forecast, along the whole forecast time.
Evolving a Puncture Black Hole with Fixed Mesh Refinement
NASA Technical Reports Server (NTRS)
Imbiriba, Breno; Baker, John; Choi, Dae-Il; Centrella, Joan; Fiske, David R.; Brown, J. David; van Meter, James R.; Olson, Kevin
2004-01-01
We present a detailed study of the effects of mesh refinement boundaries on the convergence and stability of simulations of black hole spacetimes. We find no technical problems. In our applications of this technique to the evolution of puncture initial data, we demonstrate that it is possible to simultaneously maintain second-order convergence near the puncture and extend the outer boundary beyond 100M, thereby approaching the asymptotically flat region in which boundary condition problems are less difficult.
Deformation Banding and Grain Refinement in FCC Materials
2003-03-01
Chips of AZ91 Magnesium and Mechanical Properties of Extruded Bars” Materials Transactions JIM, vol. 36, pp. 1249-1254, 1995. 95. H. Watanabe, K...material properties . For most of this history the mechanisms involved during deformation and annealing were not known and understanding of these...to this study will be provided. Grain size is an important factor in the mechanical properties of materials. Generally, refinement of the grain
Surgical technique refinements in head and neck oncologic surgery.
Liu, Jeffrey C; Shah, Jatin P
2010-06-15
The head and neck region poses a challenging arena for oncologic surgery. Diseases and their treatment can affect a myriad of functions, including sight, hearing, taste, smell, breathing, speaking, swallowing, facial expression, and appearance. This review discusses several areas where refinements in surgical techniques have led to improved patient outcomes. This includes surgical incisions, neck lymphadenectomy, transoral laser microsurgery, minimally invasive thyroid surgery, and the use of vascularized free flaps for oromandibular reconstruction.
Refining the Evaluation of Uncertainties in [UTC - UTC(k)]
2005-08-01
different contributions, mainly the time transfer equipment of the laboratories that are being used for different links may correlate the results. To...average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed...Symposium and 37th Precise Time and Time Interval (PTTI) Systems and Applications Meeting, 29-31 Aug 2005, Vancouver, BC, Canada 14. ABSTRACT We refine the
JT9D ceramic outer air seal system refinement program
NASA Technical Reports Server (NTRS)
Gaffin, W. O.
1982-01-01
The abradability and durability characteristics of the plasma sprayed system were improved by refinement and optimization of the plasma spray process and the metal substrate design. The acceptability of the final seal system for engine testing was demonstrated by an extensive rig test program which included thermal shock tolerance, thermal gradient, thermal cycle, erosion, and abradability tests. An interim seal system design was also subjected to 2500 endurance test cycles in a JT9D-7 engine.
Applying Doubly Labeled Transition Systems to the Refinement Paradox
2005-09-01
Schmidt. Binary relations for abstraction and refinement. Technical Report KSU Report 2000-3 .8, Kansas State University, 2000. [6] Joseph Goguen and...abstraction. Artificial Intelli- gence, 57(2-3):323–390, 1992. [48] B Davey and H Priestley . Introduction To Lattices and Order. University of Oxford, 2nd...1977. [66] Joseph Goguen and J. Meseguer. Interference control and unwinding. In IEEE Symposium on Security and Privacy, pages 75–86, Oakland, CA, April
Decontamination of steel by melt refining: A literature review
Ozturk, B.; Fruehan, R.J.
1994-12-31
It has been reported that a large amount of metal waste is produced annually by nuclear fuel processing and nuclear power plants. These metal wastes are contaminated with radioactive elements, such as uranium and plutonium. Current Department of Energy guidelines require retrievable storage of all metallic wastes containing transuranic elements above a certain level. Because of the high cost, it is important to develop an effective decontamination and volume reduction method for low-level contaminated metals. It has been shown by some investigators that a melt refining technique can be used for the processing of contaminated metal wastes. In this process, contaminated metal is melted with a suitable flux. The radioactive elements are oxidized and transferred to a slag phase. In order to develop a commercial process it is important to have information on the thermodynamics and kinetics of the removal. Therefore, a literature search was carried out to evaluate the available information on the decontamination of uranium- and transuranic-contaminated plain steel, copper, and stainless steel by a melt refining technique. Emphasis was given to the thermodynamics and kinetics of the removal. Data published in the literature indicate that it is possible to reduce the concentration of radioactive elements to a very low level by the melt refining method. 20 refs.
The state of animal welfare in the context of refinement.
Zurlo, Joanne; Hutchinson, Eric
2014-01-01
The ultimate goal of the Three Rs is the full replacement of animals used in biomedical research and testing. However, replacement is unlikely to occur in the near future; therefore the scientific community as a whole must continue to devote considerable effort to ensure optimal animal welfare for the benefit of the science and the animals, i.e., the R of refinement. Laws governing the care and use of laboratory animals have recently been revised in Europe and the US and these place greater emphasis on promoting the well-being of the animals in addition to minimizing pain and distress. Social housing for social species is now the default condition, which can present a challenge in certain experimental settings and for certain species. The practice of positive reinforcement training of laboratory animals, particularly non-human primates, is gathering momentum but is not yet universally employed. Enhanced consideration of refinement extends to rodents, particularly mice, whose use is still increasing as more genetically modified models are generated. The wastage of extraneous mice and the method of their euthanasia are refinement issues that still need to be addressed. An international, concerted effort into defining the needs of laboratory animals is still necessary to improve the quality of the animal models used as well as their welfare.
Mesh refinement for uncertainty quantification through model reduction
Li, Jing; Stinis, Panos
2015-01-01
We present a novel way of deciding when and where to refine a mesh in probability space in order to facilitate uncertainty quantification in the presence of discontinuities in random space. A discontinuity in random space makes the application of generalized polynomial chaos expansion techniques prohibitively expensive. The reason is that for discontinuous problems, the expansion converges very slowly. An alternative to using higher terms in the expansion is to divide the random space into smaller elements where a lower degree polynomial is adequate to describe the randomness. In general, the partition of the random space is a dynamic process since some areas of the random space, particularly around the discontinuity, need more refinement than others as time evolves. In the current work we propose a way to decide when and where to refine the random space mesh based on the use of a reduced model. The idea is that a good reduced model can monitor accurately, within a random space element, the cascade of activity to higher degree terms in the chaos expansion. In turn, this facilitates the efficient allocation of computational resources to the areas of random space where they are more needed. For the Kraichnan–Orszag system, the prototypical system to study discontinuities in random space, we present theoretical results which show why the proposed method is sound and numerical results which corroborate the theory.
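The refinement trigger sketched in the abstract above can be illustrated with a toy check on polynomial chaos coefficients. This is a minimal sketch assuming a simple tail-energy monitor; the function name, tail fraction, and threshold are illustrative stand-ins, not the paper's reduced-model criterion:

```python
import numpy as np

def needs_refinement(pc_coeffs, tail_fraction=0.25, tol=1e-3):
    """Flag a random-space element for splitting when the energy carried by
    the highest-degree polynomial chaos coefficients exceeds a tolerance
    (illustrative criterion only)."""
    coeffs = np.asarray(pc_coeffs, dtype=float)
    n_tail = max(1, int(len(coeffs) * tail_fraction))
    tail_energy = np.sum(coeffs[-n_tail:] ** 2)
    total_energy = np.sum(coeffs ** 2)
    return bool(tail_energy / total_energy > tol)

# Smooth response: coefficients decay fast, no refinement needed.
smooth = [1.0, 0.5, 0.1, 1e-6, 1e-8]
# Near a discontinuity: slow decay, activity cascades to high degrees.
rough = [1.0, 0.8, 0.6, 0.5, 0.4]
```

Elements flagged this way would be split, and a lower-degree expansion used on each child element.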
Shading-based DEM refinement under a comprehensive imaging model
NASA Astrophysics Data System (ADS)
Peng, Jianwei; Zhang, Yi; Shan, Jie
2015-12-01
This paper introduces an approach to refine coarse digital elevation models (DEMs) based on the shape-from-shading (SfS) technique using a single image. Different from previous studies, this approach is designed for heterogeneous terrain and derived from a comprehensive (extended) imaging model accounting for the combined effect of atmosphere, reflectance, and shading. To solve this intrinsic ill-posed problem, the least squares method and a subsequent optimization procedure are applied in this approach to estimate the shading component, from which the terrain gradient is recovered with a modified optimization method. Integrating the resultant gradients then yields a refined DEM at the same resolution as the input image. The proposed SfS method is evaluated using 30 m Landsat-8 OLI multispectral images and 30 m SRTM DEMs. As demonstrated in this paper, the proposed approach is able to reproduce terrain structures with a higher fidelity; and at medium to large up-scale ratios, can achieve elevation accuracy 20-30% better than the conventional interpolation methods. Further, this property is shown to be stable and independent of topographic complexity. With the ever-increasing public availability of satellite images and DEMs, the developed technique is meaningful for global or local DEM product refinement.
Refinement of Atomic Structures Against cryo-EM Maps.
Murshudov, G N
2016-01-01
This review describes some of the methods for atomic structure refinement (fitting) against medium/high-resolution single-particle cryo-EM reconstructed maps. Some of the tools developed for macromolecular X-ray crystal structure analysis, especially those encapsulating prior chemical and structural information can be transferred directly for fitting into cryo-EM maps. However, despite the similarities, there are significant differences between data produced by these two techniques; therefore, different likelihood functions linking the data and model must be used in cryo-EM and crystallographic refinement. Although tools described in this review are mostly designed for medium/high-resolution maps, if maps have sufficiently good quality, then these tools can also be used at moderately low resolution, as shown in one example. In addition, the use of several popular crystallographic methods is strongly discouraged in cryo-EM refinement, such as 2Fo-Fc maps, solvent flattening, and feature-enhanced maps (FEMs) for visualization and model (re)building. Two problems in the cryo-EM field are overclaiming resolution and severe map oversharpening. Both of these should be avoided; if data of higher resolution than the signal are used, then overfitting of model parameters into the noise is unavoidable, and if maps are oversharpened, then at least parts of the maps might become very noisy and ultimately uninterpretable. Both of these may result in suboptimal and even misleading atomic models.
The role of optimization in structural model refinement
NASA Technical Reports Server (NTRS)
Lehman, L. L.
1984-01-01
To evaluate the role that optimization can play in structural model refinement, it is necessary to examine the existing environment for the structural design/structural modification process. The traditional approach to design, analysis, and modification is illustrated. Typically, a cyclical path is followed in evaluating and refining a structural system, with parallel paths existing between the real system and the analytical model of the system. The major failing of the existing approach is the rather weak link of communication between the cycle for the real system and the cycle for the analytical model. Only at the expense of much human effort can data sharing and comparative evaluation be enhanced for the two parallel cycles. Much of the difficulty can be traced to the lack of a user-friendly, rapidly reconfigurable engineering software environment for facilitating data and information exchange. Until this type of software environment becomes readily available to the majority of the engineering community, the role of optimization will not be able to reach its full potential and engineering productivity will continue to suffer. A key issue in current engineering design, analysis, and test is the definition and development of an integrated engineering software support capability. The data and solution flow for this type of integrated engineering analysis/refinement system is shown.
Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units
Beckingsale, D. A.; Gaudin, W. P.; Hornung, R. D.; Gunney, B. T.; Gamblin, T.; Herdman, J. A.; Jarvis, S. A.
2014-11-17
Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy, adding complex communication and data movement requirements to a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory’s Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.
Margolis, C Z
1983-02-04
The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared with decision analysis as to their clinical usefulness. Three objections to clinical algorithms are answered, including the one that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.
Optimization Algorithms in Optimal Predictions of Atomistic Properties by Kriging.
Di Pasquale, Nicodemo; Davie, Stuart J; Popelier, Paul L A
2016-04-12
The machine learning method kriging is an attractive tool to construct next-generation force fields. Kriging can accurately predict atomistic properties, which involves optimization of the so-called concentrated log-likelihood function (i.e., fitness function). The difficulty of this optimization problem quickly escalates in response to an increase in either the number of dimensions of the system considered or the size of the training set. In this article, we demonstrate and compare the use of two search algorithms, namely, particle swarm optimization (PSO) and differential evolution (DE), to rapidly obtain the maximum of this fitness function. The ability of these two algorithms to find a stationary point is assessed by using the first derivative of the fitness function. Finally, the converged position obtained by PSO and DE is refined through the limited-memory Broyden-Fletcher-Goldfarb-Shanno bounded (L-BFGS-B) algorithm, which belongs to the class of quasi-Newton algorithms. We show that both PSO and DE are able to come close to the stationary point, even in high-dimensional problems. They do so in a reasonable amount of time, compared to that with the Newton and quasi-Newton algorithms, regardless of the starting position in the search space of kriging hyperparameters. The refinement through L-BFGS-B is able to give the position of the maximum with whichever precision is desired.
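The two-stage strategy described above (population-based global search, then a quasi-Newton polish) can be sketched with SciPy's stock implementations of differential evolution and L-BFGS-B. The objective below is a toy multimodal surrogate, not the kriging concentrated log-likelihood:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Toy stand-in for a multimodal (negative) concentrated log-likelihood in
# one kriging hyperparameter theta; the function and bounds are assumptions.
def neg_log_likelihood(theta):
    t = theta[0]
    return np.sin(3.0 * t) + 0.1 * (t - 2.0) ** 2

bounds = [(0.0, 6.0)]

# Stage 1: global search with differential evolution (DE).
de = differential_evolution(neg_log_likelihood, bounds, seed=0)

# Stage 2: refine the DE result with the bounded quasi-Newton L-BFGS-B.
polished = minimize(neg_log_likelihood, de.x, method="L-BFGS-B", bounds=bounds)
```

DE (or PSO) gets close to the basin of the global optimum regardless of the start point; L-BFGS-B then locates the stationary point to whatever precision is desired, mirroring the division of labour in the abstract.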
Phase unwrapping using an extrapolation-projection algorithm
NASA Astrophysics Data System (ADS)
Marendic, Boris; Yang, Yongyi; Stark, Henry
2006-08-01
We explore an approach to the unwrapping of two-dimensional phase functions using a robust extrapolation-projection algorithm. Phase unwrapping is essential for imaging systems that construct the image from phase information. Unlike some existing methods where unwrapping is performed locally on a pixel-by-pixel basis, this work approaches the unwrapping problem from a global point of view. The unwrapping is done iteratively by a modification of the Gerchberg-Papoulis extrapolation algorithm, and the solution is refined by projecting onto the available global data at each iteration. Robustness of the algorithm is demonstrated through its performance in a noisy environment, and in comparison with a least-squares algorithm well-known in the literature.
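For contrast with the global iterative method above, the local, sample-by-sample unwrapping strategy it improves upon can be demonstrated in one dimension with NumPy's np.unwrap (toy data; not the paper's algorithm):

```python
import numpy as np

# Wrap a smooth phase ramp into (-pi, pi], then recover it by integrating
# wrapped phase differences, the local strategy that np.unwrap implements.
true_phase = np.linspace(0.0, 6 * np.pi, 200)   # smooth 1-D phase ramp
wrapped = np.angle(np.exp(1j * true_phase))     # wrapped to (-pi, pi]
recovered = np.unwrap(wrapped)                  # local, sample-by-sample

max_err = np.max(np.abs(recovered - true_phase))
```

The local approach is exact here because adjacent samples differ by less than pi; it is precisely in noisy settings, where that assumption fails pixel-by-pixel, that the global extrapolation-projection iteration is argued to be more robust.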
An Automatic Registration Algorithm for 3D Maxillofacial Model
NASA Astrophysics Data System (ADS)
Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng
2016-09-01
3D image registration aims at aligning two 3D data sets in a common coordinate system, which has been widely used in computer vision, pattern recognition and computer assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including facial surface and skull models. Our proposed registration algorithm can achieve a good alignment result between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT features extraction and FPFH descriptors construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
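Step (3), the ICP refinement stage, can be sketched in a few lines of NumPy. This is a minimal point-to-point ICP on toy data, not the authors' pipeline; the grid "model" and the small misalignment are assumptions chosen so that nearest-neighbour matching succeeds:

```python
import numpy as np

def best_rigid_transform(src, dst):
    # Kabsch/SVD least-squares rotation R and translation t for known
    # correspondences: minimizes sum ||R src_i + t - dst_i||^2.
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dst_c - R @ src_c

def icp(src, dst, n_iters=10):
    # Point-to-point ICP: match each source point to its nearest
    # destination point (brute force), solve the rigid fit, repeat.
    cur = src.copy()
    for _ in range(n_iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# Toy model: a unit grid, misaligned by a small rotation plus shift,
# mimicking the coarse alignment that ICP is asked to refine.
grid = np.stack(np.meshgrid(*[np.arange(4.0) - 1.5] * 3), -1).reshape(-1, 3)
theta = 0.05
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = grid @ Rz.T + np.array([0.05, -0.02, 0.03])
aligned = icp(src, grid)
```

ICP only refines: it needs the coarse alignment from steps (1)-(2) to land within the basin where nearest neighbours are the true correspondences.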
An Improved Snake Model for Refinement of Lidar-Derived Building Roof Contours Using Aerial Images
NASA Astrophysics Data System (ADS)
Chen, Qi; Wang, Shugen; Liu, Xiuguo
2016-06-01
Building roof contours are considered as very important geometric data, which have been widely applied in many fields, including but not limited to urban planning, land investigation, change detection and military reconnaissance. Currently, the demand on building contours at a finer scale (especially in urban areas) has been raised in a growing number of studies such as urban environment quality assessment, urban sprawl monitoring and urban air pollution modelling. LiDAR is known as an effective means of acquiring 3D roof points with high elevation accuracy. However, the precision of the building contour obtained from LiDAR data is restricted by its relatively low scanning resolution. With the use of the texture information from high-resolution imagery, the precision can be improved. In this study, an improved snake model is proposed to refine the initial building contours extracted from LiDAR. First, an improved snake model is constructed with the constraints of the deviation angle, image gradient, and area. Then, the nodes of the contour are moved within a certain range to find the best optimized result using a greedy algorithm. Considering both precision and efficiency, the candidate shift positions of the contour nodes are constrained, and the searching strategy for the candidate nodes is explicitly designed. The experiments on three datasets indicate that the proposed method for building contour refinement is effective and feasible. The average quality index is improved from 91.66% to 93.34%. The statistics of the evaluation results for every single building demonstrate that 77.0% of the contours are updated with a higher quality index.
NASA Astrophysics Data System (ADS)
Fakhari, Abbas; Lee, Taehun
2013-11-01
A novel adaptive mesh refinement (AMR) algorithm for the numerical solution of fluid flow problems is presented in this study. The proposed AMR algorithm can be used to solve partial differential equations including, but not limited to, the Navier-Stokes equations using an AMR technique. Here, the lattice Boltzmann method (LBM) is employed as a substitute of the nearly incompressible Navier-Stokes equations. Besides its simplicity, the proposed AMR algorithm is straightforward and yet efficient. The idea is to remove the need for a tree-type data structure by using the pointer attributes in a unique way, along with an appropriate adjustment of the child block's IDs, to determine the neighbors of a certain block. Thanks to the unique way of invoking pointers, there is no need to construct a quad-tree (in 2D) or oct-tree (in 3D) data structure for maintaining the connectivity data between different blocks. As a result, the memory and time required for tree traversal are completely eliminated, leaving us with a clean and efficient algorithm that is easier to implement and use on parallel machines. Several benchmark studies are carried out to assess the accuracy and efficiency of the proposed AMR-LBM, including lid-driven cavity flow, vortex shedding past a square cylinder, and Kelvin-Helmholtz instability for single-phase and multiphase fluids.
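The pointer/ID arithmetic that replaces the tree can be illustrated with a deterministic child-numbering scheme. The numbering below (four children per block in 2D, with IDs 4*parent+1 through 4*parent+4) is an assumption for illustration, not the paper's exact convention:

```python
# With children of block b numbered 4*b+1 .. 4*b+4, a block's parent and
# siblings follow from integer arithmetic on its ID alone, so no quad-tree
# has to be stored or traversed to recover the hierarchy.

def children_ids(block_id):
    return [4 * block_id + k for k in range(1, 5)]

def parent_id(block_id):
    return (block_id - 1) // 4

def sibling_ids(block_id):
    first = 4 * parent_id(block_id) + 1
    return [i for i in range(first, first + 4) if i != block_id]
```

In 3D the same idea applies with eight children per block (IDs 8*parent+1 through 8*parent+8), eliminating the oct-tree in the same way.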
U.S. Settles with Gasoline Refiner to Reduce Emissions at Utah Facility
WASHINGTON -- The U.S. Environmental Protection Agency (EPA) and the Department of Justice today announced a settlement with HollyFrontier Corporation subsidiaries (HollyFrontier Refining & Marketing LLC, Frontier El Dorado Refining, LLC, Holly
40 CFR 80.1342 - What compliance options are available to small refiners under this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Benzene Small Refiner Provisions § 80.1342 What compliance options are available to small refiners under... this section must comply with the applicable benzene standards at § 80.1230 beginning with the...
40 CFR 80.1339 - Who is not eligible for the provisions for small refiners?
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... section, the refiner may not generate gasoline benzene credits under § 80.1275(b)(3) for any of...
40 CFR 80.1339 - Who is not eligible for the provisions for small refiners?
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... section, the refiner may not generate gasoline benzene credits under § 80.1275(b)(3) for any of...
40 CFR 80.1342 - What compliance options are available to small refiners under this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Benzene Small Refiner Provisions § 80.1342 What compliance options are available to small refiners under... this section must comply with the applicable benzene standards at § 80.1230 beginning with the...
40 CFR 80.1339 - Who is not eligible for the provisions for small refiners?
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Small Refiner... section, the refiner may not generate gasoline benzene credits under § 80.1275(b)(3) for any of...
40 CFR 80.1342 - What compliance options are available to small refiners under this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Benzene Small Refiner Provisions § 80.1342 What compliance options are available to small refiners under... this section must comply with the applicable benzene standards at § 80.1230 beginning with the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-08
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF THE TREASURY Internal Revenue Service Credit for Renewable Electricity Production, Refined Coal Production, and Indian... availability of the credit for renewable electricity production, refined coal production, and Indian...
F-8C adaptive control law refinement and software development
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Stein, G.
1981-01-01
An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.
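The idea of parallel filter channels at fixed points in parameter space can be sketched for a scalar toy system: run one Kalman filter per candidate parameter value, accumulate each channel's measurement log-likelihood, and pick the winner, avoiding iterative maximum-likelihood search. All numbers below are illustrative, not the F-8C design values:

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, q, r, n = 0.9, 0.05, 0.1, 400

# Simulate x[k+1] = a*x[k] + w,  y[k] = x[k] + v  (scalar toy plant).
x, ys = 0.0, []
for _ in range(n):
    x = a_true * x + rng.normal(0.0, np.sqrt(q))
    ys.append(x + rng.normal(0.0, np.sqrt(r)))

def channel_loglik(a, ys):
    # One Kalman filter channel, frozen at parameter value a; returns the
    # accumulated log-likelihood of the innovations under that model.
    xhat, p, ll = 0.0, 1.0, 0.0
    for y in ys:
        xhat, p = a * xhat, a * a * p + q          # predict
        s = p + r                                   # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * s) + (y - xhat) ** 2 / s)
        k = p / s                                   # Kalman gain
        xhat, p = xhat + k * (y - xhat), (1.0 - k) * p  # update
    return ll

# Parallel channels at fixed locations in parameter space.
candidates = [0.5, 0.7, 0.9, 1.1]
best = max(candidates, key=lambda a: channel_loglik(a, ys))
```

Because every channel runs the same fixed-gain recursion, the scheme needs no iteration and is well suited to the real-time, telemetered setting the abstract describes.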
Software For Genetic Algorithms
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steve E.
1992-01-01
SPLICER computer program is genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application program. Written in Think C.
Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
Algorithm-development activities at USF continue. The algorithm for determining chlorophyll a concentration (Chl a) and gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.
40 CFR 80.1335 - Can a refiner seek relief from the requirements of this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Can a refiner seek relief from the requirements of this subpart? 80.1335 Section 80.1335 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Provisions § 80.1335 Can a refiner seek relief from the requirements of this subpart? (a) A refiner may...
40 CFR 421.10 - Applicability; description of the bauxite refining subcategory.
Code of Federal Regulations, 2013 CFR
2013-07-01
... bauxite refining subcategory. 421.10 Section 421.10 Protection of Environment ENVIRONMENTAL PROTECTION... CATEGORY Bauxite Refining Subcategory § 421.10 Applicability; description of the bauxite refining... bauxite to alumina by the Bayer process or by the combination process....
40 CFR 421.10 - Applicability; description of the bauxite refining subcategory.
Code of Federal Regulations, 2011 CFR
2011-07-01
... bauxite refining subcategory. 421.10 Section 421.10 Protection of Environment ENVIRONMENTAL PROTECTION... CATEGORY Bauxite Refining Subcategory § 421.10 Applicability; description of the bauxite refining... bauxite to alumina by the Bayer process or by the combination process....
40 CFR 421.10 - Applicability; description of the bauxite refining subcategory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... bauxite refining subcategory. 421.10 Section 421.10 Protection of Environment ENVIRONMENTAL PROTECTION... CATEGORY Bauxite Refining Subcategory § 421.10 Applicability; description of the bauxite refining... bauxite to alumina by the Bayer process or by the combination process....
40 CFR 421.10 - Applicability; description of the bauxite refining subcategory.
Code of Federal Regulations, 2012 CFR
2012-07-01
... bauxite refining subcategory. 421.10 Section 421.10 Protection of Environment ENVIRONMENTAL PROTECTION... CATEGORY Bauxite Refining Subcategory § 421.10 Applicability; description of the bauxite refining... bauxite to alumina by the Bayer process or by the combination process....
40 CFR 421.10 - Applicability; description of the bauxite refining subcategory.
Code of Federal Regulations, 2014 CFR
2014-07-01
... bauxite refining subcategory. 421.10 Section 421.10 Protection of Environment ENVIRONMENTAL PROTECTION... CATEGORY Bauxite Refining Subcategory § 421.10 Applicability; description of the bauxite refining... bauxite to alumina by the Bayer process or by the combination process....
76 FR 61074 - USDA Increases the Fiscal Year 2011 Tariff-Rate Quota for Refined Sugar
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-03
... Office of the Secretary USDA Increases the Fiscal Year 2011 Tariff-Rate Quota for Refined Sugar AGENCY... increase in the fiscal year (FY) 2011 refined sugar tariff-rate quota (TRQ) of 136,078 metric tons raw... MTRV for sugars, syrups, and molasses (collectively referred to as refined sugar) described...
78 FR 25415 - Waivers Under the Refined Sugar Re-Export Program
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-01
... Office of the Secretary Waivers Under the Refined Sugar Re-Export Program AGENCY: Office of the Secretary... waiving certain provisions in the Refined Sugar Re-Export Program, effective today. These actions are authorized under the waiver authority for the Refined Sugar Re-Export Program regulation at 7 CFR...
76 FR 61472 - Revised Fiscal Year 2011 Tariff-Rate Quota Allocations for Refined Sugar
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-04
... TRADE REPRESENTATIVE Revised Fiscal Year 2011 Tariff-Rate Quota Allocations for Refined Sugar AGENCY... the fiscal year (FY) 2011 in-quota quantity of the tariff-rate quota (TRQ) for imported refined sugar... imports of refined sugar. Section 404(d)(3) of the Uruguay Round Agreements Act (19 U.S.C....
40 CFR 80.1442 - What are the provisions for small refiners under the RFS program?
Code of Federal Regulations, 2014 CFR
2014-07-01
... “refiner” shall include foreign refiners. (3) Refiners who qualified as small under 40 CFR 80.1142 do not... § 80.1142. (b)(1) The small refiner exemption in paragraph (c) of this section is effective immediately... submitted under subpart K (§ 80.1142) prior to July 1, 2010 that satisfy the requirements of subpart K...
40 CFR 80.1442 - What are the provisions for small refiners under the RFS program?
Code of Federal Regulations, 2012 CFR
2012-07-01
... “refiner” shall include foreign refiners. (3) Refiners who qualified as small under 40 CFR 80.1142 do not... § 80.1142. (b)(1) The small refiner exemption in paragraph (c) of this section is effective immediately... submitted under subpart K (§ 80.1142) prior to July 1, 2010 that satisfy the requirements of subpart K...
40 CFR 80.1442 - What are the provisions for small refiners under the RFS program?
Code of Federal Regulations, 2013 CFR
2013-07-01
... “refiner” shall include foreign refiners. (3) Refiners who qualified as small under 40 CFR 80.1142 do not... § 80.1142. (b)(1) The small refiner exemption in paragraph (c) of this section is effective immediately... submitted under subpart K (§ 80.1142) prior to July 1, 2010 that satisfy the requirements of subpart K...
40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply...
40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.
Code of Federal Regulations, 2012 CFR
2012-07-01
... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply...
40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.
Code of Federal Regulations, 2011 CFR
2011-07-01
... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply...
40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.
Code of Federal Regulations, 2013 CFR
2013-07-01
... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply...
40 CFR 421.50 - Applicability: Description of the primary electrolytic copper refining subcategory.
Code of Federal Regulations, 2014 CFR
2014-07-01
... primary electrolytic copper refining subcategory. 421.50 Section 421.50 Protection of Environment... POINT SOURCE CATEGORY Primary Electrolytic Copper Refining Subcategory § 421.50 Applicability: Description of the primary electrolytic copper refining subcategory. The provisions of this subpart apply...
75 FR 33330 - Seamless Refined Copper Pipe and Tube From China and Mexico
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-11
... COMMISSION Seamless Refined Copper Pipe and Tube From China and Mexico AGENCY: International Trade Commission... imports from China and Mexico of seamless refined copper pipe and tube, provided for in subheadings 7411... circular refined copper pipe and tubes, including redraw hollows, greater than or equal to 6 inches...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-28
... International Trade Administration Seamless Refined Copper Pipe and Tube From Mexico: Correction to Notice of... Department'') published in the Federal Register the following notice: Seamless Refined Copper Pipe and Tube... included in the Initiation Notice. See Seamless Refined Copper Pipe and Tube From the People's Republic...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-12
... International Trade Administration Seamless Refined Copper Pipe and Tube from Mexico: Final Results of... antidumping duty order on seamless refined copper tube and pipe from Mexico.\\1\\ This review covers two... normal value. \\1\\ See Seamless Refined Copper Pipe and Tube From Mexico: Preliminary Results...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-24
... International Trade Administration Seamless Refined Copper Pipe and Tube From Mexico: Preliminary Results of... refined copper pipe and tube from Mexico.\\1\\ The review covers two producers/ exporters of the subject... are invited to comment on these preliminary results. \\1\\ See Seamless Refined Copper Pipe and...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-10
... International Trade Administration Seamless Refined Copper Pipe and Tube From Mexico: Preliminary Results of... administrative review of the antidumping duty order on seamless refined copper pipe and tube from Mexico. The... refined copper pipe and tube. The product is currently classified under the Harmonized Tariff Schedule...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-13
... From the Petroleum Refining Industry Processed in a Gasification System To Produce Synthesis Gas; Final... Petroleum Refining Industry Processed in a Gasification System To Produce Synthesis Gas,'' published in the... From the Petroleum Refining Industry Processed in a Gasification System To Produce Synthesis...
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
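The authors' high-order schemes are not reproduced in this abstract; as a minimal, hedged illustration of the single-step explicit update pattern it describes, the sketch below advects a periodic wave with the classical second-order Lax-Wendroff scheme (a standard textbook method, far below the eleventh-order accuracy discussed, and not the authors' algorithm):

```python
import numpy as np

def lax_wendroff_step(u, c):
    """One single-step explicit update for u_t + a*u_x = 0 on a periodic grid.
    c = a*dt/dx is the Courant number (stable for |c| <= 1)."""
    up = np.roll(u, -1)   # u[i+1]
    um = np.roll(u, 1)    # u[i-1]
    return u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)

# Propagate a sine wave exactly one period around the unit domain.
nx, c = 128, 0.5
x = np.arange(nx) / nx
u = np.sin(2.0 * np.pi * x)
for _ in range(int(nx / c)):          # nx/c steps carry the wave once around
    u = lax_wendroff_step(u, c)
err = np.max(np.abs(u - np.sin(2.0 * np.pi * x)))
```

For a smooth wave at eight or more points per wavelength, the phase and amplitude error after one period is small; the abstract's point is that much higher-order versions of this same single-step structure keep the error small over millions of periods.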
Quantum algorithms: an overview
NASA Astrophysics Data System (ADS)
Montanaro, Ashley
2016-01-01
Quantum computers are designed to outperform standard computers by running quantum algorithms. Areas in which quantum algorithms can be applied include cryptography, search and optimisation, simulation of quantum systems and solving large systems of linear equations. Here we briefly survey some known quantum algorithms, with an emphasis on a broad overview of their applications rather than their technical details. We include a discussion of recent developments and near-term applications of quantum algorithms.
INSENS classification algorithm report
Hernandez, J.E.; Frerking, C.J.; Myers, D.W.
1993-07-28
This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.
NASA Astrophysics Data System (ADS)
Graf, Norman A.
2001-07-01
An object-oriented framework for undertaking clustering algorithm studies has been developed. We present here the definitions for the abstract Cells and Clusters as well as the interface for the algorithm. We intend to use this framework to investigate the interplay between various clustering algorithms and the resulting jet reconstruction efficiency and energy resolutions to assist in the design of the calorimeter detector.
Comparison of local grid refinement methods for MODFLOW.
Mehl, Steffen; Hill, Mary C; Leake, Stanley A
2006-01-01
Many ground water modeling efforts use a finite-difference method to solve the ground water flow equation, and many of these models require a relatively fine-grid discretization to accurately represent the selected process in limited areas of interest. Use of a fine grid over the entire domain can be computationally prohibitive; using a variably spaced grid can lead to cells with a large aspect ratio and refinement in areas where detail is not needed. One solution is to use local-grid refinement (LGR) whereby the grid is only refined in the area of interest. This work reviews some LGR methods and identifies advantages and drawbacks in test cases using MODFLOW-2000. The first test case is two dimensional and heterogeneous; the second is three dimensional and includes interaction with a meandering river. Results include simulations using a uniform fine grid, a variably spaced grid, a traditional method of LGR without feedback, and a new shared node method with feedback. Discrepancies from the solution obtained with the uniform fine grid are investigated. For the models tested, the traditional one-way coupled approaches produced discrepancies in head up to 6.8% and discrepancies in cell-to-cell fluxes up to 7.1%, while the new method has head and cell-to-cell flux discrepancies of 0.089% and 0.14%, respectively. Additional results highlight the accuracy, flexibility, and CPU time trade-off of these methods and demonstrate how the new method can be successfully implemented to model surface water-ground water interactions.
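As a hedged illustration of the shared-node coupling idea described above, the sketch below iterates a coarse solve and a locally refined patch on a toy 1-D steady-flow problem; the grid positions, boundary heads, and homogeneous conductivity are invented for the example, and this is not the MODFLOW implementation:

```python
import numpy as np

def solve_laplace(h_left, h_right, n):
    """Direct solve of the 1-D Laplace equation (steady heads, uniform K)
    on n interior nodes with Dirichlet heads at both ends."""
    A = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    b = np.zeros(n)
    b[0] -= h_left
    b[-1] -= h_right
    return np.linalg.solve(A, b)

# Coarse grid: 11 nodes on x = 0..10, fixed heads at the ends.
h = np.zeros(11)
h[0], h[-1] = 10.0, 2.0
for _ in range(5):  # alternate parent solve <-> fine-patch solve with feedback
    h[1:-1] = solve_laplace(h[0], h[-1], 9)     # parent (coarse) solve
    # Fine patch on x = 4..6 (dx = 0.5): its boundary heads come from the
    # shared coarse nodes at x = 4 and x = 6.
    fine = solve_laplace(h[4], h[6], 3)         # fine nodes at 4.5, 5.0, 5.5
    h[5] = fine[1]                              # feedback at the shared node x = 5
```

In this homogeneous 1-D case the exact solution is linear, so coarse and fine grids agree and the interface discrepancy is zero; the point of the two-way (feedback) coupling studied in the paper is that in heterogeneous, multi-dimensional problems this iteration keeps heads and interface fluxes consistent, which one-way coupling does not.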
Optical CD metrology model evaluation and refining for manufacturing
NASA Astrophysics Data System (ADS)
Wang, S.-B.; Huang, C. L.; Chiu, Y. H.; Tao, H. J.; Mii, Y. J.
2009-03-01
Optical critical dimension (OCD) metrology has been well accepted as a standard inline metrology tool in semiconductor manufacturing since the 65 nm technology node, owing to its nondestructive and versatile nature. Many geometry parameters can be obtained in a single measurement with good accuracy if the model is well established and calibrated by transmission electron microscopy (TEM). From a manufacturing viewpoint, however, there is no effective index of model quality or, based on such an index, of model refinement. Moreover, as device structures become more complicated, as in strained-silicon technology, more parameters must be determined in each measurement, so the model requires more attention to ensure inline metrology reliability. GOF (goodness of fit), one model index given by a commercial OCD metrology tool, for example, is not sensitive enough, while correlation and sensitivity coefficients, the other two indexes, are evaluated under metrology tool noise only and are not directly related to inline production measurement uncertainty. In this article, we propose a sensitivity matrix for measurement-uncertainty estimation in which each entry is defined as the correlation coefficient between the corresponding two floating parameters and is obtained by the linearization theorem. The uncertainty is estimated in combination with production-line variation and is found, for the first time, to be much larger than that due to metrology tool noise alone, indicating that model quality control is critical for nanometer-device production control. The uncertainty, compared with the production requirement, also serves as an index for model refinement, either by grid-size rescaling or by structure-model modification. The method is verified by TEM measurement and, finally, a flow chart for model refinement is proposed.
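The paper's sensitivity matrix is not given in the abstract; the sketch below shows only the generic linearization step behind such estimates: a Jacobian of a (hypothetical, invented) spectral model with respect to its floating parameters, the resulting linearized parameter covariance under a noise level, and the correlation matrix between parameters. All names and values are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

# Toy spectral model r(w; cd, height) -- hypothetical stand-ins for the
# floating geometry parameters of an OCD model.
w = np.linspace(0.0, 1.0, 50)                  # "wavelength" axis
def model(cd, height):
    return np.sin(8.0 * w * cd) * np.exp(-height * w)

p0 = np.array([0.5, 1.0])                      # nominal parameter values
eps, sigma = 1e-6, 0.01                        # FD step, measurement noise

# Jacobian of the spectrum with respect to the parameters (central differences).
J = np.empty((w.size, p0.size))
for k in range(p0.size):
    dp = np.zeros_like(p0); dp[k] = eps
    J[:, k] = (model(*(p0 + dp)) - model(*(p0 - dp))) / (2.0 * eps)

# Linearized parameter covariance and the parameter correlation matrix.
cov = sigma**2 * np.linalg.inv(J.T @ J)
std = np.sqrt(np.diag(cov))
corr = cov / np.outer(std, std)
```

A strongly correlated parameter pair (|corr| near 1) signals that the two parameters cannot be separated by the measurement, which is the kind of diagnosis the paper uses to drive model refinement.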
Crystal chemistry and structure refinement of five hydrated calcium borates
Clark, J.R.; Appleman, D.E.; Christ, C.L.
1964-01-01
The crystal structures of the five known members of the series Ca2B6O11·xH2O (x = 1, 5, 7, 9, and 13) have been refined by full-matrix least-squares techniques, yielding bond distances and angles with standard errors of less than 0.01 Å and 0.5°, respectively. The results illustrate the crystal-chemical principles that govern the structures of hydrated borate compounds. The importance of hydrogen bonding in the ferroelectric transition of colemanite is confirmed by more accurate proton assignments. © 1964.
Hydrodynamic Characteristics of an Aerodynamically Refined Planing-Tail Hull
NASA Technical Reports Server (NTRS)
McKann, Robert; Suydam, Henry B.
1948-01-01
The hydrodynamic characteristics of an aerodynamically refined planing-tail hull were determined from dynamic model tests in Langley tank no. 2. Stable take-off could be made for a wide range of locations of the center of gravity. The lower porpoising limit peak was high, but no upper limit was encountered. Resistance was high, being about the same as that of float seaplanes. A reasonable range of trims for stable landings was available only in the aft range of center-of-gravity locations.
Validating, augmenting and refining genome-wide association signals.
Ioannidis, John P A; Thomas, Gilles; Daly, Mark J
2009-05-01
Studies using genome-wide platforms have yielded an unprecedented number of promising signals of association between genomic variants and human traits. This Review addresses the steps required to validate, augment and refine such signals to identify underlying causal variants for well-defined phenotypes. These steps include: large-scale exact replication across both similar and diverse populations; fine mapping and resequencing; determination of the most informative markers and multiple independent informative loci; incorporation of functional information; and improved phenotype mapping of the implicated genetic effects. Even in cases for which replication proves that an effect exists, confident localization of the causal variant often remains elusive.
Refinement of Phobos Ephemeris Using Mars Orbiter Laser Altimeter Radiometry
NASA Technical Reports Server (NTRS)
Neumann, G. A.; Bills, B. G.; Smith, D. E.; Zuber, M. T.
2004-01-01
Radiometric observations from the Mars Orbiter Laser Altimeter (MOLA) can be used to improve the ephemeris of Phobos, with particular interest in refining estimates of the secular acceleration due to tidal dissipation within Mars. We have searched the Mars Orbiter Laser Altimeter (MOLA) radiometry data for shadows cast by the moon Phobos, finding 7 such profiles during the Mapping and Extended Mission phases, and 5 during the last two years of radiometry operations. Preliminary data suggest that the motion of Phobos has advanced by one or more seconds beyond that predicted by the current ephemerides, and the advance has increased over the 5 years of Mars Global Surveyor (MGS) operations.
DSR: enhanced modelling and refinement of disordered structures with SHELXL
Kratzert, Daniel; Holstein, Julian J.; Krossing, Ingo
2015-01-01
One of the remaining challenges in single-crystal structure refinement is the proper description of disorder in crystal structures. This paper describes a computer program that performs semi-automatic modelling of disordered moieties in SHELXL [Sheldrick (2015). Acta Cryst. C71, 3–8]. The new program contains a database that includes molecular fragments and their corresponding stereochemical restraints, and a placement procedure to place these fragments at the desired position in the unit cell. The program is also suitable for speeding up model building of well-ordered crystal structures. PMID:26089767
Refining and separation of crude tall-oil components
Nogueira, J.M.F.
1996-10-01
Methods for crude tall-oil refining and fractionation, involving research studies of the separation of long-chain fatty and resinic acids, are reviewed. Although several techniques have been applied with industrial aims since the 1940s, only distillation under high vacuum is economically practicable for crude tall-oil fractionation. Techniques such as adsorption and dissociation extraction seem the most promising for future industrial implementation for separating long-chain fatty and resinic acid fractions with a high purity level at low cost.
Recent refinements to cranial implants for rhesus macaques (Macaca mulatta).
Johnston, Jessica M; Cohen, Yale E; Shirley, Harry; Tsunada, Joji; Bennur, Sharath; Christison-Lagay, Kate; Veeder, Christin L
2016-05-01
The advent of cranial implants revolutionized primate neurophysiological research because they allow researchers to stably record neural activity from monkeys during active behavior. Cranial implants have improved over the years since their introduction, but chronic implants still increase the risk for medical complications including bacterial contamination and resultant infection, chronic inflammation, bone and tissue loss and complications related to the use of dental acrylic. These complications can lead to implant failure and early termination of study protocols. In an effort to reduce complications, we describe several refinements that have helped us improve cranial implants and the wellbeing of implanted primates.
Petroleum mineral oil refining and evaluation of cancer hazard.
Mackerer, Carl R; Griffis, Larry C; Grabowski Jr, John S; Reitman, Fred A
2003-11-01
Petroleum base oils (petroleum mineral oils) are manufactured from crude oils by vacuum distillation to produce several distillates and a residual oil that are then further refined. Aromatics including alkylated polycyclic aromatic compounds (PAC) are undesirable constituents of base oils because they are deleterious to product performance and are potentially carcinogenic. In modern base oil refining, aromatics are reduced by solvent extraction, catalytic hydrotreating, or hydrocracking. Chronic exposure to poorly refined base oils has the potential to cause skin cancer. A chronic mouse dermal bioassay has been the standard test for estimating carcinogenic potential of mineral oils. The level of alkylated 3-7-ring PAC in raw streams from the vacuum tower must be greatly reduced to render the base oil noncarcinogenic. The processes that can reduce PAC levels are known, but the operating conditions for the processing units (e.g., temperature, pressure, catalyst type, residence time in the unit, unit engineering design, etc.) needed to achieve adequate PAC reduction are refinery specific. Chronic dermal bioassays provide information about whether conditions applied can make a noncarcinogenic oil, but cannot be used to monitor current production for quality control or for conducting research or developing new processes since this test takes at least 78 weeks to conduct. Three short-term, non-animal assays all involving extraction of oil with dimethylsulfoxide (DMSO) have been validated for predicting potential carcinogenic activity of petroleum base oils: a modified Ames assay of a DMSO extract, a gravimetric assay (IP 346) for wt. percent of oil extracted into DMSO, and a GC-FID assay measuring 3-7-ring PAC content in a DMSO extract of oil, expressed as percent of the oil. Extraction with DMSO concentrates PAC in a manner that mimics the extraction method used in the solvent refining of noncarcinogenic oils. The three assays are described, data demonstrating the
Creating elegance and refinement at the nasal tip.
Quatela, Vito C; Kolstad, Christopher K
2012-04-01
Enhancing nasal tip definition requires a three-dimensional approach encompassing both form and function. Dome refinements achieved during surgery should be created with sufficient integrity to withstand postoperative healing forces. Stabilizing the nasal base is the first component of dome alterations and prevents loss of tip rotation and projection. Structural grafting can be used to enhance tip definition and at the same time adds support to the cartilaginous framework. Tip shield grafts camouflage dome asymmetries, establish the tip-defining point, and enhance the supratip break. Shield grafts can be placed in all skin types with appropriate contouring and camouflaging techniques.
Unsupervised classification algorithm based on EM method for polarimetric SAR images
NASA Astrophysics Data System (ADS)
Fernández-Michelli, J. I.; Hurtado, M.; Areta, J. A.; Muravchik, C. H.
2016-07-01
In this work we develop an iterative classification algorithm using complex Gaussian mixture models for polarimetric complex SAR data. It is an unsupervised algorithm that requires neither training data nor an initial set of classes. Additionally, it determines the model order from the data, which allows representing the data structure with minimum complexity. The algorithm consists of four steps: initialization, model selection, refinement and smoothing. After a simple initialization stage, the EM algorithm is iteratively applied in the model selection step to compute the model order and an initial classification for the refinement step. The refinement step uses Classification EM (CEM) to reach the final classification, and the smoothing stage improves the results by means of non-linear filtering. The algorithm is applied to both simulated and real Single Look Complex data of the EMISAR mission and compared with the Wishart classification method. We use the confusion matrix and kappa statistic to make the comparison for simulated data whose ground truth is known, and apply the Davies-Bouldin index to compare both classifications for real data. The results obtained for both types of data validate our algorithm and show that its performance is comparable to Wishart's in terms of classification quality.
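The EM-then-CEM refinement structure described above can be sketched on toy data; this is a scalar real-Gaussian stand-in (invented clusters, not the paper's complex Wishart/Gaussian mixture for PolSAR), showing soft EM updates followed by a hard CEM assignment:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated 1-D feature data from two classes (toy stand-in for PolSAR features).
x = np.concatenate([rng.normal(0.0, 0.7, 400), rng.normal(5.0, 0.7, 400)])

# EM for a two-component Gaussian mixture (soft responsibilities).
mu, sig, pi = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(100):
    # E-step: posterior probability of each component for every sample
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / sig
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update mixture parameters from the responsibilities
    n = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / n
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    pi = n / n.sum()

# CEM refinement: hard-assign each sample to its most likely class,
# then re-estimate the class parameters from the hard labels.
labels = np.argmax(r, axis=1)
mu_cem = np.array([x[labels == k].mean() for k in range(2)])
```

In the paper the same two-stage idea runs over complex Gaussian mixtures with model-order selection; the hard CEM pass is what produces the final classification map.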
Iterative feature refinement for accurate undersampled MR image reconstruction
NASA Astrophysics Data System (ADS)
Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong
2016-05-01
Accelerating MR scanning is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, existing CS-MRI approaches still have limitations such as fine-structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. Integrating IFR with CS-MRI equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are otherwise discarded, without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.
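The IFR-CS algorithm itself is not specified in the abstract; as a hedged sketch of the underlying CS-MRI building block it extends (data consistency on undersampled Fourier samples plus a sparsity-promoting shrinkage step), the code below runs plain ISTA on a toy 1-D sparse signal. The signal, mask, and threshold are invented, and the feature-refinement and Tikhonov steps of the paper are not modeled:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
x_true = np.zeros(n)
x_true[[10, 50, 90]] = 1.0                 # sparse "image" (toy stand-in)

mask = rng.random(n) < 0.5                 # retained k-space locations

def A(v):                                  # undersampled (orthonormal) Fourier op
    return np.fft.fft(v, norm="ortho")[mask]

def At(y):                                 # adjoint: zero-fill, inverse transform
    full = np.zeros(n, dtype=complex)
    full[mask] = y
    return np.fft.ifft(full, norm="ortho")

y = A(x_true)                              # simulated undersampled k-space data
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
for _ in range(300):                       # ISTA: gradient step + soft threshold
    x = soft(np.real(x + At(y - A(x))), 0.02)   # real part: signal assumed real
```

With half the Fourier samples retained, the three spikes are recovered at the correct locations; the paper's contribution is an extra refinement step that restores detail such a fixed-transform iteration tends to discard.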
Toughening and strengthening of ceramics composite through microstructural refinement
NASA Astrophysics Data System (ADS)
Anggraini, Lydia; Isonishi, Kazuo; Ameyama, Kei
2016-04-01
Silicon carbide with 50 mass% zirconia ceramic matrix composites were processed by mechanical milling (MM) followed by spark plasma sintering (SPS). By controlling the parameters of MM and SPS, an ultra-fine ZrO2 grain was homogeneously dispersed and refined on the surface of a fine SiC powder, forming a harmonic microstructure. The mechanical properties and the densification behavior of the SiC-ZrO2 composites were investigated. The effects of the milling time on the microstructure and on the mechanical properties of the composite are discussed. The results indicate that the composite mechanically milled for 144 ks and sintered at 1773 K had the highest relative density of 98%, along with a fracture toughness of 10.7 MPa·m^1/2 and a bending strength of 1128 MPa. These superior mechanical properties were influenced by microstructure characteristics such as the homogeneous grain dispersion. Thus, the microstructural refinement forming harmonic dispersion can be considered a remarkable design tool for improving the mechanical properties of SiC-ZrO2, as well as other ceramic composite materials.
Coarse Grained Model for Biological Simulations: Recent Refinements and Validation
Vicatos, Spyridon; Rychkova, Anna; Mukherjee, Shayantani; Warshel, Arieh
2014-01-01
Exploring the free energy landscape of proteins and modeling the corresponding functional aspects presents a major challenge for computer simulation approaches. This challenge is due to the complexity of the landscape and the enormous computer time needed for converging simulations. The use of various simplified coarse grained (CG) models offers an effective way of sampling the landscape, but most current models are not expected to give a reliable description of protein stability and functional aspects. The main problem is associated with insufficient focus on the electrostatic features of the model. In this respect our recent CG model offers a significant advantage, as it has been refined while focusing on its electrostatic free energy. Here we review the current state of our model, describing recent refinements, extensions and validation studies while focusing on demonstrating key applications. These include studies of protein stability, extensions of the model to include membranes, electrolytes and electrodes, and studies of voltage-activated proteins, protein insertion through the translocon, the action of molecular motors, and even the coupling of the stalled ribosome and the translocon. These examples illustrate the general potential of our approach in overcoming major challenges in studies of structure-function correlation in proteins and large macromolecular complexes. PMID:25050439
Cosmological fluid mechanics with adaptively refined large eddy simulations
NASA Astrophysics Data System (ADS)
Schmidt, W.; Almgren, A. S.; Braun, H.; Engels, J. F.; Niemeyer, J. C.; Schulz, J.; Mekuria, R. R.; Aspden, A. J.; Bell, J. B.
2014-06-01
We investigate turbulence generated by cosmological structure formation by means of large eddy simulations using adaptive mesh refinement. In contrast to the widely used implicit large eddy simulations, which resolve a limited range of length-scales and treat the effect of turbulent velocity fluctuations below the grid scale solely by numerical dissipation, we apply a subgrid-scale model for the numerically unresolved fraction of the turbulence energy. For simulations with adaptive mesh refinement, we utilize a new methodology that allows us to adjust the scale-dependent energy variables in such a way that the sum of resolved and unresolved energies is globally conserved. We test our approach in simulations of randomly forced turbulence, a gravitationally bound cloud in a wind, and the Santa Barbara cluster. To treat inhomogeneous turbulence, we introduce an adaptive Kalman filtering technique that separates turbulent velocity fluctuations on resolved length-scales from the non-turbulent bulk flow. From the magnitude of the fluctuating component and the subgrid-scale turbulence energy, a total turbulent velocity dispersion of several hundred km s^-1 is obtained for the Santa Barbara cluster, while the low-density gas outside the accretion shocks is nearly devoid of turbulence. The energy flux through the turbulent cascade and the dissipation rate predicted by the subgrid-scale model correspond to dynamical time-scales around 5 Gyr, independent of numerical resolution.
Refinement of the urine concentration test in rats.
Kulick, Lisa J; Clemons, Donna J; Hall, Robert L; Koch, Michael A
2005-01-01
The urine concentration test is a potentially stressful procedure used to assess renal function. Historically, animals have been deprived of water for 24 h or longer during this test, creating the potential for distress. Refinement of the technique to lessen distress may involve decreasing the water-deprivation period. To determine the feasibility of reduced water-deprivation time, 10 male and 10 female rats were food- and water-deprived for 22 h. Clinical condition and body weights were recorded, and urine was collected every 2 h, beginning 16 h after the onset of food and water deprivation. All rats lost weight (P < 0.001). All rats were clinically normal after 16 h, but 90% of the males and 30% of the females appeared clinically dehydrated after 22 h. After 16 h, mean urine specific gravities were 1.040 and 1.054 for males and females, respectively, and mean urine osmolalities were 1,362 and 2,080 mOsm/kg, respectively, indicating the rats were adequately concentrating urine. The rats in this study tolerated water deprivation relatively well for 16 h but showed clinical signs of dehydration after 22 h. Based on this study, it was concluded that the urine concentration test can be refined such that rats are not deprived of water for more than 16 h without jeopardizing test results.
Refinement of the crystal structure of lithium-bearing uvite
Rozhdestvenskaya, I. V.; Frank-Kamenetskaya, O. V.; Kuznetsova, L. G.; Bannova, I. I.; Bronzova, Yu. M.
2007-03-15
The crystal structure of a natural calcium tourmaline, i.e., uvite with a high lithium content (0.51 atoms per formula unit (apfu) at the Y site), is refined to R = 0.019, Rw = 0.020, and S = 1.11. It is shown that, in nature, there exist uvites in which the charge balance, in the case where the Z site is occupied by trivalent cations, is provided by the replacement of part of the divalent magnesium cations at the Y site by univalent cations, of divalent calcium cations at the X site by sodium cations, and of univalent anions at the W site by oxygen anions. The W site is found to be split into two sites, namely, the W1 and W11 sites (the W1-W11 distance is 0.14 Å), which are partially occupied by fluorine and oxygen anions, respectively. An analysis of the results obtained in this study and the data available in the literature on the crystal structure of uvites allows the conclusion that uvite can be considered a superspecies and that the nomenclature of this mineral group needs refinement with the use of structural data.
Comparative omics-driven genome annotation refinement: application across Yersiniae.
Schrimpe-Rutledge, Alexandra C; Jones, Marcus B; Chauhan, Sadhana; Purvine, Samuel O; Sanford, James A; Monroe, Matthew E; Brewer, Heather M; Payne, Samuel H; Ansong, Charles; Frank, Bryan C; Smith, Richard D; Peterson, Scott N; Motin, Vladimir L; Adkins, Joshua N
2012-01-01
Genome sequencing continues to be a rapidly evolving technology, yet most downstream aspects of genome annotation pipelines remain relatively stable or are even being abandoned. The annotation process is now performed almost exclusively in an automated fashion to balance the large number of sequences generated. One possible way of reducing errors inherent to automated computational annotations is to apply data from omics measurements (i.e. transcriptional and proteomic) to the un-annotated genome with a proteogenomic-based approach. Here, the concept of annotation refinement has been extended to include a comparative assessment of genomes across closely related species. Transcriptomic and proteomic data derived from highly similar pathogenic Yersiniae (Y. pestis CO92, Y. pestis Pestoides F, and Y. pseudotuberculosis PB1/+) was used to demonstrate a comprehensive comparative omic-based annotation methodology. Peptide and oligo measurements experimentally validated the expression of nearly 40% of each strain's predicted proteome and revealed the identification of 28 novel and 68 incorrect (i.e., observed frameshifts, extended start sites, and translated pseudogenes) protein-coding sequences within the three current genome annotations. Gene loss is presumed to play a major role in Y. pestis acquiring its niche as a virulent pathogen, thus the discovery of many translated pseudogenes, including the insertion-ablated argD, underscores a need for functional analyses to investigate hypotheses related to divergence. Refinements included the discovery of a seemingly essential ribosomal protein, several virulence-associated factors, a transcriptional regulator, and many hypothetical proteins that were missed during annotation.
Concrete Model Checking with Abstract Matching and Refinement
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Pelanek, Radek; Visser, Willem
2005-01-01
We propose an abstraction-based model checking method which relies on refinement of an under-approximation of the feasible behaviors of the system under analysis. The method preserves errors to safety properties, since all analyzed behaviors are feasible by definition. The method does not require an abstract transition relation to be generated, but instead executes the concrete transitions while storing abstract versions of the concrete states, as specified by a set of abstraction predicates. For each explored transition, the method checks, with the help of a theorem prover, whether there is any loss of precision introduced by abstraction. The results of these checks are used to decide termination or to refine the abstraction by generating new abstraction predicates. If the (possibly infinite) concrete system under analysis has a finite bisimulation quotient, then the method is guaranteed to eventually explore an equivalent finite bisimilar structure. We illustrate the application of the approach for checking concurrent programs. We also show how a lightweight variant can be used for efficient software testing.
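As a rough illustration of the search strategy described above (and of why refinement is needed), the following sketch executes concrete transitions while matching visited states only by their abstract versions under a set of predicates. The toy counter system and predicates are hypothetical, and the theorem-prover precision checks and automatic predicate generation of the full method are not modeled:

```python
# Under-approximate exploration with abstract matching: concrete states are
# executed, but the visited set stores only their abstraction, so search
# stops when a new concrete state maps to an already-seen abstract state.

def explore(initial, successors, predicates, is_error):
    """Return an error state if one is reached, else None."""
    seen = set()
    frontier = [initial]
    while frontier:
        state = frontier.pop()
        if is_error(state):
            return state                      # any error found is feasible
        abstract = tuple(p(state) for p in predicates)
        if abstract in seen:
            continue                          # abstract matching prunes search
        seen.add(abstract)
        frontier.extend(successors(state))    # execute concrete transitions
    return None
```

With a coarse predicate such as `s >= 5`, states 0 and 1 of a simple counter collapse to the same abstract state and the search stops before reaching an error at `s == 3` — exactly the precision loss that the full method detects and repairs by generating new abstraction predicates.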
Hydraulic refinement of an intraarterial microaxial blood pump.
Siess, T; Reul, H; Rau, G
1995-05-01
Intravascularly operating microaxial pumps have been introduced clinically, proving to be useful tools for cardiac assist. However, a number of complications associated with the extracorporeal motor and the flexible drive shaft cable have been reported in the literature. In this paper, a new pump concept is presented which has been mechanically and hydraulically refined during the development process. The drive shaft cable has been replaced by a proximally integrated micro electric motor and an extracorporeal power supply. The conduit between pump and power supply consists of only an electrical power cable within the catheter, resulting in a device that is insensitive to kinking and small curvature radii. Anticipated insertion difficulties, a result of a large outer pump diameter, led to a two-step approach with an initial 6.4 mm pump version and a secondary 5.4 mm version. Both pumps meet the hydraulic requirement of at least 2.5 l/min at a differential pressure of 80-100 mmHg. The hydraulic refinements necessary to achieve the anticipated goal are based on ongoing hydrodynamic studies of the flow inside the pumps. Flow visualization on a 10:1 scale model as well as on 1:1 scale pumps has yielded significant improvements in the overall hydraulic performance of the pumps. One example of this iterative development process, by means of geometrical changes on the basis of flow visualization, is illustrated for the 6.4 mm pump.
Refined codebook for grayscale image coding based on vector quantization
NASA Astrophysics Data System (ADS)
Hu, Yu-Chen; Chen, Wu-Lin; Tsai, Pi-Yu
2015-07-01
Vector quantization (VQ) is a commonly used technique for image compression. Typically, common codebooks (CCBs) designed from multiple training images are used in VQ. The CCBs are stored on public websites so that their storage cost can be omitted. In addition to the CCBs, private codebooks (PCBs) designed from the image to be compressed can be used in VQ. However, calculating the bit rates (BRs) of VQ must then include the storage cost of the PCBs. It is observed that some codewords in the CCB are not used in VQ. The codebook refinement process is designed to generate a refined codebook (RCB) from the CCB of each image. To cut down the BRs, a lossless index coding process and a two-stage lossless coding process are employed to encode the index table and the RCB, respectively. Experimental results reveal that the proposed scheme (PS) achieves better image quality than VQ with the CCBs. In addition, the PS requires lower BRs than VQ with the PCBs.
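The codebook refinement idea described above can be sketched in a few lines: encode the image blocks against the common codebook, keep only the codewords actually used, and remap the index table onto the smaller refined codebook. This is an illustrative sketch with toy data, not the paper's scheme; in particular, the lossless index coding and two-stage codebook coding stages are not reproduced.

```python
# Codebook refinement for VQ: drop unused codewords from a common codebook
# (CCB) to obtain a refined codebook (RCB) plus a remapped index table.

def nearest(block, codebook):
    """Index of the codeword with minimum squared Euclidean distance."""
    return min(range(len(codebook)),
               key=lambda i: sum((b - c) ** 2
                                 for b, c in zip(block, codebook[i])))

def refine_codebook(blocks, ccb):
    indices = [nearest(b, ccb) for b in blocks]   # ordinary VQ encoding
    used = sorted(set(indices))                   # codewords actually used
    rcb = [ccb[i] for i in used]                  # refined codebook
    remap = {old: new for new, old in enumerate(used)}
    index_table = [remap[i] for i in indices]     # indices into the RCB
    return rcb, index_table
```

Since the RCB is a subset of a publicly available CCB, only the used-codeword selection (and the remapped indices) must be conveyed, which is where the bit-rate savings over PCB-based VQ come from.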
One technique for refining the global Earth gravity models
NASA Astrophysics Data System (ADS)
Koneshov, V. N.; Nepoklonov, V. B.; Polovnev, O. V.
2017-01-01
The results of theoretical and experimental research on a technique for refining global Earth geopotential models such as EGM2008 in the continental regions are presented. The discussed technique is based on high-resolution satellite data for the Earth's surface topography, which enables the fine structure of the Earth's gravitational field to be accounted for without additional gravimetry data. The experimental studies are conducted using the example of the new GGMplus global gravity model of the Earth with a resolution of about 0.5 km, which is obtained by expanding the EGM2008 model to degree 2190 with corrections for the topography calculated from the SRTM data. The GGMplus and EGM2008 models are compared with regional geoid models in 21 regions of North America, Australia, Africa, and Europe. The obtained estimates largely support the possibility of refining global geopotential models such as EGM2008 by the procedure implemented in GGMplus, particularly in regions with relatively high elevation differences.
Metal decontamination for waste minimization using liquid metal refining technology
Joyce, E.L. Jr.; Lally, B.; Ozturk, B.; Fruehan, R.J.
1993-09-01
The current Department of Energy Mixed Waste Treatment Project flowsheet indicates that no conventional technology, other than surface decontamination, exists for metal processing. Current Department of Energy guidelines require retrievable storage of all metallic wastes containing transuranic elements above a certain concentration. This project is in support of the National Mixed Low Level Waste Treatment Program. Because of the high cost of disposal, it is important to develop an effective decontamination and volume reduction method for low-level contaminated metals. It is also important to be able to decontaminate complex shapes whose surfaces are hidden or inaccessible to surface decontamination processes, and to destroy organic contamination. These goals can be achieved by adapting commercial metal refining processes to handle radioactive and organic contaminated metal. The radioactive components are concentrated in the slag, which is subsequently vitrified; hazardous organics are destroyed by the intense heat of the bath. The metal, after having been melted and purified, could be recycled for use within the DOE complex. In this project, we evaluated current state-of-the-art technologies for metal refining, with special reference to the removal of radioactive contaminants and the destruction of hazardous organics. This evaluation was based on literature reports, industrial experience, plant visits, thermodynamic calculations, and engineering aspects of the various processes. The key issues addressed included radioactive partitioning between the metal and slag phases, minimization of secondary wastes, operability of the process subject to widely varying feed chemistry, and the ability to seal the candidate process to prevent the release of hazardous species.
Refining Diagnostic Procedures for Adults With Symptoms of ADHD.
Sibley, Margaret H; Coxe, Stefany; Molina, Brooke S G
2017-04-01
Attention deficit/hyperactivity disorder (ADHD) is a chronic disorder that afflicts individuals into adulthood. The field continues to refine diagnostic standards for ADHD in adults, complicated by the disorder's heterogeneous presentation, subjective symptoms, and overlap with other disorders. Two key diagnostic questions are from whom to collect diagnostic information and which symptoms should be contained on an adult diagnostic checklist. Using a trifactor model, Martel et al. examine these questions in a sample of adults with and without self-identified ADHD symptoms. In this response, we highlight the importance of their finding that self and informant symptom reports differ in a sample of adults who acknowledge ADHD symptoms. We also review issues that continue to face the field related to model specification, evaluating symptom utility, and sample composition, discussing how these issues influence conclusions that may be drawn from Martel et al. and similar investigations. We conclude that the article makes an important research contribution about the nature of self and informant ADHD symptom reports but emphasize that symptom checklist refinement must occur through a broad lens that considers work from a range of sample types and clinically informative analytic strategies.